Imaginary Number Probability in Bayesian-type Inference

Abstract. Doubt, choice and probability. Bayesian probability computation is among the most significant approaches in complex mathematics, interesting for all logicians to understand. Its computation and reasoning set new priorities in further attempts to develop a more human-type reasoning, where the 'possible' and 'probable' scales are matched and sorted out on a subjective basis. We use Bayesian computation models, de Finetti's principle of free observation, dynamic probability, complex-number equations, and other formal-logical principles as the basis for our own modeling and sub-branching. We aim to understand the relation of the computation frequency in probability inference and in imaginary probability computation, and how the Bayesian inference principle could be disturbed by the possibilities of an artificial 'doubt' of imaginary probability. We try to define the common patterns of complex-number behavior in probability modeling, and the modeling of such probability in i numbers, so that we could one day say that the probability of having a cancer is 1.99 in 100, while the hypothetical probability of it is none (0). It is the same subjective manner of a culprit who prefers an idea, or an image, over logic, undertaking it as a guidance for his actions; the magnificent specter of a writer, the diamond of an artist, and all those things which lure them all to the same jail of a culprit: the split of decision.


Abstract Understanding
We understand the imaginary unit i as any possible data that isn't included in factual observation by Bayesian inference, while i², the imaginary number, or the 'non-existent' number, is the opposite: it is a possibility that could have happened but never 'will' or 'would' have.
Bayesian inference may not stipulate whether the iP(A|B) is possible in an equation; therefore, we have to expand the probability course in terms of abstract conditioning in the (A|B) probability, where a certain occasion may bring out hypothetical conditions of X1, X2, X3, …, Xn.
Condition 1 → Supplication = Value of Condition 1

So the condition of (A|B) would be evaluated by iP(A|B) as a valid one, not only from the numerical standpoint, but from the 'imaginary probability', from the cognitive 'intuition', if we may say so.

The Hypothesis of Imaginary Inference
We understand Bayesian inference in the complexity of statistical circumspection in decimal numbers of data withdrawal, but not as a dynamic variation of matching between the:

• most probable
• least probable
• possibly probable

options, which are presumably superficial levels and would rather delay inference.
That's why we combine the principles of P(A|B) inference with its possible 'imaginary' variances and 'imaginary' observations of i.
Hypothesis → proposition → observation → inference

Therefore, we have to simulate active inference-probability models of how a certain imaginary conclusion or value is possible in a mathematical variation of computational logic.

Bayes' and de Finetti's Observations in Boolean Variances
According to the principles of de Finetti, we suppose factual changes in Xi = 1 by observation (de Finetti's exchangeability; more in Persi Diaconis (1977)), which comprises the premise for dynamic data observation in the i number system.
And that is why we have to prove logically the free-state observation of the i number in X1…Xn in Bayes' conditional probability, and applicably in P(A|B) observation.
In order to gain the probability of assumption, or the 'imaginary probability', simultaneously with the Bayesian inference process, we have to distinguish:

• precedent
• hypothesis
• type of data

See more on active observation in Robert F. Nau (2001).

Application of iP Probability in Complex Numbers
In order to predict the value, or the location, of any 'imaginary observation' we have to refer to the existing data, or to the existing precedent which we store in variances (v).
From the hypothesis to random observation we stipulate certain variances (v) into the frequency of them being hypothetical|probable and hypothetical|improbable in subjective inference. Hence, we stipulate an (iv) variance with the frequency (w) of logical requests (queries) in Bayesian P(A|B) wi. Respectively to the logic of computational differentiation, there may be different values of inference: Xn ⊂ Yn ⊂ Zn in (x1, x2, …, xn), and as for the calculus differentiations we may provide the f(x) = dx/dy differentiation in i; thus providing the 'unknown' integer into a ∏ of P(A)ⁿ or P(B)ⁿ in the Boolean x, y data type.
We have to determine the velocity and the proportion of a certain probability in iP, counterposed to Bayesian analysis, in order to make conditional probability less systematic and more dynamic for random observation.
For example: we have 3 doors and 2 of them are closed; what is the probability of the 3rd door being open?
We would say 1/3, or roughly 0.3 (30%); however, it may be different from the cognitive standpoint. So, we have to presume reasonable factors over statistical ones, thence constructing stereotypical behavior in iP(A|B):

'Imaginary' result i(A|B): 0.9 — Bayes' result (A|B): 0.3
IF 0.9, THEN NOT 0.3
IF 0.2 OR 0.4, THEN 0.3

* If the iP result is higher than P, then the former option is preferred.

Figure 2
The factual probability that the door is open is 90%, 'because Terrence said so'.
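The preference rule of Figure 2 can be sketched as a tiny comparison between the factual and the 'imaginary' estimate. This is our illustrative reading of the decision table, not a standard Bayesian procedure; the function name is invented:

```python
def choose(p_bayes: float, p_imaginary: float) -> str:
    """Prefer the 'imaginary' estimate when it exceeds the factual one,
    per the decision table above (a hypothetical rule, not standard
    Bayesian updating)."""
    return "imaginary" if p_imaginary > p_bayes else "factual"

# '3 doors' example: factual P = 1/3, subjective iP = 0.9
print(choose(1 / 3, 0.9))   # -> imaginary ('because Terrence said so')
print(choose(1 / 3, 0.2))   # -> factual
```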
We have to choose the iP(A|B) counter-argument supplication of P(A|B); in our example of 0.3i we stem out the following, where we take (A) as factual info ('there are 3 doors'), and (B)ⁿ as a probable outcome.
Perhaps we would understand such a 'reverse' from B to Bi (B → Bi), and from 0.3 (30%) to 0.9 (90%), in the following supposition: we suppose (B)i as an x (a possible state of observation), and infer as (B)i = xiⁿ : x. We get roughly 9 actual variances in 100 x's in observations. The further clarification may be related to the factual data observation of the factual P(A|B), in order to get matched with the iP(A|B).
For example, in iP(A|B) → P(A|B), regardless of its frequency (w), we may logically presume the closest value of its probability.

The i² Compromise of the i Value
Theorem 1. The i² = −1 value, while in x 0, may be compromised, i.e. reversed.
Proof. If we depict i² on the x tangent, we would depict it in both numerical values, positive and negative, proving that the imaginary number exists in whole numbers, hence it's real.
While having the negative value in i² = √−1 · √−1, we always yield minus, unless we specify it, where we have to find the positive x out of the negative value. Therefore, we would yield a supposition of i² = x, and x·i² = y (−1). We may produce its results as (1) and (−1) for algebra and logic; as x(+), y(−) in trigonometry; and as P(A) = 1, P(B) = −1 in Bayesian inference.
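The sign behavior claimed above can be checked directly with Python's built-in complex numbers. This is a minimal verification of i² = −1 and of the two square roots of −1; nothing here is specific to the probability model:

```python
import cmath

i = complex(0, 1)
assert i ** 2 == -1                 # the defining identity i^2 = -1

# -1 has two square roots, +i and -i; cmath returns the principal one.
principal = cmath.sqrt(-1)
print(principal, -principal)        # the positive and the 'reversed' root
```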

We Combine and Transfer i² Into i
If in P(A|B) Xn we set a negative-value collocation with positive ones, then iP yields no productive result in i and therefore needs to be regarded only hypothetically (least probable) or transferred into the P.
From here we try to achieve the similarity to the Bayesian probability model, only adding the i (sub)processing to it.
In i = √−1, we compose: we gain such an order of alignment of i² into i and vice versa. We do so in order to:
• avoid inapplicability of computation due to its negative/non-existent value.
• have a substantiation of logical order.
• have comparison in the same system of C number (existing numbers).
• compare the inference difference in computational logic between P(A|B) and iP(A|B).
• manipulate further algebraic actions with the different constants, not only with P(A|B), but with other models as well.
• sustain the role of i² in probabilistic models. Make it plausible, make it existent, to make it real and rational at the same time.
• and finally, transfer the negative number i² into the i number, which is imaginary but existent (!), hence compatible with Bayesian analysis computation.
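One way to read the i² → i transfer computationally is to keep the hypothetical weight as the imaginary part of a complex number, so it stays in the same C number system as the factual probability. The `combine` helper below is purely illustrative, a sketch of this reading rather than an established method:

```python
def combine(p_factual: float, p_hypothetical: float) -> complex:
    """Store a factual probability and its hypothetical ('imaginary')
    counterpart as one complex value, instead of carrying a negative,
    non-computable i^2 term."""
    return complex(p_factual, p_hypothetical)

z = combine(0.3, 0.9)        # the '3 doors' factual vs. testimony values
print(z.real, z.imag)        # both components remain separately usable
```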

Dynamic State Observation
Knowing that we contain the layer of i² in the computable value in dynamics, we may presume the negative value of i in C computation of √−1 = ixⁿ, so it would be a platform for C equations, where we presume the i derivative of (A)|(B), whereas (A) = 1 (the factual probability), in opposite to (B) = −1 of nonexistent probability:

Lemma 1. If we presume an 'imaginary' dynamic state observation in i¹ˣ, then we would yield an ix probabilistic reasoning in iP(A|B) of Xn computation.
Where x is the state of observation, (ix) could be drawn out as in the following (the '3 doors' example). Hence, we proportion the bundle of the '3 doors' to the average index of 0.3 in proportion to the 100% of its credibility.
And thus, the negative value of i shifts the observation of x in (ix → Xn), where we have only 9 variances of imaginary probability instead of ∞. So, we bind it into stereotypical reasoning.
WE HAVE 3 DOORS, ONE OF THEM IS 'X'. WE YIELD 9 PROBABILITIES ON WHY IT COULD BE OPEN, AND ONLY 1 ON WHY IT COULD BE SHUT.

The Limitation of P(A|B) in P[X,Y,Z]
A practical solution to the i² → i transfer in computation may lead us into tremendous suppositions of 'intuition thinking' and AI-automated inferences.
Nevertheless, the intuition levels of Bayesian inference may be expressed in mathematical limitations of [X, Y, Z] variables, accorded by the mathematical value of x (random observation, also de Finetti) in i² and i of iP(A|B). Hence, it may be principled in the following, where the frequency (w) of iX may be reducible in iP(A|B) in order to make the latter more intuitively limited; whereas in iX = x(w) we may have [X, Y, Z] variables only, where the reducibility of Xi is the reducibility of a variable, but not of a state of observation x.
Another example of it exists in the P(Yn, X1) of Carbonari A. and Giretti A. (2014), where the static Y is observed by the dynamic y, and the comprehension of P(X|Y) simply goes by the ∑ of the variable in P(A|B).
However, even such distant observation is processed via the given data of P(X|Y), where only X and Y observe each other, and where it is hard to grasp intuition levels.
Eventually, we could limit ourselves to the existing conclusion; but we may also have to understand it differently, and we'd rather proceed to the stochastic modeling in Bayesian inference computation, instead of calculating infinite maths in maths. So, we pre-set:

P(X|A) P(Y|B) P(Z|v)

The iP(A|X)(B|Y) in Stochastic Computation
If we know that its inference is reducible in Bayes' conclusion (as well as in HMM) and multiplied via free-state observation (de Finetti), wouldn't that logically preclude that there is a certain probability (Bayes') and, at the same time, an uncertain variation of it (de Finetti), hence no probability at all?
It wouldn't, unless we specify the free variance v in a certain probability P(A|B) of an uncertain data iPn/iPx in it.
It's presumed as soon as any physical observation is already bound into [X, Y, Z] variances of multiple observations of x.
Confusing or not, the goal is to establish the iP and P probabilities in decimal and whole-number computations. And we still have to deal with the negative value of i² = −1 in it.
Is it anyhow possible to presume that P = −1? Perhaps in a binomial equation of it. In the following we try to elaborate the computational index i, whereas we set:

IF '1', THEN: P(A|X) → P(A)(B)

where the portability of P(A)ⁿ varies both in 2.5/2.0i and in P(A)ⁿ⁻¹ in 0.4/3.2i; whereas we consider the probability of 3.2i for iP(A)ⁿ more likable than 2.5 and 2.0i, because 3.2i is as close to 2.5 as possible (the 0.4 is the least probable).
If we stipulate an average coefficient in A/Ai from the given data of P(A)ⁿ (2.5) and iP(A)⁻¹ (3.2i) on the same matter, there is a chance of the imaginary probability taking place over the factual one.
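The average-coefficient idea above can be sketched with the text's own numbers (2.5 for the factual P(A)ⁿ and 3.2 for the magnitude of iP(A)⁻¹). The helper name and the mean-of-two reading are our illustrative assumptions:

```python
def average_coefficient(p: float, ip_magnitude: float) -> float:
    """Hypothetical A/Ai coefficient: the mean of the factual value and
    the magnitude of the imaginary estimate on the same matter."""
    return (p + ip_magnitude) / 2

p, ip = 2.5, 3.2              # P(A)^n and |iP(A)^-1| from the text
coeff = average_coefficient(p, ip)
print(coeff, ip > p)          # the imaginary estimate can take over here
```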
The similar applies to (B)|(Bi) in:

P(B)(iP(B|Y)) / iP

where you may be given different data, but a similar decimal outcome.
Another step is to initiate both probabilities in i and i², as in P(A|B)ⁿ⁻¹ and P(A|B)ⁿ of (A|X)(B|Y), and formulate them accordingly, with separate coefficients in A/Ai and B/Bi:

Stochastic Reasoning and the i Index
If there is any data given to us by P(A|B) computation, would that rouse any inference on non-existent data, in cases of artificial doubt, inference or recollection?
We may always differentiate (A) on (B) and (B) on (A), but we wouldn't ever get as close as possible to the probabilistic reasoning unless we require a mathematical product to do so. Determining the two fields of probability (X|A)(Y|B), we may use them in a more dynamic way. In Paisley J. and Jordan M. (2012) this concept is reviewed as the stochastic search in programming.
The other practical and foreseeable possibilities of iP in computability are traced in the Claret G. and Sriram K. (2013) models, where the sequential Bayesian inference in programming is a subset of R numbers.

Inference and the Frequency
In conditional probability modeling we may express the variance/var/(v) in mathematical frequency (w), so that it would yield a progression in Logⁿ⁻¹, the linear computation.
As we understand it, linear inference, handled by the inference algorithm, is quicker than random observation.
However, we may consider the following computational pacing similar to the real-time (w) progression of the free-bound variable in Bayesian-type probability. We may consider a time-out in real-time (A|B) inference only if there is limited data in P(A|B); on the other hand, the iP would be hypothetically infinite for computation, but it shall be confined to the spectrum of [X, Y, Z] instead.
Composing x ∈ y, z and y ∈ z, we may further consider the frequency of Bayesian probability in its imaginary unit as follows.

Proof. We consider an iP inference in iP(w)/P(w) frequency, and the ix/x derivatives in Bayesian probability, where we set X(1)|Y(−1)|Z(log 1) in order to yield iP in both results: in x > 1 and in y < 1, whereas X ∈ Z ≥ Y.
From the existing model of Pr[X ∈ A] we would try to construe its specifications in log value for our P → iP negation. Specifying that A|B ∈ i, we would later on proceed to any computation of Bayesian inference.

Time Reducibility in iP
In the timing of the reducibility of variances, we understand the iP inference as imaginary, hence existent only at a particular time (t) and at a particular point of data of A| or B|, where we compile a nonlinear shift of (t)-time and (v)-variance in bound of (A|B):

∭ AᵥXₙᵗ ∈ A(ZY)

Inference, Time, Probability
Timing recurrent, the solid statement is given; a mind is the inference, a blast is the movement of variances vn:

∭ AᵥXₙ¹ (B|X)Yᵥ

The system of it is simplified to the existent presence of science: the R → i reducibility is the reducibility of time. We contemplate not the data but the time. Therefore, it is a question of philosophy to proceed further on, while we consider the logical order of it.

Getting an Average Index
We review variances (v), time (t) and the frequency (w) in derivatives of Xn of computational data, and bound variables of [X, Y, Z] in certain logical calculations, and we presume the probability in i in conditioning of time to it.

The computation of P(A|B) → P(A|X)(B|Y⁻¹)P(A|Z) / P(B|Y)Log⁻¹ in computational programming may be possible in logical terms only.
The main computation of stochastic modeling applies to the free-bound observation, where we yield data similar to Markov decimal data, but in parallel of 2 or more possibilities in the average index per item, A or Ai, and we get both results in positive and negative values (R).

The average index in:

indexA/Ai = P(A)(iP(A|X)) / iP    and    indexB/Bi = P(B)(iP(B|Y)) / iP

applies as a counter-argument of the Bayesian probability and permits the i number probability to coexist with the factual one in R. We presume that in some instances the iP may even take over the factual P, if its average index is ≥ over it.

Tables and Figures

Discussion
If we reflect upon the current state of Bayesian inference, which coexists in parallel with the statistical conclusion, we would stumble upon an artificial psychology of an actual and subjective choice selection.
Thence, we would never understand the difference between the human imagination and just 'imagining' the probability, unless we define the imaginary integer as a 'possibility' index, which would yield a by far less rational, but nevertheless existent and accountable, probability.
While having an artificial doubt or any other non-factual, hypothetical model, why would a machine collect such a 'trifle' alongside the factual data? Why would we complicate things, if we may produce the result regardless of it and make it linear? And it's the same question of independent cognition. The fact of 'A' shall be compared to the possibility of 'B'; the solidity of '1' shall be juxtaposed to '−1'. A solid statement of A shall be shattered into any forms of i ('B', '−1', etc.) in order to re-solidify 'A' into a precedent, into a pattern of self-learning by 'imaginary mistakes'.
The deviation of a culprit is the teacher, and if we sift his doubts and strays through the solid logic of comparison, then we would reap the real diamonds.