An Analogue-Digital Model of Computation: Turing Machines with Physical Oracles

Part of the book series: Emergence, Complexity and Computation ((ECC,volume 22))

Abstract

We introduce an abstract analogue-digital model of computation that couples Turing machines to oracles that are physical processes. Since any oracle has the potential to boost the computational power of a Turing machine, the effect of adding a physical process to a Turing machine raises interesting questions. Do physical processes add significantly to the power of Turing machines; can they break the Turing Barrier? Does the power of the Turing machine vary with different physical processes? Specifically, here, we take a physical oracle to be a physical experiment, controlled by the Turing machine, that measures some physical quantity. There are three protocols of communication between the Turing machine and the oracle that simulate the types of error propagation common to analogue-digital devices, namely: infinite precision, unbounded precision, and fixed precision. These three types of precision introduce three variants of the physical oracle model. Fixing one archetypal experiment, we show how to classify the computational power of the three models by establishing lower and upper bounds. Using new techniques and ideas about timing, we give a complete classification.


Notes

  1.

    In the ARNN case, a subsystem of about eleven neurones (variables) measures a real-valued weight of the network up to some precision and then resumes a computation with advice, simulated by a system of a thousand rational neurones interconnected with integer and a very few rational weights.

  2.

    In contrast, in the ARNN model the time of a measurement is linear, because the activation function of each neuron is piecewise linear instead of the usual analytic sigmoid.

  3.

    In the same way, we could also say that tapes of Turing machines can have as many cells as the number of particles in the universe, but in such a case no interesting theory of computability can be developed.

  4.

    We would get \(y(f) = 0. c(f(0)) 001 c(f(1)) 001 \cdots \) and, since each c(f(i)) has size logarithmic in i, the sequence y(f)(n) would have size \(\mathcal {O}(n \log n)\).

  5.

    In this case the position \(0 \!\! \downharpoonleft _{\ell }\) or \(1 \!\! \downharpoonleft _{\ell }\) means \(0. \underbrace{0 \cdots 0}_{\ell }\) and \(1. \underbrace{0 \cdots 0}_{\ell }\), respectively, not the dyadic position.

  6.

    Id.

  7.

    There are always two boundary numbers satisfying this equation, as Fig. 4.11 shows.

  8.

    The following example helps to clarify the argument. Suppose that \(y = 0.1100011000 \dots \) The sequence \(v_k\) can be taken as follows: \(v_1 = 1\), \(v_2 = 11\), \(v_3 = 111\), \(v_4 = 1101\), \(v_5 = 11001\), \(v_6 = 110010\), \(v_7 = 1100100\), \(v_8 = 11000111\), \(v_9 = 110001100\), ...

References

  1. Siegelmann, H.T., Sontag, E.D.: Analog computation via neural networks. Theor. Comput. Sci. 131(2), 331–360 (1994)

  2. Woods, D., Naughton, T.J.: An optical model of computation. Theor. Comput. Sci. 334(1–3), 227–258 (2005)

  3. Bournez, O., Cosnard, M.: On the computational power of dynamical systems and hybrid systems. Theor. Comput. Sci. 168(2), 417–459 (1996)

  4. Carnap, R.: Philosophical Foundations of Physics. Basic Books, New York (1966)

  5. Beggs, E., Costa, J., Tucker, J.V.: Computational models of measurement and Hempel’s axiomatization. In: Carsetti, A. (ed.) Causality, Meaningful Complexity and Knowledge Construction. Theory and Decision Library A, vol. 46, pp. 155–184. Springer, Berlin (2010)

  6. Geroch, R., Hartle, J.B.: Computability and physical theories. Found. Phys. 16(6), 533–550 (1986)

  7. Beggs, E., Costa, J., Tucker, J.V.: Three forms of physical measurement and their computability. Rev. Symb. Log. 7(4), 618–646 (2014)

  8. Hempel, C.G.: Fundamentals of concept formation in empirical science. Int. Encycl. Unified Sci. II, 7 (1952)

  9. Krantz, D.H., Suppes, P., Luce, R.D., Tversky, A.: Foundations of Measurement. Dover, New York (2009)

  10. Beggs, E., Costa, J.F., Poças, D., Tucker, J.V.: Oracles that measure thresholds: the Turing machine and the broken balance. J. Log. Comput. 23(6), 1155–1181 (2013)

  11. Beggs, E., Costa, J.F., Poças, D., Tucker, J.V.: Computations with oracles that measure vanishing quantities. Math. Struct. Comput. Sci. (in print)

  12. Beggs, E.J., Costa, J.F., Tucker, J.V.: Limits to measurement in experiments governed by algorithms. Math. Struct. Comput. Sci. 20(6), 1019–1050 (2010)

  13. Beggs, E., Costa, J.F., Tucker, J.V.: The impact of models of a physical oracle on computational power. Math. Struct. Comput. Sci. 22(5), 853–879 (2012)

  14. Beggs, E., Costa, J.F., Loff, B., Tucker, J.V.: Computational complexity with experiments as oracles. Proc. R. Soc. A (Mathematical, Physical and Engineering Sciences) 464(2098), 2777–2801 (2008)

  15. Beggs, E., Costa, J.F., Loff, B., Tucker, J.V.: Computational complexity with experiments as oracles. II. Upper bounds. Proc. R. Soc. A (Mathematical, Physical and Engineering Sciences) 465(2105), 1453–1465 (2009)

  16. Beggs, E.J., Costa, J.F., Tucker, J.V.: Axiomatizing physical experiments as oracles to algorithms. Philos. Trans. R. Soc. A (Mathematical, Physical and Engineering Sciences) 370(12), 3359–3384 (2012)

  17. Balcázar, J.L., Díaz, J., Gabarró, J.: Structural Complexity I. Theoretical Computer Science, vol. 11. Springer, Berlin (1990)

  18. Balcázar, J.L., Hermo, M.: The structure of logarithmic advice complexity classes. Theor. Comput. Sci. 207(1), 217–244 (1998)

  19. Siegelmann, H.T.: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston (1999)

  20. Beggs, E., Costa, J.F., Poças, D., Tucker, J.V.: An analogue-digital Church-Turing thesis. Int. J. Found. Comput. Sci. 25(4), 373–389 (2014)

Acknowledgments

To Till Tantau for the use of pgf/TikZ applications.


Corresponding author

Correspondence to Tânia Ambaram.

Appendices

Appendix A: Nonuniform Complexity Classes

A nonuniform complexity class characterises families \(\{\mathcal {C}_n\}_{n\in \mathbb {N}}\) of finite machines, such as logic circuits, where the element \(\mathcal {C}_n\) decides the restriction of some problem to inputs of size n. Nonuniformity arises because, for \(n \ne m\), \(\mathcal {C}_n\) may be unrelated to \(\mathcal {C}_m\): there may be a distinct algorithm for each input size (see [17]). The elements of a nonuniform class can be unified by means of a (possibly noncomputable) advice function, as introduced in Sect. 4.4, yielding a single algorithm for all input sizes. The values of this function, supplied alongside the inputs, provide the extra information needed to perform the computations (see [18]).

Nonuniform complexity classes play an important role in complexity theory. The class \( P /\text {poly}\), which corresponds to the sets decidable by families of polynomial-size circuits, contains the undecidable halting set \(\{ 0^n\): n is the encoding of a Turing machine that halts on input \(0 \}\). The class \(\mathrm {P/\log }\) likewise contains the halting set defined as \(\{0^{2^n}\): n is the encoding of a Turing machine that halts on input \(0 \}\).

The definition of nonuniform complexity classes was given in Sect. 4.4. Generally we consider four cases: \(\mathbb {C}/\mathbb {F}\), \(\mathbb {C}/\mathbb {F}\star \), \(\mathbb {C}//\mathbb {F}\), and \(\mathbb {C}//\mathbb {F}\star \). The second and fourth cases are small variations of the first and third, respectively. To understand the difference, note that \(\mathbb {C}/\mathbb {F}\) is the class of sets B for which there exist a set \(A \in \mathbb {C}\) and an advice function \(f \in \mathbb {F}\) such that, for every \(w \in \{ 0 , 1 \}^{\star }\), \(w \in B\) iff \(\langle w, f(|w|) \rangle \in A\). In this case, the advice function is fixed after choosing the Turing machine that decides the set A. As it is more intuitive to fix the Turing machine after choosing a suitable advice function, we consider a less restrictive definition, the type \(\mathbb {C}//\mathbb {F}\): the class of sets B for which, given an advice function \(f \in \mathbb {F}\), there exists a set \(A \in \mathbb {C}\) such that, for every \(w \in \{ 0 , 1 \}^{\star }\), \(w \in B\) iff \(\langle w, f(|w|) \rangle \in A\).
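To make the shape of this definition concrete, here is a toy sketch in Python (an illustration of the \(\mathbb {C}//\mathbb {F}\) template, not an example from the chapter; the set B and the advice function f below are hypothetical choices):

```python
# Toy sketch of "deciding with advice": a decider for pairs <w, f(|w|)>,
# where the advice f depends only on the input length.

def f(n: int) -> str:
    """Hypothetical advice function: one bit recording whether n is even."""
    return '1' if n % 2 == 0 else '0'

def in_A(w: str, advice: str) -> bool:
    """The underlying decider for the set A on pairs <w, advice>."""
    return advice == '1' and w.endswith('1')

def in_B(w: str) -> bool:
    """B = { w : <w, f(|w|)> in A }: words of even length ending in '1'."""
    return in_A(w, f(len(w)))

print(in_B('01'))    # even length, ends in '1'
print(in_B('011'))   # odd length
```

The point is only the quantifier pattern: in the \(\mathbb {C}//\mathbb {F}\) style, the decider `in_A` may be chosen after the advice function `f` is fixed.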

The following structural relations hold between the nonuniform classes used throughout this paper:

$$\begin{aligned} P/\log \!\star \subseteq BPP/\log \!\star \subseteq BPP//\log \!\star \ . \end{aligned}$$

These inclusions are immediate, since the same Turing machine and the same advice function serve as witnesses.

Appendix B: The Cantor Set

We prove Proposition 6 (see [10, 12] for further details). This proposition bounds the distance between a dyadic rational and a real number in the Cantor set \(\mathcal {C}_3\). Recall that a dyadic rational is a number of the form \(n / 2^k\), where n is an integer and k is a positive integer, and that the binary expansion of a number in \(\mathcal {C}_3\) is composed of triplets of the form 001, 010 or 100.

Proposition 12

For every \(x \in \mathcal {C}_{3}\) and for every dyadic rational \(z \in \ ]0,1[\) with size \(\mid \!\! z \!\!\mid = m\), if \(|x - z| \le 1/2^{i+5}\), then the binary expansions of x and z coincide in the first i bits; moreover, \(|x - z| > 1/2^{m + 10}\).

Proof

Suppose that x and z coincide in the first \(i - 1\) bits and differ in the ith bit. We have two possible cases:

\(z < x\): In this case \(z_i = 0\) and \(x_i = 1\), and the worst case for the difference occurs when the binary expansion of z after the ith position begins with a sequence of 1s and the binary expansion of x after the ith position begins with a sequence of 0s; since \(x \in \mathcal {C}_3\), such a sequence of 0s has length at most four, so x has a 1 no later than position \(i + 5\).

\(z > x\): In this case \(z_i = 1\) and \(x_i = 0\), and the worst case for the difference occurs when the binary expansion of z after the ith position begins with a sequence of 0s and the binary expansion of x after the ith position begins with a sequence of 1s.

We can conclude that in either case \(|x - z| > 2^{-(i + 5)}\). Thus, if \(|x - z| \le 2^{-(i + 5)}\), then x and z coincide in the first i bits.

The binary expansion of z after position m is composed exclusively of 0s whereas, since \(x \in \mathcal {C}_3\), x has at most 4 consecutive 0s after the mth bit. Thus, even if x and z coincide up to the mth position, they can coincide in at most the next 4 positions, so they must differ within the first \(m + 5\) bits. Therefore, by the first part of the statement, \(|x - z| > 2^{-(m + 10)}\).    \(\square \)
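The combinatorial fact used above, that an element of \(\mathcal {C}_3\) has at most four consecutive 0s, can be checked mechanically: the longest run of 0s in any concatenation of the triplets 001, 010, 100 arises at a junction such as 100·001. A small Python check (an illustration, not part of the chapter):

```python
from itertools import product

TRIPLETS = ('001', '010', '100')

def max_zero_run(s: str) -> int:
    """Length of the longest run of 0s in the binary string s."""
    return max(len(block) for block in s.split('1'))

# Exhaustively check all words built from four triplets: the longest
# possible run of 0s is 4, realised e.g. by the junction 100.001.
worst = max(max_zero_run(''.join(w)) for w in product(TRIPLETS, repeat=4))
print(worst)  # 4
```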

Appendix C: Random Sequences

Propositions 7 and 8 show how the SmSE with unbounded or fixed precision can be seen as a biased coin. Given a biased coin, as stated by Proposition 9, we can simulate a fair sequence of coin tosses. Here we present the proof of that statement.

Proposition 13

Given a biased coin with probability of heads \(\delta \in \ ]0, 1[\) and a constant \(\gamma \in \,]0, 1[\), we can simulate, with probability at least \(\gamma \), a sequence of independent fair coin tosses of length n by performing a linear number of biased coin tosses.

Proof

Suppose we have a biased coin with probability of heads \(\delta \in \ ]0, 1[\). To simulate a fair coin toss we perform the following algorithm: toss the biased coin twice and,

  1.

    If the output is HT then output H;

  2.

    If the output is TH then output T;

  3.

    If the output is HH or TT then repeat the algorithm.
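The three cases above are the classical von Neumann trick; as an illustration (the function name, the value of `delta` and the seed below are arbitrary choices, not from the chapter), the algorithm can be simulated in Python:

```python
import random

def fair_bits(n: int, delta: float, rng: random.Random) -> tuple[str, int]:
    """Produce n fair bits from a coin with heads-probability delta,
    using the algorithm above: HT -> H, TH -> T, HH/TT -> retry.
    Returns the bit string and the number of biased tosses used."""
    out = []
    tosses = 0
    while len(out) < n:
        a = rng.random() < delta   # first biased toss (True = heads)
        b = rng.random() < delta   # second biased toss
        tosses += 2
        if a and not b:
            out.append('H')
        elif b and not a:
            out.append('T')
        # HH or TT: discard the pair and repeat
    return ''.join(out), tosses

rng = random.Random(0)
bits, tosses = fair_bits(10_000, delta=0.3, rng=rng)
print(bits.count('H') / len(bits))   # close to 0.5
print(tosses / 10_000)               # close to 1/(delta*(1-delta))
```

Running it, the fraction of H outputs concentrates around 1/2 regardless of `delta`, and the number of biased tosses per fair bit concentrates around \(1 / (\delta (1 - \delta ))\), matching the bound derived below.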

As the probability of HT equals that of TH, the outputs H and T are equiprobable, and thus we simulate a fair coin. The probability that the algorithm halts in one run is \(r = 2 \delta (1 - \delta )\) and the probability of having to run it again is \(s = 1 - r\). We want to run the algorithm until we get a sequence of fair coin tosses of length n. To get this sequence we may need more than n runs, so we study the total number of coin tosses via the random variable \(T_n\) denoting the number of runs until we get n fair coin tosses. The number of failed runs, \(T_n - n\), follows the negative binomial distribution

$$\begin{aligned} T_n - n \overset{d}{=} NB(n,s) \ . \end{aligned}$$

In this case we have the following mean and variance:

$$ \mu = \frac{ns}{r} + n = \frac{n}{r}, \ \ \ \ \ \ \ \ \ \ \upsilon = \frac{ns}{r^2} \ . $$

Now, using Chebyshev’s inequality, we get

$$ P(\mid T_n - \mu \mid \ge t) \le \frac{\upsilon }{t^2} \ . $$

Thus, taking \(t = \alpha n\) for some \(\alpha \), we get

$$ P(T_n \ge \mu + \alpha n) \le \frac{n s}{r^2 (\alpha n)^2} < \frac{1}{r^2 \alpha ^2 n} \ . $$

Since the worst case is \(n = 1\), in order to make the probability of failure less than \(1 - \gamma \) we need

$$ \alpha \ge \frac{1}{r \sqrt{(1 - \gamma )}} \ . $$

Thus, with probability at least \(\gamma \), we have \(T_n < \mu + \alpha n\), and the total number of runs is at most

$$ \frac{n}{r} + \frac{1}{r \sqrt{(1 - \gamma )}} \times n = \frac{n}{r} \bigg ( 1 + \frac{1}{\sqrt{1 - \gamma }} \bigg ) \ . $$

Since we toss the coin twice in each run, the total number of biased coin tosses is linear in n:

$$ \frac{n}{\delta (1 - \delta )}\bigg (1 + \frac{1}{\sqrt{1 - \gamma }}\bigg ) \ . $$

   \(\square \)


Copyright information

© 2017 Springer International Publishing Switzerland

Cite this chapter

Ambaram, T., Beggs, E., Félix Costa, J., Poças, D., Tucker, J.V. (2017). An Analogue-Digital Model of Computation: Turing Machines with Physical Oracles. In: Adamatzky, A. (eds) Advances in Unconventional Computing. Emergence, Complexity and Computation, vol 22. Springer, Cham. https://doi.org/10.1007/978-3-319-33924-5_4

  • DOI: https://doi.org/10.1007/978-3-319-33924-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-33923-8

  • Online ISBN: 978-3-319-33924-5
