Operational definition of (brane induced) space-time and constraints on the fundamental parameters

First we contemplate the operational definition of space-time in four dimensions in light of the basic principles of quantum mechanics and general relativity and consider some of its phenomenological consequences. The quantum gravitational fluctuations of the background metric that emerge through the operational definition of space-time are controlled by the Planck scale and are therefore strongly suppressed. Then we extend our analysis to the braneworld setup with a low fundamental scale of gravity. It is observed that in this case the quantum gravitational fluctuations on the brane may become unacceptably large. The magnification of the fluctuations is not linked directly to the low quantum gravity scale but rather to the higher-dimensional modification of Newton's inverse square law at relatively large distances. For models with compact extra dimensions the shape modulus of the extra space can serve as a most natural and safe stabilization mechanism against these fluctuations.


Introduction
From the inception of quantum mechanics, physical quantities are usually understood to be observables, that is, they should be specified in terms of real or Gedanken measurements performed by well-prescribed measuring procedures. The concept of measurement has proved to be a fundamental notion for revealing the genuine nature of physical reality [1]. Space-time, representing the frame in which everything takes place, is one of the most fundamental concepts in physics. The importance of the operational definition of physical quantities gives a strong motivation for a critical view of how one actually measures the space-time geometry [2,3]. The first natural question in this way is to understand to what maximal precision we can mark a point in space by placing there a test particle. Throughout this paper we will use the system of units ℏ = c = 1. In the framework of quantum field theory a quantum takes up at least a volume δx^3 defined by its Compton wavelength, δx ≳ 1/m. Not to collapse into a black hole, general relativity insists on the quantum taking up a finite amount of room defined by its gravitational radius, δx ≳ l_P^2 m. Combining together both the quantum mechanical and general relativistic requirements one finds

δx ≳ max(m^{-1}, l_P^2 m) . (1)
From this equation one sees that a quantum occupies at least the volume ∼ l_P^3. Therefore in the operational sense a point cannot be marked to a better accuracy than ∼ l_P^3. As any measurement we can perform (real or Gedanken) is based on the use of quanta, from Eq.(1) one infers that we can never probe a length to a better accuracy than ∼ l_P. Since our understanding of time is tightly related to periodic motion along some length scale, this result implies in general the impossibility of measuring a space-time distance to a better accuracy than ∼ l_P. This point of view was carefully elaborated in [3]. This apparently trivial conclusion met with serious resistance when it was originally suggested by Mead [4]. Starting from the 1980s the operational definition of space-time has attracted considerable continuing interest [5,6,7,8,9,10].

* Extended version of the talk given at ICTP.
† Electronic address: maziashvili@hepi.edu.ge
Our fundamental theories of physics involve huge hierarchies between the energy scales characteristic of gravitation, E_P = 1/√G_N ∼ 10^28 eV, and particle physics, E_EW ∼ 1 TeV. In the atomic and subatomic world, therefore, gravity is so weak as to be negligible. This is one reason gravity is not included as part of the Standard Model of particle physics. But when the energy scale approaches the Planck one, gravity enters the game. The question of the operational definition of space-time becomes particularly interesting and important with regard to the higher-dimensional theories with a low quantum scale of gravity (close to the electroweak scale). First we summarize different approaches to the operational definition of Minkowskian space-time that enable one to estimate the rate of quantum-gravitational fluctuations of the background metric. Then we address some of the implications of these fluctuations. Having discussed the case of 4D space-time, we generalize the operational definition to the brane induced space-time and consider its phenomenological consequences.

Károlyházy uncertainty relation
Approach 1. - A universally accepted method of space-time measurement, found in almost every textbook of general relativity, consists in using clocks and light signals [11]. Let us consider a light-clock consisting of a spherical mirror inside which light is bouncing. That is, a light-clock counts the number of reflections of a pulse of light propagating inside a spherical mirror. Therefore the precision of such a clock is set by the size of the clock. The points between which the distance is measured are marked by the clocks; therefore the size of the clock, 2r_c, from the very outset manifests itself as an error in the distance measurement. Another source of error is due to quantum fluctuations of the clocks. Namely, denoting the mass of the clock by m, one finds that the clock is characterized by a spread in velocity δv ≳ 1/(2m r_c), and correspondingly during the time t taken by the light signal to reach the second clock, the clock may move a distance tδv. The total uncertainty in measuring the length scale l ≃ t takes the form

δl ≃ 2r_c + t/(m r_c) . (2)

Minimizing this expression with respect to the size of the clock one finds δl_min(m) ≃ √(t/m) at r_c ≃ √(t/m). By taking the mass of the clock to be large enough the uncertainty in length measurement can be reduced, but one should notice that simultaneously the optimal size of the clock diminishes while its gravitational radius increases. For the measurement procedure to be possible we should take care that the size of the clock does not become smaller than its gravitational radius, r_c ≳ l_P^2 m, to avoid the gravitational collapse of the clock into a black hole. So there is an upper bound on the clock mass which through Eq.(2) determines the minimal unavoidable error in length measurement as

δl_min ≃ l_P^{2/3} l^{1/3} . (3)

This way of reasoning follows the papers [2,9]. Approach 2. - One can argue in a somewhat different way as well [10]. Let us consider the construction of a coordinate system for a time interval t and with a spatial fineness δx in a Minkowski space-time.
Since a clock must be localized in a region of size δx, the clock inevitably has a momentum of the order δp ∼ 1/δx, obtained from the uncertainty relation of quantum mechanics. Thus the clock moves with a finite velocity of order δv ∼ 1/mδx, where m denotes the mass of the clock. This implies that the coordinate system will be destroyed by the quantum effect in a finite period δx/δv ∼ m(δx)^2. This period must be larger than the time interval t of the coordinate system. Hence we obtain

t ≲ m(δx)^2 . (4)

This gives a lower bound on the clock mass m for given t and δx. From Eq.(4), we need a clock with a larger mass to construct a finer coordinate system. However, we also have a maximum value of the clock mass, because no clock should become a black hole. Thus the clock's Schwarzschild radius should not exceed the localization region of the clock:

l_P^2 m ≲ δx . (5)

The clock mass can be chosen arbitrarily if it satisfies Eq.(4) and Eq.(5). Combining Eqs.(4,5) one gets

l_P^2 t ≲ (δx)^3 . (6)

Taking note that our light-clock having the size δx cannot measure time to a better accuracy than δt = δx, one arrives at Eq.(3). Approach 3. - It is instructive to take into account the gravitational time delay of the clock [12]. After introducing the clock the metric takes the (Schwarzschild) form

ds^2 = (1 - 2l_P^2 m/r) dt^2 - (1 - 2l_P^2 m/r)^{-1} dr^2 - r^2 dΩ^2 .

The time measured by this clock is related to the Minkowskian time as [11]

τ = √(1 - 2l_P^2 m/r_c) t .

From this expression one sees that for the disturbance of the background metric to be small, the size of the clock should be much greater than its gravitational radius, r_c ≫ 2l_P^2 m. Under this assumption, for the gravitational disturbance in the time measurement one finds

t - τ ≃ l_P^2 m t / r_c .

Since we are using a light-clock, its mass cannot be less than π/r_c, which, by taking into account that the size of the clock determining its resolution time represents in itself an error during the time measurement, gives δt = 2r_c + π t t_P^2/r_c^2, which after minimization with respect to r_c leads to Eq.(3). What is common to all of the above approaches is the final result, Eq.(3).
Nevertheless, the third approach strongly discourages taking the optimal size of the clock to be close to its gravitational radius. The first and second approaches do not take into account the gravitational time delay of the clock. For the optimal parameters of the clock in measuring the space-time distance l one finds

r_c ≃ (l_P^2 l)^{1/3} ≃ δl_min(l) , m ≃ π/r_c .

Eq.(3) was first obtained by Károlyházy in 1966 and was subsequently analyzed by him and his collaborators in much detail [13].
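The minimization in Approach 3 can be verified numerically. The sketch below (function names ours; Planck units, t_P = 1) grid-minimizes δt(r_c) = 2r_c + πt t_P²/r_c² and compares with the analytic optimum r_c = (πt t_P²)^{1/3}, δt_min = 3(πt t_P²)^{1/3}, which carries the Károlyházy scaling t_P^{2/3} t^{1/3}:

```python
# Numerical check of Approach 3: minimizing the total time-measurement error
# delta_t(r_c) = 2 r_c + pi t t_P^2 / r_c^2 reproduces the scaling of Eq.(3).
import math

T_P = 1.0  # Planck time, in Planck units

def delta_t(r_c, t):
    # clock size plus gravitational time delay of a light-clock of mass pi/r_c
    return 2.0 * r_c + math.pi * t * T_P**2 / r_c**2

def minimize(t):
    # crude grid search around the expected optimum r_c ~ (t t_P^2)^(1/3)
    guess = (t * T_P**2) ** (1.0 / 3.0)
    grid = [guess * (0.2 + 0.002 * k) for k in range(901)]
    r_best = min(grid, key=lambda r: delta_t(r, t))
    return r_best, delta_t(r_best, t)

t = 1e30                     # length scale being measured, in Planck units
r_opt, dt_min = minimize(t)
# analytic optimum: r_c = (pi t)^(1/3), delta_t_min = 3 (pi t)^(1/3)
```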

Field theory view
Effective quantum field theory with built-in IR and UV cutoffs satisfying the black-hole entropy bound leads to Eq.(3), where l and δl play the roles of the IR and UV scales respectively [14]. For an effective quantum field theory in a box of size l with UV cutoff Λ the entropy S scales as

S ≃ l^3 Λ^3 .

That is, the effective quantum field theory counts the degrees of freedom simply as the number of cells Λ^{-3} in the box l^3. Nevertheless, considerations involving black holes demonstrate that the maximum entropy in a box of volume l^3 grows only as the area of the box [15],

S_BH ≃ (l/l_P)^2 .
Thus, with respect to the Bekenstein bound [15], the degrees of freedom in the volume should be counted by the number of surface cells l_P^2. A consistent physical picture can be constructed by imposing a relationship between the UV and IR cutoffs [14],

l^3 Λ^3 ≲ (l/l_P)^2 . (7)

Consequently one arrives at the conclusion that the length l, which serves as an IR cutoff, cannot be chosen independently of the UV cutoff, and scales as l ≲ l_P^{-2} Λ^{-3}.
Rewriting this relation wholly in terms of lengths, δl ≡ Λ^{-1}, one arrives at Eq.(3). Is this an accidental coincidence? Indeed it is not. The relation (7) can be simply understood from Eq.(3). The IR scale l cannot be given to a better accuracy than δl ≃ l_P^{2/3} l^{1/3}. Therefore, one cannot measure the volume l^3 to a better precision than δl^3 ≃ l_P^2 l, and correspondingly the maximal number of cells inside the volume l^3 that may make operational sense is given by (l/l_P)^2. Thus the Károlyházy relation implies the black-hole entropy bound given by Eq.(7). These ideas lead to the far-reaching holographic principle for an ultimate unification that may perhaps be achieved when the basic aspects of quantum theory, particle theory and general relativity are combined [16].
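The cell counting that links Eq.(3) to the holographic bound can be made explicit: with δl = l_P^{2/3} l^{1/3}, the number of operationally resolvable cells l³/δl³ is exactly (l/l_P)². A minimal check in Planck units (function name ours):

```python
# Counting operationally meaningful cells: with the Karolyhazy precision
# delta_l = l_P^(2/3) l^(1/3), the box l^3 holds (l/l_P)^2 resolvable cells,
# i.e. an area law rather than a volume law.
def n_cells(l, l_p=1.0):
    dl = l_p**(2 / 3) * l**(1 / 3)   # Eq.(3)
    return l**3 / dl**3              # number of resolvable cells

l = 1e30   # IR scale in Planck units
assert abs(n_cells(l) - l**2) < 1e-6 * l**2   # area scaling (l/l_P)^2
```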

Energy density of the fluctuations
The Károlyházy uncertainty relation naturally translates into metric fluctuations, for if it were possible to measure the metric precisely, one could estimate the length between two points exactly. As we are dealing with the Minkowskian space-time, the rate of metric fluctuations over a length scale l can be simply estimated through Eq.(3) as

δg ≃ (l_P/l)^{2/3} . (8)

We naturally expect there to be some energy density associated with the fluctuations. One can use the following simple reasoning for estimating the energy budget of Minkowski space [10,12]. With respect to Eq.(3), a length scale t can be known with a maximum precision δt, determining thereby a minimal detectable cell δt^3 ≃ t_P^2 t over a spatial region t^3. Such a cell represents a minimal detectable unit of space-time over a given length scale, and if it has a finite age t, its existence due to the time-energy uncertainty relation cannot be justified with an energy smaller than ∼ t^{-1}. Hence, having the above relation, Eq.(3), one concludes that if the age of the Minkowski space-time is t, then over a spatial region with linear size t (determining the maximal observable patch) there exists a minimal cell δt^3 the energy of which, due to the time-energy uncertainty relation, cannot be smaller than ∼ t^{-1}. Hence, for the energy density of metric fluctuations of Minkowski space one finds

ρ ≃ 1/(t_P^2 t^2) , (9)

which for t equal to the present age of the universe gives the observed value [17] of the dark energy density. Time will lose its physical meaning when δt ≳ t, which is tantamount to the decreasing of the background energy density, Eq.(9), below t^{-4}. One can say the existence of this background energy density assures maximal stability of the Minkowski space-time against the fluctuations, as Eq.(3) determines the maximal accuracy allowed by nature. On the basis of the above arguments one can go further and see that, due to the Károlyházy relation, the energy E coming from the time-energy uncertainty relation Et ∼ 1 is determined with the accuracy δE ∼ Eδt/t.
Respectively, one finds that the energy density ρ = E/δt^3 is characterized by the fluctuations δρ = δE/δt^3, giving

δρ ≃ 1/(t_P^{4/3} t^{8/3}) .

Attempts to estimate the dynamics of the dark energy predicted by the Károlyházy relation during the cosmological evolution of the universe, as well as other cosmological implications, can be found in [18].
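It is instructive to put numbers into Eq.(9). In Planck units ρ/ρ_P = (t_P/t)², and taking the present age of the universe this lands within an order of magnitude or two of the observed dark energy density ∼ 10^{-123} in Planck units (the comparison window below is ours, chosen to reflect that order-of-magnitude character):

```python
# Eq.(9) evaluated today: rho ~ 1/(t_P^2 t^2), i.e. (t_P / t)^2 in Planck units.
t_P = 5.39e-44    # Planck time [s]
t_U = 4.35e17     # present age of the universe [s] (~13.8 Gyr)

rho = (t_P / t_U) ** 2   # energy density of metric fluctuations / rho_Planck

# observed dark-energy density is ~1e-123 in Planck units;
# Eq.(9) reproduces it to within roughly an order of magnitude
assert 1e-124 < rho < 1e-120
```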

Experimental signatures
A question of paramount importance is to estimate the observable effects induced by the quantum gravitational fluctuations of the background metric. Metric fluctuations naturally produce uncertainties in energy-momentum measurements, for the particle with momentum p has the wavelength λ = 2πp^{-1}, and due to the length uncertainty one finds δp = 2πλ^{-2}δλ, δE = pE^{-1}δp. An interesting idea for detecting the space-time fluctuations was proposed in [19]. The theoretical framework put forward in [19] to describe the incoherence of light from distant astronomical sources due to Planck-scale quantum gravitational fluctuations of the background metric is as follows. It is assumed that the light coming from distant extragalactic sources, the diffraction/interference images of which are seen through two-slit telescopes, is coherent from the beginning but can accumulate appreciable phase incoherence tδω, even for small δω caused by the quantum gravitational fluctuations of the background metric, if the length of propagation, t, is large enough. So it is simply understood that the time dependence of the wave, tω, varies due to quantum gravitational fluctuations as δ(tω) = ωδt + tδω, and because the second term dominates it is taken as the main source of phase incoherence. The condition tδω ≥ 2π is understood as a criterion for incoherence that should lead to the destruction of the diffraction/interference patterns when the source is viewed through a telescope. In [20] the distance through which the wave-front recedes when the phase increases by tδω is taken as an error in the measurement of a length, t, by the light with wavelength 2π/ω, and due to this length variation an apparent blurring of distant point sources was estimated. In [21], to mitigate the situation, the cumulative factor t/λ in the phase incoherence was replaced (actually in an ad hoc manner) by (t/λ)^{1/3}. This reduced expression for the phase incoherence is used in [22] as well.
Soon after the appearance of the paper [19] it was noticed in [23] that such a naive approach overestimates the effect, as the authors of [19] do not take into account the van Cittert-Zernike formalism representing the basics of stellar interferometry [24]. Actually, the rate of this effect is discouragingly small to be detectable by stellar interferometry observations [25]. Let us emphasize the main points ignored in [19], which prove to be important in estimating the correct rate of the effect. Light from a real physical source is never strictly monochromatic but rather quasi-monochromatic; even the sharpest spectral line has a finite width. In a wave produced by a real source the amplitude and phase undergo irregular fluctuations, the rapidity of which depends on the width of the spectrum δω. Such a quasi-monochromatic wave, which is usually referred to as a wave packet, is characterized by a mean frequency ω̄ obtained by averaging ω over the power spectrum of the packet. The width δω determines the duration of the wave packet, δt ≃ δω^{-1}, which is an important characteristic for the interference effect during a superposition of quasi-monochromatic beams. Namely, for the interference effect to take place, the path difference between the quasi-monochromatic beams must be smaller than the coherence length δt. There is an increment of the wave packet width due to the background metric fluctuations, which can be simply estimated as

δω/ω̄ ≃ (l_P/λ̄)^{2/3} .

The wavelength of the light from the stellar objects considered in [19,20,22] is in the region λ̄ ≃ µm, and correspondingly for the width increment of a wave packet one finds δω/ω̄ ≃ 10^{-19}. Such a small increment affects neither Eq.(12) nor the requirement that the path difference between quasi-monochromatic beams coming from distant stellar objects be smaller than the coherence length δω^{-1} [25].
The expression that comes from the van Cittert-Zernike approach has the form [24]

D ≃ r λ̄ / ρ , (13)

where D denotes the maximal separation between the interferometer slits for which the interference still takes place for the light with wavelength λ̄ received from a celestial source located at a distance r and having the size ρ. As we stressed, there is no effect in Eq.(13) due to the quantum-gravitational increment of λ̄. Now, taking the variations of ρ, r in Eq.(13), one finds

δD ≃ D (δr/r + δρ/ρ) , (14)

where δr ≃ l_P^{2/3} r^{1/3} and δρ ≃ l_P^{2/3} ρ^{1/3} are the Károlyházy uncertainties. Let us estimate the maximum of this variation by choosing the corresponding parameters from the data [19,20,22], that is, r ∼ 1 kpc, D ∼ 10^3 cm, λ̄ ∼ 10^{-4} cm. For this set of parameters, from Eq.(14) one finds

δD ∼ 10^{-28} cm . (15)
The separation between the slits, D, for the observations analyzed in [19,20,22] varies from 1 m to 25 m. Thus the observations analyzed in [19,20,22] are simply insensitive to such a small variation of D; that is, they have no chance to detect the effect of quantum gravitational fluctuations.
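For completeness, the estimate leading to Eq.(15) can be reproduced in a few lines. The parameter values are the ones quoted above; the source size ρ is inferred from Eq.(13), and the fractional uncertainties are the Károlyházy ones:

```python
# Order-of-magnitude reproduction of Eq.(15): quantum-gravitational variation
# of the van Cittert-Zernike slit separation D ~ r * lambda / rho.
l_P = 1.6e-33    # Planck length [cm]
r   = 3.1e21     # source distance ~ 1 kpc [cm]
D   = 1e3        # slit separation [cm]
lam = 1e-4       # mean wavelength [cm]

rho = r * lam / D                    # source size inferred from Eq.(13)
frac = lambda x: (l_P / x) ** (2 / 3)   # Karolyhazy fractional uncertainty
dD = D * (frac(r) + frac(rho))       # Eq.(14)

# far below the meter-scale slit separations of real interferometers
assert dD < 1e-27
```

The dominant contribution comes from the source-size variation δρ/ρ, and even so δD ∼ 10^{-29}-10^{-28} cm, some thirty orders of magnitude below the instrument scale.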

ADD braneworld setup
If E_P ∼ 10^19 GeV represents a proper quantum gravity scale, then one can say at least two extremely different fundamental scales, the electroweak scale E_EW ∼ 1 TeV and the Planck scale E_P, appear to be present in the universe. The fact that their ratio appears to be around E_EW/E_P ∼ 10^{-16} is a puzzle for many reasons. First, one can have the theoretical prejudice that a deeper comprehension of physics should lead us to a theory with one single energy scale. So the fact that gravity is so much weaker than the other forces of Nature seems a problem whose resolution will lead us to a better understanding of our Universe. Second, even if we assume that the fundamental theory has two different energy scales, one has to understand what is there in the "desert" between these two scales, and at what scale new physics will appear. This is a very important question both for experimental purposes (is it worth building accelerators to explore this desert?) and for theoretical problems. In fact, the new physics scale is assumed to set the ultraviolet cutoff for the presently known particle physics. It is well known that the standard model of particle physics suffers from a major theoretical problem, which is the stability of the Higgs mass under radiative corrections: the Higgs mass is quadratically sensitive to the ultraviolet cutoff, and if the cutoff scale is much higher than the electroweak scale an extreme fine-tuning between the bare mass and the one-loop correction is required to give a low value for the physical mass. It is plausible therefore that the new physics scale is very close to E_EW. However, the problem could still persist going up to the Planck scale, which is the highest known scale, unless the new physics is able to "screen" the sensitivity to E_P. This possibility is the main motivation for models of low-scale supersymmetry.
However, no hint of low-scale supersymmetry has been found in accelerators up to now, and the arrival of the Large Hadron Collider (LHC) calls for other possibilities. An alternative possibility, attracting considerable continuing interest, proposed in the framework of models with large extra dimensions, assumes the presence of one fundamental scale; the weakness of gravity then comes from the fact that only gravity propagates in the bulk [26]. (For earlier braneworld particle physics phenomenology one can see the papers [27].)
Let us briefly recapitulate the basics of the ADD model. The extra dimensions run from 0 to L, where the points 0 and L are identified [26]. The standard model particles are localized on the brane, while gravity is allowed to propagate throughout the higher-dimensional space, and the fundamental scale of gravity is taken to be close to the electroweak one, E_F ∼ TeV. The mass gap between adjacent KK modes is ∼ L^{-1}, and correspondingly the modification of Newton's inverse square law (due to the exchange of KK modes) takes place beneath the length scale L. Roughly, the gravitational potential on the brane produced by a brane-localized point-like particle m looks like

V(r) ≃ - l_F^{2+n} m / r^{1+n} for r ≪ L , V(r) ≃ - l_P^2 m / r for r ≫ L , (16)

with the four-dimensional and fundamental scales tied together by matching at r ≃ L, l_P^2 = l_F^{2+n}/L^n. Strictly speaking, the transition of four-dimensional gravity from the region r ≫ L to the higher-dimensional law for r ≪ L is more complicated near the transition scale ∼ L than it is schematically described in Eq.(16), but this is less significant for the purposes of our discussion.
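The standard relation l_P² = l_F^{2+n}/L^n (equivalently E_P² = E_F^{2+n} L^n) fixes the compactification size once E_F is chosen. A small sketch with standard numerical values reproduces the familiar ADD estimates: astronomical L for n = 1 (excluded) and sub-millimeter L for n = 2:

```python
# Size of the extra dimensions from E_P^2 = E_F^(2+n) L^n, i.e.
# L = l_F * (E_P / E_F)^(2/n), with l_F = hbar*c / E_F.
hbar_c = 1.97e-14          # [GeV * cm]
E_P, E_F = 1.2e19, 1.0e3   # four-dimensional Planck and fundamental scales [GeV]

def L_extra(n):
    """Compactification length scale [cm] for n extra dimensions."""
    return (hbar_c / E_F) * (E_P / E_F) ** (2.0 / n)

assert L_extra(1) > 1e13          # n = 1: solar-system scale, ruled out
assert 1e-2 < L_extra(2) < 1.0    # n = 2: sub-millimeter range
```

This is why sub-millimeter tests of Newton's law directly probe the n = 2 scenario with a TeV fundamental scale.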
Operational definition of brane induced space-time

Approach 1. - Let us repeat the discussion for the measurement of space-time distances by brane-localized clocks and light signals. Nothing changes up to Eq.(2). The upper bound on the mass of the clock is set by the requirement that the size of the clock not be smaller than its gravitational radius,

r_c ≥ r_g ,

where r_g denotes the gravitational radius of the clock. If the gravitational radius of the clock is smaller than L, that is, r_g < L, through Eq.(16) one finds r_g ≃ (l_F^{2+n} m)^{1/(1+n)}, and correspondingly the minimal unavoidable error in length measurement becomes

δl_min ≃ (l_F^{2+n} l)^{1/(3+n)} . (19)

If the gravitational radius of the clock is greater than L, that is, r_g > L, one gets Eq.(3). Approach 2. - Nothing changes up to Eq.(4). If the fineness δx is smaller than L, that is, δx ≲ L, the requirement that the clock not become a black hole gives instead of Eq.(5)

l_F^{2+n} m ≲ (δx)^{1+n} . (20)

Combining Eqs.(4,20), Eq.(6) changes to l_F^{2+n} t ≲ (δx)^{3+n}, which, by taking into account that our light-clock having the size δx cannot measure time to a better accuracy than δt = δx, is nothing but Eq.(19). Approach 3.
- If the size of the clock is smaller than L, that is, r_c < L, the gravitational time delay takes the form

τ ≃ √(1 - 2 l_F^{2+n} m / r_c^{1+n}) t .

For the disturbance of the background metric to be small, the size of the clock should be much greater than its gravitational radius, r_c ≫ (l_F^{2+n} m)^{1/(1+n)}. Under this assumption, for the gravitational disturbance in the time measurement one finds

t - τ ≃ l_F^{2+n} m t / r_c^{1+n} .

Since we are using a light-clock, its mass cannot be less than π/r_c, which, by taking into account that the size of the clock determining its resolution time represents in itself an error during the time measurement, gives

δt = 2r_c + π t l_F^{2+n} / r_c^{2+n} ,

which after minimization with respect to r_c leads to Eq.(19).
For the brane induced space-time all these approaches also lead to the same result for the space-time uncertainty, Eq.(19), but again one should notice that the third approach strongly discourages taking the optimal size of the clock to be close to its gravitational radius. The first and second approaches do not take into account the gravitational time delay of the clock and correspondingly give misleading results for the optimal parameters of the clock. For the optimal parameters of the clock for measuring a length scale l ≲ L^{3+n} l_F^{-(2+n)} one finds

r_c ≃ (l_F^{2+n} l)^{1/(3+n)} ≃ δl_min(l) , m ≃ π/r_c .

From these relations one easily finds that Eq.(19) goes over into Eq.(3) when l approaches L^{3+n} l_F^{-(2+n)}.
Constraints on the braneworld scenarios

Let us start with a simple example. Imprecision in length measurement sets a limitation on the precision of energy-momentum measurement:

λ = 2πp^{-1} ⇒ δp = 2πλ^{-2} δλ , δE = pE^{-1} δp .
A brane-localized particle with momentum greater than L^{-1} probes length scales beneath L, for which the gravitational law is higher-dimensional. So, in this case one can directly use Eq.(19), which gives

δE ≃ E (E/E_F)^α , (22)

where α = (2 + n)/(3 + n). Using this expression one can simply estimate that for ultra high energy cosmic rays with E ∼ 10^8 TeV the uncertainty in energy becomes greater than δE ≃ 10^13 TeV.
The experimental uncertainty of the energy of high-energy cosmic rays is almost comparable to the energy itself; that is, on the experimental side we know δE ≲ 10^8 TeV.
One simply finds that the ultra high energy cosmic rays put the restriction E_F ≳ 10^8 TeV on the fundamental scale. From the GZK cutoff we know that the energy of a high energy cosmic proton drops below 10^8 TeV (through successive collisions with the typical CMBR photons accompanied by the production of pions) almost independently of the initial energy after it travels a distance of the order of ∼ 100 Mpc [28]. That is, protons detected with energies > 10^8 TeV should originate within the GZK distance R_GZK ≃ 100 Mpc. But this mechanism is of little use against the amplification of the energy of the protons (coming usually from distances greater than the GZK distance) through the background metric fluctuations, Eq.(22), as this amplification takes place with equal probability within and outside of the GZK distance.
(In itself, as long as the energy scale of high energy cosmic rays is much greater than the fundamental scale of gravity, their presence in the theory needs a separate consideration [29].) Actually the situation is more dramatic. From Eq.(22) one sees that for a particle with mass m ≪ E_F and energy E ∼ E_F, the uncertainty in energy becomes comparable to the energy itself. Thus the quantum fluctuations of space-time become appreciable even for TeV scale physics.
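Taking Eq.(22) in the form δE ≃ E(E/E_F)^α (the form reconstructed above from Eq.(19); function naming ours, TeV units throughout), a short sketch shows both statements at once: the fluctuation already matches the energy at E ∼ E_F, and it overwhelms the energy at the GZK scale:

```python
# Energy uncertainty induced by brane metric fluctuations, Eq.(22), in TeV units.
def dE(E, E_F=1.0, n=2):
    """delta_E ~ E (E/E_F)^alpha with alpha = (2+n)/(3+n); illustrative sketch."""
    alpha = (2 + n) / (3 + n)
    return E * (E / E_F) ** alpha

E_cr = 1e8   # GZK-scale cosmic-ray energy [TeV]

assert dE(E_cr) > E_cr   # fluctuation exceeds the measured energy itself
assert dE(1.0) >= 1.0    # already at E ~ E_F the uncertainty is ~ E
```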
In the case n = 2 one gets δω/ω̄ ≃ 10^{-10} and δD ≃ 10^{-20} cm; that is, in comparison with Eq.(15) the effect is amplified by 8 orders of magnitude, but it is still not large enough to affect the observations. Thus the stellar interferometry observations considered in [19,20,22] are not sensitive even to the lowering of the fundamental scale in the framework of large extra dimensions.
From Eq.(22) one sees that the light speed is given with the precision

δc ≃ (E/E_F)^α . (23)

Thus for photons emitted simultaneously from a distant source coming towards our detector, we expect an energy-dependent spread in their arrival times. To maximize the spread in arrival times, it is desirable to look for energetic photons from distant sources. This proposal was first made in another context in [30]. The analysis of the TeV flares observed from the active galaxy Markarian 421 [31] puts a limit on the variation of the light speed with energy. This limit applied to Eq.(23) gives the following limitation on E_F [32,33]:

E_F ≳ 10^16 GeV .
All of the above restrictions are intimately related to the modification of gravity, Eq.(16), beneath the length scale L ≫ l_P. Therefore one can remove the above experimental bounds in the case when the gravity modification scale on the brane is close to the length scale ∼ 10^{-30} cm. But at the same time we are interested in keeping the fundamental scale of gravity, E_F, close to E_EW.

Shape modulus of extra space
What can be a possible protecting mechanism against these unacceptably amplified fluctuations for a low-lying fundamental scale of gravity? Following the paper [34], let us take note of the role of the shape modulus of the extra space. A flat, two-dimensional toroidal compactification can be analyzed in much detail from this point of view [34]. Such a torus is specified by three real parameters (the two radii L_1, L_2 of the torus as well as the shift angle θ), and corresponds to identifying points which are related under the two coordinate transformations

(y_1, y_2) → (y_1 + 2πL_1, y_2) , (y_1, y_2) → (y_1 + 2πL_2 cos θ, y_2 + 2πL_2 sin θ) . (24)
Note that tori with different angles θ are topologically distinct up to modular transformations, while most previous discussions of large extra dimensions have focused on the volume of such tori, essentially fixing θ = π/2. Given the torus identifications in Eq.(24), it is straightforward to determine the corresponding KK spectrum. The KK eigenfunctions for such a torus are given by

exp{ i n_1 y_1/L_1 + i (n_2/L_2 - n_1 cos θ/L_1) y_2/sin θ } , (25)

where n_i ∈ Z.
Applying the (mass)^2 operator -(∂^2/∂y_1^2 + ∂^2/∂y_2^2), we thus obtain the corresponding KK masses

M^2_{n_1,n_2} = (1/sin^2 θ) [ n_1^2/L_1^2 - 2 n_1 n_2 cos θ/(L_1 L_2) + n_2^2/L_2^2 ] . (26)

We see that while the KK spectrum maintains its invariance under (n_1, n_2) → -(n_1, n_2), it is no longer invariant under n_1 → -n_1 or n_2 → -n_2 individually. The spectrum is, however, invariant under either of these shifts together with the simultaneous shift θ → π - θ. We can therefore restrict our attention to tori with angles in the range 0 < θ ≤ π/2 without loss of generality. It is clear from Eq.(26) that the KK masses depend on θ in a non-trivial, level-dependent way. We are interested in the behavior of the KK masses when the volume of the compactification manifold is held fixed. For this purpose it is useful to reparameterize the three torus moduli (L_1, L_2, θ) in terms of a single real volume modulus V and a complex shape modulus τ:

V = 4π^2 L_1 L_2 sin θ , τ = (L_2/L_1) e^{iθ} . (27)

We shall also define τ_1 ≡ Re τ and τ_2 ≡ Im τ. Using these definitions, we can express (L_1, L_2, θ) in terms of (V, τ) via

L_1^2 = V/(4π^2 τ_2) , L_2^2 = V |τ|^2/(4π^2 τ_2) , cos θ = τ_1/|τ| , sin θ = τ_2/|τ| , (28)

which yields the KK masses

M^2_{n_1,n_2} = (4π^2/V) |n_2 - n_1 τ|^2 / τ_2 . (29)

Note that although Eq.(29) is merely a rewriting of Eq.(26), we have now explicitly separated the effects of the volume modulus V from those of the shape modulus τ. At the expense of θ one can try to increase the mass gap between the KK modes with the volume of the extra space held fixed. One is therefore led to study the limit θ ∼ ε ≪ 1, in which

(V/4π^2) M^2_{n_1,n_2} = (n_2 - n_1 |τ|)^2/(|τ| ε) + n_1 n_2 ε + O(ε^3) . (30)

(Note that in order to keep the volume fixed as θ → 0, the radii are now forced to grow increasingly large.) The first term on the right side of Eq.(30) generally diverges when |τ| ≡ L_2/L_1 is irrational, because in this case n_2 - n_1 |τ| never vanishes exactly. Thus, the general KK state becomes infinitely heavy as θ → 0.
However, for any fixed chosen value of ε, we can always find special states (n_1, n_2) for which this first term comes arbitrarily close to cancelling; this simply requires choosing sufficiently large values of (n_1, n_2). These special states with large (n_1, n_2) are potentially massless. On the other hand, choosing such large values of (n_1, n_2) drives the second term in Eq.(30) to larger and larger values. The third and higher terms are always suppressed relative to the second term in the ε → 0 limit, even as (n_1, n_2) grow large. We will not go into further analysis of Eq.(30), as the reader can find it in the paper [34], but simply indicate that in certain cases (for small values of θ) it is possible to maintain the ratio between the higher-dimensional and four-dimensional Planck scales while simultaneously increasing the KK graviton mass gap by an arbitrarily large factor. This mechanism can therefore be used to eliminate the above experimental bounds on theories with large compact extra dimensions.
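The equivalence of Eqs.(26) and (29), and the growth of the KK gap at fixed volume as the shift angle shrinks, can be checked directly. The parameter values below are illustrative (an irrational ratio L_2/L_1 = √2 is chosen so that no KK state becomes light):

```python
import math

def kk_mass2(n1, n2, L1, L2, theta):
    """KK (mass)^2 on the tilted torus, Eq.(26)."""
    return (n1**2 / L1**2 + n2**2 / L2**2
            - 2.0 * n1 * n2 * math.cos(theta) / (L1 * L2)) / math.sin(theta)**2

def kk_mass2_moduli(n1, n2, V, tau):
    """The same spectrum written via the volume and shape moduli, Eq.(29)."""
    return (4.0 * math.pi**2 / V) * abs(n2 - n1 * tau)**2 / tau.imag

# consistency of the two forms for an arbitrary tilted torus
L1, L2, theta = 1.0, math.sqrt(2.0), 0.3
V = 4.0 * math.pi**2 * L1 * L2 * math.sin(theta)
tau = (L2 / L1) * complex(math.cos(theta), math.sin(theta))
a = kk_mass2(1, 2, L1, L2, theta)
b = kk_mass2_moduli(1, 2, V, tau)
assert abs(a - b) < 1e-9 * abs(a)

def gap(theta, V=1.0, ratio=math.sqrt(2.0)):
    """Lightest nonzero KK mass^2 at fixed volume V and fixed L2/L1 = ratio."""
    L1 = math.sqrt(V / (4.0 * math.pi**2 * ratio * math.sin(theta)))
    L2 = ratio * L1
    return min(kk_mass2(n1, n2, L1, L2, theta)
               for n1 in range(-5, 6) for n2 in range(-5, 6)
               if (n1, n2) != (0, 0))

# shrinking the shift angle at fixed volume raises the KK gap
assert gap(0.2) > gap(math.pi / 2.0)
```

Note that the gap does not grow monotonically as θ → 0 over a finite range of (n_1, n_2): the special nearly-cancelling states of Eq.(30) show up as occasional light levels, exactly as discussed above.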

Concluding remarks
The way of reasoning presented in this paper is completely in the spirit of quantum mechanics, that is, to regard reality as that which can be observed. First, following the discussions [2,3,9,10,12,13], we analyzed in a comparative manner the principal limitations on space-time measurement in light of quantum mechanics and general relativity. All of the presented approaches lead uniquely to the Károlyházy uncertainty relation (3), but the third approach, taking into account the gravitational time delay, reveals an important disagreement with the other approaches in estimating the optimal parameters of the clock. Namely, it tells us that the optimal parameters of the clock for measuring the space-time distance l are given by

r_c ≃ δl_min(l) , m ≃ 1/δl_min(l) ,

where δl_min(l) denotes the uncertainty in length measurement given by Eqs.(3,19) in the four and higher-dimensional scenarios respectively. Thus, from Eq.(3) one finds that for measuring the present Hubble horizon ∼ 10^28 cm the optimal parameters of the clock are estimated as r_c ≃ 10^{-13} cm, m ≃ 1 GeV. Hitherto, say in the framework of approaches 1 and 2, it was understood mistakenly that the size of an optimal clock had to be close to its gravitational radius, that is, the mass of such a clock was defined as m = r_c/l_P^2. The reason for this misconception was the disregard of the gravitational time delay.
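The quoted numbers follow from r_c ≃ (l_P² l)^{1/3} and m ≃ 1/r_c; as a quick check with standard constants:

```python
# Optimal clock for measuring the present Hubble horizon, from Eq.(3):
# r_c ~ (l_P^2 l)^(1/3), m ~ 1/r_c (converted to GeV via hbar*c).
l_P    = 1.6e-33    # Planck length [cm]
hbar_c = 1.97e-14   # [GeV * cm]
l_H    = 1e28       # present Hubble horizon [cm]

r_c = (l_H * l_P**2) ** (1 / 3)   # optimal clock size ~ delta_l_min(l_H)
m   = hbar_c / r_c                # optimal clock mass

assert 1e-13 < r_c < 1e-12        # ~ 1e-13 cm, as quoted in the text
assert 0.01 < m < 1.0             # ~ 0.1-1 GeV, i.e. a nucleon-scale mass
```

Remarkably, the optimal clock for cosmological distances has roughly the size and mass of a hadron, far from the black-hole regime m = r_c/l_P².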
The operational definition of space-time in light of quantum mechanics and general relativity indicates an expected imprecision in the space-time structure. The resultant intrinsic imprecision in the space-time structure is quantified by the Károlyházy uncertainty relation. This relation sheds new light on the relation between the IR and UV scales in an effective quantum field theory satisfying the black hole entropy bound [14]. In spite of the fact that the minimal uncertainty in distance measurement given by the Károlyházy uncertainty relation is much greater than the Planck length (provided l ≫ l_P), the rate of quantum-gravitational fluctuations is still controlled by the Planck scale and is therefore too small to be detectable by present experiments and observations. Nevertheless, the rate of fluctuations can become unacceptably amplified when the fundamental scale of gravity is lowered in the framework of large extra dimensions. It is important to notice that this amplification of the fluctuations is not directly related to the low quantum gravity scale but rather to the higher-dimensional modification of Newton's law at relatively large distances. Therefore the models with compact extra dimensions can be protected from these fluctuations at the expense of the shape modulus of the extra space. That is, we can keep the volume of the extra space fixed in order to have a low fundamental scale of gravity (see relations (17) and (27)), but at the same time, using the shape modulus of the extra space, we can enlarge the mass gap between the KK modes so as to reduce the length scale at which the modification of Newton's inverse square law takes place [34]. This procedure can remove the above experimental bounds on the fundamental scale of gravity, as they arise because of the relatively large length scale at which Newton's inverse square law changes to the higher-dimensional one.
The considerations presented demonstrate a dramatic difference between braneworld models with compact and open extra dimensions respectively. Models with a low fundamental scale of gravity having open extra dimensions may be in serious trouble, as there seems to be almost no natural way to protect them from the unacceptably amplified quantum-gravitational fluctuations.