
The new SI and the fundamental constants of nature

Published 16 October 2020 © 2020 European Physical Society
Citation: Miguel A. Martin-Delgado 2020 Eur. J. Phys. 41 063003. DOI: 10.1088/1361-6404/abab5e


Abstract

The launch in 2019 of the new international system of units is an opportunity to highlight the key role that the fundamental laws of physics and chemistry play in our lives and in all the processes of basic research, industry and commerce. The main objective of these notes is to present the new SI in an accessible way for a wide audience. After reviewing the fundamental constants of nature and their universal laws, the new definitions of the SI units are presented using, as a unifying principle, the discrete nature of energy, matter and information in these universal laws. The new SI system is here to stay: although the experimental realizations may change with technological improvements, the definitions will remain unaffected. Quantum metrology is expected to be one of the driving forces to achieve new second-generation quantum technologies.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

On May 20th, 2019, coinciding with World Metrology Day, the new international system of units (SI) came into force. It had been approved at the 26th General Conference on Weights and Measures (CGPM), which met in Versailles during November 13th–16th, 2018 [1]. This is a historic achievement: the culmination of many years of joint work between the national metrology institutes of the member states and the BIPM (Bureau International des Poids et Mesures), and a wonderful example of international collaboration.

In 2018 the CGPM approved the revision of four of the base units (the kilogram, the ampere, the kelvin and the mole). In this way, all the base measurement units are now linked to physical constants instead of arbitrary references. This means the retirement of the famous mass standard, the IPK kilogram [13], which was the only standard still tied to a material artifact. Now all the base units are associated with nature's rules to create our measurement rules [1]. What underlies all these redefinitions is the possibility of carrying out measurements at atomic and quantum scales in order to realize the units at the macroscopic scale.

Although removing artifacts from the definitions of base units helps guarantee their stability and universality, the new system brings with it the enormous challenge of explaining how it works to society in layman's terms, and in high schools and universities. Artifacts are tangible (see figure 1), while the fundamental laws of nature (physics and chemistry) are abstract and harder for the general public to grasp. In this sense, section 4.1 presents a unified treatment of all the definitions of SI units, using as a common framework the discretization of energy, matter and information, which is the fundamental ingredient of the laws of physics and chemistry to which the new SI units are linked. These notes arise from several introductory lectures explaining the relationship of the fundamental constants to the new unit definitions.


Figure 1. The international prototype kilogram (IPK), kept at the BIPM near Paris, and its six official copies, the témoins. Reproduced with permission from BIPM.


When introducing the constants of nature in section 3, a distinguishing feature is highlighted among the five universal constants associated with fundamental laws of nature: while h, c and e are associated with principles of symmetry, this is not the case for Boltzmann's constant k and Avogadro's constant NA.

Even though all the new definitions of the base units are presented here, an exhaustive presentation of their experimental realizations is avoided, for it is far too technical for the purpose of these notes; more detailed documentation exists for that [2–4]. The case of the 'quantum kilo' deserves special treatment, as it is so novel and mass is something so common in daily life. Thus the Kibble balance, which is the practical realization of the new kilo, is given a simple description of how it works.

The rest of the article is organized as follows: in section 2, the new methodology of separating unit definitions from their experimental realizations is explained; section 3 describes the fundamental constants of nature appearing in the new definition of SI units as preparation for the explicit definition in section 4.1 of the seven base units. In section 4.2 the role of quantum metrology in the new SI is explained through three examples: quantum clocks, the Kibble balance and the 'quantum kilo', and the quantum metrological triangle. In section 5 we reflect on the absence of the universal gravitational constant G in the new system of units and its implications. Section 6 is devoted to conclusions.

2. The new SI of units

The new international system of units (SI), in force since May 20th, 2019, represents a great conceptual and practical revolution: for the first time all units are linked to natural constants, many of them universal, and it means a dream of physics and chemistry has come true.

The foundation of the new SI is based on the following premises [2, 3]:

  • (a)  
    The separation of unit definitions from their particular experimental realizations.
  • (b)  
    The linking of unit definitions to natural constants.
  • (c)  
    The new unit system is designed to last over time and not be subject to changes due to the continuous advances in the methods of experimental measurement.

The great conceptual advance of the new SI consists in separating the practical realization of units from their definitions. This allows the units to be realized independently anywhere and at any time, as envisioned by the Committee of Experts of the Decimal Metric System in 1789. It also allows new types of realizations to be added in the future as new technologies are developed, without having to modify the definition of the unit itself. An example comes from the new quantum technologies and the development of the quantum clock, which will change the experimental realization of the second in the near future (see subsection 4.1).

In the new SI, the units of mass (kg), electric current (A), temperature (K) and amount of substance (mol) are redefined by linking them to the four universal constants that appear in table 1, whereas the units of time (s), length (m) and luminous intensity (cd) remain associated with constants of nature as before (see figure 2).

Table 1. CODATA 2018 values [5, 6] of the universal constants whose values have been fixed to define the kilogram, ampere, kelvin and mole in the new SI of units, in force at the BIPM since May 20th, 2019.

Constant    Value
h           6.626 070 15 × 10⁻³⁴ J s
e           1.602 176 634 × 10⁻¹⁹ C
k           1.380 649 × 10⁻²³ J K⁻¹
NA          6.022 140 76 × 10²³ mol⁻¹

Figure 2. Schematic relationship between the base units of the new SI and their associated natural constants. In the central part appear the units and their dependencies on one another: the second enters the definition of five other units, while the mole appears decoupled. The symbols of the constants used to define the units appear on the outside. See subsection 4.1. Reproduced from Emilio Pisanty/Wikipedia. CC BY 4.0.


The dependencies among the units in the new SI are now much more symmetrical than in the previous system, as is apparent at a glance from figure 2. The second is still the base unit on which all the others depend, except for the mole, which is decoupled from the rest of the units. The fundamental constants that are set to an exact value appear outside the scheme, and the units linked to them appear inside. These redefinitions have fundamental consequences for certain magnitudes, such as the electric constant ε0 and the magnetic constant μ0 in vacuum, which cease to be exact and become experimentally determined quantities in the new SI. Thus, the magnetic constant is determined by the equation [2]:

μ0 = 2αh/(c e²),    (1)

so that all constants in this equation have a fixed value (see table 1) except for the electromagnetic fine structure constant α, which is experimentally measured. In turn, this results in the values (CODATA 2018)

α = 7.297 352 5693(11) × 10⁻³,    (2)

μ0 = 1.256 637 062 12(19) × 10⁻⁶ N A⁻².    (3)

Then, the electric constant in vacuum is obtained from the relationship:

ε0 = 1/(μ0 c²),    (4)

yielding the current value (CODATA 2018):

ε0 = 8.854 187 8128(13) × 10⁻¹² F m⁻¹.    (5)
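
As a quick numerical cross-check, the chain α → μ0 → ε0 can be retraced in a few lines (a sketch of ours in Python, taking the CODATA 2018 value of α as the only measured input):

```python
# Sketch: recover the (now measured) vacuum constants from the fixed ones.
alpha = 7.2973525693e-3   # fine structure constant, CODATA 2018 (measured)

h = 6.62607015e-34        # Planck constant, exact (J s)
e = 1.602176634e-19       # elementary charge, exact (C)
c = 299792458.0           # speed of light, exact (m/s)

mu0 = 2 * alpha * h / (c * e**2)   # equation (1)
eps0 = 1 / (mu0 * c**2)            # equation (4)

print(f"mu0  = {mu0:.11e} N A^-2")   # ~1.25663706212e-06
print(f"eps0 = {eps0:.10e} F m^-1")  # ~8.8541878128e-12
```

Since α carries an experimental uncertainty, so now do μ0 and ε0: their error bars are inherited directly from α.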

However, in our daily life these changes will not cause any trouble, because they typically amount to changes of a part in 10⁸ or even less. Their effects are very important, though, in the high-precision measurements needed in research laboratories and metrology institutes, where it is essential to make exact and precise measurements in order to know whether a new discovery has really been made.

3. The fundamental constants of nature

Underlying every universal constant of nature is one of the fundamental laws of physics and chemistry. Of the seven units of the new SI, five are associated with universal constants of nature, as shown in table 2.

Table 2. The five universal constants of nature and their corresponding laws they are associated with. The laws of physics and chemistry allow us to describe natural phenomena once the values of the constants are known.

Symbol    Constant           Law
c         Speed of light     Theory of relativity
h         Planck             Quantum physics
k         Boltzmann          Thermodynamics
e         Electron charge    Quantum electrodynamics
NA        Avogadro           Atomic theory

The fundamental constants are like the DNA of our Universe. Other universes, if they exist, may have a different set of universal constants. In our Universe, depending on the particular physical phenomenon and its scale, we need certain fundamental constants to explain it. With this handful of constants we can describe our physical world on all scales, from the atomic, microscopic, mesoscopic and macroscopic to the astronomical and cosmological.

In addition to these five universal constants, the new SI contains two extra constants that are used to determine the unit of time and that of luminous intensity. The first is the hyperfine transition frequency of the unperturbed ground state of the cesium 133 atom, denoted ΔνCs. Although this is a constant of nature, we cannot consider it fundamental on an equal footing with the other five. If we did, any energy gap in the spectrum of any atom would also be fundamental and we would end up with an infinite number of fundamental constants. In addition, this frequency is in principle computable using the laws of quantum electrodynamics, while constants such as those in table 1 are not calculable from currently known first principles. As for the luminous efficacy Kcd, the associated constant is not even universal and is purely conventional. In summary, only five of the seven constants used in the new SI are really fundamental in the sense expressed here.

There is another essential aspect that deserves to be highlighted: three of these five universal constants are associated with symmetry principles of nature. The speed of light c is responsible for the unification of space and time in the theory of relativity [7], one of the pillars of modern physics. What underlies this fundamental law is the principle of relativity, which declares all inertial reference frames in relative motion to be physically equivalent. It is this symmetry that is responsible for the constancy of the speed of light. If c were not constant, the Lorentz transformations, and therefore the relativity principle, would be broken.

The Planck constant h is responsible for the fact that physical quantities such as energy, angular momentum, etc, can take on discrete values, called quanta. It is the fundamental constant of quantum mechanics, another of the pillars of modern physics. What underlies this fundamental law is the unitarity principle, which enforces that the probability of finding the particles in their quantum state be preserved throughout their temporal evolution. Even more basic is the linearity of quantum mechanics, represented by the superposition principle of states, which is necessary to guarantee unitarity. If h were not constant, the unitarity principle would be broken. It is the superposition principle (linearity) of quantum mechanics that is at the root of all the counterintuitive surprises that quantum physics brings about, as Feynman [8, 9] teaches us. Precision tests of possible non-linearities of quantum mechanics can be performed using non-linear models, which bound violations of linearity at the level of only 10⁻²¹ [10], and down to 4 × 10⁻²⁷ with direct measurements [11]. To obtain these estimates, the highly exact measurements of the radio-frequency transitions probed in frequency standards are used: a possible non-linearity would produce a detuning of those resonant transitions in the standards.

The charge of the electron e is the value of the elementary (unconfined) source of the electric field in quantum electrodynamics, the first of the known elementary particle theories and the one that serves as a reference for the rest of the fundamental interactions. What underlies this fundamental law is the principle of gauge invariance, which describes the known elementary interactions. In the case of electromagnetism, the invariance group is the simplest one, U(1). It is this symmetry that is responsible for the constancy of the electron charge: if e were not constant, the gauge symmetry would break down.

In these three examples, the values of the fundamental constants c, h and e are protected by nature's symmetries. An increasingly accurate measurement of them may reveal a lack of constancy, and therefore the violation of one of the fundamental laws of physics. Metrology is thus also a source of discovery of new physics, through the improvement over time of its measurement methods. A very important example of this within the new SI is the so-called quantum metrological triangle that we will see in subsection 4.2, as are the possible variations of the fine structure constant α or of the ratio of the proton mass to the electron mass (see section 4.2).

The Boltzmann constant k is the conversion factor that relates the thermodynamic temperature T of a body to the thermal energy of its microscopic degrees of freedom (constituents). It is the fundamental constant of statistical physics, which studies the relationship between macroscopic physics and its microscopic constituents, another of the pillars of physics. The constant k appears in the description of the macroscopic world through the probability Pi, or Boltzmann factor, of finding a system in a microscopic state i when it is in thermodynamic equilibrium at temperature T:

Pi = e^(−Ei/kT)/Z,    (6)

where Z is the partition function characteristic of the system. Boltzmann established the relationship between the macroscopic and microscopic worlds in his formula for the entropy S:

S = k log W,    (7)

where W is the number of different microscopic states corresponding to a macroscopic state of the system with given energy E. This is the famous equation engraved on Boltzmann's tomb in Vienna. However, Boltzmann established the relationship (7) as a proportionality law, without explicitly introducing his constant. Historically, Planck was the first to write it down, in the article where he laid down the black-body radiation law, along with his constant h of energy quanta [12]. Planck was also the first to give numerical values to these two constants, using the experimental values of the universal constants that appear in the Wien displacement law and the Stefan–Boltzmann law, which describe essential properties of radiation in thermal equilibrium. These first values turned out to be very close to the current ones [12] (see table 1):

h = 6.55 × 10⁻³⁴ J s,   k = 1.346 × 10⁻²³ J K⁻¹.    (8)
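
Planck's procedure can be retraced numerically. The sketch below (ours, not from the original article) uses modern values of the Stefan–Boltzmann constant σ and Wien's displacement constant b as the 'experimental' inputs and inverts the two radiation laws, σ = 2π⁵k⁴/(15c²h³) and b = hc/(xk), where x ≈ 4.965 solves the Wien equation x = 5(1 − e^(−x)):

```python
import math

sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4 (input)
b     = 2.897771955e-3   # Wien displacement constant, m K (input)
c     = 299792458.0      # speed of light, m/s
x     = 4.965114231      # root of x = 5*(1 - exp(-x))

# Solve the two radiation laws for k and h:
k = 15 * sigma * x**3 * b**3 / (2 * math.pi**5 * c)
h = x * b * k / c

print(f"h = {h:.4e} J s")   # ~6.63e-34; Planck in 1900 got 6.55e-34
print(f"k = {k:.4e} J/K")   # ~1.38e-23; Planck in 1900 got 1.346e-23
```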

The Boltzmann constant has no associated symmetry principle, unlike the other three constants mentioned above.

Avogadro's constant NA is a conversion factor that relates the macroscopic amount of a substance to the number of its elementary constituents, whether they are atoms, ions, molecules, etc. It is a fundamental constant in the atomic theory of matter in physics and chemistry. The mole is introduced to handle macroscopic quantities of a substance made of a huge number of elementary entities. Avogadro's constant NA is the proportionality factor between the mass of one mole of a substance (the molar mass) and the average mass of one of its molecules, or whatever its elementary constituents may be. NA is also approximately equal to the number of nucleons in a gram of matter. To define the mole, the oxygen atom was initially taken as a reference, and later carbon. In the new SI, the mass of one mole of any substance, be it hydrogen, oxygen or carbon, is NA times the average mass of each of its constituent particles, a physical quantity whose value must be determined experimentally for each substance.

The word mole comes from the Latin moles, meaning mass, and molecula, meaning a small portion of mass. Avogadro's constant also has no associated symmetry principle.

As both the Boltzmann constant k and Avogadro's NA are conversion factors for macroscopic and microscopic properties, they are also related to one another:

R = NA k,    (9)

where R is the ideal gas constant that relates pressure, volume and temperature, PV = nRT, with n the number of moles of the gas.
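
Since k and NA are now both exact, the gas constant R is exact too; a one-line check (our sketch):

```python
NA = 6.02214076e23   # Avogadro constant, exact (mol^-1)
k  = 1.380649e-23    # Boltzmann constant, exact (J/K)

R = NA * k           # equation (9)
print(f"R = {R:.9f} J mol^-1 K^-1")   # 8.314462618...
```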

The atomic hypothesis plays a fundamental role in the description of nature. It states that matter is not a continuum but is discrete, made of elementary entities called atoms. Feynman considered it the most important idea in all of science, because it contains a lot of information in a few words and from it one can reconstruct many of the properties surrounding us, such as the existence of different states of matter depending on temperature and its phase changes [8]. In Boltzmann's time, the second half of the 19th century, the existence of atoms and molecules was still under debate; this is one of the reasons why the Boltzmann constant was introduced late, since macroscopic energies were then handled with the gas constant instead of energies per molecule (9) [13]. The works on Brownian motion, theoretical by Einstein and experimental by Perrin [15], were essential to establish the validity of the atomic hypothesis at the beginning of the 20th century.

4. Revision of the SI based on the fundamental constants

4.1. The new definitions

The explanations of the new SI are greatly facilitated by the new viewpoint adopted of separating the definitions of units, which are linked to the constants of nature, from their concrete experimental realizations. The latter may change with technology and the development of new measurement methods in the laboratory (section 3).

The visible Universe is made of matter and radiation. Physics is the science devoted to the study of matter and radiation, and their interactions. The new SI uses the discrete nature of matter and radiation to define its units based on natural constants. The discrete character of matter is historically called the atomic hypothesis and the discrete character of radiation, the quantum hypothesis.

Let us start with electromagnetic radiation, one of whose forms is light. Its velocity c has a property that makes it special for measuring times and distances: it is a universal constant and has the same value for all inertial observers, that is, for all those who measure physical, i.e. observable, magnitudes.

Since time is the most difficult magnitude to define, it is defined first, as the most basic one. For this we use a very stable oscillator: the cycles of cesium atoms in an atomic clock. Galileo used pendulums, or even his own pulse, to measure time. The definition of time given by Einstein is famous: 'What is time? Time is what a clock measures'. It is a very simple and at the same time very deep definition. In fact, it is a metrological definition of time that fits very well in the new SI: once time is defined generically in terms of an oscillator or clock, the choice of a suitable clock is left to the realization of the unit second (s). According to the rules of the new SI, the definition and realization of the second are as follows:

Second: 'The second, symbol s, is defined by taking the fixed numerical value of the cesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹'. □

The realization of the second by means of the transition frequency of cesium

ΔνCs = 9 192 631 770 Hz    (10)

implies that the second is equal to the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the unperturbed ground state of the 133Cs atom.

This materialization of the time standard is an example of the provisional character of the experimental realizations of the SI units. Standards based on cesium have been around since the 1960s. We currently have more precise realizations using quantum clocks, and a revision of the second using this quantum technology is already planned by the BIPM before 2030 (see section 4.2). However, the definition of time will remain unchanged.

The meter is then defined with the time and the speed of light c:

Meter: 'The meter, symbol m, is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s⁻¹, where the second is defined in terms of the cesium frequency ΔνCs'. □

With this definition, a meter is the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second. This definition is based on setting the speed of light in vacuum exactly at

c = 299 792 458 m s⁻¹.    (11)
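
A short numerical illustration of the two definitions (our sketch): one second is 9 192 631 770 cesium periods, and the cesium hyperfine radiation itself has a wavelength of about 3.26 cm:

```python
dnu_Cs = 9192631770   # cesium hyperfine frequency, exact (Hz)
c      = 299792458    # speed of light, exact (m/s)

print(f"one cesium period = {1 / dnu_Cs:.6e} s")         # ~1.087827e-10 s
print(f"cesium wavelength = {100 * c / dnu_Cs:.3f} cm")  # ~3.261 cm
print(f"light travels 1 m in {1e9 / c:.6f} ns")          # ~3.335641 ns
```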

The methods for measuring the speed of light have changed over time, from the initial one of Ole Römer in 1676, based on observations of Jupiter's moon Io with a telescope, to modern techniques using laser interferometry.

Next, the natural thing is to define the unit of mass, the kilo. It turns out that light has another property that makes it very useful for defining the kilo: light of a fixed frequency (monochromatic) comes in minimum discrete amounts of energy, called photons, whose energy is proportional to the frequency, as discovered by Planck [12] and then Einstein [16]. The constant of proportionality is Planck's constant h. The units of this constant are the basic three of what was once called the MKS system, precursor of the current SI: meter, kilo and second, in the following proportion:

[h] = J s = kg m² s⁻¹.    (12)

It is important to note that Newton's universal gravitation constant G also has units of the MKS system, although in another proportion:

[G] = m³ kg⁻¹ s⁻².    (13)

It turns out that h and G are the only fundamental constants with MKS units. There are other constants associated with fundamental interactions, but they involve not mass but other elementary charges. However, G is not good enough to define the unit of mass with the accuracy needed in metrology: the precision with which G is measured is much worse than that of h. The 'gravitational kilo' is not a good practical metrological unit. This fact is the origin of the 'quantum way' for the kilo, as we will see. In short, we can use h to define the kilo from the second and the meter, which are already defined once the value of c is set.

Were it not for this lack of precision in measuring G, the constant h could be decoupled from the kilo and set independently, in terms of the von Klitzing and Josephson constants RK and KJ, through the purely quantum Hall and Josephson effects (see section 4.2):

h = 4/(KJ² RK).    (14)

But if this other quantum route were taken, so natural in theory, we would decouple the kilogram from h and it would be linked to an artifact again: we thus find ourselves led to choose the 'quantum kilo'.

Once the path of the 'quantum kilo' is chosen, the next question is how to use the Planck constant h to define it. To do this, we follow the prescription of the new SI, using the units of h and the definitions of the second and the meter already introduced above. The definition of the kilo then reads:

Kilo: 'The kilogram, symbol kg, is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10⁻³⁴ when expressed in the unit J s, which is equal to kg m² s⁻¹, where the meter and the second are defined in terms of c and ΔνCs'. □

Or in equations,

h = 6.626 070 15 × 10⁻³⁴ J s = 6.626 070 15 × 10⁻³⁴ kg m² s⁻¹.    (15)

This definition is equivalent to the exact relationship

1 kg = (h/6.626 070 15 × 10⁻³⁴) m⁻² s.    (16)
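
Written entirely in terms of the fixed constants, 1 kg ≈ 1.475 521 4 × 10⁴⁰ h ΔνCs/c², since the combination h ΔνCs/c² carries units of mass. A sketch of the arithmetic:

```python
h      = 6.62607015e-34   # Planck constant, exact (J s)
dnu_Cs = 9192631770       # cesium frequency, exact (Hz)
c      = 299792458        # speed of light, exact (m/s)

mass_quantum = h * dnu_Cs / c**2   # (J s)(1/s)/(m^2 s^-2) = kg
print(f"h*dnu_Cs/c^2 = {mass_quantum:.6e} kg")      # ~6.777265e-41 kg
print(f"1 kg = {1 / mass_quantum:.7e} such units")  # ~1.4755214e40
```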

After the definition of the 'quantum kilo', the problem arises of how to realize it experimentally. The simplest thing at first sight would be to use the fundamental energy relations of Einstein [17] and Planck [12], respectively:

E = mc²,   E = hν.    (17)

The basis of the 'quantum kilo' is to have a very precise method to measure h and then use it to define the kilo. But for this, the above fundamental relationships present a problem. The photon, being a quantum of light energy, has no mass. If we want the quantum to have a mass, what is better defined is its de Broglie wavelength [18]:

λ = h/(mv).    (18)

However, measuring a wavelength is easy for a plane wave, which again is more typical of monochromatic radiation. To have a real mass m, we need a particle with an associated wavelength. This corresponds to a mass localized in space, which is more naturally described by a wave packet; but a wave packet, in turn, does not have a single wavelength. Thus, using the most basic energy relations (17) is not the most metrologically sensible thing to do.

Hence, the quantum way for the kilo is realized through the Kibble balance (see section 4.2). This leads us to an important question: what kind of mass, inertial or gravitational, appears in the units of h, and therefore in the new definition of the kilo? For the photon, which has no mass, such a distinction does not exist. When we have a particle with mass, it will depend on the mechanical relationship we use to relate it to h, and this decides whether the kilo we define is inertial or gravitational. For example, if we use Einstein's relationship, the kilo will be inertial; if we use a balance, the kilo will be gravitational. Therefore, the Kibble balance provides us with a definition of a quantum gravitational kilo. Now, the equivalence principle tells us that both types of mass are equal, and this is experimentally proven with an accuracy better than the measurement of the fundamental constants involved in the SI: an uncertainty of (0.3 ± 1.8) × 10⁻¹³ [19]. Thus, we can ignore the distinction as long as the precision of the equivalence principle is greater than that of the fundamental constants.

The description of the 'quantum kilo' by means of the Kibble balance belongs to the part of the new SI system corresponding to the practical realization of the kilo unit, not to its definition (see section 4.2).

To continue defining the other SI units and derive them from those already defined, we turn to the discrete nature of matter. We know that there are atoms (neutral) and electrons (charged). The most elementary charged matter (unconfined) is the electron, and with it the ampere is defined using the second already defined:

Ampere: 'The ampere, symbol A, is defined by taking the fixed numerical value of the elementary charge, e, as

e = 1.602 176 634 × 10⁻¹⁹    (19)

when expressed in the unit coulomb, C, which is equal to A s, where the second is defined in terms of ΔνCs'. □

Consequently, an ampere is the electric current corresponding to the flow of 1/(1.602 176 634 × 10⁻¹⁹) elementary charges per second. The advantage of the new ampere is that it can actually be measured, unlike the old one, which had an impracticable definition that in practice left it outside the SI system. In addition, it is now independent of the kilogram, and the uncertainty of the electrical quantities is reduced.
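
The number of elementary charges behind one ampere follows directly from the fixed value of e (our sketch):

```python
e = 1.602176634e-19   # elementary charge, exact (C)

print(f"1 A = {1 / e:.6e} electrons per second")   # ~6.241509e18
```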

To realize the ampere experimentally, several techniques have been proposed: (a) the most direct is to use the SI definition through single-electron transport (SET) (see section 4.2) [3, 4], although this is still under development to make it competitive; (b) using Ohm's law and the quantum Hall and Josephson effects to realize the volt and the ohm (see section 4.2) [3, 4]; (c) using the relationship between the electric current and the temporal variation of the voltage across a capacitor [3, 4].

As for atoms, they are historically the elementary units of substance, and with them the mole can be defined as the unit of amount of a given substance. Behind this definition is the discrete nature of matter through the atomic hypothesis, and the natural constant associated with it is Avogadro's (9):

Mole: 'The mole, symbol mol, is the unit of amount of substance. One mole contains exactly 6.022 140 76 × 10²³ elementary entities'. □

This value comes from setting the numerical value of the Avogadro constant to

NA = 6.022 140 76 × 10²³    (20)

in units of mol⁻¹. As a consequence, the mole is the amount of substance of a system that contains 6.022 140 76 × 10²³ specified elementary entities. The amount of substance of a system is a measure of its number of elementary entities, which can be atoms, molecules, ions, electrons, any other particles or specific groups of particles.

For the experimental realization of the mole, several techniques have been proposed [3, 4]: (a) the Avogadro project (International Avogadro Coordination), (b) gravimetric methods, (c) gas equation methods, and (d) electrolytic methods.

We still need the base unit to measure temperature, the kelvin. Following the new SI, we must link it to a fundamental constant of nature, in this case the Boltzmann constant k, which serves as a conversion factor between energy and temperature:

E = kT.    (21)

We can also see this constant as a result of nature's discrete character. For example, the atomic hypothesis for ideal noble gases allows us to calculate the average kinetic energy per atom as

E = (3/2) kT.    (22)
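
For orientation, the thermal energy scale at room temperature is easy to evaluate (our sketch):

```python
k = 1.380649e-23   # Boltzmann constant, exact (J/K)
T = 300.0          # room temperature, K

kT = k * T
print(f"kT      = {kT:.3e} J = {1000 * kT / 1.602176634e-19:.1f} meV")  # ~25.9 meV
print(f"(3/2)kT = {1.5 * kT:.3e} J per noble gas atom")                 # equation (22)
```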

The resulting new definition for kelvin as the unit of thermodynamic temperature is:

Kelvin: 'The kelvin, symbol K, is defined by taking the fixed numerical value of the Boltzmann constant, k, as

k = 1.380 649 × 10⁻²³    (23)

when expressed in the unit kg m² s⁻² K⁻¹ (equal to J K⁻¹), where the kilogram, meter and second are defined in terms of h, c and ΔνCs'. □

This definition implies that the kelvin is equal to the change of thermodynamic temperature that results in a change of thermal energy kT of 1.380 649 × 10⁻²³ J. As a consequence of the new definition, the triple point of water ceases to have an exact value and now has an uncertainty given by

TTPW = 273.1600 ± 0.0001 K,    (24)

as a result of inheriting the uncertainty that the Boltzmann constant k had before the new definition.

Following the guiding principle used so far to introduce the new definitions of the SI units through the discrete nature of energy (17) and matter (9), we can go further and use the discrete nature of information to introduce the Boltzmann constant. Another way to substantiate the discrete origin of Boltzmann's constant is through the Landauer principle [20], which establishes that the minimum energy dissipated in the form of heat at a temperature T when erasing the simplest elementary system, whether classical (bit) or quantum (qubit), is

Emin = kT ln 2.    (25)

The underlying origin of the Landauer principle is the irreversible nature of information deletion or erasure [21, 22] in a system, which leads to a minimum dissipation of energy [23]. Its direct experimental verification has been achieved in several recent works [24–27]. In this way, we have a unified vision of all the SI units based on a fundamental property shared by energy, matter and information: their discrete nature in our Universe.
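
At room temperature the Landauer bound is minute; a quick evaluation (our sketch):

```python
import math

k = 1.380649e-23   # Boltzmann constant, exact (J/K)
T = 300.0          # room temperature, K

E_min = k * T * math.log(2)   # equation (25): minimum heat to erase one bit
print(f"kT ln 2 = {E_min:.3e} J at {T:.0f} K")   # ~2.87e-21 J
```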

Several techniques have been proposed for the experimental realization of the kelvin [3, 4]: (a) by acoustic gas thermometry, (b) radiometric spectral band thermometry, (c) polarizing gas thermometry and (d) Johnson noise thermometry.

The seventh base SI unit, the candela, used to measure the luminous intensity of a source, deserves a special comment. Its defining constant is a measure of the goodness of a light source when its visible light is perceived by the human eye, and it is clearly conventional and subjective: average values of the behavior of the human eye are used. It is quantified by the ratio of the luminous flux to the power, measured in the SI in lumens (lm) per watt. There is no universal law of physics associated with this unit, for the light source does not have to be in thermodynamic equilibrium. The basic concept for the candela is maintained in the new SI:

Candela: 'The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy Kcd of monochromatic radiation of frequency 540 × 10¹² Hz to be 683 when expressed in the unit lm W⁻¹, which is equal to cd sr W⁻¹, or cd sr kg⁻¹ m⁻² s³, where the kilogram, meter and second are defined in terms of h, c and ΔνCs'. □

This definition is based on taking the exact value for the constant

Kcd = 683 lm W⁻¹    (26)

for monochromatic radiation of frequency ν = 540 × 10¹² Hz. As a consequence, a candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency ν = 540 × 10¹² Hz with a radiant intensity in that direction of (1/683) W sr⁻¹.

For the experimental realization of the candela, several techniques have been proposed [3, 4], based on the practical realization of radiometric units with two types of primary methods: those based on standard detectors, such as the electrical substitution radiometer and photodiodes of predictable quantum efficiency, and those based on standard sources, such as Planck radiators and synchrotron radiation. In practice, a standard lamp of optimized design is more commonly used, emitting in a defined direction and at a long distance from the detector.

4.2. Quantum metrology and the new SI

The formulation of the laws of quantum mechanics in the first quarter of the twentieth century allowed us to understand nature on the atomic scale. As a result of that better understanding of the atomic world, applications emerged in the form of new quantum technologies. This first period is known as the first quantum revolution and has produced technologies as innovative as the transistor and the laser. Even today's classical computers are a consequence of these first-generation quantum advances. With the beginning of the 21st century, we are seeing the emergence of new, second-generation quantum technologies [28] that constitute what is known as the second quantum revolution. Both revolutions are based on exploiting specific aspects of the laws of quantum mechanics. Thus, we can classify these technological revolutions into two groups:

First quantum revolution: it is based on the discrete character of the quantum world's properties: energy quanta (such as photons), angular momentum quanta, etc. This discrete nature of physical quantities is the first thing that surprises in quantum physics (see figure 3).□


Figure 3. The first quantum revolution of technologies is based on the discrete nature of physical quantities, such as energy states in atoms. Photons of the electromagnetic radiation allow us to manipulate the states of well-defined energy (17). Reproduced from Wikipedia. CC BY 3.0.


Second quantum revolution: it is based on the superposition principle of quantum states. With it, information can be stored and processed as a result of its quantum entanglement properties (see figure 4). □

In the second quantum revolution, five working areas have been identified that will lead to new technological developments. From least to greatest complexity, they are: quantum metrology, quantum sensors, quantum cryptography, quantum simulation and quantum computers.


Figure 4. The second quantum revolution of technologies is based on the principle of superposition of quantum mechanics. The simplest case is exemplified by the two-slit experiment [8, 9] where the properties of a single particle get in superposition. When quantum superposition involves several particles, the resulting phenomenon is quantum entanglement, which is the fundamental resource in quantum information [29]. Reprinted with permission from [30], 2014 by the American Physical Society.


Quantum metrology is considered one of the quantum technologies with the most immediate development. It is the part of metrology that deals with how to perform measurements of physical parameters with high resolution and sensitivity using quantum mechanics, especially by exploiting entanglement properties. A fundamental question in quantum metrology is how the accuracy, measured in terms of the variance Δm used to estimate a physical parameter, scales with the number N of particles used or repetitions of the experiment. It turns out that classical interferometers, for light, electrons etc, cannot overcome the so-called shot-noise limit [31–33], given by

Δm ∝ 1/√N,    (27)

whereas with second-generation quantum metrology it is possible to reach the Heisenberg limit given by

Δm ∝ 1/N.    (28)
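
The difference between the two scalings can be illustrated with a toy simulation (ours): averaging N independent noisy readings of a parameter reproduces the 1/√N shot-noise behavior, to be contrasted with the 1/N Heisenberg curve.

```python
import math
import random

def classical_estimate(theta, N):
    """Average of N independent readings of theta with unit Gaussian noise."""
    return sum(theta + random.gauss(0.0, 1.0) for _ in range(N)) / N

theta = 0.5
for N in (10, 100, 1000, 10000):
    trials = [classical_estimate(theta, N) for _ in range(500)]
    mean = sum(trials) / len(trials)
    std = math.sqrt(sum((t - mean) ** 2 for t in trials) / len(trials))
    print(f"N={N:6d}  simulated std={std:.4f}  "
          f"shot noise 1/sqrt(N)={1 / math.sqrt(N):.4f}  Heisenberg 1/N={1 / N:.4f}")
```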

An example of a very important application of quantum metrology to basic research is the detection of gravitational waves by experiments such as LIGO (Laser Interferometer Gravitational-Wave Observatory) [34] (see figure 5), where distances between masses separated by a few kilometers must be measured with very high precision (a thousandth of the diameter of a proton). These variations in distance occur in the lengths of the interferometer arms when a gravitational wave passes through them. Using 'squeezed light', a form of quantum optics [36], the sensitivity of interferometers can be improved towards the quantum limit (28) [37–39].


Figure 5. Diagram of a LIGO-type gravitational wave detector, showing the two perpendicular arms of four km each. The gravitational waves caused by the collision of two black holes can be detected in the interference pattern thanks to the extreme sensitivity of the device, which allows resolving distances thousands of times smaller than the atomic nucleus. Reprinted with permission from [35] © Johan Jarnestad/The Royal Swedish Academy of Sciences. Source: www.nobelprize.org/.


Measurements in LIGO-type experiments may have another basic application: helping to elucidate the controversy that has recently arisen around the Hubble constant H0. In the standard cosmological model, denoted ΛCDM (Λ = dark energy, CDM = cold dark matter), H0 measures the speed at which the Universe is currently expanding according to Hubble's law. It is a parameter of capital importance in cosmology. There are two sources of measurements that give discrepant values. On the one hand, the value provided by the Planck satellite in 2018, obtained by analyzing the microwave background radiation of the early Universe and extrapolating the Hubble parameter to its current value with the standard model ΛCDM, is H0 = 67.4 ± 0.5 km per second per megaparsec of distance [41]. On the other hand, measurements made in the current Universe using distance ladders based on cepheids and type Ia supernovae yield H0 = 74.03 ± 1.42 km s⁻¹ Mpc⁻¹ [42]. This discrepancy has a statistical confidence of 4.4 sigma (standard deviations), very close to the 5 sigma barrier that is considered clear evidence that the results are different. If that were the case, it would amount to new physics that the standard model has not taken into account (see figure 6). To add to the controversy, there is also a measurement in the current Universe with another distance ladder that yields a value intermediate between the two discrepant ones, H0 = 69.8 km s⁻¹ Mpc⁻¹ [44]. All measurements made with the current Universe give values above those obtained with the early Universe and the standard model. Either there are systematic errors, or this may be the indication of new physics beyond the current cosmological model, such as the existence of dynamical dark energy, to name just one possible example [43].


Figure 6. Differences in the values of the Hubble constant H0 measured with the Planck 2018 satellite in the early universe and extrapolated with the standard cosmological model (blue) [41], and obtained from local measurements made with the cepheid and supernova distance ladders. Reproduced from [43]. CC BY 3.0.


LIGO experiments can be very useful in the future to settle this latest controversy over the Hubble constant. This time, it is not about analyzing black hole collisions as in the original discovery of gravitational waves, but binary neutron star collisions. When two neutron stars merge, the resulting gravitational waves can be used to obtain information about the position of the stars, and hence of the galaxies where they are found. By accumulating statistics from at least 50 such collision events, one could have a direct measurement of the Hubble constant with an accuracy not achieved to date and resolve the dispute [45]. Recent results improve these expectations. It has already been possible to detect a neutron star merger event useful for estimating the Hubble constant, with a resulting value of precisely H0 = 70 km s⁻¹ Mpc⁻¹ and an uncertainty of the order of ∼7% [46]. It is estimated that with 15 such events the error could be reduced to 1%, enough to begin to resolve the discrepancy.

Let us now look at three important examples of applications of quantum metrology to the new SI, drawing on quantum technologies of both the first and the second revolutions.

4.2.1. Quantum clocks

The basic mechanism of a clock consists of a system with very stable periodic oscillations, where each period defines the unit of time, so that the clock, counting those periods, measures time. In the past, natural periodic movements were used, such as that of the Earth around its axis or around the Sun, as well as mechanical oscillators such as pendulums or quartz crystal resonators. In the 1960s, atomic oscillations began to be used to measure time, using cesium atoms for their greater accuracy and stability compared with mechanical systems. The current reference to define the time standard is cesium 133, using the resonance frequency corresponding to the energy difference between the two hyperfine levels of its ground state (see section 4.1). Cesium atomic clocks can typically measure time with an accuracy of 1 s in 30 million years. In general, the accuracy and stability of an atomic clock are greater the higher the frequency of the atomic transition and the smaller the width of the electronic transition line: by increasing the frequency of the oscillator we increase the resolution of the clock, since we reduce the period used as a reference.
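
These accuracies translate into fractional frequency uncertainties, which is how clock performance is usually quoted (an order-of-magnitude sketch of ours):

```python
year = 365.25 * 24 * 3600   # seconds in a year

# Cesium clock: roughly 1 s of error in 30 million years.
print(f"cesium fractional uncertainty ~ {1 / (30e6 * year):.1e}")   # ~1e-15

# Optical clock at one part in 1e17, accumulated over the age of the Universe:
age_universe = 13.7e9 * year
print(f"optical clock error over the age of the Universe "
      f"~ {1e-17 * age_universe:.0f} s")   # of the order of a second
```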

Atomic clocks have allowed us to improve multiple technological developments that we are used to in our daily lives: controlling the frequency of television broadcasting waves, global satellite navigation systems such as GPS, financial transactions, internet, mobile phones etc.

The new quantum clocks are a type of atomic clock in which the increase in accuracy comes from using atomic transition frequencies in the optical range instead of the microwaves used by cesium clocks. Optical frequencies of visible light are about five orders of magnitude higher than microwaves. In order to make this leap, it was necessary to use ion traps (see figure 7) with quantum logic techniques from quantum computing [48, 49], part of the second-revolution quantum technologies [50–52]. Quantum clocks achieve a fractional uncertainty of only one part in 10¹⁷, which is equivalent to an error of the order of a second in a clock that had begun ticking 13.7 billion years ago, the whole age of the Universe. An ion clock is an instance of a quantum clock based on quantum logic, and is an example of cooperation between two ions, each of which provides complementary functionalities. For example, the aluminum ion Al+ has a transition frequency in the optical range that is useful as the clock reference frequency. However, its atomic level structure makes it a bad candidate for cooling down to the temperatures necessary for stabilization. Instead, this is possible with a beryllium ion Be+. Using quantum computing protocols, information on the internal state of the spectroscopic ion Al+, after probing its transition with a laser, can be faithfully transferred to the logic ion Be+, where this information can be detected with almost 100% efficiency [53]. Each ion species provides a different functionality, the reference frequency or the cooling method, and the quantum entanglement between the states of the two ions allows them to function together as a quantum clock.


Figure 7. Above: overview of a trapped-ion quantum clock developed by Rainer Blatt's group at the University of Innsbruck laboratory. It is a compact prototype that fits into a small functional space and is marketed by the company Alpine Quantum Technologies [47]. Below: detail of the central part of the clock, showing the trap where ions are confined using electromagnetic fields. Reproduced with permission from Rainer Blatt Lab.


Current quantum clocks use either (a) one or two trapped ions, or (b) ultra-cold atoms confined in electromagnetic fields in the form of optical lattices.

Each of these realizations has pros and cons. Ion clocks have a very high accuracy because the ion can be confined and cooled down in a trap, getting very close to the ideal of a system isolated from external disturbances. However, using only one ion for the absorption signal means less stability, as the ratio of signal to external noise is reduced. On the contrary, clocks of atoms in optical lattices can work with a large number of atoms, achieving greater stability and a better signal-to-noise ratio.

Different teams working with both options are developing techniques to achieve better and better performance. The most recent record with trapped ions is a fractional uncertainty of 9.4 × 10⁻¹⁹ [54]. Quantum clocks in optical lattices also achieve accuracies of 10⁻¹⁸. There is still time to decide which of the two alternatives will be chosen for a new revision of the second, or whether the two are complementary (see figure 8).


Figure 8. Comparison of the recent temporal evolution of uncertainties in cesium (microwave) atomic clocks and quantum (optical) clocks: trapped ions and neutral atoms in optical lattices. There is clearly a change in trend, with a gain in accuracy for quantum clocks. Reproduced with kind permission of Società Italiana di Fisica from [40]. © 2013, by Società Italiana di Fisica.


The improvements in time measurement provided by quantum clocks also have important applications. The technological applications are similar to those mentioned above, and it is of great interest to be able to send one of these quantum clocks on space missions and to improve navigation systems. Another application is the high-precision measurement of the gravitational field: according to Einstein's general relativity, there is a time dilation due to gravitational effects in addition to that due to velocity. With a quantum clock one can distinguish gravitational fields at heights differing by only 30 cm [55], and even less. These measurements will allow heights above sea level to be better defined, since height is not measured in the same way in different parts of the world and is crucial to knowing the activity of the oceans. Similarly, these quantum devices can be applied to geodesy, hydrology and the synchronization of telescope networks.

Basic research is one of their first fundamental applications. By comparing the operation of several quantum clocks over time, we can discover whether any of the fundamental constants of physics changes with time, which is essential to find new physics and to define the base units according to the new SI (see section 4.1). Examples of fundamental constants that can be probed for temporal dependence are the electromagnetic fine structure constant α (1) and the ratio of the proton mass to the electron mass, μ := mp/me. In the past there has been controversy over possible temporal variations of α and μ detected in measurements of atomic transitions in distant quasars compared with current measurements in the laboratory [56, 57]. All atomic transitions depend functionally on α, and hyperfine transitions also depend significantly on the ratio μ. Quantum clocks make it possible to improve the bounds on the variation of these fundamental constants. In such experiments, the ratio of optical frequencies between Al+ and Hg+ ions can be measured, providing a bound on the time variation of α of (−1.6 ± 2.3) × 10⁻¹⁷ per year, and with the ytterbium ion Yb+ a bound for μ of (0.2 ± 1.1) × 10⁻¹⁶ per year, which are better by a factor of ten than the astrophysical measurements [58, 59]. These negative results for the temporal variation of the fundamental constants serve as justification for the new SI unit system and its universality regardless of space and time, at least as long as experiments continue to confirm them.

4.2.2. Kibble balance

The Kibble balance is the current experimental realization of the unit of mass through the quantum way in the new SI [60–63]. In this way we fulfill the new methodology of separating unit definitions from their practical materialization (see section 2). Whereas the definition of the new kilo linked to the Planck constant was explained in section 4.1, we will now see how to realize it in the laboratory with current technology.

The realization of the 'quantum kilo' consists of two distinct parts: (a) the Kibble balance and (b) the quantum determination of the electrical power. The Kibble balance (see figure 9) aims to establish the equivalence between a mechanical power and an electrical power. The latter is then related to Planck's constant through metrology procedures of the first quantum revolution: the integer quantum Hall and Josephson effects.


Figure 9. Kibble NIST-4 balance used to measure the Planck constant h with an uncertainty of 13 parts in one billion in 2017, contributing to the redefinition of the kilogram as a unit of mass in the new SI in 2019. Reproduced with permission from NIST.


Let us start with the Kibble balance. We present a simplified discussion, but one sufficient to understand its foundations. It looks like an ordinary balance in that it also has two arms, but while in an ordinary balance two masses are compared, one standard and one unknown, in Kibble's balance gravitational mechanical forces are compared with electromagnetic forces. Its functioning consists of two operating modes: (i) weighing mode and (ii) moving mode.

Weighing mode: a test mass m is placed on one of the arms; it could be, for example, the IPK standard. On the other side, a circuit of electric coils is mounted through which a current I is passed (see figure 10). The coil, of wire length L, is suspended in a very strong, stationary and permanent magnetic field B created by magnets. The current in the coil then experiences a force from the constant magnetic field of the magnet, and the resulting vertical electromagnetic force is balanced against the weight of the test mass:

mg = BLI.    (29)

During this operating mode, the intensity of the direct electric current is measured very accurately with appropriate instruments (integer quantum Hall effect), and it is proportional to the vertical force. The current is adjusted so that the resulting force equals the weight of the test mass. □


Figure 10. Kibble balance operating in weighing mode. Explanations in the text. Reproduced from [63]. CC BY 3.0.


Moving mode: this is a calibration mode, necessary because the quantity BL is very difficult to measure accurately; were it not for this, the weighing mode would suffice. An electric motor is used to move the coil circuit vertically through the external magnetic field at a constant speed v (see figure 11). This movement induces a voltage V in the circuit, whose origin is again the Lorentz force, given by

V = BLv.    (30)

During this operating mode, the voltage is measured very accurately with appropriate instruments (Josephson effect), and with it the product BL, to which it is proportional. Laser sensors are also used to monitor the vertical movement of the coil by interferometry; variations of the order of half the wavelength of the laser can be detected. Altogether, it is ensured that the vertical movement happens at constant speed and that the constant magnetic field can be measured. □


Figure 11. Kibble balance operating in moving mode. Explanations in the text. Reproduced from [63]. CC BY 3.0.


The result of comparing the weighing mode with the moving mode, eliminating the quantity BL, is the equivalence between mechanical and electrical powers:

mgv = IV.    (31)

Although it is usual to call the Kibble balance a power or watt balance, note that it does not measure real powers, but virtual ones. This point is of crucial importance in metrology: were the mechanical power really measured, the device would be subject to uncontrollable friction losses; and were the electrical power measured directly, it would be subject to heat dissipation. We see that the moving mode is essential and provides the adequate calibration.

It turns out that experimentally it is more accurate to measure resistances than current intensities. Using Ohm's law, I = VR/R, we can obtain the mass on the Kibble balance from resistance and voltage measurements:

m = V VR/(g v R),    (32)

where VR and V are the two necessary voltage measurements.
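
The algebra of the two modes can be checked with illustrative numbers (a sketch of ours; the parameter values are invented, not those of a real instrument):

```python
# Invented parameters for illustration only.
BL = 400.0       # flux integral B*L of the coil, T m
g  = 9.80665     # local gravitational acceleration, m/s^2
v  = 0.002       # coil velocity in moving mode, m/s
m_true = 1.0     # mass on the pan, kg

V = BL * v              # moving mode, equation (30): calibrates BL
I = m_true * g / BL     # weighing mode, equation (29): balancing current

m = I * V / (g * v)     # virtual power equality, equation (31)
print(f"recovered mass = {m:.6f} kg")   # returns m_true: BL drops out
```

The sketch simply re-derives the assumed mass, but it makes explicit how the poorly known product BL cancels between the two modes.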

In the second part of the experimental realization of the 'quantum kilo', we need to relate the electrical power in (31) to the Planck constant h. This is done through the measurement of the electric current I in the weighing mode and of the voltage V in the moving mode of the Kibble balance. For this, the following quantum effects are used.

Integer quantum Hall effect: a two-dimensional sample contains electrons constrained to move in that plane, subject to a longitudinally aligned coplanar electric field and a very intense constant magnetic field B applied perpendicularly to the sample (see figure 12). In addition, the electronic sample is cooled down to temperatures near absolute zero. The system then departs from the classical Ohm's law and enters a quantum regime. As in the classical case, a transverse electric current appears that induces a transverse voltage, called the Hall voltage VH. The electron system enters a new quantum behavior characterized by the appearance of jumps and plateaus in the relationship between the transverse resistance and the magnetic field [64]. In particular, the Hall resistance RH associated with that Hall potential is quantized:

RH = h/(n′ e²),    (33)

where n′ is an integer, giving rise to the plateaus appearing in the curves of the Hall resistivity. The von Klitzing constant RK is defined as

RK = h/e²,    (34)

which has dimensions of resistance and acts as the elementary resistance. The integer quantum Hall effect allows resistances to be measured with a relative uncertainty of a few parts in 10¹¹. For this reason it is used to realize the resistance standard [13]. □


Figure 12. An example of an integer quantum Hall effect device used by NIST to measure resistances. This Hall bar uses graphene components that are outlined by white lines. The source and drain of electrons are at the left and right ends of the bar. The electrical contacts above and below the bar are not shown. Reproduced with permission from NIST.


Josephson effect: when a superconducting wire is interrupted at a point by a contact of insulating material joining the two superconducting portions, the superconducting current can be maintained thanks to the tunnel effect of the superconducting Cooper pairs. This is known as a Josephson junction (see figure 13). Under these circumstances, if radiofrequency radiation of frequency ν is applied, a potential V is induced across the junction that is proportional to the frequency and is quantized [65, 66]:

$$ V = n\, \frac{h}{2e}\, \nu \qquad (35) $$

where n is an integer, 2e is the Cooper pair charge and the Josephson constant is defined as

$$ K_J := \frac{2e}{h} \approx 483\,597.85\ \mathrm{GHz\ V^{-1}} \qquad (36) $$

Figure 13. Close-up view of a modern Josephson junction used at NIST. The Josephson junctions are built in the green circular wells where the two superconducting layers overlap. See explanation in text. Credit: M Malnou/NIST/JILA.


These quantized constant-voltage steps (Shapiro steps) arise from the AC Josephson effect; Josephson junctions can also be made with metallic point contacts or with constrictions, in addition to insulators. The effect allows measuring voltages with an uncertainty of 10^−9 to 10^−10 V, that is, of the order of nanovolts or less. For this reason it is used for the realization of the voltage standard [13].
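
Likewise, the Josephson constant and the voltage of a single step follow directly from the exact values of h and e; the 70 GHz drive frequency below is a typical illustrative choice, not a prescription:

```python
h = 6.62607015e-34    # Planck constant (J s), exact in the new SI
e = 1.602176634e-19   # elementary charge (C), exact in the new SI

K_J = 2 * e / h       # Josephson constant, equation (36)
nu = 70e9             # microwave frequency (Hz), assumed typical value
V_step = nu / K_J     # n = 1 voltage step, equation (35)
print(f"K_J = {K_J/1e9:.1f} GHz/V, one step at 70 GHz = {V_step*1e6:.1f} uV")
```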

Now we can relate the test mass (32) used in the weighing mode to the Planck constant, which appears when measuring the resistance, also in the weighing mode, and the voltages in the moving mode. Using the integer quantum Hall effect (33) to measure the resistance and the Josephson effect (35) to measure the voltages V_R and V, we obtain the desired relationship:

$$ m = \frac{n_1\, n_2\, n'}{4}\, \frac{\nu_1\, \nu_2}{g\, v}\, h \qquad (37) $$

where the integers n_1, n_2, n′ and the frequencies ν_1, ν_2 come from the concrete measurements of the corresponding quantum effects (33) and (35). To measure g, a high-precision absolute gravimeter is used, and v is measured with interferometric methods. With all these high-precision measurements, expression (37) has a dual utility: on the one hand, given a standard mass m like that of the old IPK, we can determine h with great precision. On the other hand, since h can be measured with this method with great precision, we can fix this value of h as exact and define the unit of mass based on h: this is the 'quantum kilo' route, which detaches the mass unit from the IPK artifact.
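
The algebra behind (37) can be verified numerically: generating the voltages and the resistance from the quantum standards (35) and (33) with some assumed quantum numbers, the mass computed via (32) coincides identically with the direct expression (37). The integers and frequencies below are purely illustrative, not those of a real setup:

```python
h = 6.62607015e-34          # Planck constant (J s), exact in the new SI
e = 1.602176634e-19         # elementary charge (C), exact in the new SI

# Purely illustrative quantum numbers and frequencies:
n1, nu1 = 250_000, 70e9     # Josephson integer and frequency for V_R
n2, nu2 = 250_000, 70e9     # Josephson integer and frequency for V
np_ = 2                     # Hall plateau index n'
g, v = 9.80665, 0.002       # assumed local gravity (m/s^2) and coil speed (m/s)

# The quantum standards give the electrical quantities, eqs. (35) and (33):
V_R = n1 * h * nu1 / (2 * e)
V = n2 * h * nu2 / (2 * e)
R = h / (np_ * e**2)

m_32 = V_R * V / (R * g * v)                            # route via eq. (32)
m_37 = (n1 * n2 * np_ / 4) * (nu1 * nu2 / (g * v)) * h  # eq. (37) directly
print(m_32, m_37)  # agree identically, by construction
```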

4.2.3. Quantum metrology triangle

The new definition of the ampere, linked to the fixed value of the elementary charge e, stands out for its clarity and simplicity compared to the old definition based on Ampère's law and an unrealizable construction using infinite conductor wires of null thickness [4]. However, it also entails the need to materialize it in some way, and this is not easy, since the number of electrons in an ordinary current is immensely large. The BIPM has approved three methods for the practical realization of the ampere [24]. One of them uses the direct definition of the ampere, A = C/s, and a single-electron transport (SET) device, which has to be cooled to temperatures close to absolute zero (see figure 14). Through an SET, electrons pass from a source to a drain. The SET consists of a region made of silicon, called an island, between two gates that serve to manipulate the current electrically. The island temporarily stores the electrons coming from the source using another voltage gate. By controlling the voltages at the two gates, one can arrange for a single electron to remain on the island before moving to the drain. Repeating this process many times, very quickly, establishes a current whose electrons can be counted.
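
The underlying arithmetic is simply I = e·f, one electron transferred per pumping cycle at frequency f. A minimal sketch, with an assumed illustrative frequency:

```python
e = 1.602176634e-19   # elementary charge (C), exact in the new SI
f = 1e9               # pumping frequency (Hz), an assumed illustrative value

I = e * f             # one electron transferred per cycle
print(f"I = {I * 1e12:.1f} pA")   # ~160.2 pA
```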

Figure 14. Example of a single-electron transport (SET) device used at NIST for the realization of the ampere. The interaction between the blue and green electrodes (gates) controls the movement of individual electrons in and out of an 'island' in the center. Explanations in the text. The colors are illustrative only. Reproduced with permission from NIST.


The electrical sector is the most quantum of all within the SI system of units. Now that the ampere has been redefined in the new SI by linking it to a fixed value of the electron charge, it is possible to relate the three magnitudes that appear in Ohm's law, V = IR, in terms of only two universal constants, h and e. This is visualized by the so-called quantum metrological triangle (see figure 15). The triangle represents an experimental constraint that the voltage, resistance and current standards must fulfill, so that the three are not independent. Thus, measuring the Josephson constant (36) on the one hand and the von Klitzing constant (34) on the other yields values for the elementary charge e that must be compatible, within experimental uncertainties, with the value of e obtained from single-electron transport. The same goes for any pair of magnitudes in the triangle. Therefore, the quantum metrological triangle allows us to test experimentally, as better accuracy and precision are achieved, whether the constants h and e are really constants, as assumed in the new SI. The uncertainties in these constants must be compatible across the three experimental realizations. If at some point the uncertainties fail to overlap, this would be an indication of new physics, as it would affect the very foundations of quantum mechanics or quantum electrodynamics, as explained in section 3. Again, this is an example of how metrology not only serves to maintain unit standards but also opens up paths to new fundamental laws of nature.
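
A toy numerical version of this closure test, using the exact constants as stand-ins for the 'measured' values and a hypothetical SET result with a small artificial offset:

```python
h = 6.62607015e-34; e = 1.602176634e-19   # exact SI reference values

K_J = 2 * e / h            # stand-in for a measured Josephson constant
R_K = h / e**2             # stand-in for a measured von Klitzing constant
e_SET = e * (1 + 1e-8)     # hypothetical SET result with a small offset

# Closure relation of the triangle: K_J * R_K = 2/e
e_closure = 2 / (K_J * R_K)
rel_dev = (e_SET - e_closure) / e_closure
print(f"relative deviation: {rel_dev:.2e}")   # ~1e-08 here
```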

Figure 15. Quantum metrological triangle: schematic relationship between the universal constants of the Hall effect (34), Josephson effect (36) and the charge of the electron in an SET device. The triangle establishes a constraint between them and the Planck constant h and the elementary charge e. Reproduced from [67]. CC BY 3.0.


5. A 'gravitational anomaly' in the SI

Despite the fact that the new SI amounts to a complete linking of the base units to fundamental constants of nature, the absence of one of the oldest universal constants of physics is still striking: Newton's universal gravitation constant G (see figure 16), colloquially called big G, as opposed to the small g representing the local acceleration of gravity at a point on Earth.

Figure 16. The scheme with the dependencies of the natural constants and the base units of the new SI in figure 2 presents a notable absence: the constant G of Newton's universal gravitation. The triangle encompasses the excluded G with the units on which it depends (13), which are the same as those of Planck's constant h (12), but in another proportion. Reproduced from Emilio Pisanty/Wikipedia. CC BY 4.0.


The main reason to exclude G from the new SI system of units is that it is not known with enough precision to define a unit of mass. As explained in section 4.1, this is the origin of the 'quantum way' for the definition of the kilo when one wants to detach it from a material artifact such as the IPK cylinder. This fact is related to the so-called 'Newton's big G problem' [68–71]: the lack of compatibility among the measurements of G over the last thirty years. Various metrology laboratories around the world have tried to measure G with experimental devices designed to reduce the uncertainty in its value. The result is surprising: the G values do not converge to a single consistent value and their uncertainties do not overlap in a compatible way. This can be seen in figure 17, which shows the results of multiple experiments and a vertical band marking the chosen compromise value of G. The situation has become desperate enough that the NSF (National Science Foundation) has launched a global initiative to clarify the problem [72].

Figure 17. Comparative diagram of the measurements of the constant G over time using classical methods based on variants of the torsion balance, except that of Rossi et al [75] using the cold atoms in gravity (CAG) method. Reproduced with permission from [68].


A fundamental question then arises: what is the origin of Newton's big G problem? The most natural explanation is that it is due to possible systematic errors in the experiments. Favoring this interpretation is the fact that, despite the increasing sophistication in trying to measure G more accurately, all the experimental methods used are variants of the famous Cavendish balance [73, 74]. However, figure 17 includes one value [75] obtained with an experimental method completely different from those based on the Cavendish balance: a quantum method of measuring G. The device uses a technique based on atom interferometry with ultra-cold atoms. With it, the quantum nature of atoms at temperatures close to absolute zero is exploited to obtain an accurate measurement of the acceleration of gravity.

The cold atoms in gravity (CAG) method consists of two steps. Step 1: measure the constant small g, the value of the local terrestrial gravity. Step 2: measure the constant big G.

The technique consists of launching cold atoms vertically, up and down, repeatedly. This serves to probe the Earth's gravity with a cloud of rubidium (Rb) atoms in free fall. With this procedure it is possible to measure the gravitational force between a Rb atom and a reference mass of 516 kg. The result is a measurement of G with a relative uncertainty of 0.015%. Remarkably, it is the first time that a quantum method has been admitted into the set of values used to determine G.
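
To convey the idea behind step 1, the sketch below uses the textbook Mach–Zehnder atom-interferometer phase relation Δφ = k_eff g T², which is a simplification and not the exact protocol of [75]; the pulse-separation time is an assumed value:

```python
import math

# Textbook atom-interferometer gravimetry: delta_phi = k_eff * g * T**2
# (two-photon Raman transitions); illustrative numbers, not those of [75].
lam = 780e-9                      # Rb D2 line wavelength (m)
k_eff = 2 * (2 * math.pi / lam)   # effective two-photon wavevector (rad/m)
T = 0.1                           # pulse separation time (s), assumed
g = 9.80665                       # local gravity to be inferred (m/s^2)

delta_phi = k_eff * g * T**2
print(f"phase shift: {delta_phi:.3e} rad")   # ~1.6e6 rad
# Inverting the relation gives g = delta_phi / (k_eff * T**2).
```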

This gravitational anomaly is also a reflection of the big problem affecting modern physics: the lack of compatibility between the two great theories of our time, quantum mechanics and general relativity. An important observation (see the conclusions) is that the CAG method is an indirect measurement of Newton's big G: it proceeds by first measuring small g. This contrasts with the classical methods based on the Cavendish balance, where G is measured directly. A direct quantum measurement of G would be a first experimental indication of quantum effects in gravity and a first step towards a theory of quantum gravity. As seen in figure 17, the CAG value still lies outside the shaded vertical band of the most recent recommended value of G. This may indicate that the CAG method does not suffer from the possible systematic errors of the classical methods of measuring G, and it could be the beginning of a solution to Newton's big G problem. The way to confirm this hypothesis is to encourage more CAG experiments in more independent laboratories and to use quantum metrology techniques to reduce their uncertainties. If the result of all these new classical and quantum experiments were that there are no systematic errors, then the conclusion would be even more exciting, as it would once again be a door opened by metrology to new physics.

Direct quantum methods to measure G are not known. In fact, there are few physics equations where G and h appear together. One of them can help us to see the difficulty of obtaining a direct quantum method to measure G: the equation for the Chandrasekhar limit radius of a white dwarf star. If we naively think of producing a gravitational condensate of nucleons (fermions), balancing the gravitational pressure of N nucleons of mass M against the degeneracy pressure due to the Pauli exclusion principle, and using non-relativistic quantum mechanics and Newtonian gravitation to simplify, we obtain [76] an equilibrium radius given by

$$ R \simeq \frac{\hbar^2\, q^{5/3}}{G\, m\, M^2\, N^{1/3}} \qquad (38) $$

where N is the number of nucleons and q is the number of electrons per nucleon, with m the mass of those electrons. It has been assumed that the density of the spherical condensate is uniform. To make a simple estimate, consider a system of only neutrons (q = 1, m = M_n, the neutron mass); substituting the known experimental values, we obtain

$$ R \simeq \frac{\hbar^2}{G\, M_n^3\, N^{1/3}} \qquad (39) $$

If we want a fermion system condensed in a sphere with a radius of the order of one meter, so as to be manageable in a terrestrial laboratory, the number of neutrons needed according to (39) is of the order of N ∼ 10^74, an intractable amount if we take into account that the number of atoms in the observable Universe is of the order of 10^80. This difficulty reflects the disparity between the scales at which gravity acts and those of quantum effects.

6. Conclusions

The adoption of the new SI system of units brings several concrete advantages over the previous system: it solves the problem of the ampere and the electrical units that had been left out of the SI; it eliminates the dependence of the kilo on the IPK artifact; it is conceptually more satisfactory to define units in terms of natural constants; and future technological improvements will no longer affect the definitions.

The new SI of units has no direct impact on our daily life, but it does in research laboratories and in national metrology centers, where measurements of great accuracy and precision are needed to conduct research and to guard and disseminate the primary unit standards. As usual, in the long run these new discoveries result in applications that do change our daily life for the better.

Therefore, it is a great conceptual challenge to explain and convey what the new system of units entails and what lies behind it. In section 4.1 a common view of all the new definitions has been presented, using as a unifying principle the discrete nature of energy, matter and information in the fundamental laws of physics and chemistry to which each base unit is linked. Interestingly, the only thing that remains non-discrete is spacetime. An advantage of the new SI is that it facilitates the explanation of the new definitions, since one does not need to explain the measuring devices necessary to realize these units.

Metrology has a double mission: (1) to maintain the unit standards and their definitions compatible with the current laws of physics; (2) to measure with increasing accuracy and precision in order to open new doors to discover new laws of physics.

As for the more traditional mission (1), the adoption of the new SI makes it possible to get rid of a material device to define the kilo, a long-sought goal. With this, it becomes possible for the first time to materialize primary standards of the base units in different national metrology centers. In particular, quantum metrology will drastically change the dissemination and traceability of units by ensuring that they can be materialized autonomously, without the need for a single stored standard.

It is interesting to note that in the course of the construction of the new SI the three most famous balances of physics have appeared: those of Cavendish [73, 74], Eötvös [19] and Kibble [63], as well as the seminal papers of Einstein in his annus mirabilis of 1905 [7, 14, 16, 17].

As for the second mission, we have seen how the new SI uses five universal constants of nature. Of these, three have a special status, c, h and e, as they are associated with symmetry principles of the Universe: the principle of relativity, unitarity and gauge symmetry. The other two are the Boltzmann constant k and the Avogadro constant NA, neither of which has an associated symmetry.

Now that the physical units are defined by the fundamental physics of the Universe, and not by a human construct using artifacts, questions arise about the fundamental constants themselves: are they the product of some deeper mechanism? Why do they take the values they do? And will they hold forever? We have thus reached the most fundamental questions of physics. That is why metrology really goes beyond maintaining measurement standards.

Acknowledgments

These notes are the result of several lectures given during 2017, 2018 and 2019. I would like to thank the organizers José Manuel Bernabé and José Ángel Robles from the Centro Español de Metrología (CEM) for their kind invitation to the 6º Congreso Español de Metrología (2017), the 8º Seminario Intercongresos de Metrología (2018) and the Congreso del 30 Aniversario del CEM (2019); Alberto Galindo and Arturo Romero from the Real Academia de Ciencias Exactas, Físicas y Naturales de España for their kind invitation to the Ciclo Ciencia para Todos (2018) and the Jornada sobre 'La revisión del Sistema Internacional de Unidades (SI). Un gran paso para la ciencia' (2019); and Federico Finkel and Piergiulio Tempesta for their kind invitation to the homage to Artemio González López on the occasion of his 60th birthday. MAM-D acknowledges financial support from the Spanish MINECO, FIS 2017-91460-EXP, PGC2018-099169-B-I00 FIS-2018, and the CAM research consortium QUITEMAD+, Grant S2018-TCS-4243. The research of MAM-D has been supported in part by the U.S. Army Research Office through Grant No. W911NF-14-1-0103.
