The New SI and the Fundamental Constants of Nature

The launch in 2019 of the new International System of Units is an opportunity to highlight the key role that the fundamental laws of physics and chemistry play in our lives and in all the processes of basic research, industry and commerce. The main objective of these notes is to present the new SI in a way that is accessible to a wide audience. After reviewing the fundamental constants of nature and the universal laws behind them, the new definitions of the SI units are presented using, as a unifying principle, the discrete nature of energy, matter and information in those universal laws. The new SI is here to stay: although the experimental realizations may change due to technological improvements, the definitions will remain unaffected. Quantum metrology is expected to be one of the driving forces behind the development of second-generation quantum technologies.

On May 20th, 2019, coinciding with World Metrology Day, the new International System of Units (SI) came into force. It had been approved at the 26th General Conference on Weights and Measures (CGPM), which met in Versailles on November 13th-16th, 2018 [1]. This is a historic achievement. It is the culmination of many years of joint work between the national metrology institutes of the member states and the BIPM (Bureau International des Poids et Mesures), providing a wonderful example of international collaboration.
In 2018 the CGPM approved the redefinition of four of the base units (the kilogram, the ampere, the kelvin and the mole). In this way, all the base measurement units are now linked to physical constants instead of arbitrary references. This means the retirement of the famous mass standard, the International Prototype Kilogram (IPK) [1][2][3], which was the last standard still tied to a material artifact. Now all the base units are associated with nature's rules to create our measurement rules [1]. What underlies all these redefinitions is the possibility of carrying out measurements at atomic and quantum scales in order to realize the units at the macroscopic scale. Although removing artifacts from the definitions of base units helps guarantee their stability and universality, the new system brings with it the enormous challenge of explaining how it works to society in layman's terms, as well as in high schools and universities. Artifacts are tangible (see Fig.18), while the fundamental laws of nature (physics and chemistry) are abstract and harder for the general public to grasp. In this sense, in X A a unified treatment of all the definitions of SI units is presented using, as a common framework, the discretization of energy, matter and information that is the fundamental ingredient of the laws of physics and chemistry to which the new SI units are linked. These notes arose from several introductory lectures explaining the relationship between the fundamental constants and the new unit definitions.
When introducing the constants of nature in section IX, a distinguishing feature is highlighted among the five universal constants associated with fundamental laws of nature: while h, c and e are associated with principles of symmetry, this is not the case for Boltzmann's constant k and Avogadro's constant N A.
Even though all the new definitions of the base units are presented here, an exhaustive presentation of all their experimental realizations is avoided, for it would be far too technical for the purpose of these notes; more detailed documentation exists for that [2][3][4]. The case of the 'quantum kilo' deserves special treatment, as it is so novel and mass is something so common in daily life. Thus, the Kibble balance, which is the practical realization of the new kilo, is given a simple description of how it works. The rest of the article is organized as follows: in section VIII, the new methodology of separating unit definitions from their experimental realizations is explained; section IX describes the fundamental constants of nature appearing in the new definitions of the SI units, as preparation for the explicit definition in section X A of the seven base units. In X B the role of quantum metrology in the new SI is explained through three examples: quantum clocks, the Kibble balance and the 'quantum kilo', and the quantum metrological triangle. In section XI we reflect on the absence of the universal gravitational constant G in the new system of units and its implications. Section XII is devoted to conclusions.

II. THE NEW SI OF UNITS
The new International System of Units (SI), in force since May 20th, 2019, represents a great conceptual and practical revolution: for the first time, all units are linked to natural constants, many of them universal, and it represents a dream of Physics and Chemistry come true.
The foundation of the new SI rests on the following premises [2,3]:
1. The separation of unit definitions from their particular experimental realizations.
2. The linking of unit definitions to natural constants.
3. The new unit system is designed to last over time and not be subject to changes due to continuous advances in experimental measurement methods.
The great conceptual advance of the new SI consists in separating the practical realization of units from their definitions. This allows the units to be realized independently anywhere and at any time, as envisioned by the Committee of Experts of the Decimal Metric System in 1789. It also allows new types of realizations to be added in the future as new technologies are developed, without having to modify the definition of the unit itself. An example comes from the new quantum technologies and the development of the quantum clock, which will change the experimental realization of the second in the near future (see subsection X A).
In the new SI, the units of mass (kg), electric current (A), temperature (K) and amount of substance (mol) are redefined by linking them to the four universal constants that appear in table VII, whereas the units of time (s), length (m) and luminous intensity (cd) remain associated with constants of nature as before (see Fig.19).
The new dependencies among the units in the new SI are much more symmetrical than in the previous system, as leaps to the eye from Fig.19. The second is still the base unit on which all the others depend, except for the mole, which appears decoupled from the rest. The fundamental constants that are set to an exact value appear outside the scheme, and the units linked to them appear inside. These redefinitions have fundamental consequences for certain magnitudes, such as the electric constant ε0 and the magnetic constant µ0 in vacuum, which cease to be exact and become experimentally determined in the new SI. Thus, the magnetic constant is determined by the equation [2]:

µ0 = 2hα / (e²c), (1)

so that all the constants in this equation have a fixed value (see table VII) except for the electromagnetic fine-structure constant α, which is experimentally measured. In turn, this results in the value (CODATA 2018)

µ0 = 4π[1 + 0.0(6.8) × 10⁻¹⁰] × 10⁻⁷ N A⁻². (2)

Then, the electric constant in vacuum is obtained from the relationship ε0 = 1/(µ0c²), yielding the current value (CODATA 2018) ε0 = 8.8541878128(13) × 10⁻¹² F m⁻¹. However, in our daily life these changes will not cause any trouble, because they typically amount to a part in 10⁸, or even less. Their effects are very important, though, in the high-precision measurements needed in research laboratories and metrology institutes, where it is essential to be able to make exact and precise measurements to know whether a new discovery has really been made.
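The derived character of µ0 and ε0 can be illustrated numerically. The following sketch (in Python) uses the exact SI values of h, e and c together with the CODATA 2018 value of α, which is the only measured input:

```python
# Deriving the magnetic and electric constants from the exact SI constants
# and a measured value of the fine-structure constant alpha (CODATA 2018).
h = 6.62607015e-34       # Planck constant, J s (exact)
e = 1.602176634e-19      # elementary charge, C (exact)
c = 299792458            # speed of light in vacuum, m/s (exact)
alpha = 7.2973525693e-3  # fine-structure constant (measured)

mu0 = 2 * h * alpha / (e**2 * c)  # magnetic constant, N A^-2
eps0 = 1 / (mu0 * c**2)           # electric constant, F m^-1

print(mu0)   # close to 4*pi*1e-7, but no longer exact
print(eps0)  # ~8.854e-12 F/m
```

Any improvement in the measured value of α directly propagates to µ0 and ε0, which is precisely what it means for them to be experimentally determined quantities in the new SI.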

III. THE FUNDAMENTAL CONSTANTS OF NATURE
Underlying every universal constant of nature there is one of the fundamental laws of Physics and Chemistry. Of the seven units of the new SI, five are associated with universal constants of nature, as shown in table IV.
The fundamental constants are like the DNA of our universe. Other universes, if they exist, may have a different set of universal constants. In our universe, depending on the particular physical phenomenon and its scale, we will need some of these fundamental constants to explain it. With this handful of constants we can describe our physical world from the atomic, microscopic and mesoscopic scales up to the macroscopic, astronomical and cosmological ones.
In addition to these 5 universal constants, the new SI contains two extra constants that are used to determine the unit of time and that of luminous efficacy. The first is the hyperfine transition frequency of the unperturbed ground state of the cesium-133 atom, denoted ∆ν Cs . Although this is a constant of nature, we cannot consider it fundamental on an equal footing with the other five: if we did, any energy gap in the spectrum of any atom would also be fundamental, and we would end up with an infinite number of fundamental constants. In addition, this frequency is in principle computable using the laws of quantum electrodynamics, while constants such as those in table VII are not calculable from currently known first principles. As for the luminous efficacy K cd , the associated constant is not even universal, but purely conventional. In summary, only 5 of the 7 constants used in the new SI are really fundamental in the sense expressed here.
There is another essential aspect that deserves to be highlighted: three of these five universal constants are associated with symmetry principles of nature. The speed of light constant c is responsible for the unification of space and time in the theory of Relativity [7], one of the pillars of modern physics. What underlies this fundamental law is the Principle of Relativity, which declares all inertial reference frames in relative motion as physically equivalent. It is this symmetry that is responsible for the constancy of the speed of light. If c were not constant, Lorentz's transformations and therefore the Relativity Principle, would be broken.
The Planck constant h is responsible for the fact that physical quantities such as energy, angular momentum, etc., can take on discrete values, called quanta. It is the fundamental constant of Quantum Mechanics, another of the pillars of modern physics. What underlies this fundamental law is the Unitarity Principle, enforcing that the probability of finding the particles in their quantum state is preserved throughout their temporal evolution. Even more basic is the linearity of Quantum Mechanics, represented by the Superposition Principle of states, which is necessary to guarantee unitarity. If h were not constant, the Unitarity Principle would be broken. It is the Principle of Superposition (linearity) of Quantum Mechanics that is at the root of all the counterintuitive surprises that quantum physics brings about, as R. Feynman teaches us [8,9]. Precision tests of a possible non-linearity of Quantum Mechanics can be performed using non-linear models, which yield theoretical bounds on the relative size of non-linear corrections of only 10⁻²¹ [10], and down to 4 × 10⁻²⁷ with direct measurements [11]. To obtain these estimates, the highly exact measurements of the radio-frequency transitions probed in frequency standards are used: a possible non-linearity would produce a de-tuning of those resonant transitions in the standards.
The charge of the electron, e, is the value of the elementary (unconfined) source of the electric field in Quantum Electrodynamics, the first of the known elementary particle theories and the one that serves as a reference for the rest of the fundamental interactions. What underlies this fundamental law is the Principle of Gauge Invariance, which governs the known elementary interactions. In the case of electromagnetism, the invariance group is the simplest one, U(1). It is this symmetry that is responsible for the constancy of the electron charge: if e were not constant, the gauge symmetry would break down.
In these three examples, the values of the fundamental constants c, h and e are protected by nature's symmetries. An increasingly accurate measurement of them might reveal a lack of constancy and, therefore, the violation of one of the fundamental laws of physics. Metrology is thus also a source of discovery of new physics, through the improvement over time of its measurement methods. A very important example of this within the new SI is the so-called quantum metrological triangle that we will see in subsection X B, as well as possible variations of the fine-structure constant α or of the ratio of the proton mass to the electron mass (see X B).
The Boltzmann constant k is the conversion factor that relates the thermodynamic temperature T of a body to the thermal energy of its microscopic degrees of freedom (constituents). It is the fundamental constant of Statistical Physics, which studies the relationship between macroscopic physics and its microscopic constituents, another of the pillars of physics. The constant k appears in the description of the macroscopic world through the probability P_i, or Boltzmann factor, of finding a system in a microscopic state i when it is in thermodynamic equilibrium at temperature T:

P_i = e^(−E_i/kT) / Z,

where Z is the partition function characteristic of the system. Boltzmann established the relationship between the macroscopic and microscopic worlds in his formula for the entropy S:

S = k log W, (46)

where W is the number of different microscopic states corresponding to a macroscopic state of the system with given energy E. This is the famous equation that appears on the frontispiece of Boltzmann's tomb in Vienna. However, Boltzmann established the relationship (46) as a proportionality law, without explicitly introducing his constant. Historically, Planck was the first to write it down, in the article where he laid down the black-body radiation law, along with his constant h of energy quanta [12]. Planck was also the first to give numerical values to these two constants, using the experimental values of the universal constants that appear in Wien's displacement law and the Stefan-Boltzmann law, which describe essential properties of radiation in thermal equilibrium. These first values turned out to be very close to the current ones [12] (see Table VII). The Boltzmann constant has no associated symmetry principle, unlike the other three constants mentioned above.
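The two roles of k just described, in the Boltzmann factor and in the entropy formula, can be illustrated with a toy numerical example; the energy gap, temperature and number of constituents below are purely illustrative values:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K (exact in the new SI)

# Boltzmann factor: relative population of a level at energy dE above the
# ground state, in equilibrium at temperature T (illustrative values).
dE = 1.0e-21       # energy gap, J
T = 300.0          # temperature, K
ratio = math.exp(-dE / (k * T))

# Boltzmann entropy S = k log W for a toy system of N two-state
# constituents, which has W = 2**N microstates.
N = 100
S = k * N * math.log(2)
```

The first computation shows how k converts an energy gap into a population ratio at a given temperature; the second shows how k converts a count of microstates into a macroscopic entropy.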
Avogadro's constant N A is a conversion factor that relates the macroscopic amount of a substance to the number of its elementary constituents, be they atoms, ions, molecules, etc. It is a fundamental constant in the Atomic Theory of matter in Physics and Chemistry. The mole is introduced to handle macroscopic quantities of a substance made of a huge number of elementary entities. Avogadro's constant N A is the proportionality factor between the mass of one mole of a substance (its molar mass) and the average mass of one of its molecules, or of whatever its elementary constituents may be. N A is also approximately equal to the number of nucleons in a gram of matter. To define the mole, the oxygen atom was initially taken as a reference, and later carbon. In the new SI, the mass of one mole of any substance, be it hydrogen, oxygen or carbon, is N A times the average mass of each of its constituent particles, a physical quantity whose value must be determined experimentally for each substance.
The origin of the word mole is the Latin moles, which means mass, and molecula, a small portion of mass. Like the Boltzmann constant, Avogadro's constant has no associated symmetry principle.
As both the Boltzmann constant k and Avogadro's N A are conversion factors between macroscopic and microscopic properties, they are also related to one another:

k = R / N A, (48)

where R is the ideal gas constant that relates pressure, volume and temperature: P V = nRT, with n the number of moles in the gas.
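Since both k and N A are fixed exactly in the new SI, the gas constant R is now also an exact quantity; a one-line check in Python:

```python
k = 1.380649e-23      # Boltzmann constant, J/K (exact)
N_A = 6.02214076e23   # Avogadro constant, mol^-1 (exact)

R = k * N_A           # molar gas constant, J mol^-1 K^-1
print(R)              # ~8.3145 J/(mol K)
```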
The Atomic Hypothesis plays a fundamental role in the description of nature. It states that matter is not a continuum, but is discrete and made of elementary entities called atoms. Feynman considered it the most important idea in all of science, because it contains a great deal of information in a few words, from which one can reconstruct many of the properties of the world around us, such as the existence of different states of matter depending on the temperature, and its phase changes [8]. At the time of Boltzmann, in the second half of the XIX century, the existence of atoms and molecules was still under debate; this is one of the reasons why the Boltzmann constant was introduced late: macroscopic energies were then expressed with the gas constant, instead of energies per molecule (48) [13]. The works on Brownian motion, theoretical by Einstein and experimental by Perrin, were essential to establish the validity of the Atomic Hypothesis at the beginning of the XX century.

A. The New Definitions
The explanations of the new SI are greatly facilitated by the new viewpoint adopted of separating the definitions of units, which are linked to constants of nature, from their concrete experimental realizations. The latter may change with technology and with the development of new measurement methods in the laboratory (section VIII).
The visible universe is made of matter and radiation. Physics is the science devoted to the study of matter and radiation, and their interactions. The new SI uses the discrete nature of matter and radiation to define its units based on natural constants. The discrete character of matter is historically called the atomic hypothesis and the discrete character of radiation, the quantum hypothesis.
Let us start with electromagnetic radiation, one of whose forms is light. Its velocity c has a property that makes it special for measuring times and distances: it is a universal constant, with the same value for all inertial observers, that is, for all observers in uniform relative motion.
Since time is the most difficult magnitude to define, and at the same time the most basic, it is defined first. For this we use a very stable oscillator: the cycles of cesium atoms in an atomic clock. Galileo used pendulums, or even his own pulse, to measure time. Einstein's definition of time is famous: "What is time? Time is what a clock measures". It is a very simple and at the same time very deep definition. In fact, it is a metrological definition of time that fits very well in the new SI: once time is defined generically in terms of an oscillator or clock, the choice of a suitable clock is left for the realization of the unit second (s). According to the rules of the new SI, the definition of the second is as follows: second: "The second, symbol s, is defined by taking the fixed numerical value of the cesium frequency ∆ν Cs , the unperturbed ground-state hyperfine transition frequency of the caesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz (hertz), which is equal to 1/s".
The realization of the second by means of the cesium transition frequency, ∆ν Cs = 9192631770 Hz, (10) implies that the second is equal to the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the unperturbed ground state of the 133 Cs atom. This materialization of the time standard is an example of the provisional character of the experimental realizations of the SI units. Standards based on cesium have been around since the 1960s. We currently have more precise realizations using quantum clocks, and a renewal of the second using this quantum technology is already planned by the BIPM before 2030 (see X B). However, the definition of time will remain unchanged.
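The definition of the second amounts to counting cycles of the cesium radiation; a minimal numerical sketch:

```python
# The second as a cycle count of the Cs hyperfine radiation.
dnu_Cs = 9_192_631_770     # Hz, fixed exactly by the SI definition

one_period = 1 / dnu_Cs    # duration of one cycle, s (~1.1e-10 s)
cycles_in_one_second = round(1 / one_period)
```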
The metre is then defined with the time and the speed of light c: metre: "The metre, symbol m, is defined by taking the fixed numerical value of the speed of light in vacuum c as 299 792 458 when expressed in the unit ms −1 , where the second is defined in terms of the cesium frequency ∆ν Cs ".
With this definition, a metre is the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second. The definition is based on setting the speed of light in vacuum exactly at c = 299 792 458 m/s. The methods for measuring the speed of light have changed over time, from the initial one by Ole Rømer in 1676, based on observations of Jupiter's moon Io through a telescope, to modern techniques using laser interferometry.

Next, the natural thing is to define the unit of mass, the kilo. It turns out that light has another property that makes it very useful for this purpose: light of a fixed (monochromatic) frequency has a minimum discrete amount of energy, the photon, whose energy is proportional to its frequency, as discovered by Planck [12] and then Einstein [16]. The constant of proportionality is Planck's constant h. The units of this constant are the basic three of what was once called the MKS System, precursor of the current SI (metre, kilo and second), in the following proportion:

[h] = kg m² s⁻¹.

It is important to note that Newton's universal gravitation constant G also has units of the MKS system, although in another proportion:

[G] = m³ kg⁻¹ s⁻².

It turns out that h and G are the only fundamental constants with purely MKS units; there are other constants associated with fundamental interactions, but they involve other elementary charges rather than the mass. However, G is not good enough to define the unit of mass with the accuracy needed in metrology: the precision with which G is measured is much worse than that of h. The 'gravitational kilo' is not a good practical metrological unit. This fact is the origin of the 'quantum way' for the kilo, as we will see. In short, we can use h to define the kilo from the second and the metre, which are already defined once the value of c is set.
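The proportionality between photon energy and frequency can be illustrated with a small computation; the frequency chosen here, 540 THz, is just an illustrative green-light value (the same frequency that appears later in the candela definition):

```python
h = 6.62607015e-34   # Planck constant, J s = kg m^2 s^-1 (exact)
nu = 540e12          # photon frequency, Hz (illustrative)

E = h * nu           # photon energy, J
print(E)             # ~3.6e-19 J, i.e. a couple of electron-volts
```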
Were it not for this lack of precision in measuring G, the constant h could be decoupled from the kilo and set independently through the purely quantum Hall and Josephson effects (see X B), via the von Klitzing and Josephson constants:

R K = h/e², K J = 2e/h.

But if this other quantum route were taken, so natural in theory, then we would decouple the kilogram from h and it would be linked to an artifact again: we find ourselves forced to choose the 'quantum kilo'. Once the path of the 'quantum kilo' is chosen, the next question is how to use the Planck constant h to define it. To do this, we follow the prescription of the new SI, using the units of h and the definitions of the second and the metre already introduced above. The definition of the kilo then reads: kilogram: "The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10⁻³⁴ when expressed in the unit J s, which is equal to kg m² s⁻¹, where the metre and the second are defined in terms of c and ∆ν Cs ".

After the definition of the 'quantum kilo', the problem arises of how to realize it experimentally. The simplest thing at first sight would be to use the fundamental energy relations of Einstein [17] and Planck [12], respectively:

E = mc², E = hν. (56)

The basis of the 'quantum kilo' is to have a very precise method to measure h and then use it to define the kilo. But for this, the previous fundamental relationships present a problem. The photon, being a quantum of light energy, has no mass. If we want the quantum to have a mass, what is better defined is its de Broglie wavelength [18]:

λ = h/(mv).

However, measuring a wavelength is easy for a plane wave, which again is more typical of monochromatic radiation. To have a real mass m, we need a particle with an associated wavelength. This corresponds to a mass localized in space, which is more naturally described by a wave packet; but a wave packet does not have a single wavelength. Thus, using the most basic energy relations (56) is not the most metrologically sensible thing to do. Hence, the quantum way for the kilo is realized through the Kibble balance (see X B). This leads us to an important question: what kind of mass, inertial or gravitational, appears in the units of h, and therefore in the new definition of the kilo?
In the case of the photon, which has no mass, such a distinction does not exist. When we have a particle with mass, it will depend on the mechanical relationship we use to relate it to h, and this decides whether the kilo we define is inertial or gravitational. For example, if we use Einstein's relation, the kilo will be inertial; if we use a balance, the kilo will be gravitational. Therefore, the Kibble balance provides us with a definition of a quantum gravitational kilo. Now, the Equivalence Principle tells us that both types of mass are equal, and this has been experimentally verified with an accuracy better than that of the measurements of the fundamental constants involved in the SI: an uncertainty of (0.3 ± 1.8) × 10⁻¹³ [19]. Thus, we can ignore the distinction as long as the precision of the Equivalence Principle tests exceeds that of the fundamental constants.
The description of the 'quantum kilo' by means of the Kibble Balance belongs to the part of the new SI system corresponding to the practical realization of the kilo unit, not to its definition (see X B).
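The electrical route mentioned above can be made concrete: from the exact values of e and h one obtains the Josephson constant K J = 2e/h and the von Klitzing constant R K = h/e², and the combination 4/(K J² R K) returns h. A minimal numerical consistency check:

```python
e = 1.602176634e-19   # elementary charge, C (exact)
h = 6.62607015e-34    # Planck constant, J s (exact)

K_J = 2 * e / h       # Josephson constant, Hz/V (~483.6 THz/V)
R_K = h / e**2        # von Klitzing constant, ohm (~25812.8 ohm)

h_check = 4 / (K_J**2 * R_K)   # recovers h from the two electrical constants
```

This is the algebra underlying the quantum metrological triangle: measuring voltages and resistances with the Josephson and quantum Hall effects amounts to measuring combinations of e and h.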
To continue defining the other SI units and derive them from those already defined, we turn to the discrete nature of matter. Thus, we know that there are atoms (neutral) and electrons (charged). The most elementary (unconfined) charged matter is the electron, and with it the ampere is defined using the second already defined: ampere: "The ampere, symbol A, is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10⁻¹⁹ when expressed in the unit coulomb, C, which is equal to A s, where the second is defined in terms of ∆ν Cs ". Consequently, an ampere is the electric current corresponding to the flow of 1/(1.602176634 × 10⁻¹⁹) elementary charges per second. The advantage of the new ampere is that it can really be measured, unlike the old one, whose awkward and impracticable definition in practice left it outside the SI system. In addition, it is now independent of the kilogram, and the uncertainty of the electrical quantities is reduced.
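The counting interpretation of the new ampere can be made explicit: a current of one ampere corresponds to a definite number of elementary charges flowing per second:

```python
e = 1.602176634e-19   # elementary charge, C (exact)

I = 1.0               # electric current, A (illustrative)
electrons_per_second = I / e
print(electrons_per_second)   # ~6.24e18 elementary charges per second
```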
To experimentally realize the ampere, several techniques have been proposed: a) the most direct is to use the new SI definition through single-electron transport (SET) (see X B) [3,4], although this technique is still under development to make it competitive; b) to use Ohm's law and the Hall and Josephson effects to realize the volt and the ohm (see X B) [3,4]; c) to use the relationship between the electric current and the temporal variation of the voltage across a capacitor [3,4].
As for atoms, historically they are the elementary units of substance, and with them the mole is defined as the unit of the amount of a given substance. Behind this definition is the discrete nature of matter through the atomic hypothesis, and the natural constant associated with it is Avogadro's (48): mole: "The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly 6.022 140 76 × 10²³ elementary entities. This number is the fixed numerical value of the Avogadro constant, N A , when expressed in the unit mol⁻¹". As a consequence, the mole is the amount of substance of a system that contains 6.02214076 × 10²³ specified elementary entities. The amount of substance of a system is a measure of its number of elementary entities, which can be atoms, molecules, ions, electrons, any other particles or specified groups of particles. For the experimental realization of the mole, several techniques have been proposed [3,4]: a) the Avogadro project (International Avogadro Coordination), b) gravimetric methods, c) the gas equation, and d) electrolytic methods.
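The role of N A as a conversion factor between the mass of a single constituent and the molar mass can be sketched numerically; the atomic mass used below is an assumed illustrative value for a carbon-12 atom:

```python
N_A = 6.02214076e23        # Avogadro constant, mol^-1 (exact)

m_atom = 1.99264688e-26    # assumed mass of one carbon-12 atom, kg
molar_mass = N_A * m_atom  # kg/mol
print(molar_mass)          # ~0.012 kg/mol, i.e. ~12 g/mol
```

Note that in the new SI the molar mass of carbon-12 is no longer exactly 12 g/mol: it is an experimentally determined quantity, as the text explains.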
We still need the base unit for temperature, the kelvin. Following the new SI, we must link it to a fundamental constant of nature, in this case the Boltzmann constant k, which serves as a conversion factor between energy and temperature:

E = kT.

We can also see this constant as a result of nature's discrete character. For example, the atomic hypothesis for ideal noble gases allows their mean kinetic energy per atom to be calculated as:

⟨E⟩ = (3/2) kT.

The resulting new definition of the kelvin as the unit of thermodynamic temperature is: kelvin: "The kelvin, symbol K, is defined by taking the fixed numerical value of the Boltzmann constant k to be 1.380 649 × 10⁻²³ when expressed in the unit J K⁻¹, which is equal to kg m² s⁻² K⁻¹, where the kilogram, metre and second are defined in terms of h, c and ∆ν Cs ". This definition implies that the kelvin is equal to the thermodynamic temperature change that results in a change in thermal energy kT of 1.380649 × 10⁻²³ J. As a consequence of the new definition, the triple point of water ceases to have an exact value and now carries an uncertainty, inherited from the uncertainty that the Boltzmann constant k had before the new definition. Following the guiding principle used so far to introduce the new definitions of SI units through the discrete nature of energy (56) and matter (48), we can go further and use the discrete nature of information to introduce the Boltzmann constant.

[Figure caption: The second quantum revolution of technologies is based on the principle of superposition of quantum mechanics. The simplest case is exemplified by the two-slit experiment [8,9], where the properties of a single particle are put in superposition. When quantum superposition involves several particles, the resulting phenomenon is quantum entanglement, which is the fundamental resource of quantum information [29]. Credit: R. Sawant et al. [30]]
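A short numerical illustration of k as a conversion factor between temperature and energy (room temperature is chosen as an illustrative value):

```python
k = 1.380649e-23          # Boltzmann constant, J/K (exact)

# a temperature change of 1 K corresponds to a change k*dT in thermal energy
dT = 1.0                  # K
dE = k * dT               # J

# mean kinetic energy per atom of an ideal monatomic gas at room temperature
T = 300.0                 # K (illustrative)
E_kin = 1.5 * k * T       # J, the (3/2)kT of the text
```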
Thus, another way to substantiate the discrete origin of Boltzmann's constant is through the Landauer Principle [20], which establishes that the minimum energy dissipated as heat at a temperature T when erasing the simplest elementary system of information, whether classical (bit) or quantum (qubit), is:

E = kT log 2.

The underlying origin of the Landauer Principle is the irreversible nature of information deletion or erasure [21,22] in a system, which leads to a minimum dissipation of energy [23]. Its direct experimental verification has been achieved in several recent works [24][25][26][27]. In this way, we have a unified vision of all the SI units based on a fundamental property that energy, matter and information all possess: their discrete nature in our universe. Several techniques have been proposed for the experimental realization of the kelvin [3,4]: a) acoustic gas thermometry, b) spectral-band radiometric thermometry, c) polarizing gas thermometry and d) Johnson noise thermometry.
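The Landauer bound kT log 2 can be evaluated directly; at room temperature (an illustrative choice) it amounts to a few zeptojoules per erased bit:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K (exact)
T = 300.0          # temperature, K (illustrative room temperature)

E_landauer = k * T * math.log(2)   # minimum heat to erase one bit, J
print(E_landauer)                  # ~2.9e-21 J
```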
The seventh base SI unit, used to measure the luminous efficacy of a source, deserves a special comment. It is a measure of the goodness of a light source when its visible light is perceived by the human eye. It is clearly conventional and subjective: average values of the behavior of the human eye are used. It is quantified by the ratio of luminous flux to power, measured in the SI in lumens (lm) per watt. There is no universal law of physics associated with this unit, for the light source does not have to be in thermodynamic equilibrium. The basic concept of the candela is maintained in the new SI: candela: "The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10¹² Hz, K cd , to be 683 when expressed in the unit lm W⁻¹, which is equal to cd sr W⁻¹, or cd sr kg⁻¹ m⁻² s³, where the kilogram, metre and second are defined in terms of h, c and ∆ν Cs ". This definition is based on taking the exact value K cd = 683 lm W⁻¹ for monochromatic radiation of frequency ν = 540 × 10¹² Hz. As a consequence, a candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency ν = 540 × 10¹² Hz and has a radiant intensity in that direction of (1/683) W/sr.
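The arithmetic behind the candela definition is a single division: a source of 1 cd at 540 THz emits a radiant intensity of 1/683 W/sr in that direction:

```python
K_cd = 683.0               # luminous efficacy at 540 THz, lm/W (exact)

luminous_intensity = 1.0   # cd (illustrative)
radiant_intensity = luminous_intensity / K_cd   # W/sr
print(radiant_intensity)   # ~1.46e-3 W/sr
```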
For the experimental realization of the candela, several techniques have been proposed [3,4], based on the practical realization of radiometric units using two types of primary methods: those based on standard detectors, such as the electrical substitution radiometer and photodiodes of predictable quantum efficiency, and those based on standard sources, such as Planck's radiator and synchrotron radiation. In practice, a standard lamp of optimized design is more commonly used, emitting in a defined direction and at a long distance from the detector.

(Fig. 23 caption: the early-universe value [41] versus local measurements made with the cepheid and supernova distance ladders. Credit: Riess et al. [43].)

B. Quantum Metrology and the New SI
The formulation of the laws of quantum mechanics in the first quarter of the twentieth century allowed us to understand nature on the atomic scale. As a result of that better understanding of the atomic world, applications emerged in the form of new quantum technologies. This first period is known as the first quantum revolution and produced technologies as innovative as the transistor and the laser. Even today's classical computers are a consequence of these first-generation quantum advances. With the beginning of the 21st century, we are witnessing the emergence of new, second-generation quantum technologies [28] that constitute what is known as the second quantum revolution. Both revolutions are based on exploiting specific aspects of the laws of quantum mechanics, and we can classify them into two groups. First Quantum Revolution: based on the discrete character of the quantum world's properties: quanta of energy (such as photons), quanta of angular momentum, etc. This discrete nature of physical quantities is the first surprise of quantum physics (see Fig.20). Second Quantum Revolution: based on the superposition principle of quantum states, with which information can be stored and processed by exploiting quantum entanglement (see Fig.21).
In the second quantum revolution, five working areas have been identified that will lead to new technological developments. From least to greatest complexity, they are: quantum metrology, quantum sensors, quantum cryptography, quantum simulation and quantum computers.
Quantum metrology is considered one of the quantum technologies with the most immediate development. It is the branch of metrology that deals with performing measurements of physical parameters with high resolution and sensitivity using quantum mechanics, especially by exploiting entanglement. A fundamental question in quantum metrology is how the accuracy, measured in terms of the variance Δm used to estimate a physical parameter, scales with the number N of particles used or repetitions of the experiment. It turns out that classical interferometers, for light, electrons, etc., cannot overcome the so-called shot-noise limit [31][32][33], given by

Δm ∝ 1/√N,   (66)

whereas with second-generation quantum metrology it is possible to reach the Heisenberg limit, given by

Δm ∝ 1/N.   (67)

An example of a very important application of quantum metrology to basic research is the detection of gravitational waves by experiments such as LIGO (Laser Interferometer Gravitational-Wave Observatory) [34] (see Fig.22), where distances between masses several kilometers apart must be measured with very high precision (a thousandth of the diameter of a proton). These variations in distance occur in the lengths of the interferometer arms when a gravitational wave passes through them. Using 'squeezed light', a technique from quantum optics [36], the sensitivity of interferometers can be improved towards the quantum limit (67) [37][38][39].
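The gap between the two scalings grows with N; the following minimal sketch evaluates both limits for a few probe numbers to make the advantage concrete:

```python
import math

def shot_noise_limit(N):
    """Standard quantum (shot-noise) limit for N independent probes."""
    return 1.0 / math.sqrt(N)

def heisenberg_limit(N):
    """Heisenberg limit attainable with N entangled probes."""
    return 1.0 / N

# With a million probes, entanglement buys three orders of magnitude:
print(shot_noise_limit(1_000_000), heisenberg_limit(1_000_000))
```

In practice, decoherence and losses degrade the Heisenberg scaling, which is why squeezed-light interferometers sit between the two limits.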
The measurements of LIGO-type experiments may have another basic application: to try to elucidate the controversy that has recently arisen over the Hubble constant H₀. In the standard cosmological model, denoted ΛCDM (Λ = dark energy, CDM = cold dark matter), H₀ measures the rate at which the universe is currently expanding according to Hubble's law. It is a parameter of capital importance in cosmology. There are two sources of measurements that give discrepant values. On the one hand, the value provided by the Planck satellite in 2018, obtained by analyzing the cosmic microwave background of the early universe and extrapolating the Hubble parameter to its current value with the standard ΛCDM model, is H₀ = 67.4 ± 0.5 kilometers per second per megaparsec of distance [41]. On the other hand, measurements made in the current universe using distance ladders based on cepheids and type Ia supernovae yield H₀ = 74.03 ± 1.42 km s⁻¹ Mpc⁻¹ [42]. This discrepancy has a statistical confidence of 4.4 sigma (standard deviations), very close to the 5-sigma barrier that is considered clear evidence that the results are genuinely different. If that were the case, it would amount to new physics that the standard model has not taken into account (see Fig.23). To add to the controversy, there is also a measurement in the current universe with another distance ladder that yields an intermediate value between the two discrepant ones, H₀ = 69.8 km s⁻¹ Mpc⁻¹ [44]. All measurements made in the current universe give values above those obtained from the early universe with the standard model. Then either there are systematic errors, or this may be the indication of new physics beyond the current cosmological model, such as the existence of dynamical dark energy, to name just one possible example [43].
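The 4.4-sigma figure follows directly from the two quoted values by combining their uncertainties in quadrature; a minimal sketch:

```python
import math

def tension_sigma(v1, s1, v2, s2):
    """Discrepancy of two independent measurements, in standard deviations."""
    return abs(v1 - v2) / math.hypot(s1, s2)

early = (67.4, 0.5)    # Planck 2018 + LambdaCDM, km/s/Mpc [41]
local = (74.03, 1.42)  # cepheid + type Ia supernova distance ladder [42]
print(f"{tension_sigma(*early, *local):.1f} sigma")  # ~4.4
```

The same function applied to the intermediate value H₀ = 69.8 km/s/Mpc would give a much milder tension with either side, which is what fuels the controversy.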
LIGO experiments can be very useful in the future to elucidate this controversy, one more, about the Hubble constant. This time it is not about analyzing black hole collisions, as in the original discovery of gravitational waves, but binary neutron star collisions. It turns out that when two neutron stars merge, the resulting gravitational waves can be used to obtain information about the position of the stars, and hence of the galaxies where they are found. By accumulating statistics from at least 50 such collision events, one could have a direct measurement of the Hubble constant with an accuracy not achieved to date and resolve the dispute [45]. And recent results improve these expectations. It has already been possible to detect a neutron-star merger event useful for estimating the Hubble constant, whose resulting value is precisely H₀ = 70 km s⁻¹ Mpc⁻¹ with an uncertainty of the order of ∼ 7% [46]. It is estimated that with 15 events the error can be reduced to 1% and begin to resolve the discrepancy.
Let us now look at three important examples of applications of quantum metrology to the new SI, drawing on quantum technologies of both the first and the second revolution.

Quantum Clocks
The basic mechanism of a clock consists of a system with very stable periodic oscillations, where each period defines the unit of time, so that the clock, by counting those periods, measures time. In the past, natural periodic movements such as that of the Earth around its axis or around the Sun were used, as well as mechanical oscillators such as pendulums or quartz crystal resonators. In the 1960s, atomic oscillations began to be used to measure time, employing cesium atoms for their greater accuracy and stability compared to mechanical systems. The current reference to define the time standard is cesium-133, using the resonance frequency corresponding to the energy difference between the two hyperfine levels of its ground state (see X A). Cesium atomic clocks can typically measure time with an accuracy of one second in 30 million years. In general, the accuracy and stability of an atomic clock are greater the higher the frequency of the atomic transition and the narrower the electronic transition line. If we increase the frequency of the oscillator, we increase the resolution of the clock by reducing the period used as reference.
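The headline figure of "one second in 30 million years" can be converted into the fractional accuracy that metrologists actually quote; a minimal sketch:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s (Julian year)

def fractional_accuracy(seconds_off, years):
    """Fractional accuracy of a clock that drifts by seconds_off over the given years."""
    return seconds_off / (years * SECONDS_PER_YEAR)

# One second in 30 million years corresponds to about 1e-15:
print(f"{fractional_accuracy(1.0, 30e6):.1e}")
```

The same conversion applied to the quantum clocks discussed below, which keep a second over the age of the universe, gives fractional accuracies at the 10⁻¹⁷–10⁻¹⁸ level.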
Atomic clocks have allowed us to improve multiple technological developments that we are used to in our daily lives: controlling the frequency of television broadcasting waves, global satellite navigation systems such as GPS, financial transactions, the internet, mobile phones, etc.
The new quantum clocks are a type of atomic clock whose increase in accuracy comes from using atomic transition frequencies in the optical range instead of the microwaves used by cesium clocks. Optical frequencies of visible light are about five orders of magnitude greater than microwave frequencies. In order to make this leap, it was necessary to use ion traps (see Fig.24) together with quantum logic techniques from quantum computing [48,49], part of the second-revolution quantum technologies [50][51][52]. Quantum clocks achieve an uncertainty of only one part in 10¹⁷, which is equivalent to an error of one second in a quantum clock that had begun ticking 13.7 billion years ago, the entire age of the universe. An ion clock is an instance of a quantum clock based on quantum logic; it is an example of cooperation between two ions, each providing complementary functionality. For example, the aluminum ion Al⁺ has a transition frequency in the optical range that is useful as the clock reference frequency. However, its atomic level structure makes it a bad candidate for cooling down to the temperatures necessary for stabilization. Instead, this is possible with a beryllium ion Be⁺. Using quantum computing protocols, information on the internal state of the spectroscopy ion Al⁺, after probing its transition with a laser, can be faithfully transferred to the logic ion Be⁺, where it can be detected with almost 100% efficiency [53]. Each ion species provides a different functionality, the reference frequency or the cooling method, and the quantum entanglement between the states of both ions allows them to function as a quantum clock altogether.
Current quantum clocks use either a) one or two trapped ions, or b) ultra-cold atoms confined in electromagnetic fields in the form of optical lattices.
Each of these realizations has pros and cons. Ion clocks have very high accuracy, since they confine a single ion by cooling it down in a trap, coming very close to the ideal of a system isolated from external disturbances. However, using only one ion for the absorption signal achieves less stability, as the signal-to-noise ratio is reduced. By contrast, clocks of atoms in optical lattices can work with a large number of atoms, achieving greater stability and a better signal-to-noise ratio.
Different teams working with both options are developing techniques to achieve better and better performance. The most recent record with trapped ions has a fractional uncertainty of 9.4 × 10⁻¹⁹ [54]. Quantum clocks in optical lattices also achieve accuracies at the 10⁻¹⁸ level. There is still time to decide which of the two alternatives will be chosen for a new redefinition of the second, or whether both are complementary (see Fig.25).
The improvements in time measurement provided by quantum clocks also have important applications. The technological applications are similar to those mentioned above, and there is great interest in sending one of these quantum clocks on space missions to improve navigation systems. Another application is the high-precision measurement of the gravitational field: according to Einstein's general relativity, there is a time dilation due to gravitational effects, in addition to the kinematic dilation due to velocity. With a quantum clock, one can distinguish the gravitational fields of points differing in height by only 30 cm [55], and even less. These measurements will allow heights above sea level to be defined better, since height is not measured in the same way in different parts of the world and is crucial to understanding the activity of the oceans. Similarly, these quantum devices can be applied to geodesy, hydrology and the synchronization of telescope networks.
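The 30 cm claim can be checked with the standard weak-field formula for the gravitational redshift, Δf/f = gΔh/c²; a minimal sketch (g = 9.81 m/s² is an approximate local value):

```python
g = 9.81           # m/s^2, approximate local gravitational acceleration
c = 299_792_458.0  # m/s, exact in the SI

def gravitational_shift(height_m):
    """Fractional frequency shift between two clocks separated in height (weak field)."""
    return g * height_m / c**2

# A 30 cm height difference shifts clock rates by ~3e-17,
# resolvable by clocks with 1e-18 accuracy:
print(f"{gravitational_shift(0.30):.2e}")
```

This is why optical clocks double as gravimeters: comparing two of them directly maps differences in the gravitational potential.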
Basic research is one of the first fundamental applications of quantum clocks. By comparing the operation of several quantum clocks over time, we can discover whether any of the fundamental constants of physics changes with time, which is essential both to find new physics and to define the base units according to the new SI (see X A). Examples of fundamental constants whose temporal dependence can be probed are the electromagnetic fine-structure constant α (40) and the proton-to-electron mass ratio µ := m_p/m_e. In the past there has been controversy over possible temporal variations of α and µ detected in measurements of atomic transitions in distant quasars compared with current laboratory measurements [56,57]. It so happens that all atomic transitions depend functionally on α, and hyperfine transitions also depend significantly on the ratio µ. It turns out that quantum clocks make it possible to improve the bounds on the variation of these fundamental constants. With these experiments, the ratio of optical frequencies between Al⁺ and Hg⁺ ions can be measured, providing a bound on the time variation of α of (−1.6 ± 2.3) × 10⁻¹⁷ per year, and with the ytterbium ion Yb⁺ a bound for µ of (0.2 ± 1.1) × 10⁻¹⁶ per year, which improve on the astrophysical measurements by a factor of ten [58,59]. These negative results for the temporal variation of the fundamental constants serve as justification for the new SI unit system and its universality regardless of space and time, at least as long as experiments continue to confirm them.

Kibble Balance
The Kibble balance is the current experimental realization of the unit of mass via the quantum route in the new SI [60][61][62][63]. In this way, we fulfil the new methodology of separating unit definitions from their practical materialization (see VIII). Whereas the definition of the new kilo linked to the Planck constant has already been explained in X A, we now see how to realize it in the laboratory with current technology.
The realization of the 'quantum kilo' consists of two distinct parts. Let us start with the Kibble balance. We present a simplified discussion, but one sufficient to understand its foundations. It looks like an ordinary balance in that it also has two arms, but while the ordinary balance compares two masses, one standard and one unknown, Kibble's compares gravitational mechanical forces with electromagnetic forces. Its functioning consists of two operating modes: i/ Weighing Mode and ii/ Moving Mode. Weighing Mode: a test mass m is placed on one of the arms; it could be, for example, the IPK standard. On the other side, a circuit of electric coils is mounted, through which a current I is passed (see Fig.27). The circuit is suspended in a very strong magnetic field created by magnets with a stationary, permanent field B, and the length of wire in the field is L. The current then interacts with the constant magnetic field of the magnet, and the resulting vertical electromagnetic force is balanced against the weight of the test mass:

m g = B L I.

During this operating mode, the direct-current intensity is measured very accurately by appropriate instruments (integer quantum Hall effect), and it is proportional to the vertical force. The current is adjusted so that the resulting force equals the weight of the test mass.
Moving Mode: this is a calibration mode, necessary because the quantity BL is very difficult to measure accurately; were it not for this, the weighing mode would suffice. An electric motor is used to move the coil circuit vertically through the external magnetic field at a constant speed v (see Fig.28). This movement induces a voltage V in the circuit, whose origin is also the Lorentz force, given by

V = B L v.

During this operating mode, the voltage is measured very accurately by appropriate instruments (Josephson effect), and with it the magnetic field, to which it is proportional. Laser sensors are also used to monitor the vertical movement of the electrical circuit by interferometry; with this, variations of the order of half the wavelength of the laser used can be detected. Altogether, it is ensured that the vertical movement happens at constant speed and that the constant magnetic field can be measured.
The result of comparing the weighing mode with the moving mode, eliminating the quantity BL, is the equivalence between mechanical and electrical powers:

m g v = I V.   (70)

Although it is usual to call the Kibble balance a power balance or watt balance, note that it does not measure real powers, but virtual ones. This point is of crucial importance in metrology: were the mechanical power really measured, the device would be subject to uncontrollable friction losses; and were the electrical power measured directly, it would be subject to heat dissipation. We see that the moving mode is essential and provides the adequate calibration. It turns out that experimentally it is more accurate to measure resistances than current intensities. Using Ohm's law, I = V_R/R, we can obtain the mass on the Kibble balance from resistance and voltage measurements:

m = V_R V / (R g v),   (71)

where V_R and V are the two necessary voltage measurements.
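As a sanity check of the two-mode procedure just described, the following sketch uses hypothetical values for BL, the speed v and the test mass (the real instrument measures I and V with quantum electrical standards) and verifies that BL drops out, so the mass is recovered from I, V, g and v alone:

```python
# Illustrative numbers only: BL and v are assumptions, and BL cancels out.
g = 9.80665     # m/s^2, local gravity (measured with an absolute gravimeter)
BL = 400.0      # T*m, coil geometry factor -- assumed, never needed explicitly
m_true = 1.0    # kg, test mass to be 'weighed'

# Weighing mode: current that balances the weight, from m g = B L I
I = m_true * g / BL

# Moving mode: voltage induced at constant speed v, from V = B L v
v = 0.002       # m/s, assumed coil speed
V = BL * v

# Comparing both modes eliminates BL: m = I V / (g v)
m = I * V / (g * v)
print(f"{m:.9f} kg")
```

The cancellation of BL is exact by construction, which is precisely why the moving mode removes the hardest-to-characterize quantity of the apparatus.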
In the second part of the experimental realization of the 'quantum kilo' we need to relate the electrical power in (70) to the Planck constant h. This is done through the measurement of the electric current I in the weighing mode and the voltage V in the moving mode of the Kibble balance, using the following quantum effects.

Integer Quantum Hall Effect: a two-dimensional sample contains electrons constrained to move in that plane, subject to a longitudinal coplanar electric field and a very intense constant magnetic field B applied perpendicular to the sample (see Fig.29). In addition, the electronic sample is cooled down to temperatures near absolute zero. The system then departs from the classical Ohm's law and enters a quantum regime. As in the classical case, a transverse electric current appears that induces a transverse voltage bias, called the Hall voltage V_H. The electron system enters a new quantum behavior characterized by the appearance of jumps and plateaus in the relationship between the transverse current and the magnetic field [64]. In particular, the Hall resistance R_H associated with that Hall voltage is quantized,

R_H = h / (n e²),   (72)

where n is an integer, giving rise to the plateaus appearing in the curves of the Hall resistivity. The von Klitzing constant is defined as

R_K := h / e²,   (73)

which has dimensions of resistance and plays the role of an elementary resistance. The integer quantum Hall effect thus allows resistance to be measured in terms of h and e.

Josephson Effect: when a superconducting wire is interrupted at a point by a contact of insulating material joining the two superconducting portions, the superconducting current can be maintained by a tunnel effect of the superconducting Cooper pairs. This is known as a Josephson junction (see Fig.30).
Under these circumstances, if radiofrequency radiation of frequency ν is applied, a voltage V is induced across the junction that is proportional to the frequency and quantized [65,66]:

V = n (h / 2e) ν,   (74)

where n is an integer and 2e is the Cooper pair charge; the Josephson constant is defined as

K_J := 2e / h.   (75)

This is the so-called DC Josephson effect, and Josephson junctions can also be made with metallic point contacts or with constrictions, in addition to insulators. It allows measuring voltages with an uncertainty of 10⁻⁹–10⁻¹⁰ volts, that is, of the order of nanovolts or less. For this reason it is used for the realization of the voltage standard [1][2][3]. Now we can relate the test mass (71) used in the weighing mode to the Planck constant, which appears when measuring the resistance, also in the weighing mode, and the voltage in the moving mode. Using the integer quantum Hall effect (72) to measure the resistance and the Josephson effect (74) to measure the voltages V_R and V, we obtain the desired relationship

m = (n₁ n₂ n₃ ν₁ ν₂ / 4) h / (g v),   (76)

where the integers n₁, n₂, n₃ and the microwave frequencies ν₁, ν₂ come from the concrete measurements of the corresponding quantum effects (72), (74). To measure g, a high-precision absolute gravimeter is used, and v is measured with interferometric methods. With all these high-precision measurements, the expression (76) has a dual utility: on the one hand, given a standard mass m like that of the old IPK, we can determine h with great precision; on the other hand, since h can be measured with this method with great precision, we can fix this value of h as exact and define the unit of mass based on h. This is how the 'quantum kilo' route is carried out, detaching the unit of mass from the IPK artifact.
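With the exact values of h and e fixed by the new SI, the two quantum electrical constants just defined take exact values that any laboratory can reproduce; a minimal sketch:

```python
h = 6.62607015e-34   # J s, exact in the new SI
e = 1.602176634e-19  # C, exact in the new SI

R_K = h / e**2   # von Klitzing constant (73)
K_J = 2 * e / h  # Josephson constant (75)

print(f"R_K = {R_K:.3f} ohm")          # ~25812.807 ohm
print(f"K_J = {K_J / 1e9:.1f} GHz/V")  # ~483597.8 GHz/V
```

These are the conversion factors that turn a resistance plateau index and a microwave frequency count into SI volts and ohms inside the Kibble balance.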

Quantum Metrology Triangle
The new definition of the ampere, linked to the value of the elementary charge e of the electron, stands out for its clarity and simplicity compared with the old definition based on Ampère's law and an unrealizable construction using infinite, zero-thickness conductor wires [4]. However, it also entails the need to materialize it in some way, and this is not easy, for the number of electrons in an ordinary current is immensely large. The BIPM has approved three methods for the practical realization of the ampere [2][3][4]. One of them uses the direct definition of the ampere, A = C/s, and a single-electron transport (SET) device, which has to be cooled to temperatures close to absolute zero (see Fig.31). Through a SET, electrons pass from a source to a drain. A SET consists of a region made of silicon, called an island, between two gates that serve to manipulate the current electrically. The island temporarily stores the electrons coming from the source using another voltage gate. By controlling the voltages at the two gates, one can get a single electron to occupy the island before moving on to the drain. Repeating this process many times and very quickly establishes a current whose electrons can be counted.
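For an ideal pump transferring exactly one electron per cycle, the current is simply I = e f; this sketch shows why the resulting currents are tiny, which is the central experimental difficulty of this route:

```python
e = 1.602176634e-19  # C, exact elementary charge in the new SI

def set_current(pump_frequency_hz):
    """Current of an ideal single-electron pump: one electron per cycle, I = e f."""
    return e * pump_frequency_hz

# Pumping at 1 GHz yields only ~0.16 nA, so counting errors matter greatly:
print(f"{set_current(1e9):.3e} A")
```

Real devices must also account for missed or double transfers per cycle, which is what limits the accuracy of SET current standards.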
The electrical sector is the most quantum of all within the SI system of units. Now that the ampere has been redefined in the new SI by linking it to a fixed value of the electron charge, it is possible to relate the three magnitudes that appear in Ohm's law, V = IR, in terms of only two universal constants, h and e. This is visualized by the so-called quantum metrology triangle (see Fig.32). This triangle represents an experimental constraint that the voltage, resistance and current standards must fulfil, so that the three are not independent. Thus, if we measure the Josephson constant (75) on the one hand and the von Klitzing constant (73) on the other, they allow us to obtain values for the charge e of the electron that must be compatible, within experimental uncertainties, with the value of e obtained with single-electron transport. And the same goes for any pair of magnitudes that we take in the triangle. Therefore, the quantum metrology triangle allows us to test experimentally, as better accuracy and precision are achieved, whether the constants h and e are really constants, as assumed in the new SI. The uncertainties in these constants must be compatible across these three experimental realizations. If at some point these uncertainties fail to overlap, that is an indication of new physics, as it would affect the very foundations of quantum mechanics or quantum electrodynamics, as explained in section IX. Again, this is an example of how metrology not only serves to maintain unit standards, but also opens up paths to new fundamental laws of nature.
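The algebra of the triangle is a one-liner: since K_J R_K = (2e/h)(h/e²) = 2/e, measuring the voltage and resistance standards together returns the elementary charge, which can then be compared with the SET value. A minimal consistency sketch:

```python
h = 6.62607015e-34   # J s, exact
e = 1.602176634e-19  # C, exact

K_J = 2 * e / h   # Josephson constant, from the voltage standard
R_K = h / e**2    # von Klitzing constant, from the resistance standard

# Closing the triangle: K_J * R_K = 2/e, so e is recovered from the two standards
e_recovered = 2.0 / (K_J * R_K)
print(e_recovered)
```

In a real test the three legs carry independent experimental uncertainties, and it is the overlap of those uncertainty intervals, not an exact identity, that is checked.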

V. A 'GRAVITATIONAL ANOMALY' IN THE SI
Despite the fact that the new SI system of units amounts to a complete linking of the base units to fundamental constants of nature, the absence of one of the oldest universal constants of physics is still striking: Newton's universal gravitation constant G (see Fig.33), colloquially called big G, as opposed to small g, which represents the local acceleration of gravity at a point on Earth.
The main reason to exclude G from the new SI system of units is that it is not known with enough precision to define a unit of mass. As explained in X A, this is the origin of the 'quantum route' for the definition of the kilo when one wants to detach it from a material artifact such as the cylinder of the kilo IPK. This fact is related to the so-called 'Newton's big G problem' [68][69][70][71]: the lack of compatibility among the measurements of G of the last thirty years. Various metrology laboratories around the world have tried to measure G with experimental devices designed to reduce its uncertainty. The result is surprising: the values of G do not converge to a consistent single value, and their uncertainties do not overlap compatibly. This can be seen in Fig.34, which shows the results of multiple experiments and a vertical band where a compromise value of G is chosen. The situation has become so pressing that the NSF (National Science Foundation) has launched a global initiative to try to clarify the problem [72].
A fundamental question then arises: what is the origin of Newton's big G problem? The most natural explanation is that it is due to possible systematic errors in the experiments. Favoring this interpretation is the fact that, despite the increasing sophistication of attempts to measure G more accurately, all the experimental methods used are variants of the famous Cavendish balance [73,74]. However, in Fig.34 there appears a value [75] obtained with an experimental method completely different from those based on the Cavendish balance: a quantum method of measuring G. The device uses a technique based on atom interferometry with ultra-cold atoms. With it, the quantum nature of atoms at temperatures close to absolute zero is used to obtain an accurate measurement of the acceleration of gravity.
The Cold Atoms in Gravity (CAG) method consists of two steps: Step 1: measure the constant small g, the value of the local terrestrial gravity.
Step 2: measure of the constant big G.
The technique consists of tossing cold atoms vertically, up and down, repeatedly. This serves to probe the Earth's gravity with a cloud of rubidium (Rb) atoms in free fall. With this procedure it is possible to measure the force of gravity between an Rb atom and a reference mass of 516 kg. The result is a measurement of G with a relative uncertainty of 0.015%. Remarkably, it is the first time that a quantum method has been admitted among the accepted determinations of G. This gravitational anomaly is still a reflection of the big problem affecting modern physics: the lack of compatibility between the two great theories of our time, quantum mechanics and general relativity. An important observation (see Conclusions) is that the CAG method is an indirect measurement of Newton's big G: it proceeds by first measuring small g. This contrasts with the classical methods based on the Cavendish balance, where G is measured directly. A direct quantum measurement of G would be a first experimental indication of quantum effects in gravity and a first step towards a theory of quantum gravity. As seen in Fig.34, the CAG value is still outside the shaded vertical band of the most recent recommended value of G. This may indicate that the CAG method does not suffer from the possible systematic errors of the classical methods of measuring G, and it could be the beginning of the solution of Newton's big G problem. The way to confirm this hypothesis is to encourage more CAG experiments in independent laboratories and to use quantum metrology techniques to reduce their uncertainties. If the result of all these new classical and quantum experiments were that there are no systematic errors, then the conclusion would be even more exciting, as it would again be a door opened by metrology to new physics.
Direct quantum methods to measure G are not known. In fact, there are few physics equations where G and h appear together. One of them can help us see the difficulty of finding a direct quantum method to measure G: the equation for the Chandrasekhar-type radius of a white dwarf star. If we thought naively of producing a gravitational condensate of nucleons (fermions), resulting from balancing the gravitational pressure of N nucleons of mass M against the degeneracy pressure due to the Pauli exclusion principle, using non-relativistic quantum mechanics and Newtonian gravitation to simplify, we would get [76] an equilibrium radius of the form

R ∝ ħ² q^{5/3} / (G m M² N^{1/3}),   (77)

where N is the number of nucleons and q the number of electrons per nucleon, of mass m; it has been assumed that the density of the spherical condensate is uniform. To make a simple estimate, consider a system of only neutrons (q = 1, m = M_n, the neutron mass), for which

R ∝ ħ² / (G M_n³ N^{1/3}).   (78)

If we want a fermion system condensed in a sphere with a radius of the order of one meter, so as to be manageable in a terrestrial laboratory, we can estimate from (78), substituting the known experimental values, that the number of necessary neutrons is of the order of N ∼ 10⁷⁴, an intractable amount if we take into account that the number of atoms in the observable universe is of the order of 10⁸⁰. This difficulty reflects the disparity between the scales where gravity acts and those of quantum effects.

VI. CONCLUSIONS
The adoption of the new SI system of units brings several concrete advantages over the previous system: it solves the problem of the ampere and the electrical units that had been left outside the SI; it eliminates the dependence of the kilo on the IPK artifact; it is conceptually more satisfactory to define the units in terms of natural constants; future technological improvements will no longer affect the definitions; etc.
The new SI of units has no direct impact on our daily life, but it does have one in research laboratories and national metrology centers, which need measurements of great accuracy and precision to conduct their investigations and to guard and disseminate the primary unit standards. As usual, in the long run these new developments result in applications that do change our daily life for the better.
Therefore, it is a great conceptual challenge to explain and convey what the new system of units entails and what lies behind it.
In section X A, a common view of all the new definitions has been presented using as a unifying principle the discrete nature of energy, matter and information in the fundamental laws of physics and chemistry to which each base unit is linked. Interestingly, the only thing that remains non-discrete is spacetime. An advantage of the new SI is that it facilitates the explanation of the new definitions, as it does not require explaining the measuring devices needed to realize these units.
Metrology has a double mission: 1) To maintain the unit standards and their definitions compatible with the current laws of physics. 2) To measure with increasing accuracy and precision in order to open new doors to discover new laws of physics.
As for its more traditional mission 1), the adoption of the new SI allows us to get rid of a material artifact to define the kilo, a long-sought goal. With this, it becomes possible for the first time to materialize primary standards of the base units in different national metrology centers. In particular, quantum metrology will drastically change the dissemination and traceability of units by ensuring that they can be materialized autonomously, without the need for a single stored standard.
It is interesting to note that in the course of the construction of the new SI the three most famous balances of physics have appeared: Cavendish [73,74], Eötvös [19] and Kibble [63], as well as the seminal papers of Einstein in his annus mirabilis of 1905 [7,14,16,17].
As for the second mission, we have seen how the new SI uses five universal constants of nature. Of these, three have a special status, c, h and e, as they are associated with symmetry principles of the universe: the principle of relativity, unitarity and gauge symmetry. The other two are the Boltzmann constant k and the Avogadro constant N_A, neither of which has an associated symmetry. Now that the physical units are defined by the fundamental physics of the universe, and not by a human construct using artifacts, questions arise about the fundamental constants themselves: Are they determined by something deeper? Why do they take those values? And for how long? We have thus reached the most fundamental questions of physics. That is why metrology truly goes beyond maintaining measurement standards.
Although eliminating artifacts from the definitions of the base units better guarantees their stability and universality, the new system nevertheless brings with it the enormous challenge of explaining how it works to society at large and to schools. Artifacts are tangible (see Fig. 18), whereas knowledge of the fundamental laws of physics and chemistry is far from being common public knowledge. In this spirit, section X A presents a unified treatment of all the definitions of the SI units, using as a common framework the discretization of energy, matter and information, which is the fundamental ingredient of the laws of Physics and Chemistry to which the new SI units are linked. These notes grew out of several outreach lectures aimed at explaining the relation between the fundamental constants and the new definitions of the units.
When the constants of nature were introduced in section IX, a distinguishing feature among the five universal constants associated with fundamental laws of nature was highlighted: while h, c and e are associated with symmetry principles, this is not the case for the Boltzmann constant k and the Avogadro constant N_A.
The rest of the article is organized as follows: section VIII explains the new methodology of separating the definitions of the units from their experimental realizations; section IX describes the fundamental constants of nature that appear in the new definition of the SI units, as a preparation for the explicit definition in section X A of the seven base units, and for the description in X B of the role of quantum metrology in the new SI through three examples: quantum clocks, the Kibble balance and the 'quantum kilogram', and the quantum metrological triangle. Section XI reflects on the absence of the universal gravitational constant G from the new system of units and its implications. Section XII is devoted to conclusions.
2. The linking of the definitions of the units to constants of nature.
3. The new system of units is designed to last over time and not to be subject to changes driven by the continuous advances in experimental measurement methods.

IX. THE FUNDAMENTAL CONSTANTS OF NATURE
There is another essential aspect that deserves to be highlighted: three of these five universal constants are associated with symmetry principles of nature. The constant c, the speed of light, is responsible for the unification of space and time in the theory of Relativity [7], one of the pillars of modern physics. What underlies this fundamental law is the Principle of Relativity, which declares physically equivalent all inertial reference frames in relative motion. It is this symmetry that is responsible for the constancy of the speed of light. If c were not constant, the Lorentz transformations, and with them the Principle of Relativity, would break down.
The electron charge e is the value of the elementary (unconfined) source of the electric field in Quantum Electrodynamics, the first of the known theories of elementary particles and the benchmark for the remaining fundamental interactions. What underlies this fundamental law is the Principle of Gauge Invariance, which describes the known elementary interactions. In the case of electromagnetism, the invariance group is the simplest one, U(1). It is this symmetry that is responsible for the constancy of the value of the electron charge: if e were not constant, gauge symmetry would be broken.
In these three examples, the values of the fundamental constants c, h and e are each protected by a symmetry of nature. Ever more precise measurements of them could reveal a lack of constancy and, therefore, the violation of one of the fundamental laws of Physics. Metrology is thus also a source of discovery of new physics, through the improvement of its measurement methods over time. A very important example of this within the new SI is the so-called Quantum Metrological Triangle, discussed in subsection X B, along with possible variations of the fine-structure constant α or of the proton-to-electron mass ratio (see X B).
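The Quantum Metrological Triangle relates frequency, voltage and current through two quantum electrical constants, the Josephson constant K_J = 2e/h and the von Klitzing constant R_K = h/e². Since h and e are now fixed, both are exact; a short Python sketch (using the exact 2019 values) makes this concrete:

```python
H = 6.626_070_15e-34    # J s, Planck constant (exact in the new SI)
E = 1.602_176_634e-19   # C, elementary charge (exact in the new SI)

K_J = 2 * E / H         # Josephson constant, Hz/V (Josephson effect)
R_K = H / E**2          # von Klitzing constant, ohm (quantum Hall effect)

print(f"K_J = {K_J:.6e} Hz/V")   # ≈ 4.835978e14 Hz/V
print(f"R_K = {R_K:.3f} ohm")    # ≈ 25812.807 ohm
```

Any experimentally detected inconsistency when closing the triangle with these exact values would signal new physics.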
The Avogadro constant N_A is a conversion factor that relates a macroscopic amount of a substance to the number of its elementary constituents, be they atoms, ions, molecules, etc. It is a fundamental constant of the Atomic Theory of matter in Physics and Chemistry. The mole is introduced in order to handle macroscopic amounts of a substance made up of an enormous number of elementary entities. The Avogadro constant N_A is the proportionality factor between the mass of one mole of a substance (its molar mass) and the average mass of one of its molecules, or whatever its elementary constituents may be. N_A is also approximately equal to the number of nucleons in one gram of matter. To define the mole, the oxygen atom was initially taken as the reference, and later carbon. In the new SI, the mass of one mole of any substance, be it hydrogen, oxygen or carbon, is N_A times the average mass of each of its constituent particles, a physical quantity whose value must be determined experimentally for each substance.
The metre is then defined in terms of time and the speed of light c: metre: "The metre, symbol m, is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s⁻¹, where the second is defined in terms of the caesium frequency ∆ν_Cs".
With this definition, one metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second. This definition rests on fixing the speed of light in vacuum at exactly 299 792 458 m s⁻¹. The methods for measuring the speed of light have changed over the ages, from the first one by Ole Römer in 1676, based on observing the transits of Jupiter's moon Io with a telescope, to the modern techniques using laser interferometry.

The natural next step is to define the unit of mass, the kilogram. It turns out that light has another property that makes it very useful for defining the kilogram: light of a fixed (monochromatic) frequency has a discrete minimum of energy, called the photon, whose energy is proportional to its frequency, as discovered by Planck [12] and then Einstein [16]. That proportionality constant is the Planck constant h. Its units are the three base units of what was once called the MKS system, the precursor of the present SI: metre, kilogram and second, in the proportion [h] = kg m² s⁻¹. It is worth noting that Newton's universal gravitational constant G also has MKS units, although in a different proportion: [G] = m³ kg⁻¹ s⁻². It turns out that h and G are the only fundamental constants with MKS units. There are other constants associated with fundamental interactions, but they involve not the mass but other elementary charges. However, G is not sufficient to define the unit of mass with the exactness required in metrology. The problem is that the precision with which G can be measured is far worse than that of h. The 'gravitational kilogram' is not a good practical metrological unit. This fact is the origin of the 'quantum route' to the kilogram, as we shall see. In short, we can use h to define the kilogram from the second and the metre, which are already defined once the value of c has been fixed.
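The two relations used above, the metre as a light-travel distance and the Planck-Einstein photon energy E = hν, can be sketched numerically. In this Python sketch the caesium frequency is used merely as an example of a monochromatic wave, and the mass equivalent m = E/c² illustrates the bridge through which h can anchor the kilogram:

```python
C = 299_792_458               # m/s, speed of light in vacuum (exact)
H = 6.626_070_15e-34          # J s, Planck constant (exact)
DELTA_NU_CS = 9_192_631_770   # Hz, caesium hyperfine frequency (exact)

# One metre: the distance light travels in 1/299 792 458 of a second.
metre = C * (1 / C)
print(metre)                  # ≈ 1.0 m by construction

# Planck-Einstein relation: energy of one photon at the caesium frequency.
E_photon = H * DELTA_NU_CS    # joule
print(f"E = {E_photon:.4e} J")   # ≈ 6.091e-24 J

# Mass equivalent m = E / c^2: the link between h and the unit of mass.
m_equiv = E_photon / C**2
print(f"m = {m_equiv:.3e} kg")   # ≈ 6.78e-41 kg
```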
The description of the 'quantum kilogram' via the Kibble balance belongs to the part of the new SI dealing with the practical realization of the kilogram unit, not with its definition (see X B).
The improvements in timekeeping provided by quantum clocks also have important applications. Their technological applications are similar to those cited above, and it is of great interest to fly one of these quantum clocks on space missions and to improve navigation systems. Another application is the high-precision measurement of the gravitational field: according to Einstein's general relativity, there is a gravitational time dilation in addition to the one due to velocity. With a quantum clock one can distinguish gravitational fields differing by a height of only 30 cm [55], and even less. These measurements will make it possible to define heights above sea level better, since sea level is not measured in the same way in different parts of the world, and it is crucial for understanding the activity of the oceans. These quantum devices can similarly be applied to geodesy, hydrology, and the synchronization of telescope networks.
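The 30 cm figure can be checked with the weak-field formula for gravitational time dilation, Δν/ν ≈ g Δh / c². A back-of-the-envelope Python sketch (g = 9.80665 m/s² is the conventional standard gravity):

```python
G_STD = 9.80665       # m/s^2, standard gravity (conventional value)
C = 299_792_458       # m/s, speed of light in vacuum (exact)

def fractional_shift(delta_h_m: float) -> float:
    """Fractional frequency shift between two clocks separated in height
    by delta_h_m metres, in the weak-field approximation g*dh/c^2."""
    return G_STD * delta_h_m / C**2

shift = fractional_shift(0.30)
print(f"{shift:.2e}")   # ≈ 3.3e-17 for a 30 cm height difference
```

A shift of a few parts in 10^17 is within reach of optical clocks, whose fractional uncertainties are at the 10^-18 level, which is why they can resolve height differences of 30 cm and below.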
Explaining and conveying what the new system of units entails, and what lies behind it, is therefore a great conceptual challenge.