Evolving scientific discovery by unifying data and background knowledge with AI Hilbert

The discovery of scientific formulae that parsimoniously explain natural phenomena and align with existing background theory is a key goal in science. Historically, scientists have derived natural laws by manipulating equations based on existing knowledge, forming new equations, and verifying them experimentally. However, this process does not incorporate experimental data within the discovery step itself, which may be inefficient. We propose a solution to this problem when all axioms and scientific laws are expressible as polynomials and argue our approach is widely applicable. We model notions of minimal complexity using binary variables and logical constraints, solve polynomial optimization problems via mixed-integer linear or semidefinite optimization, and prove the validity of our scientific discoveries in a principled manner using Positivstellensatz certificates. We demonstrate that some famous scientific laws, including Kepler's Law of Planetary Motion and the Radiated Gravitational Wave Power equation, can be derived in a principled manner from axioms and experimental data.


Introduction
A fundamental problem in science involves explaining natural phenomena in a manner consistent with noisy experimental data and a body of potentially inexact and incomplete background knowledge about the universe's laws [39]. In the past few centuries, The Scientific Method [101] has led to significant progress in discovering new laws. Unfortunately, the rate of emergence of these laws and their contribution to economic growth is stagnating relative to the amount of capital invested in deducing them [27, 21]. Indeed, Dirac [41] noted that it is now more challenging for first-rate physicists to make second-rate discoveries than it was previously for second-rate physicists to make first-rate ones, while Arora et al. [7] found that the marginal value of scientific discoveries to large companies has declined since the fall of the Berlin Wall. Moreover, Bloom et al. [24] have found that research productivity in the United States halves every thirteen years because good scientific ideas are getting harder to find. This phenomenon can be partly explained by analogy to [33]: The Scientific Method has picked most of the "low-hanging fruit" in science, such as natural laws that relate physical quantities using a small number of low-degree polynomials. This calls for more disciplined and principled alternatives to The Scientific Method, which integrate background information and experimental data to generate and verify higher dimensional laws of nature, thereby promoting scientific discovery [cf. 69, 110]. Accordingly, Figure 1 provides an overview of these alternatives.
Even as the rate of scientific discovery has decreased, the scalability of global optimization methods has significantly improved. Indeed, as we argue in this paper, global optimization methods are now a mature technology capable of searching over the space of scientific laws, owing to Moore's law and significant theoretical and computational advances by the optimization community [see 22, 58, 15, for reviews]. Indeed, Bertsimas and Dunn [18, Chap. 1] observed that the speedup in raw computing power between 1991 and 2015 is at least six orders of magnitude. Additionally, polynomial optimization has become much more scalable since the works of Lasserre [74] and Parrilo [87], and primal-dual interior-point methods [84, 92, 102] have improved considerably, with excellent implementations now available in, for example, the Mosek solver [6]. Indeed, even methods for non-convex quadratically constrained problems have achieved machine-independent speedup factors of nearly 200 since their integration within commercial solvers in 2019 [1, 58].
In this paper, we propose a new approach to scientific discovery that leverages these advances by the optimization community. Given a set of background axioms, theorems, and laws expressible as a basic semialgebraic set (i.e., a system of polynomial equalities and inequalities) and observations from experimental data, we derive new laws representable as polynomial expressions that are either exactly or approximately consistent with existing laws and experimental data by solving polynomial optimization problems via linear and semidefinite optimization. By leveraging fundamental results from real algebraic geometry, we obtain formal proofs of the correctness of our laws as a byproduct of the optimization problems. This is notable because existing automated approaches to scientific discovery, as reviewed in Section 1.1, often rely upon deep learning techniques that do not provide formal proofs and are prone to "hallucinating" incorrect scientific laws that cannot be automatically proven or disproven, analogously to output from state-of-the-art Large Language Models such as GPT-4 [86]. As such, any new laws derived by these systems cannot easily be explained or justified.
Conversely, our approach discovers new scientific laws by solving an optimization problem to minimize a weighted sum of discrepancies between the proposed law and experimental data, plus the distance between the proposed law and its projection onto the set of symbolic laws derivable from background theory. As a result, our approach discovers scientific laws alongside a proof of their consistency with existing background theory by default. Moreover, our approach is scalable; it runs in polynomial time with respect to the number of symbolic variables and axioms (when the degree of the polynomial certificates we search over is bounded; see Section 2.2) with a complete and consistent background theory.
We believe our approach could be a first step towards discovering new laws of the universe which involve higher degree polynomials and are impractical for scientists to discover without the aid of modern optimization solvers and high-performance computing environments. Further, our approach is potentially useful for reconciling mutually inconsistent axioms. Indeed, if a system of scientific laws is mutually inconsistent (in the sense that no point satisfies all laws simultaneously), our polynomial optimization problem offers a formal proof of its inconsistency.

Literature Review
We propose an approach to scientific discovery, which we term AI-Hilbert, that uses polynomial optimization to obtain scientific formulae derivable from background theory axioms and consistent with experimental data. This differs from existing works on scientific discovery that express prior knowledge as constraints on the functional form of a learned model (e.g., shape constraints such as monotonicity [38]). Indeed, shape-constrained approaches to scientific discovery have been proposed [71, 72, 43, 38, 19], while discovering scientific laws that are simultaneously derivable from prior knowledge expressed as polynomials and experimental data is, to our knowledge, a new approach.
Figure 1: Comparison of scientific discovery paradigms. Traditional scientific discovery formulates hypotheses using existing theory and observed phenomena. These hypotheses are validated and tested using data. In contrast, machine learning methods rely on large datasets to identify patterns. AI-Descartes [32] proposes an inversion of the conventional scientific discovery paradigm: it generates hypotheses from observed data and validates them against known theory. However, in AI-Descartes, theory and data remain disjoint and do not mutually enhance one another. In contrast, our work, AI-Hilbert, combines data and theory to formulate hypotheses. Unlike conventional methods, insights from the data and knowledge embedded in the theory jointly reduce the search space. These two components complement each other: theory compensates for noisy or sparse data, while data compensates for inconsistent or incomplete theory. Note that blue denotes components associated with data, purple denotes components linked to theory, and dashed lines represent iterative processes.
Sum-of-Squares Optimization: A celebrated line of work in real algebraic geometry and optimization shows that certificates of polynomial non-negativity over basic semialgebraic sets can be expressed as systems of sum-of-squares polynomials (see Section 1.3). Consequently, optimizing over a real polynomial system is equivalent to solving a (larger) sum-of-squares optimization problem, and thus a tractable convex problem. These observations have allowed an entire field of optimization to blossom; see Blekherman et al. [23], Hall [59] for reviews. However, to our knowledge, no works have proposed using sum-of-squares optimization to discover scientific formulae. The closest works are Clegg et al. [31], who propose using Gröbner bases to design proofs of unsatisfiability; Curmei and Hall [38], who propose a sum-of-squares approach to fitting a polynomial to data under very general constraints on the functional form of the polynomial, e.g., non-negativity of the derivative over a box; Ahmadi and El Khadir [3], who propose learning the behavior of noisy dynamical systems via semialgebraic techniques; and Fawzi et al. [45], who propose learning proofs of optimality of stable set problems by combining reinforcement learning with the Positivstellensatz. However, determining whether polynomial optimization is useful for scientific discovery is, to our knowledge, an open question.
Data-Driven Approaches to Scientific Discovery: The availability of large amounts of scientific data generated and collected over the past few decades has spurred increasing interest in data-driven methods for scientific discovery that aim to identify symbolic equations that accurately explain high-dimensional datasets. Bongard and Lipson [25] and Schmidt and Lipson [98] proposed using heuristics and genetic programming to discover scientifically meaningful formulae, and implemented their approach in the Eureqa software system [42]. Other proposed approaches are based on mixed-integer global optimization [10, 35], sparse regression [26, 96, 19], Cylindrical Algebraic Decomposition [50], neural networks [65, 73], and Bayesian Markov Chain Monte Carlo approaches [56]. See [68, 13] for reviews of data-driven scientific discovery in fundamental physics and chemistry.
Data-driven approaches have been shown by several authors to perform well in highly overdetermined settings with limited amounts of noise. For instance, Udrescu et al. [107, 106] proposed a method called AI-Feynman, which combines neural networks with physics-based techniques to discover symbolic formulae. Moreover, they constructed a benchmark dataset of 100 scientific laws derived from Richard Feynman's lecture notes [47], with 100,000 noiseless experimental observations of each scientific law, and demonstrated that while the Eureqa system could recover an already impressive 71/100 instances from the data, their approach could recover all one hundred; see Cornelio et al. [32] for a review of scientific discovery systems.
Unfortunately, data-driven approaches to scientific discovery have at least three significant drawbacks. First, they are not data-efficient [49] and only reliably recover scientific formulae in overdetermined settings with orders of magnitude more data than a human would likely need to make the same discoveries. Indeed, Matsubara et al. [83] recently argued that the sampling regime used by AI-Feynman is unrealistic, because it samples values far from those observable in the real world. Moreover, Cornelio et al. [32] recently benchmarked AI-Feynman on 81 of the 100 laws, but with 10 (rather than 100,000) observations per law, and where each experimental observation is contaminated with a small amount of noise. In this limited data setting, Cornelio et al. [32] found that AI-Feynman recovered 40 of the 81 laws considered, whereas [32] were able to recover 49/81 laws using their symbolic regression solver. This performance degradation is a significant issue in practice because scientific data is typically expensive to obtain, scarce, and noisy. Second, purely data-driven methods are agnostic to important background information, such as existing literature, with which valid scientific formulae should be consistent unless there is extraordinary experimental evidence that the literature is incorrect. This implies that data-driven methods search over a larger space of laws than is necessary, require more data than a human would need to derive a valid law, and frequently propose laws that are not scientifically meaningful. Third, data-driven methods typically do not provide interpretable explanations for why their discoveries are valid [cf. 95], which makes diagnosing whether their discoveries are consistent with existing theory challenging.
To account for background theory in scientific discovery, Cornelio et al. [32] recently proposed an approach called AI-Descartes, which iteratively generates plausible scientific formulae using a mixed-integer nonlinear symbolic regression solver [see also 10], and tests whether these formulae are derivable from the background knowledge. If they are not, the method provides a set of reasoning-based measures to compute how distant the formulae induced from the data are from the background theory, but it is unable to recover the correct formulae. This is because their approach induces potential scientific laws from data and subsequently tests the hypothesis against the background theory, rather than learning from axioms and data simultaneously.

Contributions and Structure
We propose a new paradigm for scientific discovery that derives polynomial laws simultaneously consistent with experimental data and a body of background knowledge expressible as polynomial equalities and inequalities. We term our approach AI-Hilbert, inspired by the work of David Hilbert, one of the first mathematicians to investigate the relationship between sum-of-squares and non-negative polynomials [62].
Our approach automatically provides an axiomatic derivation of the correctness of a discovered scientific law, conditional on the correctness of our background theory. Moreover, in instances with inconsistent background theory, our approach can successfully identify the sources of inconsistency by performing best subset selection to determine the axioms that best explain the data. This is notably different from current data-driven approaches to scientific discovery, which often generate spurious laws in limited data settings and fail to differentiate between valid and invalid discoveries or provide explanations of their derivations. We illustrate our approach by axiomatically deriving some of the most frequently cited natural laws in the scientific literature, including Kepler's Third Law and Einstein's Relativistic Time Dilation Law, among other scientific discoveries.
A second contribution of our approach is that it permits fine-grained control of the tractability of the scientific discovery process, by bounding the degree of the coefficients in the Positivstellensatz certificates that are searched over (see Section 1.3 for a formal statement of the Positivstellensatz). This differs from prior works on automated scientific discovery, which offer more limited control over their time complexity. For instance, in the special case of scientific discovery with a complete body of background theory and no experimental data, to our knowledge, the only current alternative to our approach is symbolic regression [see, e.g., 35], which requires genetic programming or mixed-integer nonlinear programming techniques that are not guaranteed to run in polynomial time. On the other hand, our approach searches for polynomial certificates of a bounded degree via a fixed level of the sum-of-squares hierarchy [74, 87], which can be searched over in polynomial time [84, 91].
To contrast our approach with existing approaches to scientific discovery, Figure 2 depicts a stylized version of the scientific method. In this version, new laws of nature are proposed from background theory (which may be written down by humans, automatically extracted from existing literature, or even generated using AI [67]) and experimental data, using classical discovery techniques, data-driven techniques, or AI-Hilbert. Observe that data-driven discoveries may be inconsistent with background theory, and discoveries via classical methods may not be consistent with relevant data sources, while discoveries made via AI-Hilbert are consistent with background theory and relevant data sources. This suggests that AI-Hilbert could be a first step toward discovery frameworks that are less likely to make false discoveries. Moreover, as mentioned in the introduction, AI-Hilbert uses background theory to restrict the effective dimension of the set of possible scientific laws, and, therefore, likely requires less data to make scientific discoveries than purely data-driven approaches.

Background and Notation
The notation is mostly standard to the semidefinite and polynomial optimization literature. We let non-boldface characters such as b denote scalars, lowercase bold-faced characters such as x denote vectors, uppercase bold-faced characters such as A denote matrices, and calligraphic uppercase characters such as Z denote sets. We let [n] denote the set of indices {1, . . ., n}. We let e denote the vector of ones, 0 denote the vector of all zeros, and I denote the identity matrix. We let ∥x∥_p denote the p-norm of a vector x for p ≥ 1. We let R denote the real numbers, S^n denote the cone of n × n symmetric matrices, and S^n_+ denote the cone of n × n positive semidefinite matrices. We also use some notation specific to the sum-of-squares (SOS) optimization literature; see [34] for an introduction to computational algebraic geometry and [23] for a general theory of sum-of-squares and convex algebraic optimization. Specifically, we let R[x]_{n,2d} denote the ring of real polynomials in the n-tuple of variables x ∈ R^n of degree 2d, P_{n,2d} := {p ∈ R[x]_{n,2d} : p(x) ≥ 0 ∀x ∈ R^n} denote the convex cone of non-negative polynomials in n variables of degree 2d, and Σ[x]_{n,2d} denote the cone of sum-of-squares polynomials in n variables of degree 2d, which can be optimized over via $\binom{n+d}{d}$-dimensional semidefinite matrices [cf. 87] using interior point methods [84].

Figure 2:
The scientific method with scientific discoveries made via classical methods, data-driven methods, or AI-Hilbert. AI-Hilbert proposes scientific laws consistent with a body of background theory formally articulated as polynomial equalities and inequalities, and with relevant data sources. This likely allows scientific discoveries to be made using fewer data points than state-of-the-art approaches, and for missing scientific axioms to be deduced via abductive reasoning as part of the scientific discovery process. On the other hand, existing approaches to scientific discovery propose laws that may be inconsistent with either background theory or existing data sources.
Note that Σ[x]_{n,2d} ⊆ P_{n,2d}, and the inclusion is strict unless n ≤ 2, 2d ≤ 2, or n = 3, 2d = 4 [61]. Nonetheless, Σ[x]_{n,2d} provides a high-quality approximation of P_{n,2d}, since each non-negative polynomial can be approximated (in the ℓ_1 norm of its coefficient vector) to any desired accuracy ϵ > 0 by a sequence of sums of squares [75]. If the maximum degree d is unknown, we suppress the dependence on d in our notation.
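Concretely, membership in Σ[x]_{n,2d} reduces to the positive semidefiniteness of a Gram matrix (a standard fact, restated here for completeness):

$$ p \in \Sigma[x]_{n,2d} \iff p(x) = m_d(x)^{\top} Q\, m_d(x) \;\; \text{for some } Q \succeq 0, $$

where m_d(x) denotes the vector of all $\binom{n+d}{d}$ monomials in x of degree at most d; this Gram-matrix representation is what allows the cone to be optimized over via semidefinite programming.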
To define a notion of distance between polynomials, we also use several functional norms. Let ∥·∥_p denote the ℓ_p norm of a vector. Let µ ∈ N^n be the vector (µ_1, . . ., µ_n) and let x^µ stand for the monomial x_1^{µ_1} ⋯ x_n^{µ_n}. Then, for a polynomial q ∈ R[x]_{n,2d} with the decomposition $q(x) = \sum_{\mu \in \mathbb{N}^n : \|\mu\|_1 \leq 2d} a_\mu x^\mu$, we let $\|q\|_p := \big(\sum_{\mu \in \mathbb{N}^n : \|\mu\|_1 \leq 2d} |a_\mu|^p\big)^{1/p}$ denote the coefficient norm of the polynomial.

Finally, to derive new laws of nature from existing ones, we repeatedly invoke a fundamental result from real algebraic geometry called the Positivstellensatz [see, e.g., 103]. Various versions of the Positivstellensatz exist, with stronger versions holding under stronger assumptions [see 76, for a review], and any reasonable version being a viable candidate for our approach. For simplicity, we invoke a compact version due to [90], which holds under some relatively mild assumptions but nonetheless lends itself to relatively tractable optimization problems:

Theorem 1 (Putinar's Positivstellensatz [90], see also Theorem 5.1 of [87]). Consider the basic (semi)algebraic sets

$$ G := \{x \in \mathbb{R}^n : g_i(x) \geq 0 \;\; \forall i \in [k]\}, \tag{1} $$
$$ H := \{x \in \mathbb{R}^n : h_j(x) = 0 \;\; \forall j \in [l]\}, \tag{2} $$

where g_i, h_j ∈ R[x]_n, and G satisfies the Archimedean property [see also 23, Chap. 6.4.4], i.e., there exist an R > 0 and SOS polynomials α_0, . . ., α_k such that $R^2 - \|x\|_2^2 = \alpha_0(x) + \sum_{i \in [k]} \alpha_i(x)\, g_i(x)$. Then, for a polynomial q ∈ R[x]_n, the condition q(x) > 0 for all x ∈ G ∩ H holds if and only if there exist a degree 2d, SOS polynomials α_0, . . ., α_k ∈ Σ[x]_{n,2d}, and real polynomials β_1, . . ., β_l ∈ R[x]_{n,2d} such that

$$ q(x) = \alpha_0(x) + \sum_{i \in [k]} \alpha_i(x)\, g_i(x) + \sum_{j \in [l]} \beta_j(x)\, h_j(x). $$

Note that strict polynomial inequalities of the form h_i(x) > 0 can be modeled by introducing an auxiliary variable τ and requiring that h_i(x)τ² − 1 = 0, and thus our focus on non-strict inequalities in Theorem 1 is without loss of generality [see also 23].
Remarkably, the Positivstellensatz implies that if we set the degree of the α_i's to be zero, then polynomial laws consistent with a set of equality-constrained polynomials can be searched over via linear optimization. Indeed, this subset of laws is sufficiently expressive that, as we demonstrate in our numerical results, it allows us to recover Kepler's third law and Einstein's time dilation law axiomatically. Moreover, the set of polynomial natural laws consistent with polynomial (in)equalities can be searched over via semidefinite or sum-of-squares optimization.
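To spell out the linear case in the notation of Theorem 1 (a restatement for concreteness): with equality axioms only and multipliers of bounded degree, the certificate identity

$$ q(x) = \sum_{j \in [l]} \beta_j(x)\, h_j(x), \qquad q(x) = \sum_{\mu} a_\mu x^\mu, \quad \beta_j(x) = \sum_{\mu} b_{j,\mu} x^\mu, $$

is an identity between two polynomials, so equating the coefficient of each monomial on both sides yields equations that are linear in the unknown coefficients a_µ and b_{j,µ}. Searching over such laws is therefore a linear feasibility (or optimization) problem.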
We close this section by remarking that one could develop an alternative version of the Positivstellensatz with only inequality constraints, by expressing each equality via two inequalities. However, this increases the number of decision variables in the optimization problems generated by the Positivstellensatz and solved in this paper, and thus decreases the tractability of these optimization problems; see also [23]. Accordingly, we treat equality and inequality constraints separately throughout.

Structure
The rest of the paper is organized as follows: in Section 2, we describe AI-Hilbert, the scientific discovery system proposed in this paper. In Section 3, we argue that it presents an exciting new approach to scientific discovery, by demonstrating that it can rediscover the Hagen-Poiseuille Equation, Einstein's Relativistic Time Dilation Law, Kepler's Third Law, the Radiated Gravitational Wave Power Equation, and the Bell Inequalities (additionally, we provide a detailed comparison of AI-Hilbert with state-of-the-art approaches in Appendix A and further examples of scientific discovery via AI-Hilbert in Appendix B). In Section 4, we summarize our conclusions and discuss the limitations of and opportunities arising from this work.

Discovering Scientific Formulae Via Polynomial Optimization
In this section, we formally introduce AI-Hilbert. First, in Section 2.1, we provide an overview of our approach. Next, in Section 2.2, we formalize our approach as a polynomial optimization problem. Further, in Section 2.3, we define a new notion of the distance between a polynomial and a (possibly inconsistent or incomplete) set of polynomial background knowledge axioms. Subsequently, in Section 2.4, we discuss the role of background theory on the amount of data required to discover scientific laws, by revisiting some well-studied examples from the machine learning and statistics literature. Finally, in Section 2.5, we specialize our approach to problem settings where a scientist has access to a complete background theory and no experimental data.

Method Overview
Our scientific discovery method (AI-Hilbert) aims to discover an unknown polynomial formula q(·) ∈ R[x] which describes a physical phenomenon, and is both consistent with a background theory of polynomial equalities and inequalities B (a set of axioms) and a collection of experimental data D (defined below). We provide a high-level overview of AI-Hilbert in Figure 3 and summarize our procedure in Algorithm 1. The inputs to AI-Hilbert are a four-tuple (B, D, C(Λ), d_c), where:

• B denotes the relevant background theory, expressed as a collection of axioms in the scientific discovery setting, i.e., the polynomial laws relevant for discovering q. It is the union of the inequalities {g_1(x) ≥ 0, . . ., g_k(x) ≥ 0} defining G and the equalities {h_1(x) = 0, . . ., h_l(x) = 0} defining H, where G and H are as in (1) and (2), respectively. B is defined over n variables x_1, . . ., x_n. However, only t of these n variables can be measured and are directly relevant for explaining the observed phenomenon. In particular, we let x_1 denote the dependent variable. The remaining n − t variables appear in the background theory but are not directly observable. The background theory B is defined as complete if it contains all the axioms necessary to formally prove the target formula, and incomplete otherwise. Moreover, B is called inconsistent if it contains axioms that contradict each other, and consistent otherwise (we define these terms more rigorously in Section 2.3). A special case of inconsistency arises when a formula that incorrectly describes the studied phenomenon is added to a consistent background theory (see Section 3.6).

• D := {x̄_i}_{i∈[m]} denotes a collection of data points, or measurements of an observed physical phenomenon, which may be contaminated by noise, e.g., from measurement error. We assume that x̄_i ∈ R^n and x̄_{i,j} = 0 for j ≥ t + 1, i.e., the value of x̄_{i,j}, the jth entry of x̄_i, is set to zero for all variables j that cannot or should not be measured.

• C denotes a set of constraints and bounds which depend on a set of hyperparameters Λ. Specifically, we consider a global bound on the degree of the polynomial q; a vector d restricting individual variable degrees in q; a hyperparameter λ that models the fidelity to background theory and data; and a bound on the number of axioms that may be included in a formula derivation.

• d_c(·, G ∩ H) denotes a distance function that defines the distance from an arbitrary polynomial to the background theory. We formally define d_c in Section 2.3.
Algorithm 1 provides a high-level description of AI-Hilbert. The procedure first combines the background theory B and data D to generate a polynomial optimization problem Pr which targets a specific concept identified by a dependent (or target) variable, included in the set of observable entities that can be measured in the environment (x_1, . . ., x_t). This is achieved by leveraging the distance d_c (formally defined in Section 2.3) and integrating the bounds and constraints C (with their hyperparameters Λ) via the PolyJuMP.jl Julia package [82]. This corresponds to the Formulate step of Algorithm 1, which we formalize in Section 2.2.
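As a rough illustration of how such an input tuple might be encoded in Julia (all variable names, axioms, data values, and hyperparameters below are hypothetical; the paper's implementation builds on DynamicPolynomials.jl and PolyJuMP.jl, but its exact data structures are not shown here):

```julia
using DynamicPolynomials

# Symbolic variables: x1 is the dependent variable; x1 and x2 are measurable (t = 2),
# while x3 appears only in the background theory and is not observable.
@polyvar x1 x2 x3

# Background theory B: inequality axioms g (each interpreted as g ≥ 0) and
# equality axioms h (each interpreted as h = 0).
g = [1 - x3^2]                      # hypothetical inequality axiom
h = [x1 - x2 * x3, x3^2 - x2]       # hypothetical equality axioms

# Data D: each row is a (possibly noisy) measurement of (x1, x2); the unobserved
# third coordinate is set to zero, as in the definition of D above.
D = [1.05  1.0  0.0;
     4.10  2.0  0.0]

# Hyperparameters Λ and constraints C: degree bounds, data/theory weight, axiom budget.
Λ = (deg_q = 2, deg_mult = 2, λ = 10.0, τ = 3)
```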

Algorithm 1 AI-Hilbert for Scientific Discovery
Input: background theory B, data D, constraints and hyperparameters C(Λ), distance function d_c
1: Pr ← Formulate(B, D, C(Λ), d_c)
2: Pr_sd ← Reduce(Pr)
3: q(x) ← Solve(Pr_sd)
Output: candidate formula q(x) = 0; multipliers α, β

AI-Hilbert then reformulates the problem Pr as a semidefinite (or linear, if no inequalities are present in the background theory) optimization problem Pr_sd, by leveraging standard techniques from sum-of-squares optimization that are now integrated within the SumOfSquares.jl and PolyJuMP.jl Julia packages, as discussed in Section 1.3. This corresponds to the Reduce step of Algorithm 1.
Next, AI-Hilbert solves Pr_sd using a mixed-integer conic optimization solver such as Gurobi [1] or Mosek [6]. This corresponds to the Solve step of Algorithm 1.
AI-Hilbert then outputs a candidate formula of the form q(x) = 0, where the only monomials with nonzero coefficients are those that contain only the variables x_1, . . ., x_t, the (independent and dependent) variables that are observed in the environment. The background theory may contain additional variables x_{t+1}, . . ., x_n that are not observed in the environment and that will not appear in the derived law. This is because the axioms in the background theory are not constraints on the functional form of the target polynomial, but rather general scientific laws describing the environment, often not including any of the quantities/variables observed in the data.

Finally, AI-Hilbert returns multipliers {α_i(x)}_{i∈{0}∪[k]} and {β_j(x)}_{j∈[l]} such that

$$ q(x) = \alpha_0(x) + \sum_{i \in [k]} \alpha_i(x)\, g_i(x) + \sum_{j \in [l]} \beta_j(x)\, h_j(x), $$

which is a certificate of the fact that q is derivable from the background theory. If d_c > 0, then AI-Hilbert returns a certificate that q is approximately derivable from the background theory, i.e., q is approximately equal to $\alpha_0(x) + \sum_{i \in [k]} \alpha_i(x) g_i(x) + \sum_{j \in [l]} \beta_j(x) h_j(x)$.

Overall Problem Setting
AI-Hilbert aims to discover an unknown polynomial model q(x) = 0 which contains one or more dependent variables raised to some power within the expression (to avoid the trivial solution q ≡ 0), is approximately consistent with our axioms G and H, meaning d_c is small, explains our experimental data well, meaning |q(x̄_i)| is small for each data point i, and is of low complexity.
Let x_1, . . ., x_t denote the measurable variables, and let x_1 denote the dependent variable which we would like to ensure appears in our scientific law. Let Ω := {µ ∈ N^n : ∥µ∥_1 ≤ 2d}, and let the discovered polynomial expression be q(x) = Σ_{µ∈Ω} a_µ x^µ, where 2d is a bound on the maximum allowable degree of q. We formulate the following polynomial optimization problem:

$$ \min_{q} \;\; \lambda \sum_{i \in [m]} |q(\bar{x}_i)| + d_c(q, G \cap H) \quad \text{s.t.} \quad \sum_{\mu \in \Omega :\, \mu_1 \geq 1} |a_\mu| \geq 1, \qquad a_\mu = 0 \;\; \forall \mu \in \Omega : \exists j > t \text{ with } \mu_j \geq 1, \tag{4} $$

where d_c, the distance between q and the background theory, is the optimal value of an inner minimization problem we define in Section 2.3, λ > 0 is a hyperparameter that balances the relative importance of model fidelity to the data against model fidelity to the set of axioms, the first constraint ensures that x_1, our dependent variable, appears in q, and the second constraint ensures that we do not include any unmeasured variables. In certain problem settings, we constrain d_c = 0, rather than penalizing the size of d_c in the objective.
Note that this formulation controls the complexity of the scientific discovery problem via the degree of the Positivstellensatz certificate searched over within d_c: a smaller bound on the allowable degree in the certificate yields a more tractable optimization problem but a less expressive family of certificates to search over, which ultimately entails a trade-off that must be made by the user. Indeed, this trade-off has been formally characterized by Lasserre [75], who showed that every non-negative polynomial is approximable to any desired accuracy by a sequence of sum-of-squares polynomials, with a trade-off between the degree of the SOS polynomial and the quality of the approximation.
After solving Problem (4), one of two possibilities occurs. Either the distance between q and our background information is 0, or the Positivstellensatz provides a non-zero polynomial which defines the discrepancy between our derived physical law and its projection onto our background information. In this sense, solving Problem (4) also provides information about the inverse problem of identifying a complete set of axioms that explain q. In either case, it follows from the Positivstellensatz (Theorem 1) that solving Problem (4) for different hyperparameter values and different bounds on the degree of q eventually yields polynomials that explain the experimental data well and are approximately derivable from background theory.
We close this section with two remarks on the generality and complexity of AI-Hilbert.
Implicit and Explicit Symbolic Discovery: Most prior work [e.g., 38, 32, 99] aims to identify an unknown symbolic model f ∈ R[x]_{n,2d} of the form y_i = f(x_i) for a set of independent variables of interest x_i ∈ R^n and a dependent variable y_i ∈ R, while AI-Hilbert searches for an implicit polynomial function q which links the dependent and independent variables. We do this for two reasons. First, many scientific formulae of practical interest admit implicit representations as polynomials, but explicit representations of the dependent variable as a polynomial function of the independent variables are not possible [cf. 2]. For instance, Kepler's third law of planetary motion has this property (see Section 3.6). Second, as proven by Artin [9] to partially resolve Hilbert's 17th problem [cf. 61], an arbitrary non-negative polynomial can be represented as a sum of squares of rational functions. Therefore, by multiplying by the denominator in Artin's representation [9], implicit representations of natural laws become a viable and computationally affordable search space.
We remark that the implicit representation of scientific laws as polynomials q(x) = 0 introduces some degeneracy in the set of optimal polynomials derivable from (4), particularly in the presence of a correct yet overdetermined background theory; for instance, the derivation of Kepler's Law of Planetary Motion in Section 3.6 admits multiple equivalent implicit forms. To partly break this degeneracy, we propose to either constrain the degree of the proof certificate and gradually increase it (as is done in (4)) or, equivalently in a Lagrangian sense, include a term modeling the complexity of our derived polynomial (e.g., ∥q∥_1, the L_1-coefficient norm of q) in the objective.
Complexity of Scientific Discovery: Observe that, if the degree of our new scientific law q is fixed and the degree of the polynomial multipliers in the definition of d_c is also fixed, then Problem (4) can be solved in polynomial time with a consistent set of axioms (resp. nondeterministic polynomial time with an inconsistent set of axioms). This is because solving Problem (4) with a fixed degree and a consistent set of axioms corresponds to solving a semidefinite optimization problem of polynomial size, which can be solved in polynomial time (assuming that a constraint qualification such as Slater's condition holds) [84]. Moreover, although solving Problem (4) with a fixed degree and an inconsistent set of axioms corresponds to solving a mixed-integer semidefinite optimization problem, which is NP-hard, recent evidence [40] shows that integer optimization problems can be solved in polynomial time with high probability. This suggests that Problem (4) may also be solvable in polynomial time with high probability. However, if the degree of q is unbounded then, to the best of our knowledge, no existing algorithm solves Problem (4) in polynomial time. This explains why searching for scientific laws of a fixed degree and iteratively increasing the degree of the polynomial laws searched over, in accordance with Occam's Razor, is a key aspect of our approach.

Distance to Background Theory and Model Complexity
Scientists often start with experimental measurements and a set of polynomial equalities and inequalities (axioms) which they believe to be true. From these axioms and measurements, they aim to deduce a new law, explaining their data, which includes one or more dependent variables and excludes certain variables. The simplest case of scientific discovery involves a consistent and correct set of axioms that fully characterize the problem. In this case, the Positivstellensatz (Theorem 1) facilitates the discovery of new scientific laws via deductive reasoning, without using any experimental data, as we argue in Section 2.5. Indeed, under an Archimedean assumption, the set of all valid scientific laws corresponds precisely to the preprime (see [34] for a definition) generated by our axioms [90], and searching for the simplest polynomial version of a law which features a given dependent variable corresponds to solving an easy linear or semidefinite feasibility problem.
It is not uncommon to have a set of axioms that is inconsistent (meaning that there are no values of x ∈ R^n that satisfy all laws simultaneously), or incomplete (meaning the axioms do not 'span' the space of all derivable polynomials; we provide a formal definition later in this section). Therefore, we require a notion of a distance between a body of background theory (which, in our case, consists of a set of polynomial equalities and inequalities) and a polynomial. We now establish this definition, treating the inconsistent and incomplete cases separately; note that the inconsistent and incomplete case may be treated via the inconsistent case. We remark that [23, 113] propose related notions of the distance between (a) a point and a variety defined by a set of equality constraints, and (b) two semialgebraic sets via their Hausdorff distance. However, to our knowledge, the distance metrics in this paper have not previously been proposed.
Incomplete Case: Suppose B is a background theory (consisting of equalities and inequalities in G and H), where G satisfies the previously defined Archimedean property (see Theorem 1) with constant R, and the axioms are not inconsistent, meaning that G ∩ H ≠ ∅. Then, a natural notion of distance is the ℓ_2 coefficient distance d_c between q and G ∩ H, which is given by:

$$ d_c(q, G \cap H) := \min_{\alpha_0, \ldots, \alpha_k \in \Sigma[x]_n, \;\; \beta_1, \ldots, \beta_l \in \mathbb{R}[x]_n} \Big\| q - \alpha_0 - \sum_{i \in [k]} \alpha_i g_i - \sum_{j \in [l]} \beta_j h_j \Big\|_2 . $$

It follows directly from Putinar's Positivstellensatz that d_c(q, G ∩ H) = 0 if and only if q is derivable from B. We remark that this distance has a geometric interpretation as the distance between a polynomial q and its projection onto the algebraic variety generated by G ∩ H. Moreover, by norm equivalence, this is equivalent to the Hausdorff distance [113] between q and G ∩ H.
With the above definition of d_c, and the fact that G ∩ H ≠ ∅, we say that G ∩ H is an incomplete set of axioms if there does not exist a polynomial p, with a non-zero coefficient on at least one monomial involving a dependent variable, such that d_c(p, G ∩ H) = 0.
Inconsistent Case: Suppose B is an inconsistent background theory, i.e., G ∩ H = ∅. Then, a natural approach to scientific discovery is to assume that a subset of the axioms are valid laws, while the remaining axioms are scientifically invalid (or invalid in a specific context, e.g., micro vs. macro-scale). In line with the sparse regression literature [cf. 20] and related work on discovering nonlinear dynamics [19, 80], we assume that scientific discoveries can be made using at most τ correct scientific laws and define the distance between the scientific law and the problem data via a best subset selection problem. Specifically, we introduce binary variables z_i and y_j to denote whether the ith inequality axiom and the jth equality axiom are treated as consistent, require that α_i = 0 if z_i = 0 and β_j = 0 if y_j = 0, and impose Σ_i z_i + Σ_j y_j ≤ τ for a sparsity budget τ. Furthermore, we allow a non-zero ℓ_2 distance between the scientific law q and the reduced background theory, but penalize this distance in the objective. This gives the following notion of distance between a scientific law q and the space G ∩ H:

$$ d_c(q, G \cap H) := \min_{\alpha, \beta, z, y} \Big\| q - \alpha_0 - \sum_{i \in [k]} \alpha_i g_i - \sum_{j \in [l]} \beta_j h_j \Big\|_2 \;\; \text{s.t.} \;\; \alpha_i = 0 \text{ if } z_i = 0, \;\; \beta_j = 0 \text{ if } y_j = 0, \;\; \sum_{i \in [k]} z_i + \sum_{j \in [l]} y_j \leq \tau, \;\; z \in \{0,1\}^k, \; y \in \{0,1\}^l, $$

where α_0, . . ., α_k ∈ Σ[x]_n and β_1, . . ., β_l ∈ R[x]_n. It follows directly from the Positivstellensatz that d_c = 0 if and only if q can be derived from B. If τ = k + l, then we certainly have d_c = 0, since the overall system of polynomials is inconsistent and the sum-of-squares proof system can deduce "−1 ≥ 0" from an inconsistent system, from which it can certify any polynomial and claim a distance of 0. However, by treating τ as a hyperparameter and including the quality of the law on experimental data as part of the optimization problem (see Section 2.2), scientific discoveries can be made from inconsistent axioms by incentivizing solvers to set z_i = 0 for inconsistent axioms i. Alternatively, a practitioner may wish to explore the Pareto frontier of scientific discoveries that arises as we vary τ, to detect how large the set of correct background knowledge is. Provided there is a sufficiently high penalty on poorly explaining the experimental data via the derived law, our optimization problem prefers a subset of correct axioms with a non-zero distance d_c to the derived polynomial over a set of inconsistent axioms which gives a distance d_c = 0.
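A minimal sketch of how this axiom-selection logic can be imposed in a mixed-integer model (our illustration, with hypothetical dimensions, a generic big-M bound on the multiplier coefficients, and HiGHS as a stand-in solver; this is not the paper's implementation):

```julia
using JuMP, HiGHS

l, N, τ, M = 4, 6, 2, 100.0        # equality axioms, coefficients per multiplier, axiom budget, big-M bound
model = Model(HiGHS.Optimizer)

@variable(model, b[1:l, 1:N])       # coefficients of the equality multipliers β_j
@variable(model, y[1:l], Bin)       # y[j] = 1 iff axiom j may be used in the certificate

# β_j ≡ 0 unless axiom j is selected, and at most τ axioms may be selected.
@constraint(model, [j = 1:l, k = 1:N], b[j, k] <=  M * y[j])
@constraint(model, [j = 1:l, k = 1:N], b[j, k] >= -M * y[j])
@constraint(model, sum(y) <= τ)
```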

Impact of Background Theory on Amount of Data Needed to Discover Scientific Laws
In the introduction, we suggested that providing a partially complete background theory expressible as polynomial equalities and inequalities may accelerate the scientific discovery process by decreasing the amount of data required to recover a scientific law with high probability. We now justify our claim in the introduction by reviewing examples from the machine learning literature, which may be viewed as special cases of scientific discovery, where including relevant background theory decreases the number of data points required to recover a scientific law with high probability. The first two examples involve discovery settings where the ground truth is known to be sparse, and imposing a sparsity constraint on the discovered law reduces the amount of data required to recover the law with high probability. Note that these examples are due to [52, 8, 29]. However, the similarities between them strongly suggest that similar relations between the amount of background theory introduced and the amount of data required for scientific discovery also hold in other contexts.
Example 1 (Sparse Linear Regression [52]). Consider a sparse high-dimensional regression model where we aim to recover a τ-sparse regression vector β⋆ ∈ R^p given access to n noisy linear observations of the form y = Xβ⋆ + W, where X_{i,j} ~iid N(0, 1) and W_i ~iid N(0, σ²) for some parameter σ > 0, it is known that β⋆ is a τ-sparse vector with binary coefficients, and n ≪ p with p → ∞. Then, below an information-theoretic sample-size threshold it is impossible to recover β⋆, the (unique) optimal solution of the associated best subset selection problem (expressible as a binary polynomial optimization problem), with high probability as τ → ∞; above this threshold, solving the best subset selection problem recovers β⋆ with high probability. On the other hand, for any λ > 0, the optimal solution of the popular Lasso method only recovers β⋆ with high probability when n ≥ Θ((2τ + σ²) log p). We remind the reader that Θ(·) hides universal constants.

Example 2 (Sparse Principal Component Analysis [8]). Consider a sparse principal component analysis setting where we aim to recover a τ-sparse binary vector x⋆ ∈ R^n given an observed matrix Y = (λ/τ) x⋆ x⋆^⊤ + W, where W is a GOE(n) matrix, i.e., a symmetric matrix with on-diagonal entries W_{i,i} ~iid N(0, 2/n) and off-diagonal entries W_{i,j} ~iid N(0, 1/n), and λ > 0 is the signal-to-noise ratio. Let λ and τ possibly depend on n, and let 1 ≪ τ ≪ n as n → ∞. Then:
• The (mixed-integer representable) polynomial optimization problem max{x^⊤ Y x : x ∈ {0,1}^n, Σ_i x_i = τ} achieves exact recovery with high probability at the lowest signal-to-noise threshold of the three methods considered here.
• The diagonal thresholding algorithm of [66] achieves exact recovery with high probability only at a strictly larger signal-to-noise threshold.
• The vanilla PCA method, which disregards the background theory encoded via the sparsity constraint and solves the polynomial optimization problem max{x^⊤ Y x : ∥x∥_2 = 1}, fails to recover x⋆ with high probability when λ > 1.
Example 3 (Low-Rank Matrix Completion [29]). Consider a low-rank matrix completion setting where we aim to recover a fixed rank-r n × n matrix A given a uniform random sample of its entries Ω ⊆ [n] × [n] of size m, where A satisfies the mutual incoherence property of [29] with constant µ. Then:
• Recovery of A is information-theoretically impossible when m ≤ Θ(nr log n), because there are infinitely many rank-r matrices that match all observed entries perfectly [29].
• The nuclear norm relaxation of [29] recovers A with high probability provided m ≥ Θ(n^{6/5} r log n).
• The naive approach of disregarding the rank constraint and solving min_X Σ_{(i,j)∈Ω} (X_{i,j} − A_{i,j})², which admits the solution X_{i,j} = 0 for (i, j) ∉ Ω, fails to recover A with high probability whenever m < n².
The above examples are admittedly more stylized than many scientific discovery settings that arise in practice. Nonetheless, they reveal that in certain circumstances, encoding relevant background theory provably reduces the amount of data required to recover a scientific law with high probability. This agrees with intuition: if we provide a complete background theory that can be manipulated to recover the scientific law, then, as discussed in the next section, we require no data to recover the law. On the other hand, if we provide no background theory, we may require a significant amount of data to recover a law. Therefore, providing relevant background theory that constrains the space of derivable scientific laws should decrease the amount of data needed to recover a scientific law with high confidence. This observation highlights the value of embedding relevant background theory within the scientific discovery process.
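To make the intuition concrete, the following self-contained Julia sketch (our illustration, not an experiment from the paper) contrasts recovery of a sparse vector from n < p noisy measurements with and without background knowledge; for simplicity, the background knowledge here is the active support itself, which is an even stronger assumption than a sparsity budget:

```julia
using LinearAlgebra, Random

Random.seed!(1)
p, n, τ, σ = 200, 50, 5, 0.01            # ambient dimension, samples, sparsity, noise level
support = randperm(p)[1:τ]               # true support, unknown to the unconstrained method
βstar = zeros(p); βstar[support] .= 1.0  # τ-sparse ground truth with binary coefficients

X = randn(n, p)
y = X * βstar + σ * randn(n)

# No background knowledge: least squares is underdetermined (n < p), so the
# minimum-norm solution spreads mass over all p coordinates and misses βstar.
β_unconstrained = pinv(X) * y
println("error without background knowledge: ", norm(β_unconstrained - βstar))

# With background knowledge (the support): solve a small, well-posed least squares
# problem restricted to the τ relevant columns.
β_restricted = zeros(p)
β_restricted[support] = X[:, support] \ y
println("error with background knowledge:    ", norm(β_restricted - βstar))
```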

Discovering Scientific Laws From Background Theory Alone
Suppose that the background theory B constitutes a complete set of axioms that fully describes our physical system. Then, any polynomial which contains our dependent variable x_1 and is derivable from our system of axioms is a valid physical law. Therefore, we need not collect any experimental data, and we can solve the following feasibility problem to discover a valid law (letting q(x) = Σ_{µ∈Ω} a_µ x^µ):

$$ \text{find} \;\; q, \;\; \alpha_0, \ldots, \alpha_k \in \Sigma[x]_n, \;\; \beta_1, \ldots, \beta_l \in \mathbb{R}[x]_n \;\; \text{s.t.} \;\; q(x) = \alpha_0(x) + \sum_{i \in [k]} \alpha_i(x) g_i(x) + \sum_{j \in [l]} \beta_j(x) h_j(x), \;\; \sum_{\mu \in \Omega :\, \mu_1 \geq 1} |a_\mu| \geq 1, \;\; a_\mu = 0 \;\; \forall \mu \in \Omega : \exists j > t \text{ with } \mu_j \geq 1, \tag{5} $$

where the second and third constraints ensure that we include the dependent variable x_1 in our formula q and rule out the trivial solution q ≡ 0, and exclude any solutions q which contain uninteresting symbolic variables, respectively.
Note that if we do not have any inequality constraints in either problem, then we may eliminate the α_i and obtain a linear optimization problem.

Experimental Validation
In this section, we showcase AI-Hilbert's ability to recover valid scientific laws across various problem settings, including settings with incomplete or inconsistent axioms and experimental data.
The rest of this section is laid out as follows: First, we provide a detailed explanation of AI-Hilbert's implementation in Section 3.1. Subsequently, we discuss the trade-off between data and theory made by AI-Hilbert in Section 3.2. Next, we validate AI-Hilbert on five selected problems that highlight its different capabilities, as discussed below. Our main findings are as follows:
1. In Sections 3.3-3.4, we demonstrate that AI-Hilbert successfully derives valid scientific laws solely from a complete and consistent background theory, namely by deriving the Hagen-Poiseuille Equation (Section 3.3) and the Radiated Gravitational Wave Power Equation (Section 3.4). We further demonstrate this capability on a suite of test instances in Appendix B.
2. In Sections 3.5-3.6, we demonstrate that AI-Hilbert is capable of deriving a valid scientific law from an inconsistent yet complete background theory. We illustrate this capability through two scenarios: first, when two axioms in the background theory contradict each other (Section 3.5), by using measurement data to discern the correct axiom; and second, when an incorrect candidate formula is incorporated into a complete and consistent background theory (Section 3.6). In both cases, we use real-life, noisy, experimental data.
3. In Section 3.7, we demonstrate that AI-Hilbert successfully derives a valid scientific law from an incomplete yet consistent background theory and experimental data. Specifically, we demonstrate that AI-Hilbert is data-efficient, in the sense that as the number of axioms supplied increases, less data is required to successfully derive a scientific law.
4. In Section 3.8, we showcase AI-Hilbert's ability to handle and discover inequalities, namely the Bell Inequalities. This is notable because existing scientific discovery methods, to our knowledge, cannot handle inequalities.
5. We compare AI-Hilbert with some widely used methods from the literature on a suite of test instances in Appendix A, and demonstrate that AI-Hilbert outperforms these methods by rediscovering more of the scientific laws in this test bed. This occurs because AI-Hilbert integrates data and theory systematically via sum-of-squares optimization.

Implementation Details
We now illustrate the lower-level implementation of AI-Hilbert via a synthetic example where the axioms are assumed to be consistent and complete. Consider a semialgebraic system in two real variables x and y which comprises two axioms, which we denote h_1(x, y) = 0 and h_2(x, y) = 0 (Equations (6) and (7), respectively). These axioms can be viewed as being true in a subdomain of R², i.e., when |x| = 1.
Let the set S = {(1, 1), (−1, −1)} contain all points satisfying Equations (6)-(7). If

$$ q(x, y) = \beta_1(x, y)\, h_1(x, y) + \beta_2(x, y)\, h_2(x, y), \tag{8} $$

where β_1, β_2 are polynomials in x and y, then q is a polynomial that vanishes on S. Conversely, to certify that a polynomial p(x, y) vanishes on S, we search for a polynomial g(x, y) such that q(x, y) = g(x, y) p(x, y) satisfies the expression in (8). For example, q(x, y) = x − y vanishes on S, and setting β_1 and β_2 appropriately in (8) certifies this. Suppose we wish to find a polynomial function q(x, y) that vanishes on S and explains a dataset x̄, whose columns contain observed values of x and y, respectively, and whose rows are datapoints. These observations are noiseless samples from the polynomial f(x, y) = x − y, which we aim to recover. We therefore search for a q satisfying (8) that fits the data x̄.
We assume that β_1, β_2, and q are unknown polynomials comprised of monomials of degree at most 3, i.e., the ten monomials in the vector mon = (1, y, y², y³, x, xy, xy², x², x²y, x³), where mon_1 = 1, mon_2 = y, . . ., mon_10 = x³. Let β_1 = Σ_{j=1}^{10} a_j mon_j, β_2 = Σ_{j=1}^{10} b_j mon_j, and q = Σ_{j=1}^{10} c_j mon_j. Further, let v denote the vector (a_1, . . ., a_10, b_1, . . ., b_10, c_1, . . ., c_10), let x̄_i denote the ith row of x̄, and let q(x̄_i) be the value of the polynomial q evaluated at the point (x, y) = x̄_i. Note that q(x̄_i) = Σ_{j=1}^{10} c_j mon_j(x̄_i) is a linear function of the unknowns c_1, . . ., c_10. If x̄_i is a noiseless experimental observation, we should have q(x̄_i) = 0. Accordingly, we interpret any nonzero value of |q(x̄_i)| as the error when q is evaluated at x̄_i, and aim to minimize this error when selecting q. Equating the coefficients of the monomials on both sides of Equation (8) yields a system of linear equations in the variables a_j, b_j, and c_j.
Let these constraints be denoted by Av = 0, and let m, the number of data points used, be an integer between 1 and 4. Then, we solve the linear optimization problem

$$ \min_{v, t, w} \;\; 100 \sum_{i=1}^{m} t_i + \sum_{j=1}^{10} w_j \quad \text{s.t.} \quad t_i \geq q(\bar{x}_i), \;\; t_i \geq -q(\bar{x}_i) \;\; \forall i \in [m], \quad w_j \geq c_j, \;\; w_j \geq -c_j \;\; \forall j \in [10], \quad \sum_{j :\, \text{mon}_j \text{ contains } x} c_j = 1, \quad Av = 0. \tag{15} $$

In Problem (15), the first two constraints imply that t_i ≥ |q(x̄_i)|; the third and fourth force w_j ≥ |c_j|. The second-to-last constraint forces a monomial containing x to be present in q, to avoid the trivial solution q ≡ 0 (we sometimes use a different right-hand-side value to obtain non-fractional c_j values). Thus, the optimization problem above searches for a polynomial q that minimizes a weighted combination of the L_1-error of q on the data and the L_1-coefficient norm of q; the latter term is a regularization term incentivizing a sparse q.
If we set m = 1 and solve (15), we obtain the (correct) function q(x, y) = x − y. In other words, a single data point from x̄ suffices to "recover" x − y with the choice of objective in (15).
However, let LPF be the linear optimization problem obtained by dropping the second term in the objective of (15) (and thus not incentivizing sparsity). If we solve LPF with m = 1, we get q′ = 4x³ − x² − y² − 4y + 2 as a solution. This is not a multiple of (x − y), but it vanishes at the points in S and at the first datapoint in x̄. We need m = 2 (and thus two datapoints) before we obtain a solution of the form ((x − y)² + 4)(x − y). This illustrates the role of sparsity and regularization in reducing the amount of data required to recover a scientific law. Note that (x − y)² + 4 is strictly positive for all real x and y, and therefore this solution vanishing on S indicates that x − y = 0 on S.
Further, if we drop the constraints Av = 0 and allow q to be an arbitrary degree-3 polynomial in x and y (not necessarily equal to β_1 h_1 + β_2 h_2), then we cannot recover a multiple of x − y until we set m = 4 and use all datapoints in x̄. In the latter case, we get q = x³ − y³. This highlights the value of background theory in restricting the space of feasible scientific laws and reducing the amount of data required to recover a scientific law.
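The construction above can be written out as a small linear program. The following Julia sketch is our illustration of Problem (15); because the exact axioms (6)-(7) are not reproduced in the text, it uses a stand-in pair of equality axioms with the same real solution set S = {(1, 1), (−1, −1)}, and the package choices (DynamicPolynomials, JuMP, HiGHS) are likewise illustrative rather than the paper's exact implementation:

```julia
using DynamicPolynomials, JuMP, HiGHS

@polyvar x y
h1 = x - y^3                          # stand-in for axiom (6); vanishes exactly when x = y^3
h2 = y^2 - 1                          # stand-in for axiom (7); together with h1, its real zeros are S
mon = monomials([x, y], 0:3)          # the ten monomials of degree at most 3
nm = length(mon)

data = [1.0 1.0; -1.0 -1.0]           # hypothetical noiseless observations of (x, y)
m = size(data, 1)

model = Model(HiGHS.Optimizer)
@variable(model, a[1:nm])             # coefficients of β1
@variable(model, b[1:nm])             # coefficients of β2
@variable(model, c[1:nm])             # coefficients of q
@variable(model, t[1:m] >= 0)         # epigraph variables for |q(x̄_i)|
@variable(model, w[1:nm] >= 0)        # epigraph variables for |c_j|

β1 = sum(a[j] * mon[j] for j in 1:nm)
β2 = sum(b[j] * mon[j] for j in 1:nm)
q  = sum(c[j] * mon[j] for j in 1:nm)

# Certificate constraint q = β1*h1 + β2*h2 (Equation (8)): equating every monomial
# coefficient yields the linear system denoted Av = 0 in the text.
for coef in coefficients(q - β1 * h1 - β2 * h2)
    @constraint(model, coef == 0)
end

# Data-fit constraints t_i ≥ |q(x̄_i)| and regularization constraints w_j ≥ |c_j|.
for i in 1:m
    qi = q(x => data[i, 1], y => data[i, 2])
    @constraint(model, t[i] >= qi)
    @constraint(model, t[i] >= -qi)
end
for j in 1:nm
    @constraint(model, w[j] >= c[j])
    @constraint(model, w[j] >= -c[j])
end

# Force a monomial containing x to appear in q, ruling out the trivial solution q ≡ 0.
@constraint(model, sum(c[j] for j in 1:nm if degree(mon[j], x) >= 1) == 1)

# Weighted combination of the L1 data error and the L1 coefficient norm of q.
@objective(model, Min, 100 * sum(t) + sum(w))
optimize!(model)

println(sum(value(c[j]) * mon[j] for j in 1:nm))  # expect a sparse element of the ideal that
                                                  # fits the data, e.g. x - y or x - y^3
```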
If the only measured variable is x, then we would try to eliminate y from the final formula. Multiplying (7) by y + x³ and subtracting the result from (6), we get the expression x² + x⁶ − 2 = 0, which is satisfied by the x-components of the points in S. Sometimes, there may be a unique way of eliminating variables and obtaining a formula over the measured variables.

Trade-off Between Data and Theory
There is a fundamental trade-off between the amount of background theory available and the amount of data needed for scientific discovery. Indeed, with a complete and consistent background theory, it is possible to perform scientific discovery without any experimental data via the Positivstellensatz (see Section 1.3). On the other hand, the purely data-driven approaches to scientific discovery reviewed in Section 1.1 often require many noiseless or low-noise experimental observations to successfully perform scientific discovery. More generally, as the amount of relevant background theory increases, the amount of experimental data required by AI-Hilbert to perform scientific discovery cannot increase, because increasing the amount of background theory decreases the effective VC-dimension of our scientific discovery problem [see, e.g., 60]. This can be seen in the machine learning examples discussed in Section 2.4, where imposing a sparsity constraint (i.e., providing a relevant axiom) reduces the number of data observations needed to discover a ground truth model; see also [79, 57] for an analysis of the asymptotic consistency of shape-constrained regression. We provide further evidence of this experimentally in Section 3.7.
The above observations can also be explained via real algebraic geometry: requiring that a scientific law is consistent with an axiom is equivalent to restricting the space of valid scientific laws to a subset of the space of discoverable scientific laws. As such, an axiom is equivalent to infinitely many data observations that penalize scientific laws outside a subspace but provide no information that discriminates between scientific laws within the subspace.

Deriving the Hagen-Poiseuille Equation
We consider the problem of deriving the velocity of laminar fluid flow through a circular pipe from a simplified version of the Navier-Stokes equations, an assumption that the velocity can be modeled by a degree-two polynomial in the radius of the pipe, and a no-slip boundary condition. Letting u(r) denote the velocity in the pipe, where r is the distance from the center of the pipe, R denote the radius of the pipe, ∆p denote the pressure differential throughout the pipe, L denote the length of the pipe, and µ denote the viscosity of the fluid, we have the following velocity profile for r ∈ [0, R]:

$$ u(r) = \frac{\Delta p \, (R^2 - r^2)}{4 L \mu}. \tag{16} $$

We now derive this law axiomatically, by assuming that the velocity profile can be modeled by a symmetric polynomial and iteratively increasing the degree of the polynomial until we obtain a polynomial solution, consistent with Occam's Razor. Accordingly, we initially set the degree of u to be two and add together four axioms, Equations (17)-(20), with appropriate polynomial multipliers. Here, Equation (17) posits a quadratic velocity profile in r, Equation (18) imposes a simplified version of the Navier-Stokes equations in cylindrical coordinates, Equation (19) imposes a no-slip boundary condition on the velocity profile of the form u(R) = 0, and Equation (20) posits that the pressure gradient throughout the pipe is constant. The variables in this axiom system are u, r, R, L, µ, ∆p, c_0, c_2, and dp/dx. We treat c_0, c_2, and dp/dx as variables that cannot be measured, and use the differentiate function in Julia to symbolically differentiate u = c_0 + c_2 r² with respect to r in Equation (18) before solving the problem, giving the equivalent expression 4c_2µr − r dp/dx. Solving Problem (5) with u as the dependent variable, and searching for polynomial multipliers (and a polynomial q) of degree at most 3 in each variable and an overall degree of at most 6, we recover Equation (16) in the implicit polynomial form 4Lµu − ∆p(R² − r²) = 0, which confirms the result. The associated polynomial multipliers for Equations (17)-(20) are 4rLµ, …
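For concreteness, a plausible polynomial encoding of the four axioms described above (our reconstruction from the prose; the exact forms of Equations (17)-(20) as printed in the paper are not reproduced here) is:

$$
\begin{aligned}
&(17)\quad u - c_0 - c_2 r^2 = 0, \qquad
&&(18)\quad \mu \frac{d}{dr}\!\Big(r \frac{du}{dr}\Big) - r \frac{dp}{dx} = 0 \;\; \Big(= 4 c_2 \mu r - r \tfrac{dp}{dx} \text{ after substituting } u = c_0 + c_2 r^2\Big),\\
&(19)\quad c_0 + c_2 R^2 = 0, \qquad
&&(20)\quad L \frac{dp}{dx} + \Delta p = 0 .
\end{aligned}
$$

Under this encoding, eliminating c_0, c_2, and dp/dx yields u = ∆p(R² − r²)/(4Lµ), i.e., exactly the implicit identity 4Lµu − ∆p(R² − r²) = 0 reported above.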

Radiated Gravitational Wave Power
We now consider the problem of deriving the power radiated from gravitational waves emitted by two point masses orbiting their common center of gravity in a Keplerian orbit, as originally derived by Peters and Mathews [89] and verified for binary star systems by Hulse and Taylor [64]. Specifically, [89] showed that the average power generated by such a system is

$$ P = \frac{32}{5} \, \frac{G^4}{c^5} \, \frac{(m_1 m_2)^2 (m_1 + m_2)}{r^5}, $$

where P is the (average) power of the wave, G = 6.6743 × 10⁻¹¹ m³ kg⁻¹ s⁻² is the universal gravitational constant, c is the speed of light, m_1, m_2 are the masses of the objects, and we assume that the two objects orbit a constant distance r away from each other. Note that this equation is one of the twenty so-called bonus laws considered in the work introducing AI-Feynman [107], and notably, is one of only two such laws that neither AI-Feynman nor Eureqa [42] was able to derive. We now derive this law axiomatically, by combining the axioms described below with appropriate multipliers, where we make the variable substitution x = cos ϕ, y = sin ϕ, Tr stands for the trace function, and we manually define the derivative of a bivariate degree-two trigonometric polynomial in ϕ = ϕ_0 + ωt, expressed in (x, y), as a linear operator in (x, y, ω).
Note that Equation (21) is a restatement of Kepler's previously derived third law of planetary motion, Equation (22) provides the gravitational power of a wave when the wavelength is large compared to the source dimensions, obtained by linearizing the equations of general relativity, the third equation defines the quadrupole moment tensor, and Equation (23) (which we state as x² + y² = 1 within our axioms) is a standard trigonometric identity. Solving Problem (5) with P as the dependent variable, searching for a formula involving P, G, r, c, m_1, m_2 with polynomial multipliers of degree at most 20, and allowing the variables (P, x, y, ω, G, r, c, m_1, m_2) to be raised to powers of at most (1, 4, 4, 4, 3, 6, 1, 5, 5), respectively, then yields an implicit polynomial equation that verifies the result. Note that this equation is somewhat expensive to derive, owing to the fact that searching over the set of degree-20 polynomial multipliers necessitates generating a large number of linear equalities, and writing these equalities to memory is both time and memory intensive. Accordingly, we solved Problem (5) using the MIT SuperCloud environment [93] with 640 GB RAM.
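For reference, an implicit polynomial form of the target law, obtained by simply clearing denominators in the Peters-Mathews formula stated above (the exact polynomial returned by the solver is not reproduced here), is:

$$ 5\, P\, c^5 r^5 \;-\; 32\, G^4 (m_1 m_2)^2 (m_1 + m_2) \;=\; 0 . $$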
The resulting system of linear inequalities involves 416392 candidate monomials, and takes 14368 s to write to memory and 6.58 s to be solved by Mosek. This shows that the correctness of the radiated gravitational wave power equation can be confirmed via the following multipliers: Finally, Figure 4 illustrates how the Positivstellensatz derives this equation, by demonstrating that (setting m_1 = m_2 = c = G = 1) the gravitational wave equation holds precisely on the set of points (ω, r, P) where our axioms hold with equality.
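The structure of such a certificate can be illustrated in the simplified setting of Figure 4, where m_1 = m_2 = c = G = 1 and, in our reading of that specialization, the axioms reduce to Kepler's law ω^2 r^3 = 2 and the linearized-relativity relation 5P = 8 r^4 ω^6. The Julia sketch below verifies, with hand-derived multipliers that differ from the degree-20 certificate found by AI-Hilbert, that the power law 5P r^5 = 64 lies in the ideal generated by these two axioms.

```julia
using DynamicPolynomials

@polyvar P ω r

kepler     = ω^2 * r^3 - 2          # Kepler's third law with G*(m1 + m2) = 2
quadrupole = 5 * P - 8 * r^4 * ω^6  # linearized-relativity power relation
target     = 5 * P * r^5 - 64       # radiated-power law in this special case

# hand-derived multipliers exhibiting an exact algebraic certificate
α = 8 * (ω^4 * r^6 + 2 * ω^2 * r^3 + 4)
β = r^5

@assert iszero(target - (α * kepler + β * quadrupole))
```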

Deriving Einstein's Relativistic Time Dilation Law
Next, we consider the problem of deriving Einstein's relativistic time dilation formula from a complete set of background knowledge axioms plus an inconsistent "Newtonian" axiom, which posits that light behaves like a mechanical object. We distinguish between these axioms using data on the relationship between the velocity of a light clock and the relative passage of time, as measured experimentally by Chou et al. [30] and stated explicitly in the work of Cornelio et al. [32, Tab. 6]. Einstein's law describes the relationship between how two observers in relative motion observe time, and demonstrates that observers moving at different speeds experience time differently. Indeed, letting the constant c denote the speed of light, the frequency f of a clock moving at a speed v is related to the frequency f_0 of a stationary clock via f = f_0 √(1 − v^2/c^2). We now derive this law axiomatically, by adding together the following five axioms with appropriate polynomial multipliers, plus the following (inconsistent) Newtonian axiom, where dt_0 denotes the time required for light to travel between two stationary mirrors separated by a distance d, and dt denotes the time required for light to travel between two similar mirrors moving at velocity v, giving an effective distance between the mirrors of L.
These axioms have the following meaning: Equation (29) relates the time required for light to travel between two stationary mirrors to their distance, Equation (30) similarly relates the time required for light to travel between two mirrors in motion to the effective distance L, Equation (31) relates the physical distance d between the mirrors to their effective distance L induced by the motion v via the Pythagorean theorem, and Equations (32)-(33) relate frequencies and periods. Finally, Equation (34) assumes (incorrectly) that light behaves like other mechanical objects, meaning that if it is emitted orthogonally from an object traveling at velocity v, then it has velocity of magnitude √(c^2 + v^2). By solving Problem (4) with a cardinality constraint that we include at most τ = 5 axioms (corresponding to the exclusion of one axiom), a constraint that we must exclude either Equation (30) or Equation (34), f as the dependent variable, experimental data in f, f_0, v, c to separate the valid and invalid axioms (obtained from [32, Tab. 6] by setting f_0 = 1 to transform the data in (f − f_0)/f_0 into data in f, f_0), f_0, v, c as variables that we would like to appear in our polynomial formula q(x) = 0 ∀x ∈ G ∩ H, and searching the set of polynomial multipliers of degree at most 2 in each term, we obtain the law in 6.04 seconds using Gurobi version 9.5.1. Moreover, we immediately recognize this as a restatement of Einstein's law, Equation (28). This shows that the correctness of Einstein's law can be verified by multiplying the (consistent, relativistic set of) axioms by the following polynomials: Moreover, this verifies that the relativistic axioms, particularly the axiom c dt = 2L, fit the light clock data of [30] better than the Newtonian axioms, because, by the definition of Problem (4), AI-Hilbert selects the combination of τ = 5 axioms with the lowest discrepancy between the discovered scientific formula and the experimental data.
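As an illustration of what such a certificate looks like, the Julia sketch below checks one possible set of multipliers, derived by hand rather than returned by AI-Hilbert, for plausible polynomial forms of the relativistic axioms; the exact statements of Equations (29)-(33) are assumptions here, and the target is the time dilation law in cleared polynomial form, scaled by the monomial dt^2 dt_0^2.

```julia
using DynamicPolynomials

@polyvar c v d L dt dt0 f f0

# assumed polynomial forms of the relativistic axioms (29)-(33)
h1 = c * dt0 - 2 * d                  # stationary clock: c*dt0 = 2d
h2 = c * dt - 2 * L                   # moving clock: c*dt = 2L
h3 = 4 * L^2 - 4 * d^2 - v^2 * dt^2   # Pythagorean relation between d, L, v*dt/2
h4 = f * dt - 1                       # frequency-period relation, moving clock
h5 = f0 * dt0 - 1                     # frequency-period relation, stationary clock

# time dilation law c^2*f^2 = (c^2 - v^2)*f0^2, cleared and scaled by dt^2*dt0^2
target = (c^2 * f^2 - (c^2 - v^2) * f0^2) * dt^2 * dt0^2

# hand-derived multipliers giving an exact algebraic certificate
cert = (c * dt0 + 2 * d) * h1 - (c * dt + 2 * L) * h2 - h3 +
       c^2 * dt0^2 * (f * dt + 1) * h4 - (c^2 - v^2) * dt^2 * (f0 * dt0 + 1) * h5

@assert iszero(target - cert)
```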

Deriving Kepler's Third Law of Planetary Motion
In this section, we consider the problem of deriving Kepler's third law of planetary motion from a complete set of background knowledge axioms plus an incorrect candidate formula as an additional axiom, which is to be screened out using experimental data. To our knowledge, this paper is the first work that addresses this particularly challenging problem setting. Indeed, none of the approaches to scientific discovery reviewed in the introduction successfully distinguish between correct and incorrect axioms via experimental data by solving a single optimization problem. The primary motivation for this experiment is to demonstrate that AI-Hilbert provides a system for determining whether, given a background theory and experimental data, it is possible to improve upon a state-of-the-art scientific formula.
Kepler's law describes the relationship between the distance between two bodies, e.g., the sun and a planet, and their orbital periods, and takes the form p^2 = 4π^2 (d_1 + d_2)^3/(G(m_1 + m_2)), where G = 6.6743 × 10^-11 m^3 kg^-1 s^-2 is the universal gravitational constant, m_1 and m_2 are the masses of the two bodies, d_1 and d_2 are the respective distances between m_1, m_2 and their common center of mass, and p is the orbital period. We now derive this law axiomatically by adding together the following five axioms with appropriate polynomial multipliers, plus the following (incorrect) candidate formula (as an additional axiom) proposed by Cornelio et al. [32] for the exoplanet dataset (where the mass of the planets can be discarded as negligible when added to the much bigger mass of the star). Here F_g and F_c denote the gravitational and centrifugal forces in the system, and w denotes the frequency of revolution. Note that we replaced p with 2πp in our definition of the revolution period so that π does not feature in our equations; we divide p by 2π after deriving Kepler's law.
The above axioms have the following meaning: Equation (42) defines the center of mass of the dynamical system, Equation (43) defines the gravitational force of the system, Equation (44) defines the centrifugal force of the system, Equation (45) equates the centrifugal and gravitational forces, and Equation (46) relates the frequency and the period of revolution.
Accordingly, we solve our polynomial optimization problem under a sparsity constraint that at most τ = 5 axioms can be used to derive our model and a constraint that d_c = 0 (meaning we need not specify the hyperparameter λ in (4)), by minimizing the data-fidelity objective, where q is our implicit polynomial and {x_i}_{i=1}^{4} is a set of observations of the revolution period of binary stars stated in [32, Tab. 5]. Searching over the set of degree-five polynomials q derivable using degree-six certificates then yields a mixed-integer linear optimization problem in 18958 continuous and 6 discrete variables, whose solution is precisely Kepler's third law. The validity of this equation can be verified by adding together our axioms with the weights summarized previously in Figure 3. This is significant, because existing works on symbolic regression and scientific discovery [107, 56] often struggle to derive Kepler's law, even given observational data. Indeed, our approach is also more scalable than deriving Kepler's law manually; Johannes Kepler spent four years laboriously analyzing stellar data to arrive at his law [97].
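For intuition about what such a derivation looks like, the Julia sketch below checks a hand-derived algebraic identity for plausible polynomial forms of the five axioms; the exact statements of Equations (42)-(46), and the multipliers reported above, may differ, so this is an illustrative reconstruction rather than the certificate found by AI-Hilbert.

```julia
using DynamicPolynomials

@polyvar m1 m2 d1 d2 G w p Fg Fc
s = d1 + d2

# assumed polynomial forms of the axioms (42)-(46)
g1 = m1 * d1 - m2 * d2         # centre of mass
g2 = Fg * s^2 - G * m1 * m2    # gravitational force
g3 = Fc - m2 * d2 * w^2        # centrifugal force
g4 = Fg - Fc                   # force balance
g5 = w * p - 1                 # frequency-period relation (rescaled period)

# a monomial multiple of Kepler's third law, G*(m1 + m2)*p^2 = (d1 + d2)^3
kepler = m1 * m2 * w^2 * (G * (m1 + m2) * p^2 - s^3)

# hand-derived multipliers giving an exact algebraic certificate
cert = -m2 * w^2 * s^2 * g1 - (m1 + m2) * g2 + (m1 + m2) * s^2 * (g3 + g4) +
        G * m1 * m2 * (m1 + m2) * (w * p + 1) * g5

@assert iszero(kepler - cert)
```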

Kepler Revisited With Missing Axioms
In this section, we revisit the problem of deriving Kepler's third law of planetary motion considered in Section 3.6, with a view to verifying AI-Hilbert's ability to discover scientific laws from a combination of theory and data. Specifically, rather than providing a complete (albeit inconsistent) set of background theory as in Section 3.6, we suppress a subset of the axioms (42)-(46) and investigate the number of noiseless data points required to recover Equation (48). To simplify our analysis, we set G = 1 and generate noiseless data observations by sampling the values of the independent variables (the masses of the two bodies and the distance between them) uniformly at random in the ranges observed in real data (i.e., the exoplanet dataset in AI-Descartes [32]) and computing the value of the dependent variable (the revolution period) using the ground truth formula.
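For concreteness, the following Julia sketch shows one way to generate such noiseless observations; the sampling ranges, the centre-of-mass split of the separation, and the rescaled-period convention are illustrative assumptions rather than the exact protocol used in our experiments.

```julia
using Random
Random.seed!(1)

# sample the independent variables and compute the (rescaled) period from the
# ground-truth law with G = 1; the ranges below are illustrative placeholders
function sample_observation()
    m1, m2 = 1 + 4 * rand(), 1 + 4 * rand()          # masses of the two bodies
    d      = 1 + 9 * rand()                          # separation d1 + d2
    d1, d2 = m2 * d / (m1 + m2), m1 * d / (m1 + m2)  # centre-of-mass split
    p      = sqrt(d^3 / (m1 + m2))                   # ground-truth period, G = 1
    return (m1 = m1, m2 = m2, d1 = d1, d2 = d2, p = p)
end

data = [sample_observation() for _ in 1:71]   # e.g., the 71-point regime
```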
To exploit the fact that our data observations are noiseless, we solve the following variant of (4), in which we set λ_1 = 0.9, λ_2 = 0.01, ϵ = 10^-7 for all experiments, seek a degree-4 polynomial q using a proof certificate of degree at most 6, and use the L_1 coefficient norm of q as a regularization term, analogously to Lasso regression [104]. Note that the second-to-last constraint ensures that the derived polynomial q explains all (noiseless) observations up to a small tolerance. Further, the last constraint is imposed as a linear inequality constraint with an auxiliary binary variable via the big-M technique [53, 16], to ensure that the derived formula includes at least one term not involving the rotation period. Note that the increase in the coefficient distance before m = 71 data points reflects that solutions near q = 0 (and thus closer in coefficient norm to the ground truth) are optimal with m = 0 data points but do not fit a small number of data points perfectly, while polynomials q further from the ground truth in coefficient norm fit a small number of data points perfectly. Indeed, the total error with respect to the training data is less than 10^-5 for all values of m in both problem settings.

Bell Inequalities
We now consider the problem of deriving Bell Inequalities in quantum mechanics. Bell Inequalities [14] are well known in physics because they provide bounds on the correlations of measurements in any multi-particle system that obeys local realism (i.e., for which a joint probability distribution exists), bounds which are violated experimentally, thus demonstrating that the natural world does not obey local realism. For ease of exposition, we prove a version called the GHZ inequality [54]. Namely, given random variables A, B, C which take values in {±1}, for any joint probability distribution describing A, B, C it follows that p_AB + p_BC + p_AC ≥ 1, where p_AB denotes the probability that A = B (and similarly for p_BC and p_AC), but this bound is violated experimentally [44].
We solve this problem using Gurobi and Julia, which verifies that γ = 1 is the largest value for which this inequality holds, and obtains the desired inequality. Moreover, the solution to its dual problem yields the certificate 2p_{-1,-1,-1} + 2p_{1,1,1} ≥ 0, which verifies that 1 is indeed a valid lower bound for p_AB + p_BC + p_AC, by adding e^⊤p (which equals 1) to the left-hand side of this certificate and 1 to the right-hand side.
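The linear program in question is small enough to state explicitly. The sketch below formulates it in JuMP, using the open-source HiGHS solver in place of Gurobi as an assumption for reproducibility: minimizing p_AB + p_BC + p_AC over all joint distributions on {±1}^3 returns an optimal value of 1, confirming that γ = 1 is the largest valid lower bound.

```julia
using JuMP, HiGHS

# one probability mass per joint outcome (a, b, c) in {±1}^3
outcomes = vec([(a, b, c) for a in (-1, 1), b in (-1, 1), c in (-1, 1)])

model = Model(HiGHS.Optimizer)
@variable(model, p[outcomes] >= 0)
@constraint(model, sum(p) == 1)

# probability that a given pair of random variables agrees
agree(pred) = sum(p[o] for o in outcomes if pred(o))
@objective(model, Min,
    agree(o -> o[1] == o[2]) + agree(o -> o[2] == o[3]) + agree(o -> o[1] == o[3]))

optimize!(model)
objective_value(model)   # ≈ 1, so p_AB + p_BC + p_AC ≥ 1 under local realism
```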
To further demonstrate the generality and utility of our approach, we now derive a more challenging Bell inequality, namely the so-called I3322 inequality (cf. [48]). Given particles A_1, A_2, A_3, B_1, B_2, B_3 which take values in {±1}, the inequality reveals that for any joint probability distribution we have: Using the same approach as previously, and defining p to be an arbitrary discrete probability measure on {±1}^6, we verify that the smallest such upper bound which holds for each joint probability measure is 4, with the following polynomial certificate modulo e^⊤p = 1: where an index of 1 denotes that a random variable took the value −1, an index of 2 denotes that a random variable took the value 1, and the random variables are indexed in the order listed above.

Discussion and Future Developments
In this work, we proposed a new approach to scientific discovery that leverages ideas from real algebraic geometry and mixed-integer optimization to discover new scientific laws from a possibly inconsistent or incomplete set of scientific axioms and noisy experimental data. This improves upon existing approaches to scientific discovery, which typically propose plausible scientific laws from either background theory alone or data alone. Indeed, by combining data and background theory in the discovery process, we potentially allow scientific discoveries to be made in previously inhospitable regimes where there is limited data and/or background theory, and gathering data is expensive. We hope our approach serves as an exciting tool that assists the scientific community in efficiently and accurately explaining the natural world.
Inspired by the success of AI-Hilbert in rediscovering existing scientific laws, we conclude by discussing some exciting research directions that are natural extensions of this work.
Improving the Generality of AI-Hilbert: This work proposes a symbolic discovery framework that combines background theory expressible as a system of polynomial equalities and inequalities, or that can be reformulated as such a system (e.g., in a polar coordinate system, by substituting x = r cos θ, y = r sin θ and requiring that x^2 + y^2 = r^2). However, many scientific discovery contexts involve background theory that cannot easily be expressed via polynomial equalities and inequalities, including differential operators, integrals, and limits, among other operators. Therefore, extending AI-Hilbert to encompass these non-polynomial settings would be of interest.
We point out that several authors have already proposed extensions of the sum-of-squares paradigm beyond polynomial basis functions, and these works offer a promising starting point for performing such an extension. Namely, Löfberg and Parrilo [81] (see also Bach [11] and Bach and Rudi [12]) propose an extension to trigonometric basis functions, and Fawzi et al. [46] propose approximating univariate non-polynomial functions via their Gaussian quadrature and Padé approximants. Moreover, Huchette and Vielma [63] advocate modeling non-convex functions via piecewise linear approximations with strong dual bounds. Using such polynomial approximations of non-polynomial operators offers one promising path for extending AI-Hilbert to the non-polynomial setting.
Automatically Parameter-Tuning AI-Hilbert: AI-Hilbert requires hyperparameter optimization by the user to trade off the importance of fidelity to a model, fidelity to experimental data, and the complexity of the symbolic model. Therefore, one extension of this work could be to automate this hyperparameter optimization process, by automatically solving mixed-integer and semidefinite optimization problems with different bounds on the degree of the proof certificates and different weights on the relative importance of fidelity to a model and fidelity to data, and using machine learning techniques to select solutions most likely to satisfy a scientist using AI-Hilbert; see also [112] for a review of automated hyperparameter optimization.
Improving the Scalability of AI-Hilbert: One limitation of our implementation of AI-Hilbert is that it relies on reformulating sum-of-squares optimization problems as semidefinite optimization problems and solving them via primal-dual interior point methods (IPMs) [84, 85]. This arguably presents a limitation, because the Newton step in IPMs [see, e.g., 5] requires performing a memory-intensive matrix inversion operation. Indeed, this matrix inversion operation is sufficiently expensive that, in our experience, AI-Hilbert was unable to perform scientific discovery tasks with more than n = 15 variables and a constraint on the degree of the certificates searched over of d = 20 or greater (in general, runtime and memory usage are a function of both the number of symbolic variables and the degree of the proof certificates searched over).
To address this limitation and enhance the scalability of AI-Hilbert, there are at least three future directions to explore. First, one could exploit ideas related to the Newton polytope (the convex hull of the exponent vectors of a polynomial) [94] to reduce the number of monomials in the sum-of-squares decompositions developed in this paper, as discussed in detail in [23, Chap. 3.3.4]. Second, one could use presolving techniques such as chordal sparsity [55, 109] or partial facial reduction [88, 114] to reduce the number of variables in the semidefinite optimization problems that arise from sum-of-squares optimization problems. Third, one could attempt to solve sum-of-squares problems without using computationally expensive interior point methods for semidefinite programs, e.g., by using a Burer-Monteiro factorization approach [28, 78] or by optimizing over a second-order cone inner approximation of the positive semidefinite cone [4].

A Comparison with Methods from the Literature
To illustrate our system's capabilities, Table 1 compares AI-Hilbert with four approaches from the literature in terms of their ability to recover various scientific laws from background theory and experimental data.
The data for the first two problems comes from real measurements (denoted with "R" in the table), while for the other problems we rely on simulated data (denoted with "S" in the table). The data is created similarly to the approach used in the literature (see [32] or [108] for more details): we sample ten data points uniformly at random for each problem, independently for each variable in a given continuous range of values. The range is either defined following ranges in real measurements or set artificially (e.g., (0, 1]). We then add 1% Gaussian noise to the dependent variable (following [108]), thereby creating small, noisy datasets that resemble real-life data.
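A sketch of this protocol in Julia is given below; the ground-truth function, the variable ranges, and the interpretation of "1% Gaussian noise" as multiplicative noise on the dependent variable are illustrative assumptions rather than the exact settings used in our benchmarks.

```julia
using Random
Random.seed!(1)

# draw `n` points uniformly at random from per-variable ranges, evaluate the
# ground-truth law, and perturb the dependent variable with 1% Gaussian noise
function make_dataset(ground_truth, ranges; n = 10, noise = 0.01)
    X = [[lo + (hi - lo) * rand() for (lo, hi) in ranges] for _ in 1:n]
    y = [ground_truth(x...) * (1 + noise * randn()) for x in X]
    return X, y
end

# hypothetical example: a Compton-style law sampled over (λ1, θ)
X, y = make_dataset((l1, θ) -> l1 + 2.43e-12 * (1 - cos(θ)),
                    [(1e-11, 1e-10), (0.0, π)])
```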
We used the background theory provided by Cornelio et al. [32] for the first eight problems and manually extracted the axioms for the remaining ones.
The systems we consider are:
• AI-Descartes [32] is a system that combines logical reasoning with symbolic regression to generate meaningful formulae explaining a given phenomenon. It starts by generating a set of candidate models from data alone using symbolic regression, and then validates and ranks them using a background theory. In this way, the system ensures that the chosen hypothesis is not only close to the data but also respects the laws already known about the environment, often providing a logical proof of its correctness. However, in the experiments below, we only run the symbolic regression module of AI-Descartes.
• AI-Feynman [107, 105] is a symbolic regression approach integrating deep learning methodologies with the exploitation of some features commonly found in physics functions, including the presence of units, the utilization of low-order polynomials, and symmetry. The algorithm comprises several modules handling polynomial fitting, exhaustive enumeration, and neural network fitting, among other tasks, ultimately generating a list of potential formulas.
• PySR [36, 37] employs a combination of regularized evolution, simulated annealing, and gradient-free optimization techniques to search for equations that best match the provided input data.
• Bayesian Machine Scientist (BMS) [56] utilizes a Markov chain Monte Carlo approach to traverse the potential model space, leveraging prior expectations learned from a large empirical corpus of mathematical expressions.
The above systems return a score (in some cases corresponding to the error) and the complexity of each output formula. The systems PySR and Bayesian Machine Scientist return a best candidate. For AI-Descartes and AI-Feynman, we declare success if the correct formula is among the candidates returned, whereas for PySR and Bayesian Machine Scientist, we check whether the best candidate formula is correct.

A.1 Experimental results for comparison with prior literature
We now compare AI-Hilbert with four methods from the literature in Table 1. Note that all four existing methods that we compare AI-Hilbert against require experimental data to successfully discover scientific laws; further, we take results for these methods for the FSRD problems, the relativistic time dilation problem, and Kepler's third law from [32]. On the other hand, AI-Hilbert combines (potentially limited and noisy) data and (potentially corrupted) theory, including the two limiting cases where either only data or only theory is supplied.
AI-Hilbert outperforms the existing systems, accurately recovering formulas in all of the studied problems. Moreover, it achieves this with less data, by leveraging relevant background theory. Finally, our method is the only one capable of recovering symbolic constants (while the other systems combine them into single or multiple real values), thereby preserving the meaning and semantics of each individual constant.
We remark that, for some methods from the literature, inexact recovery (denoted with a ✓*) may occur because these approaches are data-driven and thus usually cannot derive symbolic constants associated with scientific laws. Instead, they group information concerning multiple constants into a single constant that is less interpretable. For instance, in the Compton Scattering problem (B.7), the correct formula is l_2 = l_1 + h/(m_e c)(1 − cos(θ)), while data-driven methods usually only derive l_2 = l_1 + k(1 − cos(θ)) (where k is a real number). Moreover, sometimes integers are reported with a small numerical error (e.g., 1.001 or 0.999 instead of 1). ✓ denotes the successful recovery of a scientific law. ✗ denotes the failure to recover a scientific law. ✓* denotes recovery up to constants, but not exact recovery. ✓▷ denotes successful recovery when some variables that are potentially not observable are still declared to be observable. † Inequalities are not supported by the methods from the literature that we benchmarked against. ⋄ To generate data for this problem, we varied the mass values of both the pion and the muon. These variations were intended to mimic small differences in the observed values of the masses, rather than altering the actual values across a broad range of values.
current. Let a metal plate of length L, width h, and depth d be subjected to a potential field across its length dimension. The plate is placed in a homogeneous magnetic field B, perpendicular to the electric field E (oriented with the depth dimension). The electron charges q_e, moving at velocity v across the length dimension, will be deflected by the magnetic field (due to a Lorentz force F_m). This will entail an excess of electrons on one side of the plate and a deficiency on the other side, resulting in an electric field E and a respective electric force F_e opposite to the magnetic force. Let the potential difference across the width dimension of the plate be U_H. Finally, let I denote the current, which is defined by the number of electron charges N crossing per time interval dt. The Hall potential is given by the expression: Equation (81) is recoverable from the following background theory via AI-Hilbert: where Equation (82) models the Lorentz magnetic force, Equation (83) represents the electric force (E is the homogeneous electric field between the upper and lower metal plate), Equation (84) models Newton's third law, Equation (85) relates the electric potential U_H to the field E across the plate of width h, Equation (86) models the traversal velocity v = L/dt across the plate length L, and Equation (87) models the amount of charge Q as the current I times dt (which can be measured by an amp meter).
We assume the measured variables are L, h, I, B, N, q_e, and U_H. AI-Hilbert provides the relation e U_H L N^2 − q_e B h L^2 I N = 0, which can easily be rearranged to recover Equation (81).

B.7 Compton Scattering
Compton scattering refers to the behavior of high-frequency photons as they scatter after collision with a charged particle, usually an electron e in an atom. Specifically, when a photon knocks out a loosely bound electron from the outer valence shells of an atom (or molecule), a new photon is emitted from the atom traveling at an angle θ to the incoming photon's path. Compton related the shift in photon wavelengths to the scattering angle: λ_2 − λ_1 = h/(m_e c)(1 − cos θ), where λ_1, λ_2 are the initial and final photon wavelengths, m_e is the electron rest mass, c is the speed of light, and h is Planck's constant.
The following equations give a complete set of axioms needed to derive Compton's formula. Let E_r be the energy of the electron at rest (i.e., an electron bound to an atom), and let E_m be the energy of the moving electron (after it is knocked out from the atom). Let E_1, E_2 be the initial and final photon energies, f_1, f_2 be the initial and final photon frequencies, and p_1, p_2 be the initial and final photon momentum values. The first constraint is the expression for conservation of energy. The second and third constraints give the photon energies in terms of frequencies. The subsequent two constraints give the photon momenta in terms of frequencies, and the two constraints after that relate wavelengths and frequencies. The next constraint is the mass-energy equivalence of the electron at rest, while the constraint after that gives the energy of the moving electron via the relativistic energy-momentum relation. The last equation is a restatement of conservation of momentum (after squaring). Dividing both sides by h m_e c^3 (1 + cos θ), we get the desired relationship. An important assumption we made above is that measurements of cos θ are available, and not just of θ. This is reasonable in many areas of physics, as we often take the component of a velocity, force, etc. in a direction at an angle θ to the direction of that velocity, force, etc. It is also reasonable to assume that h, c, and m_e are measured quantities, as these are known constants.
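As a quick numerical sanity check (independent of AI-Hilbert), the Julia snippet below confirms that Compton's formula is exactly consistent with the conservation axioms described above: fixing standard SI constants and arbitrary test values of λ_1 and θ, the electron energy and momentum implied by the conservation laws satisfy the relativistic energy-momentum relation.

```julia
# standard SI constants; λ1 and θ are arbitrary test values
h, me, c = 6.62607015e-34, 9.1093837e-31, 2.99792458e8
λ1, θ = 7.1e-11, π / 3

λ2 = λ1 + h / (me * c) * (1 - cos(θ))       # Compton's formula
f1, f2 = c / λ1, c / λ2                      # wavelength-frequency relations
E1, E2, Er = h * f1, h * f2, me * c^2        # photon energies, electron rest energy
p1, p2 = h * f1 / c, h * f2 / c              # photon momenta
Em  = Er + E1 - E2                           # conservation of energy
pm2 = p1^2 + p2^2 - 2 * p1 * p2 * cos(θ)     # conservation of momentum (squared)

# relativistic energy-momentum relation for the recoiling electron
@assert isapprox(Em^2, Er^2 + pm2 * c^2; rtol = 1e-10)
```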

Figure 3: Schematic illustration of AI-Hilbert and its components. Using background knowledge encoded as multivariate polynomials, experimental data, and hyperparameters (e.g., a sparsity constraint on the background theory) to control our model's complexity, we formulate scientific discovery as a polynomial optimization problem, reformulate it as a semidefinite optimization problem, and solve it to obtain both a symbolic model and its formal derivation. Dashed boxes correspond to optional components; an example is introducing an incorrect candidate formula as a new axiom in the background theory.

Out[ ] = ω 2 r 3  2 20 P + 32 r 4 ω 6  0 5 P r 5 + 64  0 Figure 4 :
Figure 4: Illustration of the Positivstellensatz and its ability to recover the Radiation Gravitational Wave Power Equation in the special case where m 1 = m 2 = c = G = 1.Keeping other variables constant, the points that obey the power equation are the intersection of the points that obey Kepler's Third Law and the points of a linearized equation from general relativity, and the wave equation is recoverable by adding these other equations with appropriate polynomial multipliers.

Figure 5 depicts the ℓ_2 coefficient distance between the scientific formula derived by AI-Hilbert and Equation (48), modulo d_1 m_1 − d_2 m_2 = 0, as we increase the number of data points, when we suppress the axiom d_1 m_1 − d_2 m_2 = 0 (left) and when we suppress all axioms (right). In both cases, there is an all-or-nothing phase transition [see also 51, for a discussion of this phenomenon throughout machine learning] in AI-Hilbert's ability to recover the scientific law: before a threshold number of data points, AI-Hilbert cannot recover Kepler's law, and beyond the threshold, AI-Hilbert recovers the law exactly.

Figure 5: Coefficient distance between scientific formula derived by AI-Hilbert and ground truth vs. the number of data points when we omit some axioms (left) or all axioms (right).

Figure 5 reveals that when only the axiom d_1 m_1 − d_2 m_2 = 0 is missing, it is possible to perform scientific discovery with as few as 46 data points, while at least 71 data points are needed when all axioms are missing. This is because the axiom d_1 m_1 − d_2 m_2 = 0, multiplied by the term −d_2^2 p^2 w^2 in the proof certificate (Section 3.6), is of similar complexity to Kepler's Third Law. Thus, the value of the axiom d_1 m_1 − d_2 m_2 = 0 is 46 data points, while the value of all of the axioms is 71 data points. The value of data compared to background theory depends, in general, on the problem setting and the data quality, as well as how well dispersed the data samples are.

Table 1:
Comparison of AI-Hilbert with state-of-the-art discovery methods in terms of ability to recover scientific laws from background theory and experimental data.