The Hamilton--Jacobi Theory and the Analogy between Classical and Quantum Mechanics

We review here some conventional as well as less conventional aspects of the time-independent and time-dependent Hamilton-Jacobi (HJ) theory and of its connections with Quantum Mechanics. Less conventional aspects involve the HJ theory on the tangent bundle of a configuration manifold, the quantum HJ theory, HJ problems for general differential operators and the HJ problem for Lie groups.


1.1. A Recollection of Preliminary Notions. The Hamilton-Jacobi formulation of Classical Dynamics is usually presented [2,3,10,11,16,18,26,32,41,47,51,55] as the search, for a Hamiltonian system with a possibly time-dependent Hamiltonian on the cotangent bundle T*Q of some configuration manifold Q (with dim Q = n for some n), for a canonical transformation that is able to "reduce the system to equilibrium". The Hamilton-Jacobi (HJ) equation for the generator S of the transformation, also known as "Hamilton's principal function", is then well known to be:

∂S/∂t + H(q, ∂S/∂q; t) = 0 (1)

where H = H(q, p; t) is the Hamiltonian of the system. If a complete integral S = S(q, Q; t) is available, i.e. a solution depending in an essential way on as many additional parameters Q¹, ..., Qⁿ as the number of degrees of freedom, i.e. such that:

det(∂²S/∂q^i ∂Q^j) ≠ 0 (2)

then the canonical transformation (q, p) → (Q, P) defined by:

p_i = ∂S/∂q^i, P_i = −∂S/∂Q^i (3)

does the job of reducing the system to equilibrium. For a time-independent Hamiltonian, and hence for a conservative system, denoting by E the total energy, the assumption:

S = W(q) − Et (4)

yields instead the time-independent HJ equation:

H(q, ∂W/∂q) = E (5)

for "Hamilton's characteristic function" W, with the energy entering as one of the additional parameters on which the principal function has to depend. However, even in this case, the most general solution of Eq.(1) need not be linear in t.
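As a minimal symbolic check of Eqs.(1)-(3) (a sketch, assuming the 1D free particle H = p²/2m, with the parameter Q playing the role of the conserved momentum), one may verify that S = qQ − Q²t/2m is a complete integral and that the transformed momentum P = −∂S/∂Q is constant along the motion:

```python
import sympy as sp

q, Q, t, m = sp.symbols('q Q t m', positive=True)

# Assumed complete integral of the time-dependent HJ equation for the
# free particle, H = p**2/(2m); Q plays the role of the conserved momentum.
S = q*Q - Q**2*t/(2*m)

p = sp.diff(S, q)                      # p = dS/dq, cf. Eq.(3)
hj = sp.diff(S, t) + p**2/(2*m)        # dS/dt + H(q, dS/dq; t), cf. Eq.(1)
assert sp.simplify(hj) == 0

# Essential dependence on Q: d^2 S / dq dQ != 0, cf. Eq.(2)
assert sp.diff(S, q, Q) != 0

# The new momentum P = -dS/dQ is constant along q(t) = q0 + (Q/m) t
q0 = sp.symbols('q0')
P = (-sp.diff(S, Q)).subs(q, q0 + Q*t/m)
assert sp.simplify(sp.diff(P, t)) == 0
```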
The HJ equations come from the search for a canonical transformation on the extended phase space R^{2n} × R such that the new Hamiltonian is:

K = H + ∂S/∂t (6)

with the additional requirement that K ≡ 0, i.e. that, in the sought-for new coordinates:

dQ^i/dt = 0, dP_i/dt = 0 (7)

or, more generally:

(d/dt) f(P, Q; t) ≡ ∂f/∂t (8)

be the "model dynamics" we would like to relate with our starting one, i.e.:

dq^i/dt = ∂H/∂p_i, dp_i/dt = −∂H/∂q^i (9)

Of course, we might consider other "model dynamics". For example, we might require that K = K(P) = (P_1² + ... + P_n²)/2, in such a way that the "model dynamics" would be the free one:

dQ^i/dt = P_i, dP_i/dt = 0

Clearly, other models could be implemented. For instance, we might require the model dynamics to be that of a set of harmonic oscillators with frequencies ν_i:

dQ^i/dt = ν_i P_i, dP_i/dt = −ν_i Q^i

and hence: K = Σ_i ν_i ((Q^i)² + (P_i)²)/2. The associated partial differential equations would be:

∂S/∂t + H(q, ∂S/∂q; t) = (1/2) Σ_i (∂S/∂Q^i)²

or:

∂S/∂t + H(q, ∂S/∂q; t) = (1/2) Σ_i ν_i ((Q^i)² + (∂S/∂Q^i)²)

While the first two cases are strictly local, the third one is "less local": it identifies some equilibrium points, say Q^i = 0, P_i = 0, which are also stable ones.
The conditions for solvability of these PDE's are quite strong. For instance, the first one requires dP_i/dt = dQ^i/dt = 0, i.e. that the system be maximally integrable. The second one requires the system to be completely integrable, while the third one requires not only complete integrability, but also that there be stable equilibrium points.
Thus, the HJ problem in this generalized form would solve not only a conjugacy problem, i.e. how to transform a given dynamical system into another one with a preassigned form, but one would also find the required transformation to be generated by a function S. From a geometrical point of view, the first case requires the original dynamics to define a fundamental vector field of the natural foliation of the contact manifold after we have removed possible equilibrium points. In the second case the phase space is foliated by invariant cylinders, while in the third case the dynamics will preserve a foliation by tori.
As already noticed, the conditions for the solution of the various problems are quite stringent, and one should expect that global solutions can be found only very rarely. Nevertheless, this more general perspective may be interesting because it could provide the Hamilton-Jacobi theory with a wider range of applications. For instance, it might be applied to the study of scattering problems, where now K would be the "comparison Hamiltonian" and H that of the system one is analyzing. It might also be applicable in Field Theory and General Relativity. In particular, one might consider applying it to Quantum Mechanics, going beyond the usual WKBJ approximation.
Remark 1. Let us observe that if we define, keeping the additional parameters fixed:

p_i(q) = ∂S/∂q^i

and we form [14] the vector field:

X = (∂H/∂p_i)|_{p = p(q)} ∂/∂q^i

it is immediate to show, using Eq.(5), that if q = q(t) is an integral curve of the vector field X, then (q(t), p(q(t))) is in turn an integral curve of Hamilton's canonical equations. Thus, Eq.(5) describes in a unified manner one particular family of the phase-space equations of motion.
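Remark 1 can be illustrated symbolically on an assumed example, the 1D harmonic oscillator with unit mass, H = (p² + ω²q²)/2, for which Eq.(5) gives p(q)² = 2E − ω²q²:

```python
import sympy as sp

t, E, w = sp.symbols('t E omega', positive=True)

# Harmonic oscillator H = (p**2 + omega**2 q**2)/2 (assumed unit mass).
# The characteristic function gives p(q) = dW/dq with
# (dW/dq)**2 = 2E - omega**2 q**2 by the time-independent HJ equation.
A = sp.sqrt(2*E)/w
q_sol = A*sp.sin(w*t)          # an integral curve of X = p(q) d/dq
p_sol = sp.diff(q_sol, t)      # its lift p(t) = dq/dt to phase space

# The lifted curve stays on the graph of dW ...
assert sp.simplify(p_sol**2 - (2*E - w**2*q_sol**2)) == 0
# ... and solves Hamilton's equations dq/dt = p, dp/dt = -omega**2 q
assert sp.simplify(sp.diff(p_sol, t) + w**2*q_sol) == 0
```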
It is worth stressing that, already at this rather well-known level, the Hamilton-Jacobi theory establishes a deep connection between first-order partial differential equations (PDE's) and systems of first-order ordinary differential equations (ODE's) [13].
1.2. The HJ Equation and the JWKB Method. Quite similar connections appear when one investigates the short-wavelength limit of "wave-like" equations (including also the Schrödinger equation), such as Hamiltonian optics as the short-wavelength régime of wave optics (the eikonal approximation [12,54], but see also Ref. [36] for applications in Field Theory) and classical mechanics as the short-wavelength régime of wave mechanics [43].
In taking this limit, one starts with some differential operator of hyperbolic type on some manifold Q, passes through a Hamilton-Jacobi-type equation (the characteristics equation [35]) for a function S and, by substituting covectors (the "p"'s) for the first-order derivatives, arrives at some Hamiltonian function on the cotangent bundle T*Q which yields also the dispersion relation of the wave motion. The associated Hamilton equations give rise, by projection of the solutions onto Q, to the bi-characteristics [35] of the original differential system. Rephrased in an "optical" language, the characteristics describe the propagation of wave fronts, while the bi-characteristics describe that of rays. Note that "substituting covectors for first-order derivatives", in the case in which the differential operator is homogeneous, is just a down-to-earth way of saying that we deal with what is known as the symbol [22] of the operator.
Also, the JWKB method [22,23,43,45,52] is widely used both in wave optics and wave mechanics to investigate several physical problems in the short-wavelength régime which, in wave mechanics, amounts to a leading-term expansion in powers of the Planck constant ℏ. It is also closely related to the saddle-point approximation (plus one-loop corrections) in the path-integral approach to problems in Field Theory [2], Quantum [27] and Statistical Mechanics [31].
In this context, and concentrating for the sake of definiteness on the motion of a particle of mass m in a potential V(x), one represents the wave function (in Gaussian form) as (see footnote 2):

ψ(x, t) = A(x, t) exp(iS(x, t)/ℏ)

and:

A(x, t) ∝ [det(∂²S/∂x^j ∂x_0^k)]^{1/2} (19)

where the x_0^j's are the initial coordinates, and hence the JWKB solution is:

ψ_JWKB(x, t) ∝ [det(∂²S/∂x^j ∂x_0^k)]^{1/2} exp(iS(x, t)/ℏ) (20)

As discussed in Ref. [22], Eq.(20) yields the JWKB approximation to the propagator (or Green function) for the Schrödinger operator, and the result becomes exact for quadratic Hamiltonians. Similar conclusions hold [2,31,33] in the path-integral formalism, where the saddle-point approximation (footnote 4) also becomes exact for quadratic Hamiltonians (or Lagrangians).
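The exactness of the JWKB result for quadratic Hamiltonians can be checked directly in the simplest case: the free-particle propagator below (with its standard normalization, stated here as an assumption of this sketch) solves the Schrödinger equation exactly:

```python
import sympy as sp

q, Q, t, m, hbar = sp.symbols('q Q t m hbar', positive=True)

# Free-particle propagator (a quadratic Hamiltonian, so the JWKB form
# is exact); the normalization is the standard one:
K = sp.sqrt(m/(2*sp.pi*sp.I*hbar*t)) * sp.exp(sp.I*m*(q - Q)**2/(2*hbar*t))

# K solves i hbar dK/dt = -(hbar**2/2m) d2K/dq2 exactly, not just to
# leading order in hbar
schrodinger = sp.I*hbar*sp.diff(K, t) + hbar**2/(2*m)*sp.diff(K, q, 2)
assert sp.simplify(schrodinger/K) == 0
```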

1.3. A Geometrical Setting for the HJ Theory. A useful geometrical formulation of the HJ theory can be given as follows. Consider a symplectic manifold M = T*Q with symplectic structure ω_0 = dp_j ∧ dq^j. With any symplectomorphism φ : (q, p) ↦ (Q, P) we may associate a "graph" Σ_φ, i.e. a submanifold of the manifold M × M, the second factor being equipped with the symplectic structure −ω_0, by requiring it to be Lagrangian w.r.t. the symplectic structure on M × M provided by:

Ω = pr_1* ω_0 − pr_2* ω_0

(pr_{1,2} denoting the projections onto the two factors). By using the fact that M × M = T*Q × T*Q ≅ T*(Q × Q), we can consider those Lagrangian submanifolds of T*(Q × Q) that can be written as graphs of one-forms on Q × Q. When Σ_φ projects onto Q × Q we may consider it as the graph of a generating function S. [Footnote 3: The determinant on the r.h.s. of Eq.(19) is the well-known [27,31] Pauli-Morette-Van Vleck determinant.] [Footnote 4: In this case the Pauli-Morette-Van Vleck determinant is replaced by a functional determinant, whose definition requires careful regularization procedures [27,56] that will not be discussed here.]
In many cases M may admit of many alternative cotangent bundle structures, i.e. we may identify "alternative" submanifolds Q ′ such that: M = T * Q ′ , with Q and Q ′ possible alternative "placements" of some external configuration space Q. A typical example is provided by: M = R 2n , where any identification of an affine subspace R n is a possible placement of an "abstract" R n .
From this point of view, if we start with φ and Σ_φ we may look for a particular placement of Q in M that makes Σ_φ projectable onto (a submersion onto) Q × Q.

Remark 2. The usual "colloquial" classification into "four possible sets of independent canonical variables" encountered in textbooks on Classical Mechanics, say (q, Q), (q, P), (Q, p) and (p, P), corresponds exactly to different identifications of the "configuration space" over which one constructs the cotangent bundle structure. The corresponding symplectic structures would be: d(p_j dq^j − P_j dQ^j), d(p_j dq^j + Q^j dP_j), d(P_j dQ^j + q^j dp_j) and d(q^j dp_j − Q^j dP_j), respectively.
When the graph dS : Q × Q → T*(Q × Q) is viewed as a map dS : Q × Q → T*Q, i.e. the second factor Q is considered as a family of "parameters", we may obtain a regular foliation of T*Q. Then we may say that S is a "complete solution" of some associated HJ equation, and in this case the Hamiltonian flow associated with the Hamiltonian K would preserve the foliation induced by S on T*Q. On the open dense submanifolds on which dS : Q × Q → T*Q provides a diffeomorphism we would have equivalence between the Cauchy problem in terms of initial data (q_0, p_0) and the boundary-value problem in terms of (q_0, Q). We notice that in this case:

ω_S = (∂²S/∂q^i ∂Q^j) dq^i ∧ dQ^j

is a symplectic structure (cfr. Eq.(2)) on Q × Q, and hence it would allow for a Hamiltonian formulation on the space of "boundary data" rather than of the "initial conditions". The inverse image of the one-parameter group of evolution on T*Q would provide the "propagator" on the configuration space. By using the geometrical formulation of Quantum Mechanics, a similar picture can be developed for Quantum Mechanics in the Schrödinger picture as well.
Let us restrict for convenience to a finite-dimensional Hilbert space H. Again a transformation:

Φ : H → H, ϕ = Φ(ψ)

will be associated with a graph:

Σ_Φ = {(ψ, Φ(ψ)) : ψ ∈ H} ⊂ H × H

If we consider on H × H the pseudo-Hermitian form:

⟨(ψ_1, ϕ_1), (ψ_2, ϕ_2)⟩ := ⟨ψ_1, ψ_2⟩ − ⟨ϕ_1, ϕ_2⟩ (28)

and notice that, on Σ_Φ, ϕ_{1,2} = Φ(ψ_{1,2}), we find that Σ_Φ will be isotropic w.r.t. the pseudo-Hermitian form (28) iff Φ is a unitary transformation. By considering the realification of H, the Hermitian product decomposes into a real and an imaginary part, the former providing an Euclidean product and the latter a symplectic product. As for the symplectic part, we have a situation similar to the one we considered previously, i.e. we may consider a generating function of the canonical transformation and require afterwards that this transformation preserve also the Euclidean product. The resulting transformation would be a Kählerian transformation derived from the generating function S_Φ. For ϕ = Uψ, ϕ* = ψ*U*, the generating function is S(ϕ*, ψ) = −ϕ* U ψ. For example, a Hadamard gate:

U = (1/√2) [[1, 1], [1, −1]]

has the generating function:

S(ϕ*, ψ) = −(1/√2) (ϕ_1*(ψ_1 + ψ_2) + ϕ_2*(ψ_1 − ψ_2))

1.4. Quantum HJ Equations. In the Heisenberg picture, one may devise a direct approach to the HJ problem by replacing classical variables with Hermitian operators. Indeed, shortly after the advent of Quantum Mechanics, various efforts were made [19,21] to formulate both the theory of Canonical Transformations, and hence also the Action Principle [50] and the Hamilton-Jacobi equation, in operator terms from the very beginning. Following, e.g., the scheme of Eqs.(1) to (3), one might be tempted to "promote" these equations to operator equations defining a (quantum) canonical transformation as:

p̂ = ∂S/∂q̂, P̂ = −∂S/∂Q̂ (34)

and a (quantum) HJ equation as:

∂S/∂t + H(q̂, ∂S/∂q̂; t) = 0 (35)

Due to operator-ordering problems, these equations are obviously ambiguous. According to Jordan [29,30] and Dirac [19,20] the ambiguity should be resolved by requiring S and the operators in Eqs.(34) and (35) to be "well ordered", i.e.
by requiring all the "uppercase" operators (the Q̂'s) to stay to the right of the "lowercase" ones (the q̂'s). This implies that the "generating operator" S should be of the general form [46]:

S = Σ_α f_α(q̂) g_α(Q̂) (36)

for suitable functions f_α and g_α. The "well-ordering" procedure must be applied also, using when necessary the commutation relations, to the Hamiltonian operator H in Eq.(35). The authors of Ref. [46] have devised a procedure for converting the operator equation (35) into a c-number equation that we will illustrate here on a simple example [Footnote 6: Referring to Ref. [46] for a more general discussion.], namely that of a (non-relativistic) 1D particle of mass m subject to a scalar potential V(q). The Hamiltonian is then:

H = p̂²/2m + V(q̂) (37)

and the (operator) HJ equation becomes (S = S(q̂, Q̂, t)):

∂S/∂t + (1/2m) (∂S/∂q̂)² + V(q̂) = 0 (38)

Sandwiching between eigenstates [Footnote 7: ∫ dq |q⟩⟨q| = I, ⟨q|q′⟩ = δ(q − q′), and similarly for Q̂.] |q⟩ and |Q⟩ of q̂ and Q̂ respectively, the "well ordering" prescription leads to:

⟨q|S|Q⟩ = S(q, Q, t) ⟨q|Q⟩ (39)

i.e. to a (uniquely defined and not necessarily real) c-number function S(q, Q, t). Sandwiching between the same eigenstates the quadratic term on the l.h.s. of Eq.(38) is a bit more complicated. Explicitly:

⟨q| (∂S/∂q̂)² |Q⟩ = ⟨q| p̂ (∂S/∂q̂) |Q⟩ (40)

Now, the standard canonical commutation relations imply, for any function G = G(q̂):

[p̂, G(q̂)] = −iℏ ∂G/∂q̂ (41)

Using then the first of Eqs.(34) (i.e.: ∂S/∂q̂ = p̂), together with ⟨q| p̂ = −iℏ ∂⟨q|/∂q, one finds [46]:

⟨q| p̂ (∂S/∂q̂) |Q⟩ = −iℏ (∂/∂q) [(∂S/∂q) ⟨q|Q⟩] (42)

which involves only "well ordered" expressions, and hence, since ∂⟨q|Q⟩/∂q = (i/ℏ)(∂S/∂q)⟨q|Q⟩:

⟨q| (∂S/∂q̂)² |Q⟩ = [(∂S/∂q)² − iℏ ∂²S/∂q²] ⟨q|Q⟩ (43)

Dropping then the common factor ⟨q|Q⟩ one obtains the c-number equation [Footnote 8: As Eq.(44) (as well as the classical equation (1)) contains only derivatives of S, any solution will be ambiguous by the addition of a constant term. This ambiguity, which is totally irrelevant at the classical level, will turn out instead to be useful in what follows.]:

∂S/∂t + (1/2m)(∂S/∂q)² − (iℏ/2m) ∂²S/∂q² + V(q) = 0 (44)

which is completely equivalent to the operator equation (38).
Eq.(44) is precisely the equation that results [26,46] from the Schrödinger equation (in the variables (q, t)) by expressing the wave function ψ as (see footnote 2):

ψ(q, t) = exp(iS(q, Q, t)/ℏ) (45)

It provides then a solution of the Schrödinger equation depending (in an essential way) on the additional parameter Q, just as the propagator K(q, Q, t) [22,49], which obeys the same equation, does. The propagator is known [22,31,49] to obey the boundary condition K(q, Q, t) → δ(q − Q) as t → 0 and, in order to complete the identification, one has to check (the equation being first-order in time) that there is a solution of the form (45) which does the same. For example, from the well-known (footnote 9) result [31,49] for the propagator of the 1D harmonic oscillator with mass m, proper frequency ω and Hamiltonian:

H = p̂²/2m + mω² q̂²/2 (46)

one can check directly that:

S(q, Q, t) = (mω/2 sin(ωt)) [(q² + Q²) cos(ωt) − 2qQ] + (iℏ/2) ln sin(ωt) + const. (47)

does indeed solve Eq.(44) with the appropriate boundary condition (footnote 10). It is then immediate to see that the "well-ordered" solution of the operator equation (38) will be:

S(q̂, Q̂, t) = (mω/2 sin(ωt)) [(q̂² + Q̂²) cos(ωt) − 2 q̂ Q̂] + (iℏ/2) ln sin(ωt) (48)

Remark 3. When substituting from Eq.(48) into Eq.(38) we need to square the derivative ∂S/∂q̂ = mω(q̂ cos(ωt) − Q̂)/sin(ωt). This brings about terms that are not "well-ordered". Bringing them into the correct order [46] requires using the (exact) commutation relation [q̂, Q̂] = −iℏ sin(ωt)/mω. The time derivative of the last term in Eq.(48) (see also footnote 8) then does the job of compensating for this operation.
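The claim that Eq.(47) solves Eq.(44) for the harmonic oscillator can be verified symbolically; the form of S below (classical harmonic-oscillator action plus the (iℏ/2) ln sin(ωt) term) is the standard one and is assumed here:

```python
import sympy as sp

q, Q, t, m, w, hbar = sp.symbols('q Q t m omega hbar', positive=True)

# Assumed standard form: classical action of the 1D harmonic oscillator
# plus the quantum correction (i hbar/2) log sin(omega t)
S = m*w/(2*sp.sin(w*t)) * ((q**2 + Q**2)*sp.cos(w*t) - 2*q*Q) \
    + sp.I*hbar/2 * sp.log(sp.sin(w*t))

V = m*w**2*q**2/2
# Quantum HJ equation, cf. Eq.(44):
# dS/dt + (1/2m)(dS/dq)**2 + V(q) - (i hbar/2m) d2S/dq2 = 0
lhs = sp.diff(S, t) + sp.diff(S, q)**2/(2*m) + V \
      - sp.I*hbar/(2*m)*sp.diff(S, q, 2)
assert sp.simplify(lhs) == 0
```

The log term's time derivative cancels exactly against the second-derivative (quantum) term, while the classical action takes care of the remaining terms.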
The short-time limit of Eq.(48) is:

S ≈ m (q̂ − Q̂)²/2t + (iℏ/2) ln t + const. (49)

which is the same as for the free particle: the harmonic potential does not contribute to this short-time limit. As remarked in Ref. [46], the same will happen more generally for any nonsingular potential V(q̂). So, the result (49) will have a more general significance, going slightly beyond the case of quadratic potentials. [Footnote 9: As for any quadratic Hamiltonian.] [Footnote 10: In the limit ω → 0, insertion of Eq.(47) into Eq.(45) reproduces of course the kernel for the 1D free particle.]

1.5. Comments and Plan of the Paper.
Going back now to our general discussion, the attempt to recover the original PDE from the equations of motion of the "rays", i.e. Hamilton's equations, is what is usually called the "quantization problem". This interplay between PDE's, associated first-order PDE's (HJ-type equations) describing the propagation of "wave-fronts" and the corresponding "ray" ODE's (Hamilton's equations) has been widely investigated in various branches of theoretical Physics. Moreover, because of the Hamiltonian-Lagrangian correspondence, the calculus of variations also appears in this interplay.
From all these remarks it should be clear that the HJ theory is very rich in analytic and geometric ideas, and that it unifies apparently diverse topics like higher-order PDE's, first-order PDE's, ODE's and the calculus of variations.
In this paper we will try and use some of our experience with Quantum Mechanics to formulate and give a geometric presentation of many problems which, though born in a quantum setting, are of more general validity in the framework of the Hamilton-Jacobi theory. Besides many original papers on this vast subject, we shall rely on some work by A. Vinogradov [53], some more recent work by Grabowski and Poncin [25], a previous paper of ours [38] and a recent paper on the Hamilton-Jacobi theory in a Lagrangian setting [14]. We shall discuss the following topics:
-Reviewing briefly how the HJ problem can be formulated in geometric terms on the cotangent bundle T*Q of a smooth manifold Q, we shall discuss how the same problem can be formulated on the tangent bundle TQ, hence in a Lagrangian setting.
-How one can pose a generalized HJ problem for differential operators of any order, giving another coordinate-free characterization of the HJ equation.
-We shall try and discuss to what extent, instead of differential operators acting on functions, one can consider differential operators acting on sections of vector bundles, thereby obtaining "wave-like" equations that are not scalar, like the Pauli and the Dirac equations.
-Just as in Quantum Mechanics one poses a joint eigenvalue problem for two (or more) observables, we shall discuss how one can pose a "joint HJ problem" for two or more functions on the cotangent bundle, and, finally,
-After "revisiting" briefly the geometrical formulation of the time-dependent Hamilton-Jacobi theory, whose proper setting [38] is on the cotangent bundle T*(Q × R), we shall discuss how one can obtain generalizations thereof when the action of the Abelian group R is replaced with that of a general Lie group G (a "HJ problem on a Lie group", then).

2. A Geometrical Setting for the Time-Independent Hamilton-Jacobi Theory on the Cotangent and on the Tangent Bundles.

2.1. Preliminaries. Let us begin by recalling [38] some preliminary notions. The cotangent bundle T*Q carries with it the canonical (exact) symplectic structure:

ω_0 = dθ_0 (50)

where, in local coordinates: θ_0 = p_i dq^i, and hence: ω_0 = dp_i ∧ dq^i. Let then α ∈ X*(Q) be a one-form on the base manifold Q. Again, in coordinates:

α = α_i(q) dq^i (51)

With the one-form α we can associate the map (the graph of α):

ϕ_α : Q → T*Q, q ↦ (q, α(q)) (52)

The image of α, Γ[α], will be defined as:

Γ[α] = ϕ_α(Q), π ∘ ϕ_α = id_Q (53)

where π : T*Q → Q is the canonical projection. This shows that ϕ_α gives a global section of T*Q. Γ[α] will therefore be an n-dimensional transversal [38] submanifold of T*Q and ϕ_α will be an embedding [41] of Q into T*Q. Also, α can be recovered from the canonical one-form θ_0 via the pull-back:

ϕ_α* θ_0 = α (54)

We can consider also the map:

ϕ_α ∘ π : T*Q → T*Q (55)

which is a base-invariant translation along the fibers that, for every q ∈ Q, "shrinks" the whole fibre T_q*Q to the point α(q). Then, using Eq.(54), one obtains at once:

ϕ_α* ω_0 = dα (56)

and we conclude [38] that the graph of α, besides being always transversal to the fibers, will also be a Lagrangian submanifold [38,41] of T*Q if and only if α is closed. As discussed in Ref. [38], the converse is not true, i.e. a transversal Lagrangian submanifold of T*Q which projects down to Q under the canonical projection need not be the graph of a closed one-form. Being closed, α will be locally exact, i.e.: α = dW for some function W ∈ F(Q), at least locally. Whether or not such a function exists globally will depend on whether or not H¹(Q), the first de Rham cohomology group of Q, is trivial.
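The statement that the graph of α is Lagrangian iff α is closed can be illustrated in coordinates for Q = R² (a sketch: in two dimensions the pull-back ϕ_α* ω_0 has the single component ∂α_2/∂q¹ − ∂α_1/∂q², which is precisely the component of dα):

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')

def graph_pullback_omega(a1, a2):
    """dq1^dq2 component of the pull-back of omega_0 = dp_i ^ dq^i
    under the graph p_i = a_i(q); it coincides with d(alpha)."""
    return sp.diff(a2, q1) - sp.diff(a1, q2)

# Exact (hence closed) form alpha = dW, with W = q1**2 * q2:
# the graph is Lagrangian
W = q1**2 * q2
assert graph_pullback_omega(sp.diff(W, q1), sp.diff(W, q2)) == 0

# Non-closed form alpha = q2 dq1: the graph is not Lagrangian
assert graph_pullback_omega(q2, sp.S.Zero) != 0
```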

2.2. The HJ Theory on the Cotangent Bundle. After these preliminaries, let us re-consider now the time-independent HJ equation, Eq.(5), for Hamilton's characteristic function, i.e.:

H(q, ∂W/∂q) = E (57)

W can be either a particular solution of the HJ equation or a complete integral (with all the possible intermediate cases in between). In the latter case W = W(q; a), depending (as already discussed, in an essential way) on n additional variables collectively denoted as a. Whenever necessary, we will denote by W_a(q) the function that obtains by keeping the a's constant. Hence: W_a ∈ F(Q) ∀a.
It is clear that, considering the graph of the exact one-form dW_a, the image of dW_a is a Lagrangian submanifold, and Eq.(57) can be rewritten as [38]:

(dW_a)* (H − E) = 0 (58)

or, equivalently, as:

(dW_a)* (dH) = 0 (59)

As it stands, Eq.(58) looks just like a different way of writing the standard HJ equation. However, it allows for a deeper geometrical interpretation of the HJ theory, and also for some interesting generalizations.
First of all, the image Γ_a of dW_a is a regular submanifold [41] of T*Q. As such, it can be described, locally at least, as the zero-level set of n independent functions f_1a, ..., f_na ∈ F(T*Q), i.e. such that:

Γ_a = {m ∈ T*Q : f_ja(m) = 0, j = 1, ..., n} (60)

and:

{f_ia, f_ja}|_{Γ_a} = 0 (61)

where {·, ·} is the Poisson bracket in the space of functions on T*Q associated with the symplectic form ω_0. Eq.(61) implies that the Hamiltonian vector fields X_ja associated with the f_ja's, i.e.:

i_{X_ja} ω_0 = df_ja (62)

are all tangent to the submanifold (indeed [38] they span the tangent space T_m Γ_a at each m ∈ Γ_a). Moreover, denoting by X_H the Hamiltonian vector field that describes the dynamics, we obtain, contracting both sides of Eq.(59) with the X_ja's:

{f_ja, H}|_{Γ_a} = 0 (63)

which proves that X_H is tangent to Γ_a ∀a. Of course X_H is also tangent to the (2n − 1)-dimensional energy surface [Footnote 11: Remember that, in the time-independent case, the energy E must be included among the parameters on which a complete integral depends, leaving actually only n − 1 independent parameters.]:

Σ_E = H^{−1}(E) (64)

If dH ≠ 0 [Footnote 12: Critical points of H, being invariant sets for the dynamics, can be handled separately.], Σ_E is a regular submanifold that we can assume, without loss of generality, to be also connected. If it is not, we can always restrict the discussion to each connected component separately. Each Γ_a is contained in the energy surface (64) and, in general [38], the Γ_a will provide an (n − 1)-parameter foliation of the energy surface, with the dynamical vector field X_H tangent to all the leaves of the foliation. This is basically the geometrical content of the fact that, besides the energy E, a complete integral depends in an essential way on n − 1 additional parameters. This formulation of the HJ problem suffers however from some limitations, as the following example shows.

Example 1. Consider Q = S¹ with coordinate q, 0 ≤ q < 2π. The cotangent bundle can be given coordinates (q, p), with p ∈ R, and can be viewed as a cylinder. In this case θ_0 = p dq and ω_0 = dp ∧ dq are both well-defined. The vector field X = p ∂/∂q is Hamiltonian with Hamiltonian H = p²/2, and the "energy surfaces" are pairs of circles on the cylinder: p = ±√(2E). No globally defined function W with dW = ±√(2E) dq exists on S¹, but the energy surfaces are nonetheless the graphs of the closed but not exact one-forms α_{E,±} = ±√(2E) dq, and the equation ϕ_α*(H − E) = 0 for an unknown closed one-form α is globally defined and has precisely α_{E,±} as solutions.
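A small symbolic check of Example 1 (with H = p²/2 and α = √(2E) dq on S¹, as in the example): the graph of α lies in the energy surface, while the non-vanishing integral of α over the circle shows that α is closed but not exact:

```python
import sympy as sp

q, E = sp.symbols('q E', positive=True)

# On Q = S^1 with H = p**2/2, take the constant one-form alpha = sqrt(2E) dq
alpha = sp.sqrt(2*E)

# phi_alpha*(H - E) = 0: the graph lies in the energy surface
assert sp.simplify(alpha**2/2 - E) == 0

# alpha is closed (trivially, being constant in one dimension) but not
# exact on S^1: its integral over the circle does not vanish
period = sp.integrate(alpha, (q, 0, 2*sp.pi))
assert sp.simplify(period) != 0
```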
This suggests that one relax the requirement that the pull-back of H − E in Eq.(58) be via an exact one-form, replacing it with the weaker request that it be via a closed but not necessarily exact one-form, i.e. that one replace Eq.(58) with:

ϕ_α* (H − E) = 0, dα = 0 (65)

The graph of any such form will of course again be a transversal Lagrangian submanifold, and we can re-formulate the "geometric HJ problem" as follows:
• A solution of the HJ equation for a given Hamiltonian and a given energy is a transversal Lagrangian submanifold within the energy surface, obtained as the graph of a closed one-form on Q, and
• A complete integral is a foliation of T*Q by such solutions for all physically accessible values of the energy, which implies a foliation of each energy surface as well.
Note that (cfr. Eq.(54)), as:

ϕ_α* (dH) = d(ϕ_α* H)

Eq.(65) can be replaced by the equivalent one:

ϕ_α* (dH) = 0, dα = 0 (66)

If, as a further step, one gives up the requirement of transversality, one can pose a "HJ problem" (no longer a "HJ equation") consisting in the search for foliations of each energy surface simply by Lagrangian submanifolds (which, being not necessarily transversal, can exhibit caustics [4]). This has the advantage that the full set of canonical symmetries can be implemented as symmetries of the HJ problem (i.e. maps that map solutions into solutions). We will not insist on this point, but refer rather to the literature [38] for a more complete discussion.
2.3. The HJ Theory on the Tangent Bundle. We turn now briefly to the Lagrangian context. The relevant carrier space is now the tangent bundle TQ, with local coordinates (q^i, u^i), i = 1, ..., n, which carries no pre-assigned symplectic structure but, if a (regular) Lagrangian L ∈ F(TQ) is given, can be endowed with the Lagrangian symplectic structure ω_L defined by [1,3,41]:

ω_L = dθ_L, θ_L = (∂L/∂u^i) dq^i (67)

again, for simplicity, in local coordinates. The Euler-Lagrange equations can be put in "Hamiltonian" form as:

i_{Γ_L} ω_L = dE_L (68)

where Γ_L is the second-order [41] vector field describing the dynamics, the "energy function" E_L is given by:

E_L = ∆(L) − L (69)

and, finally, ∆ is the dilation (Liouville) field along the fibers:

∆ = u^i ∂/∂u^i (70)

Sections (actually, global sections) of the tangent bundle are now provided by vector fields on the base manifold, just as one-forms did the same job for the cotangent bundle. If X ∈ X(Q) is any such vector field, given in local coordinates by X = X^i(q) ∂/∂q^i, then X will define the map (footnote 14):

X : Q → TQ, q ↦ (q, X(q)) (71)

satisfying:

π ∘ X = id_Q (72)

where π : TQ → Q is the canonical projection, and hence X is a section of TQ.
With reference to Eq.(66) we can then define a Lagrangian HJ problem as the search for all vector fields X ∈ X(Q) such that:

X* ω_L = 0, X* (dE_L) = 0 (73)

As such, the HJ problem on the tangent bundle can be viewed as the search for a "vector-valued function" on Q, instead of a single function as in the case of T*Q.
As a (Lagrangian) counterpart of the Remark that was made in Sect. 1, we have now the following [14]:

Remark 4. If X ∈ X(Q), X : Q → TQ, is a solution of Eq.(73) and γ : R → Q is an integral curve of X, i.e. γ̇ = X ∘ γ, then γ̇ : R → TQ solves the Euler-Lagrange equations for the Lagrangian L, i.e.:

(d/dt) ((∂L/∂u^i) ∘ γ̇) − (∂L/∂q^i) ∘ γ̇ = 0 (74)

The converse statement, however (i.e.: if the integral curves γ of a vector field X ∈ X(Q) are such that the γ̇ are integral curves of the Lagrangian vector field Γ_L for a given Lagrangian L, then X is a solution of Eq.(73)), need not be true, as the following example [14] shows.
Example 2. The dynamics of the free particle in R² can be described by the regular Lagrangian (with unit mass):

L = ((u¹)² + (u²)²)/2 (75)

with associated geometrical objects:

ω_L = du¹ ∧ dq¹ + du² ∧ dq² (76)

and:

E_L = ((u¹)² + (u²)²)/2 (77)

[Footnote 14: With some abuse of notation, we are using here too the same symbol to denote the vector field and the associated map.] The two-parameter family of vector fields exhibited in Ref. [14] satisfies the assumptions of the above (putative) converse statement, but fails to satisfy both X* ω_L = 0 and X* (dE_L) = 0. Hence, both conditions of Eq.(73) will be violated.
Guided by this example, we will stick to the definition of the Lagrangian Hamilton-Jacobi problem as defined by Eqs.(73), i.e. as the search for all vector fields X ∈ X(Q) such that (using again the same notation for the associated maps Q → TQ):

X* ω_L = 0, X* (dE_L) = 0 (81)

The first of these equations implies, of course, that Im X be a Lagrangian submanifold of TQ. Furthermore, as: 0 = X* ω_L = X* (dθ_L) = d(X* θ_L), every point has an open neighborhood U ⊂ Q where there is a function W ∈ F(U) such that:

X* θ_L = dW (82)

We have already shown in a previous Remark that if X is a solution of the Lagrangian HJ problem, then it satisfies Eq.(74) (while the converse is not true). It has been proved in Ref. [14] that X is a solution of Eq.(73) iff the diagram formed by the maps X : Q → TQ, TX : TQ → TTQ and Γ_L : TQ → TTQ commutes, i.e. iff:

TX ∘ X = Γ_L ∘ X (83)

Eq.(83) is a PDE for the unknown vector field, or "vector-valued function", X, which replaces the PDE for the scalar function W. Once a solution is found, one has however to check whether or not it satisfies the conditions (81) as well, in order for it to be a genuine solution of the Lagrangian HJ problem. It may be useful to derive the expression of the PDE (83) in local coordinates. We can write X and Γ_L as:

X = X^i ∂/∂q^i, Γ_L = u^i ∂/∂q^i + F^i ∂/∂u^i (84)

where:

F^i = H^{ij} (∂L/∂q^j − u^k ∂²L/∂u^j ∂q^k) (85)

and H^{ij} is the inverse of the Hessian matrix:

H_{ij} = ∂²L/∂u^i ∂u^j (86)

A direct calculation shows that:

Γ_L ∘ X − TX ∘ X = (F^i(q, X) − X^j ∂X^i/∂q^j) (∂/∂u^i) ∘ X (87)

This is a vertical vector field along X whose vanishing implies:

X^j ∂X^i/∂q^j = F^i(q, X) (88)

and this is the required local form of the PDE (83).
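A minimal symbolic check of the local form X^j ∂X^i/∂q^j = F^i(q, X) of the PDE, in an assumed 1D example (harmonic oscillator with unit mass and frequency, so that Γ_L has F(q, u) = −q):

```python
import sympy as sp

q, E = sp.symbols('q E', positive=True)

# Assumed 1D example: L = (u**2 - q**2)/2, so the force term of
# Gamma_L is F(q, u) = -q and the local PDE reads X dX/dq = F(q, X).
X = sp.sqrt(2*E - q**2)          # velocity expressed through the energy E

residual = X*sp.diff(X, q) + q   # X dX/dq - F(q, X)
assert sp.simplify(residual) == 0
```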
One of the main results of Ref. [14] can now be rephrased as follows: Proposition. The following statements are equivalent: 1. X is a solution of the Lagrangian HJ problem.
2. Besides being a Lagrangian submanifold of T Q, Im X is also invariant under the dynamics represented by Γ L , i.e. Γ L is everywhere tangent to Im X. 3. The integral curves of Γ L with initial conditions on Im X project onto the integral curves of X.
In the context of the tangent bundle, the notion of a complete solution of the Lagrangian HJ problem can be posed as follows:

Definition. A complete solution of the Lagrangian Hamilton-Jacobi problem is provided by a family of solutions {X_λ}_{λ∈Λ}, Λ an open set in R^n, such that the map:

Φ : Q × Λ → TQ, (q, λ) ↦ X_λ(q)

is a local diffeomorphism. It follows from the definition that a complete solution yields a foliation of TQ with leaves transversal to the fibers, and that the Lagrangian vector field Γ_L is tangent to the leaves of the foliation.
Example 3. Consider the two-dimensional harmonic oscillator (with unit masses and frequencies) with the standard Lagrangian:

L = ((u¹)² + (u²)² − (q¹)² − (q²)²)/2 (89)

The dynamical vector field is:

Γ_L = u¹ ∂/∂q¹ + u² ∂/∂q² − q¹ ∂/∂u¹ − q² ∂/∂u² (90)

and:

ω_L = du¹ ∧ dq¹ + du² ∧ dq², E_L = ((u¹)² + (u²)² + (q¹)² + (q²)²)/2 (91)

It is known that the functions:

f_ij = (u^i u^j + q^i q^j)/2, i, j = 1, 2 (92)

are all constants of the motion, not functionally independent, of course. Let, e.g.:

f_1 = f_11 =: E_1, f_2 = f_22 =: E_2 (93)

In particular, f_1 and f_2 are in involution: {f_1, f_2} = 0. The PDE equations (88) read now:

X^j ∂X^i/∂q^j = −q^i, i = 1, 2 (94)

and it is easy to check that the four two-parameter families of vector fields (footnote 17):

X_{E1,E2} = ±√(2E_1 − (q¹)²) ∂/∂q¹ ± √(2E_2 − (q²)²) ∂/∂q² (95)

do indeed solve them. As:

(X_{E1,E2})* ω_L = 0 (96)

and:

(X_{E1,E2})* (dE_L) = 0 (97)

the X_{E1,E2}'s provide a complete solution of the Lagrangian HJ problem.

Remark 5. It is useful to stress here that the PDE (88) (or, for that matter, Eq.(94) in the case of the harmonic oscillator), which defines [14] the "generalized" Lagrangian HJ problem, depends on the Lagrangian only through the vector field Γ_L, while the HJ problem that we are discussing here depends also on additional structures derived from the Lagrangian, namely the symplectic form ω_L and the "energy function" E_L.
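The checks of Example 3 can be reproduced symbolically; the unit-mass, unit-frequency Lagrangian and one of the four sign choices for the vector fields are assumed below:

```python
import sympy as sp

x1, x2, E1, E2 = sp.symbols('x1 x2 E1 E2', positive=True)

# Assumed standard Lagrangian L = (u1**2 + u2**2 - x1**2 - x2**2)/2
# (unit masses and frequencies), so Gamma_L has F_i = -x_i.
X1 = sp.sqrt(2*E1 - x1**2)   # one of the four sign choices
X2 = sp.sqrt(2*E2 - x2**2)

# The PDE X^j dX^i/dq^j = F^i: here the cross terms vanish
assert sp.simplify(X1*sp.diff(X1, x1) + X2*sp.diff(X1, x2) + x1) == 0
assert sp.simplify(X1*sp.diff(X2, x1) + X2*sp.diff(X2, x2) + x2) == 0

# X* omega_L = 0: the dx1^dx2 component of the pull-back of du_i ^ dx^i
assert sp.simplify(sp.diff(X2, x1) - sp.diff(X1, x2)) == 0

# X* (dE_L) = 0: the energy pulls back to the constant E1 + E2
EL = (X1**2 + X2**2 + x1**2 + x2**2)/2
assert sp.simplify(EL - (E1 + E2)) == 0
```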
It is known [44] that the so-called "Inverse Problem in the Calculus of Variations", i.e. the problem of whether or not a dynamics described by a given second-order vector field on TQ admits of a Lagrangian description, can have no solutions at all, only one solution, or more than one solution, leading then to (genuinely) alternative Lagrangian descriptions for a given dynamics.
If this is the case, vector fields that are solutions of the Lagrangian HJ problem (or families thereof, yielding complete solutions) for a given Lagrangian need not be such for an alternative Lagrangian description, while remaining, according to what has just been said, solutions (or complete solutions) of the generalized HJ problem, as the following example shows. [Footnote 17: It is easy to recognize that the components of the vector fields are obtained by expressing u^{1,2} in terms of the coordinates on the base manifold and of the two parameters E_{1,2}.]

Example 4. Consider again the two-dimensional harmonic oscillator, but now with the alternative Lagrangian
The associated structures will now be: while, of course, the dynamical vector field and the PDE (94) will be the same. The vector fields (95) will again satisfy (X E1E2 ) * ω L1 = 0 as well as: (X E1E2 ) * dE L1 = 0, and hence the X E1E2 's will provide a complete solution of the HJ problem for the Lagrangian (98) as well.
If one considers instead the Lagrangian [44]: we have: and one finds, e.g.: (as well as: (X E1E2 ) * (dE L2 ) = 0). Hence, the X E1E2 's will no longer be solutions of the HJ problem for the alternative Lagrangian (100).
3. The Generalized Hamilton-Jacobi Problem for Differential Operators.

3.1. Differential Operators and Principal Symbols.
In order to deal with differential operators on manifolds, we will first review differential operators on R n . We will do this by providing an algebraic characterization that will allow us to deal with differential operators on arbitrary manifolds. We consider then the algebra A = F (R n ) of smooth functions on R n . A differential operator of degree at most k is defined as a linear map: D (k) : A → A of the form: where we have introduced multi-indices: σ = (i 1 , i 2 , ..., i n ) , |σ| = i 1 + i 2 + ... + i n , and: It is possible to give an algebraic characterization, appropriate to arbitrary manifolds, in the following way. With every function f ∈ A we associate the differential operator f̂ of order zero that acts by multiplication, i.e.: f̂ g := f g on all smooth functions. We notice that the commutator bracket gives: and, more generally: for |τ | > 0 (strictly) and some set of constants c τ . It follows then easily that: is a differential operator of degree at most k − 1. Iterating the procedure for a set of k + 1 functions f 0 , f 1 , ..., f k , we find: The converse statement also holds true: a linear operator which does not increase supports and which satisfies property (108) on every set of k + 1 elements of A is a differential operator [5]. By means of this algebraic characterization it is then possible to define differential operators on arbitrary manifolds as linear maps satisfying property (108).
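The degree-lowering property of the commutator with multiplication operators can be checked concretely. The following sketch is our own illustration (not from the text): with D = d²/dx² acting on polynomials in one variable, represented as coefficient lists, the three-fold iterated commutator with the multiplication operator x̂ vanishes identically, in accordance with the characterization (108) of degree-two operators:

```python
def deriv(p):
    # d/dx on the coefficient list [a0, a1, a2, ...]
    return [i * p[i] for i in range(1, len(p))] or [0]

def mul_x(p):
    # the multiplication operator "x-hat"
    return [0] + p

def D(p):
    # a differential operator of degree (at most) two: d^2/dx^2
    return deriv(deriv(p))

def commutator(A, B):
    # [A, B] acting on coefficient lists, padding to equal length
    def C(p):
        a, b = A(B(p)), B(A(p))
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        return [x - y for x, y in zip(a, b)]
    return C

C1 = commutator(D, mul_x)    # degree at most 1 (here: 2 d/dx)
C2 = commutator(C1, mul_x)   # degree 0 (here: multiplication by 2)
C3 = commutator(C2, mul_x)   # vanishes identically

p = [1, 2, 3, 4]             # the polynomial 1 + 2x + 3x^2 + 4x^3
```

Each commutator with x̂ lowers the degree of the operator by one, so a degree-two operator is killed by three iterated commutators, exactly the pattern used in the algebraic definition.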
We notice that: • Setting: k = j = 1 we obtain that differential operators of degree at most one are a subalgebra and that A, as an Abelian subalgebra of operators of degree zero, is an invariant subalgebra thereof. • Differential operators of degree at most one are derivations of A if they are zero on constants. • Derivations are a subalgebra (the subalgebra of homogeneous differential operators of degree one).

Remark 6.
Considering the algebra A of smooth functions and the algebra DerA of the derivations on A, and noticing that: we can form a semi-direct product of Lie algebras by setting: The enveloping algebra of this Lie algebra will be isomorphic with the algebra of differential operators.
If D is a differential operator, then for any two functions f and g: It follows then that: Proposition. If D (k) is a differential operator of degree at most k, the expression:
is a function which is symmetric with respect to all the permutations of f 1 , ..., f k . We can set up an equivalence relation "≃" among differential operators of the same degree, say k, by saying that D (k) ≃ D ′(k) iff their difference is a differential operator of degree at most k − 1. The equivalence class is what is called the principal symbol of the differential operator. The set of principal symbols is a commutative algebra, this following from the fact that two operators of degree k are in the same equivalence class iff their difference is a differential operator of degree < k. The set of the symbols of the differential operators of degree k will be denoted as S (k) (Q). The action of the principal symbol of an operator D (k) of order k on a set f = (f 1 , ..., f k ) of functions will be denoted as σ P D (k) (f ) and, of course: We note that: with, now: f = (f 1 , ..., f k+m ), which shows also that: σ P D (k) σ P D (m) = σ P D (m) σ P D (k) , as well as that: for any three differential operators D 1 , D 2 and D 3 . It is possible to define a Lie algebra product on principal symbols by associating with any two of them the principal symbol of their commutator. If we denote by: σ P : Dif f (k) (Q) → S (k) (Q) the map that associates a symbol with an operator, we can define: In this way we define a Poisson bracket on the commutative algebra of the principal symbols of differential operators. Moreover, we have: which shows that {σ P (D) , ·} is a derivation on the commutative algebra of the principal symbols.
It is not difficult to see from Eq.(114) that one can define (and in a unique way) a symmetric contravariant tensor D (k) of rank k via: evaluated on the symmetrized product df 1 ⊗ df 2 ⊗ ... ⊗ df k . For example, for a homogeneous second-order operator of the form: a ij = a ji , we find: This characterization of principal symbols by means of symmetric contravariant tensor fields allows us to conclude that the Poisson bracket on principal symbols is isomorphic with the Lie algebra product on symmetric contravariant tensors defined by the Schouten bracket [48].
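To make the identification concrete, here is a direct computation (our own, consistent with the formulas of this subsection) for a general second-order operator D = a^{ij} ∂_i ∂_j + b^i ∂_i + c with a^{ij} = a^{ji}:

```latex
[D,\hat f]\,g \;=\; D(fg) - f\,D(g)
  \;=\; \Big( a^{ij}(\partial_i\partial_j f) + b^i\,\partial_i f \Big)\, g
        \;+\; 2\,a^{ij}\,(\partial_i f)\,\partial_j g \,,
\qquad
\frac{1}{2!}\,\big[[D,\hat f],\hat g\big] \;=\; a^{ij}\,(\partial_i f)\,(\partial_j g)\,.
```

The zero- and first-order coefficients c and b^i drop out of the double commutator, so only the symmetric tensor a^{ij} survives, as in the homogeneous second-order example above.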
Remark 7. If Q is parallelizable, there will be a global basis, say (X 1 , ..., X n ), of vector fields, and we can consider the F (Q)-module of contravariant tensor fields generated by them. For example, the monomial X i1 ⊗ ... ⊗ X i k defines a differential operator of order k by setting: the k-fold iterated commutator of such a D (k) with the multiplication operators f̂ 1 , ..., f̂ k is then of order zero.
A further identification is possible by considering the principal symbol σ P D (k) as a fiberwise polynomial function f D (k) on T * Q defined as: For example, with: θ 0 = p i dq i , the rank-two tensor (121) leads to: Let us stress that what Eqs.(119) and (123) show is that it is possible to characterize principal symbols completely in tensorial terms.
Remark 8. We are committing a slight abuse of notation in making this definition, as we are contracting tensor fields on Q with forms on T * Q. However, the abuse is justified by the fact that θ 0 is a semi-basic one-form.
With this association, the Poisson bracket (associated with dθ 0 ) on polynomial functions defined by contracting D (k) with covariant tensor fields defined by means of powers of θ 0 turns out to be isomorphic with the "abstract" Poisson bracket (117) defined on principal symbols of differential operators.
We should stress here that this realization in terms of functions that are polynomials in the momenta depends on the semi-basic one-form we have used (even though θ 0 is a natural one-form [41] on T * Q). It would be possible to consider other semi-basic one-forms whose exterior derivative would be a symplectic structure. For example, we might consider, for any non-singular numerical matrix K = K i j : whereby: dθ K = dp i K i j ∧ dq j will be a symplectic structure. Then, again using the rank-two tensor (121), one finds the quadratic form: In this way, again the associated Poisson algebra would be isomorphic with the abstract Poisson algebra defined by means of the principal symbols. These alternatives may turn out to be relevant if we wish to consider bi-Hamiltonian systems. In such a situation, semi-basic one-forms θ such that: which implies: would be of particular interest. Example 5. On T * R 2 with coordinates q 1 , q 2 , p 1 , p 2 the dynamics of the 2D isotropic harmonic oscillator can be represented by the vector field: which is Hamiltonian w.r.t. the "canonical" symplectic form ω 0 = dθ 0 = Σ i dp i ∧ dq i with the "standard" Hamiltonian: It is also Hamiltonian (and hence bi-Hamiltonian) w.r.t. (among others [44]) the symplectic form:
which is of the form (125) with: and where:

3.2. Hamilton-Jacobi-Type Equations Associated with Symbols. Up to now we have seen that with the principal symbol of any differential operator we can associate both a symmetric contravariant tensor field and a polynomial function on T * Q. Clearly, if we consider this polynomial function f D , it is possible to construct a first-order PDE by setting, as in Sect.2: or, more generally: In this way, by means of the principal symbols, we are able to construct a first-order PDE of the Hamilton-Jacobi type, as well as Hamilton's equations, which define the vector field Γ. Let us look now at a couple of examples.
Example 6. The Schrödinger operator. The operator associated with the evolution equation is: The associated HJ equation may be written in the form: i.e.: We find that the principal symbol, or the HJ equation associated with it, contains no information either on the potential or on the time evolution.

Example 7. The Klein-Gordon operator. A similar situation prevails for the Klein-Gordon operator:
which leads to: i.e. the principal symbol and the associated HJ equation do not take into account the mass of the particle. We see from these examples that the principal symbol does not capture the full physical information contained in the differential operator (with the associated PDE) we started from. To remedy this situation, we may define an associated, homogeneous, differential operator on a larger space by adding one more degree of freedom. To be specific, let us consider the case of a second-order differential operator. Locally, the operator will have the representation: (a ij , b j , e ∈ F (Q)). Adding then one more variable, denoted as τ , we obtain the extended differential operator on Q × R: which is now homogeneous of degree two. We will also restrict the space of functions on which D operates to functions of the form: in such a way that: The polynomial function on T * (Q × R) (with coordinates: x j , τ ; p j , p τ ) associated with the principal symbol of D will be: and we will look for solutions of the HJ equation: of the form: In this way, the HJ equation will become, when written in local coordinates: and we have thus gotten rid of the additional degree of freedom whose introduction was made necessary in order to be able to deal with tensorial objects (see the discussion in Sect.3.1).

Example 8. With the procedure outlined above, both the Schrödinger and the Klein-Gordon operators get replaced by:
and: respectively, and the associated HJ equations (looking for solutions of the form (148)) will capture the full physics of the respective problems. The procedure outlined here is completely general, and we see that in this way the full information contained in our original differential operator will be captured by the principal symbol of the extended operator, and we can proceed now with the first-order PDE and Hamilton's equations just as before. As for transformations, we should remark that now we have to restrict to bundle automorphisms.
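With the standard sign and metric conventions assumed (this is our reconstruction of the outcome, schematic and up to overall signs; it is not a transcription of the displayed equations), the extended operators and the HJ equations obtained by seeking solutions of the form S̃ = S(x, t) + τ (hence p τ = 1) read:

```latex
\widetilde D_{\mathrm{Schr}}
  = -\frac{\hbar^2}{2m}\,\Delta \;-\; i\hbar\,\partial_t\,\partial_\tau \;+\; V(x)\,\partial_\tau^2
\;\Longrightarrow\;
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V = 0\,,
\qquad
\widetilde D_{\mathrm{KG}}
  = \Box \;+\; \frac{m^2 c^2}{\hbar^2}\,\partial_\tau^2
\;\Longrightarrow\;
\frac{1}{c^2}\Big(\frac{\partial S}{\partial t}\Big)^2 - (\nabla S)^2 = m^2 c^2\,.
```

The first equation is the classical time-dependent HJ equation with the potential restored; the second is the relativistic HJ equation with the mass restored.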

3.3. Vector-Valued Differential Operators and the Hamilton-Jacobi Problem. In several physical situations, the systems we may want to describe have also some inner structure, and therefore scalar differential operators are not enough. For instance, the quantum-mechanical description of a particle with spin requires that we replace Schrödinger's equation with Pauli's in the non-relativistic case and with Dirac's in the relativistic case. More generally, this kind of situation occurs when we consider [8,9] Yang-Mills fields and generalized Wong equations.
Let us consider then the general aspects of this situation. We consider here two vector bundles: E 1 → Q and: E 2 → Q. We denote by Sec (E 1 ) and Sec (E 2 ) the spaces of sections: s : Q → E 1 and: τ : Q → E 2 . The operator of multiplication by a function f ∈ F (Q) will be denoted here too by f̂ . Following the algebraic setting for differential operators of Sect.3.1, we define: A differential operator of order at most k, acting from Sec (E 1 ) to Sec (E 2 ), is a linear map: such that: In a chosen trivialization of the two bundles, their sections are vector-valued functions on Q, and the operator D will be described by a matrix whose entries are coordinate expressions of scalar differential operators. Thus, here too the relevant differential part can be written as: but now, instead of the g σ 's being pointwise scalars, we have (pointwise): and each matrix element of the g σ 's will be a smooth function on Q.
We can again identify the principal symbol with a symmetric and totally contravariant tensor, i.e.: but now the tensor field must be understood as a multilinear function on one-forms with values in Hom (E 1 , E 2 ). Intrinsically: and the symmetry follows again from Eq.(111). By using the contraction with the k-fold product of θ 0 with itself we obtain a polynomial function with matrix-valued coefficients. Example 9. Let: D = d, the exterior differential: d : Λ j (Q) → Λ j+1 (Q). The value of its symbol on a differential one-form α is a homomorphism from Λ j (Q) to Λ j+1 (Q) given by [22]: The symbols of second-order scalar differential operators are symmetric contravariant tensor fields of rank two, that is: When this tensor is non-degenerate, it defines a (pseudo-)Riemannian metric: with: Thus, D is elliptic if g D is Riemannian and hyperbolic if it is Lorentzian.
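As a simple illustration of this dichotomy (our own, not spelled out in the text): for the Laplacian Δ = δ^{ij} ∂_i ∂_j and the d'Alembertian □, the fiberwise polynomial functions are

```latex
f_{\Delta} = \delta^{ij}\,p_i\,p_j = \|p\|^2
  \quad (\text{Euclidean } g_D \;\Rightarrow\; \Delta \text{ elliptic}),
\qquad
f_{\Box} = \frac{p_t^2}{c^2} - \|p\|^2
  \quad (\text{Lorentzian } g_D \;\Rightarrow\; \Box \text{ hyperbolic}).
```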
In the scalar situation we have defined a Hamiltonian as: and in this way we have been able to define Poisson brackets, Hamilton-Jacobi-type equations and, finally, Hamilton's canonical equations.
In the present context, the symbol of a matrix-valued differential operator D will be a matrix-valued polynomial on T * Q. For example, for a second-order operator we will get: where now H (q, p) will be a matrix. If our bundles are Hermitian bundles of the same dimension, they can be identified, and H becomes Hermitian: H = H † . Then, H will have real eigenvalues, and each eigenvalue function can be used as a Hamiltonian function on T * Q [6]. At this point we can repeat whatever has been said in the case of scalar differential operators.
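A minimal numerical sketch of this last step, with an illustrative Dirac-type symbol matrix in one space dimension (our choice of matrix, not taken from the text): the two eigenvalue functions ±√(p² + m²) of the Hermitian symbol can each serve as a scalar Hamiltonian on T * Q.

```python
import math

def symbol_matrix(p, m):
    # Hermitian (here real-symmetric) principal-symbol matrix of a
    # Dirac-type operator in one space dimension (illustrative choice)
    return [[m, p],
            [p, -m]]

def eigenvalues_2x2(M):
    # closed form for the eigenvalues of a real symmetric 2x2 matrix
    a, b = M[0][0], M[0][1]
    c, d = M[1][0], M[1][1]
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return ((tr - disc) / 2, (tr + disc) / 2)

p, m = 3.0, 4.0
lam_minus, lam_plus = eigenvalues_2x2(symbol_matrix(p, m))
# each eigenvalue function +/- sqrt(p^2 + m^2) can serve as a scalar
# Hamiltonian on T*Q (here: -5.0 and +5.0)
```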
4. Joint Hamilton-Jacobi Problems. As anticipated in the Introduction, one can pose a "joint Hamilton-Jacobi" problem also for two or more dynamical variables. For reasons that will become apparent shortly, we will limit ourselves here to the "conventional" HJ problem as discussed in Sect.2. In order to discuss the "joint" problem, we will have to discuss first some preliminary notions related to the restriction of Poisson brackets (on a cotangent bundle) to Lagrangian submanifolds that are graphs of closed one-forms.
In Sect.2 we have shown that, given the cotangent bundle T * Q of a configuration manifold Q with the canonical symplectic form ω 0 , with every one-form α ∈ X * (Q) we can associate the maps (52) and (55), i.e. (we omit here for brevity the suffix "α" that was employed in Sect.2): and: and that the image of α: Γ [α] = {(q, α (q)) ∈ T * Q; q ∈ Q} will be a Lagrangian submanifold of T * Q iff α is closed. Notice that, if (q, p) already belongs to Γ [α], the map (165) will leave it unaltered: in a fiber-preserving way.
With every function f ∈ F (T * Q) we can associate the pull-back: i.e. the extension of the restriction of f to Γ [α] with the property of being constant along the fibers. With this in mind, we want here to relate the restriction ψ * {f, g} of the Poisson bracket of any two functions f and g to Γ [α] to the corresponding restrictions ψ * f and ψ * g of f and g.
If π : T * Q → Q is the canonical projection, Eq.(165) implies: π • ψ = π, and hence: for the corresponding tangent maps. Let now m = (q, p) ∈ T * Q and: γ = ψ (m). Then to every tangent vector X ∈ T m T * Q one can associate the vector: T ψ (X) ∈ T γ T * Q and Eq.(168) implies: Notice that the two arguments of T π in Eq.(169) are in general tangent vectors at different points of T * Q. It is only when m = γ ∈ Γ [α] that we can factor out the tangent map T π and write: What Eq.(170) proves is that, if X ∈ T γ T * Q, then (T ψ (X)) − X is a vertical vector. Let now f, g ∈ F (T * Q) and let X f , X g be the corresponding Hamiltonian vector fields defined via: Then: Now, in the first term on the r.h.s. of this equation, ω 0 is evaluated on a pair of vertical fields (at γ), and hence this term vanishes. As to the last term, using the definition of the pull-back [41]: But: ψ * ω 0 = dα and, as α is closed by assumption, this term vanishes as well and, using also Eq.(171), we are left with: The last term here is in turn the pull-back via ψ of the function: L Xg ψ * f −L X f ψ * g, and in this way we obtain the result [38]: which expresses the relation between the pull-back (the restriction) of the Poisson bracket of any two functions and the restrictions of the functions themselves. All that has been proved here relies in a crucial way on Γ [α] being the graph in a cotangent bundle of a closed one-form on the base manifold, and cannot be extended straightforwardly [38] to more general contexts such as Lagrangian submanifolds in general symplectic manifolds without further qualifications (see however Ref. [38] for some possible generalizations).
Having established the result (176), let's turn now to what we have called at the beginning of this Section the "joint HJ problem". To be specific, let: A, B ∈ F (T * Q) be two dynamical variables. We look then for a closed one-form α such that (cfr. Eqs. (164) and (165)): or: Eq.(176) tells us immediately that this implies: This (necessary) condition, i.e.: can be interpreted [38] as a sort of Poisson theorem, i.e. a condition saying that if a solution exists for the joint HJ problem for A and B, then it must be also a solution for the HJ problem for the Poisson bracket {A, B} on its zero level-set, and also as a classical counterpart of the quantum condition [43] according to which two observables must commute in order to be simultaneously diagonalizable. It is obvious that if the Poisson bracket {A, B} does not vanish anywhere, then there is no possible solution for the joint HJ problem for A and B.
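The necessary condition can be probed numerically. In the following sketch (our own illustration on T * R², with a finite-difference Poisson bracket) the two commuting "partial energies" of the 2D oscillator are compatible with a joint HJ problem, while a canonically conjugate pair, whose bracket equals 1 everywhere, admits no joint solution at all:

```python
def pb(f, g, q, p, h=1e-5):
    """Finite-difference Poisson bracket {f, g} at the point (q, p)."""
    total = 0.0
    for i in range(len(q)):
        qp, qm = list(q), list(q)
        pp, pm = list(p), list(p)
        qp[i] += h; qm[i] -= h; pp[i] += h; pm[i] -= h
        df_dq = (f(qp, p) - f(qm, p)) / (2 * h)
        df_dp = (f(q, pp) - f(q, pm)) / (2 * h)
        dg_dq = (g(qp, p) - g(qm, p)) / (2 * h)
        dg_dp = (g(q, pp) - g(q, pm)) / (2 * h)
        total += df_dq * dg_dp - df_dp * dg_dq
    return total

# Two commuting oscillator energies: a joint HJ problem can make sense.
E1 = lambda q, p: (p[0]**2 + q[0]**2) / 2
E2 = lambda q, p: (p[1]**2 + q[1]**2) / 2
# {q1, p1} = 1 everywhere: no joint HJ solution can exist.
Q1 = lambda q, p: q[0]
P1 = lambda q, p: p[0]

point = ([0.7, -1.2], [0.3, 0.8])
b_commuting = pb(E1, E2, *point)   # ~ 0
b_obstructed = pb(Q1, P1, *point)  # ~ 1
```

This is, of course, only the necessary condition: the vanishing of the bracket does not by itself guarantee a joint solution, as Example 10 below shows.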
The obvious generalization to any set A 1 , ..., A k , k ≤ n = dim (Q), will be, of course: As the one-form α is closed by assumption, at least locally (and globally if the first de Rham cohomology group of Q vanishes): α = dW, W ∈ F (Q), and Eq.(180) will become a PDE for the unknown function W . For example, in local (Darboux) coordinates, ω 0 = dp i ∧ dq i will imply the equation: We will close this Section by discussing a simple example that can help clarify the status of the condition (180) as a necessary but not sufficient condition.
Example 10. It is well known [44] that the dynamics of the 2D harmonic oscillator on T * R 2 , with coordinates q 1 , q 2 , p 1 , p 2 and equipped with the canonical symplectic form: ω 0 = dp j ∧ dq j , can be described in many alternative ways and, among others, by: • The "standard" Hamiltonian: and vector field: or by: • The Hamiltonian and vector field: or, eventually, by: and: which describes the dynamics of the 2D harmonic oscillator on the configuration space.

5. The Time-Dependent Hamilton-Jacobi Problem. The time-dependent HJ equation (192) can be recast in a geometrical setting [38] just as its time-independent counterpart.
Although the "minimal" extension of the carrier manifold [41] appropriate to the description of a time-dependent dynamics seems to be from T * Q to T * Q × R, where R stands for the time variable t, the resulting manifold, being odd-dimensional, is a contact [1] and not a symplectic manifold. As anticipated in Sect.1.5, it is more convenient [1,38,41] to extend the original configuration space Q (with local coordinates q i , i = 1, ..., n = dim Q) to: Q̃ = Q × R, considering then time as an additional coordinate on the same footing as the q i 's, and hence to consider the (extended) cotangent bundle: M̃ = T * Q̃ = T * Q × T * R. Coordinates for T * R will be denoted as (t, h), with h the (energy) variable canonically conjugate to the time t, and a point m̃ ∈ M̃ will consist of a triple: m̃ = (m, t, h) with m ∈ M (hence: m = (q, p) in local coordinates). Associated with the manifold M̃ there will be various projection maps, for instance from T * Q × T * R to T * Q × R or to T * Q or to T * R and so on. One can further endow M̃ with a canonical one-form θ̃ 0 obtained by "adding" a contribution from T * R to θ 0 = p i dq i on M, i.e.: Hence, M̃ will acquire the structure of a symplectic manifold with the canonical two-form: The canonical equations of motion on M = T * Q can be "lifted" to M̃ = T * (Q × R) by defining, guided by the structure of the time-dependent equation (192), the Hamiltonian H̃ ∈ F(M̃) on the extended phase space M̃ as: The one-parameter group of canonical transformations of T * (Q × R) generated by X H̃ will permute the (Σ̃ 0 ) t 's among themselves, i.e.: Remark 9. One should keep in mind that, while on the extended phase space T * (Q × R), due to the fact that the dynamical vector field X H̃ does not depend explicitly on the evolution parameter τ , the dynamics is "autonomous" [1,41], and gives rise therefore to a "bona fide" one-parameter group, this is not so on T * Q.
There, due to the time-dependence, the dynamics will give rise in general [1,38] to a two-parameter family {φ t2t1 } of canonical transformations of T * Q obeying the law of combination [38]: which will "collapse", of course, into a one-parameter group if the Hamiltonian happens to be time-independent, i.e., in such a case: φ t2t1 = φ t2−t1 . Explicitly (see also Eqs. (198) and (199)), the one-parameter group {Φ τ } will act as [38]: At this point, it becomes apparent how one can cast the time-dependent HJ problem as stated in Eq.(192) into a geometrical form on the extended phase space M̃, paralleling to some extent the discussion in Sect.2.2. With any S̃ ∈ F(Q̃) (dS̃ ∈ X * (Q̃)) we can associate the map: which is a global section w.r.t. the projection map: π̃ : M̃ → Q̃. Then: will be the graph of the closed (actually exact) one-form dS̃ on Q̃, and hence an ((n + 1)-dimensional) transversal Lagrangian submanifold in M̃. The requirement that S̃ be a solution of the PDE (192) can be rephrased in geometrical terms by requiring that: Remark 10. Eq.(210) should make it clear why, at variance with the time-independent case, the "zero-level set" Σ̃ 0 plays a distinguished rôle in the geometrical approach to the time-dependent HJ problem.
The geometrical meaning of the time-dependent HJ problem in the present formulation is therefore the following: We look for transversal Lagrangian submanifolds Γ̃ in M̃ that are contained in Σ̃ 0 and are graphs of exact one-forms on Q̃. Γ̃ will have codimension n in Σ̃ 0 . A complete solution of the time-dependent HJ problem will be an n-dimensional foliation of the "zero-level set" Σ̃ 0 by such submanifolds. Usually, these submanifolds are defined by the "dispersion relations" of our PDE.
By reasoning as in Sect.2.2 we can conclude that the vector field X H̃ will be tangent to the submanifold Γ̃. Hence, the dynamical flow will leave it invariant, i.e.: Φ τ (Γ̃) = Γ̃ ∀τ (211) All of the above picture lives on the extended phase space M̃, and one should see now how it can be made to "descend" to the physical phase space M = T * Q, i.e. how it behaves under the projection: For every fixed t we may consider S̃ as a function S t ∈ F (Q) with dS t ∈ X * (Q). Then, the map: ϕ S,t : Q ∋ q → (q, (dS t ) (q)) ∈ T * Q, giving a global section w.r.t. the projection: π 0 : T * Q → Q, will define the transversal Lagrangian submanifold Γ t = ϕ S,t (Q) in T * Q, the graph of the exact one-form dS t .
Going back now to the situation in M̃ = T * (Q × R), we can consider the family of intersections {Γ̃ ∩ (Σ̃ 0 ) t } t∈R of Γ̃ with the constant-t sections of Σ̃ 0 . Explicitly: and, for every t, Γ̃ ∩ (Σ̃ 0 ) t will be an n-dimensional (hence isotropic [38] in M̃) submanifold contained in (Σ̃ 0 ) t . Using then Eqs.(205) and (211) one sees that Φ τ again permutes these submanifolds among themselves, i.e.: and, under the projection (212), we obtain: Moreover, using Eq.(207), we see that the Γ t 's evolve in time as: φ t+τ,t (Γ t ) = Γ t+τ ; t, τ ∈ R. We obtain then the following picture: Any solution of the HJ problem in M̃ = T * (Q × R), i.e. any single geometrical object Γ̃, gives rise in M = T * Q to a family {Γ t } of time-dependent Lagrangian submanifolds in T * Q, each Γ t being the graph of an exact one-form dS t on Q, evolving in time according to Eq.(217).
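Collecting the ingredients of this Section (with the common convention H̃ = h + H for the extended Hamiltonian, which we assume here since the displayed formula is not reproduced above), the picture can be summarized as:

```latex
\widetilde H(q,t;p,h) = h + H(q,p,t), \qquad
\widetilde\Sigma_0 = \widetilde H^{-1}(0), \qquad
\widetilde\Gamma = \operatorname{graph}(d\widetilde S) \subset \widetilde\Sigma_0
\;\Longleftrightarrow\;
\frac{\partial S}{\partial t} + H\!\Big(q, \frac{\partial S}{\partial q}, t\Big) = 0,
\quad \widetilde S(q,t) = S_t(q)\,.
```

On the graph of dS̃ one has p = ∂S/∂q and h = ∂S/∂t, so the containment in the zero-level set of H̃ is exactly the time-dependent HJ equation.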
One should keep in mind, however, that time-evolution need not be, and quite often is not, a harmless process, to the extent that solutions that are "well behaved" at a certain time may develop caustics and/or become ill-behaved at later times. To "cure" these and other possible pathologies, and also to fully implement symmetries in the time-dependent context as well as in the time-independent one (see also the discussion at the end of Subsect.2.2), one might be forced to require that the relevant submanifolds be graphs of closed but not necessarily exact one-forms and/or to abandon the requirement of transversality. We will not discuss here these generalizations, but refer rather to the literature [38] for further details.
6. Concluding Remarks. Following Dirac's prescription [21] according to which Classical Mechanics must be a suitable limit of Quantum Mechanics, we have considered in this paper the Hamilton-Jacobi theory as emerging from Quantum Mechanics when we consider an approximation in which Planck's constant is treated as a parameter.
From this point of view, while we keep to the originating idea of the theory, namely that of considering Hamiltonian Optics as a suitable limit of Wave Optics, we have taken advantage of the fact that Wave Mechanics is concerned also with particles with internal structure. This has suggested that we deal not only with "scalar differential operators" like Schrödinger's or Klein-Gordon's, but also with "matrix-valued differential operators" such as those appearing in the Pauli as well as in the Dirac equations. The net result is an extension of the usual Hamilton-Jacobi formalism to a formalism where the Hamiltonian "scalar function" is replaced by a matrix-valued Hamiltonian.
In the same spirit, following what happens in relativistic field theories, we have replaced the one-parameter group of time evolution with a Lie group, e.g. the Poincaré group.
We have not considered the Hamilton-Jacobi theory for field theories proper but, following the ideas outlined in this paper, it should not be very difficult to foresee how to proceed.
The joint Hamilton-Jacobi problem also turns out to be very useful for dealing with holonomic and non-holonomic constraints within the Hamilton-Jacobi formalism [15]. In this connection we should also mention some recent relevant contributions [28,34] to the same subject.
The geometrization of the Hamilton-Jacobi problem presented here has many advantages over more conventional presentations. For example, as mentioned already in Sect.2.2, posing a "Hamilton-Jacobi problem" as the search for foliations of the energy surfaces by Lagrangian but not necessarily transversal submanifolds opens the possibility of implementing the full set of canonical symmetries as symmetries of the Hamilton-Jacobi problem as well. We have also shown elsewhere [38] how to deal, in the same fully geometric spirit, with transformations and symmetries for partial differential equations of the Monge-Ampère type. Using the present generalization to differential operators acting on sections of vector bundles, it should be possible to incorporate into the formalism more general PDEs than those of the above "Monge-Ampère" type.
Using our matrix-valued Hamiltonians it will be possible to deal with equations of the Wong type [8], i.e. equations describing particles interacting with Yang-Mills fields. Also, in this approach, treating, say, electrons moving in some monopole-like magnetic field, algebroids arise in quite a natural way. We shall postpone more details on these aspects to a forthcoming paper.