Non-commutative worlds

This paper presents a mathematical view of aspects of physics, showing how the forms of gauge theory, Hamiltonian mechanics and quantum mechanics arise from a non-commutative framework for calculus and differential geometry. This work is motivated by discrete calculus, as it is shown that classical discrete calculus embeds in a non-commutative context. It is shown how various processes are modeled by non-commutative discrete calculus, and how aspects of differential geometry, such as the Levi-Civita connection, arise naturally from commutator equations and the Jacobi identity. A new and generalized version of the Feynman–Dyson derivation of electromagnetic equations is given, with corresponding discrete models.


Introduction
We present a view of aspects of mathematical physics, showing how the forms of gauge theory, Hamiltonian mechanics and quantum mechanics arise from a non-commutative framework for calculus and differential geometry.
We assume that all constructions are performed in a Lie algebra A. One may take A to be a specific matrix Lie algebra, or an abstract Lie algebra. If A is taken to be an abstract Lie algebra, then it is convenient to use the universal enveloping algebra so that the Lie product can be expressed as a commutator. In making general constructions of operators satisfying certain relations, it is understood that we can always begin with a free algebra and make a quotient algebra where the relations are satisfied.
We build a variant of calculus on A by defining derivations as commutators (or more generally as Lie products). That is, for a fixed N in A we define ∇F = [F, N] = FN − NF for all F in A; then ∇ is a derivation on A. There are many motivations for replacing derivatives by commutators, or more generally by the derivations induced by multiplication in a Lie algebra. In section 2 we give a new motivation [1] in terms of the structure of classical discrete calculus. The idea behind this motivation is very simple. If f(x) denotes (say) a function of a real variable x, and f̃(x) = f(x + h) for a fixed increment h, then we can define the discrete derivative Df by the formula Df = (f̃ − f)/h, and one finds that in this classical discrete calculus the Leibniz rule is not satisfied. Instead, one has the basic formula for the discrete derivative of a product: D(fg) = D(f)g + f̃D(g).
We correct this deviation from the Leibniz rule by introducing a new non-commutative operator J with the property that fJ = Jf̃, and we define a new discrete derivative in an extended algebra by the formula ∇f = JD(f). It follows at once that ∇(fg) = JD(f)g + Jf̃D(g) = ∇(f)g + f∇(g). In the extended algebra, discrete derivatives are represented by commutators (indeed ∇f = [f, J/h]), and naturally satisfy the Leibniz rule. This mode of translation shows that we can regard models based on discrete calculus as a significant subset of non-commutative calculus based on commutators.
In A there are as many derivations as there are elements of the algebra, and these derivations behave quite wildly with respect to one another. If we have the abstract concept of curvature as the non-commutation of derivations, then A is a highly curved world indeed. Within A we shall build, in a natural way, a tame world of derivations that mimics the behaviour of flat coordinates in Euclidean space. We will then find that the description of the structure of A with respect to these flat coordinates contains many of the equations and patterns of mathematical physics.
Note that for any A, B, C in A we have the Jacobi identity [[A, B], C] + [[C, A], B] + [[B, C], A] = 0. Section 4 uses this identity to express curvature as the non-commutativity of derivations, leading to the formula R_ij = ∂_i A_j − ∂_j A_i + [A_i, A_j]. This is the well-known formula for the curvature of a gauge connection. Section 5 goes on to discuss how other aspects of geometry arise naturally in this context, including the Levi-Civita connection (which is seen as a consequence of the Jacobi identity in an appropriate non-commutative world). The section includes a discussion of the relationships of these structures with classical physics and the Poisson bracket.
Section 6 takes up the theme of the consequences of the commutator [X_i, Ẋ_j] = g_ij that we have already seen to produce the Levi-Civita connection for the generalized metric g_ij. Here we carry out a sharpening of the work of Tanimura [12], deriving the decomposition Ẍ_r = G_r + F_rs Ẋ^s + Γ_rst Ẋ^s Ẋ^t, where G_r is the analogue of a scalar field, F_rs is the analogue of a gauge field and Γ_rst is the Levi-Civita connection associated with g_ij. This decomposition of the acceleration is uniquely determined by the given framework. Section 7 revisits the Feynman-Dyson derivation of electromagnetism from commutator equations, showing that most of the derivation is independent of any choice of commutators, but highly dependent upon the choice of definitions of the derivatives involved. Without any assumptions about initial commutator equations, but taking the right (in some sense simplest) definitions of the derivatives, we prove a significant generalization of the Feynman-Dyson result. See theorem 7.5.
We then apply this result to produce many discrete models of the theorem. These models show that, just as the commutator [X,Ẋ] = Jk describes Brownian motion in one dimension, a generalization of electromagnetism describes the interaction of triples of time series in three dimensions.
Section 8 is a discussion of the Jacobi identity. We devote part of this section to a proof that general Poisson brackets (not assuming Hamilton's equations) satisfy the Jacobi identity. This is part of the thematic structure of this paper. We are investigating the relationship of physics and its variables. When the variables are commutative it is a classical matter to have precise locations and standard coordinates. When the 'variables' are non-commutative one gives up the notion of location in varying degrees, and gets the benefit of the extra mathematical structures in non-commutative worlds. The Poisson bracket is singular in that it is a way to produce a Lie algebra structure from the algebra of derivations of a commutative algebra. Thus the Poisson bracket is a key link between commutative and non-commutative worlds. This section is intended to make our story complete and to raise the question of how this connection really comes about. Section 9 is a diagrammatic extension of section 8. We show how, in a diagrammatic framework, the Jacobi identity can be articulated, and how it can arise from purely combinatorial grounds. These are hints of further discrete physics. Section 10 is an epilogue, discussing the themes of the paper.
Remark. This paper is essentially self-contained, and hence it is written in an elementary style. While there is a large literature on non-commutative geometry, emanating from the idea of replacing a space by its ring of functions, this paper is not written in that tradition. Noncommutative geometry does occur here, in the sense of geometry occurring in the context of non-commutative algebra. Derivations are represented by commutators. There are relationships between the present work and the traditional non-commutative geometry, but that is a subject for further exploration. In no way is this paper intended to be an introduction to that subject.
The following references on non-commutative calculus are useful for comparison with our approach [2]-[5]. Much of the present work is the fruit of a long series of discussions with Pierre Noyes (see [1] and [25]-[31]), and we will be preparing collaborative papers on it. I particularly thank Eddie Oshins for pointing out the relevance of minimal coupling. The paper [6] also works with minimal coupling for the Feynman-Dyson derivation. The first remark about the minimal coupling occurs in the original paper by Dyson [7], in the context of Poisson brackets. The paper [8] is worth reading as a companion to Dyson. In the present paper we generalize the minimal coupling to contexts including both commutators and Poisson brackets. It is the purpose of this paper to indicate how non-commutative calculus can be used in foundations.

Discrete derivatives become commutators
Consider a discrete derivative Df = (f(x + h) − f(x))/h. It is easy to see that D does not satisfy the Leibniz rule. In fact, if f̃(x) = f(x + h) and g̃(x) = g(x + h), one calculates that D(fg) = (f̃g̃ − fg)/h = ((f̃ − f)g + f̃(g̃ − g))/h, so that D(fg) = D(f)g + f̃D(g).
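A quick numerical check of the modified Leibniz rule (a sketch; the particular functions f, g and the increment h are arbitrary choices):

```python
# Discrete derivative Df = (f(x+h) - f(x))/h: the plain Leibniz rule fails,
# but the shifted rule D(fg) = D(f) g + f~ D(g) holds exactly, f~(x) = f(x+h).
h = 0.1
D = lambda f: (lambda x: (f(x + h) - f(x))/h)
shift = lambda f: (lambda x: f(x + h))

f = lambda x: x**2
g = lambda x: x**3
fg = lambda x: f(x)*g(x)

x = 1.7
plain = D(f)(x)*g(x) + f(x)*D(g)(x)           # ordinary Leibniz: wrong
shifted = D(f)(x)*g(x) + shift(f)(x)*D(g)(x)  # discrete Leibniz: exact
print(abs(D(fg)(x) - plain) > 1e-6)    # True: the plain rule fails
print(abs(D(fg)(x) - shifted) < 1e-9)  # True: the shifted rule holds
```

The shifted rule is an algebraic identity, so it holds to machine precision for any choice of f, g and h.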
In the limit as h goes to zero, f̃ approaches f and the Leibniz rule is restored. Now define a shift operator J that satisfies the equation fJ = Jf̃ for all f. Note that the existence of J is accomplished by taking the commutative algebra C that we started with, and extending it to the free product of C with an algebra generated by the symbol J, modulo the ideal generated by the elements fJ − Jf̃ for all f in C. Setting ∇f = JD(f), we have ∇(fg) = JD(f)g + Jf̃D(g) = ∇(f)g + f∇(g). Hence the adjusted derivative ∇ satisfies the Leibniz rule.
In fact, this adjusted derivative is a commutator in the algebra of functions C, extended by the operator J: ∇f = JD(f) = J(f̃ − f)/h = (fJ − Jf)/h. Hence ∇f = [f, J/h]. Thus ∇(x) = J. This underlines the fact that these derivatives now take values in a non-commutative algebra. Note however that, if x^(n) = x(x − h)(x − 2h) ··· (x − (n − 1)h) denotes the falling power, then ∇(x^(n)) = nJx^(n−1). Hence we can proceed in calculations with power series just as in ordinary discrete calculus, keeping in mind powers of J that are shifted to the left. That is, a typical power series should be expressed in terms of the falling powers x^(n): we would define f(x) = Σ_n a_n x^(n), so that ∇f = J Σ_n n a_n x^(n−1). The price paid for having the Leibniz rule restored and the derivatives expressed in terms of commutators is the appearance of factors of J on the left in final expressions of functions and derivatives.
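The falling-power identity D(x^(n)) = n x^(n−1) can likewise be checked numerically (a sketch; h, x and n are arbitrary choices):

```python
# Falling powers x^(n) = x (x-h) (x-2h) ... (x-(n-1)h) satisfy D(x^(n)) = n x^(n-1),
# mirroring d/dx x^n = n x^(n-1) in ordinary calculus.
h = 0.25

def falling(x, n):
    out = 1.0
    for j in range(n):
        out *= x - j*h
    return out

D = lambda f: (lambda x: (f(x + h) - f(x))/h)

x, n = 2.3, 5
lhs = D(lambda t: falling(t, n))(x)
rhs = n*falling(x, n - 1)
print(abs(lhs - rhs) < 1e-9)  # True
```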
Note that we have ∇(x) = J rather than 1. Of course, we can simply posit an element P with [x, P] = 1, but in fact we can redefine the construction so that this relation holds, taking P = J/h and replacing the shifted function by f̂(x) = f(x + J⁻¹). This double readjustment of the discrete derivative allows us to transfer standard calculus to an algebra of commutators. For physical applications, however, there remains a difficulty in adding a time variable t and allowing all the other elements of the algebra to be functions of time. If the derivative with respect to time is represented by commutation with H, then we cannot assume that H commutes with x. For this reason we will not proceed in the rest of the paper via this method of double readjustment. The cost of the double readjustment is that we must have a collection of functions in the original algebra C such that one can sensibly define f̂(x) = f(x + J⁻¹). Polynomial and power series functions have such natural extensions. For other function algebras it will be an interesting problem, in analysis and in algebra, to understand the structure of such extensions of commutative rings of functions to non-commutative rings of functions.

Time, discrete observation, Brownian walks and the simplest commutator
For temporal discrete derivatives there is a very neat interpretation of the shift operator of the previous section. Consider a time series {X, X′, X″, . . .} with commuting scalar values. Let Ẋ = J(X′ − X)/τ, where τ is an elementary time step. (If X denotes the value of the time series at time t, then X′ denotes the value of the series at time t + τ.) The shift operator J is defined by the equation XJ = JX′, where this refers to any point in the time series, so that X^(n)J = JX^(n+1) for any non-negative integer n. Moving J across a variable from left to right corresponds to one tick of the clock. We already know that this discrete, non-commutative time derivative satisfies the Leibniz rule.
This derivative also fits a significant pattern of discrete observation. Consider the act of observing X at a given time and the act of observing (or obtaining) DX at a given time. Since X and X′ are both ingredients in computing (X′ − X)/τ, the numerical value associated with DX, it is necessary to let the clock tick once. Thus, if we first observe X and then obtain DX, the result is different (for the X measurement) from what we find if we first obtain DX and then observe X. In the second case, we find the value X′ instead of the value X, due to the tick of the clock.
We then see that the evaluation of these expressions in the non-commutative calculus parallels the observational situation. The numerical evaluations for two such orderings are obtained by moving all occurrences of J all the way to the left. Thus we could write |JA| = A for an expression A in which J makes no appearance. Then |XẊ| = X′(X′ − X)/τ and |ẊX| = (X′ − X)X/τ.
Elsewhere [1] we have called this interpretation of the temporal discrete derivative the 'discrete ordered calculus' or DOC for short.
The commutator [X, Ẋ] expresses the difference between these two orders of discrete measurement. In the simplest case, where the elements of the time series are commuting scalars, we have [X, Ẋ] = XJ(X′ − X)/τ − J(X′ − X)X/τ = J(X′ − X)²/τ. Thus we can interpret the equation [X, Ẋ] = Jk as the condition (X′ − X)² = kτ. This means that the process is a walk with constant spatial step ℓ = |X′ − X|, where k is a constant. In other words, we have k = ℓ²/τ. We have shown that a walk with spatial step size ℓ and time step τ will satisfy the commutator equation above exactly when the square of the spatial step divided by the time step remains constant. This means that a given commutator equation can be satisfied by walks with arbitrarily small spatial step and time step, just so long as these steps are in this fixed ratio.
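A small simulation illustrates that the commutator equation constrains only the ratio ℓ²/τ, not the path taken (a sketch; the random seed and step data are arbitrary choices):

```python
import random

# A walk X(t + tau) = X(t) +/- l satisfies [X, Xdot] = J (X' - X)^2 / tau = J k
# with k = l**2/tau, regardless of the particular sequence of steps.
l, tau = 0.01, 0.0001                    # chosen so that k = l**2/tau = 1
random.seed(2)
X = [0.0]
for _ in range(1000):
    X.append(X[-1] + random.choice([-l, l]))

ks = [(X[m + 1] - X[m])**2 / tau for m in range(1000)]
print(min(ks), max(ks))                  # both ~1.0: the ratio is path-independent
```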
Remarkably, we can identify the constant k/2 as the diffusion constant for a Brownian process. To make this comparison, let us recall how the diffusion equation usually arises in discussing Brownian motion. We are given a Brownian process x(t + τ) = x(t) ± ℓ, so that the time step is τ and the space step is of absolute value ℓ. We regard the probability of left or right steps as equal, so that if P(x, t) denotes the probability that the Brownian particle is at point x at time t, then P(x, t + τ) = (1/2)(P(x − ℓ, t) + P(x + ℓ, t)). From this equation for the probability we can write a difference equation for the partial derivative of the probability with respect to time: (P(x, t + τ) − P(x, t))/τ = (ℓ²/2τ)[(P(x + ℓ, t) − 2P(x, t) + P(x − ℓ, t))/ℓ²]. The expression in brackets on the right-hand side is a discrete approximation to the second partial of P(x, t) with respect to x. Thus if the ratio C = ℓ²/2τ remains constant as the space and time intervals approach zero, then this equation goes in the limit to the diffusion equation ∂P/∂t = C ∂²P/∂x². C is called the diffusion constant for the Brownian process.
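The linear growth of the variance, variance = 2Ct with C = ℓ²/2τ, can be seen directly by iterating the probability recursion (a sketch in units where ℓ = τ = 1; the grid size and step count are arbitrary choices):

```python
import numpy as np

# Brownian probability recursion P(x, t+tau) = (P(x-1, t) + P(x+1, t))/2  (l = tau = 1).
# The variance of P grows linearly: variance = steps * l^2 = 2 C t with C = l^2/(2 tau).
N, steps = 2001, 400
P = np.zeros(N)
P[N//2] = 1.0                            # particle starts at the origin
for _ in range(steps):
    P = 0.5*(np.roll(P, 1) + np.roll(P, -1))

sites = np.arange(N) - N//2              # lattice positions
var = float(np.sum(P*sites**2))
print(var)                               # ~400.0, i.e. 2 C t with C = 1/2, t = 400
```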
The appearance of the diffusion constant from the observational commutator shows that this ratio is fundamental to the structure of the Brownian process itself, and not just to the probabilistic analysis of that process.

Planck's numbers, Schrödinger's equation and the diffusion equation
The Schrödinger equation iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² is formally a diffusion equation with imaginary diffusion constant iħ/2m, and in DOC terms it corresponds to the commutator equation [X, Ẋ] = Jik with k = ħ/m, hence to the relationship ħ = mℓ²/τ, where ℓ denotes a space interval, and τ denotes a time interval, as explained in the last section about the Brownian walk. With this we can ask: what space interval and time interval will satisfy this relationship? Remarkably, the answer is that this equation is satisfied when m is the Planck mass, ℓ is the Planck length and τ is the Planck time. For note that the walk corresponding to [X, Ẋ] = Jik has a step ℓ̂ with ℓ̂²/τ = ik. We can take ℓ̂ = √i ℓ, where ℓ is a real step-length, so that ℓ²/τ = ħ/m. This gives a Brownian walk in the complex plane with the correct DOC diffusion constant. However, the relationship of this walk with the Schrödinger equation is less clear because the ψ in that equation is not the probability for the Brownian process. To see a closer relationship we will take a different tack.
Consider a discrete function ψ(x, t) defined (recursively) by the following equation: ψ(x, t + τ) = (1 − i)ψ(x, t) + (i/2)(ψ(x + ℓ, t) + ψ(x − ℓ, t)). In other words, we are thinking here of a random 'quantum walk' where the amplitude for stepping right or stepping left is proportional to i, while the amplitude for not moving at all is proportional to (1 − i). It is then easy to see that ψ is a discretization of the free Schrödinger-type equation ∂ψ/∂t = i(ℓ²/2τ) ∂²ψ/∂x². Just note that ψ satisfies the difference equation (ψ(x, t + τ) − ψ(x, t))/τ = (i/2τ)(ψ(x + ℓ, t) − 2ψ(x, t) + ψ(x − ℓ, t)). This gives a direct interpretation of the solution to the Schrödinger equation as a limit of a sum over generalized Brownian paths with complex amplitudes. We can then reinterpret this in DOC terms by the equation [X, Ẋ] = J(ℓ²/τ) or [X, Ẋ] = 0, each of these contingencies happening probabilistically. For a different (and deeper) relationship between Brownian motion and quantum mechanics see [9].

DOC chaos
Along with the simple Brownian-motion solution to the one-dimensional commutator equation, there is a hierarchy of time series that solve this equation, with periodic and chaotic behaviour. These solutions can be obtained by taking Ẋ = J^(2n)Y, where Y takes numerical scalar values, and taking the commutator equation to be [X, Ẋ] = J^(2n+1)k, where k is a scalar. Expanding this equation yields a recursion for the time series, in which the update of Y_n refers to the value of the series that is n time steps back from Y_n. These recursions depend critically on the value of the parameter k. In the first case one sees periodic oscillations that (for appropriate values of k) destabilize and blow up, alternating between an unbounded phase and a bounded semi-periodic phase. We illustrate the first case for k = 0.0001 in figure 1 (a bounded phase) and for k = 0.009 in figure 2 (an unbounded phase). There is an intricate recursive structure in this hierarchy and it deserves further study.

Non-commutative calculus and Hamilton's equations
We now set up a framework for non-commutative calculus in an arbitrary number of dimensions. We shall assume that each derivative is represented by a commutator, and that the basic space and time derivatives commute with one another, as is customary for the flat space of standard multi-variable calculus. This production of a flat space for calculus forms a reference domain within the containing Lie algebra A. Since all derivatives are represented by commutators, this includes the time derivative as well. We shall assume that there is an element H in A representing the time derivative. This means that dA/dt = Ȧ = [A, H] for any A in A. Note that it follows at once from this choice that H itself is time independent, since dH/dt = [H, H] = 0. We shall see that H behaves formally like the Hamiltonian operator in classical mechanics.
We will assume that there is a set of coordinates {X_1, . . . , X_d} that are as flat as possible. It is assumed that the X_i all commute with one another, and that the derivatives with respect to them commute with one another. The partial derivatives with respect to X_i will be represented by a set of elements {P_1, . . . , P_d} with ∂_i F = [F, P_i] for any F in A. Since we want the equation ∂_i X_j = δ_ij, we need the equation [X_j, P_i] = δ_ij. Since we want ∂_i ∂_j = ∂_j ∂_i, and since (as we compute below, in the section on curvature) [∂_i, ∂_j]F = [F, [P_j, P_i]], we see that these partial derivatives will commute with one another exactly when [P_i, P_j] belongs to the centre of the algebra A for all choices of i and j.
For simplicity, we shall assume that [P_i, P_j] = 0 and [X_i, X_j] = 0. With these choices the flat coordinates satisfy [X_i, P_j] = δ_ij. Note that we also have ∂F/∂P_i = [X_i, F] for the partial derivatives with respect to the P_i, alongside ∂F/∂X_i = [F, P_i]. This formalism looks like bare quantum mechanics and can be so interpreted (if we take iħ dA/dt = [A, H] and H the Hamiltonian operator). But these coordinates can also be viewed as the simplest flat set of coordinates for referring the description of temporal phenomena in a non-commutative world. There are various things to note. For example, Ẋ_i = [X_i, H] = ∂H/∂P_i and Ṗ_i = [P_i, H] = −∂H/∂X_i. These are exactly Hamilton's equations of motion. The pattern of Hamilton's equations is built into the system!
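The pattern Ẋ = ∂H/∂P, Ṗ = −∂H/∂X can be checked with finite matrices (a sketch using truncated harmonic-oscillator operators, our choice; the truncation spoils the canonical commutator only in the last row and column, which we exclude from the comparison):

```python
import numpy as np

n = 40
# Truncated harmonic-oscillator ladder operator: a|k> = sqrt(k)|k-1>
a = np.diag(np.sqrt(np.arange(1, n)), 1)
X = (a + a.T.conj())/np.sqrt(2)
P = (a - a.T.conj())/(1j*np.sqrt(2))     # [X, P] = i away from the truncation corner
H = (P @ P + X @ X)/2                    # oscillator Hamiltonian (hbar = m = omega = 1)

comm_XH = X @ H - H @ X
comm_PH = P @ H - H @ P
# Away from the corner, [X, H] = iP and [P, H] = -iX: the operator analogues of
# dX/dt = dH/dP and dP/dt = -dH/dX, with i dA/dt = [A, H].
s = slice(0, n - 1)
print(np.allclose(comm_XH[s, s], 1j*P[s, s]))   # True
print(np.allclose(comm_PH[s, s], -1j*X[s, s]))  # True
```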

Hamilton's equations in classical mechanics
It is worth recalling how Hamilton's equations appear in classical mechanics. For simplicity, we shall restrict to one spatial variable q (the analogue of the operator X) and one momentum variable p (the analogue of the operator P). In classical mechanics in one space and one time dimension, we have the equations p = m(dq/dt), H = p²/2m + V(q), and dp/dt = −dV(q)/dq, where the first equation is the definition of the momentum of a particle of mass m, the second equation is the expression for the energy of the system as the sum of the kinetic energy p²/2m and the potential energy V(q), and the third equation is Newton's law of motion. We see that dq/dt = p/m = ∂H/∂p and dp/dt = −dV/dq = −∂H/∂q. These are Hamilton's equations of motion.
Hamilton went on to observe that for any function F of q and p, dF/dt = {F, H}, where {F, H} is the Poisson bracket defined by the equation {F, G} = (∂F/∂q)(∂G/∂p) − (∂F/∂p)(∂G/∂q). Remarkably, the Poisson bracket satisfies the Jacobi identity, and hence gives a Lie algebra structure on the commutative space of functions of position and momentum. We have shown in this section that the pattern of Hamilton's equations is inherent in the Lie algebra context. We shall have more to say about Poisson brackets and Hamilton's equations in section 5.
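The Jacobi identity for the Poisson bracket can be verified symbolically (a sketch; the test functions F, G, H below are arbitrary choices):

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(F, G):
    # Poisson bracket {F, G} = dF/dq dG/dp - dF/dp dG/dq
    return sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)

F = q**2*p + sp.sin(q)
G = p**3 - q*p
H = sp.exp(q)*p**2

jacobi = pb(pb(F, G), H) + pb(pb(H, F), G) + pb(pb(G, H), F)
print(sp.simplify(jacobi))  # 0
```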

Curvature
Note that for any A, B, C in A we have the Jacobi identity [[A, B], C] + [[C, A], B] + [[B, C], A] = 0. Let {∇_i} be derivations represented by elements P_i, so that ∇_i F = [F, P_i] for all F in A. Then [∇_i, ∇_j]F = [F, [P_j, P_i]] for all F in A. Hence the curvature of {∇_i} measures the deviation of the concatenations of these derivations from commutativity.

Proof. First, ∇_i∇_j F − ∇_j∇_i F = [[F, P_j], P_i] − [[F, P_i], P_j]. Hence, by the Jacobi identity, [∇_i, ∇_j]F = [F, [P_j, P_i]]. This proves the proposition.
This elementary notion of curvature for a collection of derivations just measures the extent to which they do not commute with one another. We have a collection X of elements X of the algebra, and corresponding derivations ∇_X, where ∇_X Z = [Z, P_X] for a corresponding set of elements P_X representing these derivations. This is exactly the situation of the main framework of this section, where the variables are the X_i and the representing elements are the P_i. In this case we have ∇_i Z = [Z, P_i], and the curvature of the collection is the collection of commutators {R_ij = [P_i, P_j]}. More generally, the elements of the collection X may not commute with one another. In this case, we shall define the curvature as an operator R(X, Y) defined on X × X by the equation R(X, Y) = ∇_X ∇_Y − ∇_Y ∇_X − ∇_[X,Y], in direct analogy with the usual definition in standard differential geometry. However, in order to do this we shall need a collection X closed under the commutator, so that ∇_[X,Y] is defined.
The following paragraphs outline one way to accomplish this end. We will build the notion of curvature in terms of a general concept of covariant differentiation. Let X denote a specific collection of elements of the algebra A that we shall refer to as the variables in A. We include in A a notion of time in the sense that there is the temporal derivative Ḟ = [F, H], and we designate a single special variable T to correspond to this temporal derivative. In general, to each variable X there is associated a derivative ∇_X with action ∇_X F = [F, P_X], where P_X represents this derivative. (Thus we take P_T = H.) In this generality, we make no assumptions about the commutativity of the variables or of the corresponding elements that represent the derivatives.
We define the following set S of 'scalar functions over X': S consists of elements f of A that commute with each other and with all the elements of X. We then consider the closure of X under addition and multiplication by elements of S; call this closure X̄. For f ∈ S and X ∈ X, define X[f] by the formula X[f] = ∇_X(f) = [f, P_X]. Note that even though f commutes with all the elements of X, it can still have non-trivial derivatives with respect to these variables.

Lemma 4.2.
We have the following properties: Proof. For the first property, note that For the second property, we have For the third property, note that This completes the proof.
Definition. We define the curvature as a function R on X × X given by R(X, Y) = ∇_X ∇_Y − ∇_Y ∇_X − ∇_[X,Y]. Thus for given elements X, Y ∈ X, the curvature operator R(X, Y) measures the non-commutativity of the operators ∇_X and ∇_Y in relation to the non-commutativity of X and Y. If X and Y commute, then R(X, Y) = ∇_X ∇_Y − ∇_Y ∇_X, and we are returned to our initial definition of curvature for a collection of derivations.

General equations of motion
Given a set of coordinates {X_1, X_2, . . . , X_d} and dual coordinates {P_1, P_2, . . . , P_d} as in the previous section, a general description of dX_i/dt takes the form of a system of equations dX_i/dt = P_i − A_i, where the A_i express the deviation of the dynamics from the flat background. If we set ∇_i F = [F, P_i − A_i], then we have the curvature R_ij = [P_i − A_i, P_j − A_j] = ∂_i A_j − ∂_j A_i + [A_i, A_j]. This is the well-known formula that expresses the gauge field as the curvature of the gauge connection. From this point of view everything comes naturally from the assumption that all derivatives are represented by commutators, and that one refers all equations to the flat background coordinates.
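The gauge-curvature formula can be checked symbolically for matrix-valued gauge potentials (a sketch; here we use the convention D_i = ∂_i + A_i, and the 2 × 2 matrices of arbitrary functions are our choices):

```python
import sympy as sp

x, y = sp.symbols('x y')

def mfun(name):
    # 2x2 matrix whose entries are arbitrary functions of x and y
    return sp.Matrix(2, 2, lambda i, j: sp.Function(f'{name}{i}{j}')(x, y))

A1, A2 = mfun('a'), mfun('b')
v = sp.Matrix(2, 1, lambda i, _: sp.Function(f'v{i}')(x, y))

D1 = lambda w: w.diff(x) + A1*w   # covariant derivative D_i = d_i + A_i
D2 = lambda w: w.diff(y) + A2*w

lhs = D1(D2(v)) - D2(D1(v))                       # [D_1, D_2] v
F12 = A2.diff(x) - A1.diff(y) + A1*A2 - A2*A1     # d_1 A_2 - d_2 A_1 + [A_1, A_2]
print(sp.simplify(lhs - F12*v))                   # zero matrix
```

The first-derivative terms cancel, leaving exactly the curvature acting on the test vector.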

Curvature and connection at the next level
The dynamical law is Ẋ_i = [X_i, H]. This gives rise to new commutation relations [X_i, Ẋ_j] = g_ij, where this equation defines g_ij, and [X_i, X_j] = 0. We define the 'covariant derivative' ∂̂_j F = [F, Ẋ_j], while we can still write Ḟ = [F, H]. It is natural to think that g_ij is analogous to a metric. This analogy is strongest if we assume that [X_k, g_ij] = 0, that is, that the spatial coordinates commute with the metric coefficients. Here, however, we shall let the commutator of Ẋ_k with g_ij remain general, not assuming this commutator to vanish. A stream of consequences then follows by differentiating both sides of the equation [X_i, Ẋ_j] = g_ij. We will detail these consequences in section 6. For now, we show how the form of the Levi-Civita connection appears naturally in this context. In the following we shall use D as an abbreviation for d/dt.
The Levi-Civita connection associated with g_ij comes up almost at once from the differentiation process described above. To see how this happens, we apply the operator ∂̂_i ∂̂_j to the second time derivative of X_k and simplify using the Leibniz rule. The symmetrized combination Γ_kij = (1/2)(∂̂_i g_jk + ∂̂_j g_ik − ∂̂_k g_ij), which is precisely the form of the Levi-Civita connection (Christoffel symbols of the first kind), emerges from the computation.
It is remarkable that the form of the Levi-Civita connection comes up directly from this non-commutative calculus without any a priori geometric interpretation.
The upshot of this derivation is that it confirms our interpretation of g_ij as an abstract form of metric (in the absence of any actual notion of distance in the non-commutative world). This calls for a re-evaluation and reconstruction of differential geometry based on non-commutativity and the Jacobi identity. This is differential geometry in which the fundamental concept is no longer parallel translation, but rather a non-commutative version of a physical trajectory. This approach will be the subject of a separate paper. At this stage we face the mystery of the appearance of the Levi-Civita connection. There is a way to see that the appearance of this connection is not an accident, but rather quite natural. We shall explain this point of view in the next subsection, where we discuss Poisson brackets and the connection of this formalism with classical physics. On the other hand, we have seen in this section that it is quite natural for curvature, in the form of the non-commutativity of derivations, to appear at the outset in a non-commutative formalism. We have also seen that this curvature and connection can be understood as a measurement of the deviation of the theory from the 'flat' commutation relations of ordinary quantum mechanics. Electromagnetism and Yang-Mills theory can be seen as the theory of the curvature introduced by such a deviation. From the point of view of metric differential geometry, the Levi-Civita connection is the unique connection that preserves the inner product defined by the metric under the parallel translation defined by the connection. We would like to see that the formal Levi-Civita connection produced here has this property as well.
To this end let us recall the formalism of parallel translation. The infinitesimal parallel translate of A has components A^k − Γ^k_ij A^i dx^j. Here we are writing in the usual language of vectors and differentials, with the Einstein summation convention for repeated indices. We assume that the Christoffel symbols satisfy the symmetry condition Γ^k_ij = Γ^k_ji. The inner product is given by the formula ⟨A, B⟩ = g_ij A^i B^j. Note that here the bare symbols denote vectors whose coordinates may be indicated by indices. The requirement that this inner product be invariant under parallel displacement is the requirement that δ(g_ij A^i A^j) = 0. Calculating, one finds that this invariance condition, together with the symmetry of the connection, determines the Christoffel symbols uniquely: Γ_kij = (1/2)(∂_i g_jk + ∂_j g_ik − ∂_k g_ij). Note that this Γ_kij corresponds to the Γ of lemma 5.1. Certainly these notions of variation can be imported into our abstract context. The question remains how to interpret the new connection that arises. We now have a new covariant derivative incorporating the Christoffel symbols, and the question is how the curvature of this connection interfaces with the gauge potentials that gave rise to the metric in the first place. The theme of this investigation has the flavour of gravity theories with a gauge-theoretic background. We will investigate these relationships in a sequel to this paper.
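The resulting formula for the Christoffel symbols can be checked in a familiar classical example (a sketch; the plane in polar coordinates is our choice of metric):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # plane metric ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_ij = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
    return sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, j], coords[i])
                                       + sp.diff(g[l, i], coords[j])
                                       - sp.diff(g[i, j], coords[l]))/2
                           for l in range(2)))

print(christoffel(0, 1, 1))  # -r
print(christoffel(1, 0, 1))  # 1/r
```

The printed values Γ^r_θθ = −r and Γ^θ_rθ = 1/r are the standard polar-coordinate Christoffel symbols.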

Poisson brackets and commutator brackets
Dirac [10] introduced a fundamental relationship between quantum mechanics and classical mechanics that is summarized by the maxim replace Poisson brackets by commutator brackets. In making this backwards journey to classical physics we see how our earlier assertion, that the bare mechanics of commutators can be regarded as the background for the coupling with other fields (as in the description of formal gauge theory), fits with Poisson brackets. The bare Poisson brackets satisfy {q_i, q_j} = 0, {p_i, p_j} = 0 and {q_i, p_j} = δ_ij. In our previous formalism, we would identify X_i as the correspondent of q_i and P_j as the correspondent of p_j. And, given a classical vector potential A, we could write the coupling dq_i/dt = p_i − A_i to describe the motion of a particle in the presence of an electromagnetic field. Similar remarks apply to the analogues for gauge theory and curvature. In particular, it is of interest to see that our derivation of the Levi-Civita connection corresponds to the motion of a particle in generalized coordinates that satisfies Hamilton's equations. The fact that such a particle moves in a geodesic according to the Levi-Civita connection is a classical fact. Our derivation of the Levi-Civita connection, interpreted in Poisson brackets, reproduces this result. To see how this works, let ds² = g_ij dx^i dx^j denote the metric in the generalized coordinates x^k. Then the square of the velocity of the particle is v² = (ds/dt)² = g_ij ẋ^i ẋ^j. The Lagrangian for the system is the kinetic energy L = mv²/2 = m g_ij ẋ^i ẋ^j /2. Then the canonical momentum is p_j = ∂L/∂ẋ^j = m g_ij ẋ^i, and with q^i = x^i we have the Poisson brackets {x^i, p_j} = δ^i_j. Taking m = 1 for simplicity, we can rewrite this bracket as {x^i, g_jk ẋ^k} = δ^i_j. This, in Poisson brackets, is our generalized equation of motion.
The classical derivation applies Lagrange's equations of motion to the system. Lagrange's equation reads d/dt(∂L/∂ẋ^j) − ∂L/∂x^j = 0. Since this equation is equivalent to Hamilton's equations of motion, it follows that the Poisson brackets satisfy the Leibniz rule under time differentiation. With this, we can proceed with our derivation of the Levi-Civita connection in relation to the acceleration of the particle. In the classical derivation, one writes out the Lagrange equation and solves for the acceleration. The advantage of using only the Poisson brackets is that it shows the relationship of the connection with the Jacobi identity and the Leibniz rule.
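The claim that Hamiltonian motion for H = g^{ij} p_i p_j /2 is geodesic can be illustrated numerically (a sketch; the polar-coordinate free particle and the integrator step are our choices):

```python
import numpy as np

# Free particle on the plane in polar coordinates: H = p_r^2/2 + p_th^2/(2 r^2).
# Hamilton's equations; the solutions are geodesics (straight lines) of the flat metric.
def rhs(s):
    r, th, pr, pth = s
    return np.array([pr, pth/r**2, pth**2/r**3, 0.0])

def rk4_step(s, dt):
    k1 = rhs(s); k2 = rhs(s + dt*k1/2); k3 = rhs(s + dt*k2/2); k4 = rhs(s + dt*k3)
    return s + dt*(k1 + 2*k2 + 2*k3 + k4)/6

s = np.array([1.0, 0.0, 0.0, 1.0])   # r = 1, theta = 0, p_r = 0, p_theta = 1
xs = []
for _ in range(1000):
    s = rk4_step(s, 1e-3)
    xs.append(s[0]*np.cos(s[1]))     # Cartesian x = r cos(theta)
print(max(abs(x - 1.0) for x in xs))  # stays on the line x = 1, up to integrator error
```

With this initial data the velocity at (1, 0) is purely tangential, so the exact geodesic is the vertical line x = 1.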
This discussion raises further questions about the nature of the generalization that we have made. Originally, Hermann Weyl [11] generalized classical differential geometry and discovered gauge theory by allowing changes of length as well as changes of angle to appear in the holonomy.
Here we arrive at a similar situation via the properties of a non-commutative discrete calculus of observations.

Consequences of the metric
In this section we shall follow the formalism of the metric commutator equation [X_i, Ẋ_j] = g_ij very far in a semi-classical context. That is, we shall set up a non-commutative world, and we shall make assumptions about the non-commutativity that bring the operators into close analogy with variables in standard calculus. In particular, we shall regard an element F of the Lie algebra to be a 'function of the X_i' if F commutes with the X_i, and we shall assume that if F and G commute with the X_i, then F and G commute with each other. We call this the principle of commutativity. With these background assumptions, it is possible to get a very sharp result about the behaviour of the theory. In particular, the results of this section sharpen the work in [12], where special orderings and averages of orderings of the operators were needed to obtain analogous results.
We assume that [X_i, X_j] = 0. We assume that there exists a g_ij with [X_i, Ẋ_j] = g_ij and [X_k, g_ij] = 0. We also assume that if [A, X_i] = 0 for all i and [B, X_i] = 0 for all i, then [A, B] = 0. To say that [A, X_i] = 0 is to say the analogue of the statement that A is a function only of the variables X_i and not a function of the Ẋ_j. This is a strong assumption about the algebraic structure, and it will not be made when we look at strictly discrete models. It is, however, exactly the assumption that brings the non-commutative algebra closest to the classical case of functions of positions and momenta.
The main result of this section will be a proof that Ẍ_r = G_r + F_rs Ẋ^s + Γ_rst Ẋ^s Ẋ^t, and that this decomposition of the acceleration is uniquely determined by the given framework. We can regard this result as a description of the motion of the non-commutative particle influenced by a scalar field G_r, a gauge field F_rs, and geodesic motion with respect to the Levi-Civita connection corresponding to g_ij. Let us begin. Note that, as before, we have that g_ij = g_ji, by taking the time derivative of the equation [X_i, X_j] = 0. Note also that the Einstein summation convention (summing over repeated indices) is in effect when we write equations, unless otherwise specified.
As before, we define ∂_i(F) = [F, Ẋ^i], where Ẋ^i = g^ij Ẋ_j. Note that we do not assume the existence of a variable X^j whose time derivative is Ẋ^j. Note that we have Ẋ_k = g_ki Ẋ^i.
Note that it follows at once that [X_i, Ẋ^j] = g^jk [X_i, Ẋ_k] = g^jk g_ik = δ_i^j. We assume the following postulate about the time derivative of an element F with [X_i, F] = 0 for all i: Ḟ = ∂_i(F) Ẋ^i. This is in accord with the concept that F is a function of the variables X_i. Note that in one interpretation of this formalism, one of the variables X_i could itself be a time variable. In the next section, we shall return to three dimensions of space and one dimension of time, with a separate notation for the time variable. Here there is no restriction on the number of independent variables X_i.
We have the following lemma.

Lemma 6.1.
1. ∂_i(X_j) = δ_ij.
2. ∂_i(FG) = ∂_i(F)G + F∂_i(G) for all F and G.

Proof. The first part follows from ∂_i(X_j) = [X_j, Ẋ^i] = δ_j^i. The second part of the proposition is an application of the Leibniz rule for commutators: [FG, Ẋ^i] = [F, Ẋ^i]G + F[G, Ẋ^i]. This completes the proof of the lemma.
It follows from this lemma that ∂_i can be regarded as ∂/∂X_i. We have seen that it is natural to consider the commutator of the velocities R_ij = [Ẋ_i, Ẋ_j] as a field or curvature. For the present analysis, we would prefer the field to commute with all the variables X_k, in order to identify it as a 'function of the variables X_k'. We shall find, by a computation, that R_ij does not so commute, but that a compensating factor arises naturally. The result is as follows.

Proposition 6.2. Let F_rs = [Ẋ_r, Ẋ_s] + (∂_r g_ks − ∂_s g_kr)Ẋ^k and F^rs = [Ẋ^r, Ẋ^s]. Then
1. F_rs and F^rs commute with the variables X_k.
2. F_rs = g_ri g_sj F^ij.

Proof. We begin by computing the commutator of X_i and R_rs = [Ẋ_r, Ẋ_s] by using the Jacobi identity.
Hence [X_i, (∂_r g_ks − ∂_s g_kr)Ẋ^k] = (∂_r g_ks − ∂_s g_kr)[X_i, Ẋ^k] = ∂_r g_is − ∂_s g_ir.
Therefore [X_i, F_rs] = [X_i, [Ẋ_r, Ẋ_s] + (∂_r g_ks − ∂_s g_kr)Ẋ^k] = 0. This, and a similar computation that we leave for the reader, proves the first part of the proposition. We prove the second part by direct computation. Note the following identity: [A, BC] = [A, B]C + B[A, C]. Using this identity to move the metric coefficients past the velocities, we find F_rs = g_ri g_sj F^ij. This completes the proof of the proposition.
We now consider the full form of the acceleration terms Ẍ_r. With Γ_rst = (1/2)(∂_s g_rt + ∂_t g_rs − ∂_r g_st) the Levi-Civita connection associated with g_ij, we define G_r by the formula

Ẍ_r = G_r + F_rsẊ^s + Γ_rstẊ^sẊ^t.

Proposition 6.3. Let Γ_rst and G_r be defined as above. Then both Γ_rst and G_r commute with the variables X_i.
Proof. Since we know that [X_l, ∂_i g_jk] = 0, it follows at once that [X_l, Γ_rst] = 0. It remains to examine the commutator [X_l, G_r]. We have [X_l, G_r] = [X_l, Ẍ_r] − [X_l, F_rsẊ^s] − [X_l, Γ_rstẊ^sẊ^t]. It is easy to see that Γ_rltẊ^t = Γ_rslẊ^s, by the symmetry of Γ_rst in its last two indices. Hence [X_l, Γ_rstẊ^sẊ^t] = 2Γ_rslẊ^s, while [X_l, F_rsẊ^s] = F_rl. On the other hand, differentiating [X_l, Ẋ_r] = g_lr with respect to time gives [X_l, Ẍ_r] = ġ_lr − [Ẋ_l, Ẋ_r], and a direct expansion using ġ_lr = ∂_k(g_lr)Ẋ^k and the definition of F_rl shows that all terms cancel, so that [X_l, G_r] = 0. This completes the proof of the proposition.
We now know that G_r, F_rs and Γ_rst commute with the variables X_k. As we now shall see, the formula Ẍ_r = G_r + F_rsẊ^s + Γ_rstẊ^sẊ^t allows us to extract these functions from Ẍ_r by differentiating with respect to the dual variables, that is, by taking commutators with the X_i. We already know that [X_i, [X_j, Ẍ_k]] = 2Γ_kij, and now note that [X_j, Ẍ_k] = F_kj + 2Γ_kjtẊ^t, so that F_rs and then G_r are determined in turn. We see now that the decomposition Ẍ_r = G_r + F_rsẊ^s + Γ_rstẊ^sẊ^t of the acceleration is uniquely determined by these conditions. Since Γ_rst is the Levi-Civita connection associated with g_ij, we can regard this result as a description of the motion of the non-commutative particle influenced by a scalar field G_r, a gauge field F_rs, and geodesic motion with respect to the Levi-Civita connection corresponding to g_ij. The structural appearance of all of these physical aspects is a mathematical consequence of the choice of non-commutative framework.
Remark. It follows from the Jacobi identity that F^ij = g^ir g^js F_rs satisfies an analogue of the Bianchi identity, identifying F^ij as a non-commutative analogue of a gauge field. G_i is a non-commutative analogue of a scalar field. The derivation in this section generalizes the Feynman-Dyson derivation of non-commutative electromagnetism [7], where g_ij = δ_ij. In the next section we will say more about the Feynman-Dyson result. The results of this section sharpen considerably an approach of Tanimura [12]. In Tanimura's paper, normal-ordering techniques are used to handle the algebra.
In the derivation given above, we have used straight non-commutative algebra, just as in the original Feynman-Dyson derivation.

Remark.
It is interesting to note that we can rewrite the equation of motion by substituting the expression for F_rs and recollecting the terms. The reader may enjoy trying her hand at other ways to reorganize this data. It is important to note that in the first form of the equation, the basic terms G_r, F_rs and Γ_rst commute with the coordinates X_k. It is this decomposition into parts that commute with the coordinates that guides the structure of this formula in the non-commutative context.

An abstract version of the Feynman-Dyson derivation
In this section we assume that specific time-varying coordinate elements X_1, X_2, X_3 of the algebra A are given. We do not assume any commutation relations among X_1, X_2, X_3. We define the field H = Ẋ × Ẋ, where × is the non-commutative vector cross product defined below. The field H is an analogue of the magnetic field in electromagnetic theory and should not be confused with our earlier notation for the Hamiltonian.
In this section we no longer avail ourselves of the principle of commutativity that is behind the original Feynman-Dyson derivation. (See the last section.) We do not base the derivation to follow on any particular commutation relations among the variables X_i, but we do take the definitions of the derivations that we use from that previous context. Surprisingly, the result is very similar to that of Feynman and Dyson, as we shall see.
Here A × B is the non-commutative vector cross product: (A × B)_k = Σ_{i,j} ε_ijk A_i B_j. (We will drop this summation sign for vector cross products from now on.) Then H = Ẋ × Ẋ has components H_k = ε_ijk Ẋ_i Ẋ_j. We define the field E by the equation E = ∂_tẊ, with ∂_t the temporal derivative defined below. We will see that E and H obey a generalization of the Maxwell equations, and that this generalization describes specific discrete models. The reader should note that this means that a significant part of the form of electromagnetism is a consequence of choosing three coordinates of space, and the definitions of spatial and temporal derivatives with respect to them. The background process that is being described is otherwise arbitrary, and yet appears to obey physical laws once these choices are made.
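The non-commutative cross product can be explored concretely. The following sketch (our illustration, not part of the original derivation) realizes algebra elements as random matrices in NumPy and checks that H = Ẋ × Ẋ need not vanish, in contrast with the commutative case; the names `cross` and `Xdot` are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Levi-Civita epsilon tensor
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def cross(A, B):
    """Non-commutative cross product: (A x B)_k = eps_ijk A_i B_j,
    with A, B length-3 lists of matrices standing in for algebra elements."""
    return [sum(eps[i, j, k] * (A[i] @ B[j])
                for i in range(3) for j in range(3)) for k in range(3)]

# Random matrices play the role of the velocity operators Xdot_i.
Xdot = [rng.standard_normal((4, 4)) for _ in range(3)]
H = cross(Xdot, Xdot)  # analogue of the magnetic field

# For commuting entries X x X would vanish; here it generally does not:
print(max(np.linalg.norm(Hk) for Hk in H) > 1e-8)  # True
```

Note that each component H_k reduces to a commutator, e.g. H_3 = Ẋ_1Ẋ_2 − Ẋ_2Ẋ_1, which is why the field vanishes exactly when the velocities commute.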

Remarks on the derivatives.
1. Since we do not assume that [X_i, Ẋ_j] = δ_ij, nor do we assume [X_i, X_j] = 0, it will not follow that E and H commute with the X_i.
2. We continue to define ∂_i(F) = [F, Ẋ_i], and the reader should note that these spatial derivations are no longer flat in the sense of section 4 (nor were they in the original Feynman-Dyson derivation).
3. We define ∂_t = ∂/∂t by the equation

Ḟ = ∂_t F + Σ_i Ẋ_i ∂_i(F)

for all elements or vectors of elements F. We take this equation as the global definition of the temporal partial derivative, even for elements that do not commute with the X_i. This notion of temporal partial derivative ∂_t is the least relation that we can write to describe the temporal relationship of an arbitrary non-commutative vector F and the non-commutative coordinate vector X.

4. In defining

∂_t F = Ḟ − Σ_i Ẋ_i ∂_i(F),

we are using the definition itself to obtain a notion of the variation of F with respect to time. The definition itself creates a distinction between space and time in the non-commutative world.
5. The reader will have no difficulty verifying the following formula:

∂_t(FG) = ∂_t(F)G + F∂_t(G) + Σ_i ∂_i(F)∂_i(G).

This formula shows that ∂_t does not satisfy the Leibniz rule in our non-commutative context. This is true for the original Feynman-Dyson context, and for our generalization of it. All derivations in this theory that are defined directly as commutators do satisfy the Leibniz rule. Thus ∂_t is an operator in our theory that does not have a representation as a commutator.

We define divergence and curl by the equations ∇ · F = Σ_i ∂_i(F_i) and (∇ × F)_k = ε_ijk ∂_i(F_j).
We now prove a few useful formulae about the vector products. Firstly, we have the basic identity about the epsilon: ε_abi ε_cdi = δ_ac δ_bd − δ_ad δ_bc (with summation on the repeated index i). The proof of this identity is left to the reader. The identity itself will be referred to as the epsilon identity. The epsilon identity is a key structure in the work of this section, and indeed in all formulae involving the vector cross product.
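The epsilon identity, ε_abi ε_cdi = δ_ac δ_bd − δ_ad δ_bc (summed on i), admits a quick numerical check, offered here as an aside rather than a proof:

```python
import numpy as np

# Levi-Civita tensor
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

delta = np.eye(3)

# epsilon identity: eps_abi eps_cdi = delta_ac delta_bd - delta_ad delta_bc
lhs = np.einsum('abi,cdi->abcd', eps, eps)
rhs = (np.einsum('ac,bd->abcd', delta, delta)
       - np.einsum('ad,bc->abcd', delta, delta))
print(np.allclose(lhs, rhs))  # True
```

Since the identity involves only the fixed tensors ε and δ, checking all 81 index choices exhausts the cases.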

Proof. Note that
This completes the proof of the lemma.

Lemma 7.3. Let A be any vector of three elements of the algebra A. Then (A × A)_k = (1/2) ε_ijk [A_i, A_j], so that A × A need not vanish in the non-commutative setting.
Proof. We shall use the summation convention for repeated indices in this calculation.
This completes the proof of the lemma.

Lemma 7.4. For A and B any elements in the algebra A,
Proof.
This completes the proof of the lemma.
Remark. This lemma, and the observation that the formula in the lemma works in the noncommutative context is due to the author and Keith Bowden in conversations around 1999. See [13]. We now give the generalization of the Feynman-Dyson result in this formalism.

Theorem 7.5. With the above definitions of the operators, and taking ∇² = ∂_1² + ∂_2² + ∂_3², H = Ẋ × Ẋ and E = ∂_tẊ, we have:
1. ∇ · H = 0,
2. ∂_t H + ∇ × E = H × H,
3. ∂_t E − ∇ × H = (∂_t² − ∇²)Ẋ.
Remark. Note that this theorem is a non-trivial generalization of the Feynman-Dyson derivation of electromagnetic equations. In the Feynman-Dyson case, one assumes that the commutation relations [X_i, X_j] = 0 and [X_i, Ẋ_j] = δ_ij are given, and that the principle of commutativity is in place, so that if A and B commute with the X_i then A and B commute with each other. One then can interpret ∂_i as a standard derivative with ∂_i(X_j) = δ_ij. Furthermore, one can verify that E_j and H_j both commute with the X_i. From this it follows that ∂_t(E) and ∂_t(H) have standard interpretations and that H × H = 0. The above formulation of the theorem adds the description of E as ∂_t(Ẋ), a non-standard use of ∂_t in the original context of Feynman-Dyson, where ∂_t would only be defined for those A that commute with the X_i. In the same vein, the last formula ∂_t E − ∇ × H = (∂_t² − ∇²)Ẋ gives a way to express the remaining Maxwell equation in the Feynman-Dyson context.

Proof of theorem. We begin by calculating Ḣ = Ẍ × Ẋ + Ẋ × Ẍ, using the Leibniz rule for the global time derivative.
Hence This follows from lemma 7.1. Hence This completes the proof of the first part.
since it is easy to verify that (A × B) · C = A · (B × C) for the non-commutative vector cross product. Now, using the formula for ∇ × (A × B), we obtain the required expression. Note that ∇ · Ẋ = Σ_i [Ẋ_i, Ẋ_i] = 0 and that ∇ · H = 0. Thus the result follows.
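The identity ∇ · H = 0 can be tested numerically. In the sketch below (our illustration), the derivations ∂_i are taken as commutators with the Ẋ_i, as in the text, and the Ẋ_i are random matrices; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

comm = lambda a, b: a @ b - b @ a

def cross(A, B):
    # non-commutative cross product, (A x B)_k = eps_ijk A_i B_j
    return [sum(eps[i, j, k] * (A[i] @ B[j])
                for i in range(3) for j in range(3)) for k in range(3)]

def div(F, Xdot):
    # divergence via the commutator derivations: div F = sum_i [F_i, Xdot_i]
    return sum(comm(F[i], Xdot[i]) for i in range(3))

Xdot = [rng.standard_normal((5, 5)) for _ in range(3)]
H = cross(Xdot, Xdot)

print(np.linalg.norm(div(H, Xdot)) < 1e-10)  # True: div H = 0 identically
```

The vanishing here is an algebraic identity (a cyclic relabelling of the summed indices), not a numerical accident, which is why it holds for arbitrary matrices.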

Now note that
The last part of the theorem follows immediately from this formula. This completes the proof.
Remark. Note the role played by the epsilon tensor ε_ijk throughout the construction of generalized electromagnetism in this section. The epsilon tensor is the structure constant for the Lie algebra of the rotation group SO(3). If we replace the epsilon tensor by a structure constant f_ijk for a Lie algebra G of dimension d such that the tensor is invariant under cyclic permutation (f_ijk = f_kij), then most of the work in this section goes over to that context. We would then have d operator/variables X_1, ..., X_d and a generalized cross product defined on vectors of length d by the equation (A × B)_k = f_ijk A_i B_j. The Jacobi identity for the Lie algebra G implies that this cross product will satisfy a corresponding extension of the Jacobi identity. This extension of the Jacobi identity holds as well for the case of a non-commutative cross product defined by the epsilon tensor. The reader will enjoy looking back over this section and seeing that we can still carry theorem 7.5 up to a corresponding conclusion, with E defined directly by the equation of motion rather than by E = ∂_tẊ, as the latter depends upon the specific properties of the epsilon.
It is therefore of interest to explore the structure of generalized non-commutative electromagnetism over other Lie algebras (in the above sense). This will be the subject of another paper.

The original Feynman-Dyson derivation and its gauge-theoretic context
The original Feynman-Dyson derivation [1], [6]-[8] assumes that we have three variables {X_1, X_2, X_3} and the commutation relations [X_i, X_j] = 0 and [X_i, Ẋ_j] = δ_ij. It is also assumed that if A and B commute with the X_i, then A and B commute with each other. That is, A and B are then 'functions of X_i'. We have called this the principle of commutativity. With these assumptions one proves that, with H = Ẋ × Ẋ (non-commutative vector cross product) and E defined by the Lorentz force law Ẍ = E + Ẋ × H, the fields E and H satisfy the homogeneous Maxwell equations ∇ · H = 0 and ∂_t H + ∇ × E = 0, where the differential operators have been described in detail (in the non-commutative framework) in this section. A key to the original demonstration is the principle of commutativity, providing a compass for comparing the results with the context of classical calculus. In this section we have seen that an abstraction of the Feynman-Dyson argument provides a serious generalization that encompasses a number of discrete models (to be discussed below). In this subsection, we compare the Feynman-Dyson framework with our already-constructed formalism of non-commutative gauge theory. We use the dynamics as before, and restrict to the case where [X_i, A_j] = 0; this is the domain to which the original Feynman-Dyson derivation applies. Note that even under these restrictions we are still looking at the possibility of a non-abelian gauge field. The pure electromagnetic case is when the commutators of A_i and A_j vanish. In the Feynman-Dyson context, this commutator does vanish, since it is given that [X_i, A_j] = 0 for all i and j, and the principle of commutativity applies.

With this interpretation, E is defined by the Lorentz force law

Ẍ = E + Ẋ × H,

where H represents the magnetic field. To see how this works, suppose that Ẍ_i = E_i + F_ijẊ_j, and suppose that E_i and F_ij commute with the X_k. Then we can compute [X_k, Ẍ_i] by differentiating [X_k, Ẋ_i] = δ_ki with respect to time. This implies that F_ij = R_ij = [Ẋ_i, Ẋ_j]. It is then easy to verify that the Lorentz force equation is satisfied with H_k = ε_ijk R_ij, and that in this case [A_i, A_j] = 0 leads directly to standard electromagnetic theory when the bracket is a Poisson bracket. When this bracket is not zero but the potentials A_i are functions only of the X_j, we can look at a generalization of gauge theory where the non-commutativity comes from internal Lie algebra parameters. This shows how a shift of the original Feynman-Dyson derivation supports generalizations of classical electromagnetism.

Discrete thoughts
In the hypotheses of the above theorem, we are free to take any non-commutative world, and the theorem will be satisfied in that world. For example, we can take each X_i to be an arbitrary time series of real or complex numbers, or bit strings of zeroes and ones. The global time derivative is defined by Ḟ = [F, J], where J is an operator satisfying FJ = JF′ and F′ denotes the time-shifted series. This is the non-commutative discrete context discussed in sections 2 and 3. We will write Ḟ = JΔ(F), where Δ(F) = F′ − F denotes the classical discrete derivative. With this interpretation, X is a vector with three real or complex coordinates at each time, and H = Ẋ × Ẋ = J²(Δ(X′) × Δ(X)). Note how the non-commutative vector cross products are composed through time shifts in this context of temporal sequences of scalars. The advantage of the generalization now becomes apparent. We can create very simple models of generalized electromagnetism with only the simplest of discrete materials. In the case of the model in terms of triples of time series, the generalized electromagnetic theory is a theory of measurements of the time series whose key quantities are Δ(X′) × Δ(X) and Δ(X″) × (Δ(X′) × Δ(X)).
It is worth noting the forms of the basic derivations in this model. We have, assuming that F is a commuting scalar (or vector of scalars) and taking Δ_i = X_i′ − X_i,

∂_i F = [F, Ẋ_i] = [F, JΔ_i] = JΔ_iΔ(F),

and the temporal derivative ∂_t F = Ḟ − Σ_i Ẋ_i ∂_i(F) likewise unfolds into time shifts and discrete differences.
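The classical discrete derivative underlying this model satisfies the modified Leibniz rule Δ(fg) = Δ(f)g + f′Δ(g) noted in the introduction. A small numerical check on random time series (our illustration; the names `shift` and `D` are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(10)
g = rng.standard_normal(10)

shift = lambda s: s[1:]          # s'(t) = s(t + 1), the time-shifted series
D = lambda s: s[1:] - s[:-1]     # discrete derivative, time step h = 1

# modified Leibniz rule of discrete calculus: D(fg) = D(f) g + f' D(g)
lhs = D(f * g)
rhs = D(f) * g[:-1] + shift(f) * D(g)
print(np.allclose(lhs, rhs))  # True
```

The time shift on f in the second term is exactly the feature that the operator J encodes algebraically in the non-commutative model.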

Discrete classical electromagnetism.
It is of interest to compare these results with a direct discretization of classical electromagnetism. Suppose that X, X′, X″, X‴, ... is a time series of vectors in R³ (where R denotes the real numbers). Let DX = X′ − X be the usual discrete derivative (with time step equal to one for convenience). Let A · B denote the usual inner product of vectors in three dimensions.
Assume that there are fields E and H such that D²X = E + DX × H (a discrete Lorentz force law). Since E is perpendicular to DX, we know there is a λ such that E × DX = λH, and we have H · DX = 0 since H is perpendicular to DX. Therefore D²X × DX = E × DX + (DX × H) × DX = (λ + |DX|²)H, so that H is proportional to DX′ × DX. The formula for H is in exactly the same pattern as the formula for H in the discrete model for generalized electromagnetism as described in this subsection. Up to the time-shifting algebra and a proportionality constant, the expressions are the same. The expression for E is similar, but involves a different time-shift structure. Clearly more work is needed in comparing classical discrete electromagnetism with the results of a discrete analysis of this generalized Feynman-Dyson derivation.

More discrete thoughts
In the Feynman-Dyson derivation of electromagnetic formalism from commutation relations [1] one uses the relations [X_i, Ẋ_j] = δ_ij k and [X_i, X_j] = 0, where k is a scalar. In this subsection we shall use time series of scalars, as we did in analysing the one-dimensional case. We shall take Ẋ = JΔ(X) = J(X′ − X), taking the time-step equal to one for convenience. This allows us to have scalar evolution of the time series, but changes the issues in the original Feynman-Dyson derivation due to the presence of the non-commutative operator J in the second equation. These issues are handled by the more general formalism that we discussed in this section. We aim to see to what extent one can make simple models for this version of the Feynman-Dyson relations. Models of this sort will be another level of approximation to discrete electromagnetism. Writing out the commutation relation [X, Ẋ] = Jk, and not making any assumption that X commutes with X′, we find [X, Ẋ] = J(X′² − X′X − X′X + X²). Thus the commutation relation [X, Ẋ] = Jk becomes the equation X′² − X′X − X′X + X² = k. By a similar calculation, the equation [X, Ẏ] = 0 becomes the equation X′Y′ − X′Y − Y′X + YX = 0. These equations are impossible to satisfy simultaneously for k ≠ 0 if we assume that X and X′ commute, that X and Y commute, and that [Y, Ẏ] = Jk. For then we would need to solve (X′ − X)² = k, (Y′ − Y)² = k and (X′ − X)(Y′ − Y) = 0, with the first two equations implying that (X′ − X) and (Y′ − Y) are each non-zero, and the third implying that their product is equal to zero. In other words, these equations cannot be satisfied if the time series are composed of commuting scalars. In order to make such models we shall have to introduce non-commutativity into the time series themselves.
Here is an example of such a model. Return to the equations expressing the behaviour for two distinct variables X and Y. If [X, X′] = 0, then we have (X′ − X)² = k, so that X′ − X = ±√k. In order for the second equation to be satisfied, we need that [X, Y′] = ±k, where the ambiguity of sign is linked with the varying signs in the temporal behaviour of X and Y. We will make the sign more precise in a moment, but the radical part of this suggestion is that for two distinct spatial variables X and Y, there will be a commutation relation between one and a time shift of the other.
If the space variables are labelled X_i, then we can write X_i′ = X_i + ε_i√k, with ε_i = ±1. Thus each space variable performs a walk with the fixed step-length √k. We shall write informally [X_i, X_j′] = εk for i ≠ j, where it is understood that the epsilon without a superscript connotes the sign change that occurs at this juncture of the process. We then demand these commutation relations, so that each X_i is a scalar in its own domain, but does not commute with the time shifts of the other directions. In this system, the elements of a given time series X_i, X_i′, X_i″, ... commute with one another. The basic field element in the generalized Feynman-Dyson setup is the magnetic field H defined by the (non-commutative) vector cross product H = Ẋ × Ẋ. Here we have Ẋ = J√k ε, where ε = (ε_1, ε_2, ε_3) and ε′ denotes this vector of signs at the next time step, so that H = J²k(ε′ × ε). In this way we see that we can think of each spatial coordinate as providing a long temporal bit string, and the three coordinates together give the field in terms of the vector cross product of their temporal cross sections at neighbouring instants. It is interesting to compare this model with the colour algebra in [14].
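A minimal simulation of one such coordinate confirms the fixed step-length; this is a commuting-scalar sketch of ours that models only the walk (X′ − X)² = k, not the cross-direction commutation relations:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4.0

# each step is +sqrt(k) or -sqrt(k), the sign being the epsilon of the model
steps = rng.choice([-1.0, 1.0], size=100) * np.sqrt(k)
X = np.concatenate([[0.0], np.cumsum(steps)])  # the walk X, X', X'', ...

print(np.allclose((X[1:] - X[:-1]) ** 2, k))  # True: (X' - X)^2 = k throughout
```

The sign sequence `steps / sqrt(k)` is precisely the temporal bit string ε discussed above; three independent copies would supply the vector ε = (ε_1, ε_2, ε_3).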

The Jacobi identity and Poisson brackets
It is worth thinking through the message of the non-commutative world with respect to the existence of Poisson brackets and their connection with continuous differentiation and the commutative world of topology and differential geometry from which the classical and quantum models of physics are derived. In the classical world there are specific point locations, and the notion of a trajectory is given in terms of a continuous sequence of such locations. But there is no inherent operational structure intrinsic to the space. There is great freedom in the world of commutative and continuous calculus, a freedom that allows the construction of many models of temporal evolution. Yet we have seen that non-commutative worlds have built-in laws and built-in patterns of evolution. These patterns of evolution do not lead directly to trajectories, but rather to patterns of concatenations of operators. At first sight it would seem that there could be no real connection between these worlds. The Poisson bracket and the reformulation of mechanics in Hamiltonian form show that this is not so. There is a special non-commutativity inherent in the continuous calculus, via the Poisson bracket. It is easy to see the truth of the Jacobi identity for commutators: [[a, b], c] + [[b, c], a] + [[c, a], b] = 0. It is just a little harder to see the Jacobi identity for Poisson brackets. It is the purpose of this section to recall these verifications and to discuss the nature of the identity. More generally, a Lie algebra is an algebra A with a (non-associative) product ab, not necessarily a commutator, that satisfies 1. the Jacobi identity (ab)c + (bc)a + (ca)b = 0 and 2. antisymmetry ba = −ab.
It follows that if we define ρ_a : A −→ A by the equation ρ_a(x) = ax for each a in A, then ρ_{ab} = ρ_aρ_b − ρ_bρ_a = [ρ_a, ρ_b], so that products go to commutators naturally in the left-regular representation of the algebra upon itself.
Here is another point of view. We have the following equivalent form of the Jacobi identity (when ab = −ba for all a and b): a(xy) = (ax)y + x(ay) for all a, x and y in the algebra. This identity says that each element a in the algebra acts, by left multiplication, as a derivation on the algebra. In this way, we see that Lie algebras are the natural candidates as contexts for non-commutative worlds that contain an image of the calculus.
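For matrix Lie algebras these statements are easy to check numerically. The sketch below (our illustration, with random matrices as Lie algebra elements) verifies the derivation form of the Jacobi identity for commutators:

```python
import numpy as np

rng = np.random.default_rng(4)
comm = lambda a, b: a @ b - b @ a

# random matrices as Lie algebra elements (Lie product = commutator)
a, x, y = (rng.standard_normal((4, 4)) for _ in range(3))

# derivation form of the Jacobi identity: [a, [x, y]] = [[a, x], y] + [x, [a, y]]
lhs = comm(a, comm(x, y))
rhs = comm(comm(a, x), y) + comm(x, comm(a, y))
print(np.allclose(lhs, rhs))  # True
```

For commutators the identity is exact, so the check succeeds to machine precision for any choice of matrices.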

Proving the Jacobi identity for Poisson brackets
There are examples of Lie algebras where the non-associative product is not a commutator, the most prominent being the Poisson bracket. Here we start with a commutative algebra CA equipped with two (or more) derivations. Write the two derivations as a ↦ a_x and a ↦ a_y (the notation suggesting partial derivatives with respect to two independent variables), with ab denoting the commutative multiplication. These operators satisfy the Leibniz rule and commute with one another: (ab)_x = a_x b + a b_x, (ab)_y = a_y b + a b_y and (a_x)_y = (a_y)_x for all elements of CA. Then we define the Poisson bracket on CA by the formula {a, b} = a_x b_y − a_y b_x. We wish to see that this product satisfies the Jacobi identity. In order to do this we first prove a lemma about the Jacobi identity for commutators in a non-associative algebra. We then apply that lemma to the specific non-associative product a * b = a_x b_y. Suppose that * denotes a non-commutative and non-associative binary operation. We want to determine when the commutator [A, B] = A * B − B * A satisfies the Jacobi identity. Let NA be a non-associative linear algebra with multiplication denoted by * as above. Let J(a, b, c) = [[a, b], c] + [[b, c], a] + [[c, a], b], and call this the Jacobi sum of a, b and c. We say that the Jacobi identity is satisfied in NA if J(a, b, c) = 0 for all a, b, c ∈ NA. We define the associator of elements a, b, c by the formula ⟨a, b, c⟩ = (a * b) * c − a * (b * c). Let σ be an element of the permutation group S_3 on three letters, acting on the set {a, b, c}. Let a_σ, b_σ, c_σ be the images of a, b, c under this permutation, and let sgn(σ) denote the sign of the permutation. Lemma. J(a, b, c) = Σ_{σ∈S_3} sgn(σ)⟨a_σ, b_σ, c_σ⟩. Thus the Jacobi identity is satisfied in NA if and only if Σ_{σ∈S_3} sgn(σ)⟨a_σ, b_σ, c_σ⟩ = 0 for all a, b, c ∈ NA.
Proof. For the duration of this proof we shall write ab for a * b. A direct expansion of the Jacobi sum then gives J(a, b, c) = ⟨a, b, c⟩ − ⟨b, a, c⟩ + ⟨c, a, b⟩ − ⟨a, c, b⟩ + ⟨b, c, a⟩ − ⟨c, b, a⟩. This completes the proof.

Remark.
We discovered this lemma in the course of the research for this paper. Gregory Wene pointed out to us that a version of the lemma can be found in [15]. We now apply this result to prove that Poisson brackets satisfy the Jacobi identity.

Theorem. Let CA be a commutative algebra with commuting derivations a ↦ a_x and a ↦ a_y as above, and let a * b = a_x b_y. Then the commutator in this algebra, [a, b]_A = a * b − b * a, will satisfy the Jacobi identity. Note that this commutator is the Poisson bracket with respect to the above derivations in the original commutative algebra: [a, b]_A = a_x b_y − b_x a_y = {a, b}. This result implies that Poisson brackets satisfy the Jacobi identity.

Proof. Consider the associator in the non-associative algebra defined in the statement of the theorem: ⟨a, b, c⟩ = (a * b) * c − a * (b * c) = (a_x b_y)_x c_y − a_x (b_x c_y)_y = a_xx b_y c_y − a_x b_x c_yy, where the mixed terms cancel because the derivations commute. Note that an expression of the form a_xx b_y c_y will return zero when averaged in the summation Σ_{σ∈S_3} sgn(σ)⟨a_σ, b_σ, c_σ⟩, since a_xx b_y c_y = a_xx c_y b_y (the underlying algebra is commutative) and these terms will appear with opposite signs in the summation; the same applies to a_x b_x c_yy. Therefore we find that J(a, b, c) = 0 for all a, b, c in CA. This completes the proof.
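The theorem can be illustrated symbolically. Taking the two commuting derivations to be ∂/∂x and ∂/∂p (an arbitrary choice of variable names on our part), the Poisson bracket is {f, g} = f_x g_p − f_p g_x, and its Jacobi sum vanishes identically. A SymPy sketch with arbitrarily chosen polynomial elements:

```python
import sympy as sp

x, p = sp.symbols('x p')

def pb(f, g):
    """Poisson bracket built from the commuting derivations d/dx and d/dp."""
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

# three arbitrarily chosen elements of the commutative algebra
f, g, h = x**2 * p, x + p**3, x * p**2

jacobi_sum = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
print(sp.expand(jacobi_sum))  # 0
```

Because the identity holds at the level of the differential algebra, expansion alone suffices to exhibit the cancellation; no special simplification is needed.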

Diagrammatics and the Jacobi identity
We have seen that a commutative world equipped with distinct derivations that commute with one another is sufficient to produce a non-commutative world (via the Poisson brackets) that is strong enough to support our story of physical patterns. Many combinatorial patterns mimic the Jacobi identity, and hence provide fuel for further study. In order to illustrate these connections, we give in this section a diagrammatic version of the Jacobi identity and an interpretation in terms of graph colouring. We will initially work in a Lie algebra G whose product ab satisfies ba = −ab and the Jacobi identity a(bc) = (ab)c + b(ac). In figure 3 we show a diagrammatic interpretation of multiplication, consisting of a trivalent vertex labelled with a, b and ab. As one moves around the vertex in the plane, clockwise, one encounters first a, then b, and then ab.
In figure 4 we illustrate the Jacobi identity in diagrammatic form. To illustrate how this pattern can occur in a different context, consider diagrams D of intersecting chords on a circle as shown in figure 5. By a circle we mean a curve in the plane without self-intersections, which is a topological circle. By a chord, we mean an arc without self-intersections that is embedded in the interior of the circle, touching the circle in two distinct points. Let us suppose that we wish to colour the chords from a set of q colours such that if two chords intersect in an odd number of points, then they receive different colours. Let C(D, q) denote the number of distinct colourings of the chords of the diagram D, as a function of q. Call such a diagram of intersecting chords an intersection graph. We extend such diagrams by allowing internal trivalent vertices, as illustrated in the abstract by diagram D″ in figure 4 and by the diagram with the same label, D″, in figure 5. Interpret the trivalent vertex as an instruction that all chord lines meeting at a trivalent vertex receive the same colour. The diagrammatic Jacobi identity of figure 4 corresponds directly to the logical colouring identity that says that if we have three diagrams D, D′, D″ with two chords touching in an odd number of points in D, one point removed in D′, and the two chords fused by a trivalent vertex in D″ so that they must receive the same colour, then the number of colourings of D is the number of colourings of D′ minus the number of colourings of D″. This is just the colouring version of the logical identity Different = Anything − Same.
For graph colouring problems, this identity was first articulated by Whitney [16]. In formulae, we have C(D, q) = C(D′, q) − C(D″, q).
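Whitney's identity Different = Anything − Same can be verified by brute force for the simplest case of two chords meeting in one point, where the three counts are q(q−1), q² and q respectively. A small sketch of ours:

```python
from itertools import product

def count_colourings(q, constraint):
    # colourings (c1, c2) of two chords subject to a constraint
    return sum(1 for c in product(range(q), repeat=2) if constraint(c))

q = 5
D   = count_colourings(q, lambda c: c[0] != c[1])  # chords cross: colours differ
Dp  = count_colourings(q, lambda c: True)          # crossing removed: no constraint
Dpp = count_colourings(q, lambda c: c[0] == c[1])  # chords fused: same colour

# Whitney's identity: Different = Anything - Same
print(D == Dp - Dpp)  # True  (20 == 25 - 5)
```

The identity is linear in the colouring counts, which is what lets it be applied locally inside larger diagrams, as in figures 4 and 5.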
The convention that we have adopted here, that two chords are coloured differently if and only if they intersect in an odd number of points, makes a demand on the interpretation of the trivalent nodes. All arcs entering a given node must receive the same colour. After more nodes are added, we will have connected components of the resulting graph that contain nodes (the outer circle is not regarded as part of the graph). Call such a connected component a web in a given diagram. Each web is coloured by a single colour. We regard a chord without nodes as a (degenerate) web. We take the convention that if the total number of intersections between two distinct webs is odd, then they must receive different colours. Of course, a web may have self-intersections; we define the sign of the colouring of a given web to be −1 if it has an odd number of self-intersections and +1 if it has an even number of self-intersections. The sign of the colouring of a diagram is the product of the signs of its component webs. Note that the sign of a chord is positive. With these conventions, the formulae in figures 4 and 5 match perfectly and can be understood as indicating parts of larger diagrams that differ only as indicated. We see, as in figure 6, that an extra self-intersection added to a trivalent vertex changes the sign of its web. This corresponds to the algebraic interpretation of such a vertex as ab = −ba. See figure 3.
In figure 6 we illustrate how these sign conventions are consistent with the colouring formula/Jacobi identity. In this figure, we begin with the Jacobi identity with a twist (crossing) added to each diagram. The original diagram with one crossing now has two, and hence is equivalent to a diagram with none (no local requirement of difference). The original diagram with no crossing now has one, and is interpreted as a requirement of difference. Rearranging, we find the Jacobi identity again, but with an extra crossing and change of sign for the noded diagram. The conclusion is that adding a crossing to a node changes the sign of its diagram.
We see that the patterns of counting colourings of chord diagrams correspond formally to the axioms for a Lie algebra. This example indicates how a combinatorial context can lead to the very formalism on which this paper is based, though through different structures than one might initially have visualized. Diagrammatic Lie algebras similar to this example feature prominently in the theory of Vassiliev invariants [17, 18] of knots and links, and may form the basis for new models for the structures that we have discussed in this paper.
We have concentrated in this section on a colouring example not only because the occurrence of the Jacobi identity in this context may appear startling, but also because there is a more direct relationship with colouring in regard to the fundamental Lie algebra of SU(2) (or equivalently SO(3)) that underlies the structures we have discussed in this paper. The Lie algebra of SO(3) has as structure constant the alternating epsilon symbol ε_ijk that we have used again and again in section 7 for the generalization of the Feynman-Dyson derivation. This epsilon can be expressed diagrammatically as a trivalent vertex. The basic epsilon identity ε_abi ε_cdi = −δ_ad δ_bc + δ_ac δ_bd can be written diagrammatically, and it leads at once to a diagrammatic Jacobi identity. See figure 7 for the diagrammatic form of the epsilon identity. The epsilon itself is closely related to colouring (see [19]-[21]), but that is another story and we shall stop here.

Epilogue
We have sought in this paper to begin in an algebraic framework that naturally contains the formalism of the calculus, but not its notions of limits or constructions of spaces with specific locations, points and trajectories. It is remarkable that so many patterns of physical law fit so well into such an abstract framework. We believe that this is indicative of the secondary nature of point sets, topologies and classical differential geometries in physics (compare [22]). In this paper we have dispensed with spacetime and replaced it by algebraic structure. But behind that structure, the space stands ready to be constructed, by discrete derivatives and patterns of steps, or by starting with a discrete pattern in the form of a diagram, a network, a lattice, a knot, or a simplicial complex, and elaborating that structure until the specificity of spatio-temporal locations appears.
There are many ideas for producing location. Poisson brackets allow us to connect classical notions of location with the non-commutative algebra used herein. Below the level of the Poisson brackets is a treatment of processes and operators as though they were variables in the same context as the variables in the classical calculus. In different degrees we let go the notion of classical variables and yet retained their form, as we made a descent into the discrete.
In order for locations to appear from process, one may need an appropriate degree of recursiveness. Lie algebras begin the process with their fully self-operant structure of derivations. It is just such bootstrapping that fits into the basis of our concerns and produces the ways to make spaces emerge, through process, from the abstract algebra.