Training of photonic neural networks through in situ backpropagation

Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, there is currently no efficient protocol for training these networks on the integrated photonics platform. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest to experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.


I. INTRODUCTION
Artificial neural networks (ANNs), and machine learning in general, are becoming ubiquitous for an impressively large number of applications [1]. This has brought ANNs into the focus of research in not only computer science, but also electrical engineering, with hardware specifically suited to perform neural network operations actively being developed. There are significant efforts in constructing artificial neural network architectures using various electronic solid-state platforms [2,3], but ever since the conception of ANNs, a hardware implementation using optical signals has also been considered [4,5]. In this domain, some of the recent work has been devoted to photonic spike processing [6,7] and photonic reservoir computing [8,9], as well as to devising universal, chip-integrated photonic platforms that can implement any arbitrary ANN [10,11]. Photonic implementations benefit from the fact that, due to the non-interacting nature of photons, linear operations, like the repeated matrix multiplications found in every neural network algorithm, can be performed in parallel and at a lower energy cost when using light as opposed to electrons.
A key requirement for the utility of any ANN platform is the ability to train the network using algorithms such as error backpropagation [12]. Such training typically demands significant computational time and resources, and it is generally desirable for error backpropagation to be implemented on the same platform. This is indeed possible for the technologies of Refs. [2,13,14] and has also been demonstrated, e.g., in memristive devices [3,15]. In optics, as early as three decades ago, an adaptive platform that could approximately implement the backpropagation algorithm experimentally was proposed [16,17]. However, this algorithm requires a number of complex optical operations that are difficult to implement, particularly in integrated optics platforms. Thus, the current implementation of a photonic neural network using integrated optics has been trained using a model of the system simulated on a regular computer [11]. This is inefficient for two reasons. First, this strategy depends entirely on the accuracy of the model representation of the physical system. Second, unless one is interested in deploying a large number of identical, fixed copies of the ANN, any advantage in speed or energy associated with using the photonic circuit is lost if the training must be done on a regular computer. Alternatively, training using a brute force, in situ computation of the gradient of the objective function has been proposed [11]. However, this strategy involves sequentially perturbing each individual parameter of the circuit, which is highly inefficient for large systems.
In this work, we propose a procedure, which we label the time-reversal interference method (TRIM), to compute the gradient of the cost function of a photonic ANN by use of only in situ intensity measurements. Our procedure works by physically implementing the adjoint variable method (AVM), a technique that has typically been implemented computationally in the optimization and inverse design of photonic structures [18][19][20]. Furthermore, the method scales in constant time with respect to the number of parameters, which allows for backpropagation to be efficiently implemented in a hybrid optoelectronic network. Although we focus our discussion on a particular hardware implementation of a photonic ANN, our conclusions are derived starting from Maxwell's equations, and may therefore be extended to other photonic platforms.
The paper is organized as follows: In Section II, we introduce the working principles of a photonic ANN based on the hardware platform introduced in Ref. [11]. We also derive the mathematics of the forward and backward propagation steps and show that the gradient computation needed for training can be expressed as a modal overlap. Then, in Section III we discuss how the adjoint method may be used to describe the gradient of the ANN cost function in terms of physical parameters. In Section IV, we describe our procedure for determining this gradient information experimentally using in situ intensity measurements. We give a numerical validation of these findings in Section V and demonstrate our method by training a model of a photonic ANN in Section VI. We provide final comments and conclude in Section VII.

II. THE PHOTONIC NEURAL NETWORK
In this Section, we introduce the operation and gradient computation of a feed-forward photonic ANN. In its most general case, a feed-forward ANN maps an input vector to an output vector via an alternating sequence of linear operations and element-wise nonlinear functions of the vectors, also called 'activations'. A cost function, L, is defined over the outputs of the ANN and the matrix elements involved in the linear operations are tuned to minimize L over a number of training examples via gradient-based optimization. The 'backpropagation algorithm' is typically used to compute these gradients analytically by sequentially utilizing the chain rule from the output layer backwards to the input layer.
Here, we will outline these steps mathematically for a single training example, with the procedure diagrammed in Fig. 1(a). We focus our discussion on the photonic hardware platform presented in Ref. [11], which performs the linear operations using optical interference units (OIUs). The OIU is a mesh of controllable Mach-Zehnder interferometers (MZIs) integrated in a silicon photonic circuit. By tuning the phase shifters integrated in the MZIs, any unitary N × N operation on the input can be implemented [21,22], which finds applications both in classical and quantum photonics [23,24]. In the photonic ANN implementation of Ref. [11], an OIU is used for each linear matrix-vector multiplication, whereas the nonlinear activations are performed using an electronic circuit: the optical state before activation is measured, the nonlinear activation function is applied on an electronic circuit such as a digital computer, and the resulting optical state is prepared and injected into the next stage of the ANN. We first introduce the notation used to describe the OIU, which consists of a number, N, of single-mode waveguide input ports coupled to the same number of single-mode output ports through a linear and lossless device. In principle, the device may also be extended to operate on a different number of inputs and outputs. We further assume directional propagation, such that all power flows exclusively from the input ports to the output ports, which is a typical assumption for the devices of Refs. [11,21-26].
In its most general form, the device implements the linear operation

    Z_out = Ŵ X_in,    (1)

where X_in and Z_out are the modal amplitudes at the input and output ports, respectively, and Ŵ, which we will refer to as the transfer matrix, is the off-diagonal block of the system's full scattering matrix,

    [ X_out ]   [ 0    Ŵᵀ ] [ X_in ]
    [ Z_out ] = [ Ŵ    0  ] [ Z_in ].    (2)

Here, the diagonal blocks are zero because we assume forward-only propagation, while the off-diagonal blocks are transposes of each other because we assume a reciprocal system. Z_in and X_out correspond to the input and output modal amplitudes, respectively, if we were to run this device in reverse, i.e., sending a signal in from the output ports.

Now we may use this notation to describe the forward and backward propagation steps in a photonic ANN. In the forward propagation step, we start with an initial input to the system, X_0, and perform a linear operation on this input using an OIU represented by the matrix Ŵ_1. This is followed by the application of an element-wise nonlinear activation, f_1(·), on the outputs, giving the input to the next layer. This process repeats for each layer l until the output layer, L. Written compactly, for l = 1 ... L,

    Z_l = Ŵ_l X_{l-1},    X_l = f_l(Z_l).    (3)

Finally, our cost function L is an explicit function of the outputs from the last layer, L = L(X_L). This process is shown in Fig. 1(a).
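As a minimal numerical illustration, the forward propagation step described above can be sketched as follows. The unitary matrices, sizes, and the activation f(z) = z^2 below are illustrative stand-ins, not a model of the physical device:

```python
import numpy as np

# Sketch of forward propagation through a stack of linear layers and
# element-wise activations acting on complex mode amplitudes. Each W_l
# stands in for one OIU; here they are random unitaries for illustration.

def forward(Ws, f, X0):
    """Propagate input amplitudes X0 through the layers W_1 ... W_L.

    Returns the pre-activation amplitudes Z_l and activations X_l of every
    layer; both are needed later for backpropagation.
    """
    Zs, Xs = [], [X0]
    for W in Ws:
        Zs.append(W @ Xs[-1])   # linear operation performed by the OIU
        Xs.append(f(Zs[-1]))    # element-wise nonlinear activation
    return Zs, Xs

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR of a random complex matrix yields a unitary Q
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

Ws = [random_unitary(3), random_unitary(3)]
Zs, Xs = forward(Ws, lambda z: z ** 2, np.array([0.0, 0.0, 1.0 + 0.0j]))
```

Since each layer matrix is unitary, the total power of the pre-activation amplitudes of the first layer equals the input power, mirroring the lossless assumption used in the text.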
To train the network, we must minimize this cost function with respect to the linear operators, Ŵ_l, which may be adjusted by tuning the integrated phase shifters within the OIUs. While a number of recent papers have clarified how an individual OIU can be tuned by sequential, in situ methods to perform an arbitrary, pre-defined operation [25-28], these strategies do not straightforwardly apply to the training of ANNs, where nonlinearities and several layers of computation are present. In particular, the training of ANNs requires gradient information, which is not provided directly by the methods of Refs. [25-28].
In Ref. [11], the training of the ANN was done ex situ on a computer model of the system, which was used to find the optimal weight matrices Ŵ_l for a given cost function. Then, the final weights were recreated in the physical device, using an idealized model that relates the matrix elements to the phase shifters. Ref. [11] also discusses a possible in situ method for computing the gradient of the ANN cost function through a serial perturbation of every individual phase shifter ('brute force' gradient computation). However, this gradient computation has an unavoidable linear scaling with the number of parameters of the system. The training method that we propose here operates without resorting to an external model of the system, while allowing for the tuning of each parameter to be done in parallel, and therefore scales significantly better with respect to the number of parameters when compared to the brute force gradient computation.
To introduce our training method, we first use the backpropagation algorithm to derive an expression for the gradient of the cost function with respect to the permittivities of the phase shifters in the OIUs.

FIG. 1. (a) A schematic of the ANN architecture demonstrated in Ref. [11]. The boxed regions correspond to OIUs that perform a linear operation represented by the matrix Ŵ_l. Integrated phase shifters (blue) are used to control the OIU and train the network. The red regions correspond to nonlinear activations f_l(·). (b) Illustration of operation and gradient computation in an ANN. The top and bottom rows correspond to the forward and backward propagation steps, respectively. Propagation through a square cell corresponds to matrix multiplication; propagation through a rounded region corresponds to activation. ⊙ is element-wise vector multiplication.

In the following,
we denote ε_l as the permittivity of a single, arbitrarily chosen phase shifter in layer l, as the same derivation holds for each of the phase shifters present in that layer. Note that Ŵ_l has an explicit dependence on ε_l, but all field components in the subsequent layers also depend implicitly on ε_l.

As a demonstration, we take a mean squared cost function

    L = (1/2) (X_L - T)† (X_L - T),    (4)

where T is a complex-valued target vector corresponding to the desired output of our system given input X_0, and † denotes the conjugate transpose.

Starting from the last layer in the circuit, the derivative of the cost function with respect to the permittivity ε_L of one of the phase shifters in the last layer is given by

    dL/dε_L = Re{ Γ_Lᵀ (dX_L/dε_L) },    (5)

in terms of the error vector Γ_L ≡ (X_L - T)*. By the chain rule,

    dX_L/dε_L = f'_L(Z_L) ⊙ (dŴ_L/dε_L) X_{L-1},    (6)

where ⊙ is element-wise vector multiplication, defined such that, for vectors a and b, the i-th element of the vector a ⊙ b is given by a_i b_i; Re{·} gives the real part; and f'_l(·) is the derivative of the l-th layer activation function with respect to its (complex) argument. Combining Eqs. (5) and (6), we obtain

    dL/dε_L = Re{ δ_Lᵀ (dŴ_L/dε_L) X_{L-1} },    (7)

where we define the vector δ_L ≡ Γ_L ⊙ f'_L(Z_L).
For any layer l < L, we may use the chain rule to perform a recursive calculation of the gradients:

    Γ_l = Ŵ_{l+1}ᵀ δ_{l+1},    (8)
    δ_l = Γ_l ⊙ f'_l(Z_l),    (9)
    dL/dε_l = Re{ δ_lᵀ (dŴ_l/dε_l) X_{l-1} }.    (10)

Figure 1(b) diagrams this process, which computes the δ_l vectors sequentially from the output layer to the input layer. A treatment for non-holomorphic activations is derived in Appendix A. We note that the computation of δ_l requires performing the operation Γ_l = Ŵ_{l+1}ᵀ δ_{l+1}, which corresponds physically to sending δ_{l+1} into the output end of the OIU in layer l + 1. In this way, our procedure 'backpropagates' the vectors δ_l and Γ_l physically through the entire circuit.
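The backward recursion above can be sketched and sanity-checked numerically against a finite-difference derivative. The matrices, vectors, and activation below are illustrative; the error vector of the output layer is taken as Γ_L = (X_L - T)*, consistent with a mean squared cost, which is an assumption of this sketch:

```python
import numpy as np

# Sketch of the backward recursion for a holomorphic activation:
# delta_l = Gamma_l * f'(Z_l), then Gamma_{l-1} = W_l^T delta_l, which
# physically corresponds to sending delta_l backwards through the OIU.

f = lambda z: z ** 2    # holomorphic activation (also used in Section VI)
fp = lambda z: 2 * z    # its complex derivative

def forward(Ws, X0):
    Zs, X = [], X0
    for W in Ws:
        Zs.append(W @ X)
        X = f(Zs[-1])
    return Zs, X

def backward(Ws, Zs, X_L, T):
    """Return the delta vector of every layer."""
    deltas = [None] * len(Ws)
    Gamma = np.conj(X_L - T)             # error vector of the output layer
    for l in reversed(range(len(Ws))):
        deltas[l] = Gamma * fp(Zs[l])    # element-wise product
        Gamma = Ws[l].T @ deltas[l]      # send delta_l into the output end
    return deltas

rng = np.random.default_rng(1)
Ws = [0.5 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
      for _ in range(2)]
X0 = np.array([0.3, -0.2, 0.5 + 0.1j])
T = np.array([1.0, 0.0, 0.0 + 0.0j])

Zs, XL = forward(Ws, X0)
deltas = backward(Ws, Zs, XL, T)
# Gradient of L = (1/2)|X_L - T|^2 w.r.t. the real part of element (0, 1)
# of the last-layer matrix: Re{delta_i * X_{L-1,j}}.
grad = np.real(deltas[1][0] * f(Zs[0])[1])
```

The gradient extracted from the delta vectors can be compared directly to a numerical derivative of the loss, which is the check performed below.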

III. GRADIENT COMPUTATION USING THE ADJOINT VARIABLE METHOD
In the previous Section, we showed that the crucial step in training the ANN is computing gradient terms of the form Re{ δ_lᵀ (dŴ_l/dε_l) X_{l-1} }, which contain derivatives with respect to the permittivities ε_l of the phase shifters in the OIUs. In this Section, we show how this gradient may be expressed as the solution to an electromagnetic adjoint problem.
The OIU used to implement the matrix Ŵ_l, relating the complex mode amplitudes of the input and output ports, can be described using first-principles electrodynamics. This will allow us to compute its gradient with respect to each ε_l, as these are the physically adjustable parameters in the system. Assuming a source at frequency ω, at steady state Maxwell's equations take the form

    ∇ × ∇ × e - k₀² ε̂_r e = -iωμ₀ j,    (11)

which can be written more succinctly as

    Â e = b.    (12)

Here, ε̂_r describes the spatial distribution of the relative permittivity (ε_r), k₀² = ω²/c² is the square of the free-space wavenumber, e is the electric field distribution, j is the electric current density, Â ≡ ∇ × ∇ × - k₀² ε̂_r, b ≡ -iωμ₀ j, and Â = Âᵀ due to Lorentz reciprocity. Eq. (12) is the starting point of the finite-difference frequency-domain (FDFD) simulation technique [29], where it is discretized on a spatial grid and the electric field e is solved given a particular permittivity distribution, ε_r, and source, b.
To relate this formulation to the transfer matrix Ŵ, we now define source terms b_i, i ∈ 1 ... 2N, that correspond to a source placed in one of the input or output ports. Here we assume a total of N input and N output waveguides. The spatial distribution of the source term, b_i, matches the mode of the i-th single-mode waveguide. Thus, the electric field amplitude in port i is given by b_iᵀ e, and we may establish a relationship between e and X_in, as

    X_in,i = b_iᵀ e    (13)

for i = 1 ... N over the input port indices, where X_in,i is the i-th component of X_in, or, more compactly,

    X_in = P̂_in e,    (14)

where the i-th row of P̂_in is b_iᵀ. Similarly, we can define

    Z_out,i = b_{i+N}ᵀ e    (15)

for (i + N) = (N + 1) ... 2N over the output port indices, or

    Z_out = P̂_out e,    (16)

and, with this notation, Eq. (1) becomes

    Ŵ X_in = P̂_out Â⁻¹ P̂_inᵀ X_in.    (17)

We now use the above definitions to evaluate the cost function gradient in Eq. (10). In particular, with Eqs. (10) and (17), we arrive at

    dL/dε_l = -Re{ δ_lᵀ P̂_out Â⁻¹ (dÂ/dε_l) Â⁻¹ b_{x,l-1} },    (18)

where b_{x,l-1} ≡ P̂_inᵀ X_{l-1} is the modal source profile that creates the input field amplitudes X_{l-1} at the input ports.
The key insight of the adjoint variable method is that we may interpret this expression as an operation involving the field solutions of two electromagnetic simulations, which we refer to as the 'original' (og) and the 'adjoint' (aj):

    e_og ≡ Â⁻¹ b_{x,l-1},    (19)
    e_aj ≡ Â⁻¹ P̂_outᵀ δ_l,    (20)

where we have made use of the symmetric property of Â. Eq. (18) can now be expressed in the compact form

    dL/dε_l = -Re{ e_ajᵀ (dÂ/dε_l) e_og }.    (21)

If we assume that this phase shifter spans a set of points, r_φ, in our system, then, from Eq. (11), we obtain

    dÂ/dε_l = -k₀² Σ_{r ∈ r_φ} δ̂_{r,r'},    (22)

where δ̂_{r,r'} is the Kronecker delta. Inserting this into Eq. (21), we thus find that the gradient is given by the overlap of the two fields over the phase-shifter positions:

    dL/dε_l = k₀² Σ_{r ∈ r_φ} Re{ e_aj(r) e_og(r) }.    (23)

This result now allows for the computation in parallel of the gradient of the loss function with respect to all phase shifters in the system, given knowledge of the original and adjoint fields.
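The adjoint identity above can be checked on a toy problem, with a small random symmetric matrix standing in for the discretized Maxwell operator. The matrix, source vectors, 'phase-shifter' mask, and the linear cost function below are all illustrative stand-ins:

```python
import numpy as np

# Toy check of the adjoint gradient: for a symmetric system A e = b
# (A = A^T mimics reciprocity), a cost L = Re{d^T e}, and a parameter eps
# entering as A(eps) = A0 - k0^2 * eps * diag(mask), the gradient is the
# overlap of the 'original' and 'adjoint' field solutions on the mask.

rng = np.random.default_rng(2)
n, k0 = 8, 2.0
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A0 = M + M.T + 5j * np.eye(n)          # symmetric, well-conditioned stand-in
b = rng.normal(size=n) + 1j * rng.normal(size=n)   # 'source' of the original
d = rng.normal(size=n) + 1j * rng.normal(size=n)   # defines the cost L = Re{d^T e}
mask = np.zeros(n); mask[2:4] = 1.0    # support of a hypothetical phase shifter

def system(eps):
    return A0 - k0 ** 2 * eps * np.diag(mask)   # dA/d(eps) = -k0^2 diag(mask)

e_og = np.linalg.solve(system(0.0), b)   # 'original' simulation
e_aj = np.linalg.solve(system(0.0), d)   # 'adjoint' simulation (uses A = A^T)
grad = k0 ** 2 * np.real(np.sum(mask * e_aj * e_og))   # field overlap
```

Because the system matrix is symmetric, the adjoint solve uses the same operator as the original one; the finite-difference comparison below confirms the overlap formula.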

IV. EXPERIMENTAL MEASUREMENT OF GRADIENT
We now introduce our time-reversal interference method (TRIM) for computing the gradient from the previous Section through in situ intensity measurements. This represents the most significant result of this paper. Specifically, we wish to generate an intensity pattern of the form Re{e_og e_aj}, matching that of Eq. (23). We note that interfering e_og and e_aj* directly in the system results in the intensity pattern

    I = |e_og + e_aj*|² = |e_og|² + |e_aj|² + 2 Re{e_og e_aj},    (24)

the last term of which matches Eq. (23). Thus, the gradient can be computed purely through intensity measurements if the field e_aj* can be generated in the OIU.

The adjoint field for our problem, e_aj, as defined in Eq. (20), is sourced by P̂_outᵀ δ_l, meaning that it physically corresponds to a mode sent into the system from the output ports. As complex conjugation in the frequency domain corresponds to time-reversal of the fields, we expect e_aj* to be sent in from the input ports. Formally, to generate e_aj*, we wish to find a set of input source amplitudes, X_TR, such that the output port source amplitudes, Z_TR = Ŵ_l X_TR, are equal to the complex conjugate of the adjoint amplitudes, δ_l*. Using the unitarity property of the transfer matrix Ŵ_l for a lossless system, along with the fact that P̂_out P̂_outᵀ = Î for the output modes, the input mode amplitudes for the time-reversed adjoint can be computed as

    X_TR = (Ŵ_lᵀ δ_l)*.    (25)

As discussed earlier, Ŵ_lᵀ is the transfer matrix from the output ports to the input ports. Thus, we can experimentally determine X_TR by sending δ_l into the device output ports, measuring the output at the input ports, and taking the complex conjugate of the result.
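A quick numerical check of the time-reversed adjoint input for a lossless (unitary) transfer matrix; the matrix and δ_l below are random stand-ins:

```python
import numpy as np

# For a unitary W, sending delta_l backwards applies W^T; conjugating the
# measured amplitudes gives X_TR, and re-injecting X_TR at the input ports
# reproduces delta_l^* at the output ports, i.e. the time-reversed adjoint.

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
W, _ = np.linalg.qr(M)                 # a random unitary 'OIU' transfer matrix
delta = rng.normal(size=3) + 1j * rng.normal(size=3)

X_TR = np.conj(W.T @ delta)            # measure at the input ports, conjugate
Z_TR = W @ X_TR                        # amplitudes produced at the output ports
```

For a unitary W, conj(W.T) equals the inverse of W, so Z_TR equals conj(delta) exactly; any mode-dependent loss would break this equality, as discussed in Section VII.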
We now summarize the procedure for experimentally measuring the gradient of an OIU layer in the ANN with respect to the permittivities of this layer's integrated phase shifters:

1. Send in the original field amplitudes X_{l-1} and measure and store the intensities at each phase shifter.
2. Send δ_l into the output ports and measure and store the intensities at each phase shifter.

3. Compute the time-reversed adjoint input amplitudes X_TR, as in Eq. (25), by measuring the amplitudes exiting the input ports and taking their complex conjugate.

4. Interfere the original and the time-reversed adjoint fields in the device, measuring again the resulting intensities at each phase shifter.

5. Subtract the constant intensity terms from steps 1 and 2 and multiply by k₀² to recover the gradient, as in Eq. (23).
This procedure is also illustrated in Fig. 2.
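The five steps above can be sketched for a single phase-shifter location, with arbitrary scalar stand-ins for the original and adjoint fields; the wavelength used for k₀ is illustrative:

```python
import numpy as np

# Sketch of the TRIM intensity arithmetic at one phase shifter: the
# interference pattern minus the two constant intensities isolates
# 2*Re{e_og * e_aj}, so halving and scaling by k0^2 recovers the gradient.

k0 = 2 * np.pi / 1.55                      # free-space wavenumber, illustrative
e_og = 0.8 * np.exp(1j * 0.3)              # original field at the phase shifter
e_aj = 0.5 * np.exp(-1j * 1.1)             # adjoint field at the same point

I_og = np.abs(e_og) ** 2                   # step 1: forward intensity
I_aj = np.abs(np.conj(e_aj)) ** 2          # steps 2-3: time-reversed adjoint intensity
I_int = np.abs(e_og + np.conj(e_aj)) ** 2  # step 4: interference intensity

# step 5: subtract the constants; the factor 1/2 removes the 2 in the
# cross term of the interference pattern
gradient = k0 ** 2 * (I_int - I_og - I_aj) / 2
expected = k0 ** 2 * np.real(e_og * e_aj)
```

In an experiment the three intensities would come from photodetectors rather than complex amplitudes, but the arithmetic of step 5 is identical.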

V. NUMERICAL GRADIENT DEMONSTRATION
We numerically demonstrate this procedure in Fig. 3 with a series of FDFD simulations of an OIU implementing a 3 × 3 unitary matrix [21]. These simulations are intended to represent the gradient computation corresponding to one OIU in a single layer, l, of a neural network with input X_{l-1} and delta vector δ_l. In these simulations, we use absorbing boundary conditions on the outer edges of the system to eliminate back-reflections. The relative permittivity distribution is shown in Fig. 3(a), with the positions of the variable phase shifters in blue. For demonstration, we simulate a specific case where X_{l-1} = [0 0 1]ᵀ, with unit amplitude in the bottom port, and we choose δ_l = [0 1 0]ᵀ. In Fig. 3(b), we display the real part of e_og, corresponding to the original, forward field.
The real part of the adjoint field, e_aj, corresponding to the cost function L = Re{δ_lᵀ Ŵ_l X_{l-1}}, is shown in Fig. 3(c). In Fig. 3(d) we show the real part of the time-reversed copy of e_aj as computed by the method described in the previous Section, in which X_TR is sent in through the input ports. There is excellent agreement, up to a constant, between the complex conjugate of the field pattern of (c) and the field pattern of (d), as expected.
In Fig. 3(e), we display the gradient of the objective function with respect to the permittivity of each point of space in the system, as computed with the adjoint method described in Eq. (23). In Fig. 3(f), we show the same gradient information, but instead computed with the method described in the previous Section. Namely, we interfere the field pattern from panel (b) with the field pattern from panel (d), subtract the constant intensity terms, and multiply by the appropriate constants. Panels (e) and (f) agree with good precision.
FIG. 3. (f) The gradient information as obtained by the method introduced in this work, normalized by its maximum absolute value. Namely, the field pattern from (b) is interfered with the time-reversed adjoint field of (d), and the constant intensity terms are subtracted from the resulting intensity pattern. Panels (e) and (f) match with high precision.

We note that in a realistic system, the gradient must be constant for any stretch of waveguide between waveguide couplers, because the interfering fields are at the same frequency and are traveling in the same direction. Thus, there should be no distance dependence in the corresponding intensity distribution. This is largely observed in our simulation, although small fluctuations are visible because of the proximity of the waveguides and the sharp bends, which were needed to make the structure compact enough for simulation within a reasonable time. In practice, the importance of this constant intensity is that it can be detected after each phase shifter, instead of inside of it.
Finally, we note that this numerically generated system experiences a total power loss of 41% due to scattering caused by very sharp bends and stair-casing of the structure in the simulation. We also observe approximately 5-10% mode-dependent loss, as determined by measuring the difference in total transmitted power corresponding to injection at different input ports. Minimal amounts of reflection are also visible in the field plots. Nevertheless, TRIM still reconstructs the adjoint sensitivity with very good fidelity.

VI. EXAMPLE OF ANN TRAINING
Finally, we use the techniques from the previous Sections to numerically demonstrate the training of a photonic ANN to implement a logical XOR gate, defined by the following input-to-target (X_0 → T) pairs:

    [0, 0] → 0,  [0, 1] → 1,  [1, 0] → 1,  [1, 1] → 0.    (26)

As diagrammed in Fig. 6(a), we choose a network architecture consisting of two 3 × 3 unitary OIUs. In the forward propagation step, the binary representation of the inputs, X_0, is sent into the first two input elements of the ANN, and a constant value of 1 is sent into the third input element, which serves to introduce artificial bias terms into the network. These inputs are sent through a 3 × 3 unitary OIU, and then the element-wise activation f(z) = z² is applied. The output of this step is sent to another 3 × 3 OIU and then through another activation of the same form. Finally, the first output element is taken to be our prediction, X_L, ignoring the last two output elements. Our network is repeatedly trained on the four training examples defined in Eq. (26), using the mean squared cost function presented in Eq. (4).
For this demonstration, we utilized a matrix model of the system, as described in Refs. [21,22], with mathematical details given in Appendix B. This model allows us to compute the output of the system given an input mode and the settings of each phase shifter. Although this is not a first-principles electromagnetic simulation of the system, it provides information about the complex fields at specific reference points within the circuit, which enables us to implement training using the backpropagation method described in Section II, combined with the adjoint gradient calculation from Section III. Using these methods, at each iteration of training we compute the gradient of our cost function with respect to the phases of each of the integrated phase shifters and sum it over the four training examples. Then, we perform a simple steepest-descent update to the phase shifters, in accordance with the gradient information. This is consistent with the standard training protocol for an ANN implemented on a conventional computer. Our network successfully learned the XOR gate in around 400 iterations, and the results of the training are shown in Fig. 6(b)-(d). We note that this is meant to serve as a simple demonstration of using the in situ backpropagation technique for computing the gradients needed to train photonic ANNs. However, our method may equally be performed on more complicated tasks, as we show in Appendix C.
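The training loop described above can be sketched compactly. Note that this sketch replaces the phase-shifter parametrization of Appendix B with freely updated complex matrices, and the learning rate, initialization scale, and iteration count are illustrative choices rather than the values used in the paper:

```python
import numpy as np

# XOR training sketch: forward pass, delta-vector backpropagation, and a
# steepest-descent update, summed over the four training examples. The
# activation f(z) = z^2 and the first-output-element prediction follow the
# text; the complex matrices stand in for the two OIUs.

rng = np.random.default_rng(4)
X0s = [np.array([x1, x2, 1.0 + 0.0j]) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
Ts = [0.0, 1.0, 1.0, 0.0]                          # XOR targets
Ws = [0.3 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
      for _ in range(2)]
f, fp = (lambda z: z ** 2), (lambda z: 2 * z)

def loss_and_grads(Ws):
    grads = [np.zeros((3, 3), complex) for _ in Ws]
    total = 0.0
    for X0, T in zip(X0s, Ts):
        Xs, Zs = [X0], []
        for W in Ws:                               # forward propagation
            Zs.append(W @ Xs[-1])
            Xs.append(f(Zs[-1]))
        err = Xs[-1][0] - T                        # output element 1 is the prediction
        total += 0.5 * abs(err) ** 2
        Gamma = np.zeros(3, complex)
        Gamma[0] = np.conj(err)                    # error vector, zero elsewhere
        for l in reversed(range(len(Ws))):         # backpropagation
            delta = Gamma * fp(Zs[l])
            grads[l] += np.conj(np.outer(delta, Xs[l]))
            Gamma = Ws[l].T @ delta
    return total, grads

loss0, _ = loss_and_grads(Ws)
for _ in range(300):                               # steepest-descent updates
    _, grads = loss_and_grads(Ws)
    Ws = [W - 0.002 * g for W, g in zip(Ws, grads)]
loss1, _ = loss_and_grads(Ws)
```

The conjugate in the accumulated gradient accounts for updating the real and imaginary parts of each matrix element independently by gradient descent.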

VII. DISCUSSION AND CONCLUSION
Here, we justify some of the assumptions made in this work. Our strategy for training a photonic ANN relies on the ability to create arbitrary complex inputs. We note that a device for accomplishing this has been proposed and discussed in [31]. Our recovery technique further requires an integrated intensity detection scheme to occur in parallel and with virtually no loss. This may be implemented by integrated, transparent photo-detectors, which have already been demonstrated in similar systems [28]. Furthermore, as discussed, this measurement may occur in the waveguide regions directly after the phase shifters, which eliminates the need for phase shifter and photodetector components at the same location. Finally, in our procedure for experimentally measuring the gradient information, we suggested running isolated forward and adjoint steps, storing the intensities at each phase shifter for each step, and then subtracting this information from the final interference intensity. Alternatively, one may bypass the need to store these constant intensities by introducing a low-frequency modulation on top of one of the two interfering fields in Fig. 2(c), such that the product term of Eq. (24) can be directly measured from the low-frequency signal. A similar technique was used in [28].
We now discuss some of the limitations of our method. In the derivation, we had assumed theŴ operator to be unitary, which corresponds to a lossless OIU. In fact, we note that our procedure is exact in the limit of a lossless, feed-forward, and reciprocal system. However, with the addition of any amount of uniform loss,Ŵ is still unitary up to a constant, and our procedure may still be performed with the added step of scaling the measured gradients depending on this loss (see a related discussion in Ref. [31]). Uniform loss conditions are satisfied in the OIUs experimentally demonstrated in Refs. [11,26]. Mode-dependent loss, such as asymmetry in the MZI mesh layout or fabrication errors, should be avoided as its presence limits the ability to accurately reconstruct the time-reversed adjoint field. Nevertheless, our simulation in Fig. 3 indicates that an accurate gradient can be obtained even in the presence of significant mode-dependent loss. In the experimental structures of Refs. [11,26], the mode-dependent loss is made much lower due to the choice of the MZI mesh. Thus we expect our protocol to work in practical systems. Our method, in principle, computes gradients in parallel and scales in constant time. In practice, to get this scaling would require careful design of the circuits controlling the OIUs.
Conveniently, since our method does not directly assume any specific model for the linear operations, it may gracefully handle imperfections in the OIUs, such as deviations from perfect 50-50 splits in the MZIs. Lastly, while we chose to make an explicit distinction between the input ports and the output ports, i.e. we assume no backscattering in the system, this requirement is not strictly necessary. Our formalism can be extended to the full scattering matrix. However, this would require special treatment for subtracting the backscattering.
The problem of overfitting is one that must be addressed by 'regularization' in any practical realization of a neural network. Photonic ANNs of this class provide a convenient approach to regularization based on 'dropout' [32]. In the dropout procedure, certain nodes are probabilistically and temporarily 'deleted' from the network during train time, which has the effect of forcing the network to find alternative paths to solve the problem at hand. This has a strong regularization effect and has become popular in conventional ANNs. Dropout may be implemented simply in the photonic ANN by 'shutting off' channels in the activation functions during training. Specifically, at each time step and for each layer l and element i, one may set f_l(Z_i) = 0 with some fixed probability.
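A minimal sketch of this form of dropout; the function name, arguments, and drop probability are illustrative:

```python
import numpy as np

# Dropout on the activation stage: each channel of a layer is zeroed
# independently with probability p during a training step, mimicking
# 'shutting off' that channel in the electronic activation circuit.

def dropout_activation(Z, f, p, rng):
    keep = rng.random(Z.shape[0]) >= p     # keep each channel with prob. 1 - p
    return np.where(keep, f(Z), 0.0)

rng = np.random.default_rng(5)
out = dropout_activation(np.ones(1000, dtype=complex), lambda z: z ** 2, 0.3, rng)
```

Averaged over many channels, roughly a fraction 1 - p of the activations survive unchanged while the rest are zeroed.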
In conclusion, we have demonstrated a method for performing backpropagation in an ANN based on a photonic circuit. This method works by physically propagating the adjoint field and interfering its time-reversed copy with the original field. The gradient information can then be read out directly as an in situ intensity measurement. While we chose to demonstrate this procedure in the context of ANNs, it is broadly applicable to any reconfigurable photonic system. One could imagine this setup being used to tune phased arrays [33], optical delivery systems for dielectric laser accelerators [34], or other systems that rely on large meshes of integrated optical phase shifters. Furthermore, it may be applied to the sensitivity analysis of photonic devices, enabling spatial sensitivity information to be measured as an intensity in the device.
Our work should enhance the appeal of photonic circuits in deep learning applications, allowing for training to happen directly inside the device in an efficient and scalable manner. Furthermore, this method is broadly applicable to integrated and adaptive optical systems, enabling the possibility of automatic self-configuration and optimization without resorting to brute force gradient computation or model-based methods, which often do not perfectly represent the physical system.


Appendix A: Backpropagation for non-holomorphic activation functions

In the previous derivation, we have assumed that the functions f_l(·) are holomorphic. For each element of the input Z_l, labeled z, this means that the derivative of f_l(z) with respect to its complex argument is well defined; that is, the derivative does not depend on the direction in which Δz approaches 0 in the complex plane.
Here we show how to extend the backpropagation derivation to non-holomorphic activation functions. We first examine the starting point of the backpropagation algorithm: the change in the mean squared loss function with respect to the permittivity of a phase shifter in the last layer OIU, which leads to Eq. (7) of the main text via

    dL/dε_L = Re{ Γ_Lᵀ (dX_L/dε_L) },    (A1)

where we had defined the error vector Γ_L ≡ (X_L - T)* for simplicity, and X_L = f_L(Ŵ_L X_{L-1}) is the output of the final layer.

To evaluate this expression for non-holomorphic activation functions, we split f_L(Z) and its argument into their real and imaginary parts,

    f_L(Z) = u(α, β) + i v(α, β),    (A2)
    Z_L = α + i β,    (A3)

where i is the imaginary unit and α and β are the real and imaginary parts of Z_L, respectively. We now wish to evaluate dX_L/dε_L = df_L(Z)/dε_L, which gives the following via the chain rule:

    dX_L/dε_L = (du/dα + i dv/dα) ⊙ dα/dε_L + (du/dβ + i dv/dβ) ⊙ dβ/dε_L,    (A4)

where we have dropped the layer index for simplicity. Here, terms of the form dx/dy correspond to element-wise differentiation of the vector x with respect to the vector y. For example, the i-th element of the vector dx/dy is given by dx_i/dy_i.

Now, inserting Eq. (A4) into Eq. (A1), we have

    dL/dε_L = Re{ Γ_Lᵀ [ (du/dα + i dv/dα) ⊙ dα/dε_L + (du/dβ + i dv/dβ) ⊙ dβ/dε_L ] }.    (A5)

We now define the real and imaginary parts of Γ_L as Γ_R and Γ_I, respectively. Inserting the definitions of α and β in terms of Ŵ_L and X_{L-1}, which give dα/dε_L = Re{(dŴ_L/dε_L) X_{L-1}} and dβ/dε_L = Im{(dŴ_L/dε_L) X_{L-1}}, and doing some algebra, we recover

    dL/dε_L = Re{ [ Γ_R ⊙ du/dα - Γ_I ⊙ dv/dα - i (Γ_R ⊙ du/dβ - Γ_I ⊙ dv/dβ) ]ᵀ (dŴ_L/dε_L) X_{L-1} }.    (A6)

Finally, the expression simplifies to

    dL/dε_L = Re{ δ_Lᵀ (dŴ_L/dε_L) X_{L-1} },    (A7)

with

    δ_L ≡ Γ_R ⊙ (du/dα - i du/dβ) - Γ_I ⊙ (dv/dα - i dv/dβ).    (A8)

As a check, if we insert the conditions for f_L(Z) to be holomorphic, namely the Cauchy-Riemann equations

    du/dα = dv/dβ,    du/dβ = -dv/dα,    (A9)

Eq. (A8) simplifies to

    δ_L = (Γ_R + i Γ_I) ⊙ (du/dα + i dv/dα) = Γ_L ⊙ f'_L(Z_L),    (A10)

as before. This derivation may be similarly extended to any layer l in the network. Whereas we originally defined the δ vectors for holomorphic activation functions as

    δ_l = Γ_l ⊙ f'_l(Z_l),    (A11)

for non-holomorphic activation functions the respective definition is

    δ_l = Γ_R ⊙ (du/dα - i du/dβ) - Γ_I ⊙ (dv/dα - i dv/dβ),    (A12)

where Γ_R and Γ_I are the respective real and imaginary parts of Γ_l, u and v are the real and imaginary parts of f_l(·), and α and β are the real and imaginary parts of Z_l, respectively. We can write this more simply as

    δ_l = Γ_l ⊙ (df/dZ) + Γ_l* ⊙ (df/dZ*)*,    (A13)

in terms of the Wirtinger derivatives df/dZ = (1/2)(df/dα - i df/dβ) and df/dZ* = (1/2)(df/dα + i df/dβ). In polar coordinates, where Z = r exp(iφ) and f = f(r, φ), this equation becomes

    δ_l = exp(-iφ) ⊙ [ Re{Γ_l ⊙ df/dr} - (i/r) Re{Γ_l ⊙ df/dφ} ],    (A14)

where all operations are element-wise.

Appendix B: Photonic neural network simulation
In Sections 4 and 5 of the main text, we have shown, starting from Maxwell's equations, how the gradient information for an arbitrary problem can be obtained through electric field intensity measurements. However, since the full electromagnetic problem is too large to solve repeatedly, for the purpose of demonstrating a functioning neural network, in Section 6 we use the analytic, matrix representation of a mesh of MZIs described in Ref. [22]. Namely, for an even N, the matrix Ŵ of the OIU is parametrized as the product of N + 1 unitary matrices,

Ŵ = D̂ R̂_N R̂_{N−1} ⋯ R̂_1,   (B1)

where each R̂_i implements a number of two-by-two unitary operations, each corresponding to a given MZI, and D̂ is a diagonal matrix corresponding to an arbitrary phase delay added to each port. This is schematically illustrated in Fig. 5(a) for N = 3. For the ANN training, we need to compute terms of the form

dL/dφ = Re{ Y^T (dŴ/dφ) X }   (B2)

for an arbitrary phase φ and vectors X and Y defined following the steps in the main text. Because of the feed-forward nature of the OIUs, the matrix Ŵ can also be split as

Ŵ = Ŵ_2 F̂_φ Ŵ_1,   (B3)

where F̂_φ is a diagonal matrix that applies a phase shift e^{iφ} in port i (its other elements are independent of φ), while Ŵ_1 and Ŵ_2 are the parts of the mesh that precede and follow the phase shifter, respectively (Fig. 5(b)). Thus, Eq. (B2) becomes

dL/dφ = Re{ i e^{iφ} (Ŵ_1 X)_i (Ŵ_2^T Y)_i } = −Im{ e^{iφ} (Ŵ_1 X)_i (Ŵ_2^T Y)_i },   (B4)

where (V)_i is the i-th element of the vector V and Im denotes the imaginary part. This result can be written more intuitively in a notation similar to that of the main text. Namely, if A_φ = e^{iφ} (Ŵ_1 X)_i is the field amplitude generated by the input X from the left, measured right after the phase shifter corresponding to φ, while A_φ^adj = (Ŵ_2^T Y)_i is the field amplitude generated by the input Y from the right, measured at the same point, then

dL/dφ = −Im{ A_φ A_φ^adj }.   (B5)

By recording the amplitudes in all ports during the forward and the backward propagation, we can thus compute in parallel the gradient with respect to every phase shifter. Notice that, within this computational model, we do not need to go through the full procedure outlined in Section 4 of the main text.
However, this procedure is crucial for the in situ measurement of the gradients, and works even in cases which cannot be correctly captured by the simplified matrix model used here.
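The forward/adjoint amplitude product described above is straightforward to reproduce in the matrix model. In the sketch below (NumPy; random complex matrices stand in for the mesh sections Ŵ_1 and Ŵ_2), the product of the two amplitudes recorded at the phase shifter is compared with a finite-difference derivative of Re{Y^T Ŵ X} with respect to the phase φ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
port = 1                              # port carrying the phase shifter

# Random stand-ins for the mesh sections before/after the phase shifter
W1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
X = rng.normal(size=n) + 1j * rng.normal(size=n)
Y = rng.normal(size=n) + 1j * rng.normal(size=n)

def W(phi):
    F = np.eye(n, dtype=complex)
    F[port, port] = np.exp(1j * phi)
    return W2 @ F @ W1

def g(phi):
    # the quantity whose phase sensitivity is needed for training
    return np.real(Y @ W(phi) @ X)

phi0 = 0.7

# forward amplitude just after the phase shifter, and adjoint amplitude
A_fwd = np.exp(1j * phi0) * (W1 @ X)[port]
A_adj = (W2.T @ Y)[port]
grad_fields = -np.imag(A_fwd * A_adj)

eps = 1e-6
grad_numeric = (g(phi0 + eps) - g(phi0 - eps)) / (2 * eps)
print(grad_fields, grad_numeric)      # the two should match
```

Because A_fwd and A_adj can be recorded simultaneously at every phase shifter, one forward and one backward pass yield the full gradient.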

Appendix C: Training demonstration
In the main text we show how the in situ backpropagation method may be used to train a simple XOR network.
Here we demonstrate training on a more complex problem. Specifically, we generate a set of one thousand training examples, represented by input and target (X_0 → T) pairs. Here, X_0 = [x_1, x_2, P, 0]^T, where x_1 and x_2 are the independent inputs, which we constrain to be real for simplicity, and P(x_1, x_2) = √(P_0 − x_1^2 − x_2^2) represents a mode added to the third port to make the norm of X_0 the same for each training example. In this case, we choose P_0 = 10. Each training example has a corresponding label y ∈ {0, 1}, which is encoded in the desired output T as [1, 0, 0, 0]^T for y = 0 and [0, 1, 0, 0]^T for y = 1.
For a given x_1 and x_2, we define r and φ as the magnitude and phase of the vector (x_1, x_2) in the 2D plane, respectively. To generate the corresponding class label, we first draw a uniform random variable U between 0 and 1 and then set y = 1 if

exp[ −(r − r_0 − ∆ sin(2φ))^2 / (2σ^2) ] + 0.1 U > 0.5;   (C1)

otherwise, we set y = 0. For the demonstration, r_0 = 0.6, ∆ = 0.15, and σ = 0.2. The underlying distribution thus resembles an oblong ring centered around x_1 = x_2 = 0, with added noise. As diagrammed in Fig. 6(a), we use a network architecture consisting of six layers of 4 × 4 unitary OIUs, with an element-wise activation f(z) = |z| after each unitary transformation except for the last in the series, which has an activation of f(z) = |z|^2. After the final activation, we apply an additional 'softmax' activation, which gives a normalized probability distribution corresponding to the predicted class of X_0. Specifically, these outputs are given by

s(z_i) = exp(z_i) / Σ_j exp(z_j),

where z_{i=1,2} is the first/second element of the output vector of the last activation (the other two elements are ignored). The ANN prediction for the input X_0 is the larger of these two outputs, while the total cost function is defined in the cross-entropy form

L = Σ_m L^(m) = −Σ_m log( s(z_{m,t}) ),   (C2)

where L^(m) is the cost function of the m-th example, the summation runs over all training examples, and z_{m,t} is the output from the target port t, as defined by the target output T^(m) of the m-th example. We randomly split our generated examples into a training set containing 75% of the originally generated examples, while the remaining 25% are used as a test set to evaluate the performance of the network on unseen examples. As in the XOR demonstration, we use the matrix model of the system described in Appendix B.
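The data-generation procedure can be sketched as follows (NumPy; the sampling range chosen for (x_1, x_2) is an assumption, since the text does not specify it). Note how the power-balancing third port makes every input vector carry the same total power P_0:

```python
import numpy as np

rng = np.random.default_rng(2)
P0, r0, Delta, sigma = 10.0, 0.6, 0.15, 0.2

def make_example():
    # sample (x1, x2); range is an assumed choice keeping P0 - x1^2 - x2^2 > 0
    x1, x2 = rng.uniform(-1, 1, size=2)
    r, phi = np.hypot(x1, x2), np.arctan2(x2, x1)

    # label rule of Eq. (C1): a noisy oblong ring
    U = rng.uniform()
    y = int(np.exp(-(r - r0 - Delta * np.sin(2 * phi)) ** 2 / (2 * sigma ** 2))
            + 0.1 * U > 0.5)

    # third port carries a power-balancing mode so every input has equal norm
    P = np.sqrt(P0 - x1 ** 2 - x2 ** 2)
    X0 = np.array([x1, x2, P, 0.0])
    T = np.array([1, 0, 0, 0]) if y == 0 else np.array([0, 1, 0, 0])
    return X0, T

X0, T = make_example()
print(np.linalg.norm(X0) ** 2)        # = P0 for every example
```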
As in the main text, at each iteration of training we compute the gradient of the cost function with respect to the phases of each of the integrated phase shifters, summed over all training examples. For the backpropagation through the activation functions, since |z| and |z|^2 are non-holomorphic, we use Eq. (A22) from Appendix A, which for f(z) = |z| gives

δ_l = exp(−iφ_l) Re{ Γ_l },   (C3)

and for f(z) = |z|^2 gives

δ_L = 2 |Z_L| exp(−iφ_L) Re{ Γ_L },   (C4)

where φ_l is a vector containing the phases of Z_l and all operations are element-wise. Γ_L is given by the derivative of the cross-entropy loss function for a single training example,

(Γ_L)_i = s(z_i) − δ_{i,t},

where δ_{i,t} is the Kronecker delta.
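The error vector that seeds the backward pass can likewise be checked against a numerical derivative. A minimal sketch (NumPy; a two-element readout stands in for the two used output ports):

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max())           # shift for numerical stability
    return e / e.sum()

# real outputs of the final |z|^2 activation at the two used ports
z = rng.normal(size=2) ** 2
t = 1                                 # target port

def loss(z):
    return -np.log(softmax(z)[t])     # single-example cross-entropy

# analytic error vector: (Gamma_L)_i = s(z_i) - delta_{i,t}
Gamma = softmax(z) - np.eye(2)[t]

# central-difference derivative of the loss with respect to each z_i
eps = 1e-6
grad_numeric = np.array([
    (loss(z + eps * np.eye(2)[i]) - loss(z - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
print(Gamma, grad_numeric)            # the two vectors should match
```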
With this, we can now compute the gradient of the loss function of Eq. (C2) with respect to all trainable parameters and perform a parallel, steepest-descent update of the phase shifters in accordance with the gradient information. Our network successfully learned this task in around 4000 iterations. The results of the training are shown in Fig. 6(b). We achieved an accuracy of 91% on both the training and test sets, indicating that the network was not overfitting the dataset. This can also be confirmed visually from Fig. 6(c). The lack of perfect prediction is likely due to the noise added when generating the class labels.