Squeezing as a resource for time series processing in quantum reservoir computing



I. INTRODUCTION
Squeezing is a quantum phenomenon characterized by reduced light-field quadrature fluctuations below shot-noise levels [1-3]. Initially employed in fundamental quantum tests, such as Einstein-Podolsky-Rosen (EPR) paradox experiments [4,5], squeezing has emerged as a crucial resource in diverse quantum technologies. Notably, squeezed states have been extensively utilized in quantum metrology to enhance measurement sensitivity for parameter estimation [6,7], clock synchronization [8], and gravitational-wave detection [9]. Moreover, their role as a resource for quantum entanglement has been harnessed for quantum cryptography protocols [10,11]. In boson sampling experiments, large multimode squeezed states have made it possible to achieve a quantum advantage [12,13]. Additionally, they serve as the primary resource for universal measurement-based quantum computing in continuous variables (CV) [14] through the generation of cluster states [15-17]. In the context of quantum machine learning, squeezing is fundamental for CV quantum neural networks to outperform their classical counterparts [18]. In this work, we focus on the favorable impact of squeezing on time-series prediction and forecasting in the context of Quantum Reservoir Computing (QRC) [19].
Reservoir Computing (RC) constitutes an unconventional paradigm within the realm of machine learning techniques rooted in recurrent neural networks [20-22]. Particularly tailored for time series processing, RC allows fast learning with minimal training costs. The RC framework has demonstrated its effectiveness in real-world scenarios including temporal prediction tasks [23-26] as well as classification tasks [27,28]. By harnessing the information processing capabilities of high-dimensional dynamical systems, RC concepts have seamlessly transitioned to physical substrates [29], with photonic and optoelectronic implementations receiving attention for their high-speed attributes [30-33].
Recently, the scope of RC has expanded to encompass quantum systems, capitalizing on their augmented Hilbert space for enhanced performance [19]. Notably, quantum enhancements in temporal tasks under ideal conditions have been observed in both spin [34,35] and photonic setups [36,37]. Different aspects influencing quantum reservoir performance have been considered, and the effectiveness of complex task solving has been addressed for different evolution maps [34,37-39], the role of statistics [40,41], and different quantum phases [42]. These investigations assumed ideal conditions, attributing performance improvements to factors such as improved memory properties, more favorable nonlinearities, and expanded accessible Hilbert spaces.
However, QRC faces several challenges, with substantial attention directed towards addressing the presence of noise in output observables [43-47]. While readout noise is relevant to classical RC as well [48,49], it acquires heightened significance in QRC due to intrinsic sampling noise arising from the stochastic nature of quantum measurements. This noise significantly hinders potential quantum enhancements [44-46]. The strategy of monitoring the output while accounting for the effects of quantum measurement in temporal tasks is also particularly critical [45,50]. Preserving quantum advantage within these non-ideal circumstances is needed to ensure the viability of QRC protocols.
In Ref. [50] it was shown how weak (instead of projective) measurements allow continuous monitoring in QRC, instead of buffering inputs and rewinding the reservoir dynamics. Continuous homodyne monitoring in QRC was addressed in Ref. [45] on a photonic platform, where a loop-based architecture suitable for online time series processing was proposed. These works show the possibility of sustaining the RC performance in the quantum setting in the presence of non-ideal measurement conditions. Motivated by these results, in this work we adopt a slightly simplified version of the photonic platform used in Ref. [45] and investigate how the presence of squeezing in the optical cavity can improve the performance of the reservoir. We numerically analyze the role of squeezing, addressing active and passive coupling terms in a photonic network, in both linear and nonlinear tasks. An analytical argument is made to justify such an improvement.
The paper is structured as follows: in Sect. II, the general framework of RC as well as a detailed description of our photonic platform are presented. In Sect. III the simulation results for some benchmark RC tasks under non-ideal noise conditions are shown: we test the linear memory of the system (Sect. III A) as well as the non-linear memory (Sect. III B) and the performance on time-series forecasting (Sect. III C). In Sect. IV we show numerical evidence to explain the noise robustness improvement caused by squeezing in the previous tasks. Finally, conclusions are given in Sect. V.

II. LOOP-BASED ARCHITECTURE
A. Reservoir computing

RC architectures are mainly composed of three distinct layers: the input layer, the reservoir, and the readout layer [51]. The external signal is encoded and fed into the system in the input layer. This is done sequentially at each time step. The reservoir layer (or just reservoir) is usually a complex dynamical system that applies a non-linear map to the inputs. The reservoir must retain short-term memory of previous inputs to be able to perform temporal tasks. This short-term or fading memory, together with the echo state property, is part of the universality proofs of RC [52]. The readout layer is then made of a certain number of reservoir observables, which are monitored sequentially after the input is encoded. The output from this layer is a linear combination of the measured observables. Supervised learning is performed by optimizing this linear combination to yield the desired target.
In more detail, if we have a training set consisting of a sequence of L inputs {s_1, s_2, ..., s_L}, at time step k the input s_k is encoded and introduced into the reservoir. If we call x_k the reservoir degrees of freedom, we can write the reservoir map at time step k as

x_k = F(x_{k−1}, s_k). (1)

This map is fixed throughout the whole protocol. For the readout layer at time step k, we use O_k as readout observables (functions of x_k). The reproduced function at each time step is obtained by performing a linear regression on the readout observables,

y_k = w_0 + w^⊤ O_k, (2)

where the weight vector W = (w_0, w^⊤)^⊤ is optimized through training examples. Training works as follows: for the sequence of L training inputs, we define the vector y = (y_1, ..., y_L)^⊤ and the matrix V whose k-th row is (1, O_k^⊤), (3) so Eq. (2) can be rewritten as y = V W. If we want the output y to get as close as possible to the desired target function, ȳ, we choose the set of weights that minimizes the mean square error (MSE),

MSE(y, ȳ) = (1/L) Σ_{k=1}^{L} (y_k − ȳ_k)^2. (4)

It can be shown that the optimal set of weights reaching the minimum of Eq. (4) is W_opt = V^MP ȳ, where V^MP is the Moore-Penrose inverse of V [22]. The higher the value of L, the more precise our estimation of the optimal weights will usually be. Once the system has been trained, we feed a (smaller) test set of L′ inputs into the reservoir. The RC performance is then checked on this test set of new, unseen data using the MSE metric from Eq. (4).
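As an illustration, the training step above can be sketched in a few lines of NumPy. This is a generic sketch of Eqs. (2)-(4); the helper names train_readout and evaluate are ours, not part of the platform:

```python
import numpy as np

def train_readout(O, y_target):
    """Optimal readout weights W = (w0, w^T)^T via the Moore-Penrose
    pseudoinverse, minimizing the MSE of Eq. (4).

    O: (L, M) matrix whose k-th row holds the readout observables O_k.
    y_target: (L,) desired target sequence y_bar.
    """
    L = O.shape[0]
    V = np.hstack([np.ones((L, 1)), O])   # prepend bias column: y = V W
    return np.linalg.pinv(V) @ y_target   # W_opt = V^MP y_bar

def evaluate(O, W):
    """Reproduce the output y = V W for a (possibly new) observable matrix."""
    V = np.hstack([np.ones((O.shape[0], 1)), O])
    return V @ W
```

Training on a fresh input sequence and evaluating on held-out test observables then follows the protocol described above.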

B. Description of the platform
Our architecture works in the CV quantum optical regime. The physical substrate is an N-mode optical pulse traveling through a closed optical loop or cavity (N denotes the size of the reservoir). The N-mode internal degrees of freedom inside each pulse can be attained via, for example, frequency multiplexing [16,53-56]. In optics, frequency multiplexing has already been shown to be a useful strategy for classical RC [57]. In our approach, the external information is injected from a pulse-generating light source, which provides squeezed vacuum states. The input sequence is encoded in the squeezing phases of the input pulses (one input value for each pulse), depicted as the source in Fig. 1. Fast, accurate, and reconfigurable phase-setting devices have already been used in experiments with great impact [13]. Each input pulse is coupled to the loop pulse using a beam splitter (BS) with reflectivity R, shown in Fig. 1, yielding two output pulses. One of them remains in the loop and provides feedback for the next iteration (creating a quantum memory). In this way, the reservoir can retain information from previous inputs without the need for external memory. The remaining output pulse is passed to a detector that measures each mode and uses the obtained observables for the readout layer. The fraction of light that remains in the cavity on each round trip is determined by the BS reflectivity R.
Inside the cavity, a nonlinear medium is placed (NL in Fig. 1), which applies a dynamical transformation to the loop pulse each time it passes through. This creates a complex optical network [54,55,58-60] and can be modeled by a Hamiltonian that is quadratic in the field operators,

Ĥ = Σ_{i,j} (α_ij â_i^† â_j + β_ij â_i^† â_j^†) + h.c., (5)

where â_i (â_i^†) is the annihilation (creation) operator of mode i. The coupling terms α_ij and β_ij encode different network topologies and lead to entanglement among modes inside the loop pulse at each round trip. If all the terms β_ij = 0, the dynamical transformation is called passive, whereas if any β_ij ≠ 0 the transformation is active. Active transformations in CV quantum optics do not conserve the average number of photons of the quantum state and are known to generate squeezing, the main resource for entanglement [2,3]. It is important to note that a passive cavity also produces entangled states because the external input pulses are already squeezed (even though it does not generate additional squeezing). From the detected modes, the moments of the field quadratures can be computed and used as observables for the readout layer. As we are injecting squeezed vacuum states, only even-order moments are considered. For the tasks shown in Sect. III, the chosen set of observables is composed of second- and fourth-order moments, for a total of N(3N+1)/2 observables. In Gaussian states, fourth-order moments can be written as nonlinear functions of the second-order moments. In our case, they are useful for enhancing the accessible nonlinear terms, which are relevant for several tasks and come at no experimental expenditure [45]. We note that the readout size scales quadratically with the number of modes N. Usually, in optical reservoir computing the dimensionality scales linearly with N, as the information is encoded in field amplitudes [32,33,61]. By introducing the inputs in the field quantum fluctuations we access a broader dimensional space by generating mode correlations and entanglement, which allows more complex information processing in relatively smaller reservoirs [36,45]. To access the averaged values of the field correlations we consider averaging over an ensemble of realizations [34].
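The statement that fourth-order moments of Gaussian states are nonlinear functions of the second-order moments can be made concrete with Isserlis' (Wick's) theorem. The sketch below, in our own notation, assumes zero-mean, symmetrically ordered quadrature moments with covariance matrix sigma:

```python
import numpy as np

def fourth_moment(sigma, a, b, c, d):
    """Isserlis/Wick theorem for a zero-mean Gaussian state:
    <q_a q_b q_c q_d> = s_ab*s_cd + s_ac*s_bd + s_ad*s_bc,
    where s_ij are entries of the quadrature covariance matrix sigma."""
    return (sigma[a, b] * sigma[c, d]
            + sigma[a, c] * sigma[b, d]
            + sigma[a, d] * sigma[b, c])
```

For instance, for a vacuum state with quadrature variance 1/2, the theorem gives <x^4> = 3 (1/2)^2, a purely quadratic function of the second moment.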
Each RC time step encompasses the whole process we have detailed: BS coupling of the input and loop pulses, detection of the output pulse, and transformation of the loop pulse by the non-linear crystal. There are as many time steps as samples in the input sequence, and they are labeled with the letter k. So at the k-th time step, the input s_k is encoded and introduced into the system, and the vector of observables O_k is measured and used for the readout layer.
The platform is based on the proposal in Ref. [45], where real-time information processing was reported. The main novelty here is that we specifically address the role of a squeezing reservoir, with both active and passive transformations. The goal is to assess the importance of quantum resources for the performance of QRC. In order to simplify the experimental footprint, we consider a single NL crystal. Indeed, this has no significant effect when processing past inputs. This design can also be adapted for single-loop ensemble processing by adding a fiber, as shown in [45].

III. RESERVOIR COMPUTING TASKS UNDER ADDITIVE NOISE
Noise in the readout layer is known to be significantly detrimental to RC performance [48,49]. Some strategies have been developed in different architectures to make reservoirs more robust to noise [43,48]. In this section, we study the effect of the amount of squeezing produced by the cavity crystal on the noise robustness of the platform in the readout layer and compare it to the one obtained by tuning the BS reflectivity. In our simulations, readout noise is included as additive fluctuations in the measured observables,

O_meas^(k) = O_ideal^(k) + E^(k), (6)

where 'ideal' stands for the observable that would be measured in the ideal case of zero fluctuations, corresponding to an infinite number of measurements, and E^(k) stands for the additive noise vector. We model the noise as normally distributed fluctuations of variance σ²_noise applied to the measured quadratures.
Although the added noise is absolute (it does not depend on the magnitude of the observables), we can use the vacuum noise variance (σ²_vacuum = 1/2 in our case), or shot noise, as a relative measure of the additive noise intensity. In that regard, added noise of variance 0.1 is equivalent to 20% of vacuum fluctuations.
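A minimal sketch of this noise model, Eq. (6), in NumPy. The function names are ours, and the shot-noise convention follows the σ²_vacuum = 1/2 stated above:

```python
import numpy as np

SIGMA2_VACUUM = 0.5  # vacuum (shot-noise) quadrature variance in this convention

def add_readout_noise(O_ideal, sigma2_noise, rng=None):
    """Additive Gaussian readout noise: O_meas = O_ideal + E, with E drawn
    i.i.d. from N(0, sigma2_noise) for each measured observable."""
    rng = np.random.default_rng(rng)
    return O_ideal + rng.normal(0.0, np.sqrt(sigma2_noise),
                                size=np.shape(O_ideal))

def relative_to_shot_noise(sigma2_noise):
    """Noise variance expressed as a fraction of vacuum fluctuations."""
    return sigma2_noise / SIGMA2_VACUUM
```

With these conventions, a noise variance of 0.1 corresponds to relative_to_shot_noise(0.1) = 0.2, i.e. the 20% of vacuum fluctuations quoted above.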
In our simulations, we generate every crystal Hamiltonian, Eq. (5), randomly with the condition that every one of its supermodes is squeezed by e^−r (see App. A for details). In every realization, the modes of the input pulses are squeezed with a fixed squeezing strength, r_input = 2 (approximately 8.7 dB). The encoding function of the squeezing phase, ϕ_k = f(s_k), is tuned depending on the task we are considering. Concretely, we consider the family of linear functions ϕ_k^(m) = mπs_k. In this respect, the smaller the value of m, the better the reservoir is at reproducing linear and quadratic functions of s_k. For increasing values of m, higher non-linear contributions become more relevant [36,45].
We consider three temporal tasks: the linear memory task, the nonlinear autoregressive moving average (NARMA) task, and the forecasting of the Mackey-Glass chaotic time series [62,63]. These three tasks provide a broad overall picture of the properties of the reservoir memory for both linear and nonlinear computations. After applying the training protocol described in Sect. II A for a given target function, we check the performance of the trained reservoirs on an additional test input sequence. For evaluation, we mainly use the mean square error, Eq. (4), after training optimization, min_W MSE(y, ȳ). So that the MSE is normalized between 0 and 1, we normalize both the target and the reproduced function data to zero mean and unit standard deviation.

A. Linear memory task
The linear memory task is the simplest way to check the accessible memory of our reservoir in the presence of noise. We train the reservoir to reproduce past entries of the input series. That is, we consider the target function

ȳ_k(d) = s_{k−d}, (7)

i.e., we aim to reproduce, at time step k, the input that was introduced d time steps in the past (the input at delay d). For simulations, we consider an input sequence composed of random entries drawn uniformly from the interval [−1, 1]. We tune the squeezing-angle encoding to ϕ_k = πs_k/4 to maximize linear contributions. For visualization purposes, we use the linear capacity, defined as C(d) = 1 − min_W MSE(y, ȳ(d)), to quantify the performance on this task. The vector ȳ(d) is composed of the target function of Eq. (7) at each time step, as a function of the delay d. Training and test set sizes are set to 4000 and 1000, respectively.
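The linear capacity computation can be sketched as follows, assuming the readout observables are collected row-wise in a matrix O. This is a generic sketch of C(d) with standardized target (the helper name linear_capacity is ours), not the full platform simulation:

```python
import numpy as np

def linear_capacity(O, s, d):
    """Linear capacity C(d) = 1 - min_W MSE(y, y_bar(d)) for the delay-d
    memory task with target y_bar_k = s_{k-d}.  The target is standardized
    to zero mean and unit variance, so C(d) lies in [0, 1].

    O: (L, M) observables, one row per time step; s: (L,) input sequence.
    """
    y_bar = s[:-d] if d > 0 else s          # input d steps in the past
    Od = O[d:]                               # align observables with target
    y_bar = (y_bar - y_bar.mean()) / y_bar.std()
    V = np.hstack([np.ones((len(Od), 1)), Od])
    W = np.linalg.pinv(V) @ y_bar            # optimal readout weights
    mse = np.mean((V @ W - y_bar) ** 2)
    return 1.0 - mse
```

A reservoir whose observables contain a clean copy of s_{k−d} yields C(d) close to 1, while delays absent from the observables give C(d) near 0.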

Figures 2a and 2b show the linear capacity as a function of the delay d of the target function in Eq. (7) for different values of the reflectivity (R = 0.75 in Fig. 2a and R = 0.9 in Fig. 2b). In both plots, the noise variance is 10^−2 (2% of vacuum fluctuations). We see that for both reflectivities, increasing the cavity squeezing improves the attainable memory. A higher reflectivity provides a longer 'tail' in the linear capacity at the expense of reducing it for small and intermediate delays. This is the effect of the smaller amount of light leaving the cavity for increasing values of R. In Fig. 2c the delay at which the linear capacity drops below 0.9 (which we also call the delay cut) is plotted as a function of the noise variance. We find that cavity squeezing provides significant noise robustness. For the reflectivity R = 0.75, adding cavity squeezing equal to r = 1.5 provides a high linear capacity beyond delay 10 for a noise intensity of σ²_noise = 0.1. In Fig. 2c the drawback of increasing the BS reflectivity can also be noted: for small noise intensity (σ²_noise = 10^−3) and no cavity squeezing (light color), R = 0.9 (dashed line) improves the delay cut compared to R = 0.75 (solid line), as increasing the reflectivity improves the memory robustness to noise. However, when the noise intensity is increased, the delay cut for R = 0.9 drops below the one for R = 0.75. This is because the higher the reflectivity, the less light leaves the cavity and travels to the detector. If the readout fluctuations are large compared to the intensity of the light coming from the loop, the accessible memory of the reservoir is severely degraded. In contrast, cavity squeezing significantly increases the accessible linear memory in the presence of a large noise intensity.

B. NARMA10 task
In this section, we analyze the performance of the reservoir on the NARMA task, which requires high linear and non-linear memory. It is one of the most common benchmark tasks and has been used to test several QRC proposals [42,43]. In this article, we consider the NARMA10 task, with target function at time k given by the recurrence

ȳ_{k+1} = α ȳ_k + β ȳ_k Σ_{j=0}^{9} ȳ_{k−j} + γ u_{k−9} u_k + δ, (8)

where the default constant parameters are set to (α, β, γ, δ) = (0.3, 0.05, 1.5, 0.1). The function inputs u_k = µ + ν s_k are set by the parameters (µ, ν) = (0, 0.2), where the s_k are taken from a uniform random distribution in the interval [−1, 1] (they are the source inputs). For the reservoir to perform well on the NARMA10 task, it needs a high linear capacity up to delay 10 and a low error when reproducing the quadratic term u_{k−9} u_k. To perform this task we consider the same input encoding as in Sect. III A, namely ϕ_k = πs_k/4. To test the performance we use the MSE defined in Eq. (4). In Figs. 3a and 3b the performance is shown, comparing the effect of cavity squeezing (x-axis in Fig. 3a) and BS reflectivity (x-axis in Fig. 3b). Three different noise scenarios are considered in both figures: the ideal case (blue boxes), and noise variance equal to 10^−2 (green boxes) and 10^−1 (pink boxes). In the ideal case, the optimal values of cavity squeezing and BS reflectivity are found to be r = 0 and R = 0.5. It can be seen that increasing either cavity squeezing or reflectivity degrades the performance in the absence of noise. The reason is that, for the chosen encoding and observables, increasing these two parameters increases linear memory at the expense of quadratic memory, which is also very relevant for the NARMA task. This balance between linear and non-linear memory is well known in the field of RC [36,65,66].
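For reference, a sketch of the NARMA10 target generation, assuming the standard recurrence with the parameter values quoted above. This is our reading of the target function, and the helper name narma10 is ours:

```python
import numpy as np

def narma10(s, alpha=0.3, beta=0.05, gamma=1.5, delta=0.1, mu=0.0, nu=0.2):
    """Generate the NARMA10 target from an input sequence s in [-1, 1],
    using the standard recurrence:
      y_{k+1} = alpha*y_k + beta*y_k*sum_{j=0}^{9} y_{k-j}
                + gamma*u_{k-9}*u_k + delta,   with u_k = mu + nu*s_k.
    The first 10 target entries are initialized to zero.
    """
    u = mu + nu * np.asarray(s, dtype=float)
    y = np.zeros(len(u))
    for k in range(9, len(u) - 1):
        y[k + 1] = (alpha * y[k]
                    + beta * y[k] * np.sum(y[k - 9:k + 1])
                    + gamma * u[k - 9] * u[k]
                    + delta)
    return y
```

The sum over the last ten outputs is what demands linear memory up to delay 10, while the product u_{k−9} u_k demands the quadratic memory discussed above.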
Moving to the realistic case of a finite number of measurements, for a noise intensity of σ²_noise = 10^−2, the improvement of the cavity squeezing over the reflectivity is clearly seen. In that noise scenario, increasing either parameter improves the performance up to an optimal value (r ∼ 1 and R ∼ 0.8). However, in the passive cavity case, the optimal value has a higher error in comparison to the active cavity and is further away from the ideal-case error. For higher values of the noise, σ²_noise = 10^−1, most examples completely fail at reproducing the NARMA function. Only when the cavity squeezing is r ≳ 1 does the MSE drop below 1. This is because in most scenarios the noise level makes it impossible for the reservoir to resolve inputs with a delay of 10 or more, as the noise intensity becomes comparable with the value of the system observables, giving a bad signal-to-noise ratio [45].
Even though there is a counterbalance between linear and quadratic memory, and thus increasing the cavity squeezing and the BS reflectivity is detrimental to the performance in ideal scenarios, in the presence of readout noise the active cavities with high squeezing outperform the rest. Indeed, the only case where the performance of the noisy reservoir comes close to the ideal case is when the cavity squeezing is higher than r = 1. This provides an interesting example where the role of a given resource needs to be addressed beyond ideal settings, as benefits could arise in the presence of noise.
FIG. 3. Performance on the NARMA10 task: box plot of the mean square error (MSE) of the NARMA10 task as a function of (a) squeezing and (b) reflectivity for different values of the noise variance (different colors). In Fig. 3a the reflectivity is R = 0.5 and in Fig. 3b the cavity squeezing is zero. For a given value of the x-axis, the boxes for each noise scenario are offset to avoid overlapping.

C. Time series prediction of a chaotic signal
One of the main applications of RC is time series forecasting, and thus in this section we consider the task of forecasting the Mackey-Glass time series [67]. The differential equation that describes the dynamics of the signal is

ds/dt = β s(t − τ)/[1 + s(t − τ)^n] − γ s(t), (10)

where for τ = 17 the time series is chaotic [62,63]. For the input sequence, we have sampled the solutions to Eq. (10) with time resolution t_r = 3 [61], so that s_k = s(t_0 + k t_r) (the initial conditions are chosen randomly). The target function for the training is to predict the next input in the sequence, that is, ȳ_k = s_{k+1}. For this specific task, we use the input encoding ϕ_k = πs_k, which provides higher nonlinear memory [45]. Once the reservoir has been trained to predict the next value of the signal, we can feed the predicted values back as new inputs for the protocol. Ideally, the reservoir then faithfully reproduces the chaotic signal without the need for new input data. We call this protocol autonomous driving.
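The input sequence can be generated by integrating the delay equation with a simple Euler scheme. The sketch below assumes the common Mackey-Glass parameter values β = 0.2, γ = 0.1, n = 10, which are not stated explicitly above; the function name and the form of the random initial history are ours:

```python
import numpy as np

def mackey_glass(n_samples, tau=17, t_r=3.0, dt=0.1,
                 beta=0.2, gamma=0.1, n=10, x0=1.2, rng=None):
    """Integrate dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)
    with a forward-Euler scheme and a delay buffer, then sample with
    resolution t_r so that s_k = x(t0 + k*t_r)."""
    rng = np.random.default_rng(rng)
    delay_steps = int(round(tau / dt))
    sample_every = int(round(t_r / dt))
    # random initial history around x0 (random initial conditions)
    x = list(x0 + 0.1 * rng.standard_normal(delay_steps + 1))
    total = n_samples * sample_every + delay_steps
    for t in range(delay_steps, total):
        x_tau = x[t - delay_steps]
        x.append(x[t] + dt * (beta * x_tau / (1 + x_tau**n) - gamma * x[t]))
    return np.array(x[delay_steps::sample_every])[:n_samples]
```

For accurate benchmarking a higher-order integrator (e.g. Runge-Kutta on the delay equation) would be preferable; the Euler version is only meant to show the sampling convention.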
In Fig. 4a the autonomously driven signal evolution is plotted for two different values of the cavity squeezing (green curve for r = 0 and blue curve for r = 1.25), while the noise variance is kept at σ²_noise = 10^−1. The real signal is shown as a black curve for comparison. We see that the active cavity achieves a much better prediction than the passive one in the long term, providing accurate predictions up to 50 time steps. The passive-cavity reservoir cannot overcome the effects of noise, and thus its prediction performance drops dramatically. In Figs. 4b and 4c, we compare the Mackey-Glass chaotic attractor (black dots) to 3000 values from single realizations of trained reservoirs with a passive cavity (Fig. 4b, green dots) and an active cavity with r = 1.25 (blue dots). Also in this case, we see that the reservoir with cavity squeezing is able to approximately reproduce the attractor, while the passive reservoir is not.

IV. ACCESSIBLE MEMORY ENHANCEMENT
In this section, we explain in detail the reason behind the performance improvement due to the cavity squeezing shown in the previous sections. From a physical point of view, the BS and the cavity crystal can induce competing effects on the reservoir memory. The BS causes a loss of photons in the loop pulse. On the other hand, the non-linear processes that arise from the interactions inside the crystal can increase the energy of the loop pulse. Concretely, active crystals (those that produce squeezing) increase the total photon number inside the pulse, as opposed to passive crystals, which keep it constant. This energy enhancement from active crystals counteracts BS losses and helps retain information inside the loop pulse for longer times.
To quantify how these effects contribute constructively to the functioning of the QRC, we study the time evolution of the loop pulse during the protocol. At each round trip, the field quadratures of the loop pulse transform via the symplectic matrix A = √R S, where R stands for the BS reflectivity and S is the symplectic matrix modeling the evolution of the pulse inside the cavity crystal (see Apps. A and B for details on the matrices S and A, respectively). The memory retention of our reservoir is directly related to the powers of the symplectic matrix A (App. B 1) and can be quantified using the spectral norm of A^d (written as ∥A^d∥_2), where d represents the delay of the input information (Eq. (B5) from App. B 2). In that regard, the faster ∥A^d∥_2 decays to zero, the smaller the memory retention of our reservoir will be (and vice versa). It can be analytically shown that, if the crystal inside the cavity is passive, then ∥A^d∥_2 = R^{d/2}, whereas for an active crystal the norm is only bounded by ∥A^d∥_2 ≤ (√R e^{r/2})^d. From these expressions, we infer that active cavity crystals can improve the memory retention of the reservoir (similarly to the direct effect of the BS reflectivity). In Fig. 5 we show the values of ∥A^d∥_2 as a function of the delay for the randomly generated Hamiltonians considered in Sect. III. In Fig. 5a the spectral norm of A^d is plotted for active cavities with different values of cavity squeezing, r, while in Fig. 5b the same is shown for passive cavities with different values of the BS reflectivity. We see how the negative slope of the spectral norm decreases as we increase either the cavity squeezing (Fig. 5a) or the BS reflectivity (Fig. 5b). This means that the magnitude of the delayed input information decays more slowly, and thus it may be reproduced by our trained reservoir more easily. In the presence of readout noise, input information decaying more slowly with time translates into more accessible memory and improves the reservoir's robustness to noise. This is why cavity squeezing improves the performance in every benchmark task shown in Sect. III. While both the reflectivity and the cavity squeezing contribute to such a memory enhancement, increasing the BS reflectivity has the drawback of reducing the cavity light reaching the detector (lowering the signal-to-noise ratio), and thus it is not as effective for increasing the noise robustness.
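The memory-retention argument can be checked numerically. The sketch below builds A = √R S from the Bloch-Messiah form of App. A; generating random orthogonal symplectic U, V through the standard unitary embedding is our implementation choice, and all function names are ours:

```python
import numpy as np

def random_orthogonal_symplectic(N, rng=None):
    """Random orthogonal symplectic matrix (xxpp ordering): the image of a
    Haar-random N x N unitary u = X + iY under u -> [[X, -Y], [Y, X]]."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))  # fix phases -> Haar unitary
    X, Y = q.real, q.imag
    return np.block([[X, -Y], [Y, X]])

def loop_matrix(N, R, r, rng=None):
    """A = sqrt(R) * S with S = U Delta_xi V^T (Bloch-Messiah form),
    every supermode squeezed by e^{-r/2}; r = 0 gives a passive crystal."""
    rng = np.random.default_rng(rng)
    U = random_orthogonal_symplectic(N, rng)
    V = random_orthogonal_symplectic(N, rng)
    delta = np.diag(np.concatenate([np.full(N, np.exp(r / 2)),
                                    np.full(N, np.exp(-r / 2))]))
    return np.sqrt(R) * U @ delta @ V.T

def spectral_norm_decay(A, d_max):
    """||A^d||_2 for d = 1..d_max (largest singular value of A^d)."""
    norms, Ad = [], np.eye(A.shape[0])
    for _ in range(d_max):
        Ad = Ad @ A
        norms.append(np.linalg.norm(Ad, 2))
    return np.array(norms)
```

For a passive crystal the decay is exactly R^{d/2}, while the active case decays more slowly at every delay, reproducing the trend of Fig. 5.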

V. CONCLUSION
In light of the significant achievements in classical RC [68], photonic platforms are emerging as promising candidates for quantum implementations. Commendable features are, for instance, fast processing rates and low decoherence at room temperature [19]. Different photonic QRC platforms have already been theoretically explored and show improvements due to the enlarged Hilbert space [36,69] as well as the ability of real-time processing without the use of external memories [45].
Detrimental effects of readout noise have been discussed both in classical RC [48,49] and in QRC settings [43-45]. In the case of quantum reservoirs, the problem is even more profound, as readout noise is theoretically unavoidable due to the stochastic nature of quantum measurements, which produce statistical fluctuations that can hinder any possible quantum advantage [19,44,45]. In this paper, we demonstrate the performance-enhancing potential of quantum squeezing (active cavity) applied to a vacuum-state quantum memory (loop pulse) to overcome noise in realistic scenarios. This establishes squeezing as a quantum resource for accessing the enlarged space of quantum correlations and entanglement and for improving the performance in relevant benchmark tasks, either predictive or requiring memory. Even though tuning the BS reflectivity also improves memory retention, as in [45], increasing cavity squeezing is shown to be a preferred method to improve the reservoir robustness under adverse noise conditions. Interestingly, the effect of squeezing on the QRC performance when accounting for measurement noise completely deviates from predictions under ideal conditions.
In summary, state-of-the-art frequency-multiplexed quantum networks [16,55,56,60] represent a powerful setup for near-term experimental implementations of QRC. Our results serve as a guide for experimental design, laying the foundations for photonic QRC in CV in realistic noisy scenarios and exploiting quantum resources.

Appendix A

The operators x̂_i = (â_i + â_i^†)/√2 and p̂_i = i(â_i^† − â_i)/√2 (i = 1, ..., N) are, respectively, the amplitude and phase quadratures of each mode. For quadratic Hamiltonians as in Eq. (5), the evolution of the quadrature vector Q can be written as

e^{i Ĥt} Q e^{−i Ĥt} = S_Ĥ(t) Q, (A1)

where S_Ĥ is a 2N × 2N symplectic matrix [2,3]. For the sake of clarity, we drop the Ĥ subscript (and the time dependency) and simply call the symplectic matrix S. Every symplectic matrix admits a Bloch-Messiah decomposition [70,71],

S = U ∆_ξ V^⊤, (A2)

where U and V are orthogonal symplectic matrices and ξ_i^{−1} (ξ_i) provides the squeezing (anti-squeezing) applied to the i-th supermode (we consider ξ_i ≥ 1 ∀i). Thus the squeezing and anti-squeezing values obtained from the Bloch-Messiah decomposition correspond to the singular values of S, and the transformation is passive iff all the singular values of S are equal to 1 (∆_ξ is equal to the identity). It is trivial to show that if S is passive, then all its powers S^d are also passive.
For the simulations performed in this manuscript, we have considered that the non-linear crystal squeezes every supermode by the same amount, so ξ_i^{−1} ≡ e^{−r/2} (∀i), where r denotes the squeezing strength per mode applied by the Hamiltonian. Thus, for our simulations we set ∆_ξ = ⊕_{i=1}^{N} diag(e^{r/2}, e^{−r/2}) and choose the matrices U and V randomly to generate the symplectic matrix S, applying Eq. (A2).
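The passivity criterion above can be expressed as a small numerical check on the singular values of S (helper names are ours):

```python
import numpy as np

def is_passive(S, tol=1e-9):
    """A symplectic transformation is passive iff all singular values of S
    equal 1, i.e. Delta_xi is the identity in the Bloch-Messiah form."""
    return bool(np.all(np.abs(np.linalg.svd(S, compute_uv=False) - 1.0) < tol))

def squeezing_per_supermode(S):
    """Squeezing factors xi_i >= 1 read off from the singular values of a
    2N x 2N symplectic S (the anti-squeezed half of the spectrum)."""
    sv = np.sort(np.linalg.svd(S, compute_uv=False))[::-1]
    return sv[: S.shape[0] // 2]
```

For example, a single-mode squeezer diag(e^{r/2}, e^{−r/2}) is active, while a phase-space rotation is passive; the same test applied to S^d confirms that powers of a passive S stay passive.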

FIG. 2. Linear memory under additive noise: (a-b) linear capacity as a function of the delay for different values of squeezing (green for r = 0, orange for r = 0.75, and purple for r = 1.5) and different reflectivities: R = 0.75 in (a) and R = 0.9 in (b). In both figures, the noise variance is σ²_noise = 10^−2. The curves are averaged over 100 different random realizations and the shaded areas depict the standard deviation. (c) Delay at which the linear capacity drops below 0.9 as a function of σ²_noise, including the ideal case (σ²_noise = 0), for different values of squeezing (different colors) and different values of the reflectivity (line style).

FIG. 4. Mackey-Glass time series prediction under additive noise: (a) time series as a function of the reservoir time steps: the black curve shows the real time series while the green and blue curves show the autonomously driven predictions for r = 0 and r = 1.25, respectively (averages taken over 100 realizations, with shaded areas depicting the standard deviation). (b-c) Chaotic attractor in the phase space y(t)-y(t−6): the black points show the real attractor while the green and blue dots show the results of an autonomously driven realization with r = 0 and r = 1.25, respectively. In every simulation, the BS reflectivity is set to R = 0.75 and the noise variance is σ²_noise = 10^−1.

FIG. 5. Spectral norm of the loop matrix dynamics: spectral norm of the loop dynamical matrix A raised to the power of the delay d for (a) active transformations with R = 0.5 and different values of the squeezing r, and (b) passive transformations (r = 0) with different values of the reflectivity. Each curve shows the median over 100 different realizations while the shaded areas range from the first to the ninth decile.