Abstract
A control-bounded analog-to-digital converter consists of a linear analog system that is subject to digital control, and a digital filter that estimates the analog input signal from the digital control signals. Such converters have many commonalities with delta–sigma converters, but they can use more general analog filters. The paper describes the operating principle, gives a transfer function analysis, and describes the digital filtering. In addition, the paper discusses two examples of such architectures. The first example is a cascade structure reminiscent of, but simpler than, a high-order MASH converter. The second example combines two attractive properties that have so far been considered incompatible. Its nominal conversion noise (assuming ideal components) essentially equals that of the first example. However, its analog filter is a fully connected network to which the input signal is fed in parallel, which potentially makes it more robust against nonidealities.
1 Introduction
Control-bounded analog-to-digital conversion was proposed in [13], as a simplification of control-aided analog-to-digital conversion proposed in [10]. Like several other conversion principles (including pipelined conversion [5], beta-expansion conversion [2, 6, 7], and modulo conversion [15]), control-bounded conversion works by multiple stages of analog amplification with intermediate steps of adding (or subtracting) digitally controlled quantities. However, control-bounded conversion stands out by its analog part operating in continuous time (rather than with discrete-time samples), which is reminiscent of continuous-time delta–sigma (\(\varDelta \Sigma \)) converters. Moreover, the reconstruction principle of control-bounded conversion differs from other conversion principles. In consequence, control-bounded converters can use more general analog filter structures (potentially consuming less power) than delta–sigma converters.
The description in [13] is terse and the performance analysis is rudimentary. In this paper, we describe the operating principle and the digital estimation in more detail and give a more detailed transfer function analysis. Moreover, we discuss two examples of such architectures. The first example (first presented in [13]) is a chain of integrators resembling a multi-stage noise shaping (MASH) \(\varDelta \Sigma \) ADC [8, 16, 17], but with a simpler analog part that precludes a conventional digital cancellation scheme.
Before we move on to the second example, we recall here that the challenge of real-world analog-to-digital conversion is not to minimize the nominal conversion noise with ideal analog circuits, but to cope efficiently (in particular, with limited power consumption) with nonideal circuits and disturbances including component mismatch, thermal noise, etc. But analog cascade structures (as in high-order MASH ADCs and in our first example) are particularly sensitive to disturbances and imperfections at the early stage(s). Therefore, these early stage(s) need to be implemented with much higher precision (and therefore with much higher power consumption) than the later stages, which counteracts the idea of a uniform cascade.
With this background, we now turn to our second example, which is obtained from the first example by an orthogonal transformation of the analog state space. In consequence, the nominal conversion performance (with ideal analog circuits) remains essentially unchanged and is easily scaled to any desired level. However, the physical state space is no longer a cascade, but a fully connected (and nearly uniform) network, into which the analog input signal is fed fully in parallel. In consequence, this new architecture promises to be quite robust against component mismatch and other nonidealities.
The paper is structured as follows. The operating principle and the basic transfer function analysis of control-bounded ADCs are given in Sect. 2. A conversion noise analysis is given in Sect. 3. The first example architecture is presented and analyzed in Sect. 4. Section 5 introduces the state space representations that are used in the remaining sections. The second example architecture is described in Sect. 6. The digital estimation filter is described in Sect. 7. The actual derivation of this filter is outlined in the Appendix.
2 Operating Principle
2.1 Analog Part and Digital Control
Consider the system of Fig. 1. The continuous-time input signal is a scalar u(t) or a vector
The input signal is assumed to be bounded, i.e., \(|u(t)| \le b_{u}\) or \(|u_\ell (t)| \le b_{u}\) for all times t and all components \(\ell =1,\ldots ,k\). This input signal is fed into a continuous-time analog linear system, which produces a continuous-time vector signal
and the digital control in Fig. 1 ensures that
The digital control signals \(s_1(t), \ldots ,\, s_n(t)\) remain constant between the ticks of the digital clock. We will assume that the control is additive, i.e.,
where \(\breve{\mathbf {y}}(t)\) (given by (7)) is the fictional signal \(\mathbf {y}(t)\) that would result without the digital control and where \(\mathbf {q}(t)\) is fully determined by the control signals \(s_1(t)\), ..., \(s_n(t)\). The dependence of \(\mathbf {q}(t)\) on \(s_1(t)\), ..., \(s_n(t)\) may be complicated, but we will never need (nor attempt) to determine \(\mathbf {q}(t)\) explicitly.
This concludes the discussion of the digital control in this section: its role and its effect are fully described by (3) and (4).
Note that both \(\breve{\mathbf {y}}(t)\) and \(\mathbf {q}(t)\) are fictional signals that are not subject to any physical limits. In fact, the first key idea of control-bounded conversion is to use the approximation
Roughly speaking, the relative error of the approximation (5) can be made to vanish by letting the magnitudes of \(\breve{\mathbf {y}}(t)\) and \(\mathbf {q}(t)\) grow to infinity while the difference (4) is kept small by (3).
We now assume that the uncontrolled analog filter is time-invariant and stable with impulse response matrix
where \(g_{i,j}(t)\) is the impulse response from \(u_j(t)\) to \(y_i(t)\). We then have
We will also need the (elementwise) Fourier transform of (6), which will be denoted by \(\mathbf {G}(\omega )\) and will be called analog transfer function (ATF) matrix.
2.2 Digital Estimation and Transfer Functions
Using the approximation (5), the digital estimation produces an estimate of \(\mathbf {u}(t)\) from
which is a continuous-time deconvolution problem. The basic estimate is given by
where \(\mathbf {h}(t)\) is a matrix of stable impulse responses with (elementwise) Fourier transform given in (15).
Note that \(\hat{\mathbf {u}}(t)\) is mathematically defined in continuous time, but it will in practice be computed at discrete times \(t_1, t_2,\, \ldots \) . These computations will be discussed in Sect. 7; it will be shown there that \(\hat{\mathbf {u}}(t_{1}), \hat{\mathbf {u}}(t_{2}),\, \ldots \) can be computed with a digital linear filter directly from the digital control signals \(s_1(t),\, \ldots , s_n(t)\), without actually computing \(\mathbf {q}(t)\).
Using (4), the estimate (10) can be written as
Note that the step from (11) to (12) uses (5) or, equivalently, the approximation
as illustrated in Fig. 1.
The impulse response matrix \(\mathbf {h}\) in (10) is determined by its (elementwise) Fourier transform
where \((\cdot )^\mathsf {H}\) denotes Hermitian transposition, \(\mathbf {I}_{m}\) is the m-by-m identity matrix, and \(\eta >0\) is a design parameter. The estimate (10) with \(\mathbf {h}(t)\) as in (15) can be viewed as a statistical estimate or as the solution of a least-squares problem, as will be detailed in Sect. 2.3.1.
In the important special case where \(\mathbf {u}(t)\) is scalar (i.e., \(k=1\)), the ATF matrix \(\mathbf {G}(\omega )\) is a column vector and \(\mathbf {H}(\omega )\) is a row vector; in this case, using the matrix inversion lemma, (15) can be written as
and
Note that \(\mathbf {H}(\omega ) \mathbf {G}(\omega ) \approx 1\) for frequencies \(\omega \) such that \(\Vert \mathbf {G}(\omega ) \Vert \gg \eta \) while \(\mathbf {H}(\omega ) \mathbf {G}(\omega ) \approx 0\) for \(\Vert \mathbf {G}(\omega ) \Vert \ll \eta \).
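This behavior of \(\mathbf {H}(\omega ) \mathbf {G}(\omega )\) is easy to check numerically. The sketch below is a minimal illustration under two assumptions: that (16) takes the scalar-input Wiener form \(\mathbf {H}(\omega ) = \mathbf {G}(\omega )^\mathsf {H} / (\Vert \mathbf {G}(\omega )\Vert ^2 + \eta ^2)\) (consistent with the matrix-inversion-lemma remark above), and that \(\mathbf {G}(\omega )\) is the chain-of-integrators ATF \(G_\ell (\omega ) = (\beta /(i\omega ))^\ell \) of Sect. 4 with the example values \(n=5\), \(\beta =10\), \(\eta ^2=104.3\).

```python
import numpy as np

def atf(omega, beta=10.0, n=5):
    # Chain-of-integrators ATF from Sect. 4 (rho = 0): G_l = (beta/(i*omega))**l
    return np.array([(beta / (1j * omega)) ** l for l in range(1, n + 1)])

def ntf(omega, eta2=104.3):
    # Assumed scalar-input form of (16): H = G^H / (||G||^2 + eta^2)
    G = atf(omega)
    return G.conj() / (np.linalg.norm(G) ** 2 + eta2)

# ||G|| >> eta: H(w) G(w) is close to 1 (in band);
# ||G|| << eta: H(w) G(w) is close to 0 (out of band).
in_band = ntf(1.0) @ atf(1.0)          # omega = 1: ||G|| is huge
out_of_band = ntf(100.0) @ atf(100.0)  # omega = 100: ||G|| is tiny
```

With these numbers, `in_band` differs from 1 by about \(10^{-8}\) while `out_of_band` is on the order of \(10^{-4}\).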
Equations (11) and (13) can then be interpreted as follows. Eq. (13) is the signal path: the signal \(\mathbf {u}(t)\) is filtered with the signal transfer function (STF) matrix
The second term in (11) is the conversion error
with \(\mathbf {y}(t)\) bounded as in (3). Because of (20), \(\mathbf {H}(\omega )\) will be called noise transfer function (NTF) matrix. The NTF (16) is the starting points of the performance analysis in Sect. 3.
Note that the STF (17) or (18) entails no phase shift and is free of aliasing (hence the title of [13]): the sampling in Fig. 1 (which is used for the digital control) affects the error signal (20), but not (13).
2.3 More About the Estimation Filter
2.3.1 Alternative Characterizations
The estimate (10) and (15) is further illustrated by noting that it is the solution of the continuous-time least-squares problem
where the minimization is subject to the constraints (4) and (7). The first term in (21) quantifies (14) while the second term in (21) is a regularizer.
Moreover, the estimate (10) and (15) coincides also with the Wiener filter that computes the LMMSE (linear minimum mean squared error) estimate of \(\mathbf {u}(t)\) from \(\mathbf {q}(t)\) under the assumptions that \(y_1(t),\, \ldots , y_m(t)\) are independent white-noise signals with power \(\sigma _Y^2\) and \(u_1(t),\, \ldots , u_k(t)\) are independent white-noise signals with power \(\sigma _U^2 = \sigma _Y^2 / \eta ^2\).
The proof of these claims is beyond the scope of this paper; for the essential ideas, we refer to [4] and the Appendix.
However, the estimate (10) and (15) is not justified by these characterizations, but by its practicality.
2.3.2 Bandwidth and the Parameter \(\eta \)
For the following discussion of the parameter \(\eta \) in (15), we restrict ourselves to the scalar-input case, where the STF and the NTF are given by (17) and (16), respectively. In this case, it is easily seen from (17) that \(\eta \) determines the bandwidth of the estimate (10). For example, assuming that \(\Vert \mathbf {G}(\omega )\Vert _\infty \) decreases with \(|\omega |\), the bandwidth is roughly given by \(0\le |\omega | \le \omega _\text {crit}\) with \(\omega _\text {crit}\) determined by
However, the bandwidth of the estimate may be reduced by postfiltering as mentioned in Sect. 2.3.3.
It is also worth noting that the parameter \(\eta \) equals the ratio of the STF (17) and the NTF at \(\omega _\text {crit}\)
as illustrated in Fig. 5.
2.3.3 Postfiltering
The basic estimate (10) need not be the final converter output. For example, an extra (digital!) anti-aliasing filter before sampling \(\hat{\mathbf {u}}(t)\) at discrete times \(t_1, t_2, \ldots \) will normally be advantageous. The integration of such an extra filter in the computations of the basic estimate is straightforward, cf. Sect. 7.3.
2.4 Remarks
We conclude this section with a number of remarks. First, we note that the conversion error (19) is not due to the quantizers in Fig. 1, but due to the approximation (14) or equivalently (5). In other words, the conversion error (19) is fundamentally unrelated to the precision of the quantizer circuits in Fig. 1 (except indirectly via the effectiveness of the digital control).
Second, we note that the details of the digital control (clock frequency, thresholds, etc.) do not enter the transfer function analysis of Sect. 2.2.
Third, the digital estimate (10) is fundamentally a continuous-time quantity, and the resulting STF (18) and (17) are exact continuous-time expressions. Sampling this estimate at discrete times may be required in most applications, but it is not essential to the converter in itself. In fact, nontrivial continuous-time digital signal processing (e.g., beamforming) can be done before any sampling, as suggested in [14, Section 10.2].
Fourth, the digital estimation and the transfer function analysis of Sect. 2.2 work for arbitrary stable analog impulse response matrices \(\mathbf {g}(t)\). In fact, stability of the uncontrolled analog system has here been assumed only for the sake of the analysis: the actual digital filter in Sect. 7 is indifferent to this assumption (hence the title of [10]).
Finally, the purpose of the analog linear system in Fig. 1 is not to prepare the input signal for quantization, but to amplify the input signal, over a frequency band of interest, into the vector signal (7) such that (5) is a good approximation. This very general setting offers design opportunities for the analog system/filter beyond the limitations of conventional \(\varDelta \Sigma \) modulators, as will be illustrated by the examples in Sects. 4 and 6.
3 Conversion Noise Analysis
In this section, we derive an expression (32) for the nominal conversion noise (assuming ideal analog circuits and no thermal noise) in terms of the amplitude response of the analog system. While the analysis in Sect. 2 was mathematically exact, we will here resort to approximations similar to those routinely made in the analysis of \(\varDelta \Sigma \) ADCs. We again restrict ourselves to the case where the input is scalar (i.e., \(k=1\)) and denote it by u(t).
3.1 SNR and Statistical Noise Model
The conversion performance can be expressed as the signal-to-noise ratio (SNR)
where S and \(S_\text {N}\) are the power of \(\hat{u}(t)\) and the power of the conversion error (20), respectively, both within some frequency band \(\mathcal {B}\) of interest.
The numerator in (24) depends, of course, on the input signal. A trivial upper bound is \(S\le b_u^2\), and for a full-scale sinusoid, we have
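Assuming (25) is the usual sinusoid power \(S = b_u^2/2\) (our reading of the missing equation), this is quickly confirmed by a time average:

```python
import numpy as np

# Average power of a full-scale sinusoid u(t) = b_u * sin(2*pi*t),
# assuming (25) reads S = b_u**2 / 2.
b_u = 1.0
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)  # exactly one period
u = b_u * np.sin(2.0 * np.pi * t)
S = float(np.mean(u ** 2))
```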
As for the in-band power \(S_\text {N}\) of the conversion error (20), we begin by writing
where \(\mathbf {y}(t)\) is modeled as a stationary stochastic process with power spectral density matrix
(These statistical assumptions cannot be literally true, but they are a useful model.) Restricting (26) to the frequency band \(\mathcal {B}\) of interest, we have
3.2 White-Noise Analysis
If \(\mathbf {S}_{\mathbf {y}\mathbf {y}^\mathsf {T}}(\omega )\) in (28) is approximated by
we further obtain
where the last step is justified by \(\Vert \mathbf {G}(\omega )\Vert \ge \eta \) for \(\omega \in \mathcal {B}\), cf. (17) and Sect. 2.3.2.
Note that the approximation (29) is restricted to \(\mathcal {B}\) and is ultimately vindicated by the accuracy of (32). Using (32), the scale factor \(\sigma _{\mathbf {y}|\mathcal {B}}^2\) can be determined by simulations.
It is obvious from (32) that a large SNR (24) requires a large analog amplification, i.e., \(\Vert \mathbf {G}(\omega )\Vert \) must be large throughout \(\mathcal {B}\).
4 A First Example: A Chain of Integrators
This example was first presented in [13], but it is here analyzed much further. Moreover, this example is the basis of the examples in Sect. 6. (An even more detailed analysis of this architecture as well as a prototype implementation is reported in [14] and [12].)
4.1 Analog Part and Digital Control
The analog part including the digital control is shown in Fig. 2. The input signal u(t) is a scalar. The state variables \(x_1(t)\), ..., \(x_n(t)\) obey the differential equation
with \(\rho _\ell \ge 0\), \(\kappa _\ell \beta _\ell \ge 0\), and with \(x_{0}(t) {\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} u(t)\). The switches in Fig. 2 represent sample-and-hold circuits that are controlled by a digital clock with period T. The threshold elements in Fig. 2 produce the control signals \(s_\ell (t)\in \{ +1, -1 \}\) depending on the sign of \(x_\ell (kT)\) at the sampling time kT immediately preceding t.
We will assume \(|u(t)| \le b\), and the system parameters will be chosen such that
holds for \(\ell =1,\ldots , n\).
The control-bounded signals \(y_1(t),\ldots ,y_m(t)\) are selected from the state variables \(x_1(t),\ldots ,x_n(t)\) as will be discussed below.
4.2 Relation to MASH Converters
Figure 2 has some similarity with a continuous-time MASH \(\varDelta \Sigma \) modulator [8, 16]. However, MASH converters are fundamentally built around the idea of passing only (or primarily) the quantization error of previous stages to the next stage. By contrast, Fig. 2 does not compute any quantization error signal at all, which is a significant simplification; in consequence, we conjecture that Fig. 2 can be implemented with lower power consumption than the analog part of a MASH converter.
Indeed, Fig. 2 cannot be handled by the digital cancellation schemes normally used in MASH converters. To see this, consider Fig. 3, which shows how the first stage in Fig. 2 would conventionally be modeled (perhaps with \(\tilde{\kappa }\ne \kappa \)), where \(e_1(t)\) is the local quantization error [17]. Since \(e_1(t)\) enters the system in exactly the same way as u(t) (except for a scale factor), these two signals cannot be separated by any subsequent processing.
Nonetheless, the analysis in [14, Section 5.5.4] shows that Fig. 2 achieves essentially the same nominal performance as a MASH converter.
4.3 Conditions Imposed by the Digital Control
The bound (34) can be guaranteed by the conditions
and
With the definition
(36) becomes
which implies \(\gamma _\ell \le 1/2\), and \(\gamma _\ell = 1/2\) is admissible if and only if \(|\kappa _\ell | = b\). In this case (i.e., if \(|\kappa _\ell | = b\)), the control frequency 1/T is admissible if and only if
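The boundedness enforced by the digital control can be illustrated in simulation. The sketch below assumes the concrete dynamics \(\dot{x}_\ell (t) = \beta \big (x_{\ell -1}(t) - \kappa s_\ell (t)\big )\), which is one consistent reading of the description above (with \(\rho _\ell =0\) and the control sign chosen to oppose the state), together with the parameter values of Sect. 4.6; it checks that all states stay within \(b=1\).

```python
import numpy as np

# Chain of integrators under sampled sign control (assumed concrete form):
#   dx_l/dt = beta * (x_{l-1} - kappa * s_l),   x_0(t) = u(t),
#   s_l held at the sign of x_l(kT) between clock ticks.
n, beta, kappa, b = 4, 10.0, 1.05, 1.0
T = 1.0 / 21.5                 # clock period, as in Sect. 4.6
substeps = 50                  # Euler steps per clock period
dt = T / substeps

x = np.zeros(n)
t = 0.0
max_state = 0.0
for k in range(5000):                        # 5000 clock periods
    s = np.where(x >= 0.0, 1.0, -1.0)        # sample-and-hold at the tick
    for _ in range(substeps):
        u = b * np.sin(2.0 * np.pi * 0.1 * t)      # full-scale input at 0.1 Hz
        x_in = np.concatenate(([u], x[:-1]))       # x_0 = u feeds stage 1
        x = x + dt * beta * (x_in - kappa * s)     # forward-Euler step
        t += dt
        max_state = max(max_state, float(np.max(np.abs(x))))
```

Under these assumptions the simulated states remain strictly inside \([-b, b]\), even though the per-period drift can approach \(\beta (b+\kappa )T \approx 0.95\).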
4.4 Transfer Functions
As mentioned, the control-bounded signals \(y_1(t),\,\ldots ,y_m(t)\) are selected from the state variables \(x_1(t),\, \ldots ,x_n(t)\). An obvious choice is \(m=n\) and \(y_1(t)=x_1(t)\), ..., \(y_n(t)=x_n(t)\). In this case, the ATF \(\mathbf {G}(\omega ){\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} \big ( G_1(\omega ),\, \dots , G_n(\omega ) \big )^\mathsf {T}\) of the uncontrolled analog system (as defined in Sect. 2) is given by
Another reasonable choice is \(m=1\) and \(y_1(t)=x_n(t)\) as in [13]. In this case, the ATF is simply
We now specialize to the case where \(\beta _1=\ldots =\beta _n=\beta \) and \(\rho _1=\ldots =\rho _n=\rho \), which makes the analysis more transparent. For \(m=1\) as in (41), we then have
For \(m=n\), we obtain
Note that, for \(\omega ^2+\rho ^2 < \beta ^2\), \(|G_n(\omega )|^2\) as in (42) is the dominant term in (43). In consequence, \(\mathbf {G}(\omega )\) as in (42) yields almost the same performance as (43).
For illustration, the amplitude responses \(|G_1(\omega )|\), ..., \(|G_n(\omega )|\) are plotted in Fig. 4 for \(n=5\), \(\beta =10\), and \(\rho \in \{ 0, \, 0.03 \beta \}\). Fig. 5 shows the resulting STF (17) and the components \(H_1(\omega )\), ..., \(H_n(\omega )\) of the NTF (16) for \(m=n\) (i.e., with \(\Vert \mathbf {G}(\omega ) \Vert \) as in (44)) and \(\eta ^2 = 104.3\).
From now on, we will normally assume \(\rho =0\) (i.e., undamped integrators).
4.5 Bandwidth
Using (42) (with \(\rho =0\)), the bandwidth \(\omega _\text {crit}\) defined by (22) is easily determined to be
For \(\mathbf {G}(\omega )\) as in (44), Eq. (45) does not strictly hold, but it is a good proxy for the bandwidth also in this case.
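For the numerical example of Figs. 4 and 5, the bandwidth edge can be computed directly. The sketch below assumes (45) is obtained by solving \((\beta /\omega _\text {crit})^n = \eta \) (i.e., \(\Vert \mathbf {G}(\omega _\text {crit})\Vert = \eta \) with (42)), which gives \(\omega _\text {crit} = \beta \, \eta ^{-1/n}\):

```python
import math

# Numbers from Figs. 4 and 5: n = 5 integrators, beta = 10, eta^2 = 104.3
n, beta, eta2 = 5, 10.0, 104.3
eta = math.sqrt(eta2)

# Assumed form of (45): (beta / w_crit)**n = eta  =>  w_crit = beta * eta**(-1/n)
w_crit = beta * eta ** (-1.0 / n)
f_crit = w_crit / (2.0 * math.pi)
```

With these numbers, \(\omega _\text {crit} \approx 2\pi \), i.e., \(f_\text {crit} \approx 1\) Hz.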
In the following, we will use the quantity
with \(f_\text {crit} {\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} \omega _\text {crit}/(2\pi )\), which may be viewed as an analog of the oversampling ratio of \(\varDelta \Sigma \) converters. With (45) and with
as in (37), we then obtain
Finally, we recall from Sect. 4.3 that stability can be guaranteed if and only if \(\gamma \le 1/2\).
4.6 Simulation Results
Figures 6 and 7 show the power spectral density (PSD) of the digital estimate \(\hat{u}(t)\) for the numerical example in Figs. 4 and 5 with \(\rho =0\) and with further details as given below. In Fig. 6, the input signal u(t) is a full-scale sinusoid; in Fig. 7, the input signal is \(u(t)=0\). Except for the peak in Fig. 6, both Figs. 6 and 7 thus show the PSD of the conversion error (19).
As for the details in these simulations, we have \(\text {OSR}=32\), \(b=1\), \(\kappa =1.05\), and \(T=1/21.5\), resulting in \(\gamma = 10/21.5\). The frequency of the sinusoidal input signal is 0.1 Hz.
A key point of Figs. 6 and 7 is that the PSD of the conversion error appears to be well described by the white-noise analysis of Sect. 3.2.
4.7 Concluding Remarks
Throughout this section, we have just discussed the nominal performance of a converter with the structure of Fig. 2. A detailed discussion of circuit mismatch, thermal noise, etc., is beyond the scope of this paper, but given in [12, 14]. Further contributions of [14] that are not reported here include a working hardware prototype and variations of the integrator chain including a chain of oscillators and a leapfrog structure.
5 State Space Representation
Both our further examples (in Sect. 6) and the digital estimation (in Sect. 7) require the analog linear system of Fig. 1 to be described in state space form. Specifically, we write
and
where \(\mathbf {x}(t)\) is the state vector, \(\mathbf {s}(t) {\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} \big ( s_1(t),\, \ldots ,\, s_n(t) \big )^\mathsf {T}\) comprises the digital control signals, and \(\mathbf {A}\), \(\mathbf {B}\), \(\mathbf {C}\), \(\varvec{\Gamma }\) are matrices of suitable dimensions. The ATF matrix can then be written as
For the example of Sect. 4, we have
\(\mathbf {B} = \mathbf {B}_\text {C} {\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} \big ( \beta _1, 0, \ldots , 0\big )^\mathsf {T}\), and
If we choose \(m=n\) and \(y_1(t)=x_1(t)\), ..., \(y_n(t)=x_n(t)\), we have \(\mathbf {C}^\mathsf {T}= \mathbf {I}_{n}\); if, instead, we choose \(m=1\) and \(y_1(t)=x_n(t)\), we have \(\mathbf {C}^\mathsf {T}= (0,\,\ldots , 0, 1)\).
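As a numerical sanity check of this representation, the sketch below builds these matrices for \(n=4\) with \(\rho =0\), assumes (51) is the standard state-space transfer function \(\mathbf {C}^\mathsf {T}(i\omega \mathbf {I}-\mathbf {A})^{-1}\mathbf {B}\), and compares against the closed form \(G_\ell (\omega ) = (\beta /(i\omega ))^\ell \) of Sect. 4.4:

```python
import numpy as np

n, beta, omega = 4, 10.0, 3.0

# Chain-of-integrators matrices (rho = 0): beta on the first subdiagonal,
# input enters the first stage only, all states observed (C^T = I).
A = np.diag(np.full(n - 1, beta), k=-1)
B = np.zeros(n); B[0] = beta
C_T = np.eye(n)

# Assumed standard form of (51): G(omega) = C^T (i*omega*I - A)^{-1} B
G = C_T @ np.linalg.solve(1j * omega * np.eye(n) - A, B)

# Closed form from Sect. 4.4: G_l(omega) = (beta / (i*omega))**l
G_ref = np.array([(beta / (1j * omega)) ** l for l in range(1, n + 1)])
```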
6 Hadamard Converters
The chain of integrators discussed in Sect. 4 provides excellent nominal performance (i.e., assuming ideal analog circuits). However, the real problem of analog-to-digital conversion is to efficiently cope with nonideal circuits. But every cascade structure, including that of Fig. 2, is sensitive to disturbances and imperfections at the early stage(s). In consequence, these early stage(s) need to be implemented with much higher precision (and therefore with much higher power consumption) than the later stages, which counteracts the apparent symmetry between the stages in Fig. 2.
We now show that the symmetry of the physical analog circuitry can be restored by a transformation of the state space. The resulting structure is conjectured to be more robust against disturbances and imperfections, as will be discussed in Sect. 6.4.
6.1 The Transform
For the transform, we use the orthogonal \(n\times n\) matrix
where \(\mathbf {H}\) is a Hadamard matrix. (Other orthogonal matrices could be used, but the Hadamard matrix yields circuit-friendly coefficients.)
The state space representation of the Hadamard converters is given by (49) and (50) with
and
where \(\alpha >0\) is a scale factor and we chose \(\mathbf {C}_\text {C} = \mathbf {I}_n\). (The digital control and the matrix \(\varvec{\Gamma }\) will be discussed below.) Note that (55)–(58) is just the chain of integrators in a different coordinate system. In particular, the ATF (51) is unchanged by this transformation. However, the circuit topology has changed: it is obvious from (57) that the input signal u(t) is fed to all integrators equally and in parallel, and the matrix (55) is fully connected.
For example, for \(n=4\), the Hadamard matrix
\(\beta _1 = \ldots = \beta _4 = \beta \), and \(\rho _1=\ldots =\rho _4=0\), we obtain
We also note from (51) that \(\Vert \mathbf {G}(\omega ) \Vert ^2\) is unchanged if (58) is replaced by \(\mathbf {C}^\mathsf {T}= \alpha ^{-1} \mathbf {U}\), where \(\mathbf {U}\) is an arbitrary orthogonal matrix. In fact, replacing (58) by \(\mathbf {C}^\mathsf {T}= a \mathbf {U}\), with an arbitrary nonzero scale factor \(a\in \mathbb {R}\), leaves the nominal conversion noise (32) unchanged (since the scale factor a enters quadratically both into \(\Vert \mathbf {G}(\omega ) \Vert ^2\) and into \(\sigma ^2_{\mathbf {y}|\mathcal {B}}\)). In particular, (58) can be replaced by
with no effect on the nominal conversion noise.
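This invariance is easy to verify numerically. The sketch below assumes \(\tilde{\mathbf {H}} = n^{-1/2}\mathbf {H}\) with the \(4\times 4\) Sylvester Hadamard matrix, applies the coordinate change \(\mathbf {x} \rightarrow \tilde{\mathbf {H}}\mathbf {x}\) to the chain-of-integrators realization, and checks that \(\Vert \mathbf {G}(\omega )\Vert \) is unchanged while the input now feeds all integrators in parallel:

```python
import numpy as np

n, beta, omega = 4, 10.0, 3.0

# 4x4 Sylvester Hadamard matrix and its orthogonal scaling (assumed H~ = H/sqrt(n))
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)
Ht = H / np.sqrt(n)

# Chain-of-integrators realization (rho = 0)
A = np.diag(np.full(n - 1, beta), k=-1)
B = np.zeros(n); B[0] = beta

def atf_norm(A, B, omega):
    # ||G(omega)|| with C^T = I, assuming the standard state-space ATF
    return np.linalg.norm(np.linalg.solve(1j * omega * np.eye(n) - A, B))

# Orthogonal change of coordinates x -> H~ x
A_h = Ht @ A @ Ht.T   # fully connected state matrix
B_h = Ht @ B          # input feeds all integrators equally and in parallel
```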
6.2 Digital Control
The digital control can be effected in different ways, resulting in different Hadamard converters. In the following discussion, we refer to Fig. 8 and the symbols defined therein.
We also keep in mind that the mapping
preserves the Euclidean norm of \(\xi \), but it does not preserve bounds on the individual components of \(\mathbf {\xi } = \big ( \xi _1,\,\ldots ,\xi _n \big )^\mathsf {T}\). In fact, if \(\mathbf {\xi }\) is only constrained by \(|\xi _\ell |\le b\), \(\ell \in \{1,\,\ldots ,n\}\), then the best bound on the components of \(\tilde{\mathbf {H}}\xi \) is \(\sqrt{n}b\).
6.2.1 Integrator Chain Control (ICC)
This mode emulates the control of the chain of integrators (Fig. 2) using the \(\{+1,-1\}\)-valued control signals \(\tilde{s}_1(t),\,\ldots ,\tilde{s}_n(t)\) with \(\tilde{\kappa }_\ell = \kappa _\ell \beta _\ell \), \(\ell \in \{1,\ldots ,n\}\); the control signals \(\breve{s}_1(t),\,\ldots ,\breve{s}_n(t)\) are not used (i.e., \(\breve{\kappa }_1 = \ldots = \breve{\kappa }_n = 0\)). The variables \(\tilde{\mathbf {x}}(t) = \big ( \tilde{x}_1(t),\, \ldots , \tilde{x}_n(t) \big )^\mathsf {T}\) in Fig. 8 are the outputs of the integrators in the chain (Fig. 2), which are related to the physical states \(\mathbf {x}(t) = \big ( x_1(t),\, \ldots , x_n(t) \big )^\mathsf {T}\) of the Hadamard converter by
Choosing
makes sure that all components of \(\mathbf {x}\) are kept within the same limits as the components of \(\tilde{\mathbf {x}}\).
6.2.2 Diagonal Control (DC)
In this mode, the integrator outputs \(x_1(t),\,\ldots ,x_n(t)\) in Fig. 8 are kept within an admissible range using the \(\{+1,-1\}\)-valued control signals \(\breve{s}_1(t),\,\ldots ,\breve{s}_n(t)\); the signals \(\tilde{s}_1(t),\,\ldots ,\tilde{s}_n(t)\) are not used (i.e., \(\tilde{\kappa }_1 = \ldots = \tilde{\kappa }_n = 0\)). In (49), this is expressed by
and \(\mathbf {\Gamma }\) is a diagonal matrix with diagonal elements \(-\breve{\kappa }\).
For guaranteed stability, the analysis of Sect. 4.3 can be adapted as follows. For the sake of illustration, we here specialize to \(n=4\) and \(\mathbf {A}\) as in (60). We assume \(|u(t)|\le b\) and we wish to guarantee
for all \(\ell \in \{1,\,\ldots ,n\}\). Disregarding the control, it follows from (57) and (60) that the input of each integrator is upper bounded by
Thus, (66) can be guaranteed by the conditions
and
Conditions (68) and (69) can be simultaneously satisfied if and only if
This bound is more restrictive than (39). However, extensive simulations have shown that (70) is overly pessimistic; in fact, even \(|\beta | T=1/2\) as in (39) appears to suffice (cf. Fig. 9, which will be discussed in Sect. 6.3).
6.2.3 Combined Control (CC)
The best results (to be detailed below) are obtained by using both the control signals \(\breve{s}_1(t),\,\ldots ,\breve{s}_n(t)\) and \(\tilde{s}_1(t),\,\ldots ,\tilde{s}_n(t)\) in Fig. 8. In this case, mathematical guarantees for \(|x_\ell (t)|\le b\) may be difficult to obtain, or too conservative to be useful.
6.3 Simulation Results
Figures 9, 10, 11, and 12 show some simulation results with these different control schemes. In all these simulations, we have \(\beta _2=\ldots =\beta _n=\beta \), \(|\beta |T = 1/2\), \(n=4\), and \(\alpha =1/\sqrt{n}=1/2\). Moreover, we have:
- ICC: \(\breve{\kappa }=0\), \(\tilde{\kappa }=\beta \), and \(\beta _1=\beta \).
- DC: \(\breve{\kappa }=\beta \), \(\tilde{\kappa }=0\), and \(\beta _1=0.8\beta \).
- CC: \(\breve{\kappa }=\tilde{\kappa }=\beta /\sqrt{2}\), and \(\beta _1=0.8\beta \).
These choices for the parameters of DC and CC are heuristic.
Figure 9 shows the effectiveness of the different control schemes by showing histograms of \(\max _{\ell \in \{1,\,\ldots ,n\}} \{ |x_\ell (t)| \}\) and of \(\max _{\ell \in \{1,\,\ldots ,n\}} \{ |\tilde{x}_\ell (t)| \}\), sampled over the time t, for specific parameter settings as in Fig. 10. The corresponding histograms for Figs. 11 and 12 look quite similar.
Figures 10, 11, and 12 show the PSD of the digital estimate \(\hat{u}(t)\). In Figs. 10 and 11, the input signal u(t) is a full-scale sinusoid; in Fig. 12, the input signal is a small positive constant (specifically, \(u(t)=0.025\)). The sharp peaks in Fig. 12 (and probably also in the other figures) are limit cycles.
From these figures, DC appears to offer little advantage, while CC appears to be the most attractive; in particular, CC can be used with very low OSR (46), where ICC fails due to limit cycles.
6.4 Robustness Against Nonidealities
So far, we have only considered the functionality of Hadamard converters with ideal circuits. However, our primary motivation for considering such converters is that we conjecture them to be potentially very robust against component mismatch and other nonidealities. This conjecture is suggested by the fact that Hadamard converters behave physically much like parallel structures, as is obvious from (57).
In order to demonstrate these robustness properties, we consider a possible hardware implementation as shown in Fig. 13.
This implementation uses a differential op-amp with capacitive feedback to realize the integrators of the Hadamard converter. The transformation \(\tilde{\mathbf {H}}\) is realized using a resistor network as shown in Fig. 14.
The global resistor values R and the capacitor values C in the feedback paths of the op-amps are chosen such that
We now consider the following mismatch scenario. The resistor values are drawn independently from a uniform distribution within \(\pm 1\)% of their respective nominal values. The same scenario is repeated for the chain-of-integrators hardware realization from [12]. The resulting PSDs of the estimate, averaged over 500 such simulations, are shown in Fig. 15. Evidently, the Hadamard converter effectively suppresses the harmonic distortion caused by the mismatch and achieves a better SNR.
Of course, such simulations do not prove the conjectured robustness of the Hadamard converter in an actual implementation, but they support the conjecture.
7 Computing \(\hat{\mathbf {u}}(t)\)
The job of the digital estimation in Fig. 1 is to compute samples of the continuous-time estimate \(\hat{\mathbf {u}}(t)\) defined by (10) and (15). At first sight, this computation looks daunting, involving not only the continuous-time convolution (10), but also the computation of \(\mathbf {q}(t)\) from the control signals \(s_1(t),\,\ldots ,s_n(t)\).
It turns out, however, that samples of \(\hat{\mathbf {u}}(t)\) can be computed quite easily and efficiently by the recursions given in Sect. 7.1. A brief derivation of these recursions is given in the Appendix; in outline, it involves the following steps.
The starting point is that the filter (15) is formally identical with the optimal filter (the Wiener filter) [1, 9] for a certain statistical estimation problem (cf. Sect. 2.3.1). This same statistical estimation problem can also be solved by a variation of Kalman smoothing [9], which leads to recursions based on a state space model of the analog system. The precise form of the required Kalman smoother is not standard, as it combines input signal estimation as in [3] with a limit to continuous-time observations.
Throughout, we will use the state space representation of the analog system as in Sect. 5.
7.1 Basic Filter Algorithm
Assume that we wish to compute the basic estimate \(\hat{\mathbf {u}}(t)\) given by (10) for \(t=t_1,t_2,\ldots .\) We will here restrict ourselves to regular sampling with \(t_k = k T_u\) such that T (the period of the clock in Figs. 1 and 2) is an integer multiple of \(T_u\); in other words, we interpolate regularly between the ticks of the clock in Fig. 1. Moreover, we focus on the steady-state case \(k\gg 1\) where border effects can be neglected. The algorithm consists of a forward recursion and a backward recursion.
- Forward recursion: for \(k=0, 1, 2, \ldots ,\) compute the vectors \(\overrightarrow{\mathbf {m}}_{k}\) (of the same dimension as \(\mathbf {x}(t)\)) by
$$\begin{aligned} \overrightarrow{\mathbf {m}}_{k+1}&{\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }}&\mathbf {A_f} \overrightarrow{\mathbf {m}}_{k} + \mathbf {B_f} \mathbf {s}(t_{k}) \end{aligned}$$ (72)
starting from \(\overrightarrow{\mathbf {m}}_{0} {\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} \mathbf {0}\).
The required matrices \(\mathbf {A_f}\) and \(\mathbf {B_f}\) will be given in Sect. 7.4.
- Backward recursion: compute the vectors \(\overleftarrow{\mathbf {m}}_{k}\) (of the same dimension as \(\mathbf {x}(t)\)) by
$$\begin{aligned} \overleftarrow{\mathbf {m}}_{k}&{\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }}&\mathbf {A_b} \overleftarrow{\mathbf {m}}_{k+1} + \mathbf {B_b} \mathbf {s}(t_k) \end{aligned}$$ (73)
starting from \(\overleftarrow{\mathbf {m}}_{N}=\mathbf {0}\) for some \(N>k\), as well as
$$\begin{aligned} \hat{\mathbf {u}}(t_k)= & {} \mathbf {W}^\mathsf {T}\! \left( \overleftarrow{\mathbf {m}}_{k} - \overrightarrow{\mathbf {m}}_{k} \right) . \end{aligned}$$(74)
The required matrices \(\mathbf {A_b}\) and \(\mathbf {B_b}\) and the matrix \(\mathbf {W}\) will be given in Sect. 7.4.
To be precise, (74) agrees with (10) only for \(k\gg 0\) and \(k\ll N\). In practice, however, \(N-k\) need not be very large for (74) to be accurate, i.e., only a moderate delay (i.e., latency) is required.
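These recursions can be sketched directly in Python (the function and variable names are ours; the matrices \(\mathbf {A_f}, \mathbf {B_f}, \mathbf {A_b}, \mathbf {B_b}\), and \(\mathbf {W}\) are assumed precomputed as in Sect. 7.4):

```python
import numpy as np

def estimate_u(s, A_f, B_f, A_b, B_b, W):
    """Forward/backward recursions (72)-(74): estimate u from control signals.

    s: (N, n) array of control decisions s(t_0), ..., s(t_{N-1}).
    W: (dim, K) matrix; returns an (N, K) array of estimates u_hat(t_k).
    """
    N = s.shape[0]
    dim = A_f.shape[0]
    m_f = np.zeros((N + 1, dim))
    m_b = np.zeros((N + 1, dim))
    # Forward recursion (72), starting from m_f[0] = 0.
    for k in range(N):
        m_f[k + 1] = A_f @ m_f[k] + B_f @ s[k]
    # Backward recursion (73), starting from m_b[N] = 0.
    for k in range(N - 1, -1, -1):
        m_b[k] = A_b @ m_b[k + 1] + B_b @ s[k]
    # Output equation (74): u_hat(t_k) = W^T (m_b[k] - m_f[k]).
    return (m_b[:N] - m_f[:N]) @ W
```

In practice, the backward pass is run over a sliding window of moderate length, as noted above, rather than over the whole record.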
7.2 FIR Filter Version
The computation of (74) can be formulated as a finite impulse response (FIR) filter. For \(T_u=T\) (i.e., samples of \(\hat{\mathbf {u}}(t)\) are produced at the clock rate), we thus obtain
with coefficient matrices
where \(L_1>0\) and \(L_2>0\) need to be chosen large enough such that the truncation of (74) to the finite sum (75) does not significantly affect the overall performance.
If the control signals \(\mathbf {s}(t) = \big ( s_1(t),\, \ldots ,\, s_n(t) \big )^\mathsf {T}\) are \(\{ +1, -1 \}\) valued, the computation of (75) requires \(n(L_1+L_2+1)\) additions (and no multiplications) per time index k for a scalar input signal (or for each scalar component of a vector input signal).
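A direct floating-point sketch of this computation (the function name and the representation of the taps as a list of matrices are ours; the coefficient matrices are assumed given):

```python
import numpy as np

def fir_estimate(s, H, L1):
    """Evaluate the FIR filter (75) on the control signals.

    s: (N, n) array of +/-1 control decisions.
    H: list of L1+L2+1 coefficient matrices h_{-L1}, ..., h_{L2},
       each of shape (K, n).
    Out-of-range taps at the borders are treated as zero.
    """
    N, n = s.shape
    K = H[0].shape[0]
    out = np.zeros((N, K))
    for k in range(N):
        for j, h in enumerate(H):
            idx = k + j - L1
            if 0 <= idx < N:
                # With +/-1 entries in s[idx], h @ s[idx] is just a
                # sign-selected sum of the columns of h, so a hardware
                # implementation needs no multiplications.
                out[k] += h @ s[idx]
    return out
```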
Multiple alternative ways to organize the computation of (74) are discussed in [14].
7.3 Decimation Filtering
In many applications, the clock-rate samples (75) will be subsampled to a lower rate. In this case, an anti-aliasing filter before the subsampling (as in a \(\varDelta \Sigma \) converter) is mandatory. If the control signals \(\mathbf {s}(t)\) are \(\{ +1, -1 \}\) valued, combining the anti-aliasing filter with the filtering (75) retains the multiplierless FIR filter structure.
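As an illustrative sketch (the function name is ours), a scalar anti-aliasing FIR can be folded into the coefficient matrices of (75) offline, so that the combined filter still operates directly on the \(\pm 1\)-valued control signals:

```python
import numpy as np

def combined_taps(H, g):
    """Fold a scalar anti-aliasing FIR g into the matrix taps H of (75).

    H: list of (K, n) coefficient matrices; g: 1-D array of anti-aliasing
    filter coefficients. Returns the tap sequence of the combined filter
    (the discrete convolution of the two tap sequences), which is
    precomputed offline and still multiplierless at run time.
    """
    K, n = H[0].shape
    L = len(H) + len(g) - 1
    Hc = [np.zeros((K, n)) for _ in range(L)]
    for i, h in enumerate(H):
        for j, gj in enumerate(g):
            Hc[i + j] += gj * h  # convolution of tap sequences
    return Hc
```

After combining, the output need only be evaluated at every M-th time index, where M is the subsampling factor.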
7.4 Offline Computations
We now turn to the matrices \(\mathbf {A_f}, \mathbf {B_f}, \mathbf {A_b}, \mathbf {B_b}\) and the matrix \(\mathbf {W}\) in (72)–(74), which can be precomputed.
We first need two symmetric square matrices \(\overrightarrow{\mathbf {V}}\) and \(\overleftarrow{\mathbf {V}}\) (of the same dimension as \(\mathbf {A}\)), which are obtained as follows. The matrix \(\overrightarrow{\mathbf {V}}\) is the limit
of the iteration
equivalently, \(\overrightarrow{\mathbf {V}}\) is the solution of the continuous-time algebraic Riccati equation
The matrix \(\overleftarrow{\mathbf {V}}\) is defined almost identically, but with a sign change in \(\mathbf {A}\), i.e., \(\overleftarrow{\mathbf {V}}\) is the solution of the continuous-time algebraic Riccati equation
The matrix \(\mathbf {W}\) in (74) is then obtained by solving the linear equation
for \(\mathbf {W}\).
The matrix \(\mathbf {A_f}\) in (72) is given by
and the matrix \(\mathbf {A_b}\) in (73) is
Finally, the matrix \(\mathbf {B_f}\) in (72) is
and the matrix \(\mathbf {B_b}\) in (73) is
Note that the only free parameter of the digital filter is \(\eta ^2\) as in (15).
Care must be taken that the quantities of this section are computed with sufficient numerical precision, and the matrices \(\overrightarrow{\mathbf {V}}\) and \(\overleftarrow{\mathbf {V}}\) should be exactly symmetric.
For the example of Sect. 4 (and Fig. 2) with \(n=2\) and \(\rho =0\), the quantities in (81) turn out to be
and \(\mathbf {W} = \frac{1}{2\sqrt{2\eta }}(1, 0)^\mathsf {T}\), which may be a useful test case for numerical computations.
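The computation of \(\overrightarrow{\mathbf {V}}\), \(\overleftarrow{\mathbf {V}}\), and \(\mathbf {W}\) can be sketched with SciPy. Since the displayed equations are not reproduced here, the forms used below are our assumptions: the filter-type Riccati equation \(\mathbf {A}\mathbf {V} + \mathbf {V}\mathbf {A}^\mathsf {T} + \mathbf {B}\mathbf {B}^\mathsf {T} - \mathbf {V}\mathbf {C}\mathbf {C}^\mathsf {T}\mathbf {V}/\eta ^2 = \mathbf {0}\) for \(\overrightarrow{\mathbf {V}}\), the same equation with \(\mathbf {A}\) replaced by \(-\mathbf {A}\) for \(\overleftarrow{\mathbf {V}}\), and the linear system \((\overrightarrow{\mathbf {V}} + \overleftarrow{\mathbf {V}})\mathbf {W} = \mathbf {B}\):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def filter_matrices(A, B, C, eta):
    """Solve the two CAREs for Vf, Vb and the linear system for W.

    Assumed forms (inferred, hypothetical):
        A Vf + Vf A^T + B B^T - Vf C C^T Vf / eta^2 = 0
       -A Vb - Vb A^T + B B^T - Vb C C^T Vb / eta^2 = 0
        (Vf + Vb) W = B
    """
    Q = B @ B.T
    R = eta**2 * np.eye(C.shape[1])
    # solve_continuous_are(a, b, q, r) solves
    # a^T X + X a - X b r^{-1} b^T X + q = 0; passing a = A^T gives the
    # filter-form equation A X + X A^T - X C r^{-1} C^T X + q = 0.
    Vf = solve_continuous_are(A.T, C, Q, R)
    Vb = solve_continuous_are(-A.T, C, Q, R)
    # Enforce exact symmetry, as recommended in the text.
    Vf = (Vf + Vf.T) / 2
    Vb = (Vb + Vb.T) / 2
    W = np.linalg.solve(Vf + Vb, B)
    return Vf, Vb, W
```

The stated test case of this section (n = 2, \(\rho = 0\)) can serve as a check of such a computation.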
8 Conclusion
Control-bounded conversion is a new type of analog-to-digital conversion where a digital estimate of the continuous-time analog input signal(s) is obtained as the principled solution of a natural inverse problem. We have developed the fundamentals of such converters, including a transfer function analysis and the implementation of the digital estimation as a practical linear filter. The flexibility of the digital control and estimation allows the use of more general analog filter structures than in conventional converters.
We gave two examples of such architectures. The first example is a chain of integrators (first proposed in [13]), which is reminiscent of a continuous-time MASH ADC, but with a simpler analog part that cannot be satisfactorily handled by a conventional digital cancellation scheme. The second example is obtained from the first example by a transformation of the state space, resulting in essentially the same nominal performance, but with a fully connected physical structure that is conjectured to be more robust against component mismatch and other nonidealities.
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current work.
Change history
26 February 2022
The article was published without the open access funding note. The funding note “Open access funding provided by Swiss Federal Institute of Technology Zurich” has been updated.
Notes
1. This is common knowledge among designers, but textbooks seem to be taciturn about it.
2. The extension of the following transfer function analysis to unstable analog systems is possible, but beyond the scope of this paper.
3. Simulating the analog system requires solving differential equations. We used the SciPy software package [18], which implements a Runge–Kutta method.
4. In this section, we use k to index time steps, which is unrelated to the dimensionality of \(\mathbf {u}(t)\) as in (1).
5. In this appendix, we use K, rather than k as in (1), to denote the number of input signals.
6. The discrete times \(t_1, t_2, \ldots \) in this appendix (with \(t_k - t_{k-1}=\varDelta \rightarrow 0\)) are unrelated to the discrete time steps in Sect. 7.
References
1. B.D.O. Anderson, J.B. Moore, Optimal Filtering (Prentice Hall, Hoboken, 1979)
2. J. Biveroni, H.-A. Loeliger, On sequential analog-to-digital conversion with low-precision components, in 2008 Information Theory & Applications Workshop, UCSD, La Jolla, CA, January 27–February 1 (2008)
3. L. Bolliger, H.-A. Loeliger, C. Vogel, LMMSE estimation and interpolation of continuous-time signals from discrete-time samples using factor graphs. arXiv:1301.4793
4. L. Bruderer, H.-A. Loeliger, Estimation of sensor input signals that are neither bandlimited nor sparse, in 2014 Information Theory & Applications Workshop (ITA), San Diego, CA, February 9–14 (2014)
5. T.C. Carusone, D.A. Johns, K.W. Martin, Analog Integrated Circuit Design, 2nd edn. (Wiley, Hoboken, 2012)
6. I. Daubechies, R.A. DeVore, C.S. Güntürk, V.A. Vaishampayan, Beta expansions: a new approach to digitally corrected A/D conversion, in Proceedings of 2002 IEEE International Symposium on Circuits and Systems (ISCAS), May 26–29 (2002), pp. II-784–II-787
7. I. Daubechies, R.A. DeVore, C.S. Güntürk, V.A. Vaishampayan, A/D conversion with imperfect quantizers. IEEE Trans. Inf. Theory 52(3), 874–885 (2006)
8. J.M. de la Rosa, Sigma-delta modulators: tutorial overview, design guide, and state-of-the-art survey. IEEE Trans. Circuits Syst. I 58(1), 1–21 (2011)
9. T. Kailath, A.H. Sayed, B. Hassibi, Linear Estimation (Prentice Hall, Hoboken, 2000)
10. H.-A. Loeliger, L. Bolliger, G. Wilckens, J. Biveroni, Analog-to-digital conversion using unstable filters, in 2011 Information Theory & Applications Workshop (ITA), UCSD, La Jolla, CA, USA, February 6–11 (2011)
11. H.-A. Loeliger, J. Dauwels, J. Hu, S. Korl, L. Ping, F.R. Kschischang, The factor graph approach to model-based signal processing. Proc. IEEE 95(6), 1295–1322 (2007)
12. H.-A. Loeliger, H. Malmberg, G. Wilckens, Control-bounded analog-to-digital conversion: transfer function analysis, proof of concept, and digital filter implementation. arXiv:2001.05929
13. H.-A. Loeliger, G. Wilckens, Control-based analog-to-digital conversion without sampling and quantization, in 2015 Information Theory & Applications Workshop (ITA), UCSD, La Jolla, CA, USA, February 1–6 (2015)
14. H. Malmberg, Control-Bounded Converters, Ph.D. thesis at ETH Zurich no. 27025 (2020). https://doi.org/10.3929/ethz-b-000469192
15. O. Ordentlich, G. Tabak, P.K. Hanumolu, A.C. Singer, G.W. Wornell, A modulo-based architecture for analog-to-digital conversion. IEEE J. Sel. Top. Signal Process. 12(5), 825–840 (2018)
16. M. Ortmanns, F. Gerfers, Continuous-Time Sigma-Delta A/D Conversion: Fundamentals, Performance Limits and Robust Implementations (Springer, Berlin, 2006)
17. S. Pavan, R. Schreier, G.C. Temes, Understanding Delta-Sigma Data Converters, 2nd edn. (Wiley, Piscataway, 2017)
18. P. Virtanen et al., SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272 (2020). https://doi.org/10.1038/s41592-019-0686-2
Funding
Open access funding provided by Swiss Federal Institute of Technology Zurich.
Appendix
In this appendix, we give a condensed derivation of the algorithm of Sect. 7. (A detailed development of all the required background is beyond the scope of this paper.)
As mentioned in Sect. 2.3.1, the filter (15) can be viewed as a multivariate extension of the continuous-time Wiener filter [1] that estimates a multivariate zero-mean white Gaussian noise “signal” \(\mathbf {U}(t)\) from the signal
where \(\mathbf {Z}(t)\) is m-dimensional zero-mean white Gaussian noise that is independent of \(\mathbf {U}(t)\). In this statistical model, the average
(for \(\varDelta >0\)) is a K-dimensional (Footnote 5) zero-mean Gaussian random variable with covariance matrix \(\frac{\sigma _U^2}{\varDelta } \mathbf {I}_{K}\). The covariance matrix \(\frac{\sigma _Z^2}{\varDelta }\mathbf {I}_{m}\) of \(\mathbf {Z}(t)\) is defined analogously.
By “estimating \(\mathbf {U}(t)\),” we really mean estimating the random variable(s) (89) for any fixed t and then taking the limit \(\varDelta \rightarrow 0\) [4]. In this setting, the MAP estimate, the MMSE estimate, and the LMMSE estimate agree and equal the mean of the posterior distribution of \(\tilde{\mathbf {U}}(t,\varDelta )\) conditioned on the observation of \(\tilde{\mathbf {Y}}(t)\). The Wiener filter computes this estimate (for \(\varDelta \rightarrow 0\)) as
where the Fourier transform of \(\mathbf {h}(t)\) is (15) with
Applying this Wiener filter to the signal \(\mathbf {q}(t)\) as in (10) means that we solve the statistical estimation problem for the observation \(\tilde{\mathbf {Y}}(t) = \mathbf {q}(t)\).
The same statistical estimation problem can also be solved by a variation of Kalman smoothing (or by an equivalent variation of recursive least squares, cf. (21)). In contrast to the Wiener filter, the Kalman approach is based on the state space equations (49) and (50), which leads to recursive estimation algorithms. We will use a discrete-time approximation of the state space model with discrete times (Footnote 6) \(t_1, t_2, \ldots \) and fixed \(t_k - t_{k-1} = \varDelta >0\); our continuous-time results will then be obtained by taking the limit \(\varDelta \rightarrow 0\).
From now on, we will use factor graphs as in [11], which allow us to compose recursive estimation algorithms from lookup tables of “local” computations. A factor graph of (the discrete-time approximation of) our statistical model in state space form is shown in Fig. 16. Note that Fig. 16 represents the uncontrolled analog system with the observations \(\tilde{\mathbf {Y}}(t_k) = \mathbf {q}(t_k)\).
Now we plug the (known and piecewise-constant) control signals \(\mathbf {s}(t) = (s_1(t),\ldots , s_n(t))\) into the state space model. We thus obtain the factor graph of Fig. 17, where all the observed signals are now zero, cf. (14). This second factor graph is easy to work with, and it allows taking the limit \(\varDelta \rightarrow 0\) to continuous time at the end.
Using the notation of [11], we now consider the quantities \(\overrightarrow{\mathbf {m}}_{\mathbf {X}(t)}\) and \(\overrightarrow{\mathbf {V}}_{\mathbf {X}(t)}\) as well as \(\overleftarrow{\mathbf {m}}_{\mathbf {X}(t)}\) and \(\overleftarrow{\mathbf {V}}_{\mathbf {X}(t)}\). The former denote the mean vector and the covariance matrix, respectively, of the forward sum-product message, which equals the Gaussian probability density of the time-t state \(\mathbf {X}(t)\) given past observations (up to a scale factor); the latter denote the mean vector and the covariance matrix, respectively, of the backward sum-product message, which equals the likelihood of the (given) future observations conditioned on \(\mathbf {X}(t)\) (up to a scale factor).
From Fig. 17, we determine these quantities using Tables II–IV of [11] as follows. From (III.1) and (II.7) of [11], we have
and from (IV.2) and (IV.3) of [11], we have
For \(\varDelta \approx 0\), we have
thus (92) becomes
and (93) becomes
Combining (95) and (96) yields (77)–(79) as the steady-state condition for
in the limit \(\varDelta \rightarrow 0\).
The derivation of (80) is essentially identical except that the matrix \(e^{\mathbf {A}\varDelta }\) is replaced by its inverse, which amounts to a sign change in \(\mathbf {A}\).
As for \(\overrightarrow{\mathbf {m}}_{\mathbf {X}(t)}\), we have
from (III.2) and (II.9) of [11], and
from (IV.1) and (IV.3) of [11]. For \(\varDelta \approx 0\), we obtain with (94)
where we have used the normalized stationary covariance matrix (97). Note that (100) is exact in the limit \(\varDelta \rightarrow 0\) and amounts to the differential equation
The solution of this differential equation (for \(t>0\)) is
with \(\tilde{\mathbf {A}} {\mathop {=}\limits ^{\scriptscriptstyle \bigtriangleup }} \mathbf {A} - \overrightarrow{\mathbf {V}} \mathbf {C}\mathbf {C}^\mathsf {T}/ \eta ^2\). This solution applies to any interval between \(t_k\) and \(t_{k+1}\) in Sect. 7.1 and yields (72) with (82) and (84).
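For piecewise-constant control signals, integrating this differential equation over one clock period gives \(\mathbf {A_f} = e^{\tilde{\mathbf {A}}T}\) and \(\mathbf {B_f} = \big ( \int _0^T e^{\tilde{\mathbf {A}}t}\, \mathrm {d}t \big ) \varvec{\Gamma }\). A sketch (the function name is ours) that computes both with a single augmented matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def forward_matrices(A, Vf, Gamma, C, eta, T):
    """Compute A_f = expm(At*T) and B_f = (int_0^T expm(At*t) dt) @ Gamma
    for the forward recursion (72), with At = A - Vf C C^T / eta^2.
    """
    At = A - Vf @ C @ C.T / eta**2  # Atilde, as defined in the text
    n = At.shape[0]
    m = Gamma.shape[1]
    # Augmented-matrix trick: the top-right block of expm([[At, Gamma],
    # [0, 0]] * T) equals the integral of expm(At*t) @ Gamma over [0, T].
    M = np.zeros((n + m, n + m))
    M[:n, :n] = At
    M[:n, n:] = Gamma
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]
```

The matrices \(\mathbf {A_b}\) and \(\mathbf {B_b}\) follow in the same way after the sign changes in \(\mathbf {A}\) and \(\varvec{\Gamma }\) described below.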
The derivation for \(\overleftarrow{\mathbf {m}}_{\mathbf {X}(t)}\) is essentially identical except for a sign change in both \(\mathbf {A}\) and \(\varvec{\Gamma }\), where the latter is due to (II.10) of [11].
Finally, we use the result from [3] that the MAP/MMSE/LMMSE estimate of U(t) (i.e., the posterior mean of (89) for \(\varDelta \rightarrow 0\)) is given by
with
which yields (74) and (81). Note that (103) and (104) may also be obtained directly from Fig. 17 using (II.12), (III.8), and (III.9) of [11] and then taking the limit \(\varDelta \rightarrow 0\).
Malmberg, H., Wilckens, G. & Loeliger, HA. Control-Bounded Analog-to-Digital Conversion. Circuits Syst Signal Process 41, 1223–1254 (2022). https://doi.org/10.1007/s00034-021-01837-z