Quick Estimation of Periodic Signal Parameters from One-bit Measurements

Abstract—Estimation of periodic signals based on quantized data is a topic of general interest in the area of instrumentation and measurement. While several methods are available, new applications require low power, low complexity, and adequate estimation accuracy. In this paper, we consider the simplest possible quantization, that is, binary quantization, and describe a technique to estimate the parameters of a sampled periodic signal using a fast algorithm. By neglecting the possibility that the sampling process is triggered by some signal-derived event, sampling is assumed to be asynchronous, that is, the ratio between the signal and the sampling periods is defined to be an irrational number. To preserve enough information at the quantizer output, additive Gaussian input noise is assumed as the information encoding mechanism. With respect to published techniques addressing the same problem, the proposed approach does not rely on the numerical maximization of the likelihood function, but provides solutions that are very close to the maximum likelihood estimate. At the same time, since the main estimator is based on matrix inversion, it proves to be less time-consuming than the numerical maximization of the likelihood function, especially when solving problems with a large number of parameters. The estimation procedure is described in detail and validated using both simulation and experimental results. The estimator performance limitations are also highlighted.

Index Terms—identification, nonlinear quantizers.


I. INTRODUCTION
Estimating the characteristic parameters of a system or a signal using quantized data is a central problem in instrumentation and measurement. Conventional procedures, such as least squares estimators, are shown to be sub-optimal and perform increasingly worse as signals are more coarsely quantized [1]. The challenge of recovering input signal information is greatest when a binary quantizer, e.g., a comparator, is used. On the other hand, binary quantization is attractive, mainly because of the ease of generating and processing binary data, even at very large sampling rates. Clearly, if suitable modulation of the input sequence is possible, one-bit quantization may be performed so as to reduce information loss, as in one-bit ∆Σ analog-to-digital converters (ADC) [2]. Conversely, if the input signal is not pre-processed and is directly converted by a one-bit quantizer, the processing needed to recover some or all of the signal parameters can only benefit from the signal encoding properties of the additive noise possibly affecting the comparator input. If the input signal is consistently above or below the comparator threshold, the output sequence can take only a single value and any estimator would fail to provide meaningful information. Moreover, if the input signal is unaffected by noise, the quantizer output only produces information about signal zero-crossings, possibly useful for estimating the signal frequency components but insufficient to allow estimation of the amplitudes of these components. Thus, it is exactly the noise source at the input that acts as a sequence randomizer and encodes information about the signal parameters so that it can be extracted at the output of the one-bit quantizer. As an example, consider the quantization of a constant signal affected by white Gaussian noise.
Reference [3] shows that the amount of information at the quantizer output vanishes when the noise standard deviation tends to zero.
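As a minimal sketch of this noise-encoding mechanism (with illustrative values, not taken from the paper), the following snippet recovers a constant level from one-bit comparisons against a threshold T: the fraction of logical-zero outputs estimates Φ((T − x)/σ), which can be inverted when σ is known.

```python
import math, random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p):
    """Inverse of phi by bisection (adequate for this sketch)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def estimate_constant(x_true, sigma, T=0.0, n=200_000, seed=1):
    """Recover a constant level from one-bit comparisons against T.

    Each sample is quantized as y = 1 if x + noise >= T, else 0;
    the fraction of zeros estimates Phi((T - x)/sigma)."""
    rng = random.Random(seed)
    zeros = sum(1 for _ in range(n) if x_true + rng.gauss(0.0, sigma) < T)
    p0 = zeros / n
    return T - sigma * phi_inv(p0)
```

As σ → 0 the estimated probability saturates at 0 or 1 and the inversion degenerates, which mirrors the vanishing-information argument of [3].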
Importance for the instrumentation and measurement community: In general, advanced processing of signals based on complexity reduction is of interest for the measurement community [4]. Practical usage of single-bit quantization is the subject of work on one-bit measurements of sparse signals [5]-[7]. Similarly, system identification based on binary quantized data [8]-[10] is a technically challenging problem with applications in the identification of both linear and nonlinear systems. At the same time, both the measurement of ADC performance and the impact of ADC resolution on the estimation of input signal parameters play a central role in instrumentation and measurement [11].
Applications in ADC testing: When testing ADCs, test signals are typically sinusoidal or multi-tone signals [12]. The input signal parameters are estimated through the same data record used to find the ADC threshold levels, and the accuracy with which these parameters are known affects the ADC test results. Many published papers address this problem [13], [14]. A simple estimator, like the one described in this paper, may ease this task. Moreover, ADCs are among the first devices in the signal chain of most modern instrumentation. This paper shows how to process basic information, such as binary data, to estimate the parameters of a periodic input signal.
High frequency applications: Observing that binary quantization may be performed using off-the-shelf high sample rate (5 GSa/s) comparators [15] or experimental (20 GHz) comparators [16], the analysis of techniques to estimate parameters of analog signals using very coarsely quantized data can lead to the usage of simple instrumentation, even in frequency ranges traditionally characterized by the usage of complex instruments.
State-of-the-Art: Previous research on this subject considered the estimation of system parameters when the input sequence is known [17], [18] and when the input signal is synchronously sampled, that is, when the ratio between the sampling sequence period and the signal period is a rational number [19]. Several other references have addressed a wide class of similar problems in the context of system identification [20]-[27]. In the areas of signal processing and communications, several contributions were made over the years to address signal and channel estimation problems and threshold value optimization. For a comprehensive list of these papers see [28]. The topic of single-bit quantization is the subject of extensive research covering the characteristics of scalar- and vector-signal parameter estimation problems, under the assumptions of both known and unknown noise distributions [29], [30]. In these references, general properties were derived, such as the Cramér-Rao lower bound and the log-convexity of the likelihood function. This latter property also applies to the case considered in this paper. Noisy single-bit quantization was considered also in [31], where the observed signal is a single sinusoid. Similarly, [32] describes the properties of the maximum likelihood estimator (MLE) of the parameters of a noisy single sinusoidal signal after binary quantization. The analysis is extended to the binary quantization of multiple sinusoids in [33], where a non-linear least squares estimator is described to solve the problem. Finally, numerical methods for the estimation of the likelihood function may resort to iterative procedures such as the expectation-maximization approach [34]-[36].

Fig. 1: The signal chain considered in this paper (a) and its equivalent version (b). The effect of the invertible nonlinear function is attributed to the one-bit quantizer Q_T(x).
The case considered in this paper: This paper describes the properties of a quick estimator based on the transformation of the considered estimation problem into a linear problem, enabling the usage of ordinary matrix-inversion operations on the data. This approach improves upon the time performance of published estimators resorting to numerical maximization of the likelihood function, under similar accuracy performance. Alternatively, this procedure can be used to initialize the MLE at a starting point that is very close to the final solution, so as to improve the MLE convergence speed.
In fact, when severe quantization occurs, as in the binary case, conventional estimation procedures, such as least squares estimators, either fail to provide meaningful results [1] or require substantial numerical processing when the problem complexity increases, e.g., when the number of parameters to be estimated exceeds 50, as shown in section VII in the case of the MLE [37]. The proposed approach thus allows the solution of very large estimation problems, for which the approach based on the MLE would result in larger computational time. This setting of the problem, which is important for the instrumentation and measurement community, is not covered in the literature.

II. PROBLEM STATEMENT
The problem analyzed in this paper can be described according to the following characteristics.
Measurement setup: The signal chain considered in this paper is depicted in Fig. 1(a), where Q(·) is a binary quantizer and ηj is a sequence of zero-mean independent random variables with variance σ². The function f(·) is an invertible nonlinear function possibly distorting the signal prior to its binary quantization.
Signal: The sequence xj is obtained by sampling a periodic signal x(·) asynchronously. Assuming that N samples are processed, for j = 0, . . . , N − 1 we can write:

xj = θ1 + Σ_{m=1}^{P} [θ_{2m} sin(2πmλj) + θ_{2m+1} cos(2πmλj)]   (1)

where P is the known number of harmonic components in the periodic signal, θm are the signal parameters to be estimated, and λ is the normalized sampling rate. Since sampling is asynchronous, λ is irrational with probability 1. In practice, however, digital signal generators and clock sources have a finite frequency resolution that may result in a rational value of λ, especially if the generators are synchronized. Under suitable conditions, this number can be treated as being close to an irrational number, as shown in section III-B. Thus, the case when λ is a rational number is also included in the procedure described in the following.
Noise assumptions: The input noise is a zero-mean white Gaussian random sequence.
Goal of this paper: It is shown how to achieve a fast estimation of the amplitude θm of each component in xj by processing the quantizer output samples yj. The resulting estimator is called in the following the Binary Quantile-Based Estimator (BQBE). In addition, a procedure is proposed to validate the assumption about the noise probability distribution.

III. THE ESTIMATION PROCEDURE
The description of the working principle of BQBE is based on the mathematical modeling of the three major entities involved in this problem: (a) the signal chain, (b) the process of asynchronously sampling a periodic signal with known frequency, (c) the actual parametric estimator.

A. Modeling the signal chain
The quantizer input-output characteristic can be described as:

Q(x) = 1 if x ≥ T0, Q(x) = 0 otherwise,   (2)

where T0 is the quantizer threshold. Thus, Q(x) models the behavior of a simple comparator. It can be observed that the cascade of the nonlinear function f(x) and the binary quantizer is equivalent to a new binary quantizer having a possibly different threshold. In fact,

Q(f(x)) = Q_T(x), with T = f^{-1}(T0),   (3)

where f^{-1}(·) is the inverse function of f(·). Accordingly, the resulting signal chain is shown in Fig. 1(b). Observe that without the noise source, information about the input sequence would be poor and possibly insufficient to fully identify the signal parameters [3].
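The equivalence between the cascade of f(·) followed by the comparator and a single quantizer with input-referred threshold T = f⁻¹(T0) can be checked with a short sketch. The distortion f(x) = x³ and all numerical values below are hypothetical, chosen only for illustration (inputs exactly at the threshold are avoided, since floating-point rounding can flip them):

```python
def Q(x, threshold):
    """Binary quantizer: logical 1 when the input reaches the threshold."""
    return 1 if x >= threshold else 0

def f(x):
    """A hypothetical invertible, monotonically increasing distortion."""
    return x ** 3

T0 = 0.008                 # threshold seen after the distortion
T = 0.008 ** (1.0 / 3.0)   # equivalent input-referred threshold f^{-1}(T0) = 0.2

# The cascade f followed by Q(., T0) behaves as a single quantizer Q(., T):
inputs = [-1.0, -0.3, 0.0, 0.1, 0.25, 0.9]
same = all(Q(f(v), T0) == Q(v, T) for v in inputs)
```

This is why the analysis can attribute the whole effect of f(·) to the threshold of the equivalent quantizer Q_T(·) in Fig. 1(b).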

B. Modeling the asynchronous sampling of a periodic signal
Data processing by the BQBE requires knowledge about the structure and density of time instants generated by the irrational frequency λ multiplied by increasing time indices in (1). In addition, it requires suitable reordering of the measured binary samples, according to their amplitudes, before processing. To address both issues, this subsection contains a description of the mathematical tools useful to describe and predict the behavior of the sequence of generated samples.
We assume that the sequence xj in (1) is obtained by sampling the periodic continuous-time signal x(·), having period TP with a constant sampling period TS.
Since any real number α can be written as the sum ⌊α⌋ + ⟨α⟩, where ⌊α⌋ is the largest integer less than or equal to α and ⟨α⟩ is its fractional part, we can write:

xj = x(jTS) = x(jλTP) = x((⌊jλ⌋ + ⟨jλ⟩)TP) = x(⟨jλ⟩TP)   (4)

where the last equality follows since, x(·) being a periodic signal, its value does not change when an integer number of periods is added to its argument. The properties of the sampled sequence depend on the properties of the fractional map

uj = ⟨jλ⟩, j = 0, . . . , N − 1,   (5)

where λ = TS/TP is assumed known. Two cases are of interest: when λ is an irrational and a rational number. 1) Irrational λ: in this case, the map (5) produces the orbit 0, ⟨λ⟩, ⟨2λ⟩, . . . , ⟨(N − 1)λ⟩ of unique values that divides the interval [0, 1) into N distinct intervals, whose lengths can take at most three different values. This result is known as the three gap theorem or the three distance theorem [38]-[40]. These intervals can be evaluated by sorting the elements in the sequence (5) and by calculating the distance δn between neighbouring values. By recalling that the Farey series of order N is the sequence of increasing irreducible rationals n/d such that both n and d are not greater than N [41], the three gap theorem states that δn belongs to the 3-element set

{ n2 − λd2, λd1 − n1, (n2 − n1) − λ(d2 − d1) }   (6)

where the rationals n1/d1, n2/d2 are the consecutive elements of the Farey series of order N − 1 such that n1/d1 < λ < n2/d2.
Moreover, each element in the set (6) occurs N − d2, N − d1 and d1 + d2 − N times, respectively. Since λ is irrational, at least two out of three elements in the set (6) always occur.
2) Rational λ: when λ is the ratio D/M in its lowest terms, results presented in subsection III-B1 still hold true provided that N < M. When N ≥ M, values generated by (5) are no longer distinct and the only possible gap is 1/M [42].
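The three gap property is easy to verify numerically. The sketch below (illustrative, not taken from the paper) computes the distinct circular gap lengths generated by the fractional map for a given λ and N:

```python
import math

def circular_gaps(lam, n, tol=1e-9):
    """Distinct gap lengths between the points <j*lam>, j = 0..n-1, on [0, 1)."""
    u = sorted((j * lam) % 1.0 for j in range(n))
    gaps = [u[k + 1] - u[k] for k in range(n - 1)]
    gaps.append(1.0 - u[-1] + u[0])          # wrap-around gap on the circle
    distinct = []
    for g in sorted(gaps):
        if not distinct or g - distinct[-1] > tol:
            distinct.append(g)
    return distinct
```

For an irrational λ such as √2 − 1 the function returns two or three values, never more, regardless of N.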
3) Sorting the sequence: The application of the procedure described in the following requires ordering of the samples according to their amplitudes. Clearly, given the chaotic nature of the map (5), ordering of uj requires a permutation of the indices j. The set of indices pj, j = 0, . . . , N − 1 producing a version u_{p_j} of the sequence uj sorted in increasing order is obtained through the algorithm described in [43]. It was also shown in [43] that the sorted sequence admits a closed-form characterization, referred to as (10) in the following. To clarify the usage of these expressions, an example follows, assuming N = 7 and λ = …
C. The parametric estimator
The estimator working principle is based on the estimation of the probability with which, for a given value of xj, the quantizer outputs a logical 0, that is [19],

pj = P(yj = 0) = Φ((T − xj)/σ)   (11)

where Φ(·) is the cumulative distribution function (CDF) of a standard normal random variable. The estimation of pj in (11) requires the percentage count of the binary quantizer output, based on a possibly large number of samples, obtained under the same given constant value xj. However, incoherent sampling of a periodic signal results in time-samples that neither remain constant over a varying time index nor repeat themselves, as stated in subsection III-B1. To overcome this limitation, input samples are grouped by selecting close enough time-indices so that the corresponding signal amplitudes provided by (1) may be considered sufficiently close. Clearly, closeness among time samples does not imply proximity among signal amplitudes: a needed requirement is that the signal derivative has a locally small magnitude.
1) Partitioning the input samples: In practice, measured data can be partitioned in H sets such that the values taken by xj in the same set are close together. This can be done, for instance, by sorting, at first, the values ⟨λj⟩ in increasing order and by partitioning the corresponding indices into the subsets I_h defined as

I_h = { j : hϵ ≤ ⟨λj⟩ < (h + 1)ϵ }, h = 0, . . . , H − 1,

where ϵ is a small interval length and H = ⌊1/ϵ⌋. Because of (10), indices in I_h select amplitudes in the sequence xj that are close enough to allow estimation of pj. The sorting operation can be done as described in subsection III-B3.
2) Estimating the probability of the quantizer outcomes: The probability pj refers to the input signal being approximately equal to xj. As described in subsection III-C1, indices in I_h select amplitudes in the sequence xj that are close enough to allow estimation of pj. Consequently, the percentage count over the indices in I_h can be written as

p̂_h = (1/|I_h|) Σ_{i∈I_h} [y_i = 0]   (14)

where |I_h| is the cardinality of I_h and [E] is the indicator function of the event E. Since each term in the summation in (14) is a Bernoulli random variable with success probability depending on i,

E(p̂_h) = (1/|I_h|) Σ_{i∈I_h} Φ((T − x_i)/σ) ≈ Φ((T − x̄_h)/σ)   (15)

where E(·) is the expectation operator, x̄_h = (1/|I_h|) Σ_{i∈I_h} x_i, and where the rightmost approximation is proved in Appendix A, where the Taylor series expansion of the error term is also described. Thus, p̂_h can be considered an estimator of the rightmost term in (15), from which the parameters embedded in x̄_h can be linearly related to σΦ^{-1}(p̂_h) that, in turn, is obtained using measurement results.
3) Building and solving the model: Using matrix notation, the observation model (12) can be written in the form Y = AΘ, where the h-th entry of Y is Y_h = σΦ^{-1}(p̂_h), the h-th row of A collects the constant 1 and the averages of −sin(2πm⟨λi⟩) and −cos(2πm⟨λi⟩) over i ∈ I_h, for m = 1, . . . , P, and Θ = [T − θ1, θ2, . . . , θ_{2P+1}]^T. Thus, Y is a H × 1 vector, A is a H × (2P + 1) matrix and Θ a (2P + 1) × 1 vector. For each h = 0, 1, . . . , H − 1, the variance of p̂_h can be calculated as [44]

var(p̂_h) = p_h(1 − p_h)/|I_h|   (18)

The variance of each component of Y can be approximated using a Taylor series expansion of Φ^{-1}(p̂_h) about the mean value of p̂_h. In [19] it is shown that this calculation provides

σ²_{Y_h} ≈ σ² p_h(1 − p_h) / (|I_h| φ²(Φ^{-1}(p_h)))   (19)

where φ(·) is the standard normal probability density function. Finally, an estimate Θ̂ of Θ is obtained by using the weighted least squares estimator [45]:

Θ̂ = (A^T W A)^{-1} A^T W Y   (20)

where W is a H × H diagonal matrix that contains the reciprocal of the variance of each estimator, 1/σ²_{Y_h}, on the principal diagonal [45], estimated according to (19). An estimate of the estimator covariance matrix is given by [45]:

Ĉ_Θ̂ = (A^T W A)^{-1}   (21)
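A compact, self-contained sketch of the whole procedure (partitioning by fractional phase, percentage counts, Φ⁻¹ preprocessing, and weighted least squares) follows. It is not the paper's implementation: binning uses the bin-centre phase rather than exact per-bin averages, and all numerical values are illustrative assumptions.

```python
import math, random

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p):
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve(M, b):
    """Solve M t = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    aug = [list(M[i]) + [b[i]] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(n):
            if r != c:
                fct = aug[r][c] / aug[c][c]
                aug[r] = [a - fct * p for a, p in zip(aug[r], aug[c])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def bqbe(y, lam, P, sigma, T=0.0, eps=0.02):
    """Sketch of the quantile-based estimator: returns [T - theta1, theta2, ...]."""
    N, H = len(y), int(round(1.0 / eps))
    bins = [[] for _ in range(H)]
    for j in range(N):                         # partition by fractional phase
        u = (lam * j) % 1.0
        bins[min(int(u / eps), H - 1)].append(j)
    rows, Y, w = [], [], []
    for h in range(H):
        I = bins[h]
        if not I:
            continue
        p = sum(1 for j in I if y[j] == 0) / len(I)
        if p <= 0.0 or p >= 1.0:               # uninformative bin, drop the row
            continue
        u = (h + 0.5) * eps                    # bin-centre phase approximation
        row = [1.0]
        for m in range(1, P + 1):
            row.append(-math.sin(2.0 * math.pi * m * u))
            row.append(-math.cos(2.0 * math.pi * m * u))
        z = phi_inv(p)
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        rows.append(row)
        Y.append(sigma * z)
        w.append(len(I) * pdf * pdf / (p * (1.0 - p)))  # reciprocal variance (up to sigma^2)
    n = 2 * P + 1
    AtWA = [[sum(wk * rk[i] * rk[j] for wk, rk in zip(w, rows)) for j in range(n)] for i in range(n)]
    AtWY = [sum(wk * rk[i] * yk for wk, rk, yk in zip(w, rows, Y)) for i in range(n)]
    return solve(AtWA, AtWY)

# illustrative single-tone acquisition (P = 1, T = 0): expect [-theta1, theta2, theta3]
rng = random.Random(0)
lam, sigma = 1.0 / math.sqrt(2.0), 0.5
th1, th2, th3 = 0.05, 0.3, -0.2
def x(j):
    u = (lam * j) % 1.0
    return th1 + th2 * math.sin(2 * math.pi * u) + th3 * math.cos(2 * math.pi * u)
y = [1 if x(j) + rng.gauss(0.0, sigma) >= 0.0 else 0 for j in range(100_000)]
est = bqbe(y, lam, 1, sigma)
```

Note that only matrix assembly and one small linear solve are required, which is the source of the speed advantage over iterative likelihood maximization.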

D. Some remarks
The following remarks can be made about the devised estimator: 1) A requirement is that H ≥ (2P + 1), that is, the number of unknown parameters; 2) The observation model (12) shows that (20) estimates T − θ1.
Thus, either θ1 or T must be known if the other parameter needs to be estimated; 3) When σ is unknown, both left-hand and right-hand terms in (12) can be divided by σ; in this case, a new estimation problem can be set up if T is known and if the reciprocal of σ is defined as a new parameter to be estimated [19]; 4) The observation model (12) assumes that 0 < p̂_h < 1, ∀h. In practice, when σ is small compared to the input signal span, p̂_h may be 0 or 1 for several values of h. The estimator still provides meaningful results provided H ≥ (2P + 1). Clearly, the dimensions of the left-hand vector and of the matrix in (12) scale accordingly.
IV. ESTIMATOR PROPERTIES

In this section, at first, the estimator bias is analyzed. Then, an analysis is done to show how the BQBE can be seen as the Discrete Fourier Transform (DFT) of preprocessed data.

A. BQBE Bias
Because of the approximation in (15), the BQBE is a biased estimator even when N is large. Using (15) and the Taylor approximation of the error term described in Appendix A, the bias vector B(ϵ, σ) can be derived; its expression involves a quantity P(ϵ, σ), where H is the number of rows in A. It will be shown in section V that an estimate of the bias vector is needed to find a value of ϵ that optimizes the estimator performance. To this aim, an estimate x̂_i of the i-th sample in the input signal is first obtained by using the estimated parameters in (9). Then, an estimate P̂(ϵ, σ) of P(ϵ, σ) is provided by (25), by substituting each occurrence of x_i with x̂_i. To appreciate the behavior of B(ϵ, σ) and of its estimator with respect to ϵ, consider the graphs of ∥B(ϵ, σ)∥ and of ⟨∥B̂(ϵ, σ)∥⟩, where ⟨·⟩ is the mean value operator and ∥·∥ is the Euclidean norm. Both norms, obtained by Monte Carlo simulations, are graphed in Fig. 3 as a function of ϵ, for P = 1, 7 and N = 5·10³. To guarantee that H is always sufficiently large to allow estimation of all 2P + 1 parameters, ϵ must be upper bounded by ϵ_B(P) = 1/(2P + 1). This is the value that normalizes the x-axis in Fig. 3.
Observe that for given values of ϵ and N, the bias norm increases with P. Conversely, for given values of ϵ and P, increasing N does not affect B(ϵ, σ) significantly. Observe also that the norm of the estimated vector provides a reasonable approximation of ∥B(ϵ, σ)∥ for most values of ϵ.

B. An interpretation of BQBE as a DFT processor
To better appreciate how the BQBE processes data, in this section the properties of the unweighted estimator will be analyzed by assuming N → ∞. This estimator has the following expression:

Θ̂ = (A^T A)^{-1} A^T Y   (26)

With respect to (26), (20) weights the results by also including the effects of the noise covariance. The simple interpretation of the estimation properties of (26) will, however, provide some insights also into the properties of (20). Because of Kolmogorov's strong law of large numbers, when N → ∞, p̂_h in (14) converges almost surely to E(p̂_h) in (15). Because of Weyl's equidistribution theorem, the values λj in (10) become uniformly distributed within each interval [hϵ, (h + 1)ϵ), so that the averages over I_h converge to the corresponding integral averages [46]. In particular, when m = 1, . . . , P and h = 0, . . . , H − 1, the entries in A become:

−(1/(2πmϵ)) [cos(2πmhϵ) − cos(2πm(h + 1)ϵ)] = −sinc(mϵ) sin(πm(2h + 1)ϵ)

and

−(1/(2πmϵ)) [sin(2πm(h + 1)ϵ) − sin(2πmhϵ)] = −sinc(mϵ) cos(πm(2h + 1)ϵ)

where sinc(x) = lim_{u→x} sin(πu)/(πu).
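The closed-form expression of the asymptotic entries of A follows from elementary trigonometric identities and can be checked numerically. The sketch below verifies, up to the common sign, that the average of cos(2πmu) over a bin [hϵ, (h+1)ϵ) equals sinc(mϵ) cos(πm(2h+1)ϵ):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def bin_average_of_cos(m, h, eps):
    """(1/eps) * integral of cos(2*pi*m*u) over [h*eps, (h+1)*eps]."""
    return (math.sin(2 * math.pi * m * (h + 1) * eps)
            - math.sin(2 * math.pi * m * h * eps)) / (2 * math.pi * m * eps)

def closed_form(m, h, eps):
    return sinc(m * eps) * math.cos(math.pi * m * (2 * h + 1) * eps)
```

The identity relies on sin(B) − sin(A) = 2 cos((A + B)/2) sin((B − A)/2), with A = 2πmhϵ and B = 2πm(h + 1)ϵ.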
The evaluation of the pseudoinverse of A in (26) then requires summations over h of products of the form

sinc(mϵ) sinc(nϵ) sin(πm(2h + 1)ϵ) sin(πn(2h + 1)ϵ)   (30)

where m, n are two integers in 1, . . . , P. In Appendix B, it is shown that, when ϵ → 0+, all summations approximately vanish when m ̸= n, as expected on the basis of the orthogonality of the sine and cosine functions, while the n = m terms remain. Under these approximations, that is, N → ∞ and ϵ → 0+, the pseudoinverse takes the form of (33). Consequently, when N → ∞, (26) becomes approximately equal to (33). Observe that if ϵ = 1/H, apart from a phase and a scaling factor, (33) is a DFT matrix. In fact, by defining X_m as the m-th element of the DFT of the sequence x̄_h, it can be recognized that, when m = 1, . . . , P, X_m is linearly related to Y_m, where Y_m is the m-th element of the DFT of Y in (26). Thus, apart from an amplitude and a phase factor, the asymptotic behavior of (26) is based on the DFT of the statistics σΦ^{-1}(p̂_{h,∞}), that is, on data preprocessed by the CDF inverse operation decoding the quantizer input signal. Since the correcting factor removes the product by a sinc(·) function in the frequency domain, we can conclude that the equivalent amplitude-domain effect is the convolution with a boxcar function. This is coherent with the impact of 1-bit quantization on input signals, which is often modeled as the summation of independent noise having uniform probability density function. A similar behavior was highlighted in [47] when modeling the effect of jitter in waveform digitizers.

V. SELECTING THE ESTIMATOR'S PARAMETERS
Being based on matrix inversion, the practical implementation of the BQBE is rather straightforward. It requires, however, the choice of ϵ and, if allowed by the adopted system setup, the selection of a suitable value of σ. In fact, depending on the application requirements, the noise magnitude may not be user-selectable, e.g., when the noise is entirely due to self-dithering sources inside the used circuits [48]. When this is the case, σ can only be estimated beforehand. Alternatively, the BQBE can be reparametrized to account for the joint estimation of noise and signal parameters.

A. Selecting ϵ
The user always needs to choose ϵ. In this section, it is shown how to select it using two methods. Consider that the optimal choice is a trade-off. In fact, large values of ϵ allow for the inclusion of a large number of signal samples in each interval I_h and thus reduce the estimation variance, but contribute to a larger estimation bias and a reduced number H = ⌊1/ϵ⌋ of rows in the observation matrix. On the contrary, when ϵ is small, the bias is reduced at the expense of an increase in the estimation variance, possibly counterbalanced by the increase of H.
1) Selection based on minimum mean square error: To select the optimal value for ϵ, an expression for the mean square estimation error (MSE) is first derived. By recalling that in the scalar parameter case the MSE is equivalent to the summation of the estimator variance and squared bias, an equivalent MSE vector can be defined in the vector parameter case. Accordingly, an estimate of the BQBE bias can be obtained through (25). Moreover, by defining V̂(ϵ, σ) as the vector that contains the diagonal elements in Ĉ_Θ̂, the estimated mean square error vector associated with the vector estimator Θ̂ is given by

M̂(ϵ, σ) = V̂(ϵ, σ) + B̂(ϵ, σ) ◦ B̂(ϵ, σ)

where '◦' is the Hadamard product operator that provides the entrywise multiplication of the elements in B̂(ϵ, σ). Thus, the optimal value ϵ_opt of ϵ can be found by solving the following constrained minimization problem, where σ is given:

minimize over ϵ: M̂(ϵ, σ), subject to 0 < ϵ < ϵ_B(P) = 1/(2P + 1).   (36)
The solution to (36) can be found by a constrained numerical minimization procedure. To appreciate the characteristics of the minimization problem, consider the behavior of M̂(ϵ, σ) as a function of ϵ, shown in Fig. 5 for several values of P and N, obtained through Monte Carlo simulations. It can be recognized that, as the number of samples increases, this function exhibits a smooth behavior and a single minimum, so that simple numerical processing suffices to locate the minimum value.

B. Selecting σ
While σ may not always be user-selectable, its value affects the BQBE performance. The Fisher information matrix associated with the estimation problem described in this paper was derived in [30] as a function of the Jacobian J_n of x_n and of ∆_n = (T − x_n)/σ. Notice that the magnitude of ∆_n must not exceed about 3 to 4 for the corresponding sample x_n to add significant information about the searched parameter. When this does not occur, the probability of x_n + η crossing the threshold will vanish and so will the information associated with x_n. In order to compare the behavior of the BQBE and MLE with respect to σ, Monte Carlo simulations were carried out under the assumption of P = 1, N = 10⁴, T = 0, θ1 = 0.03, θ2 = 0.07 and θ3 = 0.05. The normalized Euclidean norm of the estimator variance was calculated using R = 10³ records and plotted in Fig. 4 as a function of σ, in the case of both the BQBE and the MLE. The behavior of the estimator variances largely coincides for the two estimation procedures. Fig. 4 also shows the graph of the normalized Euclidean norm of the Cramér-Rao lower bound (CRLB), obtained using (38). The lack of superposition among the three curves for small values of σ is due to the contribution of the estimator bias.
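The rule of thumb on ∆n can be made concrete: for a single one-bit observation with Gaussian noise, the Fisher information about the sample value is proportional to φ(∆)²/(Φ(∆)(1 − Φ(∆))), a standard result restated here under the paper's assumptions. The sketch below shows how quickly this factor decays with the normalized threshold distance:

```python
import math

def phi_pdf(d):
    return math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)

def phi_cdf(d):
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

def info_factor(d):
    """Per-sample Fisher information factor phi(d)^2 / (Phi(d)(1 - Phi(d)))
    for a one-bit observation with normalized threshold distance d."""
    c = phi_cdf(d)
    return phi_pdf(d) ** 2 / (c * (1.0 - c))
```

At d = 0 the factor equals 2/π ≈ 0.64, while at |d| = 4 it has dropped by roughly three orders of magnitude, which is why samples with |∆n| beyond 3 to 4 contribute almost no information.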
Observe that, when σ is small and ϵ is given, the number of rows in A will be much lower than H = ⌊1/ϵ⌋. This will result in an overall increase of the estimator bias. On the contrary, by allowing the noise to extend over the whole input signal span, the triggering probability will never become negligible for any of the considered input signal samples. In fact, when σ increases, so will the number of rows in A, which will reach, for a large enough value of σ, the maximum value H. This occurs when the magnitude of ∆_n does not exceed about 3 to 4 for every n. Beyond this value, the number of rows in A will not grow, so that any increase in σ will increase the variance of the estimator. Fig. 6(a) shows the Euclidean norm of the bias vector as a function of σ, when P = 1, 3, 4, 7, 15, 20, assuming N = 10⁵ and ϵ = 0.01. To be noticed in this figure is the overall decrease in the bias when σ increases. Observe also that increasing N does not significantly modify the behavior of the graphs in Fig. 6(a). However, when N increases, the optimal value of ϵ will decrease, resulting in an overall decrease in the estimator bias.
The optimal value of σ can be determined by looking at the behavior of the Euclidean norm of the root-mean-square error, graphed in Fig. 6(b), obtained using the same simulation parameters. As this figure shows, normalization of σ by the input signal span will result in a minimum MSE that is almost insensitive to the number P of harmonics. Thus, as a practical rule, 6 to 7 times σ should be comparable to the signal voltage span, so as to exploit the information associated with every signal sample.

VI. VALIDATING THE NOISE DISTRIBUTION ASSUMPTION

This section shows how to validate the assumption about the noise Gaussianity. The noise CDF can be estimated using a point estimator based on (11). In fact, a very rough estimate of pj is yj, that is, the binary-valued quantizer output. Thus, a 1-bit estimate F̂_1η(T − x̂j) of the noise CDF results, where x̂j is obtained by (9), in which all occurrences of the unknown parameters are substituted by the corresponding estimates, obtained through (20). By sorting F̂_1η(·) according to increasing values of its argument T − x̂j, a 1-bit coded version F̂_1M(xj) of the noise CDF results. In order to obtain a higher resolution estimate of the noise CDF, F̂_1M(xj) needs to be interpolated. This can be done by:
• a parametric approach based on the mathematical expression of the Gaussian CDF;
• a non-parametric approach, based on zero-phase numerical filtering [49].
After interpolation, a higher resolution estimate F̂_η(xj) of F_η(xj) results. Observe that estimates are obtained at values of their argument that are not uniformly spaced. Thus, F̂_1η(·) is the 1-bit unsorted CDF estimator, F̂_1M(·) its 1-bit sorted version, and F̂_η(·) the higher resolution estimator obtained after interpolation.
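A minimal non-parametric version of this CDF estimation can be sketched as follows (illustrative signal, window length, and seed, not the paper's setup): binary outputs are sorted by the abscissa T − x_j, and the raw 0/1 ordinates are smoothed by a moving average, which plays the role of the interpolation step.

```python
import math, random

def one_bit_cdf(xs, ys, T=0.0, win=501):
    """Estimate the noise CDF from binary outputs, using E(1 - y_j) = F_eta(T - x_j).

    Returns the sorted abscissas T - x_j and a moving-average smoothing of
    the corresponding 0/1 ordinates (a simple non-parametric interpolation)."""
    pts = sorted(zip((T - x for x in xs), (1 - y for y in ys)))
    a = [p[0] for p in pts]
    raw = [p[1] for p in pts]
    csum = [0]
    for v in raw:
        csum.append(csum[-1] + v)          # prefix sums for fast averaging
    half = win // 2
    sm = []
    for i in range(len(raw)):
        lo, hi = max(0, i - half), min(len(raw), i + half + 1)
        sm.append((csum[hi] - csum[lo]) / (hi - lo))
    return a, sm

# illustrative data: known signal samples, Gaussian noise, comparator at T = 0
rng = random.Random(2)
sigma = 0.4
xs = [rng.uniform(-1.0, 1.0) for _ in range(20_000)]
ys = [1 if x + rng.gauss(0.0, sigma) >= 0.0 else 0 for x in xs]
a, F = one_bit_cdf(xs, ys, win=1001)
```

The smoothed curve can then be compared against the Gaussian CDF Φ(t/σ̂) to validate the noise assumption.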

VII. SIMULATIONS
Monte Carlo simulations were done to validate the BQBE and to compare its properties with those of the MLE, under several different parameter values. The solution in the MLE case is assured by the log-likelihood function being a convex function under the made assumptions [30]. Assuming m = 1, . . . , P, the DC component θ1 and the amplitudes of the P harmonics were chosen according to (40), where U1 and U2m were outcomes of uniformly distributed random variables, respectively in the intervals (−0.025, 0.025) and (0, 0.05), while U3m was an outcome of a uniformly distributed random variable in the interval [0, 2π). Simulations were performed using R = 30 records of data. Each time, amplitudes and phases were generated according to (40). All simulations assumed a normalized frequency λ = π/(1250√2) and T = 0. Results, displayed in Tab. I, were obtained by processing records of N samples, where N = 5·10³, 1·10⁴, 5·10⁴, 7·10⁴, with P = 10, 20, 50, 80 and σ = 0.7, 1.0, 2.0, 2.4. The table reports the mean value of the normalized Euclidean norm of the error vectors, when using both the BQBE with ϵ chosen as described in subsection V-A2 and the MLE, and the mean processing times, as measured on a computer using a 2.5 GHz, two-core i7 processor. Processing of phase errors included the wrapping operations to be applied when calculating differences in phase angles, as suggested in [50]. Solutions in the MLE case were obtained using standard numerical procedures for unconstrained optimization. The function fminunc (MATLAB R2017b) was used to perform a quasi-Newton numerical maximization. Each time, the MLE was initialized with a DC value equal to 0 and with θ2m and θ2m+1 in (40) as uniform random variables in the interval (−0.5, 0.5).
Derivatives of the likelihood function were supplied to the numerical algorithm and vectorized operators were used, whenever possible, to maximize its time performance. The same data used to obtain the results shown in Tab. I were used to calculate the data shown in Tab. II, where the performance of the MLE, when initialized by the BQBE, is shown.

A. Results Discussion
Data show that both estimators are characterized by similar accuracy performance. The BQBE outperforms the MLE in terms of mean processing time, especially when the problem complexity increases: it is about 30 times faster than the MLE in the most complex case (P = 80) reported in Tab. I. Observe also that the normalized error norm related to the entire parameter vector S is always the lowest when using the MLE.
While careful optimization of the numerical maximization procedure might result in improvements in the MLE processing times, the iterative nature of this approach makes it less appealing in the case of problems with a large number of parameters, e.g., P = 80. The two rightmost columns in Tab. I report the mean number of function evaluations and algorithm iterations. Moreover, if the MLE is used regardless of its time performance, the BQBE can still provide accurate initial estimates, very close to the optimal ones. In this case, the MLE can be used as a post-processing step starting from the BQBE estimates. In fact, by comparing corresponding data shown in Tab. I and Tab. II, it can be appreciated that initialization by the BQBE still reduces the processing time by a factor of about 4 to 5 in large scale problems.

VIII. EXPERIMENTS
Using off-the-shelf components and measurement instruments, experiments were performed to further validate the BQBE, as described in the following.

A. The experimental setup

The scheme of the realized measurement setup is shown in Fig. 7. The signal measured by a commercial 1 GSa/s, 8-bit digital storage oscilloscope (DSO) was generated by mixing a two-tone signal, produced by a Stanford Research Systems DS360 signal generator, with the output of a commercial Gaussian noise source. The tone frequencies were set to 1839.07 Hz and 3678.14 Hz, respectively.
The nominal values of their maximum amplitudes were set in both cases to 0.4 V. The DSO recorded an 8-bit sequence of N = 10^6 samples at a rate of 100 MSa/s, in AC mode, to filter the contribution of DC values in the input signal. The recorded sequence was then re-quantized to provide the 1-bit stream of data processed by the BQBE. As a consequence, (1) results in P = 2, θ1 = 0 and λ = D/M = 183907/10^8, where the fraction D/M is already in irreducible form. Thus, the condition described in subsection III-B2 applies, with M = 10^8 > N = 10^6, and distinct time indices are generated by (5) when j = 0, . . . , N − 1. A noise-only record of data was first collected by switching off the two-tone generator, to allow estimation of the noise standard deviation as σ̂ = 0.2359… V. The results presented in the next subsection were obtained by selecting ϵ through the solution of (36).
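The re-quantization step and the distinct-index condition recalled above can be checked with a short script; the DSO record is replaced here by random stand-in data, and the index map j·D mod M is our assumption about how (5) generates the time indices:

```python
from fractions import Fraction

import numpy as np

# Normalized frequency lambda = D/M for a 1839.07 Hz tone sampled at 100 MSa/s.
lam = Fraction(183907, 10**8)           # Fraction reduces to lowest terms
D, M = lam.numerator, lam.denominator   # here already irreducible
N = 10**6
assert M > N                            # condition of subsection III-B2

rng = np.random.default_rng(0)
eight_bit = rng.integers(-128, 128, size=N)   # stand-in for the DSO record
one_bit = np.where(eight_bit >= 0, 1, -1)     # re-quantize to a 1-bit stream

# With gcd(D, M) = 1 and M > N, the indices j*D mod M are all distinct.
idx = (D * np.arange(N)) % M
assert len(np.unique(idx)) == N
```

The `Fraction` reduction makes the irreducibility of D/M explicit, so the distinct-index check is guaranteed to pass for any N < M.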

B. Experimental results
The obtained experimental results are shown in Fig. 8. The 8-bit data sequence collected from the DSO was first re-quantized to provide the 1-bit sequence processed by the BQBE. Automatic selection of ϵ, provided by the method described in subsection V-A1, resulted in ϵ = 0.0031. The 4 estimated signal components were used to reconstruct the input signal, shown in white in subfigures (a) and (b). In subfigure (a), the 8-bit noisy version of the recorded signal is also shown, while in subfigure (b) the same 8-bit signal was obtained by first turning off the noise generator. This sequence was collected for validation purposes: the graph in subfigure (c) shows the difference between the estimated signal, based on the 1-bit data not shown in Fig. 8(a), and the measured sequence shown in (b). Finally, the estimated noise CDF is graphed in subfigure (d), along with the superimposed interpolated Gaussian CDF. The error sequence in (c) highlights the presence of some residual bias where the estimated input signal is near the local maxima crossing the zero line. This might be explained both by limitations of the BQBE and by circuit changes occurring when the noise generator was switched off to collect the data shown in subfigure (b).
In fact, when the range of the input signal is much larger than that shown in Fig. 8, the noise amplitude span might not be sufficiently large to toggle the quantizer output. Accordingly, these samples contribute a reduced amount of Fisher information. As an example, consider the same signal as in Fig. 8, but with larger amplitudes of both harmonics. In this case, the measured noise standard deviation is σ̂ = 0.2169… V and the BQBE provides the error sequence shown in Fig. 9, with a selected value of ϵ = 0.0111. The estimation error is shown in subfigure (b); it highlights a larger bias in the estimation of the signal extreme values. This is not surprising, as the Cramér-Rao lower bound in the estimation of noisy DC amplitudes using binary information increases significantly with the distance between the DC value and the quantizer threshold, for a given amount of noise [19]. Thus, if the distance between the maximum signal amplitude in (9) and the quantizer threshold increases, we may expect worse results for a given amount of noise, as the probability that the noise toggles the binary quantizer decreases accordingly. Improving the performance of the BQBE, or of any other estimator in this case, would require a larger number of processed samples or a larger noise span. Finally, Fig. 8(d) shows the noise CDF estimated as described in section VI, both using the zero-phase filter based on 1000 taps (discontinuous line) and through the Gaussian parametric interpolation (smoothed line). All results confirm the validity of the proposed approach.
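The reconstruction of the input signal from estimated components, as used for subfigures (a)-(c), can be sketched as follows; the amplitude and phase values below are illustrative stand-ins, not the estimates obtained in the experiment:

```python
import numpy as np

def reconstruct(theta1, amps, phases, lam, N):
    """Rebuild the multi-harmonic signal from estimated DC, amplitude,
    and phase parameters, harmonic m at normalized frequency m*lam."""
    i = np.arange(N)
    x = np.full(N, theta1, dtype=float)
    for m, (a, ph) in enumerate(zip(amps, phases), start=1):
        x += a * np.sin(2 * np.pi * m * lam * i + ph)
    return x

# Illustrative two-tone case (P = 2): fundamental at 1839.07 Hz sampled
# at 100 MSa/s, with slightly perturbed "estimated" amplitudes.
lam, N = 1839.07 / 100e6, 1000
est = reconstruct(0.0, [0.39, 0.41], [0.1, -0.2], lam, N)
ref = reconstruct(0.0, [0.40, 0.40], [0.1, -0.2], lam, N)
err = est - ref     # analogous to the error sequence of subfigure (c)
```

A bias concentrated near the signal extremes, as discussed above, would appear in `err` as a periodic residue locked to the peaks of `ref`.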

IX. CONCLUSION
We proved in this paper that it is possible to estimate the parameters of an asynchronously and incoherently sampled periodic signal with known frequency, based on its binary quantized version. This is possible because of the information-encoding nature of the additive noise affecting the quantizer input. Being based on matrix inversion, the described algorithm is not computationally intensive and is easy to code on simple microprocessor-based platforms. The presented simulation and experimental results were used to validate the properties of the BQBE. Given the ease of both generating wide-band Gaussian noise and performing binary quantization using a comparator, the BQBE can easily be applied even at high frequencies, where accurately measuring the parameters of periodic signals becomes increasingly difficult.

X. ACKNOWLEDGEMENT
The authors acknowledge the contribution of the anonymous reviewers, who provided constructive suggestions for significantly improving the paper content. This research activity was partially funded through grant PRIN 2015C37B25 by the Italian Ministry of Education, University and Research (MIUR) and through the grants Identificazione e caratterizzazione accurata di sistemi e segnali, Ricerca di Base 2017 and 2018, University of Perugia.

APPENDIX A. APPROXIMATING THE PROBABILITY (14)
At first, let us highlight the dependence of x_h on the parameters in Θ:
x_i = θ_1 + Σ_{m=1}^{P} [ θ_{2m} sin(2πmλi) + θ_{2m+1} cos(2πmλi) ],  (A.1)

from which (16) results. Moreover, a first-order Taylor series expansion of Φ((T − x_i)/σ) about x_h provides

Φ((T − x_i)/σ) ≈ Φ((T − x_h)/σ) − (ϵ_i/σ) f_φ((T − x_h)/σ),

where f_φ(·) is the probability density function of a standard normal random variable and ϵ_i = x_i − x_h. By observing that the ϵ_i average to zero about x_h, it follows that the error contribution of the first-order terms is zero and that the approximation holds up to the averaged summation of ϵ_i². This approximation can be further refined by recalling that the n-th derivative of f_φ(·), f_φ^(n)(·), is given by [51]

f_φ^(n)(x) = (−1)^n He(n, x) f_φ(x),

where He(n, x) is the n-th Hermite polynomial. Thus, the development of the error e using a Taylor series expansion follows. For given values of ω and ϕ, [45], [52], [53] provide the needed closed-form summations. By recalling that x = ⟨x⟩ + ⌊x⌋, where ⟨·⟩ is the fractional part operator, it follows that, for M a positive integer and 0 < ϵ < 1, g(ϵ) is a piecewise linear function such that 0.5 < g(ϵ) < 1, with g(ϵ) = 1 when ϵ = 1/M. Moreover, because of (B.12),

sin(πϵ⌊1/ϵ⌋(m ± n)) / (2 sin(πϵ(m ± n))) = − cos(π(m ± n)) sin(πϵ⟨1/ϵ⟩(m ± n)) / (2 sin(πϵ(m ± n))),
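This last identity follows from 1/ϵ = ⌊1/ϵ⌋ + ⟨1/ϵ⟩ together with sin(π(m ± n)) = 0 for integer m ± n, and can be verified numerically once the common denominator 2 sin(πϵ(m ± n)) is cancelled:

```python
import math

def frac(x):
    """Fractional part operator <x>, so that x = <x> + floor(x)."""
    return x - math.floor(x)

# Check sin(pi*eps*floor(1/eps)*k) == -cos(pi*k) * sin(pi*eps*frac(1/eps)*k)
# for integer k = m +- n and several values of eps in (0, 1).
for eps in (0.013, 0.27, 0.7):
    for k in range(1, 8):
        lhs = math.sin(math.pi * eps * math.floor(1 / eps) * k)
        rhs = -math.cos(math.pi * k) * math.sin(math.pi * eps * frac(1 / eps) * k)
        assert abs(lhs - rhs) < 1e-9, (eps, k)
```

The check expands sin(πk − πϵ⟨1/ϵ⟩k) and uses sin(πk) = 0, exactly as in the derivation above.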