Originally formulated to assess the stability of oscillators in atomic clocks, Allan variance provides a robust measure of frequency stability over varying timescales, a dependence that statistics such as standard deviation fail to capture. In this white paper, we review the mathematical foundation of Allan variance and share how Allan variance and its related quantities serve as helpful tools for the precise analysis of time series data in practical applications such as oscillator characterization.

Allan variance analysis can be performed using the Moku, an FPGA-based device that houses a reconfigurable suite of test and measurement instruments. Leveraging the unique Phasemeter instrument, one can record ultra-precise phase, frequency, and amplitude data of an incoming periodic signal, as well as calculate and plot statistics such as Allan variance in real time.

**A brief history of Allan variance**

How stable is your system? There are many tools available to answer this question. In the 1960s, while working on atomic clocks at the National Institute of Standards and Technology (then known as the National Bureau of Standards), David W. Allan invented a new one [1].

Allan found that existing statistical measures such as standard deviation diverged for particular noise sources as the number of samples increased. This motivated him to develop a new time-domain metric, which today bears his name.

As Allan was then working on atomic clocks, Allan variance was developed in the field of atomic frequency standards. As a result, discussion of the topic is often couched in the vernacular of that field, especially with regard to characterizing the frequency stability of an oscillator.

However, Allan variance can be computed for any time series. This series could represent the signal itself, such as the output of a temperature sensor, or any of its properties (frequency, phase, amplitude etc.), evaluated at a constant rate. As a result, Allan variance has found favor in diverse applications from communications [2] to navigation [3].

The Moku Phasemeter instrument offers Allan variance as one of the available post-processing options, shown on the Moku:Pro in Figure 1 below. This note provides a primer on the statistic aimed at users encountering it for the first time.

Figure 1: To display Allan deviation (the square root of Allan variance) in the Phasemeter instrument, first display the data visualization panel. Then select “Allan deviation” from the drop-down menu.

**The mathematics of Allan variance**

The basic principle of Allan variance is to divide a time series into sections of equal duration and consider how the time average of each section differs from that of the previous section. If these differences, taken over the dataset as a whole, are small, then the system is stable on this timescale.

Figure 2: The first step in the computation of $\sigma_y^2(\tau)$ is to segment the data into stretches of length *τ* and compute the time average, $\bar{y}_i$, of $y$ over each stretch. We then subtract consecutive averages and compute the mean square of these differences. Dividing the result by 2 yields the Allan variance for observation time *τ*.

More concretely, suppose we have a continuous time series $y(t)$. Its Allan variance, $\sigma_y^2(\tau)$, is defined as

$$\sigma_y^2(\tau) = \frac{1}{2}\left\langle \left(\bar{y}_{i+1} - \bar{y}_i\right)^2 \right\rangle \qquad (1)$$

where $\langle \cdot \rangle$ denotes the expectation value (average) and $\bar{y}_i$ is the *i*th sample of the average of $y$ over observation time *τ* (Figure 2). Allan deviation is then simply the square root of Allan variance, $\sigma_y(\tau)$. One can evaluate this expression for a series of observation times *τ* to gain insight into how self-similar (i.e., stable) the data is over different timescales.

As can be inferred from equation (1), the dimension of $\sigma_y(\tau)$ matches that of $y$. We interpret the value to be the expected root mean square difference between two *τ*-second-long measurements of $y$ taken *τ* seconds apart.

For example, consider a clock oscillating at frequency $f_0$. If the Allan deviation of its fractional frequency difference, $Y$, defined as

$$Y = \frac{f - f_0}{f_0}, \qquad (2)$$

is $1.23\times10^{-10}$ for a 10-second observation (*τ* = 10 s), then we would expect that two randomly chosen, consecutive, 10-second measurements of $Y$ would differ by $1.23\times10^{-10}$ RMS. Given the definition of fractional frequency difference, this is equivalent to an expected absolute frequency difference of $1.23\times10^{-10}\,f_0$ RMS.

Consider now the case of a real, finite dataset of length *M*, sampled with period $T_s$ (Figure 3). In a sampled system we cannot choose the observation time *τ* freely, so we divide the set into *K* segments of length $\tau = n T_s$ for some integer $n$. The Allan variance may be approximated as

$$\sigma_y^2(\tau) \approx \frac{1}{2(K-1)} \sum_{i=1}^{K-1} \left(\bar{y}_{i+1} - \bar{y}_i\right)^2 \qquad (3)$$

Very roughly, the relative uncertainty in $\sigma_y(\tau)$ is $1/\sqrt{K}$. A full treatment of measurement uncertainty is beyond the scope of this work; see Ref. [4] for more detail. To improve confidence in the results, and to make more efficient use of valuable data, one can partition the data into overlapping segments (Figure 4). This yields $M - 2n + 1$ pairs of consecutive segments, as opposed to $K - 1$ previously. The overlapping Allan variance is given by

$$\sigma_y^2(\tau) \approx \frac{1}{2(M - 2n + 1)} \sum_{i=1}^{M-2n+1} \left(\bar{y}_{i+n} - \bar{y}_i\right)^2, \qquad (4)$$

where $\bar{y}_i$ now denotes the average of the $n$ samples beginning at sample $i$.

Figure 3: In a real, sampled system, the amount of data is finite and *τ* is constrained to be a multiple of the sampling period (here *n*=2).
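To make the two estimators concrete, here is a minimal Python sketch (illustrative only, not the Moku implementation), assuming a uniformly sampled frequency series `y`:

```python
import numpy as np

def avar_nonoverlapping(y, n):
    """Allan variance from non-overlapping segments of n samples (equation (3))."""
    K = len(y) // n                                      # number of whole segments
    ybar = y[:K * n].reshape(K, n).mean(axis=1)          # per-segment time averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

def avar_overlapping(y, n):
    """Overlapping Allan variance (equation (4)): segments may start at any sample."""
    ybar = np.convolve(y, np.ones(n) / n, mode="valid")  # all n-sample running averages
    d = ybar[n:] - ybar[:-n]                             # M - 2n + 1 segment differences
    return 0.5 * np.mean(d ** 2)
```

For constant data both estimators return zero, and for a pure linear drift both agree exactly, as expected.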

**Obtaining Allan variance via integration**

In many common measurement scenarios, there exists a quantity $x$ with the property that

$$y = \frac{dx}{dt}. \qquad (5)$$

For example, in clock stability measurements the time deviation, $X$, is the integral of the fractional frequency difference, $Y$; and in gyroscope systems the measured angle, *θ*, is the integral of the rotation rate, *Ω*, i.e.

$$Y = \frac{dX}{dt}, \qquad (6)$$

$$\Omega = \frac{d\theta}{dt}. \qquad (7)$$

We are also free to compute *x* via numerical integration of equation (5), even if it does not correspond to a measured physical variable.
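A minimal sketch of this numerical integration (the sample period `Ts` is an assumed, uniform value):

```python
import numpy as np

def integrate_series(y, Ts):
    """Numerically integrate y (equation (5)) to obtain x, with x[0] = 0.
    The result has one more sample than y."""
    return np.concatenate(([0.0], np.cumsum(y) * Ts))
```

Differencing the result recovers the segment averages of $y$, which is what makes the $x$-based formulation possible.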

Figure 4: To better use valuable data, segments may be overlapped. This creates additional pairs of consecutive observations, therefore increasing the number of possible summands in equation (3). In this *n* = 2 case, limited to non-overlapping segments, we could perform the subtractions $\bar{y}_3 - \bar{y}_1$, $\bar{y}_5 - \bar{y}_3$, etc. Now, we additionally have $\bar{y}_4 - \bar{y}_2$, $\bar{y}_6 - \bar{y}_4$, etc. Although the samples are not entirely independent, confidence in our result is nevertheless improved.

In such cases

$$\bar{y}_i = \frac{x(t_i + \tau) - x(t_i)}{\tau}, \qquad (8)$$

or, in discrete terms,

$$\bar{y}_i = \frac{x_{i+n} - x_i}{n T_s}, \qquad (9)$$

and equation (4) becomes

$$\sigma_y^2(\tau) \approx \frac{1}{2\tau^2 (N - 2n)} \sum_{i=1}^{N-2n} \left(x_{i+2n} - 2x_{i+n} + x_i\right)^2, \qquad (10)$$

where *N = M + 1* is the length of *x*. To understand this, consider that *y* may be constructed via numerical derivatives (differences) of *x*, so *M = N − 1*.

This may seem like an abstract simplification, but for reasons of computational efficiency, equation (10) provides the most commonly implemented formulation of Allan variance. Note that, here, measurements of *x* provide the Allan variance of *y*, not *x*.
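A sketch of the $x$-based estimator of equation (10) in Python (again illustrative, not the instrument's implementation):

```python
import numpy as np

def avar_from_phase(x, n, Ts):
    """Overlapping Allan variance of y computed from samples of x (equation (10))."""
    x = np.asarray(x, dtype=float)
    tau = n * Ts
    d2 = x[2 * n:] - 2.0 * x[n:-n] + x[:-2 * n]   # the N - 2n second differences
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)
```

Only a single pass over $x$ is needed, with no per-segment averaging, which is the computational advantage alluded to above.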

**Plotting Allan variance**

Allan variance is typically calculated for multiple averaging times and plotted on a log-log scale (Figure 5). Such plots are helpful in determining the best averaging time for a given measurement. Note that longer averaging times are not always preferable, particularly in the presence of low-frequency drifts.

In addition, common noise sources are typically described by power laws, which present known slopes on Allan deviation plots. For example, white frequency noise is reduced with the square root of averaging time. We would thus expect the slope of white noise to be $-1/2$ on a log-log plot, and this is indeed the case. More generally, if the slope of a particular noise source in terms of power spectral density *S* is $\alpha$ (i.e., $S_y(f) \propto f^{\alpha}$), then the slope in terms of Allan deviation will be $\beta$, i.e.

$$\sigma_y(\tau) \propto \tau^{\beta}, \quad \beta = -\frac{\alpha + 1}{2} \qquad (11)$$

for $-3 < \alpha < 1$.

This fact allows one to easily determine which noise source is dominant over different averaging times, build a noise budget to ascertain whether system performance is well-understood, or quantify the contribution of each error source (Figure 6).
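The $-1/2$ slope of white noise can be verified numerically. In this sketch (synthetic data, using the simple non-overlapping estimator of equation (3)), quadrupling the averaging time should roughly halve the Allan deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)   # white frequency noise, unit variance, Ts = 1

def adev(y, n):
    """Non-overlapping Allan deviation for segments of n samples (equation (3))."""
    K = len(y) // n
    ybar = y[:K * n].reshape(K, n).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

# sigma_y(tau) ~ tau^(-1/2), so a 4x longer tau should give a ratio near 2
ratio = adev(y, 4) / adev(y, 16)
```

The ratio approaches 2 to within the statistical uncertainty of the finite dataset.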

Table 1 presents the slopes of noise sources commonly encountered in the study of clocks and gyroscopes.

Table 1: Slope powers of $\sigma_y(\tau)$ (i.e., *β* where $\sigma_y(\tau) \propto \tau^{\beta}$) for various noise sources in selected applications [3, 5]. FM: frequency modulation, PM: phase modulation.

Figure 5: An example time series (upper axes) and its corresponding Allan deviation plot (lower axes). $\sigma_y(\tau)$ has been evaluated for multiple averaging times, *τ*, with the results shown on a log-log scale.

Figure 6: Power-law noise sources present known slopes on a plot of Allan deviation, permitting one to easily model system noise. Total noise is given by the incoherent sum of individual contributions, i.e. $\sigma_{\mathrm{total}} = \left(\sum_k \sigma_k^2\right)^{1/2}$. In this instance, stability improves with averaging time (as the influence of white noise is reduced) until pink/flicker noise becomes dominant. On longer timescales, stability is limited by a linear drift in the data (see Figure 5, upper axes). Measurements will be most stable when taken with an averaging time of around 5000 s.

**Power spectral density vs. Allan variance**

As mentioned, there are a number of tools available to describe the stability of a system. While Allan deviation is a time-domain metric of stability, its frequency-domain counterpart is the power spectral density (PSD), $S_y(f)$. If the dimension of $y$ is $U$, then the dimension of $S_y(f)$ is $U^2/\mathrm{Hz}$. Of course, the information contained in the Allan deviation, $\sigma_y(\tau)$, is simply an alternative representation of that contained in the PSD, and there exists a closed-form conversion (see App. I of [6]). Note that it is only possible to convert from PSD to Allan variance and not the other way around. The conversion formula is

$$\sigma_y^2(\tau) = \int_0^{\infty} S_y(f)\,\lvert H(f)\rvert^2\,df. \qquad (12)$$

Here, $\lvert H(f)\rvert^2 = 2\,\sin^4(\pi \tau f)/(\pi \tau f)^2$ is the magnitude-squared transfer function of the time-domain sampling function.
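As a numerical sanity check of the PSD-to-Allan-variance conversion (a sketch with assumed values, not a general-purpose converter): for white frequency noise with a flat PSD $S_y(f) = h_0$, the known closed-form result is $\sigma_y^2(\tau) = h_0/2\tau$, which the integral reproduces:

```python
import numpy as np

h0, tau = 1.0, 1.0                 # assumed flat PSD level and observation time
df = 5e-4
f = np.arange(df, 200.0, df)       # truncated Fourier frequency grid, Hz
Sy = h0 * np.ones_like(f)          # white frequency noise PSD
H2 = 2.0 * np.sin(np.pi * tau * f) ** 4 / (np.pi * tau * f) ** 2
avar = np.sum(Sy * H2) * df        # numerically integrate S_y(f) |H(f)|^2 over f
# Closed form for white FM noise: sigma_y^2(tau) = h0 / (2 * tau) = 0.5
```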

In this context, a useful expression to be aware of is

$$S_y(f) = (2\pi f)^2 S_x(f), \qquad (13)$$

where $y = dx/dt$ as in equation (5). As an illustrative example, one may convert a PSD of phase ($\varphi$) noise into one of frequency ($f$) noise according to

$$S_f(f') = f'^2\,S_\varphi(f'), \qquad (14)$$

where $f'$ denotes the Fourier frequency; the factor of $(2\pi)^2$ cancels because $f = \frac{1}{2\pi}\frac{d\varphi}{dt}$.

**Conclusion **

In this work we have provided an introduction to Allan variance, demonstrating how it may be computed and interpreted. Originally developed in the context of oscillator stability, this is still where the statistic is most often employed. However, we emphasize that it is applicable to any time series and useful in a broad range of fields.

Allan variance can assist in both determining the ideal observation time for a particular measurement and identifying dominant noise sources. Conversion from power spectral densities to Allan variance is also possible.

Allan variance is a valuable statistical tool, and is one of many such tools available for data post-processing on the Moku Phasemeter. This capability, along with the Phasemeter’s microradian accuracy and intuitive interface via the Moku:app, makes Moku an exceptional device for characterizing the stability of oscillator systems.

**Beyond Allan variance**

Just as limitations were found with standard deviation, Allan deviation, too, is not the ideal statistic in all cases. Two commonly used derivatives of Allan deviation, offering improved performance in certain circumstances, are briefly discussed here for completeness.

### Modified Allan deviation

We noted above the possibility of identifying noise sources based on the gradient traced on a plot of Allan deviation (Figure 6). Yet multiple noise sources present identical slopes. In particular, oscillator white phase modulation (WPM) noise and flicker phase modulation (FPM) noise both generate a slope of $-1$ (see Table 1). However, WPM is sensitive to measurement bandwidth whereas FPM is not. By implementing an additional averaging over *n* adjacent measurements, where $n = \tau/T_s$, the *modified* Allan deviation, $\mathrm{Mod}\,\sigma_y(\tau)$, yields an effective bandwidth that narrows linearly with *τ* and enables these noise sources to be differentiated [7]. The modified Allan variance is given by

$$\mathrm{Mod}\,\sigma_y^2(\tau) = \frac{1}{2}\left\langle \left( \frac{1}{n} \sum_{i=1}^{n} \left(\bar{y}_{i+n} - \bar{y}_i\right) \right)^{\!2} \right\rangle \qquad (15)$$

or, more practically,

$$\mathrm{Mod}\,\sigma_y^2(\tau) = \frac{1}{2\tau^2 n^2 (N - 3n + 1)} \sum_{j=1}^{N-3n+1} \left( \sum_{i=j}^{j+n-1} \left(x_{i+2n} - 2x_{i+n} + x_i\right) \right)^{\!2} \qquad (16)$$

with $\tau = n T_s$ and $N$ the number of samples of $x$, as before.
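A direct (unoptimized) Python sketch of this estimator, assuming uniformly spaced phase samples `x` with period `Ts`:

```python
import numpy as np

def mod_avar_from_phase(x, n, Ts):
    """Modified Allan variance from N phase samples: average runs of n
    consecutive second differences before squaring."""
    x = np.asarray(x, dtype=float)
    tau = n * Ts
    d2 = x[2 * n:] - 2.0 * x[n:-n] + x[:-2 * n]        # N - 2n second differences
    inner = np.convolve(d2, np.ones(n), mode="valid")  # N - 3n + 1 inner sums
    return np.mean(inner ** 2) / (2.0 * tau ** 2 * n ** 2)
```

For *n* = 1 the inner sum contains a single term and the expression reduces to the ordinary overlapping Allan variance computed from $x$.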

### Time deviation

A further metric, based on the modified Allan deviation, is the time deviation, or time Allan deviation (TDEV), defined by

$$\mathrm{TDEV}(\tau) = \frac{\tau}{\sqrt{3}}\,\mathrm{Mod}\,\sigma_y(\tau). \qquad (17)$$

Note that this is nothing other than a “tilted” version of the modified Allan deviation (all slopes on a log-log plot will be increased by one power of *τ*). The normalization factor, $\sqrt{3}$, is chosen such that TDEV agrees with the standard deviation for white phase modulation (WPM) noise when *n* = 1.

TDEV is also often denoted by $\sigma_x(\tau)$, making explicit the fact that it describes the stability of $x$ (rather than $y$), due to the additional factor of *τ*.

As the name suggests, this measure is useful in the characterization of distributed timing signals, where it is used to describe the phase variation of a clock as a function of averaging time.
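Given a modified Allan deviation value, TDEV follows directly from the scaling above; a minimal sketch:

```python
import math

def tdev(tau, mod_adev):
    """Time deviation: scale the modified Allan deviation by tau / sqrt(3)."""
    return tau / math.sqrt(3.0) * mod_adev
```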

## References

[1] D. W. Allan, “Statistics of atomic frequency standards,” *Proceedings of the IEEE*, vol. 54, no. 2, pp. 221–230, Feb. 1966.

[2] L. Hua, Y. Zhuang, L. Qi, J. Yang, and L. Shi, “Noise Analysis and Modeling in Visible Light Communication Using Allan Variance,” *IEEE Access*, vol. 6, pp. 74320–74327, 2018.

[3] IEEE, “IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Interferometric Fiber Optic Gyros,” *IEEE Std 952-1997*, pp. 1–84, 1997.

[4] C. A. Greenhall and W. Riley, “Uncertainty of Stability Variances Based on Finite Differences,” September 2004. [Online]. Available: https://ntrs.nasa.gov/citations/20050061319

[5] W. Riley and D. Howe, “NIST Special Publication 1065: Handbook of Frequency Stability Analysis,” July 2008. [Online]. Available: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=50505

[6] J. A. Barnes, A. R. Chi, L. S. Cutler, D. J. Healey, D. B. Leeson, T. E. McGunigal, J. A. Mullen, W. L. Smith, R. L. Sydnor, R. F. C. Vessot, and G. M. R. Winkler, “Characterization of frequency stability,” *IEEE Transactions on Instrumentation and Measurement*, vol. IM-20, no. 2, pp. 105–120, 1971.

[7] D. W. Allan and J. A. Barnes, “A Modified “Allan Variance” with Increased Oscillator Characterization Ability,” in *Proc. 35th Ann. Freq. Control Symposium*. USAERADCOM, Ft. Monmouth, NJ 07703: Time and Frequency Division, National Bureau of Standards, May 1981. [Online]. Available: https://tf.nist.gov/general/pdf/560.pdf