Computational explosion in the frequency estimation of sinusoidal data

Kaimeng Zhang, Chi Tim Ng, and Myunghwan Na

Department of Statistics, Chonnam National University, Korea
Correspondence to: Chi Tim Ng, Department of Statistics, Chonnam National University, 77, Yongbong-ro, Buk-gu, Gwangju 61186, Korea. E-mail: easterlyng@gmail.com
Received April 20, 2018; Revised June 1, 2018; Accepted June 8, 2018.
Abstract

This paper highlights the computational explosion issue in the autoregressive moving average (ARMA) approach to frequency estimation for sinusoidal data with a large sample size. A new algorithm is proposed to circumvent the computational explosion difficulty in the conditional least-squares estimation method. Note that a sinusoidal pattern can be generated by a non-invertible, non-stationary ARMA model. The computational explosion is shown to be closely related to the non-invertibility of the equivalent ARMA model. Simulation studies illustrate the computational explosion phenomenon and show that the proposed algorithm can efficiently overcome this difficulty. A real-data example of sunspot numbers illustrates the application of the proposed algorithm to time series data exhibiting a sinusoidal pattern.

Keywords : ARMA, sinusoidal data, computational explosion
1. Introduction

Time series exhibiting sinusoidal patterns are ubiquitous in digital signal processing, econometrics, agriculture, and biomedicine. For example, Stroebel et al. (2010) summarize statistical methods for detecting biological rhythms. Human breathing data collected during radiation therapy are studied in Choi et al. (2014). White et al. (2011) suggest that the location of a tumor that moves periodically due to breathing motion during therapy can be predicted based on an autoregressive moving average (ARMA) model. A classical treatment of ARMA-based time series methods is Brockwell and Davis (1991). Unlike applications in econometrics and agriculture, where the period can naturally be taken as one year, the frequency and periodicity of certain biological data (e.g., the breathing cycle) depend on the physical conditions of the subjects. In such situations, the frequency has to be estimated. In addition, the seasonal autoregressive integrated moving average (SARIMA) model requires a known periodicity and fails to handle unknown-frequency cases. The periodogram is another important frequency estimation method for time series data with unknown frequency. However, interpreting periodogram results can be difficult when the time series deviates from a pure sinusoidal pattern, for example, a linear combination of a sinusoidal function and a linear function. In such a situation, the non-zero Fourier frequencies are no longer sparse.

The feasibility of analyzing sinusoidal data via the ARMA model is studied in Platonod et al. (1992). It is shown there that a sine/cosine model perturbed with error terms can be rewritten as an ARMA model with unit complex characteristic roots. After fitting the ARMA model, the frequency can be inferred indirectly through these characteristic roots. In addition, a combined linear trend and sinusoidal pattern can be described via the multiplicity of the complex characteristic roots. In their simulation study, Platonod et al. (1992) only consider examples with the small sample size n = 100. As noted on p.292 of Ozaki (2012), most existing numerical nonlinear optimization procedures for ARMA model estimation face the risk of “computational explosion” due to violation of the invertibility condition on the MA parameters during the iteration. This computational explosion difficulty becomes more serious for sinusoidal data because the unit complex roots produce non-invertible MA coefficients. This hinders the application of ARMA models to biomedical data exhibiting periodic patterns.

Both the stationarity condition and the invertibility condition are violated for sinusoidal data, which makes the situation more difficult. The maximum likelihood estimate cannot be obtained unless the initial values of the innovation terms are supplied by the user, since the stationary distribution is not well-defined. The initial values are unknown but are necessary as the starting point of the recursive procedure described in Brockwell and Davis (1991). Without the invertibility condition, it is not guaranteed that the influence of these initial values on the maximum likelihood estimate is negligible.

In this paper, a new algorithm is proposed to estimate ARMA models violating both the stationarity and invertibility conditions. The key idea is that a univariate ARMA model always has a multivariate state-space representation, see Chapter 12 of Brockwell and Davis (1991). By storing only the normalized state vector and the logarithm of its norm, we are able to circumvent the “computational explosion” difficulty.

The paper is organized as follows. In Section 2, the relationship between sinusoidal data and non-invertible ARMA model is discussed. In Section 3, a procedure allowing estimation under noninvertibility is presented. The computational explosion difficulty is further dealt with in Section 4. Simulation studies are given in Section 5, followed by concluding remarks in Section 6.

2. Preliminaries: ARMA versus sinusoidal pattern

Suppose that the data are generated by the model

$$y_t=\sum_{k=1}^{K}A_k \sin 2\pi\omega_k t+\sum_{k=1}^{K}B_k \cos 2\pi\omega_k t+\varepsilon_t,$$

where εt, t = 1, 2, . . . , T are independent and identically-distributed N(0, 1) random variables. Direct estimation based on the above model is extremely difficult because the local minimum of

$$\sum_{t=1}^{T}\Big(y_t-\sum_{k=1}^{K}A_k \sin 2\pi\omega_k t-\sum_{k=1}^{K}B_k \cos 2\pi\omega_k t\Big)^2$$

is not unique due to the periodicity of sine/cosine function.
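The multiplicity of local minima can be seen in the following Python sketch (illustrative values only; the implementations in this paper are in R), which profiles the sum of squares over the frequency with A and B fitted by linear least squares at each candidate value:

```python
import numpy as np

# Profile sum of squares over the frequency alone: at each candidate w,
# A and B are fitted by linear least squares. Illustrative values only.
t = np.arange(1, 201)
y = np.sin(0.25 * t)                       # noiseless sinusoid, true angular frequency 0.25

def profile_sos(w):
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

grid = np.linspace(0.01, 3.0, 3000)
vals = np.array([profile_sos(w) for w in grid])
interior = (vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:])
n_local_minima = int(interior.sum())       # many spurious local minima, not just one
best_w = grid[np.argmin(vals)]             # the global minimizer sits near the true 0.25
```

Even after profiling out A and B, the objective in the frequency alone is highly multimodal, so a local search started far from the truth can stall in a sidelobe.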

Can the ARMA model be used to handle the sinusoidal pattern? The answer is positive; see Platonod et al. (1992). The crucial point is that the sine/cosine function is the solution to a recursive relationship. To illustrate the basic ideas, consider a special case where the coefficients do not depend on time,

$$y_t=Ar^t \sin 2\pi\omega t+Br^t \cos 2\pi\omega t+\varepsilon_t.$$

Here, A, B, r, ω are unknown parameters. Note that $y_t-\varepsilon_t=Ar^t \sin 2\pi\omega t+Br^t \cos 2\pi\omega t$ fulfills

$$(y_t-\varepsilon_t)=\varphi_1(y_{t-1}-\varepsilon_{t-1})+\varphi_2(y_{t-2}-\varepsilon_{t-2})$$

when the characteristic equation $1-\varphi_1 z-\varphi_2 z^2=0$ has the complex roots $r^{-1}\cos 2\pi\omega \pm r^{-1}\sin 2\pi\omega\,\sqrt{-1}$. See Brockwell and Davis (1991) for the application of recursive relationships to time series analysis. Transposing terms, Equation (2.1) can be rewritten as

$$y_t=\varphi_1 y_{t-1}+\varphi_2 y_{t-2}-\theta_1\varepsilon_{t-1}-\theta_2\varepsilon_{t-2}+\varepsilon_t,$$

where θ1 = φ1 and θ2 = φ2. This is a special case of the ARMA model with the MA coefficients equal to the AR coefficients. Then, r and ω can be obtained from the roots of the characteristic equation $1-\varphi_1 z-\varphi_2 z^2=0$. Given r and ω, the coefficients A and B can further be obtained using the usual regression method, and the amplitude is estimated as $(A^2+B^2)^{1/2}$.
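This equivalence can be checked numerically. The following Python sketch (with arbitrary illustrative values of A, B, r, and the angular frequency, not taken from the paper) verifies that the damped sinusoid satisfies the AR(2) recursion with φ1 = 2r cos 2πω and φ2 = −r²:

```python
import numpy as np

# x_t = A r^t sin(wt) + B r^t cos(wt) satisfies the AR(2) recursion
# x_t = phi1 x_{t-1} + phi2 x_{t-2} with phi1 = 2 r cos(w), phi2 = -r^2,
# since the characteristic roots are r e^{iw} and r e^{-iw}.
# A, B, r, w below are arbitrary illustrative values.
A, B, r, w = 2.5, 1.3, 0.99, 0.25          # w plays the role of 2*pi*omega
t = np.arange(0, 50)
x = A * r**t * np.sin(w * t) + B * r**t * np.cos(w * t)
phi1, phi2 = 2 * r * np.cos(w), -r**2
assert np.allclose(x[2:], phi1 * x[1:-1] + phi2 * x[:-2])
```

The roots of $z^2-\varphi_1 z-\varphi_2$ then have modulus r, which is how the damping and frequency are later read off from the fitted coefficients.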

If the true model is

$$y_t=\sum_{k=1}^{K}A_k r_k^t \sin 2\pi\omega_k t+\sum_{k=1}^{K}B_k r_k^t \cos 2\pi\omega_k t+\varepsilon_t,$$

one can consider the following model fitting procedure.

### Procedure 1

(Finding frequency via ARMA model)

• Fit the ARMA(2K, 2K) model.

• Find all roots z1, . . . , zK (excluding the conjugates z̄1, . . . , z̄K) of the AR characteristic polynomial.

• Estimate ω̂k = Arg(1/zk)/(2π) and r̂k = |1/zk|, k = 1, 2, . . . , K.

• Estimate Ak and Bk, k = 1, 2, . . . , K, by regressing yt against $\hat r_k^t \sin 2\pi\hat\omega_k t$ and $\hat r_k^t \cos 2\pi\hat\omega_k t$, k = 1, 2, . . . , K.
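Steps 2 and 3 of the procedure can be sketched as follows in Python (the helper name freqs_from_ar and the test coefficients are assumptions for illustration, not code from the paper):

```python
import numpy as np

def freqs_from_ar(phi):
    # Roots of the AR characteristic polynomial 1 - phi_1 z - ... - phi_p z^p,
    # keeping one root per conjugate pair; then r_k = |1/z_k| and the
    # angular frequency 2*pi*omega_k = |Arg(1/z_k)|, as in Steps 2-3.
    coeffs = np.concatenate([-phi[::-1], [1.0]])   # highest degree first
    z = np.roots(coeffs)
    z = z[np.imag(z) > 0]                          # drop the conjugates
    return np.sort(np.abs(1.0 / z)), np.sort(np.abs(np.angle(1.0 / z)))

# AR coefficients generated by prod_k (1 - 2 r_k cos(2 pi omega_k) L + r_k^2 L^2)
# with r_1 = r_2 = 1 and angular frequencies (0.25, 0.5), as in Setting 1 of Section 5
poly = np.convolve([1.0, -2 * np.cos(0.25), 1.0], [1.0, -2 * np.cos(0.5), 1.0])
phi = -poly[1:]
r_hat, w_hat = freqs_from_ar(phi)
```

Here the recovery is exact because the AR coefficients are exact; in practice they come from the fitted ARMA(2K, 2K) model.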

3. Obtaining ε̂t, t = 1, 2, . . . under non-invertible cases

Consider the model

$$y_t=\varphi_1 y_{t-1}+\cdots+\varphi_p y_{t-p}+\varepsilon_t-\theta_1\varepsilon_{t-1}-\cdots-\theta_q\varepsilon_{t-q},$$

where εt, t = 1, 2, . . . , n are independent N(0, σ²) random variables.

Consider the special cases with p = q = 2K and

$$y_t-\varphi_1 y_{t-1}-\cdots-\varphi_p y_{t-p}=\prod_{k=1}^{K}\left(1-2r_k \cos 2\pi\omega_k\,L+r_k^2 L^2\right) y_t=\prod_{k=1}^{K}\left(1-2r_k \cos 2\pi\omega_k\,L+r_k^2 L^2\right) \varepsilon_t=\varepsilon_t-\theta_1\varepsilon_{t-1}-\cdots-\theta_q\varepsilon_{t-q},$$

where L is the back-shift operator. Note that yt exhibits a periodic pattern. In such a situation, the MA polynomial has complex roots and the model is non-invertible when rk ≥ 1.

Let $\hat\varepsilon_{-(q-1)}=\hat\varepsilon_{-(q-2)}=\cdots=\hat\varepsilon_0=0$ be the initial values. If the invertibility condition holds, the innovation terms for t = 1, 2, . . . can be obtained recursively as

$$\hat\varepsilon_t(\varphi,\theta)=y_t-\varphi_1 y_{t-1}-\cdots-\varphi_p y_{t-p}+\theta_1\hat\varepsilon_{t-1}(\varphi,\theta)+\cdots+\theta_q\hat\varepsilon_{t-q}(\varphi,\theta).$$

However, if the invertibility condition is violated, the influence of the misspecified initial values $\hat\varepsilon_{-(q-1)}=\hat\varepsilon_{-(q-2)}=\cdots=\hat\varepsilon_0=0$ on ε̂t(φ, θ) is not guaranteed to diminish as t → ∞. Our solution to this non-invertibility problem is based on the idea that the influence of the unknown true initial values (ε−(q−1), ε−(q−2), . . . , ε0) can be eliminated by regressing ε̂t(φ, θ) against the q covariates η̂t(φ, θ) = (η̂t1(φ, θ), η̂t2(φ, θ), . . . , η̂tq(φ, θ)) defined in Procedure 2. For i = 1, 2, . . . , q, the ith component η̂ti(φ, θ) measures the sensitivity of ε̂t(φ, θ) to a unit change in the ith misspecified initial value ε̂i−q. Denote the vector of regression coefficients by α. The details are shown in the following procedure, and rigorous justifications are given after its presentation.
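The non-diminishing influence of the initial values is easy to demonstrate. The following Python sketch (assumed illustrative values, with φ = 0 and q = 1, not the paper's code) shows that for |θ| > 1 the recursion amplifies a unit change in the initial value by exactly θ^T, and the innovation estimates themselves explode:

```python
import numpy as np

# Recursion eps_t = y_t + theta * eps_{t-1} (phi = 0, q = 1) with a
# non-invertible MA coefficient |theta| > 1: the influence of the
# initial value eps_0 grows like theta^t instead of dying out.
rng = np.random.default_rng(1)
theta, T = 1.2, 300
y = rng.normal(size=T)

def innovations(eps0):
    eps = eps0
    for t in range(T):
        eps = y[t] + theta * eps
    return eps

gap = innovations(1.0) - innovations(0.0)   # sensitivity to the initial value
assert np.isclose(gap, theta ** T)          # exactly theta^T, astronomically large
assert abs(innovations(0.0)) > 1e6          # the estimates themselves explode
```

In the invertible case |θ| < 1 the same gap would be θ^T → 0, which is why the initial values can safely be set to zero there.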

### Procedure 2

(New definition of conditional sum of squares)

• Initialize $\hat\varepsilon_{-(q-1)}=\hat\varepsilon_{-(q-2)}=\cdots=\hat\varepsilon_0=0$ and $[\hat\eta_{-(q-1)},\hat\eta_{-(q-2)},\ldots,\hat\eta_0]=-I_q$.

• Compute ε̂ and η̂ via the recursive relationships
$$\hat\varepsilon_t(\varphi,\theta)=y_t-\varphi_1 y_{t-1}-\cdots-\varphi_p y_{t-p}+\theta_1\hat\varepsilon_{t-1}(\varphi,\theta)+\cdots+\theta_q\hat\varepsilon_{t-q}(\varphi,\theta),$$
$$\hat\eta_t(\varphi,\theta)=\theta_1\hat\eta_{t-1}(\varphi,\theta)+\cdots+\theta_q\hat\eta_{t-q}(\varphi,\theta).$$

• Minimize the sum of squares
$$Q(\varphi,\theta,\alpha)=\sum_{t=1}^{T}\left(\hat\varepsilon_t(\varphi,\theta)-\alpha^T\hat\eta_t(\varphi,\theta)\right)^2.$$

Given (φ, θ), the optimal α is
$$\alpha=\left(\sum_{t=1}^{T}\hat\eta_t(\varphi,\theta)\hat\eta_t^T(\varphi,\theta)\right)^{-1}\left(\sum_{t=1}^{T}\hat\varepsilon_t(\varphi,\theta)\hat\eta_t(\varphi,\theta)\right).$$

Note that the usual conditional sum of squares method minimizes $\sum_{t=1}^{T}\hat\varepsilon_t^2(\varphi,\theta)$ instead.
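Procedure 2 can be transcribed directly. The following Python sketch (the function name and the ARMA(1,1) illustration values are assumptions; the paper's own implementation is in R) recovers the true innovations exactly when the data are generated with zero pre-sample innovations:

```python
import numpy as np

def procedure2_Q(y, phi, theta):
    # Conditional sum of squares of Procedure 2: run the eps-hat and
    # eta-hat recursions from zero initial innovations and -I_q, then
    # regress eps-hat on eta-hat to remove the initial-value influence.
    p, q, T = len(phi), len(theta), len(y)
    eps = np.zeros(q + T)                  # slots 0..q-1 hold the zero initial values
    eta = np.zeros((q + T, q))
    eta[:q] = -np.eye(q)
    for i in range(T):
        t = q + i
        ar = y[i] - sum(phi[j] * (y[i - 1 - j] if i - 1 - j >= 0 else 0.0)
                        for j in range(p))
        eps[t] = ar + sum(theta[j] * eps[t - 1 - j] for j in range(q))
        eta[t] = sum(theta[j] * eta[t - 1 - j] for j in range(q))
    E, H = eps[q:], eta[q:]
    alpha, *_ = np.linalg.lstsq(H, E, rcond=None)   # optimal regression coefficients
    resid = E - H @ alpha
    return float(resid @ resid), E

# ARMA(1,1) data generated with zero pre-sample innovations, so the
# recursion should recover the true innovations exactly
rng = np.random.default_rng(0)
phi, theta = np.array([0.3]), np.array([0.5])
T = 200
eps_true = rng.normal(size=T)
y = np.zeros(T)
for t in range(T):
    y[t] = ((phi[0] * y[t - 1] - theta[0] * eps_true[t - 1]) if t >= 1 else 0.0) \
           + eps_true[t]
Q, E = procedure2_Q(y, phi, theta)
```

Since the regression step can only reduce the sum of squares, Q never exceeds the plain conditional sum of squares evaluated at the same (φ, θ).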

To illustrate the rationale underlying the procedure, notice that the model can be rewritten as

$$\Theta_L\begin{pmatrix}\varepsilon_1\\\varepsilon_2\\\vdots\\\varepsilon_T\end{pmatrix}+\Theta_U\begin{pmatrix}\varepsilon_{-(q-1)}\\\varepsilon_{-(q-2)}\\\vdots\\\varepsilon_0\end{pmatrix}=\begin{pmatrix}y_1-\varphi_1 y_0-\cdots-\varphi_p y_{-(p-1)}\\y_2-\varphi_1 y_1-\cdots-\varphi_p y_{-(p-2)}\\\vdots\\y_T-\varphi_1 y_{T-1}-\cdots-\varphi_p y_{T-p}\end{pmatrix},$$

where

$$\Theta_L=\begin{pmatrix}1&&&&&\\-\theta_1&1&&&&\\-\theta_2&-\theta_1&1&&&\\\vdots&\vdots&\vdots&\ddots&&\\-\theta_q&-\theta_{q-1}&\cdots&-\theta_1&1&\\&\ddots&\ddots&\ddots&\ddots&\\&&-\theta_q&\cdots&-\theta_1&1\end{pmatrix}$$

and

$$\Theta_U=\begin{pmatrix}-\theta_q&-\theta_{q-1}&\cdots&-\theta_1\\&-\theta_q&\cdots&-\theta_2\\&&\ddots&\vdots\\&&&-\theta_q\\0&0&\cdots&0\\\vdots&\vdots&&\vdots\\0&0&\cdots&0\end{pmatrix}.$$

Equivalently,

$$\Theta_L^{-1}Z=\Theta_L^{-1}\begin{pmatrix}y_1-\varphi_1 y_0-\cdots-\varphi_p y_{-(p-1)}\\y_2-\varphi_1 y_1-\cdots-\varphi_p y_{-(p-2)}\\\vdots\\y_T-\varphi_1 y_{T-1}-\cdots-\varphi_p y_{T-p}\end{pmatrix}=\Theta_L^{-1}\Theta_U\begin{pmatrix}\varepsilon_{-(q-1)}\\\varepsilon_{-(q-2)}\\\vdots\\\varepsilon_0\end{pmatrix}+\begin{pmatrix}\varepsilon_1\\\varepsilon_2\\\vdots\\\varepsilon_T\end{pmatrix}.$$

One can consider this as a regression model with $\Theta_L^{-1}Z$ as the response, the initial values (ε−(q−1), ε−(q−2), . . . , ε0) as the coefficients, and $\Theta_L^{-1}\Theta_U$ as the design matrix. It is possible to eliminate the influence of the initial guesses via this regression model. The computation of the quantities $\Theta_L^{-1}Z$ and $\Theta_L^{-1}\Theta_U$ can be simplified by noting that

$$[\Theta_U,\Theta_L]\begin{pmatrix}0\\\Theta_L^{-1}Z\end{pmatrix}=Z \quad\text{and}\quad [\Theta_U,\Theta_L]\begin{pmatrix}-I_q\\\Theta_L^{-1}\Theta_U\end{pmatrix}=0.$$

Set $\hat\varepsilon_{-(q-1)}=\hat\varepsilon_{-(q-2)}=\cdots=\hat\varepsilon_0=0$ and $[\hat\eta_{-(q-1)},\hat\eta_{-(q-2)},\ldots,\hat\eta_0]=-I_q$. Denote the elements of the response vector $\Theta_L^{-1}Z$ by ε̂t(φ, θ) and the rows of the design matrix by η̂t(φ, θ). Equation (3.3) is equivalent to the recursive formulas in Step 2 of the procedure since all rows of [ΘU, ΘL] are identical except for the zero entries. The conditional sum of squares defined in Step 3 can be minimized using any suitable numerical optimization routine. In the simulation studies presented in Section 5, this is performed using the “unconstrained optimization by quadratic approximation (UObyQA)” algorithm of Powell (2002). Note that in invertible cases, η̂t(φ, θ) decays exponentially, so neglecting these terms gives the usual maximum conditional likelihood estimation method. Under non-invertible cases, however, they are not negligible.
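Both identities are easy to verify numerically. The following Python sketch builds ΘL and ΘU for small, arbitrary T, q, and θ (illustrative values, including a non-invertible MA coefficient):

```python
import numpy as np

# Build Theta_L (unit lower-triangular band of -theta_j) and Theta_U
# (top-right corner block) for small T, q and check the two identities
# used to simplify the computation.
T, q = 8, 2
theta = np.array([0.7, 1.1])                 # deliberately non-invertible
Theta_L = np.eye(T)
for j in range(1, q + 1):
    Theta_L += np.diag(np.full(T - j, -theta[j - 1]), -j)
Theta_U = np.zeros((T, q))
for i in range(q):
    for j in range(i, q):
        Theta_U[i, j] = -theta[q - 1 - (j - i)]

Z = np.random.default_rng(0).normal(size=T)
M = np.hstack([Theta_U, Theta_L])
v = np.concatenate([np.zeros(q), np.linalg.solve(Theta_L, Z)])
assert np.allclose(M @ v, Z)                       # first identity
X = np.vstack([-np.eye(q), np.linalg.solve(Theta_L, Theta_U)])
assert np.allclose(M @ X, np.zeros((T, q)))        # second identity
```

Because ΘL is unit lower triangular, both solves amount to forward substitution, which is exactly what the recursions in Step 2 of Procedure 2 perform row by row.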

4. Computational explosion issues

To avoid the difficulties related to the exploding ε̂t(φ, θ) and η̂t(φ, θ), consider an algorithm that stores only the logarithm of the conditional sum of squares Q(φ, θ, α). Denote

$$E_t=\left(1,\hat\varepsilon_t(\varphi,\theta),\ldots,\hat\varepsilon_{t-q+1}(\varphi,\theta),\hat\eta_t(\varphi,\theta),\ldots,\hat\eta_{t-q+1}(\varphi,\theta)\right)$$

and let $e_t=E_t/\|E_t\|_2$ be the normalized vector. Let

$$B_t=\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0_q^T & 0_q^T & \cdots & 0_q^T\\
a_t & \theta_1 & \theta_2 & \cdots & \theta_q & 0_q^T & 0_q^T & \cdots & 0_q^T\\
0 & 1 & 0 & \cdots & 0 & 0_q^T & 0_q^T & \cdots & 0_q^T\\
0 & 0 & \ddots & & 0 & 0_q^T & 0_q^T & \cdots & 0_q^T\\
0 & 0 & 0 & 1\ 0 & & 0_q^T & 0_q^T & \cdots & 0_q^T\\
0_q & 0_q & 0_q & \cdots & 0_q & \theta_1 I_q & \theta_2 I_q & \cdots & \theta_q I_q\\
0_q & 0_q & 0_q & \cdots & 0_q & I_q & 0_q & \cdots & 0_q\\
0_q & 0_q & 0_q & \cdots & 0_q & 0_q & \ddots & \cdots & 0_q\\
0_q & 0_q & 0_q & \cdots & 0_q & 0_q & \cdots & I_q & 0_q
\end{pmatrix}$$

and

$$a_t=y_t-\varphi_1 y_{t-1}-\cdots-\varphi_p y_{t-p}.$$

### Procedure 3

(Algorithm storing the logarithm of conditional sum of squares)

• Set g0 = 0 and S0 = 0.

• Define recursively for t = 1, 2, . . . , T,
$$c_t=\|B_t e_{t-1}\|_2,\qquad e_t=\frac{B_t e_{t-1}}{c_t},\qquad g_t=g_{t-1}+\log c_t^2,\qquad S_t=e_t e_t'+\frac{S_{t-1}}{c_t^2}.$$

• Compute $\log Q(\varphi,\theta,\alpha)=g_T+\log b^T S_T b$. Here, $b^T=\left(0,1,0,\ldots,0,-\alpha^T,0_q^T,\ldots,0_q^T\right)$ and $\alpha$ can be obtained as $S_{[3:(p+q+2),3:(p+q+2)]}^{-1}S_{[3:(p+q+2),2]}$.

The equivalence between Procedures 2 and 3 can be verified easily by mathematical induction arguments.
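The normalization device of Procedure 3 can be sketched generically in Python (the matrices below are stand-ins for Bt, chosen only so that the raw product would overflow double precision):

```python
import numpy as np

def log_norm_recursion(Bs, E0):
    # Track e_t = E_t / ||E_t||_2 and g_t = log ||E_t||_2^2 for
    # E_t = B_t E_{t-1}; the exploding product never leaves log scale.
    e = E0 / np.linalg.norm(E0)
    g = 2.0 * np.log(np.linalg.norm(E0))
    for B in Bs:
        v = B @ e
        c = np.linalg.norm(v)
        e, g = v / c, g + np.log(c ** 2)
    return e, g

# 1000 steps of B = 3*I would give ||E_T|| = 3^1000, far beyond the
# double-precision limit (~1e308), yet g stores it exactly as 2000*log(3)
Bs = [3.0 * np.eye(2)] * 1000
e, g = log_norm_recursion(Bs, np.array([1.0, 0.0]))
```

The rank-one accumulator St of Procedure 3 is maintained in the same way, rescaled by $c_t^2$ at every step, so bᵀSTb and gT together recover log Q without ever forming the overflowing quantities.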

5. Simulation study

The simulation study illustrates: (i) the computational explosion phenomenon, (ii) the necessity of introducing η̂t in Procedure 2, and (iii) the feasibility of frequency estimation. The proposed methods are implemented in the R language. The optimization is carried out using the R package “powell”, which implements the UObyQA algorithm of Powell (2002).

### 5.1. Computational explosion phenomenon

Consider the frequency representation of the ARMA model as in Equation (2.2),

$$y_t=\sum_{k=1}^{K}A_k r_k^t \sin 2\pi\omega_k t+\sum_{k=1}^{K}B_k r_k^t \cos 2\pi\omega_k t+0.5\varepsilon_t,$$

with the following two settings:

• Setting 1: p = q = 2K = 4, r1 = r2 = 1, (A1, A2) = (2.5, −0.8), (B1, B2) = (1.3, −0.5), and 2πω = (0.25, 0.5).

• Setting 2: p = q = 2K = 6, r1 = r2 = r3 = 1, (A1, A2, A3) = (2.5, −0.8, 1.2), (B1, B2, B3) = (1.3, −0.5, 0.3), and 2πω = (0.25, 0.5, 0.75).

The time series are then standardized so that the standard deviations equal one. The coefficients φ* = θ* are obtained from the expansion

$$1-\varphi_1 L-\cdots-\varphi_p L^p=\prod_{k=1}^{K}\left(1-2r_k \cos 2\pi\omega_k\,L+r_k^2 L^2\right).$$

To illustrate the computational explosion phenomenon, the critical sample sizes for overflow are investigated. The programs are implemented in the R language. Procedure 2 requires the storage of the values of the sum of squares Q. Overflow-related errors occur if Q is greater than 10^308 since R stores numerical values in double precision. The experiment is conducted as follows. Perturb each component of the coefficients φ* and θ* by N(0, 0.03²) random variables. For each T = 10, 20, 30, . . . , 2500, the conditional sum of squares Q is evaluated at the perturbed coefficients using Procedures 2 and 3. The experiment is repeated 100 times, with different perturbed coefficients in different trials. For each T, if there is no overflow-related error in all 100 trials, the average log(Q) value is reported. For Procedure 2, the results of Settings 1 and 2 are presented in Figure 1, panels (a) and (b) respectively. Under Setting 1, the computation of average log(Q) results in an overflow problem after T ≈ 750, and under Setting 2, the critical sample size for the explosion is ≈ 600. Before the overflow occurs, average log(Q) is close to 400; beyond this point, Q exceeds 10^308, the limit of double precision. This means that Procedure 2 alone can result in the computational explosion problem, so Procedure 3 is necessary. For Procedure 3, the average log(Q) results under Settings 1 and 2 are reported in Figure 1, panels (c) and (d). Procedure 3 exhibits no computational explosion problem.

### 5.2. Necessity of introducing η̂t in Procedures 2 and 3

To see the necessity of introducing η̂t, Procedures 2 and 3, based on $Q(\varphi,\theta,\alpha)=\sum_{t=1}^{T}\left(\hat\varepsilon_t(\varphi,\theta)-\alpha'\hat\eta_t(\varphi,\theta)\right)^2$, are compared to the usual conditional sum of squares $\sum_{t=1}^{T}\hat\varepsilon_t^2(\varphi,\theta)$. In the simulation, Procedure 3 is implemented in the R language and the conditional sum of squares estimation is performed using the arima function provided by R. Consider the following three settings:

• Setting 1: p = q = 2K = 4, r1 = r2 = 1, (A1, A2) = (2.5, −0.8), (B1, B2) = (1.3, −0.5), and 2πω = (0.25, 0.5).

• Setting 2: p = q = 2K = 6, r1 = r2 = r3 = 1, (A1, A2, A3) = (2.5, −0.8, 1.2), (B1, B2, B3) = (1.3, −0.5, 0.3), and 2πω = (0.25, 0.5, 0.75).

• Setting 3: p = q = 2K = 8, r1 = r2 = r3 = r4 = 1, (A1, A2, A3, A4) = (2.5, −0.8, 1.2, 0.7), (B1, B2, B3, B4) = (1.3, −0.5, 0.3, −0.5), and 2πω = (0.25, 0.5, 0.75, 0.9).

For each setting, the experiment is repeated N = 100 times with different sample sizes T = 100, 300, 600, 1500. The performance is evaluated via the 2% trimmed relative mean square error (RMSE) of the coefficients φ, θ,

$$\mathrm{RMSE}_\varphi=\left(\frac{1}{0.8N}\sum_{n=1}^{0.8N}\left\|\frac{\varphi^{(n)}-\varphi^*}{\varphi^*}\right\|^2\right)^{\frac{1}{2}} \quad\text{and}\quad \mathrm{RMSE}_\theta=\left(\frac{1}{0.8N}\sum_{n=1}^{0.8N}\left\|\frac{\theta^{(n)}-\theta^*}{\theta^*}\right\|^2\right)^{\frac{1}{2}}.$$

Here, φ(n) and θ(n) refer to the estimates obtained in the trial with the nth smallest mean relative error, and φ* and θ* refer to the true parameters. The coefficients φ* = θ* are obtained from the expansion

$$1-\varphi_1 L-\cdots-\varphi_p L^p=\prod_{k=1}^{K}\left(1-2r_k \cos 2\pi\omega_k\,L+r_k^2 L^2\right).$$

Table 1 shows the RMSE results. ARMA models fitted by the R function arima tend to have greater RMSE than those fitted by Procedure 3, especially for the larger p cases, p = 6, 8.

### 5.3. Feasibility of frequency estimation

The same settings as in Section 5.2 are considered. To study the performance of frequency estimation under the ARMAeta model, 2πω is estimated via Procedure 1. Step 1 of Procedure 1 is carried out by minimizing the sum of squares in Procedure 3 using the UObyQA optimization algorithm; the computational explosion difficulty is avoided since the sum of squares is obtained from Procedure 3. The performance is evaluated via the 2% trimmed RMSE of 2πω,

$$\mathrm{RMSE}_{\omega_k}=\left(\frac{1}{0.8N}\sum_{n=1}^{0.8N}\left\|2\pi\omega_k^{(n)}-2\pi\omega_k^*\right\|^2\right)^{\frac{1}{2}},$$

where $\omega_k^*$ is the true value and $\omega_k^{(n)}$ is the estimate from the trial with the nth smallest mean relative error.

Table 2 shows the RMSE of ω1, . . . , ωK. It can be seen that the RMSE increases as k, T, and p increase.

6. Real data analysis

Consider the hysteresis of solar cycles, which can be measured by the number of sunspots; see Dmitriev et al. (2002) and Suyal et al. (2012). We use the monthly Wolf sunspot number data from 1749 to March 2018, with 3230 observations in total, see Figure 2. The data can be obtained from the website of the Royal Observatory of Belgium; similar data are also used in the time series literature, for example, Li et al. (2012) and Tong (1990).

ARMAeta is applied to the sunspot data and the optimal model is selected by minimizing the Bayesian information criterion (BIC). For the ARMAeta(p, p) model, including α, there are a total of 3p parameters. The BIC is defined as

$$\mathrm{BIC}=n \ln(L)+3p \ln(n),$$

where L is the fitted sum of squares of the ARMAeta model.

Table 3 shows the BIC values for p = 2, 4, 6, 8, and the coefficients (φ, θ, ω) of each model are shown in Table 4. From Table 3, the ARMAeta(2, 2) model is chosen because it has the smallest BIC. Table 4 indicates that the estimated frequency is 2πω = 0.0511, corresponding to a period of 2π/0.0511 ≈ 123 months. This suggests that there are 500/123 ≈ 4 cycles per 500 units of time, which agrees with the pattern shown in Figure 2.

7. Conclusions

The estimation problem of the ARMA model is discussed under settings where invertibility is not fulfilled. New algorithms are proposed to deal with the non-invertibility and computational explosion difficulties. The methods developed are useful for sinusoidal time series data where the frequency is unknown. This facilitates the application of the ARMA method in biomedicine and meteorology, for example to breathing motion data obtained during radiation therapy.

Acknowledgments

Chi Tim Ng’s work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2017R1C1B2011652).

Figures
Fig. 1. The log sum of squares values against sample size T. (a) p = 4 case and (b) p = 6 case with sum of squares computed using Procedure 2. (c) p = 4 case and (d) p = 6 case with sum of squares computed using Procedure 3.
Fig. 2. The monthly sunspot number data.
TABLES

### Table 1

The RMSE of coefficients (φ, θ) of ARMA and ARMAeta models

| p | T | ARMA RMSEφ | ARMA RMSEθ | ARMAeta RMSEφ | ARMAeta RMSEθ |
|---|------|--------|--------|--------|--------|
| 4 | 100 | 1.2098 | 0.9529 | 0.0714 | 0.0572 |
|   | 300 | 1.2292 | 0.9467 | 0.0413 | 0.0350 |
|   | 600 | 1.2227 | 0.9464 | 0.0418 | 0.0286 |
|   | 1500 | 1.1796 | 0.9663 | 0.0390 | 0.0495 |
| 6 | 100 | 1.0972 | 0.9697 | 0.0297 | 0.0038 |
|   | 300 | 1.1268 | 0.9651 | 0.0183 | 0.0027 |
|   | 600 | 1.1357 | 0.9638 | 0.0342 | 0.0029 |
|   | 1500 | 1.1204 | 0.9624 | 0.0609 | 0.0024 |
| 8 | 100 | 1.0189 | 1.0308 | 0.0035 | 0.0019 |
|   | 300 | 1.0055 | 1.0227 | 0.0055 | 0.0014 |
|   | 600 | 1.0614 | 0.9825 | 0.0090 | 0.0015 |
|   | 1500 | 1.1555 | 0.9217 | 0.0095 | 0.0014 |

ARMA is obtained by arima function in R and ARMAeta is obtained by Procedure 3. RMSE = relative mean square error.

### Table 2

The RMSE of coefficients 2πω

| p | T | RMSEω1 | RMSEω2 | RMSEω3 | RMSEω4 |
|---|------|--------|--------|--------|--------|
| 4 | 100 | 0.0079 | 0.0149 | | |
|   | 300 | 0.0096 | 0.0199 | | |
|   | 600 | 0.0639 | 0.0416 | | |
|   | 1500 | 0.1019 | 0.0613 | | |
| 6 | 100 | 0.0723 | 0.0688 | 0.0498 | |
|   | 300 | 0.1605 | 0.1197 | 0.1184 | |
|   | 600 | 0.1687 | 0.1152 | 0.0850 | |
|   | 1500 | 0.1836 | 0.1409 | 0.1101 | |
| 8 | 100 | 0.1724 | 0.0836 | 0.8350 | 0.0656 |
|   | 300 | 0.2106 | 0.1068 | 0.1007 | 0.0737 |
|   | 600 | 0.2078 | 0.1068 | 0.0989 | 0.0695 |
|   | 1500 | 0.2104 | 0.1119 | 0.0969 | 0.0617 |

ARMAeta is fitted by Procedure 3 and 2πω is obtained by Procedure 1. RMSE = relative mean square error.

### Table 3

The BIC values of ARMAeta models for the sunspot data

| Models | ARMAeta(2, 2) | ARMAeta(4, 4) | ARMAeta(6, 6) | ARMAeta(8, 8) |
|---|---|---|---|---|
| BIC | 55112.43 | 55498.78 | 69445.98 | 176840.6 |

BIC = Bayesian information criterion.

### Table 4

The coefficients (φ, θ, ω) of ARMAeta models for the sunspot data

| Model | Coefficient | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| ARMAeta(2, 2) | φp | −1.9967 | 0.9993 | | | | | | |
| | θp | −1.9986 | 1.0019 | | | | | | |
| | ωk | 0.0511 | | | | | | | |
| ARMAeta(4, 4) | φp | −3.8922 | 5.7861 | −3.8929 | 1.0004 | | | | |
| | θp | −3.8927 | 5.7862 | −3.8929 | 1.0001 | | | | |
| | ωk | 0.0565 | 0.3249 | | | | | | |
| ARMAeta(6, 6) | φp | −5.6313 | 13.5533 | −17.8440 | 13.6633 | −5.6313 | 0.9990 | | |
| | θp | −5.6313 | 13.5533 | −17.8439 | 13.5533 | −5.6312 | 0.9990 | | |
| | ωk | 2.051 × 10^−16 | 0.3334 | 0.5162 | | | | | |
| ARMAeta(8, 8) | φp | −7.0733 | 22.6740 | −43.0201 | 52.8390 | −43.0201 | 22.6739 | −7.0733 | 1 |
| | θp | −7.0733 | 22.6740 | −43.0201 | 52.8390 | −43.0201 | 22.6739 | −7.0733 | 1 |
| | ωk | 0.0498 | 0.3247 | 0.5174 | 0.7655 | | | | |

References
1. Brockwell, PJ, and Davis, RA (1991). Time Series: Theory and Methods. New York: Springer
2. Choi, S, Chang, Y, Kim, N, Park, S, Song, S, and Kang, H (2014). Performance enhancement of respiratory tumor motion prediction using adaptive support vector regression: Comparison with adaptive neural network method. International Journal of Imaging Systems and Technology. 24, 8-15.
3. Dmitriev, AV, Suvorova, AV, and Veselovsky, IS (2002). Expected hysteresis of the 23-rd solar cycle in the heliosphere. Advances in Space Research. 29, 475-479.
4. Li, GD, Guan, B, Li, WK, and Yu, PLH (2012). Hysteretic autoregressive time series models. Biometrika. 99, 1-8.
5. Ozaki, T (2012). Time Series Modeling of Neuroscience Data. Boca Raton: CRC Press.
6. Platonod, AA, Gajo, ZK, and Szabatin, J (1992). General method for sinusoidal frequencies estimation using ARMA algorithms with nonlinear prediction error transformation. IEEE. 5, 441-444.
7. Powell, MJD (2002). UOBYQA: unconstrained optimization by quadratic approximation. Mathematical Programming, Series B. 92, 555-582.
8. Stroebel, AM, Bergner, M, Reulbach, U, Biermann, T, Groemer, TW, Klein, I, and Kornhuber, J (2010). Statistical methods for detecting and comparing periodic data and their application to the nycthemeral rhythm of bodily harm: a population based study. Journal of Circadian Rhythms. 8, 1-10.
9. Suyal, V, Prasad, A, and Singh, HP (2012). Hysteresis in a solar activity cycle. Solar Physics. 276, 407-414.
10. Tong, H (1990). Non-Linear Time Series: A Dynamical System Approach. Oxford: Oxford University Press.
11. White, BM, Low, DA, and Zhao, T (2011). Investigation of a breathing surrogate prediction algorithm for prospective pulmonary gating. Medical Physics. 38, 1587-1595.