Dynamic bivariate correlation methods comparison study in fMRI
Communications for Statistical Applications and Methods 2024;31:87-104
Published online January 31, 2024
© 2024 Korean Statistical Society.

Jaehee Kim1,a

aDepartment of Statistics, Duksung Women’s University, Korea
Correspondence to: Department of Statistics, Duksung Women’s University, Samyang-ro 144-gil 33, Dobong-gu, Seoul 01369, Korea. Email: jaehee@duksung.ac.kr
Received June 7, 2023; Revised October 6, 2023; Accepted October 30, 2023.
Most resting-state functional magnetic resonance imaging (fMRI) studies have assumed that the functional connectivity (FC) between time series from distinct brain regions is constant. Recently, however, there has been increased interest in quantifying possible dynamic changes in FC during fMRI experiments. Studying FC may provide insight into how the fundamental workings of brain networks relate to brain activity. In this work, we focus on the specific problem of estimating the dynamic behavior of pairwise correlations between time courses extracted from two different brain regions. We compare sliding-window techniques such as the moving average (MA) and exponentially weighted moving average (EWMA), dynamic causality with the vector autoregressive (VAR) model, dynamic conditional correlation (DCC) based on volatility, and proposed alternative methods that use differencing and recursive residuals. We investigate the properties of these techniques in a series of simulation studies. We also provide an application to fMRI data from patients with major depressive disorder (MDD) to demonstrate the study of dynamic correlations.
Keywords : connectivity, DCC, fMRI, sliding window, VAR
1. Introduction

Spontaneous brain activity measured by functional magnetic resonance imaging (fMRI) has provided evidence that the human brain is intrinsically organized into large-scale functional networks. fMRI can provide evidence for functional specialization and integration through its ability to localize neural activity across the entire brain simultaneously. Friston et al. (1993) defined functional connectivity (FC) as the temporal correlation between spatially remote neurophysiological events; FC is one way to characterize such interactions. Understanding functional connectivity within the brain is crucial to understanding neural function, which is supported by highly distributed neural circuits.

Temporally varying information provides insight into the fundamental properties of brain networks. However, most fMRI studies have assumed that the connectivity between time series from distinct brain regions is constant. Recently, there have been increasing attempts to quantify dynamic changes in FC, such as Lindquist et al. (2014).

Though it is of increasing importance, interpreting temporal fluctuations in FC is difficult due to low signal-to-noise ratio, physiological artifacts, and variation in BOLD signal mean and variance over time (Hutchison et al., 2013). For these reasons, it is often difficult to determine whether observed fluctuations in FC should be attributed to neuronal activity or whether they are due to random noise, and thus research is still ongoing in this area.

Most current approaches to examining functional connectivity (FC) implicitly assume that relationships are constant throughout the length of the recordings. However, investigations of intrinsic brain organization based on resting-state fMRI provide evidence of the presence and potential importance of temporal variability. Temporal trends in the occurrence of different FC states motivate theories regarding their functional roles and their relationships with vigilance/arousal.

A variety of other approaches to identifying FC states are also possible, including using topological descriptions of brain connectivity as features, e.g., modularity or community membership (Bassett et al., 2011; Jones et al., 2012; Kinnison et al., 2012). Lindquist et al. (2007) detected dynamic changes in the BOLD response using an EWMA-related approach. Formal models for detecting change-points in connectivity were introduced by Hampson et al. (2002) and Robinson et al. (2010). Cribben et al. (2012) and Aston and Kirch (2012) proposed other promising approaches to improve methods for identifying FC states and state transitions. Huang et al. (2019) proposed windowless dynamic correlation using a heat kernel. Recently, phase synchronization (PS) methods were proposed as a way of measuring the level of synchrony between time series from two different regions of interest (ROIs) in the brain (Glerean et al., 2012; Pedersen et al., 2018). Honari et al. (2021) suggested assessing the relationship between signals from different brain regions by measuring their phase synchronization across time. Kim et al. (2021) proposed change-point estimation based on random matrix theory for epilepsy fMRI data; they utilized a PS metric and a sliding window for instantaneous phase synchronization (IPS).

Dynamic functional connectivity (DFC) describes correlations between fMRI time series that vary across time-points. This temporally varying information may provide insight into the fundamental properties of brain networks. Here, how the correlation is measured is a central issue in studying functional connectivity. It is often difficult to determine whether observed fluctuations in FC are due to neuronal activity or to random noise. Thus, significant research is still needed in this area, in particular on choosing an appropriate analysis strategy.

This paper focuses on pairwise correlations between time courses from two brain regions. We consider the most commonly used sliding-window approach (Chang and Glover, 2010; Handwerker et al., 2012), EWMA as proposed in Lindquist et al. (2007), the volatility model-based DCC approach recently suggested in Lindquist et al. (2014), and the VAR model-based dynamic causality approach. In particular, we propose methods based on differenced time series and on a moving linear model with recursive residuals. Overall, we suggest that the study of time-varying aspects of FC can unveil flexibility in the operational coordination between different neural systems. The exploitation of these dynamics and the corresponding FC estimation methods may improve our understanding of behavioral shifts and adaptive processes.

This paper is organized as follows. Section 2 provides pairwise correlation estimation methods. Section 3 gives simulation results, Section 4 presents an application to real fMRI data, and the final discussions are in Section 5.

2. Methods

In this section, we set up the problem for pairwise correlation and introduce its estimation methods for dynamic connectivity.

Consider the bivariate time series y_{1t} and y_{2t}, measured over two separate ROIs in the brain at equally spaced time-points t = 1, . . ., T. Let y_t = (y_{1t}, y_{2t})′ be a vector containing the values of both time series at time-point t. Assume that

    y_t = μ_t + e_t,

where μ_t = (μ_{1t}, μ_{2t})′ is the conditional mean of y_t given all information in the time series observed up to time t, denoted E_t(y_t). The noise term e_t has mean zero, and its conditional covariance matrix at time t is

    Σ_t = [ σ²_{1,t}  σ_{12,t} ; σ_{12,t}  σ²_{2,t} ].     (2.2)

The diagonal terms in (2.2) represent the conditional variance of y_{i,t} given all the information in the time course up to time t; σ²_{i,t} is typically referred to as the volatility of the time series in the finance literature. The off-diagonal term is σ_{12,t} = ρ_t σ_{1,t} σ_{2,t}, where

    ρ_t = σ_{12,t} / (σ_{1,t} σ_{2,t})

represents the conditional correlation coefficient. This definition shows that the conditional correlation at time t relies on information obtained up to time t − 1. The quantity lies in [−1, 1].

Without loss of generality, we assume that μ_t = 0 and therefore y_t = e_t. The conditional covariance matrix in (2.2) can be written as

    Σ_t = D_t R_t D_t,

where D_t is the diagonal matrix D_t = diag{σ_{1,t}, σ_{2,t}} and R_t is the correlation matrix

    R_t = [ 1  ρ_t ; ρ_t  1 ].
Our primary interest is in estimating the components of the conditional covariance matrix, namely σ_{1,t}, σ_{2,t}, and ρ_t. In the following subsections, we discuss the several methods used in our comparison.

2.1. Sliding-window approach

The simplest approach to estimating the elements of the covariance matrix is to use the sliding-window technique. A time window of fixed length w is selected, and data points within the window are used to calculate the correlation coefficients. The window is moved across time, and a new correlation coefficient is computed for each time-point.

Chang and Glover (2010) define the sliding-window correlation at time t in the resting state as

    ρ̂_t = Σ_{i=t−w+1}^{t} (y_{1,i} − ȳ_{1,t})(y_{2,i} − ȳ_{2,t}) / [ Σ_{i=t−w+1}^{t} (y_{1,i} − ȳ_{1,t})² Σ_{i=t−w+1}^{t} (y_{2,i} − ȳ_{2,t})² ]^{1/2},

where ȳ_{l,t} denotes the sample mean of series l over the window. We consider the sliding-window correlation based only on past values, i.e., the same quantity computed over i = t − w, . . ., t − 1, as a more suitable estimate of the conditional correlation.
The sliding-window technique is a simple approach for exploring changes in connectivity. However, it has some obvious shortcomings. First, it gives equal weight to all observations less than w time-points in the past and zero weight to all others. Highly influential outlying data points will cause sudden changes in the dynamic correlation and may be mistaken for a critical aspect of brain connectivity. To circumvent this problem, Allen et al. (2014) suggested using a tapered sliding window.
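As an illustration, the past-only sliding-window estimator can be sketched in a few lines of Python with NumPy. This is our own minimal sketch, not code from the paper; the function name `sliding_window_corr` and the toy data are assumptions for demonstration.

```python
import numpy as np

def sliding_window_corr(y1, y2, w):
    """Past-only sliding-window correlation: the estimate at time t uses
    the w observations ending at t-1, so no estimate exists before t = w."""
    T = len(y1)
    rho = np.full(T, np.nan)              # undefined until a full window exists
    for t in range(w, T):
        rho[t] = np.corrcoef(y1[t - w:t], y2[t - w:t])[0, 1]
    return rho

# toy example: two noisy copies of a common signal (true correlation ~0.92)
rng = np.random.default_rng(0)
z = rng.standard_normal(300)
y1 = z + 0.3 * rng.standard_normal(300)
y2 = z + 0.3 * rng.standard_normal(300)
rho_hat = sliding_window_corr(y1, y2, w=30)
```

Note how every point inside the window gets equal weight; a single outlier entering or leaving the window produces an abrupt jump in `rho_hat`, which is exactly the shortcoming discussed above.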

2.2. EWMA approach

The EWMA (exponentially weighted moving average) approach (Hunter, 1986) applies declining weights to past observations, placing the most weight on recent observations, based on a parameter λ:

    Σ_t = (1 − λ) e_{t−1} e′_{t−1} + λ Σ_{t−1},     (2.8)

where Σ_t is the conditional covariance matrix.

Decomposing the covariance matrix (2.8), the conditional variances and covariance are expressed component-wise as

    σ²_{i,t} = (1 − λ) e²_{i,t−1} + λ σ²_{i,t−1},   i = 1, 2,

    σ_{12,t} = (1 − λ) e_{1,t−1} e_{2,t−1} + λ σ_{12,t−1}.

With recursive computation, a small value of λ gives high weight to the recent time-points, while a large value adapts the estimates more gradually to new information. The parameter λ takes values between 0 and 1; like the window size in the sliding-window approach, it determines how strongly past data points influence the correlation estimate.

If one assumes that the y_t are bivariate normal, the optimal λ can be chosen through maximum likelihood estimation, maximizing (up to an additive constant)

    log L(λ) = −(1/2) Σ_{t=1}^{T} log|Σ_t| − (1/2) Σ_{t=1}^{T} e′_t Σ_t^{−1} e_t.
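The EWMA recursion above can be sketched as follows; this is a minimal sketch under our own assumptions (initialization by the sample covariance, the function name `ewma_corr`, and the toy data are not from the paper).

```python
import numpy as np

def ewma_corr(e1, e2, lam):
    """EWMA conditional correlation: update the variance/covariance
    components recursively as in the decomposition of (2.8), then read off
    rho_t = sigma_{12,t} / (sigma_{1,t} sigma_{2,t})."""
    C = np.cov(e1, e2)                    # PSD start keeps |rho| <= 1 throughout
    s11, s22, s12 = C[0, 0], C[1, 1], C[0, 1]
    rho = np.empty(len(e1))
    for t in range(len(e1)):
        rho[t] = s12 / np.sqrt(s11 * s22)
        # update with the observation at time t, for use at time t + 1
        s11 = (1 - lam) * e1[t] ** 2 + lam * s11
        s22 = (1 - lam) * e2[t] ** 2 + lam * s22
        s12 = (1 - lam) * e1[t] * e2[t] + lam * s12
    return rho

rng = np.random.default_rng(0)
z = rng.standard_normal(300)
e1 = z + 0.3 * rng.standard_normal(300)   # two noisy copies of a common signal
e2 = z + 0.3 * rng.standard_normal(300)
rho_ewma = ewma_corr(e1, e2, lam=0.9)
```

Because every update is a convex combination of positive semi-definite matrices, the implied correlation automatically stays in [−1, 1], with no window-edge jumps.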

2.3. DCC method

Lindquist et al. (2014) introduced DCC (dynamic conditional correlation) as an approach to estimating conditional variances and correlations that has become increasingly popular in the finance literature. DCC incorporates the GARCH process (Bollerslev, 1986; Engle, 1982) used to model volatility in univariate time series, providing the conditional variances as follows. Letting y_t = e_t again, the GARCH expression gives

    σ²_{i,t} = ω_i + α_i y²_{i,t−1} + β_i σ²_{i,t−1},   i = 1, 2.

The standardized residuals are

    ε_t = D_t^{−1} y_t,

where D_t = diag{σ_{1,t}, σ_{2,t}}. Let

    Q_t = (1 − θ_1 − θ_2) Q̄ + θ_1 ε_{t−1} ε′_{t−1} + θ_2 Q_{t−1},

where Q̄ is the unconditional covariance matrix of the standardized residuals. Decompose the covariance matrix as

    Σ_t = D_t Λ_t D_t,

where Λ_t = diag{Q_t}^{−1/2} Q_t diag{Q_t}^{−1/2}. Here θ_1 and θ_2 are non-negative scalars satisfying 0 < θ_1 + θ_2 < 1. The estimation procedure for θ_1 and θ_2 is given in Appendix A of Lindquist et al. (2014).
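Given standardized residuals and values of θ_1 and θ_2, the Q_t recursion and the rescaling to a correlation can be sketched as below. This is our own illustrative sketch, assuming the GARCH volatilities have already been estimated and that Q̄ is replaced by the sample covariance of the residuals; the function name `dcc_corr` and the toy inputs are assumptions.

```python
import numpy as np

def dcc_corr(eps, theta1, theta2):
    """DCC correlation path from standardized residuals eps (T x 2):
    Q_t = (1 - th1 - th2) Qbar + th1 eps_{t-1} eps'_{t-1} + th2 Q_{t-1},
    rho_t = off-diagonal of diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}."""
    T = eps.shape[0]
    Qbar = np.cov(eps.T)                   # stand-in for the unconditional covariance
    Q = Qbar.copy()
    rho = np.empty(T)
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        rho[t] = (d[:, None] * Q * d[None, :])[0, 1]
        Q = (1 - theta1 - theta2) * Qbar + theta1 * np.outer(eps[t], eps[t]) + theta2 * Q
    return rho

rng = np.random.default_rng(0)
z = rng.standard_normal(300)
eps = np.column_stack([z + 0.3 * rng.standard_normal(300),
                       z + 0.3 * rng.standard_normal(300)])
rho_dcc = dcc_corr(eps, theta1=0.05, theta2=0.90)
```

The mean-reversion toward Q̄ built into the recursion is what lets DCC track smooth correlation changes without a window length.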

Wiener (1956) first discussed causality between the variables in an observed multivariate time series. Granger (1969) developed Wiener’s idea and introduced several concepts related to causality, mainly in the framework of bivariate AR modeling. Recently, the similarity between causality studies in economics and in neuroscience has been recognized.

2.4. VAR model

Goebel and Roebroeck (2003) and Roebroeck and Formisano (2005) proposed the use of the VAR (vector autoregressive) model and showed its utility in the analysis of fMRI experiments. The first-order model is

    y_t = Φ y_{t−1} + u_t,

where u_t = (u_{1,t}, u_{2,t})′ is a random error vector with mean zero and a given covariance matrix. This can be expressed with autoregressive order p as

    y_t = Σ_{l=1}^{p} Φ_l y_{t−l} + u_t,

where the Φ_l are coefficient matrices and u_t is an error vector of random variables with zero mean and covariance matrix Σ(t).
Granger causality has been shown to be helpful for inferring functional brain connectivity. However, VAR modeling is adequate only for stationary time series, i.e., when the autoregressive coefficients and the error covariance matrix are time-invariant.

We denote by VARw the VAR method combined with a sliding window to localize the correlation calculation: within each window of observed data, we apply the VAR method locally.
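A windowed VAR(1) residual correlation can be sketched as follows. This is a minimal sketch under our own assumptions: a VAR(1) with intercept is fit by ordinary least squares in each window (the paper does not specify the fitting details), and the function name `varw_corr` and the toy data are illustrative.

```python
import numpy as np

def varw_corr(y, w):
    """VARw sketch: in each length-w window fit y_t = c + A y_{t-1} + u_t
    by OLS, then correlate the VAR(1) residuals."""
    T = y.shape[0]
    rho = np.full(T, np.nan)
    for t in range(w, T):
        Yw = y[t - w:t]
        X = np.hstack([np.ones((w - 1, 1)), Yw[:-1]])   # intercept + lag-1 values
        B = np.linalg.lstsq(X, Yw[1:], rcond=None)[0]
        U = Yw[1:] - X @ B                              # VAR(1) residuals
        rho[t] = np.corrcoef(U[:, 0], U[:, 1])[0, 1]
    return rho

# toy VAR(1) data whose innovations have correlation 0.8
rng = np.random.default_rng(0)
L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = 0.4 * y[t - 1] + L @ rng.standard_normal(2)
rho_varw = varw_corr(y, w=30)
```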

2.5. Difference AR coefficient model

Consider an AR(1) structure for each time series, since autocorrelation usually exists at least at lag 1:

    y_{l,t} = φ_l y_{l,t−1} + ε_{l,t},   l = 1, 2.

Estimate each AR coefficient as φ̂_1 and φ̂_2, and then compute the residuals

    e_{l,t} = y_{l,t} − φ̂_l y_{l,t−1},   l = 1, 2.

After the autoregressive structure and the linear trend are removed, the pairwise correlation is calculated from these residuals within a window of length w, as in the sliding-window approach.
We also consider the conditional correlation method with a sliding window, denoted DCCw, to localize the correlation calculation. An AR (autoregressive) model of order 1 and a temporal linear trend are estimated for each observed series in the window, and their residuals are then used in the pairwise correlation calculation. This allows observed points to enter and exit the window as it moves across time, removing the linear trend.
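The AR-corrected correlation idea of this subsection can be sketched as below. This is our own minimal sketch: the AR(1) coefficient is estimated by the lag-1 sample autocorrelation (one common choice; the paper does not specify the estimator), and `ar1_residuals`, `dc_corr`, and the toy data are illustrative names.

```python
import numpy as np

def ar1_residuals(y):
    """Estimate the AR(1) coefficient via the lag-1 autocorrelation and
    return the AR-corrected residuals e_t = y_t - phi_hat * y_{t-1}."""
    phi = np.corrcoef(y[1:], y[:-1])[0, 1]
    return y[1:] - phi * y[:-1]

def dc_corr(y1, y2, w):
    """Dc sketch: correlate the AR(1) residuals of the two series
    within a sliding window of length w."""
    e1, e2 = ar1_residuals(y1), ar1_residuals(y2)
    rho = np.full(len(e1), np.nan)
    for t in range(w, len(e1)):
        rho[t] = np.corrcoef(e1[t - w:t], e2[t - w:t])[0, 1]
    return rho

# toy AR(1) series driven by innovations with correlation 0.8
rng = np.random.default_rng(0)
L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
z = rng.standard_normal((300, 2)) @ L.T
y1, y2 = np.zeros(300), np.zeros(300)
for t in range(1, 300):
    y1[t] = 0.5 * y1[t - 1] + z[t, 0]
    y2[t] = 0.5 * y2[t - 1] + z[t, 1]
rho_dc = dc_corr(y1, y2, w=30)
```

Removing the AR(1) structure first means the windowed correlation reflects the innovation correlation rather than shared serial dependence.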

2.6. Linear moving model with recursive residuals

Starting with Brown et al. (1975), residuals became one of the most important tools in change-point analysis for testing the constancy of a regression relationship. The recursive (rec) residual for each time series l = 1, 2 is defined as

    w_{l,t} = (y_{l,t} − x′_t β̂_{t−1}) / [1 + x′_t (X′_{t−1} X_{t−1})^{−1} x_t]^{1/2},   t = w + 1, . . ., T,

where X_{t−1} = [x_1, . . ., x_{t−1}]′, Y_{l,t−1} = [y_{l,1}, . . ., y_{l,t−1}]′, and β̂_{t−1} = (X′_{t−1} X_{t−1})^{−1} X′_{t−1} Y_{l,t−1} is the ordinary least-squares estimator. Here x_t = t gives the same time-points for each time series. Assuming that the errors ε_t follow the normal distribution, Brown et al. (1975) showed that under H_0 the recursive residuals w_{l,w+1}, . . ., w_{l,T} are independent N(0, σ²).

These recursive residuals are then used in the pairwise correlation calculation, again within a sliding window. Since the recursive residuals are obtained after removing the linear trend, the correlation computed from them is not confounded with the linear trend.
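The recursive-residual computation for a single series, under the linear-trend model x_t = (1, t)′, can be sketched as follows. This is a minimal sketch of the Brown et al. (1975) definition above; the function name `recursive_residuals`, the starting index w, and the toy data are our own choices.

```python
import numpy as np

def recursive_residuals(y, w):
    """Recursive residuals for y_t = b0 + b1*t + eps_t: the scaled one-step
    prediction error of the OLS fit on observations 1..t-1, computed once
    at least w points are available (as in the windowed setting above)."""
    T = len(y)
    X = np.column_stack([np.ones(T), np.arange(1.0, T + 1)])
    res = np.full(T, np.nan)
    for t in range(w, T):
        Xp, yp = X[:t], y[:t]
        beta = np.linalg.lstsq(Xp, yp, rcond=None)[0]   # OLS on the past only
        xt = X[t]
        denom = np.sqrt(1.0 + xt @ np.linalg.inv(Xp.T @ Xp) @ xt)
        res[t] = (y[t] - xt @ beta) / denom
    return res

rng = np.random.default_rng(0)
t = np.arange(1, 301)
y = 2.0 + 0.05 * t + rng.standard_normal(300)   # linear trend + N(0,1) noise
w_rec = recursive_residuals(y, w=30)
```

Under the null model the residuals are approximately independent N(0, σ²), so correlating them across two series is free of the shared trend.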

3. Simulations

In this section, we set up problem situations for pairwise correlation and compare several estimation methods for dynamic connectivity. The two time courses are generated from bivariate normal distributions; in each case, the covariance matrix is given a plausible structure. The following cases illustrate different underlying change patterns. We choose T = 300 for the total number of time-points, and the simulated data are mean-corrected before calculation. For one-change cases, we set one change-point at T_1 = T/2; for two-change cases, we consider change-points at T_1 = T/3 and T_2 = (2/3)T. The two series are drawn as

    (y_{1,t}, y_{2,t})′ ~ N(0, Σ(t)),

where the covariance term p(t) of Σ(t) is allowed to vary across time for t = 1, . . ., T, controlling the dynamic relationship between the two time courses y_{1,t} and y_{2,t}. The following simulation setups are considered.

  • Case 1. Uncorrelated and no change in correlations

    p(t) = 0,   t = 1, . . ., T.

  • Case 2. Slowly varying smooth change in correlation.

    p(t) = 0.5 sin(t/Δ),   t = 1, . . ., T,   Δ = 1024/2^k,   k = 3 (chosen).

  • Case 3. Uncorrelated but variances change.


  • Case 4. p(t) is linear according to time.

    p(t) = bt,   1 ≤ t ≤ T,   b = 0.3.

  • Case 5. p(t) has one abrupt change-point in correlation (Case 1 + one change-point at T1).


  • Case 6. p(t) is linear according to time with one change-point (Case 2 + one change-point).


    where a = 1/Δ, Δ = 500.

  • Case 7. One change-point in correlation with variance change (Case 3 + one change-point).




  • Case 8. p(t) is linear according to time (Case 4 + one change-point).


    where a = 0.1, b = 0.2.

  • Case 9. Two change-points (Case 1 + two change-points at T1 and T2).


  • Case 10. Two change-points in periodic correlation (Case 2 + two change-points).


    with Δ = 1024/2^k, k = 3 (in the simulation).
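As an example of the generating mechanism, Case 2 can be simulated as below. This is our own sketch, assuming unit variances for both series (consistent with the correlation interpretation of p(t)); the function name `simulate_case2` and the seed are illustrative.

```python
import numpy as np

def simulate_case2(T=300, k=3, seed=0):
    """Case 2: bivariate normal draws with unit variances and slowly
    varying correlation p(t) = 0.5 sin(t / Delta), Delta = 1024 / 2**k."""
    rng = np.random.default_rng(seed)
    delta = 1024 / 2 ** k                  # k = 3 gives Delta = 128
    tt = np.arange(1, T + 1)
    p = 0.5 * np.sin(tt / delta)
    y = np.empty((T, 2))
    for i in range(T):
        cov = np.array([[1.0, p[i]], [p[i], 1.0]])
        y[i] = rng.multivariate_normal(np.zeros(2), cov)
    return y, p

y_sim, p_true = simulate_case2()
```

The other cases differ only in the path p(t) (and, for Case 3 and Case 7, in the diagonal of the covariance matrix).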

The following abbreviations for the methods are used in the comparison results:

  • Sl: sliding-window estimation with bandwidth h = 0.1, that is, w = 30.

  • EWMA: exponentially weighted moving average.

  • DCC: dynamic conditional correlation.

  • DCCw: dynamic conditional correlation based on AR(1) and linear trend removed within the window w.

  • VAR: vector autoregressive model.

  • VARw: vector autoregressive model with the moving window w.

  • Dc: autoregressive difference-corrected model, based on the AR(1) model.

  • LMc: correlation with linear moving recursive residuals.

The mean square error (MSE) is computed as the mean of the squared differences between the estimated and true correlations across time-points. Table 1 provides the mean, median, and SD (standard deviation) of the MSE of each method in each case. We obtain results for the Sl, EWMA, DCC, DCCw, VAR, VARw, Dc, and LMc models in 1,000 repetitions with T = 300. The window size is fixed at w = 30 for the sliding-window and moving-average methods. For EWMA, λ = 0.3 is chosen to give more weight to recent terms; there is some potential to slightly improve performance with an optimal window size. For each case, two figures are shown: the left panel gives an example of the true correlation plotted together with the correlations estimated by the considered methods for a single simulation repetition, and the right panel shows boxplots of the MSEs over all 1,000 repetitions. The smallest MSE means are highlighted in each case in Table 1; the best method varies with the case. In Figures 1–4, VAR and Dc have smaller mean MSEs than the others. VARw and DCCw show similar behavior, since both use AR coefficients. DCC works better with smooth correlation changes, as in Figure 2. In Figures 5–8, with one abrupt change-point in the correlation function, the best-performing method depends on the underlying change pattern. When there are two abrupt change-points, as in Figures 9–10, DCC, VAR, and Dc show similar performance. Since they incorporate multivariate relations, DCC and VAR generally perform well. The sliding-window methods depend on a window size that must account for the underlying dynamic correlation patterns. One can choose the window size to minimize some criterion, but there is an inherent difficulty in determining an appropriate window length in real-world situations.
In our simulations, Dc improves upon DCC and VAR in autoregressive situations. Even though DCC is proposed and preferred by Lindquist et al. (2014), its estimation should be improved when there are change-points in the correlations; change-point analysis should be considered beforehand.
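The MSE criterion used for the comparison above is straightforward to compute; a small sketch (our own, with a hypothetical `mse` helper) handles the time-points where windowed estimators are undefined:

```python
import numpy as np

def mse(rho_hat, rho_true):
    """MSE between estimated and true correlation paths, skipping the
    initial time-points where a windowed estimator is undefined (NaN)."""
    mask = ~np.isnan(rho_hat)
    return float(np.mean((rho_hat[mask] - rho_true[mask]) ** 2))

example = mse(np.array([np.nan, 0.5, 0.5]), np.array([0.0, 0.5, 0.7]))
```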

Our simulations show that the performance of each method varies with the underlying functional form. A guiding principle is to remove the underlying trend and account for multivariate relatedness before estimating the bivariate correlation. The model’s complexity and the variability of the data affect the choice of correlation estimation method. As a result, we offer practical guidance on estimating dynamic changes and correlations. In contrast to the commonly used sliding-window techniques, the DCC and VAR methods are compelling options.

4. Real fMRI data application

We consider resting-state fMRI data from 20 patients with major depressive disorder (MDD) participating in a depression treatment biomarker study. Mayberg et al. (1999) proposed that the bidirectional nature of limbic-cortical reciprocity provides additional evidence of potential mechanisms mediating cognitive, pharmacological, and surgical treatments of mood disorders such as depression. Kemmer et al. (2017) estimated FC networks from 20 healthy subjects and 20 patients with MDD. The MDD patients were matched with the healthy control subjects by age and gender, with an average age of 45.8 years and 50% male. The mean Hamilton depression rating scale (HAM-D) score was 19 (SD: 3.4) for the MDD subjects, a value indicating severe depression (Hamilton, 1960). The average length of the current depression episode was 82 weeks. Functional images were collected over 150 time-points on 14 ROIs. For each subject, 150 fMRI volumes were obtained in 7.5 minutes during a resting state, in which subjects were left to think for themselves while visually focusing on a cross displayed on a monitor in the scanner. The following preprocessing steps were applied: motion correction to offset subject movement in the scanner, slice-timing correction to adjust for the fact that each 3D scan is acquired as a series of 2D slices over time, normalization to a common reference space across subjects, and spatial smoothing. The analysis focuses on interactions between some of the reported ROIs (based on the Brodmann atlas) in the prefrontal cortex (PFC), including the medial frontal cortex (mF10), orbital frontal cortex (OF11), and lateral prefrontal cortex (latF9), along with the subgenual cingulate cortex (Cg25) and anterior cingulate (Cg24), shown in Figure 11.

For each subject, we have C(14, 2) = 91 pairwise correlations. As an example, we apply the correlation methods to the fMRI data from the MDD study. There has been growing interest in estimating potential dynamic correlations between two brain ROIs, as these are thought to provide important information about the properties of brain networks. Figure 12 shows the fMRI data on the 14 ROIs for one MDD patient (MDD001) and one healthy control (CON021). For the pairwise correlations considered in this paper, Figure 14 shows differences according to the method and some differing patterns between the groups. Figures 15 and 16 show the bivariate correlations between ROI1 and the other ROIs by the considered methods. In this application, we use the window size w = 30 out of T = 150. These exemplary figures suggest that dynamic correlations should be dealt with carefully in fMRI analysis. We observed a fair amount of variability in the behavior of these correlations across time, regions, and subjects.

5. Discussion

This study presents a preliminary analysis of dynamic correlations between two brain regions, providing vital information about the properties of brain networks. We use several pairwise correlation measures to analyze time-varying interactions between brain regions. This work illustrates that we should consider change situations for correlation computation. Commonly used sliding-window techniques may not be beneficial for tracking dynamic correlations.

DCC and VAR models are extensively used in the finance literature for modeling time-varying variances and correlations and are generally considered preferable to sliding-window type approaches (Bauwens et al., 2006). Both remain adaptable even when there are change-points. While the sliding-window estimator is non-parametric, the DCC and EWMA models are parametric. These parametric approaches are powerful if the model is reasonably accurate. Moving estimators, including differencing, can also be considered to avoid parameter estimation in the model. The more flexible non-parametric methods have advantages when the underlying function is changeable.

The wealth of information provided by time-frequency connectivity analysis presents additional challenges for studying multiple subjects and spatial locations. One way of handling this information is to summarize the dynamic information along several potentially-relevant dimensions.

Modeling time-varying variances and correlations is preferable to sliding-window type approaches. When considering the N–variate case, a total of N(N + 1)/2 variances and covariances need to be estimated, some of which may be time-varying. In addition, the resulting covariance matrix must be positive definite at each time-point. When there are underlying changes, consider the estimation of change-points (Kim et al., 2021), for example. Finally, the practical estimation of dynamic correlations depends on accurately analyzing the underlying change function.


This research was supported by 2023 Duksung Women’s University Research Fund.

Fig. 1. Case 1. Uncorrelated.
Fig. 2. Case 2. Slowly varying periodic change in correlation.
Fig. 3. Case 3. Uncorrelated but variances change.
Fig. 4. Case 4. Linear change in correlation.
Fig. 5. Case 5. One abrupt change-point in correlation.
Fig. 6. Case 6. One abrupt change-point linear correlation.
Fig. 7. Case 7. One abrupt change-point with covariance.
Fig. 8. Case 8. One abrupt change-point in linear correlation.
Fig. 9. Case 9. Two abrupt change-points in correlation.
Fig. 10. Case 10. Two abrupt change-points in varying periodic correlation.
Fig. 11. Brain ROIs for MDD (Chen et al., 2016).
Fig. 12. fMRI data from MDD patient (left) and healthy control (right).
Fig. 13. Example of correlations between ROI1 and ROI2 from MDD patient (left) and healthy control (right).
Fig. 14. Example of correlations between ROI1 and ROI3 from MDD patient (left) and healthy control (right).
Fig. 15. Correlations between ROI1 and other ROIs from MDD001 patient.
Fig. 16. Correlations between ROI1 and other ROIs from CON021 healthy control subject.

Table 1

Mean and SD of MSE’s as performance of pairwise correlation methods with T = 300 in 1,000 repetitions

Case | statistics | Sl | EWMA | DCC | DCCw (AR linw) | VAR | VARw | Dc (AR diff) | LMc

one abrupt change

two abrupt change

  1. Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, and Calhoun VD (2014). Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex, 24, 663-676.
  2. Aston JAD and Kirch C (2012). Evaluating stationarity via change-point alternatives with applications to fMRI data. Annals of Applied Statistics, 6, 1906-1948.
  3. Bassett DS, Brown JA, Deshpande V, Carlson JM, and Grafton ST (2011). Conserved and variable architecture of human white matter connectivity. NeuroImage, 54, 1262-1279.
  4. Bauwens L, Laurent S, and Rombouts JV (2006). Multivariate Garch models: A survey. Journal of Applied Econometrics, 21, 79-109.
  5. Bollerslev T (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31, 307-327.
  6. Brown RL, Durbin J, and Evans M (1975). Techniques for testing the constancy of regression relationships over time. Journal of the Royal Statistical Society: Series B, 37, 149-163.
  7. Chang C and Glover GH (2010). Time–frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50, 81-98.
  8. Chen S, Bowman FD, and Mayberg HS (2016). A Bayesian hierarchical framework for modeling brain connectivity for neuroimaging data. Biometrics, 72, 596-605.
  9. Cribben I, Haraldsdottir R, Atlas LY, Wager TD, and Lindquist M (2012). Dynamic connectivity regression: Determining state-related changes in brain connectivity. NeuroImage, 61, 907-920.
  10. Engle RF (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987-1007.
  11. Friston KJ, Frith CD, and Liddle PF (1993). Functional connectivity: The principal-component analysis of large (PET) data sets. Journal of Cerebral Blood Flow & Metabolism, 13, 5-14.
  12. Glerean E, Salmi J, Lahnakoski JM, Jääskeläinen IP, and Sams M (2012). Functional magnetic resonance imaging phase synchronization as a measure of dynamic functional connectivity. Brain Connectivity, 2, 91-101.
  13. Goebel R and Roebroeck A (2003). Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magnetic Resonance Imaging, 21, 1251-1261.
  14. Granger CWJ (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37, 424-438.
  15. Hamilton M (1960). A rating scale for depression. Journal of Neurology, Neurosurgery, and Psychiatry, 23, 56-62.
  16. Handwerker DA, Roopchansingh V, Gonzalez-Castillo J, and Bandettini PA (2012). Periodic changes in fMRI connectivity. NeuroImage, 63, 1712-1719.
  17. Hampson M, Peterson BS, Skudlarski P, Gatenby JC, and Gore JC (2002). Detection of functional connectivity using temporal correlations in MR images. Human Brain Mapping, 15, 247-262.
  18. Honari H, Choe AS, and Lindquist MA (2021). Evaluating phase synchronization methods in fMRI: A comparison study and new approaches. NeuroImage, 228, 117704.
  19. Huang S-G, Chung MK, Carroll IC, and Goldsmith HH (2019). Dynamic functional connectivity using heat kernel. In Proceedings of the 2019 IEEE Data Science Workshop, Minneapolis, MN, USA.
  20. Hunter JS (1986). The exponentially weighted moving average. Journal of Quality Technology, 18, 203-210.
  21. Hutchison RM, Womelsdorf T, and Allen EA, et al. (2013). Dynamic functional connectivity: Promises, issues, and interpretations. NeuroImage, 80, 360-378.
  22. Jones DT, Vemuri P, and Murphy MC, et al. (2012). Non-stationarity in the resting brain’s modular architecture. PLoS One, 7, e39731.
  23. Kinnison J, Padmala D, Choi J-M, and Pessoa L (2012). Network analysis reveals Increased integration during emotional and motivational processing. Journal of Neuroscience, 32, 8361-8372.
  24. Kemmer PB, Bowman FD, Mayberg H, and Guo Y (2017). Quantifying the strength of structural connectivity underlying functional brain networks.
  25. Kim J, Jeong W, and Chung CK (2021). Dynamic functional connectivity change-point detection with random matrix theory inference. Frontiers in Neuroscience, 15, 565029.
  26. Lindquist MA, Waugh C, and Wager TD (2007). Modeling state-related fMRI activity using change-point theory. NeuroImage, 35, 1125-1141.
  27. Lindquist MA, Xu Y, Nebel MB, and Caffo BS (2014). Evaluating dynamic bivariate correlations in resting-state fMRI: A comparison study and a new approach. NeuroImage, 101, 531-546.
  28. Mayberg HS, Liotti M, Brannan SK, McGinnis S, Mahurin RK, Jerabek PA, and Fox PT (1999). Reciprocal limbic-cortical function and negative mood: Converging PET findings in depression and normal sadness. American Journal of Psychiatry, 156, 675-682.
  29. Pedersen M, Omidvarnia A, Zalesky A, and Jackson GD (2018). On the relationship between instantaneous phase synchrony and correlation-based sliding windows for time-resolved fMRI connectivity analysis. Neuroimage, 181, 85-94.
  30. Robinson LF, Wager TD, and Lindquist MA (2010). Change point estimation in multi-subject fMRI studies. NeuroImage, 49, 1581-1592.
  31. Roebroeck A and Formisano E (2005). Mapping directed influence over the brain using Granger causality and fMRI. NeuroImage, 25, 230-242.
  32. Wiener N (1956). Modern Mathematics for Engineers, New York, McGraw-Hill.