Taught By

Martin Lindquist, PhD, MSc

Professor, Biostatistics

Tor Wager, PhD

Transcript

To date, most resting-state fMRI studies have assumed that the functional connectivity between distinct brain regions is constant across time, that is, stationary. Recently, there's been a great deal of interest in trying to quantify possible dynamic changes in connectivity. Here the thought is that changes in connectivity might provide insight into fundamental properties of brain networks. So how is this done? Well, the most common approach is the so-called sliding-window approach. Here you define a window of time, and you calculate the correlation between the different time series within that window. You then move that window slowly across time, getting a series of correlation matrices as you move across time. Here's an example from a paper by Hutchison, et al. Other approaches include independent component analysis, time-frequency coherence analysis, and change point detection methods. And here's an example of the latter from a paper by Cribben, et al.

However, interpreting fluctuations in connectivity is difficult due to low signal-to-noise ratio, physiological artifacts, and variation in signal mean and variance over time. So oftentimes it might be unclear whether observed fluctuations in connectivity should be attributed to neuronal activity or simply to random noise. There also remains uncertainty regarding which is the appropriate analysis strategy to use and how to interpret the results.

So let's talk briefly about the most common approach, which is sliding windows. Again, in the sliding-window approach, a time window of fixed length is selected. This is selected by the researcher a priori, and data points within that window are used to compute the correlation. The window is then slid along time, and the correlation is computed for each subsequent window, giving us a time series of correlations. In contrast, a tapered sliding-window approach is also used.
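The sliding-window correlation just described can be sketched in a few lines. This is a minimal illustration, not any particular paper's implementation; the window length of 20 and the two null noise series of length 300 match the lecture's example, while the function name is my own:

```python
import numpy as np

def sliding_window_corr(x, y, window=20):
    """Correlation between x and y computed within each sliding window."""
    T = len(x)
    corrs = np.empty(T - window + 1)
    for t in range(T - window + 1):
        corrs[t] = np.corrcoef(x[t:t + window], y[t:t + window])[0, 1]
    return corrs

# Null-data example from the lecture: two independent noise series of length 300.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(300), rng.standard_normal(300)
r = sliding_window_corr(x, y, window=20)
# Despite zero true correlation, r fluctuates over a wide range, and adjacent
# estimates are highly autocorrelated because they share 19 points.
print(r.min(), r.max())
```

Running this on independent noise already reproduces the pitfall discussed next: the windowed correlations swing widely even though the true correlation is exactly zero.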
Here the window is first convolved with a Gaussian kernel, and this allows points to gradually enter and exit the window as it moves across time. In the plain sliding-window approach, a point is either inside the window or it's not, so an influential point can cause abrupt changes. The tapered sliding window instead lets points enter and exit gradually.

Here's an example of two time series of length 300. Let's apply the sliding-window approach to this, with a window length of twenty time points. So it calculates the correlation between twenty time points and then slides that window across time, and these are the results that we get. We see kind of an oscillating pattern here, with correlations ranging from 0.6 to -0.6 in a sort of sinusoidal pattern. Now, this might seem compelling, but the fact of the matter is that these two time series are just random IID noise. So there should be really no correlation between the time series. So why do we see these seemingly compelling results? Well, when the time window is only 20 time points, the correlation estimates have a large amount of variation, so it's not unusual to see correlations ranging from 0.6 to -0.6. We also see these smoothly varying fluctuations because points gradually enter and exit the window. For example, two adjacent window estimates are highly related to each other, because they share 19 points in common, so we're going to have a high degree of autocorrelation here.

If we study this further, we can look at the estimated max correlation using null data of length T, where T ranges over 150, 300, 600, and 1000, for different sliding-window lengths of 15, 30, 60, and 120. Here you see that for a small window length of 15, the max correlation value tends to be quite high.
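A Gaussian-tapered version of the windowed correlation might look like the following. The lecture only specifies convolving the window with a Gaussian kernel, so the taper width `sigma`, the `half_width` truncation, and the function name are my own illustrative choices:

```python
import numpy as np

def tapered_corr(x, y, center, half_width=10, sigma=3.0):
    """Gaussian-tapered windowed correlation centered at time `center`.

    Points get Gaussian weights that shrink toward the window edge, so they
    enter and exit gradually as the window slides.  `half_width` and `sigma`
    are illustrative assumptions, not values from the lecture.
    """
    t = np.arange(len(x))
    w = np.exp(-0.5 * ((t - center) / sigma) ** 2)
    w[np.abs(t - center) > half_width] = 0.0   # truncate the kernel
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)      # weighted means
    cov = np.sum(w * (x - mx) * (y - my))      # weighted covariance
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)
```

Sliding `center` across time and collecting the values gives a smoother dynamic-correlation trace than the hard rectangular window, since no single point can jump in or out abruptly.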
So it ranges from around 0.6 to 0.8. Seeing positive correlations of this size, and similarly negative correlations of the same size, wouldn't be so uncommon. So, for example, if we saw a dynamic correlation that varied between -0.8 and 0.8 with a window length of 15 time points and T equal to 1,000, these results are saying that this wouldn't be uncommon even for null data. The estimated max correlation decreases as the window length becomes bigger. For example, with a window length of 120, the max correlation is going to be much, much smaller; however, it then becomes more difficult to detect transient effects. So determining the appropriate window length is one of the difficulties of the sliding-window technique.

In the finance literature, both time-varying variances and time-varying correlations between time series have been extensively studied, and sliding-window methods, there called rolling-window methods, have been widely used. However, it's generally accepted that time series models are preferable. One commonly used model is the GARCH process, the Generalized AutoRegressive Conditional Heteroscedastic model. A univariate GARCH(1,1) process simply models the time-varying variance of a time series. This time-varying variance depends on the current values of the time series as well as the past values of the variance, and it's related to the AR model that we used for modeling the noise in the GLM analysis. In order to model time-varying connectivity, we need to use a multivariate GARCH model, and many such models exist. One example is the Dynamic Conditional Correlation, or DCC, model. In the DCC model you first fit a GARCH model to each time series and compute standardized residuals; thereafter you use a type of moving-window technique, an exponentially weighted moving average, to compute time-varying correlation matrices.
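The two building blocks just described, the GARCH(1,1) variance recursion and the exponentially weighted moving-average correlation of standardized residuals, can be sketched as follows. The parameter values (`omega`, `alpha`, `beta`, `lam`) are illustrative assumptions rather than fitted estimates, and this is a two-series sketch of the smoothing step, not a full DCC implementation:

```python
import numpy as np

def garch11_variance(eps, omega=0.1, alpha=0.1, beta=0.8):
    """Conditional variance of a GARCH(1,1) process:
        sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
    (omega, alpha, beta are assumed here; in practice they are estimated)."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def ewma_correlation(z1, z2, lam=0.94):
    """DCC-style exponentially weighted moving-average correlation of two
    standardized residual series (lam is an assumed smoothing parameter)."""
    q11 = q22 = 1.0   # initialize the quasi-covariance at the identity
    q12 = 0.0
    r = np.empty(len(z1))
    for t in range(len(z1)):
        q11 = lam * q11 + (1.0 - lam) * z1[t] ** 2
        q22 = lam * q22 + (1.0 - lam) * z2[t] ** 2
        q12 = lam * q12 + (1.0 - lam) * z1[t] * z2[t]
        r[t] = q12 / np.sqrt(q11 * q22)   # rescale to a proper correlation
    return r
```

Dividing each series by the square root of its GARCH variance gives the standardized residuals that feed into `ewma_correlation`; the rescaling in the last step is what keeps the output a valid correlation between -1 and 1.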
One of the benefits of this approach: if we go back to the example where there's no relationship between the two time series, DCC tends not to be confused as much by null data. Here it shows a clear static correlation across time, as should be the case. In general, the estimated max correlation using null data of length T, analyzed using DCC, tends to be much lower. So this also illustrates that, while not always the case, DCC is less prone to be tricked by null data. That can be quite useful, so we don't get erroneous results in our dynamic correlation analysis.

The field of dynamic connectivity has really taken off in the past couple of years. Here's an overview of a standard analysis, done by Vince Calhoun's group. They first identify intrinsic connectivity networks using ICA, like we talked about in the previous modules. Thereafter, they apply a tapered sliding window to these to look at the functional network connectivity, that is, the connectivity between the intrinsic connectivity networks identified by ICA. This gives them a series of dynamic network connectivity windows, which they then cluster into different states. Here we see that, using K-means clustering of these dynamic matrices, they can identify five different states, which can then be studied further. Using these states, in these particular results they show that the amount of time that schizophrenic patients spent in states 4 and 5 was much higher than for healthy controls. This is interesting because states 1 and 3 are more strongly connected, while states 4 and 5 have more diffuse connections. So it turns out that schizophrenic patients are more often in the less connected states 4 and 5. These are promising results, and this is the type of thing that people want to do with dynamic connectivity methods.
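The state-identification step, clustering the windowed connectivity matrices, can be sketched with plain k-means. This is a minimal illustration of the idea, not the Calhoun group's actual pipeline; each connectivity matrix is assumed to be vectorized into a row of `conn_vectors`, and real analyses typically also choose the number of states with a cluster-validity criterion:

```python
import numpy as np

def kmeans_states(conn_vectors, k=5, iters=50, seed=0):
    """Cluster vectorized connectivity matrices into k recurring 'states'
    using plain k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    # Initialize centers at k randomly chosen windows.
    idx = rng.choice(len(conn_vectors), size=k, replace=False)
    centers = conn_vectors[idx].astype(float)
    labels = np.zeros(len(conn_vectors), dtype=int)
    for _ in range(iters):
        # Assign each window to its nearest state center.
        dists = np.linalg.norm(conn_vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned windows.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = conn_vectors[labels == j].mean(axis=0)
    return labels, centers
```

The `labels` sequence is what makes group comparisons possible: the fraction of time each subject's windows spend in each state is exactly the dwell-time measure compared between patients and controls above.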
And whether you do it using sliding windows or model-based methods like DCC, the approaches are not interchangeable, and the jury's still out on which are the best approaches to use. This is certainly an exciting area of future research. Okay, so that's the end of this module. In the next module, we'll talk a little bit about network analysis. Okay, see you then. Bye.