Covariance and Correlation

Initialize NeuroAnalyzer
using LinearAlgebra
using StatsKit
using Plots
using NeuroAnalyzer
eeg = NeuroAnalyzer.load("files/eeg.hdf")
eeg_noisy = import_edf("files/eeg.edf")
e10 = epoch(eeg, ep_len = 10)
eeg1 = NeuroAnalyzer.filter(e10,
                            ch = "all",
                            fprototype = :butterworth,
                            ftype = :bp,
                            cutoff = (4, 8),
                            order = 8)
eeg2 = NeuroAnalyzer.filter(e10,
                            ch = "all",
                            fprototype = :butterworth,
                            ftype = :bp,
                            cutoff = (8, 12),
                            order = 8)

In EEG analysis, covariance and correlation are statistical measures used to understand relationships between signals from different electrodes or brain regions.

Application in EEG Analysis

  • Connectivity Analysis: Covariance and correlation are used to assess functional connectivity between different brain regions. High correlation between EEG signals from different electrodes may indicate strong functional connectivity.
  • Feature Extraction: These measures can be used to extract features for further analysis, such as identifying patterns associated with specific cognitive or pathological states.
  • Artifact Detection: Correlation can help in identifying artifacts or noise that may be common across multiple channels.

Covariance

Variance measures the spread or variability within a single set of data.

Covariance measures the degree to which two ordered variables move together - whether they tend to increase and decrease in tandem.

A few important properties:

  • Units: covariance retains the units of both variables, which can make it harder to interpret. For example, cov(height, weight) has units of cm × kg.
  • Mean-centering: variables must be mean-centered before computing covariance.
  • Symmetry: covariance is commutative - cov(x, y) = cov(y, x) - meaning the result is the same regardless of which variable is treated as x or y. As a consequence, covariance alone cannot be used to infer causality or directionality.
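The symmetry property can be checked directly in the REPL (plain Statistics.jl, no NeuroAnalyzer involved):

```julia
using Statistics

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 1.0, 4.0, 3.0]

# covariance is commutative: argument order does not matter
cov(x, y) == cov(y, x)  # true
```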

Corrected covariance:

\[ cov(x, y) = \frac{1}{n - 1} \sum(x_i - \bar{x})(y_i - \bar{y}) \]

Computing corrected covariance:

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.5, 2.0, 3.5, 4.0]
cc = cov(x1, x2, corrected = true)
print("Corrected covariance: $cc")
Corrected covariance: 1.5

Uncorrected covariance:

\[ cov(x, y) = \frac{1}{n} \sum(x_i - \bar{x})(y_i - \bar{y}) \]

Computing uncorrected covariance:

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.5, 2.0, 3.5, 4.0]
uc = cov(x1, x2, corrected = false)
print("Uncorrected covariance: $uc")
Uncorrected covariance: 1.125

Covariance measures the degree of linear relationship between two variables - how strongly and in what direction they are linearly associated.

A convenient way to summarize these relationships across multiple variables is the variance-covariance matrix (also called the covariance matrix). It arranges variances along the main diagonal and pairwise covariances in the off-diagonal elements, giving a compact view of both individual spread and inter-variable relationships in a single square matrix.

      x          y          z
x  var(x)     cov(x, y)  cov(x, z)
y  cov(x, y)  var(y)     cov(y, z)
z  cov(x, z)  cov(y, z)  var(z)

Interpretation:

  • Diagonal terms (variances): large values indicate high variability in that variable - and by extension, potentially interesting structure in the data worth retaining or examining further.
  • Off-diagonal terms (covariances): large magnitudes indicate strong linear relationships between pairs of variables, and therefore high redundancy - the two variables are carrying overlapping information.
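Both interpretations can be verified on a toy three-variable example (plain Statistics.jl; `cov` applied to a matrix treats columns as variables):

```julia
using Statistics

x = [1.0, 2.0, 3.0, 4.0]
y = [1.5, 2.0, 3.5, 4.0]
z = [4.0, 3.0, 2.0, 1.0]
A = hcat(x, y, z)        # 4 observations × 3 variables

C = cov(A)               # 3 × 3 variance-covariance matrix

C[1, 1] ≈ var(x)         # diagonal terms: the variances
C[1, 2] ≈ cov(x, y)      # off-diagonal terms: pairwise covariances
C ≈ C'                   # symmetric, as expected
```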

Computing covariance matrix of a vector:

\[ C = x \times x^T \]

x = rand(10)
x' .* x # or x * x'

Computing covariance matrix of two vectors:

x1 = rand(10)
x2 = rand(10)
x1' .* x2 # or x1 * x2'

Vectors have to be mean-centered first:

x1 = x1 .- mean(x1)
x2 = x2 .- mean(x2)
x1' .* x2 # or x1 * x2'

The outer product should be divided by the vector length, so that the result does not grow with the number of samples:

x1x2_cov = (x1 * x2') ./ length(x1)
plot_matrix(x1x2_cov,
            xlabels = string.(1:10),
            ylabels = string.(1:10))

Computing the covariance matrix of a matrix:

# 10 channels × 20 time points
eeg_signal = rand(10, 20)
# mean-center each channel (row) separately
eeg_signal = eeg_signal .- mean(eeg_signal, dims = 2)
eeg_cov = (eeg_signal * eeg_signal') ./ size(eeg_signal, 2)

Alternatively, the covariance matrix may be calculated using the Statistics.jl cov() function:

# 10 channels × 20 time points
eeg_signal = rand(10, 20)
# channels × channels
eeg_cov = Statistics.cov(eeg_signal')
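The two approaches agree once Statistics.cov is asked for the uncorrected (divide-by-N) estimate; a quick sanity check:

```julia
using Statistics

eeg_signal = rand(10, 20)                        # 10 channels × 20 time points

# matrix-product form: mean-center each channel, multiply, divide by N
centered = eeg_signal .- mean(eeg_signal, dims = 2)
manual_cov = (centered * centered') ./ size(eeg_signal, 2)

# Statistics.cov divides by N - 1 by default; corrected = false divides by N
builtin_cov = cov(eeg_signal', corrected = false)

manual_cov ≈ builtin_cov  # true
```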

Plotting the covariance matrix:

plot_matrix(eeg_cov,
            xlabels = string.(1:10),
            ylabels = string.(1:10))

Covariance captures how two variables vary together - summarising the relationship between their respective variances across paired observations.

The sign of the covariance indicates the direction of the relationship:

  • Positive - the variables tend to increase and decrease together (positive slope)
  • Negative - one variable tends to increase as the other decreases (negative slope)
  • Zero - no linear relationship between the variables

One important limitation of covariance is that its magnitude is sensitive to the scale of the data - it does not, on its own, indicate the strength of a relationship, only its direction. Normalizing by the standard deviations of both variables yields the more interpretable Pearson correlation coefficient.

Despite this, covariance is a foundational quantity in many multichannel analyses, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), source-space imaging of M/EEG data, and least-squares fitting.

For a dataset with N channels, covariance produces an N × N covariance matrix.

Important: covariance is only valid when computed between zero-mean variables - subtract the mean of each variable before computing.

The diagonal elements (variances) can themselves be informative. Interpolating them over electrode locations produces a topographic map of signal variability across the scalp - revealing, for example, that certain electrodes consistently show higher variance than others. These maps are typically expressed in units of μV²/cm².

Computing the covariance matrix:

cm = covm(e10, ch = "eeg")
display(cm)
19×19×100 Array{Float64, 3}:
[:, :, 1] =
  5.49725   -0.877418   4.65408   …  0.578566   1.98831    2.50965
 -0.877418   3.55497   -0.587503     0.147426  -0.481884  -0.156361
  4.65408   -0.587503   5.65773      0.963434   3.8414     5.04733
 -0.990382   1.99365   -0.691194     0.311552   1.14465    1.47973
  3.52344   -0.904112   5.56831      2.26976   10.7458    14.9108
  0.914803   0.543917   2.41042   …  2.2797     9.57062   11.8385
  3.56976   -0.521875   5.87482      3.13202   13.5764    20.3067
  1.26673    0.201363   3.29984      3.35041   13.7067    19.5009
  3.48594   -0.380281   5.79858      3.31641   13.6595    21.3711
  1.736      0.67007    4.12708      3.89439   14.6496    22.5087
  4.86864   -0.860545   5.52567   …  1.11829    4.42398    5.79263
  0.715763   0.941518   1.96599      2.18562    7.91955    9.59188
  4.50652   -0.661161   6.20877      2.47075   10.0311    15.2883
  0.788406   1.13357    2.6295       3.13185   11.4888    15.5164
  4.3011    -0.574948   6.06771      2.61943   10.4995    17.1547
  1.0423     1.20386    3.10019   …  3.5747    12.7014    18.4138
  0.578566   0.147426   0.963434     2.59504    2.54327    3.2063
  1.98831   -0.481884   3.8414       2.54327   12.5579    14.5807
  2.50965   -0.156361   5.04733      3.2063    14.5807    21.9844

;;; … (slices for epochs 2-100 omitted)

Tip: covm() calculates the channel × channel covariance matrix for each epoch.

Plotting the matrix for the first epoch:

ch = get_channel(e10, ch = "eeg")
plot_matrix(cm[:, :, 1],
            xlabels = labels(e10)[ch],
            ylabels = labels(e10)[ch])

Seeded covariance topographic maps illustrate the covariance between a selected seed electrode and all other electrodes, providing a spatial view of how strongly each channel co-varies with the chosen reference point.

Correlation

Correlation is a standardized form of covariance - scaled to the range \([-1, 1]\) so that the strength of the linear relationship between two variables can be compared regardless of their original units or scale.

\[ cor = \frac{cov(x, y)}{\sqrt{s^2_x} \times \sqrt{s^2_y}} = \frac{cov(x, y)}{s_x \times s_y} \]
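The formula can be verified directly on the toy vectors used earlier:

```julia
using Statistics

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.5, 2.0, 3.5, 4.0]

# Pearson correlation is covariance normalized by both standard deviations
cov(x1, x2) / (std(x1) * std(x2)) ≈ cor(x1, x2)  # true
```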

For two EEG signals (s1 and s2), the correlation is computed as cor(s1, s2). For a multichannel signal s (channels × time points), the correlation matrix is computed as cor(s').

A correlation matrix is a covariance matrix normalized by the standard deviations of the individual variables, rescaling all values to the range \([-1, +1]\).

Two correlation methods are commonly used:

  • Pearson correlation - measures the strength of a linear relationship between two normally distributed variables. It is the standard choice when linearity and normality can be assumed.
  • Spearman correlation - tests for a monotonic relationship by first converting values to rank order (e.g. \([0.1, 1, 10, 100]\) becomes \([1, 2, 3, 4]\)). It makes no assumption about linearity or distribution, making it more appropriate for non-Gaussian data such as power spectra, neuron firing rates, and image pixel values.
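The difference is easy to demonstrate on the rank-order example above (corspearman comes from StatsBase, which StatsKit re-exports):

```julia
using Statistics
using StatsBase          # provides corspearman

x = [0.1, 1.0, 10.0, 100.0]   # grows exponentially
y = [1.0, 2.0, 3.0, 4.0]      # grows linearly

cor(x, y)          # Pearson: < 1, the relationship is monotonic but not linear
corspearman(x, y)  # Spearman: 1.0, the ranks match perfectly
```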

Geometrically, correlation is equivalent to the cosine similarity between two vectors - it equals the cosine of the angle between two mean-centered unit vectors in a high-dimensional space.
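This equivalence is straightforward to confirm numerically with LinearAlgebra's dot and norm:

```julia
using LinearAlgebra
using Statistics

x = randn(100)
y = randn(100)

# mean-center both vectors, then take the cosine of the angle between them
xc = x .- mean(x)
yc = y .- mean(y)
cosine = dot(xc, yc) / (norm(xc) * norm(yc))

cosine ≈ cor(x, y)  # true
```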

Computing correlation matrix:

# 10 channels × 20 time points
eeg_signal = rand(10, 20)
# channels-vs-channels
eeg_cor = Statistics.cor(eeg_signal')

Computing correlation matrix for all EEG channels (channel vs. channel):

cm = corm(e10, ch = "eeg")

Tip: corm() calculates the channel × channel correlation matrix for each epoch.

Plotting the correlation matrix for the first epoch:

plot_matrix(cm[:, :, 1],
            xlabels = labels(e10)[ch],
            ylabels = labels(e10)[ch])

Auto- and Cross- Forms

These functions come in two forms:

  • Auto- - a signal compared against a shifted version of itself
  • Cross- - one signal compared against a shifted version of another

In both cases, the comparison is performed across a range of lags (denoted \(L\)), producing a function that describes how the relationship evolves as one signal is progressively shifted in time relative to the other.

The cross-covariance between two zero-mean signals \(x\) and \(y\) at lag \(L\) is:

\[ C_{xy}(L) = \frac{1}{N} \sum_{n = 0}^{N - 1} x(n) \times y(n + L) \]

where:
\(N\) is the number of samples,
\(L\) is the lag.

For auto-covariance, \(y=x\):

\[ C_{xx}(L) = \frac{1}{N} \sum_{n = 0}^{N - 1} x(n) \times x(n + L) \]

The cross-correlation is the normalized form, scaled by the standard deviations of both signals:

\[ R_{xy}(L) = \frac{C_{xy}(L)}{\sigma_x \times \sigma_y} \]

which constrains the result to the range \([−1, +1]\).

The auto-covariance of a zero-mean signal \(x\) at lag \(L\) is:

\[ C_{xx}(L) = \frac{1}{N} \sum_{n = 0}^{N - 1} x(n) \times x(n + L) \]

where:
\(N\) is the number of samples,
\(L\) is the lag.

At lag \(L = 0\), auto-covariance reduces to the variance of the signal:

\[ C_{xx}(0) = \frac{1}{N} \sum_{n = 0}^{N - 1} x(n)^2 = \sigma_x^2 \]

The auto-correlation is the normalized form, scaled by the variance:

\[ R_{xx}(L) = \frac{C_{xx}(L)}{C_{xx}(0)} = \frac{C_{xx}(L)}{\sigma_x^2} \]

which constrains the result to the range \([−1, +1]\), with \(R_{xx}(0) = 1\) always.
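Both lag-0 identities can be checked with a few lines of plain Julia (a hand-rolled sketch, not a NeuroAnalyzer call):

```julia
using Statistics

x = randn(1000)
x = x .- mean(x)                      # zero-mean, as the formulas assume
N = length(x)

# C_xx(0): all products x(n) * x(n), summed and divided by N
c0 = sum(x .^ 2) / N

c0 ≈ var(x, corrected = false)        # auto-covariance at lag 0 is the variance
c0 / c0 == 1.0                        # auto-correlation at lag 0 is always 1
```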

Auto-covariance

Auto-covariance measures the dependence structure of a signal with respect to itself - detecting rhythmic or periodic activity by quantifying how well the signal at time \(n\) predicts the signal at a later (or earlier) time. For example, 60 Hz line noise will produce a strong auto-covariance peak at lags of 1/60 s and its multiples.

Algorithm:

  1. Optionally subtract the mean from the signal (required for zero-mean assumption)
  2. For each sample at index \(n\), multiply it by the sample at index \(n \pm L\)
  3. Sum all products across all indices \(n\):

\[ r_{xx}(L) = \sum_{n} x(n) \times x(n \pm L) \]

where \(L\) is the lag.

Biased auto-covariance divides the sum by the total data length \(N\), regardless of the lag:

\[ r_{xx}(L) = \frac{1}{N} \sum_{n} x(n) \times x(n \pm L) \]

This estimator is called “biased” because at large lags, fewer sample pairs contribute to the sum - yet the denominator remains \(N\), systematically underestimating the true covariance at those lags. It is however guaranteed to produce a positive semi-definite matrix, which is a desirable property in many applications.

Unbiased auto-covariance divides the sum by the number of sample pairs actually contributing at each lag, \(N - L\):

\[ r_{xx}(L) = \frac{1}{N - L} \sum_{n} x(n) \times x(n \pm L) \]

This corrects the underestimation of the biased estimator at large lags. However, because the denominator shrinks as \(L\) increases, estimates at large lags are based on progressively fewer sample pairs - making them less reliable and increasing variance. For this reason, auto-covariance estimates at large lags should be interpreted with caution regardless of which estimator is used.

One practical consequence of this trade-off is that the biased estimator is often more reliable at large lags - despite its systematic underestimation, the fixed denominator \(N\) stabilizes the estimate by implicitly down-weighting contributions from lag ranges where fewer sample pairs are available.

This matters in practice: for signals with known periodic structure - such as 50 Hz line noise, which should produce consistent auto-covariance peaks at every multiple of 1/50 s - the estimator should behave uniformly across all lags. The biased estimator satisfies this requirement better than the unbiased one, whose shrinking denominator introduces unequal weighting across lags and can distort the periodic structure you are trying to detect.
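The trade-off between the two denominators can be made concrete with a hand-rolled estimator (acov_est is a hypothetical helper for illustration, not a NeuroAnalyzer function):

```julia
using Statistics

# auto-covariance of a zero-mean signal x at non-negative lag L
function acov_est(x, L; biased = true)
    N = length(x)
    s = sum(x[n] * x[n + L] for n in 1:(N - L))
    return biased ? s / N : s / (N - L)
end

x = randn(1000)
x = x .- mean(x)

# at lag 900 only 100 sample pairs remain; the estimates differ by N / (N - L)
L = 900
ratio = acov_est(x, L, biased = false) / acov_est(x, L)
ratio ≈ 1000 / (1000 - 900)  # true: the unbiased estimate is 10× larger here
```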

At lag \(L = 0\) auto-covariance reaches its maximum - the signal is perfectly aligned with itself, so all products \(x(n) \times x(n)\) are positive and their sum is maximized.

As the lag increases, the auto-covariance reveals the periodic structure of the signal:

  • Positive peaks occur at lags corresponding to full cycles of the dominant frequency - e.g. every 1/50 = 0.02 s for 50 Hz line noise, where the shifted signal realigns with itself.
  • Negative peaks occur at lags corresponding to half-cycles - e.g. every 0.01 s for 50 Hz noise, where the shifted signal is in anti-phase with the original.

Computing unbiased auto-covariance:

ac, l = acov(eeg_noisy,
             ch = "eeg",
             l = 20,
             biased = false)

Plotting auto-covariance:

plot_xac(ac[1, :, 1],
         l,
         title = "Auto-covariance")

The auto-covariance plot shows repetitive positive peaks every 0.02 s and negative peaks every 0.01 s, both arising from dominant 50 Hz line noise in the source signal - the positive peaks occur at full-cycle intervals (1/50 s) and the negative peaks at half-cycle intervals (1/100 s).

The function axc2frq() can be used to automatically detect these peaks and convert their lag positions into corresponding frequencies:

ac, l = acor(eeg_noisy,
             ch = "all",
             l = 10)
# channel 2 of the first epoch
axc2frq(ac[2, :, 1], l)
1-element Vector{Float64}:
 50.0

This can be verified by examining a Power Spectral Density (PSD) plot.

plot_psd(eeg_noisy, ch = "Fp2")

By default, biased auto-covariance is calculated. To calculate unbiased auto-covariance:

ac, l = acov(eeg_noisy,
             ch = "all",
             biased = false)

Auto-correlation

Auto-correlation measures the Pearson correlation between a signal and a delayed copy of itself - quantifying how similar the signal is to itself as a function of the time lag between the two copies. Formally, it is the correlation between values \(x_i\) and \(x_{i + n}\) for varying \(n\).

Key properties:

  • At lag \(L = 0\), auto-correlation is always 1 - the signal is perfectly correlated with itself.
  • For periodic signals, auto-correlation is an oscillating function whose frequency matches the dominant rhythmic component of the original signal - revealing both the presence and the period length of any repeating structure.
  • For random signals (e.g. white noise), auto-correlation should be near zero at all non-zero lags.
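Both behaviours can be reproduced with a hand-rolled biased estimator (a sketch assuming a zero-mean input; acor_est is hypothetical, not part of NeuroAnalyzer):

```julia
using Statistics

# biased auto-correlation of a zero-mean signal at non-negative lag L
function acor_est(x, L)
    N = length(x)
    s = sum(x[n] * x[n + L] for n in 1:(N - L)) / N
    return s / var(x, corrected = false)
end

fs = 1000                        # assumed sampling rate, Hz
t = (0:fs-1) ./ fs               # 1 s of data
s = sin.(2π .* 50 .* t)          # periodic: 50 Hz sine
w = randn(length(t))             # random: white noise

acor_est(s, 20)   # ≈ +1: full cycle of 50 Hz (0.02 s) - periodic structure
acor_est(s, 10)   # ≈ -1: half cycle (0.01 s) - anti-phase
acor_est(w, 20)   # ≈ 0: white noise has no structure at non-zero lags
```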

Auto-correlation is therefore a practical tool for answering two questions:

  1. Is the signal periodic? - look for oscillating structure in the auto-correlation function.
  2. What is the period? - the lag at which the first positive peak occurs corresponds to the length of one cycle.

Common applications:

  1. Detecting non-randomness - evaluate auto-correlation at lag \(L = 1\); a value significantly different from zero suggests temporal structure in the data.
  2. Identifying a time series model - evaluate auto-correlation across multiple lags to characterise the dependence structure and select an appropriate model.

Wiener-Khinchin theorem: the auto-correlation function and the power spectral density are a Fourier transform pair - the Fourier transform of the auto-correlation function equals the power spectral density of the signal, and vice versa.
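For the circular (periodic) form of the auto-covariance, this relationship is exact and easy to verify (assumes the FFTW package is available):

```julia
using FFTW                       # assumed available, provides fft
using Statistics

x = randn(256)
x = x .- mean(x)
N = length(x)

# circular auto-covariance at every lag 0 … N-1
r = [sum(x[n] * x[mod1(n + L, N)] for n in 1:N) / N for L in 0:N-1]

# Wiener-Khinchin: the DFT of r equals the periodogram |X(f)|² / N
psd = abs2.(fft(x)) ./ N
maximum(abs.(fft(r) .- psd)) < 1e-8  # true: they match to numerical precision
```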

Computing auto-correlation:

ac, l = acor(e10,
             ch = "all",
             l = 100)

Plotting auto-correlation (which is a standardized auto-covariance):

plot_xac(ac[1, :, 1],
         l,
         title = "Auto-correlation")

Cross-covariance

Cross-covariance extends the concept of auto-covariance to two different signals - measuring their similarity as a function of the time lag applied to one signal relative to the other.

Where auto-covariance asks “how similar is this signal to a delayed version of itself?”, cross-covariance asks “how similar is signal \(x\) to a delayed version of signal \(y\)?”

This makes cross-covariance particularly useful for detecting shared periodic structure between two channels, estimating time delays between signals, and identifying directional relationships in multichannel data.

Biased cross-covariance divides the sum by the total data length \(N\), regardless of the lag:

\[ r_{xy}(L) = \frac{1}{N} \sum_{n} x(n) \times y(n \pm L) \]

where \(L\) is the lag.

As with biased auto-covariance, the denominator remains fixed at \(N\) for all lags - underestimating the true covariance at large lags where fewer sample pairs contribute, but producing a stable and positive semi-definite result.

Unbiased cross-covariance divides the sum by the number of sample pairs actually contributing at each lag, \(N - L\):

\[ r_{xy}(L) = \frac{1}{N - L} \sum_{n} x(n) \times y(n \pm L) \]

Computing biased cross-covariance:

xc, l = xcov(eeg1,
             eeg2,
             ch1 = "Fp1",
             ch2 = "Fp1",
             l = 100)

Plotting cross-covariance:

plot_xac(xc[1, :, 1],
         l,
         title = "Cross-covariance")

Cross-correlation

Cross-correlation is the normalized form of cross-covariance - scaled by the standard deviations of both signals so that results fall in the range \([−1, +1]\). It measures the similarity between two signals as a function of the lag applied to one relative to the other, and reveals at which offset the two signals are most strongly related.

Computationally, cross-correlation is equivalent to a sliding dot product - at each lag, the dot product between the two signals is computed, producing a function that peaks where the signals are most aligned.

\[ R_{xy}(L) = \frac{r_{xy}(L)}{\sigma_x \times \sigma_y} \]

where:
\(r_{xy}(L)\) is the cross-covariance at lag \(L\),
\(\sigma_x\), \(\sigma_y\) are the standard deviations of \(x\) and \(y\) respectively.
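A minimal sliding-dot-product sketch shows how the peak of the cross-correlation recovers a known delay (xcor_est is a hypothetical helper; the 30-sample delay is an arbitrary choice):

```julia
using Statistics

# biased, normalized cross-correlation at a single non-negative lag L
function xcor_est(x, y, L)
    N = length(x)
    s = sum(x[n] * y[n + L] for n in 1:(N - L)) / N
    return s / (std(x, corrected = false) * std(y, corrected = false))
end

x = randn(1000)
x = x .- mean(x)
y = circshift(x, 30)             # y is x delayed by 30 samples

# evaluate the sliding dot product over a range of lags
lags = 0:100
R = [xcor_est(x, y, L) for L in lags]

lags[argmax(R)]  # 30: the lag of the peak recovers the delay
```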

Computing cross-correlation (which is a standardized cross-covariance):

xc, l = xcor(eeg1,
             eeg2,
             ch1 = "Fp1",
             ch2 = "Fp1",
             l = 100)

Plotting cross-correlation:

plot_xac(xc[1, :, 1],
         l,
         title = "Cross-correlation")

The peak offset of the cross-correlation between two channels indicates the delay between the activation of channel 1 and channel 2; if the peak offset falls within a specific range, the two channels may be considered connected.