Multiscale

Introduction

Multiscale complexity analysis is pervasive in the nonlinear time series analysis literature. Although names like "refined composite multiscale dispersion entropy" might seem daunting, these measures are conceptually very simple. A multiscale complexity measure is just any regular complexity measure computed on several gradually more coarse-grained samplings of the input data.

We've generalized this type of analysis to work with any complexity measure that can be estimated with ComplexityMeasures.jl.
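To make the idea concrete, here is a minimal hand-rolled sketch of mean-based coarse-graining in plain Julia. It is independent of the package API, and coarsegrain is a hypothetical helper name used only for illustration:

```julia
using Statistics

# Coarse-grain x at scale s by averaging non-overlapping length-s windows.
# At scale 1 the original series is recovered; larger s gives a shorter,
# smoother series, on which a complexity measure is then computed.
coarsegrain(x, s) = [mean(@view x[(t - 1)*s + 1 : t*s]) for t in 1:length(x) ÷ s]

x = collect(1.0:12.0)
coarsegrain(x, 1)  # scale 1: the original series
coarsegrain(x, 3)  # scale 3: [2.0, 5.0, 8.0, 11.0]
```

Computing a complexity measure on each of these coarse-grained series, one per scale, is exactly what the multiscale API below automates.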

Multiscale API

The multiscale API is defined by the functions

  • multiscale
  • downsample

which dispatch on any of the MultiScaleAlgorithms listed below.

ComplexityMeasures.multiscale - Function
multiscale(algorithm::MultiScaleAlgorithm, [args...], x)

A convenience function to compute the multiscale version of any InformationMeasureEstimator or ComplexityEstimator.

The return type of multiscale is either a Vector{Real} or a Vector{Vector{Real}}, see the available coarse-graining methods below.

It utilizes downsample with the given algorithm to first produce coarse-grained, downsampled versions of x for scale factors algorithm.scales. Then, information or complexity, depending on the input arguments, is applied to each of the coarse-grained timeseries. If N = length(x), then the length of the most severely downsampled version of x is N ÷ maximum(algorithm.scales), while for scale factor 1, the original time series is considered.
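As a quick sanity check on the length relation stated above, here is the plain Julia arithmetic (no package calls involved):

```julia
# For N = length(x), scale s leaves N ÷ s points after coarse-graining.
N = 1000
scales = 1:5
lengths = [N ÷ s for s in scales]  # [1000, 500, 333, 250, 200]
```

At scale 1 the full series is kept; at the maximum scale 5, only N ÷ 5 = 200 points remain.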

Description

This function generalizes the multiscale entropy of (Costa et al., 2002) to any discrete information measure, any differential information measure, and any other complexity measure.

Coarse-graining algorithms

The available downsampling routines are:

  • RegularDownsampling
  • CompositeDownsampling

Examples

multiscale can be used with any discrete or differential information measure estimator. For example, here are two ways of computing multiscale Tsallis entropy:

using ComplexityMeasures
x = randn(1000)
downsampling = RegularDownsampling(scales = 1:5) # multiscale algorithm

# Symbolic (ordinal-pattern-based) probabilities estimation using Bayesian regularization,
# jackknife estimation of the entropy.
o = OrdinalPatterns{3}(2) # outcome space
probest = BayesianRegularization() # probabilities estimator
hest = Jackknife(Tsallis(q = 1.5)) # entropy estimator
multiscale(downsampling, hest, probest, o, x)

# Differential kNN-based estimator:
hest = LeonenkoProzantoSavani(Tsallis(q = 1.5), k = 10) # 10 neighbors
multiscale(downsampling, hest, x)

Multiscale variants of any ComplexityEstimator are also trivial to compute. Let's compute the "generalized multiscale sample entropy" (Costa and Goldberger, 2015), which uses the second moment:

using ComplexityMeasures, Statistics
multiscale(CompositeDownsampling(; f = Statistics.var), SampleEntropy(x), x)
ComplexityMeasures.RegularDownsampling - Type
RegularDownsampling <: MultiScaleAlgorithm
RegularDownsampling(; f::Function = Statistics.mean, scales = 1:8)

The original multi-scale algorithm for multiscale entropy analysis (Costa et al., 2002), which yields a single downsampled time series per scale s.

Description

Given a scalar-valued input time series x, the Regular multiscale algorithm downsamples and coarse-grains x by splitting it into non-overlapping windows of length s, and then constructing a new downsampled time series $D_t(s, f)$ by applying the function f to each of the resulting length-s windows.

The downsampled time series D_t(s, f), with t ∈ {1, 2, …, L} and L = floor(N / s), is given by:

\[\{ D_t(s, f) \}_{t = 1}^{L} = \left\{ f \left( \mathbf{x}_t \right) \right\}_{t = 1}^{L} = \left\{ f\left( (x_i)_{i = (t - 1)s + 1}^{ts} \right) \right\}_{t = 1}^{L}\]

where f is some summary statistic applied to each of the length-s windows $\mathbf{x}_t$. Different choices of f yield different multiscale methods appearing in the literature. For example:

  • f == Statistics.mean yields the original first-moment multiscale sample entropy (Costa et al., 2002).
  • f == Statistics.var yields the generalized multiscale sample entropy (Costa and Goldberger, 2015), which uses the second-moment (variance) instead of the mean.
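The windowing formula above can be sketched directly in plain Julia. This is an illustrative re-implementation (regular_downsample is a hypothetical name, not the package's downsample function), showing how the choice of f changes the coarse-grained series:

```julia
using Statistics

# Apply the summary statistic f to each non-overlapping length-s window of x.
regular_downsample(f, x, s) = [f(@view x[(t - 1)*s + 1 : t*s]) for t in 1:length(x) ÷ s]

x = [1.0, 3.0, 2.0, 8.0, 4.0, 6.0]
regular_downsample(mean, x, 2)  # first moment:  [2.0, 5.0, 5.0]
regular_downsample(var, x, 2)   # second moment: [2.0, 18.0, 2.0]
```

The mean-based series tracks the local level of the signal, while the variance-based series tracks its local spread, which is why the two choices of f lead to different multiscale entropies.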

Keyword Arguments

  • scales. The downsampling levels. If scales is set to an integer, then this integer is taken as the maximum number of scales (i.e. levels of downsampling), and downsampling is done over levels 1:scales. Otherwise, downsampling is done over the provided scales, which may be a range or a collection of specific scales (e.g. scales = [1, 5, 6]). The maximum scale level is length(x) ÷ 2, but to avoid applying the method to time series that are extremely short, consider limiting the maximum scale (e.g. scales = length(x) ÷ 5).

See also: CompositeDownsampling.

ComplexityMeasures.CompositeDownsampling - Type
CompositeDownsampling <: MultiScaleAlgorithm
CompositeDownsampling(; f::Function = Statistics.mean, scales = 1:8)

Composite multi-scale algorithm for multiscale entropy analysis (Wu et al., 2013), used with multiscale to compute, for example, composite multiscale entropy (CMSE).

Description

Given a scalar-valued input time series x, the composite multiscale algorithm, like RegularDownsampling, downsamples and coarse-grains x by splitting it into non-overlapping windows of length s, and then constructing downsampled time series by applying the function f to each of the resulting length-s windows.

However, Wu et al. (2013) realized that for each scale s, there are actually s different ways of selecting windows, depending on where indexing starts/ends. These s different downsampled time series D_t(s, f) at each scale s are constructed as follows:

\[\{ D_{k}(s) \} = \{ D_{t, k}(s) \}_{t = 1}^{L} = \left\{ f \left( \mathbf{x}_{t, k} \right) \right\}_{t = 1}^{L} = \left\{ f\left( (x_i)_{i = (t - 1)s + k}^{ts + k - 1} \right) \right\}_{t = 1}^{L},\]

where L = floor((N - s + 1) / s) and 1 ≤ k ≤ s, such that $D_{i, k}(s)$ is the i-th element of the k-th downsampled time series at scale s.

Finally, compute $\dfrac{1}{s} \sum_{k = 1}^s g(D_{k}(s))$, where g is some summary function, for example information or complexity.
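The construction above can be sketched in plain Julia. This is an illustrative re-implementation (composite_downsample is a hypothetical name, not the package's downsample function), with std standing in for the summary function g:

```julia
using Statistics

# For each offset k = 1, ..., s, build a downsampled series from windows
# starting at index (t - 1)s + k; this yields s series per scale s.
function composite_downsample(f, x, s)
    L = (length(x) - s + 1) ÷ s
    return [[f(@view x[(t - 1)*s + k : t*s + k - 1]) for t in 1:L] for k in 1:s]
end

x = collect(1.0:10.0)
series = composite_downsample(mean, x, 3)  # [[2.0, 5.0], [3.0, 6.0], [4.0, 7.0]]
g = std                     # stand-in for an information/complexity measure
mean(g(D) for D in series)  # average of g over the s downsampled series
```

For an actual composite multiscale entropy, g would be the chosen information or complexity estimator rather than std.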

Keyword Arguments

  • scales. The downsampling levels. If scales is set to an integer, then this integer is taken as the maximum number of scales (i.e. levels of downsampling), and downsampling is done over levels 1:scales. Otherwise, downsampling is done over the provided scales, which may be a range or a collection of specific scales (e.g. scales = [1, 5, 6]). The maximum scale level is length(x) ÷ 2, but to avoid applying the method to time series that are extremely short, consider limiting the maximum scale (e.g. scales = length(x) ÷ 5).

Relation to RegularDownsampling

The downsampled time series $D_{t, 1}(s)$ constructed using the composite multiscale method is equivalent to the downsampled time series $D_{t}(s)$ constructed using the RegularDownsampling method; that is, RegularDownsampling corresponds to fixing k == 1, so that only a single downsampled time series is returned per scale.

See also: RegularDownsampling.


Example literature methods

A non-exhaustive list of literature methods, and the syntax to compute them, are listed below. Please open an issue or make a pull-request to ComplexityMeasures.jl if you find a literature method missing from this list, or if you publish a paper based on some new multiscale combination.

Method | Syntax example | Reference
Refined composite multiscale dispersion entropy | multiscale(CompositeDownsampling(), Dispersion(), est, x, normalized = true) | (Azami et al., 2017)
Multiscale sample entropy (first moment) | multiscale(RegularDownsampling(f = mean), SampleEntropy(x), x) | (Costa et al., 2002)
Generalized multiscale sample entropy (second moment) | multiscale(RegularDownsampling(f = std), SampleEntropy(x), x) | (Costa and Goldberger, 2015)