Complexity measures
Be sure you have read the Tutorial before consulting the API reference here, so that you are familiar with the terminology used in ComplexityMeasures.jl.
Complexity measures API
The complexity measure API is defined by the complexity function, which may take a ComplexityEstimator as input. The function complexity_normalized is also useful.
ComplexityMeasures.complexity
— Function
complexity(c::ComplexityEstimator, x) → m::Real
Estimate a complexity measure according to c for input data x, where c is an instance of any subtype of ComplexityEstimator.
ComplexityMeasures.complexity_normalized
— Function
complexity_normalized(c::ComplexityEstimator, x) → m::Real ∈ [a, b]
The same as complexity, but the result is normalized to the interval [a, b], where [a, b] depends on c.
ComplexityMeasures.ComplexityEstimator
— Type
ComplexityEstimator
Supertype for estimators of various complexity measures that are not entropies in the strict mathematical sense.
See complexity for all available estimators.
Approximate entropy
ComplexityMeasures.ApproximateEntropy
— Type
ApproximateEntropy <: ComplexityEstimator
ApproximateEntropy([x]; r = 0.2std(x), kwargs...)
An estimator for the approximate entropy (Pincus, 1991) complexity measure, used with complexity.
The keyword argument r is mandatory if an input timeseries x is not provided.
Keyword arguments
- r::Real: The radius used when querying for nearest neighbors around points. Its value should be determined from the input data, for example as some proportion of the standard deviation of the data.
- m::Int = 2: The embedding dimension.
- τ::Int = 1: The embedding lag.
- base::Real = MathConstants.e: The base to use for the logarithm. Pincus (1991) uses the natural logarithm.
Description
Approximate entropy (ApEn) is defined as
\[ApEn(m, r) = \lim_{N \to \infty} \left[ \phi(x, m, r) - \phi(x, m + 1, r) \right].\]
Approximate entropy is estimated for a timeseries x by first embedding x using embedding dimension m and embedding lag τ, then searching for similar vectors within tolerance radius r, using the estimator described below, with logarithms to the given base (the natural logarithm is used in Pincus, 1991).
Specifically, for a finite-length timeseries x, an estimator for $ApEn(m, r)$ is
\[ApEn(m, r, N) = \phi(x, m, r, N) - \phi(x, m + 1, r, N),\]
where N = length(x) and
\[\phi(x, k, r, N) = \dfrac{1}{N-(k-1)\tau} \sum_{i=1}^{N - (k-1)\tau} \log{\left( \sum_{j = 1}^{N-(k-1)\tau} \dfrac{\theta(d({\bf x}_i^m, {\bf x}_j^m) \leq r)}{N-(k-1)\tau} \right)}.\]
Here, $\theta(\cdot)$ returns 1 if the argument is true and 0 otherwise, $d({\bf x}_i, {\bf x}_j)$ returns the Chebyshev distance between vectors ${\bf x}_i$ and ${\bf x}_j$, and the k-dimensional embedding vectors are constructed from the input timeseries $x(t)$ as
\[{\bf x}_i^k = (x(i), x(i+τ), x(i+2τ), \ldots, x(i+(k-1)\tau)).\]
In the original paper, τ is fixed to 1. In our implementation, the normalization constant is modified to account for embeddings with τ != 1.
Sample entropy
ComplexityMeasures.SampleEntropy
— Type
SampleEntropy([x]; r = 0.2std(x), kwargs...) <: ComplexityEstimator
An estimator for the sample entropy complexity measure (Richman and Moorman, 2000), used with complexity and complexity_normalized.
The keyword argument r is mandatory if an input timeseries x is not provided.
Keyword arguments
- r::Real: The radius used when querying for nearest neighbors around points. Its value should be determined from the input data, for example as some proportion of the standard deviation of the data.
- m::Int = 1: The embedding dimension.
- τ::Int = 1: The embedding lag.
Description
An estimator for sample entropy using radius r, embedding dimension m, and embedding lag τ is
\[SampEn(m, r, N) = -\ln{\dfrac{A(r, m, N)}{B(r, m, N)}}.\]
Here,
\[\begin{aligned} B(r, m, N) &= \sum_{i = 1}^{N-m\tau} \sum_{j = 1, j \neq i}^{N-m\tau} \theta(d({\bf x}_i^m, {\bf x}_j^m) \leq r), \\ A(r, m, N) &= \sum_{i = 1}^{N-m\tau} \sum_{j = 1, j \neq i}^{N-m\tau} \theta(d({\bf x}_i^{m+1}, {\bf x}_j^{m+1}) \leq r), \end{aligned}\]
where $\theta(\cdot)$ returns 1 if the argument is true and 0 otherwise, $d(x, y)$ computes the Chebyshev distance between $x$ and $y$, and ${\bf x}_i^{m}$ and ${\bf x}_i^{m+1}$ are the m-dimensional and m+1-dimensional embedding vectors, where k-dimensional embedding vectors are constructed from the input timeseries $x(t)$ as
\[{\bf x}_i^k = (x(i), x(i+τ), x(i+2τ), \ldots, x(i+(k-1)\tau)).\]
Quoting Richman & Moorman (2000): "SampEn(m,r,N) will be defined except when B = 0, in which case no regularity has been detected, or when A = 0, which corresponds to a conditional probability of 0 and an infinite value of SampEn(m,r,N)". In these cases, NaN is returned.
If computing the normalized measure, the resulting sample entropy lies on [0, 1].
The original algorithm fixes τ = 1. All formulas here are modified to account for any τ.
See also: entropy_sample.
Missing dispersion patterns
ComplexityMeasures.MissingDispersionPatterns
— Type
MissingDispersionPatterns <: ComplexityEstimator
MissingDispersionPatterns(est = Dispersion())
An estimator for the number of missing dispersion patterns ($N_{MDP}$), a complexity measure which can be used to detect nonlinearity in time series (Zhou et al., 2023).
Used with complexity or complexity_normalized, whose implementations use missing_outcomes.
Description
If used with complexity, $N_{MDP}$ is computed by first symbolising each xᵢ ∈ x, then embedding the resulting symbol sequence using the dispersion pattern estimator est, and computing the quantity
\[N_{MDP} = L - N_{ODP},\]
where L = total_outcomes(est) (i.e. the total number of possible dispersion patterns), and $N_{ODP}$ is defined as the number of occurring dispersion patterns.
If used with complexity_normalized, then $N_{MDP}^N = (L - N_{ODP})/L$ is computed. The authors recommend that total_outcomes(est.symbolization)^est.m << length(x) - est.m*est.τ + 1 to avoid undersampling.
Dispersion's linear mapping from CDFs to integers is based on equidistant partitioning of the interval [0, 1]. This is slightly different from Zhou et al. (2023), who use the linear mapping $s_i := \text{round}(y + 0.5)$.
Usage
In Zhou et al. (2023), MissingDispersionPatterns is used to detect nonlinearity in time series by comparing the $N_{MDP}$ value for a time series x to the $N_{MDP}$ values for an ensemble of surrogates of x. If $N_{MDP} > q_{MDP}^{WIAAFT}$, where $q_{MDP}^{WIAAFT}$ is some q-th quantile of the surrogate ensemble, then this is taken as evidence of nonlinearity.
See also: Dispersion, ReverseDispersion, total_outcomes.
Reverse dispersion entropy
ComplexityMeasures.ReverseDispersion
— Type
ReverseDispersion <: ComplexityEstimator
ReverseDispersion(; c = 3, m = 2, τ = 1, check_unique = true)
Estimator for the reverse dispersion entropy complexity measure (Li et al., 2019).
Description
Li et al. (2019) define the reverse dispersion entropy as
\[H_{rde} = \sum_{i = 1}^{c^m} \left(p_i - \dfrac{1}{{c^m}} \right)^2 = \left( \sum_{i=1}^{c^m} p_i^2 \right) - \dfrac{1}{c^{m}}\]
where the probabilities $p_i$ are obtained precisely as for the Dispersion probability estimator. Relative frequencies of dispersion patterns are computed using the given encoding scheme, which defaults to encoding using the normal cumulative distribution function (NCDF), as implemented by GaussianCDFEncoding, using embedding dimension m and embedding delay τ. Recommended parameter values (Li et al., 2019) are m ∈ [2, 3], τ = 1 for the embedding, and c ∈ [3, 4, …, 8] categories for the Gaussian mapping.
If normalizing, the reverse dispersion entropy is normalized to [0, 1].
The minimum value of $H_{rde}$ is zero, attained precisely when the dispersion pattern distribution is flat, i.e. when all $p_i$ equal $1/c^m$. Because $H_{rde} \geq 0$, $H_{rde}$ can therefore be interpreted as a measure of how far the dispersion pattern probability distribution is from white noise.
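The defining formula translates directly into a short sketch operating on a dispersion-pattern probability vector (illustrative names, not the package API):

```julia
# H_rde and its normalized version from a dispersion-pattern probability
# vector p of length c^m.
h_rde(p) = sum(abs2, p) - 1 / length(p)

# The maximum of H_rde is 1 - 1/c^m (a single pattern carries all mass),
# so dividing by it maps the result onto [0, 1].
h_rde_norm(p) = h_rde(p) / (1 - 1 / length(p))
```

A flat distribution gives 0, and a distribution concentrated on one pattern gives the normalized maximum of 1.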
Data requirements
The input must have more than one unique element for the default GaussianCDFEncoding to be well-defined. Li et al. (2019) recommend that x has at least 1000 data points.
If check_unique == true (default), then it is checked that the input has more than one unique value. If check_unique == false and the input has only one unique element, then an InexactError is thrown when trying to compute probabilities.
Statistical complexity
ComplexityMeasures.StatisticalComplexity
— Type
StatisticalComplexity <: ComplexityEstimator
StatisticalComplexity([x]; kwargs...)
An estimator for the statistical complexity and entropy, originally defined by Rosso et al. (2007), but here generalized (see Rosso et al. (2013)) to work with any ProbabilitiesEstimator in combination with any OutcomeSpace with a priori known total_outcomes, any valid distance metric, and any normalizable discrete information measure (e.g. entropies like Shannon or Renyi). Used with complexity.
Keyword arguments
- est::ProbabilitiesEstimator = RelativeAmount(OrdinalPatterns()): The ProbabilitiesEstimator used to estimate probabilities from the input data. An OutcomeSpace must be given as the first argument to the estimator to control how discretization within pixel windows is performed.
- dist<:SemiMetric = JSDivergence(): The distance measure (from Distances.jl) used to estimate the distance between the estimated probability distribution and a uniform distribution with the same maximal number of outcomes.
- entr::InformationMeasure = Renyi(): An InformationMeasure of choice. Any information measure that defines information_maximum is valid here. Typically an entropy is used, e.g. Shannon or Renyi.
Description
Statistical complexity is defined as
\[C_q[P] = \mathcal{H}_q\cdot \mathcal{Q}_q[P],\]
where $\mathcal{Q}_q$ is a "disequilibrium" obtained from a distance measure and $\mathcal{H}_q$ a disorder measure. In the original paper (Rosso et al., 2007), this complexity measure was defined via an ordinal pattern-based probability distribution, the Shannon entropy, and the Jensen-Shannon divergence as a distance measure. This implementation allows for the generalization of the complexity measure developed in Rosso et al. (2013). Here, $\mathcal{H}_q$ can be the (q-order) Shannon, Rényi, or Tsallis entropy, and $\mathcal{Q}_q$ can be based on the Euclidean, Wooters, Kullback, q-Kullback, Jensen, or q-Jensen distance as
\[Q_q[P] = Q_q^0\cdot D[P, P_e],\]
where $D[P, P_e]$ is the distance between the obtained distribution $P$ and a uniform distribution with the same maximum number of bins, measured by the distance measure dist.
Usage
The statistical complexity is used exclusively in combination with a related information measure (typically an entropy). complexity(c::StatisticalComplexity, x) returns only the statistical complexity.
The entropy (or other information measure) can be accessed as a Ref value of the struct:
x = randn(100)
c = StatisticalComplexity()
compl = complexity(c, x)
entr = c.entr_val[]
To obtain both the information measure and the statistical complexity together as a Tuple, use the wrapper entropy_complexity.
ComplexityMeasures.entropy_complexity
— Function
entropy_complexity(c::StatisticalComplexity, x)
Return both the entropy and the corresponding StatisticalComplexity. Useful for plotting data on the "entropy-complexity plane". See also entropy_complexity_curves.
ComplexityMeasures.entropy_complexity_curves
— Function
entropy_complexity_curves(c::StatisticalComplexity; num_max = 1, num_min = 1000) → (min_entropy_complexity, max_entropy_complexity)
Calculate the maximum complexity-entropy curve for the statistical complexity according to Rosso et al. (2007) for num_max * total_outcomes(c.est) different values of the normalized information measure of choice (for the maximum complexity curve) and num_min different values of the normalized information measure of choice (for the minimum complexity curve).
This function can also be used to compute the maximum "complexity-extropy curve" if c.entr is an InformationMeasure, which is the equivalent of the complexity-entropy curves, but using extropy instead of information.
Description
The statistical complexity is designed such that there is a minimum and a maximum possible complexity for data with a given permutation entropy. The calculation time of the maximum complexity curve grows as O(total_outcomes(c.est)^2), and thus takes very long for large numbers of outcomes. This function is inspired by S. Sippel's implementation in statcomp (Sippel et al., 2016).
This function will work with any ProbabilitiesEstimator for which total_outcomes is known a priori.
Lempel-Ziv complexity
ComplexityMeasures.LempelZiv76
— Type
LempelZiv76 <: ComplexityEstimator
LempelZiv76()
The Lempel-Ziv, or LempelZiv76, complexity measure (Lempel and Ziv, 1976), used with complexity and complexity_normalized.
For results to be comparable across sequences of different lengths, use the normalized version. Normalized LempelZiv76 complexity is implemented as given in Amigó et al. (2004). The normalized measure is close to zero for very regular signals, while for random sequences it is close to 1 with high probability (Amigó et al., 2004). Note: the normalized LempelZiv76 complexity can be higher than 1 (Amigó et al., 2004).
The LempelZiv76 measure applies only to binary sequences, i.e. sequences with a two-element alphabet (precisely two distinct outcomes). For performance optimization, we do not check the number of unique elements in the input. If your input sequence is not binary, you must encode it first using one of the implemented Encoding schemes (or encode your data manually).