TransferEntropy.jl

Exported functions

This package exports two functions, transferentropy and mutualinfo.

In order to compute either quantity, combine your input data with one of the available estimators. Docstrings for transferentropy and mutualinfo give overviews of currently implemented estimators for either function.
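
For example, a minimal sketch (the data, estimator, and binning resolution here are arbitrary, and all keyword arguments are left at their defaults):

using TransferEntropy, Entropies

x, y = rand(2000), rand(2000)

# Combine the input data with an estimator, here visitation frequencies over a
# rectangular binning with 4 subdivisions along each coordinate axis.
est = VisitationFrequency(RectangularBinning(4))

te = transferentropy(x, y, est)   # transfer entropy from x to y
mi = mutualinfo(x, y, est)        # mutual information between x and y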

Estimators

Binning based

Entropies.VisitationFrequencyType
VisitationFrequency(r::RectangularBinning) <: BinningProbabilitiesEstimator

A probability estimator based on binning data into rectangular boxes dictated by the binning scheme r.

Example

# Construct boxes by dividing each coordinate axis into 5 equal-length chunks.
b = RectangularBinning(5)

# A probabilities estimator that, when applied to a dataset, computes visitation frequencies
# over the boxes of the binning
est = VisitationFrequency(b)

See also: RectangularBinning.

Entropies.TransferOperatorType
TransferOperator(ϵ::RectangularBinning) <: BinningProbabilitiesEstimator

A probability estimator based on binning data into rectangular boxes dictated by the binning scheme ϵ, approximating the transfer (Perron-Frobenius) operator over those bins, and then taking the invariant measure associated with that transfer operator as the bin probabilities. Assumes that the input data are sequential (time-ordered).

This implementation follows the grid estimator approach in Diego et al. (2019)[Diego2019].

Description

The transfer operator $P^{N}$ is computed as an N-by-N matrix of transition probabilities between the states defined by the partition elements, where N is the number of boxes in the partition that are visited by the orbit/points.

If $\{x_n^{(D)}\}_{n=1}^L$ are the $L$ different $D$-dimensional points over which the transfer operator is approximated, $\{C_k\}_{k=1}^N$ are the $N$ different partition elements (as dictated by ϵ) that get visited by the points, and $\phi(x_n) = x_{n+1}$, then

\[P_{ij} = \dfrac {\#\{ x_n | \phi(x_n) \in C_j \cap x_n \in C_i \}} {\#\{ x_m | x_m \in C_i \}},\]

where $\#$ denotes cardinality (the number of elements). The element $P_{ij}$ thus gives the fraction of points initially in box $C_i$ that end up in box $C_j$ when the points in $C_i$ are projected one step forward in time. Thus, the $i$-th row of $P^{N}$, with entries $P_{ik}$ for $k \in \{1, 2, \ldots, N \}$, gives the probabilities of jumping from the state defined by box $C_i$ to each of the $N$ states. It follows that $\sum_{k=1}^{N} P_{ik} = 1$ for all $i$. Thus, $P^N$ is a row (right) stochastic matrix.
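
To make the construction concrete, here is a minimal sketch (an illustrative helper, not the package's implementation) that estimates such a row-stochastic transition matrix from a time-ordered sequence of visited bin indices:

# `visits[t]` is the index of the partition element visited at time t,
# and N is the number of visited partition elements.
function transition_matrix(visits::Vector{Int}, N::Int)
    P = zeros(N, N)
    for t in 1:length(visits) - 1
        P[visits[t], visits[t + 1]] += 1.0        # count transitions C_i → C_j
    end
    for i in 1:N
        rowsum = sum(P[i, :])
        if rowsum > 0
            P[i, :] = P[i, :] ./ rowsum           # normalize visited rows to sum to 1
        end
    end
    return P
end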

Invariant measure estimation from transfer operator

The left invariant distribution $\mathbf{\rho}^N$ is a row vector, where $\mathbf{\rho}^N P^{N} = \mathbf{\rho}^N$. Hence, $\mathbf{\rho}^N$ is a row eigenvector of the transfer matrix $P^{N}$ associated with eigenvalue 1. The distribution $\mathbf{\rho}^N$ approximates the invariant density of the system subject to the partition ϵ, and can be taken as a probability distribution over the partition elements.

In practice, the invariant measure $\mathbf{\rho}^N$ is computed using invariantmeasure, which also approximates the transfer matrix. The invariant distribution is initialized as a length-N random distribution which is then applied to $P^{N}$. The resulting length-N distribution is then applied to $P^{N}$ again. This process repeats until the difference between the distributions over consecutive iterations is below some threshold.
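
A minimal sketch of that iteration (for illustration only; the package computes this via invariantmeasure):

using LinearAlgebra

function invariant_distribution(P::AbstractMatrix; tol = 1e-12, maxiter = 10_000)
    ρ = rand(size(P, 1))
    ρ ./= sum(ρ)                      # random initial length-N distribution
    for _ in 1:maxiter
        ρnew = transpose(P) * ρ       # apply the distribution to P (i.e. ρᵀP)
        ρnew ./= sum(ρnew)            # re-normalize to guard against round-off
        norm(ρnew - ρ, 1) < tol && return ρnew
        ρ = ρnew
    end
    return ρ
end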

Probability and entropy estimation

  • probabilities(x::AbstractDataset, est::TransferOperator{RectangularBinning}) estimates probabilities for the bins defined by the provided binning (est.ϵ)
  • genentropy(x::AbstractDataset, est::TransferOperator{RectangularBinning}) does the same, but computes generalized entropy using the probabilities.

See also: RectangularBinning, invariantmeasure.
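
A minimal usage sketch (random data for illustration; Dataset is from DelayEmbeddings.jl, and keyword defaults are assumed):

using Entropies, DelayEmbeddings

D = Dataset(rand(5000, 2))
est = TransferOperator(RectangularBinning(5))

p = probabilities(D, est)          # invariant measure over the visited bins
h = genentropy(D, est, base = 2)   # generalized entropy from those probabilities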

Entropies.RectangularBinningType
RectangularBinning(ϵ) <: RectangularBinningScheme

Instructions for creating a rectangular box partition using the binning scheme ϵ. Binning instructions are deduced from the type of ϵ.

Rectangular binnings may be automatically adjusted to the data in which the RectangularBinning is applied, as follows:

  1. ϵ::Int divides each coordinate axis into ϵ equal-length intervals, extending the upper bound 1/100th of a bin size to ensure all points are covered.

  2. ϵ::Float64 divides each coordinate axis into intervals of fixed size ϵ, starting from the axis minima until the data is completely covered by boxes.

  3. ϵ::Vector{Int} divides the i-th coordinate axis into ϵ[i] equal-length intervals, extending the upper bound 1/100th of a bin size to ensure all points are covered.

  4. ϵ::Vector{Float64} divides the i-th coordinate axis into intervals of fixed size ϵ[i], starting from the axis minima until the data is completely covered by boxes.

Rectangular binnings may also be specified on arbitrary min-max ranges.

  1. ϵ::Tuple{Vector{Tuple{Float64,Float64}},Int64} creates intervals along each coordinate axis from ranges indicated by a vector of (min, max) tuples, then divides each coordinate axis into an integer number of equal-length intervals. Note: this does not ensure that all points are covered by the binning (points outside the binning are ignored).

Example 1: Grid deduced automatically from data (partition guaranteed to cover data points)

Flexible box sizes

The following binning specification finds the minima/maxima along each coordinate axis, then splits each of those data ranges (with some tiny padding on the edges) into 10 equal-length intervals. This gives (hyper-)rectangular boxes, and works for data of any dimension.

using Entropies
RectangularBinning(10)

Now, assume the data consists of 2-dimensional points, and that we want a finer grid along one of the dimensions than along the other.

The following binning specification finds the minima/maxima along each coordinate axis, then splits the range along the first coordinate axis (with some tiny padding on the edges) into 10 equal-length intervals, and the range along the second coordinate axis (with some tiny padding on the edges) into 5 equal-length intervals. This gives (hyper-)rectangular boxes.

using Entropies
RectangularBinning([10, 5])

Fixed box sizes

The following binning specification finds the minima/maxima along each coordinate axis, then splits the axis ranges into equal-length intervals of fixed size 0.5 until all data points are covered by boxes. This approach yields (hyper-)cubic boxes, and works for data of any dimension.

using Entropies
RectangularBinning(0.5)

Again, assume the data consists of 2-dimensional points, and that we want a finer grid along one of the dimensions than over the other dimension.

The following binning specification finds the minima/maxima along each coordinate axis, then splits the range along the first coordinate axis into equal-length intervals of size 0.3, and the range along the second axis into equal-length intervals of size 0.1 (in both cases, making sure the data are completely covered by the boxes). This approach gives (hyper-)rectangular boxes.

using Entropies
RectangularBinning([0.3, 0.1])

Example 2: Custom grids (partition not guaranteed to cover data points):

Assume the data consists of 3-dimensional points (x, y, z), and that we want a grid that is fixed over the intervals [x₁, x₂] for the first dimension, over [y₁, y₂] for the second dimension, and over [z₁, z₂] for the third dimension. We then want to split each of those ranges into 4 equal-length pieces. Beware: some points may fall outside the partition if the intervals are not chosen properly (these points are simply discarded).

The following binning specification produces the desired (hyper-)rectangular boxes.

using Entropies, DelayEmbeddings

D = Dataset(rand(100, 3));

x₁, x₂ = 0.5, 1 # not completely covering the data, which are on [0, 1]
y₁, y₂ = -2, 1.5 # covering the data, which are on [0, 1]
z₁, z₂ = 0, 0.5 # not completely covering the data, which are on [0, 1]

ϵ = [(x₁, x₂), (y₁, y₂), (z₁, z₂)], 4 # [interval 1, interval 2, ...], n_subdivisions

RectangularBinning(ϵ)

Kernel density based

Entropies.NaiveKernelType
NaiveKernel(ϵ::Real, ss = KDTree; w = 0, metric = Euclidean()) <: ProbabilitiesEstimator

Estimate probabilities/entropy using a "naive" kernel density estimation approach (KDE), as discussed in Prichard and Theiler (1995) [PrichardTheiler1995].

Probabilities $P(\mathbf{x}, \epsilon)$ are assigned to every point $\mathbf{x}$ by counting how many other points occupy the space spanned by a hypersphere of radius ϵ around $\mathbf{x}$, according to:

\[P_i(X, \epsilon) \approx \dfrac{1}{N} \sum_{s} B(||X_i - X_s|| < \epsilon),\]

where $B$ gives 1 if the argument is true and 0 otherwise. Probabilities are then normalized.

The search structure ss is any search structure supported by Neighborhood.jl. Specifically, use KDTree to use a tree-based neighbor search, or BruteForce for the direct distances between all points. KDTrees heavily outperform direct distances when the dimensionality of the data is much smaller than the data length.

The keyword w is the Theiler window, which excludes indices $s$ with $|i - s| \leq w$ from the neighbor count around the given point $X_i$.
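
A minimal usage sketch (random data; the radius and Theiler window are arbitrary choices, and keyword defaults are assumed elsewhere):

using Entropies, DelayEmbeddings

x = Dataset(rand(2000, 3))

# Count, for every point, how many other points lie within a radius of 0.2,
# excluding temporally close points with a Theiler window of w = 10.
est = NaiveKernel(0.2; w = 10)

p = probabilities(x, est)
h = genentropy(x, est, base = 2)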

Nearest neighbor based

Entropies.KozachenkoLeonenkoType
KozachenkoLeonenko(; k::Int = 1, w::Int = 0) <: EntropyEstimator

Entropy estimator based on nearest neighbors. This implementation is based on Kozachenko & Leonenko (1987)[KozachenkoLeonenko1987], as described in Charzyńska and Gambin (2016)[Charzyńska2016].

w is the Theiler window (defaults to 0, meaning that only the point itself is excluded when searching for neighbours).

Info

This estimator is only available for entropy estimation. Probabilities cannot be obtained directly.

Entropies.KraskovType
Kraskov(; k::Int = 1, w::Int = 0) <: EntropyEstimator

Entropy estimator based on k-th nearest neighbor searches[Kraskov2004]. w is the Theiler window.

Info

This estimator is only available for entropy estimation. Probabilities cannot be obtained directly.
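
A minimal sketch of direct entropy estimation with the two nearest-neighbor estimators above (random data; k, w, and the base are arbitrary choices):

using Entropies, DelayEmbeddings

D = Dataset(rand(10_000, 4))

genentropy(D, Kraskov(k = 4), base = 2)            # k-th nearest neighbor estimator
genentropy(D, KozachenkoLeonenko(w = 0), base = 2) # nearest neighbor estimator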

TransferEntropy.Kraskov1Type
Kraskov1(k::Int = 1; metric_x = Chebyshev(), metric_y = Chebyshev()) <: MutualInformationEstimator

The $I^{(1)}$ nearest neighbor based mutual information estimator from Kraskov et al. (2004), using k nearest neighbors. The distance metric for the marginals $x$ and $y$ can be chosen separately, while the Chebyshev metric is always used for the z = (x, y) joint space.

TransferEntropy.Kraskov2Type
Kraskov2(k::Int = 1; metric_x = Chebyshev(), metric_y = Chebyshev()) <: MutualInformationEstimator

The $I^{(2)}(x, y)$ nearest neighbor based mutual information estimator from Kraskov et al. (2004), using k nearest neighbors. The distance metric for the marginals $x$ and $y$ can be chosen separately, while the Chebyshev metric is always used for the z = (x, y) joint space.
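
A minimal usage sketch (synthetic data in which y depends on x, so the estimated mutual information should be clearly positive; k = 4 is an arbitrary choice):

using TransferEntropy

x = rand(5000)
y = sin.(2π .* x) .+ 0.1 .* rand(5000)   # y is a noisy function of x

mutualinfo(x, y, Kraskov1(4))            # I⁽¹⁾ estimator with 4 nearest neighbors
mutualinfo(x, y, Kraskov2(4))            # I⁽²⁾ estimator with 4 nearest neighbors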


Permutation based

Entropies.SymbolicPermutationType
SymbolicPermutation(; τ = 1, m = 3, lt = Entropies.isless_rand) <: ProbabilityEstimator
SymbolicWeightedPermutation(; τ = 1, m = 3, lt = Entropies.isless_rand) <: ProbabilityEstimator
SymbolicAmplitudeAwarePermutation(; τ = 1, m = 3, A = 0.5, lt = Entropies.isless_rand) <: ProbabilityEstimator

Symbolic, permutation-based probabilities/entropy estimators. m is the permutation order (or the symbol size or the embedding dimension) and τ is the delay time (or lag).

Repeated values during symbolization

In the original implementation of permutation entropy [BandtPompe2002], equal values are ordered after their order of appearance, but this can lead to erroneous temporal correlations, especially for data with low amplitude resolution [Zunino2017]. Here, we resolve this issue by letting the user provide a custom "less-than" function. The keyword lt accepts a function that decides which of two state vector elements is smaller. If two elements are equal, the default behaviour is to randomly assign one of them as the larger (lt = Entropies.isless_rand). For data with low amplitude resolution, computing probabilities multiple times using the random approach may reduce these erroneous effects.

To get the behaviour described in Bandt and Pompe (2002), use lt = Base.isless.

Properties of original signal preserved

  • SymbolicPermutation: Preserves ordinal patterns of state vectors (sorting information). This implementation is based on Bandt & Pompe (2002)[BandtPompe2002] and Berger et al. (2019)[Berger2019].
  • SymbolicWeightedPermutation: Like SymbolicPermutation, but also encodes amplitude information by tracking the variance of the state vectors. This implementation is based on Fadlallah et al. (2013)[Fadlallah2013].
  • SymbolicAmplitudeAwarePermutation: Like SymbolicPermutation, but also encodes amplitude information by considering a weighted combination of absolute amplitudes of state vectors, and relative differences between elements of state vectors. See description below for explanation of the weighting parameter A. This implementation is based on Azami & Escudero (2016) [Azami2016].

Probability estimation

Univariate time series

To estimate probabilities or entropies from univariate time series, use the following methods:

  • probabilities(x::AbstractVector, est::SymbolicProbabilityEstimator). Constructs state vectors from x using embedding lag τ and embedding dimension m, symbolizes state vectors, and computes probabilities as (weighted) relative frequency of symbols.
  • genentropy(x::AbstractVector, est::SymbolicProbabilityEstimator; α=1, base = 2) computes probabilities by calling probabilities(x::AbstractVector, est), then computes the order-α generalized entropy to the given base.

Speeding up repeated computations

A pre-allocated integer symbol array s can be provided to save some memory allocations if the probabilities are to be computed for multiple data sets.

Note: it is not the array that will hold the final probabilities that is pre-allocated, but the temporary integer array containing the symbolized data points. Thus, if provided, it is required that length(x) == length(s) if x is a Dataset, or length(s) == length(x) - (m-1)τ if x is a univariate signal that is to be embedded first.

Use the following signatures (only works for SymbolicPermutation).

probabilities!(s::Vector{Int}, x::AbstractVector, est::SymbolicPermutation) → ps::Probabilities
probabilities!(s::Vector{Int}, x::AbstractDataset, est::SymbolicPermutation) → ps::Probabilities
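
A minimal sketch of the pre-allocated version for a univariate signal (the signal and parameters are arbitrary):

using Entropies

m, τ = 3, 1
x = rand(5000)                             # univariate signal, embedded internally
s = zeros(Int, length(x) - (m - 1)*τ)      # pre-allocated symbol vector
est = SymbolicPermutation(m = m, τ = τ)

p1 = probabilities!(s, x, est)
p2 = probabilities!(s, rand(5000), est)    # reuse `s` for another signal of equal length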

Multivariate datasets

Although not dealt with in the original papers describing these estimators, permutation entropies can, numerically speaking, also be computed for multivariate datasets with dimension ≥ 2 (but see the caveat below). Such datasets may be, for example, pre-embedded time series. In that case, the delay reconstruction step is skipped, and symbols are computed directly from the $L$ existing state vectors $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L\}$.

  • probabilities(x::AbstractDataset, est::SymbolicProbabilityEstimator). Compute ordinal patterns of the state vectors of x directly (without doing any embedding), symbolize those patterns, and compute probabilities as (weighted) relative frequencies of symbols.
  • genentropy(x::AbstractDataset, est::SymbolicProbabilityEstimator). Computes probabilities from symbol frequencies using probabilities(x::AbstractDataset, est::SymbolicProbabilityEstimator), then computes the order-α generalized (permutation) entropy to the given base.

Caveat: A dynamical interpretation of the permutation entropy does not necessarily hold if computing it on generic multivariate datasets. Method signatures for Datasets are provided for convenience, and should only be applied if you understand the relation between your input data, the numerical value for the permutation entropy, and its interpretation.
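
A minimal sketch for pre-embedded input (a sketch under the assumption that the dimension of the dataset plays the role of the permutation order, so no further embedding is performed; the dataset dimension here matches the default m = 3):

using Entropies, DelayEmbeddings

D = Dataset(rand(10_000, 3))     # pre-embedded 3-dimensional state vectors
est = SymbolicPermutation()

p = probabilities(D, est)
h = genentropy(D, est, base = 2)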

Description

All symbolic estimators use the same underlying approach to estimating probabilities.

Embedding, ordinal patterns and symbolization

Consider the $n$-element univariate time series $\{x_1, x_2, \ldots, x_n\}$. Let $\mathbf{x}_i^{m, \tau} = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau})$ for $i = 1, 2, \ldots, n - (m-1)\tau$ be the $i$-th state vector in a delay reconstruction with embedding dimension $m$ and reconstruction lag $\tau$. There are then $N = n - (m-1)\tau$ state vectors.

For an $m$-dimensional vector, there are $m!$ possible ways of ordering its elements in ascending order of magnitude. Each such possible ordering is called a motif. Let $\pi_i^{m, \tau}$ denote the motif associated with the $m$-dimensional state vector $\mathbf{x}_i^{m, \tau}$, and let $R$ be the number of distinct motifs that occur among the $N$ state vectors. Then $R \leq \min(N, m!)$; $R = N$ precisely when all motifs are unique, and $R = 1$ when all motifs are identical.

Each unique motif $\pi_i^{m, \tau}$ can be mapped to a unique integer symbol $0 \leq s_i \leq m!-1$. Let $S(\pi) : \mathbb{R}^m \to \mathbb{N}_0$ be the function that maps the motif $\pi$ to its symbol $s$, and let $\Pi$ denote the set of symbols $\Pi = \{ s_i \}_{i\in \{ 1, \ldots, R\}}$.
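
As a simple illustration of the idea (using Base.sortperm; the package's internal motif-to-symbol encoding may differ):

# For m = 3 there are 3! = 6 possible motifs. The motif of a state vector is
# the permutation that sorts its elements in ascending order.
x = [2.1, 4.7, 3.2]
sortperm(x)        # returns [1, 3, 2], identifying the motif of this vector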

Probability computation

SymbolicPermutation

The probability of a given motif is its frequency of occurrence, normalized by the total number of state vectors (with notation from [Fadlallah2013]),

\[p(\pi_i^{m, \tau}) = \dfrac{\sum_{k=1}^N \mathbf{1}_{u:S(u) = s_i} \left(\mathbf{x}_k^{m, \tau} \right) }{\sum_{k=1}^N \mathbf{1}_{u:S(u) \in \Pi} \left(\mathbf{x}_k^{m, \tau} \right)} = \dfrac{\sum_{k=1}^N \mathbf{1}_{u:S(u) = s_i} \left(\mathbf{x}_k^{m, \tau} \right) }{N},\]

where the function $\mathbf{1}_A(u)$ is the indicator function of a set $A$. That is, $\mathbf{1}_A(u) = 1$ if $u \in A$, and $\mathbf{1}_A(u) = 0$ otherwise.

SymbolicAmplitudeAwarePermutation

Amplitude-aware permutation entropy is computed analogously to regular permutation entropy but probabilities are weighted by amplitude information as follows.

\[p(\pi_i^{m, \tau}) = \dfrac{\sum_{k=1}^N \mathbf{1}_{u:S(u) = s_i} \left( \mathbf{x}_k^{m, \tau} \right) \, a_k}{\sum_{k=1}^N \mathbf{1}_{u:S(u) \in \Pi} \left( \mathbf{x}_k^{m, \tau} \right) \,a_k} = \dfrac{\sum_{k=1}^N \mathbf{1}_{u:S(u) = s_i} \left( \mathbf{x}_k^{m, \tau} \right) \, a_k}{\sum_{k=1}^N a_k}.\]

The weights encoding amplitude information about state vector $\mathbf{x}_i = (x_1^i, x_2^i, \ldots, x_m^i)$ are

\[a_i = \dfrac{A}{m} \sum_{k=1}^m |x_k^i | + \dfrac{1-A}{m-1} \sum_{k=2}^m |x_{k}^i - x_{k-1}^i|,\]

with $0 \leq A \leq 1$. When $A=0$, only internal differences between the elements of $\mathbf{x}_i$ are weighted; when $A=1$, only the mean amplitude of the state vector elements is weighted; and for $0 < A < 1$, a combined weighting is used.

SymbolicWeightedPermutation

Weighted permutation entropy is also computed analogously to regular permutation entropy, but adds weights that encode amplitude information too:

\[p(\pi_i^{m, \tau}) = \dfrac{\sum_{k=1}^N \mathbf{1}_{u:S(u) = s_i} \left( \mathbf{x}_k^{m, \tau} \right) \, w_k}{\sum_{k=1}^N \mathbf{1}_{u:S(u) \in \Pi} \left( \mathbf{x}_k^{m, \tau} \right) \,w_k} = \dfrac{\sum_{k=1}^N \mathbf{1}_{u:S(u) = s_i} \left( \mathbf{x}_k^{m, \tau} \right) \, w_k}{\sum_{k=1}^N w_k}.\]

The weighted permutation entropy is equivalent to regular permutation entropy when the weights are positive and identical ($w_j = \beta$ for all $j \leq N$, with $\beta > 0$). Weights are dictated by the variance of the state vectors.

Let the arithmetic mean of state vector $\mathbf{x}_i$ be denoted by

\[\mathbf{\hat{x}}_j^{m, \tau} = \frac{1}{m} \sum_{k=1}^m x_{j + (k+1)\tau}.\]

Weights are then computed as

\[w_j = \dfrac{1}{m}\sum_{k=1}^m (x_{j+(k+1)\tau} - \mathbf{\hat{x}}_j^{m, \tau})^2.\]

Note: in equation 7, section III, of the original paper, the authors write

\[w_j = \dfrac{1}{m}\sum_{k=1}^m (x_{j-(k-1)\tau} - \mathbf{\hat{x}}_j^{m, \tau})^2.\]

But given the formula they give for the arithmetic mean, this is not the variance of $\mathbf{x}_i$, because the indices are mixed: $x_{j+(k-1)\tau}$ in the weights formula, vs. $x_{j+(k+1)\tau}$ in the arithmetic mean formula. This seems to imply that amplitude information about previous delay vectors is mixed with mean amplitude information about current vectors. The authors also mix the terms "vector" and "neighboring vector" (but use the same notation for both), making it hard to interpret whether the sign switch is a typo or intended. Here, we use the notation above, which actually computes the variance for $\mathbf{x}_i$.

There is a possible performance optimization to be made with this method:

transferentropy!(symb_s, symb_t, s, t, [c], est::SymbolicPermutation; 
        base = 2, q = 1, m::Int = 3, τ::Int = 1, ...) → Float64

You can optionally provide pre-allocated (integer) symbol vectors symb_s and symb_t (and symb_c), where length(symb_s) == length(symb_t) == length(symb_c) == N - (est.m-1)*est.τ. This is useful for saving memory allocations for repeated computations.
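
A minimal sketch (arbitrary data; keyword arguments are left at their defaults, which match est.m = 3 and est.τ = 1 below):

using TransferEntropy

s, t = rand(5000), rand(5000)
est = SymbolicPermutation(m = 3, τ = 1)

nsymb = length(s) - (est.m - 1)*est.τ                 # required symbol-vector length
symb_s, symb_t = zeros(Int, nsymb), zeros(Int, nsymb) # pre-allocated symbol vectors

te = transferentropy!(symb_s, symb_t, s, t, est)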

Hilbert

TransferEntropy.HilbertType
Hilbert(est; 
    source::InstantaneousSignalProperty = Phase(),
    target::InstantaneousSignalProperty = Phase(),
    cond::InstantaneousSignalProperty = Phase()
) <: TransferEntropyEstimator

Compute transfer entropy on instantaneous phases/amplitudes of relevant signals, which are obtained by first applying the Hilbert transform to each signal, then extracting the phases/amplitudes of the resulting complex numbers[Palus2014]. Original time series are thus transformed to instantaneous phase/amplitude time series. Transfer entropy is then estimated using the provided est on those phases/amplitudes (use e.g. VisitationFrequency, or SymbolicPermutation).

Info

Details on the estimation of transfer entropy (conditional mutual information) following the phase/amplitude extraction step are not given in Paluš (2014). Here, after instantaneous phases/amplitudes have been obtained, these are treated as regular time series, from which transfer entropy is then computed as usual.

See also: Phase, Amplitude.
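
A minimal usage sketch (random data; the inner estimator, binning, and phase/amplitude choices are arbitrary):

using TransferEntropy, Entropies

x, y = rand(2000), rand(2000)

# Transfer entropy from the instantaneous phase of x to the instantaneous
# amplitude of y, estimated with a visitation-frequency estimator on the
# phase/amplitude time series.
est = Hilbert(VisitationFrequency(RectangularBinning(4)),
    source = Phase(), target = Amplitude())

transferentropy(x, y, est)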

TransferEntropy.PhaseType
Phase <: InstantaneousSignalProperty

Indicates that the instantaneous phases of a signal should be used.


Automated variable selection

Bootstrap-based non-uniform embedding (BBNUE)

TransferEntropy.bbnueFunction
bbnue(source, target, [cond], est; 
    η = 1, include_instantaneous = true, 
    method_delay = "ac_min", maxlag::Union{Int, Float64} = 0.05,
    surr::Surrogate = RandomShuffle(), nsurr = 100, α = 0.05
    ) → te, js, τs, idxs_source, idxs_target, idxs_cond

Estimate transfer entropy (TE) from source to target (conditioned on cond if given) for prediction lag η, using the bootstrap-based non-uniform embedding (BBNUE) method from Montalto et al. (2014) [Montalto2014].

Implementation details

The BBNUE method uses a bootstrap-based criterion to identify the most relevant and minimally redundant variables from the past of target, the present/past of source, and (if given) the present/past of cond, that contribute most to target's future.

This implementation uses a conditional entropy minimization criterion for selecting variables, which is what Montalto et al. (2014)[Montalto2014] use for their bin estimator. Here, any estimator can be used, but variables will be selected using conditional entropy minimization regardless of the choice of estimator.

Montalto et al.'s bin-estimator corresponds to using the VisitationFrequency estimator with bins whose sides are equal-length, e.g. VisitationFrequency(RectangularBinning(0.5)). In this implementation, any rectangular binning can be used.

Input data

The source, target and cond (if given) can each be given as univariate AbstractVectors, or as multivariate Datasets or Vector{AbstractVector}s.

For example, if you want to compute the BBNUE-transfer entropy from a univariate source to a univariate target, potentially conditioned on many different variables, you can do the following:

n = 1000
# Source and target variables
s, t = rand(n), rand(n)

# Variables that might potentially influence `t` along with `s`
c1, c2, c3 = rand(n), rand(n), rand(n)

est = NaiveKernel(0.3)
bbnue(s, t, Dataset([c1, c2, c3]), est)

Variable selection and significance testing

In this implementation, the maximum lag for each embedding variable is determined using estimate_delay from DelayEmbeddings. The keyword method_delay (default "ac_min") controls the method for estimating the delay, and maxlag is the maximum allowed delay (if maxlag ∈ [0, 1] is a fraction, the maximum lag is that fraction of the input time series length; if maxlag is an integer, the maximum lag is maxlag itself).

If include_instantaneous is true, then instantaneous interactions are also considered, i.e. effects like source(t) → target(t) are allowed. η is the forward prediction lag.

Significance testing is performed using a permutation test. At each iteration of the variable selection routine, we first compute the transfer entropy using the new candidate $c_k$. Then, the computation is repeated nsurr times, at each iteration replacing $c_k$ with a surrogate of type surr. If the transfer entropy using the original $c_k$ exceeds the 1 - α quantile of that of the surrogate ensemble, then $c_k$ is deemed significant to the future of target and is included in the set of selected variables.

If no relevant variables pass the permutation test, then TE is not well-defined, and a value of 0.0 is returned.

Returns

A 6-tuple is returned, consisting of:

  • te: The computed transfer entropy value.
  • js: The indices of the selected variables. js[i] is the i-th entry in the array [idxs_source..., idxs_target..., idxs_cond...,].
  • τs: The embedding lags of the selected variables. τs[i] corresponds to js[i].
  • idxs_source: The indices of the source variables.
  • idxs_target: The indices of the target variables.
  • idxs_cond: The indices of the conditional variables (empty if cond is not given).

Example

using CausalityTools, DynamicalSystems
sys = ExampleSystems.logistic2_unidir(c_xy = 0.8, r₁ = 3.78, r₂ = 3.92)
orbit = trajectory(sys, 10000, Ttr = 10000)
x, y = columns(orbit)

# Use a naive kernel estimator with a fairly large radius, to keep
# computation costs low and to ensure that enough points fall within
# each point's neighborhood (so the estimated probability distribution
# does not become overly noisy).
est = NaiveKernel(0.3)
te_xy = bbnue(x, y, est, surr = RandomShuffle(), nsurr = 100, include_instantaneous = true)
te_yx = bbnue(y, x, est, surr = RandomShuffle(), nsurr = 100, include_instantaneous = true)

te_xy, te_yx
  • Diego2019: Diego, D., Haaga, K. A., & Hannisdal, B. (2019). Transfer entropy computation using the Perron-Frobenius operator. Physical Review E, 99(4), 042212.
  • PrichardTheiler1995: Prichard, D., & Theiler, J. (1995). Generalized redundancies for time series analysis. Physica D: Nonlinear Phenomena, 84(3-4), 476-493.
  • Charzyńska2016: Charzyńska, A., & Gambin, A. (2016). Improvement of the k-NN entropy estimator with applications in systems biology. Entropy, 18(1), 13.
  • KozachenkoLeonenko1987: Kozachenko, L. F., & Leonenko, N. N. (1987). Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2), 9-16.
  • Kraskov2004: Kraskov, A., Stögbauer, H., & Grassberger, P. (2004). Estimating mutual information. Physical Review E, 69(6), 066138.
  • BandtPompe2002: Bandt, C., & Pompe, B. (2002). Permutation entropy: a natural complexity measure for time series. Physical Review Letters, 88(17), 174102.
  • Berger2019: Berger, S., et al. (2019). Teaching ordinal patterns to a computer: Efficient encoding algorithms based on the Lehmer code. Entropy, 21(10), 1023.
  • Fadlallah2013: Fadlallah, B., et al. (2013). Weighted-permutation entropy: A complexity measure for time series incorporating amplitude information. Physical Review E, 87(2), 022911.
  • Rényi1960: Rényi, A. (1960). Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, p. 547.
  • Azami2016: Azami, H., & Escudero, J. (2016). Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation. Computer Methods and Programs in Biomedicine, 128, 40-51.
  • Zunino2017: Zunino, L., Olivares, F., Scholkmann, F., & Rosso, O. A. (2017). Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions. Physics Letters A, 381(22), 1883-1892.
  • Palus2014: Paluš, M. (2014). Cross-scale interactions and information transfer. Entropy, 16(10), 5263-5289.
  • Montalto2014: Montalto, A., Faes, L., & Marinazzo, D. (2014). MuTE: A MATLAB toolbox to compare established and novel estimators of the multivariate transfer entropy. PLoS ONE, 9, e109462.