Estimating transfer entropy directly from time series
The following methods allow you to skip the technicalities of constructing generalised delay reconstructions and of mapping the marginals yourself.
Transfer entropy between two time series
TransferEntropy.transferentropy — Method.
transferentropy(source::AbstractArray{<:Real, 1},
response::AbstractArray{<:Real, 1},
k::Int, l::Int, m::Int,
binning_scheme::Union{RectangularBinning, Vector{RectangularBinning}};
summary_statistic::Function = StatsBase.mean,
η::Int = 1, τ::Int = 1,
estimator::BinningTransferEntropyEstimator = VisitationFrequency(b = 2)) -> Float64
Convenience function for computing transfer entropy from a source time series to a response time series.
Transfer entropy is computed using the provided estimator over a (k + l + m)-dimensional generalised delay reconstruction of the input data. The embedding delay used for the reconstructions is τ, and the prediction lag is η. The generalised delay reconstructions follow the conventions

T_f^{(k)} = \{(T(t + \eta_1), T(t + \eta_2), \ldots, T(t + \eta_k))\},
T_{pp}^{(l)} = \{(T(t), T(t - \tau_1), \ldots, T(t - \tau_{l-1}))\},
S_{pp}^{(m)} = \{(S(t), S(t - \tau_1), \ldots, S(t - \tau_{m-1}))\},

where T_f denotes the k-dimensional set of vectors furnishing the future states of T, T_{pp} denotes the l-dimensional set of vectors furnishing the past and present states of T, and S_{pp} denotes the m-dimensional set of vectors furnishing the past and present states of S. η is the prediction lag. This convenience function uses τ_1 = τ, τ_2 = 2τ, τ_3 = 3τ, and so on.

Combined, we get the generalised embedding \mathbb{E} = (T_f^{(k)}, T_{pp}^{(l)}, S_{pp}^{(m)}). TE is then computed as

TE_{S \rightarrow T} = \int_{\mathbb{E}} P(T_f, T_{pp}, S_{pp}) \log_b \left( \frac{P(T_f | T_{pp}, S_{pp})}{P(T_f | T_{pp})} \right).
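To make the reconstruction convention concrete, here is a minimal sketch that builds the reconstruction for k = 1, l = 2, m = 2 with plain array indexing. It is purely illustrative; the helper name and the choice of dimensions are assumptions for this example and not part of the package API.

```julia
# Illustrative only: the generalised delay reconstruction E = (T_f, T_pp, S_pp)
# for k = 1, l = 2, m = 2, embedding lag τ and prediction lag η, written out
# with plain array indexing (this helper is not part of the package API).
function example_reconstruction(S::AbstractVector, T::AbstractVector; τ::Int = 1, η::Int = 1)
    n = length(T)
    # Keep only time indices t for which t - τ and t + η stay within bounds.
    ts = (1 + τ):(n - η)
    # Each reconstructed point is (T(t + η), T(t), T(t - τ), S(t), S(t - τ)).
    return [(T[t + η], T[t], T[t - τ], S[t], S[t - τ]) for t in ts]
end
```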
The probabilities needed for computing TE_{S \rightarrow T} are estimated over a discretization of the state space reconstruction, and this discretization is dictated by the provided binning_scheme(s). If there is more than one binning scheme, then the transfer entropy is summarised using summary_statistic over the partitions, and a single value for the transfer entropy is returned.
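Conceptually, providing several binning schemes amounts to estimating TE once per partition and then reducing the estimates with summary_statistic. A rough sketch of that behaviour, assuming example data and that RectangularBinning accepts an integer number of intervals per axis (both assumptions for this example):

```julia
using TransferEntropy, StatsBase

# Assumed example data; any two equal-length real-valued series will do.
source, response = rand(500), rand(500)

# Three partitions of increasing resolution (the integer argument is assumed
# to give the number of intervals along each axis).
binnings = [RectangularBinning(n) for n in 3:5]

# Passing the vector of binnings is conceptually equivalent to computing TE
# over each partition separately and summarising with summary_statistic:
tes = [transferentropy(source, response, 1, 1, 1, b) for b in binnings]
te  = mean(tes)  # StatsBase.mean is the default summary_statistic
```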
Arguments
- source: The source data series.
- response: The response (target) data series.
- k: The dimension of the T_{f} component of the embedding.
- l: The dimension of the T_{pp} component of the embedding.
- m: The dimension of the S_{pp} component of the embedding.
- binning_scheme: The binning scheme(s) used to construct the partitions over which TE is computed. Must be either a single RectangularBinning instance or a vector of RectangularBinning instances. TE is computed for each of the resulting partitions.
Keyword arguments
- τ::Int = 1: The embedding lag (lags the past/present components of the embedding).
- η::Int = 1: The prediction lag.
- summary_statistic::Function = StatsBase.mean: The statistic used to summarise the TE estimates when more than one binning scheme is provided.
- estimator::BinningTransferEntropyEstimator = VisitationFrequency(): The transfer entropy estimator to use.
Returns
A single value for the transfer entropy.
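A minimal usage sketch for this method, assuming the package is loaded as TransferEntropy and that RectangularBinning can be constructed from a number of intervals per axis; the data and parameter choices are assumptions for the example, not recommendations.

```julia
using TransferEntropy

# Example data: the response lags the source by one time step.
source = rand(1000)
response = 0.7 .* circshift(source, 1) .+ 0.3 .* rand(1000)

# TE from source to response over a single rectangular partition, with scalar
# future/past components (k = l = m = 1), embedding lag τ = 1, prediction lag
# η = 1, and the visitation frequency estimator with logarithm base 2.
te = transferentropy(source, response, 1, 1, 1, RectangularBinning(4);
                     τ = 1, η = 1, estimator = VisitationFrequency(b = 2))
```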
Transfer entropy between two time series conditioned on a third time series
TransferEntropy.transferentropy — Method.
transferentropy(source::AbstractArray{<:Real, 1},
response::AbstractArray{<:Real, 1},
cond::AbstractArray{<:Real, 1},
k::Int, l::Int, m::Int, n::Int,
binning_scheme::Union{RectangularBinning, Vector{RectangularBinning}};
summary_statistic::Function = StatsBase.mean,
η::Int = 1, τ::Int = 1,
estimator::BinningTransferEntropyEstimator = VisitationFrequency(b = 2)) -> Float64
Convenience function for computing transfer entropy from a source time series to a response time series, conditioned on a third time series cond.
Transfer entropy is computed using the provided estimator over a (k + l + m + n)-dimensional generalised delay reconstruction of the input data. The embedding delay used for the reconstructions is τ, and the prediction lag is η. The generalised delay reconstructions follow the conventions

T_f^{(k)} = \{(T(t + \eta_1), T(t + \eta_2), \ldots, T(t + \eta_k))\},
T_{pp}^{(l)} = \{(T(t), T(t - \tau_1), \ldots, T(t - \tau_{l-1}))\},
S_{pp}^{(m)} = \{(S(t), S(t - \tau_1), \ldots, S(t - \tau_{m-1}))\},
C_{pp}^{(n)} = \{(C(t), C(t - \tau_1), \ldots, C(t - \tau_{n-1}))\},

where T_f denotes the k-dimensional set of vectors furnishing the future states of T, T_{pp} denotes the l-dimensional set of vectors furnishing the past and present states of T, S_{pp} denotes the m-dimensional set of vectors furnishing the past and present states of S, and C_{pp} denotes the n-dimensional set of vectors furnishing the past and present states of C. η is the prediction lag. This convenience function uses τ_1 = τ, τ_2 = 2τ, τ_3 = 3τ, and so on.

Combined, we get the generalised embedding \mathbb{E} = (T_f^{(k)}, T_{pp}^{(l)}, S_{pp}^{(m)}, C_{pp}^{(n)}). TE is then computed as

TE_{S \rightarrow T | C} = \int_{\mathbb{E}} P(T_f, T_{pp}, S_{pp}, C_{pp}) \log_b \left( \frac{P(T_f | T_{pp}, S_{pp}, C_{pp})}{P(T_f | T_{pp}, C_{pp})} \right).
The probabilities needed for computing TE_{S \rightarrow T | C} are estimated over a discretization of the state space reconstruction, and this discretization is dictated by the provided binning_scheme(s). If there is more than one binning scheme, then the transfer entropy is summarised using summary_statistic over the partitions, and a single value for the transfer entropy is returned.
Arguments
- source: The source data series.
- response: The response (target) data series.
- cond: The data series to condition on.
- k::Int: The dimension of the T_{f} component of the embedding.
- l::Int: The dimension of the T_{pp} component of the embedding.
- m::Int: The dimension of the S_{pp} component of the embedding.
- n::Int: The dimension of the C_{pp} component of the embedding.
- binning_scheme: The binning scheme(s) used to construct the partitions over which TE is computed. Must be either a single RectangularBinning instance or a vector of RectangularBinning instances. TE is computed for each of the resulting partitions.
Keyword arguments
- τ::Int = 1: The embedding lag (lags the past/present components of the embedding).
- η::Int = 1: The prediction lag.
- summary_statistic::Function = StatsBase.mean: The statistic used to summarise the TE estimates when more than one binning scheme is provided.
- estimator::BinningTransferEntropyEstimator = VisitationFrequency(): The transfer entropy estimator to use.
Returns
A single value for the transfer entropy.
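A usage sketch for the conditional variant, mirroring the bivariate call but with the conditioning series and its embedding dimension n inserted. The data, the common-driver setup, and the RectangularBinning constructor argument are assumptions for this example.

```julia
using TransferEntropy

# Assumed example data: source and response both depend on a common driver.
cond = rand(1000)
source = 0.5 .* circshift(cond, 1) .+ 0.5 .* rand(1000)
response = 0.5 .* circshift(cond, 2) .+ 0.5 .* rand(1000)

# TE from source to response conditioned on cond, with k = l = m = n = 1,
# over a single rectangular partition.
te_cond = transferentropy(source, response, cond, 1, 1, 1, 1, RectangularBinning(4);
                          τ = 1, η = 1)
```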