Conditional mutual information API
The conditional mutual information (CMI) API is defined by

- `ConditionalMutualInformation`, and its subtypes,
- `condmutualinfo`,
- `ConditionalMutualInformationEstimator`, and its subtypes.
CausalityTools.condmutualinfo — Function

    condmutualinfo([measure::CMI], est::CMIEstimator, x, y, z) → cmi::Real

Estimate a conditional mutual information (CMI) of some kind (specified by `measure`) between `x` and `y`, given `z`, using the given dedicated `ConditionalMutualInformationEstimator`, which may be discrete, continuous, or mixed.
Estimators
| Estimator | Principle | CMIShannon | CMIRenyiPoczos |
|---|---|---|---|
| FPVP | Nearest neighbors | ✓ | x |
| MesnerShalizi | Nearest neighbors | ✓ | x |
| Rahimzamani | Nearest neighbors | ✓ | x |
| PoczosSchneiderCMI | Nearest neighbors | x | ✓ |
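For example, CMI can diagnose whether a dependence between two variables is explained by a common driver. A minimal sketch (the data and parameter values are arbitrary):

```julia
using CausalityTools

# Toy data: z drives both x and y, so I(x; y | z) should be close to zero,
# even though x and y are themselves correlated.
z = randn(1000)
x = z .+ randn(1000)
y = z .+ randn(1000)

condmutualinfo(CMIShannon(), FPVP(k = 5), x, y, z)
```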
    condmutualinfo([measure::CMI], est::ProbabilitiesEstimator, x, y, z) → cmi::Real ∈ [0, a)

Estimate the conditional mutual information (CMI) `measure` between `x` and `y` given `z`, as a sum of entropy terms (without any bias correction) computed with the provided `ProbabilitiesEstimator` `est`. If `measure` is not given, then the default is `CMIShannon()`.

With a `ProbabilitiesEstimator`, the returned `cmi` is guaranteed to be non-negative.
Estimators
| Estimator | Principle | CMIShannon | CMIRenyiSarbu |
|---|---|---|---|
| CountOccurrences | Frequencies | ✓ | ✓ |
| ValueHistogram | Binning (histogram) | ✓ | ✓ |
| SymbolicPermutation | Ordinal patterns | ✓ | ✓ |
| Dispersion | Dispersion patterns | ✓ | ✓ |
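For example, a histogram-based estimate might look like this (a minimal sketch; the binning choice is arbitrary):

```julia
using CausalityTools

x, y, z = randn(1000), randn(1000), randn(1000)

# Discretize each variable into rectangular bins, then estimate CMI
# as a sum of discrete entropy terms. The result is always ≥ 0.
est = ValueHistogram(RectangularBinning(4))
condmutualinfo(CMIShannon(), est, x, y, z)
```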
    condmutualinfo([measure::CMI], est::DifferentialEntropyEstimator, x, y, z) → cmi::Real

Estimate the mutual information between `x` and `y` conditioned on `z`, using the differential version of the given conditional mutual information (CMI) `measure`. The `DifferentialEntropyEstimator` `est` must support multivariate data. No bias correction is performed. If `measure` is not given, then the default is `CMIShannon()`.

`DifferentialEntropyEstimator`s have their own `base` field, which is not used here. Instead, this method creates a copy of `est` internally, where `est.base` is replaced by `measure.e.base`. Therefore, use `measure` to control the "unit" of the conditional mutual information.
Estimators
| Estimator | Principle | CMIShannon |
|---|---|---|
| Kraskov | Nearest neighbors | ✓ |
| Zhu | Nearest neighbors | ✓ |
| Gao | Nearest neighbors | ✓ |
| Goria | Nearest neighbors | ✓ |
| Lord | Nearest neighbors | ✓ |
| LeonenkoProzantoSavani | Nearest neighbors | ✓ |
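A sketch showing how the unit is controlled through the measure (this assumes `CMIShannon` accepts a `base` keyword, like the other Shannon-type measures in this package):

```julia
using CausalityTools

x, y, z = randn(1000), randn(1000), randn(1000)

# The estimator's own `base` field is ignored; `measure.e.base` determines
# the logarithm base, and hence the unit of the result.
measure = CMIShannon(base = 2)  # result in bits
condmutualinfo(measure, Kraskov(k = 5), x, y, z)
```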
    condmutualinfo([measure::CMI], est::MutualInformationEstimator, x, y, z) → cmi::Real

Estimate the mutual information between `x` and `y` conditioned on `z`, using the given conditional mutual information (CMI) `measure`, computed as a difference of mutual information terms (i.e. the chain rule of mutual information):

\[\hat{I}(X; Y | Z) = \hat{I}(X; Y, Z) - \hat{I}(X; Z).\]

The `MutualInformationEstimator` `est` may be continuous/differential, discrete, or mixed. No bias correction is performed, except the bias correction that occurs for each individual mutual information term. If `measure` is not given, then the default is `CMIShannon()`.
Estimators
| Estimator | Type | Principle | CMIShannon |
|---|---|---|---|
| KraskovStögbauerGrassberger1 | Continuous | Nearest neighbors | ✓ |
| KraskovStögbauerGrassberger2 | Continuous | Nearest neighbors | ✓ |
| GaoKannanOhViswanath | Mixed | Nearest neighbors | ✓ |
| GaoOhViswanath | Continuous | Nearest neighbors | ✓ |
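For instance, using the first Kraskov-Stögbauer-Grassberger estimator for both mutual information terms (a minimal sketch; parameter values are arbitrary):

```julia
using CausalityTools

x, y, z = randn(1000), randn(1000), randn(1000)

# Computed as Î(x; (y, z)) - Î(x; z), each term estimated with KSG1.
condmutualinfo(CMIShannon(), KraskovStögbauerGrassberger1(k = 5), x, y, z)
```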
Definitions
CausalityTools.ConditionalMutualInformation — Type

    ConditionalMutualInformation <: AssociationMeasure
    CMI # alias

The supertype of all conditional mutual information measures. Concrete subtypes include `CMIShannon`, `CMIRenyiPoczos`, and `CMIRenyiSarbu` (see the estimator tables above).
CausalityTools.ConditionalMutualInformationEstimator — Type

    ConditionalMutualInformationEstimator <: InformationEstimator
    CMIEstimator # alias

The supertype of all conditional mutual information estimators.

Subtypes

- GaussianCMI (parametric)
- FPVP
- MesnerShalizi
- PoczosSchneiderCMI
- Rahimzamani
CausalityTools.GaussianCMI — Type

    GaussianCMI <: ConditionalMutualInformationEstimator
    GaussianCMI(; normalize::Bool = false)

`GaussianCMI` is a parametric estimator for Shannon conditional mutual information (CMI) (Vejmelka and Paluš, 2008).

Description

`GaussianCMI` estimates Shannon CMI as a difference of two mutual information terms, each estimated using `GaussianMI` (the `normalize` keyword is the same as for `GaussianMI`):

\[\hat{I}_{Gaussian}(X; Y | Z) = \hat{I}_{Gaussian}(X; Y, Z) - \hat{I}_{Gaussian}(X; Z)\]
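A minimal usage sketch (the data are arbitrary):

```julia
using CausalityTools

x, y, z = randn(1000), randn(1000), randn(1000)

# Parametric estimate: fast, but implicitly assumes jointly Gaussian data.
condmutualinfo(CMIShannon(), GaussianCMI(normalize = false), x, y, z)
```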
FPVP

CausalityTools.FPVP — Type

    FPVP <: ConditionalMutualInformationEstimator
    FPVP(k = 1, w = 0)

The Frenzel-Pompe-Vejmelka-Paluš (or `FPVP` for short) estimator estimates the differential conditional mutual information using a `k`-th nearest neighbor approach analogous to that of the `KraskovStögbauerGrassberger1` mutual information estimator (Frenzel and Pompe, 2007; Vejmelka and Paluš, 2008).

`w` is the Theiler window, which controls the number of temporal neighbors that are excluded during neighbor searches.
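A sketch illustrating the role of `k` and `w` (parameter values are arbitrary):

```julia
using CausalityTools

# For autocorrelated time series, a nonzero Theiler window `w` excludes
# temporally close samples from the nearest-neighbor searches.
x, y, z = cumsum(randn(2000)), cumsum(randn(2000)), cumsum(randn(2000))

condmutualinfo(CMIShannon(), FPVP(k = 10, w = 5), x, y, z)
```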
MesnerShalizi

CausalityTools.MesnerShalizi — Type

    MesnerShalizi <: ConditionalMutualInformationEstimator
    MesnerShalizi(k = 1, w = 0)

The `MesnerShalizi` estimator estimates conditional mutual information for data that can be mixtures of discrete and continuous variables (Mesner and Shalizi, 2020).
PoczosSchneiderCMI

CausalityTools.PoczosSchneiderCMI — Type

    PoczosSchneiderCMI <: ConditionalMutualInformationEstimator
    PoczosSchneiderCMI(k = 1, w = 0)

The `PoczosSchneiderCMI` estimator computes various (differential) conditional mutual informations, using a `k`-th nearest neighbor approach (Póczos & Schneider, 2012)[Póczos2012].
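Per the table above, this is the dedicated estimator for the Rényi-type measure. A sketch (assuming `CMIRenyiPoczos` accepts the Rényi order `q` as a keyword):

```julia
using CausalityTools

x, y, z = randn(1000), randn(1000), randn(1000)

# Rényi-type CMI of (assumed keyword) order q = 2.
condmutualinfo(CMIRenyiPoczos(q = 2), PoczosSchneiderCMI(k = 5), x, y, z)
```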
Rahimzamani

CausalityTools.Rahimzamani — Type

    Rahimzamani <: ConditionalMutualInformationEstimator
    Rahimzamani(k = 1, w = 0)

The `Rahimzamani` estimator, short for Rahimzamani-Asnani-Viswanath-Kannan, estimates Shannon conditional mutual information for data that can be mixtures of discrete and continuous variables (Rahimzamani et al., 2018).

It is very similar to the `GaoKannanOhViswanath` mutual information estimator, but has been expanded to the conditional case.
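A sketch with mixed-type inputs (the discrete/continuous mixture is purely illustrative):

```julia
using CausalityTools

x = rand(1:3, 1000)    # discrete
y = randn(1000) .+ x   # continuous, depends on x
z = randn(1000)        # continuous, independent of both

condmutualinfo(CMIShannon(), Rahimzamani(k = 10), x, y, z)
```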
- [Póczos2012]: Póczos, B., & Schneider, J. (2012). Nonparametric estimation of conditional information and divergences. In Artificial Intelligence and Statistics (pp. 914-923). PMLR.