
Naive resampling

Uncertainties only in values

CausalityToolsBase.causality (method)

causality(x, y, test::CausalityTest)

Test for a causal influence from the source x to the target y using the provided causality test.

x and y may each be a real-valued vector, a Vector{<:AbstractUncertainValue}, or an AbstractUncertainValueDataset. If either or both are uncertain, the test is applied to a single draw from the uncertain data.

Examples

Generate some example data series x and y, where x influences y:

using Distributions

n_pts = 300
a₁, a₂, b₁, b₂, ξ₁, ξ₂, C₁₂ = 0.7, 0.1, 0.75, 0.2, 0.3, 0.3, 0.5

# Coupled autoregressive system: the first variable drives the second
# through the coupling C₁₂
D = rand(n_pts, 2)
for t in 5:n_pts
    D[t,1] = a₁*D[t-1,1] - a₂*D[t-3,1] +                ξ₁*rand(Normal(0, 1))
    D[t,2] = b₁*D[t-1,2] - b₂*D[t-2,2] + C₁₂*D[t-1,1] + ξ₂*rand(Normal(0, 1))
end

Gather time series and add some uncertainties to them:

using UncertainData

x, y = D[:, 1], D[:, 2]

# Represent each observation as a normal distribution centred on the observed
# value; rand() draws a single standard deviation in [0, 1) shared by all points
uvalx = UncertainValue.(Normal.(x, rand()))
uvaly = UncertainValue.(Normal.(y, rand()))

xd = UncertainValueDataset(uvalx)
yd = UncertainValueDataset(uvaly)
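
The calls below also assume that a causality test has been constructed and bound to pa_test. As a sketch, here is a predictive asymmetry test built on a transfer entropy (visitation frequency) test, mirroring the parameters used in the second example further down; the test types are assumed to be provided by the CausalityTools framework:

using CausalityTools   # assumed to provide the test types below

# Embedding parameters, prediction lags and a rectangular binning for the
# underlying transfer entropy estimator (values chosen for illustration)
k, l, m = 1, 1, 1
ηs = -8:8
n_subdivs = floor(Int, n_pts^(1/(k+l+m+1)))
bin = RectangularBinning(n_subdivs)
te_test = VisitationFrequencyTest(k = k, l = l, m = m, binning = bin, ηs = ηs)
pa_test = PredictiveAsymmetryTest(predictive_test = te_test)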

Any combination of certain and uncertain values will work:

causality(x, y, pa_test)
causality(x, yd, pa_test)
causality(xd, yd, pa_test)
causality(x, uvaly, pa_test)
causality(uvalx, uvaly, pa_test)

To run the test on multiple realisations of the uncertain yd while keeping x fixed:

[causality(x, yd, pa_test) for i = 1:100]
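
With the predictive asymmetry test above, each call returns a vector of asymmetry values, one per prediction lag considered. A minimal sketch of how such an ensemble might be summarised, assuming that return type:

using Statistics

# One result per draw of yd; each result is a vector of asymmetries
results = [causality(x, yd, pa_test) for i = 1:100]

# Mean asymmetry for each draw, collapsing across prediction lags
mean_asymmetries = mean.(results)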


Uncertainties in both indices and values

CausalityToolsBase.causality (method)

causality(source::AbstractUncertainIndexValueDataset, 
    target::AbstractUncertainIndexValueDataset, 
    test::CausalityTest)

Test for a causal influence from source to target using the provided causality test.

The test is performed on a single draw of the values of source and target. The indices are not used to re-order the draw: whatever ordering a resampling of the indices would imply is disregarded, and the points are taken in the order in which they appear in the datasets.

Note: if the uncertain values furnishing the indices have overlapping supports, a draw of the indices may violate the index ordering (e.g. time ordering) of the data points.

Example

Generate some example data series x and y, where x influences y:

using Distributions

n_pts = 300
a₁, a₂, b₁, b₂, ξ₁, ξ₂, C₁₂ = 0.7, 0.1, 0.75, 0.2, 0.3, 0.3, 0.5

# Coupled autoregressive system: the first variable drives the second
# through the coupling C₁₂
D = rand(n_pts, 2)
for t in 5:n_pts
    D[t,1] = a₁*D[t-1,1] - a₂*D[t-4,1] +                ξ₁*rand(Normal(0, 1))
    D[t,2] = b₁*D[t-1,2] - b₂*D[t-4,2] + C₁₂*D[t-1,1] + ξ₂*rand(Normal(0, 1))
end

x, y = D[:, 1], D[:, 2]

Add some uncertainties to both indices and values, and gather them in UncertainIndexValueDatasets:

using UncertainData

# Uncertain time indices and uncertain values, each represented as a normal
# distribution with a randomly drawn standard deviation
t = 1:n_pts
tu = UncertainValue.(Normal.(t, rand()))
xu = UncertainValue.(Normal.(x, rand()))
yu = UncertainValue.(Normal.(y, rand()))

X = UncertainIndexValueDataset(tu, xu)
Y = UncertainIndexValueDataset(tu, yu)
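
As noted above, the index distributions constructed here overlap (neighbouring time points are one unit apart, while the standard deviation drawn by rand() can approach one), so a single draw of the indices need not be ordered. A rough sketch of how to check this, assuming resample draws one value from each uncertain index:

# Draw one realisation of each uncertain index and check the ordering
index_draw = resample.(tu)
issorted(index_draw)   # may be false when neighbouring index distributions overlap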

Define a causality test, for example the predictive asymmetry test:

using CausalityTools   # assumed to provide the test types below

# Define causality test
k, l, m = 1, 1, 1   # embedding parameters for the transfer entropy estimator
ηs = -8:8           # prediction lags
# Number of bins along each coordinate axis, scaling with the number of points
# and the dimension of the delay embedding
n_subdivs = floor(Int, n_pts^(1/(k+l+m+1)))
bin = RectangularBinning(n_subdivs)
te_test = VisitationFrequencyTest(k = k, l = l, m = m, binning = bin, ηs = ηs)
pa_test = PredictiveAsymmetryTest(predictive_test = te_test)

Run the causality test on a single draw of X and a single draw of Y:

pa_XY = causality(X, Y, pa_test)
pa_YX = causality(Y, X, pa_test)
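
For the predictive asymmetry test, positive asymmetries are taken as evidence for an influence in the tested direction. A quick sketch of how the two directions might be compared, assuming each result is a vector of asymmetries:

using Statistics

# Mean asymmetry across prediction lags for each direction; since x drives y
# in this system, the X → Y direction is expected to yield the larger value
mean(pa_XY)
mean(pa_YX)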

Repeat the test on multiple draws:

pa_XY = [causality(X, Y, pa_test) for i = 1:100]
pa_YX = [causality(Y, X, pa_test) for i = 1:100]
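
Each element of pa_XY and pa_YX is itself a vector of asymmetries over the prediction lags. One way to summarise the draws lag by lag (a sketch, assuming that return type):

using Statistics

# Stack the draws column-wise and average across them, giving the mean
# predictive asymmetry at each prediction lag
mean_pa_XY = mean(hcat(pa_XY...), dims = 2)
mean_pa_YX = mean(hcat(pa_YX...), dims = 2)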
