# Convenience functions

## Common entropy-based measures

We provide convenience functions for a few widely used entropy or "entropy-like" quantities. Other specialized convenience functions can easily be defined in a couple of lines of code.

We emphasize that these functions really are nothing more than two-line wrappers that call `information` with the appropriate `OutcomeSpace` and `InformationMeasure`.

`ComplexityMeasures.entropy_permutation` — Function

`entropy_permutation(x; τ = 1, m = 3, base = 2)`

Compute the permutation entropy of `x` of order `m` with delay/lag `τ`. This function is just a convenience call to:

```
est = OrdinalPatterns(; m, τ)
information(Shannon(base), est, x)
```

See `OrdinalPatterns` for more info. Similarly, one can use `WeightedOrdinalPatterns` or `AmplitudeAwareOrdinalPatterns` for the weighted/amplitude-aware versions.
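A quick usage sketch (assumes the package is loaded; the white-noise input and the commented-on range are illustrative, not part of the docstring):

```julia
using ComplexityMeasures

x = rand(10_000)  # i.i.d. uniform noise, so all ordinal patterns are (roughly) equally likely
h = entropy_permutation(x; m = 3, τ = 1, base = 2)
# For m = 3 there are 3! = 6 possible patterns, so h is bounded by log2(6) ≈ 2.585 bits,
# and for white noise it should be close to that bound.
```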

`ComplexityMeasures.entropy_wavelet` — Function

`entropy_wavelet(x; wavelet = Wavelets.WT.Daubechies{12}(), base = 2)`

Compute the wavelet entropy of `x`. This function is just a convenience call to:

```
est = WaveletOverlap(wavelet)
information(Shannon(base), est, x)
```

See `WaveletOverlap` for more info.

`ComplexityMeasures.entropy_dispersion` — Function

`entropy_dispersion(x; base = 2, kwargs...)`

Compute the dispersion entropy of `x`. This function is just a convenience call to:

```
est = Dispersion(; kwargs...)
information(Shannon(base), est, x)
```

See `Dispersion` for more info.

`ComplexityMeasures.entropy_approx` — Function

`entropy_approx(x; m = 2, τ = 1, r = 0.2 * Statistics.std(x), base = MathConstants.e)`

Convenience syntax for computing the approximate entropy (Pincus, 1991) for timeseries `x`.

This is just a wrapper for `complexity(ApproximateEntropy(; m, τ, r, base), x)` (see also `ApproximateEntropy`).
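A minimal usage sketch (the sinusoidal input is illustrative; approximate entropy should be low for such a regular signal, but the exact value depends on the parameters):

```julia
using ComplexityMeasures, Statistics

x = sin.(range(0, 20π; length = 2000))  # highly regular signal → low approximate entropy
h = entropy_approx(x; m = 2, τ = 1, r = 0.2 * Statistics.std(x))
# Equivalent to complexity(ApproximateEntropy(; m = 2, τ = 1, r = 0.2 * std(x)), x).
```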

`ComplexityMeasures.entropy_sample` — Function

`entropy_sample(x; r = 0.2std(x), m = 2, τ = 1, normalize = true)`

Convenience syntax for estimating the (normalized) sample entropy (Richman & Moorman, 2000) of timeseries `x`.

This is just a wrapper for `complexity(SampleEntropy(; r, m, τ), x)` (or `complexity_normalized` when `normalize = true`).

See also: `SampleEntropy`, `complexity`, `complexity_normalized`.

`ComplexityMeasures.entropy_distribution` — Function

`entropy_distribution(x; τ = 1, m = 3, n = 3, base = 2)`

Compute the distribution entropy (Li *et al.*, 2015) of `x` using embedding dimension `m` with delay/lag `τ`, the Chebyshev distance metric, and an `n`-element equally-spaced binning over the distribution of distances to estimate probabilities.

This function is just a convenience call to:

```
est = SequentialPairDistances(x, n, m, τ, metric = Chebyshev())
information(Shannon(base), est, x)
```

See `SequentialPairDistances` for more info.
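A short usage sketch (the random input is illustrative; with Shannon entropy in base 2 and `n` bins, the result is bounded by `log2(n)`):

```julia
using ComplexityMeasures

x = rand(5_000)  # example timeseries
h = entropy_distribution(x; τ = 1, m = 3, n = 3, base = 2)
# h lies in [0, log2(3)] since probabilities come from an n = 3 element binning.
```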