Library
Contractors
These are interval contractors implemented here; in the future they may be replaced by established libraries, such as IntervalRootFinding.jl.
RigorousInvariantMeasures.preimage_monotonic — Function

Compute the guaranteed interval preimage f⁻¹(y) of a point y ∈ ℝ under the function f: [a,b] → ℝ. This preimage is guaranteed to lie inside the real interval x = hull(a,b).

f must be strictly monotonic on [a,b], at least in the sense of MonotonicBranch functions (allowing for uncertainty on the endpoints). f must be differentiable; zero and infinite derivatives are allowed.

y1, y2 must be guaranteed to be equal to f(a), f(b) or "more outside", so that y ∉ hull(y1, f(z)) ⟹ f⁻¹(y) ∉ hull(a, z).

Stops when the interval reaches a fixed point, when the diameter is smaller than ε, or when max_iter iterations are reached (with an error).
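The contraction loop can be illustrated with a plain-Float64 bisection sketch (no directed rounding, so this is not rigorous interval arithmetic; `preimage_bisection` is a hypothetical name, not the package's implementation):

```julia
# Sketch: find x in [a, b] with f(x) = y for a strictly increasing f,
# by plain bisection.  Not rigorous: no outward rounding is performed.
function preimage_bisection(f, y, a, b; eps = 1e-12, max_iter = 200)
    for _ in 1:max_iter
        b - a < eps && return (a, b)     # diameter small enough: stop
        m = (a + b) / 2
        f(m) < y ? (a = m) : (b = m)     # keep the half that contains f⁻¹(y)
    end
    error("max_iter iterations reached")
end

lo, hi = preimage_bisection(x -> x^2, 2.0, 1.0, 2.0)  # both ends ≈ sqrt(2)
```

The real implementation additionally uses the derivative of f and interval arithmetic to keep the enclosure guaranteed.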
RigorousInvariantMeasures.unique_increasing — Method

unique_increasing(a, b)

Given intervals a, b, returns true if a < b, false if b < a, and raises an error if the order is not uniquely determined.
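The decision rule can be sketched with intervals represented as plain `(lo, hi)` tuples (`unique_increasing_sketch` is a hypothetical helper, not the package's implementation):

```julia
# a < b is certain iff a.hi < b.lo; b < a is certain iff b.hi < a.lo;
# otherwise the intervals overlap and the order is undetermined.
function unique_increasing_sketch(a::Tuple, b::Tuple)
    a[2] < b[1] && return true
    b[2] < a[1] && return false
    error("insufficient precision: the order is not uniquely determined")
end

unique_increasing_sketch((0.0, 0.1), (0.2, 0.3))  # true
```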
Preimages
Optimized methods to compute preimages of $1$-dimensional maps.
RigorousInvariantMeasures.ComposedDynamic — Type

Composed map D1 ∘ D2 ∘ D3. We store the factors as [D1, D2, D3], in this order.

We overload ∘ from Base, so one can simply write D1 ∘ D2 or ∘(D1, D2, D3) to construct them.
RigorousInvariantMeasures.first_overlapping — Method

first_overlapping(y, a)

Smallest possible i such that a is in the semi-open interval [y[i], y[i+1]).

This works properly even if a, y are intervals; in this case it returns the smallest possible value of i over all possible "assignments" of a, y inside those intervals. Assumes y is sorted, i.e., map(x -> Interval(x).lo, y) and map(x -> Interval(x).hi, y) are sorted.
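For point (non-interval) inputs, the index in question is exactly what `Base.searchsortedlast` computes; the interval version then takes the smallest admissible index over all assignments. A non-interval sketch:

```julia
# For sorted y and a point a, i = searchsortedlast(y, a) satisfies
# y[i] <= a < y[i+1].
y = [0.0, 0.25, 0.5, 0.75, 1.0]
i = searchsortedlast(y, 0.6)   # i == 3: y[3] = 0.5 ≤ 0.6 < y[4] = 0.75
```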
RigorousInvariantMeasures.last_overlapping — Method

last_overlapping(y, a)

Largest possible j such that a-ε is in the semi-open interval [y[j], y[j+1]).

This works properly even if a, y are intervals; in this case it returns the largest possible value of j over all possible "assignments" of a, y inside those intervals. Assumes y is sorted, i.e., map(x -> Interval(x).lo, y) and map(x -> Interval(x).hi, y) are sorted.
RigorousInvariantMeasures.preimage — Function

Compute the preimage f⁻¹(y), knowing that it lies inside search_interval.
RigorousInvariantMeasures.preimages — Function

preimages(y, D::Dynamic, ylabel = 1:length(y), ϵ = 0.0; progress = true)

Construct preimages of an increasing array y under a dynamic, propagating additional labels ylabel.
RigorousInvariantMeasures.preimages — Function

preimages(y, br::MonotonicBranch, ylabel = 1:length(y); ϵ, max_iter)

Construct preimages of a partition y under a monotonic branch br defined on X = (a, b), propagating additional labels ylabel.

It is assumed that it cannot happen that $f(x) < y[1]$.
Extended help
The sequence y subdivides the y-axis into semi-open intervals [y[l], y[l+1]); each of them is identified by the label ylabel[l]. We construct an increasing sequence x that splits X (on the x-axis) into semi-open intervals, each of them with f([x[k], x[k+1])) ⊂ [y[l], y[l+1]) for a certain l. We set xlabel[k] = ylabel[l], and return the pair (x, xlabel).
In the simplest case where D is full-branch, the points in x are preimages of the points in y, but in the general case they can also include D.endpoints: in general, there may be a certain number of points in y that have no preimage at the beginning and the end of the sequence, because they fall out of the range R = [f(a), f(b)]. In the worst case, no point has a preimage, because y[i] < R < y[i+1] for some i (or vice versa with orientations), and in this case we just return the 1-element vectors x = [branch.X[1]] and xlabel = [i].
x[begin] always coincides with branch.X[1], while branch.X[2] is "the point after x[end]", and is not stored explicitly in x, for easier composing. In this way x and xlabel have the same length.
This function fills the array using a bisection strategy to save computations: if y ∈ [a,b], then $f^{-1}(y) \in [f^{-1}(a), f^{-1}(b)]$ (paying attention to orientation). So we can fill v by computing first the entries v[k+1] with higher dyadic valuation of k.
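The fill order can be sketched as follows (`dyadic_order` is a hypothetical helper; n is assumed to be a power of two): indices are visited in decreasing dyadic valuation, so each new bisection search is bracketed by two preimages that have already been computed.

```julia
# Visit the indices 1..n-1 in order of decreasing dyadic valuation.
function dyadic_order(n)
    order = Int[]
    step = n
    while step >= 2
        append!(order, step ÷ 2 : step : n - 1)  # midpoints at this level
        step ÷= 2
    end
    order
end

dyadic_order(8)  # [4, 2, 6, 1, 3, 5, 7]
```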
For a dynamic with multiple branches, preimages(y, D) is simply the concatenation of x, xlabel for b in all branches. These values still form an increasing sequence that splits X into intervals, each of which is mapped into a different semi-open interval [y[k], y[k+1]).
Example
julia> using RigorousInvariantMeasures
julia> D = mod1_dynamic(x->2x)
Piecewise-defined dynamic with 2 branches
julia> RigorousInvariantMeasures.preimages(0:0.1:1, D.branches[1]; ϵ = 10^(-15), max_iter = 100)
(Interval{Float64}[[0, 0], [0.05, 0.0500001], [0.1, 0.100001], [0.149999, 0.15], [0.2, 0.200001], [0.25, 0.25], [0.299999, 0.3], [0.349999, 0.35], [0.4, 0.400001], [0.45, 0.450001]], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
RigorousInvariantMeasures.preimages_and_derivatives — Function

preimages_and_derivatives(y, br::MonotonicBranch, ylabel = 1:length(y); ϵ, maxiter, left = true)

Compute preimages of D and the derivatives f'(x) at each point.

Returns: x, xlabel, x′.

We combine the two computations in the same function because in this way it can be implemented more efficiently for composed dynamics.

This assumes that (1) the dynamic is full-branch, and (2) all branches have the same orientation. This is not restrictive because we'll need it only for the Hat assembler (at the moment).

If left == true, x′ contains the derivatives f'.(x). If left == false, x′ contains the derivatives at the right endpoint of each interval of the partition, i.e., for each branch, [f'(x[2]), f'(x[3]), ..., f'(x[end]), f'(br.X[2])]. In either case, length(x) == length(x′).
RigorousInvariantMeasures.range_estimate_monotone — Method

Utility function that estimates the range of a monotone function.
Differentiation interface
RigorousInvariantMeasures.check_derivatives — Function

check_derivatives(f, x = rand())

Checks (using assertions) that the derivatives of f agree (up to the square root of machine precision) with those computed by TaylorSeries. This is a useful sanity check if you redefine derivatives.

The derivatives are checked at a point x (default: rand()), which should be in the domain of f.
```julia
julia> f = @define_with_derivatives x->x^2 x->2x x->2;

julia> check_derivatives(f, 0.2)   # derivatives are consistent: the checks pass

julia> g = @define_with_derivatives x->x^2 x->2x x->3;

julia> check_derivatives(g, 0.2)   # fails: the second derivative should be 2, not 3
```
RigorousInvariantMeasures.distortion — Method

The distortion of f is f'' / (f')^2.
RigorousInvariantMeasures.inverse_derivative — Method

The inverse_derivative of f is 1/f'.
RigorousInvariantMeasures.value_and_derivative — Method

value_and_derivative(f, x)

Evaluate f(x) and f'(x) at the same time. We define a single method to do this because often there are shared subexpressions between a function and its derivatives, or they are computed together by automatic differentiation packages, so this is often more efficient than having two separate functions for f and f' and calling them separately.

The generic implementation uses TaylorSeries to compute derivatives. It can be overwritten for individual functions:

f = x -> x^2
value_and_derivative(f::typeof(f), x) = (x^2, 2x)

See @define_with_derivatives to define derivatives with three separate expressions.

Remark: when specializing derivatives by hand, always specialize the two-argument functions (e.g., value_and_derivative(f, x)) rather than the one-argument ones (e.g., value_and_derivative(f)), since the latter are defined generically in terms of the former.
Remark: a tricky point that can cause subtle bugs: functions created on the same line of code will often have the same type; so for instance
```jldoctest
julia> fs = [x -> k*x for k in 1:3]; # the three elements here have the same type
julia> typeof(fs[1]) == typeof(fs[3])
true
julia> RigorousInvariantMeasures.derivative(f::typeof(fs[1]), x) = 1;
julia> RigorousInvariantMeasures.derivative(f::typeof(fs[2]), x) = 2;
julia> RigorousInvariantMeasures.derivative(f::typeof(fs[3]), x) = 3;
julia> RigorousInvariantMeasures.derivative(fs[1], 0.5) # this returns 3, not 1
3
```
RigorousInvariantMeasures.value_derivative_and_second_derivative — Method

value_derivative_and_second_derivative(f, x)

Generic method to evaluate f(x), f'(x) and f''(x) at the same time.

See value_and_derivative for more detail.
RigorousInvariantMeasures.@define_with_derivatives — Macro

f = @define_with_derivatives(ex1, ex2, ex3)

Declares a new function f, and redefines the various functions related to derivatives so that its derivatives are computed with the given expressions.
```julia
julia> f = @define_with_derivatives x->x^2 x->2x x->2;

julia> value_and_derivative(f, 45)
(2025, 90)
```
Note that the three macro parameters are separated just by spaces (no commas or parentheses):
```julia
julia> g = @define_with_derivatives x->12 x->34 x->56;

julia> value_derivative_and_second_derivative(g, -23.5)
(12, 34, 56)
```
This is provided for convenience, but note that in many cases one can find common subexpressions between a function and its derivatives; hence it is more efficient to define the functions separately.

If you define derivatives in this way, it is recommended to run check_derivatives to ensure that you did not make any mistakes (e.g., forgetting a factor 2). We do not run it automatically because that would require knowing a valid x in the domain of f.