
I'm trying to solve the Lotka-Volterra problem that Chris Rackauckas explained at JuliaCon 2020, and I get an error when I try to calculate the loss between the prediction and the noisy data.

I have a separate project environment for this exercise with its own Manifest.toml and Project.toml files. I have added all the packages and modules, and I do not get any error when I run the "using ..." lines. I change to the folder and activate the environment every time before I run the code. I have even removed all the packages and modules and reinstalled Julia. I always get the same error.

Here is my code, which is exactly the same code Chris uses in the YouTube video:

## Environment and packages
cd(@__DIR__)
using Pkg; Pkg.activate("."); Pkg.instantiate()

using OrdinaryDiffEq
using ModelingToolkit
using DataDrivenDiffEq
using LinearAlgebra, DiffEqSensitivity, Optim
using DiffEqFlux, Flux
using Plots
gr()

## Data generation
function lotka!(du, u, p, t)
    α, β, γ, δ = p
    du[1] = α*u[1] - β*u[2]*u[1]
    du[2] = γ*u[1]*u[2]  - δ*u[2]
end

# Define the experimental parameter
tspan = (0.0,3.0)
u0 = [0.44249296,4.6280594]
p_ = [1.3, 0.9, 0.8, 1.8]
prob = ODEProblem(lotka!, u0,tspan, p_)
solution = solve(prob, Vern7(), abstol=1e-12, reltol=1e-12, saveat = 0.1)

scatter(solution, alpha = 0.25)
plot!(solution, alpha = 0.5)

# Ideal data
tsdata = Array(solution)
# Add noise in terms of the mean
noisy_data = tsdata + Float32(1e-5)*randn(eltype(tsdata), size(tsdata))

plot(abs.(tsdata-noisy_data)')

# Define the neural_network
ann = FastChain(FastDense(2,32,tanh), FastDense(32,32,tanh), FastDense(32,2))
p = initial_params(ann)

# Now we define the function with the parameters that we know (p_[1] and p_[4]) and use the neural
# network for the terms that we do not know (z[1] and z[2])

function dudt_(u,p,t)
    x, y = u
    z = ann(u,p)
    [p_[1]*x + z[1],
    -p_[4]*y + z[2]]
end

prob_nn = ODEProblem(dudt_, u0, tspan, p)
s = concrete_solve(prob_nn, Tsit5(), u0, p, saveat = solution.t)

plot(solution)
plot!(s)

# The predictions are not perfect, so we build a predict function and try to optimize

function predict(θ)
    Array(concrete_solve(prob_nn, Vern7(), u0, θ, saveat = solution.t,
                        abstol=1e-6, reltol=1e-6,
                        sensealg = InterpolatingAdjoint(autojacvec = true)))
end

# No regularization right now

function loss(θ)
    pred = predict(θ)
    sum(abs2, noisy_data .- pred), pred
end

loss(p)

The error that I get is:

WARNING: both DiffEqFlux and DiffEqSensitivity export "InterpolatingAdjoint"; uses of it in module Main must be qualified
ERROR: UndefVarError: InterpolatingAdjoint not defined

I'm using Julia v1.6.7 on Windows 10, and I code in VSCode.

David

2 Answers


While the OP should probably follow Mr. Rackauckas' advice, his answer doesn't address the error that the OP is getting. The error Julia is reporting comes from the keyword argument:

sensealg = InterpolatingAdjoint(autojacvec = true)

and it tells us that two of the imported packages, DiffEqFlux and DiffEqSensitivity, both define an object/function called InterpolatingAdjoint, so any use of it in the Main module must be qualified; in other words, we must tell Julia which package's InterpolatingAdjoint we would like to use. This can be done as DiffEqFlux.InterpolatingAdjoint or DiffEqSensitivity.InterpolatingAdjoint.
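For example, in the OP's predict function, the keyword argument can be written with the qualified name. This is only a sketch of that one fix (using DiffEqSensitivity here, since that is the package the adjoint originally came from); it does not guarantee the rest of the old code still runs:

function predict(θ)
    Array(concrete_solve(prob_nn, Vern7(), u0, θ, saveat = solution.t,
                        abstol=1e-6, reltol=1e-6,
                        # qualify the name so Julia knows which package's type to use
                        sensealg = DiffEqSensitivity.InterpolatingAdjoint(autojacvec = true)))
end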

As Mr. Rackauckas pointed out, the video is pretty old, and this qualified use of InterpolatingAdjoint may or may not fix other issues that can exist with a deprecated package.

ITA

DiffEqSensitivity was deprecated about a year ago. The video is pretty old. I would recommend following the current documentation:

https://docs.sciml.ai/SciMLSensitivity/dev/getting_started/

It uses SciMLSensitivity instead of DiffEqSensitivity.

(Note: you should be able to get DiffEqSensitivity to work, of course, because that's what the video used back then, but the simplest thing to do is to follow the current documentation, which got a revamp in 2022.)
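If you do want to move to the maintained package, a minimal sketch of the swap, assuming you start from the OP's project environment, is:

using Pkg
Pkg.activate(".")
Pkg.rm("DiffEqSensitivity")   # drop the deprecated package if it is installed
Pkg.add("SciMLSensitivity")   # add the maintained replacement

using SciMLSensitivity        # InterpolatingAdjoint and friends now come from here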

The first tutorial page is the same example:

https://docs.sciml.ai/SciMLSensitivity/dev/tutorials/parameter_estimation_ode/

This looks like:

using DifferentialEquations, Optimization, OptimizationPolyalgorithms, SciMLSensitivity,
      Zygote, Plots

function lotka_volterra!(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α * x - β * x * y
    du[2] = dy = -δ * y + γ * x * y
end

# Initial condition
u0 = [1.0, 1.0]

# Simulation interval and intermediary points
tspan = (0.0, 10.0)
tsteps = 0.0:0.1:10.0

# LV equation parameter. p = [α, β, δ, γ]
p = [1.5, 1.0, 3.0, 1.0]

# Setup the ODE problem, then solve
prob = ODEProblem(lotka_volterra!, u0, tspan, p)
sol = solve(prob, Tsit5())

# Plot the solution
using Plots
plot(sol)
savefig("LV_ode.png")

function loss(p)
    sol = solve(prob, Tsit5(), p = p, saveat = tsteps)
    loss = sum(abs2, sol .- 1)
    return loss, sol
end

callback = function (p, l, pred)
    display(l)
    plt = plot(pred, ylim = (0, 6))
    display(plt)
    # Tell Optimization.solve to not halt the optimization. If return true, then
    # optimization stops.
    return false
end

adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, p)

result_ode = Optimization.solve(optprob, PolyOpt(),
                                callback = callback,
                                maxiters = 100)

And for completeness, the universal differential equation automated model discovery example is found at this documentation page:

https://docs.sciml.ai/Overview/stable/showcase/missing_physics/

That is what you were looking at, which is:

# SciML Tools
using OrdinaryDiffEq, ModelingToolkit, DataDrivenDiffEq, SciMLSensitivity, DataDrivenSparse
using Optimization, OptimizationOptimisers, OptimizationOptimJL

# Standard Libraries
using LinearAlgebra, Statistics

# External Libraries
using ComponentArrays, Lux, Zygote, Plots, StableRNGs
gr()

# Set a random seed for reproducible behaviour
rng = StableRNG(1111)

function lotka!(du, u, p, t)
    α, β, γ, δ = p
    du[1] = α * u[1] - β * u[2] * u[1]
    du[2] = γ * u[1] * u[2] - δ * u[2]
end

# Define the experimental parameter
tspan = (0.0, 5.0)
u0 = 5.0f0 * rand(rng, 2)
p_ = [1.3, 0.9, 0.8, 1.8]
prob = ODEProblem(lotka!, u0, tspan, p_)
solution = solve(prob, Vern7(), abstol = 1e-12, reltol = 1e-12, saveat = 0.25)

# Add noise in terms of the mean
X = Array(solution)
t = solution.t

x̄ = mean(X, dims = 2)
noise_magnitude = 5e-3
Xₙ = X .+ (noise_magnitude * x̄) .* randn(rng, eltype(X), size(X))

plot(solution, alpha = 0.75, color = :black, label = ["True Data" nothing])
scatter!(t, transpose(Xₙ), color = :red, label = ["Noisy Data" nothing])

rbf(x) = exp.(-(x .^ 2))

# Multilayer FeedForward
U = Lux.Chain(Lux.Dense(2, 5, rbf), Lux.Dense(5, 5, rbf), Lux.Dense(5, 5, rbf),
              Lux.Dense(5, 2))
# Get the initial parameters and state variables of the model
p, st = Lux.setup(rng, U)

# Define the hybrid model
function ude_dynamics!(du, u, p, t, p_true)
    û = U(u, p, st)[1] # Network prediction
    du[1] = p_true[1] * u[1] + û[1]
    du[2] = -p_true[4] * u[2] + û[2]
end

# Closure with the known parameter
nn_dynamics!(du, u, p, t) = ude_dynamics!(du, u, p, t, p_)
# Define the problem
prob_nn = ODEProblem(nn_dynamics!, Xₙ[:, 1], tspan, p)

function predict(θ, X = Xₙ[:, 1], T = t)
    _prob = remake(prob_nn, u0 = X, tspan = (T[1], T[end]), p = θ)
    Array(solve(_prob, Vern7(), saveat = T,
                abstol = 1e-6, reltol = 1e-6))
end

function loss(θ)
    X̂ = predict(θ)
    mean(abs2, Xₙ .- X̂)
end

losses = Float64[]

callback = function (p, l)
    push!(losses, l)
    if length(losses) % 50 == 0
        println("Current loss after $(length(losses)) iterations: $(losses[end])")
    end
    return false
end

adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentVector{Float64}(p))

res1 = Optimization.solve(optprob, ADAM(), callback = callback, maxiters = 5000)
println("Training loss after $(length(losses)) iterations: $(losses[end])")

optprob2 = Optimization.OptimizationProblem(optf, res1.u)
res2 = Optimization.solve(optprob2, Optim.LBFGS(), callback = callback, maxiters = 1000)
println("Final training loss after $(length(losses)) iterations: $(losses[end])")

# Rename the best candidate
p_trained = res2.u

and then continues with the sparse regression for symbolic recovery.
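A rough sketch of the shape of that step, following the linked showcase (the names X̂ and Ŷ are assumptions here, standing for the fitted trajectory and the trained network's outputs on it; see the documentation for the full, authoritative version):

# Evaluate the trained network along the fitted trajectory
X̂ = predict(p_trained, Xₙ[:, 1], t)
Ŷ = U(X̂, p_trained, st)[1]

# Build a polynomial candidate basis and sparse-regress the network outputs onto it
@variables u[1:2]
basis = Basis(polynomial_basis(u, 4), u)
nn_problem = DirectDataDrivenProblem(X̂, Ŷ)
opt = ADMM(exp10.(-3:0.01:3))
nn_res = solve(nn_problem, basis, opt)
nn_eqs = get_basis(nn_res)    # recovered symbolic equations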

Chris Rackauckas
  • Thank you for the advice. I was using the JuliaCon 2020 video because it's a great lesson, with the coding and complete explanations of every single step; it's one of the best I have ever watched. I will watch the complete video again for the explanations and then use the links you suggest. Thank you very much!!! – David Apr 02 '23 at 09:44