
I'm using scipy.integrate's odeint function to evaluate the time evolution of solutions to the equation

$$ \dot x = -\frac{f(x)}{g(x)}, $$

where $f$ and $g$ are both functions of $x$. $f,g$ are given by series of the form

$$ f(x) = x(1 + \sum_k b_k x^{k/2}) $$

$$ g(x) = 1 + \sum_k a_k (1 + k/2) x^{k/2}. $$

All positive initial values of $x$ should result in the solution blowing up in time, but they don't...well, not always.

The coefficients $a_k, b_k$ are long polynomial expressions: $b_k$ depends on $x$ in a certain way, while $a_k$ depends on several quantities that are held constant.
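For reference, $f$ is computed the same way in both versions below; here is a minimal sketch of it along the lines of the series above (assuming 'b' holds the coefficients $b_k$ and the sum starts at $k = 0$, as in 'g'):

import numpy as np

def f(x, b):
    # f(x) = x * (1 + sum_k b_k * x**(k/2))
    fterms = [b[k] * x**(0.5*k) for k in range(len(b))]
    return x * (1. + np.sum(fterms))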

Depending on the way I compute $g(x)$, I get very different behavior.

The first way I tried is as follows. 'a' and 'b' are one-dimensional numpy arrays of length 8 and 9, respectively. Note that in the function g(x, a), 'a' is multiplied by 'gterms' in line 3, and does not appear in line 2.

def g(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(a*gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)

The second way was this:

def g(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)

Note the difference: using the first method, I stuck the array 'a' into the sum in line 3, whereas using the second method, I stuck the values of 'a' into the list 'gterms' in line 2 instead.

The first method gives the expected behavior: solutions blow up for positive x. The second method, however, does not. It produces a bifurcation at some x0 > 0 that acts as a source: for initial conditions greater than x0, solutions blow up as expected, but for initial conditions less than x0, the solutions tend to 0 very slowly.
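One check that might help narrow this down is evaluating both versions of g on the same inputs and seeing where they agree or disagree, both for a scalar x and for a length-1 array x (which is what odeint can hand to rhs). A minimal sketch of that check, with a made-up coefficient array standing in for the real An():

import numpy as np

def g_first(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(a*gterms)

def g_second(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(gterms)

a = np.linspace(0.1, 0.8, 8)   # stand-in for An(); the real coefficients are long expressions
for x in (0.01, 0.5, 2.0):     # scalar x
    print(x, g_first(x, a), g_second(x, a))
x = np.array([0.5])            # length-1 array x
print(x, g_first(x, a), g_second(x, a))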

Something else of note: in the rhs function, if I change it from

def rhs(u, t):
    x = u
    ...
    return .

to

def rhs(u, t):
    x = u[0]
    ...
    return .

the exact same change in behavior occurs.
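In case it matters, the shape of what odeint passes into rhs can be inspected directly; here is a minimal sketch with a placeholder right-hand side (the function name and the trivial return are just for illustration):

import numpy as np
from scipy.integrate import odeint

def rhs_debug(u, t):
    x = u            # x stays whatever odeint passed in
    # x = u[0]       # with this line instead, x is a plain scalar
    print(type(u), np.shape(u), type(x), np.shape(x))
    return -x        # placeholder right-hand side, just to make the call run

t = np.linspace(0., 1., 5)
odeint(rhs_debug, 1.0, t)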

So my question is: what is the difference between the two different methods I used? I can't tell for the life of me what is actually going on here. Sorry for being so verbose.

Waldinian
    Have you verified that `a` is, in fact, a one-dimensional numpy array? Print `a.shape`. Also, simply "blowing up" is a pretty coarse assessment of the correct behavior. For the cases where both methods blow up as expected, do they give the same sequence of values? – Warren Weckesser Jun 08 '17 at 21:05
  • So when you print `a.shape`, you get `(n,)`, and not something like `(n, 1)` or `(1, n)`? – Warren Weckesser Jun 09 '17 at 14:50
  • Hi, thanks for the reply. a and b are definitely 1-dimensional arrays (`a.shape` and `b.shape` return `(8,)` and `(9,)`, respectively). And I was definitely vague there. The two solutions that blow up as expected do give the same sequence of values. I don't know how much this will help, but here are some images of the drastic change in behavior I'm getting. [Plots](http://imgur.com/a/cAHhn) – Waldinian Jun 09 '17 at 15:03
  • Thanks for the update. I think anyone who wants to help you will need a [minimal, complete and verifiable example](https://stackoverflow.com/help/mcve) that can be run to demonstrate the problem. – Warren Weckesser Jun 09 '17 at 15:11
  • Hi Warren, this is getting back to you a little late, but here's what I've come up with, if you're still interested in this strange problem. I've made what should be a minimal, complete and verifiable [code](https://pastebin.com/URCwaxKp), let me know if this is what you were referring to, and if it helps – Waldinian Jun 22 '17 at 19:39

0 Answers