
I'm using Google Colab to solve the homogeneous heat equation. I had made a program earlier with SciPy using sparse matrices, which worked up to N = 10 (a hyperparameter), but I need to run it for N = 4 ... 1000, so it won't work on my PC. I therefore converted the code to TensorFlow, but here I'm unable to use sparse matrices the way I could in SciPy, and even the GPU/TPU computation is slow, in fact slower than my PC. These are the problems I'm facing in the code and need solutions for:

1) tf.contrib has been removed, so I have to use an older version of TensorFlow for the odeint function. Where is it in 2.0? (A sketch of a possible replacement appears after the code below.)

2) It would be good if the computation could be done with sparse matrices, since the matrices are tridiagonal. I know about the tf.sparse.sparse_dense_matmul() function, but it returns a dense tensor, so on its own it wouldn't do the job. The func method applies time-independent boundary conditions and then needs a matrix multiplication of an (n, n) matrix with an (n, 1) vector, giving (n, 1), with several matrices involved; one possible approach is sketched just below.
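
A minimal sketch of that approach, assuming TF 2.x: tf.sparse.sparse_dense_matmul requires a rank-2 dense operand, so the vector has to be expanded to a column and squeezed back. The values are the same mass-matrix entries as in the code below:

import tensorflow as tf

N = 4
H = 1 / N

# Same tridiagonal mass matrix as in the class below, kept as a SparseTensor.
A_sp = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 1]] +
            [[i, i + j] for i in range(1, N) for j in [-1, 0, 1]] +
            [[N, N - 1], [N, N]],
    values=H * tf.constant([1/3, 1/6] + [1/6, 2/3, 1/6] * (N - 1) + [1/6, 1/3],
                           dtype=tf.float32),
    dense_shape=(N + 1, N + 1))

y = tf.ones((N + 1,), dtype=tf.float32)

# (N+1, N+1) sparse times (N+1, 1) column -> (N+1, 1), squeezed back to (N+1,).
Ay = tf.squeeze(tf.sparse.sparse_dense_matmul(A_sp, tf.expand_dims(y, 1)), axis=1)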

Also, the program was running faster before I created the class.

Also, it's giving this warning:

WARNING: Logging before flag parsing goes to stderr.
W0829 09:12:24.415445 139855355791232 lazy_loader.py:50] 
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

W0829 09:12:24.645356 139855355791232 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/ops/odes.py:233: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.

When I run the loop for range(2, 10), the tqdm bar does not display and the cell keeps running forever, but it works fine for range(2, 5), where the tqdm bar does appear.

import math

import numpy as np
import pandas as pd
import tensorflow as tf
from tqdm import tqdm_notebook

ts, tm, tl = tf.sparse, tf.math, tf.linalg  # module aliases used below

tf.enable_eager_execution()  # TF 1.x: eager mode is needed for the .numpy() calls

# find a way to use sparse matrices
class Heat:

    def __init__(self, N):

        self.N = N
        self.H = 1/N

        # Tridiagonal mass matrix A (assembled sparse, then densified).
        self.A = ts.to_dense(ts.SparseTensor(
                 indices=[[0, 0], [0, 1]] +
                         [[i, i+j] for i in range(1, N) for j in [-1, 0, 1]] +
                         [[N, N-1], [N, N]],
                 values=self.H*np.array([1/3, 1/6] + [1/6, 2/3, 1/6]*(N-1) + [1/6, 1/3], dtype=np.float32),
                 dense_shape=(N+1, N+1)))

        # Tridiagonal stiffness matrix D; the boundary rows are zero
        # (the original expressions 1-(1), -1-(-1), etc. all evaluate to 0).
        self.D = ts.to_dense(ts.SparseTensor(
                 indices=[[0, 0], [0, 1]] +
                         [[i, i+j] for i in range(1, N) for j in [-1, 0, 1]] +
                         [[N, N-1], [N, N]],
                 values=N*np.array([0, 0] + [-1, 2, -1]*(N-1) + [0, 0], dtype=np.float32),
                 dense_shape=(N+1, N+1)))

        self.domain = tf.linspace(0.0, 1.0, N+1)



        # Entries of the load vector F (weak-form integrals of the forcing term).
        def f(k):

            if k == 0:

                return (1 + math.pi**2)*(math.pi*self.H - math.sin(math.pi*self.H))/(math.pi**2*self.H)

            elif k == N:

                return -(1 + math.pi**2)*(-math.pi*self.H + math.sin(math.pi*self.H))/(math.pi**2*self.H)

            else:

                return -2*(1 + math.pi**2)*(math.cos(math.pi*self.H) - 1)*math.sin(math.pi*self.H*k)/(math.pi**2*self.H)


        self.F = tf.constant([f(k) for k in range(N+1)], shape=(N+1,), dtype=tf.float32)  # caution: shape (N+1,) is a 1-D vector; (1, N+1) would break matvec

        # Exact solution at t = 1: e * sin(pi * x).
        self.exact = tm.scalar_mul(scalar=np.exp(1), x=tf.sin(math.pi*self.domain))



    def error(self):

        # L2 norm of the difference between the exact and approximate solutions.
        return np.linalg.norm(self.exact.numpy() - self.approx, 2)


    def func(self, y, t):
        # Enforce the time-independent Dirichlet boundary conditions on y.
        y = tf.Variable(y)
        y = y[0].assign(0.0)
        y = y[self.N].assign(0.0)
        # dy/dt = A^(-1) (-D y + e^t F); tl.matvec is needed here because
        # * would give an elementwise (Hadamard) product, not a matrix product.
        if self.N**2 > 100:
            y_dash = tl.matvec(tf.linalg.inv(self.A), tl.matvec(a=tm.negative(self.D), b=y, a_is_sparse=True) + tm.scalar_mul(scalar=math.exp(t), x=self.F))
        else:
            y_dash = tl.matvec(tf.linalg.inv(self.A), tl.matvec(a=tm.negative(self.D), b=y) + tm.scalar_mul(scalar=math.exp(t), x=self.F))

        # Pin the boundary rates to zero as well.
        y_dash = tf.Variable(y_dash)
        y_dash = y_dash[0].assign(0.0)
        y_dash = y_dash[self.N].assign(0.0)

        return y_dash

    def algo_1(self):

        # Adaptive Dormand-Prince solver; tf.contrib exists only in TF 1.x.
        self.approx = tf.contrib.integrate.odeint(
            func=self.func,
            y0=tf.sin(tm.scalar_mul(scalar=math.pi, x=self.domain)),
            t=tf.constant([0.0, 1.0]),
            rtol=1e-06,
            atol=1e-12,
            method='dopri5',
            options={"max_num_steps":10**10},
            full_output=False,
            name=None
           ).numpy()[1]

    def algo_2(self):

        # Fixed-step RK4 variant of the same integration.
        self.approx = tf.contrib.integrate.odeint_fixed(
                      func=self.func,
                      y0=tf.sin(tm.scalar_mul(scalar=math.pi, x=self.domain)),
                      t=tf.constant([0.0, 1.0]),
                      dt=tf.constant([self.H**2], dtype=tf.float32),
                      method='rk4',
                      name=None
                    ).numpy()[1]


df = pd.DataFrame(columns=["NumBasis", "Errors"])
Ns = [2**r for r in range(2, 10)]
l = []
for i in tqdm_notebook(Ns):
    heateqn = Heat(i)
    heateqn.algo_1()
    err = heateqn.error()
    l.append([i, err])
    df = df.append({"NumBasis": i, "Errors": err}, ignore_index=True)  # append returns a new frame; the result must be reassigned
    tf.keras.backend.clear_session()
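
For reference on problem 1: as far as I can tell, tf.contrib.integrate.odeint was not moved into core TF 2.0; the closest replacement seems to be TensorFlow Probability's ODE solvers (tfp.math.ode). A minimal sketch of the call shape with a toy right-hand side, noting that ode_fn takes (t, y) rather than odeint's (y, t):

import tensorflow as tf
import tensorflow_probability as tfp

def ode_fn(t, y):
    # Toy right-hand side dy/dt = -y; func from the class above would slot in
    # here, with its (y, t) argument order swapped to (t, y).
    return -y

results = tfp.math.ode.DormandPrince(rtol=1e-06, atol=1e-12).solve(
    ode_fn,
    initial_time=0.0,
    initial_state=tf.constant([1.0], dtype=tf.float32),
    solution_times=[0.0, 1.0])

y_final = results.states[-1]  # state at t = 1.0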


  • Also, it may not be very relevant, but this program is trying to solve the heat equation with the finite element method, using the weak form of the derivative. – VISHESH MANGLA Aug 29 '19 at 09:44
  • I don't think that linear algebra and LU decomposition parallelize easily. Why are you using TensorFlow? numpy and scipy, yes; TensorFlow and GPU, no. – duffymo Aug 29 '19 at 14:45
  • LU decomposition, but where? – VISHESH MANGLA Aug 29 '19 at 21:01
  • Also, I have modified the code a bit. I recently saw the a_is_sparse=True option, which I'm currently using, but I want to store both A and D as sparse matrices, since both are tridiagonal, and multiply them by a column vector. The only option I found in TensorFlow to multiply an (N, N) matrix with an (N, 1) vector is tensorflow.linalg.matvec. If I use A as a sparse matrix, it throws an error both with and without a_is_sparse=True. How can I store A and D as sparse and multiply them? – VISHESH MANGLA Aug 29 '19 at 21:11
  • Or is there any other method to solve my problem, since both scipy and tensorflow are taking more than 2 hours to finish the job (I waited at least that long)? Also, the program uses operations similar to the ones TensorFlow uses in neural nets, so what's the big deal here? How are those fast but my program slow? I need to run this program for another equation which has a few more matrices. Is there anything I can do? – VISHESH MANGLA Aug 29 '19 at 21:21
  • One change I made sped things up: up to N = 5 the program completed in a few seconds, but beyond that it still takes too long. I saw that I was computing inv(A) inside the odeint callback on every iteration; now I'm using self.A = inv(A) instead of self.A = A. (A tridiagonal-solve sketch that avoids the inverse entirely follows these comments.) – VISHESH MANGLA Aug 29 '19 at 23:53
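
Following up on the last two comments: since A is tridiagonal, tf.linalg.tridiagonal_solve (available in TF 1.14+ and 2.x) could replace inv(A) entirely, turning each step into an O(N) banded solve instead of a dense multiply. A minimal sketch using the 'compact' diagonals format, with the mass-matrix values from the code above:

import tensorflow as tf

N = 4
H = 1 / N
M = N + 1  # number of nodes

# A as stacked [superdiag, diag, subdiag]; per the docs, the last superdiag
# entry and the first subdiag entry are padding and are ignored.
superdiag = tf.constant([H / 6] * (M - 1) + [0.0], dtype=tf.float32)
diag      = tf.constant([H / 3] + [2 * H / 3] * (M - 2) + [H / 3], dtype=tf.float32)
subdiag   = tf.constant([0.0] + [H / 6] * (M - 1), dtype=tf.float32)
diagonals = tf.stack([superdiag, diag, subdiag])  # shape (3, M)

rhs = tf.ones((M, 1), dtype=tf.float32)  # stand-in for -D y + e^t F as a column

# Solves A x = rhs directly; no inverse and no dense (M, M) matrix is formed.
x = tf.linalg.tridiagonal_solve(diagonals, rhs, diagonals_format='compact')

Inside func, this would replace tl.matvec(tf.linalg.inv(self.A), ...) with a single solve against precomputed diagonals.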
