
I am currently translating an equation into Rust; the resulting functions will be used from Python. For your reference, here's the equation: (Equation Image)

I have two questions regarding the convolution process that I am currently working on:

  1. How do I determine the shape required for the convolution arrays? In the code, I gave `convolve` the shape (number of time steps × number of available producer wells) and index it as `convolve[[m,j]]`, but I have my doubts.
  2. Why do these calculations result in such large values? Is it related to Question 1, i.e. does a wrong shape make the calculation wrong? For reference, I have attached a snippet of the results below (see also my notes right after the snippet). These values are fed into the loss function used for optimization (through scipy.minimize), and when the mean squared error is calculated with np.mean(), the very large values end up as NaN, which causes the optimization to fail.
# Results snippet
[[5.68561108e+001 5.61980823e+001 5.61675861e+001 5.61661613e+001]
 [2.35307528e+002 2.34679642e+002 2.34608585e+002 2.34637574e+002]
 [7.18489028e+003 7.18386393e+003 7.18388719e+003 7.18351637e+003]
 [1.90516275e+005 1.90523205e+005 1.90512232e+005 1.90519840e+005]
 [3.74431205e+006 3.74488940e+006 3.74428963e+006 3.74486795e+006]
 [8.66579029e+007 8.66498208e+007 8.66607474e+007 8.66526724e+007]
 [2.07434218e+009 2.07448157e+009 2.07428866e+009 2.07442773e+009]
 [4.66149113e+010 4.66176532e+010 4.66161252e+010 4.66188671e+010]
 [7.41128735e+011 7.41177498e+011 7.41146858e+011 7.41195646e+011]
 [1.25832053e+013 1.25839703e+013 1.25759536e+013 1.25767179e+013]
 [3.13527991e+014 3.13539602e+014 3.13626007e+014 3.13637621e+014]
 ...
 [6.05974959e+285 6.06077063e+285 6.05912366e+285 6.06014464e+285]
 [1.19827323e+287 1.19782672e+287 1.19831491e+287 1.19786874e+287]
 [2.86110632e+288 2.86121783e+288 2.86107733e+288 2.86118882e+288]
 [5.59234029e+289 5.59217971e+289 5.59188827e+289 5.59172842e+289]
 [1.19840987e+291 1.19841850e+291 1.19834897e+291 1.19835744e+291]
 [2.51358782e+292 2.51380008e+292 2.51383269e+292 2.51404447e+292]
 [4.99748565e+293 4.99665753e+293 4.99716953e+293 4.99634444e+293]]

The Rust code is as follows:

use ndarray::{Array1, Array2, Array3, ArrayView1, ArrayView2};
use pyo3::prelude::*;

#[pymodule]
fn proxy_crm(_py: Python, m: &PyModule) -> PyResult<()> {
    // prod: (n_t, n_prod); time: (n_t,); lambda_prod, tau_prim: (n_prod,)
    fn q_prim(prod: ArrayView2<'_, f64>, time: ArrayView1<'_, f64>, lambda_prod: ArrayView1<'_, f64>, tau_prim: ArrayView1<'_, f64>) -> Array1<f64> {
        let n_prod: usize = prod.raw_dim()[1];
        let mut result: Array1<f64> = Array1::zeros([n_prod]);

        for j in 0..n_prod {
            let time_decay = (-&time / tau_prim[j]).mapv(f64::exp);
            result[j] = time_decay[j] * prod[[0,j]] * lambda_prod[j];
        }
        result
    }

    // inj: (n_t, n_inj); time: (n_t,); lambda_ip: (n_prod, n_inj); tau: (n_prod,); mask: (n_t, n_prod)
    fn q_crm(inj: ArrayView2<'_, f64>, time: ArrayView1<'_, f64>, lambda_ip: ArrayView2<'_, f64>, tau: ArrayView1<'_, f64>, mask: ArrayView2<'_, f64> ) -> Array2<f64> {
        let n_t: usize = time.raw_dim()[0];
        let n_inj: usize = lambda_ip.raw_dim()[1];
        let n_prod: usize = mask.raw_dim()[1];
        // output: (n_t, n_prod) -- one rate per time step per producer
        let mut convolve: Array2<f64> = Array2::zeros([n_t, n_prod]);
        
        let lambda_ip_mod = calc_sh_mask(lambda_ip, mask);

        for i in 0..n_inj {
            for j in 0..n_prod {
                convolve[[0,j]] = (1.0 - ((time[0] - time[1]) / tau[j]).exp()) * lambda_ip_mod[[0,j,i]] * inj[[0,i]];
                for m in 1..n_t {
                    for n in 1..m+1 {
                        let time_decay = (1.0 - ((time[m-1] - time[m]) / tau[j]).exp()) * ((time[n] - time[m]) / tau[j]).exp();

                        convolve[[m,j]] += time_decay * lambda_ip_mod[[m,j,i]] * inj[[m,i]];
                    }
                }
            }
        }
        convolve
    }
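A minimal, self-contained way to sanity-check the output shape of `q_crm` with toy numbers would be something like the following (this is not part of my actual code; it assumes `q_crm` and `calc_sh_mask` are moved to module scope so they can be called directly, and all the values are made up for illustration):

use ndarray::{arr1, arr2};

fn main() {
    let inj = arr2(&[[10.0, 12.0], [11.0, 13.0], [9.0, 14.0]]); // (n_t, n_inj) = (3, 2)
    let time = arr1(&[1.0, 2.0, 3.0]);                          // (n_t,)
    let lambda_ip = arr2(&[[0.3, 0.7], [0.6, 0.4]]);            // (n_prod, n_inj) = (2, 2)
    let tau = arr1(&[5.0, 8.0]);                                // (n_prod,)
    let mask = arr2(&[[0.0, 0.0], [0.0, 0.0], [0.0, 1.0]]);     // (n_t, n_prod)

    let q = q_crm(inj.view(), time.view(), lambda_ip.view(), tau.view(), mask.view());
    println!("shape = {:?}", q.shape()); // I expect [n_t, n_prod] = [3, 2]
}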

    fn q_bhp(time: ArrayView1<'_, f64>, tau: ArrayView1<'_, f64>, press: ArrayView2<'_, f64>, prod_index: ArrayView1<'_, f64>) -> Array2<f64> {
        let n_t = time.raw_dim()[0];
        let n_prod = press.raw_dim()[1];
        let mut convolve: Array2<f64> = Array2::zeros([n_t,n_prod]);
        
        for j in 0..n_prod {
            convolve[[0,j]] = (1.0 - ((time[0] - time[1]) / tau[j]).exp()) * prod_index[0] * tau[j] * (press[[1,0]]-press[[0,0]]) / (time[1]-time[0]);
            for m in 1..n_t {
                for n in 1..m+1 {
                    let time_decay = (1.0 - ((time[m-1] - time[m]) / tau[j]).exp()) * ((time[n] - time[m]) / tau[j]).exp();
                    let delta_bhp = press[[m-1,j]] - press[[m,j]];

                    convolve[[m,j]] += time_decay * prod_index[j] * tau[j] * (delta_bhp / (time[m-1]-time[m]));
                }
            }
        }
        convolve
    }

    fn sh_mask(prod: ArrayView2<'_, f64>) -> Array2<f64> {
        let n_prod: usize = prod.raw_dim()[1];
        let n_t: usize = prod.raw_dim()[0];
        let mut mask = Array2::zeros([n_t, n_prod]);

        for t in 0..n_t {
            for j in 0..n_prod {
                if prod[[t,j]] == 0.0 {
                    mask[[t,j]] = 1.0;
                }
            }
        }
        mask
    }

    fn calc_sh_mask(lambda_ip:ArrayView2<'_, f64>, mask: ArrayView2<'_, f64>) -> Array3<f64> {
        let n_prod: usize = mask.raw_dim()[1];
        let n_inj: usize = lambda_ip.raw_dim()[1];
        let n_t: usize = mask.raw_dim()[0];
        
        let tensor: Array3<f64> = Array3::ones([n_t,n_prod,n_inj]);
        let mut lambda_ip_result: Array3<f64> = Array3::zeros([n_t,n_prod,n_inj]);

        for t in 0..n_t {
            let mut sum_lambda_sh: Array1<f64> = Array1::zeros([n_inj]);
            for j in 0..n_prod {
                for i in 0..n_inj {
                    lambda_ip_result[[t,j,i]] = f64::abs(f64::abs((mask[[t,j]]-1.0) * lambda_ip[[j,i]]) * (tensor[[t,j,i]] + tensor[[t,j,i]] * mask[[t,j]]));
                    if mask[[t,j]] == 1.0 {
                        sum_lambda_sh[i] += lambda_ip[[j,i]];
                        for k in 0..n_prod {
                            if k!=j {
                                lambda_ip_result[[t, k, i]] += f64::abs(f64::abs(lambda_ip[[k,i]]) * (tensor[[t,j,i]] + sum_lambda_sh[i] * tensor[[t,j,i]] * mask[[t,j]]));
                            }
                        }
                    }
                }
            }
        }
        lambda_ip_result
    }

    // (the wrappers that register these functions with the Python module are omitted from this snippet)
    Ok(())
}
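For completeness, since the snippet stops before the registration part: exposing one of these functions to Python would look roughly like this with PyO3 and the `numpy` crate (a sketch only; the wrapper name `q_crm_py` is made up for the example):

use numpy::{IntoPyArray, PyArray2, PyReadonlyArray1, PyReadonlyArray2};
use pyo3::prelude::*;

#[pymodule]
fn proxy_crm(_py: Python, m: &PyModule) -> PyResult<()> {
    // ... q_prim, q_crm, q_bhp, sh_mask, calc_sh_mask as defined above ...

    #[pyfn(m)]
    fn q_crm_py<'py>(
        py: Python<'py>,
        inj: PyReadonlyArray2<'py, f64>,
        time: PyReadonlyArray1<'py, f64>,
        lambda_ip: PyReadonlyArray2<'py, f64>,
        tau: PyReadonlyArray1<'py, f64>,
        mask: PyReadonlyArray2<'py, f64>,
    ) -> &'py PyArray2<f64> {
        // Convert the NumPy inputs to ndarray views, run q_crm, and hand the
        // resulting Array2 back to Python as a NumPy array.
        q_crm(inj.as_array(), time.as_array(), lambda_ip.as_array(),
              tau.as_array(), mask.as_array())
            .into_pyarray(py)
    }

    Ok(())
}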

I adapted this code with reference to a repository on GitHub, and will link it here if it helps.

    It seems to me you don't have a programming problem but rather a [maths problem](https://math.stackexchange.com)⁉ – cafce25 Aug 21 '23 at 05:38
  • I think it's more of a 'translating' problem, so I thought I had a coding problem rather than a maths problem. I'm new to the forum and I'm sorry if I have misunderstood. Should I go to math stackexchange instead in this case? – tanosheet Aug 21 '23 at 09:18

0 Answers