Questions tagged [numerical-methods]

Algorithms which solve mathematical problems by means of numerical approximation (as opposed to symbolic computation).

Numerical methods comprise the study of algorithms that use numerical approximation (as opposed to general symbolic manipulation) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical methods find applications in all fields of science and engineering and cover many important computational tasks: solving ordinary and partial differential equations, numerical linear algebra, stochastic differential equations, Markov chains, and so forth.

Numerical methods take several approaches to computing quantities of interest. Iterative methods, for example, form successive approximations that converge to the exact solution only in the limit; a convergence test, often based on the residual, decides when a sufficiently accurate solution has (hopefully) been found. Examples include Newton's method, the bisection method, and Jacobi iteration. Another approach is discretization, in which a continuous problem is replaced by a discrete problem whose solution is known to approximate that of the continuous one.
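For instance, the bisection method combines successive approximation with a stopping test on the bracket width. A minimal sketch in Python (the target function and tolerance are illustrative):

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Halve the bracket [a, b] until it is narrower than tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:  # convergence test
            return m
        if fa * fm < 0:                 # root lies in [a, m]
            b, fb = m, fm
        else:                           # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# Approximate sqrt(2) as the root of x^2 - 2 on [1, 2].
root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
```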

The field of numerical methods includes many sub-disciplines. Some of the major ones are:

  • Computing values of functions

  • Interpolation, extrapolation, and regression

  • Solving equations and systems of equations

  • Solving eigenvalue or singular value problems

  • Optimization

  • Evaluating integrals

  • Differential equations

2104 questions
6 votes, 2 answers

The precision of a large floating point sum

I am trying to sum a sorted array of positive, decreasing floating-point numbers. I have seen that the best way to sum them is to add the numbers from lowest to highest. I wrote this code as an example of that; however, the sum that starts on…
codingnight (231)
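Besides smallest-first ordering, a standard remedy for this kind of accumulation error is compensated (Kahan) summation, sketched here in Python (not the asker's code; `math.fsum` serves as a correctly rounded reference):

```python
import math

def kahan_sum(values):
    """Compensated summation: carry the rounding error of each add forward."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y  # (t - s) is the part of y actually absorbed into s
        s = t
    return s

terms = [1.0 / n for n in range(1, 100001)]  # positive, decreasing terms
exact = math.fsum(terms)                     # correctly rounded reference sum
```

Kahan summation keeps the error near one rounding unit regardless of summation order, which often matters more than sorting.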
6 votes, 1 answer

Numerical Stability of Forward Substitution in Python

I am implementing some basic linear equation solvers in Python. I have currently implemented forward and backward substitution for triangular systems of equations (so very straightforward to solve!), but the precision of the solutions becomes very…
Andrei Bârsan (3,473)
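Forward substitution itself is short; a NumPy sketch for lower-triangular systems (the example matrix is hypothetical, not the asker's):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L, one row at a time."""
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):
        # Subtract the already-known part of row i, then divide by the pivot.
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, -1.0, 5.0]])
b = np.array([2.0, 5.0, 9.0])
y = forward_substitution(L, b)
```

The algorithm is backward stable; large forward errors usually point to an ill-conditioned triangular factor rather than a bug in the loop.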
6 votes, 2 answers

Symbolic vs Numeric Math - Performance

Do symbolic math calculations (especially for solving nonlinear polynomial systems) carry a huge performance (calculation speed) disadvantage compared to numeric calculations? Are there any benchmarks/data about this? Found a related question: Symbolic…
6 votes, 4 answers

Calculating a 3D gradient with unevenly spaced points

I currently have a volume spanned by a few million very unevenly spaced particles, and each particle has an attribute (potential, for those who are curious) from which I want to calculate the local force (acceleration). np.gradient only works with…
brokenseas (310)
6 votes, 1 answer

Numerical Method for SARIMAX Model using R

My friend is currently working on an assignment about parameter estimation for a time series model, SARIMAX (Seasonal ARIMA with Exogenous variables), using the Maximum Likelihood Estimation (MLE) method. The data he is using are the monthly rainfall from 2000…
crhburn (103)
6 votes, 1 answer

Steepest descent spitting out unreasonably large values

My implementation of steepest descent for solving Ax = b is showing some weird behavior: for any matrix large enough (~10 x 10; I have only tested square matrices so far), the returned x contains all huge values (on the order of 1e10). def…
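For reference, steepest descent only converges for symmetric positive definite A, with the exact line-search step alpha = (r·r)/(r·Ar); applied to a general matrix it readily blows up. A sketch assuming SPD input (the test matrix is illustrative):

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=10000):
    """Steepest descent for symmetric positive definite A: step along the
    residual r = b - A x with exact line-search step alpha = (r.r)/(r.A r)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x
        rr = r @ r
        if rr ** 0.5 < tol:          # converged: residual norm small enough
            break
        alpha = rr / (r @ (A @ r))   # exact minimizer along the residual
        x = x + alpha * r
    return x

# SPD test matrix: M^T M + I is symmetric positive definite by construction.
rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10))
A = M.T @ M + np.eye(10)
b = rng.standard_normal(10)
x = steepest_descent(A, b)
```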
6 votes, 2 answers

std::pow very different behavior for different exponents

I am currently trying to optimize some code where 50% of the time is spent in std::pow(). I know that the exponent will always be a positive integer, and the base will always be a double in the interval (0, 1). For fun, I wrote a function: inline…
MAB (545)
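When the exponent is known to be a non-negative integer, the usual alternative is exponentiation by squaring, which needs only O(log n) multiplies. A Python sketch of the idea (the C++ version would inline the same loop):

```python
def pow_int(base, n):
    """Compute base**n for integer n >= 0 via exponentiation by squaring."""
    result = 1.0
    while n > 0:
        if n & 1:          # odd exponent: fold in one factor of base
            result *= base
        base *= base       # square for the next binary digit of n
        n >>= 1
    return result
```

Whether this beats `std::pow` depends on the platform's libm; it is worth benchmarking before committing to it.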
6 votes, 1 answer

Is there a hyperreal datatype implementation for doing computations in non-standard analysis?

Non-standard mathematical analysis extends the real number line to include "hyperreals" -- infinitesimals and infinite numbers. Is there a (specification for an) implementation of a data type for doing computations with hyperreals? I'm looking…
Aaron Watters (2,784)
6 votes, 1 answer

How to get SciPy.integrate.odeint to stop when path is closed?

edit: It's been five years; has SciPy.integrate.odeint learned to stop yet? The script below integrates magnetic field lines around closed paths and stops when the path returns to its original value within some tolerance, using Runge-Kutta RK4 in Python. I…
uhoh (3,713)
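With modern SciPy the usual answer is scipy.integrate.solve_ivp, whose terminal events stop the integration at a root of a user-supplied function. The circular field below is a stand-in for the asker's magnetic field lines:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Circular field line: x' = -y, y' = x, starting at (1, 0).
def rhs(t, u):
    return [-u[1], u[0]]

# Event: y-component crossing zero from above ends the integration.
def crossing(t, u):
    return u[1]
crossing.terminal = True     # stop at the first such crossing
crossing.direction = -1      # only downward zero crossings count

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0], events=crossing,
                rtol=1e-10, atol=1e-12)
t_stop = sol.t_events[0][0]  # half an orbit of the unit circle
```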
6 votes, 2 answers

What is the inverse function of sinc?

I've been searching the whole day for a way to calculate the inverse function of sinc(x) between -pi and pi, but couldn't find anything. Does anybody know a way to get the angle value from a given sinc value? If it makes it easier, I'm only interested in the…
Engine (5,360)
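Since sinc is even, only the magnitude of the angle is recoverable; on [0, pi] the unnormalized sinc(x) = sin(x)/x falls monotonically from 1 to 0, so it can be inverted numerically by bisection. A sketch (function names are illustrative):

```python
import math

def sinc(x):
    """Unnormalized sinc: sin(x)/x, with the removable singularity at 0."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def inverse_sinc(y, tol=1e-12):
    """Invert sinc on [0, pi], where it decreases from 1 to 0, by bisection."""
    if not (0.0 <= y <= 1.0):
        raise ValueError("sinc is only invertible on [0, pi] for y in [0, 1]")
    lo, hi = 0.0, math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sinc(mid) > y:   # still above the target: root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```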
6 votes, 3 answers

Speed up SymPy equation solver

I am trying to solve a set of equations using the following python code (using SymPy of course): def Solve(kp1, kp2): a, b, d, e, f = S('a b d e f'.split()) equations = [ Eq(a+b, 2.6), Eq(2*a + b + d + 2*f, 7), Eq(d + e,…
Algo (841)
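A common speedup is to build the system symbolically once, compile it with sympy.lambdify, and hand the resulting numeric function to a fast solver such as scipy.optimize.fsolve. A sketch with a made-up system (the question's equations are truncated, so these are placeholders):

```python
import sympy as sp
from scipy.optimize import fsolve

a, b, d = sp.symbols('a b d')
eqs = [a + b - 2.6, 2*a + b + d - 7, a*d - 1]   # hypothetical system
f = sp.lambdify((a, b, d), eqs, 'numpy')        # compile symbolic -> numeric once
sol = fsolve(lambda v: f(*v), [0.5, 2.0, 4.0])  # cheap repeated evaluation
res = f(*sol)                                    # residuals at the solution
```

The symbolic solve is paid only once; every call to `f` afterwards is plain floating-point arithmetic.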
6 votes, 1 answer

Power Method in MATLAB

I would like to implement the power method for determining the dominant eigenvalue and eigenvector of a matrix in MATLAB. Here's what I wrote so far: %function to implement power method to compute dominant %eigenvalue/eigenvector function…
user466534
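The algorithm is the same in any language: repeatedly apply A, renormalize, and estimate the eigenvalue with a Rayleigh quotient. A Python sketch (a MATLAB version is a direct transcription; the 2x2 test matrix is illustrative):

```python
import numpy as np

def power_method(A, n_iter=500, tol=1e-12):
    """Power iteration: A @ v, renormalized, converges to the dominant
    eigenvector; the Rayleigh quotient estimates the eigenvalue."""
    rng = np.random.default_rng(1)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v          # Rayleigh quotient
        if abs(lam_new - lam) < tol:  # eigenvalue estimate has settled
            lam = lam_new
            break
        lam = lam_new
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)  # dominant eigenvalue is (5 + sqrt(5)) / 2
```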
6 votes, 1 answer

What is the numerical stability of std::pow() compared to iterated multiplication?

What sort of stability issues arise or are resolved by using std::pow()? Will it be more stable (or faster, or at all different) in general to implement a simple function to perform log(n) iterated multiplies if the exponent is known to be an…
6 votes, 5 answers

How to make numpy.cumsum start after the first value

I have: import numpy as np position = np.array([4, 4.34, 4.69, 5.02, 5.3, 5.7, ..., 4]) x = (B/position**2)*dt A = np.cumsum(x) assert A[0] == 0 # I want this to be true. Where B and dt are scalar constants. This is for a numerical integration…
GlassSeaHorse (429)
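One way to get A[0] == 0 is to shift the cumulative sum: prepend a zero and drop the last partial sum, so A[i] holds the sum of x[:i] rather than x[:i+1]. A sketch with placeholder data:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
# Exclusive cumulative sum: A[i] = x[0] + ... + x[i-1], so A[0] == 0.
A = np.concatenate(([0.0], np.cumsum(x)[:-1]))
```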
6 votes, 2 answers

Cumulative summation in CUDA

Can someone please point me in the right direction on how to do this type of calculation in parallel, or tell me what the general name of this method is? I don't think these will return the same result. C++ for (int i = 1; i < width; i++) …
Gswffye (180)
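The general name for this operation is a prefix sum, or scan; CUDA libraries such as Thrust expose it directly (thrust::inclusive_scan). The parallel structure can be sketched in NumPy as a Hillis-Steele scan, where each round updates all elements at once:

```python
import numpy as np

def inclusive_scan(x):
    """Hillis-Steele inclusive prefix sum: log2(n) rounds, each of which
    could run in parallel across all elements (as on a GPU)."""
    x = np.asarray(x, dtype=float).copy()
    shift = 1
    while shift < len(x):
        # .copy() reads the pre-round values, mimicking one parallel step.
        x[shift:] += x[:-shift].copy()
        shift *= 2
    return x
```

The sequential C++ loop in the question and this scan produce the same result; only the dependency structure differs.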