In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one important context is numerical linear algebra, another is algorithms for solving ordinary and partial differential equations by discrete approximation.
Questions tagged [numerical-stability]
104 questions
7
votes
4 answers
How can I avoid value errors when using numpy.random.multinomial?
When I use this random generator: numpy.random.multinomial, I keep getting:
ValueError: sum(pvals[:-1]) > 1.0
I am always passing the output of this softmax function:
def softmax(w, t = 1.0):
    e = numpy.exp(numpy.array(w) / t)
    dist = e /…

capybaralet
- 1,757
- 3
- 21
- 31
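One standard fix for the ValueError above: numpy.random.multinomial validates sum(pvals[:-1]) in double precision, so a float32 softmax output that "sums to 1" in single precision can still fail the check. Casting to float64 and renormalizing before sampling avoids this. A minimal sketch using NumPy's newer Generator API (`safe_multinomial` is a hypothetical helper name, not part of NumPy):

```python
import numpy as np

def safe_multinomial(n, pvals, rng=None):
    """Multinomial draw after renormalizing pvals in float64.

    numpy.random.multinomial raises ValueError when sum(pvals[:-1]) > 1.0.
    A float32 softmax output can trip this check, so renormalize
    in double precision first.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(pvals, dtype=np.float64)
    p = p / p.sum()                  # renormalize in double precision
    return rng.multinomial(n, p)

# float32 probabilities that may sum slightly above 1.0 when widened:
p32 = np.float32([0.3, 0.3, 0.4])
counts = safe_multinomial(100, p32, rng=np.random.default_rng(0))
assert counts.sum() == 100
```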
7
votes
2 answers
Interpreting error from computing Jordan form of 36-by-36 matrix
I've been trying to compute the Jordan normal form of a 36-by-36 matrix composed of only three distinct entries: 1, 1/2, and 0. The matrix is a probability transition matrix, so, given these entries, the matrix is obviously sparse.
The issue I've…

mrY
- 71
- 2
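A note on why this question is hard: the Jordan form is discontinuous in the matrix entries, so computing it in floating point is inherently error-prone. Because the question's matrix contains only 0, 1/2, and 1, exact rational arithmetic sidesteps roundoff entirely. A sketch with SymPy, using a small 3-by-3 stand-in rather than the 36-by-36 matrix from the question:

```python
from sympy import Matrix, Rational

# Exact rational entries: no floating-point roundoff anywhere.
h = Rational(1, 2)
A = Matrix([
    [h, h, 0],
    [0, h, h],
    [0, 0, 1],
])

# jordan_form() returns P and J with A = P * J * P**-1, computed exactly.
P, J = A.jordan_form()
assert P * J * P.inv() == A     # exact equality, not "close enough"
```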
7
votes
2 answers
Strategies for debugging numerical stability issues?
I'm trying to write an implementation of Wilson's spectral density factorization algorithm [1] for Python. The algorithm iteratively factorizes a [QxQ] matrix function into its square root (it's sort of an extension of the Newton-Raphson square-root…

Dan
- 5,763
- 3
- 17
- 13
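One generic first step for questions like this one: rerun the suspect computation at a different precision and compare. A result that shifts wildly between float32 and float64 points at cancellation or accumulation error. A tiny illustration of the idea (not the asker's algorithm):

```python
import numpy as np

# Catastrophic cancellation: (1 + eps) - 1 loses eps entirely in
# float32 (its ulp near 1 is ~1.2e-7) but keeps ~7 digits in float64.
eps = 1e-9
lo = (np.float32(1.0) + np.float32(eps)) - np.float32(1.0)
hi = (np.float64(1.0) + np.float64(eps)) - np.float64(1.0)
assert lo == 0.0                               # float32 lost eps entirely
assert np.isclose(hi, eps, rtol=1e-6, atol=0)  # float64 kept most of it
```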
6
votes
1 answer
Tensorflow issue with softmax
I have a Tensorflow multiclass classifier that is generating nan or inf while computing probabilities using tf.nn.softmax. See the following snippet (logits is of shape batch_size x 6, since I have 6 classes and the output is one-hot encoded).…

Nik
- 5,515
- 14
- 49
- 75
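The usual cause of nan/inf in a hand-rolled softmax is exp overflowing on large logits; the usual cure is the max-shift trick (and, in TensorFlow specifically, preferring the fused `tf.nn.softmax_cross_entropy_with_logits`-style ops over composing softmax and log by hand). A NumPy sketch of the stabilized form, kept framework-free to stay self-contained:

```python
import numpy as np

def stable_softmax(logits, axis=-1):
    """Softmax with the row max subtracted first, so exp never overflows.

    Softmax is shift-invariant, so subtracting the max changes nothing
    mathematically but keeps every exponent <= 0.
    """
    z = logits - np.max(logits, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

# Logits large enough to overflow a naive exp():
logits = np.array([[1000.0, 1001.0, 1002.0]])
p = stable_softmax(logits)
assert np.all(np.isfinite(p)) and np.isclose(p.sum(), 1.0)
```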
6
votes
1 answer
Numerically stable evaluation of log of integral of a function with extremely small values
If I have a random number Z that is defined as the sum of two other random numbers, X and Y, then the probability distribution of Z is the convolution of the probability distributions for X and Y. Convolution is basically an integral of the…

drhagen
- 8,331
- 8
- 53
- 82
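The standard tool for this is the log-sum-exp trick applied to a quadrature rule: work with log f throughout and never exponentiate the raw (possibly hugely negative) log-values. A sketch using `scipy.special.logsumexp` and a simple Riemann sum (`log_integral` is a hypothetical helper, and the quadrature rule here is deliberately crude):

```python
import numpy as np
from scipy.special import logsumexp

def log_integral(log_f, a, b, n=10001):
    """Approximate log(integral of f over [a, b]) from log f.

    log(sum_i f(x_i) * dx) = logsumexp(log f(x_i)) + log(dx),
    evaluated without ever forming the underflowing f(x_i) directly.
    """
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    return logsumexp(log_f(x)) + np.log(dx)

# f = exp(-10000) * standard normal density: its integral is exp(-10000),
# which underflows to 0.0 in double precision, but its log is fine.
log_f = lambda x: -10000.0 - 0.5 * x**2 - 0.5 * np.log(2 * np.pi)
val = log_integral(log_f, -8.0, 8.0)
assert np.isclose(val, -10000.0, atol=1e-3)
```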
6
votes
4 answers
Interchangeability of IEEE 754 floating-point addition and multiplication
Is the addition x + x interchangeable with the multiplication 2 * x in the IEEE 754 (IEC 559) floating-point standard? More generally, is there any guarantee that case_add and case_mul always give exactly the same result?
#include…

plasmacel
- 8,183
- 7
- 53
- 101
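For non-NaN inputs the answer is yes: in IEEE 754, 2*x is exact (it only shifts the exponent, scaling evenly through the subnormal range), and x + x rounds the same true value 2x, so both produce bit-identical results, overflowing identically to infinity. Python floats are IEEE 754 binary64, so this can be spot-checked directly on bit patterns:

```python
import math
import random
import struct

def bits(x):
    """Raw IEEE 754 binary64 bit pattern of a Python float."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Edge cases (signed zeros, a subnormal, infinities, a huge power of 2)
# plus a broad random sample across the finite range:
random.seed(0)
samples = [0.0, -0.0, math.inf, -math.inf, 5e-324, 1.5, -(2.0 ** 1000)]
samples += [random.uniform(-1e308, 1e308) for _ in range(1000)]
assert all(bits(x + x) == bits(2 * x) for x in samples)
```

NaN is the one exception to compare carefully: both forms return NaN, but NaN compares unequal to everything, which is why the check above works on bit patterns rather than `==`.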
6
votes
3 answers
Positive semi-definite matrices and numerical stability?
I'm trying to do factor analysis for a co-occurrence matrix (C), which is computed from the term-document matrix (TD) as follows:
C = TD*TD'
In theory, C should be positive semi-definite, but it isn't, and the factor analysis algorithm can't work with…

Alex Brooks
- 5,133
- 4
- 21
- 27
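The typical cause: C = TD·TD' is PSD in exact arithmetic, but roundoff leaves tiny negative eigenvalues (especially when C is rank-deficient) that break Cholesky or factor-analysis code. A common repair is to symmetrize and clip the eigenvalues at zero (`nearest_psd` is a hypothetical helper name):

```python
import numpy as np

def nearest_psd(C):
    """Project a nearly-PSD matrix back onto the PSD cone."""
    C = (C + C.T) / 2.0            # symmetrize away roundoff asymmetry
    w, V = np.linalg.eigh(C)
    w = np.clip(w, 0.0, None)      # clip the tiny spurious negatives
    return (V * w) @ V.T           # rebuild from the clipped spectrum

rng = np.random.default_rng(0)
TD = rng.standard_normal((200, 50))   # rank-deficient: more terms than docs
C = TD @ TD.T                         # exactly PSD in theory, not in floats
C_fixed = nearest_psd(C)
assert np.linalg.eigvalsh(C_fixed).min() >= -1e-8
assert np.allclose(C_fixed, C, atol=1e-6)   # change is at roundoff scale
```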
5
votes
1 answer
Unexpected solution using JiTCDDE
I'm trying to investigate the behavior of the following Delayed Differential Equation using Python:
y''(t) = -y(t)/τ^2 - 2y'(t)/τ - Nd*f(y(t-T))/τ^2,
where f is a cut-off function which is essentially equal to the identity when the absolute value…

tigro
- 53
- 4
5
votes
2 answers
Avoiding numerical overflow when calculating the value AND gradient of the Logistic loss function
I am currently trying to implement a machine learning algorithm that involves the logistic loss function in MATLAB. Unfortunately, I am having some trouble due to numerical overflow.
In general, for a given input s, the value of the logistic…

Berk U.
- 7,018
- 6
- 44
- 69
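The standard answer is to compute log(1 + exp(s)) via a log-add-exp primitive and the gradient via a stable sigmoid, never forming exp(s) directly. Sketched in NumPy/SciPy rather than MATLAB to keep it self-contained (`logistic_loss_and_grad` is a hypothetical helper):

```python
import numpy as np
from scipy.special import expit   # numerically stable sigmoid

def logistic_loss_and_grad(s):
    """Value and derivative of log(1 + exp(s)) without overflow.

    np.logaddexp(0, s) evaluates log(exp(0) + exp(s)) stably, and
    expit(s) = 1/(1 + exp(-s)) is exactly d/ds log(1 + exp(s)).
    """
    value = np.logaddexp(0.0, s)
    grad = expit(s)
    return value, grad

v, g = logistic_loss_and_grad(np.array([-1000.0, 0.0, 1000.0]))
assert np.all(np.isfinite(v)) and np.all(np.isfinite(g))
assert np.isclose(v[1], np.log(2.0)) and np.isclose(g[1], 0.5)
```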
4
votes
1 answer
One pass common statistics. Numerical stability for integers
I want to calculate the mean, std, skewness, kurtosis, and covariance using one-pass algorithms. The simplest and fastest approach I found was published by Stuart McCrary of Berkeley Research Group. For example, for the std one may use:
std =…

zlon
- 812
- 8
- 24
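A caution on the simple one-pass formulas: sqrt(E[x²] - E[x]²) cancels catastrophically when the mean is large relative to the spread, exactly the situation with big integers. Welford's recurrence is the standard stable alternative for the std; analogous higher-order updates exist for skewness and kurtosis:

```python
import math

def welford_std(xs):
    """One-pass population standard deviation via Welford's update.

    Accumulates squared deviations from the running mean instead of
    raw sums of squares, avoiding the large cancellation.
    """
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses both the old and the new mean
    return math.sqrt(m2 / n)

# Large-offset integers: the true population std of [0, 1, 2] is sqrt(2/3),
# unchanged by shifting everything up by 10**9.
data = [10**9 + k for k in (0, 1, 2)]
assert math.isclose(welford_std(data), math.sqrt(2 / 3), rel_tol=1e-9)
```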
4
votes
1 answer
Symmetrical Lerp & compiler optimizations
I had a function:
float lerp(float alpha, float x0, float x1) {
    return (1.0f - alpha) * x0 + alpha * x1;
}
For those who haven't seen it, this is preferable to x0 + (x1 - x0) * alpha because the latter doesn't guarantee that lerp(1.0f, x0, x1)…

Ben
- 9,184
- 1
- 43
- 56
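The endpoint property the question relies on can be demonstrated directly. A Python sketch (Python won't reassociate floating-point like an optimizing C compiler might, so this shows only the rounding behavior of the two formulas, not the compiler-optimization side of the question):

```python
def lerp_stable(alpha, x0, x1):
    # (1 - alpha)*x0 + alpha*x1: hits both endpoints exactly
    return (1.0 - alpha) * x0 + alpha * x1

def lerp_fast(alpha, x0, x1):
    # x0 + (x1 - x0)*alpha: one multiply cheaper, but lerp(1.0) can miss x1
    return x0 + (x1 - x0) * alpha

# 1.0 - 1e16 rounds to -1e16, so the fast form lands on 0.0 instead of 1.0:
x0, x1 = 1e16, 1.0
assert lerp_stable(1.0, x0, x1) == x1   # endpoint exact
assert lerp_fast(1.0, x0, x1) != x1     # the rounded (x1 - x0) survives
```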
4
votes
1 answer
Numerical accuracy with log probability Java implementation
Sometimes when you do calculations with very small probabilities using common data types such as doubles, numerical inaccuracies cascade over multiple calculations and lead to incorrect results. Because of this it is recommended to use log…

Atte Juvonen
- 4,922
- 7
- 46
- 89
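The cascading failure the question describes is easy to reproduce, and the log-space fix is easy to show. Sketched in Python rather than Java, since the underlying double-precision behavior is identical:

```python
import math

# Multiplying 500 probabilities of 1e-3 should give 1e-1500, far below
# the smallest positive double (~4.9e-324), so the direct product
# underflows to exactly 0.0. The sum of logs represents it with no trouble.
probs = [1e-3] * 500

direct = 1.0
for p in probs:
    direct *= p                       # underflows to 0.0 partway through

log_prob = sum(math.log(p) for p in probs)

assert direct == 0.0
assert math.isclose(log_prob, 500 * math.log(1e-3))
```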
4
votes
0 answers
How to calculate robust softmax function with temperature?
This is a branching from another question/answer
I want a function equivalent to this:
def softmax(x, tau):
    """ Returns softmax probabilities with temperature tau
    Input: x -- 1-dimensional array
    Output: s -- 1-dimensional array
    …

Ciprian Tomoiagă
- 3,773
- 4
- 41
- 65
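A robust version follows the same max-shift pattern as plain softmax, applied after dividing by the temperature. A sketch of one possible implementation:

```python
import numpy as np

def softmax(x, tau):
    """Softmax with temperature tau, stabilized by the max-shift trick.

    Dividing by tau first and then subtracting the max keeps every
    exponent <= 0, so exp cannot overflow even for large x or small tau.
    """
    z = np.asarray(x, dtype=np.float64) / tau
    z -= z.max()                   # softmax is shift-invariant
    e = np.exp(z)
    return e / e.sum()

# Values that would overflow a naive exp(x / tau):
s = softmax([1e5, 2e5, 3e5], tau=0.01)
assert np.all(np.isfinite(s)) and np.isclose(s.sum(), 1.0)
```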
4
votes
2 answers
Good algorithm for calculating ln(1-x) for small (and occasionally large) x
I'm looking for an algorithm to calculate ln(1-x). x is often small (< 0.01), but occasionally it might be larger. The algorithm needs to be accurate, and not too slow. I'd rather not use a library for ln(x), because I might lose accuracy.

willem
- 2,617
- 5
- 26
- 38
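The standard library answer is log1p, which exists for exactly this case: computing log(1.0 - x) directly loses precision for small x because 1.0 - x rounds away most of x's digits before the log ever sees them. A quick demonstration:

```python
import math

def ln_one_minus_x(x):
    """Accurate ln(1 - x) via log1p, which is built for small arguments."""
    return math.log1p(-x)

x = 1e-12
naive = math.log(1.0 - x)            # 1.0 - x already discarded most of x
accurate = ln_one_minus_x(x)

# True value is -1e-12 - 5e-25 - ...; log1p nails it, the naive form
# is off by ~2e-5 in relative terms.
assert math.isclose(accurate, -1e-12, rel_tol=1e-11)
assert not math.isclose(naive, -1e-12, rel_tol=1e-9)
```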
4
votes
0 answers
What are the constraints on the divisor argument of scipy.signal.deconvolve to ensure numerical stability?
Here is my problem: I am going to process data coming from a system for which I will have a good idea of the impulse response. Having used Python for some basic scripting before, I am getting to know the scipy.signal.convolve and…

user6109023
- 41
- 3
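Some context on the question above: scipy.signal.deconvolve performs polynomial long division, which is the same recurrence as filtering by the inverse system 1/H(z). A commonly cited condition is that divisor[0] be nonzero and, for the recurrence not to amplify errors, that the divisor's roots lie inside the unit circle (a minimum-phase impulse response). A roundtrip sketch with a well-behaved divisor:

```python
import numpy as np
from scipy import signal

divisor = np.array([1.0, 0.4, 0.2])       # roots well inside the unit circle
original = np.array([1.0, 2.0, 3.0, 4.0])
recorded = np.convolve(original, divisor)  # simulate the measured signal

# deconvolve returns (quotient, remainder) from polynomial division;
# with a stable divisor the original comes back to roundoff accuracy.
quotient, remainder = signal.deconvolve(recorded, divisor)
assert np.allclose(quotient, original)
assert np.allclose(remainder, 0.0)
```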