I have a code, in double-precision complex arithmetic, that assembles large sparse matrices (thanks to PETSc) in order to solve problems where high accuracy is typically needed (results converged to 7-8 digits, say).
The resulting linear system Ax=b is solved by parallel LU decomposition. I'm aiming to solve large 3D problems, so the leading dimension of the matrix can reach several tens of millions.
The matrix entries come from the finite-difference weights multiplied by the coefficients of the various physical equations being solved and by metric terms.
If a matrix entry is smaller in absolute value than double-precision machine epsilon, I discard it and set it to zero. Would you consider that a sane approach, or at the very least a pointless one? One motivation is that we want to save every possible megabyte of memory. What's open to debate is whether 'very small' entries might contaminate the LU factorization and make it unstable, while also increasing fill-in.
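For concreteness, here is a minimal sketch of the kind of filter I apply during assembly (plain C with complex doubles; the absolute threshold against DBL_EPSILON and the helper name keep_entry are purely illustrative, not my actual PETSc assembly code):

```c
#include <complex.h>
#include <float.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative filter: keep an entry only if its magnitude is at least
 * double-precision machine epsilon (~2.2e-16), in absolute terms. */
static bool keep_entry(double complex a)
{
    return cabs(a) >= DBL_EPSILON;
}

int main(void)
{
    double complex candidates[] = { 1.0 + 2.0 * I, 1e-17 + 0.0 * I, 0.0 + 3e-20 * I };
    for (int i = 0; i < 3; ++i) {
        printf("entry %d: |a| = %.3e -> %s\n", i, cabs(candidates[i]),
               keep_entry(candidates[i]) ? "kept" : "dropped");
    }
    return 0;
}
```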
If matrix entries close to machine epsilon are used in the factorization, I would assume that the result of any arithmetic operation involving those entries can be considered "unreliable"? If so, I don't see why they should be kept.
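A toy illustration of the effect I have in mind (this assumes the tiny entries end up being added to O(1) quantities during elimination, which is an assumption on my part):

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* An addend below machine epsilon (relative to 1.0) is simply absorbed:
     * the sum rounds back to 1.0 and the small contribution is lost. */
    double tiny = 1e-17;           /* below DBL_EPSILON ~ 2.2e-16 */
    double sum  = 1.0 + tiny;
    printf("1.0 + 1e-17 == 1.0 ? %s\n", (sum == 1.0) ? "yes" : "no");
    return 0;
}
```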
I'm aware that machine precision is a different concept from the smallest representable positive number, so it's possible my reasoning is conceptually wrong; I'd be glad to have your opinion or corrections.
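For reference, a quick check of the two constants I'm distinguishing (standard IEEE-754 double values; plain C, nothing specific to my code):

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Machine epsilon: relative spacing of doubles around 1.0 (~2.2e-16). */
    printf("DBL_EPSILON = %.16e\n", DBL_EPSILON);

    /* Smallest positive normalized double (~2.2e-308): a very different scale. */
    printf("DBL_MIN     = %.16e\n", DBL_MIN);
    return 0;
}
```

Thanks!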