I'm fairly certain this is fundamentally impossible with our current understanding of computation and automata theory. I could be wrong though.
Anyone more directly knowledgeable (with a rigorous background), feel free to pipe in; most of what follows is self-taught and heavily based on professional observations from the last decade of Systems Engineering/SRE/automation work.
Modern-day computers are an implementation of automata theory and computation theory. At the signal level, they require certain properties to do useful work.
Stepped (discrete) signals, determinism, and time invariance are a few of those required properties.
Deterministic behavior and deterministic properties rely on there being a unique solution: a 1:1 mapping of directed nodes (instructions plus data context) on the state graph from the current state to the next state. There is only one path. Most of this is implemented and hidden away at low-level abstractions (i.e. the signal, firmware, and kernel/shell levels).
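As a rough illustration (a toy example of my own, not from any textbook), determinism at this level just means the next state is a pure function of the current state and input, so replaying the same inputs always walks the same path:

```python
# Toy deterministic state machine: each (state, input) pair maps to exactly
# one next state, so replaying the same inputs always visits the same states.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "tick"): "running",
    ("running", "stop"): "idle",
}

def step(state, symbol):
    # Even the failure mode is deterministic: the same bad input always
    # raises the same KeyError, which is what makes it debuggable.
    return TRANSITIONS[(state, symbol)]

state = "idle"
for symbol in ["start", "tick", "tick", "stop"]:
    state = step(state, symbol)
print(state)  # always "idle", no matter how many times you run it
```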
Non-deterministic behavior is the absence of that determinism property. It can be introduced into any interconnected system by a wide range of issues (e.g. hardware failures, strong EM fields, cosmic rays, and even poor programming between interfaces).
Any time determinism breaks, computers are unable to do useful work, though the scope may be limited depending on where the break happens. Usually it is either caught as an error and the program or shell halts, or the program keeps running indefinitely or produces bogus data. Both outcomes follow from the class of problem it turns into and from the fundamental limits on the kinds of problems Turing machines (i.e. computers) can solve.
Please bear in mind, I am not a Computer Science major, nor do I hold a degree in Computer Engineering or a related IT field. I'm self-taught, no degree.
Most of this explanation has been driven by years of automation work, segmenting problem domains, doing design, and seeking more generalized solutions to the issues I ran into, mostly to make better use of my time (hence this non-rigorous explanation).
The class of non-deterministic behavior is the most costly type of error I've run into, because this behavior is the absence of the expected. There isn't a test for non-determinism as a set or group; you can only infer it from the absence of properties which you can test for (at least interactively).
Normal computer behavior is emergent from the previously mentioned signals-and-systems properties, and we see problems when those properties stop holding and we can't quickly validate the system for non-determinism, due to its nature.
Interestingly, testing for the presence of those properties interactively is a useful shortcut: if the properties are not present, the issue falls into this class of trouble which we as human beings can solve but computers cannot. The testing can only effectively be done by humans, since automating it runs into the halting problem and other more theoretical aspects which I didn't bother understanding during my independent studies.
Unfortunately, knowing how to test for these properties does often require a knowledgeable view of the systems and architecture being tested, spanning most abstraction layers (depending on where the problem originates).
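A crude interactive check I've used (a sketch only; the command name below is just a placeholder) is to run the same command with the same inputs a few times and compare the results. It can't prove determinism, but any mismatch proves its absence:

```python
import subprocess

def looks_deterministic(cmd, runs=5):
    """Run `cmd` several times and compare exit code, stdout, and stderr.

    A mismatch proves non-determinism; identical runs prove nothing --
    they only fail to disprove it, since there is no direct test for
    the property itself.
    """
    results = set()
    for _ in range(runs):
        proc = subprocess.run(cmd, capture_output=True)
        results.add((proc.returncode, proc.stdout, proc.stderr))
    return len(results) == 1

# Hypothetical usage: "mytool --report" stands in for whatever stage
# of a pipeline you suspect of being unstable.
print(looks_deterministic(["mytool", "--report"]))
```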
More formal or rigorous material covers this as NFAs vs. DFAs, with a more precise vocabulary: non-deterministic versus deterministic finite automata.
The difference is basically the presence or absence of that 1:1 state map/path, which is what defines determinism.
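In those terms, a minimal sketch of the distinction (again my own illustration): a DFA's transition table maps each (state, symbol) pair to exactly one next state, while an NFA's maps it to a set of possibilities:

```python
# DFA: exactly one next state per (state, symbol) pair.
dfa = {("q0", "a"): "q1", ("q1", "b"): "q0"}

# NFA: a set of possible next states; more than one element means the
# 1:1 mapping (and with it determinism) is gone.
nfa = {("q0", "a"): {"q1", "q2"}, ("q1", "b"): {"q0"}}

def is_deterministic(transitions):
    # Deterministic iff every entry resolves to exactly one next state.
    return all(len(next_states) == 1 for next_states in transitions.values())

print(is_deterministic({k: {v} for k, v in dfa.items()}))  # True
print(is_deterministic(nfa))                               # False
```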
Where most people trip up with this property, with regards to programming, is between interfaces, where the interface fails to preserve data and, by extension, this property; for example, accidentally using the empty or NULL state of an output field to mean more than one thing that then gets passed to another program.
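A contrived sketch of that failure mode (all names made up): one program emits an empty field for two different situations, and the consumer can no longer map one input to one meaning:

```python
# Producer: flattens two different situations into the same empty field.
def emit_record(user, quota):
    # "" is used both for "no quota set" and "quota lookup failed" --
    # the interface no longer preserves the distinction.
    return f"{user}:{quota if quota is not None else ''}"

# Consumer: has to guess which of the two meanings it received, so the
# same byte stream can legitimately be handled two different ways.
def parse_record(line):
    user, _, quota = line.partition(":")
    if quota == "":
        # Unlimited? Error? The 1:1 mapping from output to meaning is gone.
        return user, None
    return user, int(quota)

print(parse_record(emit_record("alice", None)))  # ambiguous on the far side
```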
A theoretical view of a shell program running a series of piped commands might look like this:
DFA -> OutInterface -> DFA -> OutInterface -> NFA -> crash / meaningless or unexpected data / infinite loop, etc. Depending on the code that comes after the NFA, the behavior varies unpredictably, in indeterminable ways. (OutInterface being the pipe, '|', at the shell.)
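To make the picture concrete (a toy simulation, not real shell code): two well-behaved stages followed by one that doesn't preserve ordering, and whatever consumes the result now behaves differently from run to run:

```python
import random

# Two deterministic "DFA" stages: same input, same output, every run.
def stage_filter(lines):
    return [l for l in lines if l.startswith("lib")]

def stage_strip(lines):
    return [l.split()[0] for l in lines]

# The "NFA" stage: emits its results in an order that changes between runs,
# standing in for any interface that fails to preserve determinism.
def stage_unordered(lines):
    shuffled = list(lines)
    random.shuffle(shuffled)
    return shuffled

deps = ["libc.so.6 => /lib/libc.so.6", "libm.so.6 => /lib/libm.so.6", "ld-linux"]
pipeline = stage_unordered(stage_strip(stage_filter(deps)))

# Anything downstream that depends on position or on the first line now
# behaves differently on every run, even though no stage "crashed".
print(pipeline[0])
```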
For an actual example in the wild, ldd on recent versions of Linux had two such errors that injected non-determinism into the pipe. Identifying the linked dependencies of an arbitrary binary for use with a build system was not possible using ldd because of this issue.
More specifically, the problems were in the in-memory structures, and also in the flattening of the output fields, which happened in a non-deterministic way that varied across different binaries.
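I can't reproduce the exact ldd issue here, but the general flattening hazard looks something like this sketch (line shapes are from memory, paths and addresses are made up): ldd prints several differently shaped lines into one text stream, and a flat field split collapses the distinctions between them:

```python
# Illustrative ldd-style output: some lines are "name => path (addr)",
# some are just "name (addr)", and missing libraries are "name => not found".
sample = """\
\tlinux-vdso.so.1 (0x00007ffd4a5f2000)
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2c000000)
\tlibfoo.so.1 => not found
"""

def naive_fields(output):
    # A flat whitespace split assumes every line has the same shape, so the
    # "second-to-last field" is sometimes a soname, sometimes a path,
    # and sometimes the word "not".
    return [line.split()[-2] for line in output.splitlines() if line.strip()]

def careful_deps(output):
    # Keeping the interface's distinctions intact restores a 1:1 mapping
    # from each line shape to a meaning.
    deps = {}
    for line in output.splitlines():
        parts = line.split()
        if "=>" in parts:
            name = parts[0]
            target = parts[parts.index("=>") + 1]
            deps[name] = None if target == "not" else target
    return deps

print(naive_fields(sample))  # mixes sonames, paths, and "not"
print(careful_deps(sample))  # {'libc.so.6': '/lib/...', 'libfoo.so.1': None}
```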
Most of the material mentioned above is normally covered in an undergraduate compiler design course; one can also find it in the dragon book (Compilers: Principles, Techniques, and Tools), which is what I did instead. It does require a decent background in math fundamentals (e.g. abstract algebra/linear algebra) to grok the basis and examples, and the properties are best described in Oppenheim's Signals and Systems.
Without knowing how to test that certain system properties hold true, you can easily waste months of labor trying to document and/or narrow the issue down. All you really have in those non-deterministic cases is a guess-and-check strategy, which becomes very expensive, especially if you don't realize it's an underlying systems-property issue.