The word "to imply" has a clear, technical meaning, and its everyday use matches the technical one (so long as we don't let the populace stomp the reason out, that is). Implication means "if A, then B too", and that means "if A, then always B too". It doesn't mean "B, but only when the weather's good" :)
There's no such implication here: A is "evaluation order optimizations" and B is "using different cores for different operands", and evaluation order optimizations almost never lead to the use of different cores, although they may well lead to the use of parallel execution units within a single, pseudo-serial thread of execution. Modern CPUs already do a lot of parallelization automatically, and a good code generator can really let those parallel execution units shine (ahem, get hot).
Now, if what you're asking is whether the operands could be evaluated on separate cores: in general - NO. Such a transformation would require that the operands are mutually thread-safe, i.e. that they can never, under any circumstances, modify shared state without synchronization - unsynchronized concurrent modification is a data race, and thus clear undefined behavior.
Compilers can - in limited circumstances - prove that the operands in fact don't modify shared state. They already have to do such "reasoning" for everyday optimizations; alias analysis is one example of this. That's a positive.
Given the cost of multi-thread dispatch, the evaluation of the operands would require a substantial amount of work to be worth dispatching to worker threads. So, the compiler would need to "prove" that the amount of work to be parallelized is such that the overheads of parallelization won't dwarf the benefits.
The compiler could - in very limited circumstances - prove that mutual exclusion could be added to protect the shared modified state without introducing deadlocks. Thus, it could add mutexes "on the fly". In practice, those would be spinlocks, as worker threads shouldn't be stalled (blocked).
Given the overhead of synchronization, the compiler would also need to show that the synchronization is infrequent enough that its overhead would be acceptable.
Doing all of the above well enough to be worth the trouble is still somewhat beyond the means of any existing production compiler, and is a subject of intensive research. There are proofs-of-concept, but nothing in everyday use. This might change quickly, though.
So - at the moment (mid-2020) - the answer is still NO, in practice.
Alas, we really got distracted from the real reason the evaluation order is unspecified: it provides the compiler with opportunities to generate better code. Better "serial" code, that is. But even that deserves scare quotes: "serial" code running on a single CPU thread still uses parallel execution units, so in practice the compiler can and does parallelize it - just without involving multiple threads. Reordering of evaluation enables other optimizations as well: it reduces register pressure, improves utilization of the CPU's execution units through better instruction scheduling and vectorization, reduces the impact of data dependencies, etc.