The answer is simple but probably quite disappointing: Stanford CoreNLP is driven by a complex statistical model trained on manually annotated examples (as are all modern dependency parsers), so it will sometimes output different structures for different inputs, even when they are very similar and in fact share the same underlying structure. As far as I know, there are no rules that would enforce consistent behaviour; it is just expected that the massive amount of consistently annotated training data results in consistent output in most real-life cases (and it usually does, doesn't it?).
Internally, the parser weighs evidence for many candidate parses, and multiple factors can influence this. You can imagine it as various structures competing to be chosen. Sometimes two alternative readings receive very similar probabilities from the parser. In such situations, even a very small difference in one part of the sentence can tip the final decision on labelling and attachment in another part (think butterfly effect).
Account is an inanimate noun, probably most often used as an object or in passive constructions. User is usually animate, so it is more likely to play the role of the agent. It is hard to guess what exactly the parser “thinks” when it sees these sentences, but the contexts in which the nouns usually appear can play a deciding role (CoreNLP's neural dependency parser also works with word embeddings).
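Rather than guessing, you can look at the raw output: dump the dependency triples for two of your near-identical sentences side by side and see exactly which attachments flipped. Here is a minimal sketch, assuming a reasonably recent CoreNLP server running locally on port 9000 and the Python requests package (the two sentences are placeholders, substitute your own):

```python
import json
import requests

# Assumes a CoreNLP server is already running locally, e.g. started with:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
CORENLP_URL = "http://localhost:9000"
PROPS = {"annotators": "tokenize,ssplit,pos,depparse", "outputFormat": "json"}

def basic_deps(sentence):
    """Return (dependent, relation, governor) triples from the basic dependencies."""
    resp = requests.post(CORENLP_URL,
                         params={"properties": json.dumps(PROPS)},
                         data=sentence.encode("utf-8"))
    resp.raise_for_status()
    return [(d["dependentGloss"], d["dep"], d["governorGloss"])
            for s in resp.json()["sentences"]
            for d in s["basicDependencies"]]

# Two near-identical sentences (placeholders -- substitute your own).
for sent in ('"Account" means any account created by the User.',
             '"Accounts" means all accounts created by the User.'):
    print(sent)
    for dependent, relation, governor in basic_deps(sent):
        print(f"  {relation}({governor}, {dependent})")
```

Comparing the two lists shows exactly where the parses diverge, which makes it much easier to reason about what tipped the decision.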
What can you do to enforce consistency? Theoretically, you could add extra training examples to the training corpus and train the parser yourself (mentioned here: https://nlp.stanford.edu/software/nndep.shtml). I suspect this is not trivial, and I'm also not sure whether the original training corpus is publicly available. Some parsers offer the possibility of post-training an existing model. I've faced issues similar to yours and managed to overcome them by post-training spaCy's dependency parser (see the discussion under https://github.com/explosion/spaCy/issues/1015 if you're interested).
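For what it's worth, here is a minimal sketch of what such post-training looks like with spaCy 3's training API (the linked issue uses the much older API). The sentence, heads and labels below are made-up placeholders; you would replace them with your own problem sentences, annotated the way you want them parsed:

```python
import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")

# One hand-annotated training sentence (placeholder example).
text = "The user deleted the old account ."
annotations = {
    "words": ["The", "user", "deleted", "the", "old", "account", "."],
    # "heads": index of each token's syntactic head (the root points to itself)
    "heads": [1, 2, 2, 5, 5, 2, 2],
    # "deps": dependency label of each token, using the model's own label set
    "deps": ["det", "nsubj", "ROOT", "det", "amod", "dobj", "punct"],
}
examples = [Example.from_dict(nlp.make_doc(text), annotations)]

# Freeze everything except the parser (and its shared tok2vec layer, if present).
frozen = [p for p in nlp.pipe_names if p not in ("parser", "tok2vec")]
optimizer = nlp.resume_training()
with nlp.select_pipes(disable=frozen):
    for _ in range(10):            # a few passes over the same small batch
        losses = {}
        nlp.update(examples, sgd=optimizer, losses=losses)
        print(losses)

# Check whether the sentence now receives the intended structure.
for token in nlp(text):
    print(token.text, token.dep_, token.head.text)
```

The idea is simply to nudge the existing model with a handful of consistently annotated examples of the troublesome construction rather than training from scratch. Keep the number of passes small and mix in some ordinary sentences if you can; hammering a pretrained model with a couple of examples can make it forget what it knew elsewhere.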
What could have happened in these examples?
Each of these has been mislabelled. I think the main verb ‘means’ should point to its clausal complement (the clause headed by ‘created’) via a ccomp dependency (http://universaldependencies.org/u/dep/ccomp.html), but that just never happened. Perhaps more importantly, “all or any account” should be the subject of this clause, which is also not reflected in any of these structures. The parser instead guessed that this phrase is either an adverbial modifier (which is kinda weird) or a direct object (‘account means all’).
My guess is that the linking of ‘means’ to its dependents is heavily influenced by the parser's other guesses (this is a complex probabilistic model, so every decision made within a sentence can influence the probability of decisions made in other parts of it).