The generic answer, when you don't know which of multiple different ideas is better: try them each separately & see which evaluates as better on your robust, repeatable evaluations.
(If you don't have a way to evaluate which is better, that's a bigger & more foundational thing to address than any other choices.)
Given what you've said, other observations:
The word2vec & FastText algorithms are very similar, with most of the experience supporting their use involving the fuzzy sorts of meanings inherent in natural-language text. And, the main advantage of FastText is its ability to synthesize better-than-nothing guess-vectors for words that weren't seen during training, but that share substrings, hinting at their meaning, with other known words.
Smart contract source code (or bytecode) is sufficiently unlike natural language – in its narrow vocabulary, token frequencies, purposes, & rigorous execution model – that it's not immediately clear word-vectors could help. Word-vectors have often been useful with language-like token-sets that aren't natural language, but even there, usually for discovering gradations of meaning. With smart contracts, the difference between "works as hoped" and "fatally vulnerable" may just be a tiny matter of a single misplaced operation, or a subtle missed error case. Those are the kind of highly contextual, ordering-based outcomes that word-vectors simply do not model. (At best, I think you might discover that competent coders tend to use more of certain kinds of operations or names than incompetent ones.)
Further, the main advantage of FastText – synthesizing vectors for unknown but morphologically-similar tokens – may be far less relevant for bytecode analysis, where unknown tokens are rare or even impossible. (Maybe, if you're analyzing source code that includes freely chosen variable names, new unknown variable names will hint at relations to previously-trained names.)
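To make the subword idea concrete, here's a minimal illustrative sketch – not gensim's or FastText's actual internals – of how character-n-gram vectors let a model synthesize a guess-vector for a never-seen token. Everything here (the tiny dimensionality, the hash-derived stand-in vectors, the example tokens) is a hypothetical placeholder:

```python
# Sketch of FastText-style subword composition: break a token into character
# n-grams, then average the vectors of whichever n-grams were seen in training.
# The pseudo_vector() hashing is a stand-in for real trained n-gram vectors.
import hashlib

DIM = 8  # tiny dimensionality, just for illustration


def ngrams(token, n_min=3, n_max=5):
    """Character n-grams of a token, with FastText-style boundary markers."""
    padded = f"<{token}>"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]


def pseudo_vector(ngram):
    """Deterministic stand-in for a trained n-gram vector."""
    digest = hashlib.md5(ngram.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]


def guess_vector(token, known_ngrams):
    """Average the vectors of the token's n-grams that appeared in training."""
    hits = [pseudo_vector(g) for g in ngrams(token) if g in known_ngrams]
    if not hits:
        return None  # no subword overlap with training data: no guess possible
    return [sum(vals) / len(hits) for vals in zip(*hits)]


# Pretend training only ever saw 'transfer' – its n-grams become the known set.
known = set(ngrams("transfer"))

# 'transferFrom' was never seen, but shares many n-grams with 'transfer',
# so a better-than-nothing vector can still be synthesized for it.
vec = guess_vector("transferFrom", known)
```

This is exactly the property that's valuable for open-vocabulary natural text, but of limited use where the token set is closed, as with most bytecodes.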
So: word-vectors may be an improper or underpowered tool for the sort of high-stakes, subtle classification you're attempting. But, as with the topmost answer: the only way to know, & test ideas of what works or not, is to try each approach & evaluate it in some fair, repeatable way. (This even includes testing different ways of training the word-vectors from a single algorithm like word2vec itself: different modes, parameters, preprocessing, etc.)
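That "try each, evaluate repeatably" loop can be sketched as follows. The candidate configurations, `train()`, and `evaluate()` below are hypothetical placeholders for your real pipeline – the point is only the shape: every variant goes through the same fixed evaluation, and the best scorer wins:

```python
# Sketch of a fair, repeatable comparison across candidate training setups.
# train() and evaluate() are placeholders; a real evaluate() might report a
# downstream classifier's F1 on a fixed, held-out set of labeled contracts.
import random

CANDIDATE_CONFIGS = [
    {"algo": "word2vec", "mode": "skip-gram", "dim": 100, "window": 5},
    {"algo": "word2vec", "mode": "cbow",      "dim": 100, "window": 5},
    {"algo": "fasttext", "mode": "skip-gram", "dim": 100, "window": 5},
]


def train(config, corpus):
    """Placeholder: train a model under `config` on `corpus`."""
    return {"config": config}


def evaluate(model, heldout):
    """Placeholder scoring function, deterministic per-config so that
    re-running the comparison is repeatable."""
    rng = random.Random(repr(model["config"]))
    return rng.random()


def pick_best(corpus, heldout):
    """Score every candidate with the SAME evaluation; return the winner."""
    scored = [(evaluate(train(c, corpus), heldout), c)
              for c in CANDIDATE_CONFIGS]
    return max(scored, key=lambda pair: pair[0])


best_score, best_config = pick_best(corpus=[], heldout=[])
```

Whatever replaces the placeholders, the discipline matters more than the details: identical held-out data, identical scoring, one variable changed per candidate.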