The page at: https://fasttext.cc/docs/en/pretrained-vectors.html
- provides 294 different language-labeled sets of vectors, each labeled with only a single language
- describes the models as having been trained by "using the skip-gram model described in Bojanowski et al. (2016) with default parameters" - a paper which does not describe the creation of multilingual vectors
Thus it's safe to assume none of them are explicitly multilingual. (If one or more were, wouldn't they be clearly labeled that way?)
Similarly, the page at: https://fasttext.cc/docs/en/crawl-vectors.html
- does not include the word 'multilingual' anywhere in its text
- provides 158 different language-labeled sets of vectors, each labeled with only a single language
Thus I also think it's safe to assume none of them are explicitly multilingual. (If you thought one or more of them were, try downloading them and test whether they actually give good results across the multiple languages you suspect they might cover, since no description says they do.)
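If you do download one to probe it, the `.vec` files use fastText's plain-text format: a `count dim` header line, then one word per line followed by its coordinates. Here's a minimal sketch of a parser for that format; the two-word sample below is synthetic, just to keep the snippet self-contained (in practice you'd open e.g. a real `wiki.*.vec` download instead):

```python
import io

def load_vec(fileobj, limit=None):
    """Parse fastText's text format: a 'count dim' header, then one word per line."""
    header = fileobj.readline().split()
    dim = int(header[1])
    vectors = {}
    for i, line in enumerate(fileobj):
        if limit is not None and i >= limit:
            break  # real files hold hundreds of thousands of words; cap for probing
        parts = line.rstrip().split(' ')
        word, values = parts[0], [float(x) for x in parts[1:]]
        assert len(values) == dim
        vectors[word] = values
    return vectors

# Synthetic two-word, three-dimensional "file" standing in for a real download.
sample = io.StringIO("2 3\nhello 0.1 0.2 0.3\nbonjour 0.9 0.8 0.7\n")
vecs = load_vec(sample)
print(sorted(vecs))       # ['bonjour', 'hello']
print(vecs['hello'])      # [0.1, 0.2, 0.3]
```

Spot-checking words from each of your conjectured languages this way quickly shows whether a given download's vocabulary even contains them.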
I believe the quote you've highlighted, "…a newer version of multi-lingual word vectors are available at…", uses 'multi-lingual word vectors' loosely to mean 'word vectors for multiple languages', describing the total contents of the page rather than any single download.
Note that there is later work which aligns alternate-language sets of word-vectors, such that the same(ish) meanings have similar coordinates:
https://fasttext.cc/docs/en/aligned-vectors.html
However, even there, each language's vectors are provided as a single download.
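The way those aligned downloads are typically used is to load each language's vectors separately, then compare words across languages with cosine similarity in the shared space. A sketch of that arithmetic, with tiny made-up vectors standing in for the real aligned downloads (the values here are illustrative assumptions, not real fastText data):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors in the shared aligned space."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Synthetic "aligned" vectors: same(ish) meaning -> similar coordinates.
en = {'cat': [0.9, 0.1, 0.0], 'house': [0.0, 0.2, 0.9]}
fr = {'chat': [0.88, 0.12, 0.05], 'maison': [0.05, 0.25, 0.85]}

print(cosine(en['cat'], fr['chat']))    # high: same meaning across languages
print(cosine(en['cat'], fr['maison']))  # low: different meanings
```

With the real aligned files, the same computation works because the alignment step has rotated every language's vectors into one common coordinate system.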
There are so many colliding tokens, and colliding subwords, that mean very different things across different languages that it would be hard to provide a usable single model for multiple languages that considered individual word-tokens alone (without full context providing extra hints about the author's intended language). For example, the token 'chat' is an informal conversation in English but a cat in French.