Essentially, no - you can't perform sentiment analysis without some labeled data.
Without labels of some sort, you have no way of evaluating whether you're getting anything right. So, you could just use this sentiment-analysis function:
import random

def get_sentiment(text):
    # ignores its input entirely & picks an answer at random
    return random.choice(['positive', 'negative'])
Woohoo! You've got a 'sentiment' for every text!
What's that? You object that for some text, it's giving the "wrong" answer?
Well, how do you know what's wrong? Do you have a desired correct answer – a label – for that text?
OK, now you have some hope – but notice that means you also have at least one label. And if you have one, you can get more, even if it's just by hand-annotating some texts that are representative of what you want your code to classify.
Another answer shares an article which purports to do unsupervised sentiment analysis. That article's meandering grab-bag of techniques sneaks in supervision via the coder's labeling of his two word-clusters as 'positive' and 'negative'. Further, he's only able to claim success at all by checking against target labels for some of the data. And that data appears to be about 635,000 'positive' texts and just 9,800 'negative' texts – a split so lopsided that you could get about 98.5% accuracy (635,000 of 644,800) just by answering 'positive' for every text. So its techniques may not be very generalizable.
But the article does do one thing that could be re-used elsewhere, in a very crude approach, if you've really just got word-vectors and nothing else: labeling every word as positive or negative. It does this by forcing all words into 2 clusters, then hand-reviewing the clusters to choose one as positive and one as negative. (This might only work well with certain kinds of review texts with strong underlying positive/negative patterns.) Then, it gives every other word a score based on closeness to those cluster centroids.
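For concreteness, here's a minimal sketch of that 2-cluster trick using gensim and scikit-learn. Everything specific is an assumption on my part: the vectors file is a placeholder, and which cluster counts as 'positive' is a judgment call you'd make after eyeballing the printed samples.

import numpy as np
from gensim.models import KeyedVectors
from sklearn.cluster import KMeans

# Load pre-trained word-vectors (hypothetical path - substitute your own file)
kv = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)
vecs = kv.vectors / np.linalg.norm(kv.vectors, axis=1, keepdims=True)  # unit-length

# Force every word into one of exactly 2 clusters
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vecs)

# Hand-review a sample from each cluster, then decide which one is 'positive'
for c in (0, 1):
    sample = [w for w, lbl in zip(kv.index_to_key, km.labels_) if lbl == c][:25]
    print(c, sample)
POS = 0  # set this yourself after reviewing the printout above

# Score every other word by its relative closeness to the two cluster centroids
centers = km.cluster_centers_ / np.linalg.norm(km.cluster_centers_, axis=1, keepdims=True)
word_valence = dict(zip(kv.index_to_key, vecs @ centers[POS] - vecs @ centers[1 - POS]))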
You could repeat that for another language. Or, just create a hand-curated list of a few dozen known 'positive' or 'negative' words, then assign every other word a positive or negative value based on relative closeness to your 'anchor' words. You're no longer strictly 'unsupervised' at this point, as you've injected your own labeling of individual words.
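That anchor-word variant might look like the sketch below. The seed lists here are invented for illustration (in practice you'd want your few-dozen curated words), and the loading step repeats the same assumed file as the previous sketch.

import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)  # same assumed file as above
vecs = kv.vectors / np.linalg.norm(kv.vectors, axis=1, keepdims=True)

# Illustrative seed lists - a hand-curated few dozen per side would be better
POS_ANCHORS = ['good', 'great', 'excellent', 'wonderful', 'love']
NEG_ANCHORS = ['bad', 'terrible', 'awful', 'horrible', 'hate']

def anchor_direction(words):
    # Average the unit vectors of whichever anchor words are in-vocabulary
    arr = np.array([kv[w] for w in words if w in kv.key_to_index])
    arr = arr / np.linalg.norm(arr, axis=1, keepdims=True)
    mean = arr.mean(axis=0)
    return mean / np.linalg.norm(mean)

pos_c, neg_c = anchor_direction(POS_ANCHORS), anchor_direction(NEG_ANCHORS)
# Each word's valence: closeness to the positive anchors minus closeness to the negative ones
word_valence = dict(zip(kv.index_to_key, vecs @ pos_c - vecs @ neg_c))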
I'd guess this could work even better than the just-2-centroids approach of the article. (All 'positive' or 'negative' words, in a real semantic space, could be spread across wildly-shaped coordinate-regions that aren't reducible to a single centroid summary point.)
But again, the only way to check whether this is working would be to compare against a lot of labeled data, with preferred "correct" answers, to see whether tallying a net-positive/net-negative score for each text, based on its individual words, performs satisfactorily. And once you have that labeled data for scoring, you could use a far more diverse & powerful set of text-classification methods than a simple tallying of word-valences.
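That tallying-and-checking step might look something like this sketch – the tiny word_valence dict and the two 'labeled' texts are toy stand-ins, not real data; in practice word_valence would come from one of the sketches above and the labeled texts from your own annotation:

# Toy stand-in valences; real ones would come from the word-scoring above
word_valence = {'loved': 0.9, 'great': 0.7, 'awful': -0.9, 'broke': -0.5}

def classify(text):
    # Crude whitespace tokenization; unknown words contribute 0
    score = sum(word_valence.get(w, 0.0) for w in text.lower().split())
    return 'positive' if score >= 0 else 'negative'

# Toy stand-in for a real hand-labeled evaluation set
labeled = [('loved it great value', 'positive'),
           ('awful broke in a day', 'negative')]
accuracy = sum(classify(t) == lbl for t, lbl in labeled) / len(labeled)
print(f"accuracy: {accuracy:.2f}")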