I haven't seen any proven techniques for exactly this need.
But it's somewhat similar to how people try to track drift in word meanings across different eras; there's been some published work on that task, such as HistWords from Stanford.
I have also, in past answers, suggested that people working on the era-drift task try probabilistically replacing words whose sense may vary with alternate, context-labeled tokens. That is, if king is one of the words you expect to vary based on your geography contexts, expand your training corpus to sometimes replace king in UK contexts with king_UK, and in US contexts with king_US. (In some cases, you might even repeat your texts to do this.) Then, at the end of training, you'll have separate (but close) vectors for all of king, king_UK, & king_US, and the subtle differences between them may be reflective of what you're trying to study/capture.
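For illustration, here's a minimal sketch (in Python, feeding Gensim's Word2Vec) of the kind of corpus preprocessing I mean. The REPLACE_PROB value, the TARGET_WORDS set, and the assumption that your corpus arrives as (tokens, region) pairs are all placeholders you'd adapt to your own data:

    import random
    from gensim.models import Word2Vec

    REPLACE_PROB = 0.5                        # chance of swapping a target word (placeholder value)
    TARGET_WORDS = {'king', 'queen', 'boot'}  # words whose sense you expect to vary by region

    def regionalize(tokens, region, replace_prob=REPLACE_PROB):
        """Sometimes replace target words with region-labeled variants,
        e.g. 'king' -> 'king_UK', leaving other occurrences untouched."""
        return [
            f'{tok}_{region}' if tok in TARGET_WORDS and random.random() < replace_prob else tok
            for tok in tokens
        ]

    # Toy corpus: each item is (list_of_tokens, region_label); swap in your real data.
    corpus = [
        (['the', 'king', 'gave', 'a', 'speech'], 'UK'),
        (['the', 'king', 'of', 'pop', 'performed'], 'US'),
    ]

    sentences = [regionalize(tokens, region) for tokens, region in corpus]
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

    # Afterward, 'king', 'king_UK', & 'king_US' all have vectors in the same space.

Raising or lowering REPLACE_PROB controls how much of each word's usage goes to the regional variant versus the shared token.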
You can see other discussion of related ideas in previous answers:
https://stackoverflow.com/a/57400356/130288
https://stackoverflow.com/a/59095246/130288
I'm not sure how well this approach might work, nor (if it does work) what the best ways would be to transform the corpus so that it captures all the geography-flavored meaning-shifts.
I suspect the extreme approach of transforming every word in a UK context to its UK-specific token, & the same for other contexts, would work less well than only sometimes transforming the tokens, because a total transformation would mean each region's tokens only ever get trained with each other, never with the shared (non-regionalized) words that help 'anchor' the variant meanings in the same overall shared context. But that hunch would need to be tested.
(This simple "replace-some-tokens" strategy has the advantage that it can be done entirely via corpus preprocessing, with no change to the algorithms. If willing/able to perform big changes to the library, another approach could be more fasttext-like: treat every instance of king
as a sum of both a generic king_en
vector and a region king_UK
(etc) vector. Then every usage example would update both.)
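Gensim doesn't support that second idea out of the box, so it would mean writing (or heavily modifying) the training code. Just to illustrate the shape of the update, here's a toy NumPy sketch of a single skip-gram negative-sampling step where the effective input vector is a generic vector plus a regional one, so the gradient flows into both; all the names, dimensions, and learning details are simplified placeholders, not a real implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 50

    generic = {}   # 'king'          -> shared vector
    regional = {}  # ('king', 'UK')  -> region-specific offset vector
    context = {}   # ordinary skip-gram output/context vectors

    def _get(table, key, init):
        if key not in table:
            table[key] = init()
        return table[key]

    def sgns_step(center, region, ctx_word, label, lr=0.025):
        """One skip-gram negative-sampling update: label=1 for a real
        (center, context) pair, 0 for a sampled negative. The input vector
        is generic[center] + regional[(center, region)], and the gradient
        is applied to both parts."""
        v_gen = _get(generic, center, lambda: rng.standard_normal(DIM) * 0.01)
        v_reg = _get(regional, (center, region), lambda: np.zeros(DIM))
        c = _get(context, ctx_word, lambda: np.zeros(DIM))

        v = v_gen + v_reg                              # combined input vector
        score = 1.0 / (1.0 + np.exp(-np.dot(v, c)))    # sigmoid of dot product
        g = lr * (label - score)

        v_gen += g * c        # in-place: updates generic['king']
        v_reg += g * c        # in-place: updates regional[('king', 'UK')]
        c += g * v            # in-place: updates context[ctx_word]

In a real version you'd wrap this in the usual windowing and negative-sampling loops, but the key point is just that both the shared and regional vectors receive every update from every usage example in that region.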