That is a very interesting question, probably with many possible answers. You could add bigram (n-gram) analysis to rank how likely the letters are to appear next to each other in typical words of the language.
Presume your system doesn't "know" the target word, and someone types "bouk". It then analyses all the bigrams:
bo, ou, uk
or trigrams
bou, ouk
I would guess that "bo", "ou" and "bou" would score well here because they are common, while "uk" and "ouk" would be unlikely in English. So this word could simply get a 3/5 score, but in practice each n-gram would have its own frequency score (a probability), so the overall number for the proposed word could be quite refined.
Then, comparing that to "bo0k", you'd look at all its bigrams:
bo, o0, 0k
or trigrams
bo0, o0k
Now you can see that only "bo" would score well here. None of the others would be found in a common n-gram corpus, so this word would score much lower than "bouk" for likelihood, e.g. 1/5 compared to 3/5 for "bouk".
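A minimal sketch of that scoring idea, using made-up frequency values purely for illustration (a real corpus would supply the numbers):

```python
# Toy bigram frequencies - illustrative made-up values, not a real corpus
BIGRAM_FREQ = {"bo": 0.004, "ou": 0.007, "uk": 0.0001}

def char_ngrams(word, n=2):
    """Split a word into overlapping character n-grams."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def word_score(word, freqs, n=2):
    """Average the frequencies of a word's n-grams; unseen n-grams count as 0."""
    grams = char_ngrams(word.lower(), n)
    return sum(freqs.get(g, 0.0) for g in grams) / len(grams)

print(char_ngrams("bouk"))              # ['bo', 'ou', 'uk']
print(word_score("bouk", BIGRAM_FREQ))  # higher: 'bo' and 'ou' are common
print(word_score("bo0k", BIGRAM_FREQ))  # lower: only 'bo' is known
```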
There would be roughly three parts to the solution:
1. You would need a corpus of established character n-gram frequencies for the language. For example, this random blog I found discusses building one: https://blogs.sas.com/content/iml/2014/09/26/bigrams.html
2. Then you would need to process (tokenise and scan) your input words into n-grams and look up their frequencies in the corpus. You could use something like SK Learn (scikit-learn) for this.
3. Then you can combine the parts in whatever way you like (e.g. sum or average them) to establish the overall score for the word. A rough sketch covering all three parts follows below.
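To make that concrete, here is a rough end-to-end sketch, assuming you already have a plain word list to build the frequencies from (the tiny list below is just a stand-in):

```python
from collections import Counter

def char_ngrams(word, n=2):
    """Split a word into overlapping character n-grams."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

# Part 1: build a frequency table of character n-grams from a word list.
# A real system would use a large word list or text corpus here.
def build_ngram_freqs(words, n=2):
    counts = Counter()
    for w in words:
        counts.update(char_ngrams(w.lower(), n))
    total = sum(counts.values())
    return {gram: count / total for gram, count in counts.items()}

# Parts 2 and 3: tokenise a candidate word into n-grams, look each one up,
# and combine the frequencies into a single score (here: a simple average).
def word_score(word, freqs, n=2):
    grams = char_ngrams(word.lower(), n)
    return sum(freqs.get(g, 0.0) for g in grams) / len(grams)

corpus = ["book", "look", "took", "about", "out", "you", "would", "could"]
freqs = build_ngram_freqs(corpus, n=2)

print(word_score("bouk", freqs))  # scores higher: 'bo' and 'ou' occur in the corpus
print(word_score("bo0k", freqs))  # scores lower: only 'bo' is found
```

In practice you would probably also smooth the counts so that rare but legitimate n-grams don't score a flat zero.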
Note that most tokenisers and n-gram processing for natural language centre around relations between words, not letters within words. It's easy to get lost on that, as the fact a library is focused on word-grams is often not mentioned explicitly because it's the most common case. I've noticed that before, but n-grams are used in all sorts of other data sets too (time series, music, any sequence really). This question discusses how you can configure SK Learn's vectoriser to do letter-grams, although I've not tried this myself: N-grams for letter in sklearn
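For what it's worth, the character mode is just a constructor argument on CountVectorizer, so a minimal sketch would look something like this (assuming a reasonably recent scikit-learn):

```python
from sklearn.feature_extraction.text import CountVectorizer

# analyzer='char' switches the vectoriser from word n-grams to character n-grams;
# 'char_wb' would restrict the n-grams to within word boundaries.
vectoriser = CountVectorizer(analyzer='char', ngram_range=(2, 3))
counts = vectoriser.fit_transform(["book", "look", "about", "out", "could"])

# The learned vocabulary is the set of character bi- and trigrams seen in the corpus
print(vectoriser.get_feature_names_out())

# Summing the columns gives raw n-gram counts you could normalise into frequencies
print(counts.sum(axis=0))
```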