Short answer: You "normalize" both strings and then do the search/comparison.
Note that Unicode represents many accented characters in more than one way. There is a single codepoint (U+00E9 LATIN SMALL LETTER E WITH ACUTE) to represent the character with the accent, but it can also be represented by a combination of codepoints (U+0065 LATIN SMALL LETTER E followed by U+0301 COMBINING ACUTE ACCENT). The general way to deal with this is to choose one normal form: C (for pre-composed characters) or D (for decomposed characters). Normalizing can be more complex than it seems, so use a library rather than rolling your own. Once both strings are in the same normal form, you can compare them directly.
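As a minimal sketch of that idea, using Python's standard-library `unicodedata` module (the same approach works with any Unicode library that exposes the normal forms):

```python
import unicodedata

precomposed = "\u00E9"   # é as a single codepoint (U+00E9)
decomposed = "e\u0301"   # e followed by a combining acute accent

# The two strings render identically but compare unequal as-is.
assert precomposed != decomposed

# Normalizing both to the same form (NFC here; NFD works equally well)
# makes the comparison succeed.
nfc_a = unicodedata.normalize("NFC", precomposed)
nfc_b = unicodedata.normalize("NFC", decomposed)
assert nfc_a == nfc_b
```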
If you want to ignore the diacritics altogether, you can make up your own normalization scheme. For example, you can decompose any pre-composed characters and then drop all the combining codepoints. This will allow the base character to match an accented character regardless of how the accented character was originally represented.
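That decompose-then-drop scheme is only a few lines in Python: decompose with NFD, then discard every character in the "Mn" (nonspacing mark) category, which is where the combining accents live:

```python
import unicodedata

def strip_diacritics(s: str) -> str:
    # Decompose pre-composed characters, then drop the combining marks
    # (Unicode category "Mn" = nonspacing mark).
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(c for c in decomposed if unicodedata.category(c) != "Mn")

# Both representations of "teléfono" reduce to the same base string.
assert strip_diacritics("tel\u00E9fono") == "telefono"
assert strip_diacritics("tele\u0301fono") == "telefono"
```

Apply it to both the search key and the searched text before comparing, so neither representation matters.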
There are also "compatibility" normal forms in Unicode (KC and KD) which substitute many special characters with the most common similar base character. In the case of diacritics, I believe they decompose the same way the canonical forms do. So if you have a Unicode library, you might be able to use it to do all the hard work of normalizing.
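A quick illustration of the difference, again with `unicodedata`: NFKD additionally unfolds compatibility characters like the "fi" ligature, while for an ordinary accented letter it decomposes exactly as NFD does:

```python
import unicodedata

# NFKD replaces the fi ligature (U+FB01) with the plain letters "fi";
# NFD leaves it alone, since the ligature has no canonical decomposition.
assert unicodedata.normalize("NFKD", "\ufb01") == "fi"
assert unicodedata.normalize("NFD", "\ufb01") == "\ufb01"

# For a plain diacritic, NFKD and NFD produce the same decomposition.
assert unicodedata.normalize("NFKD", "\u00E9") == "e\u0301"
assert unicodedata.normalize("NFD", "\u00E9") == "e\u0301"
```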
In many cases, the database is already in some normal form, so you just have to normalize the search string.
If all that is too complicated, another approach is to build a regex that matches any representation. For example, if your search key is `telefono`, you'd turn that into a regex like `t(e|\u00E9|e\u0301)l(e|\u00E9|e\u0301)f(o|\u00F3|o\u0301)n(o|\u00F3|o\u0301)`. Those regexes can get bulky pretty fast, depending on how flexible you want the matches to be.
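Generating such a regex can be automated. This is a sketch under assumptions: the `ALTERNATIVES` table below is hypothetical and only covers `e` and `o` with acute accents; a real version would cover whichever base letters and diacritics matter for your data:

```python
import re

# Hypothetical mapping from a base letter to the alternatives we accept:
# the plain letter, the pre-composed accented codepoint, and the
# decomposed letter + combining accent sequence.
ALTERNATIVES = {
    "e": "(?:e|\u00E9|e\u0301)",
    "o": "(?:o|\u00F3|o\u0301)",
}

def accent_insensitive_pattern(key: str) -> str:
    # Letters without an entry pass through, escaped for safety.
    return "".join(ALTERNATIVES.get(c, re.escape(c)) for c in key)

pattern = re.compile(accent_insensitive_pattern("telefono"))
assert pattern.fullmatch("telefono")
assert pattern.fullmatch("tel\u00E9fono")   # pre-composed é
assert pattern.fullmatch("tele\u0301fono")  # decomposed e + accent
```

Note how quickly the pattern grows: every vowel in the key expands to a three-way alternation, which is the bulkiness mentioned above.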