There is nothing special you have to do to use these values for querying or indexing; you just have to decide how they should be handled.
If you use a Tokenizer that splits on word boundaries, these special characters may be treated as token separators, which means the characters themselves won't be indexed.
If you use a Tokenizer that does nothing special with those characters, they'll be indexed just like any other character. You'll need to escape them in queries if your client library doesn't do that for you - but that depends on how you're querying Solr.
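If you do build query strings by hand, a minimal escaping sketch might look like this (the character set follows the Lucene query syntax; escaping `&` and `|` individually also covers the two-character `&&` and `||` operators):

```python
# Characters with special meaning in the Lucene/Solr query parser.
SPECIAL = set('+-!(){}[]^"~*?:\\/&|')

def escape_solr_query(value: str) -> str:
    """Backslash-escape Lucene/Solr query-syntax characters in a literal term."""
    return ''.join('\\' + ch if ch in SPECIAL else ch for ch in value)

print(escape_solr_query('C++ (beta)'))  # C\+\+ \(beta\)
```

Most client libraries (e.g. SolrJ) provide their own escaping helper, which is usually the safer choice.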
A string field won't process the input at all: the whole value is kept as one single token, special characters included, without splitting it further.
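As an illustrative schema.xml fragment (field names here are made up; `text_general` is the stock tokenized field type in Solr's default configs, `string` the untokenized one):

```xml
<!-- "title" is tokenized, so special characters may act as separators;
     "sku" is kept verbatim as a single token, special characters and all. -->
<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="sku"   type="string"       indexed="true" stored="true"/>
```

With this setup, an exact match against `sku` must include the special characters exactly as indexed, while searches against `title` behave like ordinary word searches.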