I don't have a concrete answer (I don't think there is one), but here are some questions and points you might need to consider.
Does the tokenizer need to know that those are hobbies that are comma separated? If so, you've got a bigger problem than tokenization. If not, you still have the problem of how you're going to handle the boundaries of the comma-separated parts. BTW, the comma usage in that sentence is actually wrong, because there should be a comma after "Roller skating". I have a feeling you actually need to build a named entity recognition (NER) model rather than a tokenizer, but hopefully my comments here will get you one step further (possibly towards nowhere).
For instance, if you tokenize your sentence by comma, you'll get these tokens:
My hobbies are reading books
magazines
Roller skating and playing football.
which does not separate Roller skating from playing football, and in an arbitrary sentence you wouldn't know what lies between the commas anyway.
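A minimal illustration of that naive comma split (plain Java, no OpenNLP needed), just to make the failure concrete:

```java
public class CommaSplit {
    // Split a sentence on commas and trim whitespace from each piece.
    static String[] splitOnCommas(String sentence) {
        String[] parts = sentence.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }

    public static void main(String[] args) {
        String sentence =
            "My hobbies are reading books, magazines, Roller skating and playing football.";
        for (String part : splitOnCommas(sentence)) {
            System.out.println(part);
        }
        // Prints:
        // My hobbies are reading books
        // magazines
        // Roller skating and playing football.
    }
}
```

Note the last "token" still contains two hobbies glued together with "and".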
So, the simple answer is that OpenNLP does not really do "context-based tokenization"; you would have to roll that logic yourself.
Here are a few ideas:
Use the sentence chunker to create the tokens... this would be based on noun phrases, verb phrases, etc., which may be useful.
Use an NER model to extract "hobby" entities, which would be noisy, but it would give you some probabilistic tokens.
Use regex to find what you want and create the tokens just based on the regex hits.
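For the regex route, here's a rough sketch. The pattern anchored on "hobbies are" is purely an assumption tailored to this one sentence, not a general solution, and splitting on "and" will misfire on any hobby that itself contains "and":

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexTokens {
    // Hypothetical pattern: capture everything after "hobbies are" (minus a
    // trailing period), then split the capture on commas and the word "and".
    static List<String> extractHobbies(String sentence) {
        Matcher m = Pattern.compile("hobbies are (.+?)\\.?$").matcher(sentence);
        List<String> hobbies = new ArrayList<>();
        if (m.find()) {
            for (String part : m.group(1).split(",|\\band\\b")) {
                String trimmed = part.trim();
                if (!trimmed.isEmpty()) {
                    hobbies.add(trimmed);
                }
            }
        }
        return hobbies;
    }

    public static void main(String[] args) {
        System.out.println(extractHobbies(
            "My hobbies are reading books, magazines, Roller skating and playing football."));
        // Prints: [reading books, magazines, Roller skating, playing football]
    }
}
```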
As an example, here's what the sentence chunker pulls out:
NP: My hobbies
VP: are reading
NP: books
NP: magazines
NP: Roller skating and playing football
and you could assume the verb phrases (VP) are the action aspect and the noun phrases (NP) are the 'thing'. I'm not sure why the chunker didn't see "playing" as a verb, but that's the way NLP goes. You could always look for "and" within NPs and split on that, but whatever you do, I guarantee you will find a piece of text that makes it suck.
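Splitting NPs on "and" might look like the sketch below. The BIO chunk tags are hardcoded to mimic the chunker output above; in real code they would come from ChunkerME.chunk(tokens, posTags):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkSplitter {
    // Rebuild phrases from BIO chunk tags (the format ChunkerME.chunk() returns),
    // then split each phrase on the conjunction "and".
    static List<String> npTokens(String[] tokens, String[] chunkTags) {
        List<String> phrases = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < tokens.length; i++) {
            if (chunkTags[i].startsWith("B-")) {          // beginning of a new chunk
                if (current.length() > 0) phrases.add(current.toString());
                current = new StringBuilder(tokens[i]);
            } else if (chunkTags[i].startsWith("I-")) {   // continuation of the chunk
                current.append(" ").append(tokens[i]);
            } else {                                      // "O": outside any chunk
                if (current.length() > 0) phrases.add(current.toString());
                current = new StringBuilder();
            }
        }
        if (current.length() > 0) phrases.add(current.toString());

        // Post-process: split each phrase on the word "and".
        List<String> result = new ArrayList<>();
        for (String phrase : phrases) {
            for (String part : phrase.split("\\band\\b")) {
                String trimmed = part.trim();
                if (!trimmed.isEmpty()) result.add(trimmed);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String[] tokens = {"Roller", "skating", "and", "playing", "football"};
        String[] tags   = {"B-NP", "I-NP", "I-NP", "I-NP", "I-NP"};
        System.out.println(npTokens(tokens, tags));
        // Prints: [Roller skating, playing football]
    }
}
```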
Remember: any tokenization method you use at train time must also be used at classification time.
Hope this helps, but I have a feeling it won't help much.
UPDATE IN RESPONSE TO THE OP EDIT
The NER (NameFinder) will find multi-token entities in a single token array, so don't worry about that. The Span object returned by the nameFinder.find() method holds the start and end offsets of the entity within the sentence's tokens. It is very common to have multi-part names. Now, if your particular NER model is NOT returning multi-part names, that's a different story: you probably need to train on more data; don't blame it on the tokenization.
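To illustrate the Span mechanics: getStart() is inclusive and getEnd() is exclusive over the token array, so joining the tokens in that range recovers the multi-token entity (there's also the Span.spansToStrings utility that does this for you). The indices here are plain ints so the sketch runs without a model:

```java
public class SpanJoin {
    // Join tokens[start..end) with spaces -- the same reconstruction a Span
    // from nameFinder.find() implies (start inclusive, end exclusive).
    static String entityText(String[] tokens, int start, int end) {
        StringBuilder sb = new StringBuilder();
        for (int i = start; i < end; i++) {
            if (sb.length() > 0) sb.append(" ");
            sb.append(tokens[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] tokens = {"My", "hobbies", "are", "Roller", "skating"};
        // Suppose the NER model returned a span covering tokens [3, 5):
        System.out.println(entityText(tokens, 3, 5));
        // Prints: Roller skating
    }
}
```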