I'm the primary author of the sklearn OPTICS module. Parallelism is difficult because there is an ordering loop which cannot be run in parallel; that said, the most computationally intensive task is the distance calculations, and those can be run in parallel. More specifically, sklearn OPTICS calculates the upper-triangle distance matrix one row at a time, starting with 'n' distance lookups and decreasing to 'n-1', 'n-2', and so on, for a total of about n^2/2 distance calculations. The problem is that parallelism in sklearn is generally handled by joblib, which uses processes (not threads), and processes have rather high creation/destruction overhead when used inside a loop: you create and destroy the worker processes per row as you loop through the data set, and 'n' setup/teardowns of processes costs more than the parallelism benefit you get from joblib. This is why n_jobs is disabled for OPTICS.
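To make the shape of the problem concrete, here is a hypothetical sketch (not the actual sklearn internals) of what naive per-row joblib use would look like: one `Parallel` dispatch per row of the upper triangle. Note that recent joblib versions reuse loky workers across calls, which softens the worst of the setup/teardown cost, but the per-row dispatch overhead the answer describes is still there.

```python
import numpy as np
from joblib import Parallel, delayed

# Toy data; names and sizes here are illustrative, not from sklearn.
rng = np.random.default_rng(0)
X = rng.random((50, 3))
n = len(X)

rows = []
for i in range(n - 1):
    # One row of the upper triangle: distances from point i to points i+1..n-1.
    # Dispatching a Parallel job here, once per row, pays joblib's
    # process/dispatch overhead n times -- the overhead described above.
    row = Parallel(n_jobs=2)(
        delayed(np.linalg.norm)(X[i] - X[j]) for j in range(i + 1, n)
    )
    rows.append(row)

total = sum(len(r) for r in rows)
print(total)  # n*(n-1)/2 = 1225 pairwise distances for n=50
```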
The best way to 'force' parallelism in OPTICS is probably to define a custom distance metric that runs in parallel -- see this post for a good example:
https://medium.com/aspectum/acceleration-for-the-nearest-neighbor-search-on-earths-surface-using-python-513fc75984aa
The example above actually forces the distance calculation onto a GPU, but still uses sklearn for the algorithm execution.
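A simpler variant of the same idea, as a rough sketch (this is not from the linked post): compute the pairwise distances up front with `sklearn.metrics.pairwise_distances`, which does parallelize via `n_jobs`, and hand the result to OPTICS with `metric='precomputed'`. The trade-off is memory: this materializes the full n x n matrix rather than one row at a time, so it only works for moderate n.

```python
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.metrics import pairwise_distances

# Toy data; sizes and parameters are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((300, 5))

# The expensive part, parallelized across cores by joblib internally.
# Caveat: builds the full n x n matrix in memory.
D = pairwise_distances(X, metric="euclidean", n_jobs=-1)

# OPTICS then runs its (inherently sequential) ordering loop on the
# precomputed distances instead of recomputing them row by row.
clust = OPTICS(min_samples=10, metric="precomputed")
labels = clust.fit_predict(D)
print(labels.shape)  # one cluster label (or -1 for noise) per sample
```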