
I have a set of ontologies that I reason over with the Pellet reasoner in Protégé 5.5.0. The reasoner completes inference over the ontologies in 20 seconds.

When I use our OWL API implementation instead, the Pellet reasoner takes 45 minutes to reason over the same set of ontologies.

Where can I find the configuration settings used by the Protégé Pellet plugin, so that I can compare them to the OWL API defaults that our implementation currently uses?

I have tried to locate configuration files in the installation, as well as in the source code of the Protégé plugin, but I cannot figure out how it is configured.
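For reference, this is roughly the minimal OWL API + Pellet setup I believe our implementation boils down to (a sketch only: `root.owl` is a placeholder for the actual document, and the `PelletReasonerFactory` import assumes the Pellet 2.x OWL API binding; the package differs if the Openllet fork is used):

```java
import java.io.File;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

public class PelletWithDefaults {
    public static void main(String[] args) throws Exception {
        // Load the root ontology; owl:imports are resolved automatically as
        // long as the imported IRIs are reachable or mapped to local files.
        // "root.owl" is a placeholder for the actual document.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology rootOntology =
                manager.loadOntologyFromOntologyDocument(new File("root.owl"));

        // Create Pellet without an explicit reasoner configuration,
        // i.e. with its defaults.
        OWLReasoner reasoner =
                PelletReasonerFactory.getInstance().createReasoner(rootOntology);

        // Precompute only the inference types that are actually needed;
        // asking for everything (realisation, property hierarchies, ...)
        // can cost far more than classification alone.
        long start = System.currentTimeMillis();
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
        System.out.println("Classification took "
                + (System.currentTimeMillis() - start) + " ms");

        reasoner.dispose();
    }
}
```

As far as I can tell, the main knobs on the OWL API side are the reasoner configuration passed to createReasoner and the set of InferenceTypes passed to precomputeInferences.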

  • Please provide enough code so others can better understand or reproduce the problem. – Community Nov 10 '22 at 15:27
  • I don't know what kind of configuration you mean, but the default inferences being computed are here: https://github.com/protegeproject/protege/blob/master/protege-editor-owl/src/main/java/org/protege/editor/owl/model/inference/ReasonerPreferences.java – UninformedUser Nov 10 '22 at 17:52
  • I also don't understand what you mean by multiple ontologies - how do you open the ontologies in Protégé? Each in a separate window? – UninformedUser Nov 10 '22 at 17:53
  • Please add the owlapi code you're using. – Ignazio Nov 11 '22 at 07:08
  • Thank you for your responses. In Protégé I open the root ontology, which imports the other ontologies. I have since looked at the code implementation (which I did not implement), and it appears the programmer stripped all import statements from the ontologies and merged the remaining axioms/statements into a single ontology/document. We are investigating whether this could cause the reasoner to slow down significantly; a sketch comparing the two loading strategies follows below. – user3168890 Dec 09 '22 at 11:53
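
A minimal sketch for checking whether the merge itself makes a difference, again assuming the Pellet 2.x OWL API binding; `root.owl` and the merged ontology IRI are placeholders. OWLOntologyMerger flattens the imports closure into a single ontology, which roughly mimics stripping the import statements:

```java
import java.io.File;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.util.OWLOntologyMerger;

import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

public class CompareLoadingStrategies {

    // Classify an ontology with Pellet's default configuration and
    // return the wall-clock time in milliseconds.
    static long classify(OWLOntology ontology) {
        OWLReasoner reasoner =
                PelletReasonerFactory.getInstance().createReasoner(ontology);
        long start = System.currentTimeMillis();
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
        long elapsed = System.currentTimeMillis() - start;
        reasoner.dispose();
        return elapsed;
    }

    public static void main(String[] args) throws Exception {
        // Strategy 1: load the root ontology and keep the owl:imports
        // structure, which is what Protégé does. "root.owl" is a placeholder.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology root =
                manager.loadOntologyFromOntologyDocument(new File("root.owl"));
        System.out.println("imports kept: " + classify(root) + " ms");

        // Strategy 2: flatten the imports closure into a single ontology,
        // roughly what stripping the import statements amounts to.
        OWLOntology merged = new OWLOntologyMerger(manager).createMergedOntology(
                OWLManager.createOWLOntologyManager(),
                IRI.create("urn:example:merged"));
        System.out.println("merged: " + classify(merged) + " ms");
    }
}
```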

0 Answers