I have been running experiments to measure the time taken for consistency checking on ontologies of increasing size. I ran each experiment for 5 iterations, and the results show two different behaviours (see the attached images el_results.jpeg and dl_results.jpeg) for the EL and DL ontologies:
1) Inconsistent results: the same ontology sometimes finishes very quickly and sometimes times out (the timeout is 1.5 hours). (Along the rows.)
2) Unexpected results: a smaller ontology sometimes takes more time (or even times out) than a larger ontology that finishes in only a few seconds. (Along the columns.)
Below is the link to the EL and DL ontologies: https://drive.google.com/drive/folders/1HYURRLaQkLK8cQwV-UBNKK4_Zur2nU68?usp=sharing
I am using Openllet 2.6.4 and OWL API 5.1.0.
Also, Pellet did not show the first behaviour, although it does show the second behaviour for the DL ontologies (OWL2DL-2.owl takes less time than OWL2DL-1.owl).
More Details about the ontologies:
I am working on a benchmark for the different OWL 2 profiles: https://github.com/kracr/owl2bench . The TBox is a university ontology (similar to UOBM), and the user gives the number of universities as input to generate ontologies of varying size. So the numbers 1, 2, 5, ... in the file names OWL2EL-1.owl, OWL2EL-2.owl, ... represent increasing ontology sizes.
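For concreteness, the per-run timeout described above can be enforced with a harness along these lines. This is a minimal stdlib-only sketch, not the benchmark's actual code: the `Callable` passed in is a stand-in for the real reasoner call (e.g. Openllet's `isConsistent()`), and `runWithTimeout` is a hypothetical helper name.

```java
import java.util.Optional;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of a timeout discipline for benchmarking: each consistency check
// runs in its own thread and is abandoned once a fixed budget expires.
public class TimeoutHarness {

    /** Runs the task and returns its result, or empty if the budget expires. */
    static <T> Optional<T> runWithTimeout(Callable<T> task, long budgetMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = pool.submit(task);
            return Optional.of(future.get(budgetMillis, TimeUnit.MILLISECONDS));
        } catch (TimeoutException e) {
            return Optional.empty();   // recorded as a time-out (the 1.5 h cap)
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();        // interrupt the stuck worker thread
        }
    }

    public static void main(String[] args) {
        // A fast "check" finishes well inside the budget.
        Optional<Boolean> fast = runWithTimeout(() -> true, 1000);
        // A slow "check" sleeps past the budget and is abandoned.
        Optional<Boolean> slow = runWithTimeout(() -> {
            Thread.sleep(500);
            return true;
        }, 50);
        System.out.println("fast=" + fast.isPresent() + " slow=" + slow.isPresent());
    }
}
```

Note that `shutdownNow()` only interrupts the worker; a reasoner that never checks its interrupt flag can keep consuming CPU after the timeout, which is one reason benchmark harnesses often run each check in a separate JVM process instead.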
Interesting. A while ago I did some benchmarking to choose the reasoner best fitted to my needs. Pellet/Openllet wasn't the fastest on a single fixed ontology (small or large), but it was fast on ontologies with frequent updates. It also had fair support for OWL 2.
What is marked as inconsistent results may be caused by caching/hashing of constraints that changes exploration order, or by bugs (there are bugs; some tests don't pass). Sometimes renaming entities or changing axiom ordering can change computation time, make some bugs appear, or make others vanish.
Maybe the number of axioms isn't a very good measure. From memory, the Openllet tests include an ontology with fewer than 10 axioms that results in a timeout, while I use very large (100M+ axioms) ontologies without problems.
Interesting. I've seen similar variability in other benchmarks; I'll take a look at the test framework. Are you aware of the ORE competition? (The 2015 episode is described here: https://link.springer.com/article/10.1007/s10817-017-9406-8 .) Their framework is open source, and I think there might be synergy waiting to be exploited.