
Inconsistent results for EL and DL ontologies. #50

Open · GunjanSingh1 opened this issue Apr 28, 2020 · 2 comments

Comments


GunjanSingh1 commented Apr 28, 2020

I have been running experiments to measure the time taken for consistency checking as ontology size increases. I ran each experiment for 5 iterations, and the results show two different behaviours (shown in the attached images el_results.jpeg and dl_results.jpeg) for the EL and DL ontologies:
1) Inconsistent results: sometimes an ontology takes very little time, and in another iteration the same ontology times out (the time-out is 1.5 hrs). (Along the rows.)
2) Wrong results: a smaller ontology takes more time (or even times out) than a larger ontology that takes only a few seconds. (Along the columns.)
[attached images: dl_results.jpeg, el_results.jpeg]

Below is the link for the EL and DL ontologies.
https://drive.google.com/drive/folders/1HYURRLaQkLK8cQwV-UBNKK4_Zur2nU68?usp=sharing
I am using Openllet 2.6.4 and OWL API 5.1.0.
Also, Pellet did not show the first behaviour, although it does show the second behaviour for DL ontologies (OWL2DL-2.owl takes less time than OWL2DL-1.owl).
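
For reference, this is roughly how each check is timed. A minimal sketch only, assuming the standard Openllet OWL API binding; the class name, the file name, and the ExecutorService-based time-out are placeholders for illustration, not the exact benchmark code:

```java
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

import openllet.owlapi.OpenlletReasonerFactory;

public class ConsistencyTimer {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // File name is a placeholder; each OWL2EL-*.owl / OWL2DL-*.owl file is timed the same way.
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("OWL2EL-1.owl"));

        ExecutorService executor = Executors.newSingleThreadExecutor();
        long start = System.nanoTime();
        Future<Boolean> check = executor.submit(() -> {
            OWLReasoner reasoner = OpenlletReasonerFactory.getInstance().createReasoner(ontology);
            return reasoner.isConsistent();
        });
        try {
            boolean consistent = check.get(90, TimeUnit.MINUTES); // the 1.5 h time-out
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("consistent=" + consistent + " in " + elapsedMs + " ms");
        } catch (TimeoutException e) {
            // Note: cancel(true) only interrupts the worker thread; the reasoner may not stop immediately.
            check.cancel(true);
            System.out.println("timed out after 1.5 h");
        } finally {
            executor.shutdownNow();
        }
    }
}
```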

More details about the ontologies:
I am working on a benchmark for the different OWL 2 profiles: https://github.com/kracr/owl2bench. The TBox is a university ontology (just like UOBM), and the user gives the 'number of universities' as input to generate ontologies of varying size. So the numbers 1, 2, 5, ... in the file names OWL2EL-1.owl, OWL2EL-2.owl, ... represent the increasing size of the ontologies.

Galigator (Owner) commented Apr 29, 2020

Interesting. A long time ago I did some benchmarking to choose the reasoner best fitted to my needs. Pellet/Openllet wasn't the fastest on a specific ontology (short or long), but it was fast on ontologies with frequent updates. It also had fair support for OWL 2.

What you mark as inconsistent results may come from caching/hashing of constraints that changes exploration orders, or from bugs (there are bugs; some tests don't pass). Sometimes renaming entities or changing axiom ordering can change computation time, or make some bugs appear and others vanish (a quick way to test this is sketched at the end of this comment)...

Maybe the number of axioms isn't a very good measure. From memory, the Openllet test suite contains an ontology with fewer than 10 axioms that results in a timeout, while I have used very large (100M+ axioms) ontologies without problems.
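
To test the ordering point yourself, something like the following sketch could help (the class and method names are mine, not part of any API): it rebuilds an ontology with the same axioms in a shuffled insertion order.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

// Rebuilds an ontology with the same axioms added in a random order, so you can
// check whether insertion order alone changes the consistency-check time.
public final class ShuffledCopy {
    public static OWLOntology of(OWLOntology source) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        List<OWLAxiom> axioms = new ArrayList<>(source.getAxioms());
        Collections.shuffle(axioms);
        OWLOntology copy = manager.createOntology();
        for (OWLAxiom axiom : axioms) {
            manager.addAxiom(copy, axiom); // same logical content, different order
        }
        return copy;
    }
}
```

If timings for the shuffled copy differ wildly from the original's, that points at order sensitivity rather than ontology size.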

ignazio1977 (Contributor) commented:

Interesting. I've seen similar variability in other benchmarks; I will take a look at the test framework. Are you aware of the ORE competition? (The 2015 edition is described here: https://link.springer.com/article/10.1007/s10817-017-9406-8.) Their framework is open source; I think there might be synergy waiting to be exploited.
