Any idea how to improve the performance when handling large ontologies? #44
Comments
Do you have any sample ontology in mind? I'm trying to wrap my head around this problem, so it'd be useful to have some sample models for testing.
You may try this ontology: http://bioportal.bioontology.org/ontologies/NCBITAXON
The main problem is that ontospy attempts to build the entire ontology model in memory, and that takes time if there are many classes and properties to correlate. I've tried using threads, but with no real performance improvement, as the main tasks (extracting classes, properties, concepts, etc.) tend to be reliant on each other. For very large ontologies it may be more appropriate to use a triplestore. Otherwise I'm kind of out of ideas here.
You may want to take a look at https://pythonhosted.org/Owlready2/ — it seems to have better performance on large ontologies.
Thanks! Looks like they use an ad-hoc back end, maybe that's it. Will look more into it.
Yes, they use SQLite as the backend. Do you think it would help improve the performance of ontospy?
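As a rough sketch (not ontospy code), Owlready2's SQLite-backed quadstore can be enabled before loading, so a large ontology is parsed once to disk rather than held as Python objects. The helper name, URI, and file paths here are hypothetical:

```python
def load_with_sqlite_backend(owl_uri, db_path="quadstore.sqlite3"):
    """Load an ontology into Owlready2's on-disk SQLite quadstore.

    Hypothetical helper: owl_uri and db_path are placeholders.
    """
    from owlready2 import default_world, get_ontology

    # Point the quadstore at a SQLite file instead of keeping triples
    # in memory; this must happen before the ontology is loaded.
    default_world.set_backend(filename=db_path)
    onto = get_ontology(owl_uri).load()
    default_world.save()  # flush parsed triples to the SQLite file
    return onto
```

On later runs, calling `set_backend` with the same filename reopens the already-parsed quadstore instead of re-parsing the OWL source, which is where the speedup would come from.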
It's also not too difficult to load an ontology into Apache Jena Fuseki. The main issue is the non-Python dependency (Fuseki), but once the store is running it's easy to use rdflib to mediate querying.