Readme & docs updated, test for iri collection added #22

Merged · 6 commits · Jul 12, 2021
29 changes: 25 additions & 4 deletions README.md
@@ -4,23 +4,40 @@ Creditrisk-poc is a Hydra powered API which serves loan portfolio data using EBA

## Features
creditrisk-poc consists of the following features:
* Loan & Borrower classes.
* Loan, Borrower & Collateral classes.
* Borrower class collection.
* Loan & Borrower class are linked using a foreign key ("CounterpartyId").
* Loan, Borrower & Collateral classes are linked using `foreign keys`.
* Loan class can perform all the CRUD operations (GET, PUT, POST, DELETE).
* Borrower can perform all the CRUD operations.
* Borrower class collection can perform all the CRUD operations.

## Classes are linked in the following manner:
![Creditrisk_class_linking](https://user-images.githubusercontent.com/49719371/125194774-4d897280-e270-11eb-95af-4242bb1bffc2.jpg)


## NonPerformingLoan.jsonld
The `NonPerformingLoan.jsonld` is a subset vocabulary for NonPerformingLoan portfolios; it is generated automatically
by `vocab_generator.py` from the `NonperformingLoan.owl` ontology.
```bash
python npl_vocab/vocab_generator.py
```
It will generate the JSON-LD vocabulary, which can be used to create the ApiDoc.
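As a quick sanity check, the generated file can be loaded back with the standard `json` module (a minimal sketch; the output path used below is an assumption, adjust it to wherever `vocab_generator.py` writes the file):
```python
import json

# Hypothetical output path -- adjust to wherever vocab_generator.py writes the file.
with open("npl_vocab/NonPerformingLoan.jsonld", "r") as vocab_file:
    npl_vocab = json.load(vocab_file)

# Top-level keys give a quick view of the vocabulary structure (e.g. its @context).
print(list(npl_vocab.keys()))
```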

## API_DOC
API_Doc is generated through hydra-python-core module doc_writer.
API_Doc is generated through the hydra-python-core module `doc_writer` and `nplvocab_parser.py`, which automates the creation
of classes and properties from the JSON-LD vocabulary.

API_Doc & doc_writer file can be found here :
The API_Doc, doc_writer & `nplvocab_parser.py` files can be found here:
```
api_doc
|
|___ ApiDoc.jsonld
|___ api_docwriter.py
|___ nplvocab_parser.py
```
**nplvocab_parser** parses all the classes & properties from `NonPerformingLoan.jsonld` and provides functions for converting
them to HydraClass & HydraClassProp.

`ApiDoc` is a JSON serialized object. It can be accessed as follows:
```python
import json
@@ -30,6 +47,10 @@ doc = json.load(ApiDoc_file)
```
You will get the doc in `python dict` format.
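If the parsed `HydraDoc` object is needed rather than the raw dict, the same hydra-python-core package can consume it again; a minimal sketch, assuming `doc_maker.create_doc` with a placeholder server URL and API name (doc_maker is the module used in the test fixtures):
```python
import json
from hydra_python_core import doc_maker

# Load the serialized ApiDoc and parse it back into a HydraDoc object.
# The server URL and API name below are placeholders, not the project's values.
with open("api_doc/ApiDoc.jsonld", "r") as ApiDoc_file:
    doc = json.load(ApiDoc_file)

api_doc = doc_maker.create_doc(doc, "http://localhost:8080/", "creditrisk_api")
print(type(api_doc))
```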

### ApiDoc is generated with this flow:
```
NonPerformingLoan.owl --> ( vocab_generator.py ) NonPerformingLoan.jsonld --> ( nplvocab_parser.py ) ApiDoc.jsonld
```
## Demo
To run the hydra-powered creditrisk-poc API, do the following:
1) Clone creditrisk-poc
28 changes: 28 additions & 0 deletions docs/nplvocab_parser.md
@@ -0,0 +1,28 @@
# nplvocab_parser

`nplvocab_parser` parses all the classes & properties from `NonPerformingLoan.jsonld` and converts them to HydraClass & HydraClassProp.

nplvocab_parser is located in the `api_doc` directory:
```
api_doc
|
|___ nplvocab_parser.py
```
It can be used by importing it as a Python module:
```python
import NPLVocab_parse as parser

npl_vocab = parser.get_npl_vocab()
classes = parser.get_all_classes(npl_vocab)
hydra_classes = parser.create_hydra_classes(classes)
```
nplvocab_parser provides the following functions; a rough sketch of chaining them follows the list:
* `get_all_classes()` -> Returns all the classes from the given vocabulary.
* `create_hydra_classes()` -> Returns a list of HydraClass objects.
* `get_class_properties()` -> Returns all the properties of the given class.
* `create_hydra_properties()` -> Returns a list of HydraClassProps from the list of properties.
* `get_class_id()` -> Returns the class id of the given class.
* `add_operations_to_class()` -> Returns a list of hydra properties of the given class.
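
A rough sketch of chaining these helpers together (the per-class call signatures shown are assumptions based on the descriptions above; check `nplvocab_parser.py` for the real ones):
```python
import NPLVocab_parse as parser  # module name as used in the example above

# Parse the vocabulary once, then build HydraClass objects and, per class,
# the corresponding HydraClassProp objects.
npl_vocab = parser.get_npl_vocab()
classes = parser.get_all_classes(npl_vocab)
hydra_classes = parser.create_hydra_classes(classes)

props_by_class = {}
for class_ in classes:
    properties = parser.get_class_properties(class_)              # assumed signature
    props_by_class[parser.get_class_id(class_)] = (
        parser.create_hydra_properties(properties)                # assumed signature
    )
```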



17 changes: 17 additions & 0 deletions docs/vocab_generator.md
@@ -0,0 +1,17 @@
# vocab_generator

`vocab_generator.py` generates the `NonPerformingLoan.jsonld` vocabulary from the OWL ontology.

It is located inside the `npl_vocab` directory.
```
npl_vocab
|
|___ vocab_generator.py
```
vocab_generator uses the [rdflib](https://github.com/RDFLib/rdflib-jsonld) and [pyld](https://github.com/digitalbazaar/pyld) libraries to parse & serialize the OWL ontology
to JSON-LD with the `@context`.
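
A minimal sketch of that parse-and-serialize step (the ontology path and the `@context` below are placeholders; `vocab_generator.py` is the authoritative implementation):
```python
import json
from rdflib import Graph
from pyld import jsonld

# Parse the OWL ontology (RDF/XML) into an rdflib graph; the path is a placeholder.
g = Graph()
g.parse("npl_vocab/NonperformingLoan.owl", format="xml")

# Serialize to expanded JSON-LD (older rdflib versions need the rdflib-jsonld plugin).
expanded = json.loads(g.serialize(format="json-ld"))

# Compact against a context with pyld; the context here is only an example.
context = {
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "owl": "http://www.w3.org/2002/07/owl#",
}
compacted = jsonld.compact(expanded, context)
print(json.dumps(compacted, indent=2)[:300])
```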

To generate the JSON-LD vocabulary:
```bash
python npl_vocab/vocab_generator.py
```
6 changes: 4 additions & 2 deletions tests/conftest.py
@@ -4,6 +4,8 @@
import json
import uuid
import os
from os.path import abspath, dirname
from pathlib import Path
from hydra_python_core import doc_maker
from hydra_python_core.doc_writer import DocUrl, HydraLink
from sqlalchemy import create_engine
@@ -121,8 +123,8 @@ def test_doc(constants):
"""
HYDRUS_SERVER_URL = constants['HYDRUS_SERVER_URL']
API_NAME = constants['API_NAME']
API_DOC_PATH = os.path.relpath("tests/ApiDoc.jsonld")
print(API_DOC_PATH)
cwd_path = Path(dirname(dirname(abspath(__file__))))
API_DOC_PATH = cwd_path / "tests" / "ApiDoc.jsonld"
doc_file = open(API_DOC_PATH, "r")
doc = json.load(doc_file)

28 changes: 28 additions & 0 deletions tests/test_functional.py
Expand Up @@ -8,6 +8,7 @@
import json
import re
import uuid
from hydra_python_core.doc_writer import DocUrl
from tests.conftest import gen_dummy_object


@@ -338,3 +339,30 @@ def test_Collections_member_DELETE(self, test_app_client, constants, doc):
        delete_response = test_app_client.delete(full_endpoint)
        assert delete_response.status_code == 200

    def test_IriTemplate(self, test_app_client, constants, doc):
        """Test structure of IriTemplates attached to parsed classes"""
        API_NAME = constants['API_NAME']
        index = test_app_client.get(f'/{API_NAME}')
        assert index.status_code == 200
        endpoints = json.loads(index.data.decode('utf-8'))
        expanded_base_url = DocUrl.doc_url
        for endpoint in endpoints['collections']:
            collection_name = '/'.join(endpoint["@id"].split(f'/{API_NAME}/')[1:])
            collection = doc.collections[collection_name]['collection']
            class_name = collection.manages["object"].split(expanded_base_url)[1]
            response_get = test_app_client.get(endpoint["@id"])
            assert response_get.status_code == 200
            response_get_data = json.loads(response_get.data.decode('utf-8'))
            assert 'search' in response_get_data
            assert 'hydra:mapping' in response_get_data['search']
            class_ = doc.parsed_classes[class_name]['class']
            class_props = [x.prop for x in class_.supportedProperty]
            for mapping in response_get_data['search']['hydra:mapping']:
                prop = mapping['hydra:property']
                prop_name = mapping['hydra:variable']
                is_valid_class_prop = prop not in ['limit', 'offset', 'pageIndex']
                # check if IRI property is for searching through a nested_class
                # and not this class_
                is_nested_class_prop = "[" in prop_name and "]" in prop_name
                if is_valid_class_prop and not is_nested_class_prop:
                    assert prop in class_props
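
For reference, the `search` member the test inspects is a Hydra `IriTemplate`. A hand-written illustration of its rough shape, expressed as the Python dict the test would see after `json.loads` (the template, property IRIs and variable names are invented for the example, not taken from the API):
```python
# Illustrative only -- the template, IRIs and variable names are made up.
search_block = {
    "@type": "hydra:IriTemplate",
    "hydra:template": "/api/BorrowerCollection{?CounterpartyId,pageIndex,limit,offset}",
    "hydra:variableRepresentation": "hydra:BasicRepresentation",
    "hydra:mapping": [
        {
            "@type": "hydra:IriTemplateMapping",
            "hydra:variable": "CounterpartyId",
            "hydra:property": "http://localhost:8080/api/vocab#CounterpartyId",
            "hydra:required": False,
        },
        {
            "@type": "hydra:IriTemplateMapping",
            "hydra:variable": "pageIndex",
            "hydra:property": "pageIndex",
            "hydra:required": False,
        },
    ],
}
```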