## Usage
The HTTP API features 2 API paths:

- `/api/v1/jobs` to interact with scheduled graph packaging jobs
- `/api/v1/users` to interact with the user store, i.e. creating/listing/removing users

Both paths have `GET`, `POST` and `DELETE` endpoints to list, create and remove entries, respectively.
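Since listing does not require authentication, a quick way to check the API is a plain `GET` request against both paths. A minimal sketch in Python, assuming the API runs locally on port 5000 as in the examples below:

```python
import requests

# The examples in this document assume the API listens on localhost:5000
base_url = "http://localhost:5000/api/v1"

# GET endpoints list entries and don't require authentication
print(requests.get(f"{base_url}/jobs").json())
print(requests.get(f"{base_url}/users").json())
```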
Some endpoints expect valid authentication in the form of Basic Authentication of registered users. These are:

- `POST /api/v1/jobs`
- `DELETE /api/v1/jobs`
- `POST /api/v1/users`: additionally requires admin privileges, i.e. only the `ADMIN_EMAIL` account can create new users
- `DELETE /api/v1/users`

Note, the configured admin user is created on API startup and can start administrating the API right away.
The authentication credentials are best placed in the request's `Authorization` header and are expected to be a base64-encoded string of `user_email:user_pass`, e.g.

```python
from base64 import b64encode

import requests

# Build the Basic Authentication header from the admin credentials
auth = f'Basic {b64encode("[email protected]:admin".encode()).decode()}'
print(
    requests.post(
        "http://localhost:5000/api/v1/users",
        json={"email": "[email protected]", "password": "bla"},
        headers={"Authorization": auth},
    ).json()
)
```
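Alternatively, `requests` can build the same `Authorization` header itself when the credentials are passed as a tuple to its `auth` parameter (the credentials below are just the placeholder values from the example above):

```python
import requests

# requests encodes ("email", "password") into a Basic Authentication header for you
print(
    requests.post(
        "http://localhost:5000/api/v1/users",
        json={"email": "[email protected]", "password": "bla"},
        auth=("[email protected]", "admin"),
    ).json()
)
```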
The API documentation (via Swagger) is accessible under `/api/v1` (e.g. `http://<IP/domain/localhost>:<port>/api/v1`), which is also the base URL for the whole API.

The documentation is grouped by API paths, showing their endpoints and allowed methods with details on how to access them. Each endpoint has additional information on expected parameters, example response values and possible return status codes and their meaning.
E.g. `users` has 4 endpoints:

- `POST /users`: corresponds to `/api/v1/users`, expects the `email` and `password` in the JSON payload and responds with either `200` (success), `401` (invalid authorization), `409` (already existing user) or a generic `500` (unknown error), see also the sketch after this list. E.g.

  ```bash
  curl -XPOST 'http://localhost:5000/api/v1/users' -d '{"email": "[email protected]", "password": "secpass"}' --header 'Authorization: Basic bmlsc0BnaXMtb3BzLmNvbTp'
  ```

- `GET /users`: corresponds to `/api/v1/users`, expects no parameters and responds either with `200` (success) or `500` (unknown error). E.g.

  ```bash
  curl -XGET 'http://localhost:5000/api/v1/users'
  ```

- and so on
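A client will usually branch on the documented status codes. A minimal sketch for `POST /users`, reusing the placeholder credentials from above:

```python
import requests

resp = requests.post(
    "http://localhost:5000/api/v1/users",
    json={"email": "[email protected]", "password": "secpass"},
    auth=("[email protected]", "admin"),
)

# Status codes documented for POST /users
if resp.status_code == 200:
    print("user created:", resp.json())
elif resp.status_code == 401:
    print("invalid authorization, check the credentials")
elif resp.status_code == 409:
    print("user already exists")
else:  # generic 500
    print("unknown error:", resp.text)
```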
Apart from documentation you can use Swagger to fire requests against the HTTP API.

When you click on an endpoint, you'll see a `Try it out` button on the right. When you click that, you'll be able to fill out the endpoint parameters (if any) and fire the request by hitting `Execute`. The response will show up in the section right below.

Note that some endpoints expect authenticated requests. In Swagger you'll see a small lock on the endpoint if it expects authentication. When you try to send a request, your browser will ask you for user name and password. To make it easier, you can also set the credentials globally by hitting the `Authorize` button on top of the documentation.
Most of the schemas described in the Swagger documentation should be straightforward. However, some endpoints require a little more explanation.

The table below gives some insights. Note that a lot of fields are auto-generated, so they only show up in the response.
| Field | Type | Only response | Description |
|---|---|---|---|
| `name` | str | | The name you want to give the entry. Will determine part of the dataset path, where the full path is `$DATA_DIR/<router>/<router>_<provider>_<name>.<compression>`. |
| `description` | str | | The description for the dataset. |
| `provider` | str | | The data provider, one of `["osm", "tomtom", "here"]`. |
| `router` | str | | The routing engine, one of `["valhalla", "osrm", "ors", "graphhopper"]`. |
| `bbox` | str | | The bounding box to crop the graph package to, in the format `minx,miny,maxx,maxy`, e.g. `1.531906,42.559908,1.6325,42.577608`. |
| `interval` | str | | The update interval for the graph package, one of `["once", "daily", "weekly", "monthly"]`. |
| `compression` | str | | The compression method to use, one of `["zip", "tar"]`. |
| `id` | int | yes | The database ID. |
| `user_id` | int | yes | The user's database ID. |
| `status` | int | yes | The current processing status of the job, one of `["Queued", "Extracting", "Tiling", "Failed", "Deleted", "Completed"]`. |
| `job_id` | str | yes | The Redis ID of the job. Will be populated during processing. |
| `container_id` | str | yes | The docker container ID of the routing engine. Useful for debugging purposes if something goes wrong. Will be populated during processing. |
| `last_started` | str | yes | The UTC timestamp of the last time this job was started. |
| `last_finished` | str | yes | The UTC timestamp of the last time this job was finished. Will be populated after processing. |
| `path` | str | yes | The full path to the graph package. |
| `pbf_path` | str | yes | The full path to the PBF file used to generate the graph. |
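Putting the request fields together, creating a scheduled graph packaging job could look like the following minimal sketch. It assumes `POST /api/v1/jobs` accepts these fields as a JSON payload (like the `users` endpoint does); the field values are only illustrative, while the field names and allowed values come from the table above:

```python
import requests

# POST /api/v1/jobs requires Basic Authentication of a registered user
job = {
    "name": "andorra",                              # becomes part of the dataset path
    "description": "Andorra extract for testing",   # free-text description
    "provider": "osm",                              # one of ["osm", "tomtom", "here"]
    "router": "valhalla",                           # one of ["valhalla", "osrm", "ors", "graphhopper"]
    "bbox": "1.531906,42.559908,1.6325,42.577608",  # minx,miny,maxx,maxy
    "interval": "daily",                            # one of ["once", "daily", "weekly", "monthly"]
    "compression": "zip",                           # one of ["zip", "tar"]
}

resp = requests.post(
    "http://localhost:5000/api/v1/jobs",
    json=job,
    auth=("[email protected]", "admin"),
)
print(resp.status_code, resp.json())
```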