This issue is a bit out there. Ideally before we ship a public version of our API, we need to make sure the test coverage is excellent, and also get coverage of the endpoints & their options into our documentation.
Doing both of these requires the same thing to happen: for someone to build up extensive knowledge of each endpoint and what it is capable of and ensure it's both tested and documented. That's a pretty time-consuming task.
Testing
The heart of Ghost's API lives in the functions in core/server/api/<resource>.js, e.g. posts.api.browse(). Currently, those functions are tested in the integration tests in core/test/integration/api - these tests go through the complete stack, no stubbing or mocking, and test what actually happens. Some of the endpoints are well-tested there, others not so much.
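For illustration, a minimal sketch of the shape of one of these method-level tests (paths, fixture setup and assertions here are hypothetical, not copied from the actual suite):

```js
// Hypothetical method-level integration test - database/fixture bootstrap is
// omitted; the real suite sets up a test database before each test
var should = require('should'),
    PostAPI = require('../../../server/api/posts');

describe('Post API', function () {
    it('browse returns a paginated list of posts', function (done) {
        PostAPI.browse({context: {internal: true}}).then(function (results) {
            should.exist(results.posts);
            results.posts.should.be.an.instanceOf(Array);
            should.exist(results.meta.pagination);
            done();
        }).catch(done);
    });
});
```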
If you're using the API over HTTP, those functions get wrapped by the http method in core/server/api/index.js. That method is a sort of middleware: it takes the incoming request via the req and res objects, converts the incoming options from request format into the method-call format of object and options, calls the API method, and converts the result into JSON (or a correctly formatted error) to be sent in the response.
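Roughly, and leaving a lot out (authentication, file uploads, detailed error formatting), the wrapper does something along these lines; this is an illustrative sketch, not the actual implementation:

```js
// Illustrative sketch only - the real http method in core/server/api/index.js
// handles more cases than this
var _ = require('lodash');

function http(apiMethod) {
    return function apiHandler(req, res) {
        // 1. Convert the request into the (object, options) call format
        var object = req.body,
            options = _.extend({}, req.query, req.params, {
                context: {user: (req.user && req.user.id) || null}
            });

        // 2. Call the API method, then 3. respond with JSON or a formatted error
        apiMethod(object, options).then(function (result) {
            res.json(result || {});
        }).catch(function (error) {
            res.status(error.code || 500).json({errors: [{message: error.message}]});
        });
    };
}
```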
The HTTP version of the API is tested in the functional/integration tests that live in core/test/functional/routes/api. These tests also go through the complete stack, no stubbing or mocking, and test what actually happens, but start with an HTTP call rather than with a method call. Again, some of the endpoints are well-tested there and others are not.
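A hypothetical example of the shape of these tests, with server bootstrap and authentication omitted and the URL/version only illustrative:

```js
// Hypothetical HTTP-level test - the real suite starts Ghost and assigns
// request = supertest.agent(app) with a valid session/token first
var supertest = require('supertest'),
    should = require('should'),
    request;

describe('Posts API (HTTP)', function () {
    it('can browse posts over HTTP', function (done) {
        request.get('/ghost/api/v0.1/posts/')
            .expect('Content-Type', /json/)
            .expect(200)
            .end(function (err, res) {
                if (err) { return done(err); }
                should.exist(res.body.posts);
                res.body.posts.should.be.an.instanceOf(Array);
                done();
            });
    });
});
```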
Some endpoints are covered by one set of tests but not the other; some have both.
As well as the HTTP call wrapper around the methods, soon we're also going to have a get-helper wrapper around the methods. I wouldn't expect to have yet-another-suite of integration tests for that too.
Instead, I believe what we should have is a complete suite of integration tests for the method calls. The tests for the HTTP wrapper should then be unit tests which stub out the method calls and check that we get the correct response including status code, headers, and body format.
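For example, a unit test along these lines (a sketch which assumes the http wrapper is exported from core/server/api/index.js, and uses sinon for the stubbing):

```js
// Sketch of a unit test for the HTTP wrapper with the API method stubbed out
var sinon = require('sinon'),
    should = require('should'),
    Promise = require('bluebird'),
    api = require('../../server/api');

describe('API http wrapper', function () {
    it('responds with the stubbed method result as JSON', function (done) {
        var browseStub = sinon.stub().returns(Promise.resolve({posts: []})),
            req = {body: {}, query: {}, params: {}},
            res = {
                json: function (body) {
                    browseStub.calledOnce.should.equal(true);
                    body.should.eql({posts: []});
                    done();
                }
            };

        api.http(browseStub)(req, res);
    });
});
```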
This means core/test/integration/api should have 100% coverage of the API methods, and we'd then have unit tests in core/test/unit/api_spec.js which just test the methods in the core/server/api/index.js file. All of the HTTP tests in core/test/functional/routes/api can go away, or we can keep a single set of integration tests in core/test/functional/routes/api_spec.js to check key concepts, rather than every endpoint in detail.
Coverage
Our coverage tasks currently only relate to unit tests. Integration tests tend to execute a lot of initialisation code, meaning that if they're included in coverage, a lot of lines get marked as covered when they're not being directly tested.
However, it would be good to get a picture of how much test coverage there is on the API, so I think it would be worthwhile to add a second coverage task (grunt coverage-api or grunt coverage-integration?) to run the integration tests, or just the API integration tests. The grunt-mocha-istanbul library we're using allows for configuration of multiple coverage tasks by adding a new config (https://github.com/TryGhost/Ghost/blob/master/Gruntfile.js#L266) and wiring it up to a grunt task.
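As a rough sketch of what that second task could look like (the target name, paths and options below are assumptions, not a final config):

```js
// Illustrative standalone Gruntfile.js showing a second mocha_istanbul target
// for the API integration tests
module.exports = function (grunt) {
    grunt.initConfig({
        mocha_istanbul: {
            // the existing unit-test coverage target stays as-is
            coverage_api: {
                src: ['core/test/integration/api'],
                options: {
                    mask: '**/*_spec.js',
                    coverageFolder: 'core/test/coverage-api'
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-mocha-istanbul');
    grunt.registerTask('coverage-api', 'Generate a coverage report for the API integration tests',
        ['mocha_istanbul:coverage_api']);
};
```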
Documentation
Documentation for the API can come in two forms. Firstly, it would be good to ensure that there are at least basic inline docs explaining what object and options are, and can be, for each endpoint, a bit like this: https://github.com/TryGhost/Ghost/blob/master/core/server/api/posts.js#L32 (a rough sketch follows below).
Secondly, we have API documentation set up in readme.io, but we need to populate it with the details of each endpoint. Anyone wanting to help with this will need to be added to the readme.io project.
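Coming back to the first point, the inline docs could look something like this (a hypothetical example for posts.browse; the parameter list is illustrative, not exhaustive or authoritative):

```js
/**
 * ## Posts API Method: Browse
 * Fetch a paginated list of posts.
 *
 * @public
 * @param {Object} [options]
 * @param {Number} [options.page] - page of results to return
 * @param {Number} [options.limit] - number of posts per page
 * @param {String} [options.status] - 'published', 'draft' or 'all'
 * @param {String} [options.include] - related data to include, e.g. 'tags'
 * @returns {Promise} resolves to {posts: [...], meta: {pagination: {...}}}
 */
browse: function browse(options) {
    // ...
}
```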
Approach
I recommend that we start with the soon-to-be public endpoints: read & browse for posts, tags, and users. Once that is done we can extend out to edit/add/destroy for each of those, and then to other important resources like settings.