A plugin to send pytest test results to an ELK stack, with extra context data
- Report each test result into Elasticsearch as they finish
- Automatically append contextual data to each test:
  - git information such as branch, last commit, and more
  - all of the CI environment variables
    - Jenkins
    - Travis
    - Circle CI
    - Github Actions
  - username, if available
- Report a test summary to Elastic for each session with all the context data
- Append any user data into the context sent to Elastic
The only requirement is having pytest tests written.
You can install "pytest-elk-reporter" via pip from PyPI:
pip install pytest-elk-reporter
The auto_create_index setting needs to be enabled for the indexes that are going to be used; since the plugin has no code to create the indexes itself, it relies on this behavior (which is the Elasticsearch default):
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"action.auto_create_index": "true"
}
}
'
For more info on this Elasticsearch feature, check the Elasticsearch index documentation.
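If you want to double-check that the setting took effect, here is a minimal sketch using the cluster settings API; it assumes a local, unauthenticated cluster, so adjust the host and authentication for your setup:

# hedged sketch: confirm auto_create_index is enabled on a local, unauthenticated cluster
import requests

settings = requests.get("http://localhost:9200/_cluster/settings").json()
print(settings.get("persistent", {}).get("action", {}).get("auto_create_index"))

Once the cluster accepts writes, point the plugin at it from the command line: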
pytest --es-address 127.0.0.1:9200
# or if you need user/password to authenticate
pytest --es-address my-elk-server.io:9200 --es-username fruch --es-password 'passwordsarenicetohave'
# or with api key (see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html)
pytest --es-address my-elk-server.io:9200 --es-api-key 'VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw=='
Or configure it from code, for example in conftest.py:

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # TODO: get credentials in a more secure fashion programmatically, maybe AWS secrets or the like
        # or put them in plain text in the code... what could ever go wrong...
        plugin.es_index_name = 'test_data'
        plugin.es_address = "my-elk-server.io:9200"
        plugin.es_user = 'fruch'
        plugin.es_password = 'passwordsarenicetohave'
        # or use an api key (see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html)
        plugin.es_api_key = 'VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw=='
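To address the TODO above, one minimal variation is to pull the credentials from environment variables instead of hard-coding them; the ELK_USER / ELK_PASSWORD variable names here are hypothetical, not something the plugin defines:

import os

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        plugin.es_index_name = 'test_data'
        plugin.es_address = "my-elk-server.io:9200"
        # hypothetical variable names; populate them from your CI's secret store
        plugin.es_user = os.environ.get("ELK_USER")
        plugin.es_password = os.environ.get("ELK_PASSWORD")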
# put this in pytest.ini / tox.ini / setup.cfg
[pytest]
es_address = my-elk-server.io:9200
es_user = fruch
es_password = passwordsarenicetohave
es_index_name = test_data
# or with api key (see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html)
es_api_key = VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw==
See the pytest docs for more about how to configure pytest using .ini files.
You can also collect context data for the whole test session. In this example, I'll be able to build a dashboard for each version:
import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    """
    Append my own session-specific data, for example which version of the code under test is used
    """
    # TODO: programmatically set to the version of the code under test...
    my_data = {"formal_version": "1.0.0-rc2"}

    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(**my_data)
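One hedged way to fill in that TODO is to read the version of the installed package under test; "my_service" below is a hypothetical package name, not something defined by the plugin:

from importlib.metadata import version

import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    # "my_service" is a hypothetical package name; replace it with your code under test
    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(formal_version=version("my_service"))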
Per-test data can be appended too, via the elk_reporter fixture:

import requests

def test_my_service_and_collect_timings(request, elk_reporter):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    elk_reporter.append_test_data(request, {"do_something_response_time": response.elapsed.total_seconds()})
    # now, a dashboard showing response time by version should be quite easy
    # and yeah, it's not exactly a real usable metric, but it's just one example...
Or via the record_property built-in fixture (which is normally used to collect data into junit.xml reports):
import requests

def test_my_service_and_collect_timings(record_property):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    record_property("do_something_response_time", response.elapsed.total_seconds())
One cool thing you can do, now that you have a history of the tests, is to split them based on their actual passing runtime. For long-running integration tests, this is priceless.
In this example, we're going to split the run into slices of at most 4 minutes each. Any test that doesn't have history information is assumed to take 60 seconds.
# pytest --collect-only --es-splice --es-max-splice-time=4 --es-default-test-time=60
...
0: 0:04:00 - 3 - ['test_history_slices.py::test_should_pass_1', 'test_history_slices.py::test_should_pass_2', 'test_history_slices.py::test_should_pass_3']
1: 0:04:00 - 2 - ['test_history_slices.py::test_with_history_data', 'test_history_slices.py::test_that_failed']
...
# cat include000.txt
test_history_slices.py::test_should_pass_1
test_history_slices.py::test_should_pass_2
test_history_slices.py::test_should_pass_3
# cat include001.txt
test_history_slices.py::test_with_history_data
test_history_slices.py::test_that_failed
### now we can run each slice on its own machine
### on machine1
# pytest $(cat include000.txt)
### on machine2
# pytest $(cat include001.txt)
Contributions are very welcome. Tests can be run with tox. Please ensure the coverage at least stays the same before you submit a pull request.
Distributed under the terms of the MIT license, "pytest-elk-reporter" is free and open source software.
If you encounter any problems, please file an issue along with a detailed description.
This pytest plugin was generated with Cookiecutter along with @hackebrot's cookiecutter-pytest-plugin template.