SAT: make all tests mandatory when test_strictness_level == high (#…
alafanechere authored Oct 25, 2022
1 parent 47f77f1 · commit 0865c62
Showing 7 changed files with 247 additions and 29 deletions.
@@ -1,5 +1,8 @@
# Changelog

## 0.2.13
Fail tests in `high` `test_strictness_level` if they are not configured. [#18414](https://github.com/airbytehq/airbyte/pull/18414/).

## 0.2.12
Declare `bypass_reason` field in test configuration. [#18364](https://github.com/airbytehq/airbyte/pull/18364).

@@ -33,7 +33,7 @@ COPY pytest.ini setup.py ./
COPY source_acceptance_test ./source_acceptance_test
RUN pip install .

-LABEL io.airbyte.version=0.2.12
LABEL io.airbyte.version=0.2.13
LABEL io.airbyte.name=airbyte/source-acceptance-test

ENTRYPOINT ["python", "-m", "pytest", "-p", "source_acceptance_test.plugin", "-r", "fEsx"]
@@ -5,6 +5,7 @@

import inflection
import pytest
from source_acceptance_test.config import Config


@pytest.mark.usefixtures("inputs")
@@ -16,3 +17,5 @@ def config_key(cls):
if class_name.startswith("Test"):
class_name = class_name[len("Test") :]
return inflection.underscore(class_name)

MANDATORY_FOR_TEST_STRICTNESS_LEVELS = [Config.TestStrictnessLevel.high]
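
Editor's note, not part of the commit: `MANDATORY_FOR_TEST_STRICTNESS_LEVELS` is a plain class attribute on `BaseTest`, so a concrete test suite can opt out of being mandatory by overriding it, exactly as the unit tests below do with an empty set. A minimal sketch with a hypothetical suite name:

```python
from source_acceptance_test.base import BaseTest


class TestHypotheticalOptionalSuite(BaseTest):
    # Hypothetical suite: never mandatory, whatever test_strictness_level is configured.
    # Suites that keep the inherited default stay mandatory in `high` strictness.
    MANDATORY_FOR_TEST_STRICTNESS_LEVELS = set()
```
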
@@ -3,12 +3,16 @@
#


from enum import Enum
from pathlib import Path
from typing import List
from typing import Callable, List, Tuple, Type

import pytest
from _pytest.config import Config
from _pytest.config.argparsing import Parser
from source_acceptance_test.base import BaseTest
from source_acceptance_test.config import Config as AcceptanceTestConfig
from source_acceptance_test.config import GenericTestConfig
from source_acceptance_test.utils import diff_dicts, load_config

HERE = Path(__file__).parent.absolute()
@@ -34,36 +38,75 @@ def pytest_addoption(parser):
)


class TestAction(Enum):
PARAMETRIZE = 1
SKIP = 2
FAIL = 3


def pytest_generate_tests(metafunc):
"""Hook function to customize test discovery and parametrization.
-    It does two things:
-    1. skip test class if its name omitted in the config file (or it has no inputs defined)
-    2. parametrize each test with inputs from config file.
-    For example config file contains this:
-        tests:
-          test_suite1:
-            - input1: value1
-              input2: value2
-            - input1: value3
-              input2: value4
-          test_suite2: []
-    Hook function will skip test_suite2 and test_suite3, but parametrize test_suite1 with two sets of inputs.
It parametrizes, skips or fails a discovered test according to the test configuration.
"""

if "inputs" in metafunc.fixturenames:
-        config_key = metafunc.cls.config_key()
-        test_name = f"{metafunc.cls.__name__}.{metafunc.function.__name__}"
-        config = load_config(metafunc.config.getoption("--acceptance-test-config"))
-        if not hasattr(config.acceptance_tests, config_key) or not getattr(config.acceptance_tests, config_key):
-            pytest.skip(f"Skipping {test_name} because not found in the config")
test_config_key = metafunc.cls.config_key()
global_config = load_config(metafunc.config.getoption("--acceptance-test-config"))
test_configuration: GenericTestConfig = getattr(global_config.acceptance_tests, test_config_key, None)
test_action, reason = parametrize_skip_or_fail(
metafunc.cls, metafunc.function, global_config.test_strictness_level, test_configuration
)

if test_action == TestAction.PARAMETRIZE:
metafunc.parametrize("inputs", test_configuration.tests)
if test_action == TestAction.SKIP:
pytest.skip(reason)
if test_action == TestAction.FAIL:
pytest.fail(reason)


def parametrize_skip_or_fail(
TestClass: Type[BaseTest],
test_function: Callable,
global_test_mode: AcceptanceTestConfig.TestStrictnessLevel,
test_configuration: GenericTestConfig,
) -> Tuple[TestAction, str]:
"""Use the current test strictness level and test configuration to determine if the discovered test should be parametrized, skipped or failed.
We parametrize a test if:
- the configuration declares tests.
We skip a test if:
- the configuration does not declare tests and the current test mode allows this test to be skipped,
- or a bypass_reason is declared in the test configuration.
We fail a test if:
- the configuration does not declare the test but the discovered test is declared as mandatory for the current test strictness level.
Args:
TestClass (Type[BaseTest]): The discovered test class
test_function (Callable): The discovered test function
global_test_mode (AcceptanceTestConfig.TestStrictnessLevel): The global test strictness level (from the global configuration object)
test_configuration (GenericTestConfig): The current test configuration.
Returns:
Tuple[TestAction, str]: The test action the execution should take and the reason why.
"""
test_name = f"{TestClass.__name__}.{test_function.__name__}"
test_mode_can_skip_this_test = global_test_mode not in TestClass.MANDATORY_FOR_TEST_STRICTNESS_LEVELS
skipping_reason_prefix = f"Skipping {test_name}: "
default_skipping_reason = skipping_reason_prefix + "not found in the config."

if test_configuration is None:
if test_mode_can_skip_this_test:
return TestAction.SKIP, default_skipping_reason
else:
-            test_inputs = getattr(config.acceptance_tests, config_key).tests
-            if not test_inputs:
-                pytest.skip(f"Skipping {test_name} because no inputs provided")
-
-            metafunc.parametrize("inputs", test_inputs)
return (
TestAction.FAIL,
f"{test_name} failed: it was not configured but must be according to the current {global_test_mode} test strictness level.",
)
else:
if test_configuration.tests is not None:
return TestAction.PARAMETRIZE, f"Parametrize {test_name}: tests are configured."
else:
return TestAction.SKIP, skipping_reason_prefix + test_configuration.bypass_reason


def pytest_collection_modifyitems(config, items):
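
Editor-added illustration (not part of the commit): how `parametrize_skip_or_fail` classifies a discovered test, using a hypothetical test class. The real decision table is exercised by the new unit tests in the next file.

```python
from source_acceptance_test import plugin
from source_acceptance_test.config import Config, GenericTestConfig


class HypotheticalSpecTest:
    # Mirrors the default added to BaseTest in this commit.
    MANDATORY_FOR_TEST_STRICTNESS_LEVELS = [Config.TestStrictnessLevel.high]

    def test_something(self):
        pass


# Not configured while running in `high` strictness: the test is failed.
action, reason = plugin.parametrize_skip_or_fail(
    HypotheticalSpecTest, HypotheticalSpecTest.test_something, Config.TestStrictnessLevel.high, None
)
assert action == plugin.TestAction.FAIL

# Declaring a bypass_reason turns the failure into a documented skip.
action, reason = plugin.parametrize_skip_or_fail(
    HypotheticalSpecTest,
    HypotheticalSpecTest.test_something,
    Config.TestStrictnessLevel.high,
    GenericTestConfig(bypass_reason="This suite cannot run against this connector."),
)
assert action == plugin.TestAction.SKIP
assert reason == "Skipping HypotheticalSpecTest.test_something: This suite cannot run against this connector."
```
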
@@ -0,0 +1,122 @@
#
# Copyright (c) 2022 Airbyte, Inc., all rights reserved.
#

import pytest
from source_acceptance_test import config, plugin

HIGH_TEST_STRICTNESS_LEVEL = config.Config.TestStrictnessLevel.high
LOW_TEST_STRICTNESS_LEVEL = config.Config.TestStrictnessLevel.low

PARAMETRIZE_ACTION = plugin.TestAction.PARAMETRIZE
SKIP_ACTION = plugin.TestAction.SKIP
FAIL_ACTION = plugin.TestAction.FAIL


class MyTestClass:
def dumb_test_function(self):
assert 2 > 1


@pytest.mark.parametrize(
"parametrize_skip_or_fail_return",
[(PARAMETRIZE_ACTION, "parametrize reason"), (SKIP_ACTION, "skip reason"), (FAIL_ACTION, "fail_reason")],
)
def test_pytest_generate_tests(mocker, parametrize_skip_or_fail_return):
test_config = config.Config(
connector_image="foo",
acceptance_tests=config.AcceptanceTestConfigurations(spec=config.GenericTestConfig(tests=[config.SpecTestConfig()])),
)
mocker.patch.object(plugin.pytest, "skip")
mocker.patch.object(plugin.pytest, "fail")
mocker.patch.object(plugin, "parametrize_skip_or_fail", mocker.Mock(return_value=parametrize_skip_or_fail_return))
mocker.patch.object(plugin, "load_config", mocker.Mock(return_value=test_config))
metafunc_mock = mocker.Mock(
fixturenames=["inputs"],
function=mocker.Mock(__name__="test_function"),
cls=mocker.Mock(config_key=mocker.Mock(return_value="spec"), __name__="MyTest"),
)
plugin.pytest_generate_tests(metafunc_mock)
action, reason = parametrize_skip_or_fail_return
if action == PARAMETRIZE_ACTION:
metafunc_mock.parametrize.assert_called_once_with("inputs", test_config.acceptance_tests.spec.tests)
if action == SKIP_ACTION:
plugin.pytest.skip.assert_called_once_with(reason)
if action == FAIL_ACTION:
plugin.pytest.fail.assert_called_once_with(reason)


@pytest.mark.parametrize(
"TestClass, test_class_MANDATORY_FOR_TEST_STRICTNESS_LEVELS, global_test_mode, test_configuration, expected_action, expected_reason",
[
pytest.param(
MyTestClass,
(HIGH_TEST_STRICTNESS_LEVEL),
HIGH_TEST_STRICTNESS_LEVEL,
None,
FAIL_ACTION,
"MyTestClass.dumb_test_function failed: it was not configured but must be according to the current high test strictness level.",
id="Discovered test is mandatory in high test strictness level, we're in high test strictness level, it was not configured: FAIL",
),
pytest.param(
MyTestClass,
(HIGH_TEST_STRICTNESS_LEVEL),
LOW_TEST_STRICTNESS_LEVEL,
None,
SKIP_ACTION,
"Skipping MyTestClass.dumb_test_function: not found in the config.",
id="Discovered test is mandatory in high test strictness level, we are in low strictness level, it is not configured: SKIP",
),
pytest.param(
MyTestClass,
set(),
HIGH_TEST_STRICTNESS_LEVEL,
None,
SKIP_ACTION,
"Skipping MyTestClass.dumb_test_function: not found in the config.",
id="Discovered test is not mandatory in any test strictness level, it was not configured: SKIP",
),
pytest.param(
MyTestClass,
(HIGH_TEST_STRICTNESS_LEVEL),
HIGH_TEST_STRICTNESS_LEVEL,
config.GenericTestConfig(bypass_reason="A good reason."),
SKIP_ACTION,
"Skipping MyTestClass.dumb_test_function: A good reason.",
id="Discovered test is mandatory in high test strictness level, a bypass reason was provided: SKIP",
),
pytest.param(
MyTestClass,
(HIGH_TEST_STRICTNESS_LEVEL),
LOW_TEST_STRICTNESS_LEVEL,
config.GenericTestConfig(bypass_reason="A good reason."),
SKIP_ACTION,
"Skipping MyTestClass.dumb_test_function: A good reason.",
id="Discovered test is mandatory in high test strictness level, we are in low test strictness level, a bypass reason was provided: SKIP (with bypass reason shown)",
),
pytest.param(
MyTestClass,
(HIGH_TEST_STRICTNESS_LEVEL),
HIGH_TEST_STRICTNESS_LEVEL,
config.GenericTestConfig(tests=[config.SpecTestConfig()]),
PARAMETRIZE_ACTION,
"Parametrize MyTestClass.dumb_test_function: tests are configured.",
id="[High test strictness level] Discovered test is configured: PARAMETRIZE",
),
pytest.param(
MyTestClass,
(HIGH_TEST_STRICTNESS_LEVEL),
LOW_TEST_STRICTNESS_LEVEL,
config.GenericTestConfig(tests=[config.SpecTestConfig()]),
PARAMETRIZE_ACTION,
"Parametrize MyTestClass.dumb_test_function: tests are configured.",
id="[Low test strictness level] Discovered test is configured: PARAMETRIZE",
),
],
)
def test_parametrize_skip_or_fail(
TestClass, test_class_MANDATORY_FOR_TEST_STRICTNESS_LEVELS, global_test_mode, test_configuration, expected_action, expected_reason
):
TestClass.MANDATORY_FOR_TEST_STRICTNESS_LEVELS = test_class_MANDATORY_FOR_TEST_STRICTNESS_LEVELS
test_action, reason = plugin.parametrize_skip_or_fail(TestClass, TestClass.dumb_test_function, global_test_mode, test_configuration)
assert (test_action, reason) == (expected_action, expected_reason)
@@ -1,9 +1,8 @@
# See [Source Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/source-acceptance-tests-reference)
# for more information about how to configure these tests
connector_image: airbyte/source-pokeapi:dev
test_strictness_level: high
acceptance_tests:
spec:
bypass_reason: "TODO: activate this test!"
bypass_reason: "The spec is currently invalid: it has additionalProperties set to false"
connection:
tests:
- config_path: "integration_tests/config.json"
@@ -19,3 +18,5 @@ acceptance_tests:
tests:
- config_path: "integration_tests/config.json"
configured_catalog_path: "integration_tests/configured_catalog.json"
incremental:
bypass_reason: "This connector does not support incremental syncs."
@@ -232,3 +232,49 @@ This test verifies that sync produces no records when run with the STATE with ab
| `configured_catalog_path` | string | `integration_tests/configured_catalog.json` | Path to configured catalog |
| `future_state_path` | string | None | Path to the state file with abnormally large cursor values |
| `timeout_seconds` | int | 20\*60 | Test execution timeout in seconds |
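
Editor-added sketch (not part of this change): with the `acceptance_tests` layout used elsewhere in this commit, the options above would be declared roughly as follows; the file paths are placeholders.

```yaml
acceptance_tests:
  incremental:
    tests:
      - config_path: "integration_tests/config.json"
        configured_catalog_path: "integration_tests/configured_catalog.json"
        future_state_path: "integration_tests/abnormal_state.json"
        timeout_seconds: 1200  # 20 * 60, the default from the table above
```
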


## Strictness level

To enforce maximal coverage of acceptance tests, we expose a `test_strictness_level` field at the root of the `acceptance-test-config.yml` configuration.
The default `test_strictness_level` is `low`, but generally available connectors are expected to eventually set it to `high`.
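
For instance, a connector opts into the strict mode with a single top-level field (illustrative sketch; the image name and path are placeholders):

```yaml
connector_image: airbyte/source-example:dev
test_strictness_level: high  # defaults to low when omitted
acceptance_tests:
  spec:
    tests:
      - spec_path: "source_example/spec.json"
```
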

### Test enforcements in `high` test strictness level

#### All acceptance tests are declared, a `bypass_reason` is filled if a test can't run
In the `high` test strictness level we expect all acceptance tests to be declared:
* `spec`
* `connection`
* `discovery`
* `basic_read`
* `full_refresh`
* `incremental`

If a test can't be run for a valid technical or organizational reason, a `bypass_reason` can be declared to skip it.
For example, `source-pokeapi` does not support incremental syncs, so this test can be skipped when `test_strictness_level` is `high` by declaring a `bypass_reason` under `incremental`:
```yaml
connector_image: "airbyte/source-pokeapi"
test_strictness_level: high
acceptance_tests:
spec:
tests:
- spec_path: "source_pokeapi/spec.json"
connection:
tests:
- config_path: "integration_tests/config.json"
status: "succeed"
discovery:
tests:
- config_path: "integration_tests/config.json"
basic_read:
tests:
- config_path: "integration_tests/config.json"
configured_catalog_path: "integration_tests/configured_catalog.json"
full_refresh:
tests:
- config_path: "integration_tests/config.json"
configured_catalog_path: "integration_tests/configured_catalog.json"
incremental:
bypass_reason: "Incremental syncs are not supported on this connector."
```
