# cve-bin-tool tests

You can see all existing tests in `test/`.

## Running all tests

To run the tests for cve-bin-tool:

```
pytest
```

To run the scanner and checker tests:

```
pytest test/test_scanner.py test/test_checkers.py
```

By default, some longer-running tests are turned off. If you want to enable them, set the environment variable `LONG_TESTS` to 1. You can do this just for a single command line as follows:

```
LONG_TESTS=1 pytest
```

For the scanner and checker tests:

```
LONG_TESTS=1 pytest test/test_scanner.py test/test_checkers.py
```

By default, tests which rely on external connectivity are turned off. If you want to enable them, set the environment variable `EXTERNAL_SYSTEM` to 1. You can do this just for a single command line as follows:

```
EXTERNAL_SYSTEM=1 pytest
```

For the NVD tests:

```
EXTERNAL_SYSTEM=1 pytest test/test_source_nvd.py
```

## Running a single test

To run a single test, you can use the unittest framework. For example, here's how to run the test for sqlite:

```
python -m unittest test.test_scanner.TestScanner.test_sqlite_3_12_2
```

To run a single test in test_scanner you can use pytest. For example, here's how to run the version-mapping test:

```
pytest test/test_scanner.py::TestScanner::test_version_mapping
```

## Running tests on different versions of Python

Our CI runs the tests on all currently supported Python versions under Linux. The testing configuration file is available on GitHub and will always show the versions currently in use on both Linux and Windows.

The recommended way to do this yourself is to use Python's virtualenv.

You can set up a virtualenv for each of these environments:

```
virtualenv -p python3.8 venv3.8
virtualenv -p python3.9 venv3.9
```

To activate one of these (the example uses 3.8), run the tests, and deactivate:

```
source venv3.8/bin/activate
pytest
deactivate
```

## Adding new tests: CVE mapping tests

- You can see the code for the scanner tests in `test/test_scanner.py`.
- You can see per-checker test data in `test/test_data`.
- If you just want to add a new mapping test for a checker, add a dictionary of product, version, and version_strings to the `mapping_test_data` list. Here, version_strings is a list of strings that contain a version signature or that can commonly be found in the module. For example, this is how the current `mapping_test_data` for gnutls looks; add the details of your new test case at the end of the list:
```python
mapping_test_data = [
    {
        "product": "gnutls",
        "version": "2.1.6",
        "version_strings": ["gnutls-cli 2.1.6"],
    },
    {
        "product": "gnutls",
        "version": "2.3.11",
        "version_strings": ["gnutls-serv 2.3.11"],
    },
]
```
- Please note that sometimes the database we're using doesn't have a perfect mapping between CVEs and product versions. If you try to write a test that doesn't work because of that mapping, but the description in the CVE says that version should be vulnerable, don't discard it! Instead, please make a note of it in a GitHub issue so we can investigate and maybe report it upstream.

## Adding new tests: Signature tests against real files

To make the basic test suite run quickly, we create "faked" binary files to test the CVE mappings. However, we also want to test real files so we know the signatures work on real-world data.
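
Conceptually, a "faked" binary is just a file containing one of the version strings a checker's signature looks for. The sketch below is purely illustrative (the regex and strings are invented, and the real harness in `test/test_scanner.py` is considerably more involved):

```python
# Purely illustrative sketch of the "faked binary" idea; the regex and
# strings are invented, and the real test harness does much more.
import re
import tempfile

# An invented signature, roughly the kind of pattern a checker looks for.
signature = re.compile(r"gnutls-cli (\d+\.\d+\.\d+)")

with tempfile.NamedTemporaryFile(mode="w+", suffix=".bin") as fake_binary:
    fake_binary.write("gnutls-cli 2.1.6")  # one of the version_strings above
    fake_binary.seek(0)
    match = signature.search(fake_binary.read())
    assert match is not None and match.group(1) == "2.1.6"
```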

You can see the test data for package tests in the `package_test_data` variable of the test data file you are writing tests for.

The `test_version_in_package` function in test_scanner takes a URL, a package name, a module name, and a version; it downloads the package, runs the scanner against it, and makes sure the package you've specified is detected. But we need more tests!

- To add a new test, find an appropriate publicly available file (Linux distribution packages and public releases of the package itself are ideal). Add the details of the new test case to the `package_test_data` variable of the file you are writing the test for. For example, this is how the current `package_test_data` for binutils looks; add the details of your new test case at the end of the list:
```python
package_test_data = [
    {
        "url": "http://security.ubuntu.com/ubuntu/pool/main/b/binutils/",
        "package_name": "binutils_2.26.1-1ubuntu1~16.04.8_amd64.deb",
        "product": "binutils",
        "version": "2.26.1",
        "other_products": [],
    },
    {
        "url": "http://mirror.centos.org/centos/7/os/x86_64/Packages/",
        "package_name": "binutils-2.27-43.base.el7.x86_64.rpm",
        "product": "binutils",
        "version": "2.27",
        "other_products": ["zlib"],
    },
]
```

The `other_products` attribute lists any other products that might be detected in the binaries provided, so we can check that only the expected products are found in a given binary. (For example, if an imaginary package called CryptographyExtensions included OpenSSL, we'd expect to detect both in CryptographyExtensions-1.2.rpm.)
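
As a purely illustrative sketch of that situation, an entry for the imaginary package above might look like the following (the URL, file name, and product names are all made up):

```python
# Hypothetical entry; the URL, file name, and product names are invented
# for illustration only.
{
    "url": "https://example.com/packages/",
    "package_name": "CryptographyExtensions-1.2.rpm",
    "product": "cryptographyextensions",
    "version": "1.2",
    "other_products": ["openssl"],
},
```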

Ideally, we should have at least one such test for each checker, and it would be nice to have some different sources for each as well. For example, for packages available in common Linux distributions, we might want one from Fedora, one from Debian, and one direct from upstream to show that we detect all those versions.

Note that we're importing `LONG_TESTS()` from tests.util at the top of the files where it's used. If you're adding a long test to a test file that previously didn't have any, you'll need to add that import at the top of the file as well.
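
A minimal sketch of what such a guard looks like, assuming `LONG_TESTS()` simply reflects the `LONG_TESTS` environment variable described earlier (the stand-in helper below takes the place of the real import):

```python
import os

import pytest


def LONG_TESTS() -> bool:
    # Stand-in for the helper imported at the top of the real test files;
    # assumed to simply reflect the LONG_TESTS environment variable.
    return os.getenv("LONG_TESTS") == "1"


@pytest.mark.skipif(not LONG_TESTS(), reason="LONG_TESTS not enabled")
def test_something_slow():
    # Placeholder body; a real long test would download and scan a package.
    assert True
```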

## Adding new tests: Checker filename mappings

To test the filename mappings, rather than making a bunch of empty files, we're calling the checkers directly in `test/test_checkers.py`. You can add a new test by specifying the name of the checker you want to test, the file name, and the expected result that the scanner should say it "is".

```python
@pytest.mark.parametrize(
    "checker_name, file_name, expected_result",
    [
        ("python", "python", ["python"]),
        ("python", "python3.8", ["python"]),
    ],
)
```

The `test_filename_is` function will then load the checker you have specified (and fail spectacularly if you specify a checker that does not exist), try to run `get_version()` with empty file contents and the filename you specified, then check that it "is" something (as opposed to "contains") and that the modulename `get_version()` returns is in fact the `expected_result` you specified.

For ease of maintenance, please keep the parametrize list in alphabetical order when you add new tests.
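
For instance, a hypothetical entry for a checker named curl (illustrative only, not taken from the real test file) would slot in before the python entries:

```python
@pytest.mark.parametrize(
    "checker_name, file_name, expected_result",
    [
        ("curl", "curl", ["curl"]),  # hypothetical new entry, in alphabetical order
        ("python", "python", ["python"]),
        ("python", "python3.8", ["python"]),
    ],
)
```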

You can then run your new test using pytest:

```
pytest test/test_checkers.py
```

You can also use all the pytest functionality to run groups of tests. For example, this will run the python-related tests (but not the bluetooth one):

```
pytest -v test/test_checkers.py -k python
```

## Known issues

If you're using Windows and plan to run the PDF tests, we strongly recommend also installing pdftotext; we experienced problems running the tests without it. The best way to install it is through conda.