Entity linking evaluation

Python command-line evaluation scripts for TAC entity linking and related wikification, named entity disambiguation, and within- and cross-document coreference tasks.

It aims for fast and flexible coreference resolution evaluation and sophisticated named entity recognition evaluation, such as awarding partial scores for partial overlap between gold and system mentions. CEAF, in particular, is much faster to calculate here than with the CoNLL-11/12 scorer. Features include configurable metrics; accounting for or ignoring cross-document coreference (see the evaluate --by-doc flag); plotting to compare evaluation by system, measure and corpus subset; and bootstrap-based confidence interval calculation for document-wise evaluation metrics.
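
For example, cross-document coreference can be ignored by scoring each document independently. A sketch only: the evaluate command and --by-doc flag are as documented above, while the measure name and file paths here are illustrative.

./nel evaluate --by-doc -m mention_ceaf -g gold.tsv system.tsv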

Requires Python (2.7; Python 3 support is experimental/partial) with numpy (and preferably scipy, for fast CEAF calculation) and joblib installed. matplotlib is additionally required for the plot-systems command.
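
These dependencies can be installed with pip, for instance:

pip install numpy scipy joblib matplotlib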

See a list of commands with:

./nel --help

Or install it onto your Python path (e.g. with pip install git+https://github.com/wikilinks/neleval) and then run:

python -m neleval --help

TAC-KBP 2014 EDL quickstart

./scripts/run_tac14_evaluation.sh \
    /path/to/gold.xml \              # TAC14 gold standard queries/mentions
    /path/to/gold.tab \              # TAC14 gold standard link and nil annotations
    /system/output/directory \       # directory containing (only) TAC14 system output files
    /script/output/directory \       # directory to which results are written
    number_of_jobs                   # number of jobs for parallel mode

Each file in the system output directory is scored against gold.tab.

A similar script is available for TAC-KBP 2015 EDL; see the sketch below.
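
For example (the script name is assumed to parallel the TAC14 one; check the scripts directory for the exact arguments it expects):

./scripts/run_tac15_evaluation.sh ...    # arguments analogous to run_tac14_evaluation.sh above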

More details

See the documentation for more details.