WikiExtractor.py is a Python script that extracts and cleans text from a Wikipedia database dump.
The tool is written in pure Python and requires no additional libraries.
Wikipedia articles are written in the MediaWiki Markup Language, which provides a simple notation for formatting text (bold, italics, underlines, images, tables, etc.). It also allows inserting HTML markup into documents. Wiki and HTML tags are sometimes misused (unclosed tags, wrong attributes, etc.), so the extractor applies some heuristics to work around such problems.
The current beta version of WikiExtractor.py is capable of performing template expansion to some extent.
The script is invoked with a Wikipedia dump file as an argument. Use the article dumps, which are available at http://dumps.wikimedia.org/XXwiki/latest/XXwiki-latest-pages-articles.xml.bz2, where XX is the language identifier (e.g. en, es, zh).
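For scripting, the dump URL for a given language can be assembled from that pattern. A minimal sketch (the helper name `dump_url` is ours, not part of WikiExtractor):

```python
def dump_url(lang: str) -> str:
    """Build the latest pages-articles dump URL for a language code (e.g. 'en', 'es', 'zh')."""
    return (f"http://dumps.wikimedia.org/{lang}wiki/latest/"
            f"{lang}wiki-latest-pages-articles.xml.bz2")

print(dump_url("en"))
```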
The output is stored in a number of files of similar size in a chosen directory. Each file contains several documents in the extractor's document format.
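Each extracted document is wrapped in a `<doc>` element carrying the page id, URL, and title. The sample below is abridged and the exact attribute set is our assumption about typical WikiExtractor output; the parser is an illustrative sketch, not part of the tool:

```python
import re

# Abridged example of the <doc> format (attributes assumed, not verbatim output).
sample = '''<doc id="12" url="https://en.wikipedia.org/wiki?curid=12" title="Anarchism">
Anarchism

Anarchism is a political philosophy...
</doc>
'''

DOC_RE = re.compile(
    r'<doc id="(?P<id>[^"]+)" url="(?P<url>[^"]+)" title="(?P<title>[^"]+)">\n'
    r'(?P<text>.*?)</doc>',
    re.DOTALL,
)

def parse_docs(stream: str):
    """Yield (id, url, title, text) tuples from extracted output."""
    for m in DOC_RE.finditer(stream):
        yield m.group("id"), m.group("url"), m.group("title"), m.group("text").strip()

for doc_id, url, title, text in parse_docs(sample):
    print(doc_id, title)
```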
This version performs template expansion by preprocessing the whole dump and extracting template definitions.
Usage:
```
optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        output directory
  -b n[KM], --bytes n[KM]
                        put specified bytes per output file (default is 1M)
  -B BASE, --base BASE  base URL for the Wikipedia pages
  -c, --compress        compress output files using bzip
  -l, --links           preserve links
  -ns ns1,ns2, --namespaces ns1,ns2
                        accepted namespaces
  -q, --quiet           suppress reporting progress info
  --debug               print debug info
  -s, --sections        preserve sections
  -a, --article         analyze a file containing a single article
  --templates TEMPLATES
                        use or create file containing templates
  --no-templates        do not expand templates
  --threads THREADS     number of threads to use (default 8)
  -v, --version         print program version
```
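The option set above can be mirrored with Python's `argparse`; this is an illustrative sketch of the interface only, not WikiExtractor's actual source:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the option listing above (illustrative, not the tool's own code).
    p = argparse.ArgumentParser(
        description="Extract and clean text from a Wikipedia database dump.")
    p.add_argument("-o", "--output", default=".", help="output directory")
    p.add_argument("-b", "--bytes", default="1M", metavar="n[KM]",
                   help="put specified bytes per output file (default is 1M)")
    p.add_argument("-B", "--base", help="base URL for the Wikipedia pages")
    p.add_argument("-c", "--compress", action="store_true",
                   help="compress output files using bzip")
    p.add_argument("-l", "--links", action="store_true", help="preserve links")
    p.add_argument("-ns", "--namespaces", metavar="ns1,ns2",
                   help="accepted namespaces")
    p.add_argument("-q", "--quiet", action="store_true",
                   help="suppress reporting progress info")
    p.add_argument("--debug", action="store_true", help="print debug info")
    p.add_argument("-s", "--sections", action="store_true", help="preserve sections")
    p.add_argument("-a", "--article", action="store_true",
                   help="analyze a file containing a single article")
    p.add_argument("--templates", help="use or create file containing templates")
    p.add_argument("--no-templates", action="store_true",
                   help="do not expand templates")
    p.add_argument("--threads", type=int, default=8,
                   help="number of threads to use (default 8)")
    return p

args = build_parser().parse_args(["-o", "extracted", "--no-templates", "--threads", "4"])
print(args.output, args.no_templates, args.threads)
```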
Saving templates to a file will speed up extraction the next time, assuming template definitions have not changed.
The --no-templates option significantly speeds up the extractor by avoiding the cost of expanding MediaWiki templates.
- All Wikipedia database dumps
- torrents for use with a BitTorrent client such as uTorrent
- WikiPrep, a Perl tool for preprocessing Wikipedia XML dumps.
- Extracting Text from Wikipedia, another Python tool for extracting text from Wikipedia XML dumps.
- Alternative Parsers, a list of links, descriptions, and status reports of the various alternative MediaWiki parsers.