Wikiprep-ESA
This is an effort to implement Explicit Semantic Analysis (ESA) as described in this paper:
"Wikipedia-based semantic interpretation for natural language processing"
2009, Gabrilovich, E. and Markovitch, S.
You can find this paper at: http://www.jair.org/media/2669/live-2669-4346-jair.pdf
This implementation consists of:
* scanData.py : reads Wikiprep output into a MySQL database.
It creates the "article", "text" and "pagelinks" tables.
* addAnchors.py : adds anchor text to target articles.
* addRedirects.py : adds redirect text to target articles.
The scripts above work with both the legacy Wikiprep formats and the modern format (as in the Zemanta fork).
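As a rough illustration, here is a minimal sketch of reading those tables back out with MySQL-Python.
The table names come from the description above, but the column names and connection parameters are
assumptions made for this example only; check scanData.py for the actual schema.

    # Minimal sketch: querying the tables created by scanData.py via MySQL-Python (Python 2).
    # NOTE: the column names (old_id, old_text) and connection parameters are assumptions
    # for illustration; see scanData.py for the real schema.
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="wiki", passwd="wiki", db="wikidb")
    cursor = conn.cursor()

    # "article" holds article records, "text" holds article content,
    # "pagelinks" holds link information (per the description above).
    cursor.execute("SELECT COUNT(*) FROM article")
    print "articles:", cursor.fetchone()[0]

    cursor.execute("SELECT old_id, old_text FROM text LIMIT 5")
    for page_id, content in cursor.fetchall():
        print page_id, content[:80]

    conn.close()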
Evgeniy Gabrilovich provides a preprocessed dump for the 5 November 2005 snapshot of the English Wikipedia.
It is available at: http://www.cs.technion.ac.il/~gabr/resources/code/wikiprep/wikipedia-051105-preprocessed.tar.bz2
With their current settings, the wikiprep-esa Python scripts can process this dump with --format=gabrilovich or --format=gl.
To process dumps in Zemanta formats, set the format with the --format argument, e.g. --format=zemanta-modern, --format=zm or --format=modern.
The Wikiprep dump format can be one of the following:
1. Gabrilovich [gl, gabrilovich]
2. Zemanta legacy [zl, legacy, zemanta-legacy]
3. Zemanta modern [zm, modern, zemanta-modern]
After reading the preprocessed dump into the database and adding anchors and redirects, you need to use
"esa-lucene" to perform indexing.
* ESAWikipediaIndexer: performs indexing with Lucene by feeding it article content from the database.
* WikipediaNormalSearcher: at this step, you can use this class to perform a search in the Lucene index.
Keep in mind that at this point the implementation is not the same as Gabrilovich et al. (2009),
since cosine normalization is term-based in Gabrilovich et al. but document-length based in Lucene.
Additionally, pruning is not yet applied to the Lucene index as in Gabrilovich et al.
However, the TF.IDF weighting scheme is the same (log-based) and is located in the ESASimilarity class.
* IndexModifier: reads term frequency vectors from the Lucene index and writes cosine-normalized TF.IDF values into
the "tfidf" table in the database. This is done to apply the same normalization method as Gabrilovich et al. (2009);
a sketch of this weighting, normalization and pruning appears after this list.
* IndexPruner [DEPRECATED]: prunes the concept vector of each term with a sliding window.
By default, window_size = 100 and threshold = 0.05, as in Gabrilovich et al. (2009). You can modify these values
in the IndexPruner class.
* ESASearcher: performs searches and computes vectors using the resulting index in the database.
* TestESAVectors: produces and displays a regular feature vector.
* TestGeneralESAVectors: produces and displays a "Second Order Interpretation" vector filtered with the "Concept Generality Filter", as in Gabrilovich et al. (2009).
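The following is a minimal, illustrative sketch (not the project's actual code) of the scheme described above:
log-based TF.IDF weights per concept, cosine normalization computed from each concept's term weights (rather
than Lucene's document-length norm), and sliding-window pruning of a term's concept vector with
window_size = 100 and threshold = 0.05. The data layout and function names are assumptions for illustration.

    # Illustrative sketch of the weighting and pruning described above; this is
    # NOT the code in ESASimilarity / IndexModifier / IndexPruner.
    import math
    from collections import defaultdict

    def build_inverted_index(term_freqs, num_concepts):
        """term_freqs: {concept_id: {term: tf}} -> {term: [(concept_id, weight)]}"""
        df = defaultdict(int)
        for tfs in term_freqs.values():
            for term in tfs:
                df[term] += 1

        inverted = defaultdict(list)
        for cid, tfs in term_freqs.items():
            # log-based TF.IDF: (1 + log tf) * log(N / df)
            weights = dict((t, (1.0 + math.log(tf)) * math.log(float(num_concepts) / df[t]))
                           for t, tf in tfs.items())
            # cosine normalization over the concept's term weights
            norm = math.sqrt(sum(w * w for w in weights.values()))
            for t, w in weights.items():
                inverted[t].append((cid, w / norm))
        return inverted

    def prune(concept_vector, window=100, threshold=0.05):
        """Sliding-window pruning: truncate where the drop across the window
        falls below threshold * the highest weight."""
        v = sorted(concept_vector, key=lambda x: x[1], reverse=True)
        if not v:
            return v
        highest = v[0][1]
        for i in range(len(v) - window + 1):
            if v[i][1] - v[i + window - 1][1] < threshold * highest:
                return v[:i + 1]
        return v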
DEPENDENCIES
The Python scripts use MySQL-Python to access the database.
MySQL-Python: http://sourceforge.net/projects/mysql-python/
The Python scripts also use PyStemmer, which provides Python wrappers for the Snowball stemmers.
You can find further info at: http://snowball.tartarus.org/download.php
"esa-lucene" Java project used for indexing, pruning etc. uses MySQL Connector/J to access database,
Lucene 3.0 for indexing and Trove and these libraries are included in project files.
MySQL Connector/J: http://www.mysql.com/downloads/connector/j/
Lucene 3.0: http://lucene.apache.org
Trove: http://trove4j.sourceforge.net/
USAGE
[STANDARD] python scanLinks.py <hgw.xml file from Wikiprep dump>
(e.g. python scanLinks.py simplewiki/simplewiki-20110620-pages-articles.gum.xml )
This creates the pagelinks table and records incoming and outgoing link counts.
You can provide a list of stop categories for your Wikipedia dump, to help filter irrelevant articles.
A list for the 2005 dump of Gabrilovich et al. is provided in "2005_wiki_stop_categories.txt".
Note that if you are going to use stop category filtering, you should prepare your own, updated file for your Wikipedia dump.
If you want to descend into and include all subtrees of these categories, you can use:
[OPTIONAL] python scanCatHier.py <hgw.xml/gum.xml file from Wikiprep> <output file path> --stopcats=<stop category file>
[The commands below are all STANDARD]
python scanData.py <hgw.xml/gum.xml file from Wikiprep dump> --format=<Wikiprep dump format> [--stopcats=<stop category file>]
(e.g. python scanData.py simplewiki/simplewiki-20110620-pages-articles.gum.xml --format=zm )
python addAnchors.py <anchor_text file from Wikiprep dump> <a writeable folder> --format=<Wikiprep dump format>
(e.g. python addAnchors.py simplewiki/simplewiki-20110620-pages-articles.anchor_text anchor --format=zm)
java -cp esa-lucene.jar edu.wiki.index.ESAWikipediaIndexer <Lucene index folder>
java -cp esa-lucene.jar edu.wiki.modify.IndexModifier <Lucene index folder>
... or, if you have sufficient RAM (15 GB was enough to process the en-20090618 dump), try this instead:
java -cp esa-lucene.jar edu.wiki.modify.MemIndexModifier <Lucene index folder>
IndexModifier sorts TF-IDF vectors with the Unix sort utility, which uses the disk.
MemIndexModifier does the sorting in memory instead.
Then perform feature generation to test:
To generate regular features:
java -cp esa-lucene.jar edu.wiki.demo.TestESAVectors
To generate features using only more general links:
java -cp esa-lucene.jar edu.wiki.demo.TestGeneralESAVectors
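For reference, below is a minimal sketch of the generality criterion behind the "Concept Generality Filter",
assuming (as described in Gabrilovich et al., 2009) that a linked concept counts as more general when it has
roughly an order of magnitude more incoming links than the source concept. This illustrates the idea only and
is not the code in TestGeneralESAVectors.

    # Illustrative sketch only, not the project's code: generality judged by
    # incoming link counts, with "more general" meaning roughly an order of
    # magnitude more inlinks than the source concept.
    import math

    def is_more_general(target_inlinks, source_inlinks):
        return (math.log10(target_inlinks + 1)
                - math.log10(source_inlinks + 1)) >= 1.0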