README for the douban_movie Scrapy project
------------------------------------------------------------------
Before you crawl anything, make sure the required packages are installed.
You can install them by typing the following in your terminal:
>> pip install scrapy faker selenium
If there is no 'data' directory, create it; it will store the JSON files
you crawl from the internet:
>> mkdir data
Then, change into the 'bin' directory, from which the Scrapy project is run:
>> cd bin
You also have to download and extract the PhantomJS package in the 'bin' directory:
>> wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
>> tar -jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
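For reference, the spiders drive PhantomJS through selenium. Here is a
minimal sketch of using the extracted binary, assuming selenium 2/3
(webdriver.PhantomJS was removed in selenium 4) and the extraction path
shown above:

    from selenium import webdriver

    # Path assumes the archive was extracted inside ./bin as above.
    driver = webdriver.PhantomJS(
        executable_path='./phantomjs-2.1.1-linux-x86_64/bin/phantomjs')
    driver.get('https://movie.douban.com/top250')
    print(driver.title)  # quick sanity check that the browser works
    driver.quit()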
Finally, we can crawl!
You can list all the spiders by typing the command:
>> scrapy list
==============================================================
STEP 1: Crawl for movie_item
Just run:
>> scrapy crawl douban-movie
# The spider contains my Douban account and password; I was too lazy to change them ~ please keep them secret...
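If you swap in your own credentials, a Douban login with Scrapy is
typically done through FormRequest. This is only a hedged sketch: the
spider name, URL, and form field names below are assumptions, not the
project's actual code; inspect the real login form before relying on them.

    import scrapy

    class DoubanLoginSketch(scrapy.Spider):
        # Hypothetical name and URL, for illustration only.
        name = 'douban-login-sketch'
        start_urls = ['https://accounts.douban.com/login']

        def parse(self, response):
            # Field names are assumptions; inspect the real form.
            return scrapy.FormRequest.from_response(
                response,
                formdata={'form_email': 'YOUR_EMAIL',
                          'form_password': 'YOUR_PASSWORD'},
                callback=self.after_login)

        def after_login(self, response):
            self.logger.info('logged in; continue crawling from here')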
==============================================================
STEP 2: Crawl for movie_comment
Just run the following commands one by one:
>> scrapy crawl douban-comment20 -a pages=1000
>> scrapy crawl douban-comment40 -a pages=1000
>> scrapy crawl douban-comment60 -a pages=1000
>> scrapy crawl douban-comment80 -a pages=1000
>> scrapy crawl douban-comment100 -a pages=1000
>> scrapy crawl douban-comment120 -a pages=1000
>> scrapy crawl douban-comment140 -a pages=1000
>> scrapy crawl douban-comment160 -a pages=1000
>> scrapy crawl douban-comment180 -a pages=1000
>> scrapy crawl douban-comment200 -a pages=1000
>> scrapy crawl douban-comment220 -a pages=1000
>> scrapy crawl douban-comment225 -a pages=1000
>> scrapy crawl douban-comment250 -a pages=1000
in which we have split the 250 movies into 13 parts to crawl and pass
the number of pages as a parameter (1000 by default), as sketched below.
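For reference, Scrapy passes each '-a name=value' option to the spider's
constructor as a keyword argument. A minimal sketch of how a comment
spider can pick up the pages parameter (the class below is illustrative,
not the project's actual spider):

    import scrapy

    class CommentSpiderSketch(scrapy.Spider):
        # Illustrative name; the real spiders are douban-comment20 ... douban-comment250.
        name = 'douban-comment-sketch'

        def __init__(self, pages=1000, *args, **kwargs):
            super(CommentSpiderSketch, self).__init__(*args, **kwargs)
            # '-a pages=1000' arrives as the string '1000', so cast it.
            self.pages = int(pages)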
(HIGH LEVEL!)
Actually, you can crawl all the douban-comment spiders at once,
but you would also be banned at once! So crawl the douban-comment
spiders two at a time by running:
>> scrapy crawlallcomment
and modifying the spider list in ./douban_movie/commands/crawlallcomment.py (a sketch follows).
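A sketch of what crawlallcomment.py plausibly looks like (the actual
file may differ): a custom Scrapy command that queues several spiders on
the shared crawler process. To crawl two spiders at a time, keep only
two names in the list per run:

    from scrapy.commands import ScrapyCommand

    class Command(ScrapyCommand):
        requires_project = True

        def short_desc(self):
            return 'Run a batch of douban-comment spiders'

        def run(self, args, opts):
            # Keep only two spiders per run; comment the rest out.
            for name in ['douban-comment20', 'douban-comment40']:
                self.crawler_process.crawl(name, pages='1000')
            self.crawler_process.start()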
==============================================================
STEP 3: Crawl for movie_people
Just run the following commands one by one:
>> scrapy crawl douban-people5000
>> scrapy crawl douban-people10000
>> scrapy crawl douban-people15000
>> scrapy crawl douban-people20000
>> scrapy crawl douban-people25000
>> scrapy crawl douban-people30000
>> scrapy crawl douban-people35000
>> scrapy crawl douban-people40000
in which we have split the 35,776 people into 8 parts to crawl.
(HIGH LEVEL!)
Likewise, you can also crawl all the douban-people spiders at once by typing:
>> scrapy crawlallpeople
However, you would be banned without a doubt!
You can modify ./douban_movie/commands/crawlallpeople.py in the same way as crawlallcomment.py above.