zero list #53
Comments
Thanks a lot! It works, but it only scrapes the records from the first page. Any suggestions?

You want everything for all 219 pages?

Yes!

Be aware, the code is going to run for quite a bit of time. I recommend you export the resulting data frame right away as a …
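A minimal sketch of that advice, reusing the URL and selector from the original script below and paging through npage = 1 to 219. The comment above is cut off in the source; write.csv is shown here as one export option, and the assumption that each title span wraps an <a> tag is mine:

library(rvest)

# Build the results URL for a given page number (only npage varies;
# the rest of the URL is taken from the original script)
page_url <- function(i) {
  paste0("https://papers.ssrn.com/sol3/JELJOUR_Results.cfm?npage=", i,
         "&form_name=journalBrowse&journal_id=1475407&Network=no&lim=false")
}

# Loop over all 219 pages, collecting title text and links
all_pages <- lapply(1:219, function(i) {
  page  <- read_html(page_url(i))
  spans <- html_nodes(page, "span.optClickTitle")
  Sys.sleep(1)  # be polite to the server; this is why the run takes a while
  data.frame(title = html_text(spans, trim = TRUE),
             # assumes each span wraps an <a>; adjust if SSRN's markup differs
             link  = html_attr(html_node(spans, "a"), "href"),
             stringsAsFactors = FALSE)
})
papers <- do.call(rbind, all_pages)

# Export right away so a long run is not lost (file name is illustrative)
write.csv(papers, "ssrn_papers.csv", row.names = FALSE)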
Original issue:
I want to download all the links/titles of papers from the web using rvest. I used the following script, but the resulting list has length zero. Any suggestions?
library(rvest)

# Download the HTML and turn it into an XML document with read_html()
Papers <- read_html("https://papers.ssrn.com/sol3/JELJOUR_Results.cfm?npage=1&form_name=journalBrowse&journal_id=1475407&Network=no&lim=false")

# Extract specific nodes with html_nodes()
Titles <- html_nodes(Papers, "span.optClickTitle")
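To go from those nodes to the links and titles the question asks for, a short continuation of the script above; html_text() and html_attr() are standard rvest calls, but the assumption that each optClickTitle span contains an <a> tag is mine. If length(Titles) is 0 (the "zero list" of the title), the CSS selector is the first thing to check against the live page:

# A zero-length result means the selector matched nothing on this page
length(Titles)

# Assuming the selector matches and each span wraps an <a> tag:
TitleText <- html_text(Titles, trim = TRUE)             # paper titles
TitleLink <- html_attr(html_node(Titles, "a"), "href")  # paper links
head(data.frame(title = TitleText, link = TitleLink))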