
Commit

Merge pull request #4 from sts-ryan-holton/dev
Dev
sts-ryan-holton committed Nov 24, 2019
2 parents 60b18ad + 5fb8eb0 commit 9153918
Showing 5 changed files with 18 additions and 14 deletions.
README.md (2 changes: 2 additions & 0 deletions)
@@ -59,6 +59,8 @@ $ git clone git@github.com:sts-ryan-holton/loan-risk-score.git
 $ npm install
 ```
 
+:warning: You will need to **enable** the scraper, and provide a URL to scrape. We've provided an example that you can copy.
+
 ### :wrench: Starting
 
 ``` bash
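The README warning above corresponds to the two settings changed in scraper/scrape-jobs.js later in this commit. As an illustrative fragment only (these assignments live inside the file's existing var declaration block, at the lines the new error messages point to; the London URL is the example from the diff comment), an enabled configuration would read roughly like this:

``` js
// Illustrative fragment, not a complete file: in scraper/scrape-jobs.js these two
// settings sit inside the var block near the top of the file (around lines 20-21).
scrapeOnRun = true                                      // enable the scraper
urlToScrape = 'https://www.indeed.co.uk/jobs?l=London'  // example URL from the diff comment
```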
docs/index.html (16 changes: 8 additions & 8 deletions)

Large diffs are not rendered by default.

package.json (2 changes: 1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 {
   "name": "loan-risk-score",
-  "version": "1.1.0",
+  "version": "1.2.0",
   "description": "Calculate loan affordability using machine learning.",
   "main": "scraper/scrape-jobs.js",
   "keywords": [
scraper/data/salary.json (1 change: 0 additions & 1 deletion)

Large diffs are not rendered by default.

scraper/scrape-jobs.js (11 changes: 7 additions & 4 deletions)
@@ -17,8 +17,8 @@ var searchRadius = (args['searchRadius'] && args['searchRadius'] != '') ? pars
     scrapeInterval = (args['scrapeInterval'] && args['scrapeInterval'] != '') ? parseInt(args['scrapeInterval']) : 1000,
     pauseDelay = (args['thresholdDelay'] && args['thresholdDelay'] != '') ? parseInt(args['thresholdDelay']) : 7500,
     pauseScraping = false,
-    scrapeOnRun = true,
-    urlToScrape = 'https://www.indeed.co.uk/jobs?l=Bridgend',
+    scrapeOnRun = false,
+    urlToScrape = '', // example: https://www.indeed.co.uk/jobs?l=London
     jobCard = '.jobsearch-SerpJobCard',
     jobCardSalary = '.salarySnippet .salaryText'
 
@@ -165,7 +165,7 @@ async function scrapeWebsite(radius, pageNo) {
 
 
 // run the scraper on loop and increment details
-if (scrapeOnRun) {
+if (scrapeOnRun && urlToScrape != '') {
   setInterval(() => {
     if (!pauseScraping) {
       scrapeWebsite(searchRadius, pageNumber)
@@ -175,6 +175,9 @@ if (scrapeOnRun) {
 
 
 // run the scraper on start
-if (scrapeOnRun) {
+if (scrapeOnRun && urlToScrape != '') {
   scrapeWebsite(searchRadius, pageNumber)
 }
+
+if (urlToScrape == '') console.log('=== ERROR: You\'ll need to add a URL in: "scraper/scrape-jobs.js, line: 21" to scrape. ===')
+if (!scrapeOnRun) console.log('=== ERROR: You\'ll need to enable the scraper in: "scraper/scrape-jobs.js, line: 20". ===')
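For context, here is a minimal self-contained sketch of how the merged guard behaves: scraping only starts when it is both enabled and given a URL, otherwise the matching error is printed. The conditions and error strings are taken verbatim from the diff above; the variable values and the stubbed scrapeWebsite() are placeholders for illustration, not the project's real implementation.

``` js
// Standalone sketch of the post-merge guard logic (run with: node sketch.js)
var scrapeOnRun = false,   // new default introduced by this PR
    urlToScrape = '',      // new default introduced by this PR
    pauseScraping = false,
    scrapeInterval = 1000,
    searchRadius = 0,
    pageNumber = 0

// placeholder for the real scraper function in scraper/scrape-jobs.js
function scrapeWebsite(radius, pageNo) {
  console.log('scraping radius ' + radius + ', page ' + pageNo)
}

// run the scraper on loop
if (scrapeOnRun && urlToScrape != '') {
  setInterval(() => {
    if (!pauseScraping) {
      scrapeWebsite(searchRadius, pageNumber)
    }
  }, scrapeInterval)
}

// run the scraper on start
if (scrapeOnRun && urlToScrape != '') {
  scrapeWebsite(searchRadius, pageNumber)
}

// with the new defaults, both errors below fire until the config is edited
if (urlToScrape == '') console.log('=== ERROR: You\'ll need to add a URL in: "scraper/scrape-jobs.js, line: 21" to scrape. ===')
if (!scrapeOnRun) console.log('=== ERROR: You\'ll need to enable the scraper in: "scraper/scrape-jobs.js, line: 20". ===')
```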
