ncover21/proxy-pools


Python Proxy Scraper

A proxy-scraping class that maintains a pool of usable proxies collected from four sources.

Required Libraries

  • beautifulsoup4
  • requests
  • termcolor
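
For convenience, the three dependencies can be pinned in a requirements file (versions omitted, since the source does not specify any):

```
beautifulsoup4
requests
termcolor
```

Install them with `pip install -r requirements.txt`.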

Usage

Initialize a Pool

pool = ProxyPools()
pool.start()

Parameters when initializing a new pool

| Name | Type | Usage |
| --- | --- | --- |
| intervalTime | int | Time between proxy scrapes, in seconds |
| maxPoolSize | int | Maximum number of proxies in the pool at any given time |
| timeout | int/float | Maximum timeout, in seconds, when checking whether a proxy is valid |
| debug | bool | Print debug information |
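
The `intervalTime` parameter implies a background loop that re-scrapes sources periodically until the pool is killed. One plausible way to implement such a loop is with a worker thread and a stop event (a sketch only, not the library's actual code; `scrape_loop` and the placeholder scraper are invented names):

```python
import threading
import time

def scrape_loop(stop_event, interval, scrape, results):
    # Repeatedly call scrape() every `interval` seconds until told to stop.
    while not stop_event.is_set():
        results.extend(scrape())
        # wait() doubles as the inter-scrape sleep and a responsive kill switch
        stop_event.wait(interval)

stop = threading.Event()
found = []
worker = threading.Thread(
    target=scrape_loop,
    args=(stop, 0.05, lambda: ["203.0.113.1:8080"], found),  # placeholder scraper
    daemon=True,
)
worker.start()      # analogous to pool.start()
time.sleep(0.2)
stop.set()          # analogous to pool.kill()
worker.join()
```

Using `Event.wait(interval)` instead of `time.sleep(interval)` lets the loop shut down immediately when the pool is killed, rather than after the current interval finishes.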

Example

pool = ProxyPools(timeout=5, maxPoolSize=10)
# Initialize a pool with a timeout of 5 seconds
# and a maximum of 10 proxies in the pool

Stopping a Pool

pool.kill()

Useful Functions

pool.getOne()  # get the proxy with the lowest response time
pool.getList() # get the entire list of proxies
pool.getSize() # get the current size of the pool
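
Since `getOne()` returns the lowest-response-time proxy, it can be fed straight to an HTTP client. The sketch below shows both ideas, assuming proxies are tracked as `(address, response_time)` pairs and that addresses are plain `"ip:port"` strings (neither format is confirmed by the source):

```python
# Hypothetical pool contents: (address, measured response time in seconds)
pool = [
    ("203.0.113.1:8080", 1.42),
    ("198.51.100.7:3128", 0.31),
    ("192.0.2.9:8000", 0.87),
]

# getOne()-style selection: pick the proxy with the lowest response time
best, _ = min(pool, key=lambda entry: entry[1])
print(best)  # 198.51.100.7:3128

# The address slots directly into requests' proxies mapping
proxies = {
    "http": f"http://{best}",
    "https": f"http://{best}",
}
# requests.get("https://example.com", proxies=proxies, timeout=5)
```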
