Usage
usage: photon.py [options]
-u --url root url
-l --level levels to crawl
-t --threads number of threads
-d --delay delay between requests
-c --cookie cookie
-r --regex regex pattern
-s --seeds additional seed urls
-e --export export formatted result
-o --output specify output directory
-v --verbose verbose output
--keys extract secret keys
--clone clone the website locally
--exclude exclude urls by regex
--stdout print a variable to stdout
--timeout http requests timeout
--ninja ninja mode
--update update photon
--headers supply http headers
--dns enumerate subdomains & dns data
--only-urls only extract urls
--wayback use URLs from archive.org as seeds
--user-agent specify user-agent(s)
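Options can be combined freely. As an illustrative example, the following run crawls three levels deep with ten threads, pulls additional seeds from archive.org, and exports the results as JSON into a custom directory:
python photon.py -u "http://example.com" -l 3 -t 10 --wayback --export=json -o "results"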
Option: -u or --url
Crawl a single website.
python photon.py -u "http://example.com"
Option: --clone
The crawled webpages can be saved locally for later use with the --clone switch, as follows:
python photon.py -u "http://example.com" --clone
Option: -l or --level
| Default: 2
Using this option, you can set a recursion limit for crawling. For example, a depth of 2
means Photon will find all the URLs on the homepage and seed URLs (level 1) and then crawl those URLs as well (level 2).
python photon.py -u "http://example.com" -l 3
Option: -t or --threads
| Default: 2
It is possible to make concurrent requests to the target, and the -t option can be used to specify how many to make at once.
While threads can speed up crawling, they may also trigger security mechanisms, and a high number of threads can bring down small websites.
python photon.py -u "http://example.com" -t 10
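If you want the speed of multiple threads while still pacing the crawl, you can combine -t with the -d delay option described next, for instance:
python photon.py -u "http://example.com" -t 10 -d 1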
Option: -d or --delay
| Default: 0
It is possible to specify a number of seconds to wait between each HTTP(S) request. The value must be an integer; for instance, 1 means a delay of one second.
python photon.py -u "http://example.com" -d 2
Option: --timeout
| Default: 5
It is possible to specify a number of seconds to wait before considering the HTTP(S) request timed out.
python photon.py -u "http://example.com" --timeout=4
Option: -c or --cookie
| Default: no cookie header is sent
This option lets you add a Cookie header to each HTTP request made by Photon in non-ninja mode.
It can be used when certain parts of the target website require cookie-based authentication.
python photon.py -u "http://example.com" -c "PHPSESSID=u5423d78fqbaju9a0qke25ca87"
Option: -o or --output
| Default: domain name of target
Photon saves the results in a directory named after the domain name of the target, but you can override this behavior with this option.
python photon.py -u "http://example.com" -o "mydir"
Option: -v or --verbose
In verbose mode, all the pages, keys, files, etc. will be printed as they are found.
python photon.py -u "http://example.com" -v
Option: --exclude
URLs matching the specified regex will not be crawled or shown in the results at all.
python photon.py -u "http://example.com" --exclude="/blog/20(17|18)"
Option: -s or --seeds
You can add custom seed URL(s) with this option, separated by commas.
python photon.py -u "http://example.com" --seeds "http://example.com/blog/2018,http://example.com/portals.html"
Option: --user-agent
| Default: entries from user-agents.txt
You can use your own user agent(s) with this option, separated by commas.
python photon.py -u "http://example.com" --user-agent "curl/7.35.0,Wget/1.15 (linux-gnu)"
This option exists only to let you use specific user agent(s) without modifying the default user-agents.txt file.
Option: -r or --regex
It is possible to extract strings during crawling by specifying a regex pattern with this option.
python photon.py -u "http://example.com" --regex "\d{10}"
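The \d{10} pattern above matches 10-digit numbers such as phone numbers. As another illustration, a pattern along these lines would extract email-like strings instead:
python photon.py -u "http://example.com" --regex "[\w.+-]+@[\w.-]+"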
Option: -e or --export
With the -e option you can specify an output format in which the data will be saved.
python photon.py -u "http://example.com" --export=json
Currently supported formats are:
- json
- csv
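For example, to save the results as CSV instead:
python photon.py -u "http://example.com" --export=csv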
Option: --wayback
This option makes it possible to fetch archived URLs from archive.org and use them as seeds.
Only the URLs crawled by archive.org within the current year are fetched, to make sure they aren't dead.
python photon.py -u "http://example.com" --wayback
Option: --only-urls
This option skips the extraction of data such as intel and JavaScript files. It should come in handy when your goal is only to crawl the target.
python photon.py -u "http://example.com" --only-urls
Option: --update
If this option is enabled, Photon will check for updates. If a newer version is available, Photon will download and merge the updates into the current directory without overwriting other files.
python photon.py --update
Option: --keys
This switch tells Photon to look for high-entropy strings, which may be authentication tokens, API keys, or hashes.
python photon.py -u http://example.com --keys
Option: --stdout
You can write a variable of your choice to stdout for piping into other programs.
The following variables are supported:
files, intel, robots, custom, failed, internal, scripts, external, fuzzable, endpoints, keys
python photon.py -u http://example.com --stdout=custom | resolver.py
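Since the selected variable is written to stdout, you can also redirect it to a file, for instance to save the extracted keys:
python photon.py -u http://example.com --keys --stdout=keys > keys.txt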
Option: --ninja
This option enables Ninja mode. In this mode, Photon uses third-party websites to make requests on your behalf.
Contrary to the name, it doesn't stop you from making requests to the target.
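Like the other switches, it is enabled directly on the command line:
python photon.py -u "http://example.com" --ninja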
Option: --dns
Saves subdomains in 'subdomains.txt' and also generates an image displaying the target domain's DNS data.
python photon.py -u http://example.com --dns