Installing SkipFish on Ubuntu

skipfish is a web security scanner developed by Google; it is powerful and easy to use.

It mainly depends on two libraries: libssl and libidn.

In a terminal, first search for the available libssl packages:

apt-cache search libssl | grep ssl

then install them in turn:

sudo apt-get install libssl0.9.8
sudo apt-get install libssl-dev
sudo apt-get install openssl

Similarly, first search:

apt-cache search libidn | grep libidn

then install in turn:

sudo apt-get install libidn11-dev
sudo apt-get install libidn11
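On newer Ubuntu releases the versioned libssl0.9.8 package has been superseded, so the command above may fail. A minimal alternative sketch, assuming the current development-package names (libssl-dev and libidn11-dev; verify with apt-cache search if your release differs):

sudo apt-get update
sudo apt-get install -y libssl-dev libidn11-dev    # headers needed to compile skipfish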

Install skipfish:

wget http://skipfish.googlecode.com/files/skipfish-1.69b.tgz
tar zxvf skipfish-1.69b.tgz
mv skipfish-1.69b skipfish
cd skipfish
make    # compilation produces the skipfish executable in this directory
cp dictionaries/complete.wl skipfish.wl    # copy one of the bundled dictionaries to use for scanning

Run a scan:

./skipfish -o output_folder http://www.example.com

Here output_folder is the output directory; when the scan finishes, open index.html inside it to view the results.
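A quick sanity check after make (a sketch, assuming skipfish 1.x behavior, where running the binary without the required arguments prints the usage summary and exits):

./skipfish    # should print the usage/help text if the build succeeded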

Running the executable:

# cp dictionaries/<any wordlist> skipfish.wl
# ./skipfish -o {output_directory} {url}

The results are written as an HTML report in the specified output directory.
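To view the report, open index.html from the output directory in a browser. A sketch (output_folder stands for whatever directory was passed to -o; the http.server --directory flag needs Python 3.7+):

xdg-open output_folder/index.html                       # open in the default browser
python3 -m http.server --directory output_folder 8000   # or serve it over HTTP

Because the report pages rely on JavaScript, serving them over HTTP can be more reliable than opening the file directly.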

For reference, the full help output of a later release follows. Note the version jump: the build steps above use 1.69b, while the options below are from 2.10b, which makes the -W wordlist argument mandatory.

skipfish web application scanner - version 2.10b
Usage: /home/admin/workspace/skipfish/skipfish [ options … ] -W wordlist -o output_dir start_url [ start_url2 … ]

Authentication and access options:

  -A user:pass      - use specified HTTP authentication credentials
  -F host=IP        - pretend that 'host' resolves to 'IP'
  -C name=val       - append a custom cookie to all requests
  -H name=val       - append a custom HTTP header to all requests
  -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
  -N                - do not accept any new cookies
  --auth-form url   - form authentication URL
  --auth-user user  - form authentication user
  --auth-pass pass  - form authentication password
  --auth-verify-url - URL for in-session detection

Crawl scope options:

  -d max_depth      - maximum crawl tree depth (16)
  -c max_child      - maximum children to index per node (512)
  -x max_desc       - maximum descendants to index per branch (8192)
  -r r_limit        - max total number of requests to send (100000000)
  -p crawl%         - node and link crawl probability (100%)
  -q hex            - repeat probabilistic scan with given seed
  -I string         - only follow URLs matching 'string'
  -X string         - exclude URLs matching 'string'
  -K string         - do not fuzz parameters named 'string'
  -D domain         - crawl cross-site links to another domain
  -B domain         - trust, but do not crawl, another domain
  -Z                - do not descend into 5xx locations
  -O                - do not submit any forms
  -P                - do not parse HTML, etc, to find new links

Reporting options:

  -o dir            - write output to specified directory (required)
  -M                - log warnings about mixed content / non-SSL passwords
  -E                - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
  -U                - log all external URLs and e-mails seen
  -Q                - completely suppress duplicate nodes in reports
  -u                - be quiet, disable realtime progress stats
  -v                - enable runtime logging (to stderr)

Dictionary management options:

  -W wordlist       - use a specified read-write wordlist (required)
  -S wordlist       - load a supplemental read-only wordlist
  -L                - do not auto-learn new keywords for the site
  -Y                - do not fuzz extensions in directory brute-force
  -R age            - purge words hit more than 'age' scans ago
  -T name=val       - add new form auto-fill rule
  -G max_guess      - maximum number of keyword guesses to keep (256)
  -z sigfile        - load signatures from this file

Performance settings:

  -g max_conn       - max simultaneous TCP connections, global (40)
  -m host_conn      - max simultaneous connections, per target IP (10)
  -f max_fail       - max number of consecutive HTTP errors (100)
  -t req_tmout      - total request response timeout (20 s)
  -w rw_tmout       - individual network I/O timeout (10 s)
  -i idle_tmout     - timeout on idle HTTP connections (10 s)
  -s s_limit        - response size limit (400000 B)
  -e                - do not keep binary responses for reporting

Other settings:

  -l max_req        - max requests per second (0.000000)
  -k duration       - stop scanning after the given duration h:m:s
  --config file     - load the specified configuration file

Send comments and complaints to <heinenn@google.com>.
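Putting the common options together, a typical 2.10b invocation might look like the following sketch. The credentials, cookie value, and target URL are placeholders; skipfish.wl is the read-write copy of the dictionary made earlier:

# -W: read-write wordlist (required in 2.x); -A: HTTP auth credentials;
# -C: custom cookie; -b f: Firefox-like headers; -l 50: cap at 50 requests/s
./skipfish -W skipfish.wl -A admin:secret -C "session=abc123" \
    -b f -l 50 -o output_folder http://www.example.com/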