Testing: Review Webserver Metafiles for Information Leakage (OTG-INFO-003)
Summary
This section describes how to test the robots.txt file for information leakage of the web application's directory/folder path(s). Furthermore, the list of directories that are to be avoided by spiders/robots/crawlers can also be created as a dependency for OWASP-IG-009 [1].
Test Objectives
1. Information Leakage of the web application's directory/folder path(s).
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers.
How to Test
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root directory [1].
robots.txt in webroot
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...
The User-Agent directive refers to a specific web spider/robot/crawler. For example, "User-Agent: Googlebot" refers to the spider from Google, while "User-Agent: bingbot" [2] refers to the crawler from Microsoft/Yahoo!. "User-Agent: *" in the example above applies to all web spiders/robots/crawlers [2], as quoted below:
User-agent: *
The Disallow directive specifies which resources are off-limits to spiders/robots/crawlers. In the example above, directories such as the following are prohibited:
...
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...
Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file, such as those from Social Networks [3], to ensure that shared links are still valid. Hence, robots.txt should not be considered a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.
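The Disallow rules above can also be evaluated programmatically: Python's standard library ships urllib.robotparser, which applies the Robots Exclusion Protocol the same way a well-behaved crawler would. A minimal sketch (the host name and paths are illustrative only):

```python
# Evaluate robots.txt Disallow rules with Python's standard library.
# A compliant crawler applies this check before fetching each URL.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /search
Disallow: /groups
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Disallowed path: a compliant crawler must skip it.
print(parser.can_fetch("*", "http://www.example.com/search"))      # False
# Unlisted path: fetching is permitted by default.
print(parser.can_fetch("*", "http://www.example.com/humans.txt"))  # True
```

Note that this only models compliant behavior; as described above, a crawler is free to ignore these rules entirely.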
<META> Tag
<META> tags are located within the HEAD section of each HTML document and should be consistent across a web site, since a robot/spider/crawler may start from a document link other than webroot, i.e. a "deep link" [4].
If there is no "<META NAME="ROBOTS" ... >" entry, then the "Robots Exclusion Protocol" defaults to "INDEX,FOLLOW" respectively. Therefore, the other two valid entries defined by the "Robots Exclusion Protocol" are prefixed with "NO...", i.e. "NOINDEX" and "NOFOLLOW".
Web spiders/robots/crawlers can intentionally ignore the "<META NAME="ROBOTS"" tag as the robots.txt file convention is preferred. Hence, <META> Tags should not be considered the primary mechanism, rather a complementary control to robots.txt.
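Each retrieved page can be grepped for the META ROBOTS tag, falling back to the protocol's "INDEX,FOLLOW" default when the tag is absent. A rough sketch; the regex is deliberately simplified and assumes the NAME/CONTENT attribute order shown above, which real pages may vary:

```python
import re

# Simplified pattern: matches <META NAME="ROBOTS" CONTENT="..."> in the
# attribute order shown above; real pages may reorder or re-quote attributes.
META_ROBOTS = re.compile(
    r'<meta\s+name=["\']robots["\']\s+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def robots_meta(html):
    """Return the META ROBOTS value, or the protocol default if absent."""
    match = META_ROBOTS.search(html)
    return match.group(1).upper() if match else "INDEX,FOLLOW"

print(robots_meta('<head><META NAME="ROBOTS" CONTENT="NOINDEX,NOFOLLOW"></head>'))
# NOINDEX,NOFOLLOW
print(robots_meta("<head><title>No robots tag</title></head>"))
# INDEX,FOLLOW
```

For production testing an HTML parser is more robust than a regex, but the fallback-to-default logic is the part that matters for this test.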
Black Box testing and example
robots.txt in webroot - with "wget" or "curl"
The robots.txt file is retrieved from the web root directory of the web server.
For example, to retrieve the robots.txt from www.google.com using "wget" or "curl":
cmlh$ wget http://www.google.com/robots.txt
--2013-08-11 14:40:36--  http://www.google.com/robots.txt
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...
Connecting to www.google.com|74.125.237.17|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: 'robots.txt.1'

    [ <=> ] 7,074       --.-K/s   in 0s

2013-08-11 14:40:37 (59.7 MB/s) - 'robots.txt' saved [7074]

cmlh$ head -n5 robots.txt
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
cmlh$
cmlh$ curl -O http://www.google.com/robots.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312

cmlh$ head -n5 robots.txt
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
cmlh$
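The same retrieval can be scripted, which makes it easy to repeat the check across many hosts. A sketch using Python's urllib; the host name is illustrative, and the network call is left commented out so the sketch stays offline:

```python
import urllib.request

def fetch_robots(host, timeout=10):
    """Download robots.txt from a host's web root, like the wget/curl calls above."""
    url = "http://%s/robots.txt" % host
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read().decode("utf-8", "replace")

def first_lines(text, count=5):
    """Equivalent of `head -n5` on the downloaded file."""
    return "\n".join(text.splitlines()[:count])

# Example (requires network access):
# print(first_lines(fetch_robots("www.google.com")))
```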
robots.txt in webroot - with rockspider
"rockspider"[5] automates the creation of the initial scope for Spiders/Robots/Crawlers of files and directories/folders of a web site.
For example, to create the initial scope based on the Allow: directive from www.google.com using "rockspider"[6]:
cmlh$ ./rockspider.pl -www www.google.com

"Rockspider" Alpha v0.1_2
Copyright 2013 Christian Heinrich
Licensed under the Apache License, Version 2.0

1. Downloading http://www.google.com/robots.txt
2. "robots.txt" saved as "www.google.com-robots.txt"
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080
	/catalogs/about sent
	/catalogs/p? sent
	/news/directory sent
	...
4. Done.

cmlh$
<META> Tags - with Burp
Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for "<META NAME="ROBOTS"" within each web page is undertaken and the result compared to the robots.txt file in webroot.
For example, the robots.txt file from facebook.com has a "Disallow: /ac.php" entry [7]; the /ac.php page can then be searched for a "<META NAME="ROBOTS"" tag and the result compared against that entry.
Such a result might be considered a fail, since "INDEX,FOLLOW" is the default <META> Tag specified by the "Robots Exclusion Protocol", yet "Disallow: /ac.php" is listed in robots.txt.
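The comparison itself reduces to a set check: every path disallowed in robots.txt whose page does not also carry a "NOINDEX" META tag is a candidate finding. A sketch assuming the META values have already been collected per path; the helper name and sample data are hypothetical:

```python
def meta_robots_gaps(disallowed_paths, meta_by_path):
    """Paths disallowed in robots.txt whose pages still default to indexing.

    meta_by_path maps a path to its META ROBOTS value; a missing entry
    means the page has no tag, so the "INDEX,FOLLOW" default applies.
    """
    return [
        path for path in disallowed_paths
        if "NOINDEX" not in meta_by_path.get(path, "INDEX,FOLLOW").upper()
    ]

# Hypothetical sample: /ac.php is disallowed but carries no NOINDEX tag.
findings = meta_robots_gaps(
    ["/ac.php", "/ajax/"],
    {"/ajax/": "NOINDEX,NOFOLLOW"},
)
print(findings)  # ['/ac.php']
```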
Analyze robots.txt using Google Webmaster Tools
If you are the owner of the web site you want to analyze, Google provides an "Analyze robots.txt" function as part of its "Google Webmaster Tools" (https://www.google.com/webmasters/tools), which can assist with this testing [4]. The procedure is as follows:
1. Sign into Google Webmaster Tools with your Google Account.
2. On the Dashboard, enter the URL of the site you want to analyze.
3. Choose between the available methods and follow the on-screen instructions.
Gray Box testing and example
The process is the same as Black Box testing above.
Tools
- Browser (View Source function)
- curl
- wget
- rockspider[8]
References
Whitepapers
- [1] "The Web Robots Pages" - http://www.robotstxt.org/
- [2] "Block and Remove Pages Using a robots.txt File" - https://support.google.com/webmasters/answer/156449
- [3] "(ISC)2 Blog: The Attack of the Spiders from the Clouds" - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html
- [4] "Telstra customer database exposed" - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html