[WeChall] Training: WWW-Robots (HTTP, Training)

Question:


WWW-Robots
In this little training challenge, you are going to learn about the Robots_exclusion_standard.
The robots.txt file is used by web crawlers to check if they are allowed to crawl and index your website or only parts of it.
Sometimes these files reveal the directory structure instead of protecting the content from being crawled.

Enjoy!
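To see how a well-behaved crawler actually uses these rules, here is a minimal sketch with Python's standard `urllib.robotparser`. The rules and paths below are a hypothetical example, not WeChall's actual file:

```python
# Sketch of how a compliant crawler consults robots.txt rules,
# using the standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration.
rules = [
    "User-agent: *",
    "Disallow: /secret/",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "/secret/page.html"))  # False: crawling disallowed
print(rp.can_fetch("*", "/public/page.html"))  # True: no rule blocks it
```

The key point of the challenge is that this is purely advisory: nothing stops a human (or an impolite client) from requesting the disallowed path directly.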

Solution:

   Go to www.WeChall.net/robots.txt

   and you will see this:

 

User-agent: *
Disallow: /challenge/training/www/robots/T0PS3CR

Copy the disallowed path, append it to www.WeChall.net, and visit that URL. SUCCESS!
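The manual step above can be sketched in code: given the text of a robots.txt file, list every path its Disallow directives name. The sample mirrors the rule seen on WeChall, with the path truncated exactly as shown in this writeup:

```python
# Minimal sketch: extract the path from every Disallow directive
# in a robots.txt file's text.
def disallowed_paths(robots_txt: str) -> list[str]:
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if line.lower().startswith("disallow:"):
            paths.append(line.split(":", 1)[1].strip())
    return paths

sample = """\
User-agent: *
Disallow: /challenge/training/www/robots/T0PS3CR
"""
print(disallowed_paths(sample))  # → ['/challenge/training/www/robots/T0PS3CR']
```

To fetch the live file rather than a sample string, something like `urllib.request.urlopen("https://www.wechall.net/robots.txt").read().decode()` would supply the input.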

