Python Web Crawling for Beginners, Part 1: Fetching Page Source and Sending POST and GET Requests


1. Importing the urllib and urllib2 packages

# urllib provides urlencode() for building GET/POST data
import urllib
# urllib2 builds and sends the actual requests
import urllib2

2. a) Fetching a website's page source

# Open the URL to get a response object; read() returns the page source
response = urllib2.urlopen("http://www.baidu.com")
print response.read()

b) Written a bit more formally, it looks like this:

# Build a Request instance first, then open it
request = urllib2.Request("http://www.baidu.com")
response = urllib2.urlopen(request)
print response.read()
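In practice the fetch can fail (unreachable host, 404, 500, ...), so it is worth wrapping urlopen in error handling. Below is a minimal sketch, assuming Python 2 and urllib2; the timeout value is only illustrative.

# A minimal sketch: the same fetch with basic error handling,
# so a bad URL or an HTTP error does not crash the script.
import urllib2

try:
    request = urllib2.Request("http://www.baidu.com")
    response = urllib2.urlopen(request, timeout=10)  # timeout in seconds (illustrative)
    print response.getcode()   # HTTP status code, e.g. 200
    print response.read()
except urllib2.HTTPError as e:
    # the server answered, but with an error status (404, 500, ...)
    print "HTTP error:", e.code
except urllib2.URLError as e:
    # the server could not be reached at all (DNS failure, refused connection, ...)
    print "URL error:", e.reason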

3. Constructing a POST request

# POST request: the form data is sent in the request body
values = {"username": "geek", "password": "**********"}
# or, equivalently, build the dictionary key by key:
# values = {}
# values["username"] = "geek"
# values["password"] = "**********"
# URL-encode the dictionary into a query-string-style string
data = urllib.urlencode(values)
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
# passing data as the second argument makes urllib2 send a POST request
request = urllib2.Request(url, data)
response = urllib2.urlopen(request)
print response.read()
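Many sites reject requests that do not look like they come from a browser, so it is common to also send headers. A minimal sketch, assuming Python 2/urllib2; the User-Agent string is illustrative and the credentials are placeholders, not a real account.

# Passing a headers dict as the third argument of urllib2.Request
# attaches custom HTTP headers (e.g. a browser-like User-Agent) to the POST.
import urllib
import urllib2

values = {"username": "geek", "password": "**********"}
data = urllib.urlencode(values)
headers = {"User-Agent": "Mozilla/5.0"}  # illustrative value
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
request = urllib2.Request(url, data, headers)
response = urllib2.urlopen(request)
print response.read()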

4. Constructing a GET request

# GET request: the parameters are appended to the URL itself
values = {"username": "geek", "password": "**********"}
data = urllib.urlencode(values)
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
# this URL already carries a query string, so append the extra parameters
# with "&" (use "?" only when the URL has no query string yet)
geturl = url + "&" + data
response = urllib2.urlopen(geturl)
print response.read()
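For reference, urllib and urllib2 were merged in Python 3: urlencode moved to urllib.parse and urlopen to urllib.request. A rough equivalent of the GET example above, as a sketch only:

# The same GET request under Python 3, where urllib2 no longer exists.
import urllib.parse
import urllib.request

values = {"username": "geek", "password": "**********"}
data = urllib.parse.urlencode(values)
url = "https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn"
response = urllib.request.urlopen(url + "&" + data)
print(response.read())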
