HTTP API testing with Python's requests module (also usable for crawling): example code

Source: Internet · Editor: 程序博客网 · Date: 2024/05/21 17:56
The example code is as follows:
# -*- coding: utf-8 -*-
import requests
import json

url = 'http://localhost:3000/'
headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
           'Accept-Encoding': 'gzip, deflate, compress',
           'Accept-Language': 'en-us;q=0.5,en;q=0.3',
           'Cache-Control': 'max-age=0',
           'Connection': 'keep-alive',
           'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0'}

# Use a session object so cookies persist automatically across requests
s = requests.session()
s.headers.update(headers)

# Visit the home page to obtain the CSRF token
r = s.get(url=url)
print(r.cookies)
print(r.headers['Content-Type'])

# Read the cookies returned by the server
cookies = tuple(r.cookies)
# Extract the CSRF token from the first cookie
_csrf_token = cookies[0].value
print(1111, _csrf_token)

# Add the token to the request headers so the login request passes CSRF validation
headers['X-CSRF-TOKEN'] = _csrf_token
# Declare the request body type as JSON
headers['Content-Type'] = 'application/json'
s.headers.update(headers)

# Log in, so that subsequent requests on this session are authenticated
x = s.post(url='%sapi/logon' % url,
           data=json.dumps({'identity': 'wanggangshan', 'auth_code': 'shanxing123'}))
print(2222, x.url, x.status_code, x.text)
print(x.headers['Content-Type'], x.headers)

# GET request: fetch the user's order list
y = s.get(url='%sapi/my/order/list' % url)
print(3333, y.url, y.status_code)
print(y.headers['Content-Type'], y.headers)

# POST request: update the user's profile
y = s.post(url='%sapi/my/info/base_info' % url,
           data=json.dumps({'real_name': "王刚山", 'grade_code': "02-2014", 'grade_type': 1,
                            'subject_classify': 0, 'qq': "58885855850"}))
print(4444, y.url, y.status_code, y.text)
print(y.headers['Content-Type'], y.headers)
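As a side note, instead of calling json.dumps() and setting the Content-Type header by hand, requests can serialize the body itself via the json= keyword. The sketch below (a minimal illustration, not part of the original script; the endpoint URL is only the placeholder from the example above) uses requests.Request and prepare() to inspect what would be sent, so no server is needed:

```python
import requests

# Build the request without sending it; prepare() shows the final headers/body.
req = requests.Request('POST', 'http://localhost:3000/api/logon',
                       json={'identity': 'wanggangshan', 'auth_code': 'shanxing123'})
prepared = req.prepare()

# With json=, requests sets Content-Type to application/json automatically.
print(prepared.headers['Content-Type'])
print(prepared.body)
```

Passing json= on s.post() works the same way and keeps the session's other headers intact.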