Python 3 Web Scraping Notes, Part 1

Source: Internet · Editor: 程序博客网 · Date: 2024/05/29 09:15

1. Extracting the number inside '[ ]'. For example, when scraping the sister-image pages on jandan.net (煎蛋网), you need to strip the '[ ]' and keep only the number inside, which is the page count. This uses the sub method of Python's re module.

span_tag = sou.find_all('span', attrs={'class': 'current-comment-page'})[0].text
max_page = int(re.sub(r'\[|\]', '', span_tag))
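As a self-contained illustration of the bracket-stripping step, here is a minimal runnable sketch; the sample text "[371]" is a made-up stand-in for what the span actually contains on the site:

```python
import re

# Hypothetical page-number text as it might appear inside the span tag.
span_text = "[371]"

# re.sub replaces every '[' or ']' with the empty string,
# leaving only the digits, which int() can then parse.
max_page = int(re.sub(r"\[|\]", "", span_text))
print(max_page)  # → 371
```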

The image URLs on jandan.net can also be obtained with a regular expression (a bit ugly, but it works).

pic_orgin = sou.find_all('a', {'href': re.compile(r'//wx\d{1,2}\.sinaimg\.cn/large/.*?\.jpg')})
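A quick standalone check of that pattern; the hrefs below are made-up examples of the kind of Sina image URL the scraper is looking for:

```python
import re

# The pattern from the note: a Sina image host wx1..wx99, a /large/ path, a .jpg file.
pattern = re.compile(r"//wx\d{1,2}\.sinaimg\.cn/large/.*?\.jpg")

good = "//wx3.sinaimg.cn/large/abc123.jpg"  # hypothetical matching href
bad = "//example.com/thumb/abc123.png"      # does not match the pattern

print(bool(pattern.search(good)))  # → True
print(bool(pattern.search(bad)))   # → False
```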

2. Generic request code:

import random

user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",  # note: a comma was missing here, which silently concatenated this string with the next
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
]
UA = random.choice(user_agent_list)
header = {
    'User-Agent': UA,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    # 'Host': 'jandan.net',  # the domain of the site you want to visit
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Connection': 'keep-alive',
}
url = 'xxx'  # xxx: the URL you want to visit
3. Extracting the digits from text containing Chinese characters, again using re.sub:

page = soup.select('body > div.wrapper > div.photo > div.wrapper.clearfix.imgtitle > div.pages > ul > li > a')[0].text
max_page = re.sub(r'\D', '', page)
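The same idea as a self-contained sketch; "共12页" ("12 pages in total") is a made-up example of the kind of mixed text such a selector might return:

```python
import re

# Hypothetical pagination label mixing Chinese characters and digits.
page_text = "共12页"

# \D matches any non-digit character; deleting all of them leaves only the digits.
max_page = int(re.sub(r"\D", "", page_text))
print(max_page)  # → 12
```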
4. When a downloaded image is only 1 KB, or opens with a "damaged or cannot be opened" error, the following code can help:

img = pic.attrs['src']
try:
    # r = requests.get(img, headers=header)
    s = requests.Session()
    s.headers['User-Agent'] = UA
    r = s.get(img)
except requests.RequestException:
    print('sorry! Requesting the picture URL failed.')
else:
    file_name = img.split('/')[-1]
    with open(file_name, 'wb') as f:
        f.write(r.content)
5. Loops:

i = 0
while i < 10:
    url = mmurl + str(i)
    print(url)
    i += 1
Or you can also write it like this:

for n in range(1, int(page) + 1):
    each_page = url + 'list_10_' + str(n) + '.html'

Or:

for n in range(1, int(page) + 1):
    same_url = url + '/p{}.html'.format(n)

They all produce the same result.
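That equivalence can be checked with a small self-contained sketch; base is a hypothetical URL prefix standing in for the mmurl/url variables above:

```python
# Hypothetical URL prefix standing in for the mmurl/url variables above.
base = "http://example.com/p"

# Style 1: a while loop with a manual counter.
urls_while = []
i = 1
while i < 4:
    urls_while.append(base + str(i))
    i += 1

# Style 2: a for loop with string concatenation.
urls_concat = [base + str(n) for n in range(1, 4)]

# Style 3: a for loop with str.format.
urls_format = ["{}{}".format(base, n) for n in range(1, 4)]

print(urls_while == urls_concat == urls_format)  # → True
```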
