Crawling website images with a Python scraper


Original source: http://blog.csdn.net/qq_27512671/article/details/78022625

Result screenshots

Make way, make way: here is a result screenshot first (smirk). The full source code is at the bottom.

(Screenshot: Douyu scraping results)

The implementation breaks down into three steps:
1. Fetch the page source
2. Parse the page source to build a list of all image URLs
3. Iterate over the list and save each image locally

Implementation steps

Fetching the page data

```python
import requests

def getHtmlText(url):
    r = requests.get(url)
    r.raise_for_status()                 # raise on HTTP errors
    r.encoding = r.apparent_encoding     # guess the real encoding from content
    return r.text
```

Parsing the page source for the list of image URLs

```python
from bs4 import BeautifulSoup

def getImageList(html, lst):
    soup = BeautifulSoup(html, 'html.parser')
    for img in soup.find_all('img'):
        try:
            lst.append(img.attrs['src'])
        except KeyError:                 # <img> without a src attribute
            continue
```
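If BeautifulSoup is not available, the same extraction can be sketched with only the standard library's `html.parser`; the `ImgSrcCollector` class below is my own illustration, not part of the original code:

```python
from html.parser import HTMLParser

class ImgSrcCollector(HTMLParser):
    """Collect the src attribute of every <img> tag, skipping tags without one."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            attrs = dict(attrs)
            if 'src' in attrs:
                self.srcs.append(attrs['src'])

sample = '<html><body><img src="/a.jpg"><img alt="no src"><img src="/b.png"></body></html>'
parser = ImgSrcCollector()
parser.feed(sample)
print(parser.srcs)  # ['/a.jpg', '/b.png']
```

Either approach yields the same list of relative image paths, which the next step turns into full URLs.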

Iterating over the list and saving the images locally

```python
# Excerpt of the download loop (root and srcList are set up in the full
# example below); the built-in name `list` is replaced with `srcList`.
tmp = 0
for src in srcList:
    try:
        print(root + src)
        urllib.request.urlretrieve(root + src, r'D:\pythonPath\%s.jpg' % tmp)
        tmp += 1
        print('success')
    except Exception:
        print('failed')
print('download finished')
```
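The counter-based names above discard each image's original file name and extension. As an alternative, a name can be derived from the URL itself; `filenameFromUrl` below is a hypothetical helper of my own, not part of the original script:

```python
import os
from urllib.parse import urlparse

def filenameFromUrl(src, fallback='image.jpg'):
    """Derive a local file name from an image URL, keeping its real extension."""
    path = urlparse(src).path       # drop query strings such as ?w=200
    name = os.path.basename(path)   # keep only the last path component
    return name if name else fallback

print(filenameFromUrl('http://www.quanjing.com/images/pic01.jpg?w=200'))  # pic01.jpg
print(filenameFromUrl('http://www.quanjing.com/'))                        # image.jpg
```

Note that names derived this way can collide across pages, so the counter remains the simpler choice for a one-off scrape.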

Worked examples

Fetching all images from the Quanjing homepage

```python
import os
import urllib.request

import requests
from bs4 import BeautifulSoup

localPath = r'D:\pythonPath'

def getHtmlText(url):
    r = requests.get(url)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    return r.text

def getImageList(html, lst):
    soup = BeautifulSoup(html, 'html.parser')
    for img in soup.find_all('img'):
        try:
            lst.append(img.attrs['src'])
        except KeyError:
            continue

def start():
    root = "http://www.quanjing.com/"
    html = getHtmlText("http://www.quanjing.com/?audience=151316")
    srcList = []
    getImageList(html, srcList)
    os.makedirs(localPath, exist_ok=True)   # make sure the target folder exists
    tmp = 0
    for src in srcList:
        try:
            # src is site-relative, so prepend the site root
            print(root + src)
            urllib.request.urlretrieve(root + src, os.path.join(localPath, '%s.jpg' % tmp))
            tmp += 1
            print('success')
        except Exception:
            print('failed')
    print('download finished')

# start scraping
start()
```

Fetching streamer avatars from a Douyu category page

```python
import os
import urllib.request

import requests
from bs4 import BeautifulSoup

localPath = r'D:\pythonPath'

def getHtmlText(url):
    r = requests.get(url)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    return r.text

def getImageList(html, lst):
    # Douyu lazy-loads avatars, so the real URL is in data-original, not src.
    soup = BeautifulSoup(html, 'html.parser')
    for img in soup.find_all('img'):
        try:
            lst.append(img.attrs['data-original'])
        except KeyError:
            continue

def start():
    html = getHtmlText("https://www.douyu.com/directory/game/yz")
    srcList = []
    getImageList(html, srcList)
    os.makedirs(localPath, exist_ok=True)
    tmp = 0
    for src in srcList:
        try:
            # data-original holds an absolute URL, so no site root is prepended
            print(src)
            urllib.request.urlretrieve(src, os.path.join(localPath, '%s.jpg' % tmp))
            tmp += 1
            print('success')
        except Exception:
            print('failed')
    print('download finished')

# start scraping
start()
```