Python3 -- Pagination

Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 02:57

After fetching the first page's content and avatars, the next step is to have the code page through the remaining content automatically. The principle is simple: find the link to the next page and request it. Most of the changes go into the second example.

The second example
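The idea can be sketched on a toy HTML fragment (the markup below is illustrative, not the site's actual structure, though it mirrors the `pagination`/`next` classes used later):

```python
from bs4 import BeautifulSoup

# Illustrative fragment shaped like the site's pagination bar
html = ('<ul class="pagination">'
        '<li><a href="/8hr/page/2/"><span class="next">next</span></a></li>'
        '</ul>')

bs = BeautifulSoup(html, "html.parser")
next_span = bs.find('span', class_='next')          # locate the "next page" label
next_href = next_span.find_parent('a').get('href')  # climb to its <a> and read the link
print(next_href)  # /8hr/page/2/
```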

  • Fetching the HTML
```python
import http.client
import random
import socket
import time

import requests


def get_info(url, data=None):
    header = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Connection': 'keep-alive',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36'
    }
    timeout = random.choice(range(60, 180))
    while True:
        try:
            rep = requests.get(url, headers=header, timeout=timeout)
            rep.encoding = 'utf-8'
            break
        except socket.timeout as e:
            print('3:', e)
            time.sleep(random.choice(range(8, 15)))
        except socket.error as e:
            print('4:', e)
            time.sleep(random.choice(range(20, 60)))
        except http.client.BadStatusLine as e:
            print('5:', e)
            time.sleep(random.choice(range(30, 80)))
        except http.client.IncompleteRead as e:
            print('6:', e)
            time.sleep(random.choice(range(5, 15)))
    return rep.text
```

This part is the same as before and needs no changes.

  • Extracting the relevant information
```python
from bs4 import BeautifulSoup


def get_data(html):
    final = []
    pictures = []
    bs = BeautifulSoup(html, "html.parser")
    body = bs.body
    content_left = body.find(id='content-left')  # container for the whole page
    contents = content_left.find_all('div', class_='article block untagged mb15')  # all post blocks
    pages = content_left.find('ul', class_='pagination')  # pagination bar
    next_page = pages.find('span', class_='next')  # the "next page" label
    nextUrl = next_page.find_parent('a').get('href')  # URL of the next page
    for content in contents:  # iterate over the posts
        temp = []
        author = content.find('div', class_='author clearfix')
        picture = author.find('img')
        picture_src = picture.get('src')
        pictures.append(picture_src)
        user_name = content.find("h2").string
        temp.append(user_name)
        data = content.find(class_='content')
        story = data.find('span').get_text()
        temp.append(story)
        numbers = content.find_all('i', class_='number')
        good = numbers[0].string + '好笑'
        temp.append(good)
        comment = numbers[1].string + '评论'
        temp.append(comment)
        temp.append(picture_src)
        final.append(temp)
    return final, pictures, nextUrl  # this page's data, its avatar URLs, and the next page's URL
```

The main change here is that the next page's URL is also returned.

  • Implementing pagination
```python
def page_turn(url):
    results_list = []   # post data
    pictures_list = []  # avatar URLs
    currentUrl = url    # URL of the page currently being fetched
    count = 1
    while count <= 3:  # only collect the first 3 pages here
        html = get_info(currentUrl)                  # call get_info()
        results, pictures, nextUrl = get_data(html)  # call get_data()
        results_list.append(results)    # append this page's list of posts to the overall list
        pictures_list.append(pictures)  # append this page's avatar URLs to the overall list
        currentUrl = url + nextUrl[1:]  # build the next page's URL
        print(currentUrl)
        count = count + 1
    return results_list, pictures_list  # return the combined results
```

This is the paging function. Note that the elements of results_list and pictures_list are themselves lists!
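The nextUrl handling is easy to miss: get_data() returns a root-relative link, so the leading slash is dropped before concatenating with the base URL, which already ends in one (the href value below is illustrative):

```python
url = 'http://www.qiushibaike.com/'  # base URL ends with a slash
nextUrl = '/8hr/page/2/'             # href from get_data() starts with one
currentUrl = url + nextUrl[1:]       # drop the duplicate slash
print(currentUrl)  # http://www.qiushibaike.com/8hr/page/2/
```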

  • Writing to CSV
```python
import csv


def write_data(datas, name):
    file_name = name
    for data in datas:  # each element of datas is one page's list of rows
        with open(file_name, 'a', errors='ignore', newline='') as f:
            f_csv = csv.writer(f)
            f_csv.writerows(data)
```

The earlier examples only scraped a single page, so the argument passed in had the shape [[list1], [list2], …, [listn]]. Here it is nested one level deeper, [[[list1], …, [listn]], [[list1], …, [listn]], …, [[list1], …, [listn]]], so an extra loop is needed to strip off one level of nesting.
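A tiny mock run shows why the one extra loop suffices (the data below is made up, written to an in-memory buffer instead of a file):

```python
import csv
import io

# Made-up stand-in for page_turn()'s output: two pages, each a list of rows
pages = [
    [['user1', 'story1'], ['user2', 'story2']],  # page 1
    [['user3', 'story3']],                       # page 2
]

buf = io.StringIO()
writer = csv.writer(buf)
for page in pages:          # strip one level of nesting
    writer.writerows(page)  # each page is already a list of rows
print(buf.getvalue())
```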

  • Downloading the avatars
```python
import os


def download_pic(imgs):
    count = 1
    if not os.path.exists('pic'):
        os.makedirs('pic')
    for pictures in imgs:  # one list of avatar URLs per page
        for picture in pictures:
            if picture == '/static/images/thumb/anony.png?v=b61e7f5162d14b7c0d5f419cd6649c87':
                print("static placeholder image")
                continue
            else:
                try:
                    r = requests.get(picture)
                except BaseException as e:
                    print("image download failed", e)
                    time.sleep(random.choice(range(30, 80)))
                else:
                    filename = str(count) + '.jpg'
                    path = "pic/" + filename
                    with open(path, 'wb') as f:  # close the file after writing
                        f.write(r.content)
                    print(count)
                    count = count + 1
```

Same principle as before.

  • The main function
```python
if __name__ == '__main__':
    url = 'http://www.qiushibaike.com/'
    result, picture = page_turn(url)
    write_data(result, 'qiubai.csv')
    download_pic(picture)
```

Source code
