Lesson 12, Section 3 Practice Project: Scraping Rental Listings


Target site: Xiaozhu short-term rentals, http://bj.xiaozhu.com/search-duanzufang-p1-0/

First, collect the detail-page links from the first 30 list pages.

from bs4 import BeautifulSoup
import requests

page_link = []  # every detail-page link goes here; to parse the details later, just iterate over this list and visit each URL

def get_page_link(page_number):
    # 24 links per list page; the argument is the number of pages to scrape.
    # Note the + 1: range(1, page_number) would stop at page 29.
    for each_number in range(1, page_number + 1):
        full_url = 'http://bj.xiaozhu.com/search-duanzufang-p{}-0/'.format(each_number)
        wb_data = requests.get(full_url)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        for link in soup.select('a.resule_img_a'):  # the <a> tags with class resule_img_a carry the detail-page URLs
            page_link.append(link.get('href'))

get_page_link(30)  # test
print(page_link)   # test
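The next step of the exercise is to iterate over page_link and parse each detail page, but the source only shows the link-collection step. Below is a minimal sketch of that follow-up, assuming hypothetical CSS selectors (div.pho_info h4 for the title, #pricePart .day_l span for the price); get_detail_info is a helper name introduced here, and you would need to inspect the live page and substitute the real selectors.

from bs4 import BeautifulSoup
import requests
import time

def get_detail_info(url):
    # Fetch one detail page and pull out a few listing fields.
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # NOTE: these selectors are assumed placeholders, not verified against
    # the real Xiaozhu detail-page HTML.
    title = soup.select_one('div.pho_info h4')
    price = soup.select_one('#pricePart .day_l span')
    return {
        'url': url,
        'title': title.get_text(strip=True) if title else None,
        'price': price.get_text(strip=True) if price else None,
    }

for link in page_link:
    print(get_detail_info(link))
    time.sleep(1)  # pause between requests to be polite to the site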

An alternative version of the link-collection code (whether it holds up against the site's anti-scraping IP blocking has yet to be verified):

from bs4 import BeautifulSoup
import requests

each_link = []
# Build the list-page URLs up front; range(1, 10) covers pages 1-9.
urls = ['http://bj.xiaozhu.com/search-duanzufang-p{}-0/'.format(i) for i in range(1, 10)]

def get_each_page_link(urls):
    # Collect the detail-page link from every listing on each page.
    for url in urls:
        wb_data = requests.get(url)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        for a in soup.select('a.resule_img_a'):
            each_link.append(a.get('href'))

get_each_page_link(urls)
print(each_link)
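On the IP-blocking concern above: a common first mitigation is to send a browser-like User-Agent header and pause between requests. The sketch below is a minimal illustration of that idea, not a guaranteed way past Xiaozhu's anti-scraping measures; the header string and the get_each_page_link_politely name are examples introduced here.

import time
import requests
from bs4 import BeautifulSoup

# A browser-like header; the exact UA string is only an example.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

def get_each_page_link_politely(urls, delay=2):
    links = []
    for url in urls:
        wb_data = requests.get(url, headers=headers)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        for a in soup.select('a.resule_img_a'):
            links.append(a.get('href'))
        time.sleep(delay)  # pause between page requests to reduce blocking risk
    return links

A longer delay lowers the risk of getting blocked but slows the crawl; for 30 pages at 2 seconds per request the whole run still finishes in about a minute.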