Scraping Ganji.com data with a Python web crawler

import csv
import re

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'}

# Listing pages follow the pattern http://sh.ganji.com/zpbanyungong/o<page>/
with open('E:/infomation.csv', 'wt', encoding='utf8', newline='') as f:
    csv_writer = csv.writer(f, dialect='excel')  # open the csv file here and start writing
    for i in range(1, 2):  # only page 1 here; widen the range to crawl more pages
        new_url = 'http://sh.ganji.com/zpbanyungong/o' + str(i) + '/'
        content = requests.get(new_url, headers=headers).text
        # NOTE: the HTML tags that originally separated the three capture groups
        # were stripped when this post was published; only the recoverable parts
        # are kept below, so rebuild the pattern against the page's real markup.
        pattern = re.compile(
            'gjalog="100000002704000200000010@atype=click">(.*?)'
            '.*?(.*?).*?(.*?)',
            re.S)
        results = re.findall(pattern, content)
        for result in results:
            name, money, contrict = result  # job title, salary, contact
            # the original used '/s+', which matches literal text; \s+ is the
            # whitespace class actually intended
            name = re.sub(r'\s+', '', name).strip()
            money = re.sub(r'\s+', '', money).strip()
            contrict = re.sub(r'\s+', '', contrict).strip()
            csv_writer.writerow([name, money, contrict])
            # print(name, money, contrict)
That's all the code for this post.
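Because a regular expression pinned to a single gjalog attribute breaks as soon as Ganji changes its markup, a parser-based version is usually easier to maintain. The sketch below redoes the same crawl with BeautifulSoup instead of re; the CSS selectors ('dl.list-noimg', 'dt a', '.new-dl-salary', '.company-name') are assumptions for illustration, since the post does not show the page's real HTML, so check them against the live page before relying on them.

import csv

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'}

with open('E:/infomation.csv', 'wt', encoding='utf8', newline='') as f:
    csv_writer = csv.writer(f, dialect='excel')
    for i in range(1, 2):
        url = 'http://sh.ganji.com/zpbanyungong/o{}/'.format(i)
        soup = BeautifulSoup(requests.get(url, headers=headers).text, 'html.parser')
        for item in soup.select('dl.list-noimg'):       # hypothetical listing container
            name = item.select_one('dt a')              # hypothetical job-title link
            money = item.select_one('.new-dl-salary')   # hypothetical salary cell
            contact = item.select_one('.company-name')  # hypothetical company/contact cell
            # write empty strings for any field the selector did not find
            row = [el.get_text(strip=True) if el else '' for el in (name, money, contact)]
            csv_writer.writerow(row)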