Python 2.7
IDE: PyCharm 5.0.3
Firefox 47.0.1
For details on Selenium and PhantomJS, see Python+Selenium+PIL+Tesseract真正自动识别验证码进行一键登录.
For some automation examples, see Selenium+PhantomJS自动续借图书馆书籍.
For an introduction to the GUI side, see Python基于Tkinter的二输入规则器(乞丐版).
For a more complete GUI example, see 基于Python的参考文献生成器1.0.
After thinking it over, I decided to be a bit more user-friendly and build a finished GUI.
Background
No way around it: I had announced on Zhihu that I would build a GUI, so the promise had to be kept. I will be more careful next time. On the upside it was a chance to review GUI programming... In fact it is mostly my earlier code with the input and output swapped, so don't worry, let's do this!
Goals
1. On top of Python自定义豆瓣电影种类,排行,点评的爬取与存储(初级), add a GUI (the one I talked myself into), so that less typing and more clicking is needed.
2. Keep the features of 1, add an option to load reviews so that short comments and long reviews are collected together, and restructure the code for better extensibility (in my view), which makes it easier to set conventions when new crawl targets are added later.
3. Finally, package everything as an exe so others can use it; for how to do that, see 如何将python打包成exe文件.
Approach
Implemented with Tkinter + PhantomJS + Selenium + Firefox.
Implementation
1. After loading the home page, click the chosen category according to the selection, then sort according to the requested order. The "input" here is just clicking a value in the Listbox.
2. Grab every movie title and its link, open each link, and grab that movie's hot (short) comments and its long review.
3. When the requested TOP count is larger than the 20 movies on the first page, click "load more" so another 20 movies appear, then repeat step 2 (a minimal sketch of this step follows the list).
4. Write the output to the output box and append it to a txt file.
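A minimal sketch of step 3, reusing the XPath from the full code further down; num is the requested TOP count plus one, and each click on "load more" reveals another batch of 20 movies:

num_time = num / 20 + 1                      # how many batches of 20 are needed (Python 2 integer division)
for times in range(1, num_time):
    time.sleep(2)                            # give the page a moment to render the new batch
    driver_item.find_element_by_xpath("//div[@class='list-wp']/a[@class='more']").click()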
Results
Result of running the .py file. The TV option is not implemented yet; even if you click it, you still get movies.
Result of running the packaged exe.
If you only want the GUI without the cmd window, pass the -w option when packaging:
pyinstaller -F -w Selenium_PhantomJS_doubanMvGUI.py
For the details, see 如何将python打包成exe文件.
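One packaging pitfall worth noting: the code opens PhantomJS with executable_path="phantomjs.exe", a path relative to the current working directory, so phantomjs.exe has to sit next to the generated exe when it runs. A small hypothetical helper (not part of the original code) that resolves the driver next to the program itself, whether frozen or not:

import os
import sys

def phantomjs_path():
    # When frozen by PyInstaller, sys.argv[0] points at the exe; otherwise at the .py file
    base = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.join(base, "phantomjs.exe")

# usage: webdriver.PhantomJS(executable_path=phantomjs_path())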
Program structure
Dumping such a long program on you all at once would be confusing, so I drew a simplified diagram.
As for how the pieces nest inside each other, I did not bother drawing that; knowing these few modules roughly is enough to read the program, which is really quite simple.
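For reference, a rough skeleton of those modules (names taken directly from the code below):

def getURL_Title():                  # "crawl" button callback: read the GUI selections, open Firefox,
    ...                              # click category and sort order, page through the list, call getDetails()
def getDetails(url, comments):       # open one movie page in PhantomJS and grab the synopsis,
    ...                              # the hot short comments and the long review
def Write_txt(text1, text2, title):  # append the output to the chosen txt file
    ...
def Clea():                          # "clear" button callback: empty the entries and the output box
    ...
# plus the Tkinter layout: four Listboxes (Movies/TV, category, sort order, reviews yes/no),
# two Entry fields (TOP count, save name), the output Listbox and the crawl/clear buttons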
Code
from selenium import webdriver
import selenium.webdriver.support.ui as ui
import time
from Tkinter import *

print "---------------system loading...please wait...---------------"


def getURL_Title():
    # "crawl" button callback: read the GUI selections and crawl the movie list with Firefox
    global save_name
    SUMRESOURCES = 0
    url = "https://movie.douban.com/"
    driver_item = webdriver.Firefox()
    wait = ui.WebDriverWait(driver_item, 15)

    # Lookup dictionaries: selected label -> position of the matching filter on the page
    Kind_Dict = {'Hot': 1, 'Newest': 2, 'Classics': 3, 'Playable': 4, 'High Scores': 5,
                 'Wonderful but not popular': 6, 'Chinese film': 7, 'Hollywood': 8,
                 'Korea': 9, 'Japan': 10, 'Action movies': 11, 'Comedy': 12, 'Love story': 13,
                 'Science fiction': 14, 'Thriller': 15, 'Horror film': 16, 'Whatever': 17}
    Sort_Dict = {'Sort by hot': 1, 'Sort by time': 2, 'Sort by score': 3}
    Ask_Dict = {'No film reviews': 0, 'I like film reviews': 1}

    kind = Kind_Dict[Kind_Select.get(Kind_Select.curselection()).encode('utf-8')]
    sort = Sort_Dict[Sort_Select.get(Sort_Select.curselection()).encode('utf-8')]
    number = int(input_Top.get())
    ask_comments = Ask_Dict[Comment_Select.get(Comment_Select.curselection()).encode('utf-8')]
    save_name = input_SN.get()

    Ans.insert(END, "#####################################################################")
    Ans.insert(END, "                              Reloading                              ")
    Ans.insert(END, "#####################################################################")
    Ans.insert(END, "---------------------------------------system loading...please wait...------------------------------------------")
    Ans.insert(END, "----------------------------------------------crawling----------------------------------------------")
    Write_txt('\n##########################################################################################',
              '\n##########################################################################################', save_name)
    print "---------------------crawling...---------------------"

    # Open the home page, then click the chosen category and sort order
    driver_item.get(url)
    wait.until(lambda driver: driver.find_element_by_xpath("//div[@class='fliter-wp']/div/form/div/div/label[%s]" % kind))
    driver_item.find_element_by_xpath("//div[@class='fliter-wp']/div/form/div/div/label[%s]" % kind).click()
    wait.until(lambda driver: driver.find_element_by_xpath("//div[@class='fliter-wp']/div/form/div[3]/div/label[%s]" % sort))
    driver_item.find_element_by_xpath("//div[@class='fliter-wp']/div/form/div[3]/div/label[%s]" % sort).click()

    # Click "load more" until enough movies (20 per batch) are shown
    num = number + 1
    time.sleep(2)
    num_time = num / 20 + 1
    wait.until(lambda driver: driver.find_element_by_xpath("//div[@class='list-wp']/a[@class='more']"))
    for times in range(1, num_time):
        time.sleep(2)
        driver_item.find_element_by_xpath("//div[@class='list-wp']/a[@class='more']").click()

    # Walk through the list: title + link, then fetch each movie's details
    for i in range(1, num):
        wait.until(lambda driver: driver.find_element_by_xpath("//div[@class='list']/a[%d]" % num))
        list_title = driver_item.find_element_by_xpath("//div[@class='list']/a[%d]" % i)
        print '----------------------------------------------' + 'NO' + str(SUMRESOURCES + 1) + '----------------------------------------------'
        print u'电影名: ' + list_title.text
        print u'链接: ' + list_title.get_attribute('href')
        list_title_wr = list_title.text.encode('utf-8')
        list_title_url_wr = list_title.get_attribute('href')
        Ans.insert(END, '\n------------------------------------------------' + 'NO' + str(SUMRESOURCES + 1) + '----------------------------------------------', list_title_wr, list_title_url_wr)
        Write_txt('\n----------------------------------------------' + 'NO' + str(SUMRESOURCES + 1) + '----------------------------------------------', '', save_name)
        Write_txt(list_title_wr, list_title_url_wr, save_name)
        SUMRESOURCES = SUMRESOURCES + 1

        try:
            getDetails(str(list_title.get_attribute('href')), ask_comments)
        except:
            print 'can not get the details!'

    driver_item.quit()


def getDetails(url, comments):
    # Open one movie page in PhantomJS and grab the synopsis, hot comments and the long review
    driver_detail = webdriver.PhantomJS(executable_path="phantomjs.exe")
    wait1 = ui.WebDriverWait(driver_detail, 15)
    driver_detail.get(url)
    wait1.until(lambda driver: driver.find_element_by_xpath("//div[@id='link-report']/span"))
    drama = driver_detail.find_element_by_xpath("//div[@id='link-report']/span")
    print u"剧情简介:" + drama.text
    drama_wr = drama.text.encode('utf-8')
    Ans.insert(END, drama_wr)
    Write_txt(drama_wr, '', save_name)

    if comments == 1:
        print "--------------------------------------------Hot comments TOP----------------------------------------------"
        # The first four hot (short) comments
        for i in range(1, 5):
            try:
                comments_hot = driver_detail.find_element_by_xpath("//div[@id='hot-comments']/div[%s]/div/p" % i)
                print u"最新热评:" + comments_hot.text
                comments_hot_wr = comments_hot.text.encode('utf-8')
                Ans.insert(END, "--------------------------------------------Hot comments TOP%d----------------------------------------------" % i, comments_hot_wr)
                Write_txt("--------------------------------------------Hot comments TOP%d----------------------------------------------" % i, '', save_name)
                Write_txt(comments_hot_wr, '', save_name)
            except:
                print 'can not caught the comments!'

        # The long review: expand it, then skip the spoiler-warning div if it is present
        try:
            driver_detail.find_element_by_xpath("//img[@class='bn-arrow']").click()
            time.sleep(1)
            comments_get = driver_detail.find_element_by_xpath("//div[@class='review-bd']/div[2]/div")
            if comments_get.text.encode('utf-8') == '提示: 这篇影评可能有剧透':
                comments_deep = driver_detail.find_element_by_xpath("//div[@class='review-bd']/div[2]/div[2]")
            else:
                comments_deep = comments_get
            print "--------------------------------------------long-comments---------------------------------------------"
            print u"深度长评:" + comments_deep.text
            comments_deep_wr = comments_deep.text.encode('utf-8')
            Ans.insert(END, "--------------------------------------------long-comments---------------------------------------------\n", comments_deep_wr)
            Write_txt("--------------------------------------------long-comments---------------------------------------------\n", '', save_name)
            Write_txt(comments_deep_wr, '', save_name)
        except:
            print 'can not caught the deep_comments!'


def Write_txt(text1='', text2='', title='douban.txt'):
    # Append two pieces of text to the chosen output file
    with open(title, "a") as f:
        for i in text1:
            f.write(i)
        f.write("\n")
        for j in text2:
            f.write(j)
        f.write("\n")


def Clea():
    # "clear" button callback: empty both entries and the output box
    input_Top.delete(0, END)
    input_SN.delete(0, END)
    Ans.delete(0, END)


# ----------------------------- GUI layout -----------------------------
root = Tk()
root.title('豆瓣影视抓取器beta--by哈士奇说喵')

frame_select = Frame(root)
title_label = Label(root, text='豆瓣影视TOP抓取器')
title_label.pack()

# Movies / TV selection (TV is a placeholder for now)
Mov_Tv = Listbox(frame_select, exportselection=False, width=9, height=4)
list_item1 = ['Movies', 'TV']
for i in list_item1:
    Mov_Tv.insert(END, i)
scr_MT = Scrollbar(frame_select)
Mov_Tv.configure(yscrollcommand=scr_MT.set)
scr_MT['command'] = Mov_Tv.yview

# Category selection
Kind_Select = Listbox(frame_select, exportselection=False, width=12, height=4)
list_item2 = ['Hot', 'Newest', 'Classics', 'Playable', 'High Scores',
              'Wonderful but not popular', 'Chinese film', 'Hollywood',
              'Korea', 'Japan', 'Action movies', 'Comedy', 'Love story',
              'Science fiction', 'Thriller', 'Horror film', 'Whatever']
for i in list_item2:
    Kind_Select.insert(END, i)
scr_Kind = Scrollbar(frame_select)
Kind_Select.configure(yscrollcommand=scr_Kind.set)
scr_Kind['command'] = Kind_Select.yview

# Sort order selection
Sort_Select = Listbox(frame_select, exportselection=False, width=12, height=4)
list_item3 = ['Sort by hot', 'Sort by time', 'Sort by score']
for i in list_item3:
    Sort_Select.insert(END, i)
scr_Sort = Scrollbar(frame_select)
Sort_Select.configure(yscrollcommand=scr_Sort.set)
scr_Sort['command'] = Sort_Select.yview

# Load reviews or not
Comment_Select = Listbox(frame_select, exportselection=False, width=16, height=4)
list_item4 = ['No film reviews', 'I like film reviews']
for i in list_item4:
    Comment_Select.insert(END, i)
scr_Com = Scrollbar(frame_select)
Comment_Select.configure(yscrollcommand=scr_Com.set)
scr_Com['command'] = Comment_Select.yview

# TOP count and save-file name
Label_TOP = Label(frame_select, text='TOP(xx)', font=('', 10))
var_Top = StringVar()
input_Top = Entry(frame_select, textvariable=var_Top, width=8)

Label_SN = Label(frame_select, text='SAVE_NAME(xx.txt)', font=('', 10))
var_SN = StringVar()
input_SN = Entry(frame_select, textvariable=var_SN, width=8)

# Output area with crawl/clear buttons and scrollbars
frame_output = Frame(root)
out_label = Label(frame_output, text='Details')
Ans = Listbox(frame_output, selectmode=MULTIPLE, height=15, width=80)
crawl_button = Button(frame_output, text='crawl', command=getURL_Title)
clear_button = Button(frame_output, text='clear', command=Clea)

scr_Out_y = Scrollbar(frame_output)
Ans.configure(yscrollcommand=scr_Out_y.set)
scr_Out_y['command'] = Ans.yview
scr_Out_x = Scrollbar(frame_output, orient='horizontal')
Ans.configure(xscrollcommand=scr_Out_x.set)
scr_Out_x['command'] = Ans.xview

# Pack everything
frame_select.pack()
Mov_Tv.pack(side=LEFT)
scr_MT.pack(side=LEFT)
Kind_Select.pack(side=LEFT)
scr_Kind.pack(side=LEFT)
Sort_Select.pack(side=LEFT)
scr_Sort.pack(side=LEFT)
Comment_Select.pack(side=LEFT)
scr_Com.pack(side=LEFT)
Label_TOP.pack()
input_Top.pack()
Label_SN.pack()
input_SN.pack()

frame_output.pack()
out_label.pack()
crawl_button.pack(side=LEFT)
clear_button.pack(side=RIGHT)
scr_Out_y.pack(side=RIGHT)
Ans.pack()
scr_Out_x.pack()

root.mainloop()
I will not explain the code line by line; reading the comments should be enough.
Problems, solutions & tips
1. In Python自定义豆瓣电影种类,排行,点评的爬取与存储(初级) I forgot to output the synopsis; that was careless of me. Adding the following two lines fixes it (the earlier article has been updated):
drama_wr=drama.text.encode('utf-8')
Write_txt(drama_wr,'',save_name)
2. When the page shows "提示:这篇影评可能剧透" ("Warning: this review may contain spoilers"), fetching the long review fails (already fixed in the earlier article), as shown in the screenshot.
The problem lies in this statement:
comments_deep=driver_detail.find_element_by_xpath("//div[@class='review-bd']/div[2]/div")
Solution to 2: inspect the page elements and see what is actually going on.
One look is enough: an extra div appears in front of our target element out of nowhere. Easy to handle; add a check:
comments_get = driver_detail.find_element_by_xpath("//div[@class='review-bd']/div[2]/div")
if comments_get.text.encode('utf-8')=='提示: 这篇影评可能有剧透':
    comments_deep=driver_detail.find_element_by_xpath("//div[@class='review-bd']/div[2]/div[2]")
else:
    comments_deep = comments_get
3. Kind has 17 categories, and writing an if statement for each one is tedious and redundant; for example, like this, 17 times over:
if Kind_Select.get(Kind_Select.curselection()).encode('utf-8')=='Movies':
    kind = 1
Solution to 3: use a dictionary! Is there anything better than a dictionary for key-value lookups? Taking the Kind selection as an example:
# Build a lookup dictionary so the selected label maps straight to its position on the page
Kind_Dict={'Hot':1,'Newest':2,'Classics':3,'Playable':4,'High Scores':5,
           'Wonderful but not popular':6,'Chinese film':7,'Hollywood':8,
           'Korea':9,'Japan':10,'Action movies':11,'Comedy':12,'Love story':13,
           'Science fiction':14,'Thriller':15,'Horror film':16,'Whatever':17}  # the last category on the site keeps changing...
kind=Kind_Dict[Kind_Select.get(Kind_Select.curselection()).encode('utf-8')]
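A side note, not in the original code: if there is any chance the selected label is missing from the dictionary, dict.get() with a default avoids a KeyError:

kind = Kind_Dict.get(Kind_Select.get(Kind_Select.curselection()).encode('utf-8'), 1)  # fall back to 'Hot'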
4. Only movie crawling is implemented so far. The TV part is not done yet; the option is just sitting there, so please do not click "TV" while testing, there is nothing behind it. I will fill it in when I find the time.
A strange problem
The packaged exe fails to fetch long reviews for some movies. I have ruled out a program error: the same script runs fine in the Python environment, but the exe sometimes cannot fetch the long review. No solution so far.
See the screenshot, with Inception (盗梦空间) as the example.
Yet the very same program, run under Python, does fetch the long review.
I really cannot explain this; it may be a PyInstaller bug.
Download of the packaged EXE
It also includes the source files of the earlier cmd version; it is a bundle.
基于python豆瓣自定义电影抓取GUI版本
Finally
Testing took quite a lot of time, mainly because Selenium is rather slow and Firefox uses a lot of resources, which is not great for crawling large amounts of data. If anyone knows a good way to crawl dynamic data at scale, please leave a comment.
PS
With provinces all over the country flooding, Harbin has finally had its heavy rain too. Everyone stay safe out there. I do wonder whether I can still make it home...
Acknowledgements
@MrLevo520–伪解决Selenium中调用PhantomJS无法模拟点击(click)操作
@MrLevo520–Python输出(print)内容写入txt中保存
@MrLevo520–解决网页元素无法定位(NoSuchElementException: Unable to locate element)的几种方法
@Eastmount–[Python爬虫] Selenium+Phantomjs动态获取CSDN下载资源信息和评论
@Eastmount–[Python爬虫] 在Windows下安装PIP+Phantomjs+Selenium
@MrLevo520–解决Selenium弹出新页面无法定位元素问题(Unable to locate element)
@MrLevo520–Python自定义豆瓣电影种类,排行,点评的爬取与存储(初级)
@MrLevo520–Python基于Tkinter的二输入规则器(乞丐版)
@MrLevo520–基于Python的参考文献生成器1.0