Python (11): Multithreading (Multi-Thread)

Fully independent tasks: downloading N independent files


Lately I've had to download a massive amount of data, and a single thread simply can't keep up.

A rough estimate put the total download time at over 2,000 hours, roughly three months.

But if each process can run several threads, each machine can run several processes, and I can scrounge up a few dozen machines...

To keep things simple, I searched the web for some fairly concise examples of multithreaded processing in Python.

import urllib2
from multiprocessing.dummy import Pool as ThreadPool

urls = [
    'http://www.python.org',
    'http://www.python.org/about/',
    'http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html',
    'http://www.python.org/doc/',
    'http://www.python.org/download/',
    'http://www.python.org/getit/',
    'http://www.python.org/community/',
    'https://wiki.python.org/moin/',
    'http://planet.python.org/',
    'https://wiki.python.org/moin/LocalUserGroups',
    'http://www.python.org/psf/',
    'http://docs.python.org/devguide/',
    'http://www.python.org/community/awards/'
    # etc..
]

# Make the Pool of workers
pool = ThreadPool(4)
# Open the urls in their own threads
# and return the results
results = pool.map(urllib2.urlopen, urls)
# close the pool and wait for the work to finish
pool.close()
pool.join()
Unfortunately, my machine somehow couldn't install this urllib2... (urllib2 is part of the Python 2 standard library; in Python 3 it was merged into urllib.request, so there is nothing to install.)
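For reference, the same download pattern ought to run on Python 3, where urllib2's urlopen lives in urllib.request. This is only a minimal sketch along the lines of the example above, not something tested in the original post:

from multiprocessing.dummy import Pool as ThreadPool
from urllib.request import urlopen  # Python 3 home of urllib2.urlopen

urls = [
    'http://www.python.org',
    'http://www.python.org/about/',
    'http://www.python.org/doc/',
    # ... same list as above
]

def fetch(url):
    # each call runs in its own worker thread and returns the response body
    with urlopen(url) as response:
        return response.read()

pool = ThreadPool(4)             # 4 worker threads
results = pool.map(fetch, urls)  # list of page contents, in input order
pool.close()                     # no more tasks will be submitted
pool.join()                      # wait for all downloads to finish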


So I had to swap in an even simpler example of my own.

from multiprocessing.dummy import Pool as ThreadPool
import time
import numpy as np

def say_hello(x):
    return x

time1 = time.time()
# Make the Pool of workers
pool = ThreadPool(4)
# Map say_hello over the inputs in their own threads
# and collect the results
results = pool.map(say_hello, np.arange(100))
# close the pool and wait for the work to finish
pool.close()
pool.join()
time2 = time.time()
print(time2 - time1)
The returned results will be a list, with one element per input.
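Like the built-in map, pool.map preserves input order: results[i] corresponds to input i, no matter which worker thread finished first. A quick check against the example above:

print(len(results))   # 100, one result per input
print(results[:5])    # the first five results, in input order: 0 through 4

On Python 3, the same pattern is more often written with concurrent.futures.ThreadPoolExecutor; its map returns an iterator rather than a list, so wrap it in list() if a list is needed. Again, only a sketch:

from concurrent.futures import ThreadPoolExecutor

def say_hello(x):
    return x

with ThreadPoolExecutor(max_workers=4) as executor:
    # executor.map yields results in input order; exiting the with-block
    # shuts the pool down and waits, like pool.close() + pool.join()
    results = list(executor.map(say_hello, range(100)))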




