calibre recipes API documentation


class calibre.web.feeds.news.BasicNewsRecipe(options, log, progress_reporter)

This base class contains all the needed logic. By overriding progressively more functionality in this class, you can make progressively more customized and powerful recipes.

Methods

abort_article(msg=None)

Call this method from within any of the preprocess methods to abort the download of the current article. Useful to skip articles that contain inappropriate content, such as pure-video articles.
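
For example, a minimal sketch that skips video-only articles from inside preprocess_html (the 'video-player' class is a hypothetical marker for the target site):

def preprocess_html(self, soup):
    # Hypothetical marker for a video-only article on the target site
    if soup.find('div', attrs={'class': 'video-player'}) is not None:
        self.abort_article('Skipping video-only article')
    return soup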

abort_recipe_processing(msg)

Causes the recipe download system to abort the download of this recipe, displaying a simple feedback message to the user.

add_toc_thumbnail(article, src)

Call this method from populate_article_metadata with the src attribute of an <img> tag in the article that should be used as the thumbnail for this article in the Table of Contents. Currently, only the Kindle displays these thumbnails.

adeify_images(soup)

Call this method from postprocess_html() to make images compatible with the rendering of EPUB images in Adobe Digital Editions.

canonicalize_internal_url(url, is_link=True)

Return a set of canonical representations of url. The default implementation uses just the server hostname and path of the URL, ignoring any query parameters, fragments, etc. See urlparse.urlparse().

is_link
True: the URL came from an internal link in an HTML file
False: the URL is the one used to download the article itself
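
A sketch of an override that mirrors the documented default behaviour, comparing URLs by hostname and path only (the frozenset return type is an assumption):

from urlparse import urlparse  # on Python 3: from urllib.parse import urlparse

def canonicalize_internal_url(self, url, is_link=True):
    parts = urlparse(url)
    # Compare URLs by hostname and path only, ignoring query and fragment
    return frozenset([(parts.netloc, parts.path)])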

cleanup()

Called after all articles have been downloaded. Use it to do any cleanup, such as logging out of subscription sites.

clone_browser(br)

Clone the browser br. Cloned browsers are used for multi-threaded downloads, since mechanize is not thread safe. The default cloning routines should capture most browser customization, but if you do something exotic in your recipe, you should override this method in your recipe and clone manually.

Cloned browser instances use the same, thread-safe CookieJar by default, unless you have customized cookie handling.

default_cover(cover_file)

Create a default cover for recipes that have none. Override to supply your own default cover.

download()

Download and pre-process all articles from the feeds in this recipe. This method should be called only once on a particular recipe instance; calling it more than once leads to undefined behavior. Returns the path to index.html.

extract_readable_article(html, url)

Extract the main article content from html, clean it up, and return it as a 2-tuple (article_html, extracted_title). Based on the readability algorithm originally written by Arc90; see Recommended Reading [3].

get_article_url(article)

Override in a subclass to customize extraction of the URL that points to the content for each article. Return the article URL. It is called with article, an object representing a parsed article from a feed. See feedparser. By default it looks for the original link (for feeds syndicated via a service like feedburner or pheedo) and if found, returns that or else returns article.link.

get_browser(*args, **kwargs)

Return a browser instance used to fetch documents from the web. By default it returns a mechanize browser instance that supports cookies, ignores robots.txt, handles refreshes, and uses a Mozilla Firefox user agent.

If your recipe requires you to log in first, override this method in your subclass. For example, the following code is used in the New York Times recipe to log in for full access:

def get_browser(self):
    br = BasicNewsRecipe.get_browser(self)
    if self.username is not None and self.password is not None:
        br.open('https://www.nytimes.com/auth/login')
        br.select_form(name='login')
        br['USERID']   = self.username
        br['PASSWORD'] = self.password
        br.submit()
    return br

get_cover_url()

Return a URL to the cover image for this issue, or None. By default it returns the value of the member variable cover_url, which is normally None. To have your recipe download a cover for the e-book, override this method in your subclass, or set the member variable cover_url before this method is called.
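
A sketch that scrapes the day's cover from a hypothetical front page (the URL and the 'cover' class are assumptions):

def get_cover_url(self):
    soup = self.index_to_soup('http://example.com')  # hypothetical front page
    img = soup.find('img', attrs={'class': 'cover'})
    return img['src'] if img is not None else None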

get_extra_css()

By default returns self.extra_css. Override if you want to programmatically generate the extra_css.

get_feeds()

Return a list of RSS feeds to fetch for this profile. Each element of the list must be a 2-element tuple of the form (title, url). If title is None or an empty string, the title from the feed is used. This method is useful if your recipe needs to do some processing to figure out the list of feeds to download. If so, override in your subclass.
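
A sketch, using hypothetical section names and feed URLs, of building the feed list programmatically:

def get_feeds(self):
    # Hypothetical: one RSS feed per section
    sections = ['world', 'business', 'sport']
    return [(s.capitalize(), 'http://example.com/rss/%s.xml' % s)
            for s in sections]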

get_masthead_title()

Override in subclass to use something other than the recipe title

get_masthead_url()

Return a URL to the masthead image for this issue or None. By default it returns the value of the member self.masthead_url which is normally None. If you want your recipe to download a masthead for the e-book override this method in your subclass, or set the member variable self.masthead_url before this method is called. Masthead images are used in Kindle MOBI files.

get_obfuscated_article(url)

If you set articles_are_obfuscated this method is called with every article URL. It should return the path to a file on the filesystem that contains the article HTML. That file is processed by the recursive HTML fetching engine, so it can contain links to pages/images on the web.

This method is typically useful for sites that try to make it difficult to access article content automatically.

classmethod image_url_processor(baseurl, url)

Perform some processing on image urls (perhaps removing size restrictions for dynamically generated images, etc.) and return the processed URL.
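
A sketch that strips a hypothetical thumbnail suffix so full-size images are fetched:

import re  # needed for the substitution below

@classmethod
def image_url_processor(cls, baseurl, url):
    # Hypothetical: turn 'photo-640x480.jpg' into 'photo.jpg'
    return re.sub(r'-\d+x\d+(\.\w+)$', r'\1', url)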

index_to_soup(url_or_raw, raw=False, as_tree=False)

Convenience method that takes a URL to an index page and returns a BeautifulSoup of it.

url_or_raw: Either a URL, or the downloaded index page as a string.

is_link_wanted(url, tag)

Return True if the link should be followed or False otherwise. By default, raises NotImplementedError which causes the downloader to ignore it.

Parameters:

  • url – The URL to be followed
  • tag – The tag from which the URL was derived

parse_feeds()

Create a list of articles from the list of feeds returned by BasicNewsRecipe.get_feeds(). Return a list of Feed objects.

parse_index()

This method should be implemented in recipes that parse a website instead of feeds to generate a list of articles. Typical uses are for news sources that have a “Print Edition” webpage that lists all the articles in the current print edition. If this function is implemented, it will be used in preference to BasicNewsRecipe.parse_feeds().

It must return a list. Each element of the list must be a 2-element tuple of the form (‘feed title’, list of articles).

Each list of articles must contain dictionaries of the form:

{
 'title'       : article title,
 'url'         : URL of print version,
 'date'        : The publication date of the article as a string,
 'description' : A summary of the article,
 'content'     : The full article (can be an empty string). Obsolete, do not use; instead save the content to a temporary file and pass a file:///path/to/temp/file.html as the URL.
}

For an example, see the recipe for downloading The Atlantic. In addition, you can add ‘author’ for the author of the article.

If you want to abort processing for some reason and have calibre show the user a simple message instead of an error, call abort_recipe_processing().
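
A minimal sketch of a parse_index implementation, assuming a hypothetical print-edition page that groups <a class="headline"> links under <div class="section"> blocks (the URL and all selectors are assumptions):

def parse_index(self):
    soup = self.index_to_soup('http://example.com/print-edition')
    feeds = []
    for section in soup.findAll('div', attrs={'class': 'section'}):
        sec_title = self.tag_to_string(section.find('h2'))
        articles = []
        for a in section.findAll('a', attrs={'class': 'headline'}):
            articles.append({
                'title': self.tag_to_string(a),
                'url': a['href'],
                'date': '',
                'description': '',
            })
        if articles:
            feeds.append((sec_title, articles))
    return feeds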

populate_article_metadata(article, soup, first)

Called when each HTML page belonging to article is downloaded. Intended to be used to get article metadata like author/summary/etc. from the parsed HTML (soup).

Parameters:

  • article – An object of class calibre.web.feeds.Article. If you change the summary, remember to also change the text_summary
  • soup – Parsed HTML belonging to this article
  • first – True iff the parsed HTML is the first page of the article.
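
A sketch that fills in the summary from the first page and registers a TOC thumbnail (the 'standfirst' selector is an assumption about the site):

def populate_article_metadata(self, article, soup, first):
    if not first:
        return
    # Hypothetical selector for the article's lead paragraph
    summ = soup.find('p', attrs={'class': 'standfirst'})
    if summ is not None:
        article.summary = article.text_summary = self.tag_to_string(summ)
    img = soup.find('img')
    if img is not None and img.get('src'):
        self.add_toc_thumbnail(article, img['src'])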

postprocess_book(oeb, opts, log)

Run any needed post processing on the parsed downloaded e-book.

Parameters:

  • oeb – An OEBBook object
  • opts – Conversion options

postprocess_html(soup, first_fetch)

This method is called with the source of each downloaded HTML file, after it is parsed for links and images. It can be used to do arbitrarily powerful post-processing on the HTML. It should return soup after processing it.

Parameters:

  • soup – A BeautifulSoup instance containing the downloaded HTML.
  • first_fetch – True if this is the first page of an article.

preprocess_html(soup)

This method is called with the source of each downloaded HTML file, before it is parsed for links and images. It is called after the cleanup as specified by remove_tags etc. It can be used to do arbitrarily powerful pre-processing on the HTML. It should return soup after processing it.

soup: A BeautifulSoup instance containing the downloaded HTML.
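
A sketch that promotes lazy-loaded images to real ones before the link/image pass runs (the data-src attribute is an assumption about the target site):

def preprocess_html(self, soup):
    # Hypothetical: the site stores the real image URL in data-src
    for img in soup.findAll('img', attrs={'data-src': True}):
        img['src'] = img['data-src']
    return soup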

preprocess_image(img_data, image_url)

Perform some processing on downloaded image data. This is called on the raw data before any resizing is done. Must return the processed raw data. Return None to skip the image.
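
A minimal sketch that drops tiny images, which are usually tracking pixels or spacers (the 200-byte threshold is an arbitrary assumption):

def preprocess_image(self, img_data, image_url):
    # Anything under ~200 bytes is unlikely to be a real photo
    if len(img_data) < 200:
        return None  # skip this image entirely
    return img_data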

preprocess_raw_html(raw_html, url)

This method is called with the source of each downloaded HTML file, before it is parsed into an object tree. raw_html is a unicode string representing the raw HTML downloaded from the web. url is the URL from which the HTML was downloaded.

Note that this method acts before preprocess_regexps.

This method must return the processed raw_html as a unicode object.

classmethod print_version(url)

Take a url pointing to the webpage with article content and return the URL pointing to the print version of the article. By default does nothing. For example:

def print_version(self, url):
    return url + '?&pagewanted=print'

skip_ad_pages(soup)

This method is called with the source of each downloaded HTML file, before any of the cleanup attributes like remove_tags, keep_only_tags are applied. Note that preprocess_regexps will have already been applied. It is meant to allow the recipe to skip ad pages. If the soup represents an ad page, return the HTML of the real page. Otherwise return None.

soup: A BeautifulSoup instance containing the downloaded HTML.
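
A sketch, assuming interstitial ad pages that carry a meta-refresh redirect to the real article:

def skip_ad_pages(self, soup):
    meta = soup.find('meta', attrs={'http-equiv': 'refresh'})
    content = meta.get('content', '') if meta is not None else ''
    idx = content.lower().find('url=')
    if idx != -1:
        # Follow the redirect and return the raw HTML of the real page
        return self.index_to_soup(content[idx + 4:], raw=True)
    return None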

sort_index_by(index, weights)

Convenience method to sort the titles in index according to weights. index is sorted in place. Returns index.

index: A list of titles.

weights: A dictionary that maps titles to weights. If any titles in index are not in weights, they are assumed to have a weight of 0.
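
A small usage sketch; titles are the dictionary keys and the values their weights (lower weights sort first):

index = ['Sport', 'World', 'Business']
self.sort_index_by(index, {'World': 0, 'Business': 1, 'Sport': 2})
# index is now ['World', 'Business', 'Sport']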

classmethod tag_to_string(tag, use_alt=True, normalize_whitespace=True)

Convenience method to take a BeautifulSoup Tag and extract the text from it recursively, including any CDATA sections and alt tag attributes. Return a possibly empty unicode string.

use_alt: If True try to use the alt attribute for tags that don’t have any textual content

tag: BeautifulSoup Tag
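
For example, a typical use when parsing an index page:

h1 = soup.find('h1')
title = self.tag_to_string(h1)  # all text inside <h1>, whitespace-normalized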

Class variables

articles_are_obfuscated = False

The default is False, meaning articles are straightforward to fetch. Set to True, and implement get_obfuscated_article, for sites that try to make it difficult to fetch article content.

auto_cleanup = False

Automatically extract all text content from the downloaded article pages, using the readability algorithm by Arc90. Setting this to True means you do not have to worry about cleaning up the downloaded HTML manually (though manual cleanup is always superior).

auto_cleanup_keep = None

Specify elements that the auto-cleanup algorithm should never remove, as an XPath expression. For example:

auto_cleanup_keep = '//div[@id="article-image"]'
    # keeps all <div> tags with id="article-image"
auto_cleanup_keep = '//*[@class="important"]'
    # keeps all elements with class="important"
auto_cleanup_keep = '//div[@id="article-image"]|//span[@class="important"]'
    # keeps all <div id="article-image"> and <span class="important"> elements

center_navbar = True

Whether the navigation bars in the generated e-book are center aligned. The default is True (centered); set to False for left alignment.

compress_news_images = False

If set to False, all scaling and compression parameters are ignored and images are passed through unmodified. If True and the other compression parameters are left at their default values, JPEG images are scaled to fit the screen dimensions set by the output profile and compressed to at most (w * h)/16 bytes, where w x h are the scaled image dimensions.

compress_news_images_auto_size = 16

The factor used when automatically compressing JPEG images. If set to None, automatic compression is disabled. Otherwise, images are reduced, by lowering the quality level, to at most (w * h)/compress_news_images_auto_size bytes if possible, where w x h are the image dimensions in pixels. Since the minimum JPEG quality is 5/100, it is possible this limit will not be met. This parameter can be overridden by compress_news_images_max_size, which sets a fixed maximum size for images.

Note that if you set scale_news_images_to_device to True, the image is first scaled, and then its quality is lowered until its size is less than (w * h)/factor, where w and h are now the scaled image dimensions. In other words, this compression happens after scaling.

compress_news_images_max_size = None

Set JPEG quality so that images do not exceed the given size (in KB). If set, this parameter overrides the automatic compression specified by compress_news_images_auto_size. Since the minimum JPEG quality is 5/100, it is possible this limit will not be met.

conversion_options = {}

Recipe-specific options to control the conversion of the downloaded content into an e-book. These override any user or plugin specified values, so use them only if absolutely necessary. For example:

conversion_options = {
  'base_font_size'   : 16,
  'tags'             : 'mytag1,mytag2',
  'title'            : 'My Title',
  'linearize_tables' : True,
}

cover_margins = (0, 0, '#ffffff')

By default, the cover image returned by get_cover_url() is used as the cover of the book. Overriding this variable in your recipe tells calibre to render the downloaded cover into a frame whose width and height are expressed as a percentage of the downloaded cover. cover_margins = (10, 15, '#ffffff') pads the cover with a white margin of 10 pixels on the left and right and 15 pixels on the top and bottom. For some reason, the color name white does not always work on Windows; use #ffffff instead.

delay = 0

The delay between consecutive downloads, in seconds. May be a floating point number for finer control.

description = u''

A couple of lines describing the content this recipe downloads. Used primarily in the GUI, which presents a list of recipes with their descriptions.

encoding = None

An override encoding for sites that declare an incorrect character encoding, commonly latin1 when the content is really cp1252. If None, the encoding is auto-detected. If it is a callable, it is called with two arguments: the recipe object and the source to be decoded; it must return the decoded source.

extra_css = None

Specify any extra CSS that should be added to the downloaded HTML files. It is inserted into <style> tags just before the closing </head> tag, and therefore overrides all CSS except styles declared directly on individual HTML tags via the style attribute. For example:

extra_css = '.heading { font: serif x-large }'

feeds = None

The feeds to download. Can be either [url1, url2, ...] or [('title1', url1), ('title2', url2), ...].
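
For example (the URLs are placeholders):

feeds = [
    ('World',    'http://example.com/rss/world.xml'),
    ('Business', 'http://example.com/rss/business.xml'),
]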

filter_regexps = []

A list of regular expressions, the opposite of match_regexps: it determines which links to ignore. If empty, it is ignored. Only used if is_link_wanted is not implemented. For example:

filter_regexps = [r'ads\.doubleclick\.net']

This will filter out all URLs that contain ads.doubleclick.net. Note: only one of match_regexps or filter_regexps may be set, never both.

ignore_duplicate_articles = None

Ignore duplicate articles that appear more than once. A duplicate article is one with the same title and/or URL.

To ignore duplicates by title only: ignore_duplicate_articles = {'title'}

To ignore duplicates by URL only: ignore_duplicate_articles = {'url'}

To match on both title and URL: ignore_duplicate_articles = {'title', 'url'}

keep_only_tags = []

Keep only the specified tags and their children. For the format for specifying a tag, see remove_tags. If this list is not empty, the <body> tag is emptied and refilled with the tags that match the entries in this list. For example:

keep_only_tags = [dict(id=['content', 'heading'])]

This will keep only tags that have an id attribute of 'content' or 'heading'.

language = 'und'

The language the content is in. Must be an ISO-639 code, either 2 or 3 characters long.

masthead_url = None

By default, calibre uses a default image for the masthead (the masthead is only used on the Kindle). Override this in your recipe to provide a URL pointing to an image to use as the masthead instead.

match_regexps = []

A list of regular expressions that determines which links to follow. If empty, it is ignored. Only used if is_link_wanted is not implemented. For example:

match_regexps = [r'page=[0-9]+']

This will match all URLs that contain "page=some number". Note: only one of match_regexps or filter_regexps may be set, never both.

max_articles_per_feed = 100

The maximum number of articles to download from each feed; mainly useful for feeds that do not carry article dates. The default is 100. For feeds with dated articles, use oldest_article instead.

needs_subscription = False

Whether a login is required for the download. If True, the GUI asks the user for a username and password. If set to 'optional', the username and password become optional.

no_stylesheets = False

Whether to download and use stylesheets from the original web pages. By default they are downloaded and used; set to True to disable them.

oldest_article = 7.0

The oldest article to download from this news source, in days. The default is 7 days.

preprocess_regexps = []

A list of regexp substitution rules to run on the downloaded HTML. Each element of the list is a two-element tuple: the first element is a compiled regular expression, and the second a callable that takes a single match object and returns a string to replace the match. For example:

preprocess_regexps = [
    (re.compile(r'<!--Article ends here-->.*</body>', re.DOTALL|re.IGNORECASE),
     lambda match: '</body>'),
]

This removes everything from <!--Article ends here--> to </body>.

publication_type = 'unknown'

The publication type of the fetched content: newspaper, magazine, or blog. If set to None, no publication type metadata is written to the OPF file.

recipe_disabled = None

If set to a non-empty string, this recipe is disabled and the string is shown to the user as the reason.

recursions = 0

The number of levels of links to follow on article web pages. The default is 0: links are not followed.

remove_attributes = []

A list of attributes to remove from all tags. For example:

remove_attributes = ['style', 'font']

This removes the 'style' and 'font' attributes from every tag.

remove_empty_feeds = False

If True, feeds that have no articles are removed from the output. This option has no effect if parse_index is overridden in the subclass. It is meant only for recipes that return a list of feeds using the feeds variable or by overriding get_feeds(). It is also used if you set the ignore_duplicate_articles option.

remove_javascript = True

Whether to strip JavaScript from the downloaded pages. The default is to strip it.

remove_tags = []

A list of tags to be removed from the downloaded HTML. Each tag is specified as a dictionary of the form:

{
 name      : 'tag name',    # e.g. 'div'
 attrs     : a dictionary,  # e.g. {'class': 'advertisement'}
}

All keys are optional. For a full explanation of the search criteria, see the Beautiful Soup documentation. A common example:

remove_tags = [dict(name='div', attrs={'class':'advert'})]

This will remove all <div class="advert"> tags and their children from the downloaded HTML.

remove_tags_after = None

Remove all tags that occur after the first occurrence of the specified tag in the downloaded HTML, together with their children. The format for specifying a tag is the same as for remove_tags. For example:

remove_tags_after = [dict(id='content')]

This will remove everything after the first tag with id='content'; the tag itself is not removed.

remove_tags_before = None

Remove all tags that occur before the first occurrence of the specified tag in the downloaded HTML, together with their children. The format for specifying a tag is the same as for remove_tags. For example:

remove_tags_before = [dict(id='content')]

This will remove everything before the first tag with id='content'; the tag itself is not removed.

requires_version = (0, 6, 0)

The minimum calibre version needed to use this recipe. The default is 0.6.0.

resolve_internal_links = False

If set to True, links in downloaded articles that point to other downloaded articles are changed to point to the downloaded copy of the article rather than its original web URL. If you set this to True, you may also need to implement canonicalize_internal_url to work with the URL scheme of your particular website.

reverse_article_order = False

If True, the order of articles in each feed is reversed.

scale_news_images = None

The maximum dimensions (w, h) to scale images to. If scale_news_images_to_device is True, this is set to the device screen dimensions given by the output profile, unless no profile is set, in which case it is left at whatever value it has been assigned (default None).

scale_news_images_to_device = True

Rescale images to fit the screen dimensions set by the output profile. Set to False to ignore the output device's screen size.

simultaneous_downloads = 5

The number of simultaneous downloads. Set to 1 if the server imposes limits. Automatically reduced to 1 when delay > 0. The default is 5.

summary_length = 500

The maximum number of characters in the short description. The default is 500 characters.

template_css = u'''
        .article_date {
            color: gray; font-family: monospace;
        }

        .article_description {
            text-indent: 0pt;
        }

        a.article {
            font-weight: bold; text-align: left;
        }

        a.feed {
            font-weight: bold;
        }

        .calibre_navbar {
            font-family: monospace;
        }
'''

This CSS is used as a template for styling the navigation bars and the Table of Contents only. Rather than overriding this variable, define extra_css in your recipe to customize the look and feel of the downloaded content.

timefmt = ' [%a, %d %b %Y]'

The format string for the date shown on the first page. The default is weekday, day, month, year.

timeout = 120.0

The maximum time, in seconds, to wait when fetching files from the server before giving up. The default is 120 seconds.

title = u'Unknown News Source'

The title to use for the e-book.

use_embedded_content = None

Normally, whether a feed embeds the full content of its articles is guessed from the length of the embedded content. This variable takes one of three values: None (the default) performs the guess; True assumes the feed always embeds the full article content; False assumes it never does.


Recommended Reading

[1]. The calibre recipes API (English original)
[2]. Source code of BasicNewsRecipe
[3]. The Arc90 readability algorithm
[4]. Documentation of the 48 class variables
