Elasticsearch Reference 5.5 Translation (Part 7)


Breaking changes in 5.0

This section discusses the changes that you need to be aware of when migrating your application to Elasticsearch 5.0.

Migration Plugin

The elasticsearch-migration plugin (compatible with Elasticsearch 2.3.0 and above) will help you to find issues that need to be addressed when upgrading to Elasticsearch 5.0.

Indices created before 5.0

Elasticsearch 5.0 can read indices created in version 2.0 or above. An Elasticsearch 5.0 node will not start in the presence of indices created in a version of Elasticsearch before 2.0.

Important
Reindex indices from Elasticsearch 1.x or before
Indices created in Elasticsearch 1.x or before will need to be reindexed with Elasticsearch 2.x or 5.x in order to be readable by Elasticsearch 5.x. It is not sufficient to use the upgrade API. See Reindex to upgrade for more details.

The first time Elasticsearch 5.0 starts, it will automatically rename index folders to use the index UUID instead of the index name. If you are using shadow replicas with shared data folders, first start a single node with access to all data folders, and let it rename all index folders before starting other nodes in the cluster.

Also see:

  • Search and Query DSL changes
  • Mapping changes
  • Percolator changes
  • Suggester changes
  • Index APIs changes
  • Document API changes
  • Settings changes
  • Allocation changes
  • HTTP changes
  • REST API changes
  • CAT API changes
  • Java API changes
  • Packaging
  • Plugin changes
  • Filesystem related changes
  • Aggregation changes
  • Script related changes

Search and Query DSL changes

search_type

search_type=count removed

The count search type was deprecated since version 2.0.0 and is now removed. In order to get the same benefits, you just need to set the value of the size parameter to 0.

For instance, the following request:

GET /my_index/_search?search_type=count
{
  "aggs": {
    "my_terms": {
      "terms": {
        "field": "foo"
      }
    }
  }
}

can be replaced with:

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "my_terms": {
      "terms": {
        "field": "foo"
      }
    }
  }
}

search_type=scan removed

The scan search type was deprecated since version 2.1.0 and is now removed. All benefits from this search type can now be achieved by doing a scroll request that sorts documents in _doc order, for instance:

GET /my_index/_search?scroll=2m
{
  "sort": [
    "_doc"
  ]
}

Scroll requests sorted by _doc have been optimized to more efficiently resume from where the previous request stopped, so this will have the same performance characteristics as the former scan search type.

Search shard limit

In 5.0, Elasticsearch rejects requests that would query more than 1000 shard copies (primaries or replicas). The reason is that such large numbers of shards make the job of the coordinating node very CPU and memory intensive. It is usually a better idea to organize data in such a way that there are fewer larger shards. In case you would like to bypass this limit, which is discouraged, you can update the action.search.shard_count.limit cluster setting to a greater value.
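
For example, the limit can be raised with a dynamic cluster settings update. A minimal sketch (the value 2000 is illustrative, and raising the limit remains discouraged):

PUT /_cluster/settings
{
  "transient": {
    "action.search.shard_count.limit": 2000
  }
}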

fields parameter

The fields parameter has been replaced by stored_fields. The stored_fields parameter will only return stored fields; it will no longer extract values from the _source.
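
A minimal sketch of the replacement (my_index and my_field are illustrative, and my_field is assumed to be mapped with "store": true):

GET /my_index/_search
{
  "stored_fields": [ "my_field" ],
  "query": {
    "match_all": {}
  }
}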

fielddata_fields parameter

The fielddata_fields parameter has been deprecated, use the docvalue_fields parameter instead.
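
A sketch of the replacement (the index and field names are illustrative; docvalue_fields reads values from doc values rather than fielddata):

GET /my_index/_search
{
  "docvalue_fields": [ "my_field" ],
  "query": {
    "match_all": {}
  }
}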

search-exists API removed

The search exists api has been removed in favour of using the search api with size set to 0 and terminate_after set to 1.
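
For example, an existence check can be expressed like this (a sketch; the index, field and value are illustrative). The response reports hits.total and a terminated_early flag instead of the old exists boolean:

GET /my_index/_search?size=0&terminate_after=1
{
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}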

Deprecated queries removed

The following deprecated queries have been removed:

filtered

Use bool query instead, which supports filter clauses too.
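
For example, a query that was previously written with filtered (a sketch; the field names are illustrative):

{
  "filtered": {
    "query": { "match": { "text": "quick brown fox" } },
    "filter": { "term": { "status": "published" } }
  }
}

can be rewritten as:

{
  "bool": {
    "must": { "match": { "text": "quick brown fox" } },
    "filter": { "term": { "status": "published" } }
  }
}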

and

Use must clauses in a bool query instead.

or

Use should clauses in a bool query instead.

missing

Use a negated exists query instead. (Also removed _missing_ from the query_string query.)
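
A negated exists query looks like this (a sketch; the field name is illustrative):

GET /my_index/_search
{
  "query": {
    "bool": {
      "must_not": {
        "exists": { "field": "user" }
      }
    }
  }
}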

limit

Use the terminate_after parameter instead.

fquery

Is obsolete after filters and queries have been merged.

query

Is obsolete after filters and queries have been merged.

query_binary

Was undocumented and has been removed.

filter_binary

Was undocumented and has been removed.

Changes to queries

  • Unsupported queries such as term queries on geo_point fields will now fail rather than returning no hits.
  • Removed support for fuzzy queries on numeric, date and ip fields, use range queries instead.
  • Removed support for range and prefix queries on _uid and _id fields.
  • Querying an unindexed field will now fail rather than returning no hits.
  • Removed support for the deprecated min_similarity parameter in fuzzy query, in favour of fuzziness.
  • Removed support for the deprecated fuzzy_min_sim parameter in query_string query, in favour of fuzziness.
  • Removed support for the deprecated edit_distance parameter in completion suggester, in favour of fuzziness.
  • Removed support for the deprecated filter and no_match_filter fields in indices query, in favour of query and no_match_query.
  • Removed support for the deprecated filter fields in nested query, in favour of query.
  • Removed support for the deprecated minimum_should_match and disable_coord in terms query, use bool query instead. Also removed support for the deprecated execution parameter.
  • Removed support for the top level filter element in function_score query, replaced by query.
  • The collect_payloads parameter of the span_near query has been deprecated. Payloads will be loaded when needed.
  • The score_type parameter to the nested and has_child queries has been removed in favour of score_mode. The score_mode parameter to has_parent has been deprecated in favour of the score boolean parameter. Also, the total score mode has been removed in favour of the sum mode.
  • When the max_children parameter was set to 0 on the has_child query then there was no upper limit on how many child documents were allowed to match. Now, 0 really means that zero child documents are allowed. If no upper limit is needed then the max_children parameter shouldn’t be specified at all.
  • The exists query will now fail if the _field_names field is disabled.
  • The multi_match query will fail if fuzziness is used for cross_fields, phrase or phrase_prefix type. This parameter was undocumented and silently ignored before for these types of multi_match.
  • Deprecated support for the coerce, normalize, ignore_malformed parameters in GeoPolygonQuery. Use parameter validation_method instead.
  • Deprecated support for the coerce, normalize, ignore_malformed parameters in GeoDistanceQuery. Use parameter validation_method instead.
  • Deprecated support for the coerce, normalize, ignore_malformed parameters in GeoBoundingBoxQuery. Use parameter validation_method instead.
  • The geo_distance_range query is deprecated and should be replaced by either the geo_distance bucket aggregation, or geo_distance sort.
  • For geo_distance query, aggregation, and sort the sloppy_arc option for the distance_type parameter has been deprecated.

Top level filter parameter

Removed support for the deprecated top level filter in the search api, replaced by post_filter.
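
A sketch of the replacement (field names are illustrative; post_filter is applied after aggregations are computed):

GET /my_index/_search
{
  "query": { "match": { "title": "elasticsearch" } },
  "post_filter": { "term": { "status": "published" } }
}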

Highlighters

Removed support for multiple highlighter names, the only supported ones are: plain, fvh and postings.

Term vectors API

The term vectors APIs no longer persist unmapped fields in the mappings.

The dfs parameter to the term vectors API has been removed completely. Term vectors don’t support distributed document frequencies anymore.

Sort

The reverse parameter has been removed, in favour of explicitly specifying the sort order with the order option.
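
For example (a sketch; the date field is illustrative):

GET /my_index/_search
{
  "sort": [
    { "date": { "order": "desc" } }
  ]
}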

The coerce and ignore_malformed parameters were deprecated in favour of validation_method.

Inner hits

  • Top level inner hits syntax has been removed. Inner hits can now only be specified as part of the nested, has_child and has_parent queries. Use cases previously only possible with top level inner hits can now be done with inner hits defined inside the query dsl.
  • Source filtering for inner hits inside nested queries requires full field names instead of relative field names. This is now consistent for source filtering on other places in the search API.
  • Nested inner hits will now no longer include _index, _type and _id keys. For nested inner hits these values are always the same as the _index, _type and _id keys of the root search hit.
  • Parent/child inner hits will now no longer include the _index key. For parent/child inner hits the _index key is always the same as the parent search hit.

Query Profiler

In the response for profiling queries, the query_type has been renamed to type and lucene has been renamed to description. These changes have been made so the response format is more friendly to supporting other types of profiling in the future.

Search preferences

The search preference _only_node has been removed. The same behavior can be achieved by using _only_nodes and specifying a single node ID.

The search preference _prefer_node has been superseded by _prefer_nodes. By specifying a single node, _prefer_nodes provides the same functionality as _prefer_node but also supports specifying multiple nodes.
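
For example (a sketch; the node ID is illustrative):

GET /my_index/_search?preference=_only_nodes:xyz-node-id
{
  "query": { "match_all": {} }
}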

The search preference _shards accepts a secondary preference, for example _primary to specify the primary copy of the specified shards. The separator previously used to separate the _shards portion of the parameter from the secondary preference was ;. However, this is also an acceptable separator between query string parameters which means that unless the ; was escaped, the secondary preference was never observed. The separator has been changed to | and does not need to be escaped.
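
For example, targeting the primary copies of shards 0 and 1 with the new separator (a sketch):

GET /my_index/_search?preference=_shards:0,1|_primary
{
  "query": { "match_all": {} }
}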

Scoring changes

Default similarity

The default similarity has been changed to BM25.

DF formula

Document frequency (which is for instance used to compute inverse document frequency - IDF) is now based on the number of documents that have a value for the considered field rather than the total number of documents in the index. This change affects most similarities. See LUCENE-6711 for more information.

explain API

The fields field has been renamed to stored_fields.

Mapping changes

string fields replaced by text/keyword fields

The string field datatype has been replaced by the text field for full text analyzed content, and the keyword field for not-analyzed exact string values. For backwards compatibility purposes, during the 5.x series:

  • string fields on pre-5.0 indices will function as before.
  • New string fields can be added to pre-5.0 indices as before.
  • text and keyword fields can also be added to pre-5.0 indices.
  • When adding a string field to a new index, the field mapping will be rewritten as a text or keyword field if possible, otherwise an exception will be thrown. Certain configurations that were possible with string fields are no longer possible with text/keyword fields such as enabling term_vectors on a not-analyzed keyword field.

Default string mappings

String mappings now have the following default mappings:

{
  "type": "text",
  "fields": {
    "keyword": {
      "type": "keyword",
      "ignore_above": 256
    }
  }
}

This makes it possible to perform full-text search on the original field name and to sort and run aggregations on the sub keyword field.

Numeric fields

Numeric fields are now indexed with a completely different data-structure, called BKD tree, that is expected to require less disk space and be faster for range queries than the previous way that numerics were indexed.

Term queries will return constant scores now, while they used to return higher scores for rare terms due to the contribution of the document frequency, which this new BKD structure does not record. If scoring is needed, then it is advised to map the numeric fields as keywords too.

Note that this keyword mapping do not need to replace the numeric mapping. For instance if you need both sorting and scoring on your numeric field, you could map it both as a number and a keyword using fields:

PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_number": {
          "type": "long",
          "fields": {
            "keyword": {
              "type": "keyword"
            }
          }
        }
      }
    }
  }
}

Also the precision_step parameter is now irrelevant and will be rejected on indices that are created on or after 5.0.

geo_point fields

Like Numeric fields the Geo point field now uses the new BKD tree structure. Since this structure is fundamentally designed for multi-dimension spatial data, the following field parameters are no longer needed or supported: geohash, geohash_prefix, geohash_precision, lat_lon. Geohashes are still supported from an API perspective, and can still be accessed using the .geohash field extension, but they are no longer used to index geo point data.

_timestamp and _ttl

The _timestamp and _ttl fields were deprecated and are now removed. As a replacement for _timestamp, you should populate a regular date field with the current timestamp on the application side. For _ttl, you should either use time-based indices when applicable, or cron a delete-by-query with a range query on a timestamp field.

index property

On all field datatypes (except for the deprecated string field), the index property now only accepts true/false instead of not_analyzed/no. The string field still accepts analyzed/not_analyzed/no.
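
A minimal mapping sketch (the index, type and field names are illustrative):

PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "session_token": {
          "type": "keyword",
          "index": false
        }
      }
    }
  }
}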

Doc values on unindexed fields

Previously, setting a field to index:no would also disable doc-values. Now, doc-values are enabled by default on all types but text and binary, regardless of the value of the index property.

Floating points use float instead of double

When dynamically mapping a field containing a floating point number, the field now defaults to using float instead of double. The reasoning is that floats should be more than enough for most cases but would decrease storage requirements significantly.

norms

norms now take a boolean instead of an object. This boolean is the replacement for norms.enabled. There is no replacement for norms.loading since eager loading of norms is not useful anymore now that norms are disk-based.

fielddata.format

Setting fielddata.format: doc_values in the mappings used to implicitly enable doc-values on a field. This no longer works: the only way to enable or disable doc-values is by using the doc_values property of mappings.

fielddata.filter.regex

Regex filters are not supported anymore and will be dropped on upgrade.

Source-transform removed

The source transform feature has been removed. Instead, use an ingest pipeline.

Field mapping limits

To prevent mapping explosions, the following limits are applied to indices created in 5.x:

  • The maximum number of fields in an index is limited to 1000.
  • The maximum depth for a field (1 plus the number of object or nested parents) is limited to 20.
  • The maximum number of nested fields in an index is limited to 50.

See Settings to prevent mappings explosion for more.

_parent field no longer indexed

The join between parent and child documents no longer relies on indexed fields and therefore from 5.0.0 onwards the _parent field is no longer indexed. In order to find documents that refer to a specific parent id, the new parent_id query can be used. The GET response and hits inside the search response still include the parent id under the _parent key.
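
A sketch of the new query (the type name and ID are illustrative):

GET /my_index/_search
{
  "query": {
    "parent_id": {
      "type": "my_child_type",
      "id": "1"
    }
  }
}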

Source format option

The _source mapping no longer supports the format option. It will still be accepted for indices created before the upgrade to 5.0 for backwards compatibility, but it will have no effect. Indices created on or after 5.0 will reject this option.

Object notation

Core types no longer support the object notation, which was used to provide per document boosts as follows:

{
  "value": "field_value",
  "boost": 42
}

Boost accuracy for queries on _all

Per-field boosts on the _all are now compressed into a single byte instead of the 4 bytes used previously. While this will make the index much more space-efficient, it also means that index time boosts will be less accurately encoded.

_ttl and _timestamp cannot be created

You can no longer create indexes with _ttl or _timestamp enabled. Indexes with them enabled created before 5.0 will continue to work.

You should replace _timestamp in new indexes by adding a field to your source, either in the application producing the data or with an ingest pipeline like this one:

PUT _ingest/pipeline/timestamp
{
  "description" : "Adds a timestamp field at the current time",
  "processors" : [ {
    "set" : {
      "field": "timestamp",
      "value": "{{_ingest.timestamp}}"
    }
  } ]
}

PUT newindex/type/1?pipeline=timestamp
{
  "example": "data"
}

GET newindex/type/1

Which produces

{
  "_source": {
    "example": "data",
    "timestamp": "2016-06-21T18:48:55.560+0000"
  },
  ...
}

If you have an old index created with 2.x that has _timestamp enabled then you can migrate it to a new index with a timestamp field in the source with reindex:

POST _reindex
{
  "source": {
    "index": "oldindex"
  },
  "dest": {
    "index": "newindex"
  },
  "script": {
    "lang": "painless",
    "inline": "ctx._source.timestamp = ctx._timestamp; ctx._timestamp = null"
  }
}

You can replace _ttl with time based index names (preferred) or by adding a cron job which runs a delete-by-query on a timestamp field in the source document. If you had documents like this:

POST index/type/_bulk
{"index":{"_id":1}}
{"example": "data", "timestamp": "2016-06-21T18:48:55.560+0000" }
{"index":{"_id":2}}
{"example": "data", "timestamp": "2016-04-21T18:48:55.560+0000" }

Then you could delete all of the documents from before May 1st with:

POST index/type/_delete_by_query
{
  "query": {
    "range" : {
      "timestamp" : {
        "lt" : "2016-05-01"
      }
    }
  }
}

Important
Keep in mind that deleting documents from an index is very expensive compared to deleting whole indexes. That is why time based indexes are recommended over this sort of thing and why _ttl was deprecated in the first place.

Blank field names are not supported

Blank field names in mappings is not allowed after 5.0.

Percolator changes

Percolator is near-real time

Previously percolators were activated in real-time, i.e. as soon as they were indexed. Now, changes to the percolate query are visible in near-real time, as soon as the index has been refreshed. This change was required because, in indices created from 5.0 onwards, the terms used in a percolator query are automatically indexed to allow for more efficient query selection during percolation.

Percolate and multi percolate APIs

Percolator and multi percolate APIs have been deprecated and will be removed in the next major release. These APIs have been replaced by the percolate query that can be used in the search and multi search APIs.

Percolator field mapping

The .percolator type can no longer be used to index percolator queries.

Instead a percolator field type must be configured prior to indexing percolator queries.

Indices with a .percolator type created on a version before 5.0.0 can still be used, but new indices no longer accept the .percolator type.

However, it is strongly recommended to reindex any indices containing percolator queries that were created prior to upgrading to Elasticsearch 5. By doing so, the percolate query can make use of the terms that the percolator field type extracted from the percolator queries, and potentially execute many times faster.

Percolate document mapping

The percolate query no longer modifies the mappings. Previously, the percolate API could be used to dynamically introduce new fields to the mappings, based on the fields in the document being percolated. This no longer works, because these unmapped fields are not persisted in the mapping.

Percolator documents returned by search

Documents with the .percolate type were previously excluded from the search response, unless the .percolate type was specified explicitly in the search request. Now, percolator documents are treated in the same way as any other document and are returned by search requests.

Percolating existing document

When percolating an existing document, specifying a document as source in the percolate query is no longer allowed. Previously the percolate API allowed this and simply ignored the existing document.

Percolate Stats

The percolate stats have been removed. This is because the percolator no longer caches the percolator queries.

Percolator queries containing range queries with now ranges

The percolator no longer accepts percolator queries containing range queries with ranges that are based on current time (using now).

Percolator queries containing scripts

Percolator queries that contain scripts (For example: script query or a function_score query script function) that have no explicit language specified will use the Painless scripting language from version 5.0 and up.

Scripts with no explicit language set in percolator queries stored in indices created prior to version 5.0 will use the language that has been configured in the script.legacy.default_lang setting. This setting defaults to the Groovy scripting language, which was the default for versions prior to 5.0. If your default scripting language was different then set the script.legacy.default_lang setting to the language you used before.

In order to make use of the new percolator field type all percolator queries should be reindexed into a new index. When reindexing percolator queries with scripts that have no explicit language defined into a new index, one of the following two things should be done in order to make the scripts work:

  • (Recommended approach) While reindexing the percolator documents, migrate the scripts to the Painless scripting language.
  • Or add a lang parameter on the script and set it to the language these scripts were written in.

Java client

The percolator is no longer part of the core elasticsearch dependency. It has moved to the percolator module. Therefore, when using the percolator feature from the Java client, the new percolator module should also be on the classpath. Also, the transport client should load the percolator module as a plugin:

TransportClient transportClient = TransportClient.builder()
        .settings(Settings.builder().put("node.name", "node"))
        .addPlugin(PercolatorPlugin.class)
        .build();
transportClient.addTransportAddress(
        new InetSocketTransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300)));

The percolator and multi percolate related methods from the Client interface have been removed. These APIs have been deprecated and it is recommended to use the percolate query in either the search or multi search APIs. However the percolate and multi percolate APIs can still be used from the Java client.

Using percolate request:

PercolateRequest request = new PercolateRequest();
// set stuff and then execute:
PercolateResponse response = transportClient.execute(PercolateAction.INSTANCE, request).actionGet();

Using percolate request builder:

PercolateRequestBuilder builder = new PercolateRequestBuilder(transportClient, PercolateAction.INSTANCE);
// set stuff and then execute:
PercolateResponse response = builder.get();

Using multi percolate request:

MultiPercolateRequest request = new MultiPercolateRequest();
// set stuff and then execute:
MultiPercolateResponse response = transportClient.execute(MultiPercolateAction.INSTANCE, request).get();

Using multi percolate request builder:

MultiPercolateRequestBuilder builder = new MultiPercolateRequestBuilder(transportClient, MultiPercolateAction.INSTANCE);
// set stuff and then execute:
MultiPercolateResponse response = builder.get();

Suggester changes

The completion suggester has undergone a complete rewrite. This means that the syntax and data structure for fields of type completion have changed, as have the syntax and response of completion suggester requests. See completion suggester for details.

For indices created before Elasticsearch 5.0.0, completion fields and the completion suggester will continue to work as they did in Elasticsearch 2.x. However, it is not possible to run a completion suggester query across indices created in 2.x and indices created in 5.x.

It is strongly recommended to reindex indices containing 2.x completion fields in 5.x to take advantage of the new features listed below.

Note
You will need to change the structure of the completion field values when reindexing.

Completion suggester is near-real time

Previously, deleted suggestions could be included in results even after refreshing an index. Now, deletions are visible in near-real time, i.e. as soon as the index has been refreshed. This applies to suggestion entries for both context and completion suggesters.

Completion suggester is document-oriented

Suggestions are aware of the document they belong to. Now, associated documents (_source) are returned as part of completion suggestions.

Important
_source meta-field must be enabled, which is the default behavior, to enable returning _source with suggestions.
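
A sketch of a 5.x completion request (the index and field names are illustrative); matching suggestions come back with their _source:

POST music/_search
{
  "suggest": {
    "song-suggest": {
      "prefix": "nir",
      "completion": {
        "field": "suggest"
      }
    }
  }
}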

Previously, context and completion suggesters supported an index-time payloads option, which was used to store and return metadata with suggestions. Now metadata can be stored as part of the same document as the suggestion for retrieval at query-time. The support for index-time payloads has been removed to avoid bloating the in-memory index with suggestion metadata.

Simpler completion indexing

As suggestions are document-oriented, suggestion metadata (e.g. output) should now be specified as a field in the document. The support for specifying output when indexing suggestion entries has been removed. Now suggestion result entry’s text is always the un-analyzed value of the suggestion’s input (same as not specifying output while indexing suggestions in pre-5.0 indices).

Completion mapping with multiple contexts

The context option in completion field mapping is now an array to support multiple named contexts per completion field. Note that this is sugar for indexing the same suggestions under different names with different contexts. The default option for a named context has been removed. Now querying with no context against a context-enabled completion field yields results from all indexed suggestions. Note that performance for the match-all-context query degrades with the number of unique context values for a given completion field.

Completion suggestion with multiple context filtering

Previously the context option in a suggest request was used for filtering suggestions by context value. Now, the option has been renamed to contexts to specify multiple named context filters. Note that this is not supported by pre-5.0 indices. Following is the contexts snippet for a suggest query filtered by both color and location contexts:

"contexts": {
  "color": [ {...} ],
  "location": [ {...} ]
}

Index APIs changes

Closing / deleting indices while running snapshot

In previous versions of Elasticsearch, closing or deleting an index during a full snapshot would make the snapshot fail. In 5.0, the close/delete index request will fail instead. The behavior for partial snapshots remains unchanged: Closing or deleting an index during a partial snapshot is still possible. The snapshot result is then marked as partial.

Warmers

Thanks to several changes like doc values by default and disk-based norms, warmers are no longer useful. As a consequence, warmers and the warmer API have been removed: it is no longer possible to register queries that will run before a new IndexSearcher is published.

Don’t worry if you have warmers defined on your indices, they will simply be ignored when upgrading to 5.0.

System CPU stats

The recent CPU usage (as a percent) has been added to the OS stats reported under the node stats API and the cat nodes API. The breaking change here is that there is a new object in the os object in the node stats response. This object is called cpu and includes percent and load_average as fields. This moves the load_average field that was previously a top-level field in the os object to the cpu object. The format of the load_average field has changed to an object with fields 1m, 5m, and 15m representing the one-minute, five-minute and fifteen-minute loads respectively. If any of these fields are not present, it indicates that the corresponding value is not available.

In the cat nodes API response, the cpu field is output by default. The previous load field has been removed and is replaced by load_1m, load_5m, and load_15m which represent the one-minute, five-minute and fifteen-minute loads respectively. The field will be null if the corresponding value is not available.

Finally, the API for org.elasticsearch.monitor.os.OsStats has changed. The getLoadAverage method has been removed. The value for this can now be obtained from OsStats.Cpu#getLoadAverage but it is no longer a double and is instead an object encapsulating the one-minute, five-minute and fifteen-minute load averages. Additionally, the recent CPU usage can be obtained from OsStats.Cpu#getPercent.

Suggest stats

Suggest stats exposed through suggest in indices stats has been merged with search stats. suggest stats is exposed as part of search stats.

Creating indices starting with - or +

Elasticsearch no longer allows indices to be created started with - or +, so that the multi-index matching and expansion is not confused. It was previously possible (but a really bad idea) to create indices starting with a hyphen or plus sign. Any index already existing with these preceding characters will continue to work normally.

Aliases API

The /_aliases API no longer supports indexRouting and index-routing, only index_routing. It also no longer support searchRouting and search-routing, only search_routing. These were removed because they were untested and we prefer there to be only one (obvious) way to do things like this.

OpType Create without an ID

As of 5.0 indexing a document with op_type=create without specifying an ID is not supported anymore.
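
For example, this request is still valid because the ID is given explicitly; the same request without an ID would now be rejected (a sketch):

PUT my_index/my_type/1?op_type=create
{
  "example": "data"
}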

Flush API

The wait_if_ongoing flag default has changed to true, causing _flush calls to wait and block if another flush operation is currently running on the same shard. In turn, if wait_if_ongoing is set to false and another flush operation is already running, the flush is skipped and the shards flush call will return immediately without any error. In previous versions flush_not_allowed exceptions were reported for each skipped shard.

Document API changes

?refresh no longer supports truthy and falsy values

The ?refresh request parameter used to accept any value other than false, 0, off, and no to mean “make the changes from this request visible for search immediately.” Now it only accepts ?refresh and ?refresh=true to mean that. You can set it to ?refresh=false and the request will take no refresh-related action. The same is true if you leave refresh off of the url entirely. If you add ?refresh=wait_for Elasticsearch will wait for the changes to become visible before replying to the request but won’t take any immediate refresh related action. See ?refresh.
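
For example (a sketch; the index, type and document are illustrative):

PUT /my_index/my_type/1?refresh=wait_for
{
  "example": "data"
}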

created field deprecated in the Index API

The created field has been deprecated in the Index API. It now returns result, returning “result”: “created” when it created a document and “result”: “updated” when it updated the document. This is also true for index bulk operations.

found field deprecated in the Delete API

The found field has been deprecated in the Delete API. It now returns result, returning “result”: “deleted” when it deleted a document and “result”: “not_found” when it didn’t find the document. This is also true for delete bulk operations.

Reindex and Update By Query

Before 5.0.0 _reindex and _update_by_query only retried bulk failures so they used the following response format:

{
  ...
  "retries": 10
  ...
}

Where retries counts the number of bulk retries. Now they retry on search failures as well and use this response format:

{
  ...
  "retries": {
    "bulk": 10,
    "search": 1
  }
  ...
}

Where bulk counts the number of bulk retries and search counts the number of search retries.

get API

As of 5.0.0 the get API will issue a refresh if the requested document has been changed since the last refresh but the change hasn’t been refreshed yet. This will also make all other changes visible immediately. This can have an impact on performance if the same document is updated very frequently using a read modify update pattern since it might create many small segments. This behavior can be disabled by passing realtime=false to the get request.

The fields field has been renamed to stored_fields.

mget API

The fields field has been renamed to stored_fields.

update API

The fields field has been deprecated. You should use _source to load the field from _source.

bulk API

The fields field has been deprecated. You should use _source to load the field from _source.

Settings changes

From Elasticsearch 5.0 on, all settings are validated before they are applied. Node level and default index level settings are validated on node startup; dynamic cluster and index settings are validated before they are updated/added to the cluster state.

Every setting must be a known setting. All settings must have been registered with the node or transport client they are used with. This implies that plugins that define custom settings must register all of their settings during plugin loading using the SettingsModule#registerSettings(Setting) method.

Index Level Settings

In previous versions Elasticsearch allowed to specify index level setting as defaults on the node level, inside the elasticsearch.yaml file or even via command-line parameters. From Elasticsearch 5.0 on only selected settings like for instance index.codec can be set on the node level. All other settings must be set on each individual index. To set default values on every index, index templates should be used instead.

Node settings

The name setting has been removed and is replaced by node.name. Usage of -Dname=some_node_name is not supported anymore.

The node.add_id_to_custom_path was renamed to add_lock_id_to_custom_path.

The default for the node.name settings is now the first 7 characters of the node id, which is in turn a randomly generated UUID.

The settings node.mode and node.local are removed. Local mode should be configured via transport.type: local. In order to disable HTTP please use http.enabled: false.

Node attribute settings

Node level attributes used for allocation filtering, forced awareness or other node identification / grouping must be prefixed with node.attr. In previous versions it was possible to specify node attributes with the node. prefix. All node attributes except for node.master, node.data and node.ingest must be moved to the new node.attr. namespace.

Node types settings

The node.client setting has been removed. A node with such a setting set will not start up. Instead, each node role needs to be set separately using the existing node.master, node.data and node.ingest supported static settings.

Gateway settings

The gateway.format setting for configuring global and index state serialization format has been removed. By default, smile is used as the format.

Transport Settings

All settings with a netty infix have been replaced by their already existing transport synonyms. For instance transport.netty.bind_host is no longer supported and should be replaced by the superseding setting transport.bind_host.

Security manager settings

The option to disable the security manager security.manager.enabled has been removed. In order to grant special permissions to elasticsearch users must edit the local Java Security Policy.

Network settings

The non_loopback value for settings like network.host would arbitrarily pick the first interface not marked as loopback. Instead, specify by address scope (e.g. local,site for all loopback and private network addresses) or by explicit interface names, hostnames, or addresses.

The netty.epollBugWorkaround setting is removed. This setting allowed people to enable a netty workaround for a high CPU usage issue with early JVM versions. This bug was fixed in Java 7. Since Elasticsearch 5.0 requires Java 8, the setting is removed. Note that if the workaround needs to be reintroduced you can still set the org.jboss.netty.epollBugWorkaround system property to control Netty directly.

Forbid changing of thread pool types

Previously, thread pool types could be dynamically adjusted. The thread pool type effectively controls the backing queue for the thread pool and modifying this is an expert setting with minimal practical benefits and high risk of being misused. The ability to change the thread pool type for any thread pool has been removed. It is still possible to adjust relevant thread pool parameters for each of the thread pools (e.g., depending on the thread pool type, keep_alive, queue_size, etc.).

Threadpool settings

The suggest threadpool has been removed, now suggest requests use the search threadpool.

The prefix on all thread pool settings has been changed from threadpool to thread_pool.

The minimum size setting for a scaling thread pool has been changed from min to core.

The maximum size setting for a scaling thread pool has been changed from size to max.

The queue size setting for a fixed thread pool must be queue_size (all other variants that were previously supported are no longer supported).

Thread pool settings are now node-level settings. As such, it is not possible to update thread pool settings via the cluster settings API.

Analysis settings

The index.analysis.analyzer.default_index analyzer is not supported anymore. If you wish to change the analyzer to use for indexing, change the index.analysis.analyzer.default analyzer instead.

Ping settings

Previously, there were three settings for the ping timeout: discovery.zen.initial_ping_timeout, discovery.zen.ping.timeout and discovery.zen.ping_timeout. The former two have been removed and the only setting key for the ping timeout is now discovery.zen.ping_timeout. The default value for ping timeouts remains at three seconds.

discovery.zen.master_election.filter_client and discovery.zen.master_election.filter_data have been removed in favor of the new discovery.zen.master_election.ignore_non_master_pings. This setting controls how ping responses are interpreted during master election; it should be used with care and only in extreme cases. See the documentation for details.

Recovery settings

Recovery settings deprecated in 1.x have been removed:

  • index.shard.recovery.translog_size is superseded by indices.recovery.translog_size
  • index.shard.recovery.translog_ops is superseded by indices.recovery.translog_ops
  • index.shard.recovery.file_chunk_size is superseded by indices.recovery.file_chunk_size
  • indices.recovery.concurrent_streams is superseded by cluster.routing.allocation.node_concurrent_recoveries
  • index.shard.recovery.concurrent_small_file_streams is superseded by indices.recovery.concurrent_small_file_streams
  • indices.recovery.max_size_per_sec is superseded by indices.recovery.max_bytes_per_sec

If you are using any of these settings please take the time to review their purpose. All of the settings above are considered expert settings and should only be used if absolutely necessary. If you have set any of the above setting as persistent cluster settings please use the settings update API and set their superseded keys accordingly.

The following settings have been removed without replacement:

  • indices.recovery.concurrent_small_file_streams - recoveries are now single threaded. The number of concurrent outgoing recoveries are throttled via allocation deciders
  • indices.recovery.concurrent_file_streams - recoveries are now single threaded. The number of concurrent outgoing recoveries are throttled via allocation deciders

Translog settings

The index.translog.flush_threshold_ops setting is not supported anymore. In order to control flushes based on the transaction log growth use index.translog.flush_threshold_size instead.
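
For example, the size-based threshold can be updated dynamically (a sketch; the index name and the 512mb value are illustrative):

PUT /my_index/_settings
{
  "index.translog.flush_threshold_size": "512mb"
}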

Changing the translog type with index.translog.fs.type is not supported anymore, the buffered implementation is now the only available option and uses a fixed 8kb buffer.

The translog by default is fsynced after every index, create, update, delete, or bulk request. The ability to fsync on every operation is not necessary anymore. In fact, it can be a performance bottleneck and it’s trappy since it was enabled by a special value set on index.translog.sync_interval. Now, index.translog.sync_interval doesn’t accept a value less than 100ms, which prevents fsyncing too often if async durability is enabled. The special value 0 is no longer supported.

index.translog.interval has been removed.

Request Cache Settings

The deprecated settings index.cache.query.enable and indices.cache.query.size have been removed and are replaced with index.requests.cache.enable and indices.requests.cache.size respectively.

indices.requests.cache.clean_interval has been replaced with indices.cache.clean_interval and is no longer supported.

Field Data Cache Settings

The indices.fielddata.cache.clean_interval setting has been replaced with indices.cache.clean_interval.

Allocation settings

The cluster.routing.allocation.concurrent_recoveries setting has been replaced with cluster.routing.allocation.node_concurrent_recoveries.
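
The renamed setting can be updated dynamically, for example (a sketch; the value 4 is illustrative):

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}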

Similarity settings

The default similarity has been renamed to classic.

Indexing settings

The indices.memory.min_shard_index_buffer_size and indices.memory.max_shard_index_buffer_size settings have been removed, as Elasticsearch now allows any one shard to use any amount of heap as long as the total indexing buffer heap used across all shards is below the node’s indices.memory.index_buffer_size (defaults to 10% of the JVM heap).

Removed es.max-open-files

Setting the system property es.max-open-files to true to get Elasticsearch to print the number of maximum open files for the Elasticsearch process has been removed. This same information can be obtained from the Nodes Info API, and a warning is logged on startup if it is set too low.

Removed es.netty.gathering

Disabling Netty from using NIO gathering could be done via the escape hatch of setting the system property "es.netty.gathering" to "false". Time has proven enabling gathering by default is a non-issue and this non-documented setting has been removed.

Removed es.useLinkedTransferQueue

The system property es.useLinkedTransferQueue could be used to control the queue implementation used in the cluster service and the handling of ping responses during discovery. This was an undocumented setting and has been removed.

Cache concurrency level settings removed

Two cache concurrency level settings, indices.requests.cache.concurrency_level and indices.fielddata.cache.concurrency_level, have been removed because they no longer apply to the cache implementation used for the request cache and the field data cache.

Using system properties to configure Elasticsearch

Elasticsearch can no longer be configured by setting system properties. This means that support for all of the following has been removed:

  • setting via command line arguments to elasticsearch as -Des.name.of.setting=value.of.setting
  • setting via the JAVA_OPTS environment variable JAVA_OPTS="$JAVA_OPTS -Des.name.of.setting=value.of.setting"
  • setting via the ES_JAVA_OPTS environment variable ES_JAVA_OPTS="$ES_JAVA_OPTS -Des.name.of.setting=value.of.setting"

Instead, use -Ename.of.setting=value.of.setting.
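
For example, cluster and node names are now passed on the command line like this (names illustrative):

./bin/elasticsearch -Ecluster.name=my_cluster -Enode.name=node-1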

Removed using double-dashes to configure Elasticsearch

Elasticsearch could previously be configured on the command line by setting settings via --name.of.setting value.of.setting. This feature has been removed. Instead, use -Ename.of.setting=value.of.setting.

Remove support for .properties config files

The Elasticsearch configuration and logging configuration can no longer be stored in the Java properties file format (line-delimited key=value pairs with a .properties extension).

Discovery Settings

The discovery.zen.minimum_master_nodes setting must be set for nodes that have network.host, network.bind_host, network.publish_host, transport.host, transport.bind_host, or transport.publish_host configuration options set. We see those nodes as in "production" mode and thus require the setting.
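
A minimal elasticsearch.yml sketch (the address is illustrative; with three master-eligible nodes the quorum is 3 / 2 + 1 = 2):

network.host: 192.168.1.10
discovery.zen.minimum_master_nodes: 2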

Realtime get setting

The action.get.realtime setting has been removed. This setting was a fallback realtime setting for the get and mget APIs when realtime wasn't specified. Now if the parameter isn't specified we always default to true.

Memory lock settings

The setting bootstrap.mlockall has been renamed to bootstrap.memory_lock.

Snapshot settings

The default setting include_global_state for restoring snapshots has been changed from true to false. It has not been changed for taking snapshots and still defaults to true in that case.
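
A restore request therefore has to ask for the global state explicitly if it is wanted; a minimal sketch (repository and snapshot names are illustrative):

POST /_snapshot/my_repo/snapshot_1/_restore
{
  "include_global_state": true
}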

Time value parsing

The unit w representing weeks is no longer supported.

Fractional time values (e.g., 0.5s) are no longer supported. For example, this means when setting timeouts "0.5s" will be rejected and should instead be input as "500ms".

Node max local storage nodes

Previous versions of Elasticsearch defaulted to allowing multiple nodes to share the same data directory (up to 50). This can be confusing where users accidentally start up multiple nodes and end up thinking that they've lost data because the second node will start with an empty data directory. While the default of allowing multiple nodes is friendly to playing with forming a small cluster on a laptop, and end-users do sometimes run multiple nodes on the same host, this tends to be the exception. Keeping with Elasticsearch's continual movement towards safer out-of-the-box defaults, and optimizing for the norm instead of the exception, the default for node.max_local_storage_nodes is now one.

Script settings

Indexed script settings

Due to the fact that indexed scripts have been replaced by stored scripts, the following settings have been replaced:

  • script.indexed has been replaced by script.stored
  • script.engine.*.indexed.aggs has been replaced by script.engine.*.stored.aggs (where * represents the script language, like groovy, mustache, painless etc.)
  • script.engine.*.indexed.mapping has been replaced by script.engine.*.stored.mapping (where * represents the script language, like groovy, mustache, painless etc.)
  • script.engine.*.indexed.search has been replaced by script.engine.*.stored.search (where * represents the script language, like groovy, mustache, painless etc.)
  • script.engine.*.indexed.update has been replaced by script.engine.*.stored.update (where * represents the script language, like groovy, mustache, painless etc.)
  • script.engine.*.indexed.plugin has been replaced by script.engine.*.stored.plugin (where * represents the script language, like groovy, mustache, painless etc.)

Script mode settings

Previously script mode settings (e.g., "script.inline: true", "script.engine.groovy.inline.aggs: false", etc.) accepted a wide range of "truthy" or "falsy" values. This is now much stricter and supports only the true and false options.

Script sandbox settings removed

Prior to 5.0 a third option could be specified for the script.inline and script.stored settings ("sandbox"). This has been removed; you can now only set script.inline: true or script.stored: true.

Search settings

The setting index.query.bool.max_clause_count has been removed. In order to set the maximum number of boolean clauses indices.query.bool.max_clause_count should be used instead.

Allocation changes

Primary shard allocation

Previously, primary shards were only assigned if a quorum of shard copies were found (configurable using index.recovery.initial_shards, now deprecated). In case where a primary had only a single replica, quorum was defined to be a single shard. This meant that any shard copy of an index with replication factor 1 could become primary, even if it was a stale copy of the data on disk. This is now fixed thanks to shard allocation IDs.

Allocation IDs assign unique identifiers to shard copies. This allows the cluster to differentiate between multiple copies of the same data and track which shards have been active so that, after a cluster restart, only shard copies containing the most recent data can become primaries.

Indices Shard Stores command

By using allocation IDs instead of version numbers to identify shard copies for primary shard allocation, the former versioning scheme has become obsolete. This is reflected in the Indices Shard Stores API.

A new allocation_id field replaces the former version field in the result of the Indices Shard Stores command. This field is available for all shard copies that have been either created with the current version of Elasticsearch or have been active in a cluster running a current version of Elasticsearch. For legacy shard copies that have not been active in a current version of Elasticsearch, a legacy_version field is available instead (equivalent to the former version field).

Reroute commands

The reroute command allocate has been split into two distinct commands allocate_replica and allocate_empty_primary. This was done as we introduced a new allocate_stale_primary command. The new allocate_replica command corresponds to the old allocate command with allow_primary set to false. The new allocate_empty_primary command corresponds to the old allocate command with allow_primary set to true.
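
For example, allocating a replica with the new command looks like this (index, shard and node names are illustrative):

POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_replica": {
        "index": "my_index",
        "shard": 0,
        "node": "node-2"
      }
    }
  ]
}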

Custom Reroute Commands

Elasticsearch no longer supports plugins registering custom allocation commands. It was unused and hopefully unneeded.

index.shared_filesystem.recover_on_any_node changes

The behavior of index.shared_filesystem.recover_on_any_node: true has been changed. Previously, in the case where no shard copies could be found, an arbitrary node was chosen by potentially ignoring allocation deciders. Now, we take balancing into account but don't assign the shard if the allocation deciders are not satisfied.

The behavior has also changed in the case where shard copies can be found. Previously, a node not holding the shard copy was chosen if none of the nodes holding shard copies were satisfying the allocation deciders. Now, the shard will be assigned to a node having a shard copy, even if none of the nodes holding a shard copy satisfy the allocation deciders.

HTTP changes

Compressed HTTP requests are always accepted

Before 5.0, Elasticsearch accepted compressed HTTP requests only if the setting http.compressed was set to true. Elasticsearch accepts compressed requests now but will continue to send compressed responses only if http.compressed is set to true.

REST API changes

Strict REST query string parameter parsing

Previous versions of Elasticsearch ignored unrecognized URL query string parameters. This means that extraneous parameters or parameters containing typographical errors would be silently accepted by Elasticsearch. This is dangerous from an end-user perspective because it means a submitted request will silently execute not as intended. This leniency has been removed and Elasticsearch will now fail any request that contains unrecognized query string parameters.

id values longer than 512 bytes are rejected

When specifying an _id value longer than 512 bytes, the request will be rejected.

/_optimize endpoint removed

The deprecated /_optimize endpoint has been removed. The /_forcemerge endpoint should be used in lieu of optimize.

The GET HTTP verb for /_forcemerge is no longer supported, please use the POST HTTP verb.
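
For example (index name and segment count illustrative):

POST /my_index/_forcemerge?max_num_segments=1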

Index creation endpoint only accepts PUT

It used to be possible to create an index by either calling PUT index_name or POST index_name. Only the former is now supported.

HEAD {index}/{type} replaced with HEAD {index}/_mapping/{type}

The endpoint for checking whether a type exists has been changed from {index}/{type} to {index}/_mapping/{type} in order to prepare for the removal of types when HEAD {index}/{id} will be used to check whether a document exists in an index. The old endpoint will keep working until 6.0.

Removed mem section from /_cluster/stats response

The mem section contained only the total value, which was actually the memory available throughout all nodes in the cluster. The section now contains total, free, used, used_percent and free_percent.

Revised node roles aggregate returned by /_cluster/stats

The client, master_only, data_only and master_data fields have been removed in favor of master, data, ingest and coordinating_only. A node can contribute to multiple counts as it can have multiple roles. Every node is implicitly a coordinating node, so whenever a node has no explicit roles, it will be counted as coordinating only.

Removed shard version information from /_cluster/state routing table

We now store allocation id's of shards in the cluster state and use that to select primary shards instead of the version information.

Node roles are not part of node attributes anymore

Node roles are now returned in a specific section, called roles, as part of nodes stats and nodes info response. The new section is an array that holds all the different roles that each node fulfills. In case the array is returned empty, that means that the node is a coordinating only node.

Forbid unquoted JSON

Previously, JSON documents were allowed with unquoted field names, which isn't strictly JSON and broke some Elasticsearch clients. If documents were already indexed with unquoted fields in a previous version of Elasticsearch, some operations may throw errors. To accompany this, a commented out JVM option has been added to the jvm.options file: -Delasticsearch.json.allow_unquoted_field_names.

Note that this option is provided solely for migration purposes and will be removed in Elasticsearch 6.0.0.

Analyze API changes

The filters and char_filters parameters have been renamed filter and char_filter. The token_filters parameter has been removed. Use filter instead.

DELETE /_query endpoint removed

The DELETE /_query endpoint provided by the Delete-By-Query plugin has been removed and replaced by the Delete By Query API.

Create stored script endpoint removed

The PUT /_scripts/{lang}/{id}/_create endpoint that previously allowed to create indexed scripts has been removed. Indexed scripts have been replaced by stored scripts.

Create stored template endpoint removed

The PUT /_search/template/{id}/_create endpoint that previously allowed to create indexed templates has been removed. Indexed templates have been replaced by Pre-registered templates.

Remove properties support

Some REST endpoints (e.g., cluster update index settings) supported detecting content in the Java properties format (line-delimited key=value pairs). This support has been removed.

wait_for_relocating_shards is now wait_for_no_relocating_shards in /_cluster/health

The wait_for_relocating_shards parameter that used to take a number is now simply a boolean flag wait_for_no_relocating_shards, which if set to true, means the request will wait (up until the configured timeout) for the cluster to have no shard relocations before returning. Defaults to false, which means the operation will not wait.
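
For example (timeout illustrative):

GET /_cluster/health?wait_for_no_relocating_shards=true&timeout=30s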

CAT API changes

Use Accept header for specifying response media type

Previous versions of Elasticsearch accepted the Content-Type header field for controlling the media type of the response in the cat API. This is in opposition to the HTTP spec which specifies the Accept header field for this purpose. Elasticsearch now uses the Accept header field and support for using the Content-Type header field for this purpose has been removed.

Host field removed from the cat nodes API

The host field has been removed from the cat nodes API as its value is always equal to the ip field. The name field is available in the cat nodes API and should be used instead of the host field.

Changes to cat recovery API

The fields bytes_recovered and files_recovered have been added to the cat recovery API. These fields, respectively, indicate the total number of bytes and files that have been recovered.

The fields total_files and total_bytes have been renamed to files_total and bytes_total, respectively.

Additionally, the field translog has been renamed to translog_ops_recovered, the field translog_total to translog_ops and the field translog_percent to translog_ops_percent. The short aliases for these fields are tor, to, and top, respectively.

Changes to cat nodes API

The cat nodes endpoint returns m for master eligible, d for data, and i for ingest. A node with no explicit roles will be a coordinating only node and marked with -. A node can have multiple roles. The master column has been adapted to return only whether a node is the current master (*) or not (-).

Changes to cat field data API

The cat field data endpoint adds a row per field instead of a column per field.

The total field has been removed from the field data API. Total field data usage per node can be retrieved via the cat nodes API.

Java API changes

Transport client has been moved

The Java transport client has been moved to its own module which can be referenced using:

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>5.0.0</version>
</dependency>

The transport client is now created using the following snippet:

TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("host1"), 9300))
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("host2"), 9300));

For more information please see the Java client documentation.

Count api has been removed

The deprecated count api has been removed from the Java api; use the search api instead and set size to 0.

The following call

client.prepareCount(indices).setQuery(query).get();

can be replaced with

client.prepareSearch(indices).setSource(new SearchSourceBuilder().size(0).query(query)).get();

Suggest api has been removed

The suggest api has been removed from the Java api; use the suggest option in the search api, which has been optimized for suggest-only requests.

The following call

client.prepareSuggest(indices).addSuggestion("foo", SuggestBuilders.completionSuggestion("field").text("s")).get();

can be replaced with

client.prepareSearch(indices).suggest(new SuggestBuilder().addSuggestion("foo", SuggestBuilders.completionSuggestion("field").text("s"))).get();

Elasticsearch will no longer detect logging implementations

Elasticsearch now logs using Log4j 2. Previously if Log4j wasn't on the classpath it made some effort to degrade to SLF4J or Java logging. Now it will fail to work without the Log4j 2 API. The log4j-over-slf4j bridge ought to work when using the Java client. The log4j-1.2-api bridge is used for third-party dependencies that still use the Log4j 1 API. The Elasticsearch server now only supports Log4j 2 as configured by log4j2.properties and will fail if Log4j isn't present.

Groovy dependencies

In previous versions of Elasticsearch, the Groovy scripting capabilities depended on the org.codehaus.groovy:groovy-all artifact. In addition to pulling in the Groovy language, this pulls in a very large set of functionality, none of which is needed for scripting within Elasticsearch. Aside from the inherent difficulties in managing such a large set of dependencies, this also increases the surface area for security issues. This dependency has been reduced to the core Groovy language org.codehaus.groovy:groovy artifact.

DocumentAlreadyExistsException removed

DocumentAlreadyExistsException is removed and a VersionConflictEngineException is thrown instead (with a better error description). This will influence code that uses the IndexRequest.opType() or IndexRequest.create() to index a document only if it doesn't already exist.

writeConsistencyLevel removed on write requests

In previous versions of Elasticsearch, the various write requests had a setWriteConsistencyLevel method to set the shard consistency level for write operations. However, the semantics of write consistency were ambiguous as this is just a pre-operation check to ensure the specified number of shards were available before the operation commenced. The write consistency level did not guarantee that the data would be replicated to that number of copies by the time the operation finished. The setWriteConsistencyLevel method on these write requests has been changed to setWaitForActiveShards, which can take a numerical value up to the total number of shard copies or ActiveShardCount.ALL for all shard copies. The default is to just wait for the primary shard to be active before proceeding with the operation. See the section on wait for active shards for more details.

This change affects IndexRequest, IndexRequestBuilder, BulkRequest, BulkRequestBuilder, UpdateRequest, UpdateRequestBuilder, DeleteRequest, and DeleteRequestBuilder.
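
A minimal sketch of the replacement (index, type and field names are illustrative; ActiveShardCount is org.elasticsearch.action.support.ActiveShardCount):

client.prepareIndex("my_index", "my_type", "1")
        .setSource("field", "value")
        .setWaitForActiveShards(ActiveShardCount.ALL)   // formerly setWriteConsistencyLevel(...)
        .get();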

Changes to Query Builders

BoostingQueryBuilder

Removed setters for mandatory positive/negative query. Both arguments now have to be supplied at construction time already and have to be non-null.

SpanContainingQueryBuilder

Removed setters for mandatory big/little inner span queries. Both arguments now have to be supplied at construction time already and have to be non-null. Updated static factory methods in QueryBuilders accordingly.

SpanOrQueryBuilder

Making sure that query contains at least one clause by making initial clause mandatory in constructor. Renaming method to add clauses from clause(SpanQueryBuilder) to addClause(SpanQueryBuilder).

SpanNearQueryBuilder

Removed setter for mandatory slop parameter, needs to be set in constructor now. Also making sure that query contains at least one clause by making initial clause mandatory in constructor. Updated the static factory methods in QueryBuilders accordingly. Renaming method to add clauses from clause(SpanQueryBuilder) to addClause(SpanQueryBuilder).

SpanNotQueryBuilder

Removed setter for mandatory include/exclude span query clause, needs to be set in constructor now. Updated the static factory methods in QueryBuilders and tests accordingly.

SpanWithinQueryBuilder

Removed setters for mandatory big/little inner span queries. Both arguments now have to be supplied at construction time already and have to be non-null. Updated static factory methods in QueryBuilders accordingly.

WrapperQueryBuilder

Removed wrapperQueryBuilder(byte[] source, int offset, int length). Instead simply use wrapperQueryBuilder(byte[] source). Updated the static factory methods in QueryBuilders accordingly.

QueryStringQueryBuilder

Removed ability to pass in boost value using field(String field) method in form e.g. field^2. Use the field(String, float) method instead.

Operator

Removed the enums called Operator from MatchQueryBuilder, QueryStringQueryBuilder, SimpleQueryStringBuilder, and CommonTermsQueryBuilder in favour of using the enum defined in org.elasticsearch.index.query.Operator in an effort to consolidate the codebase and avoid duplication.

queryName and boost support

Support for queryName and boost has been streamlined to all of the queries. That is a breaking change till queries get sent over the network as serialized json rather than in Streamable format. In fact whenever additional fields are added to the json representation of the query, older nodes might throw error when they find unknown fields.

InnerHitsBuilder

InnerHitsBuilder now has a dedicated addParentChildInnerHits and addNestedInnerHits methods to differentiate between inner hits for nested vs. parent / child documents. This change makes the type / path parameter mandatory.

MatchQueryBuilder

Moving MatchQueryBuilder.Type and MatchQueryBuilder.ZeroTermsQuery enum to MatchQuery.Type. Also reusing new Operator enum.

MoreLikeThisQueryBuilder

Removed MoreLikeThisQueryBuilder.Item#id(String id), Item#doc(BytesReference doc), Item#doc(XContentBuilder doc). Use provided constructors instead.

Removed MoreLikeThisQueryBuilder#addLike in favor of texts and/or items being provided at construction time. Using arrays there instead of lists now.

Removed MoreLikeThisQueryBuilder#addUnlike in favor of using the unlike methods which take arrays as arguments now rather than the lists used before.

The deprecated docs(Item… docs), ignoreLike(Item… docs), ignoreLike(String… likeText), addItem(Item… likeItems) have been removed.

GeoDistanceQueryBuilder

Removing individual setters for lon() and lat() values, both values should be set together using point(lon, lat).

GeoDistanceRangeQueryBuilder

Removing setters for to(Object …) and from(Object …) in favour of the only two allowed input arguments (String, Number). Removing setter for center point (point(), geohash()) because parameter is mandatory and should already be set in constructor. Also removing setters for lt(), lte(), gt(), gte() since they can all be replaced by equivalent calls to to/from() and includeLower()/includeUpper().

GeoPolygonQueryBuilder

Require shell of polygon already to be specified in constructor instead of adding it pointwise. This enables validation, but makes it necessary to remove the addPoint() methods.

MultiMatchQueryBuilder

Moving MultiMatchQueryBuilder.ZeroTermsQuery enum to MatchQuery.ZeroTermsQuery. Also reusing new Operator enum.

Removed ability to pass in boost value using field(String field) method in form e.g. field^2. Use the field(String, float) method instead.

MissingQueryBuilder

The MissingQueryBuilder which was deprecated in 2.2.0 is removed. As a replacement use ExistsQueryBuilder inside a mustNot() clause. So instead of using new MissingQueryBuilder(name) now use new BoolQueryBuilder().mustNot(new ExistsQueryBuilder(name)).

NotQueryBuilder

The NotQueryBuilder which was deprecated in 2.1.0 is removed. As a replacement use BoolQueryBuilder with added mustNot() clause. So instead of using new NotQueryBuilder(filter) now use new BoolQueryBuilder().mustNot(filter).

TermsQueryBuilder

Remove the setter for termsLookup(), making it only possible to either use a TermsLookup object or individual values at construction time. Also moving individual settings for the TermsLookup (lookupIndex, lookupType, lookupId, lookupPath) to the separate TermsLookup class, using constructor only and moving checks for validation there. Removed TermsLookupQueryBuilder in favour of TermsQueryBuilder.

FunctionScoreQueryBuilder

add methods have been removed; all filters and functions must be provided as constructor arguments by creating an array of FunctionScoreQueryBuilder.FilterFunctionBuilder objects, containing one element for each filter/function pair.

scoreMode and boostMode can only be provided using corresponding enum members instead of string values: see FilterFunctionScoreQuery.ScoreMode and CombineFunction.

CombineFunction.MULT has been renamed to MULTIPLY.
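
A sketch of the constructor-based style (field names, values and the random seed are illustrative):

FunctionScoreQueryBuilder.FilterFunctionBuilder[] functions = new FunctionScoreQueryBuilder.FilterFunctionBuilder[] {
        // this function only applies where the filter matches
        new FunctionScoreQueryBuilder.FilterFunctionBuilder(
                QueryBuilders.matchQuery("title", "elasticsearch"),
                ScoreFunctionBuilders.weightFactorFunction(2.0f)),
        // this function applies to all documents
        new FunctionScoreQueryBuilder.FilterFunctionBuilder(
                ScoreFunctionBuilders.randomFunction(12345))
};
FunctionScoreQueryBuilder query = QueryBuilders.functionScoreQuery(functions);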

IdsQueryBuilder

For simplicity, only one way of adding the ids to the existing list (empty by default) is left: addIds(String…).

ShapeBuilders

InternalLineStringBuilder is removed in favour of LineStringBuilder, InternalPolygonBuilder in favour of PolygonBuilder, and Ring has been replaced with LineStringBuilder. Also the abstract base classes BaseLineStringBuilder and BasePolygonBuilder have been merged with their corresponding implementations.

RescoreBuilder

RescoreBuilder.Rescorer was merged with RescoreBuilder, which now is an abstract superclass. QueryRescoreBuilder currently is its only implementation.

PhraseSuggestionBuilder

The inner DirectCandidateGenerator class has been moved out to its own class called DirectCandidateGeneratorBuilder.

SortBuilders

The sortMode setter in FieldSortBuilder, GeoDistanceSortBuilder and ScriptSortBuilder now accepts a SortMode enum instead of a String constant. Also the getter returns the same enum type.

SuggestBuilder

The setText method has been changed to setGlobalText to make the intent more clear, and a getGlobalText method has been added.

The addSuggestion method now requires the user-specified suggestion name, previously used in the ctor of each suggestion.

SuggestionBuilder

The field setter has been deleted. Instead the field name needs to be specified as constructor argument.

SearchSourceBuilder

All methods which take an XContentBuilder, BytesReference, Map<String, Object> or byte[] have been removed in favor of providing the relevant builder object for that feature (e.g. HighlightBuilder, AggregationBuilder, SuggestBuilder).

SearchRequestBuilder

All methods which take an XContentBuilder, BytesReference, Map<String, Object> or byte[] have been removed in favor of providing the relevant builder object for that feature (e.g. HighlightBuilder, AggregationBuilder, SuggestBuilder).

SearchRequest

All source methods have been removed in favor of a single source(SearchSourceBuilder) method. This means that all search requests can now be validated at call time which results in much clearer errors.

All extraSource methods have been removed.

All template methods have been removed in favor of a new Search Template API. A new SearchTemplateRequest now accepts a template and a SearchRequest and must be executed using the new SearchTemplateAction action.

SearchResponse

Sort values for string fields are now returned as java.lang.String objects rather than org.elasticsearch.common.text.Text.

AggregationBuilder

All methods which take an XContentBuilder, BytesReference, Map<String, Object> or byte[] have been removed in favor of providing the relevant AggregationBuilder object for that feature.

ValidateQueryRequest

source(QuerySourceBuilder), source(Map), source(XContentBuilder), source(String), source(byte[]), source(byte[], int, int), source(BytesReference) and source() have been removed in favor of using query(QueryBuilder) and query().

ValidateQueryRequestBuilder

setSource() methods have been removed in favor of using setQuery(QueryBuilder).

ExplainRequest

source(QuerySourceBuilder), source(Map), source(BytesReference) and source() have been removed in favor of using query(QueryBuilder) and query().

ExplainRequestBuilder

The setQuery(BytesReference) method has been removed in favor of using setQuery(QueryBuilder).

ClusterStatsResponse

Removed the getMemoryAvailable method from OsStats, which could be previously accessed calling clusterStatsResponse.getNodesStats().getOs().getMemoryAvailable(). It is now replaced with clusterStatsResponse.getNodesStats().getOs().getMem() which exposes getTotal(), getFree(), getUsed(), getFreePercent() and getUsedPercent().

setRefresh(boolean) has been removed

setRefresh(boolean) has been removed in favor of setRefreshPolicy(RefreshPolicy) because there are now three options (NONE, IMMEDIATE, and WAIT_FOR). setRefresh(IMMEDIATE) has the same behavior as setRefresh(true) used to have. See setRefreshPolicy's javadoc for more.
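
A short sketch (index, type and field are illustrative; RefreshPolicy lives in org.elasticsearch.action.support.WriteRequest):

client.prepareIndex("my_index", "my_type", "1")
        .setSource("user", "kimchy")
        .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)   // same behavior as the old setRefresh(true)
        .get();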

Remove properties support

Some Java APIs (e.g., IndicesAdminClient#setSettings) would support Java properties syntax (line-delimited key=value pairs). This support has been removed.

Render Search Template Java API has been removed

The Render Search Template Java API including RenderSearchTemplateAction, RenderSearchTemplateRequest and RenderSearchTemplateResponse has been removed in favor of a new simulate option in the Search Template Java API. This Search Template API is now included in the lang-mustache module and the simulate flag must be set on the SearchTemplateRequest object.

AnalyzeRequest

The tokenFilters(String…) and charFilters(String…) methods have been removed in favor of using addTokenFilter(String)/addTokenFilter(Map) and addCharFilter(String)/addCharFilter(Map), which add each filter individually.

AnalyzeRequestBuilder

The setTokenFilters(String…) and setCharFilters(String…) methods have been removed in favor of using addTokenFilter(String)/addTokenFilter(Map) and addCharFilter(String)/addCharFilter(Map), which add each filter individually.

ClusterHealthRequest

The waitForRelocatingShards(int) method has been removed in favor of waitForNoRelocatingShards(boolean) which instead uses a boolean flag to denote whether the cluster health operation should wait for there to be no relocating shards in the cluster before returning.

ClusterHealthRequestBuilder

The setWaitForRelocatingShards(int) method has been removed in favor of setWaitForNoRelocatingShards(boolean) which instead uses a boolean flag to denote whether the cluster health operation should wait for there to be no relocating shards in the cluster before returning.

BlobContainer Interface for Snapshot/Restore

Some methods have been removed from the BlobContainer interface for Snapshot/Restore repositories. In particular, the following three methods have been removed:

  • deleteBlobs(Collection) (use deleteBlob(String) instead)
  • deleteBlobsByPrefix(String) (use deleteBlob(String) instead)
  • writeBlob(String, BytesReference) (use writeBlob(String, InputStream, long) instead)

The deleteBlob methods that took multiple blobs as arguments were deleted because no atomic guarantees can be made about either deleting all blobs or deleting none of them, and exception handling in such a situation is ambiguous and best left to the caller. Hence, all delete blob calls use the singular deleteBlob(String) method.

The extra writeBlob method offered no real advantage to the interface and all calls to writeBlob(blobName, bytesRef) can be replaced with:

try (InputStream stream = bytesRef.streamInput()) {
    blobContainer.writeBlob(blobName, stream, bytesRef.length());
}

For any custom implementation of the BlobContainer interface, these three methods must be removed.

NodeBuilder removed

NodeBuilder has been removed. While using Node directly within an application is not officially supported, it can still be constructed with the Node(Settings) constructor.

Packaging

APT/YUM repository URL changes

The repository for apt and yum packages has changed from https://packages.elastic.co to https://artifacts.elastic.co/.

Full details can be found in Installing Elasticsearch.

Default logging using systemd (since Elasticsearch 2.2.0)

In previous versions of Elasticsearch, the default logging configuration routed standard output to /dev/null and standard error to the journal. However, there are often critical error messages at startup that are logged to standard output rather than standard error and these error messages would be lost to the nether. The default has changed to now route standard output to the journal and standard error to inherit this setting (these are the defaults for systemd). These settings can be modified by editing the elasticsearch.service file.

Longer startup times

In Elasticsearch 5.0.0 the -XX:+AlwaysPreTouch flag has been added to the JVM startup options. This option touches all memory pages used by the JVM heap during initialization of the HotSpot VM to reduce the chance of having to commit a memory page during GC time. This will increase the startup time of Elasticsearch as well as increasing the initial resident memory usage of the Java process.

JVM options

Arguments to the Java Virtual Machine have been centralized and moved to a new configuration file jvm.options. This centralization allows for simpler end-user management of JVM options.

This migration removes all previous mechanisms of setting JVM options via the environment variables ES_MIN_MEM, ES_MAX_MEM, ES_HEAP_SIZE, ES_HEAP_NEWSIZE, ES_DIRECT_SIZE, ES_USE_IPV4, ES_GC_OPTS, ES_GC_LOG_FILE, and JAVA_OPTS.

The default location for this file is in config/jvm.options if installing from the tar or zip distributions, and /etc/elasticsearch/jvm.options if installing from the Debian or RPM packages. You can specify an alternative location by setting the environment variable ES_JVM_OPTIONS to the path to the file.
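
A sketch of both mechanisms (heap size and file path are illustrative):

# config/jvm.options excerpt
-Xms2g
-Xmx2g

ES_JVM_OPTIONS=/opt/elastic/config/jvm.options ./bin/elasticsearch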

Thread stack size for the Windows service

Previously when installing the Windows service, the installation script would configure the thread stack size (this is required for the service daemon). As a result of moving all JVM configuration to the jvm.options file, the service installation script no longer configures the thread stack size. When installing the Windows service, you must configure thread stack size. For additional details, see the installation docs.

/bin/bash is now required

Previously, the scripts used to start Elasticsearch and run plugin commands only required a Bourne-compatible shell. Starting in Elasticsearch 5.0.0, the bash shell is now required and /bin/bash is a hard-dependency for the RPM and Debian packages.

Environmental Settings

Previously, Elasticsearch could be configured via environment variables in two ways: first by using the placeholder syntax ${env.ENV_VAR_NAME} and the second by using the same syntax without the env prefix: ${ENV_VAR_NAME}. The first method has been removed from Elasticsearch.

Additionally, it was previously possible to set any setting in Elasticsearch via JVM system properties. This has been removed from Elasticsearch.
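
The remaining placeholder form still works in elasticsearch.yml, for example (variable names illustrative):

node.name: ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}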

Dying on fatal errors

Previous versions of Elasticsearch would not halt the JVM if out of memory errors or other fatal errors were encountered during the life of the Elasticsearch instance. Because such errors leave the JVM in a questionable state, the best course of action is to halt the JVM when this occurs. Starting in Elasticsearch 5.x, this is now the case. Operators should consider configuring their Elasticsearch services so that they respawn automatically in the case of such a fatal crash.

Plugin changes

The command bin/plugin has been renamed to bin/elasticsearch-plugin. The structure of the plugin ZIP archive has changed. All the plugin files must be contained in a top-level directory called elasticsearch. If you use the gradle build, this structure is automatically generated.
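
For example, installing an official plugin now looks like this (plugin name illustrative):

./bin/elasticsearch-plugin install analysis-icu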

Plugins isolation

isolated option has been removed. Each plugin will have its own classloader.

Site plugins removed

Site plugins have been removed. Site plugins should be reimplemented as Kibana plugins.

Multicast plugin removed

Multicast has been removed. Use unicast discovery, or one of the cloud discovery plugins.

Plugins with custom query implementations

Plugins implementing custom queries need to implement the fromXContent(QueryParseContext) method in their QueryParser subclass rather than parse. This method will take care of parsing the query from XContent format into an intermediate query representation that can be streamed between the nodes in binary format, effectively the query object used in the java api. Also, the query builder needs to be registered as a NamedWriteable. This is all done by implementing the SearchPlugin interface and overriding the getQueries method. The query object can then transform itself into a lucene query through the new toQuery(QueryShardContext) method, which returns a lucene query to be executed on the data node.

Similarly, plugins implementing custom score functions need to implement the fromXContent(QueryParseContext) method in their ScoreFunctionParser subclass rather than parse. This method will take care of parsing the function from XContent format into an intermediate function representation that can be streamed between the nodes in binary format, effectively the function object used in the java api. The function object can then transform itself into a lucene function through the new toFunction(QueryShardContext) method, which returns a lucene function to be executed on the data node.

Cloud AWS plugin changes

Cloud AWS plugin has been split in two plugins:

  • Discovery EC2 plugin
  • Repository S3 plugin

Proxy settings for both plugins have been renamed:

  • from cloud.aws.proxy_host to cloud.aws.proxy.host
  • from cloud.aws.ec2.proxy_host to cloud.aws.ec2.proxy.host
  • from cloud.aws.s3.proxy_host to cloud.aws.s3.proxy.host
  • from cloud.aws.proxy_port to cloud.aws.proxy.port
  • from cloud.aws.ec2.proxy_port to cloud.aws.ec2.proxy.port
  • from cloud.aws.s3.proxy_port to cloud.aws.s3.proxy.port

Cloud Azure plugin changes

Cloud Azure plugin has been split in three plugins:

  • Discovery Azure plugin
  • Repository Azure plugin
  • Store SMB plugin

If you were using the cloud-azure plugin for snapshot and restore, you had in elasticsearch.yml:

cloud:
    azure:
        storage:
            account: your_azure_storage_account
            key: your_azure_storage_key

You need to give a unique id to the storage details now, as you can define multiple storage accounts:

cloud:
    azure:
        storage:
            my_account:
                account: your_azure_storage_account
                key: your_azure_storage_key

Cloud GCE plugin changes

Cloud GCE plugin has been renamed to Discovery GCE plugin.

Delete-By-Query plugin removed

The Delete-By-Query plugin has been removed in favor of a new Delete By Query API implementation in core. It now supports throttling, retries and cancellation but no longer supports timeouts. Instead use the cancel API to cancel deletes that run too long.

Mapper Attachments plugin deprecated

Mapper attachments has been deprecated. Users should now use the ingest-attachment plugin.

Passing of Java System Properties

Previously, Java system properties could be passed to the plugin command by passing -D style arguments directly to the plugin script. This is no longer permitted and such system properties must be passed via ES_JAVA_OPTS.

Custom plugins path

The ability to specify a custom plugins path via path.plugins has been removed.

ScriptPlugin

Plugins that register custom scripts should implement ScriptPlugin and remove their onModule(ScriptModule) implementation.

AnalysisPlugin

Plugins that register custom analysis components should implement AnalysisPlugin and remove their onModule(AnalysisModule) implementation.

MapperPlugin

Plugins that register custom mappers should implement MapperPlugin and remove their onModule(IndicesModule) implementation.
注册自定义mapper的插件应当实现MapperPlugin并且移除其onModule(IndicesModule)实现。

ActionPlugin

Plugins that register custom actions should implement ActionPlugin and remove their onModule(ActionModule) implementation.
注册自定义action的插件应当实现ActionPlugin并且移除其onModule(ActionModule)实现。

Plugins that register custom RestHandlers should implement ActionPlugin and remove their onModule(NetworkModule) implementation.
注册自定义RestHandler的插件应当实现ActionPlugin并且移除其onModule(NetworkModule)实现。

SearchPlugin

Plugins that register custom search time behavior (Query, Suggester, ScoreFunction, FetchSubPhase, Highlighter, etc) should implement SearchPlugin and remove their onModule(SearchModule) implementation.
注册自定义搜索时行为(Query、Suggester、ScoreFunction、FetchSubPhase、Highlighter等)的插件应当实现SearchPlugin并且移除其onModule(SearchModule)实现。

SearchParseElement

The SearchParseElement interface has been removed. Custom search request sections can only be provided under the ext element. Plugins can plug in custom parsers for those additional sections by providing a SearchPlugin.SearchExtSpec, which consists of a SearchExtParser implementation that can parseXContent into a SearchExtBuilder implementation. The parsing happens now in the coordinating node. The result of parsing is serialized to the data nodes through transport layer together with the rest of the search request and stored in the search context for later retrieval.
SearchParseElement接口已经被移除。自定义的搜索请求部分只能在ext元素下提供。插件可以通过提供SearchPlugin.SearchExtSpec来为这些附加部分插入自定义的解析器,它由一个SearchExtParser实现组成,该实现可以把XContent解析为一个SearchExtBuilder实现。解析现在发生在协调节点上。解析的结果连同搜索请求的其余部分一起通过传输层序列化到数据节点,并且存储在搜索上下文中以便之后获取。

Testing Custom Plugins

测试自定义插件

ESIntegTestCase#pluginList has been removed. Use Arrays.asList instead. It isn’t needed now that all plugins require Java 1.8.
ESIntegTestCase#pluginList已经被移除。使用Arrays.asList作为代替。既然现在所有的插件都要求Java 1.8,这个方法已经不再需要了。

Mapper-Size plugin

Mapper-Size插件

The metadata field _size is not accessible in aggregations, scripts and when sorting for indices created in 2.x. If these features are needed in your application it is required to reindex the data with Elasticsearch 5.x.
对于创建于2.x的索引,元数据字段_size在聚合、脚本以及排序时都无法访问。如果你的应用需要这些特性,则需要使用Elasticsearch 5.x重新索引数据。

Filesystem related changes

文件系统相关的更新

Only a subset of index files were open with mmap on Elasticsearch 2.x. As of Elasticsearch 5.0, all index files will be open with mmap on 64-bit systems. While this may increase the amount of virtual memory used by Elasticsearch, there is nothing to worry about since this is only address space consumption and the actual memory usage of Elasticsearch will stay similar to what it was in 2.x. See http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html for more information.
在Elasticsearch 2.x中只有一部分索引文件使用mmap打开。从Elasticsearch 5.0开始,在64位系统上所有的索引文件都将使用mmap打开。虽然这可能增加Elasticsearch使用的虚拟内存数量,但是不用担心,因为这只是地址空间的消耗,Elasticsearch实际的内存使用将和2.x版本保持类似。详见 http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html 。

Path to data on disk

磁盘中数据的路径

In prior versions of Elasticsearch, the path.data directory included a folder for the cluster name, so that data was in a folder such as $DATA_DIR/$CLUSTER_NAME/nodes/$nodeOrdinal. In 5.0 the cluster name as a directory is deprecated. Data will now be stored in $DATA_DIR/nodes/$nodeOrdinal if there is no existing data. Upon startup, Elasticsearch will check to see if the cluster folder exists and has data, and will read from it if necessary. In Elasticsearch 6.0 this backwards-compatible behavior will be removed.
在之前版本的Elasticsearch中,path.data目录包括一个以集群名命名的文件夹,因此数据存放在类似$DATA_DIR/$CLUSTER_NAME/nodes/$nodeOrdinal的文件夹中。在5.0中,把集群名作为目录的方式已经被废弃。如果没有已有的数据,数据现在将被存储在$DATA_DIR/nodes/$nodeOrdinal中。在启动的时候,Elasticsearch将会检查集群文件夹是否存在以及是否有数据,如果有必要将从中读取。在Elasticsearch 6.0中这个向后兼容的行为将会被移除。

If you are using a multi-cluster setup with both instances of Elasticsearch pointing to the same data path, you will need to add the cluster name to the data path so that different clusters do not overwrite data.
如果你在使用多集群设置,并且多个Elasticsearch实例指向相同的数据路径,你需要把集群名添加到数据路径中,这样不同的集群才不会互相覆盖数据。
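
A minimal sketch, assuming two clusters that share the same filesystem (the paths are placeholders):
一个最简示例,假设有两个共享同一文件系统的集群(路径只是占位符):

# elasticsearch.yml of the first cluster / 第一个集群的elasticsearch.yml
path.data: /var/data/elasticsearch/cluster_one

# elasticsearch.yml of the second cluster / 第二个集群的elasticsearch.yml
path.data: /var/data/elasticsearch/cluster_two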

Local files

本地文件

Prior to 5.0, nodes that were marked with both node.data: false and node.master: false (or the now removed node.client: true) didn’t write any files or folder to disk. 5.x added persistent node ids, requiring nodes to store that information. As such, all node types will write a small state file to their data folders.
在5.0版本之前,同时标记了node.data: false和node.master: false(或现在已经移除的node.client: true)的节点不会向磁盘写入任何文件或文件夹。5.x添加了持久化的节点id,要求节点存储这些信息。因此,所有类型的节点都会向其数据目录写入一个小的状态文件。

Aggregation changes

聚合的更新

Significant terms on numeric fields

numeric域上的significant terms

Numeric fields have been refactored to use a different data structure that performs better for range queries. However, since this data structure does not record document frequencies, numeric fields need to fall back to running queries in order to estimate the number of matching documents in the background set, which may incur a performance degradation.
Numeric域已经被重构为使用一种对范围查询性能更好的数据结构。然而,由于这种数据结构没有记录文档频率,numeric域需要退回到运行查询的方式来估算后台集合中匹配文档的数量,这可能引起性能的下降。

It is recommended to use keyword fields instead, either directly or through a multi-field if the numeric representation is still needed for sorting, range queries or numeric aggregations like stats aggregations.
建议改用keyword域,可以直接使用,也可以在排序、范围查询或numeric聚合(例如stats聚合)仍然需要numeric表示时通过multi-field来使用。
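
For example, a mapping along these lines keeps a keyword field for terms-style aggregations with a numeric multi-field for sorting and range queries (index, type and field names are hypothetical):
例如,下面这样的映射保留了一个用于terms类聚合的keyword域,并带有一个用于排序和范围查询的numeric多重域(索引、类型和域的名字都是假设的):

PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "code": {
          "type": "keyword",
          "fields": {
            "numeric": {
              "type": "long"
            }
          }
        }
      }
    }
  }
}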

ip_range aggregations

ip_range聚集

Now that Elasticsearch supports ipv6, ip addresses are encoded in the index using a binary representation rather than a numeric representation. As a consequence, the output of ip_range aggregations does not give numeric values for from and to anymore.
现在Elasticsearch支持ipv6,ip地址在索引中使用二进制表示而不是numeric表示来编码。因此,ip_range聚合的输出不再为from和to给出numeric值。
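
As a sketch, assuming an ip field named ip (hypothetical), the from and to values in the buckets of such a request now come back as IP address strings rather than numbers:
作为示例,假设有一个名为ip的ip域(假设的),这样的请求的桶中的from和to值现在会以IP地址字符串而不是数字的形式返回:

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "ip_ranges": {
      "ip_range": {
        "field": "ip",
        "ranges": [
          { "to": "10.0.0.5" },
          { "from": "10.0.0.5" }
        ]
      }
    }
  }
}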

size: 0 on Terms, Significant Terms and Geohash Grid Aggregations

在Terms、Significant Terms和Geohash Grid聚合上使用size: 0

size: 0 is no longer valid for the terms, significant terms and geohash grid aggregations. Instead a size should be explicitly specified with a number greater than zero.
size: 0不再适用于terms、significant terms和geohash grid聚合。作为代替,size应当明确指定为一个大于0的数值。
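
For example, a terms aggregation now has to give an explicit positive size (my_index and foo are hypothetical names):
例如,一个terms聚合现在必须给出一个明确的正数size(my_index和foo是假设的名字):

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "my_terms": {
      "terms": {
        "field": "foo",
        "size": 10
      }
    }
  }
}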

Fractional time values

小数时间值

Fractional time values (e.g., 0.5s) are no longer supported. For example, this means when setting date histogram intervals “1.5h” will be rejected and should instead be input as “90m”.
小数时间值(例如0.5s)不再被支持。例如,这意味着设置日期直方图间隔时,"1.5h"将会被拒绝,应当输入"90m"作为代替。
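
A minimal sketch of the rewritten interval, assuming a date field named timestamp (hypothetical):
重写后的间隔的一个最简示例,假设有一个名为timestamp的日期域(假设的):

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "by_time": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "90m"
      }
    }
  }
}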

Script related changes

脚本相关的更新

Switched Default Language from Groovy to Painless

使用Painless代替Groovy成为默认的语言

The default scripting language for Elasticsearch is now Painless. Painless is a custom-built language with syntax similar to Groovy designed to be fast as well as secure. Many Groovy scripts will be identical to Painless scripts to help make the transition between languages as simple as possible.
Elasticsearch默认的脚本语言现在是Painless。Painless是一个定制构建的语言,语法类似于Groovy,设计目标是既快速又安全。许多Groovy脚本将和Painless脚本完全相同,以帮助使两种语言之间的过渡尽可能简单。

Documentation for Painless can be found at Painless Scripting Language
对于Painless的相关文档可以在Painless Scripting Language中找到

One common difference to note between Groovy and Painless is the use of parameters: all parameters in Painless must be prefixed with params. now. The following example shows the difference:
Groovy和Painless之间一个需要注意的常见区别是参数的使用:现在所有Painless中的参数都必须使用params.前缀。下面的例子展示了两者的不同:

Groovy:

{  "script_score": {    "script": {      "lang": "groovy",      "inline": "Math.log(_score * 2) + my_modifier",      "params": {        "my_modifier": 8      }    }  }}

Painless (my_modifier is prefixed with params):
Painless(my_modifier使用了params前缀):

{  "script_score": {    "script": {      "lang": "painless",      "inline": "Math.log(_score * 2) + params.my_modifier",      "params": {        "my_modifier": 8      }    }  }}

The script.default_lang setting has been removed. It is no longer possible to set the default scripting language. If a different language than painless is used then this should be explicitly specified on the script itself.
script.default_lang设置已经被移除。不再可以设置默认的脚本语言。如果使用了painless之外的语言,则应当在脚本本身上明确指定。

For scripts with no explicit language defined, that are part of already stored percolator queries, the default language can be controlled with the script.legacy.default_lang setting.
对于没有明确定义语言的脚本(它们是已经存储的percolator查询的一部分),默认语言可以通过script.legacy.default_lang设置来控制。
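
For example, keeping Groovy as the legacy default for such stored percolator queries would be a single line in elasticsearch.yml (a sketch; groovy is just one possible value):
例如,为这类已存储的percolator查询把Groovy保留为遗留默认语言,只需要在elasticsearch.yml中加一行(一个示例;groovy只是一个可能的值):

script.legacy.default_lang: groovy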

Removed 1.x script and template syntax

移除了1.x脚本和模板语法

The deprecated 1.x syntax of defining inline scripts / templates and referring to file or index base scripts / templates have been removed.
废弃的1.x语法,即定义内联脚本/模板以及引用基于文件或基于索引的脚本/模板的语法,已经被移除。

The script and params string parameters can no longer be used and instead the script object syntax must be used. This applies for the update api, script sort, script_score function, script query, scripted_metric aggregation and script_heuristic aggregation.
script和params字符串参数不能再被使用,作为替代必须使用script对象语法。这适用于update API、脚本排序、script_score函数、script查询、scripted_metric聚合和script_heuristic聚合。

So this usage of inline scripts is no longer allowed:
因此这种内联脚本的用法不再被允许:

{  "script_score": {    "lang": "groovy",    "script": "Math.log(_score * 2) + my_modifier",    "params": {      "my_modifier": 8    }  }}

and instead this syntax must be used:
并且作为替代必须使用这样的语法:

{  "script_score": {    "script": {      "lang": "groovy",      "inline": "Math.log(_score * 2) + my_modifier",      "params": {        "my_modifier": 8      }    }  }}

The script or script_file parameter can no longer be used to refer to file based scripts and templates and instead file must be used.
script或script_file参数不能再用于引用基于文件的脚本和模板,作为替代必须使用file参数。

This usage of referring to file based scripts is no longer valid:
这种引用基于文件的脚本的用法不再合法:

{  "script_score": {    "script": "calculate-score",    "params": {      "my_modifier": 8    }  }}

This usage is valid:
这种使用是合法的:

{  "script_score": {    "script": {      "lang": "groovy",      "file": "calculate-score",      "params": {        "my_modifier": 8      }    }  }}

The script_id parameter can no longer be used to refer to index based scripts and templates and instead id must be used.
script_id参数不能再用于引用基于索引的脚本和模板,作为替代必须使用id参数。

This usage of referring to indexed scripts is no longer valid:
这种引用索引脚本的用法不再合法:

{  "script_score": {    "script_id": "indexedCalculateScore",    "params": {      "my_modifier": 8    }  }}

This usage is valid:
这种使用是合法的:

{  "script_score": {    "script": {      "id": "indexedCalculateScore",      "lang" : "groovy",      "params": {        "my_modifier": 8      }    }  }}

Template query

模板查询

The query field in the template query can no longer be used. This 1.x syntax can no longer be used:
模板查询中的query域不能再被使用。这个1.x的语法不再可用:

{    "query": {        "template": {            "query": {"match_{{template}}": {}},            "params" : {                "template" : "all"            }        }    }}

and instead the following syntax should be used:
并且作为替代应当使用如下的语法:

{    "query": {        "template": {            "inline": {"match_{{template}}": {}},            "params" : {                "template" : "all"            }        }    }}

Search templates

搜索模板

The top level template field in the search template api has been replaced with consistent template / script object syntax. This 1.x syntax can no longer be used:
搜索模板API中的顶级template域已经被一致的template/script对象语法所取代。这个1.x的语法不能再被使用:

{    "template" : {        "query": { "match" : { "{{my_field}}" : "{{my_value}}" } },        "size" : "{{my_size}}"    },    "params" : {        "my_field" : "foo",        "my_value" : "bar",        "my_size" : 5    }}

and instead the following syntax should be used:
并且作为替代应当使用如下的语法:

{    "inline" : {        "query": { "match" : { "{{my_field}}" : "{{my_value}}" } },        "size" : "{{my_size}}"    },    "params" : {        "my_field" : "foo",        "my_value" : "bar",        "my_size" : 5    }}

Indexed scripts and templates

索引脚本和模板

Indexed scripts and templates have been replaced by stored scripts which stores the scripts and templates in the cluster state instead of a dedicated .scripts index.
索引脚本和模板已经被存储脚本(stored scripts)取代,后者将脚本和模板存储在集群状态中,而不是专门的.scripts索引中。

For the size of stored scripts there is a soft limit of 65535 bytes. If scripts exceed that size then the script.max_size_in_bytes setting can be added to elasticsearch.yml to change the soft limit to a higher value. If scripts are really large, other options like native scripts should be considered.
存储脚本的大小有一个65535字节的软限制。如果脚本超过了这个大小,可以在elasticsearch.yml中添加script.max_size_in_bytes设置来把软限制调整为更高的值。如果脚本确实很大,则应当考虑其他选择,例如native脚本。
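
For example, raising the soft limit to 128KB would be one line in elasticsearch.yml (a sketch; the value is arbitrary):
例如,把软限制提高到128KB只需要在elasticsearch.yml中加一行(一个示例;这个值是任意选取的):

script.max_size_in_bytes: 131072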

Previously indexed scripts in the .scripts index will not be used any more as Elasticsearch will now try to fetch the scripts from the cluster state. Upon upgrading to 5.x the .scripts index will remain to exist, so it can be used by a script to migrate the stored scripts from the .scripts index into the cluster state. The current format of the scripts and templates hasn’t been changed, only the 1.x format has been removed.
之前索引在.scripts索引中的脚本将不会再被使用,因为Elasticsearch现在会尝试从集群状态中获取脚本。升级到5.x之后,.scripts索引仍然会保留,因此可以用一个脚本把存储的脚本从.scripts索引迁移到集群状态中。脚本和模板的当前格式没有改变,只有1.x的格式被移除了。

Python migration script

Python迁移脚本

The following Python script can be used to import your indexed scripts into the cluster state as stored scripts:
下面的python脚本可以被使用来导入你的索引脚本到集群状态中作为存储脚本:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch([
        {'host': 'localhost'}
])

for doc in helpers.scan(es, index=".scripts", preserve_order=True):
        es.put_script(lang=doc['_type'], id=doc['_id'], body=doc['_source'])

This script makes use of the official Elasticsearch Python client and therefore you need to make sure that you have installed the client in your environment. For more information on this please see elasticsearch-py.
这个脚本使用了官方的Elasticsearch Python客户端,因此你需要确保你已经在你的环境中安装了该客户端。有关更多的信息请参考elasticsearch-py。

Perl migration script

Perl迁移脚本

The following Perl script can be used to import your indexed scripts into the cluster state as stored scripts:
下面的Perl脚本可以被使用来导入你的索引脚本到集群状态作为存储脚本:

use Search::Elasticsearch;

my $es     = Search::Elasticsearch->new( nodes => 'localhost:9200');
my $scroll = $es->scroll_helper( index => '.scripts', sort => '_doc');

while (my $doc = $scroll->next) {
  $es->put_script(
    lang => $doc->{_type},
    id   => $doc->{_id},
    body => $doc->{_source}
  );
}

This script makes use of the official Elasticsearch Perl client and therefore you need to make sure that you have installed the client in your environment. For more information on this please see Search::Elasticsearch.
这个脚本使用了官方的Elasticsearch Perl客户端,因此你需要确保你已经在你的环境中安装了该客户端。有关更多的信息请参考Search::Elasticsearch。

Verifying script migration

验证脚本迁移

After you have moved the scripts via the provided script or otherwise then you can verify with the following request if the migration has happened successfully:
在你通过上面提供的脚本或其他方式迁移完脚本之后,你可以通过下面的请求来验证迁移是否成功:

GET _cluster/state?filter_path=metadata.stored_scripts

The response should include all your scripts from the .scripts index. After you have verified that all your scripts have been moved, optionally as a last step, you can delete the .scripts index as Elasticsearch no longer uses it.
响应应当包括你来自.scripts索引的所有脚本。在你验证所有脚本都已经被迁移之后,作为可选的最后一步,你可以删除.scripts索引,因为Elasticsearch不会再使用它。
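
For example, the old index can then be removed with the standard delete index API:
例如,之后可以使用标准的删除索引API移除旧的索引:

DELETE /.scripts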

Indexed scripts Java APIs

索引脚本的Java API

All the methods related to interacting with indexed scripts have been removed. The Java API methods for interacting with stored scripts have been added under ClusterAdminClient class. The sugar methods that used to exist on the indexed scripts API methods don’t exist on the methods for stored scripts. The only way to provide scripts is by using BytesReference implementation, if a string needs to be provided the BytesArray class should be used.
所有与索引脚本交互相关的方法都已经被移除。用于与存储脚本交互的Java API方法已经被添加到ClusterAdminClient类中。索引脚本API方法上曾经存在的便捷方法在存储脚本的方法上不存在。提供脚本的唯一方式是使用BytesReference实现;如果需要提供字符串,应当使用BytesArray类。

Scripting engines now register only a single language

脚本引擎现在只能注册单一的语言

Prior to 5.0.0, script engines could register multiple languages. The Javascript script engine in particular registered both “lang”: “js” and “lang”: “javascript”. Script engines can now only register a single language. All references to “lang”: “js” should be changed to “lang”: “javascript” for existing users of the lang-javascript plugin.
在5.0.0之前,脚本引擎可以注册多个语言。特别是Javascript脚本引擎同时注册了"lang": "js"和"lang": "javascript"。脚本引擎现在只能注册单一语言。对于已有的lang-javascript插件用户,所有对"lang": "js"的引用都应当改为"lang": "javascript"。
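
A minimal before/after sketch of the lang change (the inline script body is a placeholder):
lang变化的一个最简前后对比示例(内联脚本体只是占位符):

{
  "script": {
    "lang": "js",
    "inline": "doc['foo'].value * 2"
  }
}

becomes:
变为:

{
  "script": {
    "lang": "javascript",
    "inline": "doc['foo'].value * 2"
  }
}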

Scripting engines now register only a single extension

脚本引擎现在只能注册单一的扩展

Prior to 5.0.0 scripting engines could register multiple extensions. The only engine doing this was the Javascript engine, which registered “js” and “javascript”. It now only registers the “js” file extension for on-disk scripts.
在5.0.0之前,脚本引擎可以注册多个扩展名。唯一这么做的引擎是Javascript引擎,它注册了"js"和"javascript"。现在它只为磁盘上的脚本注册"js"文件扩展名。

.javascript files are no longer supported (use .js)

.javascript文件不再被支持(使用.js)

The Javascript engine previously registered “js” and “javascript”. It now only registers the “js” file extension for on-disk scripts.
Javascript引擎之前注册了"js"和"javascript"。现在它只为磁盘上的脚本注册"js"文件扩展名。

Removed scripting query string parameters from update rest api

从update REST API中移除了脚本查询字符串参数

The script, script_id and scripting_upsert query string parameters have been removed from the update api.
script、script_id和scripting_upsert查询字符串参数已经从update API中被移除。

Java transport client

Java传输客户端

The TemplateQueryBuilder has been moved to the lang-mustache module. Therefore when using the TemplateQueryBuilder from the Java native client the lang-mustache module should be on the classpath. Also the transport client should load the lang-mustache module as plugin:
TemplateQueryBuilder已经被移动到lang-mustache模块中。因此在Java本地客户端中使用TemplateQueryBuilder时,lang-mustache模块应当在classpath中。并且传输客户端应当把lang-mustache模块作为插件加载:

TransportClient transportClient = TransportClient.builder()
        .settings(Settings.builder().put("node.name", "node"))
        .addPlugin(MustachePlugin.class)
        .build();
transportClient.addTransportAddress(
        new InetSocketTransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300)));

Also the helper methods in the QueryBuilders class that create a TemplateQueryBuilder instance have been removed; instead the constructors on TemplateQueryBuilder should be used.
并且QueryBuilders类中用于创建TemplateQueryBuilder实例的helper方法已经被移除,作为代替应当使用TemplateQueryBuilder的构造器。

Template query

模板查询

The template query has been deprecated in favour of the search template api. The template query is scheduled to be removed in the next major version.
模板查询已经被废弃,建议改用搜索模板API。模板查询计划在下一个主版本中被移除。

GeoPoint scripts

GeoPoint脚本

The following helper methods have been removed from GeoPoint scripting:
下面的helper方法已经从GeoPoint脚本中移除:

  • factorDistance
  • factorDistanceWithDefault
  • factorDistance02
  • factorDistance13
  • arcDistanceInKm
  • arcDistanceInKmWithDefault
  • arcDistanceInMiles
  • arcDistanceInMilesWithDefault
  • distanceWithDefault
  • distanceInKm
  • distanceInKmWithDefault
  • distanceInMiles
  • distanceInMilesWithDefault
  • geohashDistanceInKm
  • geohashDistanceInMiles

Instead use arcDistance, arcDistanceWithDefault, planeDistance, planeDistanceWithDefault, geohashDistance, geohashDistanceWithDefault and convert from default units (meters) to desired units using the appropriate constants (e.g., multiply by 0.001 to convert to km).
作为替代,使用arcDistance、arcDistanceWithDefault、planeDistance、planeDistanceWithDefault、geohashDistance、geohashDistanceWithDefault,并使用适当的常量从默认单位(米)转换为期望的单位(例如,乘以0.001转换为千米)。
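
A sketch of the replacement, assuming a geo_point field named location (field name and coordinates are hypothetical); arcDistance returns meters, so multiplying by 0.001 yields kilometers:
替换方式的一个示例,假设有一个名为location的geo_point域(域名和坐标都是假设的);arcDistance返回米,因此乘以0.001得到千米:

GET /my_index/_search
{
  "script_fields": {
    "distance_km": {
      "script": {
        "lang": "painless",
        "inline": "doc['location'].arcDistance(params.lat, params.lon) * 0.001",
        "params": {
          "lat": 40.71,
          "lon": -74.0
        }
      }
    }
  }
}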

Only 15 unique scripts can be compiled per minute by default

默认每分钟只能编译15个不同的脚本

If you compile too many unique scripts within a small amount of time, Elasticsearch will reject the new dynamic scripts with a circuit_breaking_exception error. By default, up to 15 inline scripts per minute will be compiled. You can change this setting dynamically by setting script.max_compilations_per_minute.
如果你在短时间内编译太多不同的脚本,Elasticsearch将以circuit_breaking_exception错误拒绝新的动态脚本。默认情况下,每分钟最多可以编译15个内联脚本。你可以通过script.max_compilations_per_minute来动态地改变这个设置。
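
Since the setting is dynamic, it can be changed with the cluster settings API; a sketch raising the limit to 30 (the value is arbitrary):
由于这个设置是动态的,可以通过集群设置API来修改;下面是把限制提高到30的一个示例(这个值是任意选取的):

PUT _cluster/settings
{
  "transient": {
    "script.max_compilations_per_minute": 30
  }
}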

You should watch out for this if you are hard-coding values into your scripts.
如果你在脚本中硬编码了值,你应当特别注意这一点。

Elasticsearch recommends the usage of parameters for efficient script handling. See details here.
Elasticsearch建议使用参数来实现高效的脚本处理。详见here。
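
As a sketch of why parameters help, the following script query compiles once and can be reused with a different params.threshold on every request without triggering a new compilation (my_index, my_field and the threshold value are hypothetical):
作为参数为什么有帮助的一个示例,下面的script查询只编译一次,每次请求时可以用不同的params.threshold复用而不会触发新的编译(my_index、my_field和阈值都是假设的):

GET /my_index/_search
{
  "query": {
    "script": {
      "script": {
        "lang": "painless",
        "inline": "doc['my_field'].value > params.threshold",
        "params": {
          "threshold": 5
        }
      }
    }
  }
}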