Configuring an Elasticsearch cluster and installing X-Pack offline on CentOS 7

Environment

Host OS: Windows 7
VM: CentOS 7
Elasticsearch: 5.2.2
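
If you want to confirm the same versions on your own VM, a quick sketch (the Elasticsearch install path is an assumption based on my layout):

cat /etc/redhat-release                            # should report CentOS Linux release 7.x
java -version                                      # Elasticsearch 5.2.2 requires Java 8
~/elasticsearch-5.2.2/bin/elasticsearch --version  # prints the Elasticsearch version and exits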

Steps

Prerequisites

Since this is just for my own learning and my laptop's memory is limited, I wanted to set up the whole cluster inside a single VM.
The VM already has Elasticsearch, Kibana, and X-Pack installed.

The usual approach found online is to copy the already-installed elasticsearch directory. Since I had kept the installation package, I instead extracted a fresh copy and renamed it elasticsearch-5.2.2-node-2.
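
For reference, a minimal sketch of that step, assuming the tarball was kept under the home directory (the exact paths come from my setup and may differ on yours):

# extract a second copy somewhere neutral so it does not clash with the existing install,
# then move it next to the original under the new name
tar -zxf ~/elasticsearch-5.2.2.tar.gz -C /tmp
mv /tmp/elasticsearch-5.2.2 ~/elasticsearch-5.2.2-node-2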

So my directory layout ends up like this:

(Screenshot: the original elasticsearch-5.2.2 directory next to the new elasticsearch-5.2.2-node-2 directory.)

Modifying the configuration

Next comes editing node 2's config/elasticsearch.yml:

# Change the cluster name
#cluster.name: my-application
cluster.name: yutao              # must be identical on every node in the cluster
# Change the node name
#node.name: node-1
node.name: node-2
node.master: false               # this node will not act as a master node
node.data: true                  # this node stores data; false would make it a search-only node
# Change the HTTP port
#http.port: 9200
http.port: 9201                  # the master already uses 9200; pick any free port in the 9200-9300 range
# Configure the TCP transport port
# transport.tcp
transport.tcp.port: 9301         # not in the file by default, add it yourself
# List the addresses of all cluster nodes
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.116.19:9300", "192.168.116.19:9301"]
# I only configured two nodes here; with three nodes you would list three addresses
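
A quick way to sanity-check the result is to strip the comments and look only at the active settings (a small sketch, run from the node-2 directory):

# show only the non-comment, non-blank lines of the node-2 config
grep -vE '^\s*#|^\s*$' config/elasticsearch.yml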

Here I want to stress one point in particular:

discovery.zen.ping.unicast.hosts: ["192.168.116.19:9300", "192.168.116.19:9301"]  # this setting must also be present on the master node, and it must list every node in the cluster

The ports in this list are the TCP transport ports, not the http.port values.
The ports in this list are the TCP transport ports, not the http.port values.
The ports in this list are the TCP transport ports, not the http.port values.

Yes, three times, because this is exactly where I wasted a lot of time. It is a real trap.
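
One way to confirm which port is which once both nodes are running (a sketch; ss ships with CentOS 7): the listeners in the 9300 range belong to the TCP transport layer, the ones in the 9200 range to HTTP.

# list the listening TCP ports of the Elasticsearch processes
ss -tlnp | grep -E '9200|9201|9300|9301'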

After that, start node 2. It will, however, fail with an error:

[2017-04-05T20:08:52,854][WARN ][o.e.d.z.UnicastZenPing   ] [node-2] [1] failed send ping to {#zen_unicast_192.168.116.19:9300_0#}{agHBxAUVS6Kmm0T5wk4ylw}{192.168.116.19}{192.168.116.19:9300}
java.lang.IllegalStateException: handshake failed with {#zen_unicast_192.168.116.19:9300_0#}{agHBxAUVS6Kmm0T5wk4ylw}{192.168.116.19}{192.168.116.19:9300}
        at org.elasticsearch.transport.TransportService.handshake(TransportService.java:364) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.discovery.zen.UnicastZenPing$PingingRound.getOrConnect(UnicastZenPing.java:393) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.discovery.zen.UnicastZenPing$3.doRun(UnicastZenPing.java:500) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.2.jar:5.2.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.transport.RemoteTransportException: [node-1][192.168.116.19:9300][internal:transport/handshake]
Caused by: org.elasticsearch.ElasticsearchSecurityException: missing authentication token for action [internal:transport/handshake]
        at org.elasticsearch.xpack.security.support.Exceptions.authenticationError(Exceptions.java:39) ~[?:?]
        at org.elasticsearch.xpack.security.authc.DefaultAuthenticationFailureHandler.missingToken(DefaultAuthenticationFailureHandler.java:74) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$AuditableTransportRequest.anonymousAccessDenied(AuthenticationService.java:483) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$handleNullToken$13(AuthenticationService.java:315) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.handleNullToken(AuthenticationService.java:320) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.consumeToken(AuthenticationService.java:247) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$extractToken$5(AuthenticationService.java:223) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.extractToken(AuthenticationService.java:236) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$authenticateAsync$0(AuthenticationService.java:184) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$lookForExistingAuthentication$2(AuthenticationService.java:201) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lookForExistingAuthentication(AuthenticationService.java:213) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.authenticateAsync(AuthenticationService.java:180) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.access$000(AuthenticationService.java:142) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:114) ~[?:?]
        at org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.inbound(ServerTransportFilter.java:142) ~[?:?]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:296) ~[?:?]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:109) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1445) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1329) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) ~[?:?]
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
        ... 1 more

The error means node 2 cannot complete the transport handshake with the master at 192.168.116.19:9300: the master has X-Pack installed, and once X-Pack security is active, unauthenticated nodes are rejected. So X-Pack has to be installed on node 2 as well.
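
To see which plugins each node actually has before and after this step, the plugin list command helps (a small sketch; the directory names are from my layout):

# master node
cd ~/elasticsearch-5.2.2 && ./bin/elasticsearch-plugin list
# node 2
cd ~/elasticsearch-5.2.2-node-2 && ./bin/elasticsearch-plugin list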

Installing X-Pack manually

If the online installation gives you trouble, you can install offline instead; the approach is to download the X-Pack package first.
Official guide: https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-installing-offline

Package: manually download the X-Pack zip file from https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip

Checksum file: sha1
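
A sketch of downloading and verifying the package (assuming, as is Elastic's usual convention, that a .sha1 file is published next to the zip):

cd /home/yutao/下载
wget https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip
wget https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip.sha1
# the two checksums should match
sha1sum x-pack-5.2.2.zip
cat x-pack-5.2.2.zip.sha1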

# The zip file is stored at /home/yutao/下载/x-pack-5.2.2.zip
# Manual (offline) installation:
bin/elasticsearch-plugin install file:///home/yutao/下载/x-pack-5.2.2.zip
-> Downloading file:///home/yutao/下载/x-pack-5.2.2.zip
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission sun.nio.ch.bugLevel write
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
-> Installed x-pack

Partway through it asks for confirmation; just answer y.

Now start node 2 again and it no longer errors out. The startup log:

[yutao@localhost elasticsearch-5.2.2-node-2]$ bin/elasticsearch
[2017-04-05T20:15:29,324][INFO ][o.e.n.Node               ] [node-2] initializing ...
[2017-04-05T20:15:30,093][INFO ][o.e.e.NodeEnvironment    ] [node-2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [11.1gb], net total_space [16.9gb], spins? [unknown], types [rootfs]
[2017-04-05T20:15:30,093][INFO ][o.e.e.NodeEnvironment    ] [node-2] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-04-05T20:15:30,094][INFO ][o.e.n.Node               ] [node-2] node name [node-2], node ID [yfSXvGJfSU2iQxQ2Y1lKeg]
[2017-04-05T20:15:30,187][INFO ][o.e.n.Node               ] [node-2] version[5.2.2], pid[11478], build[f9d9b74/2017-02-24T17:26:45.835Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-04-05T20:15:34,170][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [aggs-matrix-stats]
[2017-04-05T20:15:34,177][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [ingest-common]
[2017-04-05T20:15:34,177][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [lang-expression]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [lang-groovy]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [lang-mustache]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [lang-painless]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [percolator]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [reindex]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [transport-netty3]
[2017-04-05T20:15:34,178][INFO ][o.e.p.PluginsService     ] [node-2] loaded module [transport-netty4]
[2017-04-05T20:15:34,179][INFO ][o.e.p.PluginsService     ] [node-2] loaded plugin [x-pack]
[2017-04-05T20:15:37,804][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.XPackPlugin
[2017-04-05T20:15:39,012][INFO ][o.e.n.Node               ] [node-2] initialized
[2017-04-05T20:15:39,012][INFO ][o.e.n.Node               ] [node-2] starting ...
[2017-04-05T20:15:39,478][INFO ][o.e.t.TransportService   ] [node-2] publish_address {192.168.116.19:9301}, bound_addresses {192.168.116.19:9301}
[2017-04-05T20:15:39,485][INFO ][o.e.b.BootstrapChecks    ] [node-2] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-05T20:15:46,771][INFO ][o.e.c.s.ClusterService   ] [node-2] detected_master {node-1}{_eiOwymETTyrge_hnrFEgw}{vT7PK-fLSOylFjkJ0VEtzQ}{192.168.116.19}{192.168.116.19:9300}, added {{node-1}{_eiOwymETTyrge_hnrFEgw}{vT7PK-fLSOylFjkJ0VEtzQ}{192.168.116.19}{192.168.116.19:9300},}, reason: zen-disco-receive(from master [master {node-1}{_eiOwymETTyrge_hnrFEgw}{vT7PK-fLSOylFjkJ0VEtzQ}{192.168.116.19}{192.168.116.19:9300} committed version [12]])
[2017-04-05T20:15:48,658][INFO ][o.e.l.LicenseService     ] [node-2] license [f6ae560a-735e-4546-af87-af87944d734f] mode [trial] - valid
[2017-04-05T20:15:48,979][INFO ][o.e.h.HttpServer         ] [node-2] publish_address {192.168.116.19:9201}, bound_addresses {192.168.116.19:9201}
[2017-04-05T20:15:48,979][INFO ][o.e.n.Node               ] [node-2] started
[2017-04-05T20:16:53,935][WARN ][o.e.m.j.JvmGcMonitorService] [node-2] [gc][74] overhead, spent [583ms] collecting in the last [1s]
[2017-04-05T20:17:33,979][INFO ][o.e.m.j.JvmGcMonitorService] [node-2] [gc][114] overhead, spent [404ms] collecting in the last [1s]

The log also shows:

[2017-04-05T20:15:46,771][INFO ][o.e.c.s.ClusterService   ] [node-2] detected_master {node-1}{_eiOwymETTyrge_hnrFEgw}{vT7PK-fLSOylFjkJ0VEtzQ}{192.168.116.19}{192.168.116.19:9300}, added {{node-1}{_eiOwymETTyrge_hnrFEgw}{vT7PK-fLSOylFjkJ0VEtzQ}{192.168.116.19}{192.168.116.19:9300},}, reason: zen-disco-receive(from master [master {node-1}{_eiOwymETTyrge_hnrFEgw}{vT7PK-fLSOylFjkJ0VEtzQ}{192.168.116.19}{192.168.116.19:9300} committed version [12]])

that node 2 has found the master at 192.168.116.19:9300.

Test whether the cluster is working:

http://192.168.116.19:9200/_cluster/health?pretty=true
{
  "cluster_name" : "yutao",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 12,
  "active_shards" : 24,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
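
With X-Pack security active on both nodes, the same checks from the command line need the built-in credentials (a sketch; elastic/changeme are the 5.x defaults mentioned below):

# cluster health, authenticated
curl -u elastic:changeme 'http://192.168.116.19:9200/_cluster/health?pretty=true'
# list the nodes that have joined the cluster
curl -u elastic:changeme 'http://192.168.116.19:9200/_cat/nodes?v'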

You can also check this through the Kibana web UI:
http://192.168.116.19:5601

(After installing X-Pack) the built-in account is elastic, with password changeme.
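
If you want to replace the default password, a hedged sketch using the X-Pack change-password API ("your-new-password" is just a placeholder):

curl -u elastic:changeme -XPOST 'http://192.168.116.19:9200/_xpack/security/user/elastic/_password' \
     -H 'Content-Type: application/json' \
     -d '{ "password": "your-new-password" }'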


Here is my master node's configuration:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: yutao
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.116.19
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.116.19:9300", "192.168.116.19:9301"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

And node 2's configuration:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: yutao
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: node-2
node.master: false
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.116.19
#
# Set a custom port for HTTP:
#
#http.port: 9200
http.port: 9201
# transport.tcp
transport.tcp.port: 9301
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.116.19:9300", "192.168.116.19:9301"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
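
With both configurations in place, a sketch of running the two nodes on the one VM (-d daemonizes the process and -p writes a pid file; the directory names are from my layout):

cd ~/elasticsearch-5.2.2        && ./bin/elasticsearch -d -p node1.pid
cd ~/elasticsearch-5.2.2-node-2 && ./bin/elasticsearch -d -p node2.pid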

Finally, here are some explanations of these settings collected from around the web:

1. The three node roles in an Elasticsearch cluster

master node: master nodes mainly handle cluster metadata, such as creating and deleting indices and allocating shards.
data node: data nodes hold the data shards and handle data-related work, such as CRUD on shards plus search and aggregation; these operations are relatively heavy on CPU, memory, and I/O.
client node: client (coordinating-only) nodes route requests and can effectively be treated as load balancers.
The corresponding role combinations for a high-performance cluster topology are:

# The configuration file supports three role combinations for a high-performance cluster topology:
# 1. If the node should never be elected master and should only store data (it can sit behind a load balancer):
# node.master: false
# node.data: true
# 2. If the node should be master-eligible but store no data, keeping resources free so it can act as a coordinator:
# node.master: true
# node.data: false
# 3. If the node should be neither master nor data node, it works as a search node: it fetches data from the other nodes and assembles the search results:
# node.master: false
# node.data: false

2. Explanation of the settings in config/elasticsearch.yml

cluster.name: the cluster name, elasticsearch by default (set to yutao in this article).
node.name: the node name, used to tell nodes apart.
network.host: the address on which this node can be reached.

http.port: the port of the HTTP (REST) interface.
transport.tcp.port: the port used for inter-node TCP transport.

node.master: whether this node may act as the cluster's master; true or false.
node.data: whether this node stores data; true or false.

discovery.zen.ping.unicast.hosts: the list of machine addresses used to form the cluster. Since the 5.x releases no longer support multicast discovery, this list must be set up front, and it has to be updated whenever new machines join the cluster. If several nodes share one IP, include each node's TCP transport port.

discovery.zen.minimum_master_nodes: the minimum number of master-eligible nodes that must be visible; if fewer are reachable, that partition of the cluster stops serving requests (the original note admits this part was not verified). Setting it to 1 makes the cluster easier to form: even a single master-eligible node can then build the cluster. A worked example of the recommended value is sketched after this list.

gateway.*: gateway-related settings.

script.* / indices.*: optional settings, added as needed.
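
As promised above, a small sketch of the quorum arithmetic for discovery.zen.minimum_master_nodes (three master-eligible nodes are assumed purely for illustration; in this article's setup only node-1 is master-eligible, so the default of 1 is left alone):

# quorum = (number of master-eligible nodes / 2) + 1  ->  3 / 2 + 1 = 2
echo 'discovery.zen.minimum_master_nodes: 2' >> config/elasticsearch.yml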

