
Elasticsearch Cluster Installation and Common Errors

I. Installation preparation and notes

1. Three hosts:

elk1:192.168.85.156

elk2:192.168.85.157

elk3:192.168.85.158


2. Install JDK 1.8 on all three machines:

# java -version

java version "1.8.0_112"

Java(TM) SE Runtime Environment (build 1.8.0_112-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
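
The output above is from the Oracle JDK; if no JDK is installed yet, OpenJDK 1.8 from the stock CentOS repositories also works. A minimal sketch (assuming the stock yum repositories carry the 1.8.0 packages):

# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# java -version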


3. Operating system: CentOS release 6.5 (Final)

4. Configure the Elasticsearch yum repository

Import the GPG signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create the repo file:

vim /etc/yum.repos.d/es.repo

[elasticsearch-5.x]

name=Elasticsearch repository for 5.x packages

baseurl=https://artifacts.elastic.co/packages/5.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md
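
Optionally refresh the yum metadata afterwards to confirm the new repository is reachable (not strictly required, just a quick sanity check):

# yum clean all
# yum makecache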


II. Installing Elasticsearch

# yum install elasticsearch

# service elasticsearch start 
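
On CentOS 6 the RPM ships a SysV init script, so the service can also be registered to start at boot (optional; assuming the standard SysV init setup):

# chkconfig --add elasticsearch
# chkconfig elasticsearch on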


Verify the installation:

# curl http://127.0.0.1:9200

{

  "name" : "elk2",

  "cluster_name" : "ptes",

  "cluster_uuid" : "uWpL6GpkQMKykxYXIbytjA",

  "version" : {

    "number" : "5.1.2",

    "build_hash" : "c8c4c16",

    "build_date" : "2017-01-11T20:18:39.146Z",

    "build_snapshot" : false,

    "lucene_version" : "6.3.0"

  },

  "tagline" : "You Know, for Search"

}


The Elasticsearch configuration file is /etc/elasticsearch/elasticsearch.yml.


Run the steps above on each of elk1, elk2, and elk3.


III. Configuring the Elasticsearch cluster

1. elk1 configuration:

# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml 

cluster.name: ptes

node.name: elk1

node.master: true

node.data: true

http.enabled: true

path.data: /data/elasticsearch

path.logs: /data/elasticsearch/logs

network.host: 0.0.0.0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["elk1", "elk2", "elk3"]

discovery.zen.minimum_master_nodes: 2

bootstrap.memory_lock: false

bootstrap.system_call_filter: false


2. elk2 configuration:

#  egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml 

cluster.name: ptes

node.name: elk2

node.master: true

node.data: true

http.enabled: true

path.data: /data/elasticsearch

path.logs: /data/elasticsearch/logs

bootstrap.memory_lock: false

network.host: 0.0.0.0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["elk1", "elk2", "elk3"]

discovery.zen.minimum_master_nodes: 2


3. elk3 configuration:

# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml 

cluster.name: ptes

node.name: elk3

node.master: true

node.data: true

http.enabled: true

path.data: /data/elasticsearch

path.logs: /data/elasticsearch/logs

network.host: 0.0.0.0

discovery.zen.ping.unicast.hosts: ["elk1", "elk2", "elk3"]

discovery.zen.minimum_master_nodes: 2

bootstrap.memory_lock: false

bootstrap.system_call_filter: false


On all three machines, create the data directories and set their ownership:

mkdir -p /data/elasticsearch/logs

chown elasticsearch:elasticsearch /data/elasticsearch/   -R



IV. Starting the cluster and checking its status

1. Start Elasticsearch on elk1, elk2, and elk3:

# service elasticsearch  start 
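
If a node fails to come up, check the cluster log first; with the paths and cluster name configured above it should be written to /data/elasticsearch/logs/ptes.log (assuming cluster.name is ptes and the default logging configuration):

# tail -f /data/elasticsearch/logs/ptes.log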


2. Check the cluster status:

# curl 'localhost:9200/_cat/nodes?v'
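
With all three nodes joined, the command prints one line per node, roughly like the following (illustrative output only; the actual numbers and the elected master, marked with *, will differ):

ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.85.156           12          45   1    0.05    0.10     0.08 mdi       -      elk1
192.168.85.157           10          43   1    0.03    0.07     0.06 mdi       *      elk2
192.168.85.158           14          47   2    0.08    0.12     0.10 mdi       -      elk3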




V. Common errors and solutions


Error 1: memory locking requested for elasticsearch process but memory is not locked

Solution: in elasticsearch.yml, set

bootstrap.memory_lock: false
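
Alternatively, if you want to keep memory locking enabled (Elastic recommends it in production to prevent swapping), allow the elasticsearch user to lock memory instead of turning the option off. A sketch via /etc/security/limits.conf (the exact mechanism depends on how the service is started; RPM installs may also need MAX_LOCKED_MEMORY=unlimited in /etc/sysconfig/elasticsearch):

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited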


Error 2:

[2017-02-10T18:41:09,187][WARN ][o.e.d.z.UnicastZenPing   ] [elk2] failed to resolve host [node3]

java.net.UnknownHostException: node3

at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_112]

at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_112]

at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_112]

at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:764) ~[elasticsearch-5.1.2.jar:5.1.2]

at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:719) ~[elasticsearch-5.1.2.jar:5.1.2]

at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:629) ~[elasticsearch-5.1.2.jar:5.1.2]

at org.elasticsearch.discovery.zen.UnicastZenPing.lambda$null$0(UnicastZenPing.java:214) ~[elasticsearch-5.1.2.jar:5.1.2]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]

at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.1.2.jar:5.1.2]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]

at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]


Solution: the hosts listed in discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"] cannot be resolved. Replace node1, node2 and node3 with hostnames (or IP addresses) that actually resolve to your nodes.
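
If you prefer to keep using hostnames, make sure each one resolves on every node, for example by adding them to /etc/hosts on all three machines (a sketch, assuming the addresses from section I):

192.168.85.156 elk1
192.168.85.157 elk2
192.168.85.158 elk3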


Error 3: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

Solution:

Edit /etc/security/limits.d/90-nproc.conf and raise the soft nproc limit, changing

*          soft    nproc     1024

to

*          soft    nproc     2048


While you are at it, raise the open-file limits in /etc/security/limits.conf (this is what fixes the "max file descriptors" message shown in Error 4 below):

* soft nofile 65536

* hard nofile 65536
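
After editing the limits files, start a fresh session and confirm the new limits are actually picked up for the user that runs Elasticsearch (a quick check; -u prints the process limit, -n the open-file limit):

# su -s /bin/bash elasticsearch -c 'ulimit -u; ulimit -n'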


Error 4:

max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]

system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Solution:

The file-descriptor message is addressed by the nofile settings in /etc/security/limits.conf shown under Error 3, and the system-call-filter message is covered under Error 5. A closely related bootstrap check, "max virtual memory areas vm.max_map_count [65530] is too low", is fixed in /etc/sysctl.conf. Check whether the setting is already there:

cat /etc/sysctl.conf | grep vm.max_map_count

vm.max_map_count=262144

If it is missing, add it:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf


Error 5: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Cause:

CentOS 6 does not support seccomp, while Elasticsearch 5.x defaults bootstrap.system_call_filter to true and checks for it during bootstrap; the check fails and the node refuses to start.


Solution:

Set bootstrap.system_call_filter to false in elasticsearch.yml, placing it in the Memory section right below bootstrap.memory_lock:

bootstrap.memory_lock: false

bootstrap.system_call_filter: false


VI. Notes on the configuration options

cluster.name: ptes

The cluster name; the default is elasticsearch. A node only joins a cluster whose cluster.name matches its own, so several clusters can share the same network and are told apart by this name (in 5.x nodes find each other through the unicast hosts list rather than multicast).


node.name: ${HOSTNAME}

The name of this node. If it is not set, Elasticsearch generates a default name automatically (early releases picked a random name from a list bundled with the distribution).


node.master: true

Whether this node is eligible to be elected master (eligibility only; it does not guarantee the node becomes master). The default is true. The first master-eligible node to start usually becomes master, and if it goes down a new master is elected.


node.data: true

Whether this node stores index data; the default is true.


path.conf: /path/to/conf

The directory that holds the configuration files; by default the config folder under the Elasticsearch home.

path.data: /path/to/data

The directory where index data is stored; by default the data folder under the Elasticsearch home. Multiple paths can be given, separated by commas, for example:

path.data: /path/to/data1,/path/to/data2

path.logs: /path/to/logs

The directory where log files are written; by default the logs folder under the Elasticsearch home.


discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]

The initial list of master-eligible hosts used for unicast discovery; a new node contacts these hosts to find and join the cluster.

discovery.zen.minimum_master_nodes: 2

The minimum number of master-eligible nodes that must be visible before a master can be elected. To avoid split-brain, set it to (number of master-eligible nodes / 2) + 1; for this three-node cluster that is 3 / 2 + 1 = 2.


That covers the Elasticsearch cluster installation process and the most common errors, together with their fixes.
