Environment notes:
System configuration items

Disable swap to improve performance (optional)

```
sudo swapoff -a                # disable swap immediately
vm.swappiness=1                # in /etc/sysctl.conf: minimize the kernel's tendency to swap
bootstrap.memory_lock: true    # in elasticsearch.yml: lock the JVM heap in RAM
```
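Note that `swapoff -a` only lasts until the next reboot; to disable swap permanently, any swap entry in /etc/fstab must also be removed or commented out. A minimal sketch (the sed pattern assumes a conventional fstab layout, so review the file before running it):

```
# verify no swap devices remain active (empty output is good)
sudo swapon --show

# comment out swap lines in /etc/fstab so the change survives reboots
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```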
Increase the file descriptor limit (required)

```
ulimit -n 65535    # raise the limit for the current session
# In /etc/security/limits.conf, raise nofile to 65535

# verify the limit Elasticsearch actually sees:
curl -X GET "localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty"
```
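For the limit to persist across sessions, /etc/security/limits.conf needs explicit soft and hard entries. A minimal sketch, assuming the process runs as a user named `elasticsearch` (substitute your actual service user):

```
# /etc/security/limits.conf
elasticsearch  soft  nofile  65535
elasticsearch  hard  nofile  65535
```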
Virtual memory map count (required)

```
sysctl -w vm.max_map_count=262144    # apply immediately
# Add vm.max_map_count=262144 to /etc/sysctl.conf,
# then run `sysctl -p` to make it take effect
```
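A quick check that the kernel picked up the new value:

```
sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144
```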
Thread count (required)

```
ulimit -u 4096    # raise the limit for the current session
# In /etc/security/limits.conf, set nproc to 4096
```
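As with nofile above, the persistent form is a pair of limits.conf entries; the `elasticsearch` user name is again an assumption:

```
# /etc/security/limits.conf
elasticsearch  soft  nproc  4096
elasticsearch  hard  nproc  4096
```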
DNS cache settings (optional)

```
networkaddress.cache.ttl=<timeout>
networkaddress.cache.negative.ttl=<timeout>
```
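These are JVM-level properties rather than elasticsearch.yml settings. In ES 7.x they are typically supplied through config/jvm.options as `es.`-prefixed JVM options; the 60s/10s values below are illustrative, and the exact option names should be checked against your version's docs:

```
# config/jvm.options (values in seconds)
-Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10
```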
ES configuration items

Increase the shard limit (required). On ES 7.4.0 the maximum number of shards per node must be configured; if the limit is too low, the application will fail at runtime when it tries to create new indices.
```
curl -XPUT 'http://0.0.0.0:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "cluster": {
      "max_shards_per_node": 100000000
    }
  }
}
'
```
transient: temporary, reset on a full cluster restart
persistent: permanent, survives cluster restarts
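To confirm which of the two buckets a value landed in, the cluster settings API can simply be read back:

```
curl -X GET "localhost:9200/_cluster/settings?pretty"
```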
Option 1: increase the scroll context limit

```
curl -X PUT http://192.168.101.71:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "persistent": {
    "search.max_open_scroll_context": 100000000
  },
  "transient": {
    "search.max_open_scroll_context": 100000000
  }
}'
```
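Every open scroll pins resources on its node, so after raising the limit it is worth watching the live count; the node search stats expose it as `open_contexts`:

```
curl -X GET "localhost:9200/_nodes/stats/indices/search?pretty"
# inspect the "open_contexts" field in the response
```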
When disk usage exceeds 95%, indices are marked read-only and writes fail (optional)

```
flood stage disk watermark [95%] exceeded on [KeyWFmZzQdy101sSic1ilA][node-only][/ssd_datapath/data/nodes/0] free: 42.3gb[4.6%], all indices on this node will be marked read-only
```
Solution: raise the disk usage watermarks to 99% through Kibana.
```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "99%",
    "cluster.routing.allocation.disk.watermark.high": "99%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "99%",
    "cluster.info.update.interval": "1m"
  }
}
```
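Raising the watermarks alone does not re-enable writes on indices that were already blocked. On 7.4 and later the block should be released automatically once usage drops below the high watermark, but on earlier 7.x releases (or as an immediate fix) the `index.blocks.read_only_allow_delete` setting has to be cleared by hand. A sketch of that extra step:

```
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```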
ES thread pool queue sizes (required)

```
- thread_pool.get.queue_size=1000
- thread_pool.write.queue_size=1000
- thread_pool.analyze.queue_size=1000
- thread_pool.search.queue_size=1000
- thread_pool.listener.queue_size=1000
```
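After a restart with these settings, the effective sizes can be confirmed with the cat thread-pool API (the column list is optional, shown here for a compact view):

```
curl -X GET "localhost:9200/_cat/thread_pool?v&h=node_name,name,size,queue_size"
```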
Settings still to be tested

```
discovery.zen.commit_timeout
discovery.zen.ping_timeout
discovery.zen.fd.ping_timeout: 180s
discovery.zen.fd.ping_retries: 6
discovery.zen.fd.ping_interval: 30s
discovery.zen.ping_timeout: 120s
```
Set the ES heap size (required in production): edit the config/jvm.options file.
Recommended settings (a concrete example follows this list):
Set the minimum heap size (Xms) and the maximum heap size (Xmx) to the same value.
The more heap Elasticsearch has, the more memory it can use for caching. But an oversized heap leads to long garbage-collection pauses, so do not make it too large; keep it at no more than 50% of system RAM.
Cap Xmx at 50% of physical RAM so that enough memory remains for the kernel filesystem cache, which Lucene relies on heavily.
Do not configure the heap above 32 GB; in practice most systems should stay at around 26 GB or below, so the JVM can keep using compressed object pointers.
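As a concrete instance of the rules above: on a 64 GB host, a 26 GB heap satisfies both the 50% ceiling and the compressed-oops limit (the size is illustrative; adjust to your hardware):

```
# config/jvm.options
-Xms26g
-Xmx26g
```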
ES parameter reference: the config/elasticsearch.yml file

Two annotated example configurations:
```
# --- Example 1: zen-discovery style settings (carried over from 6.x) ---
cluster.name: elasticsearch
node.name: "es-node1"
path.data: /data/elasticsearch1/data
path.logs: /data/elasticsearch1/logs
network.host: 192.168.1.11
network.bind_host: 192.168.1.11
network.publish_host: 192.168.1.11
cluster.initial_master_nodes: ["192.168.1.11"]
transport.tcp.port: 9300
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.11:9300", "192.168.1.12:9300"]
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
indices.queries.cache.size: 20%
indices.memory.index_buffer_size: 30%
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
node.master: true
node.data: true
thread_pool.search.queue_size: 1000
thread_pool.search.size: 200
thread_pool.search.min_queue_size: 10
thread_pool.search.max_queue_size: 1000
thread_pool.search.auto_queue_frame_size: 2000
thread_pool.search.target_response_time: 6s

# --- Example 2: 7.x-style discovery (seed hosts) ---
cluster.name: es-test
node.name: node-3
node.master: true
node.data: true
path.data: /data/es/data
path.logs: /data/es/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
network.publish_host: 10.240.0.8
http.port: 9200
discovery.seed_hosts: ["10.10.10.1", "10.10.10.2", "10.10.10.3"]
cluster.initial_master_nodes: ["10.10.10.1", "10.10.10.2", "10.10.10.3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
```
ES configuration file (see the official ES 7.4 documentation for a detailed description of every setting):
```
path:
  logs:
    - /var/log/elasticsearch
  data:
    - /mnt/elasticsearch_1
    - /mnt/elasticsearch_2
    - /mnt/elasticsearch_3
cluster.name: logging-prod
node.name: prod-data-2
network.host: 192.168.1.10
discovery.seed_hosts: ["10.10.10.1", "10.10.10.2", "10.10.10.3"]
cluster.initial_master_nodes: ["10.10.10.1", "10.10.10.2", "10.10.10.3"]
```
Appendix: ES notes

Elasticsearch reserves ports 9300-9400 for cluster communication, while ports 9200-9300 are reserved for access to the Elasticsearch API. The root cause of a "master not discovered" exception is that nodes cannot reach each other on port 9300. Connectivity must work in both directions: node1 must be able to reach node2 on 9300, and vice versa.
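A quick way to verify that reachability, run from each node toward every other node (the host name `node2` is a placeholder):

```
# HTTP API port
curl -s http://node2:9200/

# transport port: -z probes without sending data, -v prints the result
nc -zv node2 9300
```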