elasticsearch/kibana install
2020-08-27

Elasticsearch is a distributed, highly scalable, near-real-time search and analytics engine. It makes it easy to search, analyze, and explore large volumes of data.
1. Elasticsearch was developed together with a data collection and log parsing engine called Logstash and an analytics and visualization platform called Kibana. The three products are designed as an integrated solution known as the "Elastic Stack" (formerly the "ELK stack").
2. Downloads from artifacts.elastic.co can be slow; it is often faster to fetch the RPMs locally with a download manager (e.g. Xunlei/Thunder) and upload them to the server before installing.
#wget -c https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.0-x86_64.rpm
#wget -c https://artifacts.elastic.co/downloads/kibana/kibana-7.9.0-x86_64.rpm
wget -c https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.0/elasticsearch-analysis-ik-7.9.0.zip
yum localinstall elasticsearch-7.9.0-x86_64.rpm
yum -y install npm git
unzip elasticsearch-analysis-ik-7.9.0.zip -d /usr/share/elasticsearch/plugins/ik
mkdir -p /data/elasticsearch/{data,logs}
chown -R elasticsearch.elasticsearch /data/elasticsearch
today=$(date +%F-%H%M)
mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.${today}
cat << EOF > /etc/elasticsearch/elasticsearch.yml
cluster.name: kcall_es # cluster name (all nodes in the same cluster must use the same name)
node.name: es_node1
node.master: true # this node is eligible to be elected master
network.host: 0.0.0.0 # listen on all interfaces
network.bind_host: 0.0.0.0
path.data: /data/elasticsearch/data # data directory (matches the mkdir above)
path.logs: /data/elasticsearch/logs # log directory (matches the mkdir above)
#bootstrap.memory_lock: true # lock the heap in RAM so it never gets swapped out
http.cors.enabled: true
http.cors.allow-origin: "*"
http.port: 9200 # HTTP port
transport.tcp.port: 9300 # inter-node transport port
cluster.initial_master_nodes: ["es_node1"] # must match node.name above
discovery.seed_hosts: ["172.18.91.249:9300"] # 7.x replacement for discovery.zen.ping.unicast.hosts
#index.analysis.analyzer.default.type: ik
EOF
cat << EOF > /root/get_nodes.sh
#!/bin/sh
curl -XGET localhost:9200/_cat/nodes
EOF
chmod +x /root/get_nodes.sh
sed -i 's/^-Xms1g/-Xms4g/;s/^-Xmx1g/-Xmx4g/' /etc/elasticsearch/jvm.options # raise the heap from the default 1g to 4g
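The heap bump can be exercised on a scratch copy first; anchoring the sed on the `-Xms`/`-Xmx` lines avoids touching any other `1g` that may appear elsewhere in jvm.options. A minimal sketch (the scratch file only mimics the two heap lines of the real `/etc/elasticsearch/jvm.options`):

```shell
# Scratch copy with the 7.x default heap lines (real file: /etc/elasticsearch/jvm.options).
cat << 'EOF' > /tmp/jvm.options
-Xms1g
-Xmx1g
EOF
# Rewrite only lines that begin with -Xms1g / -Xmx1g.
sed -i 's/^-Xms1g/-Xms4g/;s/^-Xmx1g/-Xmx4g/' /tmp/jvm.options
cat /tmp/jvm.options
```

Keep `-Xms` and `-Xmx` equal, as the stock file does, so the heap never resizes at runtime.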
3. Shay Banon created Elasticsearch's predecessor, called Compass, in 2004. He released the first version of Elasticsearch in February 2010.
4. In China there is not yet a reasonably complete monitoring and management platform built for Elasticsearch.
5. elasticsearch-head (a lightweight web front end for browsing and querying a cluster)
6. shards: ES can split a complete index into multiple shards, which lets a large index be broken into pieces and distributed across different nodes, forming the basis of distributed search. The number of primary shards can only be set when the index is created and cannot be changed afterwards.
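Because the primary shard count is fixed at creation time, it has to be passed in the index settings up front. A minimal sketch — the index name `logs-demo` and the shard/replica counts are example values, and the curl is left commented because it needs a running cluster:

```shell
# Settings for a new index: 3 primary shards, 1 replica of each (example values).
cat << 'EOF' > /tmp/index_settings.json
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
EOF
# Create the index against a live cluster:
# curl -XPUT -H 'Content-Type: application/json' \
#      localhost:9200/logs-demo --data-binary @/tmp/index_settings.json
```

Replica counts, unlike primary shard counts, can still be changed later via the index settings API.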
7. Install Kibana and configure nginx
#!/bin/sh
yum -y localinstall kibana-7.9.0-x86_64.rpm
IP=$(ifconfig eth0|grep inet|awk '{print $2}'|head -1)
today=$(date +%F-%H%M)
mv /etc/kibana/kibana.yml /etc/kibana/kibana.yml.${today}
cat << EOF > /etc/kibana/kibana.yml
server.port: 5601
server.host: "${IP}"
elasticsearch.hosts: ["http://${IP}:9200"]
kibana.index: ".kibana"
elasticsearch.preserveHost: true
i18n.locale: "zh-CN"
#server.defaultRoute: /app/system_portal
server.basePath: "/kibana"
EOF
chown kibana:kibana /etc/kibana/kibana.yml
cat << EOF > /root/restart_kibana.sh
#!/bin/sh
systemctl restart kibana.service
EOF
chmod +x /root/restart_kibana.sh
nginx configuration:
location /kibana/ {
    rewrite ^/kibana/(.*)$ /$1 break;
    proxy_pass http://172.18.91.249:5601;
    include proxy.conf;
    auth_basic "kibana";
    auth_basic_user_file conf/passwd;
}
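`auth_basic_user_file conf/passwd` points at a file (relative to the nginx prefix) that must be created by hand. A sketch using openssl's apr1 hash, so no httpd-tools package is needed — the user name `kibana` and password `change_me` are placeholders, and the file is written to /tmp for illustration:

```shell
# Generate an htpasswd-style entry: "user:$apr1$salt$hash".
# Move the result to <nginx prefix>/conf/passwd on the proxy host.
PASSFILE=/tmp/passwd
printf '%s:%s\n' "kibana" "$(openssl passwd -apr1 'change_me')" > "$PASSFILE"
cat "$PASSFILE"
```

After installing the file, `nginx -s reload` picks it up; browsers will then prompt for the credentials before proxying to Kibana.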
8. Install Logstash
#!/bin/sh
# bootstrap_servers => ["172.18.62.142:9092,172.16.0.165:9092"]
yum -y localinstall logstash-7.9.0.rpm
IP=$(ifconfig eth0|grep inet|awk '{print $2}'|head -1)
today=$(date +%F-%H%M)
confile=/etc/logstash/conf.d/kafka.conf # pipeline configs belong in conf.d, not logstash.yml
[ -f ${confile} ] && mv ${confile} ${confile}.${today}
cat << 'EOF' > ${confile}
input {
  kafka {
    bootstrap_servers => ["172.18.62.142:9092"]
    topics => ["ocpc"]
    codec => "plain"
    consumer_threads => 2
    group_id => "logstash_kafka"
    client_id => "logstash_1"
    decorate_events => false
    auto_offset_reset => "latest"
  }
}
filter {
  grok {
    match => [
      "message", "(?<env>(.*)) (?<time>[^ ]+ [^ ]+) (?<level>[^ ]+) (?<sessionId>[a-zA-Z]+\[[0-9]+\]) (?<body>(.*))",
      "message", "(?<env>(.*)) (?<time>[^ ]+ [^ ]+) (?<level>[^ ]+) (?<sessionId>[0-9]+) (?<body>(.*))",
      "message", "(?<env>(.*)) (?<time>[^ ]+ [^ ]+) (?<level>[^ ]+) (?<sessionId>(.)(.)(.)[0-9]+)(?<body>(.*))"
    ]
    remove_field => ["message","@version","host"]
  }
}
output {
  elasticsearch {
    hosts => ["172.18.91.249:9200"]
    manage_template => true
    index => "elai-%{+YYYY.MM.dd}"
    template => "/etc/logstash/template.json"
    template_name => "template"
    template_overwrite => true
  }
}
EOF
chown logstash:logstash ${confile}
cat << EOF > /root/restart_logstash.sh
#!/bin/sh
systemctl restart logstash.service
EOF
chmod +x /root/restart_logstash.sh
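The output block references /etc/logstash/template.json, which the script never creates. A minimal legacy index-template sketch — the field types and shard counts are assumptions (the field names mirror the grok captures above); it is written to /tmp for illustration and should be copied to /etc/logstash/template.json on the Logstash host:

```shell
# Minimal legacy template matching the "elai-*" indices written by the output above.
cat << 'EOF' > /tmp/template.json
{
  "index_patterns": ["elai-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "env":       { "type": "keyword" },
      "time":      { "type": "text" },
      "level":     { "type": "keyword" },
      "sessionId": { "type": "keyword" },
      "body":      { "type": "text" }
    }
  }
}
EOF
# cp /tmp/template.json /etc/logstash/template.json
```

With `template_overwrite => true`, Logstash re-installs this template on startup, so edits to the file take effect after the next restart.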