This article summarizes our current practice of monitoring Kubernetes clusters with Prometheus. We chose Prometheus as our monitoring system and use it to monitor the following layers:
* Infrastructure layer: monitor the resources of every host server (both Kubernetes Nodes and non-Kubernetes nodes), such as CPU, memory, network throughput and bandwidth usage, disk I/O, and disk usage.
* Middleware layer: monitor middleware deployed independently outside the Kubernetes cluster, e.g. MySQL, Redis, RabbitMQ, ElasticSearch, Nginx.
* Kubernetes cluster: monitor the key metrics of the Kubernetes cluster itself.
* Applications deployed on the Kubernetes cluster: monitor the applications running on the cluster.
1. Monitoring the Infrastructure and Middleware Layers
Infrastructure-layer metrics are, naturally, scraped by Prometheus (https://www.kubernetes.org.cn/tags/prometheus) from node_exporter. Because the servers we monitor include both Kubernetes nodes and nodes running standalone middleware, we did not deploy node_exporter as a DaemonSet on k8s; instead we use Ansible to deploy the node_exporter binary on every server that needs monitoring. The Prometheus that scrapes metrics from these node_exporters is likewise deployed with Ansible, outside the Kubernetes cluster, and its configuration file prometheus.yml is generated from an Ansible j2 template.

Middleware-layer monitoring works much the same way: Ansible deploys the appropriate exporter on each host running middleware, and the same out-of-cluster Prometheus scrapes metrics from those exporters, with prometheus.yml again generated from an Ansible j2 template.
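As a minimal sketch of that templating step, a prometheus.yml j2 template for the static node_exporter targets might look like the following (the inventory group name `monitored` is hypothetical; node_exporter listens on port 9100 by default):

```jinja
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets:
{% for host in groups['monitored'] %}
        - '{{ hostvars[host].ansible_host }}:9100'
{% endfor %}
```

Rendering this with Ansible's template module regenerates the target list from the inventory, so adding a server to monitoring is just adding it to the inventory group and re-running the playbook.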
2. Monitoring the Kubernetes Cluster
Given Kubernetes' RBAC mechanism and certificate authentication, the most convenient way to monitor the Kubernetes cluster is of course to deploy Prometheus inside the cluster. Our monitoring system, however, is centered on the Prometheus outside the k8s cluster: Grafana and alerting both use that external instance. So if we also deploy a Prometheus inside the Kubernetes cluster, it must be joined with the external one. Fortunately, Prometheus supports Federation.
2.1 A Brief Introduction to Prometheus Federation
Federation allows one Prometheus server to scrape selected time series from another Prometheus server. It is Prometheus' scaling mechanism, allowing a deployment to grow from a single node to multiple nodes, which in practice are usually arranged in a tree-like hierarchy. Below is the federation configuration example from the official Prometheus documentation:
```yaml
- job_name: 'federate'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job="prometheus"}'
      - '{__name__=~"job:.*"}'
  static_configs:
    - targets:
      - 'source-prometheus-1:9090'
      - 'source-prometheus-2:9090'
      - 'source-prometheus-3:9090'
```
The Prometheus server with this configuration scrapes monitoring data from the /federate endpoints of the three servers source-prometheus-1 through source-prometheus-3.
The match[] parameters select only the series that carry the label job="prometheus" or whose metric name matches job:.*.
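The match[] selectors are simply URL-encoded query parameters on the /federate endpoint, so a federated scrape can be reproduced with any HTTP client. A small sketch of how the request URL is built (the hostname is taken from the example above):

```python
from urllib.parse import urlencode

# Build the /federate request URL produced by the federation job above.
# Passing a list of tuples lets match[] appear once per selector.
base = "http://source-prometheus-1:9090/federate"
params = [("match[]", '{job="prometheus"}'),
          ("match[]", '{__name__=~"job:.*"}')]
url = base + "?" + urlencode(params)
print(url)
```

Fetching that URL (with curl or a browser) returns the matching series in the Prometheus text exposition format, which is handy for debugging which series a federation job will actually pull.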
2.2 Deploying Prometheus on Kubernetes
As described above, we use Prometheus federation: the Prometheus outside the k8s cluster scrapes monitoring data from the Prometheus inside the cluster, and the external Prometheus is the authoritative store for that data. The in-cluster Prometheus can therefore simply use emptyDir for its storage layer and retain data for only 24 hours (or less); even if this instance fails, it can safely be rescheduled onto another cluster node.
Deploying Prometheus on k8s is straightforward and needs only four files: prometheus.rbac.yml, prometheus.config.yml, prometheus.deploy.yml, and prometheus.svc.yml. The example below deploys Prometheus into the kube-system namespace.
prometheus.rbac.yml defines the ServiceAccount, ClusterRole, and ClusterRoleBinding that the Prometheus container needs in order to access the k8s apiserver; see the example in the Prometheus source repository (https://github.com/prometheus/prometheus/blob/master/documentation/examples/rbac-setup.yml):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
```
prometheus.config.yml holds Prometheus' configuration file in a ConfigMap; see the example in the Prometheus source repository (https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
```
prometheus.deploy.yml defines the Prometheus Deployment:
```yaml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: harbor.frognew.com/prom/prometheus:2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      imagePullSecrets:
      - name: regsecret
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
```
prometheus.svc.yml defines the Prometheus Service. Prometheus needs to be exposed outside the cluster via NodePort, LoadBalancer, or an Ingress so that the external Prometheus can reach it:
```yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
```
2.3 Configuring Prometheus Federation
With Prometheus deployed on the Kubernetes cluster, the next step is to configure the external Prometheus to pull data from the in-cluster one.
A single statically configured job is all that is needed:
```yaml
- job_name: 'federate'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job=~"kubernetes-.*"}'
  static_configs:
    - targets:
      - '<nodeip>:30003'
```
Note that with the configuration above, the external Prometheus pulls the monitoring data of every job on the k8s cluster whose name starts with kubernetes-.
2.4 Grafana Dashboards for the Kubernetes Cluster
For the monitoring dashboard, Kubernetes cluster monitoring (via Prometheus) (https://grafana.com/dashboards/162) works well. For Pods and Deployments there are also these two dashboards: Kubernetes Pod Metrics (https://grafana.com/dashboards/747) and Kubernetes Deployment metrics (https://grafana.com/dashboards/741).
2.5 Alerting Rules for the Kubernetes Cluster
The liveness of the key components, the apiserver and the kubelet, can be monitored with the following expression:

```
up{job=~"kubernetes-apiservers|kubernetes-nodes|kubernetes-cadvisor"} == 0
```
More alerting rules can be derived by reviewing the key metrics shown on the Grafana dashboards from section 2.4 and picking suitable ones. In practice, a good monitoring system is not the one with the largest possible number of metrics and alerting rules.
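As a sketch, the liveness expression above can be wired into a Prometheus 2.x rule file (the file name, alert name, and severity label here are our own choices) and referenced from rule_files in prometheus.yml:

```yaml
groups:
- name: kubernetes-components
  rules:
  - alert: KubernetesComponentDown
    # Fires when an apiserver, node, or cAdvisor target has been
    # unreachable for 5 consecutive minutes.
    expr: up{job=~"kubernetes-apiservers|kubernetes-nodes|kubernetes-cadvisor"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "{{ $labels.job }} target {{ $labels.instance }} is down"
```

The `for: 5m` clause suppresses flapping from a single failed scrape; rule files can be validated offline with promtool before reloading Prometheus.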
3. Monitoring Applications Deployed on the Kubernetes Cluster
Monitoring applications deployed on the Kubernetes cluster has two aspects:
* The status of the various resource objects on the cluster — Pods, DaemonSets, Deployments, Jobs, CronJobs, and so on — needs to be monitored, since it reflects the state of the applications deployed with those resources. Looking at the metrics Prometheus already pulls from the k8s cluster (which come mainly from the apiserver and the cAdvisor embedded in the kubelet), there are no per-resource-object status metrics among them. For Prometheus, the answer is of course to introduce a new exporter to expose these metrics, and Kubernetes provides exactly what we need: kube-state-metrics (https://github.com/kubernetes/kube-state-metrics).
* Monitoring inside the applications themselves. This is tightly coupled to each application's language, framework, and technology stack — e.g. JVM monitoring for Java applications, GC monitoring for Go applications — and requires the application itself to act as an exporter, or to run an exporter as a sidecar container in the application's Pod.
This section focuses on kube-state-metrics; in-application monitoring practice will be summarized separately when time permits. kube-state-metrics uses client-go (https://github.com/kubernetes/client-go), the Go client for Kubernetes, to obtain metrics about the various resource objects in the cluster.
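To give a feel for the data, kube-state-metrics exposes plain Prometheus gauges, one series per resource object and state. The samples below are illustrative only (namespaces, pod, and node names are made up):

```
kube_pod_status_phase{namespace="default",pod="myapp-6d8f6c7b9-abcde",phase="Running"} 1
kube_node_status_condition{node="node1",condition="Ready",status="true"} 1
kube_deployment_status_replicas_available{namespace="default",deployment="myapp"} 3
```

Each possible phase/condition gets its own series with value 0 or 1, which is what makes the alerting rules in section 3.2 simple equality checks.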
3.1 Deploying kube-state-metrics on Kubernetes
kube-state-metrics already provides manifest files for deployment on Kubernetes; they can all be found at https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes.
Once kube-state-metrics is deployed to Kubernetes, the in-cluster Prometheus automatically discovers it under the kubernetes-service-endpoints job and starts scraping its metrics — and of course the external Prometheus then pulls this data from the in-cluster Prometheus as well. This works because of the kubernetes-service-endpoints job configured in prometheus.config.yml in section 2.2: the kube-state-metrics manifest kube-state-metrics-service.yaml defines the kube-state-metrics Service with the annotation prometheus.io/scrape: 'true', so its endpoint is discovered automatically by Prometheus.
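Any Service that exposes metrics can opt in to the same auto-discovery via these annotations. A sketch (the service name, port, and path here are hypothetical; the annotation names match the relabel_configs of the kubernetes-service-endpoints job in section 2.2):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'      # which port Prometheus should scrape
    prometheus.io/path: '/metrics'  # optional; /metrics is the default
spec:
  ports:
  - port: 8080
  selector:
    app: myapp
```

With this in place, no change to prometheus.yml is needed when a new metrics-exposing Service is added to the cluster.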
All the monitoring metrics exposed by kube-state-metrics are listed in the kube-state-metrics Documentation (https://github.com/kubernetes/kube-state-metrics/tree/master/Documentation).
3.2 Alerting Rules
Based on the metrics obtained from kube-state-metrics, we currently use the following alerting rules:
* A Job has failed:
  kube_job_status_failed{job="kubernetes-service-endpoints",k8s_app="kube-state-metrics"}==1
* A cluster node is in a bad state: kube_node_status_condition{condition="Ready",status!="true"}==1
* A cluster node is short of memory or disk:
  kube_node_status_condition{condition=~"OutOfDisk|MemoryPressure|DiskPressure",status!="false"}==1
* A PVC in the cluster has failed: kube_persistentvolumeclaim_status_phase{phase="Failed"}==1
* A Pod in the cluster failed to start: kube_pod_status_phase{phase=~"Failed|Unknown"}==1
* A Pod container restarted within the last 30 minutes: changes(kube_pod_container_status_restarts[30m])>0
The alerts on Pod status are particularly valuable: after Jenkins finishes an automated CI/CD release, there is no need to sit in front of the Kubernetes Dashboard confirming that all Pods of the Deployment have come up, because if anything goes wrong Prometheus will fire an alert.
This article was reposted from the Chinese Kubernetes community: Prometheus監(jiān)控實(shí)踐:Kubernetes集群監(jiān)控 (https://www.kubernetes.org.cn/3418.html).