Getting Started with Monitoring Kubernetes Applications

Introduction

In the previous installment, you got to the point of being able to release a Kubernetes application. As an application developer I would love to stop there, but if performance drops or a problem is found after the release, it is the developer who has to respond. So in this installment and the next, I will explain the minimum monitoring and logging mechanisms and methods for Kubernetes applications, along with the concept of observability and how to introduce it.

Monitoring and observability

Needless to say, monitoring is required to operate an application, and Kubernetes applications are no exception. As with non-container applications, metrics such as load, access counts, and error rates must be measured, and because overload, software faults, communication failures, and hardware failures inevitably occur, you need to be able to respond to them, raising alerts when they happen. Many of us have experienced a flood of alerts right after a release, or glanced at a dashboard to find CPU usage through the roof.

In recent years it has been said that, in addition to monitoring, we need to work on "observability": visibility into what is going on inside the application. Monitoring visualizes the state of the application and sends notifications. Monitoring tells you that something abnormal has happened, but to actually resolve it you have to investigate, isolate the problem, and identify the cause. This investigation is where the trouble starts: the logs to examine span multiple lines or multiple files and are hard to correlate, and once node or workload autoscaling is introduced it is hard even to find where the relevant log is in the first place. And even when you finally manage to find the log in question, the trail goes cold there, so you change the log output and wait for the problem to reproduce... a familiar story. In particular, when a microservice architecture is adopted, the investigation often has to cross team boundaries and does not go smoothly.

Observability is an effort to know what is happening by going one step beyond application metrics and logs, into the application's internals. When something happens, if the problem can be narrowed down to a specific part inside the application, it becomes considerably easier to deal with. For example, for an application like a web service that receives a request and returns a response, we record and visualize not only the status code and response time of the request as a whole, but also how each service and function behaved during processing, aiming to help identify and resolve problems. While cloud native architectures such as Kubernetes have sped up releases, they have also become more complex due to autoscaling and service decomposition, and securing observability has come to be required in order to respond quickly and intelligently to the problems that occur.

With that in mind, this time I will cover monitoring of Kubernetes applications. Next time, we will introduce tracing into the application to achieve observability and follow the flow of processing across microservices. I will keep the introduction of individual monitoring tools and the explanation of their settings to a minimum, and focus on the topics you need to know as an application developer.

Kubernetes Metrics monitoring

Let's start by visualizing metrics in Kubernetes. Kubernetes provides the Metrics API as the standard way to obtain Pod and Node metrics. The Metrics API is implemented by a plug-in, so on a cluster built with kubeadm as in this series it cannot be used as-is. The most common way to use the Metrics API is to install Metrics Server on the cluster. Metrics Server obtains metrics from the container runtime, such as containerd, through the kubelet, and exposes them so that they can be retrieved through kube-apiserver.

Let's install it on the cluster right away. As with the components we have seen so far, a single command does the job.

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml

Normally that would be all, but if you built the cluster as in the second installment, one more step is needed. metrics-server accesses the kubelet to obtain metrics, but on a cluster built with kubeadm as in this series, a kubelet certificate error occurs as-is. Either configure the kubelet certificates correctly so that they are signed by the Kubernetes CA, or ignore the certificate error. Here we take the latter approach.


$ kubectl patch -n kube-system deployment/metrics-server --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname" },{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls" }]'

Once the installation is complete, the Metrics API is enabled. You can actually display the CPU and memory usage of Nodes and Pods with the kubectl top command. It takes a little while for the values to be reflected.

$ kubectl top node
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
cocoa     371m         18%    2584Mi          67%
vanilla   128m         6%     2182Mi          56%
$ kubectl top pod -n sample-app
NAME                             CPU(cores)   MEMORY(bytes)
accesscount-546568fbcb-882dd     1m           120Mi
accesscountdb-0                  1m           21Mi
article-bd679456-7lkfs           1m           175Mi
articledb-0                      1m           35Mi
rank-5c6485cb74-slsrx            1m           185Mi
rankdb-0                         1m           35Mi
website-578dc78898-xntw5         1m           61Mi

You can use a dashboard tool such as kubernetes-dashboard to visualize the data obtained from the Metrics API in a web UI. The Metrics API is also used by Kubernetes's autoscaling mechanisms, Horizontal Pod Autoscaler and Vertical Pod Autoscaler. By introducing Metrics Server, you can autoscale Pods using two metrics, CPU usage and memory usage (a minimal example is sketched below); this series will not go into further detail, so please refer to the official documentation. Incidentally, by using plug-ins other than Metrics Server you can expose additional metrics through the Metrics API. To autoscale on application-specific metrics such as the number of requests, the Metrics API is often extended with Prometheus, described later.
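As one illustration, a minimal HorizontalPodAutoscaler that scales on CPU utilization might look like the following. The target Deployment name website is taken from the sample application, but this resource is not part of the series' manifests, so treat it only as a sketch.

# Hypothetical HPA scaling the website Deployment on average CPU utilization
apiVersion: autoscaling/v2        # use autoscaling/v2beta2 on clusters older than v1.23
kind: HorizontalPodAutoscaler
metadata:
  name: website
  namespace: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: website
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80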

Note, however, that the Metrics API is a mechanism for obtaining Pod and Node metrics; it is not suited to monitoring application-specific metrics. A good approach is to use the Metrics API for autoscaling and, in parallel, use an external monitoring tool for operational monitoring.

Metrics monitoring using Prometheus

So how do you monitor application-specific metrics? Many of the monitoring tools used in non-Kubernetes systems also support Kubernetes, so continuing to use them is a perfectly good option. Here we will use Prometheus, the most widely used choice in the Kubernetes world. Many other tools, such as Elasticsearch and Datadog, can also be used with Kubernetes, so if you are already running one of them, look into its Kubernetes support.

Let's actually try monitoring Kubernetes with Prometheus. The Prometheus server itself can run either inside or outside the Kubernetes cluster; this time we build it inside the cluster using Helm. Also, to keep things simple, we install it without a PersistentVolume. Data will be lost when the Pod restarts, so prepare a PersistentVolume if you operate it for real.

$ kubectl create namespace monitoring
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
$ helm repo update
$ helm install prometheus prometheus-community/prometheus -f - --namespace monitoring <
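The values passed via -f are omitted above. Purely as an illustration, and assuming the prometheus-community/prometheus chart, values that disable persistence might look like the following; the exact keys depend on the chart version, so check the chart's values.yaml.

# Hypothetical values for helm install prometheus (keys vary by chart version)
server:
  persistentVolume:
    enabled: false   # store metrics in an emptyDir; data is lost when the Pod restarts
alertmanager:
  persistentVolume:
    enabled: false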

Once the Prometheus server is up, have the application publish metrics. We modified the application by adding Prometheus exporter modules for Spring Framework and Nuxt.js and rebuilt the images. If you want to see the details of the changes, see the commit log on GitLab.com.
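For reference, the typical way to publish Prometheus metrics from a Spring Boot service is Spring Boot Actuator together with the Micrometer Prometheus registry (the io.micrometer:micrometer-registry-prometheus dependency). The snippet below is a minimal sketch of that approach and is not necessarily identical to the changes in the sample application's commit log.

# application.yaml (excerpt): expose the /actuator/prometheus endpoint
management:
  endpoints:
    web:
      exposure:
        include: prometheus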

Once the exporters are in place, you need to configure the Prometheus side to scrape metrics from them. Prometheus has a configuration block called kubernetes_sd_configs for service discovery in Kubernetes clusters, and in most cases it is convenient to use it. With this configuration, Prometheus automatically scrapes Pods and Services that carry specific annotations.

# base/deployment-accesscount.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: accesscount
  name: accesscount
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/actuator/prometheus'
        prometheus.io/port: '8080'
...

# base/deployment-website.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: website
  name: website
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/'
        prometheus.io/port: '9091'
...
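For reference, the scrape job that picks up these annotations looks roughly like the following. This is a simplified sketch of the kind of job the prometheus-community chart configures by default, not the exact configuration used in this environment.

# prometheus.yml (simplified sketch of annotation-based Pod discovery)
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                 # discover every Pod through the Kubernetes API
    relabel_configs:
      # keep only Pods annotated with prometheus.io/scrape: 'true'
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # use the prometheus.io/path annotation as the metrics path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # combine the Pod IP with the prometheus.io/port annotation
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__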

With that, metrics are published on the application side. Next, let's look at the metrics of the Kubernetes resources themselves, such as Pods and Nodes. Monitoring via the Metrics API is specific to Kubernetes and cannot be used by Prometheus directly, so to collect Pod and Node metrics in Prometheus you need to install separate exporters instead of metrics-server. The most widely used are node-exporter for Node metrics and kube-state-metrics for Pods and other Kubernetes resources. When Prometheus is installed on a Kubernetes cluster with Helm, these exporters are installed at the same time.

$ kubectl -n monitoring get pod
NAME                                             READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-7bbdfbf666-m5v9m         2/2     Running   0          44h
prometheus-kube-state-metrics-696cf79768-p98qw   1/1     Running   0          44h
prometheus-node-exporter-4jsnj                   1/1     Running   0          44h
prometheus-node-exporter-7k8ts                   1/1     Running   0          44h
prometheus-pushgateway-898d5bdb9-wr89g           1/1     Running   0          44h
prometheus-server-66f5858784-4b7rq               2/2     Running   0          44h

Now both application-specific metrics and Kubernetes resource metrics can be handled in Prometheus. Especially for an autoscaled application, you will want to display the number of replicas of the Deployment resource and the state of the Pods alongside the application metrics. So let's put them on a dashboard using the visualization tool Grafana. In the figure, the number of Deployment replicas, the Pod status, and the application's request counts and garbage collection are gathered on a single dashboard. Incidentally, Grafana can also be installed with Helm.
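As an illustration, the panels in such a dashboard can be built from queries along the following lines. The metric names come from kube-state-metrics, node-exporter, and Micrometer; the label selectors are assumptions based on the sample application and may need adjusting to your relabeling configuration.

# Available replicas of the website Deployment (kube-state-metrics)
kube_deployment_status_replicas_available{namespace="sample-app", deployment="website"}

# Number of Pods in each phase in the sample-app namespace (kube-state-metrics)
sum by (phase) (kube_pod_status_phase{namespace="sample-app"})

# Requests per second handled by the Spring Boot services (Micrometer)
sum(rate(http_server_requests_seconds_count[5m]))

# Time spent in JVM garbage collection per second (Micrometer)
sum(rate(jvm_gc_pause_seconds_sum[5m]))

# Node CPU usage derived from node-exporter
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))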

Kubernetes log monitoring

Now that metrics can be monitored, let's touch on log monitoring. For container applications, including those on Kubernetes, the best practice is to write logs to standard output rather than to log files. Kubernetes lets you view the logs each container writes to standard output through the API. Run the kubectl logs command and check for yourself. Like the tail command, the -f option lets you follow the output in real time.

$ kubectl -n sample-app logs article-7c65b647b7-qjcjt
(ASCII art banner)
sample1 :: (v0.2.0-SNAPSHOT)

2021-09-28 05:39:34.467  INFO 6 --- [main] c.c.k.s.article.ArticleApplication       : Starting ArticleApplication v0.2.0-SNAPSHOT using Java 11.0.11 on article-7c65b647b7-qjcjt with PID 6 (/article/article.jar started by root in /article)
2021-09-28 05:39:34.473 DEBUG 6 --- [main] c.c.k.s.article.ArticleApplication       : Running with Spring Boot v2.4.2, Spring v5.3.3
2021-09-28 05:39:34.474  INFO 6 --- [main] c.c.k.s.article.ArticleApplication       : The following profiles are active: develop
2021-09-28 05:39:35.946  INFO 6 --- [main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data R2DBC repositories in DEFAULT mode.
2021-09-28 05:39:36.234  INFO 6 --- [main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 281 ms. Found 1 R2DBC repository interfaces.
2021-09-28 05:39:38.569  INFO 6 --- [main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 1 endpoint(s) beneath base path '/actuator'
2021-09-28 05:39:38.963  INFO 6 --- [main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port 8080
2021-09-28 05:39:38.993  INFO 6 --- [main] c.c.k.s.article.ArticleApplication       : Started ArticleApplication in 5.808 seconds (JVM running for 7.129)
2021-09-28 05:39:38.997 DEBUG 6 --- [main] c.c.k.s.article.ArticleApplication       : Article application started!
2021-09-28 06:10:55.071 DEBUG 6 --- [or-http-epoll-3] c.c.k.s.a.controller.ArticleController : access GET /api/articles/ dispatched
2021-09-28 06:11:01.653 DEBUG 6 --- [or-http-epoll-4] c.c.k.s.a.controller.ArticleController : access GET /api/articles/ dispatched
2021-09-28 07:49:49.729 DEBUG 6 --- [or-http-epoll-1] c.c.k.s.a.controller.ArticleController : access GET /api/articles/ dispatched

When multiple Pods run in parallel under a Deployment resource, you can display the logs of Pods sharing the same label by specifying the --selector option of the kubectl logs command. It is just a little hard to read, so a tool like stern, which color-codes the output per Pod, is convenient. Since every Kubernetes operation is just a request to kube-apiserver, such extension tools are actively developed. krew is a mechanism for managing plug-ins for the kubectl command. Many plug-ins are registered, so it is worth looking for ones you can use (note: as of October 2021, stern is not registered in the krew repository).

$ stern -n sample-app accesscount

Log monitoring using Loki

For logs, too, you can use the tools you used in non-Kubernetes systems. Typical choices are log collectors such as Fluentd, Promtail, and Logstash, with the collected logs stored in Elasticsearch, Loki, or object storage such as S3. Since we used Grafana to visualize the metrics, let's use Promtail and Loki, which are developed by Grafana Labs.

Deploy Promtail and Loki with Helm as before. The values have been changed so that Promtail forwards logs to Loki. Promtail is placed on each Node as a DaemonSet and automatically collects the logs each container writes to standard output. Incidentally, each container's logs are written under /var/log/pods/.

$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm install loki grafana/loki
$ helm install promtail grafana/promtail -f - <
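The values passed via -f are omitted above. As a rough sketch, pointing Promtail at the Loki service installed by helm install loki would look something like the following with the grafana/promtail chart; the exact key depends on the chart version (older versions use config.lokiAddress, newer ones a config.clients list).

# Hypothetical Promtail values (key names vary by chart version)
config:
  clients:
    - url: http://loki:3100/loki/api/v1/push   # push API of the Loki service installed above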

If you add Loki as a data source in Grafana, you can display the application logs. Labels for namespace, node, and pod are attached to the logs. In the figure, specifying the namespace narrows the view down to the application's logs. If you set Pod labels properly, finer-grained filtering becomes easy, such as extracting only a particular subsystem of the application and checking its logs.
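For example, queries in Grafana's Explore view might look like the following LogQL. The label values are assumptions based on the sample application.

# All logs from the sample application's namespace
{namespace="sample-app"}

# Only the article Pods, filtered to lines containing "ERROR"
{namespace="sample-app", pod=~"article-.*"} |= "ERROR"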

Some applications, depending on how they are built, write logs to files rather than to the container's standard output. In that case, configure the Pod so that a Promtail sidecar container in the same Pod forwards the logs to Loki. By sharing a Volume between the containers, the Promtail container can read and forward the logs written by the application container. An excerpt of such a manifest follows.

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      volumes:
      - name: logs
      containers:
      - name: promtail-container
        image: grafana/promtail
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      - name: main-container
        image: main-image
        volumeMounts:
        - name: logs
          mountPath: /var/log/app

Conclusion

This time, we covered monitoring of applications in a Kubernetes environment. As you have seen, metrics and logs can be monitored much as with non-Kubernetes applications. Next time, going one step beyond monitoring, we will introduce observability with concrete examples. Stay tuned.
