## Getting logs from all cluster Pods and sending them to Loki

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: all-logs
spec:
  type: KubernetesPods
  destinationRefs:
    - loki-storage
```
## Reading Pod logs from a specified namespace with a specified label, sending them to Loki and Elasticsearch simultaneously

Reading Pod logs from the `tests-whispers` namespace with the label `app=booking`:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: whispers-booking-logs
spec:
  type: KubernetesPods
  kubernetesPods:
    namespaceSelector:
      matchNames:
        - tests-whispers
    labelSelector:
      matchLabels:
        app: booking
  destinationRefs:
    - loki-storage
    - es-storage
```
## Creating a source in a namespace and reading the logs of all Pods in that namespace, sending them to Loki

The following pipeline creates a source in the namespace:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: PodLoggingConfig
metadata:
  name: whispers-logs
  namespace: tests-whispers
spec:
  clusterDestinationRefs:
    - loki-storage
```
## Reading only Pods in the specified namespace that have a certain label

An example of reading only Pods that have the label `app=booking`:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: PodLoggingConfig
metadata:
  name: whispers-logs
  namespace: tests-whispers
spec:
  labelSelector:
    matchLabels:
      app: booking
  clusterDestinationRefs:
    - loki-storage
```
## Migration from Promtail to log-shipper

Remove the `/loki/api/v1/push` path from the previously used Loki URL. Vector adds this path automatically when working with a Loki destination.
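As a sketch of the change (the hostname below is hypothetical), the push path can be stripped with plain shell parameter expansion:

```shell
# Hypothetical Promtail-era Loki URL: it still carries the push-API path,
# which Vector appends on its own and therefore must not be in the config.
old_url="https://loki.example.com/loki/api/v1/push"

# Remove the trailing path via suffix stripping.
new_url="${old_url%/loki/api/v1/push}"

echo "$new_url"   # → https://loki.example.com
```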
## Working with Grafana Cloud

This documentation assumes that you have already created an API key.

First, encode your Grafana Cloud access token in base64:

```bash
echo -n "<YOUR-GRAFANACLOUD-TOKEN>" | base64 -w0
```
Then create a ClusterLogDestination:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  type: Loki
  loki:
    auth:
      password: PFlPVVItR1JBRkFOQUNMT1VELVRPS0VOPg==
      strategy: Basic
      user: "<YOUR-GRAFANACLOUD-USER>"
```
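The base64 value in the example can be checked locally; a quick sketch assuming GNU coreutils `base64` (`-d` decodes):

```shell
# Decode the password field from the example above back to the placeholder.
echo "PFlPVVItR1JBRkFOQUNMT1VELVRPS0VOPg==" | base64 -d
# → <YOUR-GRAFANACLOUD-TOKEN>

# Round trip: re-encoding the placeholder (note -n: no trailing newline)
# yields the same string.
encoded=$(echo -n "<YOUR-GRAFANACLOUD-TOKEN>" | base64)
echo "$encoded"   # → PFlPVVItR1JBRkFOQUNMT1VELVRPS0VOPg==
```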
Now you can create a PodLoggingConfig or ClusterLoggingConfig and send logs to Grafana Cloud.
## Adding a Loki source to Deckhouse Grafana

You can work with Loki from the Grafana that is built into Deckhouse. Just add a GrafanaAdditionalDatasource:

```yaml
apiVersion: deckhouse.io/v1
kind: GrafanaAdditionalDatasource
metadata:
  name: loki
spec:
  access: Proxy
  basicAuth: false
  jsonData:
    maxLines: 5000
    timeInterval: 30s
  type: loki
  url: http://loki.loki:3100
```
## Elasticsearch < 6.X usage

For Elasticsearch < 6.0, doc_type indexing must be enabled. Configure it as follows:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://192.168.1.1:9200
    docType: "myDocType" # Set any string here; it must not start with '_'.
    auth:
      strategy: Basic
      user: elastic
      password: c2VjcmV0IC1uCg==
```
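A small sketch of the constraint noted in the comment above (the helper function is hypothetical, not part of any tooling):

```shell
# Reject docType values that begin with '_' (such names are reserved
# in Elasticsearch), accept everything else.
is_valid_doctype() {
  case "$1" in
    _*) return 1 ;;
    *)  return 0 ;;
  esac
}

is_valid_doctype "myDocType" && echo "myDocType: ok"
is_valid_doctype "_doc" || echo "_doc: starts with '_'"
```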
## Index template for Elasticsearch

It is possible to route logs to particular indexes based on metadata using index templating:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://192.168.1.1:9200
    index: "k8s-{{ namespace }}-%F"
```

In the example above, a separate Elasticsearch index is created for each Kubernetes namespace.

This feature also works well in combination with `extraLabels`:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://192.168.1.1:9200
    index: "k8s-{{ service }}-{{ namespace }}-%F"
  extraLabels:
    service: "{{ service_name }}"
```
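To preview what index names such a template produces, here is a rough simulation (not Vector itself): `{{ … }}` placeholders are taken from event fields, and `%F` expands as a `strftime`-style date. GNU `date` is assumed for the `-d` flag.

```shell
service="backend"
namespace="production"
template="k8s-{{ service }}-{{ namespace }}-%F"

# Substitute the event fields with sed, then let date(1) expand %F.
resolved=$(printf '%s' "$template" \
  | sed -e "s/{{ service }}/$service/" -e "s/{{ namespace }}/$namespace/")
date -d "2023-05-01" +"$resolved"   # → k8s-backend-production-2023-05-01
```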
## Splunk integration

It is possible to send events from Deckhouse to Splunk:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: splunk
spec:
  type: Splunk
  splunk:
    endpoint: https://prd-p-xxxxxx.splunkcloud.com:8088
    token: xxxx-xxxx-xxxx
    index: logs
    tls:
      verifyCertificate: false
      verifyHostname: false
```
To attach Pod labels to the events, export them with the `extraLabels` option:

```yaml
extraLabels:
  pod_label_app: '{{ pod_labels.app }}'
```
## Simple Logstash example

To send logs to Logstash, a `tcp` input with the `json` codec must be configured on the Logstash side.

An example of a minimal Logstash configuration:

```hcl
input {
  tcp {
    port => 12345
    codec => json
  }
}
output {
  stdout {
    codec => json
  }
}
```

An example of the `ClusterLogDestination` manifest:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: logstash
spec:
  type: Logstash
  logstash:
    endpoint: logstash.default:12345
```
## Collecting Kubernetes Events

Kubernetes Events can be collected by log-shipper if the events-exporter is enabled in the `extended-monitoring` module settings.

Enable the events-exporter by adjusting the `extended-monitoring` module configuration:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: extended-monitoring
spec:
  version: 1
  settings:
    events:
      exporterEnabled: true
```

Apply the following `ClusterLoggingConfig` to the cluster:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: kubernetes-events
spec:
  type: KubernetesPods
  kubernetesPods:
    labelSelector:
      matchLabels:
        app: events-exporter
    namespaceSelector:
      matchNames:
        - d8-monitoring # The namespace where the events-exporter runs.
  destinationRefs:
    - loki-storage
```
## Log filters

Users can filter logs by applying the following filters:

- `labelFilter` — filters by top-level metadata fields, e.g. the container or namespace name;
- `logFilter` — filters by the fields of the log message itself.
Collect only the logs of the `nginx` container:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: nginx-logs
spec:
  type: KubernetesPods
  labelFilter:
    - field: container
      operator: In
      values: [nginx]
  destinationRefs:
    - loki-storage
```
## Audit of kubelet actions

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: kubelet-audit-logs
spec:
  type: File
  file:
    include:
```
## Deckhouse system logs

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: system-logs
spec:
  type: File
  file:
    include:
```
## Collecting logs from production namespaces using the namespace label selector option

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: production-logs
spec:
  type: KubernetesPods
  kubernetesPods:
    namespaceSelector:
      labelSelector:
        matchLabels:
          environment: production
  destinationRefs:
    - loki-storage
```
## Excluding Pods or namespaces with a label

There is a preconfigured label to exclude particular namespaces or Pods: `log-shipper.deckhouse.io/exclude`.

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
  labels:
    log-shipper.deckhouse.io/exclude: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  # ...
  template:
    metadata:
      labels:
        log-shipper.deckhouse.io/exclude: "true"
```