fix: vendor observability charts

2026-05-04 10:49:46 +00:00
parent f5473a9bec
commit a04b8ad865
325 changed files with 46640 additions and 40 deletions
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
@@ -0,0 +1,17 @@
apiVersion: v2
appVersion: 3.0.0
description: Promtail is an agent which ships the contents of local logs to a Loki
  instance
home: https://grafana.com/loki
icon: https://raw.githubusercontent.com/grafana/loki/master/docs/sources/logo.png
maintainers:
- email: lokiproject@googlegroups.com
  name: Loki Maintainers
- name: unguiculus
name: promtail
sources:
- https://github.com/grafana/loki
- https://grafana.com/oss/loki/
- https://grafana.com/docs/loki/latest/
type: application
version: 6.16.6
@@ -0,0 +1,347 @@
# promtail
![Version: 6.16.6](https://img.shields.io/badge/Version-6.16.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 3.0.0](https://img.shields.io/badge/AppVersion-3.0.0-informational?style=flat-square)
Promtail is an agent which ships the contents of local logs to a Loki instance
## Source Code
* <https://github.com/grafana/loki>
* <https://grafana.com/oss/loki/>
* <https://grafana.com/docs/loki/latest/>
## Chart Repo
Add the following repo to use the chart:
```console
helm repo add grafana https://grafana.github.io/helm-charts
```
## Upgrading
A major chart version change indicates an incompatible breaking change that requires manual action.
### From Chart Versions >= 3.0.0
* Customizable initContainer added.
### From Chart Versions < 3.0.0
#### Notable Changes
* Helm 3 is required
* Labels have been updated to follow the official Kubernetes [label recommendations](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
* The default scrape configs have been updated to take new and old labels into consideration
* The config file must be specified as a string, which can be templated. See below for details.
* The config file is now stored in a Secret and no longer in a ConfigMap because it may contain sensitive data, such as basic auth credentials
Due to the label changes, an existing installation cannot be upgraded without manual interaction.
There are basically two options:
##### Option 1
Uninstall the old release and re-install the new one.
There will be no data loss.
Promtail will cleanly shut down and write the `positions.yaml`.
The new release will pick up again from the existing `positions.yaml`.
##### Option 2
* Add new selector labels to the existing pods:
```
kubectl label pods -n <namespace> -l app=promtail,release=<release> app.kubernetes.io/name=promtail app.kubernetes.io/instance=<release>
```
* Perform a non-cascading deletion of the DaemonSet which will keep the pods running:
```
kubectl delete daemonset -n <namespace> -l app=promtail,release=<release> --cascade=false
```
* Perform a regular Helm upgrade on the existing release.
The new DaemonSet will pick up the existing pods and perform a rolling upgrade.
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity configuration for pods |
| annotations | object | `{}` | Annotations for the DaemonSet |
| automountServiceAccountToken | bool | `true` | Automatically mount API credentials for a particular Pod |
| config | object | See `values.yaml` | Section for crafting Promtail's config file. The only directly relevant value is `config.file`, which is a templated string that references the other values and snippets below this key. |
| config.clients | list | See `values.yaml` | The config of clients of the Promtail server. Must be referenced in `config.file` to configure `clients`. |
| config.enableTracing | bool | `false` | The config to enable tracing |
| config.enabled | bool | `true` | Enable Promtail config from the Helm chart. Set `configmap.enabled: true` and this to `false` to manage your own Promtail config. See default config in `values.yaml`. |
| config.file | string | See `values.yaml` | Config file contents for Promtail. Must be configured as a string. It is templated so it can be assembled from reusable snippets in order to avoid redundancy. |
| config.logFormat | string | `"logfmt"` | The log format of the Promtail server. Must be referenced in `config.file` to configure `server.log_format`. Valid formats: `logfmt, json`. See default config in `values.yaml`. |
| config.logLevel | string | `"info"` | The log level of the Promtail server. Must be referenced in `config.file` to configure `server.log_level`. See default config in `values.yaml`. |
| config.positions | object | `{"filename":"/run/promtail/positions.yaml"}` | Configures where Promtail saves its positions file, to resume reading after restarts. Must be referenced in `config.file` to configure `positions`. |
| config.serverPort | int | `3101` | The port of the Promtail server. Must be referenced in `config.file` to configure `server.http_listen_port`. See default config in `values.yaml`. |
| config.snippets | object | See `values.yaml` | A section of reusable snippets that can be referenced in `config.file`. Custom snippets may be added in order to reduce redundancy. This is especially helpful when multiple `kubernetes_sd_configs` are used, which usually have large parts in common. |
| config.snippets.extraLimitsConfig | string | empty | You can put here any keys that will be directly added to the config file's 'limits_config' block. |
| config.snippets.extraRelabelConfigs | list | `[]` | You can put here any additional relabel_configs for the "kubernetes-pods" job |
| config.snippets.extraScrapeConfigs | string | empty | You can put here any additional scrape configs you want to add to the config file. |
| config.snippets.extraServerConfigs | string | empty | You can put here any keys that will be directly added to the config file's 'server' block. |
| configmap.enabled | bool | `false` | If enabled, the Promtail config will be created as a ConfigMap instead of a Secret |
| containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true}` | The security context for containers |
| daemonset.autoscaling.controlledResources | list | `[]` | List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory |
| daemonset.autoscaling.enabled | bool | `false` | Creates a VerticalPodAutoscaler for the daemonset |
| daemonset.autoscaling.maxAllowed | object | `{}` | Defines the max allowed resources for the pod |
| daemonset.autoscaling.minAllowed | object | `{}` | Defines the min allowed resources for the pod |
| daemonset.enabled | bool | `true` | Deploys Promtail as a DaemonSet |
| defaultVolumeMounts | list | See `values.yaml` | Default volume mounts. Corresponds to `volumes`. |
| defaultVolumes | list | See `values.yaml` | Default volumes that are mounted into pods. In most cases, these should not be changed. Use `extraVolumes`/`extraVolumeMounts` for additional custom volumes. |
| deployment.autoscaling.enabled | bool | `false` | Creates a HorizontalPodAutoscaler for the deployment |
| deployment.autoscaling.maxReplicas | int | `10` | |
| deployment.autoscaling.minReplicas | int | `1` | |
| deployment.autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| deployment.autoscaling.targetMemoryUtilizationPercentage | string | `nil` | |
| deployment.enabled | bool | `false` | Deploys Promtail as a Deployment |
| deployment.replicaCount | int | `1` | |
| deployment.strategy | object | `{"type":"RollingUpdate"}` | Set deployment object update strategy |
| enableServiceLinks | bool | `true` | Configure enableServiceLinks in pod |
| extraArgs | list | `[]` | |
| extraContainers | object | `{}` | |
| extraEnv | list | `[]` | Extra environment variables. Set up tracing environment variables here if `.Values.config.enableTracing` is true. Tracing can currently only be configured via environment variables. See: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#tracing_config https://www.jaegertracing.io/docs/1.16/client-features/ |
| extraEnvFrom | list | `[]` | Extra environment variables from secrets or configmaps |
| extraObjects | list | `[]` | Extra K8s manifests to deploy |
| extraPorts | object | `{}` | Configure additional ports and services. For each configured port, a corresponding service is created. See values.yaml for details |
| extraVolumeMounts | list | `[]` | |
| extraVolumes | list | `[]` | |
| fullnameOverride | string | `nil` | Overrides the chart's computed fullname |
| global.imagePullSecrets | list | `[]` | Allow parent charts to override registry credentials |
| global.imageRegistry | string | `""` | Allow parent charts to override registry hostname |
| hostAliases | list | `[]` | hostAliases to add |
| hostNetwork | string | `nil` | Controls whether the pod has the `hostNetwork` flag set. |
| httpPathPrefix | string | `""` | Base path to serve all API routes from |
| image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| image.registry | string | `"docker.io"` | The Docker registry |
| image.repository | string | `"grafana/promtail"` | Docker image repository |
| image.tag | string | `""` | Overrides the image tag whose default is the chart's appVersion |
| imagePullSecrets | list | `[]` | Image pull secrets for Docker images |
| initContainer | list | `[]` | |
| livenessProbe | object | `{}` | Liveness probe |
| nameOverride | string | `nil` | Overrides the chart's name |
| namespace | string | `nil` | The name of the Namespace to deploy to. If not set, `.Release.Namespace` is used |
| networkPolicy.enabled | bool | `false` | Specifies whether Network Policies should be created |
| networkPolicy.k8sApi.cidrs | list | `[]` | Specifies specific network CIDRs you want to limit access to |
| networkPolicy.k8sApi.port | int | `8443` | Specify the k8s API endpoint port |
| networkPolicy.metrics.cidrs | list | `[]` | Specifies specific network CIDRs which are allowed to access the metrics port. In case you use namespaceSelector, you also have to specify your kubelet networks here. The metrics ports are also used for probes. |
| networkPolicy.metrics.namespaceSelector | object | `{}` | Specifies the namespaces which are allowed to access the metrics port |
| networkPolicy.metrics.podSelector | object | `{}` | Specifies the Pods which are allowed to access the metrics port. As this is cross-namespace communication, you also need the namespaceSelector. |
| nodeSelector | object | `{}` | Node selector for pods |
| podAnnotations | object | `{}` | Pod annotations |
| podLabels | object | `{}` | Pod labels |
| podSecurityContext | object | `{"runAsGroup":0,"runAsUser":0}` | The security context for pods |
| podSecurityPolicy | object | See `values.yaml` | PodSecurityPolicy configuration. |
| priorityClassName | string | `nil` | The name of the PriorityClass |
| rbac.create | bool | `true` | Specifies whether RBAC resources are to be created |
| rbac.pspEnabled | bool | `false` | Specifies whether a PodSecurityPolicy is to be created |
| readinessProbe | object | See `values.yaml` | Readiness probe |
| resources | object | `{}` | Resource requests and limits |
| secret.annotations | object | `{}` | Annotations for the Secret |
| secret.labels | object | `{}` | Labels for the Secret |
| service.annotations | object | `{}` | Annotations for the service |
| service.enabled | bool | `false` | |
| service.labels | object | `{}` | Labels for the service |
| serviceAccount.annotations | object | `{}` | Annotations for the service account |
| serviceAccount.automountServiceAccountToken | bool | `true` | Automatically mount a ServiceAccount's API credentials |
| serviceAccount.create | bool | `true` | Specifies whether a ServiceAccount should be created |
| serviceAccount.imagePullSecrets | list | `[]` | Image pull secrets for the service account |
| serviceAccount.name | string | `nil` | The name of the ServiceAccount to use. If not set and `create` is true, a name is generated using the fullname template |
| serviceMonitor.annotations | object | `{}` | ServiceMonitor annotations |
| serviceMonitor.enabled | bool | `false` | If enabled, ServiceMonitor resources for Prometheus Operator are created |
| serviceMonitor.interval | string | `nil` | ServiceMonitor scrape interval |
| serviceMonitor.labels | object | `{}` | Additional ServiceMonitor labels |
| serviceMonitor.metricRelabelings | list | `[]` | ServiceMonitor relabel configs to apply to samples as the last step before ingestion https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig (defines `metric_relabel_configs`) |
| serviceMonitor.namespace | string | `nil` | Alternative namespace for ServiceMonitor resources |
| serviceMonitor.namespaceSelector | object | `{}` | Namespace selector for ServiceMonitor resources |
| serviceMonitor.prometheusRule | object | `{"additionalLabels":{},"enabled":false,"rules":[]}` | Prometheus rules will be deployed for alerting purposes |
| serviceMonitor.relabelings | list | `[]` | ServiceMonitor relabel configs to apply to samples before scraping https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig (defines `relabel_configs`) |
| serviceMonitor.scheme | string | `"http"` | ServiceMonitor will use http by default, but you can pick https as well |
| serviceMonitor.scrapeTimeout | string | `nil` | ServiceMonitor scrape timeout in Go duration format (e.g. 15s) |
| serviceMonitor.targetLabels | list | `[]` | ServiceMonitor will add labels from the service to the Prometheus metric https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec |
| serviceMonitor.tlsConfig | string | `nil` | ServiceMonitor will use these tlsConfig settings to make the health check requests |
| sidecar.configReloader.config.serverPort | int | `9533` | The port of the config-reloader server |
| sidecar.configReloader.containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true}` | The security context for containers for sidecar config-reloader |
| sidecar.configReloader.enabled | bool | `false` | |
| sidecar.configReloader.extraArgs | list | `[]` | |
| sidecar.configReloader.extraEnv | list | `[]` | Extra environment variables for sidecar config-reloader |
| sidecar.configReloader.extraEnvFrom | list | `[]` | Extra environment variables from secrets or configmaps for sidecar config-reloader |
| sidecar.configReloader.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy for sidecar config-reloader |
| sidecar.configReloader.image.registry | string | `"ghcr.io"` | The Docker registry for sidecar config-reloader |
| sidecar.configReloader.image.repository | string | `"jimmidyson/configmap-reload"` | Docker image repository for sidecar config-reloader |
| sidecar.configReloader.image.tag | string | `"v0.12.0"` | Docker image tag for sidecar config-reloader |
| sidecar.configReloader.livenessProbe | object | `{}` | Liveness probe for sidecar config-reloader |
| sidecar.configReloader.readinessProbe | object | `{}` | Readiness probe for sidecar config-reloader |
| sidecar.configReloader.resources | object | `{}` | Resource requests and limits for sidecar config-reloader |
| sidecar.configReloader.serviceMonitor.enabled | bool | `true` | |
| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane","operator":"Exists"}]` | Tolerations for pods. By default, pods will be scheduled on master/control-plane nodes. |
| updateStrategy | object | `{}` | The update strategy for the DaemonSet |
## Configuration
The config file for Promtail must be configured as a string.
This is necessary because the contents are passed through the `tpl` function.
With this, the file can be templated and assembled from reusable YAML snippets.
It is common to have multiple `kubernetes_sd_configs` that, in turn, usually need the same `pipeline_stages`.
Thus, extracting reusable snippets helps reduce redundancy and avoid copy/paste errors.
See `values.yaml` for details.
Also, the following examples make use of this feature.
For additional reference, please refer to Promtail's docs:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/
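For illustration, a pared-down `config.file` might be assembled from the built-in values and snippets like this (a sketch only; the authoritative default template lives in `values.yaml`):

```yaml
config:
  file: |
    server:
      log_level: {{ .Values.config.logLevel }}
      log_format: {{ .Values.config.logFormat }}
      http_listen_port: {{ .Values.config.serverPort }}
    clients:
      {{- tpl (toYaml .Values.config.clients) . | nindent 2 }}
    positions:
      {{- tpl (toYaml .Values.config.positions) . | nindent 2 }}
    scrape_configs:
      {{- tpl .Values.config.snippets.scrapeConfigs . | nindent 2 }}
```

Because the whole string is passed through `tpl`, each `{{ ... }}` reference is resolved against the release's values at render time.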
### Syslog Support
```yaml
extraPorts:
  syslog:
    name: tcp-syslog
    containerPort: 1514
    service:
      port: 80
      type: LoadBalancer
      externalTrafficPolicy: Local
      loadBalancerIP: 123.234.123.234
config:
  snippets:
    extraScrapeConfigs: |
      # Add an additional scrape config for syslog
      - job_name: syslog
        syslog:
          listen_address: 0.0.0.0:{{ .Values.extraPorts.syslog.containerPort }}
          labels:
            job: syslog
        relabel_configs:
          - source_labels:
              - __syslog_message_hostname
            target_label: hostname
          # example label values: kernel, CRON, kubelet
          - source_labels:
              - __syslog_message_app_name
            target_label: app
          # example label values: debug, notice, informational, warning, error
          - source_labels:
              - __syslog_message_severity
            target_label: level
```
Find additional source labels in Promtail's docs:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#syslog
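To smoke-test the listener, you can hand-craft an RFC 5424 message with octet-counting framing (the framing Promtail's TCP syslog target expects by default) and send it over TCP. This is an illustrative sketch; the target address and port below are placeholders taken from the example above:

```python
import socket
import time

def rfc5424_frame(app_name: str, hostname: str, msg: str, pri: int = 14) -> str:
    """Build a minimal RFC 5424 syslog message wrapped in RFC 6587
    octet-counting framing (a byte-length prefix before the message)."""
    ts = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    # <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
    message = f"<{pri}>1 {ts} {hostname} {app_name} - - - {msg}"
    return f"{len(message.encode('utf-8'))} {message}"

def send_frame(host: str, port: int, frame: str) -> None:
    """Ship one framed message over TCP (host/port are placeholders)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame.encode("utf-8"))

frame = rfc5424_frame("myapp", "node-1", "hello from syslog")
# send_frame("123.234.123.234", 80, frame)  # the LoadBalancer from the example
```

If the relabel rules above are active, the resulting log line should carry `hostname`, `app`, and `level` labels derived from the message header.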
### Journald Support
```yaml
config:
  snippets:
    extraScrapeConfigs: |
      # Add an additional scrape config for journald
      - job_name: journal
        journal:
          path: /var/log/journal
          max_age: 12h
          labels:
            job: systemd-journal
        relabel_configs:
          - source_labels:
              - __journal__hostname
            target_label: hostname
          # example label values: kubelet.service, containerd.service
          - source_labels:
              - __journal__systemd_unit
            target_label: unit
          # example label values: debug, notice, info, warning, error
          - source_labels:
              - __journal_priority_keyword
            target_label: level

# Mount journal directory and machine-id file into promtail pods
extraVolumes:
  - name: journal
    hostPath:
      path: /var/log/journal
  - name: machine-id
    hostPath:
      path: /etc/machine-id

extraVolumeMounts:
  - name: journal
    mountPath: /var/log/journal
    readOnly: true
  - name: machine-id
    mountPath: /etc/machine-id
    readOnly: true
```
Find additional configuration options in Promtail's docs:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#journal
More journal source labels can be found at https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html.
> Note that each message from the journal may have a different set of fields and software may write an arbitrary set of custom fields for their logged messages. [(related issue)](https://github.com/grafana/loki/issues/2048#issuecomment-626234611)
The machine-id needs to be available in the container as it is required for scraping.
This is described in Promtail's scraping docs:
https://grafana.com/docs/loki/latest/clients/promtail/scraping/#journal-scraping-linux-only
### Push API Support
```yaml
extraPorts:
  httpPush:
    name: http-push
    containerPort: 3500
  grpcPush:
    name: grpc-push
    containerPort: 3600
config:
  file: |
    server:
      log_level: {{ .Values.config.logLevel }}
      http_listen_port: {{ .Values.config.serverPort }}
    clients:
      - url: {{ .Values.config.lokiAddress }}
    positions:
      filename: /run/promtail/positions.yaml
    scrape_configs:
      {{- tpl .Values.config.snippets.scrapeConfigs . | nindent 2 }}
      - job_name: push1
        loki_push_api:
          server:
            http_listen_port: {{ .Values.extraPorts.httpPush.containerPort }}
            grpc_listen_port: {{ .Values.extraPorts.grpcPush.containerPort }}
          labels:
            pushserver: push1
```
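Once the push ports are exposed, clients can POST Loki-style push payloads directly to Promtail. The sketch below builds such a payload with Python's standard library; the endpoint URL and port in the commented-out call are assumptions taken from the example above:

```python
import json
import time
import urllib.request

def build_push_payload(labels: dict, lines: list) -> dict:
    """Assemble a Loki push-API payload: one stream carrying the given
    label set, with one [timestamp_ns, line] pair per log line."""
    ts = str(time.time_ns())  # Loki expects nanosecond-precision timestamps
    return {
        "streams": [
            {"stream": labels, "values": [[ts, line] for line in lines]}
        ]
    }

def push(url: str, payload: dict) -> int:
    """POST the payload to a loki_push_api endpoint; returns the HTTP status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_push_payload({"job": "test"}, ["hello from the push API"])
# push("http://<promtail-host>:3500/loki/api/v1/push", payload)
```

Note that the `pushserver: push1` label from the config is added by Promtail itself, so the client only needs to supply its own stream labels.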
### Customize client config options
By default, Promtail sends scraped logs to the `loki` server at `http://loki-gateway/loki/api/v1/push`.
If you want to customize clients or add additional options for `loki`, use the `clients` section. For example, to enable HTTP basic auth and include the OrgID header, you can use:
```yaml
config:
  clients:
    - url: http://loki.server/loki/api/v1/push
      tenant_id: 1
      basic_auth:
        username: loki
        password: secret
```
@@ -0,0 +1,229 @@
{{ template "chart.header" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.description" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
## Chart Repo
Add the following repo to use the chart:
```console
helm repo add grafana https://grafana.github.io/helm-charts
```
## Upgrading
A major chart version change indicates an incompatible breaking change that requires manual action.
### From Chart Versions >= 3.0.0
* Customizable initContainer added.
### From Chart Versions < 3.0.0
#### Notable Changes
* Helm 3 is required
* Labels have been updated to follow the official Kubernetes [label recommendations](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
* The default scrape configs have been updated to take new and old labels into consideration
* The config file must be specified as a string, which can be templated. See below for details.
* The config file is now stored in a Secret and no longer in a ConfigMap because it may contain sensitive data, such as basic auth credentials
Due to the label changes, an existing installation cannot be upgraded without manual interaction.
There are basically two options:
##### Option 1
Uninstall the old release and re-install the new one.
There will be no data loss.
Promtail will cleanly shut down and write the `positions.yaml`.
The new release will pick up again from the existing `positions.yaml`.
##### Option 2
* Add new selector labels to the existing pods:
```
kubectl label pods -n <namespace> -l app=promtail,release=<release> app.kubernetes.io/name=promtail app.kubernetes.io/instance=<release>
```
* Perform a non-cascading deletion of the DaemonSet which will keep the pods running:
```
kubectl delete daemonset -n <namespace> -l app=promtail,release=<release> --cascade=false
```
* Perform a regular Helm upgrade on the existing release.
The new DaemonSet will pick up the existing pods and perform a rolling upgrade.
{{ template "chart.valuesSection" . }}
## Configuration
The config file for Promtail must be configured as a string.
This is necessary because the contents are passed through the `tpl` function.
With this, the file can be templated and assembled from reusable YAML snippets.
It is common to have multiple `kubernetes_sd_configs` that, in turn, usually need the same `pipeline_stages`.
Thus, extracting reusable snippets helps reduce redundancy and avoid copy/paste errors.
See `values.yaml` for details.
Also, the following examples make use of this feature.
For additional reference, please refer to Promtail's docs:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/
### Syslog Support
```yaml
extraPorts:
  syslog:
    name: tcp-syslog
    containerPort: 1514
    service:
      port: 80
      type: LoadBalancer
      externalTrafficPolicy: Local
      loadBalancerIP: 123.234.123.234
config:
  snippets:
    extraScrapeConfigs: |
      # Add an additional scrape config for syslog
      - job_name: syslog
        syslog:
          listen_address: 0.0.0.0:{{"{{"}} .Values.extraPorts.syslog.containerPort {{"}}"}}
          labels:
            job: syslog
        relabel_configs:
          - source_labels:
              - __syslog_message_hostname
            target_label: hostname
          # example label values: kernel, CRON, kubelet
          - source_labels:
              - __syslog_message_app_name
            target_label: app
          # example label values: debug, notice, informational, warning, error
          - source_labels:
              - __syslog_message_severity
            target_label: level
```
Find additional source labels in Promtail's docs:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#syslog
### Journald Support
```yaml
config:
  snippets:
    extraScrapeConfigs: |
      # Add an additional scrape config for journald
      - job_name: journal
        journal:
          path: /var/log/journal
          max_age: 12h
          labels:
            job: systemd-journal
        relabel_configs:
          - source_labels:
              - __journal__hostname
            target_label: hostname
          # example label values: kubelet.service, containerd.service
          - source_labels:
              - __journal__systemd_unit
            target_label: unit
          # example label values: debug, notice, info, warning, error
          - source_labels:
              - __journal_priority_keyword
            target_label: level

# Mount journal directory and machine-id file into promtail pods
extraVolumes:
  - name: journal
    hostPath:
      path: /var/log/journal
  - name: machine-id
    hostPath:
      path: /etc/machine-id

extraVolumeMounts:
  - name: journal
    mountPath: /var/log/journal
    readOnly: true
  - name: machine-id
    mountPath: /etc/machine-id
    readOnly: true
```
Find additional configuration options in Promtail's docs:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#journal
More journal source labels can be found at https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html.
> Note that each message from the journal may have a different set of fields and software may write an arbitrary set of custom fields for their logged messages. [(related issue)](https://github.com/grafana/loki/issues/2048#issuecomment-626234611)
The machine-id needs to be available in the container as it is required for scraping.
This is described in Promtail's scraping docs:
https://grafana.com/docs/loki/latest/clients/promtail/scraping/#journal-scraping-linux-only
### Push API Support
```yaml
extraPorts:
  httpPush:
    name: http-push
    containerPort: 3500
  grpcPush:
    name: grpc-push
    containerPort: 3600
config:
  file: |
    server:
      log_level: {{"{{"}} .Values.config.logLevel {{"}}"}}
      http_listen_port: {{"{{"}} .Values.config.serverPort {{"}}"}}
    clients:
      - url: {{"{{"}} .Values.config.lokiAddress {{"}}"}}
    positions:
      filename: /run/promtail/positions.yaml
    scrape_configs:
      {{"{{"}}- tpl .Values.config.snippets.scrapeConfigs . | nindent 2 {{"}}"}}
      - job_name: push1
        loki_push_api:
          server:
            http_listen_port: {{"{{"}} .Values.extraPorts.httpPush.containerPort {{"}}"}}
            grpc_listen_port: {{"{{"}} .Values.extraPorts.grpcPush.containerPort {{"}}"}}
          labels:
            pushserver: push1
```
### Customize client config options
By default, Promtail sends scraped logs to the `loki` server at `http://loki-gateway/loki/api/v1/push`.
If you want to customize clients or add additional options for `loki`, use the `clients` section. For example, to enable HTTP basic auth and include the OrgID header, you can use:
```yaml
config:
  clients:
    - url: http://loki.server/loki/api/v1/push
      tenant_id: 1
      basic_auth:
        username: loki
        password: secret
```
@@ -0,0 +1,6 @@
daemonset:
  enabled: false
deployment:
  enabled: true
  autoscaling:
    enabled: true
@@ -0,0 +1,4 @@
daemonset:
  enabled: false
deployment:
  enabled: true
@@ -0,0 +1,53 @@
extraPorts:
  syslog:
    name: tcp-syslog
    containerPort: 1514
    service:
      port: 1234
      type: NodePort
  httpPush:
    name: http-push
    containerPort: 3500
  grpcPush:
    name: grpc-push
    containerPort: 3600
config:
  snippets:
    extraScrapeConfigs: |
      - job_name: syslog
        syslog:
          listen_address: 0.0.0.0:{{ .Values.extraPorts.syslog.containerPort }}
          labels:
            job: syslog
        relabel_configs:
          - source_labels:
              - __syslog_message_hostname
            target_label: host
      - job_name: push1
        loki_push_api:
          server:
            http_listen_port: {{ .Values.extraPorts.httpPush.containerPort }}
            grpc_listen_port: {{ .Values.extraPorts.grpcPush.containerPort }}
          labels:
            pushserver: push1
networkPolicy:
  # -- Specifies whether Network Policies should be created
  enabled: true
  metrics:
    # -- Specifies the Pods which are allowed to access the metrics port.
    # As this is cross-namespace communication, you also need the namespaceSelector.
    podSelector: {}
    # -- Specifies the namespaces which are allowed to access the metrics port
    namespaceSelector: {}
    # -- Specifies specific network CIDRs which are allowed to access the metrics port.
    # In case you use namespaceSelector, you also have to specify your kubelet networks here.
    # The metrics ports are also used for probes.
    cidrs: []
  k8sApi:
    # -- Specify the k8s API endpoint port
    port: 8443
    # -- Specifies specific network CIDRs you want to limit access to
    cidrs: []
@@ -0,0 +1,37 @@
extraPorts:
  syslog:
    name: tcp-syslog
    containerPort: 1514
    service:
      port: 1234
      type: NodePort
  httpPush:
    name: http-push
    containerPort: 3500
    ingress:
      hosts:
        - chart-example.local
  grpcPush:
    name: grpc-push
    containerPort: 3600
config:
  snippets:
    extraScrapeConfigs: |
      - job_name: syslog
        syslog:
          listen_address: 0.0.0.0:{{ .Values.extraPorts.syslog.containerPort }}
          labels:
            job: syslog
        relabel_configs:
          - source_labels:
              - __syslog_message_hostname
            target_label: host
      - job_name: push1
        loki_push_api:
          server:
            http_listen_port: {{ .Values.extraPorts.httpPush.containerPort }}
            grpc_listen_port: {{ .Values.extraPorts.grpcPush.containerPort }}
          labels:
            pushserver: push1
@@ -0,0 +1,15 @@
***********************************************************************
Welcome to Grafana Promtail
Chart version: {{ .Chart.Version }}
Promtail version: {{ .Values.image.tag | default .Chart.AppVersion }}
***********************************************************************
Verify the application is working by running these commands:
{{- if .Values.daemonset.enabled }}
* kubectl --namespace {{ .Release.Namespace }} port-forward daemonset/{{ include "promtail.fullname" . }} {{ .Values.config.serverPort }}
{{- end }}
{{- if .Values.deployment.enabled }}
* kubectl --namespace {{ .Release.Namespace }} port-forward deployment/{{ include "promtail.fullname" . }} {{ .Values.config.serverPort }}
{{- end }}
* curl http://127.0.0.1:{{ .Values.config.serverPort }}/metrics
@@ -0,0 +1,116 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "promtail.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "promtail.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "promtail.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "promtail.labels" -}}
helm.sh/chart: {{ include "promtail.chart" . }}
{{ include "promtail.selectorLabels" . }}
{{- if or .Chart.AppVersion .Values.image.tag }}
app.kubernetes.io/version: {{ mustRegexReplaceAllLiteral "@sha.*" .Values.image.tag "" | default .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "promtail.selectorLabels" -}}
app.kubernetes.io/name: {{ include "promtail.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the namespace
*/}}
{{- define "promtail.namespaceName" -}}
{{- default .Release.Namespace .Values.namespace }}
{{- end }}
{{/*
Create the name of the service account
*/}}
{{- define "promtail.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "promtail.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Configure enableServiceLinks in pod
*/}}
{{- define "promtail.enableServiceLinks" -}}
{{- if semverCompare ">=1.13-0" .Capabilities.KubeVersion.GitVersion }}
{{- if or (.Values.enableServiceLinks) (eq (.Values.enableServiceLinks | toString) "<nil>") }}
{{- printf "enableServiceLinks: true" }}
{{- else }}
{{- printf "enableServiceLinks: false" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "promtail.ingress.apiVersion" -}}
{{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) }}
{{- print "networking.k8s.io/v1" }}
{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
{{- print "networking.k8s.io/v1beta1" }}
{{- else }}
{{- print "extensions/v1beta1" }}
{{- end }}
{{- end }}
{{/*
Return if ingress is stable.
*/}}
{{- define "promtail.ingress.isStable" -}}
{{- eq (include "promtail.ingress.apiVersion" .) "networking.k8s.io/v1" }}
{{- end }}
{{/*
Return if ingress supports ingressClassName.
*/}}
{{- define "promtail.ingress.supportsIngressClassName" -}}
{{- or (eq (include "promtail.ingress.isStable" .) "true") (and (eq (include "promtail.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) }}
{{- end }}
{{/*
Return if ingress supports pathType.
*/}}
{{- define "promtail.ingress.supportsPathType" -}}
{{- or (eq (include "promtail.ingress.isStable" .) "true") (and (eq (include "promtail.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) }}
{{- end }}
@@ -0,0 +1,172 @@
{{/*
Pod template used in Daemonset and Deployment
*/}}
{{- define "promtail.podTemplate" -}}
metadata:
labels:
{{- include "promtail.selectorLabels" . | nindent 4 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
annotations:
{{- if not .Values.sidecar.configReloader.enabled }}
checksum/config: {{ tpl .Values.config.file . | sha256sum }}
{{- end }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
serviceAccountName: {{ include "promtail.serviceAccountName" . }}
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
{{- include "promtail.enableServiceLinks" . | nindent 2 }}
{{- with .Values.hostNetwork }}
hostNetwork: {{ . }}
{{- end }}
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.initContainer }}
initContainers:
{{- tpl (toYaml .) $ | nindent 4 }}
{{- end }}
{{- with .Values.global.imagePullSecrets | default .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.hostAliases }}
hostAliases:
{{- toYaml . | nindent 4 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 4 }}
containers:
- name: promtail
image: "{{ .Values.global.imageRegistry | default .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- "-config.file=/etc/promtail/promtail.yaml"
{{- if .Values.sidecar.configReloader.enabled }}
- "-server.enable-runtime-reload"
{{- end }}
{{- with .Values.extraArgs }}
{{- toYaml . | nindent 8 }}
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/promtail
{{- with .Values.defaultVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- with .Values.extraEnv }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.extraEnvFrom }}
envFrom:
{{- toYaml . | nindent 8 }}
{{- end }}
ports:
- name: http-metrics
containerPort: {{ .Values.config.serverPort }}
protocol: TCP
{{- range $key, $values := .Values.extraPorts }}
- name: {{ .name | default $key }}
containerPort: {{ $values.containerPort }}
protocol: {{ $values.protocol | default "TCP" }}
{{- end }}
securityContext:
{{- toYaml .Values.containerSecurityContext | nindent 8 }}
{{- with .Values.livenessProbe }}
livenessProbe:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- with .Values.readinessProbe }}
readinessProbe:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.sidecar.configReloader.enabled }}
- name: config-reloader
image: "{{ .Values.sidecar.configReloader.image.registry }}/{{ .Values.sidecar.configReloader.image.repository }}:{{ .Values.sidecar.configReloader.image.tag }}"
imagePullPolicy: {{ .Values.sidecar.configReloader.image.pullPolicy }}
args:
- '-web.listen-address=:{{ .Values.sidecar.configReloader.config.serverPort }}'
- '-volume-dir=/etc/promtail/'
- '-webhook-method=GET'
- '-webhook-url=http://127.0.0.1:{{ .Values.config.serverPort }}/reload'
{{- range .Values.sidecar.configReloader.extraArgs }}
- {{ . }}
{{- end }}
ports:
- name: reloader
containerPort: {{ .Values.sidecar.configReloader.config.serverPort }}
protocol: TCP
{{- with .Values.sidecar.configReloader.extraEnv }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.sidecar.configReloader.extraEnvFrom }}
envFrom:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.sidecar.configReloader.containerSecurityContext | nindent 8 }}
{{- with .Values.sidecar.configReloader.livenessProbe }}
livenessProbe:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- with .Values.sidecar.configReloader.readinessProbe }}
readinessProbe:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- with .Values.sidecar.configReloader.resources }}
resources:
{{- toYaml . | nindent 8 }}
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/promtail
{{- end }}
{{- if .Values.extraContainers }}
{{- range $name, $values := .Values.extraContainers }}
- name: {{ $name }}
{{ toYaml $values | nindent 6 }}
{{- end }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 4 }}
{{- end }}
volumes:
- name: config
{{- if .Values.configmap.enabled }}
configMap:
name: {{ include "promtail.fullname" . }}
{{- else }}
secret:
secretName: {{ include "promtail.fullname" . }}
{{- end }}
{{- with .Values.defaultVolumes }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.extraVolumes }}
{{- tpl (toYaml .) $ | nindent 4 }}
{{- end }}
{{- end }}
@@ -0,0 +1,21 @@
{{- if .Values.rbac.create }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "promtail.fullname" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs:
- get
- watch
- list
{{- end }}
@@ -0,0 +1,16 @@
{{- if .Values.rbac.create }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "promtail.fullname" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
name: {{ include "promtail.serviceAccountName" . }}
namespace: {{ include "promtail.namespaceName" . }}
roleRef:
kind: ClusterRole
name: {{ include "promtail.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}
@@ -0,0 +1,12 @@
{{- if and .Values.config.enabled .Values.configmap.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "promtail.fullname" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
data:
promtail.yaml: |
{{- tpl .Values.config.file . | nindent 4 }}
{{- end }}
@@ -0,0 +1,24 @@
{{- if .Values.daemonset.enabled }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "promtail.fullname" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.revisionHistoryLimit }}
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
{{- end }}
selector:
matchLabels:
{{- include "promtail.selectorLabels" . | nindent 6 }}
updateStrategy:
{{- toYaml .Values.updateStrategy | nindent 4 }}
template:
{{- include "promtail.podTemplate" . | nindent 4 }}
{{- end }}
@@ -0,0 +1,29 @@
{{- if .Values.deployment.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "promtail.fullname" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if not .Values.deployment.autoscaling.enabled }}
replicas: {{ .Values.deployment.replicaCount }}
{{- end }}
{{- if .Values.revisionHistoryLimit }}
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
{{- end }}
{{- with .Values.deployment.strategy }}
strategy:
{{- toYaml . | trim | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "promtail.selectorLabels" . | nindent 6 }}
template:
{{- include "promtail.podTemplate" . | nindent 4 }}
{{- end }}
@@ -0,0 +1,4 @@
{{ range .Values.extraObjects }}
---
{{ tpl (toYaml .) $ }}
{{ end }}
@@ -0,0 +1,43 @@
{{- if and .Values.deployment.enabled .Values.deployment.autoscaling.enabled }}
apiVersion: {{ if or (.Capabilities.APIVersions.Has "autoscaling/v2/HorizontalPodAutoscaler") (semverCompare ">=1.23-0" .Capabilities.KubeVersion.Version) -}}
autoscaling/v2
{{- else -}}
autoscaling/v2beta2
{{- end }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "promtail.fullname" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "promtail.fullname" . }}
{{- with .Values.deployment.autoscaling }}
minReplicas: {{ .minReplicas }}
maxReplicas: {{ .maxReplicas }}
metrics:
{{- with .targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ . }}
{{- end }}
{{- with .targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ . }}
{{- end }}
{{- end }}
{{- with .Values.deployment.autoscaling.behavior }}
behavior:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
@@ -0,0 +1,62 @@
{{ range $key, $values := .Values.extraPorts }}
{{ if .ingress }}
{{ $ingressApiIsStable := eq (include "promtail.ingress.isStable" $ ) "true" }}
{{ $ingressSupportsIngressClassName := eq (include "promtail.ingress.supportsIngressClassName" $ ) "true" }}
{{ $ingressSupportsPathType := eq (include "promtail.ingress.supportsPathType" $ ) "true" }}
---
apiVersion: {{ include "promtail.ingress.apiVersion" $ }}
kind: Ingress
metadata:
name: {{ include "promtail.fullname" $ }}-{{ $key | lower }}
labels:
{{- include "promtail.labels" $ | nindent 4 }}
{{- with .ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and $ingressSupportsIngressClassName .ingress.ingressClassName }}
ingressClassName: {{ .ingress.ingressClassName }}
{{- end -}}
{{- if .ingress.tls }}
tls:
{{- range .ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
{{- with .secretName }}
secretName: {{ . }}
{{- end }}
{{- end }}
{{- end }}
rules:
{{- range .ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $values.ingress.path | default "/" }}
{{- if $ingressSupportsPathType }}
pathType: Prefix
{{- end }}
backend:
{{- if $ingressApiIsStable }}
service:
name: {{ include "promtail.fullname" $ }}-{{ $key | lower }}
port:
{{- if $values.service }}
number: {{ $values.service.port }}
                  {{- else }}
number: {{ $values.containerPort }}
                  {{- end }}
{{- else }}
serviceName: {{ include "promtail.fullname" $ }}-{{ $key | lower }}
{{- if $values.service }}
servicePort: {{ $values.service.port }}
              {{- else }}
              servicePort: {{ $values.containerPort }}
              {{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
@@ -0,0 +1,123 @@
{{- if .Values.networkPolicy.enabled }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "promtail.name" . }}-namespace-only
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector: {}
ingress:
- from:
- podSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "promtail.name" . }}-egress-dns
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
podSelector:
matchLabels:
{{- include "promtail.selectorLabels" . | nindent 6 }}
policyTypes:
- Egress
egress:
- ports:
- port: 53
protocol: UDP
to:
- namespaceSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "promtail.name" . }}-egress-k8s-api
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
podSelector:
matchLabels:
{{- include "promtail.selectorLabels" . | nindent 6 }}
policyTypes:
- Egress
egress:
- ports:
- port: {{ .Values.networkPolicy.k8sApi.port }}
protocol: TCP
{{- if len .Values.networkPolicy.k8sApi.cidrs }}
to:
{{- range $cidr := .Values.networkPolicy.k8sApi.cidrs }}
- ipBlock:
cidr: {{ $cidr }}
{{- end }}
{{- end }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "promtail.name" . }}-ingress-metrics
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
podSelector:
matchLabels:
{{- include "promtail.selectorLabels" . | nindent 6 }}
policyTypes:
- Ingress
ingress:
- ports:
- port: http-metrics
protocol: TCP
{{- if len .Values.networkPolicy.metrics.cidrs }}
from:
{{- range $cidr := .Values.networkPolicy.metrics.cidrs }}
- ipBlock:
cidr: {{ $cidr }}
{{- end }}
{{- if .Values.networkPolicy.metrics.namespaceSelector }}
- namespaceSelector:
{{- toYaml .Values.networkPolicy.metrics.namespaceSelector | nindent 12 }}
{{- if .Values.networkPolicy.metrics.podSelector }}
podSelector:
{{- toYaml .Values.networkPolicy.metrics.podSelector | nindent 12 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.extraPorts }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "promtail.name" . }}-egress-extra-ports
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
podSelector:
matchLabels:
{{- include "promtail.selectorLabels" . | nindent 6 }}
policyTypes:
- Egress
egress:
- ports:
{{- range $extraPortConfig := .Values.extraPorts }}
- port: {{ $extraPortConfig.containerPort }}
protocol: {{ $extraPortConfig.protocol }}
{{- end }}
{{- end }}
{{- end }}
@@ -0,0 +1,10 @@
{{- if and (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") .Values.rbac.create .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "promtail.fullname" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
{{- toYaml .Values.podSecurityPolicy | nindent 2 }}
{{- end }}
@@ -0,0 +1,21 @@
{{- if and .Values.serviceMonitor.enabled .Values.serviceMonitor.prometheusRule.enabled -}}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ include "promtail.fullname" . }}
{{- with .Values.serviceMonitor.prometheusRule.namespace }}
namespace: {{ . | quote }}
{{- end }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
{{- with .Values.serviceMonitor.prometheusRule.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.serviceMonitor.prometheusRule.rules }}
groups:
- name: {{ template "promtail.fullname" . }}
rules:
{{- toYaml .Values.serviceMonitor.prometheusRule.rules | nindent 4 }}
{{- end }}
{{- end }}
@@ -0,0 +1,18 @@
{{- if and (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") .Values.rbac.create .Values.rbac.pspEnabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "promtail.fullname" . }}-psp
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
verbs:
- use
resourceNames:
- {{ include "promtail.fullname" . }}
{{- end }}
@@ -0,0 +1,16 @@
{{- if and (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") .Values.rbac.create .Values.rbac.pspEnabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "promtail.fullname" . }}-psp
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "promtail.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ include "promtail.serviceAccountName" . }}
{{- end }}
@@ -0,0 +1,19 @@
{{- if not .Values.configmap.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "promtail.fullname" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
{{- with .Values.secret.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.secret.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
stringData:
promtail.yaml: |
{{- tpl .Values.config.file . | nindent 4 }}
{{- end }}
@@ -0,0 +1,52 @@
{{- range $key, $values := .Values.extraPorts }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "promtail.fullname" $ }}-{{ $key | lower }}
namespace: {{ include "promtail.namespaceName" $ }}
labels:
{{- include "promtail.labels" $ | nindent 4 }}
{{- with $values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with $values.service }}
type: {{ .type | default "ClusterIP" }}
{{- with .clusterIP }}
clusterIP: {{ . }}
{{- end }}
{{- with .loadBalancerIP }}
loadBalancerIP: {{ . }}
{{- end }}
{{- with .loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .externalIPs }}
externalIPs:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .externalTrafficPolicy }}
externalTrafficPolicy: {{ . }}
{{- end }}
{{- end }}
ports:
- name: {{ .name | default $key }}
targetPort: {{ .name | default $key }}
protocol: {{ $values.protocol | default "TCP" }}
{{- if $values.service }}
port: {{ $values.service.port | default $values.containerPort }}
{{- if $values.service.nodePort }}
nodePort: {{ $values.service.nodePort }}
{{- end }}
{{- else }}
port: {{ $values.containerPort }}
{{- end }}
selector:
{{- include "promtail.selectorLabels" $ | nindent 4 }}
{{- end }}
@@ -0,0 +1,25 @@
{{- if or .Values.serviceMonitor.enabled .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "promtail.fullname" . }}-metrics
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
{{- with .Values.service.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
clusterIP: None
ports:
- name: http-metrics
port: {{ .Values.config.serverPort }}
targetPort: http-metrics
protocol: TCP
selector:
{{- include "promtail.selectorLabels" . | nindent 4 }}
{{- end }}
@@ -0,0 +1,18 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "promtail.serviceAccountName" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
{{- with .Values.serviceAccount.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- end }}
@@ -0,0 +1,83 @@
{{- with .Values.serviceMonitor }}
{{- if .enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "promtail.fullname" $ }}
{{- with .namespace }}
namespace: {{ . }}
{{- end }}
{{- with .annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
{{- include "promtail.labels" $ | nindent 4 }}
{{- with .labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with .namespaceSelector }}
namespaceSelector:
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "promtail.selectorLabels" $ | nindent 6 }}
endpoints:
- port: http-metrics
{{- with $.Values.httpPathPrefix }}
path: {{ printf "%s/metrics" . }}
{{- end }}
{{- with .interval }}
interval: {{ . }}
{{- end }}
{{- with .scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
{{- with .relabelings }}
relabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .scheme }}
scheme: {{ . }}
{{- end }}
{{- with .tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if and $.Values.sidecar.configReloader.enabled $.Values.sidecar.configReloader.serviceMonitor.enabled }}
- port: reloader
path: "/metrics"
{{- with .interval }}
interval: {{ . }}
{{- end }}
{{- with .scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
{{- with .relabelings }}
relabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .scheme }}
scheme: {{ . }}
{{- end }}
{{- with .tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- with .targetLabels }}
targetLabels:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}
@@ -0,0 +1,40 @@
{{- if and (.Capabilities.APIVersions.Has "autoscaling.k8s.io/v1") .Values.daemonset.enabled .Values.daemonset.autoscaling.enabled }}
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: {{ include "promtail.fullname" . }}
namespace: {{ include "promtail.namespaceName" . }}
labels:
{{- include "promtail.labels" . | nindent 4 }}
spec:
{{- with .Values.daemonset.autoscaling.recommenders }}
recommenders:
{{- toYaml . | nindent 4 }}
{{- end }}
resourcePolicy:
containerPolicies:
- containerName: promtail
{{- with .Values.daemonset.autoscaling.controlledResources }}
controlledResources:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.daemonset.autoscaling.controlledValues }}
controlledValues: {{ .Values.daemonset.autoscaling.controlledValues }}
{{- end }}
{{- if .Values.daemonset.autoscaling.maxAllowed }}
maxAllowed:
{{ toYaml .Values.daemonset.autoscaling.maxAllowed | nindent 8 }}
{{- end }}
{{- if .Values.daemonset.autoscaling.minAllowed }}
minAllowed:
{{ toYaml .Values.daemonset.autoscaling.minAllowed | nindent 8 }}
{{- end }}
targetRef:
apiVersion: apps/v1
kind: DaemonSet
name: {{ include "promtail.fullname" . }}
{{- with .Values.daemonset.autoscaling.updatePolicy }}
updatePolicy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
@@ -0,0 +1,647 @@
# -- Overrides the chart's name
nameOverride: null
# -- Overrides the chart's computed fullname
fullnameOverride: null
global:
# -- Allow parent charts to override registry hostname
imageRegistry: ""
# -- Allow parent charts to override registry credentials
imagePullSecrets: []
daemonset:
# -- Deploys Promtail as a DaemonSet
enabled: true
autoscaling:
# -- Creates a VerticalPodAutoscaler for the daemonset
enabled: false
# Recommenders responsible for generating recommendations for the object.
# The list should either be empty (the default recommender then generates the
# recommendation) or contain exactly one recommender.
# recommenders:
# - name: custom-recommender-performance
# -- List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory
controlledResources: []
# Specifies which resource values should be controlled: RequestsOnly or RequestsAndLimits.
# controlledValues: RequestsAndLimits
# -- Defines the max allowed resources for the pod
maxAllowed: {}
# cpu: 200m
# memory: 100Mi
# -- Defines the min allowed resources for the pod
minAllowed: {}
# cpu: 200m
# memory: 100Mi
# updatePolicy:
# Specifies the minimal number of replicas which need to be alive for the VPA Updater to attempt pod eviction
# minReplicas: 1
# Specifies whether recommended updates are applied when a Pod is started and whether recommended updates
# are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto".
# updateMode: Auto
deployment:
# -- Deploys Promtail as a Deployment
enabled: false
replicaCount: 1
autoscaling:
# -- Creates a HorizontalPodAutoscaler for the deployment
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
targetMemoryUtilizationPercentage:
# behavior: {}
# -- Set deployment object update strategy
strategy:
type: RollingUpdate
service:
enabled: false
# -- Labels for the service
labels: {}
# -- Annotations for the service
annotations: {}
secret:
# -- Labels for the Secret
labels: {}
# -- Annotations for the Secret
annotations: {}
configmap:
# -- If enabled, promtail config will be created as a ConfigMap instead of a secret
enabled: false
initContainer: []
# # -- Specifies whether the init container for setting inotify max user instances is to be enabled
# - name: init
# # -- Docker registry, image and tag for the init container image
# image: docker.io/busybox:1.33
# # -- Docker image pull policy for the init container image
# imagePullPolicy: IfNotPresent
# # -- The inotify max user instances to configure
# command:
# - sh
# - -c
# - sysctl -w fs.inotify.max_user_instances=128
# securityContext:
# privileged: true
image:
# -- The Docker registry
registry: docker.io
# -- Docker image repository
repository: grafana/promtail
# -- Overrides the image tag whose default is the chart's appVersion
tag: ""
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Image pull secrets for Docker images
imagePullSecrets: []
# -- hostAliases to add
hostAliases: []
# - ip: 1.2.3.4
# hostnames:
# - domain.tld
# -- Controls whether the pod has the `hostNetwork` flag set.
hostNetwork: null
# -- Annotations for the DaemonSet
annotations: {}
# -- The number of old revisions to retain to allow rollback (if not set, the Kubernetes default of 10 is used)
# revisionHistoryLimit: 1
# -- The update strategy for the DaemonSet
updateStrategy: {}
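# A minimal illustrative sketch only (values are placeholders, not defaults):
# updateStrategy:
#   type: RollingUpdate
#   rollingUpdate:
#     maxUnavailable: 2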
# -- Pod labels
podLabels: {}
# -- Pod annotations
podAnnotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "http-metrics"
# -- The name of the PriorityClass
priorityClassName: null
# -- Liveness probe
livenessProbe: {}
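# Illustrative sketch only, mirroring the default readinessProbe below
# (thresholds are placeholders):
# livenessProbe:
#   failureThreshold: 5
#   httpGet:
#     path: "{{ printf `%s/ready` .Values.httpPathPrefix }}"
#     port: http-metrics
#   initialDelaySeconds: 10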
# -- Readiness probe
# @default -- See `values.yaml`
readinessProbe:
failureThreshold: 5
httpGet:
path: "{{ printf `%s/ready` .Values.httpPathPrefix }}"
port: http-metrics
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
# -- Resource requests and limits
resources: {}
# limits:
# cpu: 200m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# -- The security context for pods
podSecurityContext:
runAsUser: 0
runAsGroup: 0
# -- The security context for containers
containerSecurityContext:
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
allowPrivilegeEscalation: false
rbac:
# -- Specifies whether RBAC resources are to be created
create: true
# -- Specifies whether a PodSecurityPolicy is to be created
pspEnabled: false
# -- The name of the Namespace to deploy into.
# If not set, `.Release.Namespace` is used
namespace: null
serviceAccount:
# -- Specifies whether a ServiceAccount should be created
create: true
# -- The name of the ServiceAccount to use.
# If not set and `create` is true, a name is generated using the fullname template
name: null
# -- Image pull secrets for the service account
imagePullSecrets: []
# -- Annotations for the service account
annotations: {}
# -- Automatically mount a ServiceAccount's API credentials
automountServiceAccountToken: true
# -- Automatically mount API credentials for a particular Pod
automountServiceAccountToken: true
# -- Node selector for pods
nodeSelector: {}
# -- Affinity configuration for pods
affinity: {}
# -- Tolerations for pods. By default, pods will be scheduled on master/control-plane nodes.
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
# -- Default volumes that are mounted into pods. In most cases, these should not be changed.
# Use `extraVolumes`/`extraVolumeMounts` for additional custom volumes.
# @default -- See `values.yaml`
defaultVolumes:
- name: run
hostPath:
path: /run/promtail
- name: containers
hostPath:
path: /var/lib/docker/containers
- name: pods
hostPath:
path: /var/log/pods
# -- Default volume mounts. Corresponds to `volumes`.
# @default -- See `values.yaml`
defaultVolumeMounts:
- name: run
mountPath: /run/promtail
- name: containers
mountPath: /var/lib/docker/containers
readOnly: true
- name: pods
mountPath: /var/log/pods
readOnly: true
# Extra volumes to be added in addition to those specified under `defaultVolumes`.
extraVolumes: []
# Extra volume mounts, in addition to `defaultVolumeMounts`. Corresponds to `extraVolumes`.
extraVolumeMounts: []
# Extra args for the Promtail container.
extraArgs: []
# Example:
# extraArgs:
#   - -client.external-labels=hostname=$(HOSTNAME)
# -- Extra environment variables. Set up tracing environment variables here if `.Values.config.enableTracing` is true.
# Tracing can currently only be configured via environment variables. See:
# https://grafana.com/docs/loki/latest/clients/promtail/configuration/#tracing_config
# https://www.jaegertracing.io/docs/1.16/client-features/
extraEnv: []
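# Illustrative Jaeger tracing sketch (host and sampler values are placeholders):
# extraEnv:
#   - name: JAEGER_AGENT_HOST
#     value: jaeger-agent.tracing.svc.cluster.local
#   - name: JAEGER_SAMPLER_TYPE
#     value: const
#   - name: JAEGER_SAMPLER_PARAM
#     value: "1"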
# -- Extra environment variables from secrets or configmaps
extraEnvFrom: []
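# Illustrative sketch (the Secret name is a placeholder):
# extraEnvFrom:
#   - secretRef:
#       name: promtail-credentials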
# -- Configure enableServiceLinks in pod
enableServiceLinks: true
# ServiceMonitor configuration
serviceMonitor:
# -- If enabled, ServiceMonitor resources for Prometheus Operator are created
enabled: false
# -- Alternative namespace for ServiceMonitor resources
namespace: null
# -- Namespace selector for ServiceMonitor resources
namespaceSelector: {}
# -- ServiceMonitor annotations
annotations: {}
# -- Additional ServiceMonitor labels
labels: {}
# -- ServiceMonitor scrape interval
interval: null
# -- ServiceMonitor scrape timeout in Go duration format (e.g. 15s)
scrapeTimeout: null
# -- ServiceMonitor relabel configs to apply to samples before scraping
# https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
# (defines `relabel_configs`)
relabelings: []
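# Illustrative sketch (the target label name is a placeholder):
# relabelings:
#   - sourceLabels: [__meta_kubernetes_pod_node_name]
#     targetLabel: node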
# -- ServiceMonitor relabel configs to apply to samples as the last
# step before ingestion
# https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
# (defines `metric_relabel_configs`)
metricRelabelings: []
# -- ServiceMonitor will add labels from the service to the Prometheus metric
# https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec
targetLabels: []
# -- ServiceMonitor will use http by default, but you can pick https as well
scheme: http
# -- ServiceMonitor will use these tlsConfig settings to make the health check requests
tlsConfig: null
# -- Prometheus rules will be deployed for alerting purposes
prometheusRule:
enabled: false
additionalLabels: {}
# namespace:
rules: []
# - alert: PromtailRequestErrors
# expr: 100 * sum(rate(promtail_request_duration_seconds_count{status_code=~"5..|failed"}[1m])) by (namespace, job, route, instance) / sum(rate(promtail_request_duration_seconds_count[1m])) by (namespace, job, route, instance) > 10
# for: 5m
# labels:
# severity: critical
# annotations:
# description: |
# The {{ $labels.job }} {{ $labels.route }} is experiencing
# {{ printf \"%.2f\" $value }} errors.
# VALUE = {{ $value }}
# LABELS = {{ $labels }}
# summary: Promtail request errors (instance {{ $labels.instance }})
# - alert: PromtailRequestLatency
# expr: histogram_quantile(0.99, sum(rate(promtail_request_duration_seconds_bucket[5m])) by (le)) > 1
# for: 5m
# labels:
# severity: critical
# annotations:
# summary: Promtail request latency (instance {{ $labels.instance }})
# description: |
# The {{ $labels.job }} {{ $labels.route }} is experiencing
# {{ printf \"%.2f\" $value }}s 99th percentile latency.
# VALUE = {{ $value }}
# LABELS = {{ $labels }}
# Extra containers created as part of a Promtail Deployment resource
# - spec for Container:
# https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core
#
# Note that the key is used as the `name` field, e.g. the example below
# creates a container named `promtail-proxy`.
extraContainers: {}
# promtail-proxy:
# image: nginx
# ...
# -- Configure additional ports and services. For each configured port, a corresponding service is created.
# See values.yaml for details
extraPorts: {}
# syslog:
# name: tcp-syslog
# annotations: {}
# labels: {}
# containerPort: 1514
# protocol: TCP
# service:
# type: ClusterIP
# clusterIP: null
# port: 1514
# externalIPs: []
# nodePort: null
# loadBalancerIP: null
# loadBalancerSourceRanges: []
# externalTrafficPolicy: null
# ingress:
# # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
# # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
# # ingressClassName: nginx
# # Values can be templated
# annotations: {}
# # kubernetes.io/ingress.class: nginx
# # kubernetes.io/tls-acme: "true"
# paths: "/"
# hosts:
# - chart-example.local
#
# tls: []
# # - secretName: chart-example-tls
# # hosts:
# # - chart-example.local
# -- PodSecurityPolicy configuration.
# @default -- See `values.yaml`
podSecurityPolicy:
privileged: true
allowPrivilegeEscalation: true
volumes:
- 'secret'
- 'hostPath'
- 'downwardAPI'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
# -- Section for crafting Promtail's config file. The only directly relevant value is `config.file`,
# which is a templated string that references the other values and snippets below this key.
# @default -- See `values.yaml`
config:
# -- Enable Promtail config from Helm chart
# Set `configmap.enabled: true` and this to `false` to manage your own Promtail config
# See default config in `values.yaml`
enabled: true
# -- The log level of the Promtail server
# Must be referenced in `config.file` to configure `server.log_level`
# See default config in `values.yaml`
logLevel: info
# -- The log format of the Promtail server
# Must be referenced in `config.file` to configure `server.log_format`
# Valid formats: `logfmt, json`
# See default config in `values.yaml`
logFormat: logfmt
# -- The port of the Promtail server
# Must be referenced in `config.file` to configure `server.http_listen_port`
# See default config in `values.yaml`
serverPort: 3101
# -- The client configuration of the Promtail server
# Must be referenced in `config.file` to configure `clients`
# @default -- See `values.yaml`
clients:
- url: http://loki-gateway/loki/api/v1/push
# -- Configures where Promtail saves its positions file, so it can resume reading after restarts.
# Must be referenced in `config.file` to configure `positions`
positions:
filename: /run/promtail/positions.yaml
# -- The config to enable tracing
enableTracing: false
# -- A section of reusable snippets that can be referenced in `config.file`.
# Custom snippets may be added in order to reduce redundancy.
# This is especially helpful when multiple `kubernetes_sd_configs` are used, as they usually have large parts in common.
# @default -- See `values.yaml`
snippets:
pipelineStages:
- cri: {}
common:
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: node_name
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
replacement: $1
separator: /
source_labels:
- namespace
- app
target_label: job
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- action: replace
source_labels:
- __meta_kubernetes_pod_container_name
target_label: container
- action: replace
replacement: /var/log/pods/*$1/*.log
separator: /
source_labels:
- __meta_kubernetes_pod_uid
- __meta_kubernetes_pod_container_name
target_label: __path__
- action: replace
replacement: /var/log/pods/*$1/*.log
regex: true/(.*)
separator: /
source_labels:
- __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
- __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
- __meta_kubernetes_pod_container_name
target_label: __path__
# -- If set to true, adds an additional label for the scrape job.
# This helps debug the Promtail config.
addScrapeJobLabel: false
# -- You can put here any keys that will be directly added to the config file's 'limits_config' block.
# @default -- empty
extraLimitsConfig: ""
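# A commented-out sketch: keys given here are inserted verbatim into the
# generated `limits_config` block. The readline rate-limiting values below
# are illustrative, not recommended defaults.
# extraLimitsConfig: |
#   readline_rate_enabled: true
#   readline_rate: 10000
#   readline_burst: 20000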
# -- You can put here any keys that will be directly added to the config file's 'server' block.
# @default -- empty
extraServerConfigs: ""
# -- You can put here any additional scrape configs you want to add to the config file.
# @default -- empty
extraScrapeConfigs: ""
# -- You can put here any additional relabel_configs to append to the "kubernetes-pods" job
extraRelabelConfigs: []
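# A commented-out sketch: an extra relabel rule that drops logs from pods
# carrying a hypothetical opt-out annotation (`example.com/skip-logs: "true"`);
# these rules are appended to the "kubernetes-pods" job's relabel_configs.
# extraRelabelConfigs:
#   - action: drop
#     source_labels:
#       - __meta_kubernetes_pod_annotation_example_com_skip_logs
#     regex: "true"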
scrapeConfigs: |
# See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
- job_name: kubernetes-pods
pipeline_stages:
{{- toYaml .Values.config.snippets.pipelineStages | nindent 4 }}
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels:
- __meta_kubernetes_pod_controller_name
regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
action: replace
target_label: __tmp_controller_name
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_name
- __meta_kubernetes_pod_label_app
- __tmp_controller_name
- __meta_kubernetes_pod_name
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: app
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_instance
- __meta_kubernetes_pod_label_instance
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: instance
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_component
- __meta_kubernetes_pod_label_component
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: component
{{- if .Values.config.snippets.addScrapeJobLabel }}
- replacement: kubernetes-pods
target_label: scrape_job
{{- end }}
{{- toYaml .Values.config.snippets.common | nindent 4 }}
{{- with .Values.config.snippets.extraRelabelConfigs }}
{{- toYaml . | nindent 4 }}
{{- end }}
# -- Config file contents for Promtail.
# Must be configured as a string.
# It is templated so it can be assembled from reusable snippets in order to avoid redundancy.
# @default -- See `values.yaml`
file: |
server:
log_level: {{ .Values.config.logLevel }}
log_format: {{ .Values.config.logFormat }}
http_listen_port: {{ .Values.config.serverPort }}
{{- with .Values.httpPathPrefix }}
http_path_prefix: {{ . }}
{{- end }}
{{- tpl .Values.config.snippets.extraServerConfigs . | nindent 2 }}
clients:
{{- tpl (toYaml .Values.config.clients) . | nindent 2 }}
positions:
{{- tpl (toYaml .Values.config.positions) . | nindent 2 }}
scrape_configs:
{{- tpl .Values.config.snippets.scrapeConfigs . | nindent 2 }}
{{- tpl .Values.config.snippets.extraScrapeConfigs . | nindent 2 }}
limits_config:
{{- tpl .Values.config.snippets.extraLimitsConfig . | nindent 2 }}
tracing:
enabled: {{ .Values.config.enableTracing }}
networkPolicy:
# -- Specifies whether Network Policies should be created
enabled: false
metrics:
# -- Specifies the Pods which are allowed to access the metrics port.
# As this is cross-namespace communication, you also need the namespaceSelector.
podSelector: {}
# -- Specifies the namespaces which are allowed to access the metrics port
namespaceSelector: {}
# -- Specifies specific network CIDRs which are allowed to access the metrics port.
# In case you use namespaceSelector, you also have to specify your kubelet networks here.
# The metrics ports are also used for probes.
cidrs: []
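# A commented-out sketch: admit scrapes from Prometheus pods in a
# "monitoring" namespace plus a kubelet CIDR; all selector values and the
# CIDR below are placeholders to adapt to your cluster.
# podSelector:
#   matchLabels:
#     app.kubernetes.io/name: prometheus
# namespaceSelector:
#   matchLabels:
#     kubernetes.io/metadata.name: monitoring
# cidrs:
#   - 10.0.0.0/8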
k8sApi:
# -- Specify the k8s API endpoint port
port: 8443
# -- Specifies specific network CIDRs you want to limit access to
cidrs: []
# -- Base path from which to serve all API routes
httpPathPrefix: ""
sidecar:
configReloader:
enabled: false
image:
# -- The Docker registry for sidecar config-reloader
registry: ghcr.io
# -- Docker image repository for sidecar config-reloader
repository: jimmidyson/configmap-reload
# -- Docker image tag for sidecar config-reloader
tag: v0.12.0
# -- Docker image pull policy for sidecar config-reloader
pullPolicy: IfNotPresent
# -- Extra args for the config-reloader container.
extraArgs: []
# -- Extra environment variables for sidecar config-reloader
extraEnv: []
# -- Extra environment variables from secrets or configmaps for sidecar config-reloader
extraEnvFrom: []
# -- The security context for containers for sidecar config-reloader
containerSecurityContext:
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
allowPrivilegeEscalation: false
# -- Readiness probe for sidecar config-reloader
readinessProbe: {}
# -- Liveness probe for sidecar config-reloader
livenessProbe: {}
# -- Resource requests and limits for sidecar config-reloader
resources: {}
# limits:
# cpu: 200m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
config:
# -- The port of the config-reloader server
serverPort: 9533
serviceMonitor:
enabled: true
# -- Extra K8s manifests to deploy
extraObjects: []
# - apiVersion: "kubernetes-client.io/v1"
# kind: ExternalSecret
# metadata:
# name: promtail-secrets
# spec:
# backendType: gcpSecretsManager
# data:
# - key: promtail-oauth2-creds
# name: client_secret