This integration is powered by Elastic Agent. Elastic Agent is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. Refer to our documentation for a detailed comparison between Beats and Elastic Agent.
Prefer to use Beats for this use case? See Filebeat modules for logs or Metricbeat modules for metrics.
See the integrations quick start guides to get started.
This integration is used to collect logs and metrics from Kubernetes clusters.
This integration requires kube-state-metrics, which is not included with Kubernetes by default. For the dashboards to populate properly, the kube-state-metrics service must be deployed to your Kubernetes cluster.
As one of the main pieces provided for Kubernetes monitoring, this integration is capable of fetching metrics from several components:

- kubelet
- kube-state-metrics
- apiserver
- controller-manager
- scheduler
- proxy

Some of these components run on each of the Kubernetes nodes (like kubelet or proxy), while others provide a single cluster-wide endpoint. This is important when determining the optimal configuration and running strategy for the different datasets included in the integration.
Kubernetes endpoints and metricsets
The Kubernetes module is somewhat complex, as its internal datasets require access to a wide variety of endpoints. This section highlights groups of datasets with similar endpoint access needs. For more details on the datasets, see the configuration example and datasets sections below.
node / system / pod / container / module / volume
The datasets container, node, pod, system and volume require access to the kubelet endpoint on each of the Kubernetes nodes, hence it's recommended to include them as part of an Agent DaemonSet or as standalone Agents running on the hosts.

Depending on the version and configuration of the Kubernetes nodes, kubelet might provide a read-only HTTP port (typically 10255), which is used in some configuration examples. In general, however, and in recent versions, this endpoint requires SSL (https) access (to port 10250 by default) and token-based authentication.
state_* and event
The state_* datasets are enabled by default. All datasets with the state_ prefix require the hosts field to point to the kube-state-metrics service within the cluster. As the service provides cluster-wide metrics, there's no need to fetch them per node, hence the recommendation is to run these datasets as part of an Agent Deployment with a single replica.

Generally, kube-state-metrics runs as a Deployment and is accessible via a service called kube-state-metrics in the kube-system namespace, which is the service to use in our configuration.
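As a sketch, a state_* stream would point its hosts setting at that service; the exact service address and port 8080 are assumptions based on a standard kube-state-metrics deployment.

```yaml
# Minimal sketch (assumptions noted above): a state_* dataset fetching cluster-wide
# metrics from the kube-state-metrics service, run by a single-replica Agent Deployment.
inputs:
  - id: kubernetes-state-metrics       # hypothetical id
    type: kubernetes/metrics
    streams:
      - data_stream:
          dataset: kubernetes.state_pod
          type: metrics
        metricsets:
          - state_pod
        # Service exposed by kube-state-metrics in the kube-system namespace.
        hosts:
          - "kube-state-metrics.kube-system.svc:8080"
        period: 10s
```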
apiserver
The apiserver dataset requires access to the Kubernetes API, which should be easily available in all Kubernetes environments. Depending on the Kubernetes configuration, the API access might require SSL (https) and token-based authentication.
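A sketch of what such a stream could look like, assuming the in-cluster API address is taken from the standard service environment variables and the service account token and CA paths are the Kubernetes defaults:

```yaml
# Minimal sketch (assumptions noted above): apiserver metrics collected via the
# in-cluster Kubernetes API endpoint with token-based authentication.
inputs:
  - id: kubernetes-apiserver-metrics   # hypothetical id
    type: kubernetes/metrics
    streams:
      - data_stream:
          dataset: kubernetes.apiserver
          type: metrics
        metricsets:
          - apiserver
        hosts:
          - "https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}"
        period: 30s
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.certificate_authorities:
          - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```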
proxy
The proxy dataset requires access to the proxy endpoint on each Kubernetes node, hence it's recommended to configure it as part of an Agent DaemonSet.
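For illustration, a per-node sketch; the localhost address and port 10249 are assumptions based on the default kube-proxy metrics endpoint.

```yaml
# Minimal sketch (assumptions noted above): proxy metrics scraped from the local
# kube-proxy metrics endpoint by a per-node Agent DaemonSet.
inputs:
  - id: kubernetes-proxy-metrics       # hypothetical id
    type: kubernetes/metrics
    streams:
      - data_stream:
          dataset: kubernetes.proxy
          type: metrics
        metricsets:
          - proxy
        hosts:
          - "localhost:10249"
        period: 10s
```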
scheduler and controllermanager
These datasets require access to the Kubernetes controller-manager and scheduler endpoints. By default, these pods run only on master nodes and are not exposed via a Service, but there are different strategies available for their configuration:

- Create Kubernetes Services to make kube-controller-manager and kube-scheduler available, and configure the datasets to point to these services as part of an Agent Deployment (a configuration sketch for this approach follows the note below).
- Run these datasets as part of an Agent DaemonSet (with the hostNetwork setting) with a nodeSelector to only run on master nodes.

These datasets are not enabled by default.
Note: In some "As a Service" Kubernetes implementations, like GKE, the master nodes or even the pods running on the masters won't be visible. In these cases it won't be possible to use the scheduler and controllermanager metricsets.
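A sketch of the first strategy, assuming you have created kube-scheduler and kube-controller-manager Services in kube-system; the service names and the secure ports (10259 and 10257) are assumptions and depend on how the control plane is exposed in your cluster.

```yaml
# Minimal sketch (assumptions noted above): scheduler and controller-manager metrics
# collected through user-created Services as part of an Agent Deployment.
inputs:
  - id: kubernetes-controlplane-metrics   # hypothetical id
    type: kubernetes/metrics
    streams:
      - data_stream:
          dataset: kubernetes.scheduler
          type: metrics
        metricsets:
          - scheduler
        hosts:
          - "https://kube-scheduler.kube-system.svc:10259"
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.verification_mode: "none"
        period: 10s
      - data_stream:
          dataset: kubernetes.controllermanager
          type: metrics
        metricsets:
          - controllermanager
        hosts:
          - "https://kube-controller-manager.kube-system.svc:10257"
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.verification_mode: "none"
        period: 10s
```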
container-logs
The container-logs dataset requires access to the log files on each Kubernetes node where the container logs are stored. This defaults to /var/log/containers/*${kubernetes.container.id}.log.
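A sketch of how this could be wired up in a standalone Agent policy running as a DaemonSet; the filestream input type matches what the integration uses for container logs (see the changelog), while the input id is a hypothetical placeholder.

```yaml
# Minimal sketch (assumptions noted above): container log collection from the
# node's log directory; the container id is resolved per discovered container.
inputs:
  - id: kubernetes-container-logs      # hypothetical id
    type: filestream
    streams:
      - data_stream:
          dataset: kubernetes.container_logs
          type: logs
        paths:
          - /var/log/containers/*${kubernetes.container.id}.log
```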
audit-logs
The audit-logs dataset requires access to the log files on each Kubernetes node where the audit logs are stored. This defaults to /var/log/kubernetes/kube-apiserver-audit.log.
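A similar sketch for audit logs; note that this data stream is disabled by default (see the changelog), and the path below is simply the documented default.

```yaml
# Minimal sketch (assumptions noted above): audit log collection from the
# control-plane node(s) where the API server writes its audit log.
inputs:
  - id: kubernetes-audit-logs          # hypothetical id
    type: filestream
    streams:
      - data_stream:
          dataset: kubernetes.audit_logs
          type: logs
        paths:
          - /var/log/kubernetes/kube-apiserver-audit.log
```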
Compatibility
The Kubernetes package is tested with Kubernetes versions 1.23.x through 1.26.x.
Dashboard
The Kubernetes integration ships with default dashboards for apiserver, controllermanager, overview, proxy and scheduler.
If you are running those components in a high-availability (HA) setup, be aware that when data is gathered from all instances, the dashboards will usually show the average of the metrics. For those scenarios, filtering by host or service address is possible.
The Cluster selector in the overview dashboard helps in distinguishing and filtering metrics collected from multiple clusters. If you want to focus on a subset of the Kubernetes clusters for monitoring a specific scenario, this cluster selector can be a handy tool. Note that this selector is populated from the orchestrator.cluster.name field, which may not always be available. This field gets its value from sources like kube_config, the kubeadm-config configMap, and Google Cloud's metadata API for GKE. If the sources mentioned above don't provide this value, Metricbeat will not report it. However, you can always use processors to set this field and utilize it in the cluster overview dashboard.
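For example, a minimal processors sketch that sets the field when no source provides it; the cluster name value is an assumption you would replace with your own.

```yaml
# Minimal sketch (assumption noted above): set orchestrator.cluster.name explicitly
# so the cluster selector in the overview dashboard can filter on it.
processors:
  - add_fields:
      target: orchestrator.cluster
      fields:
        name: my-cluster   # hypothetical cluster name
```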
Changelog
Version | Details |
---|---|
1.31.2 | View pull request Add system testing for state service datastream |
1.31.1 | View pull request Update controller manager, proxy and scheduler metrics and dashboards |
1.31.0 | View pull request Use data_stream.dataset as pre filters for dashboards and remove tags |
1.30.0 | View pull request Add missing namespace_uid and namespace_labels fields |
1.29.2 | View pull request Fix function for memory node usage |
1.29.1 | View pull request Fix removed condition setting for container_logs |
1.29.0 | View pull request Remove "Control Plane" column from Node Information table |
1.28.2 | View pull request Fix Pod Memory usage panel title |
1.28.1 | View pull request Delete statefulset, job and cronjob visualizations from Cluster Overview dashboard |
1.28.0 | View pull request Adding banner for Kube-state metrics |
1.27.1 | View pull request Fix typo in cluster overview dashboard |
1.27.0 | View pull request New cluster overview dashboard |
1.26.0 | View pull request Add processors configuration option for Kubernetes data_streams |
1.25.0 | View pull request Add condition configuration option to container logs data stream |
1.24.0 | View pull request Add fields to audit logs data stream |
1.23.1 | View pull request Add missing dimension fields |
1.23.0 | View pull request Add fields to audit logs data stream |
1.22.1 | View pull request Fix overlapping fields in state_job data stream |
1.22.0 | View pull request Update apiserver and controllermanager deprecated fields and dashboards |
1.21.2 | View pull request Add container ID and pod name as part of Kubernetes Container Logs filestream input |
1.21.1 | View pull request Improve the wording of the link to Kubernetes documentation |
1.21.0 | View pull request Add new dashboards |
1.20.0 | View pull request Change fields type for audit_logs data_stream to use requestObject and responseObject fields of audit events. Disable dynamic mapping for audit_logs data_stream. Drop kubernetes.audit.responseObject.metadata and kubernetes.audit.requestObject.metadata |
1.19.1 | View pull request Add documentation for volume field |
1.19.0 | View pull request Add missed ecs fields |
1.18.1 | View pull request Add documentation for multi-fields |
1.18.0 | View pull request Fix k8s overview dashboard |
1.17.3 | View pull request Remove incorrectly tagged boolean dimension |
1.17.2 | View pull request Add missing metadata fields |
1.17.1 | View pull request Improve default ndjson parser configuration |
1.17.0 | View pull request Disable audit logs collection by default |
1.16.0 | View pull request Documentation improvements |
1.15.0 | View pull request Add ssl.certificate_authorities configuration |
1.14.3 | View pull request Add missing job.name and cronjob.name fields to state_container datastream |
1.14.2 | View pull request Add missing job.name and cronjob.name fields to container related datastreams |
1.14.1 | View pull request Add missing job.name and cronjob.name fields added by metadata generators |
1.14.0 | View pull request Tune state_metrics settings |
1.13.0 | View pull request Update to ECS 8.0 |
1.12.0 | View pull request Expose add_resource_metadata configuration option |
1.11.0 | View pull request Add memory.working_set.limit.pct for pod and container data streams |
1.10.0 | View pull request Add leader election in state_job data stream |
1.9.0 | View pull request Add missing fields |
1.8.1 | View pull request Set kubernetes.volume.fs.used.pct to scaled_float |
1.8.0 | View pull request Support json logs parsing |
1.7.0 | View pull request Add new audit logs data stream in kubernetes integration |
1.6.0 | View pull request Add skip_older option for event datastream |
1.5.0 | View pull request Revert Kubernetes namespace field breaking change |
1.4.2 | View pull request Add dimension fields |
1.4.1 | View pull request Remove overriding of index pattern on the Kubernetes overview dashboard |
1.4.0 | View pull request Use filestream input for container_logs data stream |
1.3.3 | View pull request Fix conditions of data_streams that are based on k8s labels & add condition in pipelines |
1.3.2 | View pull request Set default host for proxy to localhost |
1.3.1 | View pull request Uniform with guidelines |
1.3.0 | View pull request Add container_logs ecs fields |
1.2.1 | View pull request Update Kubernetes cluster_ip field type |
1.2.0 | View pull request Update Kubernetes namespace field |
1.1.1 | View pull request Update Kubernetes integration Readme |
1.1.0 | View pull request Update to ECS 1.12.0 |
1.0.0 | View pull request Release Kubernetes as GA |
0.14.1 | View pull request Update default host in kubernetes proxy data stream in kubernetes integration |
0.14.0 | View pull request Add new container logs data stream in kubernetes integration |
0.13.0 | View pull request Leverage dynamic kubernetes provider for controller and scheduler datastream |
0.12.2 | View pull request Add missing field "kubernetes.daemonset.name" field for pod and container data streams |
0.12.1 | View pull request Add missing cluster filter for "orchestrator.cluster.name" field in [Metrics Kubernetes] Overview dashboard and Dashboard section in the integration overview page |
0.12.0 | View pull request Update kubernetes package ecs fields with orchestrator.cluster.url and orchestrator.cluster.name |
0.11.1 | View pull request Escape special characters in docs |
0.11.0 | View pull request Update documentation to fit mdx spec |
0.10.0 | View pull request Update integration description |
0.9.1 | View pull request Add missing field "kubernetes.daemonset.name" field for state_pod and state_container |
0.9.0 | View pull request Enhance kubernetes package with state_job data stream |
0.8.0 | View pull request Leverage leader election in kubernetes integration |
0.7.0 | View pull request Add _meta information to Kubernetes fields |
0.6.0 | View pull request Introduce kubernetes package granularity using input_groups |
0.5.3 | View pull request Add missing field "kubernetes.statefulset.replicas.ready" |
0.5.2 | View pull request Fix stack compatibility |
0.5.1 | View pull request Fix references to env variables |
0.5.0 | View pull request Add missing field "kubernetes.selectors.*" and extra https settings for controllermanager and scheduler datastreams |
0.4.5 | View pull request Add missing field "kubernetes.pod.ip" |
0.4.4 | View pull request Updating package owner |
0.4.3 | View pull request Correct sample event file. |
0.4.2 | View pull request Change kibana.version constraint to be more conservative. |
0.4.1 | View pull request Add missing fields |
0.1.0 | View pull request Initial release |