RabbitMQ Integration
Version | Compatible Kibana version(s) | Supported Serverless project types | Subscription level | Level of support
---|---|---|---|---
1.16.0 (View all) | 8.13.0 or higher | Security | Basic | Elastic
This integration uses the HTTP API exposed by the RabbitMQ management plugin to collect metrics.
The default data streams are `connection`, `node`, `queue`, `exchange`, and standard logs.
If `management.path_prefix` is set in the RabbitMQ configuration, `management_path_prefix` must be set to the same value in this integration's configuration.
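For orientation, here is a minimal sketch (Python, using the third-party requests library) of querying the management HTTP API that the integration reads from, for example to verify connectivity or a non-default path prefix. The URL, port, credentials, and the `fetch` helper are assumptions for illustration; the integration performs equivalent requests internally.

```python
# Minimal sketch: query the RabbitMQ management HTTP API that this
# integration reads from. Assumes a local broker with the management
# plugin enabled on port 15672 and the default guest/guest credentials.
import requests  # third-party HTTP client

BASE_URL = "http://localhost:15672"
# Keep this in sync with management.path_prefix, e.g. "/rmq"; empty for the default.
MANAGEMENT_PATH_PREFIX = ""


def fetch(resource: str):
    """GET one management API collection, honoring the optional path prefix."""
    url = f"{BASE_URL}{MANAGEMENT_PATH_PREFIX}/api/{resource}"
    response = requests.get(url, auth=("guest", "guest"), timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # The same management API collections the connection, node, queue
    # and exchange data streams are built from.
    for resource in ("connections", "nodes", "queues", "exchanges"):
        print(resource, len(fetch(resource)), "item(s)")
```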
Compatibility
The RabbitMQ integration is fully tested with RabbitMQ 3.7.4 and should be compatible with any version that supports the management plugin (which must be installed and enabled). The exchange dataset is also tested with 3.6.0, 3.6.5, and 3.7.14.
The application logs dataset parses the single-file log format introduced in 3.7.0.
Logs
Application Logs
The application logs data stream collects standard RabbitMQ logs. It only supports RabbitMQ's default RFC 3339 timestamp format.
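As a rough illustration of the expected input, the sketch below splits one log line into the timestamp, level, Erlang pid, and message pieces described by the exported fields. The sample line, the regular expression, and the parsing approach are assumptions for demonstration, not the integration's actual ingest pipeline.

```python
# Illustrative only: split a RabbitMQ-style log line into the pieces the
# application logs fields describe (timestamp, level, Erlang pid, message).
import re
from datetime import datetime

# Made-up sample line with an RFC 3339 timestamp, as required by the dataset.
SAMPLE = "2024-01-15 10:23:30.280361+00:00 [info] <0.230.0> accepting AMQP connection"

PATTERN = re.compile(
    r"^(?P<timestamp>\S+ \S+) "   # RFC 3339 date and time
    r"\[(?P<level>\w+)\] "        # log level, e.g. [info]
    r"<(?P<pid>[\d.]+)> "         # Erlang process id -> rabbitmq.log.pid
    r"(?P<message>.*)$"           # free-form message
)

match = PATTERN.match(SAMPLE)
if match:
    ts = datetime.fromisoformat(match["timestamp"])  # parses the RFC 3339 timestamp
    print(ts.isoformat(), match["level"], match["pid"], match["message"])
```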
ECS Field Reference
Refer to the ECS documentation for detailed information on ECS fields.
Exported fields
Field | Description | Type
---|---|---
@timestamp | Event timestamp. | date
cloud.image.id | Image ID for the cloud instance. | keyword
data_stream.dataset | Data stream dataset. | constant_keyword
data_stream.namespace | Data stream namespace. | constant_keyword
data_stream.type | Data stream type. | constant_keyword
event.dataset | Event dataset | constant_keyword
event.module | Event module | constant_keyword
host.containerized | If the host is a container. | boolean
host.os.build | OS build information. | keyword
host.os.codename | OS codename, if any. | keyword
rabbitmq.log.pid | The Erlang process id | keyword
Metrics
Connection Metrics
Example
An example event for `connection` looks as follows:
{ "@timestamp": "2020-06-25T10:16:10.138Z", "ecs": { "version": "8.11.0" }, "event": { "dataset": "rabbitmq.connection", "duration": 374411, "module": "rabbitmq" }, "metricset": { "name": "connection", "period": 10000 }, "rabbitmq": { "connection": { "channel_max": 65535, "channels": 2, "client_provided": { "name": "Connection1" }, "frame_max": 131072, "host": "::1", "name": "[::1]:31153 -> [::1]:5672", "octet_count": { "received": 5834, "sent": 5834 }, "packet_count": { "pending": 0, "received": 442, "sent": 422 }, "peer": { "host": "::1", "port": 31153 }, "port": 5672, "state": "running", "type": "network" }, "vhost": "/" }, "service": { "address": "localhost:15672", "type": "rabbitmq" } }
ECS Field Reference
Refer to the ECS documentation for detailed information on ECS fields.
Exported fields
Field | Description | Type | Metric Type
---|---|---|---
@timestamp | Event timestamp. | date |
agent.id | | keyword |
cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword |
cloud.availability_zone | Availability zone in which this host is running. | keyword |
cloud.image.id | Image ID for the cloud instance. | keyword |
cloud.instance.id | Instance ID of the host machine. | keyword |
cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword |
cloud.region | Region in which this host is running. | keyword |
container.id | Unique container id. | keyword |
data_stream.dataset | Data stream dataset. | constant_keyword |
data_stream.namespace | Data stream namespace. | constant_keyword |
data_stream.type | Data stream type. | constant_keyword |
event.dataset | Event dataset | constant_keyword |
event.module | Event module | constant_keyword |
host.containerized | If the host is a container. | boolean |
host.name | Name of the host. It can contain what | keyword |
host.os.build | OS build information. | keyword |
host.os.codename | OS codename, if any. | keyword |
rabbitmq.connection.channel_max | The maximum number of channels allowed on the connection. | long | counter
rabbitmq.connection.channels | The number of channels on the connection. | long | gauge
rabbitmq.connection.client_provided.name | User specified connection name. | keyword |
rabbitmq.connection.frame_max | Maximum permissible size of a frame (in bytes) to negotiate with clients. | long | gauge
rabbitmq.connection.host | Server hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was disabled. | keyword |
rabbitmq.connection.name | The name of the connection with non-ASCII characters escaped as in C. | keyword |
rabbitmq.connection.octet_count.received | Number of octets received on the connection. | long | gauge
rabbitmq.connection.octet_count.sent | Number of octets sent on the connection. | long | gauge
rabbitmq.connection.packet_count.pending | Number of packets pending on the connection. | long | gauge
rabbitmq.connection.packet_count.received | Number of packets received on the connection. | long | counter
rabbitmq.connection.packet_count.sent | Number of packets sent on the connection. | long | counter
rabbitmq.connection.peer.host | Peer hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was not enabled. | keyword |
rabbitmq.connection.peer.port | Peer port. | long |
rabbitmq.connection.port | Server port. | long |
rabbitmq.connection.state | Connection state. | keyword |
rabbitmq.connection.type | Type of the connection. | keyword |
rabbitmq.vhost | Virtual host name with non-ASCII characters escaped as in C. | keyword |
service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword |
Exchange Metrics
Example
An example event for `exchange` looks as follows:
{ "@timestamp": "2020-06-25T10:04:20.944Z", "ecs": { "version": "8.11.0" }, "event": { "dataset": "rabbitmq.exchange", "duration": 4078507, "module": "rabbitmq" }, "metricset": { "name": "exchange", "period": 10000 }, "rabbitmq": { "exchange": { "arguments": {}, "auto_delete": false, "durable": true, "internal": false, "name": "" }, "vhost": "/" }, "service": { "address": "localhost:15672", "type": "rabbitmq" }, "user": { "name": "rmq-internal" } }
ECS Field Reference
Refer to the ECS documentation for detailed information on ECS fields.
Exported fields
Field | Description | Type | Metric Type
---|---|---|---
@timestamp | Event timestamp. | date |
agent.id | | keyword |
cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword |
cloud.availability_zone | Availability zone in which this host is running. | keyword |
cloud.image.id | Image ID for the cloud instance. | keyword |
cloud.instance.id | Instance ID of the host machine. | keyword |
cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword |
cloud.region | Region in which this host is running. | keyword |
container.id | Unique container id. | keyword |
data_stream.dataset | Data stream dataset. | constant_keyword |
data_stream.namespace | Data stream namespace. | constant_keyword |
data_stream.type | Data stream type. | constant_keyword |
event.dataset | Event dataset | constant_keyword |
event.module | Event module | constant_keyword |
host.containerized | If the host is a container. | boolean |
host.name | Name of the host. It can contain what | keyword |
host.os.build | OS build information. | keyword |
host.os.codename | OS codename, if any. | keyword |
rabbitmq.exchange.auto_delete | Whether the exchange will be deleted automatically when no longer used. | boolean |
rabbitmq.exchange.durable | Whether or not the exchange survives server restarts. | boolean |
rabbitmq.exchange.internal | Whether the exchange is internal, i.e. cannot be directly published to by a client. | boolean |
rabbitmq.exchange.messages.publish_in.count | Count of messages published "in" to an exchange, i.e. not taking account of routing. | long | gauge
rabbitmq.exchange.messages.publish_in.details.rate | How much the exchange publish-in count has changed per second in the most recent sampling interval. | float | gauge
rabbitmq.exchange.messages.publish_out.count | Count of messages published "out" of an exchange, i.e. taking account of routing. | long | gauge
rabbitmq.exchange.messages.publish_out.details.rate | How much the exchange publish-out count has changed per second in the most recent sampling interval. | float | gauge
rabbitmq.exchange.name | The name of the exchange with non-ASCII characters escaped as in C. | keyword |
rabbitmq.vhost | Virtual host name with non-ASCII characters escaped as in C. | keyword |
service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword |
Node Metrics
The "node" dataset collects metrics about RabbitMQ nodes.
It supports two modes of data collection, selected with the "Collection mode" setting (a sketch of both modes follows this list):
- `node` - collects metrics only from the node the agent connects to.
- `cluster` - collects metrics from all the nodes in the cluster. This is recommended when metrics for the whole cluster are collected through a single endpoint.
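As mentioned above, here is a rough sketch of how the two modes differ, assuming the standard management API endpoints (`/api/nodes` lists every node in the cluster, `/api/nodes/{name}` returns a single node); the integration performs this selection internally, and the helper names below are hypothetical.

```python
# Rough illustration of the two collection modes, assuming the standard
# management API endpoints; the integration selects between them internally.
import requests  # third-party HTTP client

BASE_URL = "http://localhost:15672"
AUTH = ("guest", "guest")  # assumed default credentials


def collect_cluster() -> list:
    """'cluster' mode: one endpoint returns metrics for every node in the cluster."""
    return requests.get(f"{BASE_URL}/api/nodes", auth=AUTH, timeout=10).json()


def collect_node(node_name: str) -> dict:
    """'node' mode: metrics only for the node the agent connects to."""
    return requests.get(f"{BASE_URL}/api/nodes/{node_name}", auth=AUTH, timeout=10).json()


if __name__ == "__main__":
    for node in collect_cluster():
        print(node["name"], node.get("fd_used"), node.get("mem_used"))
```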
Example
An example event for `node` looks as follows:
{ "@timestamp": "2020-06-25T10:04:20.944Z", "event": { "dataset": "rabbitmq.node", "duration": 115000, "module": "rabbitmq" }, "rabbitmq": { "node": { "disk": { "free": { "bytes": 485213712384, "limit": { "bytes": 50000000 } } }, "fd": { "total": 1048576, "used": 54 }, "gc": { "num": { "count": 5724 }, "reclaimed": { "bytes": 294021640 } }, "io": { "file_handle": { "open_attempt": { "avg": { "ms": 0 }, "count": 10 } }, "read": { "avg": { "ms": 0 }, "bytes": 1, "count": 1 }, "reopen": { "count": 1 }, "seek": { "avg": { "ms": 0 }, "count": 0 }, "sync": { "avg": { "ms": 0 }, "count": 0 }, "write": { "avg": { "ms": 0 }, "bytes": 0, "count": 0 } }, "mem": { "limit": { "bytes": 13340778496 }, "used": { "bytes": 71448312 } }, "mnesia": { "disk": { "tx": { "count": 0 } }, "ram": { "tx": { "count": 43 } } }, "msg": { "store_read": { "count": 0 }, "store_write": { "count": 0 } }, "name": "rabbit@my-rabbit", "proc": { "total": 1048576, "used": 234 }, "processors": 12, "queue": { "index": { "journal_write": { "count": 0 }, "read": { "count": 0 }, "write": { "count": 0 } } }, "run": { "queue": 0 }, "socket": { "total": 943626, "used": 0 }, "type": "disc", "uptime": 155275 } }, "service": { "address": "localhost:15672", "type": "rabbitmq" } }
ECS Field Reference
Refer to the ECS documentation for detailed information on ECS fields.
Exported fields
Field | Description | Type | Metric Type
---|---|---|---
@timestamp | Event timestamp. | date |
agent.id | | keyword |
cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword |
cloud.availability_zone | Availability zone in which this host is running. | keyword |
cloud.image.id | Image ID for the cloud instance. | keyword |
cloud.instance.id | Instance ID of the host machine. | keyword |
cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword |
cloud.region | Region in which this host is running. | keyword |
container.id | Unique container id. | keyword |
data_stream.dataset | Data stream dataset. | constant_keyword |
data_stream.namespace | Data stream namespace. | constant_keyword |
data_stream.type | Data stream type. | constant_keyword |
event.dataset | Event dataset | constant_keyword |
event.module | Event module | constant_keyword |
host.containerized | If the host is a container. | boolean |
host.name | Name of the host. It can contain what | keyword |
host.os.build | OS build information. | keyword |
host.os.codename | OS codename, if any. | keyword |
rabbitmq.node.disk.free.bytes | Disk free space in bytes. | long | gauge
rabbitmq.node.disk.free.limit.bytes | Point at which the disk alarm will go off. | long | gauge
rabbitmq.node.fd.total | File descriptors available. | long | gauge
rabbitmq.node.fd.used | Used file descriptors. | long | gauge
rabbitmq.node.gc.num.count | Number of GC operations. | long | counter
rabbitmq.node.gc.reclaimed.bytes | GC bytes reclaimed. | long | counter
rabbitmq.node.io.file_handle.open_attempt.avg.ms | File handle open avg time | long | gauge
rabbitmq.node.io.file_handle.open_attempt.count | File handle open attempts | long | counter
rabbitmq.node.io.read.avg.ms | File handle read avg time | long | gauge
rabbitmq.node.io.read.bytes | Data read in bytes | long | counter
rabbitmq.node.io.read.count | Data read operations | long | counter
rabbitmq.node.io.reopen.count | Data reopen operations | long | counter
rabbitmq.node.io.seek.avg.ms | Data seek avg time | long | gauge
rabbitmq.node.io.seek.count | Data seek operations | long | counter
rabbitmq.node.io.sync.avg.ms | Data sync avg time | long | gauge
rabbitmq.node.io.sync.count | Data sync operations | long | counter
rabbitmq.node.io.write.avg.ms | Data write avg time | long | gauge
rabbitmq.node.io.write.bytes | Data write in bytes | long | counter
rabbitmq.node.io.write.count | Data write operations | long | counter
rabbitmq.node.mem.limit.bytes | Point at which the memory alarm will go off. | long | gauge
rabbitmq.node.mem.used.bytes | Memory used in bytes. | long | gauge
rabbitmq.node.mnesia.disk.tx.count | Number of Mnesia transactions which have been performed that required writes to disk. | long | counter
rabbitmq.node.mnesia.ram.tx.count | Number of Mnesia transactions which have been performed that did not require writes to disk. | long | counter
rabbitmq.node.msg.store_read.count | Number of messages which have been read from the message store. | long | counter
rabbitmq.node.msg.store_write.count | Number of messages which have been written to the message store. | long | counter
rabbitmq.node.name | Node name | keyword |
rabbitmq.node.proc.total | Maximum number of Erlang processes. | long | gauge
rabbitmq.node.proc.used | Number of Erlang processes in use. | long | gauge
rabbitmq.node.processors | Number of cores detected and usable by Erlang. | long | gauge
rabbitmq.node.queue.index.journal_write.count | Number of records written to the queue index journal. | long | counter
rabbitmq.node.queue.index.read.count | Number of records read from the queue index. | long | counter
rabbitmq.node.queue.index.write.count | Number of records written to the queue index. | long | counter
rabbitmq.node.run.queue | Average number of Erlang processes waiting to run. | long | gauge
rabbitmq.node.socket.total | File descriptors available for use as sockets. | long | gauge
rabbitmq.node.socket.used | File descriptors used as sockets. | long | gauge
rabbitmq.node.type | Node type. | keyword |
rabbitmq.node.uptime | Node uptime. | long | gauge
rabbitmq.vhost | Virtual host name with non-ASCII characters escaped as in C. | keyword |
service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword |
Queue Metrics
Example
An example event for `queue` looks as follows:
{ "@timestamp": "2020-06-25T10:15:10.955Z", "ecs": { "version": "8.11.0" }, "event": { "dataset": "rabbitmq.queue", "duration": 5860529, "module": "rabbitmq" }, "metricset": { "name": "queue", "period": 10000 }, "rabbitmq": { "queue": { "arguments": {}, "auto_delete": false, "consumers": { "count": 0, "utilisation": {} }, "disk": { "reads": {}, "writes": {} }, "durable": true, "exclusive": false, "memory": { "bytes": 14000 }, "messages": { "persistent": { "count": 0 }, "ready": { "count": 0, "details": { "rate": 0 } }, "total": { "count": 0, "details": { "rate": 0 } }, "unacknowledged": { "count": 0, "details": { "rate": 0 } } }, "name": "NameofQueue1", "state": "running" }, "vhost": "/" }, "service": { "address": "localhost:15672", "type": "rabbitmq" } }
ECS Field Reference
Refer to the ECS documentation for detailed information on ECS fields.
Exported fields
Field | Description | Type | Metric Type
---|---|---|---
@timestamp | Event timestamp. | date |
agent.id | | keyword |
cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword |
cloud.availability_zone | Availability zone in which this host is running. | keyword |
cloud.image.id | Image ID for the cloud instance. | keyword |
cloud.instance.id | Instance ID of the host machine. | keyword |
cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword |
cloud.region | Region in which this host is running. | keyword |
container.id | Unique container id. | keyword |
data_stream.dataset | Data stream dataset. | constant_keyword |
data_stream.namespace | Data stream namespace. | constant_keyword |
data_stream.type | Data stream type. | constant_keyword |
event.dataset | Event dataset | constant_keyword |
event.module | Event module | constant_keyword |
host.containerized | If the host is a container. | boolean |
host.name | Name of the host. It can contain what | keyword |
host.os.build | OS build information. | keyword |
host.os.codename | OS codename, if any. | keyword |
rabbitmq.queue.arguments.max_priority | Maximum number of priority levels for the queue to support. | long | gauge
rabbitmq.queue.auto_delete | Whether the queue will be deleted automatically when no longer used. | boolean |
rabbitmq.queue.consumers.count | Number of consumers. | long | gauge
rabbitmq.queue.consumers.utilisation.pct | Fraction of the time (between 0.0 and 1.0) that the queue is able to immediately deliver messages to consumers. This can be less than 1.0 if consumers are limited by network congestion or prefetch count. | long | gauge
rabbitmq.queue.disk.reads.count | Total number of times messages have been read from disk by this queue since it started. | long | counter
rabbitmq.queue.disk.writes.count | Total number of times messages have been written to disk by this queue since it started. | long | counter
rabbitmq.queue.durable | Whether or not the queue survives server restarts. | boolean |
rabbitmq.queue.exclusive | Whether the queue is exclusive (i.e. has owner_pid). | boolean |
rabbitmq.queue.memory.bytes | Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures. | long | gauge
rabbitmq.queue.messages.persistent.count | Total number of persistent messages in the queue (will always be 0 for transient queues). | long | gauge
rabbitmq.queue.messages.ready.count | Number of messages ready to be delivered to clients. | long | gauge
rabbitmq.queue.messages.ready.details.rate | How much the count of messages ready has changed per second in the most recent sampling interval. | float | gauge
rabbitmq.queue.messages.total.count | Sum of ready and unacknowledged messages (queue depth). | long | gauge
rabbitmq.queue.messages.total.details.rate | How much the queue depth has changed per second in the most recent sampling interval. | float | gauge
rabbitmq.queue.messages.unacknowledged.count | Number of messages delivered to clients but not yet acknowledged. | long | gauge
rabbitmq.queue.messages.unacknowledged.details.rate | How much the count of unacknowledged messages has changed per second in the most recent sampling interval. | float | gauge
rabbitmq.queue.name | The name of the queue with non-ASCII characters escaped as in C. | keyword |
rabbitmq.queue.state | The state of the queue. Normally running, but may be | keyword |
rabbitmq.vhost | Virtual host name with non-ASCII characters escaped as in C. | keyword |
service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword |
Changelog
Version | Details | Kibana version(s)
---|---|---
1.16.0 | Enhancement (View pull request) | 8.13.0 or higher
1.15.0 | Enhancement (View pull request) | 8.13.0 or higher
1.14.0 | Enhancement (View pull request) | 8.12.0 or higher
1.13.0 | Enhancement (View pull request) | 8.12.0 or higher
1.12.1 | Bug fix (View pull request) | 8.8.0 or higher
1.12.0 | Enhancement (View pull request) | 8.8.0 or higher
1.11.0 | Enhancement (View pull request) | 8.8.0 or higher
1.10.1 | Bug fix (View pull request) | 8.8.0 or higher
1.10.0 | Enhancement (View pull request) | 8.8.0 or higher
1.9.0 | Enhancement (View pull request) | 8.0.0 or higher
1.8.8 | Bug fix (View pull request) | 8.0.0 or higher
1.8.7 | Enhancement (View pull request) | 8.0.0 or higher
1.8.6 | Enhancement (View pull request) | 8.0.0 or higher
1.8.5 | Enhancement (View pull request) | 8.0.0 or higher
1.8.4 | Enhancement (View pull request) | 8.0.0 or higher
1.8.3 | Enhancement (View pull request) | 8.0.0 or higher
1.8.2 | Enhancement (View pull request) | 8.0.0 or higher
1.8.1 | Enhancement (View pull request) | 8.0.0 or higher
1.8.0 | Enhancement (View pull request) | 8.0.0 or higher
1.7.0 | Enhancement (View pull request) | 8.0.0 or higher
1.6.1 | Enhancement (View pull request) | 7.14.0 or higher
1.6.0 | Enhancement (View pull request) | 7.14.0 or higher
1.5.0 | Enhancement (View pull request) | 7.14.0 or higher
1.4.0 | Enhancement (View pull request) | 7.14.0 or higher
1.3.1 | Enhancement (View pull request) | 7.14.0 or higher
1.3.0 | Enhancement (View pull request) | —
1.2.0 | Enhancement (View pull request) | 7.14.0 or higher
1.1.2 | Enhancement (View pull request) | —
1.1.1 | Bug fix (View pull request) | —
1.1.0 | Enhancement (View pull request) | —
1.0.0 | Enhancement (View pull request) | 7.14.0 or higher
0.6.3 | Enhancement (View pull request) | —
0.6.2 | Enhancement (View pull request) | —
0.6.1 | Enhancement (View pull request) | —
0.6.0 | Enhancement (View pull request) | —
0.5.0 | Enhancement (View pull request) | —
0.4.1 | Bug fix (View pull request) | —
0.4.0 | Enhancement (View pull request) | —
0.3.0 | Enhancement (View pull request) | —
0.2.8 | Enhancement (View pull request) Enhancement (View pull request) | —
0.2.7 | Bug fix (View pull request) | —
0.1.0 | Enhancement (View pull request) | —