# Redis Integration

| | |
|---|---|
| **Version** | 1.18.0 |
| **Compatible Kibana version(s)** | 8.13.0 or higher |
| **Supported Serverless project types** | Security |
| **Subscription level** | Basic |
| **Level of support** | Elastic |
This integration periodically fetches logs and metrics from Redis servers.
## Compatibility

The `log` and `slowlog` datasets were tested with logs from Redis versions 1.2.6, 2.4.6, and 3.0.2, so we expect compatibility with any 1.x, 2.x, or 3.x version.

The `info`, `key`, and `keyspace` datasets were tested with Redis 3.2.12, 4.0.11, and 5.0-rc4, and are expected to work with all versions >= 3.0.
## Logs

### log

The `log` dataset collects the Redis standard logs.
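As an illustration of the kind of structure this dataset extracts, here is a minimal sketch that parses a Redis-style standard log line into fields similar to the ones exported below. The line layout and the role/level markers are assumptions based on the Redis server's log format, not this integration's actual pipeline:

```python
import re

# Parse a Redis 3.x-style log line such as:
#   1:M 25 Jun 2020 10:16:10.138 * Ready to accept connections
LOG_RE = re.compile(
    r"(?P<pid>\d+):(?P<role>[MSCX]) "               # pid and role code
    r"(?P<timestamp>\d{1,2} \w{3} \d{4} [\d:.]+) "  # e.g. "25 Jun 2020 10:16:10.138"
    r"(?P<level>[.\-*#]) "                          # . debug, - verbose, * notice, # warning
    r"(?P<message>.*)"
)

ROLES = {"M": "master", "S": "slave", "C": "child", "X": "sentinel"}

def parse_log_line(line: str) -> dict:
    m = LOG_RE.match(line)
    if not m:
        raise ValueError(f"unrecognized log line: {line!r}")
    fields = m.groupdict()
    fields["role"] = ROLES[fields["role"]]  # expand role code, as in redis.log.role
    return fields

event = parse_log_line("1:M 25 Jun 2020 10:16:10.138 * Ready to accept connections")
```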
**ECS Field Reference**

Please refer to the ECS reference documentation for detailed information on ECS fields.

**Exported fields**

| Field | Description | Type |
|---|---|---|
| @timestamp | Event timestamp. | date |
| cloud.image.id | Image ID for the cloud instance. | keyword |
| data_stream.dataset | Data stream dataset. | constant_keyword |
| data_stream.namespace | Data stream namespace. | constant_keyword |
| data_stream.type | Data stream type. | constant_keyword |
| event.dataset | Event dataset. | constant_keyword |
| event.module | Event module. | constant_keyword |
| host.containerized | If the host is a container. | boolean |
| host.os.build | OS build information. | keyword |
| host.os.codename | OS codename, if any. | keyword |
| redis.log.role | The role of the Redis instance. | keyword |
### slowlog

The `slowlog` dataset collects the Redis slow logs.
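Redis exposes slow operations through the `SLOWLOG GET` command, whose reply is a list of entries: entry id, unix timestamp, execution time in microseconds, the command with its arguments, and (on Redis >= 4.0) the client address and name. The sketch below maps one raw entry onto an event-like dictionary; the output field names are illustrative, not the integration's exact schema:

```python
# Hypothetical mapping of one raw SLOWLOG GET reply entry to an event.
def slowlog_entry_to_event(entry: list) -> dict:
    entry_id, ts, duration_usec, args = entry[:4]
    decode = lambda v: v.decode() if isinstance(v, bytes) else v
    event = {
        "id": entry_id,
        "timestamp": ts,            # unix timestamp when the command ran
        "duration_us": duration_usec,
        "cmd": decode(args[0]),
        "args": [decode(a) for a in args[1:]],
    }
    if len(entry) >= 6:  # client address and name, present on Redis >= 4.0
        event["client"], event["client_name"] = entry[4], entry[5]
    return event

raw = [14, 1593080170, 22644, [b"GET", b"foo"], "127.0.0.1:52342", ""]
event = slowlog_entry_to_event(raw)
```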
**ECS Field Reference**

Please refer to the ECS reference documentation for detailed information on ECS fields.

**Exported fields**

| Field | Description | Type |
|---|---|---|
| @timestamp | Event timestamp. | date |
| cloud.image.id | Image ID for the cloud instance. | keyword |
| data_stream.dataset | Data stream dataset. | constant_keyword |
| data_stream.namespace | Data stream namespace. | constant_keyword |
| data_stream.type | Data stream type. | constant_keyword |
| event.dataset | Event dataset. | constant_keyword |
| event.module | Event module. | constant_keyword |
| host.containerized | If the host is a container. | boolean |
| host.os.build | OS build information. | keyword |
| host.os.codename | OS codename, if any. | keyword |
| redis.log.role | The role of the Redis instance. | keyword |
## Metrics

### info

The `info` dataset collects information and statistics from Redis by running the `INFO` command and parsing the returned result.
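The raw `INFO` reply is plain text, organized into `# Section` headers followed by `key:value` lines. The sketch below shows one way such a reply can be parsed into per-section dictionaries, roughly how the metrics in the table below are derived; the layout comes from the Redis `INFO` command documentation, while the function itself is only illustrative:

```python
# Parse raw INFO command output into {section: {key: value}} dictionaries.
def parse_info(raw: str) -> dict:
    sections: dict = {}
    current = "unknown"
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):          # e.g. "# Clients" starts a new section
            current = line[1:].strip().lower()
            sections[current] = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            sections.setdefault(current, {})[key] = value
    return sections

raw = """\
# Clients
connected_clients:5
blocked_clients:0

# Stats
total_commands_processed:265
"""
info = parse_info(raw)
```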
**Example**

An example event for `info` looks as following:

```json
{ "@timestamp": "2020-06-25T10:16:10.138Z", "ecs": { "version": "8.11.0" }, "event": { "dataset": "redis.info", "duration": 374411, "module": "redis" }, "metricset": { "name": "info", "period": 10000 }, "redis": { "info": { "clients": { "biggest_input_buf": 0, "blocked": 0, "connected": 5, "longest_output_list": 0, "max_input_buffer": 0, "max_output_buffer": 0 }, "cluster": { "enabled": false }, "cpu": { "used": { "sys": 1.66, "sys_children": 0, "user": 0.39, "user_children": 0.01 } }, "memory": { "active_defrag": {}, "allocator": "jemalloc-4.0.3", "allocator_stats": { "fragmentation": {}, "rss": {} }, "fragmentation": { "ratio": 2.71 }, "max": { "policy": "noeviction", "value": 0 }, "used": { "lua": 37888, "peak": 945016, "rss": 2453504, "value": 904992 } }, "persistence": { "aof": { "bgrewrite": { "last_status": "ok" }, "buffer": {}, "copy_on_write": {}, "enabled": false, "fsync": {}, "rewrite": { "buffer": {}, "current_time": { "sec": -1 }, "in_progress": false, "last_time": { "sec": -1 }, "scheduled": false }, "size": {}, "write": { "last_status": "ok" } }, "loading": false, "rdb": { "bgsave": { "current_time": { "sec": -1 }, "in_progress": false, "last_status": "ok", "last_time": { "sec": -1 } }, "copy_on_write": {}, "last_save": { "changes_since": 35, "time": 1548663522 } } }, "replication": { "backlog": { "active": 0, "first_byte_offset": 0, "histlen": 0, "size": 1048576 }, "connected_slaves": 0, "master": { "offset": 0, "sync": {} }, "master_offset": 0, "role": "master", "slave": {} }, "server": { "arch_bits": "64", "build_id": "b9a4cd86ce8027d3", "config_file": "", "gcc_version": "6.4.0", "git_dirty": "0", "git_sha1": "00000000", "hz": 10, "lru_clock": 5159690, "mode": "standalone", "multiplexing_api": "epoll", "run_id": "0f681cb959aa47413ec40ff383715c923f9cbefd", "tcp_port": 6379, "uptime": 707 }, "slowlog": { "count": 0 }, "stats": { "active_defrag": {}, "commands_processed": 265, "connections": { "received": 848, "rejected": 0 }, "instantaneous": { "input_kbps": 0.18, "ops_per_sec": 6, "output_kbps": 1.39 }, "keys": { "evicted": 0, "expired": 0 }, "keyspace": { "hits": 15, "misses": 0 }, "latest_fork_usec": 0, "migrate_cached_sockets": 0, "net": { "input": { "bytes": 7300 }, "output": { "bytes": 219632 } }, "pubsub": { "channels": 0, "patterns": 0 }, "sync": { "full": 0, "partial": { "err": 0, "ok": 0 } } } } }, "service": { "address": "localhost:6379", "type": "redis" } }
```
**ECS Field Reference**

Please refer to the ECS reference documentation for detailed information on ECS fields.

**Exported fields**

| Field | Description | Type | Metric Type |
|---|---|---|---|
| @timestamp | Event timestamp. | date | |
| agent.id | | keyword | |
| cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword | |
| cloud.availability_zone | Availability zone in which this host is running. | keyword | |
| cloud.image.id | Image ID for the cloud instance. | keyword | |
| cloud.instance.id | Instance ID of the host machine. | keyword | |
| cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword | |
| cloud.region | Region in which this host is running. | keyword | |
| container.id | Unique container id. | keyword | |
| data_stream.dataset | Data stream dataset. | constant_keyword | |
| data_stream.namespace | Data stream namespace. | constant_keyword | |
| data_stream.type | Data stream type. | constant_keyword | |
| event.dataset | Event dataset. | constant_keyword | |
| event.module | Event module. | constant_keyword | |
| host.containerized | If the host is a container. | boolean | |
| host.name | Name of the host. | keyword | |
| host.os.build | OS build information. | keyword | |
| host.os.codename | OS codename, if any. | keyword | |
| os.full | Operating system name, including the version or code name. | keyword | |
| os.full.text | Multi-field of `os.full`. | match_only_text | |
| redis.info.clients.biggest_input_buf | Biggest input buffer among current client connections (replaced by max_input_buffer). | long | gauge |
| redis.info.clients.blocked | Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH). | long | gauge |
| redis.info.clients.connected | Number of client connections (excluding connections from slaves). | long | gauge |
| redis.info.clients.longest_output_list | Longest output list among current client connections (replaced by max_output_buffer). | long | gauge |
| redis.info.clients.max_input_buffer | Biggest input buffer among current client connections (on redis 5.0). | long | gauge |
| redis.info.clients.max_output_buffer | Longest output list among current client connections. | long | gauge |
| redis.info.cluster.enabled | Indicates that the Redis cluster is enabled. | boolean | |
| redis.info.cpu.used.sys | System CPU consumed by the Redis server. | scaled_float | gauge |
| redis.info.cpu.used.sys_children | System CPU consumed by the background processes. | scaled_float | gauge |
| redis.info.cpu.used.user | User CPU consumed by the Redis server. | scaled_float | gauge |
| redis.info.cpu.used.user_children | User CPU consumed by the background processes. | scaled_float | gauge |
| redis.info.memory.active_defrag.is_running | Flag indicating if active defragmentation is active. | boolean | |
| redis.info.memory.allocator | Memory allocator. | keyword | |
| redis.info.memory.allocator_stats.active | Active memory. | long | gauge |
| redis.info.memory.allocator_stats.allocated | Allocated memory. | long | gauge |
| redis.info.memory.allocator_stats.fragmentation.bytes | Fragmented bytes. | long | gauge |
| redis.info.memory.allocator_stats.fragmentation.ratio | Fragmentation ratio. | float | gauge |
| redis.info.memory.allocator_stats.resident | Resident memory. | long | gauge |
| redis.info.memory.allocator_stats.rss.bytes | Resident bytes. | long | gauge |
| redis.info.memory.allocator_stats.rss.ratio | Resident ratio. | float | gauge |
| redis.info.memory.fragmentation.bytes | Bytes between used_memory_rss and used_memory. | long | gauge |
| redis.info.memory.fragmentation.ratio | Ratio between used_memory_rss and used_memory. | float | gauge |
| redis.info.memory.max.policy | Eviction policy to use when memory limit is reached. | keyword | |
| redis.info.memory.max.value | Memory limit. | long | gauge |
| redis.info.memory.used.dataset | The size in bytes of the dataset. | long | gauge |
| redis.info.memory.used.lua | Used memory by the Lua engine. | long | gauge |
| redis.info.memory.used.peak | Peak memory consumed by Redis. | long | gauge |
| redis.info.memory.used.rss | Number of bytes that Redis allocated as seen by the operating system (a.k.a. resident set size). | long | gauge |
| redis.info.memory.used.value | Total number of bytes allocated by Redis. | long | gauge |
| redis.info.persistence.aof.bgrewrite.last_status | Status of the last AOF rewrite operation. | keyword | |
| redis.info.persistence.aof.buffer.size | Size of the AOF buffer. | long | gauge |
| redis.info.persistence.aof.copy_on_write.last_size | The size in bytes of copy-on-write allocations during the last RDB save operation. | long | gauge |
| redis.info.persistence.aof.enabled | Flag indicating AOF logging is activated. | boolean | |
| redis.info.persistence.aof.fsync.delayed | Delayed fsync counter. | long | gauge |
| redis.info.persistence.aof.fsync.pending | Number of fsync pending jobs in background I/O queue. | long | gauge |
| redis.info.persistence.aof.rewrite.buffer.size | Size of the AOF rewrite buffer. | long | gauge |
| redis.info.persistence.aof.rewrite.current_time.sec | Duration of the ongoing AOF rewrite operation, if any. | long | gauge |
| redis.info.persistence.aof.rewrite.in_progress | Flag indicating an AOF rewrite operation is ongoing. | boolean | |
| redis.info.persistence.aof.rewrite.last_time.sec | Duration of the last AOF rewrite operation in seconds. | long | gauge |
| redis.info.persistence.aof.rewrite.scheduled | Flag indicating an AOF rewrite operation will be scheduled once the ongoing RDB save is complete. | boolean | |
| redis.info.persistence.aof.size.base | AOF file size on latest startup or rewrite. | long | gauge |
| redis.info.persistence.aof.size.current | AOF current file size. | long | gauge |
| redis.info.persistence.aof.write.last_status | Status of the last write operation to the AOF. | keyword | |
| redis.info.persistence.loading | Flag indicating if the load of a dump file is ongoing. | boolean | |
| redis.info.persistence.rdb.bgsave.current_time.sec | Duration of the ongoing RDB save operation, if any. | long | gauge |
| redis.info.persistence.rdb.bgsave.in_progress | Flag indicating an RDB save is ongoing. | boolean | |
| redis.info.persistence.rdb.bgsave.last_status | Status of the last RDB save operation. | keyword | |
| redis.info.persistence.rdb.bgsave.last_time.sec | Duration of the last RDB save operation in seconds. | long | gauge |
| redis.info.persistence.rdb.copy_on_write.last_size | The size in bytes of copy-on-write allocations during the last RDB save operation. | long | gauge |
| redis.info.persistence.rdb.last_save.changes_since | Number of changes since the last dump. | long | gauge |
| redis.info.persistence.rdb.last_save.time | Epoch-based timestamp of the last successful RDB save. | long | gauge |
| redis.info.replication.backlog.active | Flag indicating replication backlog is active. | long | |
| redis.info.replication.backlog.first_byte_offset | The master offset of the replication backlog buffer. | long | gauge |
| redis.info.replication.backlog.histlen | Size in bytes of the data in the replication backlog buffer. | long | gauge |
| redis.info.replication.backlog.size | Total size in bytes of the replication backlog buffer. | long | gauge |
| redis.info.replication.connected_slaves | Number of connected slaves. | long | gauge |
| redis.info.replication.master.last_io_seconds_ago | Number of seconds since the last interaction with master. | long | gauge |
| redis.info.replication.master.link_status | Status of the link (up/down). | keyword | |
| redis.info.replication.master.offset | The server's current replication offset. | long | gauge |
| redis.info.replication.master.second_offset | The offset up to which replication IDs are accepted. | long | gauge |
| redis.info.replication.master.sync.in_progress | Indicates the master is syncing to the slave. | boolean | |
| redis.info.replication.master.sync.last_io_seconds_ago | Number of seconds since last transfer I/O during a SYNC operation. | long | gauge |
| redis.info.replication.master.sync.left_bytes | Number of bytes left before syncing is complete. | long | gauge |
| redis.info.replication.master_offset | The server's current replication offset. | long | gauge |
| redis.info.replication.role | Role of the instance (can be "master" or "slave"). | keyword | |
| redis.info.replication.slave.is_readonly | Flag indicating if the slave is read-only. | boolean | |
| redis.info.replication.slave.offset | The replication offset of the slave instance. | long | gauge |
| redis.info.replication.slave.priority | The priority of the instance as a candidate for failover. | long | |
| redis.info.server.arch_bits | | keyword | |
| redis.info.server.build_id | | keyword | |
| redis.info.server.config_file | | keyword | |
| redis.info.server.gcc_version | | keyword | |
| redis.info.server.git_dirty | | keyword | |
| redis.info.server.git_sha1 | | keyword | |
| redis.info.server.hz | | long | |
| redis.info.server.lru_clock | | long | |
| redis.info.server.mode | | keyword | |
| redis.info.server.multiplexing_api | | keyword | |
| redis.info.server.run_id | | keyword | |
| redis.info.server.tcp_port | | long | |
| redis.info.server.uptime | | long | gauge |
| redis.info.slowlog.count | Count of slow operations. | long | gauge |
| redis.info.stats.active_defrag.hits | Number of value reallocations performed by the active defragmentation process. | long | gauge |
| redis.info.stats.active_defrag.key_hits | Number of keys that were actively defragmented. | long | gauge |
| redis.info.stats.active_defrag.key_misses | Number of keys that were skipped by the active defragmentation process. | long | gauge |
| redis.info.stats.active_defrag.misses | Number of aborted value reallocations started by the active defragmentation process. | long | gauge |
| redis.info.stats.commands_processed | Total number of commands processed. | long | counter |
| redis.info.stats.connections.received | Total number of connections received. | long | counter |
| redis.info.stats.connections.rejected | Total number of connections rejected. | long | counter |
| redis.info.stats.instantaneous.input_kbps | The network's read rate per second in KB/sec. | scaled_float | gauge |
| redis.info.stats.instantaneous.ops_per_sec | Number of commands processed per second. | long | gauge |
| redis.info.stats.instantaneous.output_kbps | The network's write rate per second in KB/sec. | scaled_float | gauge |
| redis.info.stats.keys.evicted | Number of evicted keys due to maxmemory limit. | long | gauge |
| redis.info.stats.keys.expired | Total number of key expiration events. | long | gauge |
| redis.info.stats.keyspace.hits | Number of successful lookups of keys in the main dictionary. | long | gauge |
| redis.info.stats.keyspace.misses | Number of failed lookups of keys in the main dictionary. | long | gauge |
| redis.info.stats.latest_fork_usec | Duration of the latest fork operation in microseconds. | long | gauge |
| redis.info.stats.migrate_cached_sockets | The number of sockets open for MIGRATE purposes. | long | gauge |
| redis.info.stats.net.input.bytes | Total network input in bytes. | long | counter |
| redis.info.stats.net.output.bytes | Total network output in bytes. | long | counter |
| redis.info.stats.pubsub.channels | Global number of pub/sub channels with client subscriptions. | long | gauge |
| redis.info.stats.pubsub.patterns | Global number of pub/sub patterns with client subscriptions. | long | gauge |
| redis.info.stats.slave_expires_tracked_keys | The number of keys tracked for expiry purposes (applicable only to writable slaves). | long | gauge |
| redis.info.stats.sync.full | The number of full resyncs with slaves. | long | gauge |
| redis.info.stats.sync.partial.err | The number of denied partial resync requests. | long | gauge |
| redis.info.stats.sync.partial.ok | The number of accepted partial resync requests. | long | gauge |
| service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword | |
### key

The `key` dataset collects information about Redis keys.

For each key matching one of the configured patterns, an event is sent to Elasticsearch with information about the key, including its type, its length when available, and its TTL.

Patterns are configured as a list containing these fields:

- `pattern` (required): pattern for key names, as accepted by the Redis KEYS or SCAN commands.
- `limit` (optional): safeguard when using patterns with wildcards, to avoid collecting too many keys (default: 0, no limit).
- `keyspace` (optional): identifier of the database to use to look for the keys (default: 0).
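As an illustration only, a patterns list with the fields above might look like the following. The surrounding configuration key and file layout depend on how you deploy the integration and are assumed here, not taken from the integration's reference:

```yaml
# Hypothetical patterns configuration (structure mirrors the fields above):
patterns:
  - pattern: 'session:*'   # required: matched with KEYS/SCAN semantics
    limit: 100             # optional: stop after collecting 100 matching keys
    keyspace: 0            # optional: logical database to scan
  - pattern: 'jobs'        # exact key name; no wildcard, so no limit needed
```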
**Example**

An example event for `key` looks as following:

```json
{
  "@timestamp": "2020-06-25T10:16:10.138Z",
  "ecs": { "version": "8.11.0" },
  "event": { "dataset": "redis.key", "duration": 374411, "module": "redis" },
  "metricset": { "name": "key", "period": 10000 },
  "redis": {
    "key": {
      "expire": { "ttl": 360 },
      "id": "0:foo",
      "length": 3,
      "name": "foo",
      "type": "string"
    }
  },
  "service": { "address": "localhost:6379", "type": "redis" }
}
```
**ECS Field Reference**

Please refer to the ECS reference documentation for detailed information on ECS fields.

**Exported fields**

| Field | Description | Type | Metric Type |
|---|---|---|---|
| @timestamp | Event timestamp. | date | |
| agent.id | | keyword | |
| cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword | |
| cloud.availability_zone | Availability zone in which this host is running. | keyword | |
| cloud.image.id | Image ID for the cloud instance. | keyword | |
| cloud.instance.id | Instance ID of the host machine. | keyword | |
| cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword | |
| cloud.region | Region in which this host is running. | keyword | |
| container.id | Unique container id. | keyword | |
| data_stream.dataset | Data stream dataset. | constant_keyword | |
| data_stream.namespace | Data stream namespace. | constant_keyword | |
| data_stream.type | Data stream type. | constant_keyword | |
| event.dataset | Event dataset. | constant_keyword | |
| event.module | Event module. | constant_keyword | |
| host.containerized | If the host is a container. | boolean | |
| host.name | Name of the host. | keyword | |
| host.os.build | OS build information. | keyword | |
| host.os.codename | OS codename, if any. | keyword | |
| redis.key.expire.ttl | Seconds to expire. | long | gauge |
| redis.key.id | Unique id for this key (with the form `<keyspace>:<name>`). | keyword | |
| redis.key.length | Length of the key (number of elements for lists, length for strings, cardinality for sets). | long | gauge |
| redis.key.name | Key name. | keyword | |
| redis.key.type | Key type as shown by the `TYPE` command. | keyword | |
| service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword | |
### keyspace

The `keyspace` dataset collects information about the Redis keyspaces. For each keyspace, an event is sent to Elasticsearch. The keyspace information is fetched from the `INFO` command.
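In the `INFO` reply, the Keyspace section reports one line per database in the form `db0:keys=1,expires=0,avg_ttl=359459`. A minimal sketch of turning one such line into the fields exported below (the line format is from the Redis `INFO` documentation; the function is only illustrative):

```python
# Parse one line of the INFO "Keyspace" section into keyspace fields.
def parse_keyspace_line(line: str) -> dict:
    db_id, _, stats = line.partition(":")            # "db0" and "keys=...,expires=...,avg_ttl=..."
    fields = dict(pair.split("=") for pair in stats.split(","))
    return {
        "id": db_id,
        "keys": int(fields["keys"]),
        "expires": int(fields["expires"]),
        "avg_ttl": int(fields["avg_ttl"]),
    }

ks = parse_keyspace_line("db0:keys=1,expires=0,avg_ttl=359459")
```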
**Example**

An example event for `keyspace` looks as following:

```json
{
  "@timestamp": "2020-06-25T10:16:10.138Z",
  "ecs": { "version": "8.11.0" },
  "event": { "dataset": "redis.keyspace", "duration": 374411, "module": "redis" },
  "metricset": { "name": "keyspace", "period": 10000 },
  "redis": {
    "keyspace": {
      "avg_ttl": 359459,
      "expires": 0,
      "id": "db0",
      "keys": 1
    }
  },
  "service": { "address": "localhost:6379", "type": "redis" }
}
```
**ECS Field Reference**

Please refer to the ECS reference documentation for detailed information on ECS fields.

**Exported fields**

| Field | Description | Type | Metric Type |
|---|---|---|---|
| @timestamp | Event timestamp. | date | |
| agent.id | | keyword | |
| cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword | |
| cloud.availability_zone | Availability zone in which this host is running. | keyword | |
| cloud.image.id | Image ID for the cloud instance. | keyword | |
| cloud.instance.id | Instance ID of the host machine. | keyword | |
| cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword | |
| cloud.region | Region in which this host is running. | keyword | |
| container.id | Unique container id. | keyword | |
| data_stream.dataset | Data stream dataset. | constant_keyword | |
| data_stream.namespace | Data stream namespace. | constant_keyword | |
| data_stream.type | Data stream type. | constant_keyword | |
| event.dataset | Event dataset. | constant_keyword | |
| event.module | Event module. | constant_keyword | |
| host.containerized | If the host is a container. | boolean | |
| host.name | Name of the host. | keyword | |
| host.os.build | OS build information. | keyword | |
| host.os.codename | OS codename, if any. | keyword | |
| redis.keyspace.avg_ttl | Average TTL. | long | gauge |
| redis.keyspace.expires | | long | |
| redis.keyspace.id | Keyspace identifier. | keyword | |
| redis.keyspace.keys | Number of keys in the keyspace. | long | |
| service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword | |
## Changelog

| Version | Details | Kibana version(s) |
|---|---|---|
| 1.18.0 | Enhancement (View pull request) | 8.13.0 or higher |
| 1.17.0 | Enhancement (View pull request) | 8.13.0 or higher |
| 1.16.0 | Enhancement (View pull request) | 8.12.0 or higher |
| 1.15.0 | Enhancement (View pull request) | 8.12.0 or higher |
| 1.14.0 | Enhancement (View pull request) | 8.12.0 or higher |
| 1.13.1 | Enhancement (View pull request) | 8.10.2 or higher |
| 1.13.0 | Enhancement (View pull request) | 8.10.2 or higher |
| 1.12.0 | Enhancement (View pull request) | 8.8.0 or higher |
| 1.11.1 | Bug fix (View pull request) | 8.8.0 or higher |
| 1.11.0 | Enhancement (View pull request) | 8.8.0 or higher |
| 1.10.0 | Enhancement (View pull request) | 8.3.0 or higher |
| 1.9.2 | Bug fix (View pull request) | 8.3.0 or higher |
| 1.9.1 | Enhancement (View pull request) | 8.3.0 or higher |
| 1.9.0 | Enhancement (View pull request) | 8.3.0 or higher |
| 1.8.0 | Enhancement (View pull request) | 8.3.0 or higher |
| 1.7.0 | Enhancement (View pull request) | 8.3.0 or higher |
| 1.6.5 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.6.4 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.6.3 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.6.2 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.6.1 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.6.0 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.5.1 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.5.0 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.4.0 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.3.1 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.3.0 | Enhancement (View pull request) | — |
| 1.2.0 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.1.2 | Enhancement (View pull request) | — |
| 1.1.1 | Bug fix (View pull request) | — |
| 1.1.0 | Enhancement (View pull request) | 7.14.0 or higher |
| 1.0.0 | Enhancement (View pull request) | — |
| 0.7.3 | Enhancement (View pull request) | — |
| 0.7.2 | Enhancement (View pull request) | — |
| 0.7.1 | Enhancement (View pull request) | — |
| 0.7.0 | Enhancement (View pull request) | — |
| 0.6.0 | Enhancement (View pull request) | — |
| 0.5.0 | Enhancement (View pull request) | — |
| 0.4.0 | Enhancement (View pull request) | — |
| 0.3.8 | Enhancement (View pull request) | — |
| 0.3.7 | Bug fix (View pull request) | — |
| 0.1.0 | Enhancement (View pull request) | — |