Description
I ran Elasticsearch 7.1.0 in a Docker container with audit logging turned on. Consider the following sample log entry I generated:
{"type": "server", "timestamp": "2019-05-28T19:17:36,708+0000", "level": "INFO", "component": "o.e.x.s.a.l.LoggingAuditTrail", "cluster.name": "docker-cluster", "node.name": "68394c3e3bbe", "cluster.uuid": "BI0VZnPyRJuQWQzhuwO8sg", "node.id": "mQjSOd7bREOwl_I4IUaEtw", "message": "event.action=\"anonymous_access_denied\" event.type=\"rest\" node.id=\"mQjSOd7bREOwl_I4IUaEtw\" origin.address=\"172.17.0.1:36744\" origin.type=\"rest\" request.id=\"2nj_ww26T0eODoilHXDPSA\" request.method=\"GET\" url.path=\"/\"" }
Notice that the name of the timestamp field is timestamp.
Now, I ran Elasticsearch 7.1.0 again, but by downloading and extracting it from a .tar.gz file instead of in a Docker container. Here's a sample audit log entry I generated:
{"@timestamp":"2019-05-28T12:33:31,246", "node.id":"qUZG15diSquz8jpvQbTmhA", "event.type":"rest", "event.action":"anonymous_access_denied", "origin.type":"rest", "origin.address":"[::1]:49499", "url.path":"/", "request.method":"GET", "request.id":"b5A_F-ihQaOHrxBgAa3gjA"}
Notice that the name of the timestamp field is @timestamp.
This field name discrepancy causes problems for users trying to parse the audit log with Filebeat, because the expected field name depends on whether Elasticsearch is running in a Docker container or not.
Other Elasticsearch log types (server, deprecation) consistently use timestamp (no @ prefix) in both Docker and non-Docker environments.
For the short term, we should make Filebeat look for either @timestamp or timestamp when it's trying to parse Elasticsearch audit logs. I've created elastic/beats#12339 to track this.
But for the longer term, it would be nice if Elasticsearch consistently used timestamp (no @ prefix) for audit logs in both Docker and non-Docker environments.
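For illustration, here is a minimal sketch (in Python, not Filebeat's actual code) of what the short-term workaround amounts to: when reading an audit log line, accept either field name. The extract_timestamp helper name is hypothetical, and the sample lines are shortened versions of the log entries above.

```python
import json

# Minimal sketch of a field-name-tolerant lookup; not Filebeat's actual
# implementation. extract_timestamp is a hypothetical helper name.
def extract_timestamp(audit_line):
    entry = json.loads(audit_line)
    # Docker images emit "timestamp"; archive (.tar.gz) installs emit "@timestamp".
    return entry.get("timestamp") or entry.get("@timestamp")

# Shortened versions of the sample audit log entries shown above.
docker_line = '{"timestamp": "2019-05-28T19:17:36,708+0000", "event.action": "anonymous_access_denied"}'
tarball_line = '{"@timestamp": "2019-05-28T12:33:31,246", "event.action": "anonymous_access_denied"}'

print(extract_timestamp(docker_line))   # 2019-05-28T19:17:36,708+0000
print(extract_timestamp(tarball_line))  # 2019-05-28T12:33:31,246
```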