From 1670243df0992e87b68d49532ad88ba417cd231c Mon Sep 17 00:00:00 2001
From: Takanobu Asanuma
Date: Thu, 22 Aug 2019 16:45:38 +0900
Subject: [PATCH] HDFS-14763. Fix package name of audit log class in Dynamometer document

---
 .../hadoop-dynamometer/src/site/markdown/Dynamometer.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md b/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md
index 39dd0dbbeef2c..fee569a58d474 100644
--- a/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md
+++ b/hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md
@@ -144,7 +144,7 @@ via the `auditreplay.command-parser.class` configuration. One mapper will automa
 audit log file within the audit log directory specified at launch time.
 
 The default is a direct format,
-`com.linkedin.dynamometer.workloadgenerator.audit.AuditLogDirectParser`. This accepts files in the format produced
+`org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditLogDirectParser`. This accepts files in the format produced
 by a standard configuration audit logger, e.g. lines like:
 ```
 1970-01-01 00:00:42,000 INFO FSNamesystem.audit: allowed=true ugi=hdfs ip=/127.0.0.1 cmd=open src=/tmp/foo dst=null perm=null proto=rpc
@@ -154,7 +154,7 @@ the Unix epoch) the start time of the audit traces. This is needed for all mappe
 example, if the above line was the first audit event, you would specify `auditreplay.log-start-time.ms=42000`.
 Within a file, the audit logs must be in order of ascending timestamp.
 
-The other supported format is `com.linkedin.dynamometer.workloadgenerator.audit.AuditLogHiveTableParser`. This accepts
+The other supported format is `org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditLogHiveTableParser`. This accepts
 files in the format produced by a Hive query with output fields, in order:
 
 * `relativeTimestamp`: event time offset, in milliseconds, from the start of the trace
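The documentation changed by this patch says that `auditreplay.log-start-time.ms` should be the timestamp of the first audit event, expressed in milliseconds since the Unix epoch (so `1970-01-01 00:00:42,000` becomes `42000`). A minimal sketch of that conversion, assuming the standard `yyyy-MM-dd HH:mm:ss,SSS` audit-log timestamp prefix and treating it as UTC (the helper name `log_start_time_ms` is illustrative, not part of Dynamometer):

```python
from datetime import datetime, timezone

def log_start_time_ms(first_audit_line: str) -> int:
    # The audit logger prefixes each line with "yyyy-MM-dd HH:mm:ss,SSS"
    # (23 characters), e.g. "1970-01-01 00:00:42,000".
    ts = first_audit_line[:23]
    # %f accepts the three millisecond digits after the comma.
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S,%f").replace(tzinfo=timezone.utc)
    # Convert seconds since the epoch to whole milliseconds.
    return int(dt.timestamp() * 1000)

line = ("1970-01-01 00:00:42,000 INFO FSNamesystem.audit: allowed=true "
        "ugi=hdfs ip=/127.0.0.1 cmd=open src=/tmp/foo dst=null perm=null proto=rpc")
print(log_start_time_ms(line))  # 42000
```

With the example line from the patched documentation as the first audit event, this yields `42000`, matching the value the document tells you to set for `auditreplay.log-start-time.ms`.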