Commit 7bd77b7 (1 parent: 9937158)

[Doc] - Guide for sending logs to Kinesis (#4)

* AWS SDK concerns for cached credentials should not be handled by the logger, as they're handled by the AWS SDK
* [Doc] - added a guide for configuring the AwsKinesisLogger backend
* [Doc] - updated the README based on reviews

1 file changed: README.md (+102 −1 lines)
AWS Kinesis backend
-------------------

Backend implementation for sending logs to AWS Kinesis. This guide walks through the steps for configuring it. This is not the only possible setup, but it is performant and a good starting point.

In the NGINX conf, define the following variables:

* `aws_region` - the AWS region where the Kinesis stream is created
* `kinesis_stream_name` - the name of the Kinesis stream

Make sure to define two shared dictionaries:

* `lua_shared_dict stats_kinesis 16m;` - dictionary used to buffer the logs in memory
* `lua_shared_dict aws_credentials 1m;` - dictionary used to cache IAM and STS credentials
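
As a sketch of how these pieces fit together (the `listen` port and the variable values below are placeholder assumptions, not requirements of this project), the NGINX conf could look like:

```nginx
http {
    # shared dictionaries used by the logger
    lua_shared_dict stats_kinesis   16m;  # buffers log entries in memory
    lua_shared_dict aws_credentials 1m;   # caches IAM/STS credentials

    server {
        listen 80;

        # variables read by the Kinesis backend
        set $aws_region          "us-east-1";
        set $kinesis_stream_name "api-gateway-stream";
    }
}
```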

Then in the `log_by_lua` phase, configure the logger to send the information:

```lua
local cjson = require "cjson"
local logger_factory = require "api-gateway.logger.factory"

local function get_logger_configuration()
    local logger_module = "api-gateway.logger.BufferedAsyncLogger"
    local logger_opts = {
        flush_length = 500,       -- http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html - 500 is the max
        flush_interval = 5,       -- interval in seconds to flush, regardless of whether the buffer is full
        flush_concurrency = 16,   -- max parallel threads used for sending logs
        flush_throughput = 10000, -- max logs / SECOND that can be sent to the Kinesis backend
        sharedDict = "stats_kinesis", -- dict for buffering the logs
        backend = "api-gateway.logger.backend.AwsKinesisLogger",
        backend_opts = {
            aws_region = ngx.var.aws_region or "us-east-1",
            kinesis_stream_name = ngx.var.kinesis_stream_name or "api-gateway-stream",
            aws_credentials = {
                provider = "api-gateway.aws.AWSIAMCredentials",
                shared_cache_dict = "aws_credentials" -- dict for caching STS and IAM credentials
            }
        },
        callback = function(status)
            -- capture or log information about each flush
            -- status.logs_sent             - how many logs have been flushed successfully
            -- status.logs_failed           - how many logs failed to be sent
            -- status.backend_response_code - HTTP status code returned by Kinesis
            -- status.threads_running       - how many parallel threads are active
            -- status.threads_pending       - how many threads are waiting to be executed
        end
    }
    return logger_module, logger_opts
end

local function get_logger(name)
    -- reuse an existing logger instance in each worker process
    if (logger_factory:hasLogger(name)) then
        return logger_factory:getLogger(name)
    end

    -- create a new logger instance
    local logger_module, logger_opts = get_logger_configuration()
    return logger_factory:getLogger(name, logger_module, logger_opts)
end

local kinesis_logger = get_logger("kinesis-logger")

local partition_key = ngx.utctime() .. "-" .. math.random(ngx.now() * 1000)
local kinesis_data = {}

-- add any information you want to capture
kinesis_data["http_referer"] = ngx.var.http_referer
kinesis_data["user_agent"] = ngx.var.http_user_agent
kinesis_data["hostname"] = ngx.var.hostname
kinesis_data["http_host"] = ngx.var.host

-- at the end, log the message
kinesis_logger:logMetrics(partition_key, cjson.encode(kinesis_data))
```
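
For context, the Lua snippet above runs in the `log_by_lua` phase, so it executes after the response has been delivered and adds no latency to the request itself. Assuming it is saved to a file (the path and the upstream name below are hypothetical), it can be attached with `log_by_lua_file`:

```nginx
location / {
    proxy_pass http://backend_upstream;

    # run the Kinesis logging snippet once the response is sent
    log_by_lua_file /etc/api-gateway/kinesis-logger.lua;
}
```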

If you want to use STS credentials instead of IAM credentials with the Kinesis logger, configure `backend_opts.aws_credentials` as follows:

```lua
aws_credentials = {
    provider = "api-gateway.aws.AWSSTSCredentials",
    role_ARN = "arn:aws:iam::" .. ngx.var.kinesis_aws_account .. ":role/" .. ngx.var.kinesis_iam_role,
    role_session_name = "kinesis-logger-session",
    shared_cache_dict = "aws_credentials" -- dict for caching STS and IAM credentials
}
```

Make sure to also configure the NGINX variables:

* `kinesis_aws_account` - the AWS account where the Kinesis stream is configured
* `kinesis_iam_role` - the role to be assumed in order to send the logs to Kinesis

> INFO: If you send the logs into the same AWS account where NGINX runs, you don't need STS credentials; IAM credentials are enough.

For more information about the `AWSSTSCredentials` configuration, see [the documentation](https://github.com/adobe-apiplatform/api-gateway-aws#sts-credentials).
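
Following the same pattern as the earlier variables, these could be set in the NGINX conf (the account ID and role name below are placeholders):

```nginx
set $kinesis_aws_account "123456789012";
set $kinesis_iam_role    "kinesis-logger-role";
```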

If you can use neither IAM credentials nor STS credentials, you can still send logs to Kinesis by configuring AWS with a static `access_key` and `secret_key`. These are not as secure as IAM/STS credentials, but they work for non-AWS deployments:

```lua
aws_credentials = {
    provider = "api-gateway.aws.AWSBasicCredentials",
    access_key = "replace-me",
    secret_key = "replace-me"
}
```
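
To keep static keys out of the config file itself, one option (a sketch, not a documented feature of this library) is to read them from environment variables; note that NGINX workers only see variables declared with the `env` directive in `nginx.conf`:

```lua
-- requires `env AWS_ACCESS_KEY_ID;` and `env AWS_SECRET_ACCESS_KEY;`
-- at the top level of nginx.conf
aws_credentials = {
    provider = "api-gateway.aws.AWSBasicCredentials",
    access_key = os.getenv("AWS_ACCESS_KEY_ID"),
    secret_key = os.getenv("AWS_SECRET_ACCESS_KEY")
}
```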

HttpLogger backend
------------------
