feat: add elasticsearch-logger #7643
Conversation
@ccxhwmy
local DEFAULT_ELASTICSEARCH_SOURCE = "apache-apisix-elasticsearch-logging"
local plugin_name = "elasticsearch-logging"
We should call it elasticsearch-logger like the kafka-logger plugin?
We should call it elasticsearch-logger like the kafka-logger plugin?
OK. Do I need to change every elasticsearch-logging to elasticsearch-logger, including the file name?
local function get_logger_entry(conf)
    local entry = log_util.get_full_log(ngx, conf)
Please also support the custom log format.
Please also support the custom log format.
How about referencing the kafka-logger plugin?
apisix/apisix/plugins/kafka-logger.lua
Line 178 in 689e4f1
local entry
if conf.meta_format == "origin" then
    entry = log_util.get_req_original(ctx, conf)
    -- core.log.info("origin entry: ", entry)
else
    local metadata = plugin.plugin_metadata(plugin_name)
    core.log.info("metadata: ", core.json.delay_encode(metadata))
    if metadata and metadata.value.log_format
       and core.table.nkeys(metadata.value.log_format) > 0
    then
        entry = log_util.get_custom_format_log(ctx, metadata.value.log_format)
        core.log.info("custom log format entry: ", core.json.delay_encode(entry))
    else
        entry = log_util.get_full_log(ngx, conf)
        core.log.info("full log entry: ", core.json.delay_encode(entry))
    end
end
We do not reference another plugin's code from inside a plugin, unless we pull the generic code into a common module.
We do not reference another plugin's code from inside a plugin, unless we pull the generic code into a common module.
Please also support the custom log format.
Is the custom log format like this: https://github.com/apache/apisix/blob/master/docs/en/latest/plugins/kafka-logger.md#metadata ?
yes
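For reference, here is a hypothetical custom log_format table in the same spirit as the kafka-logger metadata; the field names are illustrative only, not what this PR necessarily documents.

```lua
-- Hypothetical custom log_format, following the kafka-logger metadata
-- convention: values starting with "$" are resolved from APISIX/Nginx
-- variables when the log entry is built.
local log_format = {
    host = "$host",
    ["@timestamp"] = "$time_iso8601",
    client_ip = "$remote_addr",
}

return log_format
```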
local schema = {
    type = "object",
    properties = {
        endpoint = {
Why do we wrap all the fields in an extra endpoint field?
    }) .. "\n" ..
    core.json.encode({
        time = ngx_now(),
        host = entry.server.hostname,
Why should we invent a format structure for a specific plugin?
Why should we invent a format structure for a specific plugin?
I referred to splunk-hec-logging:
local function get_logger_entry(conf)
How about core.json.encode(entry) directly?
We just need to use log_util.get_req_original(ctx, conf) to get the entry. This is JSON, and ES supports this format.
Remember that when we update the entry, we also need to update the PNG image.
We just need to use log_util.get_req_original(ctx, conf) to get the entry. This is JSON, and ES supports this format.
I found that log_util.get_req_original(ctx, conf) will return a string instead of JSON:
apisix/apisix/utils/log-util.lua
Lines 196 to 211 in c4d5f2f
function _M.get_req_original(ctx, conf)
    local headers = {
        ctx.var.request, "\r\n"
    }
    for k, v in pairs(ngx.req.get_headers()) do
        core.table.insert_tail(headers, k, ": ", v, "\r\n")
    end
    -- core.log.error("headers: ", core.table.concat(headers, ""))
    core.table.insert(headers, "\r\n")

    if conf.include_req_body then
        core.table.insert(headers, ctx.var.request_body)
    end

    return core.table.concat(headers, "")
end
My mistake. A JSON string is OK; we can follow this.
end

local uri = conf.endpoint.uri ..
    (str_sub(conf.endpoint.uri, -1) == "/" and "_bulk" or "/_bulk")
Using string.byte would be better.
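To illustrate the suggestion, a minimal sketch that compares the last byte numerically instead of allocating a one-character substring; the helper name build_bulk_uri is made up for the example.

```lua
-- Minimal sketch: compare the last byte instead of creating a substring
-- with string.sub.
local str_byte = string.byte
local SLASH = str_byte("/")

local function build_bulk_uri(endpoint_uri)
    if str_byte(endpoint_uri, -1) == SLASH then
        return endpoint_uri .. "_bulk"
    end
    return endpoint_uri .. "/_bulk"
end

print(build_bulk_uri("http://127.0.0.1:9200"))   -- http://127.0.0.1:9200/_bulk
print(build_bulk_uri("http://127.0.0.1:9200/"))  -- http://127.0.0.1:9200/_bulk
```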
-- limitations under the License.
--

local ngx = ngx
We should move the localized variables after the require ... calls.
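As an illustration of that ordering, a sketch of how the top of the plugin file could look: require() calls first, then the localized built-ins. Which modules are actually required is an assumption taken from the snippets in this review.

```lua
-- Sketch of the suggested layout: require() calls first, then localized
-- built-ins such as ngx and string functions.
local core     = require("apisix.core")
local log_util = require("apisix.utils.log-util")
local http     = require("resty.http")

local ngx      = ngx
local str_byte = string.byte
```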
t/plugin/elasticsearch-logging.t (outdated)
--- request
GET /hello
--- wait: 2
--- response_body
We should check the data sent to Elasticsearch, via injection like: #7593 (comment)
We should check the data sent to Elasticsearch, via injection like: #7593 (comment)
Do you mean I should hook the function httpc:request_uri, mock it with my own mock_request_uri, and check the request body inside mock_request_uri?
If so, I don't quite understand why it is necessary to check the request body.
We should check the data sent to Elasticsearch, via injection like: #7593 (comment)
Do you mean I should hook the function httpc:request_uri, mock it with my own mock_request_uri, and check the request body inside mock_request_uri? If so, I don't quite understand why it is necessary to check the request body.
@spacewander Please pay attention to this question when you are free. Thank you.
Sorry for missing this. Yes, you are right.
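A rough sketch of the injection idea being discussed, assuming the plugin sends data with lua-resty-http (the mock response and log text are made up): instances returned by http.new() resolve request_uri through the module table, so overriding it in the test lets us observe the bulk body without a real Elasticsearch.

```lua
-- Rough sketch of a test-time hook on lua-resty-http's request_uri.
local http = require("resty.http")
local original_request_uri = http.request_uri  -- kept in case the test wants to restore it

http.request_uri = function(self, uri, params)
    -- log the bulk body the plugin built so the test can assert on it,
    -- then pretend Elasticsearch accepted the request
    ngx.log(ngx.WARN, "mocked Elasticsearch body: ", params and params.body)
    return {status = 200, body = "{}"}, nil
end
```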
uri = core.schema.uri_def,
index = { type = "string"},
type = { type = "string"},
username = { type = "string"},
We can store username & password in an additional field, like:
apisix/apisix/plugins/kafka-proxy.lua
Line 25 in a8d03ac
properties = {
So that we can mark them as required easily.
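A minimal sketch of that suggestion; the field name auth and the minLength constraints are assumptions, not necessarily what the PR ends up using. Grouping the credentials in one object lets the schema require both of them together.

```lua
-- Hypothetical nested schema: group the credentials so both can be marked
-- as required together (the field name "auth" is an assumption).
local auth_schema = {
    type = "object",
    properties = {
        username = {type = "string", minLength = 1},
        password = {type = "string", minLength = 1},
    },
    required = {"username", "password"},
}

return auth_schema
```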
I tried to do it, but failed; it seems that I don't have permission.
like: Fixes #7636
I get it, thank you.
headers["Authorization"] = authorization | ||
end | ||
|
||
core.log.info("uri: ", uri, ", body: ", body, ", headers: ", core.json.encode(headers)) |
This headers table may contain the username and password in base64 format, which we should not output in the log for security reasons.
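One possible way to address this, sketched with a hypothetical helper: copy the headers and mask the Authorization value before the table is encoded for the debug log.

```lua
-- Hypothetical helper: copy the headers and mask the Authorization value
-- before JSON-encoding them for the log.
local function redact_headers(headers)
    local sanitized = {}
    for k, v in pairs(headers) do
        if string.lower(k) == "authorization" then
            sanitized[k] = "***"
        else
            sanitized[k] = v
        end
    end
    return sanitized
end

-- usage sketch:
-- core.log.info("headers: ", core.json.encode(redact_headers(headers)))
```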
---
title: elasticsearch-logging
---
Can you add some keywords and a description?
| Name                | Required | Default               | Description |
| ------------------- | -------- | --------------------- | ----------- |
| endpoint            | Required |                       | Elasticsearch endpoint configuration |
| endpoint.uri        | Required |                       | Elasticsearch API |
| endpoint.index      | Required |                       | Elasticsearch [_index field](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index-field.html#mapping-index-field) |
| endpoint.type       | Optional | Elasticsearch default | Elasticsearch [_type field](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/mapping-type-field.html#mapping-type-field) |
| endpoint.username   | Optional |                       | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) username |
| endpoint.password   | Optional |                       | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) password |
| endpoint.ssl_verify | Optional | true                  | When set to `true`, SSL verification is enabled; see [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake) |
| endpoint.timeout    | Optional | 10                    | Timeout for requests sent to Elasticsearch |
Please refer to the http-logger modification.
great work! 👍
Co-authored-by: tzssangglass <[email protected]>
Co-authored-by: Fei Han <[email protected]>
LGTM (left two comments about indent).
Co-authored-by: tzssangglass <[email protected]>
check endpoint_addr must not end with "/"
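One way this constraint could be expressed in the plugin's JSON schema, shown only as an assumption about the implementation:

```lua
-- Hypothetical schema fragment: the pattern rejects values whose last
-- character is "/", so conf.endpoint_addr .. "/_bulk" stays well-formed.
local endpoint_addr_schema = {
    type = "string",
    pattern = "[^/]$",
}

return endpoint_addr_schema
```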
Otherwise LGTM 👍
Co-authored-by: Fei Han <[email protected]>
Do we need to re-run the CI?
local schema = {
    type = "object",
    properties = {
        meta_format = {
Where do we use this field?
Where do we use this field?
My mistake, I forgot to remove it.
local uri = conf.endpoint_addr .. "/_bulk"
local body = core.table.concat(entries, "")
local headers = {["Content-Type"] = "application/json"}
Would it be better to use application/x-ndjson?
Would it be better to use application/x-ndjson?
I learned about ndjson; I think it is more suitable than json here. Thank you.
### Full configuration example

```shell
curl http://127.0.0.1:9080/apisix/admin/routes/1 \
```
Would you update the port number to 9180, like PR #7806?
1. remove useless code
2. replace application/json with application/x-ndjson
3. modify cp port from 9080 to 9180
Do we need to re-run the CI?
done
@tzssangglass Please re-run the CI again, thanks.
* upstream/master: (214 commits)
  feat: set constants.apisix_lua_home for used by plugins (apache#7893)
  fix: response-rewrite plugin might cause Apache AIPSIX hanging (apache#7836)
  test: sleep 1 second in t/cli/test_upstream_mtls.sh (apache#7889)
  fix: reload once when log rotate (apache#7869)
  change: move etcd conf under deployment (apache#7860)
  fix: plugin metadata missing v3 adapter call (apache#7877)
  docs: add ClickHouse and Elasticsearch loggers. (apache#7848)
  docs(plugin): refactor limit-conn.md (apache#7857)
  feat: call `destroy` method when Nginx exits (apache#7866)
  feat: Add ability to inject headers via prefix to otel traces (apache#7822)
  docs(plugin): refactor proxy-cache.md (apache#7858)
  fix: don't enable passive healthcheck by default (apache#7850)
  feat: add openfunction plugin (apache#7634)
  fix(zipkin): send trace IDs with a reject sampling decision (apache#7833)
  fix: Change opentelemetry's span kind to server (apache#7830)
  docs(hmac-auth): additional details for generating signing_string (apache#7816)
  docs: correct the test-nginx description (apache#7818)
  docs: add docs of workflow plugin (apache#7813)
  docs(FAQ): add how to detect APISIX alive status (apache#7812)
  feat: add elasticsearch-logger (apache#7643)
  ...
Co-authored-by: tzssangglass <[email protected]>
Co-authored-by: Fei Han <[email protected]>
Description
Fixes #7636
Checklist