From 0eef303cdf3ccd816b3b1401dd4c789112101ea1 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 29 Sep 2025 07:30:51 +0000 Subject: [PATCH 1/3] Initial plan From a2aa1909e8388eae6935c456512471730b10f225 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 29 Sep 2025 07:46:17 +0000 Subject: [PATCH 2/3] Add TQL value aliasing documentation for v0.17 and current versions Co-authored-by: nicecui <10130959+nicecui@users.noreply.github.com> --- docs/reference/sql/tql.md | 12 +++++++++++- .../current/reference/sql/tql.md | 12 +++++++++++- .../version-0.17/reference/sql/tql.md | 12 +++++++++++- versioned_docs/version-0.17/reference/sql/tql.md | 12 +++++++++++- 4 files changed, 44 insertions(+), 4 deletions(-) diff --git a/docs/reference/sql/tql.md b/docs/reference/sql/tql.md index 664dd1b65..8507eb1b4 100644 --- a/docs/reference/sql/tql.md +++ b/docs/reference/sql/tql.md @@ -12,7 +12,7 @@ The `TQL` keyword executes TQL language in SQL. The TQL is Time-Series Query Lan ### Syntax ```sql -TQL [EVAL | EVALUATE] (start, end, step) expr +TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] ``` The `start`, `end` and `step` are the query parameters just like [Prometheus Query API](https://prometheus.io/docs/prometheus/latest/querying/api/): @@ -23,6 +23,8 @@ The `start`, `end` and `step` are the query parameters just like [Prometheus Que The `expr` is the TQL expression query string. +The optional `AS alias` clause allows you to provide an alias for the query result. This is useful for giving meaningful names to the output columns. + ### Examples Return the per-second rate for all time series with the `http_requests_total` metric name, as measured over the last 5 minutes: @@ -33,6 +35,14 @@ TQL eval (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job= will get a result just like other normal SQL queries. +You can also use value aliasing to give a meaningful name to your query result: + +```sql +TQL eval (0, 10, '5s') {__name__="test"} AS test_series; +``` + +This will return the results with the column named `test_series` instead of the default column name. + ## EXPLAIN `EXPLAIN` displays both the logical plan and execution plan for a given PromQL query. 
The syntax is as follows: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md index 01aeb9d61..a21822304 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md @@ -12,7 +12,7 @@ description: 介绍了 `TQL` 关键字及其在 GreptimeDB 中的用法,包括 ### Syntax ```sql -TQL [EVAL | EVALUATE] (start, end, step) expr +TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] ``` `start`, `end` 和 `step` 是查询参数,就像 [Prometheus Query API](https://prometheus.io/docs/prometheus/latest/querying/api/) 一样: @@ -23,6 +23,8 @@ TQL [EVAL | EVALUATE] (start, end, step) expr `expr` 是 TQL 表达式查询字符串。 +可选的 `AS alias` 子句允许你为查询结果提供别名。这有助于为输出列提供有意义的名称。 + ### 示例 返回过去 5 分钟内 `http_requests_total` 指标的所有时间序列的每秒值: @@ -33,6 +35,14 @@ TQL eval (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job= 其查询结果和 SQL 查询结果类似。 +你也可以使用值别名为查询结果提供有意义的名称: + +```sql +TQL eval (0, 10, '5s') {__name__="test"} AS test_series; +``` + +这将返回列名为 `test_series` 而不是默认列名的结果。 + ## EXPLAIN `EXPLAIN` 展示特定 PromQL 查询的逻辑计划和执行计划,其语法如下: diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.17/reference/sql/tql.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.17/reference/sql/tql.md index 01aeb9d61..a21822304 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.17/reference/sql/tql.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.17/reference/sql/tql.md @@ -12,7 +12,7 @@ description: 介绍了 `TQL` 关键字及其在 GreptimeDB 中的用法,包括 ### Syntax ```sql -TQL [EVAL | EVALUATE] (start, end, step) expr +TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] ``` `start`, `end` 和 `step` 是查询参数,就像 [Prometheus Query API](https://prometheus.io/docs/prometheus/latest/querying/api/) 一样: @@ -23,6 +23,8 @@ TQL [EVAL | EVALUATE] (start, end, step) expr `expr` 是 TQL 表达式查询字符串。 +可选的 `AS alias` 子句允许你为查询结果提供别名。这有助于为输出列提供有意义的名称。 + ### 示例 返回过去 5 分钟内 `http_requests_total` 指标的所有时间序列的每秒值: @@ -33,6 +35,14 @@ TQL eval (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job= 其查询结果和 SQL 查询结果类似。 +你也可以使用值别名为查询结果提供有意义的名称: + +```sql +TQL eval (0, 10, '5s') {__name__="test"} AS test_series; +``` + +这将返回列名为 `test_series` 而不是默认列名的结果。 + ## EXPLAIN `EXPLAIN` 展示特定 PromQL 查询的逻辑计划和执行计划,其语法如下: diff --git a/versioned_docs/version-0.17/reference/sql/tql.md b/versioned_docs/version-0.17/reference/sql/tql.md index 664dd1b65..8507eb1b4 100644 --- a/versioned_docs/version-0.17/reference/sql/tql.md +++ b/versioned_docs/version-0.17/reference/sql/tql.md @@ -12,7 +12,7 @@ The `TQL` keyword executes TQL language in SQL. The TQL is Time-Series Query Lan ### Syntax ```sql -TQL [EVAL | EVALUATE] (start, end, step) expr +TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] ``` The `start`, `end` and `step` are the query parameters just like [Prometheus Query API](https://prometheus.io/docs/prometheus/latest/querying/api/): @@ -23,6 +23,8 @@ The `start`, `end` and `step` are the query parameters just like [Prometheus Que The `expr` is the TQL expression query string. +The optional `AS alias` clause allows you to provide an alias for the query result. This is useful for giving meaningful names to the output columns. 
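The patch stops short of showing the alias on a computed expression; a hedged sketch, assuming the alias applies to any TQL expression as the `expr [AS alias]` syntax suggests (the output column name `req_rate` is illustrative, not from the patch):

```sql
-- illustrative: name the output column of a rate() expression
TQL EVAL (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job="prometheus"}[5m]) AS req_rate;
```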
+ ### Examples Return the per-second rate for all time series with the `http_requests_total` metric name, as measured over the last 5 minutes: @@ -33,6 +35,14 @@ TQL eval (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job= will get a result just like other normal SQL queries. +You can also use value aliasing to give a meaningful name to your query result: + +```sql +TQL eval (0, 10, '5s') {__name__="test"} AS test_series; +``` + +This will return the results with the column named `test_series` instead of the default column name. + ## EXPLAIN `EXPLAIN` displays both the logical plan and execution plan for a given PromQL query. The syntax is as follows: From ce38e6be328a404127cfdf1fd57a1169a6846f02 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Sat, 11 Oct 2025 06:59:25 +0000 Subject: [PATCH 3/3] Merge main branch and resolve TQL documentation conflicts Co-authored-by: killme2008 <14142+killme2008@users.noreply.github.com> --- blog/release-0-15-3.md | 2 +- blog/release-0-16-0.md | 2 +- .../migrate/migrate-from-influxdb.md | 6 +-- docs/reference/sql/show.md | 4 ++ docs/reference/sql/tql.md | 37 +++++++++++++++---- docs/reference/sql/trigger-syntax.md | 14 +++++++ .../for-iot/influxdb-line-protocol.md | 20 +++++----- docs/user-guide/logs/manage-pipelines.md | 6 +-- .../migrate/migrate-from-influxdb.md | 6 +-- .../current/reference/sql/show.md | 4 ++ .../current/reference/sql/tql.md | 37 +++++++++++++++---- .../current/reference/sql/trigger-syntax.md | 14 +++++++ .../for-iot/influxdb-line-protocol.md | 20 +++++----- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 6 +-- .../user-guide/logs/manage-pipelines.md | 6 +-- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- .../ingest-data/for-observability/loki.md | 4 +- .../user-guide/logs/manage-pipelines.md | 6 +-- .../ingest-data/for-observability/loki.md | 4 +- .../user-guide/logs/manage-pipelines.md | 6 +-- .../ingest-data/for-observability/loki.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- .../user-guide/logs/manage-pipelines.md | 4 +- 29 files changed, 163 insertions(+), 81 deletions(-) diff --git a/blog/release-0-15-3.md b/blog/release-0-15-3.md index 465dae1f5..acc9d7853 100644 --- a/blog/release-0-15-3.md +++ b/blog/release-0-15-3.md @@ -23,7 +23,7 @@ Release date: July 24, 2025 * fix: aggr group by all partition cols use partial commutative by [@discord9](https://github.com/discord9) in [#6534](https://github.com/GreptimeTeam/greptimedb/pull/6534) * fix: estimate mem size for bulk ingester by [@fengys1996](https://github.com/fengys1996) in [#6550](https://github.com/GreptimeTeam/greptimedb/pull/6550) * fix: flow mirror cache by [@discord9](https://github.com/discord9) in [#6551](https://github.com/GreptimeTeam/greptimedb/pull/6551) -* fix: closee issue #6555 return empty result by [@yihong0618](https://github.com/yihong0618) in [#6569](https://github.com/GreptimeTeam/greptimedb/pull/6569) +* fix: close issue #6555 return empty result by [@yihong0618](https://github.com/yihong0618) in [#6569](https://github.com/GreptimeTeam/greptimedb/pull/6569) ### 🚜 Refactor * refactor(flow): faster time window expr by [@discord9](https://github.com/discord9) in 
[#6495](https://github.com/GreptimeTeam/greptimedb/pull/6495) diff --git a/blog/release-0-16-0.md b/blog/release-0-16-0.md index ddfc93966..d87b532d2 100644 --- a/blog/release-0-16-0.md +++ b/blog/release-0-16-0.md @@ -81,7 +81,7 @@ Release date: August 06, 2025 ### 🐛 Bug Fixes * fix: aggr group by all partition cols use partial commutative by [@discord9](https://github.com/discord9) in [#6534](https://github.com/GreptimeTeam/greptimedb/pull/6534) -* fix: closee issue #6555 return empty result by [@yihong0618](https://github.com/yihong0618) in [#6569](https://github.com/GreptimeTeam/greptimedb/pull/6569) +* fix: close issue #6555 return empty result by [@yihong0618](https://github.com/yihong0618) in [#6569](https://github.com/GreptimeTeam/greptimedb/pull/6569) * fix: ignore target files in make fmt-check by [@yihong0618](https://github.com/yihong0618) in [#6560](https://github.com/GreptimeTeam/greptimedb/pull/6560) * fix: close issue #6586 make pg also show error as mysql by [@yihong0618](https://github.com/yihong0618) in [#6587](https://github.com/GreptimeTeam/greptimedb/pull/6587) * fix: add map datatype conversion in copy_table_from (#6185) by [@Arshdeep54](https://github.com/Arshdeep54) in [#6422](https://github.com/GreptimeTeam/greptimedb/pull/6422) diff --git a/docs/db-cloud-shared/migrate/migrate-from-influxdb.md b/docs/db-cloud-shared/migrate/migrate-from-influxdb.md index 3f3faf982..233419071 100644 --- a/docs/db-cloud-shared/migrate/migrate-from-influxdb.md +++ b/docs/db-cloud-shared/migrate/migrate-from-influxdb.md @@ -88,15 +88,15 @@ The similar query in GreptimeDB SQL would be: ```sql [SQL] SELECT - ts, + greptime_timestamp, host, AVG(cpu) RANGE '1h' as mean_cpu FROM monitor WHERE - ts > NOW() - '24 hours'::INTERVAL + greptime_timestamp > NOW() - '24 hours'::INTERVAL ALIGN '1h' TO NOW -ORDER BY ts DESC; +ORDER BY greptime_timestamp DESC; ``` In this SQL query, diff --git a/docs/reference/sql/show.md b/docs/reference/sql/show.md index 7f35edfb4..f35a5c7ac 100644 --- a/docs/reference/sql/show.md +++ b/docs/reference/sql/show.md @@ -343,6 +343,10 @@ Output: Please refer to the [Trigger syntax](/reference/sql/trigger-syntax.md#show-triggers) documentation. +## SHOW CREATE TRIGGER + +Please refer to the [Trigger syntax](/reference/sql/trigger-syntax.md#show-create-trigger) documentation. + ## Extensions to SHOW Statements Some extensions to `SHOW` statements accompany the implementation of [`INFORMATION_SCHEMA`](/reference/sql/information-schema/overview.md) just like MySQL, they also accept a `WHERE` clause that provides more flexibility in specifying which rows to display. diff --git a/docs/reference/sql/tql.md b/docs/reference/sql/tql.md index 8507eb1b4..0501b0d65 100644 --- a/docs/reference/sql/tql.md +++ b/docs/reference/sql/tql.md @@ -5,23 +5,24 @@ description: Covers the TQL keyword for executing Time-Series Query Language in # TQL -The `TQL` keyword executes TQL language in SQL. The TQL is Time-Series Query Language, which is an extension for Prometheus's [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) in GreptimeDB. +The `TQL` keyword executes TQL language in SQL. The TQL is Telemetry Query Language, which is an extension for Prometheus's [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) in GreptimeDB. 
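As a quick orientation before the syntax section, a minimal sketch grounded in the patch's own aliasing example, showing that `EVALUATE` is accepted as the long form of `EVAL` per the `TQL [EVAL | EVALUATE]` syntax below:

```sql
-- illustrative: the same query as the aliasing example later in this patch, spelled with EVALUATE
TQL EVALUATE (0, 10, '5s') {__name__="test"};
```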
## EVAL ### Syntax ```sql -TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] +TQL [EVAL | EVALUATE] (start, end, step, [lookback]) expr [AS alias] ``` The `start`, `end` and `step` are the query parameters just like [Prometheus Query API](https://prometheus.io/docs/prometheus/latest/querying/api/): -- `start`: ``: Start timestamp, inclusive. -- `end`: ``: End timestamp, inclusive. -- `step`: ``: Query resolution step width in `duration` format or float number of seconds. +- `start`: ``: The start timestamp of the query; the range is inclusive of this value. +- `end`: ``: The end timestamp of the query; the range is inclusive of this value. +- `step`: ``: The query resolution step, specified as a `duration` or a floating-point number of seconds. +- `lookback`: ``: Optional. The maximum lookback duration for evaluation; defaults to 5 minutes. -The `expr` is the TQL expression query string. +`expr` is the TQL (PromQL) query string. The optional `AS alias` clause allows you to provide an alias for the query result. This is useful for giving meaningful names to the output columns. @@ -35,6 +36,28 @@ TQL eval (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job= will get a result just like other normal SQL queries. +`start` and `end` can also be time expressions that evaluate to constants. For example, to query the past 3 hours: + +```sql +TQL EVAL (now() - interval '3' hours, now(), '1m') + sum by (namespace, pod) ( + increase(kube_pod_container_status_restarts_total[10m:30s]) + ); +``` + +To query data for the past day: + +```sql +TQL EVAL ( + date_trunc('day', now() - interval '1' day), + date_trunc('day', now()), + '1m' +) + sum by (namespace) ( + rate(http_requests_total[5m:30s]) + ); +``` + You can also use value aliasing to give a meaningful name to your query result: ```sql TQL eval (0, 10, '5s') {__name__="test"} AS test_series; ``` @@ -57,7 +80,7 @@ For example, to explain the PromQL `sum by (instance) (rate(node_disk_written_by TQL EXPLAIN sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50; ``` -Notice that since the given query won't be actually executed, the triple `(start, end, step)` is not necessary. But you can still provide it like in `TQL EVAL`: +Notice that since the given query won't actually be executed, the parameters `(start, end, step, [lookback])` are not necessary.
But you can still provide it like in `TQL EVAL`: ``` TQL EXPLAIN (0, 100, '10s') sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50; diff --git a/docs/reference/sql/trigger-syntax.md b/docs/reference/sql/trigger-syntax.md index b3b0f61b3..782ee021a 100644 --- a/docs/reference/sql/trigger-syntax.md +++ b/docs/reference/sql/trigger-syntax.md @@ -158,6 +158,20 @@ For example: SHOW TRIGGERS WHERE name = 'load1_monitor'; ``` +## SHOW CREATE TRIGGER + +To show the Trigger's definition: + +```sql +SHOW CREATE TRIGGER ; +``` + +For example: + +```sql +SHOW CREATE TRIGGER load1_monitor; +``` + ## DROP TRIGGER To delete a trigger, use the following `DROP TRIGGER` clause: diff --git a/docs/user-guide/ingest-data/for-iot/influxdb-line-protocol.md b/docs/user-guide/ingest-data/for-iot/influxdb-line-protocol.md index a1b94410a..d0cbae0a7 100644 --- a/docs/user-guide/ingest-data/for-iot/influxdb-line-protocol.md +++ b/docs/user-guide/ingest-data/for-iot/influxdb-line-protocol.md @@ -180,7 +180,7 @@ In the GreptimeDB data model, the data is represented as follows in the `census` ```sql +---------------------+----------+-----------+------+------+ -| ts | location | scientist | bees | ants | +| greptime_timestamp | location | scientist | bees | ants | +---------------------+----------+-----------+------+------+ | 2019-08-18 00:00:00 | klamath | anderson | 23 | NULL | | 2019-08-18 00:06:00 | klamath | anderson | 28 | NULL | @@ -192,15 +192,15 @@ In the GreptimeDB data model, the data is represented as follows in the `census` The schema of the `census` table is as follows: ```sql -+-----------+----------------------+------+------+---------+---------------+ -| Column | Type | Key | Null | Default | Semantic Type | -+-----------+----------------------+------+------+---------+---------------+ -| location | String | PRI | YES | | TAG | -| scientist | String | PRI | YES | | TAG | -| bees | Float64 | | YES | | FIELD | -| ts | TimestampNanosecond | PRI | NO | | TIMESTAMP | -| ants | Float64 | | YES | | FIELD | -+-----------+----------------------+------+------+---------+---------------+ ++--------------------+----------------------+------+------+---------+---------------+ +| Column | Type | Key | Null | Default | Semantic Type | ++--------------------+----------------------+------+------+---------+---------------+ +| location | String | PRI | YES | | TAG | +| scientist | String | PRI | YES | | TAG | +| bees | Float64 | | YES | | FIELD | +| greptime_timestamp | TimestampNanosecond | PRI | NO | | TIMESTAMP | +| ants | Float64 | | YES | | FIELD | ++--------------------+----------------------+------+------+---------+---------------+ ``` ## Reference diff --git a/docs/user-guide/logs/manage-pipelines.md b/docs/user-guide/logs/manage-pipelines.md index 5dcf6a606..7e2a6901b 100644 --- a/docs/user-guide/logs/manage-pipelines.md +++ b/docs/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" @@ -435,4 +435,4 @@ After customizing the DDL to meet your requirements, execute it manually before **Notes:** 1. The API only infers the table schema from the pipeline configuration; it doesn't check if the table already exists. -2. The API doesn't account for table suffixes. 
If you're using `dispatcher`, `table_suffix`, or table suffix hints in your pipeline configuration, you'll need to adjust the table name manually. \ No newline at end of file +2. The API doesn't account for table suffixes. If you're using `dispatcher`, `table_suffix`, or table suffix hints in your pipeline configuration, you'll need to adjust the table name manually. diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/db-cloud-shared/migrate/migrate-from-influxdb.md b/i18n/zh/docusaurus-plugin-content-docs/current/db-cloud-shared/migrate/migrate-from-influxdb.md index ccffc94e2..8cd94650f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/db-cloud-shared/migrate/migrate-from-influxdb.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/db-cloud-shared/migrate/migrate-from-influxdb.md @@ -87,15 +87,15 @@ from(bucket: "public") ```sql [SQL] SELECT - ts, + greptime_timestamp, host, AVG(cpu) RANGE '1h' as mean_cpu FROM monitor WHERE - ts > NOW() - '24 hours'::INTERVAL + greptime_timestamp > NOW() - '24 hours'::INTERVAL ALIGN '1h' TO NOW -ORDER BY ts DESC; +ORDER BY greptime_timestamp DESC; ``` 在该 SQL 查询中, diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/show.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/show.md index aa9dcc8b0..f6a88a9a8 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/show.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/show.md @@ -344,6 +344,10 @@ SHOW FULL PROCESSLIST; 请参考 [Trigger 语法](/reference/sql/trigger-syntax.md#show-triggers)文档。 +## SHOW CREATE TRIGGER + +请参考 [Trigger 语法](/reference/sql/trigger-syntax.md#show-create-trigger)文档。 + ## SHOW 语句的扩展 与 MySQL 类似,一些 `SHOW` 语句的扩展伴随着 [`INFORMATION_SCHEMA`](/reference/sql/information-schema/overview.md) 的实现,它们还接受 `WHERE` 子句,提供了在指定显示的行时更大的灵活性。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md index a21822304..58fd0e0d5 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/tql.md @@ -5,23 +5,24 @@ description: 介绍了 `TQL` 关键字及其在 GreptimeDB 中的用法,包括 # TQL -`TQL` 关键字在 SQL 中执行 TQL 语言。TQL 是 Time-Series Query Language 的缩写,是 GreptimeDB 中对 Prometheus 的 [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) 的扩展。 +`TQL` 关键字在 SQL 中执行 TQL 语言。TQL 是 Telemetry Query Language 的缩写,是 GreptimeDB 中对 Prometheus 的 [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) 的扩展。 ## EVAL ### Syntax ```sql -TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] +TQL [EVAL | EVALUATE] (start, end, step, [lookback]) expr [AS alias] ``` `start`, `end` 和 `step` 是查询参数,就像 [Prometheus Query API](https://prometheus.io/docs/prometheus/latest/querying/api/) 一样: -- `start`: ``: Start 时间戳,范围中包含该值。 -- `end`: ``: End 时间戳,范围中包含该值。 +- `start`: ``: 查询的起始时间戳,范围中包含该值。 +- `end`: ``: 查询的截止时间戳,范围中包含该值。 - `step`: ``: 查询分辨率步长,采用 `duration` 格式或浮点秒数。 +- `lookback`: ``: 查询评估的最大过去持续时间,默认 5 分钟,可选参数。 -`expr` 是 TQL 表达式查询字符串。 +`expr` 是 TQL (PromQL) 的查询字符串。 可选的 `AS alias` 子句允许你为查询结果提供别名。这有助于为输出列提供有意义的名称。 @@ -30,11 +31,33 @@ TQL [EVAL | EVALUATE] (start, end, step) expr [AS alias] 返回过去 5 分钟内 `http_requests_total` 指标的所有时间序列的每秒值: ```sql -TQL eval (1677057993, 1677058993, '1m') rate(prometheus_http_requests_total{job="prometheus"}[5m]); +TQL EVAL (1677057993, 1677058993, '1m') + rate(prometheus_http_requests_total{job="prometheus"}[5m]); ``` 其查询结果和 SQL 查询结果类似。 
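None of the examples in this patch exercise the new optional `lookback` parameter; a hedged sketch, assuming `lookback` accepts the same duration format as `step`:

```sql
-- illustrative: the fourth parameter '10m' overrides the default 5-minute lookback
TQL EVAL (1677057993, 1677058993, '1m', '10m') rate(prometheus_http_requests_total{job="prometheus"}[5m]);
```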
+`start` 和 `end` 还可以是可以被求值为常量的时间表达式,例如查询过去 3 个小时: + +```sql +TQL EVAL (now() - interval '3' hours, now(), '1m') + sum by (namespace, pod) ( + increase(kube_pod_container_status_restarts_total[10m:30s]) + ); +``` + +查询过去一天的数据: +```sql +TQL EVAL ( + date_trunc('day', now() - interval '1' day), + date_trunc('day', now()), + '1m' +) + sum by (namespace) ( + rate(http_requests_total[5m:30s]) + ); +``` + 你也可以使用值别名为查询结果提供有意义的名称: ```sql @@ -57,7 +80,7 @@ TQL EXPLAIN expr; TQL EXPLAIN sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50; ``` -注意该查询实际上没有被执行,所以 `(start, end, step)` 不是必需的,但你仍然可以像在 `TQL EVAL` 中一样提供这些参数: +注意该查询实际上没有被执行,所以 `(start, end, step, [lookback])` 不是必需的,但你仍然可以像在 `TQL EVAL` 中一样提供这些参数: ``` TQL EXPLAIN (0, 100, '10s') sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50; diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/trigger-syntax.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/trigger-syntax.md index 715e0114c..9d7b3d2d6 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/trigger-syntax.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference/sql/trigger-syntax.md @@ -145,6 +145,20 @@ SHOW TRIGGERS WHERE ; SHOW TRIGGERS WHERE name = 'load1_monitor'; ``` +## SHOW CREATE TRIGGER + +用于显示 TRIGGER 的定义: + +```sql +SHOW CREATE TRIGGER ; +``` + +例如: + +```sql +SHOW CREATE TRIGGER load1_monitor; +``` + ## DROP TRIGGER 请使用以下 `DROP TRIGGER` 语句删除 Trigger: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-iot/influxdb-line-protocol.md b/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-iot/influxdb-line-protocol.md index 32cd048d0..08d027e1e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-iot/influxdb-line-protocol.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-iot/influxdb-line-protocol.md @@ -179,7 +179,7 @@ census,location=portland,scientist=mullen ants=32 1566086760000000000 ```sql +---------------------+----------+-----------+------+------+ -| ts | location | scientist | bees | ants | +| greptime_timestamp | location | scientist | bees | ants | +---------------------+----------+-----------+------+------+ | 2019-08-18 00:00:00 | klamath | anderson | 23 | NULL | | 2019-08-18 00:06:00 | klamath | anderson | 28 | NULL | @@ -191,15 +191,15 @@ census,location=portland,scientist=mullen ants=32 1566086760000000000 `census` 表结构如下: ```sql -+-----------+----------------------+------+------+---------+---------------+ -| Column | Type | Key | Null | Default | Semantic Type | -+-----------+----------------------+------+------+---------+---------------+ -| location | String | PRI | YES | | TAG | -| scientist | String | PRI | YES | | TAG | -| bees | Float64 | | YES | | FIELD | -| ts | TimestampNanosecond | PRI | NO | | TIMESTAMP | -| ants | Float64 | | YES | | FIELD | -+-----------+----------------------+------+------+---------+---------------+ ++--------------------+----------------------+------+------+---------+---------------+ +| Column | Type | Key | Null | Default | Semantic Type | ++--------------------+----------------------+------+------+---------+---------------+ +| location | String | PRI | YES | | TAG | +| scientist | String | PRI | YES | | TAG | +| bees | Float64 | | YES | | FIELD | +| greptime_timestamp | TimestampNanosecond | PRI | NO | | TIMESTAMP | +| ants | Float64 | | YES | | FIELD | 
++--------------------+----------------------+------+------+---------+---------------+ ``` ## 参考 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/logs/manage-pipelines.md index bf17d4834..d2bb9abba 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ curl -X "POST" "http://localhost:4000/v1/pipelines/dryrun?pipeline_name=test" \ ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.12/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.12/user-guide/logs/manage-pipelines.md index 370af5236..e6d37b587 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.12/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.12/user-guide/logs/manage-pipelines.md @@ -279,13 +279,13 @@ curl -X "POST" "http://localhost:4000/v1/events/pipelines/dryrun?pipeline_name=t ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" @@ -294,4 +294,4 @@ curl -X "POST" "http://localhost:4000/v1/events/pipelines/dryrun?pipeline_name=t } ``` -可以看到,`1998.08` 字符串中的 `.` 已经被替换为 `-`,Pipeline 处理成功。 \ No newline at end of file +可以看到,`1998.08` 字符串中的 `.` 已经被替换为 `-`,Pipeline 处理成功。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.13/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.13/user-guide/logs/manage-pipelines.md index a7a551901..c1b3b459d 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.13/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.13/user-guide/logs/manage-pipelines.md @@ -279,13 +279,13 @@ curl -X "POST" "http://localhost:4000/v1/events/pipelines/dryrun?pipeline_name=t ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" @@ -294,4 +294,4 @@ curl -X "POST" "http://localhost:4000/v1/events/pipelines/dryrun?pipeline_name=t } ``` -可以看到,`1998.08` 字符串中的 `.` 已经被替换为 `-`,Pipeline 处理成功。 \ No newline at end of file +可以看到,`1998.08` 字符串中的 `.` 已经被替换为 `-`,Pipeline 处理成功。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.14/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.14/user-guide/logs/manage-pipelines.md index aa561c9c7..6d8942158 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.14/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.14/user-guide/logs/manage-pipelines.md @@ -299,13 +299,13 @@ curl -X "POST" "http://localhost:4000/v1/events/pipelines/dryrun?pipeline_name=t ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - 
"colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.15/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.15/user-guide/logs/manage-pipelines.md index bc85acc64..b5630336c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.15/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.15/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ curl -X "POST" "http://localhost:4000/v1/events/pipelines/dryrun?pipeline_name=t ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.16/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.16/user-guide/logs/manage-pipelines.md index c5a74beec..82980c169 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.16/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.16/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ curl -X "POST" "http://localhost:4000/v1/pipelines/dryrun?pipeline_name=test" \ ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-0.17/user-guide/logs/manage-pipelines.md b/i18n/zh/docusaurus-plugin-content-docs/version-0.17/user-guide/logs/manage-pipelines.md index c5a74beec..82980c169 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-0.17/user-guide/logs/manage-pipelines.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-0.17/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ curl -X "POST" "http://localhost:4000/v1/pipelines/dryrun?pipeline_name=test" \ ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/versioned_docs/version-0.12/user-guide/ingest-data/for-observability/loki.md b/versioned_docs/version-0.12/user-guide/ingest-data/for-observability/loki.md index a271357fd..dbf66a242 100644 --- a/versioned_docs/version-0.12/user-guide/ingest-data/for-observability/loki.md +++ b/versioned_docs/version-0.12/user-guide/ingest-data/for-observability/loki.md @@ -23,7 +23,7 @@ To send Logs to GreptimeDB through Raw HTTP API, use the following information: [Grafana Alloy](https://grafana.com/docs/alloy/latest/) is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector. Alloy uniquely combines the very best OSS observability signals in the community. -It suplies a Loki exporter that can be used to send logs to GreptimeDB. Here is an example configuration: +It supplies a Loki exporter that can be used to send logs to GreptimeDB. 
Here is an example configuration: ```hcl loki.source.file "greptime" { @@ -125,4 +125,4 @@ WITH( append_mode = 'true' ) 1 row in set (0.00 sec) -``` \ No newline at end of file +``` diff --git a/versioned_docs/version-0.12/user-guide/logs/manage-pipelines.md b/versioned_docs/version-0.12/user-guide/logs/manage-pipelines.md index 6c76375e0..8eed1cae0 100644 --- a/versioned_docs/version-0.12/user-guide/logs/manage-pipelines.md +++ b/versioned_docs/version-0.12/user-guide/logs/manage-pipelines.md @@ -282,13 +282,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" @@ -297,4 +297,4 @@ At this point, the Pipeline processing is successful, and the output is as follo } ``` -It can be seen that the `.` in the string `1998.08` has been replaced with `-`, indicating a successful processing of the Pipeline. \ No newline at end of file +It can be seen that the `.` in the string `1998.08` has been replaced with `-`, indicating a successful processing of the Pipeline. diff --git a/versioned_docs/version-0.13/user-guide/ingest-data/for-observability/loki.md b/versioned_docs/version-0.13/user-guide/ingest-data/for-observability/loki.md index a271357fd..dbf66a242 100644 --- a/versioned_docs/version-0.13/user-guide/ingest-data/for-observability/loki.md +++ b/versioned_docs/version-0.13/user-guide/ingest-data/for-observability/loki.md @@ -23,7 +23,7 @@ To send Logs to GreptimeDB through Raw HTTP API, use the following information: [Grafana Alloy](https://grafana.com/docs/alloy/latest/) is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector. Alloy uniquely combines the very best OSS observability signals in the community. -It suplies a Loki exporter that can be used to send logs to GreptimeDB. Here is an example configuration: +It supplies a Loki exporter that can be used to send logs to GreptimeDB. Here is an example configuration: ```hcl loki.source.file "greptime" { @@ -125,4 +125,4 @@ WITH( append_mode = 'true' ) 1 row in set (0.00 sec) -``` \ No newline at end of file +``` diff --git a/versioned_docs/version-0.13/user-guide/logs/manage-pipelines.md b/versioned_docs/version-0.13/user-guide/logs/manage-pipelines.md index 61a816657..1a0efc1d6 100644 --- a/versioned_docs/version-0.13/user-guide/logs/manage-pipelines.md +++ b/versioned_docs/version-0.13/user-guide/logs/manage-pipelines.md @@ -282,13 +282,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" @@ -297,4 +297,4 @@ At this point, the Pipeline processing is successful, and the output is as follo } ``` -It can be seen that the `.` in the string `1998.08` has been replaced with `-`, indicating a successful processing of the Pipeline. \ No newline at end of file +It can be seen that the `.` in the string `1998.08` has been replaced with `-`, indicating a successful processing of the Pipeline. 
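The dryrun output above only reports the inferred schema; a hedged sketch of how the `message` and `time` columns would surface in plain SQL once data is actually ingested (the table name `pipeline_logs` is hypothetical, not from the patch):

```sql
-- 'pipeline_logs' is a hypothetical table written through the pipeline above
SELECT message, time
FROM pipeline_logs
WHERE time >= now() - '1 hour'::INTERVAL
ORDER BY time DESC
LIMIT 10;
```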
diff --git a/versioned_docs/version-0.14/user-guide/ingest-data/for-observability/loki.md b/versioned_docs/version-0.14/user-guide/ingest-data/for-observability/loki.md index a271357fd..dbf66a242 100644 --- a/versioned_docs/version-0.14/user-guide/ingest-data/for-observability/loki.md +++ b/versioned_docs/version-0.14/user-guide/ingest-data/for-observability/loki.md @@ -23,7 +23,7 @@ To send Logs to GreptimeDB through Raw HTTP API, use the following information: [Grafana Alloy](https://grafana.com/docs/alloy/latest/) is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector. Alloy uniquely combines the very best OSS observability signals in the community. -It suplies a Loki exporter that can be used to send logs to GreptimeDB. Here is an example configuration: +It supplies a Loki exporter that can be used to send logs to GreptimeDB. Here is an example configuration: ```hcl loki.source.file "greptime" { @@ -125,4 +125,4 @@ WITH( append_mode = 'true' ) 1 row in set (0.00 sec) -``` \ No newline at end of file +``` diff --git a/versioned_docs/version-0.14/user-guide/logs/manage-pipelines.md b/versioned_docs/version-0.14/user-guide/logs/manage-pipelines.md index e9f343592..dca947ffa 100644 --- a/versioned_docs/version-0.14/user-guide/logs/manage-pipelines.md +++ b/versioned_docs/version-0.14/user-guide/logs/manage-pipelines.md @@ -299,13 +299,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/versioned_docs/version-0.15/user-guide/logs/manage-pipelines.md b/versioned_docs/version-0.15/user-guide/logs/manage-pipelines.md index 044d90199..54dc2a9df 100644 --- a/versioned_docs/version-0.15/user-guide/logs/manage-pipelines.md +++ b/versioned_docs/version-0.15/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/versioned_docs/version-0.16/user-guide/logs/manage-pipelines.md b/versioned_docs/version-0.16/user-guide/logs/manage-pipelines.md index 5406dc8ab..7fe9d931a 100644 --- a/versioned_docs/version-0.16/user-guide/logs/manage-pipelines.md +++ b/versioned_docs/version-0.16/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": "FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time" diff --git a/versioned_docs/version-0.17/user-guide/logs/manage-pipelines.md b/versioned_docs/version-0.17/user-guide/logs/manage-pipelines.md index 5406dc8ab..7fe9d931a 100644 --- a/versioned_docs/version-0.17/user-guide/logs/manage-pipelines.md +++ b/versioned_docs/version-0.17/user-guide/logs/manage-pipelines.md @@ -303,13 +303,13 @@ At this point, the Pipeline processing is successful, and the output is as follo ], "schema": [ { - "colume_type": 
"FIELD", + "column_type": "FIELD", "data_type": "STRING", "fulltext": false, "name": "message" }, { - "colume_type": "TIMESTAMP", + "column_type": "TIMESTAMP", "data_type": "TIMESTAMP_NANOSECOND", "fulltext": false, "name": "time"