docs/sql-ref-ansi-compliance.md
2 additions & 2 deletions
@@ -28,7 +28,7 @@ The casting behaviours are defined as store assignment rules in the standard.
When `spark.sql.storeAssignmentPolicy` is set to `ANSI`, Spark SQL complies with the ANSI store assignment rules. This is a separate configuration because its default value is `ANSI`, while the configuration `spark.sql.ansi.enabled` is disabled by default.
|Property Name|Default|Meaning|Since Version|
-|---|---|---|---|
+|-------------|-------|-------|-------------|
|`spark.sql.ansi.enabled`|false|(Experimental) When true, Spark tries to conform to the ANSI SQL specification: <br> 1. Spark will throw a runtime exception if an overflow occurs in any operation on integral/decimal field. <br> 2. Spark will forbid using the reserved keywords of ANSI SQL as identifiers in the SQL parser.|3.0.0|
|`spark.sql.storeAssignmentPolicy`|ANSI|(Experimental) When inserting a value into a column with a different data type, Spark will perform type coercion. Currently, we support 3 policies for the type coercion rules: ANSI, legacy and strict. With ANSI policy, Spark performs the type coercion as per ANSI SQL. In practice, the behavior is mostly the same as PostgreSQL. It disallows certain unreasonable type conversions such as converting string to int or double to boolean. With legacy policy, Spark allows the type coercion as long as it is a valid Cast, which is very loose. e.g. converting string to int or double to boolean is allowed. It is also the only behavior in Spark 2.x and it is compatible with Hive. With strict policy, Spark doesn't allow any possible precision loss or data truncation in type coercion, e.g. converting double to int or decimal to double is not allowed.|3.0.0|
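
To make the two settings above concrete, here is a minimal PySpark sketch (not part of the diff; the session setup, the column name `v`, and the sample query are illustrative assumptions) showing how `spark.sql.ansi.enabled` turns an integer overflow into a runtime error, while `spark.sql.storeAssignmentPolicy` is configured independently:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ansi-demo").getOrCreate()

# Legacy behaviour: integer overflow wraps around silently.
spark.conf.set("spark.sql.ansi.enabled", "false")
spark.sql("SELECT 2147483647 + CAST(1 AS INT) AS v").show()  # v = -2147483648

# ANSI behaviour: the same expression fails at runtime instead of wrapping.
spark.conf.set("spark.sql.ansi.enabled", "true")
try:
    spark.sql("SELECT 2147483647 + CAST(1 AS INT) AS v").show()
except Exception as err:  # the JVM ArithmeticException is surfaced through Py4J
    print("overflow rejected:", type(err).__name__)

# Store assignment is governed by a separate key and already defaults to ANSI.
spark.conf.set("spark.sql.storeAssignmentPolicy", "ANSI")
```

The exact exception type surfaced through Py4J can vary by Spark version, so the catch is deliberately broad.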
@@ -128,7 +128,7 @@ By default `spark.sql.ansi.enabled` is false.
|**ByteType**|int or long<br>**Note:** Numbers will be converted to 1-byte signed integer numbers at runtime. Please make sure that numbers are within the range of -128 to 127.|ByteType()|
|**ShortType**|int or long<br>**Note:** Numbers will be converted to 2-byte signed integer numbers at runtime. Please make sure that numbers are within the range of -32768 to 32767.|ShortType()|
|**IntegerType**|int or long|IntegerType()|
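
For the Python type rows above, a small sketch (column names and sample values are my own, chosen to sit at the documented range limits) of declaring `ByteType`, `ShortType`, and `IntegerType` columns explicitly:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, ByteType, ShortType, IntegerType

spark = SparkSession.builder.appName("type-mapping-demo").getOrCreate()

schema = StructType([
    StructField("b", ByteType()),     # accepts int values within -128..127
    StructField("s", ShortType()),    # accepts int values within -32768..32767
    StructField("i", IntegerType()),  # plain int
])

# Values at the documented limits convert cleanly to 1-, 2-, and 4-byte integers.
df = spark.createDataFrame([(127, 32767, 2147483647)], schema)
df.printSchema()
df.show()
```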
@@ -151,7 +151,7 @@ from pyspark.sql.types import *
<div data-lang="r" markdown="1">
|Data type|Value type in R|API to access or create a data type|
|**ByteType**|integer <br>**Note:** Numbers will be converted to 1-byte signed integer numbers at runtime. Please make sure that numbers are within the range of -128 to 127.|"byte"|
|**ShortType**|integer <br>**Note:** Numbers will be converted to 2-byte signed integer numbers at runtime. Please make sure that numbers are within the range of -32768 to 32767.|"short"|
|**IntegerType**|integer|"integer"|
@@ -176,7 +176,7 @@ from pyspark.sql.types import *
The following table shows the type names as well as aliases used in Spark SQL parser for each data type.
|**range** ( end )|Long|Creates a table with a single *LongType* column named *id*, containing<br> rows in a range from 0 to *end* (exclusive) with step value 1.|
|**range** ( start, end )|Long, Long|Creates a table with a single *LongType* column named *id*, containing<br> rows in a range from *start* to *end* (exclusive) with step value 1.|
|**range** ( start, end, step )|Long, Long, Long|Creates a table with a single *LongType* column named *id*, containing<br> rows in a range from *start* to *end* (exclusive) with *step* value.|
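
The three `range` variants in the table above correspond to `SparkSession.range` in PySpark; a brief sketch (the app name is an assumption) of the one-argument and three-argument forms:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("range-demo").getOrCreate()

# range(end): a single LongType column `id` with values 0..4, step 1.
spark.range(5).show()

# range(start, end, step): values 2, 4, 6, 8 (end is exclusive).
spark.range(2, 10, 2).show()
```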