[SPARK-31636][SQL][DOCS] Remove HTML syntax in SQL reference
### What changes were proposed in this pull request?
Remove the unneeded embedded inline HTML markup and use basic Markdown syntax instead.
Please see #28414
### Why are the changes needed?
Make the docs cleaner and easier to edit with Markdown editors.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Manually built and checked the docs.
Closes #28451 from huaxingao/html_cleanup.
Authored-by: Huaxin Gao <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
**docs/sql-ref-ansi-compliance.md** (9 additions, 9 deletions)
````diff
@@ -41,10 +41,10 @@ This means that in case an operation causes overflows, the result is the same wi
 On the other hand, Spark SQL returns null for decimal overflows.
 When `spark.sql.ansi.enabled` is set to `true` and an overflow occurs in numeric and interval arithmetic operations, it throws an arithmetic exception at runtime.

-{% highlight sql %}
+```sql
 -- `spark.sql.ansi.enabled=true`
 SELECT 2147483647 + 1;
-java.lang.ArithmeticException: integer overflow
+java.lang.ArithmeticException: integer overflow

 -- `spark.sql.ansi.enabled=false`
 SELECT 2147483647 + 1;
@@ -53,7 +53,7 @@ SELECT 2147483647 + 1;
 +----------------+
 | -2147483648|
 +----------------+
-{% endhighlight %}
+```

 ### Type Conversion

@@ -64,15 +64,15 @@ On the other hand, `INSERT INTO` syntax throws an analysis exception when the AN
 Currently, the ANSI mode affects explicit casting and assignment casting only.
 In future releases, the behaviour of type coercion might change along with the other two type conversion rules.

-{% highlight sql %}
+```sql
 -- Examples of explicit casting

 -- `spark.sql.ansi.enabled=true`
 SELECT CAST('a' AS INT);
-java.lang.NumberFormatException: invalid input syntax for type numeric: a
+java.lang.NumberFormatException: invalid input syntax for type numeric: a

 SELECT CAST(2147483648L AS INT);
-java.lang.ArithmeticException: Casting 2147483648 to int causes overflow
+java.lang.ArithmeticException: Casting 2147483648 to int causes overflow

 -- `spark.sql.ansi.enabled=false` (This is a default behaviour)
 SELECT CAST('a' AS INT);
@@ -94,8 +94,8 @@ CREATE TABLE t (v INT);

 -- `spark.sql.storeAssignmentPolicy=ANSI`
 INSERT INTO t VALUES ('1');
-org.apache.spark.sql.AnalysisException: Cannot write incompatible data to table '`default`.`t`':
-- Cannot safely cast 'v': StringType to IntegerType;
+org.apache.spark.sql.AnalysisException: Cannot write incompatible data to table '`default`.`t`':
+- Cannot safely cast 'v': StringType to IntegerType;

 -- `spark.sql.storeAssignmentPolicy=LEGACY` (This is a legacy behaviour until Spark 2.x)
````
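
The behaviour documented in these hunks can also be exercised from a Spark application. The following Scala sketch is illustrative rather than part of the patch; the local session setup and app name are assumptions, and only the SQL statements themselves come from the doc page:

```scala
import org.apache.spark.sql.SparkSession

object AnsiModeDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical local session for experimentation (not from the patch).
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("ansi-mode-demo")
      .getOrCreate()

    // Legacy behaviour (spark.sql.ansi.enabled=false): the overflow wraps
    // around and the invalid cast returns NULL instead of failing.
    spark.conf.set("spark.sql.ansi.enabled", "false")
    spark.sql("SELECT 2147483647 + 1").show()    // -2147483648
    spark.sql("SELECT CAST('a' AS INT)").show()  // NULL

    // ANSI mode: the same statements throw an exception at runtime.
    spark.conf.set("spark.sql.ansi.enabled", "true")
    try {
      spark.sql("SELECT 2147483647 + 1").show()
    } catch {
      case e: Exception => println(s"ANSI mode error: ${e.getMessage}")
    }

    spark.stop()
  }
}
```

Because `spark.sql.ansi.enabled` is a runtime SQL configuration, it can be flipped per session as above without restarting Spark.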
**docs/sql-ref-functions-udf-aggregate.md** (45 additions, 56 deletions)
````diff
@@ -27,46 +27,35 @@ User-Defined Aggregate Functions (UDAFs) are user-programmable routines that act

 A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.

-* IN - The input type for the aggregation.
-* BUF - The type of the intermediate value of the reduction.
-* OUT - The type of the final output result.
+***IN*** - The input type for the aggregation.
+
+***BUF*** - The type of the intermediate value of the reduction.
...
-Aggregate input value <code>a</code> into current intermediate value. For performance, the function may modify <code>b</code> and return it instead of constructing new object for <code>b</code>.
-</dd>
-</dl>
-<dl>
-<dt><code><em>zero: BUF</em></code></dt>
-<dd>
+
+***reduce(b: BUF, a: IN): BUF***
+
+Aggregate input value `a` into current intermediate value. For performance, the function may modify `b` and return it instead of constructing new object for `b`.
+
+***zero: BUF***
+
 The initial value of the intermediate result for this aggregation.
-</dd>
-</dl>


 ### Examples
@@ -95,16 +84,16 @@ For example, a user-defined average for untyped DataFrames can look like:
````
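
The `IN`/`BUF`/`OUT` type parameters and the `zero`/`reduce` members quoted in this hunk belong to `org.apache.spark.sql.expressions.Aggregator`. As a companion to the diff, here is a minimal Scala sketch of a typed average aggregator in the spirit of the example this doc page builds toward; the object, function, and column names are illustrative assumptions, and `functions.udaf` assumes Spark 3.0+:

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.functions

// BUF: running sum and count of the values seen so far.
case class Average(var sum: Long, var count: Long)

// IN = Long, BUF = Average, OUT = Double
object MyAverage extends Aggregator[Long, Average, Double] {
  // The initial value of the intermediate result (zero: BUF).
  def zero: Average = Average(0L, 0L)

  // Fold one input value into the buffer; reusing `b` avoids extra allocations.
  def reduce(b: Average, a: Long): Average = {
    b.sum += a
    b.count += 1
    b
  }

  // Combine two partial buffers produced on different partitions.
  def merge(b1: Average, b2: Average): Average = {
    b1.sum += b2.sum
    b1.count += b2.count
    b1
  }

  // Transform the final buffer into the output value (OUT).
  def finish(reduction: Average): Double = reduction.sum.toDouble / reduction.count

  // Encoders for the intermediate and output types.
  def bufferEncoder: Encoder[Average] = Encoders.product
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

object UdafDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("udaf-demo").getOrCreate()

    // Register the typed Aggregator as an untyped UDAF callable from SQL (Spark 3.0+).
    spark.udf.register("my_average", functions.udaf(MyAverage))

    spark.range(1, 5).createOrReplaceTempView("t")
    spark.sql("SELECT my_average(id) FROM t").show()  // 2.5

    spark.stop()
  }
}
```

Note that `merge`, `finish`, and the two encoders are also required by `Aggregator`, even though only `zero` and `reduce` are quoted in the hunk above.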
**docs/sql-ref-functions-udf-hive.md** (6 additions, 6 deletions)
````diff
@@ -28,7 +28,7 @@ Spark SQL supports integration of Hive UDFs, UDAFs and UDTFs. Similar to Spark U
 Hive has two UDF interfaces: [UDF](https://github.com/apache/hive/blob/master/udf/src/java/org/apache/hadoop/hive/ql/exec/UDF.java) and [GenericUDF](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java).
 An example below uses [GenericUDFAbs](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFAbs.java) derived from `GenericUDF`.

-{% highlight sql %}
+```sql
 -- Register `GenericUDFAbs` and use it in Spark SQL.
 -- Note that, if you use your own programmed one, you need to add a JAR containing it
 -- into a classpath,
@@ -52,12 +52,12 @@ SELECT testUDF(value) FROM t;
 | 2.0|
 | 3.0|
 +--------------+
-{% endhighlight %}
+```


 An example below uses [GenericUDTFExplode](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFExplode.java) derived from [GenericUDTF](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java).

-{% highlight sql %}
+```sql
 -- Register `GenericUDTFExplode` and use it in Spark SQL
@@ -79,12 +79,12 @@ SELECT hiveUDTF(value) FROM t;
 | 3|
 | 4|
 +---+
-{% endhighlight %}
+```

 Hive has two UDAF interfaces: [UDAF](https://github.com/apache/hive/blob/master/udf/src/java/org/apache/hadoop/hive/ql/exec/UDAF.java) and [GenericUDAFResolver](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFResolver.java).
 An example below uses [GenericUDAFSum](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java) derived from `GenericUDAFResolver`.

-{% highlight sql %}
+```sql
 -- Register `GenericUDAFSum` and use it in Spark SQL
````
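
The `CREATE TEMPORARY FUNCTION` statements these hunks document can also be issued from application code. The Scala sketch below is illustrative, assuming a session launched with Hive support and the built-in Hive UDF classes on the classpath; the sample data and view name are assumptions, while the function registration and query mirror the doc page:

```scala
import org.apache.spark.sql.SparkSession

object HiveUdfDemo {
  def main(args: Array[String]): Unit = {
    // Hive support is needed so Hive UDF/UDAF/UDTF classes can be resolved.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("hive-udf-demo")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Register the built-in Hive GenericUDFAbs under a Spark SQL function name.
    // A user-supplied UDF would additionally need its JAR on the classpath
    // (e.g. via ADD JAR), as the doc page notes.
    spark.sql(
      "CREATE TEMPORARY FUNCTION testUDF AS " +
        "'org.apache.hadoop.hive.ql.udf.generic.GenericUDFAbs'")

    // A small table to run the UDF against; the values here are illustrative.
    Seq(-1.0, -2.0, -3.0).toDF("value").createOrReplaceTempView("t")

    // Same query as in the documentation example.
    spark.sql("SELECT testUDF(value) FROM t").show()

    spark.stop()
  }
}
```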