# Hive JDBC Data Source Dialect

The Hive JDBC Data Source dialect plugin provides Hive Dialect support to [Spark's JDBC Data Source](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html).
It is automatically registered to Spark and applied to JDBC data sources whose URL starts with `jdbc:hive2://` or `jdbc:kyuubi://`.

The Hive Dialect helps to solve failures when accessing Kyuubi: querying data from Kyuubi as a JDBC data source with the
Hive JDBC Driver or Kyuubi Hive JDBC Driver in Spark fails or returns unexpected results, because Spark JDBC provides no
Hive Dialect support out of the box and quotes columns and other identifiers in ANSI style as "table.column" rather than
in HiveSQL style as \`table\`.\`column\`.
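
To illustrate the difference (hypothetical generated SQL, reusing the `testdb.src_table` source that appears later in this guide):

```sql
-- ANSI quoting from the default dialect: Hive parses "id" as a string
-- literal, so such queries fail or return unexpected results
SELECT "id" FROM "testdb"."src_table"

-- Hive-style quoting emitted once the dialect applies
SELECT `id` FROM `testdb`.`src_table`
```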

Note: this is an inefficient way to access data stored in the Hive warehouse; see more discussion at [SPARK-47482](https://github.com/apache/spark/pull/45609).

## Features

- Quote identifiers in Hive SQL style

e.g. quote `table.column` as \`table\`.\`column\`

- Adapt to Hive data type definitions

Reference: https://cwiki.apache.org/confluence/display/hive/languagemanual+types

## Preparation

### Prepare JDBC driver

Prepare the JDBC driver jar file. The supported Hive-compatible JDBC drivers are listed below:

| Driver | Driver Class Name | Remarks |
|-----------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------|
| Kyuubi Hive JDBC Driver ([doc](../jdbc/kyuubi_jdbc.html)) | org.apache.kyuubi.jdbc.KyuubiHiveDriver | Use v1.6.1 or later, which includes [KYUUBI #3484](https://github.com/apache/kyuubi/pull/3485). |
| Hive JDBC Driver ([doc](../jdbc/hive_jdbc.html)) | org.apache.hive.jdbc.HiveDriver | The Hive JDBC driver is already included in the official Spark binary distribution. |

Refer to the driver's docs and prepare the JDBC driver jar file.
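
For example, to go with the Kyuubi Hive JDBC Driver, one option is to declare the shaded driver artifact from Maven Central (a sketch: the `kyuubi-hive-jdbc-shaded` coordinates are taken from the Kyuubi JDBC driver docs, `{latest-version}` is a placeholder, and per the table above it should be v1.6.1 or later):

```xml
<dependency>
    <groupId>org.apache.kyuubi</groupId>
    <artifactId>kyuubi-hive-jdbc-shaded</artifactId>
    <version>{latest-version}</version>
</dependency>
```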

### Prepare JDBC Hive Dialect extension

Prepare the plugin jar file `kyuubi-extension-spark-jdbc-dialect_-*.jar`.

Get the Kyuubi Hive Dialect Extension jar from Maven Central:

```xml
<dependency>
    <groupId>org.apache.kyuubi</groupId>
    <artifactId>kyuubi-extension-spark-jdbc-dialect_2.12</artifactId>
    <version>{latest-version}</version>
</dependency>
```

Or, compile the extension by executing:

```shell
build/mvn clean package -pl :kyuubi-extension-spark-jdbc-dialect_2.12 -DskipTests
```

Then get the extension jar under `extensions/spark/kyuubi-extension-spark-jdbc-dialect/target`.

If needed, you can compile the extension jar with the corresponding Maven profile in your compile command,
e.g. get the extension jar for Spark 3.5 by compiling with `-Pspark-3.5`.
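
For example, the complete command for a Spark 3.5 build:

```shell
build/mvn clean package -pl :kyuubi-extension-spark-jdbc-dialect_2.12 -DskipTests -Pspark-3.5
```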

### Including the JDBC driver and Hive Dialect extension jars

Choose one of the following ways to include jar files in Spark.

- Put the jar files of the JDBC driver and Hive Dialect extension into the `$SPARK_HOME/jars` directory to make them visible to all Spark applications, and add `spark.sql.extensions = org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension` to `$SPARK_HOME/conf/spark-defaults.conf`, for example:
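
  A minimal `spark-defaults.conf` sketch (the property name and extension class are exactly the ones above; the layout is illustrative):

  ```properties
  spark.sql.extensions    org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension
  ```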

- With each `spark-submit` (or `spark-sql`, `pyspark`, etc.) command, include the JDBC driver with `--packages` and the Hive Dialect plugin with `--jars`:

  ```shell
  $SPARK_HOME/bin/spark-submit \
    --packages org.apache.hive:hive-jdbc:x.y.z \
    --jars /path/kyuubi-extension-spark-jdbc-dialect_-*.jar \
    ...
  ```

- Set the jars and config with the SparkSession builder:

  ```scala
  val spark = SparkSession.builder
    .config("spark.jars", "/path/hive-jdbc-x.y.z.jar,/path/kyuubi-extension-spark-jdbc-dialect_-*.jar")
    .config("spark.sql.extensions", "org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension")
    .getOrCreate()
  ```
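
Putting it together: a complete launch sketch under the assumptions above (`x.y.z` and the jar path are placeholders; when relying on `--jars` rather than `$SPARK_HOME/jars` plus `spark-defaults.conf`, the extension presumably still has to be enabled, here via `--conf`):

```shell
$SPARK_HOME/bin/spark-sql \
  --packages org.apache.hive:hive-jdbc:x.y.z \
  --jars /path/kyuubi-extension-spark-jdbc-dialect_-*.jar \
  --conf spark.sql.extensions=org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension
```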

## Usage

### Using as JDBC data source programmatically

```scala
// Loading data from Kyuubi via HiveDriver as a JDBC data source.
// Note: set either `dbtable` or `query`, not both at the same time.
val jdbcDF = spark.read
  .format("jdbc")
  .option("driver", "org.apache.hive.jdbc.HiveDriver")
  .option("url", "jdbc:hive2://kyuubi_server_ip:port")
  .option("dbtable", "schema.tablename")
  // or: .option("query", "select * from testdb.src_table")
  .option("user", "username")
  .option("password", "password")
  .load()
```
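
Writing back goes through the same data source; a minimal sketch, assuming a target table `testdb.target_table` (an illustrative name) already exists on the Kyuubi side:

```scala
// Appending the loaded rows to a Kyuubi table via the same JDBC data source
jdbcDF.write
  .format("jdbc")
  .option("driver", "org.apache.hive.jdbc.HiveDriver")
  .option("url", "jdbc:hive2://kyuubi_server_ip:port")
  .option("dbtable", "testdb.target_table")
  .option("user", "username")
  .option("password", "password")
  .mode("append")
  .save()
```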

### Using as JDBC data source table with SQL

Since Spark 3.2.0, [`CREATE DATASOURCE TABLE`](https://spark.apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html)
is supported for creating a JDBC source table with SQL.

```sql
-- create a JDBC data source table
CREATE TABLE kyuubi_table
USING JDBC
OPTIONS (
  driver='org.apache.hive.jdbc.HiveDriver',
  url='jdbc:hive2://kyuubi_server_ip:port',
  user='user',
  password='password',
  dbtable='testdb.some_table'
);

-- query data
SELECT * FROM kyuubi_table;

-- write data in overwrite mode
INSERT OVERWRITE kyuubi_table SELECT ...;

-- write data in append mode
INSERT INTO kyuubi_table SELECT ...;
```
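
If you prefer a session-scoped object, the same options can back a temporary view instead (a sketch; the view name is illustrative, and `USING JDBC` resolves to the same data source):

```sql
-- expose the Kyuubi table to the current session only
CREATE TEMPORARY VIEW kyuubi_tmp_view
USING JDBC
OPTIONS (
  driver='org.apache.hive.jdbc.HiveDriver',
  url='jdbc:hive2://kyuubi_server_ip:port',
  user='user',
  password='password',
  dbtable='testdb.some_table'
);

SELECT * FROM kyuubi_tmp_view;
```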
