67 changes: 61 additions & 6 deletions plugins/spark/README.md
@@ -30,11 +30,66 @@ and depends on iceberg-spark-runtime 1.8.1.

# Build Plugin Jar
A task createPolarisSparkJar is added to build a jar for the Polaris Spark plugin; the jar is named
`polaris-iceberg-<iceberg_version>-spark-runtime-<spark_major_version>_<scala_version>.jar`.

Building the Polaris project produces client jars for both Scala 2.12 and 2.13, and CI runs the Spark
client tests for both Scala versions as well.

The jar can also be built on its own for a specific Scala version using the target `:polaris-spark-3.5_<scala_version>`. For example:
- `./gradlew :polaris-spark-3.5_2.12:createPolarisSparkJar` - Build a jar for the Polaris Spark plugin with Scala version 2.12.

The resulting jar is located at plugins/spark/v3.5/build/<scala_version>/libs after the build.

# Start Spark with Local Polaris Service using built Jar
Once the jar is built, we can manually test it with Spark and a local Polaris service.

The following command starts a Polaris server for local testing; it runs on localhost:8181 with the default
realm `POLARIS` and root credentials `root:secret`:
```shell
./gradlew run
```
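Before starting Spark, you can optionally confirm the server is reachable by requesting an OAuth token from the
catalog's token endpoint; this is a quick sketch assuming the default realm and the `root:secret` credentials above:

```shell
# Sanity check (assumes the default local setup and root:secret credentials):
# request an OAuth token from the catalog's token endpoint.
curl -s -X POST http://localhost:8181/api/catalog/v1/oauth/tokens \
  -d 'grant_type=client_credentials' \
  -d 'client_id=root' \
  -d 'client_secret=secret' \
  -d 'scope=PRINCIPAL_ROLE:ALL'
```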

Once the local server is running, the following command can be used to start the spark-shell with the built Spark client
jar, using the local Polaris server as the catalog.

```shell
bin/spark-shell \
--jars <path-to-spark-client-jar> \
--packages org.apache.hadoop:hadoop-aws:3.4.0,io.delta:delta-spark_2.12:3.3.1 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,io.delta.sql.DeltaSparkSessionExtension \
--conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
--conf spark.sql.catalog.<catalog-name>.warehouse=<catalog-name> \
--conf spark.sql.catalog.<catalog-name>.header.X-Iceberg-Access-Delegation=true \
--conf spark.sql.catalog.<catalog-name>=org.apache.polaris.spark.SparkCatalog \
--conf spark.sql.catalog.<catalog-name>.uri=http://localhost:8181/api/catalog \
--conf spark.sql.catalog.<catalog-name>.credential="root:secret" \
--conf spark.sql.catalog.<catalog-name>.scope='PRINCIPAL_ROLE:ALL' \
--conf spark.sql.catalog.<catalog-name>.token-refresh-enabled=true \
--conf spark.sql.catalog.<catalog-name>.type=rest \
--conf spark.sql.sources.useV1SourceList=''
```

Assume the path to the built Spark client jar is
`/polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar`
and the name of the catalog is `polaris`. The CLI command will then look like the following:

```shell
bin/spark-shell \
--jars /polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar \
--packages org.apache.hadoop:hadoop-aws:3.4.0,io.delta:delta-spark_2.12:3.3.1 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,io.delta.sql.DeltaSparkSessionExtension \
--conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
--conf spark.sql.catalog.polaris.warehouse=<catalog-name> \
--conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=true \
--conf spark.sql.catalog.polaris=org.apache.polaris.spark.SparkCatalog \
--conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
--conf spark.sql.catalog.polaris.credential="root:secret" \
--conf spark.sql.catalog.polaris.scope='PRINCIPAL_ROLE:ALL' \
--conf spark.sql.catalog.polaris.token-refresh-enabled=true \
--conf spark.sql.catalog.polaris.type=rest \
--conf spark.sql.sources.useV1SourceList=''
```
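Once the shell is up, a minimal smoke test might look like the following sketch (the `quickstart_db` namespace and the
table names are illustrative only; the Delta table is given an explicit `LOCATION` because of the limitation described
in the next section):

```shell
scala> spark.sql("CREATE NAMESPACE IF NOT EXISTS polaris.quickstart_db")
scala> spark.sql("CREATE TABLE polaris.quickstart_db.iceberg_tbl (id INT, data STRING) USING iceberg")
scala> spark.sql("CREATE TABLE polaris.quickstart_db.delta_tbl (id INT, data STRING) USING delta LOCATION 'file:///tmp/quickstart_db/delta_tbl'")
scala> spark.sql("SHOW TABLES IN polaris.quickstart_db").show()
```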

# Limitations
The Polaris Spark client supports catalog management for both Iceberg and Delta tables: it routes all Iceberg table
requests to the Iceberg REST endpoints and all Delta table requests to the Generic Table REST endpoints.

The current limitations of the Polaris Spark client are:
1) Create table as select (CTAS) is not supported for Delta tables. As a result, the `saveAsTable` method of `Dataframe`
is also not supported, since it relies on CTAS support.
2) Creating a Delta table without an explicit location is not supported.
3) Renaming a Delta table is not supported.
4) ALTER TABLE ... SET LOCATION/SET FILEFORMAT/ADD PARTITION is not supported for Delta tables.
5) For other non-Iceberg tables such as CSV, no specific guarantees are provided today.
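Since CTAS is not supported for Delta tables (limitation 1), one possible workaround, sketched below and not
guaranteed by the client, is to create the Delta table explicitly first and then populate it with a plain INSERT:

```shell
scala> // Hypothetical workaround: create the Delta table first (with an explicit LOCATION), then insert.
scala> spark.sql("CREATE TABLE polaris.quickstart_db.delta_copy (id INT, data STRING) USING delta LOCATION 'file:///tmp/quickstart_db/delta_copy'")
scala> spark.sql("INSERT INTO polaris.quickstart_db.delta_copy SELECT * FROM polaris.quickstart_db.iceberg_tbl")
```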
78 changes: 78 additions & 0 deletions plugins/spark/v3.5/getting-started/README.md
@@ -0,0 +1,78 @@
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

# Getting Started with Apache Spark and Apache Polaris With Delta and Iceberg

This getting started guide provides a `docker-compose` file to set up [Apache Spark](https://spark.apache.org/) with Apache Polaris using
the new Polaris Spark Client.

The Polaris Spark Client enables management of both Delta and Iceberg tables using Apache Polaris.

A Jupyter notebook is started to run PySpark, and the Polaris Python client is also installed so that Polaris APIs
can be called directly from Python.

## Build the Spark Client Jar and Polaris image
If the Spark client jar is not already present locally under plugins/spark/v3.5/build/<scala_version>/libs, build the jar
using
- `./gradlew assemble` -- builds the Polaris project while skipping the tests.

If a Polaris image is not already present locally, build one with the following command:

```shell
./gradlew \
:polaris-quarkus-server:assemble \
:polaris-quarkus-server:quarkusAppPartsBuild --rerun \
-Dquarkus.container-image.build=true
```

## Run the `docker-compose` file

To start the `docker-compose` file, run this command from the repo's root directory:
```shell
docker-compose -f plugins/spark/v3.5/getting-started/docker-compose.yml up
```

This will spin up two container services:
* The `polaris` service for running Apache Polaris using an in-memory metastore
* The `jupyter` service for running a Jupyter notebook with PySpark

NOTE: Starting the containers for the first time may take a couple of minutes, because the Spark 3.5.5 distribution
needs to be downloaded. When working with Delta, the Polaris Spark client requires delta-io >= 3.2.1, which needs at
least Spark 3.5.3, but the current Jupyter Spark image only supports Spark 3.5.0, so Spark 3.5.5 is installed on top
of the base image.
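When you are done, the containers can be stopped and removed with:

```shell
docker-compose -f plugins/spark/v3.5/getting-started/docker-compose.yml down
```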

### Run with AWS access setup
If you want to interact with an S3 bucket, make sure the following environment variables are set correctly in your
local environment before running the `docker-compose` file.
```
AWS_ACCESS_KEY_ID=<your_access_key>
AWS_SECRET_ACCESS_KEY=<your_secret_key>
```

## Access the Jupyter notebook interface
In the Jupyter notebook container log, look for the URL to access the Jupyter notebook. The URL should be in the
format `http://127.0.0.1:8888/lab?token=<token>`.

Open the Jupyter notebook in a browser.
Navigate to [`notebooks/SparkPolaris.ipynb`](http://127.0.0.1:8888/lab/tree/notebooks/SparkPolaris.ipynb) <!-- markdown-link-check-disable-line -->

If the above URL doesn't work, try replacing `127.0.0.1` with `localhost`, for example:
`http://localhost:8888/lab?token=<token>`.
> **Reviewer (Contributor):** Why don't we just replace 127.0.0.1 with localhost then? At least then we can verify it works.
>
> **Author (Contributor):** Do you mean the URL printed in the console, or replacing it in the description from the very beginning? The console output is controlled by the Spark Jupyter notebook image, which I don't think I can change. I can suggest replacing 127.0.0.1 with localhost from the beginning.
>
> **Reviewer (Contributor):** Makes sense. NVM.

## Run the Jupyter notebook
You can now run all cells in the notebook or write your own code!
54 changes: 54 additions & 0 deletions plugins/spark/v3.5/getting-started/docker-compose.yml
@@ -0,0 +1,54 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

services:
  polaris:
    image: apache/polaris:latest
    ports:
      - "8181:8181"
      - "8182"
    environment:
      AWS_REGION: us-west-2
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
      POLARIS_BOOTSTRAP_CREDENTIALS: default-realm,root,s3cr3t
      polaris.realm-context.realms: default-realm
      quarkus.otel.sdk.disabled: "true"
    healthcheck:
      test: ["CMD", "curl", "http://localhost:8182/healthcheck"]
      interval: 10s
      timeout: 10s
      retries: 5
  jupyter:
    build:
      context: ../../../../ # this is needed to get the ./client
      dockerfile: ./plugins/spark/v3.5/getting-started/notebooks/Dockerfile
      network: host
    ports:
      - "8888:8888"
    depends_on:
      polaris:
        condition: service_healthy
    environment:
      AWS_REGION: us-west-2
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
      POLARIS_HOST: polaris
    volumes:
      - ./notebooks:/home/jovyan/notebooks
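Once the stack reports healthy, the endpoint the compose healthcheck polls can also be hit manually. Note that the
management port 8182 is published to an ephemeral host port in this file, so the mapping has to be looked up first
(`<mapped-port>` below is a placeholder):

```shell
# Look up the host port mapped to the Polaris management port 8182,
# then call the healthcheck endpoint the compose file polls.
docker-compose -f plugins/spark/v3.5/getting-started/docker-compose.yml port polaris 8182
curl http://localhost:<mapped-port>/healthcheck
```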
47 changes: 47 additions & 0 deletions plugins/spark/v3.5/getting-started/notebooks/Dockerfile
@@ -0,0 +1,47 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

FROM jupyter/all-spark-notebook:spark-3.5.0

ENV LANGUAGE='en_US:en'

USER root

# Generic table support requires delta 3.2.1
# Install Spark 3.5.5
RUN wget -q https://archive.apache.org/dist/spark/spark-3.5.5/spark-3.5.5-bin-hadoop3.tgz \
    && tar -xzf spark-3.5.5-bin-hadoop3.tgz \
    && mv spark-3.5.5-bin-hadoop3 /opt/spark \
    && rm spark-3.5.5-bin-hadoop3.tgz

# Set environment variables
ENV SPARK_HOME=/opt/spark
ENV PATH=$SPARK_HOME/bin:$PATH

USER jovyan

COPY --chown=jovyan client /home/jovyan/client
COPY --chown=jovyan regtests/requirements.txt /tmp
COPY --chown=jovyan plugins/spark/v3.5/spark/build/2.12/libs /home/jovyan/polaris_libs
RUN pip install -r /tmp/requirements.txt
RUN cd client/python && poetry lock && \
    python3 -m poetry install && \
    pip install -e .

WORKDIR /home/jovyan/
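The image is normally built by `docker-compose`, but it can also be built directly from the repo root; the tag below
is arbitrary, and the build context must be the repo root so that `./client` and the plugin libs directory are visible:

```shell
docker build \
  -f plugins/spark/v3.5/getting-started/notebooks/Dockerfile \
  -t polaris-spark-jupyter:dev \
  .
```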