diff --git a/README.md b/README.md
index 56bca34f6e..2a55ad1f1c 100644
--- a/README.md
+++ b/README.md
@@ -178,9 +178,10 @@
$ curl -i -X PUT -H "Authorization: Bearer $PRINCIPAL_TOKEN" -H 'Accept: application/json' \
  -d '{"name": "polaris", "id": 100, "type": "INTERNAL", "readOnly": false}'
```
-This creates a catalog called `polaris`. From here, you can use Spark to create namespaces, tables, etc.
+This creates a catalog called `polaris`. From here, you can use any Iceberg REST
+compatible client (e.g. Spark or Trino) to create namespaces, tables, etc.
-You must run the following as the first query in your spark-sql shell to actually use Polaris:
+You must run the following as the first query in your SQL shell to actually use Polaris:
```
use polaris;
diff --git a/docs/index.html b/docs/index.html
index ac8987d906..4c356e121f 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -449,7 +449,6 @@
-This guide serves as a introduction to several key entities that can be managed with Polaris, describes how to build and deploy Polaris locally, and finally includes examples of how to use Polaris with Spark and Trino.
+This guide serves as an introduction to several key entities that can be managed
+with Polaris, describes how to build and deploy Polaris locally, and finally
+includes examples of how to use Polaris with Apache Spark and Trino.
+This guide covers building Polaris, deploying it locally or via Docker, and
+interacting with it using the command-line interface and Apache Spark. Before
+proceeding with Polaris, be sure to satisfy the relevant prerequisites listed
+here.
-To get the latest Polaris code, you'll need to clone the repository using git. You can install git using homebrew:
+To get the latest Polaris code, you'll need to clone the repository using +git. You can install git using homebrew:
brew install git
Then, use git to clone the Polaris repo:
@@ -531,15 +547,19 @@
-If you plan to deploy Polaris inside Docker, you'll need to install docker itself. For can be done using homebrew:
-brew install docker
-
-Once installed, make sure Docker is running. This can be done on macOS with:
-open -a Docker
+If you plan to deploy Polaris inside Docker, you'll
+need to install docker itself. For example, this can be done using
+homebrew:
+brew install --cask docker
+Once installed, make sure Docker is running.
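+To confirm the Docker daemon is actually up before continuing, you can run a
+quick check from the shell (a minimal sketch; docker info exits non-zero when
+the daemon is unreachable):
+docker info > /dev/null 2>&1 && echo "Docker is running" || echo "Docker is not running"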
From Source
-If you plan to build Polaris from source yourself, you will need to satisfy a few prerequisites first.
-Polaris is built using gradle and is compatible with Java 21. We recommend the use of jenv to manage multiple Java versions. For example, to install Java 21 via [homebrew](https://brew.sh/) and configure it with jenv:
+If you plan to build Polaris from source yourself, you will need to satisfy a
+few prerequisites first.
+Polaris is built using gradle and is compatible with
+Java 21. We recommend the use of jenv to manage multiple
+Java versions. For example, to install Java 21 via homebrew
+and configure it with jenv:
cd ~/polaris
jenv local 21
brew install openjdk@21 gradle@8 jenv
@@ -547,66 +567,91 @@ From Source
jenv local 21
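With jenv configured, you can confirm that Java 21 is the active version in the repository directory (a quick sanity check; the exact banner varies by distribution):
java -version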
Connecting to Polaris
-Polaris is compatible with any Apache Iceberg client that supports the REST API. Depending on the client you plan to use, refer to the prerequisites below.
+Polaris is compatible with any Apache Iceberg
+client that supports the REST API. Depending on the client you plan to use,
+refer to the prerequisites below.
With Spark
-If you want to connect to Polaris with Apache Spark, you'll need to start by cloning Spark. As above, make sure git is installed first. You can install it with homebrew:
+If you want to connect to Polaris with Apache Spark,
+you'll need to start by cloning Spark. As above,
+make sure git is installed first. You can install it with homebrew:
brew install git
-Then, clone Spark and check out a versioned branch. This guide uses Spark 3.5.0.
+Then, clone Spark and check out a versioned branch. This guide uses Spark 3.5.
cd ~
git clone https://github.com/apache/spark.git
cd ~/spark
-git checkout branch-3.5.0
+git checkout branch-3.5
-Polaris can be deployed via a lightweight docker image or as a standalone process. Before starting, be sure that you've satisfied the relevant prerequisites detailed above.
+">Polaris can be deployed via a lightweight docker image or as a standalone +process. Before starting, be sure that you've satisfied the relevant +prerequisites detailed above.
To start using Polaris in Docker, launch Polaris while Docker is running:
cd ~/polaris
docker compose -f docker-compose.yml up --build
-Once the polaris-polaris container is up, you can continue to Defining a Catalog.
+Once the polaris-polaris container is up, you can continue to
+Defining a Catalog.
Run Polaris locally with:
cd ~/polaris
./gradlew runApp
-You should see output for some time as Polaris builds and starts up. Eventually, you won’t see any more logs and should see messages that resemble the following:
+You should see output for some time as Polaris builds and starts up. Eventually,
+you won’t see any more logs and should see messages that resemble the following:
INFO [...] [main] [] o.e.j.s.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@...
INFO [...] [main] [] o.e.j.server.AbstractConnector: Started application@...
INFO [...] [main] [] o.e.j.server.AbstractConnector: Started admin@...
INFO [...] [main] [] o.eclipse.jetty.server.Server: Started Server@...
At this point, Polaris is running.
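To verify the server is responding, you can hit the admin healthcheck (a sketch; the admin port 8182 is an assumption based on the default Dropwizard admin connector shown in the logs above):
curl http://localhost:8182/healthcheck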
-For this tutorial, we'll launch an instance of Polaris that stores entities only in-memory. This means that any entities that you define will be destroyed when Polaris is shut down. It also means that Polaris will automatically bootstrap itself with root credentials. For more information on how to configure Polaris for production usage, see the docs.
-When Polaris is launched using in-memory mode the root CLIENT_ID and CLIENT_SECRET can be found in stdout on initial startup. For example:
-Bootstrapped with credentials: {"client-id": "XXXX", "client-secret": "YYYY"}
+">For this tutorial, we'll launch an instance of Polaris that stores entities only
+in-memory. This means that any entities that you define will be destroyed when
+Polaris is shut down. It also means that Polaris will automatically bootstrap
+itself with root credentials. For more information on how to configure Polaris
+for production usage, see the docs.
+When Polaris is launched using in-memory mode the root CLIENT_ID and
+CLIENT_SECRET can be found in stdout on initial startup. For example:
+realm: default-realm root principal credentials: XXXX:YYYY
Be sure to take note of these credentials as we'll be using them below.
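Since the CLI commands below pass these credentials as ${CLIENT_ID} and ${CLIENT_SECRET}, it is convenient to export them first (XXXX and YYYY are the placeholder values printed at startup):
export CLIENT_ID=XXXX
export CLIENT_SECRET=YYYY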
-In Polaris, the catalog is the top-level entity that objects like tables and views are organized under. With a Polaris service running, you can create a catalog like so:
+In Polaris, the catalog is the top-level entity that
+objects like tables and views are
+organized under. With a Polaris service running, you can create a catalog like
+so:
cd ~/polaris
./polaris \
@@ -700,11 +771,24 @@ Building Polaris
quickstart_catalog
This will create a new catalog called quickstart_catalog.
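Before moving on, you can confirm the catalog is visible (a sketch assuming the CLI exposes a catalogs list subcommand alongside the commands used in this guide):
./polaris \
   --client-id ${CLIENT_ID} \
   --client-secret ${CLIENT_SECRET} \
   catalogs \
   list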
-The DEFAULT_BASE_LOCATION you provide will be the default location that objects in this catalog should be stored in, and the ROLE_ARN you provide should be a Role ARN with access to read and write data in that location. These credentials will be provided to engines reading data from the catalog once they have authenticated with Polaris using credentials that have access to those resources.
-If you’re using a storage type other than S3, such as Azure, you’ll provide a different type of credential than a Role ARN. For more details on supported storage types, see the docs.
-Additionally, if Polaris is running somewhere other than localhost:8181, you can specify the correct hostname and port by providing --host and --port flags. For the full set of options supported by the CLI, please refer to the docs.
+The DEFAULT_BASE_LOCATION you provide will be the default location that
+objects in this catalog should be stored in, and the ROLE_ARN you provide
+should be a Role ARN
+with access to read and write data in that location. These credentials will be
+provided to engines reading data from the catalog once they have authenticated
+with Polaris using credentials that have access to those resources.
+If you’re using a storage type other than S3, such as Azure, you’ll provide a
+different type of credential than a Role ARN. For more details on supported
+storage types, see the docs.
+Additionally, if Polaris is running somewhere other than localhost:8181, you
+can specify the correct hostname and port by providing --host and --port
+flags. For the full set of options supported by the CLI, please refer to the
+docs.
-With a catalog created, we can create a principal that has access to manage that catalog. For details on how to configure the Polaris CLI, see the section above or refer to the docs.
+With a catalog created, we can create a principal
+that has access to manage that catalog. For details on how to configure the
+Polaris CLI, see the section above or refer to the
+docs.
./polaris \
--client-id ${CLIENT_ID} \
--client-secret ${CLIENT_SECRET} \
@@ -728,11 +812,15 @@ Creating a Principal a
quickstart_catalog_role
Be sure to provide the necessary credentials, hostname, and port as before.
-When the principals create command completes successfully, it will return the credentials for this new principal. Be sure to note these down for later. For example:
+When the principals create command completes successfully, it will return the
+credentials for this new principal. Be sure to note these down for later. For
+example:
./polaris ... principals create example
{"clientId": "XXXX", "clientSecret": "YYYY"}
-Now, we grant the principal the principal role we created, and grant the catalog role the principal role we created. For more information on these entities, please refer to the linked documentation.
+Now, we grant the principal the principal role
+we created, and grant the catalog role to the principal role we created. For
+more information on these entities, please refer to the linked documentation.
./polaris \
--client-id ${CLIENT_ID} \
--client-secret ${CLIENT_SECRET} \
@@ -752,25 +840,37 @@ Creating a Principal a
Now, we’ve linked our principal to the catalog via roles like so:

-In order to give this principal the ability to interact with the catalog, we must assign some privileges. For the time being, we will give this principal the ability to fully manage content in our new catalog. We can do this with the CLI like so:
+In order to give this principal the ability to interact with the catalog, we
+must assign some privileges. For the time being, we
+will give this principal the ability to fully manage content in our new catalog.
+We can do this with the CLI like so:
./polaris \
--client-id ${CLIENT_ID} \
--client-secret ${CLIENT_SECRET} \
privileges \
- --catalog quickstart_catalog \
- --catalog-role quickstart_catalog_role \
catalog \
grant \
+ --catalog quickstart_catalog \
+ --catalog-role quickstart_catalog_role \
CATALOG_MANAGE_CONTENT
-This grants the catalog privileges CATALOG_MANAGE_CONTENT to our catalog role, linking everything together like so:
+This grants the catalog privilege CATALOG_MANAGE_CONTENT to our
+catalog role, linking everything together like so:

CATALOG_MANAGE_CONTENT has create/list/read/write privileges on all entities within the catalog. The same privilege could be granted to a namespace, in which case the principal could create/list/read/write any entity under that namespace.
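For example, a namespace-scoped grant might look like the following sketch (this assumes a privileges namespace grant subcommand parallel to the catalog-level grant above, and the namespace name is illustrative):
./polaris \
   --client-id ${CLIENT_ID} \
   --client-secret ${CLIENT_SECRET} \
   privileges \
   namespace \
   grant \
   --catalog quickstart_catalog \
   --catalog-role quickstart_catalog_role \
   --namespace quickstart_namespace \
   CATALOG_MANAGE_CONTENT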
-At this point, we’ve created a principal and granted it the ability to manage a catalog. We can now use an external engine to assume that principal, access our catalog, and store data in that catalog using Apache Iceberg.
+<h3 id="connecting-with-trino">Connecting with Trino</h3> +<p>To use a Polaris-managed catalog in <a href="https://trino.io/">Trino</a>, you can +configure Trino to use the Iceberg REST API.</p> +<p>You'll need to have Trino installed, so download the <a href="https://trino.io/download">latest version of Trino</a>, +and you can follow <a href="https://trino.io/docs/current/installation.html">the Trino docs</a> +to install it. You'll also need to create a catalog per the instructions above +and generate and export a <code>PRINCIPAL_TOKEN</code> per the +<a href="/README.md#creating-a-catalog-manually">README</a>.</p> +<p>Once Trino is installed and you have your <code>PRINCIPAL_TOKEN</code>, create a catalog +properties file, <code>polaris.properties</code>, in the <code>etc/catalog/</code> directory of your +Trino installation. This is the file where you can configure Trino's Iceberg +connector. Edit it to:</p> +<pre><code>connector<span class="token punctuation">.</span>name<span class="token operator">=</span>iceberg +iceberg<span class="token punctuation">.</span>catalog<span class="token punctuation">.</span>type<span class="token operator">=</span>rest +iceberg<span class="token punctuation">.</span>rest<span class="token operator">-</span>catalog<span class="token punctuation">.</span>security<span class="token operator">=</span>OAUTH2 +iceberg<span class="token punctuation">.</span>rest<span class="token operator">-</span>catalog<span class="token punctuation">.</span>oauth2<span class="token punctuation">.</span>token<span class="token operator">=</span><span class="token punctuation">{</span>the value of your PRINCIPAL_TOKEN<span class="token punctuation">}</span> +iceberg<span class="token punctuation">.</span>rest<span class="token operator">-</span>catalog<span class="token punctuation">.</span>warehouse<span class="token operator">=</span><span class="token punctuation">{</span>your catalog name<span class="token punctuation">}</span> +iceberg<span class="token punctuation">.</span>rest<span class="token operator">-</span>catalog<span class="token punctuation">.</span>uri<span class="token operator">=</span>http<span class="token punctuation">:</span><span class="token operator">/</span><span class="token operator">/</span>localhost<span class="token punctuation">:</span><span class="token number">8181</span><span class="token operator">/</span>api<span class="token operator">/</span>catalog +</code></pre> +<p>Start (or restart) Trino, and <code>SHOW CATALOGS</code> should show the Polaris catalog. +You can then run <code>USE catalogname.schemaname</code> to access, query, or write to +Polaris.</p> +">At this point, we’ve created a principal and granted it the ability to manage a +catalog. We can now use an external engine to assume that principal, access our +catalog, and store data in that catalog using Apache Iceberg.
-To use a Polaris-managed catalog in Apache Spark, we can configure Spark to use the Iceberg catalog REST API.
-This guide uses Apache Spark 3.5, but be sure to find the appropriate iceberg-spark package for your Spark version. With a local Spark clone, we on the branch-3.5 branch we can run the following:
-Note: the credentials provided here are those for our principal, not the root credentials.
+To use a Polaris-managed catalog in Apache Spark,
+we can configure Spark to use the Iceberg catalog REST API.
+This guide uses Apache Spark 3.5,
+but be sure to find the appropriate iceberg-spark package for your Spark version.
+From a local Spark clone on the branch-3.5 branch we can run the following:
+Note: the credentials provided here are those for our principal, not the root
+credentials.
bin/spark-shell \
--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.hadoop:hadoop-aws:3.4.0 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
@@ -841,10 +974,15 @@ Connecting with Spark
--conf spark.sql.catalog.quickstart_catalog.scope='PRINCIPAL_ROLE:ALL' \
--conf spark.sql.catalog.quickstart_catalog.token-refresh-enabled=true
-Replace XXXX and YYYY with the client ID and client secret generated when you created the quickstart_user principal.
-Similar to the CLI commands above, this configures Spark to use the Polaris running at localhost:8181 as a catalog. If your Polaris server is running elsewhere, but sure to update the configuration appropriately.
-Finally, note that we include the hadoop-aws package here. If your table is using a different filesystem, be sure to include the appropriate dependency.
-Once the Spark session starts, we can create a namespace and table within the catalog:
+Replace XXXX and YYYY with the client ID and client secret generated when
+you created the quickstart_user principal.
+Similar to the CLI commands above, this configures Spark to use the Polaris server
+running at localhost:8181 as a catalog. If your Polaris server is running
+elsewhere, be sure to update the configuration appropriately.
+Finally, note that we include the hadoop-aws package here. If your table is
+using a different filesystem, be sure to include the appropriate dependency.
+Once the Spark session starts, we can create a namespace and table within the
+catalog:
spark.sql("USE quickstart_catalog")
spark.sql("CREATE NAMESPACE IF NOT EXISTS quickstart_namespace")
spark.sql("CREATE NAMESPACE IF NOT EXISTS quickstart_namespace.schema")
@@ -871,10 +1009,10 @@ Connecting with Spark
--client-id ${CLIENT_ID} \
--client-secret ${CLIENT_SECRET} \
privileges \
- --catalog quickstart_catalog \
- --catalog-role quickstart_catalog_role \
catalog \
revoke \
+ --catalog quickstart_catalog \
+ --catalog-role quickstart_catalog_role \
CATALOG_MANAGE_CONTENT
Spark will lose access to the table:
@@ -882,6 +1020,28 @@
+To use a Polaris-managed catalog in Trino, you can
+configure Trino to use the Iceberg REST API.
+You'll need Trino installed: download the latest version of Trino
+and follow the Trino docs
+to install it. You'll also need to create a catalog per the instructions above
+and generate and export a PRINCIPAL_TOKEN per the
+README.
+Once Trino is installed and you have your PRINCIPAL_TOKEN, create a catalog
+properties file, polaris.properties, in the etc/catalog/ directory of your
+Trino installation. This is the file where you can configure Trino's Iceberg
+connector. Edit it to:
+connector.name=iceberg
+iceberg.catalog.type=rest
+iceberg.rest-catalog.security=OAUTH2
+iceberg.rest-catalog.oauth2.token={the value of your PRINCIPAL_TOKEN}
+iceberg.rest-catalog.warehouse={your catalog name}
+iceberg.rest-catalog.uri=http://localhost:8181/api/catalog
+
+Start (or restart) Trino, and SHOW CATALOGS should show the Polaris catalog.
+You can then run USE catalogname.schemaname to access, query, or write to
+Polaris.
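+For example, a short Trino session might look like the following sketch (the
+schema and table names are illustrative; the catalog name polaris comes from
+the properties file name above):
+SHOW CATALOGS;
+CREATE SCHEMA polaris.quickstart_schema;
+CREATE TABLE polaris.quickstart_schema.quickstart_table (id BIGINT, data VARCHAR);
+SELECT * FROM polaris.quickstart_schema.quickstart_table;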
For more information, see Access control.
This page documents various entities that can be managed in Polaris.
@@ -1343,10 +1499,10 @@
All catalogs in Polaris are associated with a storage type. Valid Storage Types are S3, Azure, and GCS. The FILE type is additionally available for testing. Each of these types relates to a different storage provider where data within the catalog may reside. Depending on the storage type, various other configurations may be set for a catalog including credentials to be used when accessing data inside the catalog.
For details on how to use Storage Types in the REST API, see the API docs.
A namespace is a logical entity that resides within a catalog and can contain other entities such as tables or views. Some other systems may refer to namespaces as schemas or databases.
-In Polaris, namespaces can be nested up to 16 levels. For example, a.b.c.d.e.f.g is a valid namespace. b is said to reside within a, and so on.
+In Polaris, namespaces can be nested. For example, a.b.c.d.e.f.g is a valid namespace. b is said to reside within a, and so on.
For information on managing namespaces with the REST API or for more information on what data can be associated with a namespace, see the API docs.
For example, a catalog client may be configured with client credentials from the OAuth2 Authorization flow. This client would exchange its client ID and secret for an access token using the client credentials request with this endpoint (1). Subsequent requests would then use that access token.
Some clients may also handle sessions that have additional user context. These clients would use the token exchange flow to exchange a user token (the "subject" token) from the session for a more specific access token for that user, using the catalog's access token as the "actor" token (2). The user ID token is the "subject" token and can be any token type allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. This request should use the catalog's bearer token in the "Authorization" header.
Clients may also use the token exchange flow to refresh a token that is about to expire by sending a token exchange request (3). The request's "subject" token should be the expiring token. This request should use the subject token in the "Authorization" header.
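As a sketch of flow (1), a client credentials request might look like this (the token endpoint path follows the Iceberg REST spec, and the host, port, and scope value are assumptions carried over from the examples above):
curl -X POST http://localhost:8181/api/catalog/v1/oauth/tokens \
  -d "grant_type=client_credentials" \
  -d "client_id=XXXX" \
  -d "client_secret=YYYY" \
  -d "scope=PRINCIPAL_ROLE:ALL"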
Authorizations: Apache_Iceberg_REST_Catalog_API_BearerAuth
Request Body schema: application/x-www-form-urlencoded

| Field | Type | Description |
|---|---|---|
| grant_type (required) | string | Value: "client_credentials" |
| scope | string | |
| client_id (required) | string | Client ID. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. |

@@ -3470,7 +3626,7 @@ Catalog privileges
Generic base server URL, with all parts configurable: {scheme}://{host}:{port}/{basePath}/v1/{prefix}/views/rename

Request samples
Content type: application/json

Response samples
Content type: application/json