diff --git a/.dockerignore b/.dockerignore new file mode 100644 index 0000000000..d56f0ecb92 --- /dev/null +++ b/.dockerignore @@ -0,0 +1,5 @@ +polaris-service/logs +polaris-service/build +polaris-core/build +build +.idea diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS new file mode 100644 index 0000000000..d176d662e5 --- /dev/null +++ b/.github/CODEOWNERS @@ -0,0 +1,17 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +@polaris-catalog/polaris \ No newline at end of file diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 0000000000..01e549df6e --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,35 @@ +--- +name: Bug report +about: Create a report to help us improve +title: "[BUG]" +labels: bug +assignees: '' + +--- + +**Describe the bug** +A clear and concise description of what the bug is. + +**Is this a possible security vulnerability?** +- [ ] yes -- if yes, stop here and contact security@polaris.io instead +- [ ] no + +**To Reproduce** +Steps to reproduce the behavior: +1. Go to '...' +2. Click on '....' +3. Scroll down to '....' +4. See error + +**Expected behavior** +A clear and concise description of what you expected to happen. + +**Screenshots** +If applicable, add screenshots to help explain your problem. + +**System info (please complete the following information):** + - OS: [e.g. 
Windows] + - Polaris Catalog Version [e.g. 0.3.0] + +**Additional context** +Add any other context about the problem here. diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md new file mode 100644 index 0000000000..813747531d --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -0,0 +1,20 @@ +--- +name: Feature request +about: Suggest an idea for this project +title: "[FEATURE REQUEST]" +labels: enhancement +assignees: '' + +--- + +**Is your feature request related to a problem? Please describe.** +A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] + +**Describe the solution you'd like** +A clear and concise description of what you want to happen. + +**Describe alternatives you've considered** +A clear and concise description of any alternative solutions or features you've considered. + +**Additional context** +Add any other context or screenshots about the feature request here. diff --git a/.github/dependabot.yml b/.github/dependabot.yml new file mode 100644 index 0000000000..94a8501fd0 --- /dev/null +++ b/.github/dependabot.yml @@ -0,0 +1,23 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +--- +version: 2 +updates: + - package-ecosystem: "github-actions" + directory: "/" + schedule: + interval: "weekly" diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md new file mode 100644 index 0000000000..5759e92308 --- /dev/null +++ b/.github/pull_request_template.md @@ -0,0 +1,41 @@ +# Description + +Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change. + +Fixes # (issue) + +## Type of change + +Please delete options that are not relevant. + +- [ ] Bug fix (non-breaking change which fixes an issue) +- [ ] New feature (non-breaking change which adds functionality) +- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) +- [ ] This change requires a documentation update + +# How Has This Been Tested? + +Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration + +- [ ] Test A +- [ ] Test B + +**Test Configuration**: +* Firmware version: +* Hardware: +* Toolchain: +* SDK: + +# Checklist: + +Please delete options that are not relevant. + +- [ ] I have performed a self-review of my code +- [ ] I have commented my code, particularly in hard-to-understand areas +- [ ] I have made corresponding changes to the documentation +- [ ] My changes generate no new warnings +- [ ] I have added tests that prove my fix is effective or that my feature works +- [ ] New and existing unit tests pass locally with my changes +- [ ] Any dependent changes have been merged and published in downstream modules +- [ ] If adding new functionality, I have discussed my implementation with the community using the linked GitHub issue +- [ ] I have signed and submitted the [ICLA](../ICLA.md) and if needed, the [CCLA](../CCLA.md). See [Contributing](../CONTRIBUTING.md) for details. 
diff --git a/.github/workflows/gradle.yml b/.github/workflows/gradle.yml new file mode 100644 index 0000000000..6fc1d4eb19 --- /dev/null +++ b/.github/workflows/gradle.yml @@ -0,0 +1,76 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This workflow uses actions that are not certified by GitHub. +# They are provided by a third party and are governed by +# separate terms of service, privacy policy, and support +# documentation. +# This workflow will build a Java project with Gradle and cache/restore any dependencies to improve the workflow execution time. +# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-java-with-gradle + +name: Java CI with Gradle + +on: + push: + branches: [ "main" ] + pull_request: + branches: [ "main" ] + +jobs: + build: + + runs-on: ubuntu-latest + permissions: + contents: read + + steps: + - uses: actions/checkout@v4 + - name: Set up JDK 21 + uses: actions/setup-java@v4 + with: + java-version: '21' + distribution: 'temurin' + + # Configure Gradle for optimal use in GitHub Actions, including caching of downloaded dependencies.
+ # See: https://github.com/gradle/actions/blob/main/setup-gradle/README.md + - name: Setup Gradle + uses: gradle/actions/setup-gradle@d9c87d481d55275bb5441eef3fe0e46805f9ef70 # v3.5.0 + + - name: Check formatting + run: ./gradlew check + + - name: Build with Gradle Wrapper + run: ./gradlew test + + - name: Archive test results + uses: actions/upload-artifact@v4 + if: always() + with: + name: upload-test-artifacts + path: | + polaris-core/build/test-results/test + polaris-service/build/test-results/test + + # NOTE: The Gradle Wrapper is the default and recommended way to run Gradle (https://docs.gradle.org/current/userguide/gradle_wrapper.html). + # If your project does not have the Gradle Wrapper configured, you can use the following configuration to run Gradle with a specified version. + # + # - name: Setup Gradle + # uses: gradle/actions/setup-gradle@d9c87d481d55275bb5441eef3fe0e46805f9ef70 # v3.5.0 + # with: + # gradle-version: '8.6' + # + # - name: Build with Gradle 8.6 + # run: gradle build diff --git a/.github/workflows/regtest.yml b/.github/workflows/regtest.yml new file mode 100644 index 0000000000..b81f80c62e --- /dev/null +++ b/.github/workflows/regtest.yml @@ -0,0 +1,39 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +name: Regression Tests +on: + push: + branches: [ "main" ] + pull_request: + branches: [ "main" ] + +jobs: + regtest: + + runs-on: ubuntu-latest + permissions: + contents: read + + steps: + - uses: actions/checkout@v4 + - name: fix permissions + run: mkdir -p regtests/output && chmod 777 regtests/output && chmod 777 regtests/t_*/ref/* + - name: Regression Test + env: + AWS_ACCESS_KEY_ID: ${{secrets.AWS_ACCESS_KEY_ID}} + AWS_SECRET_ACCESS_KEY: ${{secrets.AWS_SECRET_ACCESS_KEY}} + run: docker compose up --build --exit-code-from regtest diff --git a/.github/workflows/semgrep.yml b/.github/workflows/semgrep.yml new file mode 100644 index 0000000000..7538486564 --- /dev/null +++ b/.github/workflows/semgrep.yml @@ -0,0 +1,28 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +--- +name: Run semgrep checks +on: + pull_request: + branches: [main] +permissions: + contents: read +jobs: + run-semgrep-reusable-workflow: + uses: snowflakedb/reusable-workflows/.github/workflows/semgrep-v2.yml@main + secrets: + token: ${{ secrets.SEMGREP_APP_TOKEN }} diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml new file mode 100644 index 0000000000..6444414313 --- /dev/null +++ b/.github/workflows/stale.yml @@ -0,0 +1,34 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +--- +jobs: + stale: + runs-on: ubuntu-22.04 + steps: + - uses: actions/stale@28ca1036281a5e5922ead5184a1bbf96e5fc984e + with: + days-before-close: 5 + days-before-stale: 30 + stale-issue-message: "This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days." + stale-pr-message: "This PR is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days." +name: "Close stale issues and PRs" +on: + schedule: + - cron: "30 1 * * *" +permissions: + issues: read + pull-requests: write diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000..0e6e0be8c9 --- /dev/null +++ b/.gitignore @@ -0,0 +1,70 @@ +regtests/derby.log +regtests/metastore_db +regtests/output/ + +# Notebooks +notebooks/.ipynb_checkpoints/ + +# Metastore +metastore_db/ + +.gradle +**/build/ +!src/**/build/ + +# Ignore Gradle GUI config +gradle-app.setting + +# Ignore Gradle wrapper jar file +gradle/wrapper/gradle-wrapper.jar +gradle/wrapper/gradle-wrapper-*.sha256 + +# Ignore Gradle wrapper jar file +gradle/wrapper/gradle-wrapper.jar +gradle/wrapper/gradle-wrapper-*.sha256 + +# Avoid ignoring Gradle wrapper properties +!gradle-wrapper.properties + +# Cache of project +.gradletasknamecache + +# Eclipse Gradle plugin generated files +# Eclipse Core +.project +# JDT-specific (Eclipse Java Development Tools) +.classpath +.env +.java-version + +# IntelliJ +/.idea +*.iml +*.ipr +*.iws + +# Gradle +/.gradle +**/build/ +!src/**/build/ + +# jenv
+.java-version + +# Log files +*.log +logs/ + +# binary files +*.class +*.jar +*.zip +*.tar.gz +*.tgz + +# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml +hs_err_pid* + +# macOS +*.DS_Store +.DS_Store diff --git a/.idea/google-java-format.xml b/.idea/google-java-format.xml new file mode 100644 index 0000000000..8b57f4527a --- /dev/null +++ b/.idea/google-java-format.xml @@ -0,0 +1,6 @@ + + + + + \ No newline at end of file diff --git a/.openapi-generator-ignore b/.openapi-generator-ignore new file mode 100644 index 0000000000..e866efcbdb --- /dev/null +++ b/.openapi-generator-ignore @@ -0,0 +1,7 @@ +src/main/webapp/** +build.gradle +pom.xml +README.md +settings.gradle +.openapi-generator-ignore +src/main/java/org/** \ No newline at end of file diff --git a/CCLA.md b/CCLA.md new file mode 100644 index 0000000000..edcc41ff55 --- /dev/null +++ b/CCLA.md @@ -0,0 +1,34 @@ +# Snowflake Corporate Contributor License Agreement + +This version of the contributor license agreement allows an entity (the “Corporation”) to submit Contributions (as defined below) to Snowflake, to authorize Contributions submitted by its designated employees to Snowflake, and to grant copyright and patent licenses thereto. + +1. DEFINITIONS. "You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with Snowflake. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 
+ + "Contribution" shall mean the code, documentation, or other original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to Snowflake for inclusion in, or documentation of, any of the products owned or managed by Snowflake (the “Work”). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to Snowflake or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Snowflake for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution." + +2. GRANT OF COPYRIGHT LICENSE. Subject to the terms and conditions of this Agreement, You hereby grant to Snowflake and to recipients of software distributed by Snowflake a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works. + +3. GRANT OF PATENT LICENSE. Subject to the terms and conditions of this Agreement, You hereby grant to Snowflake and to recipients of software distributed by Snowflake a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. 
If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed. + +4. You represent that You are legally entitled to grant the above licenses. You represent further that each employee of the Corporation designated by You is authorized to submit Contributions on behalf of the Corporation. + +5. You represent that each of Your Contributions is Your original creation (see Section 7 for submissions on behalf of others). + +6. NO SUPPORT. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. + +7. Should You wish to submit work that is not Your original creation, You may submit it to Snowflake separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]". + +8. NOTICE TO SNOWFLAKE. It is your responsibility to notify Snowflake when any change is required to the designated employees authorized to submit Contributions on behalf of the Corporation, or to the Corporation’s point of contact with Snowflake. 
+ + +Name: _________________________________________ + +Signature: _________________________________________ + +Title: _________________________________________ + +Corporation: _________________________________________ + +Date: _________________________________________ + +Notices: _________________________________________ diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 0000000000..866e1c0ffe --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,102 @@ + + +# Contributor Code of Conduct + +This is a copy of the [Contributor Covenant v2.1](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html). No changes have been made. + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 
+ +## Our Standards + +Examples of behavior that contributes to a positive environment for our community include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience +* Focusing on what is best not just for us as individuals, but for the overall community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. 
+ +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at . +All complaints will be reviewed and investigated promptly and fairly. +All community leaders are obligated to respect the privacy and security of the reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of actions. + +**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. + +### 4. 
Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within the community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. + +Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations]. + +[homepage]: https://www.contributor-covenant.org +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html +[Mozilla CoC]: https://github.com/mozilla/diversity +[FAQ]: https://www.contributor-covenant.org/faq +[translations]: https://www.contributor-covenant.org/translations diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000000..6ff623b926 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,102 @@ + + +# Contributing to Polaris Catalog + +Thank you for considering contributing to the Polaris Catalog. Any contribution (code, test cases, documentation, use cases, ...) is valuable! + +This documentation will help you get started. + +## Contribute bug reports and feature requests + +You can report an issue in the Polaris Catalog [issue tracker](https://github.com/polaris-catalog/polaris/issues). + +### How to report a bug +Note: If you find a **security vulnerability**, do _NOT_ open an issue. Please email security [at] polaris.io instead to get advice from maintainers. 
+ +When filing an [issue](https://github.com/polaris-catalog/polaris/issues), make sure to answer these five questions: +1. What version of Polaris Catalog are you using? +2. What operating system and processor architecture are you using? +3. What did you do? +4. What did you expect to see? +5. What did you see instead? + +Troubleshooting questions should be posted in [GitHub Discussions](https://github.com/polaris-catalog/polaris/discussions/categories/q-a) instead of the issue tracker. Maintainers and community members will answer your questions there or ask you to file an issue if you’ve encountered a bug. + +### How to suggest a feature or enhancement + +Polaris Catalog aims to provide the Iceberg community with new levels of choice, flexibility and control over their data, with full enterprise security and Apache Iceberg interoperability with Amazon Web Services (AWS), Confluent, Dremio, Google Cloud, Microsoft Azure, Salesforce and more. + +If you're looking for a feature that doesn't exist in Polaris Catalog, you're probably not alone. Others likely have similar needs. Please open a [GitHub Issue](https://github.com/polaris-catalog/polaris/issues) describing the feature you'd like to see, why you need it, and how it should work. + +When creating your feature request, describe your requirements first; please try not to jump straight to a specific solution. + + +## Before you begin contributing code + +### Review the License + +When contributing to this project, you agree that your contributions are licensed under the Apache License, Version 2.0. Please ensure you have permission to do this if required by your employer. + +### Sign the CLA +When you submit your first PR to Polaris Catalog, you will need to sign an [Individual Contributor License Agreement (ICLA)](./ICLA.md). If your employer agreement requires it, you may also need someone from your company to sign the [Corporate Contributor License Agreement (CCLA)](./CCLA.md).
Make sure they have the legal authority to enter into contracts on behalf of the company. Please send your ICLA and CCLA to community [at] polaris.io in order for your pull request to be considered. + +You can download a copy of the ICLA [here](./ICLA.md) and the CCLA [here](./CCLA.md). + +### Review open issues and discuss your approach + +If you want to dive into development yourself, you can check out existing open issues or requests for features that need to be implemented. Take ownership of an issue and try to fix it. + +Before starting on a large code change, please describe the concept/design of what you plan to do on the issue/feature request you intend to address. If you are unsure whether the design is good or will be accepted, discuss it with the community in the respective issue first, before you do too much active development. + +### Provide your changes in a Pull Request + +The best way to provide changes is to fork the Polaris repository on GitHub and open a Pull Request with your changes. To make it easy to apply your changes, please use the following conventions: + +* Every Pull Request should have a matching GitHub Issue. +* Create a branch that will house your change: + +```bash +git clone https://github.com/polaris-catalog/polaris +cd polaris +git fetch --all +git checkout -b my-branch origin/main +``` + + Don't forget to periodically rebase your branch: + +```bash +git pull --rebase +git push GitHubUser my-branch --force +``` + + Ensure the code is properly formatted: + +```bash +./gradlew format +``` + +* Pull Requests should be based on the `main` branch. +* Test that your changes work by adapting or adding tests. Verify the build passes (see `README.md` for build instructions). +* If your Pull Request has conflicts with the `main` branch, please rebase and fix the conflicts. + +## Java version requirements + +The Polaris build currently requires Java 21 or later.
There are a few tools that help you run the right Java version: + +* [SDKMAN!](https://sdkman.io/): follow the installation instructions, then run `sdk list java` to see the available distributions and versions, then run `sdk install java ` using the identifier for the distribution and version (>= 21) of your choice. +* [jenv](https://www.jenv.be/): if you're on a Mac, you can use jenv to set the appropriate SDK. + diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 0000000000..c83cdb0ca8 --- /dev/null +++ b/Dockerfile @@ -0,0 +1,37 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Base Image +FROM gradle:8.6-jdk21 as build + +# Copy the REST catalog into the container +COPY . 
/app + +# Set the working directory in the container, nuke any existing builds +WORKDIR /app +RUN rm -rf build + +# Build the rest catalog +RUN gradle --no-daemon --info shadowJar + +FROM openjdk:21 +WORKDIR /app +COPY --from=build /app/polaris-service/build/libs/polaris-service-1.0.0-all.jar /app +COPY --from=build /app/polaris-server.yml /app + +EXPOSE 8181 + +# Run the resulting java binary +CMD ["java", "-jar", "/app/polaris-service-1.0.0-all.jar", "server", "polaris-server.yml"] diff --git a/ICLA.md b/ICLA.md new file mode 100644 index 0000000000..711626d78e --- /dev/null +++ b/ICLA.md @@ -0,0 +1,31 @@ +# Snowflake Individual Contributor License Agreement + +By signing this contributor license agreement, You understand and agree that Your Contribution (as defined below) is public and that a record of the Contribution, including Your full name and email address among other information, will be maintained indefinitely and may be redistributed consistent with this project, compliance with the open source license(s) involved, and maintenance of authorship attribution. + +1. DEFINITIONS. "You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with Snowflake. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 
+ + "Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to Snowflake for inclusion in, or documentation of, any of the products owned or managed by Snowflake (the “Work”). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to Snowflake or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Snowflake for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution." + +2. GRANT OF COPYRIGHT LICENSE. Subject to the terms and conditions of this Agreement, You hereby grant to Snowflake and to recipients of software distributed by Snowflake a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works. + +3. GRANT OF PATENT LICENSE. Subject to the terms and conditions of this Agreement, You hereby grant to Snowflake and to recipients of software distributed by Snowflake a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. 
If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed. + +4. You represent that You are legally entitled to grant the above licenses. If Your employer(s) has rights to intellectual property that You create that includes Your Contributions, You represent that You have received permission to make Contributions on behalf of Your employer, that Your employer has waived such rights for Your Contributions to Snowflake, or that your employer has executed a separate corporate CLA with Snowflake. + +5. You represent that each of Your Contributions is Your original creation (see Section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which are associated with any part of Your Contributions. + +6. NO SUPPORT. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. + +7. 
Should You wish to submit work that is not Your original creation, You may submit it to Snowflake separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]". + +8. NOTICE TO SNOWFLAKE. You agree to notify Snowflake of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect. + + + +Name (“You”): _________________________________________ + +Signature: _________________________________________ + +Date: _________________________________________ + +Email: _________________________________________ diff --git a/LICENSE b/LICENSE index 261eeb9e9f..e1816c9106 100644 --- a/LICENSE +++ b/LICENSE @@ -199,3 +199,5 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. + + Apache Iceberg diff --git a/NOTICE b/NOTICE new file mode 100644 index 0000000000..afa62e9a3d --- /dev/null +++ b/NOTICE @@ -0,0 +1,8 @@ +Polaris +Copyright 2024 Snowflake Computing Inc. + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + +Apache Iceberg +Copyright 2017-2022 The Apache Software Foundation diff --git a/README.md b/README.md index 4a11919b88..b68f86cbc1 100644 --- a/README.md +++ b/README.md @@ -1,13 +1,181 @@ + + # Polaris Catalog Polaris Catalog is an open source catalog for Apache Iceberg. Polaris Catalog implements Iceberg’s open REST API for multi-engine interoperability with Apache Doris, Apache Flink, Apache Spark, PyIceberg, StarRocks and Trino. 
-1200x500_DCS24_PR-Banner-Polaris Catalog-02@2x +![Polaris Catalog Header](docs/img/logos/Polaris-Catalog-BLOG-symmetrical-subhead.png) ## Status -Polaris Catalog will be open sourced under an Apache 2.0 license in the next 90 days. In the meantime: +Polaris Catalog is open source under an Apache 2.0 license. -- 👀 Watch this repo if you would like to be notified when the Polaris code goes live. - ⭐ Star this repo if you’d like to bookmark and come back to it! - 📖 Read the announcement blog post for more details! + +## API Docs + +API docs are hosted via GitHub Pages at https://polaris-catalog.github.io/polaris. All updates to the main branch +update the hosted docs. + +The Polaris management API docs are found [here](https://polaris-catalog.github.io/polaris/index.html#tag/polaris-management-service_other). + +The open source Iceberg REST API docs are found [here](https://polaris-catalog.github.io/polaris/index.html#tag/Configuration-API). + +Docs are generated using [Redocly](https://redocly.com/docs/cli/installation). They can be regenerated by running the following commands +from the project root directory: + +```bash +docker run -p 8080:80 -v ${PWD}:/spec redocly/cli join spec/docs.yaml spec/polaris-management-service.yml spec/rest-catalog-open-api.yaml -o spec/index.yaml --prefix-components-with-info-prop title +docker run -p 8080:80 -v ${PWD}:/spec redocly/cli build-docs spec/index.yaml --output=docs/index.html --config=spec/redocly.yaml +``` + +# Setup + +## Requirements / Setup + +- Java JDK >= 21, see [CONTRIBUTING.md](./CONTRIBUTING.md#java-version-requirements). +- Gradle - This is included in the project and can be run using `./gradlew` in the project root. +- Docker - If you want to run the project in a containerized environment.
+ +Command-Line getting started +------------------- +Polaris is a multi-module project with three modules: + +- `polaris-core` - The main Polaris entity definitions and core business logic +- `polaris-server` - The Polaris REST API server +- `polaris-eclipselink` - The EclipseLink implementation of the MetaStoreManager interface + +Build the binary (the first build may require installing a new JDK version). This build runs the integration tests by default. + +``` +./gradlew build +``` + +Run the Polaris server locally on localhost:8181: + +``` +./gradlew runApp +``` + +While the Polaris server is running, run the regression (end-to-end) tests in another terminal: + +``` +./regtests/run.sh +``` + +Docker Instructions +------------------- + +Build the image: + +``` +docker build -t localhost:5001/polaris:latest . +``` + +Run it in standalone mode. This runs a single container that binds the container's port `8181` to localhost's `8181`: + +``` +docker run -p 8181:8181 localhost:5001/polaris:latest +``` + +# Running the tests + +## Unit and Integration tests + +Unit and integration tests are run using Gradle. To run all tests, use the following command: + +```bash +./gradlew test +``` + +## Regression tests + +Regression tests, or functional tests, are stored in the `regtests` directory. They can be executed in a Docker +environment by using the `docker-compose.yml` file in the project root. + +```bash +docker compose up --build --exit-code-from regtest +``` + +They can also be executed outside of Docker by following the setup instructions in +the [README](regtests/README.md). + +Kubernetes Instructions +----------------------- + +You can run Polaris as a mini-deployment locally.
This will create two pods that bind themselves to port `8181`: + +``` +./setup.sh +``` + +You can check the pod and deployment status like so: + +``` +kubectl get pods +kubectl get deployment +``` + +If things aren't working as expected, you can troubleshoot like so: + +``` +kubectl describe deployment polaris-deployment +``` + +## Creating a Catalog manually + +Before connecting with Spark, you'll need to create a catalog. To create a catalog, generate a token for the root +principal: + +```bash +curl -i -X POST \ + http://localhost:8181/api/catalog/v1/oauth/tokens \ + -d 'grant_type=client_credentials&client_id=<client_id>&client_secret=<client_secret>&scope=PRINCIPAL_ROLE:ALL' +``` + +The response output will contain an access token: + +```json +{ + "access_token": "ver:1-hint:1036-ETMsDgAAAY/GPANareallyverylongstringthatissecret", + "token_type": "bearer", + "expires_in": 3600 +} +``` + +Set the contents of the `access_token` field as the `PRINCIPAL_TOKEN` variable. Then use curl to invoke the +createCatalog API: + +```bash +$ export PRINCIPAL_TOKEN=ver:1-hint:1036-ETMsDgAAAY/GPANareallyverylongstringthatissecret + +$ curl -i -X PUT -H "Authorization: Bearer $PRINCIPAL_TOKEN" -H 'Accept: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/v1/catalogs \ + -d '{"name": "snowflake", "id": 100, "type": "INTERNAL", "readOnly": false}' +``` + +This creates a catalog called `snowflake`. From here, you can use Spark to create namespaces, tables, etc. + +You must run the following as the first query in your spark-sql shell to actually use Polaris: + +``` +use polaris; +``` diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 0000000000..c94c618e1b --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,6 @@ +# Security Policy +If you discover a security issue, please bring it to our attention right away!
+ +## Reporting a Vulnerability + +Please DO NOT file a public issue to report a security vulnerability; instead, send your report privately to security@polaris.io. This will help ensure that any vulnerabilities that are found can be disclosed responsibly to any affected parties. diff --git a/build.gradle b/build.gradle new file mode 100644 index 0000000000..38c650bdd2 --- /dev/null +++ b/build.gradle @@ -0,0 +1,166 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +buildscript { + repositories { + maven { + url "https://plugins.gradle.org/m2/" + } + } + dependencies { + classpath "com.diffplug.spotless:spotless-plugin-gradle:6.25.0" + } +} + +plugins { + id "idea" + id "eclipse" + id "org.jetbrains.gradle.plugin.idea-ext" version "1.1.8" +} + +allprojects { + repositories { + mavenLocal() + mavenCentral() + } + idea { + module { + downloadJavadoc = true + downloadSources = true + } + } +} + +subprojects { + apply plugin: "jacoco" + apply plugin: "java" + apply plugin: "com.diffplug.spotless" + apply plugin: "jacoco-report-aggregation" + apply plugin: "groovy" + ext { + jacksonVersion = "2.17.2" + icebergVersion = "1.5.0" + hadoopVersion = "3.3.6" + dropwizardVersion = "4.0.7" + assertJVersion = "3.25.3" + } + + tasks.withType(JavaCompile) { + options.compilerArgs << "-Xlint:unchecked" + options.compilerArgs << "-Xlint:deprecation" + } + + project(":polaris-service") { + apply plugin: "application" + } + + project(":polaris-core") { + apply plugin: "java-library" + } + + dependencies { + implementation(platform("com.fasterxml.jackson:jackson-bom:${jacksonVersion}")) + implementation("com.fasterxml.jackson.core:jackson-annotations") + implementation("com.google.guava:guava:33.0.0-jre") + implementation("org.jetbrains:annotations:24.0.0") + implementation("org.slf4j:slf4j-api:2.0.12") + compileOnly("com.github.spotbugs:spotbugs-annotations:4.8.5") + + testImplementation(platform("org.junit:junit-bom:5.10.3")) + testImplementation("org.junit.jupiter:junit-jupiter") + testImplementation("org.assertj:assertj-core:3.26.3") + testImplementation("org.mockito:mockito-core:5.11.0") + + testRuntimeOnly("org.junit.platform:junit-platform-launcher") + } + + task format { + dependsOn "spotlessApply" + } + + test { + useJUnitPlatform() + } + + spotless { + def disallowWildcardImports = { + String text = it + def regex = ~/import .*\.\*;/ + def m = regex.matcher(text) + if (m.find()) { + throw new AssertionError("Wildcard imports 
disallowed - ${m.findAll()}") + } + } + format("xml") { + target("src/**/*.xml", "src/**/*.xsd") + targetExclude("codestyle/copyright-header.xml") + eclipseWtp(com.diffplug.spotless.extra.wtp.EclipseWtpFormatterStep.XML) + .configFile(rootProject.file("codestyle/org.eclipse.wst.xml.core.prefs")) + // getting the license-header delimiter right is a bit tricky. + //licenseHeaderFile(rootProject.file("codestyle/copyright-header.xml"), '<^[!?].*$') + } + if (project.plugins.hasPlugin("java-base")) { + java { + target "src/*/java/**/*.java" + targetExclude "build/**" + licenseHeaderFile(rootProject.file("codestyle/copyright-header-java.txt")) + googleJavaFormat() + endWithNewline() + custom "disallowWildcardImports", disallowWildcardImports + } + } + } +} + +def projectName = rootProject.file("ide-name.txt").text.trim() +def ideName = "$projectName ${rootProject.version.toString().replace("^([0-9.]+).*", "\1")}" + +if (System.getProperty("idea.sync.active").asBoolean()) { + // There's no proper way to set the name of the IDEA project (when "just importing" or + // syncing the Gradle project) + def ideaDir = rootProject.layout.projectDirectory.dir(".idea") + ideaDir.asFile.mkdirs() + ideaDir.file(".name").asFile.text = ideName + def icon = ideaDir.file("icon.png").asFile + if (!icon.exists()) { + def img = new URI("https://avatars.githubusercontent.com/u/173406119?s=200&v=4").toURL().openConnection().getInputStream().bytes + ideaDir.file("icon.png").asFile.newOutputStream().with { out -> out.write(img) } + } + + idea { + module { + name = ideName + downloadSources = true // this is the default BTW + inheritOutputDirs = true + } + } + + idea.project.settings { + copyright { + useDefault = "ApacheLicense-v2" + profiles.create("ApacheLicense-v2") { + // strip trailing LF + def copyrightText = rootProject.file("codestyle/copyright-header.txt").text + notice = copyrightText + } + } + + encodings.encoding = "UTF-8" + encodings.properties.encoding = "UTF-8" + } +} + +eclipse { 
project { name = ideName } } diff --git a/codestyle/copyright-header-java.txt b/codestyle/copyright-header-java.txt new file mode 100644 index 0000000000..fdd7c41a38 --- /dev/null +++ b/codestyle/copyright-header-java.txt @@ -0,0 +1,15 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ diff --git a/codestyle/copyright-header.txt b/codestyle/copyright-header.txt new file mode 100644 index 0000000000..b0b441c85c --- /dev/null +++ b/codestyle/copyright-header.txt @@ -0,0 +1,13 @@ +Copyright (c) 2024 Snowflake Computing Inc. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
diff --git a/codestyle/copyright-header.xml b/codestyle/copyright-header.xml new file mode 100644 index 0000000000..d3838526b8 --- /dev/null +++ b/codestyle/copyright-header.xml @@ -0,0 +1,16 @@ + + diff --git a/codestyle/org.eclipse.wst.xml.core.prefs b/codestyle/org.eclipse.wst.xml.core.prefs new file mode 100644 index 0000000000..bc2f15da16 --- /dev/null +++ b/codestyle/org.eclipse.wst.xml.core.prefs @@ -0,0 +1,24 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# + +eclipse.preferences.version=1 +formatCommentJoinLines=false +formatCommentText=false +indentationChar=space +indentationSize=2 +lineWidth=100 +spaceBeforeEmptyCloseTag=false diff --git a/docker-compose-jupyter.yml b/docker-compose-jupyter.yml new file mode 100644 index 0000000000..b336d736a6 --- /dev/null +++ b/docker-compose-jupyter.yml @@ -0,0 +1,57 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +services: + polaris: + build: + context: . + network: host + ports: + - "8181:8181" + - "8182" + environment: + AWS_REGION: us-west-2 + AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY + + healthcheck: + test: ["CMD", "curl", "http://localhost:8182/healthcheck"] + interval: 10s + timeout: 10s + retries: 5 + jupyter: + build: + context: . + dockerfile: ./notebooks/Dockerfile + network: host + ports: + - "8888:8888" + depends_on: + polaris: + condition: service_healthy + environment: + AWS_REGION: us-west-2 + POLARIS_HOST: polaris + volumes: + - notebooks:/home/jovyan/notebooks + +volumes: + notebooks: + driver: local + driver_opts: + o: bind + type: none + device: ./notebooks diff --git a/docker-compose.yml b/docker-compose.yml new file mode 100644 index 0000000000..267d8a2d74 --- /dev/null +++ b/docker-compose.yml @@ -0,0 +1,89 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +services: + polaris: + build: + context: . 
+ network: host + ports: + - "8181:8181" + - "8182" + environment: + AWS_REGION: us-west-2 + GOOGLE_APPLICATION_CREDENTIALS: $GOOGLE_APPLICATION_CREDENTIALS + AZURE_TENANT_ID: $AZURE_TENANT_ID + AZURE_CLIENT_ID: $AZURE_CLIENT_ID + AZURE_CLIENT_SECRET: $AZURE_CLIENT_SECRET + command: # override the command to specify aws keys as dropwizard config + - java + - -Ddw.awsAccessKey=$AWS_ACCESS_KEY_ID + - -Ddw.awsSecretKey=$AWS_SECRET_ACCESS_KEY + - -jar + - /app/polaris-service-1.0.0-all.jar + - server + - polaris-server.yml + volumes: + - credentials:/tmp/credentials/ + + healthcheck: + test: ["CMD", "curl", "http://localhost:8182/healthcheck"] + interval: 10s + timeout: 10s + retries: 5 + regtest: + build: + context: regtests + network: host + args: + POLARIS_HOST: polaris + depends_on: + polaris: + condition: service_healthy + environment: + AWS_TEST_ENABLED: $AWS_TEST_ENABLED + AWS_STORAGE_BUCKET: $AWS_STORAGE_BUCKET + AWS_ROLE_ARN: $AWS_ROLE_ARN + AWS_TEST_BASE: $AWS_TEST_BASE + GCS_TEST_ENABLED: $GCS_TEST_ENABLED + GCS_TEST_BASE: $GCS_TEST_BASE + GOOGLE_APPLICATION_CREDENTIALS: $GOOGLE_APPLICATION_CREDENTIALS + AZURE_TEST_ENABLED: $AZURE_TEST_ENABLED + AZURE_TENANT_ID: $AZURE_TENANT_ID + AZURE_DFS_TEST_BASE: $AZURE_DFS_TEST_BASE + AZURE_BLOB_TEST_BASE: $AZURE_BLOB_TEST_BASE + AZURE_CLIENT_ID: $AZURE_CLIENT_ID + AZURE_CLIENT_SECRET: $AZURE_CLIENT_SECRET + AWS_CROSS_REGION_TEST_ENABLED: $AWS_CROSS_REGION_TEST_ENABLED + AWS_CROSS_REGION_BUCKET: $AWS_CROSS_REGION_BUCKET + AWS_ROLE_FOR_CROSS_REGION_BUCKET: $AWS_ROLE_FOR_CROSS_REGION_BUCKET + volumes: + - local_output:/tmp/polaris-regtests/ + - credentials:/tmp/credentials/ + +volumes: + local_output: + driver: local + driver_opts: + o: bind + type: none + device: ./regtests/output + credentials: + driver: local + driver_opts: + o: bind + type: none + device: ./regtests/credentials diff --git a/docs/access-control.md b/docs/access-control.md new file mode 100644 index 0000000000..fe5ceb1316 --- /dev/null +++ 
b/docs/access-control.md @@ -0,0 +1,188 @@ + + +This section provides information about how access control works for Polaris Catalog. + +Polaris Catalog uses a role-based access control (RBAC) model, in which the Polaris administrator assigns access privileges to catalog roles, +and then grants service principals access to resources by assigning catalog roles to principal roles. + +The key concepts to understanding access control in Polaris are: + +- **Securable object** +- **Principal role** +- **Catalog role** +- **Privilege** + +## Securable object + +A securable object is an object to which access can be granted. Polaris +has the following securable objects: + +- Catalog +- Namespace +- Iceberg table +- View + +## Principal role + +A principal role is a resource in Polaris that you can use to logically group Polaris service principals together and grant privileges on +securable objects. + +Polaris supports a many-to-one relationship between service principals and principal roles. For example, to grant the same privileges to +multiple service principals, you can grant a single principal role to those service principals. A service principal can be granted one +principal role. When registering a service connection, the Polaris administrator specifies the principal role that is granted to the +service principal. + +You don't grant privileges directly to a principal role. Instead, you configure object permissions at the catalog role level, and then grant +catalog roles to a principal role. + +The following table shows examples of principal roles that you might configure in Polaris: + +| Principal role name | Description | +| -----------------------| ----------- | +| Data_engineer | A role that is granted to multiple service principals for running data engineering jobs. | +| Data_scientist | A role that is granted to multiple service principals for running data science or AI jobs. 
| + +## Catalog role + +A catalog role belongs to a particular catalog resource in Polaris and specifies a set of permissions for actions on the catalog, or on objects +in the catalog, such as catalog namespaces or tables. You can create one or more catalog roles for a catalog. + +You grant privileges to a catalog role, and then grant the catalog role to a principal role to bestow the privileges on one or more service +principals. + +**Note** + +If you update the privileges bestowed to a service principal, the updates may take up to one hour to take effect. This means that if you +revoke or grant some privileges for a catalog, the updated privileges may not take effect on a service principal with access to that catalog +for up to one hour. + +Polaris also supports a many-to-many relationship between catalog roles and principal roles. You can grant the same catalog role to one or more +principal roles. Likewise, a principal role can be granted to one or more catalog roles. + +The following table displays examples of catalog roles that you might +configure in Polaris: + +| Example Catalog role | Description | +| -----------------------| ----------- | +| Catalog administrators | A role that has been granted multiple privileges to emulate full access to the catalog. Principal roles that have been granted this role are permitted to create, alter, read, write, and drop tables in the catalog. | +| Catalog readers | A role that has been granted read-only privileges to tables in the catalog. Principal roles that have been granted this role are allowed to read from tables in the catalog. | +| Catalog contributor | A role that has been granted read and write access privileges to all tables that belong to the catalog.
Principal roles that have been granted this role are allowed to perform read and write operations on tables in the catalog. | + +## RBAC model + +The following diagram illustrates the RBAC model used by Polaris Catalog. For each catalog, the Polaris administrator assigns access +privileges to catalog roles, and then grants service principals access to resources by assigning catalog roles to principal roles. Polaris +supports a many-to-one relationship between service principals and principal roles. + +![Diagram that shows the RBAC model for Polaris Catalog.](./img/rbac-model.svg "Polaris Catalog RBAC model") + +## Access control privileges + +This section describes the privileges that are available in the Polaris access control model. Privileges are granted to catalog roles, catalog +roles are granted to principal roles, and principal roles are granted to service principals to specify the operations that service principals can +perform on objects in Polaris. + +To grant the full set of privileges (drop, list, read, write, etc.) on an object, you can use the *full privilege* option. + +### Table privileges + +**Note** + +The TABLE_FULL_METADATA full privilege doesn't grant access to the TABLE_READ_DATA or TABLE_WRITE_DATA individual privileges. + +| Full privilege | Individual privilege | Description | +| -----------------------| ----------- | ---- | +| TABLE_FULL_METADATA | TABLE_CREATE | Enables registering a table with the catalog. | +| | TABLE_DROP | Enables dropping a table from the catalog. | +| | TABLE_LIST | Enables listing any tables in the catalog. | +| | TABLE_READ_PROPERTIES | Enables reading [properties](https://iceberg.apache.org/docs/nightly/configuration/#table-properties) of the table. | +| | TABLE_WRITE_PROPERTIES | Enables configuring [properties](https://iceberg.apache.org/docs/nightly/configuration/#table-properties) for the table. 
| +| N/A | TABLE_READ_DATA | Enables reading data from the table by receiving short-lived read-only storage credentials from the catalog. | +| N/A | TABLE_WRITE_DATA | Enables writing data to the table by receiving short-lived read+write storage credentials from the catalog. | + +### View privileges + +| Full privilege | Individual privilege | Description | +| -----------------------| ----------- | ---- | +| VIEW_FULL_METADATA | VIEW_CREATE | Enables registering a view with the catalog. | +| | VIEW_DROP | Enables dropping a view from the catalog. | +| | VIEW_LIST | Enables listing any views in the catalog. | +| | VIEW_READ_PROPERTIES | Enables reading all the view properties. | +| | VIEW_WRITE_PROPERTIES | Enables configuring view properties. | + +### Namespace privileges + +| Full privilege | Individual privilege | Description | +| -----------------------| ----------- | ---- | +| NAMESPACE_FULL_METADATA | NAMESPACE_CREATE | Enables creating a namespace in a catalog. | +| | NAMESPACE_DROP | Enables dropping the namespace from the catalog. | +| | NAMESPACE_LIST | Enables listing any object in the namespace, including nested namespaces and tables. | +| | NAMESPACE_READ_PROPERTIES | Enables reading all the namespace properties. | +| | NAMESPACE_WRITE_PROPERTIES | Enables configuring namespace properties. | + +### Catalog privileges + +| Privilege | Description | +| -----------------------| ----------- | +| CATALOG_MANAGE_ACCESS | Includes the ability to grant or revoke privileges on objects in a catalog to catalog roles, and the ability to grant or revoke catalog roles to or from principal roles. | +| CATALOG_MANAGE_CONTENT | Enables full management of content for the catalog. This privilege encompasses the following privileges:
• CATALOG_MANAGE_METADATA<br>• TABLE_FULL_METADATA<br>• NAMESPACE_FULL_METADATA<br>• VIEW_FULL_METADATA<br>• TABLE_WRITE_DATA<br>• TABLE_READ_DATA<br>• CATALOG_READ_PROPERTIES<br>• CATALOG_WRITE_PROPERTIES
| +| CATALOG_MANAGE_METADATA | Enables full management of the catalog, as well as catalog roles, namespaces, and tables. | +| CATALOG_READ_PROPERTIES | Enables listing catalogs and reading properties of the catalog. | +| CATALOG_WRITE_PROPERTIES | Enables configuring catalog properties. | + +## RBAC example + +The following diagram illustrates how RBAC works in Polaris, and +includes the following users: + +- **Alice**: A service admin who signs up for Polaris. Alice can + create service principals. She can also create catalogs and + namespaces, and configure access control for Polaris resources. + +> **Note** +> +> The service principal for Alice is not visible in the Polaris Catalog +> user interface. + +- **Bob**: A data engineer who uses Snowpipe Streaming (in Snowflake) + and Apache Spark connections to interact with Polaris. + + - Alice has created a service principal for Bob. It has been + granted the Data_engineer principal role, which in turn has been + granted the following catalog roles: Catalog contributor and + Data administrator (for both the Silver and Gold zone catalogs + in the following diagram). + + - The Catalog contributor role grants permission to create + namespaces and tables in the Bronze zone catalog. + + - The Data administrator roles grant full administrative rights to + the Silver zone catalog and Gold zone catalog. + +- **Mark**: A data scientist who uses Snowflake AI services to + interact with Polaris. + + - Alice has created a service principal for Mark. It has been + granted the Data_scientist principal role, which in turn has + been granted the catalog role named Catalog reader. + + - The Catalog reader role grants read-only access for a catalog + named Gold zone catalog. 
+ +![Diagram that shows an example of how RBAC works in Polaris Catalog.](./img/rbac-example.svg "Polaris Catalog RBAC example") diff --git a/docs/entities.md b/docs/entities.md new file mode 100644 index 0000000000..c0cf1d650f --- /dev/null +++ b/docs/entities.md @@ -0,0 +1,81 @@ + + +This page documents various entities that can be managed in Polaris. + +## Catalog + +A catalog is a top-level entity in Polaris that may contain other entities like [namespaces](#namespace) and [tables](#table). These map directly to [Apache Iceberg catalogs](https://iceberg.apache.org/concepts/catalog/). + +For information on managing catalogs with the REST API or for more information on what data can be associated with a catalog, see [the API docs](../regtests/client/python/docs/CreateCatalogRequest.md). + +### Storage Type + +All catalogs in Polaris are associated with a _storage type_. Valid Storage Types are `S3`, `Azure`, and `GCS`. The `FILE` type is also additionally available for testing. Each of these types relates to a different storage provider where data within the catalog may reside. Depending on the storage type, various other configurations may be set for a catalog including credentials to be used when accessing data inside the catalog. + +For details on how to use Storage Types in the REST API, see [the API docs](../regtests/client/python/docs/StorageConfigInfo.md). + +## Namespace + +A namespace is a logical entity that resides within a [catalog](#catalog) and can contain other entities such as [tables](#table) or [views](#view). Some other systems may refer to namespaces as _schemas_ or _databases_. + +In Polaris, namespaces can be nested up to 16 levels. For example, `a.b.c.d.e.f.g` is a valid namespace. `b` is said to reside within `a`, and so on. + +For information on managing namespaces with the REST API or for more information on what data can be associated with a namespace, see [the API docs](../regtests/client/python/docs/CreateNamespaceRequest.md). 
+ +
+## Table
+
+Polaris tables are entities that map to [Apache Iceberg tables](https://iceberg.apache.org/docs/nightly/configuration/).
+
+For information on managing tables with the REST API or for more information on what data can be associated with a table, see [the API docs](../regtests/client/python/docs/CreateTableRequest.md).
+
+## View
+
+Polaris views are entities that map to [Apache Iceberg views](https://iceberg.apache.org/view-spec/).
+
+For information on managing views with the REST API or for more information on what data can be associated with a view, see [the API docs](../regtests/client/python/docs/CreateViewRequest.md).
+
+## Principal
+
+Polaris principals are unique identities that can be used to represent users or services. Each principal may have one or more [principal roles](#principal-role) assigned to it for the purpose of accessing catalogs and the entities within them.
+
+For information on managing principals with the REST API or for more information on what data can be associated with a principal, see [the API docs](../regtests/client/python/docs/CreatePrincipalRequest.md).
+
+## Principal Role
+
+Polaris principal roles are labels that may be granted to [principals](#principal). Each principal may have one or more principal roles, and the same principal role may be granted to multiple principals. Principal roles may be assigned based on the persona or responsibilities of a given principal, or on how that principal will need to access different entities within Polaris.
+
+For information on managing principal roles with the REST API or for more information on what data can be associated with a principal role, see [the API docs](../regtests/client/python/docs/CreatePrincipalRoleRequest.md).
+
+
+## Catalog Role
+
+Polaris catalog roles are labels that may be granted to [catalogs](#catalog). Each catalog may have one or more catalog roles, and the same catalog role may be granted to multiple catalogs.
Catalog roles may be assigned based on the nature of data that will reside in a catalog, or by the groups of users and services that might need to access that data. + +Each catalog role may have multiple [privileges](#privilege) granted to it, and each catalog role can be granted to one or more [principal roles](#principal-role). This is the mechanism by which principals are granted access to entities inside a catalog such as namespaces and tables. + +## Privilege + +Polaris privileges are granted to [catalog roles](#catalog-role) in order to grant principals with a given principal role some degree of access to catalogs with a given catalog role. When a privilege is granted to a catalog role, any principal roles granted that catalog role receive the privilege. In turn, any principals who are granted that principal role receive it. + +A privilege can be scoped to any entity inside a catalog, including the catalog itself. + +For a list of supported privileges for each privilege class, see the API docs: +* [Table Privileges](../regtests/client/python/docs/TablePrivilege.md) +* [View Privileges](../regtests/client/python/docs/ViewPrivilege.md) +* [Namespace Privileges](../regtests/client/python/docs/NamespacePrivilege.md) +* [Catalog Privileges](../regtests/client/python/docs/CatalogPrivilege.md) diff --git a/docs/img/example-workflow.svg b/docs/img/example-workflow.svg new file mode 100644 index 0000000000..7db3df677d --- /dev/null +++ b/docs/img/example-workflow.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/logos/Polaris-Catalog-BLOG-symmetrical-subhead.png b/docs/img/logos/Polaris-Catalog-BLOG-symmetrical-subhead.png new file mode 100644 index 0000000000..eb941b89e8 Binary files /dev/null and b/docs/img/logos/Polaris-Catalog-BLOG-symmetrical-subhead.png differ diff --git a/docs/img/logos/polaris-brandmark.png b/docs/img/logos/polaris-brandmark.png new file mode 100644 index 0000000000..6573f7a61b Binary files /dev/null and 
b/docs/img/logos/polaris-brandmark.png differ diff --git a/docs/img/logos/polaris-catalog-stacked-logo.svg b/docs/img/logos/polaris-catalog-stacked-logo.svg new file mode 100644 index 0000000000..b44b0a5d7d --- /dev/null +++ b/docs/img/logos/polaris-catalog-stacked-logo.svg @@ -0,0 +1,17 @@ + + + + + + + + + + + + + + + + + diff --git a/docs/img/logos/polaris-favicon.png b/docs/img/logos/polaris-favicon.png new file mode 100644 index 0000000000..bb92271f76 Binary files /dev/null and b/docs/img/logos/polaris-favicon.png differ diff --git a/docs/img/overview.svg b/docs/img/overview.svg new file mode 100644 index 0000000000..dbe577490d --- /dev/null +++ b/docs/img/overview.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/quickstart/privilege-illustration-1.png b/docs/img/quickstart/privilege-illustration-1.png new file mode 100644 index 0000000000..69328caf59 Binary files /dev/null and b/docs/img/quickstart/privilege-illustration-1.png differ diff --git a/docs/img/quickstart/privilege-illustration-2.png b/docs/img/quickstart/privilege-illustration-2.png new file mode 100644 index 0000000000..a26428ca65 Binary files /dev/null and b/docs/img/quickstart/privilege-illustration-2.png differ diff --git a/docs/img/rbac-example.svg b/docs/img/rbac-example.svg new file mode 100644 index 0000000000..431e30ffbe --- /dev/null +++ b/docs/img/rbac-example.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/rbac-model.svg b/docs/img/rbac-model.svg new file mode 100644 index 0000000000..7c7323d32c --- /dev/null +++ b/docs/img/rbac-model.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/sample-catalog-structure.svg b/docs/img/sample-catalog-structure.svg new file mode 100644 index 0000000000..efecec6ba4 --- /dev/null +++ b/docs/img/sample-catalog-structure.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/index.html b/docs/index.html new file mode 100644 index 0000000000..ac8987d906 --- /dev/null +++ 
b/docs/index.html @@ -0,0 +1,3481 @@ + + + + + + Polaris Catalog Documentation + + + + + + + + + +

Polaris Catalog Documentation

Download OpenAPI specification:Download

+

Quick Start

This guide serves as an introduction to several key entities that can be managed with Polaris, describes how to build and deploy Polaris locally, and finally includes examples of how to use Polaris with Spark and Trino.

+

Prerequisites

This guide covers building Polaris, deploying it locally or via Docker, and interacting with it using the command-line interface and Apache Spark. Before proceeding with Polaris, be sure to satisfy the relevant prerequisites listed here.

+

Building and Deploying Polaris

+

To get the latest Polaris code, you'll need to clone the repository using git. You can install git using homebrew:

+
brew install git
+
+

Then, use git to clone the Polaris repo:

+
cd ~
+git clone https://github.com/polaris-catalog/polaris.git
+
+

With Docker

+

If you plan to deploy Polaris inside Docker, you'll need to install Docker itself. This can be done using homebrew:

+
brew install docker
+
+

Once installed, make sure Docker is running. This can be done on macOS with:

+
open -a Docker
+
+

From Source

+

If you plan to build Polaris from source yourself, you will need to satisfy a few prerequisites first.

+

Polaris is built using gradle and is compatible with Java 21. We recommend the use of jenv to manage multiple Java versions. For example, to install Java 21 via [homebrew](https://brew.sh/) and configure it with jenv:

+
cd ~/polaris
+brew install openjdk@21 gradle@8 jenv
+jenv add $(brew --prefix openjdk@21)
+jenv local 21
+
+

Connecting to Polaris

+

Polaris is compatible with any Apache Iceberg client that supports the REST API. Depending on the client you plan to use, refer to the prerequisites below.

+

With Spark

+

If you want to connect to Polaris with Apache Spark, you'll need to start by cloning Spark. As above, make sure git is installed first. You can install it with homebrew:

+
brew install git
+
+

Then, clone Spark and check out a versioned branch. This guide uses Spark 3.5.

+
cd ~
+git clone https://github.com/apache/spark.git
+cd ~/spark
+git checkout branch-3.5
+
+

Deploying Polaris

Polaris can be deployed via a lightweight Docker image or as a standalone process. Before starting, be sure that you've satisfied the relevant prerequisites detailed above.

+

Docker Image

+

To start using Polaris in Docker, launch Polaris while Docker is running:

+
cd ~/polaris
+docker compose -f docker-compose.yml up --build
+
+

Once the polaris-polaris container is up, you can continue to Defining a Catalog.

+

Building Polaris

+

Run Polaris locally with:

+
cd ~/polaris
+./gradlew runApp
+
+

You should see output for some time as Polaris builds and starts up. Eventually, you won’t see any more logs and should see messages that resemble the following:

+
INFO  [...] [main] [] o.e.j.s.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@...
+INFO  [...] [main] [] o.e.j.server.AbstractConnector: Started application@...
+INFO  [...] [main] [] o.e.j.server.AbstractConnector: Started admin@...
+INFO  [...] [main] [] o.eclipse.jetty.server.Server: Started Server@...
+
+

At this point, Polaris is running.

+

Bootstrapping Polaris

For this tutorial, we'll launch an instance of Polaris that stores entities only in-memory. This means that any entities that you define will be destroyed when Polaris is shut down. It also means that Polaris will automatically bootstrap itself with root credentials. For more information on how to configure Polaris for production usage, see the docs.

+

When Polaris is launched using in-memory mode, the root CLIENT_ID and CLIENT_SECRET can be found in stdout on initial startup. For example:

+
Bootstrapped with credentials: {"client-id": "XXXX", "client-secret": "YYYY"}
+
+

Be sure to take note of these credentials, as we'll be using them below.
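If you're scripting the quickstart, the credentials can also be pulled out of that log line with standard shell tools. This is just a convenience sketch; it assumes the log line has exactly the JSON shape shown above, and uses placeholder values.

```shell
# Sketch: parse the bootstrap credentials out of the startup log line.
# Assumes the exact format shown above; XXXX/YYYY are placeholders.
LOG_LINE='Bootstrapped with credentials: {"client-id": "XXXX", "client-secret": "YYYY"}'
CLIENT_ID=$(echo "$LOG_LINE" | sed -n 's/.*"client-id": "\([^"]*\)".*/\1/p')
CLIENT_SECRET=$(echo "$LOG_LINE" | sed -n 's/.*"client-secret": "\([^"]*\)".*/\1/p')
echo "$CLIENT_ID:$CLIENT_SECRET"
```

With the real startup output piped in instead of the hard-coded line, the same sed expressions populate CLIENT_ID and CLIENT_SECRET for the CLI commands that follow.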

+

Defining a Catalog

In Polaris, the catalog is the top-level entity that objects like tables and views are organized under. With a Polaris service running, you can create a catalog like so:

+
cd ~/polaris
+
+./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  catalogs \
+  create \
+  --storage-type s3 \
+  --default-base-location ${DEFAULT_BASE_LOCATION} \
+  --role-arn ${ROLE_ARN} \
+  quickstart_catalog
+
+

This will create a new catalog called quickstart_catalog.

+

The DEFAULT_BASE_LOCATION you provide will be the default location that objects in this catalog should be stored in, and the ROLE_ARN you provide should be a Role ARN with access to read and write data in that location. These credentials will be provided to engines reading data from the catalog once they have authenticated with Polaris using credentials that have access to those resources.
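For example, the two values might look like the following. These are hypothetical placeholders (substitute your own bucket and IAM role), and the pattern match is only a cheap local shape check, not a substitute for validating the role in IAM.

```shell
# Hypothetical placeholder values; substitute your own bucket and IAM role.
DEFAULT_BASE_LOCATION="s3://example-bucket/quickstart"
ROLE_ARN="arn:aws:iam::111122223333:role/polaris-quickstart-role"

# Quick shape check before passing the ARN to `catalogs create`:
case "$ROLE_ARN" in
  arn:aws:iam::*:role/*) ARN_OK=yes ;;
  *) ARN_OK=no ;;
esac
echo "ARN_OK=$ARN_OK"
```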

+

If you’re using a storage type other than S3, such as Azure, you’ll provide a different type of credential than a Role ARN. For more details on supported storage types, see the docs.

+

Additionally, if Polaris is running somewhere other than localhost:8181, you can specify the correct hostname and port by providing --host and --port flags. For the full set of options supported by the CLI, please refer to the docs.

+

Creating a Principal and Assigning it Privileges

+

With a catalog created, we can create a principal that has access to manage that catalog. For details on how to configure the Polaris CLI, see the section above or refer to the docs.

+
./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  principals \
+  create \
+  quickstart_user
+
+./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  principal-roles \
+  create \
+  quickstart_user_role
+
+./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  catalog-roles \
+  create \
+  --catalog quickstart_catalog \
+  quickstart_catalog_role
+
+

Be sure to provide the necessary credentials, hostname, and port as before.

+

When the principals create command completes successfully, it will return the credentials for this new principal. Be sure to note these down for later. For example:

+
./polaris ... principals create example
+{"clientId": "XXXX", "clientSecret": "YYYY"}
+
+

Now, we grant the principal role we created to the principal, and grant the catalog role we created to that principal role. For more information on these entities, please refer to the linked documentation.

+
./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  principal-roles \
+  grant \
+  --principal quickstart_user \
+  quickstart_user_role
+
+./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  catalog-roles \
+  grant \
+  --catalog quickstart_catalog \
+  --principal-role quickstart_user_role \
+  quickstart_catalog_role
+
+

Now, we’ve linked our principal to the catalog via roles like so:

+

Principal to Catalog

+

In order to give this principal the ability to interact with the catalog, we must assign some privileges. For the time being, we will give this principal the ability to fully manage content in our new catalog. We can do this with the CLI like so:

+
./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  privileges \
+  --catalog quickstart_catalog \
+  --catalog-role quickstart_catalog_role \
+  catalog \
+  grant \
+  CATALOG_MANAGE_CONTENT
+
+

This grants the catalog privilege CATALOG_MANAGE_CONTENT to our catalog role, linking everything together like so:

+

Principal to Catalog with Catalog Role

+

CATALOG_MANAGE_CONTENT has create/list/read/write privileges on all entities within the catalog. The same privilege could be granted to a namespace, in which case the principal could create/list/read/write any entity under that namespace.

+

Using Iceberg & Polaris

At this point, we’ve created a principal and granted it the ability to manage a catalog. We can now use an external engine to assume that principal, access our catalog, and store data in that catalog using Apache Iceberg.

+

Connecting with Spark

+

To use a Polaris-managed catalog in Apache Spark, we can configure Spark to use the Iceberg catalog REST API.

+

This guide uses Apache Spark 3.5, but be sure to find the appropriate iceberg-spark package for your Spark version. With a local Spark clone on the branch-3.5 branch, we can run the following:

+

Note: the credentials provided here are those for our principal, not the root credentials.

+
bin/spark-shell \
+--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.hadoop:hadoop-aws:3.4.0 \
+--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
+--conf spark.sql.catalog.quickstart_catalog.warehouse=quickstart_catalog \
+--conf spark.sql.catalog.quickstart_catalog.header.X-Iceberg-Access-Delegation=true \
+--conf spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog \
+--conf spark.sql.catalog.quickstart_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog \
+--conf spark.sql.catalog.quickstart_catalog.uri=http://localhost:8181/api/catalog \
+--conf spark.sql.catalog.quickstart_catalog.credential='XXXX:YYYY' \
+--conf spark.sql.catalog.quickstart_catalog.scope='PRINCIPAL_ROLE:ALL' \
+--conf spark.sql.catalog.quickstart_catalog.token-refresh-enabled=true
+
+

Replace XXXX and YYYY with the client ID and client secret generated when you created the quickstart_user principal.
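In other words, the `credential` property is just the two values joined by a colon. A small sketch, using placeholder values standing in for the quickstart_user credentials:

```shell
# Placeholder values standing in for the quickstart_user credentials.
CLIENT_ID="XXXX"
CLIENT_SECRET="YYYY"
# The value passed as spark.sql.catalog.quickstart_catalog.credential:
CREDENTIAL="${CLIENT_ID}:${CLIENT_SECRET}"
echo "$CREDENTIAL"
```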

+

Similar to the CLI commands above, this configures Spark to use the Polaris instance running at localhost:8181 as a catalog. If your Polaris server is running elsewhere, be sure to update the configuration appropriately.

+

Finally, note that we include the hadoop-aws package here. If your table is using a different filesystem, be sure to include the appropriate dependency.

+

Once the Spark session starts, we can create a namespace and table within the catalog:

+
spark.sql("USE quickstart_catalog")
+spark.sql("CREATE NAMESPACE IF NOT EXISTS quickstart_namespace")
+spark.sql("CREATE NAMESPACE IF NOT EXISTS quickstart_namespace.schema")
+spark.sql("USE NAMESPACE quickstart_namespace.schema")
+spark.sql("""
+    CREATE TABLE IF NOT EXISTS quickstart_table (
+        id BIGINT, data STRING
+    ) 
+USING ICEBERG
+""")
+
+

We can now use this table like any other:

+
spark.sql("INSERT INTO quickstart_table VALUES (1, 'some data')")
+spark.sql("SELECT * FROM quickstart_table").show(false)
+. . .
++---+---------+
+|id |data     |
++---+---------+
+|1  |some data|
++---+---------+
+
+

If at any time access is revoked...

+
./polaris \
+  --client-id ${CLIENT_ID} \
+  --client-secret ${CLIENT_SECRET} \
+  privileges \
+  --catalog quickstart_catalog \
+  --catalog-role quickstart_catalog_role \
+  catalog \
+  revoke \
+  CATALOG_MANAGE_CONTENT
+
+

Spark will lose access to the table:

+
spark.sql("SELECT * FROM quickstart_table").show(false)
+
+org.apache.iceberg.exceptions.ForbiddenException: Forbidden: Principal 'quickstart_user' with activated PrincipalRoles '[]' and activated ids '[6, 7]' is not authorized for op LOAD_TABLE_WITH_READ_DELEGATION
+
+

Polaris Catalog Overview

+ +

Polaris Catalog is a catalog implementation for Apache Iceberg built on the open source Apache Iceberg REST protocol.

+

With Polaris Catalog, you can provide centralized, secure read and write access across different REST-compatible query engines to your Iceberg tables.

+

Conceptual diagram of Polaris Catalog.

+

Key concepts

This section introduces key concepts associated with using Polaris Catalog.

+

In the following diagram, a sample Polaris Catalog structure with nested namespaces is shown for Catalog1. No tables +or namespaces have been created yet for Catalog2 or Catalog3:

+

Diagram that shows an example Polaris Catalog structure.

+

Catalog

+

In Polaris Catalog, you can create one or more catalog resources to organize Iceberg tables.

+

Configure your catalog by setting values in the storage configuration for S3, Azure, or Google Cloud Storage. An Iceberg catalog enables a +query engine to manage and organize tables. The catalog forms the first architectural layer in the Iceberg table specification and must support:

+
    +
  • Storing the current metadata pointer for one or more Iceberg tables. A metadata pointer maps a table name to the location of that table's +current metadata file.

    +
  • Performing atomic operations so that you can update the current metadata pointer for a table to the metadata pointer of a new version of +the table.

    +
+

To learn more about Iceberg catalogs, see the Apache Iceberg documentation.

+

Catalog types

+

A catalog can be one of the following two types:

+
    +
  • Internal: The catalog is managed by Polaris. Tables from this catalog can be read and written in Polaris.

    +
  • External: The catalog is externally managed by another Iceberg catalog provider (for example, Snowflake, Glue, Dremio Arctic). Tables from +this catalog are synced to Polaris. These tables are read-only in Polaris. In the current release, only the Snowflake external catalog is supported.

    +
+

A catalog is configured with a storage configuration that can point to S3, Azure storage, or GCS.

+

To create a new catalog, see Create a catalog.

+

Namespace

+

You create namespaces to logically group Iceberg tables within a catalog. A catalog can have one or more namespaces. You can also create +nested namespaces. Iceberg tables belong to namespaces.

+

Iceberg tables & catalogs

+

In an internal catalog, an Iceberg table is registered in Polaris Catalog, but read and written via query engines. The table data and +metadata is stored in your external cloud storage. The table uses Polaris Catalog as the Iceberg catalog.

+

If you have tables that use Snowflake as the Iceberg catalog (Snowflake-managed tables), you can sync these tables to an external +catalog in Polaris Catalog. If you sync this catalog to Polaris Catalog, it appears as an external catalog in Polaris Catalog. The table data and +metadata is stored in your external cloud storage. The Snowflake query engine can read from or write to these tables. However, the other query +engines can only read from these tables.

+

Important

+

To ensure that the access privileges defined for a catalog are enforced +correctly, you must:

+
    +
  • Ensure a directory only contains the data files that belong to a +single table.

    +
  • Create a directory hierarchy that matches the namespace hierarchy +for the catalog.

    +
+

For example, if a catalog includes:

+
    +
  • Top-level namespace namespace1

    +
  • Nested namespace namespace1a

    +
  • A customers table, which is grouped under nested namespace +namespace1a

    +
  • An orders table, which is grouped under nested namespace namespace1a

    +
+

The directory hierarchy for the catalog must be:

+
    +
  • /namespace1/namespace1a/customers/<files for the customers table +*only*>

    +
  • /namespace1/namespace1a/orders/<files for the orders table *only*>

    +
+
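Put differently, a table's directory is the catalog's namespace hierarchy with dots replaced by slashes, followed by the table name. A sketch of that mapping for the customers table above:

```shell
# Derive the expected storage directory from a dotted namespace and table name.
NAMESPACE="namespace1.namespace1a"
TABLE="customers"
TABLE_DIR="/$(printf '%s' "$NAMESPACE" | tr '.' '/')/$TABLE"
echo "$TABLE_DIR"
```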

Service principal

+

A service principal is an entity that you create in Polaris Catalog. Each service principal encapsulates credentials that you use to connect +to Polaris Catalog.

+

Query engines use service principals to connect to catalogs.

+

Polaris Catalog generates a Client ID and Client Secret pair for each service principal.

+

The following table displays example service principals that you might create in Polaris Catalog:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Service connection nameDescription
Flink ingestionFor Apache Flink to ingest streaming data into Iceberg tables.
Spark ETL pipelineFor Apache Spark to run ETL pipeline jobs on Iceberg tables.
Snowflake data pipelinesFor Snowflake to run data pipelines for transforming data in Iceberg tables.
Trino BI dashboardFor Trino to run BI queries for powering a dashboard.
Snowflake AI teamFor Snowflake to run AI jobs on data in Iceberg tables.
+

Service connection

+

A service connection represents a REST-compatible engine (such as Apache Spark, Apache Flink, or Trino) that can read from and write to Polaris +Catalog. When creating a new service connection, the Polaris administrator grants the service principal that is created with the new service +connection either a new or existing principal role. A principal role is a resource in Polaris that you can use to logically group Polaris +service principals together and grant privileges on securable objects. For more information, see Principal role. Polaris Catalog uses a role-based access control (RBAC) model to grant service principals access to resources. For more information, +see Access control. For a diagram of this model, see RBAC model.

+

If the Polaris administrator grants the service principal for the new service connection with a new principal role, the service principal +doesn't have any privileges granted to it yet. When securing the catalog that the new service connection will connect to, the Polaris +administrator grants privileges to catalog roles and then grants these catalog roles to the new principal role. As a result, the service +principal for the new service connection is bestowed with these privileges. For more information about catalog roles, see Catalog role.

+

If the Polaris administrator grants an existing principal role to the service principal for the new service connection, the service principal +is bestowed with the privileges granted to the catalog roles that are granted to the existing principal role. If needed, the Polaris +administrator can grant additional catalog roles to the existing principal role or remove catalog roles from it to adjust the privileges +bestowed to the service principal. For an example of how RBAC works in Polaris, see RBAC example.

+

Storage configuration

+

A storage configuration stores a generated identity and access management (IAM) entity for your external cloud storage and is created +when you create a catalog. The storage configuration is used to set the values to connect Polaris Catalog to your cloud storage. During the +catalog creation process, an IAM entity is generated and used to create a trust relationship between the cloud storage provider and Polaris +Catalog.

+

When you create a catalog, you supply the following information about your external cloud storage:

+ + + + + + + + + + + + + + + + + + + +
Cloud storage providerInformation
Amazon S3
  • Default base location for your Amazon S3 bucket
  • Locations for your Amazon S3 bucket
  • S3 role ARN
  • External ID (optional)
Google Cloud Storage (GCS)
  • Default base location for your GCS bucket
  • Locations for your GCS bucket
Azure
  • Default base location for your Microsoft Azure container
  • Locations for your Microsoft Azure container
  • Azure tenant ID
+

Example workflow

In the following example workflow, Bob creates an Iceberg table named Table1 and Alice reads data from Table1.

+
    +
  1. Bob uses Apache Spark to create the Table1 table under the +Namespace1 namespace in the Catalog1 catalog and insert values into +Table1.

    +

    Bob can create Table1 and insert data into it, because he is using a +service connection with a service principal that is bestowed with +the privileges to perform these actions.

    +
  2. Alice uses Snowflake to read data from Table1.

    +

    Alice can read data from Table1, because she is using a service +connection with a service principal with a catalog integration that +is bestowed with the privileges to perform this action. Alice +creates an unmanaged table in Snowflake to read data from Table1.

    +
+

Diagram that shows an example workflow for Polaris Catalog

+

Security and access control

This section describes security and access control.

+

Credential vending

+

To secure interactions with service connections, Polaris Catalog vends temporary storage credentials to the query engine during query +execution. These credentials allow the query engine to run the query without needing to have access to your external cloud storage for +Iceberg tables. This process is called credential vending.

+

Identity and access management (IAM)

+

Polaris Catalog uses the identity and access management (IAM) entity to securely connect to your storage for accessing table data, Iceberg +metadata, and manifest files that store the table schema, partitions, and other metadata. Polaris Catalog retains the IAM entity for your +storage location.

+

Access control

+

Polaris Catalog enforces the access control that you configure across all tables registered with the service, and governs security for all +queries from query engines in a consistent manner.

+

Polaris uses a role-based access control (RBAC) model that lets you centrally configure access for Polaris service principals to catalogs, +namespaces, and tables.

+

Polaris RBAC uses two different role types to delegate privileges:

+
    +
  • Principal roles: Granted to Polaris service principals and +analogous to roles in other access control systems that you grant to +service principals.

    +
  • Catalog roles: Configured with certain privileges on Polaris +catalog resources, and granted to principal roles.

    +
+

For more information, see Access control.

+

Polaris Catalog Entities

+ +

This page documents various entities that can be managed in Polaris.

+

Catalog

A catalog is a top-level entity in Polaris that may contain other entities like namespaces and tables. These map directly to Apache Iceberg catalogs.

+

For information on managing catalogs with the REST API or for more information on what data can be associated with a catalog, see the API docs.

+

Storage Type

+

All catalogs in Polaris are associated with a storage type. Valid Storage Types are S3, Azure, and GCS. The FILE type is additionally available for testing. Each of these types relates to a different storage provider where data within the catalog may reside. Depending on the storage type, various other configurations may be set for a catalog including credentials to be used when accessing data inside the catalog.

+

For details on how to use Storage Types in the REST API, see the API docs.

+

Namespace

A namespace is a logical entity that resides within a catalog and can contain other entities such as tables or views. Some other systems may refer to namespaces as schemas or databases.

+

In Polaris, namespaces can be nested up to 16 levels. For example, a.b.c.d.e.f.g is a valid namespace. b is said to reside within a, and so on.
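The nesting depth of a dotted namespace is simply its number of dot-separated parts, which makes the 16-level limit easy to check locally. A sketch using standard shell tools:

```shell
# Count the levels in a dotted namespace and compare against the 16-level limit.
NS="a.b.c.d.e.f.g"
DEPTH=$(printf '%s' "$NS" | awk -F. '{print NF}')
echo "$NS has $DEPTH levels"
[ "$DEPTH" -le 16 ] && echo "within the 16-level limit"
```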

+

For information on managing namespaces with the REST API or for more information on what data can be associated with a namespace, see the API docs.

+

Table

Polaris tables are entities that map to Apache Iceberg tables.

+

For information on managing tables with the REST API or for more information on what data can be associated with a table, see the API docs.

+

View

Polaris views are entities that map to Apache Iceberg views.

+

For information on managing views with the REST API or for more information on what data can be associated with a view, see the API docs.

+

Principal

Polaris principals are unique identities that can be used to represent users or services. Each principal may have one or more principal roles assigned to it for the purpose of accessing catalogs and the entities within them.

+

For information on managing principals with the REST API or for more information on what data can be associated with a principal, see the API docs.

+

Principal Role

Polaris principal roles are labels that may be granted to principals. Each principal may have one or more principal roles, and the same principal role may be granted to multiple principals. Principal roles may be assigned based on the persona or responsibilities of a given principal, or on how that principal will need to access different entities within Polaris.


For information on managing principal roles with the REST API or for more information on what data can be associated with a principal role, see the API docs.


Catalog Role

Polaris catalog roles are labels that may be granted to catalogs. Each catalog may have one or more catalog roles, and the same catalog role may be granted to multiple catalogs. Catalog roles may be assigned based on the nature of data that will reside in a catalog, or by the groups of users and services that might need to access that data.


Each catalog role may have multiple privileges granted to it, and each catalog role can be granted to one or more principal roles. This is the mechanism by which principals are granted access to entities inside a catalog such as namespaces and tables.


Privilege

Polaris privileges are granted to catalog roles in order to give principals holding a given principal role some degree of access to catalogs with a given catalog role. When a privilege is granted to a catalog role, any principal roles granted that catalog role receive the privilege. In turn, any principals who are granted that principal role receive it.


A privilege can be scoped to any entity inside a catalog, including the catalog itself.


For a list of supported privileges for each privilege class, see the API docs.


Access Control


This section provides information about how access control works for Polaris Catalog.

Polaris Catalog uses a role-based access control (RBAC) model, in which the Polaris administrator assigns access privileges to catalog roles, and then grants service principals access to resources by assigning catalog roles to principal roles.

The key concepts for understanding access control in Polaris are:

  • Securable object
  • Principal role
  • Catalog role
  • Privilege

Securable object

A securable object is an object to which access can be granted. Polaris has the following securable objects:

  • Catalog
  • Namespace
  • Iceberg table
  • View

Principal role

A principal role is a resource in Polaris that you can use to logically group Polaris service principals together and grant privileges on securable objects.

Polaris supports a many-to-one relationship between service principals and principal roles. For example, to grant the same privileges to multiple service principals, you can grant a single principal role to those service principals. A service principal can be granted one principal role. When registering a service connection, the Polaris administrator specifies the principal role that is granted to the service principal.

You don't grant privileges directly to a principal role. Instead, you configure object permissions at the catalog role level, and then grant catalog roles to a principal role.

The following table shows examples of principal roles that you might configure in Polaris:

| Principal role name | Description |
| --- | --- |
| Data_engineer | A role that is granted to multiple service principals for running data engineering jobs. |
| Data_scientist | A role that is granted to multiple service principals for running data science or AI jobs. |

Catalog role

A catalog role belongs to a particular catalog resource in Polaris and specifies a set of permissions for actions on the catalog, or on objects in the catalog, such as catalog namespaces or tables. You can create one or more catalog roles for a catalog.

You grant privileges to a catalog role, and then grant the catalog role to a principal role to bestow the privileges to one or more service principals.

Note

If you update the privileges bestowed to a service principal, the updates won't take effect for up to one hour. This means that if you revoke or grant some privileges for a catalog, the updated privileges won't take effect on any service principal with access to that catalog for up to one hour.

Polaris also supports a many-to-many relationship between catalog roles and principal roles. You can grant the same catalog role to one or more principal roles. Likewise, a principal role can be granted one or more catalog roles.

The following table displays examples of catalog roles that you might configure in Polaris:

| Example catalog role | Description |
| --- | --- |
| Catalog administrators | A role that has been granted multiple privileges to emulate full access to the catalog. Principal roles that have been granted this role are permitted to create, alter, read, write, and drop tables in the catalog. |
| Catalog readers | A role that has been granted read-only privileges to tables in the catalog. Principal roles that have been granted this role are allowed to read from tables in the catalog. |
| Catalog contributor | A role that has been granted read and write access privileges to all tables that belong to the catalog. Principal roles that have been granted this role are allowed to perform read and write operations on tables in the catalog. |

RBAC model

The following diagram illustrates the RBAC model used by Polaris Catalog. For each catalog, the Polaris administrator assigns access privileges to catalog roles, and then grants service principals access to resources by assigning catalog roles to principal roles. Polaris supports a many-to-one relationship between service principals and principal roles.


Diagram that shows the RBAC model for Polaris Catalog.


Access control privileges

This section describes the privileges that are available in the Polaris access control model. Privileges are granted to catalog roles, catalog roles are granted to principal roles, and principal roles are granted to service principals to specify the operations that service principals can perform on objects in Polaris.
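
As an illustrative sketch (not Polaris code), the grant chain above (privileges to catalog roles, catalog roles to principal roles, principal roles to principals) can be modeled as a few lookups; all names are hypothetical:

```python
# Illustrative model of the grant chain: privileges flow to a principal
# through its principal roles and their assigned catalog roles.
principal_roles = {"etl_service": ["Data_engineer"]}        # principal -> principal roles
catalog_roles = {"Data_engineer": ["Catalog contributor"]}  # principal role -> catalog roles
privileges = {"Catalog contributor": {"TABLE_CREATE", "TABLE_WRITE_DATA"}}  # catalog role -> privileges

def effective_privileges(principal: str) -> set:
    result = set()
    for prole in principal_roles.get(principal, []):
        for crole in catalog_roles.get(prole, []):
            result |= privileges.get(crole, set())
    return result

print(sorted(effective_privileges("etl_service")))  # ['TABLE_CREATE', 'TABLE_WRITE_DATA']
```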


To grant the full set of privileges (drop, list, read, write, etc.) on an object, you can use the full privilege option.


Table privileges


Note


The TABLE_FULL_METADATA full privilege doesn't include the TABLE_READ_DATA or TABLE_WRITE_DATA individual privileges.

| Full privilege | Individual privilege | Description |
| --- | --- | --- |
| TABLE_FULL_METADATA | TABLE_CREATE | Enables registering a table with the catalog. |
|  | TABLE_DROP | Enables dropping a table from the catalog. |
|  | TABLE_LIST | Enables listing any tables in the catalog. |
|  | TABLE_READ_PROPERTIES | Enables reading properties of the table. |
|  | TABLE_WRITE_PROPERTIES | Enables configuring properties for the table. |
| N/A | TABLE_READ_DATA | Enables reading data from the table by receiving short-lived read-only storage credentials from the catalog. |
| N/A | TABLE_WRITE_DATA | Enables writing data to the table by receiving short-lived read+write storage credentials from the catalog. |

View privileges

| Full privilege | Individual privilege | Description |
| --- | --- | --- |
| VIEW_FULL_METADATA | VIEW_CREATE | Enables registering a view with the catalog. |
|  | VIEW_DROP | Enables dropping a view from the catalog. |
|  | VIEW_LIST | Enables listing any views in the catalog. |
|  | VIEW_READ_PROPERTIES | Enables reading all the view properties. |
|  | VIEW_WRITE_PROPERTIES | Enables configuring view properties. |

Namespace privileges

| Full privilege | Individual privilege | Description |
| --- | --- | --- |
| NAMESPACE_FULL_METADATA | NAMESPACE_CREATE | Enables creating a namespace in a catalog. |
|  | NAMESPACE_DROP | Enables dropping the namespace from the catalog. |
|  | NAMESPACE_LIST | Enables listing any object in the namespace, including nested namespaces and tables. |
|  | NAMESPACE_READ_PROPERTIES | Enables reading all the namespace properties. |
|  | NAMESPACE_WRITE_PROPERTIES | Enables configuring namespace properties. |

Catalog privileges

| Privilege | Description |
| --- | --- |
| CATALOG_MANAGE_ACCESS | Includes the ability to grant or revoke privileges on objects in a catalog to catalog roles, and the ability to grant or revoke catalog roles to or from principal roles. |
| CATALOG_MANAGE_CONTENT | Enables full management of content for the catalog. This privilege encompasses the following privileges: CATALOG_MANAGE_METADATA, TABLE_FULL_METADATA, NAMESPACE_FULL_METADATA, VIEW_FULL_METADATA, TABLE_WRITE_DATA, TABLE_READ_DATA, CATALOG_READ_PROPERTIES, and CATALOG_WRITE_PROPERTIES. |
| CATALOG_MANAGE_METADATA | Enables full management of the catalog, as well as catalog roles, namespaces, and tables. |
| CATALOG_READ_PROPERTIES | Enables listing catalogs and reading properties of the catalog. |
| CATALOG_WRITE_PROPERTIES | Enables configuring catalog properties. |
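
The composite privileges in the tables above can be expanded mechanically. A minimal sketch (illustrative only; note that TABLE_FULL_METADATA deliberately excludes TABLE_READ_DATA and TABLE_WRITE_DATA):

```python
# Illustrative mapping of composite ("full") privileges to what they encompass.
FULL_PRIVILEGES = {
    "TABLE_FULL_METADATA": {"TABLE_CREATE", "TABLE_DROP", "TABLE_LIST",
                            "TABLE_READ_PROPERTIES", "TABLE_WRITE_PROPERTIES"},
    "CATALOG_MANAGE_CONTENT": {"CATALOG_MANAGE_METADATA", "TABLE_FULL_METADATA",
                               "NAMESPACE_FULL_METADATA", "VIEW_FULL_METADATA",
                               "TABLE_WRITE_DATA", "TABLE_READ_DATA",
                               "CATALOG_READ_PROPERTIES", "CATALOG_WRITE_PROPERTIES"},
}

def expand(privilege: str) -> set:
    """Recursively expand a composite privilege into individual privileges."""
    result = set()
    for p in FULL_PRIVILEGES.get(privilege, {privilege}):
        if p in FULL_PRIVILEGES and p != privilege:
            result |= expand(p)  # nested composite, e.g. TABLE_FULL_METADATA
        else:
            result.add(p)
    return result

print("TABLE_CREATE" in expand("CATALOG_MANAGE_CONTENT"))  # True
```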

RBAC example

The following diagram illustrates how RBAC works in Polaris and includes the following users:

  • Alice: A service admin who signs up for Polaris. Alice can create service principals. She can also create catalogs and namespaces, and configure access control for Polaris resources.

Note

The service principal for Alice is not visible in the Polaris Catalog user interface.

  • Bob: A data engineer who uses Snowpipe Streaming (in Snowflake) and Apache Spark connections to interact with Polaris.
      • Alice has created a service principal for Bob. It has been granted the Data_engineer principal role, which in turn has been granted the following catalog roles: Catalog contributor and Data administrator (for both the Silver and Gold zone catalogs in the following diagram).
      • The Catalog contributor role grants permission to create namespaces and tables in the Bronze zone catalog.
      • The Data administrator roles grant full administrative rights to the Silver zone catalog and the Gold zone catalog.
  • Mark: A data scientist who uses Snowflake AI services to interact with Polaris.
      • Alice has created a service principal for Mark. It has been granted the Data_scientist principal role, which in turn has been granted the catalog role named Catalog reader.
      • The Catalog reader role grants read-only access for a catalog named Gold zone catalog.

Diagram that shows an example of how RBAC works in Polaris Catalog.


other

listCatalogs

List all catalogs in this Polaris service

Authorizations:
Polaris_Management_Service_OAuth2

Responses

Response samples

Content type
application/json
{
  • "catalogs": [
    ]
}

createCatalog

Add a new Catalog

Authorizations:
Polaris_Management_Service_OAuth2
Request Body schema: application/json
required

The Catalog to create

required
object (Polaris_Management_Service_Catalog)

A catalog object. A catalog may be internal or external. External catalogs are managed entirely by an external catalog interface. Third party catalogs may be other Iceberg REST implementations or other services with their own proprietary APIs


Responses

Request samples

Content type
application/json
{
  • "catalog": {
    }
}

getCatalog

Get the details of a catalog

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog


Responses

Response samples

Content type
application/json
Example
{
  • "type": "INTERNAL",
  • "name": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0,
  • "storageConfigInfo": {
    }
}
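
The catalogName pattern shown above rejects reserved names that begin with SYSTEM$ (case-insensitive, optionally preceded by whitespace). A hedged client-side pre-check, assuming the same pattern and the 1..256 length bound:

```python
import re

# Same pattern as the catalogName path parameter in the API reference.
CATALOG_NAME_PATTERN = re.compile(r"^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$")

def is_valid_catalog_name(name: str) -> bool:
    return 1 <= len(name) <= 256 and CATALOG_NAME_PATTERN.match(name) is not None

print(is_valid_catalog_name("my_catalog"))     # True
print(is_valid_catalog_name("SYSTEM$hidden"))  # False (reserved prefix)
```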

updateCatalog

Update an existing catalog

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog

Request Body schema: application/json
required

The catalog details to use in the update

currentEntityVersion
integer

The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.

object
object (Polaris_Management_Service_StorageConfigInfo)

A storage configuration used by catalogs


Responses

Request samples

Content type
application/json
{
  • "currentEntityVersion": 0,
  • "properties": {
    },
  • "storageConfigInfo": {
    }
}

Response samples

Content type
application/json
Example
{
  • "type": "INTERNAL",
  • "name": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0,
  • "storageConfigInfo": {
    }
}
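
The currentEntityVersion field above implements optimistic concurrency: the update succeeds only if the supplied version matches the stored entityVersion; otherwise the caller refetches and retries. A minimal sketch of that loop against a stand-in in-memory store (illustrative, not a Polaris client):

```python
class Conflict(Exception):
    """Raised when the caller's version is stale (the 409-style failure)."""

# Stand-in for the server-side catalog record.
catalog = {"entityVersion": 1, "properties": {}}

def update_catalog(current_entity_version: int, properties: dict) -> dict:
    if current_entity_version != catalog["entityVersion"]:
        raise Conflict("stale version; refetch and retry")
    catalog["properties"].update(properties)
    catalog["entityVersion"] += 1
    return catalog

def update_with_retry(properties: dict, attempts: int = 3) -> dict:
    for _ in range(attempts):
        version = catalog["entityVersion"]  # "fetch the latest version"
        try:
            return update_catalog(version, properties)
        except Conflict:
            continue  # someone else won the race; fetch and try again
    raise RuntimeError("gave up after repeated conflicts")

update_with_retry({"owner": "data-platform"})
print(catalog["entityVersion"])  # 2
```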

deleteCatalog

Delete an existing catalog. This is a cascading operation that deletes all metadata, including principals, roles and grants. If the catalog is an internal catalog, all tables and namespaces are dropped without purge.

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog


Responses

listPrincipals

List the principals for the current catalog

Authorizations:
Polaris_Management_Service_OAuth2

Responses

Response samples

Content type
application/json
{
  • "principals": [
    ]
}

createPrincipal

Create a principal

Authorizations:
Polaris_Management_Service_OAuth2
Request Body schema: application/json
required

The principal to create

object (Polaris_Management_Service_Principal)

A Polaris principal.

credentialRotationRequired
boolean

If true, the initial credentials can only be used to call rotateCredentials


Responses

Request samples

Content type
application/json
{
  • "principal": {
    },
  • "credentialRotationRequired": true
}

Response samples

Content type
application/json
{
  • "principal": {
    },
  • "credentials": {
    }
}

getPrincipal

Get the principal details

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal name


Responses

Response samples

Content type
application/json
{
  • "name": "string",
  • "clientId": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0
}

updatePrincipal

Update an existing principal

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal name

Request Body schema: application/json
required

The principal details to use in the update

currentEntityVersion
required
integer

The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.

required
object

Responses

Request samples

Content type
application/json
{
  • "currentEntityVersion": 0,
  • "properties": {
    }
}

Response samples

Content type
application/json
{
  • "name": "string",
  • "clientId": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0
}

deletePrincipal

Remove a principal from Polaris

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal name


Responses

rotateCredentials

Rotate a principal's credentials. The new credentials will be returned in the response. This is the only API, aside from createPrincipal, that returns the user's credentials. This API is not idempotent.

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The user name


Responses

Response samples

Content type
application/json
{
  • "principal": {
    },
  • "credentials": {
    }
}

listPrincipalRolesAssigned

List the roles assigned to the principal

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the target principal


Responses

Response samples

Content type
application/json
{
  • "roles": [
    ]
}

assignPrincipalRole

Add a role to the principal

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the target principal

Request Body schema: application/json
required

The principal role to assign

object (Polaris_Management_Service_PrincipalRole)

Responses

Request samples

Content type
application/json
{
  • "principalRole": {
    }
}

revokePrincipalRole

Remove a role from a catalog principal

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the target principal

principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role


Responses

listPrincipalRoles

List the principal roles

Authorizations:
Polaris_Management_Service_OAuth2

Responses

Response samples

Content type
application/json
{
  • "roles": [
    ]
}

createPrincipalRole

Create a principal role

Authorizations:
Polaris_Management_Service_OAuth2
Request Body schema: application/json
required

The principal to create

object (Polaris_Management_Service_PrincipalRole)

Responses

Request samples

Content type
application/json
{
  • "principalRole": {
    }
}

getPrincipalRole

Get the principal role details

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name


Responses

Response samples

Content type
application/json
{
  • "name": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0
}

updatePrincipalRole

Update an existing principalRole

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name

Request Body schema: application/json
required

The principalRole details to use in the update

currentEntityVersion
required
integer

The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.

required
object

Responses

Request samples

Content type
application/json
{
  • "currentEntityVersion": 0,
  • "properties": {
    }
}

Response samples

Content type
application/json
{
  • "name": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0
}

deletePrincipalRole

Remove a principal role from Polaris

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name


Responses

listAssigneePrincipalsForPrincipalRole

List the Principals to whom the target principal role has been assigned

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name


Responses

Response samples

Content type
application/json
{
  • "principals": [
    ]
}

listCatalogRolesForPrincipalRole

Get the catalog roles mapped to the principal role

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name

catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog where the catalogRoles reside


Responses

Response samples

Content type
application/json
{
  • "roles": [
    ]
}

assignCatalogRoleToPrincipalRole

Assign a catalog role to a principal role

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name

catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog where the catalogRoles reside

Request Body schema: application/json
required

The principal to create

object (Polaris_Management_Service_CatalogRole)

Responses

Request samples

Content type
application/json
{
  • "catalogRole": {
    }
}

revokeCatalogRoleFromPrincipalRole

Remove a catalog role from a principal role

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
principalRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The principal role name

catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog that contains the role to revoke

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog role that should be revoked


Responses

listCatalogRoles

List existing roles in the catalog

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The catalog for which we are reading/updating roles


Responses

Response samples

Content type
application/json
{
  • "roles": [
    ]
}

createCatalogRole

Create a new role in the catalog

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The catalog for which we are reading/updating roles

Request Body schema: application/json
object (Polaris_Management_Service_CatalogRole)

Responses

Request samples

Content type
application/json
{
  • "catalogRole": {
    }
}

getCatalogRole

Get the details of an existing role

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The catalog for which we are retrieving roles

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role


Responses

Response samples

Content type
application/json
{
  • "name": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0
}

updateCatalogRole

Update an existing role in the catalog

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The catalog for which we are retrieving roles

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role

Request Body schema: application/json
currentEntityVersion
required
integer

The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.

required
object

Responses

Request samples

Content type
application/json
{
  • "currentEntityVersion": 0,
  • "properties": {
    }
}

Response samples

Content type
application/json
{
  • "name": "string",
  • "properties": {
    },
  • "createTimestamp": 0,
  • "lastUpdateTimestamp": 0,
  • "entityVersion": 0
}

deleteCatalogRole

Delete an existing role from the catalog. All associated grants will also be deleted

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The catalog for which we are retrieving roles

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role


Responses

listAssigneePrincipalRolesForCatalogRole

List the PrincipalRoles to which the target catalog role has been assigned

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog where the catalog role resides

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog role


Responses

Response samples

Content type
application/json
{
  • "roles": [
    ]
}

listGrantsForCatalogRole

List the grants the catalog role holds

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog where the role will receive the grant

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role receiving the grant (must exist)


Responses

Response samples

Content type
application/json
{
  • "grants": [
    ]
}

addGrantToCatalogRole

Add a new grant to the catalog role

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog where the role will receive the grant

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role receiving the grant (must exist)

Request Body schema: application/json
object (Polaris_Management_Service_GrantResource)

Responses

Request samples

Content type
application/json
{
  • "grant": {
    }
}

revokeGrantFromCatalogRole

Delete a specific grant from the role. This may be a subset or a superset of the grants the role has. In case of a subset, the role will retain the grants not specified. If the cascade parameter is true, grant revocation will have a cascading effect - that is, if a principal has specific grants on a subresource, and grants are revoked on a parent resource, the grants present on the subresource will be revoked as well. By default, this behavior is disabled and grant revocation only affects the specified resource.

Authorizations:
Polaris_Management_Service_OAuth2
path Parameters
catalogName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the catalog where the role will receive the grant

catalogRoleName
required
string [ 1 .. 256 ] characters ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$

The name of the role receiving the grant (must exist)

query Parameters
cascade
boolean
Default: false

If true, the grant revocation cascades to all subresources.

Request Body schema: application/json
object (Polaris_Management_Service_GrantResource)

Responses

Request samples

Content type
application/json
{
  • "grant": {
    }
}

Configuration API

List all catalog configuration settings

All REST clients should first call this route to get catalog configuration properties from the server to configure the catalog and its HTTP client. Configuration from the server consists of two sets of key/value pairs.

  • defaults - properties that should be used as default configuration; applied before client configuration
  • overrides - properties that should be used to override client configuration; applied after defaults and client configuration

Catalog configuration is constructed by setting the defaults, then client-provided configuration, and finally overrides. The final property set is then used to configure the catalog.


For example, a default configuration property might set the size of the client pool, which can be replaced with a client-specific setting. An override might be used to set the warehouse location, which is stored on the server rather than in client configuration.
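
The precedence rule can be sketched as a simple dict merge; the property names and values below are illustrative, not prescribed by the API:

```python
# Defaults apply first, client configuration replaces them, and server
# overrides always win.
defaults = {"clients": "4"}                        # server-provided defaults
client_config = {"clients": "10", "io-impl": "x"}  # client settings
overrides = {"warehouse": "s3://bucket/wh"}        # server has the final say

final = {**defaults, **client_config, **overrides}
print(final)  # {'clients': '10', 'io-impl': 'x', 'warehouse': 's3://bucket/wh'}
```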


Common catalog configuration settings are documented at https://iceberg.apache.org/docs/latest/configuration/#catalog-properties

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2Apache_Iceberg_REST_Catalog_API_BearerAuth
query Parameters
warehouse
string

Warehouse location or identifier to request from the service


Responses

Response samples

Content type
application/json
{
  • "overrides": {
    },
  • "defaults": {
    }
}

OAuth2 API

Get a token using an OAuth2 flow

Exchange credentials for a token using the OAuth2 client credentials flow or token exchange.


This endpoint is used for three purposes -

  1. To exchange client credentials (client ID and secret) for an access token. This uses the client credentials flow.
  2. To exchange a client token and an identity token for a more specific access token. This uses the token exchange flow.
  3. To exchange an access token for one with the same claims and a refreshed expiration period. This uses the token exchange flow.

For example, a catalog client may be configured with client credentials from the OAuth2 Authorization flow. This client would exchange its client ID and secret for an access token using the client credentials request with this endpoint (1). Subsequent requests would then use that access token.


Some clients may also handle sessions that have additional user context. These clients would use the token exchange flow to exchange a user token (the "subject" token) from the session for a more specific access token for that user, using the catalog's access token as the "actor" token (2). The user ID token is the "subject" token and can be any token type allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. This request should use the catalog's bearer token in the "Authorization" header.


Clients may also use the token exchange flow to refresh a token that is about to expire by sending a token exchange request (3). The request's "subject" token should be the expiring token. This request should use the subject token in the "Authorization" header.
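
As a sketch of flow (1), the request body is a URL-encoded form. The credential and scope values below are placeholders, and OAuth2 recommends sending the client ID and secret in a Basic Authorization header rather than in the body:

```python
from urllib.parse import urlencode

# Build a client credentials token request body. Values are placeholders.
form = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
    "scope": "PRINCIPAL_ROLE:ALL",
})
print(form)
# grant_type=client_credentials&client_id=my-client-id&client_secret=my-client-secret&scope=PRINCIPAL_ROLE%3AALL
```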

Authorizations:
Apache_Iceberg_REST_Catalog_API_BearerAuth
Request Body schema: application/x-www-form-urlencoded
required
Any of
grant_type
required
string
Value: "client_credentials"
scope
string
client_id
required
string

Client ID


This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.

client_secret
required
string

Client secret


This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.


Responses

Response samples

Content type
application/json
{
  • "access_token": "string",
  • "token_type": "bearer",
  • "expires_in": 0,
  • "issued_token_type": "urn:ietf:params:oauth:token-type:access_token",
  • "refresh_token": "string",
  • "scope": "string"
}

Catalog API

List namespaces, optionally providing a parent namespace to list underneath

List all namespaces at a certain level, optionally starting from a given parent namespace. If table accounting.tax.paid.info exists, using 'SELECT NAMESPACE IN accounting' would translate into GET /namespaces?parent=accounting and must return a namespace, ["accounting", "tax"] only. Using 'SELECT NAMESPACE IN accounting.tax' would translate into GET /namespaces?parent=accounting%1Ftax and must return a namespace, ["accounting", "tax", "paid"]. If parent is not provided, all top-level namespaces should be listed.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

query Parameters
pageToken
string or null (Apache_Iceberg_REST_Catalog_API_PageToken)

An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter pageToken to the server. Servers that support pagination should identify the pageToken parameter and return a next-page-token in the response if there are more results available. After the initial request, the value of next-page-token from each response must be used as the pageToken parameter value for the next request. The server must return null value for the next-page-token in the last response. Servers that support pagination must return all results in a single response with the value of next-page-token set to null if the query parameter pageToken is not set in the request. Servers that do not support pagination should ignore the pageToken parameter and return all results in a single response. The next-page-token must be omitted from the response. Clients must interpret either null or missing response value of next-page-token as the end of the listing results.

+
pageSize
integer >= 1

For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated pageSize.
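
Taken together, the pageToken / next-page-token contract amounts to a simple client loop. A minimal Python sketch, where `fetch_page` is a hypothetical stand-in for the HTTP GET:

```python
def list_all(fetch_page):
    """Drain a paginated list endpoint.

    fetch_page(page_token) stands in for the HTTP call and returns a
    dict shaped like the list responses, e.g.
    {"namespaces": [...], "next-page-token": "..." or None}.
    """
    results, token = [], ""
    while True:
        page = fetch_page(token)  # the first request sends an empty pageToken
        results.extend(page.get("namespaces", []))
        token = page.get("next-page-token")
        if not token:  # null or missing next-page-token ends the listing
            return results

# Fake two-page server for illustration
pages = {
    "": {"namespaces": [["accounting"]], "next-page-token": "t1"},
    "t1": {"namespaces": [["marketing"]], "next-page-token": None},
}
print(list_all(pages.__getitem__))  # [['accounting'], ['marketing']]
```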

parent
string
Example: parent=accounting%1Ftax

An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (0x1F) byte.


Responses

Response samples

Content type
application/json
Example
{
  "namespaces": []
}

Create a namespace

Create a namespace, with an optional set of properties. The server might also add properties, such as last_modified_time.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

Request Body schema: application/json
required
namespace
required
Array of strings (Apache_Iceberg_REST_Catalog_API_Namespace)

Reference to one or more levels of a namespace

properties
object
Default: {}

Configured string to string map of properties for the namespace

Responses

Request samples

Content type
application/json
{
  "namespace": [],
  "properties": {}
}

Response samples

Content type
application/json
{
  "namespace": [],
  "properties": {}
}

Load the metadata properties for a namespace

Return all stored metadata properties for a given namespace

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.


Responses

Response samples

Content type
application/json
{
  "namespace": [],
  "properties": {}
}

Check if a namespace exists

Check if a namespace exists. The response does not contain a body.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.


Responses

Response samples

Content type
application/json
{
  "error": {}
}

Drop a namespace from the catalog. Namespace must be empty.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.


Responses

Response samples

Content type
application/json
{
  "error": {}
}

Set or remove properties on a namespace

Set and/or remove properties on a namespace. The request body specifies a list of properties to remove and a map of key value pairs to update.

Properties that are not in the request are not modified or removed by this call.

Server implementations are not required to support namespace properties.
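
The updated, removed, and missing fields of the response follow from how the server applies the request. A Python sketch of the assumed semantics (illustrative only, not the reference implementation):

```python
def apply_properties_update(props, removals, updates):
    # Properties not named in removals or updates are left untouched.
    new_props = {k: v for k, v in props.items() if k not in removals}
    new_props.update(updates)
    response = {
        "updated": sorted(updates),
        "removed": [k for k in removals if k in props],
        # keys requested for removal that were not present
        "missing": [k for k in removals if k not in props],
    }
    return new_props, response

new_props, resp = apply_properties_update(
    {"owner": "a", "ttl": "7d"},   # current namespace properties
    removals=["ttl", "nope"],      # keys to drop
    updates={"owner": "b"},        # keys to set
)
print(new_props)  # {'owner': 'b'}
print(resp)       # {'updated': ['owner'], 'removed': ['ttl'], 'missing': ['nope']}
```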

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

Request Body schema: application/json
required
removals
Array of strings unique
updates
object

Responses

Request samples

Content type
application/json
{
  "removals": [],
  "updates": {}
}

Response samples

Content type
application/json
{
  "updated": [],
  "removed": [],
  "missing": []
}

List all table identifiers underneath a given namespace

Return all table identifiers under this namespace

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

query Parameters
pageToken
string or null (Apache_Iceberg_REST_Catalog_API_PageToken)

An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter pageToken to the server.

Servers that support pagination should identify the pageToken parameter and return a next-page-token in the response if there are more results available. After the initial request, the value of next-page-token from each response must be used as the pageToken parameter value for the next request. The server must return null value for the next-page-token in the last response.

Servers that support pagination must return all results in a single response with the value of next-page-token set to null if the query parameter pageToken is not set in the request.

Servers that do not support pagination should ignore the pageToken parameter and return all results in a single response. The next-page-token must be omitted from the response.

Clients must interpret either null or missing response value of next-page-token as the end of the listing results.

pageSize
integer >= 1

For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated pageSize.


Responses

Response samples

Content type
application/json
Example
{
  "identifiers": []
}

Create a table in the given namespace

Create a table or start a create transaction, like atomic CTAS.


If stage-create is false, the table is created immediately.


If stage-create is true, the table is not created, but table metadata is initialized and returned. The service should prepare as needed for a commit to the table commit endpoint to complete the create transaction. The client uses the returned metadata to begin a transaction. To commit the transaction, the client sends all create and subsequent changes to the table commit route. Changes from the table create operation include changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the initial table state.
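
For illustration, the two modes differ only in the stage-create flag of the request body. All field values below are hypothetical:

```python
import json

# Hypothetical CreateTableRequest bodies for the two modes described above.
create_now = {
    "name": "sales",
    "schema": {"type": "struct", "schema-id": 0, "fields": []},
    "stage-create": False,  # table is created immediately
}

# Staged create: metadata is initialized and returned, but nothing is
# created until the client commits to the table commit endpoint with an
# assert-create requirement plus the accumulated metadata updates
# (e.g. AddSchemaUpdate, SetCurrentSchemaUpdate).
staged = {**create_now, "stage-create": True}

print(json.dumps(staged))
```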

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

header Parameters
X-Iceberg-Access-Delegation
string
Enum: "vended-credentials" "remote-signing"
Example: vended-credentials,remote-signing

Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms.


Specific properties and handling for vended-credentials is documented in the LoadTableResult schema section of this spec document.


The protocol and specification for remote-signing is documented in the s3-signer-open-api.yaml OpenApi spec in the aws module.
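
Because the header carries a comma-separated list and the server may honor any subset of the requested mechanisms, server-side handling can be sketched as follows (assumed behavior, not the actual implementation):

```python
KNOWN_MECHANISMS = {"vended-credentials", "remote-signing"}

def parse_access_delegation(header_value):
    # Split the comma-separated X-Iceberg-Access-Delegation value; the
    # server may honor any subset (or none) of the requested mechanisms.
    requested = [m.strip() for m in header_value.split(",") if m.strip()]
    return [m for m in requested if m in KNOWN_MECHANISMS]

print(parse_access_delegation("vended-credentials, remote-signing"))
# ['vended-credentials', 'remote-signing']
```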

Request Body schema: application/json
required
name
required
string
location
string
schema
required
object (Apache_Iceberg_REST_Catalog_API_Schema)
partition-spec
object (Apache_Iceberg_REST_Catalog_API_PartitionSpec)
write-order
object (Apache_Iceberg_REST_Catalog_API_SortOrder)
stage-create
boolean
properties
object

Responses

Request samples

Content type
application/json
{
  "name": "string",
  "location": "string",
  "schema": {},
  "partition-spec": {},
  "write-order": {},
  "stage-create": true,
  "properties": {}
}

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {},
  "config": {}
}

Register a table in the given namespace using given metadata file location

Register a table using given metadata file location.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

Request Body schema: application/json
required
name
required
string
metadata-location
required
string

Responses

Request samples

Content type
application/json
{
  "name": "string",
  "metadata-location": "string"
}

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {},
  "config": {}
}

Load a table from the catalog

Load a table from the catalog.


The response contains both configuration and table metadata. The configuration, if non-empty is used as additional configuration for the table that overrides catalog configuration. For example, this configuration may change the FileIO implementation to be used for the table.


The response also contains the table's full metadata, matching the table metadata JSON file.


The catalog configuration may contain credentials that should be used for subsequent requests for the table. The configuration key "token" is used to pass an access token to be used as a bearer token for table requests. Otherwise, a token may be passed using an RFC 8693 token type as a configuration key. For example, "urn:ietf:params:oauth:token-type:jwt=".
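
A client consuming the returned config might select a credential for subsequent table requests like this sketch (the precedence order is an assumption; only the "token" key and RFC 8693 token-type keys come from the text above):

```python
RFC8693_PREFIX = "urn:ietf:params:oauth:token-type:"

def table_request_token(config):
    # Prefer the plain "token" key for bearer auth on table requests;
    # otherwise fall back to any RFC 8693 token-type key (assumed order).
    if "token" in config:
        return config["token"]
    for key, value in config.items():
        if key.startswith(RFC8693_PREFIX):
            return value
    return None

print(table_request_token({RFC8693_PREFIX + "jwt": "example-jwt"}))  # example-jwt
```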

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

table
required
string
Example: sales

A table name

query Parameters
snapshots
string
Enum: "all" "refs"

The snapshots to return in the body of the metadata. Setting the value to all would return the full set of snapshots currently valid for the table. Setting the value to refs would load all snapshots referenced by branches or tags.

Default if no param is provided is all.

header Parameters
X-Iceberg-Access-Delegation
string
Enum: "vended-credentials" "remote-signing"
Example: vended-credentials,remote-signing

Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms.


Specific properties and handling for vended-credentials is documented in the LoadTableResult schema section of this spec document.


The protocol and specification for remote-signing is documented in the s3-signer-open-api.yaml OpenApi spec in the aws module.


Responses

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {},
  "config": {}
}

Commit updates to a table

Commit updates to a table.


Commits have two parts, requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, assert-ref-snapshot-id will check that a named ref's snapshot ID has a certain value.


Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id.


Create table transactions that are started by createTable with stage-create set to true are committed using this route. Transactions should include all changes to the table, including table initialization, like AddSchemaUpdate and SetCurrentSchemaUpdate. The assert-create requirement is used to ensure that the table was not created concurrently.
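
Putting the requirement/update split together, a commit body that advances the main branch might look like the following sketch (identifier, snapshot IDs, and field values are made up for illustration):

```python
import json

# Hypothetical CommitTableRequest body: assert that the 'main' branch still
# points at the expected snapshot, then add a snapshot and advance the ref.
commit = {
    "identifier": {"namespace": ["accounting", "tax"], "name": "paid"},
    "requirements": [
        {"type": "assert-ref-snapshot-id", "ref": "main", "snapshot-id": 1001}
    ],
    "updates": [
        {"action": "add-snapshot", "snapshot": {"snapshot-id": 1002}},
        {"action": "set-snapshot-ref", "ref-name": "main",
         "type": "branch", "snapshot-id": 1002},
    ],
}
body = json.dumps(commit)
```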

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

table
required
string
Example: sales

A table name

Request Body schema: application/json
required
identifier
object (Apache_Iceberg_REST_Catalog_API_TableIdentifier)
requirements
required
Array of objects (Apache_Iceberg_REST_Catalog_API_TableRequirement)
updates
required
Array of Apache_Iceberg_REST_Catalog_API_AssignUUIDUpdate (object) or Apache_Iceberg_REST_Catalog_API_UpgradeFormatVersionUpdate (object) or Apache_Iceberg_REST_Catalog_API_AddSchemaUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetCurrentSchemaUpdate (object) or Apache_Iceberg_REST_Catalog_API_AddPartitionSpecUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetDefaultSpecUpdate (object) or Apache_Iceberg_REST_Catalog_API_AddSortOrderUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetDefaultSortOrderUpdate (object) or Apache_Iceberg_REST_Catalog_API_AddSnapshotUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetSnapshotRefUpdate (object) or Apache_Iceberg_REST_Catalog_API_RemoveSnapshotsUpdate (object) or Apache_Iceberg_REST_Catalog_API_RemoveSnapshotRefUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetLocationUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetPropertiesUpdate (object) or Apache_Iceberg_REST_Catalog_API_RemovePropertiesUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetStatisticsUpdate (object) or Apache_Iceberg_REST_Catalog_API_RemoveStatisticsUpdate (object) (Apache_Iceberg_REST_Catalog_API_TableUpdate)

Responses

Request samples

Content type
application/json
{
  "identifier": {},
  "requirements": [],
  "updates": []
}

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {}
}

Drop a table from the catalog

Remove a table from the catalog

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

table
required
string
Example: sales

A table name

query Parameters
purgeRequested
boolean
Default: false

Whether the user requested to purge the underlying table's data and metadata


Responses

Response samples

Content type
application/json
{
  "error": {}
}

Check if a table exists

Check if a table exists within a given namespace. The response does not contain a body.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

table
required
string
Example: sales

A table name


Responses

Response samples

Content type
application/json
{
  "error": {}
}

Rename a table from its current name to a new name

Rename a table from one identifier to another. It's valid to move a table across namespaces, but the server implementation is not required to support it.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

Request Body schema: application/json
required

Current table identifier to rename and new table identifier to rename to

source
required
object (Apache_Iceberg_REST_Catalog_API_TableIdentifier)
destination
required
object (Apache_Iceberg_REST_Catalog_API_TableIdentifier)

Responses

Request samples

Content type
application/json
{
  "source": {},
  "destination": {}
}

Response samples

Content type
application/json
{
  "error": {}
}

Send a metrics report to this endpoint to be processed by the backend

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

table
required
string
Example: sales

A table name

Request Body schema: application/json
required

The request containing the metrics report to be sent

Any of
table-name
required
string
snapshot-id
required
integer <int64>
filter
required
Apache_Iceberg_REST_Catalog_API_AndOrExpression (object) or Apache_Iceberg_REST_Catalog_API_NotExpression (object) or Apache_Iceberg_REST_Catalog_API_SetExpression (object) or Apache_Iceberg_REST_Catalog_API_LiteralExpression (object) or Apache_Iceberg_REST_Catalog_API_UnaryExpression (object) (Apache_Iceberg_REST_Catalog_API_Expression)
schema-id
required
integer
projected-field-ids
required
Array of integers
projected-field-names
required
Array of strings
metrics
required
object (Apache_Iceberg_REST_Catalog_API_Metrics)
metadata
object
report-type
required
string

Responses

Request samples

Content type
application/json
Example
{
  "table-name": "string",
  "snapshot-id": 0,
  "filter": {},
  "schema-id": 0,
  "projected-field-ids": [],
  "projected-field-names": [],
  "metrics": {},
  "metadata": {},
  "report-type": "string"
}

Response samples

Content type
application/json
{
  "error": {}
}

Sends a notification to the table

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

table
required
string
Example: sales

A table name

Request Body schema: application/json
required

The request containing the notification to be sent

notification-type
required
string (Apache_Iceberg_REST_Catalog_API_NotificationType)
Enum: "UNKNOWN" "CREATE" "UPDATE" "DROP"
object (Apache_Iceberg_REST_Catalog_API_TableUpdateNotification)

Responses

Request samples

Content type
application/json
{
  "notification-type": "UNKNOWN",
  "payload": {}
}

Response samples

Content type
application/json
{
  "error": {}
}

Commit updates to multiple tables in an atomic operation

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

Request Body schema: application/json
required

Commit updates to multiple tables in an atomic operation


A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, assert-ref-snapshot-id will check that a named ref's snapshot ID has a certain value.


Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id.

table-changes
required
Array of objects (Apache_Iceberg_REST_Catalog_API_CommitTableRequest)

Responses

Request samples

Content type
application/json
{
  "table-changes": []
}

Response samples

Content type
application/json
{
  "error": {}
}

List all view identifiers underneath a given namespace

Return all view identifiers under this namespace

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

query Parameters
pageToken
string or null (Apache_Iceberg_REST_Catalog_API_PageToken)

An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter pageToken to the server.

Servers that support pagination should identify the pageToken parameter and return a next-page-token in the response if there are more results available. After the initial request, the value of next-page-token from each response must be used as the pageToken parameter value for the next request. The server must return null value for the next-page-token in the last response.

Servers that support pagination must return all results in a single response with the value of next-page-token set to null if the query parameter pageToken is not set in the request.

Servers that do not support pagination should ignore the pageToken parameter and return all results in a single response. The next-page-token must be omitted from the response.

Clients must interpret either null or missing response value of next-page-token as the end of the listing results.

pageSize
integer >= 1

For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated pageSize.


Responses

Response samples

Content type
application/json
Example
{
  "identifiers": []
}

Create a view in the given namespace

Create a view in the given namespace.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

Request Body schema: application/json
required
name
required
string
location
string
schema
required
object (Apache_Iceberg_REST_Catalog_API_Schema)
view-version
required
object (Apache_Iceberg_REST_Catalog_API_ViewVersion)
properties
required
object

Responses

Request samples

Content type
application/json
{
  "name": "string",
  "location": "string",
  "schema": {},
  "view-version": {},
  "properties": {}
}

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {},
  "config": {}
}

Load a view from the catalog

Load a view from the catalog.


The response contains both configuration and view metadata. The configuration, if non-empty is used as additional configuration for the view that overrides catalog configuration.


The response also contains the view's full metadata, matching the view metadata JSON file.


The catalog configuration may contain credentials that should be used for subsequent requests for the view. The configuration key "token" is used to pass an access token to be used as a bearer token for view requests. Otherwise, a token may be passed using an RFC 8693 token type as a configuration key. For example, "urn:ietf:params:oauth:token-type:jwt=".

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

view
required
string
Example: sales

A view name


Responses

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {},
  "config": {}
}

Replace a view

Commit updates to a view.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

view
required
string
Example: sales

A view name

Request Body schema: application/json
required
identifier
object (Apache_Iceberg_REST_Catalog_API_TableIdentifier)
requirements
Array of objects (Apache_Iceberg_REST_Catalog_API_ViewRequirement)
updates
required
Array of Apache_Iceberg_REST_Catalog_API_AssignUUIDUpdate (object) or Apache_Iceberg_REST_Catalog_API_UpgradeFormatVersionUpdate (object) or Apache_Iceberg_REST_Catalog_API_AddSchemaUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetLocationUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetPropertiesUpdate (object) or Apache_Iceberg_REST_Catalog_API_RemovePropertiesUpdate (object) or Apache_Iceberg_REST_Catalog_API_AddViewVersionUpdate (object) or Apache_Iceberg_REST_Catalog_API_SetCurrentViewVersionUpdate (object) (Apache_Iceberg_REST_Catalog_API_ViewUpdate)

Responses

Request samples

Content type
application/json
{
  "identifier": {},
  "requirements": [],
  "updates": []
}

Response samples

Content type
application/json
{
  "metadata-location": "string",
  "metadata": {},
  "config": {}
}

Drop a view from the catalog

Remove a view from the catalog

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

view
required
string
Example: sales

A view name


Responses

Response samples

Content type
application/json
{
  "error": {}
}

Check if a view exists

Check if a view exists within a given namespace. This request does not return a response body.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

namespace
required
string
Examples:
  • accounting
  • accounting%1Ftax

A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (0x1F) byte.

view
required
string
Example: sales

A view name


Responses

Response samples

Content type
application/json
{
  "error": {}
}

Rename a view from its current name to a new name

Rename a view from one identifier to another. It's valid to move a view across namespaces, but the server implementation is not required to support it.

Authorizations:
Apache_Iceberg_REST_Catalog_API_OAuth2, Apache_Iceberg_REST_Catalog_API_BearerAuth
path Parameters
prefix
required
string

An optional prefix in the path

Request Body schema: application/json
required

Current view identifier to rename and new view identifier to rename to

source
required
object (Apache_Iceberg_REST_Catalog_API_TableIdentifier)
destination
required
object (Apache_Iceberg_REST_Catalog_API_TableIdentifier)

Responses

Request samples

Content type
application/json
{
  "source": {},
  "destination": {}
}

Response samples

Content type
application/json
{
  "error": {}
}
diff --git a/docs/overview.md b/docs/overview.md
new file mode 100644
index 0000000000..39935a94c5
--- /dev/null
+++ b/docs/overview.md

Polaris Catalog is a catalog implementation for Apache Iceberg built on the open source Apache Iceberg REST protocol.

With Polaris Catalog, you can provide centralized, secure read and write access to your Iceberg tables across different REST-compatible query engines.

![Conceptual diagram of Polaris Catalog.](./img/overview.svg "Polaris Catalog overview")

## Key concepts

This section introduces key concepts associated with using Polaris Catalog.

In the following diagram, a sample [Polaris Catalog structure](./overview.md#catalog) with nested [namespaces](./overview.md#namespace) is shown for Catalog1. No tables or namespaces have been created yet for Catalog2 or Catalog3:

![Diagram that shows an example Polaris Catalog structure.](./img/sample-catalog-structure.svg "Sample Polaris Catalog structure")

### Catalog

In Polaris Catalog, you can create one or more catalog resources to organize Iceberg tables.

Configure your catalog by setting values in the storage configuration for S3, Azure, or Google Cloud Storage. An Iceberg catalog enables a query engine to manage and organize tables. The catalog forms the first architectural layer in the [Iceberg table specification](https://iceberg.apache.org/spec/#overview) and must support:

- Storing the current metadata pointer for one or more Iceberg tables. A metadata pointer maps a table name to the location of that table's current metadata file.

- Performing atomic operations so that you can update the current metadata pointer for a table to the metadata pointer of a new version of the table.

To learn more about Iceberg catalogs, see the [Apache Iceberg documentation](https://iceberg.apache.org/concepts/catalog/).
#### Catalog types

A catalog can be one of the following two types:

- Internal: The catalog is managed by Polaris. Tables from this catalog can be read and written in Polaris.

- External: The catalog is externally managed by another Iceberg catalog provider (for example, Snowflake, Glue, Dremio Arctic). Tables from this catalog are synced to Polaris. These tables are read-only in Polaris. In the current release, only Snowflake external catalogs are supported.

A catalog is configured with a storage configuration that can point to S3, Azure storage, or GCS.

To create a new catalog, see [Create a catalog](./create-a-catalog.md "Create a catalog").

### Namespace

You create *namespaces* to logically group Iceberg tables within a catalog. A catalog can have one or more namespaces. You can also create nested namespaces. Iceberg tables belong to namespaces.

### Iceberg tables & catalogs

In an internal catalog, an Iceberg table is registered in Polaris Catalog, but read and written via query engines. The table data and metadata is stored in your external cloud storage. The table uses Polaris Catalog as the Iceberg catalog.

If you have tables that use Snowflake as the Iceberg catalog (Snowflake-managed tables), you can sync these tables to an external catalog in Polaris Catalog. If you sync this catalog to Polaris Catalog, it appears as an external catalog in Polaris Catalog. The table data and metadata is stored in your external cloud storage. The Snowflake query engine can read from or write to these tables. However, other query engines can only read from these tables.

**Important**

To ensure that the access privileges defined for a catalog are enforced correctly, you must:

- Ensure a directory only contains the data files that belong to a single table.

- Create a directory hierarchy that matches the namespace hierarchy for the catalog.
+ +For example, if a catalog includes: + +- Top-level namespace namespace1 + +- Nested namespace namespace1a + +- A customers table, which is grouped under nested namespace + namespace1a + +- An orders table, which is grouped under nested namespace namespace1a + +The directory hierarchy for the catalog must be: + +- /namespace1/namespace1a/customers/ + +- /namespace1/namespace1a/orders/ + +### Service principal + +A service principal is an entity that you create in Polaris Catalog. Each service principal encapsulates credentials that you use to connect +to Polaris Catalog. + +Query engines use service principals to connect to catalogs. + +Polaris Catalog generates a Client ID and Client Secret pair for each service principal. + +The following table displays example service principals that you might create in Polaris Catalog: + + | Service connection name | Description | + | --------------------------- | ----------- | + | Flink ingestion | For Apache Flink to ingest streaming data into Iceberg tables. | + | Spark ETL pipeline | For Apache Spark to run ETL pipeline jobs on Iceberg tables. | + | Snowflake data pipelines | For Snowflake to run data pipelines for transforming data in Iceberg tables. | + | Trino BI dashboard | For Trino to run BI queries for powering a dashboard. | + | Snowflake AI team | For Snowflake to run AI jobs on data in Iceberg tables. | + +### Service connection + +A service connection represents a REST-compatible engine (such as Apache Spark, Apache Flink, or Trino) that can read from and write to Polaris +Catalog. When creating a new service connection, the Polaris administrator grants either a new or an existing principal role to the service +principal that is created with the new service connection. A principal role is a resource in Polaris that you can use to logically group Polaris +service principals together and grant privileges on securable objects.
For more information, see [Principal role](./access-control.md#principal-role "Principal role"). Polaris Catalog uses a role-based access control (RBAC) model to grant service principals access to resources. For more information, +see [Access control](./access-control.md "Access control"). For a diagram of this model, see [RBAC model](./access-control.md#rbac-model "RBAC model"). + +If the Polaris administrator grants a new principal role to the service principal for the new service connection, the service principal +doesn't have any privileges granted to it yet. When securing the catalog that the new service connection will connect to, the Polaris +administrator grants privileges to catalog roles and then grants these catalog roles to the new principal role. As a result, the service +principal for the new service connection is bestowed with these privileges. For more information about catalog roles, see [Catalog role](./access-control.md#catalog-role "Catalog role"). + +If the Polaris administrator grants an existing principal role to the service principal for the new service connection, the service principal +is bestowed with the privileges granted to the catalog roles that are granted to the existing principal role. If needed, the Polaris +administrator can grant additional catalog roles to the existing principal role or remove catalog roles from it to adjust the privileges +bestowed to the service principal. For an example of how RBAC works in Polaris, see [RBAC example](./access-control.md#rbac-example "RBAC example"). + +### Storage configuration + +A storage configuration stores a generated identity and access management (IAM) entity for your external cloud storage and is created +when you create a catalog. The storage configuration is used to set the values to connect Polaris Catalog to your cloud storage.
During the +catalog creation process, an IAM entity is generated and used to create a trust relationship between the cloud storage provider and Polaris +Catalog. + +When you create a catalog, you supply the following information about your external cloud storage: + +| Cloud storage provider | Information | +| -----------------------| ----------- | +| Amazon S3 |
  • Default base location for your Amazon S3 bucket
  • Locations for your Amazon S3 bucket
  • S3 role ARN
  • External ID (optional)
| +| Google Cloud Storage (GCS) |
  • Default base location for your GCS bucket
  • Locations for your GCS bucket
| +| Azure |
  • Default base location for your Microsoft Azure container
  • Locations for your Microsoft Azure container
  • Azure tenant ID
| + +## Example workflow + +In the following example workflow, Bob creates an Iceberg table named Table1 and Alice reads data from Table1. + +1. Bob uses Apache Spark to create the Table1 table under the + Namespace1 namespace in the Catalog1 catalog and insert values into + Table1. + + Bob can create Table1 and insert data into it, because he is using a + service connection with a service principal that is bestowed with + the privileges to perform these actions. + +2. Alice uses Snowflake to read data from Table1. + + Alice can read data from Table1, because she is using a service + connection with a service principal with a catalog integration that + is bestowed with the privileges to perform this action. Alice + creates an unmanaged table in Snowflake to read data from Table1. + +![Diagram that shows an example workflow for Polaris Catalog](./img/example-workflow.svg "Example workflow for Polaris Catalog") + +## Security and access control + +This section describes security and access control. + +### Credential vending + +To secure interactions with service connections, Polaris Catalog vends temporary storage credentials to the query engine during query +execution. These credentials allow the query engine to run the query without needing to have access to your external cloud storage for +Iceberg tables. This process is called credential vending. + +### Identity and access management (IAM) + +Polaris Catalog uses the identity and access management (IAM) entity to securely connect to your storage for accessing table data, Iceberg +metadata, and manifest files that store the table schema, partitions, and other metadata. Polaris Catalog retains the IAM entity for your +storage location. + +### Access control + +Polaris Catalog enforces the access control that you configure across all tables registered with the service, and governs security for all +queries from query engines in a consistent manner. 
+ +Polaris uses a role-based access control (RBAC) model that lets you centrally configure access for Polaris service principals to catalogs, +namespaces, and tables. + +Polaris RBAC uses two different role types to delegate privileges: + +- **Principal roles:** Granted to Polaris service principals and + analogous to roles in other access control systems that you grant to + service principals. + +- **Catalog roles:** Configured with certain privileges on Polaris + catalog resources, and granted to principal roles. + +For more information, see [Access control](./access-control.md "Access control"). + diff --git a/docs/quickstart.md b/docs/quickstart.md new file mode 100644 index 0000000000..172c299267 --- /dev/null +++ b/docs/quickstart.md @@ -0,0 +1,327 @@ + + +# Quick Start + +This guide serves as an introduction to several key entities that can be managed with Polaris, describes how to build and deploy Polaris locally, and finally includes examples of how to use Polaris with Spark and Trino. + +## Prerequisites + +This guide covers building Polaris, deploying it locally or via [Docker](https://www.docker.com/), and interacting with it using the command-line interface and [Apache Spark](https://spark.apache.org/). Before proceeding with Polaris, be sure to satisfy the relevant prerequisites listed here. + +### Building and Deploying Polaris + +To get the latest Polaris code, you'll need to clone the repository using [git](https://git-scm.com/). You can install git using [homebrew](https://brew.sh/): + +``` +brew install git +``` + +Then, use git to clone the Polaris repo: + +``` +cd ~ +git clone https://github.com/polaris-catalog/polaris.git +``` + +#### With Docker + +If you plan to deploy Polaris inside [Docker](https://www.docker.com/), you'll need to install Docker itself. This can be done using [homebrew](https://brew.sh/): + +``` +brew install docker +``` + +Once installed, make sure Docker is running.
This can be done on macOS with: + +``` +open -a Docker +``` + +#### From Source + +If you plan to build Polaris from source yourself, you will need to satisfy a few prerequisites first. + +Polaris is built using [gradle](https://gradle.org/) and is compatible with Java 21. We recommend the use of [jenv](https://www.jenv.be/) to manage multiple Java versions. For example, to install Java 21 via [homebrew](https://brew.sh/) and configure it with jenv: + +``` +cd ~/polaris +brew install openjdk@21 gradle@8 jenv +jenv add $(brew --prefix openjdk@21) +jenv local 21 +``` + +### Connecting to Polaris + +Polaris is compatible with any [Apache Iceberg](https://iceberg.apache.org/) client that supports the REST API. Depending on the client you plan to use, refer to the prerequisites below. + +#### With Spark + +If you want to connect to Polaris with [Apache Spark](https://spark.apache.org/), you'll need to start by cloning Spark. As [above](#building-and-deploying-polaris), make sure [git](https://git-scm.com/) is installed first. You can install it with [homebrew](https://brew.sh/): + +``` +brew install git +``` + +Then, clone Spark and check out a versioned branch. This guide uses [Spark 3.5.0](https://spark.apache.org/releases/spark-release-3-5-0.html). + +``` +cd ~ +git clone https://github.com/apache/spark.git +cd ~/spark +git checkout branch-3.5.0 +``` + +## Deploying Polaris + +Polaris can be deployed via a lightweight docker image or as a standalone process. Before starting, be sure that you've satisfied the relevant [prerequisites](#building-and-deploying-polaris) detailed above. + +### Docker Image + +To start using Polaris in Docker, launch Polaris while Docker is running: + +``` +cd ~/polaris +docker compose -f docker-compose.yml up --build +``` + +Once the `polaris-polaris` container is up, you can continue to [Defining a Catalog](#defining-a-catalog).
+ +### Building Polaris + +Run Polaris locally with: + +``` +cd ~/polaris +./gradlew runApp +``` + +You should see output for some time as Polaris builds and starts up. Eventually, you won’t see any more logs and should see messages that resemble the following: + +``` +INFO [...] [main] [] o.e.j.s.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@... +INFO [...] [main] [] o.e.j.server.AbstractConnector: Started application@... +INFO [...] [main] [] o.e.j.server.AbstractConnector: Started admin@... +INFO [...] [main] [] o.eclipse.jetty.server.Server: Started Server@... +``` + +At this point, Polaris is running. + +## Bootstrapping Polaris + +For this tutorial, we'll launch an instance of Polaris that stores entities only in-memory. This means that any entities that you define will be destroyed when Polaris is shut down. It also means that Polaris will automatically bootstrap itself with root credentials. For more information on how to configure Polaris for production usage, see the [docs](./configuring-polaris-for-production.md). + +When Polaris is launched using in-memory mode, the root `CLIENT_ID` and `CLIENT_SECRET` can be found in stdout on initial startup. For example: + +``` +Bootstrapped with credentials: {"client-id": "XXXX", "client-secret": "YYYY"} +``` + +Be sure to make a note of these credentials, as we'll be using them below. + +## Defining a Catalog + +In Polaris, the [catalog](./entities/catalog.md) is the top-level entity that objects like [tables](./entities.md#table) and [views](./entities.md#view) are organized under. With a Polaris service running, you can create a catalog like so: + +``` +cd ~/polaris + +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + catalogs \ + create \ + --storage-type s3 \ + --default-base-location ${DEFAULT_BASE_LOCATION} \ + --role-arn ${ROLE_ARN} \ + quickstart_catalog +``` + +This will create a new catalog called **quickstart_catalog**.
+ +The `DEFAULT_BASE_LOCATION` you provide will be the default location that objects in this catalog should be stored in, and the `ROLE_ARN` you provide should be a [Role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) with access to read and write data in that location. These credentials will be provided to engines reading data from the catalog once they have authenticated with Polaris using credentials that have access to those resources. + +If you’re using a storage type other than S3, such as Azure, you’ll provide a different type of credential than a Role ARN. For more details on supported storage types, see the [docs](./entities.md#storage-type). + +Additionally, if Polaris is running somewhere other than `localhost:8181`, you can specify the correct hostname and port by providing `--host` and `--port` flags. For the full set of options supported by the CLI, please refer to the [docs](./command-line-interface.md). + + +### Creating a Principal and Assigning it Privileges + +With a catalog created, we can create a [principal](./entities.md#principal) that has access to manage that catalog. For details on how to configure the Polaris CLI, see [the section above](#defining-a-catalog) or refer to the [docs](./command-line-interface.md). + +``` +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + principals \ + create \ + quickstart_user + +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + principal-roles \ + create \ + quickstart_user_role + +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + catalog-roles \ + create \ + --catalog quickstart_catalog \ + quickstart_catalog_role +``` + + +Be sure to provide the necessary credentials, hostname, and port as before. + +When the `principals create` command completes successfully, it will return the credentials for this new principal. Be sure to note these down for later. For example: + +``` +./polaris ... 
principals create example +{"clientId": "XXXX", "clientSecret": "YYYY"} +``` + +Now, we grant the principal the [principal role](./entities.md#principal-role) we created, and grant the [catalog role](./entities.md#catalog-role) the principal role we created. For more information on these entities, please refer to the linked documentation. + +``` +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + principal-roles \ + grant \ + --principal quickstart_user \ + quickstart_user_role + +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + catalog-roles \ + grant \ + --catalog quickstart_catalog \ + --principal-role quickstart_user_role \ + quickstart_catalog_role +``` + +Now, we’ve linked our principal to the catalog via roles like so: + +![Principal to Catalog](./img/quickstart/privilege-illustration-1.png "Principal to Catalog") + +In order to give this principal the ability to interact with the catalog, we must assign some [privileges](./entities.md#privileges). For the time being, we will give this principal the ability to fully manage content in our new catalog. We can do this with the CLI like so: + +``` +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + privileges \ + --catalog quickstart_catalog \ + --catalog-role quickstart_catalog_role \ + catalog \ + grant \ + CATALOG_MANAGE_CONTENT +``` + +This grants the [catalog privileges](./entities.md#privilege) `CATALOG_MANAGE_CONTENT` to our catalog role, linking everything together like so: + +![Principal to Catalog with Catalog Role](./img/quickstart/privilege-illustration-2.png "Principal to Catalog with Catalog Role") + +`CATALOG_MANAGE_CONTENT` has create/list/read/write privileges on all entities within the catalog. The same privilege could be granted to a namespace, in which case the principal could create/list/read/write any entity under that namespace. 
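The resulting grant chain (principal → principal role → catalog role → privilege) can be sketched as a small lookup using the entity names from this guide. This is illustrative only, not how Polaris resolves grants internally:

```python
# Illustrative sketch only: resolving a principal's effective privileges on a
# catalog by walking principal role -> catalog role -> privilege grants.
principal_to_roles = {"quickstart_user": {"quickstart_user_role"}}
role_to_catalog_roles = {
    "quickstart_user_role": {("quickstart_catalog", "quickstart_catalog_role")},
}
catalog_role_privileges = {
    ("quickstart_catalog", "quickstart_catalog_role"): {"CATALOG_MANAGE_CONTENT"},
}


def effective_privileges(principal, catalog):
    """Union of privileges reachable from the principal's roles on this catalog."""
    granted = set()
    for principal_role in principal_to_roles.get(principal, set()):
        for cat, catalog_role in role_to_catalog_roles.get(principal_role, set()):
            if cat == catalog:
                granted |= catalog_role_privileges.get((cat, catalog_role), set())
    return granted


assert effective_privileges("quickstart_user", "quickstart_catalog") == {
    "CATALOG_MANAGE_CONTENT"
}
```

Revoking the privilege from the catalog role (as shown later with `privileges ... revoke`) empties this set for every principal that reaches the catalog only through that role.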
+ +## Using Iceberg & Polaris + +At this point, we’ve created a principal and granted it the ability to manage a catalog. We can now use an external engine to assume that principal, access our catalog, and store data in that catalog using [Apache Iceberg](https://iceberg.apache.org/). + +### Connecting with Spark + +To use a Polaris-managed catalog in [Apache Spark](https://spark.apache.org/), we can configure Spark to use the Iceberg catalog REST API. + +This guide uses [Apache Spark 3.5](https://spark.apache.org/releases/spark-release-3-5-0.html), but be sure to find [the appropriate iceberg-spark package for your Spark version](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-spark). With a local Spark clone on the `branch-3.5` branch, we can run the following: + +_Note: the credentials provided here are those for our principal, not the root credentials._ + +``` +bin/spark-shell \ +--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.hadoop:hadoop-aws:3.4.0 \ +--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \ +--conf spark.sql.catalog.quickstart_catalog.warehouse=quickstart_catalog \ +--conf spark.sql.catalog.quickstart_catalog.header.X-Iceberg-Access-Delegation=true \ +--conf spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog \ +--conf spark.sql.catalog.quickstart_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog \ +--conf spark.sql.catalog.quickstart_catalog.uri=http://localhost:8181/api/catalog \ +--conf spark.sql.catalog.quickstart_catalog.credential='XXXX:YYYY' \ +--conf spark.sql.catalog.quickstart_catalog.scope='PRINCIPAL_ROLE:ALL' \ +--conf spark.sql.catalog.quickstart_catalog.token-refresh-enabled=true +``` + + +Replace `XXXX` and `YYYY` with the client ID and client secret generated when you created the `quickstart_user` principal.
+ +Similar to the CLI commands above, this configures Spark to use the Polaris server running at `localhost:8181` as a catalog. If your Polaris server is running elsewhere, be sure to update the configuration appropriately. + +Finally, note that we include the `hadoop-aws` package here. If your table is using a different filesystem, be sure to include the appropriate dependency. + +Once the Spark session starts, we can create a namespace and table within the catalog: + +``` +spark.sql("USE quickstart_catalog") +spark.sql("CREATE NAMESPACE IF NOT EXISTS quickstart_namespace") +spark.sql("CREATE NAMESPACE IF NOT EXISTS quickstart_namespace.schema") +spark.sql("USE NAMESPACE quickstart_namespace.schema") +spark.sql(""" + CREATE TABLE IF NOT EXISTS quickstart_table ( + id BIGINT, data STRING + ) +USING ICEBERG +""") +``` + +We can now use this table like any other: + +``` +spark.sql("INSERT INTO quickstart_table VALUES (1, 'some data')") +spark.sql("SELECT * FROM quickstart_table").show(false) +. . . ++---+---------+ +|id |data | ++---+---------+ +|1 |some data| ++---+---------+ +``` + +If at any time access is revoked... + +``` +./polaris \ + --client-id ${CLIENT_ID} \ + --client-secret ${CLIENT_SECRET} \ + privileges \ + --catalog quickstart_catalog \ + --catalog-role quickstart_catalog_role \ + catalog \ + revoke \ + CATALOG_MANAGE_CONTENT +``` + +Spark will lose access to the table: + +``` +spark.sql("SELECT * FROM quickstart_table").show(false) + +org.apache.iceberg.exceptions.ForbiddenException: Forbidden: Principal 'quickstart_user' with activated PrincipalRoles '[]' and activated ids '[6, 7]' is not authorized for op LOAD_TABLE_WITH_READ_DELEGATION +``` diff --git a/extension/persistence/eclipselink/build.gradle b/extension/persistence/eclipselink/build.gradle new file mode 100644 index 0000000000..f872994358 --- /dev/null +++ b/extension/persistence/eclipselink/build.gradle @@ -0,0 +1,25 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc.
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +dependencies { + implementation(project(":polaris-core")) + implementation(project(":polaris-service")) + implementation("org.eclipse.persistence:eclipselink:4.0.3") + implementation("io.dropwizard:dropwizard-jackson:${dropwizardVersion}") + + testImplementation("com.h2database:h2:2.2.224") + testImplementation(testFixtures(project(":polaris-core"))) +} diff --git a/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/EclipseLinkPolarisMetaStoreManagerFactory.java b/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/EclipseLinkPolarisMetaStoreManagerFactory.java new file mode 100644 index 0000000000..5ca6c7b25c --- /dev/null +++ b/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/EclipseLinkPolarisMetaStoreManagerFactory.java @@ -0,0 +1,51 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.extension.persistence.impl.eclipselink; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.fasterxml.jackson.annotation.JsonTypeName; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.context.RealmContext; +import io.polaris.core.persistence.LocalPolarisMetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisMetaStoreSession; +import org.jetbrains.annotations.NotNull; + +/** + * The implementation of Configuration interface for configuring the {@link PolarisMetaStoreManager} + * using an EclipseLink based meta store to store and retrieve all Polaris metadata. It can be + * configured through persistence.xml to use supported RDBMS as the meta store. + */ +@JsonTypeName("eclipse-link") +public class EclipseLinkPolarisMetaStoreManagerFactory + extends LocalPolarisMetaStoreManagerFactory< + PolarisEclipseLinkStore, PolarisEclipseLinkMetaStoreSessionImpl> { + @JsonProperty("conf-file") + private String confFile; + + @JsonProperty("persistence-unit") + private String persistenceUnitName; + + protected PolarisEclipseLinkStore createBackingStore(@NotNull PolarisDiagnostics diagnostics) { + return new PolarisEclipseLinkStore(diagnostics); + } + + protected PolarisMetaStoreSession createMetaStoreSession( + @NotNull PolarisEclipseLinkStore store, @NotNull RealmContext realmContext) { + return new PolarisEclipseLinkMetaStoreSessionImpl( + store, storageIntegration, realmContext, confFile, persistenceUnitName); + } +} diff --git a/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/PolarisEclipseLinkMetaStoreSessionImpl.java b/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/PolarisEclipseLinkMetaStoreSessionImpl.java new file mode 100644 index 
0000000000..43a78436f2 --- /dev/null +++ b/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/PolarisEclipseLinkMetaStoreSessionImpl.java @@ -0,0 +1,693 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.extension.persistence.impl.eclipselink; + +import static org.eclipse.persistence.config.PersistenceUnitProperties.ECLIPSELINK_PERSISTENCE_XML; +import static org.eclipse.persistence.config.PersistenceUnitProperties.JDBC_URL; + +import com.google.common.base.Predicates; +import com.google.common.collect.Maps; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisChangeTrackingVersions; +import io.polaris.core.entity.PolarisEntitiesActiveKey; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.persistence.PolarisMetaStoreManagerImpl; +import io.polaris.core.persistence.PolarisMetaStoreSession; +import io.polaris.core.persistence.RetryOnConcurrencyException; +import io.polaris.core.persistence.models.ModelEntity; +import 
io.polaris.core.persistence.models.ModelEntityActive; +import io.polaris.core.persistence.models.ModelEntityChangeTracking; +import io.polaris.core.persistence.models.ModelGrantRecord; +import io.polaris.core.persistence.models.ModelPrincipalSecrets; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import io.polaris.core.storage.PolarisStorageIntegrationProvider; +import jakarta.persistence.EntityManager; +import jakarta.persistence.EntityManagerFactory; +import jakarta.persistence.EntityTransaction; +import jakarta.persistence.OptimisticLockException; +import jakarta.persistence.Persistence; +import java.io.InputStream; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; +import java.util.stream.Collectors; +import javax.xml.parsers.DocumentBuilder; +import javax.xml.parsers.DocumentBuilderFactory; +import javax.xml.xpath.XPath; +import javax.xml.xpath.XPathConstants; +import javax.xml.xpath.XPathFactory; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.w3c.dom.Document; +import org.w3c.dom.NamedNodeMap; +import org.w3c.dom.NodeList; + +/** + * EclipseLink implementation of a Polaris metadata store supporting persisting and retrieving all + * Polaris metadata from/to the configured database systems. 
+ */ +public class PolarisEclipseLinkMetaStoreSessionImpl implements PolarisMetaStoreSession { + private static final Logger LOG = + LoggerFactory.getLogger(PolarisEclipseLinkMetaStoreSessionImpl.class); + + private EntityManagerFactory emf; + private ThreadLocal<EntityManager> localSession = new ThreadLocal<>(); + private final PolarisEclipseLinkStore store; + private final PolarisStorageIntegrationProvider storageIntegrationProvider; + private static volatile Map<String, String> properties; + + /** + * Create a meta store session against provided realm. Each realm has its own database. + * + * @param store Backing store of EclipseLink implementation + * @param storageIntegrationProvider Storage integration provider + * @param realmContext Realm context used to communicate with different database. + * @param confFile Optional EclipseLink configuration file. Default to 'META-INF/persistence.xml'. + * @param persistenceUnitName Optional persistence-unit name in confFile. Default to 'polaris'. + */ + public PolarisEclipseLinkMetaStoreSessionImpl( + @NotNull PolarisEclipseLinkStore store, + @NotNull PolarisStorageIntegrationProvider storageIntegrationProvider, + @NotNull RealmContext realmContext, + @Nullable String confFile, + @Nullable String persistenceUnitName) { + persistenceUnitName = persistenceUnitName == null ? "polaris" : persistenceUnitName; + Map<String, String> properties = + loadProperties( + confFile == null ? 
"META-INF/persistence.xml" : confFile, persistenceUnitName); + // Replace database name in JDBC URL with realm + if (properties.containsKey(JDBC_URL)) { + properties.put( + JDBC_URL, properties.get(JDBC_URL).replace("{realm}", realmContext.getRealmIdentifier())); + } + properties.put(ECLIPSELINK_PERSISTENCE_XML, confFile); + + emf = Persistence.createEntityManagerFactory(persistenceUnitName, properties); + + LOG.debug("Create EclipseLink Meta Store Session for {}", realmContext.getRealmIdentifier()); + + // init store + this.store = store; + this.storageIntegrationProvider = storageIntegrationProvider; + } + + /** Load the persistence unit properties from a given configuration file */ + private Map<String, String> loadProperties(String confFile, String persistenceUnitName) { + if (this.properties != null) { + return this.properties; + } + + try { + InputStream input = this.getClass().getClassLoader().getResourceAsStream(confFile); + DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); + DocumentBuilder builder = factory.newDocumentBuilder(); + Document doc = builder.parse(input); + XPath xPath = XPathFactory.newInstance().newXPath(); + String expression = + "/persistence/persistence-unit[@name='" + persistenceUnitName + "']/properties/property"; + NodeList nodeList = + (NodeList) xPath.compile(expression).evaluate(doc, XPathConstants.NODESET); + Map<String, String> properties = new HashMap<>(); + for (int i = 0; i < nodeList.getLength(); i++) { + NamedNodeMap nodeMap = nodeList.item(i).getAttributes(); + properties.put( + nodeMap.getNamedItem("name").getNodeValue(), + nodeMap.getNamedItem("value").getNodeValue()); + } + + this.properties = properties; + return properties; + } catch (Exception e) { + LOG.warn( + "Cannot find or parse the configuration file {} for persistence-unit {}", + confFile, + persistenceUnitName); + } + + return Maps.newHashMap(); + } + + /** {@inheritDoc} */ + @Override + public <T> T runInTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Supplier<T> 
transactionCode) { + callCtx.getDiagServices().check(localSession.get() == null, "cannot nest transaction"); + + try (EntityManager session = emf.createEntityManager()) { + localSession.set(session); + EntityTransaction tr = session.getTransaction(); + try { + tr.begin(); + + T result = transactionCode.get(); + + // Commit when it's not rolled back by the client + if (session.getTransaction().isActive()) { + tr.commit(); + LOG.debug("transaction committed"); + } + + return result; + } catch (Exception e) { + tr.rollback(); + LOG.debug("transaction rolled back: {}", e); + + if (e instanceof OptimisticLockException + || e.getCause() instanceof OptimisticLockException) { + throw new RetryOnConcurrencyException(e); + } + + throw e; + } finally { + localSession.remove(); + } + } + } + + /** {@inheritDoc} */ + @Override + public void runActionInTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode) { + callCtx.getDiagServices().check(localSession.get() == null, "cannot nest transaction"); + + try (EntityManager session = emf.createEntityManager()) { + localSession.set(session); + EntityTransaction tr = session.getTransaction(); + try { + tr.begin(); + + transactionCode.run(); + + // Commit when it's not rolled back by the client + if (session.getTransaction().isActive()) { + tr.commit(); + LOG.debug("transaction committed"); + } + } catch (Exception e) { + tr.rollback(); + LOG.debug("transaction rolled back"); + + if (e instanceof OptimisticLockException + || e.getCause() instanceof OptimisticLockException) { + throw new RetryOnConcurrencyException(e); + } + + throw e; + } finally { + localSession.remove(); + } + } + } + + /** {@inheritDoc} */ + @Override + public <T> T runInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Supplier<T> transactionCode) { + // EclipseLink doesn't support readOnly transaction + return runInTransaction(callCtx, transactionCode); + } + + /** {@inheritDoc} */ + @Override + public void 
runActionInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode) { + // EclipseLink doesn't support readOnly transaction + runActionInTransaction(callCtx, transactionCode); + } + + /** + * @return new unique entity identifier + */ + @Override + public long generateNewId(@NotNull PolarisCallContext callCtx) { + // This function can be called within a transaction or out of transaction. + // If called out of transaction, create a new transaction, otherwise run in current transaction + return localSession.get() != null + ? this.store.getNextSequence(localSession.get()) + : runInReadTransaction(callCtx, () -> generateNewId(callCtx)); + } + + /** {@inheritDoc} */ + @Override + public void writeToEntities( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + this.store.writeToEntities(localSession.get(), entity); + } + + /** {@inheritDoc} */ + @Override + public void persistStorageIntegrationIfNeeded( + @NotNull PolarisCallContext callContext, + @NotNull PolarisBaseEntity entity, + @Nullable PolarisStorageIntegration storageIntegration) { + // not implemented for eclipselink store + } + + /** {@inheritDoc} */ + @Override + public void writeToEntitiesActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.writeToEntitiesActive(localSession.get(), entity); + } + + /** {@inheritDoc} */ + @Override + public void writeToEntitiesDropped( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.writeToEntitiesDropped(localSession.get(), entity); + } + + /** {@inheritDoc} */ + @Override + public void writeToEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.writeToEntitiesChangeTracking(localSession.get(), entity); + } + + /** {@inheritDoc} */ + @Override + public void writeToGrantRecords( + @NotNull PolarisCallContext callCtx, @NotNull 
PolarisGrantRecord grantRec) { + // write it + this.store.writeToGrantRecords(localSession.get(), grantRec); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromEntities( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity) { + + // delete it + this.store.deleteFromEntities(localSession.get(), entity.getCatalogId(), entity.getId()); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromEntitiesActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity) { + // delete it + this.store.deleteFromEntitiesActive(localSession.get(), new PolarisEntitiesActiveKey(entity)); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromEntitiesDropped( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // delete it + this.store.deleteFromEntitiesDropped(localSession.get(), entity.getCatalogId(), entity.getId()); + } + + /** + * {@inheritDoc} + * + * @param callCtx call context + * @param entity entity record to delete + */ + @Override + public void deleteFromEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity) { + // delete it + this.store.deleteFromEntitiesChangeTracking(localSession.get(), entity); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromGrantRecords( + @NotNull PolarisCallContext callCtx, @NotNull PolarisGrantRecord grantRec) { + this.store.deleteFromGrantRecords(localSession.get(), grantRec); + } + + /** {@inheritDoc} */ + @Override + public void deleteAllEntityGrantRecords( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisEntityCore entity, + @NotNull List<PolarisGrantRecord> grantsOnGrantee, + @NotNull List<PolarisGrantRecord> grantsOnSecurable) { + this.store.deleteAllEntityGrantRecords(localSession.get(), entity); + } + + /** {@inheritDoc} */ + @Override + public void deleteAll(@NotNull PolarisCallContext callCtx) { + this.store.deleteAll(localSession.get()); + } + + /** {@inheritDoc} */ + @Override + public @Nullable PolarisBaseEntity 
lookupEntity( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId) { + return ModelEntity.toEntity(this.store.lookupEntity(localSession.get(), catalogId, entityId)); + } + + @Override + public @NotNull List<PolarisBaseEntity> lookupEntities( + @NotNull PolarisCallContext callCtx, List<PolarisEntityId> entityIds) { + return this.store.lookupEntities(localSession.get(), entityIds).stream() + .map(model -> ModelEntity.toEntity(model)) + .toList(); + } + + /** {@inheritDoc} */ + @Override + public int lookupEntityVersion( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId) { + ModelEntity model = this.store.lookupEntity(localSession.get(), catalogId, entityId); + return model == null ? 0 : model.getEntityVersion(); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List<PolarisChangeTrackingVersions> lookupEntityVersions( + @NotNull PolarisCallContext callCtx, List<PolarisEntityId> entityIds) { + Map<PolarisEntityId, ModelEntity> idToEntityMap = + this.store.lookupEntities(localSession.get(), entityIds).stream() + .collect( + Collectors.toMap( + entry -> new PolarisEntityId(entry.getCatalogId(), entry.getId()), + entry -> entry)); + return entityIds.stream() + .map( + entityId -> { + ModelEntity entity = idToEntityMap.get(entityId); + return entity == null + ? 
null + : new PolarisChangeTrackingVersions( + entity.getEntityVersion(), entity.getGrantRecordsVersion()); + }) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} */ + @Override + @Nullable + public PolarisEntityActiveRecord lookupEntityActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntitiesActiveKey entityActiveKey) { + // lookup the active entity slice + return ModelEntityActive.toEntityActive( + this.store.lookupEntityActive(localSession.get(), entityActiveKey)); + } + + /** {@inheritDoc} */ + @Override + @NotNull + public List lookupEntityActiveBatch( + @NotNull PolarisCallContext callCtx, + @NotNull List entityActiveKeys) { + // now build a list to quickly verify that nothing has changed + return entityActiveKeys.stream() + .map(entityActiveKey -> this.lookupEntityActive(callCtx, entityActiveKey)) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType) { + return listActiveEntities(callCtx, catalogId, parentId, entityType, Predicates.alwaysTrue()); + } + + @Override + public @NotNull List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull Predicate entityFilter) { + // full range scan under the parent for that type + return listActiveEntities( + callCtx, + catalogId, + parentId, + entityType, + Integer.MAX_VALUE, + entityFilter, + entity -> + new PolarisEntityActiveRecord( + entity.getCatalogId(), + entity.getId(), + entity.getParentId(), + entity.getName(), + entity.getTypeCode(), + entity.getSubTypeCode())); + } + + @Override + public @NotNull List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType, + int limit, + @NotNull Predicate entityFilter, + @NotNull Function transformer) { + // 
full range scan under the parent for that type + return this.store + .lookupFullEntitiesActive(localSession.get(), catalogId, parentId, entityType) + .stream() + .map(model -> ModelEntity.toEntity(model)) + .filter(entityFilter) + .limit(limit) + .map(transformer) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} */ + public boolean hasChildren( + @NotNull PolarisCallContext callContext, + @Nullable PolarisEntityType entityType, + long catalogId, + long parentId) { + // check if it has children + return this.store.countActiveChildEntities(localSession.get(), catalogId, parentId, entityType) + > 0; + } + + /** {@inheritDoc} */ + @Override + public int lookupEntityGrantRecordsVersion( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId) { + ModelEntityChangeTracking entity = + this.store.lookupEntityChangeTracking(localSession.get(), catalogId, entityId); + + // does not exist, 0 + return entity == null ? 0 : entity.getGrantRecordsVersion(); + } + + /** {@inheritDoc} */ + @Override + public @Nullable PolarisGrantRecord lookupGrantRecord( + @NotNull PolarisCallContext callCtx, + long securableCatalogId, + long securableId, + long granteeCatalogId, + long granteeId, + int privilegeCode) { + // lookup the grants records slice to find the usage role + return ModelGrantRecord.toGrantRecord( + this.store.lookupGrantRecord( + localSession.get(), + securableCatalogId, + securableId, + granteeCatalogId, + granteeId, + privilegeCode)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List loadAllGrantRecordsOnSecurable( + @NotNull PolarisCallContext callCtx, long securableCatalogId, long securableId) { + // now fetch all grants for this securable + return this.store + .lookupAllGrantRecordsOnSecurable(localSession.get(), securableCatalogId, securableId) + .stream() + .map(model -> ModelGrantRecord.toGrantRecord(model)) + .toList(); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List loadAllGrantRecordsOnGrantee( + @NotNull 
PolarisCallContext callCtx, long granteeCatalogId, long granteeId) { + // now fetch all grants assigned to this grantee + return this.store + .lookupGrantRecordsOnGrantee(localSession.get(), granteeCatalogId, granteeId) + .stream() + .map(model -> ModelGrantRecord.toGrantRecord(model)) + .toList(); + } + + /** {@inheritDoc} */ + @Override + public @Nullable PolarisPrincipalSecrets loadPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId) { + return ModelPrincipalSecrets.toPrincipalSecrets( + this.store.lookupPrincipalSecrets(localSession.get(), clientId)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PolarisPrincipalSecrets generateNewPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String principalName, long principalId) { + // ensure principal client id is unique + PolarisPrincipalSecrets principalSecrets; + ModelPrincipalSecrets lookupPrincipalSecrets; + do { + // generate new random client id and secrets + principalSecrets = new PolarisPrincipalSecrets(principalId); + + // check if the generated client id already exists + lookupPrincipalSecrets = + this.store.lookupPrincipalSecrets( + localSession.get(), principalSecrets.getPrincipalClientId()); + } while (lookupPrincipalSecrets != null); + + // write new principal secrets + this.store.writePrincipalSecrets(localSession.get(), principalSecrets); + + // return the newly generated secrets + return principalSecrets; + } + + /** {@inheritDoc} */ + @Override + public @NotNull PolarisPrincipalSecrets rotatePrincipalSecrets( + @NotNull PolarisCallContext callCtx, + @NotNull String clientId, + long principalId, + @NotNull String mainSecretToRotate, + boolean reset) { + + // load the existing secrets + PolarisPrincipalSecrets principalSecrets = + ModelPrincipalSecrets.toPrincipalSecrets( + this.store.lookupPrincipalSecrets(localSession.get(), clientId)); + + // should be found + callCtx + .getDiagServices() + .checkNotNull( + principalSecrets, + "cannot_find_secrets", + "client_id={} 
principalId={}", + clientId, + principalId); + + // ensure principal id is matching + callCtx + .getDiagServices() + .check( + principalId == principalSecrets.getPrincipalId(), + "principal_id_mismatch", + "expectedId={} id={}", + principalId, + principalSecrets.getPrincipalId()); + + // rotate the secrets + principalSecrets.rotateSecrets(mainSecretToRotate); + if (reset) { + principalSecrets.rotateSecrets(principalSecrets.getMainSecret()); + } + + // write back new secrets + this.store.writePrincipalSecrets(localSession.get(), principalSecrets); + + // return those + return principalSecrets; + } + + /** {@inheritDoc} */ + @Override + public void deletePrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId, long principalId) { + // load the existing secrets + ModelPrincipalSecrets principalSecrets = + this.store.lookupPrincipalSecrets(localSession.get(), clientId); + + // should be found + callCtx + .getDiagServices() + .checkNotNull( + principalSecrets, + "cannot_find_secrets", + "client_id={} principalId={}", + clientId, + principalId); + + // ensure principal id is matching + callCtx + .getDiagServices() + .check( + principalId == principalSecrets.getPrincipalId(), + "principal_id_mismatch", + "expectedId={} id={}", + principalId, + principalSecrets.getPrincipalId()); + + // delete these secrets + this.store.deletePrincipalSecrets(localSession.get(), clientId); + } + + /** {@inheritDoc} */ + @Override + public @Nullable + PolarisStorageIntegration createStorageIntegration( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + PolarisStorageConfigurationInfo polarisStorageConfigurationInfo) { + return storageIntegrationProvider.getStorageIntegrationForConfig( + polarisStorageConfigurationInfo); + } + + /** {@inheritDoc} */ + @Override + public @Nullable + PolarisStorageIntegration loadPolarisStorageIntegration( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + 
PolarisStorageConfigurationInfo storageConfig = + PolarisMetaStoreManagerImpl.readStorageConfiguration(callCtx, entity); + return storageIntegrationProvider.getStorageIntegrationForConfig(storageConfig); + } + + @Override + public void rollback() { + EntityManager session = localSession.get(); + if (session != null) { + session.getTransaction().rollback(); + } + } +} diff --git a/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/PolarisEclipseLinkStore.java b/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/PolarisEclipseLinkStore.java new file mode 100644 index 0000000000..a4a880894d --- /dev/null +++ b/extension/persistence/eclipselink/src/main/java/io/polaris/extension/persistence/impl/eclipselink/PolarisEclipseLinkStore.java @@ -0,0 +1,412 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
 + */ +package io.polaris.extension.persistence.impl.eclipselink; + +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntitiesActiveKey; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.persistence.models.ModelEntity; +import io.polaris.core.persistence.models.ModelEntityActive; +import io.polaris.core.persistence.models.ModelEntityChangeTracking; +import io.polaris.core.persistence.models.ModelEntityDropped; +import io.polaris.core.persistence.models.ModelGrantRecord; +import io.polaris.core.persistence.models.ModelPrincipalSecrets; +import jakarta.persistence.EntityManager; +import jakarta.persistence.TypedQuery; +import java.util.ArrayList; +import java.util.List; +import java.util.stream.Collectors; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Implements an EclipseLink-based metastore for Polaris that can be configured for any database + * with EclipseLink support. + */ +public class PolarisEclipseLinkStore { + private static final Logger LOG = LoggerFactory.getLogger(PolarisEclipseLinkStore.class); + + // diagnostic services + private final PolarisDiagnostics diagnosticServices; + + /** + * Constructor, allocate everything at once + * + * @param diagnostics diagnostic services + */ + public PolarisEclipseLinkStore(@NotNull PolarisDiagnostics diagnostics) { + this.diagnosticServices = diagnostics; + } + + long getNextSequence(EntityManager session) { + diagnosticServices.check(session != null, "session_is_null"); + // implemented with the database sequence POLARIS_SEQ + return (long) 
session.createNativeQuery("SELECT NEXTVAL('POLARIS_SEQ')").getSingleResult(); + } + + void writeToEntities(EntityManager session, PolarisBaseEntity entity) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntity model = lookupEntity(session, entity.getCatalogId(), entity.getId()); + if (model != null) { + // Update if the same entity already exists + model.update(entity); + } else { + model = ModelEntity.fromEntity(entity); + } + + session.persist(model); + } + + void writeToEntitiesActive(EntityManager session, PolarisBaseEntity entity) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntityActive model = lookupEntityActive(session, new PolarisEntitiesActiveKey(entity)); + if (model == null) { + session.persist(ModelEntityActive.fromEntityActive(new PolarisEntityActiveRecord(entity))); + } + } + + void writeToEntitiesDropped(EntityManager session, PolarisBaseEntity entity) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntityDropped entityDropped = + lookupEntityDropped(session, entity.getCatalogId(), entity.getId()); + if (entityDropped == null) { + session.persist(ModelEntityDropped.fromEntity(entity)); + } + } + + void writeToEntitiesChangeTracking(EntityManager session, PolarisBaseEntity entity) { + diagnosticServices.check(session != null, "session_is_null"); + + // Update the existing change tracking if a record with the same ids exists; otherwise, persist + // a new one + ModelEntityChangeTracking entityChangeTracking = + lookupEntityChangeTracking(session, entity.getCatalogId(), entity.getId()); + if (entityChangeTracking != null) { + entityChangeTracking.update(entity); + } else { + entityChangeTracking = new ModelEntityChangeTracking(entity); + } + + session.persist(entityChangeTracking); + } + + void writeToGrantRecords(EntityManager session, PolarisGrantRecord grantRec) { + diagnosticServices.check(session != null, "session_is_null"); + + 
session.persist(ModelGrantRecord.fromGrantRecord(grantRec)); + } + + void deleteFromEntities(EntityManager session, long catalogId, long entityId) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntity model = lookupEntity(session, catalogId, entityId); + diagnosticServices.check(model != null, "entity_not_found"); + + session.remove(model); + } + + void deleteFromEntitiesActive(EntityManager session, PolarisEntitiesActiveKey key) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntityActive entity = lookupEntityActive(session, key); + diagnosticServices.check(entity != null, "active_entity_not_found"); + session.remove(entity); + } + + void deleteFromEntitiesDropped(EntityManager session, long catalogId, long entityId) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntityDropped entity = lookupEntityDropped(session, catalogId, entityId); + diagnosticServices.check(entity != null, "dropped_entity_not_found"); + + session.remove(entity); + } + + void deleteFromEntitiesChangeTracking(EntityManager session, PolarisEntityCore entity) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelEntityChangeTracking entityChangeTracking = + lookupEntityChangeTracking(session, entity.getCatalogId(), entity.getId()); + diagnosticServices.check(entityChangeTracking != null, "change_tracking_entity_not_found"); + + session.remove(entityChangeTracking); + } + + void deleteFromGrantRecords(EntityManager session, PolarisGrantRecord grantRec) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelGrantRecord lookupGrantRecord = + lookupGrantRecord( + session, + grantRec.getSecurableCatalogId(), + grantRec.getSecurableId(), + grantRec.getGranteeCatalogId(), + grantRec.getGranteeId(), + grantRec.getPrivilegeCode()); + + diagnosticServices.check(lookupGrantRecord != null, "grant_record_not_found"); + + session.remove(lookupGrantRecord); + } + + void 
deleteAllEntityGrantRecords(EntityManager session, PolarisEntityCore entity) { + diagnosticServices.check(session != null, "session_is_null"); + + // Delete grant records from grantRecords tables + lookupAllGrantRecordsOnSecurable(session, entity.getCatalogId(), entity.getId()) + .forEach(session::remove); + + // Delete grantee records from grantRecords tables + lookupGrantRecordsOnGrantee(session, entity.getCatalogId(), entity.getId()) + .forEach(session::remove); + } + + void deleteAll(EntityManager session) { + diagnosticServices.check(session != null, "session_is_null"); + + session.createQuery("DELETE from ModelEntity").executeUpdate(); + session.createQuery("DELETE from ModelEntityActive").executeUpdate(); + session.createQuery("DELETE from ModelEntityDropped").executeUpdate(); + session.createQuery("DELETE from ModelEntityChangeTracking").executeUpdate(); + session.createQuery("DELETE from ModelGrantRecord").executeUpdate(); + session.createQuery("DELETE from ModelPrincipalSecrets").executeUpdate(); + + LOG.debug("All entities deleted."); + } + + ModelEntity lookupEntity(EntityManager session, long catalogId, long entityId) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelEntity m where m.catalogId=:catalogId and m.id=:id", + ModelEntity.class) + .setParameter("catalogId", catalogId) + .setParameter("id", entityId) + .getResultStream() + .findFirst() + .orElse(null); + } + + @SuppressWarnings("unchecked") + List lookupEntities(EntityManager session, List entityIds) { + diagnosticServices.check(session != null, "session_is_null"); + + if (entityIds == null || entityIds.isEmpty()) return new ArrayList<>(); + + // TODO Support paging + String inClause = + entityIds.stream() + .map(entityId -> "(" + entityId.getCatalogId() + "," + entityId.getId() + ")") + .collect(Collectors.joining(",")); + + String hql = "SELECT * from ENTITIES m where (m.catalogId, m.id) in (" + inClause + ")"; + 
return (List<ModelEntity>) session.createNativeQuery(hql, ModelEntity.class).getResultList(); + } + + ModelEntityActive lookupEntityActive( + EntityManager session, PolarisEntitiesActiveKey entityActiveKey) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelEntityActive m where m.catalogId=:catalogId and m.parentId=:parentId and m.typeCode=:typeCode and m.name=:name", + ModelEntityActive.class) + .setParameter("catalogId", entityActiveKey.getCatalogId()) + .setParameter("parentId", entityActiveKey.getParentId()) + .setParameter("typeCode", entityActiveKey.getTypeCode()) + .setParameter("name", entityActiveKey.getName()) + .getResultStream() + .findFirst() + .orElse(null); + } + + long countActiveChildEntities( + EntityManager session, + long catalogId, + long parentId, + @Nullable PolarisEntityType entityType) { + diagnosticServices.check(session != null, "session_is_null"); + + String hql = + "SELECT COUNT(m) from ModelEntityActive m where m.catalogId=:catalogId and m.parentId=:parentId"; + if (entityType != null) { + hql += " and m.typeCode=:typeCode"; + } + + TypedQuery<Long> query = + session + .createQuery(hql, Long.class) + .setParameter("catalogId", catalogId) + .setParameter("parentId", parentId); + if (entityType != null) { + query.setParameter("typeCode", entityType.getCode()); + } + + return query.getSingleResult(); + } + + List<ModelEntity> lookupFullEntitiesActive( + EntityManager session, long catalogId, long parentId, @NotNull PolarisEntityType entityType) { + diagnosticServices.check(session != null, "session_is_null"); + + // Currently check against ENTITIES not joining with ENTITIES_ACTIVE + String hql = + "SELECT m from ModelEntity m where m.catalogId=:catalogId and m.parentId=:parentId and m.typeCode=:typeCode"; + + TypedQuery<ModelEntity> query = + session + .createQuery(hql, ModelEntity.class) + .setParameter("catalogId", catalogId) + .setParameter("parentId", parentId) + .setParameter("typeCode", 
entityType.getCode()); + + return query.getResultList(); + } + + ModelEntityDropped lookupEntityDropped(EntityManager session, long catalogId, long entityId) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelEntityDropped m where m.catalogId=:catalogId and m.id=:id", + ModelEntityDropped.class) + .setParameter("catalogId", catalogId) + .setParameter("id", entityId) + .getResultStream() + .findFirst() + .orElse(null); + } + + ModelEntityChangeTracking lookupEntityChangeTracking( + EntityManager session, long catalogId, long entityId) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelEntityChangeTracking m where m.catalogId=:catalogId and m.id=:id", + ModelEntityChangeTracking.class) + .setParameter("catalogId", catalogId) + .setParameter("id", entityId) + .getResultStream() + .findFirst() + .orElse(null); + } + + ModelGrantRecord lookupGrantRecord( + EntityManager session, + long securableCatalogId, + long securableId, + long granteeCatalogId, + long granteeId, + int privilegeCode) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelGrantRecord m where m.securableCatalogId=:securableCatalogId " + + "and m.securableId=:securableId " + + "and m.granteeCatalogId=:granteeCatalogId " + + "and m.granteeId=:granteeId " + + "and m.privilegeCode=:privilegeCode", + ModelGrantRecord.class) + .setParameter("securableCatalogId", securableCatalogId) + .setParameter("securableId", securableId) + .setParameter("granteeCatalogId", granteeCatalogId) + .setParameter("granteeId", granteeId) + .setParameter("privilegeCode", privilegeCode) + .getResultStream() + .findFirst() + .orElse(null); + } + + List lookupAllGrantRecordsOnSecurable( + EntityManager session, long securableCatalogId, long securableId) { + diagnosticServices.check(session != null, "session_is_null"); + + 
return session + .createQuery( + "SELECT m from ModelGrantRecord m " + + "where m.securableCatalogId=:securableCatalogId " + + "and m.securableId=:securableId", + ModelGrantRecord.class) + .setParameter("securableCatalogId", securableCatalogId) + .setParameter("securableId", securableId) + .getResultList(); + } + + List lookupGrantRecordsOnGrantee( + EntityManager session, long granteeCatalogId, long granteeId) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelGrantRecord m " + + "where m.granteeCatalogId=:granteeCatalogId " + + "and m.granteeId=:granteeId", + ModelGrantRecord.class) + .setParameter("granteeCatalogId", granteeCatalogId) + .setParameter("granteeId", granteeId) + .getResultList(); + } + + ModelPrincipalSecrets lookupPrincipalSecrets(EntityManager session, String clientId) { + diagnosticServices.check(session != null, "session_is_null"); + + return session + .createQuery( + "SELECT m from ModelPrincipalSecrets m where m.principalClientId=:clientId", + ModelPrincipalSecrets.class) + .setParameter("clientId", clientId) + .getResultStream() + .findFirst() + .orElse(null); + } + + void writePrincipalSecrets(EntityManager session, PolarisPrincipalSecrets principalSecrets) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelPrincipalSecrets modelPrincipalSecrets = + lookupPrincipalSecrets(session, principalSecrets.getPrincipalClientId()); + if (modelPrincipalSecrets != null) { + modelPrincipalSecrets.update(principalSecrets); + } else { + modelPrincipalSecrets = ModelPrincipalSecrets.fromPrincipalSecrets(principalSecrets); + } + + session.persist(modelPrincipalSecrets); + } + + void deletePrincipalSecrets(EntityManager session, String clientId) { + diagnosticServices.check(session != null, "session_is_null"); + + ModelPrincipalSecrets modelPrincipalSecrets = lookupPrincipalSecrets(session, clientId); + diagnosticServices.check(modelPrincipalSecrets != null, 
"principal_secretes_not_found"); + + session.remove(modelPrincipalSecrets); + } +} diff --git a/extension/persistence/eclipselink/src/test/java/com/snowflake/polaris/persistence/impl/eclipselink/PolarisEclipseLinkMetaStoreTest.java b/extension/persistence/eclipselink/src/test/java/com/snowflake/polaris/persistence/impl/eclipselink/PolarisEclipseLinkMetaStoreTest.java new file mode 100644 index 0000000000..03cc1026e9 --- /dev/null +++ b/extension/persistence/eclipselink/src/test/java/com/snowflake/polaris/persistence/impl/eclipselink/PolarisEclipseLinkMetaStoreTest.java @@ -0,0 +1,52 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.snowflake.polaris.persistence.impl.eclipselink; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.persistence.PolarisMetaStoreManagerImpl; +import io.polaris.core.persistence.PolarisMetaStoreManagerTest; +import io.polaris.core.persistence.PolarisTestMetaStoreManager; +import io.polaris.extension.persistence.impl.eclipselink.PolarisEclipseLinkMetaStoreSessionImpl; +import io.polaris.extension.persistence.impl.eclipselink.PolarisEclipseLinkStore; +import java.time.ZoneId; +import org.mockito.Mockito; + +/** + * Integration test for EclipseLink based metastore implementation + * + * @author aixu + */ +public class PolarisEclipseLinkMetaStoreTest extends PolarisMetaStoreManagerTest { + + @Override + protected PolarisTestMetaStoreManager createPolarisTestMetaStoreManager() { + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + PolarisEclipseLinkStore store = new PolarisEclipseLinkStore(diagServices); + PolarisEclipseLinkMetaStoreSessionImpl session = + new PolarisEclipseLinkMetaStoreSessionImpl( + store, Mockito.mock(), () -> "realm", null, "polaris-dev"); + return new PolarisTestMetaStoreManager( + new PolarisMetaStoreManagerImpl(), + new PolarisCallContext( + session, + diagServices, + new PolarisConfigurationStore() {}, + timeSource.withZone(ZoneId.systemDefault()))); + } +} diff --git a/gradle.properties b/gradle.properties new file mode 100644 index 0000000000..73637cefb5 --- /dev/null +++ b/gradle.properties @@ -0,0 +1,17 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +group=io.polaris +version=1.0.0 \ No newline at end of file diff --git a/gradle/gradlew-include.sh b/gradle/gradlew-include.sh new file mode 100644 index 0000000000..19cb46059c --- /dev/null +++ b/gradle/gradlew-include.sh @@ -0,0 +1,47 @@ +# Downloads the gradle-wrapper.jar if necessary and verifies its integrity. +# Included from /.gradlew + +# Extract the Gradle version from gradle-wrapper.properties. +GRADLE_DIST_VERSION="$(grep distributionUrl= "$APP_HOME/gradle/wrapper/gradle-wrapper.properties" | sed 's/^.*gradle-\([0-9.]*\)-[a-z]*.zip$/\1/')" +GRADLE_WRAPPER_SHA256="$APP_HOME/gradle/wrapper/gradle-wrapper-${GRADLE_DIST_VERSION}.jar.sha256" +GRADLE_WRAPPER_JAR="$APP_HOME/gradle/wrapper/gradle-wrapper.jar" +if [ -x "$(command -v sha256sum)" ] ; then + SHASUM="sha256sum" +else + if [ -x "$(command -v shasum)" ] ; then + SHASUM="shasum -a 256" + else + echo "Neither sha256sum nor shasum are available, install either." > /dev/stderr + exit 1 + fi +fi +if [ ! -e "${GRADLE_WRAPPER_SHA256}" ]; then + # Delete the wrapper jar, if the checksum file does not exist. + rm -f "${GRADLE_WRAPPER_JAR}" +fi +if [ -e "${GRADLE_WRAPPER_JAR}" ]; then + # Verify the wrapper jar, if it exists, delete wrapper jar and checksum file, if the checksums + # do not match. + JAR_CHECKSUM="$(${SHASUM} "${GRADLE_WRAPPER_JAR}" | cut -d\ -f1)" + EXPECTED="$(cat "${GRADLE_WRAPPER_SHA256}")" + if [ "${JAR_CHECKSUM}" != "${EXPECTED}" ]; then + rm -f "${GRADLE_WRAPPER_JAR}" "${GRADLE_WRAPPER_SHA256}" + fi +fi +if [ ! 
-e "${GRADLE_WRAPPER_SHA256}" ]; then + curl --location --output "${GRADLE_WRAPPER_SHA256}" https://services.gradle.org/distributions/gradle-${GRADLE_DIST_VERSION}-wrapper.jar.sha256 || exit 1 +fi +if [ ! -e "${GRADLE_WRAPPER_JAR}" ]; then + # The Gradle version extracted from the `distributionUrl` property does not contain ".0" patch + # versions. Need to append a ".0" in that case to download the wrapper jar. + GRADLE_VERSION="$(echo "$GRADLE_DIST_VERSION" | sed 's/^\([0-9]*[.][0-9]*\)$/\1.0/')" + curl --location --output "${GRADLE_WRAPPER_JAR}" https://raw.githubusercontent.com/gradle/gradle/v${GRADLE_VERSION}/gradle/wrapper/gradle-wrapper.jar || exit 1 + JAR_CHECKSUM="$(${SHASUM} "${GRADLE_WRAPPER_JAR}" | cut -d\ -f1)" + EXPECTED="$(cat "${GRADLE_WRAPPER_SHA256}")" + if [ "${JAR_CHECKSUM}" != "${EXPECTED}" ]; then + # If the (just downloaded) checksum and the downloaded wrapper jar do not match, something + # really bad is going on. + echo "Expected sha256 of the downloaded gradle-wrapper.jar does not match the downloaded sha256!" > /dev/stderr + exit 1 + fi +fi diff --git a/gradle/projects.main.properties b/gradle/projects.main.properties new file mode 100644 index 0000000000..d0ecccac37 --- /dev/null +++ b/gradle/projects.main.properties @@ -0,0 +1,20 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# + +polaris-core=polaris-core +polaris-service=polaris-service +polaris-eclipselink=extension/persistence/eclipselink diff --git a/gradle/wrapper/gradle-wrapper.properties b/gradle/wrapper/gradle-wrapper.properties new file mode 100644 index 0000000000..f24d7559ef --- /dev/null +++ b/gradle/wrapper/gradle-wrapper.properties @@ -0,0 +1,25 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +distributionBase=GRADLE_USER_HOME +distributionPath=wrapper/dists +# See https://gradle.org/release-checksums/ for valid checksums +distributionSha256Sum=d725d707bfabd4dfdc958c624003b3c80accc03f7037b5122c4b1d0ef15cecab +distributionUrl=https\://services.gradle.org/distributions/gradle-8.9-bin.zip +networkTimeout=10000 +validateDistributionUrl=true +zipStoreBase=GRADLE_USER_HOME +zipStorePath=wrapper/dists diff --git a/gradlew b/gradlew new file mode 100755 index 0000000000..61ec480bca --- /dev/null +++ b/gradlew @@ -0,0 +1,251 @@ +#!/bin/sh + +# +# Copyright © 2015-2021 the original authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +############################################################################## +# +# Gradle start up script for POSIX generated by Gradle. +# +# Important for running: +# +# (1) You need a POSIX-compliant shell to run this script. If your /bin/sh is +# noncompliant, but you have some other compliant shell such as ksh or +# bash, then to run this script, type that shell name before the whole +# command line, like: +# +# ksh Gradle +# +# Busybox and similar reduced shells will NOT work, because this script +# requires all of these POSIX shell features: +# * functions; +# * expansions «$var», «${var}», «${var:-default}», «${var+SET}», +# «${var#prefix}», «${var%suffix}», and «$( cmd )»; +# * compound commands having a testable exit status, especially «case»; +# * various built-in commands including «command», «set», and «ulimit». +# +# Important for patching: +# +# (2) This script targets any POSIX shell, so it avoids extensions provided +# by Bash, Ksh, etc; in particular arrays are avoided. +# +# The "traditional" practice of packing multiple parameters into a +# space-separated string is a well documented source of bugs and security +# problems, so this is (mostly) avoided, by progressively accumulating +# options in "$@", and eventually passing that to Java. +# +# Where the inherited environment variables (DEFAULT_JVM_OPTS, JAVA_OPTS, +# and GRADLE_OPTS) rely on word-splitting, this is performed explicitly; +# see the in-line comments for details. 
+# +# There are tweaks for specific operating systems such as AIX, CygWin, +# Darwin, MinGW, and NonStop. +# +# (3) This script is generated from the Groovy template +# https://github.com/gradle/gradle/blob/HEAD/subprojects/plugins/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt +# within the Gradle project. +# +# You can find Gradle at https://github.com/gradle/gradle/. +# +############################################################################## + +# Attempt to set APP_HOME + +# Resolve links: $0 may be a link +app_path=$0 + +# Need this for daisy-chained symlinks. +while + APP_HOME=${app_path%"${app_path##*/}"} # leaves a trailing /; empty if no leading path + [ -h "$app_path" ] +do + ls=$( ls -ld "$app_path" ) + link=${ls#*' -> '} + case $link in #( + /*) app_path=$link ;; #( + *) app_path=$APP_HOME$link ;; + esac +done + +# This is normally unused +# shellcheck disable=SC2034 +APP_BASE_NAME=${0##*/} +# Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036) +APP_HOME=$( cd "${APP_HOME:-./}" > /dev/null && pwd -P ) || exit + +. ${APP_HOME}/gradle/gradlew-include.sh + +# Use the maximum available, or set MAX_FD != -1 to use that value. +MAX_FD=maximum + +warn () { + echo "$*" +} >&2 + +die () { + echo + echo "$*" + echo + exit 1 +} >&2 + +# OS specific support (must be 'true' or 'false'). +cygwin=false +msys=false +darwin=false +nonstop=false +case "$( uname )" in #( + CYGWIN* ) cygwin=true ;; #( + Darwin* ) darwin=true ;; #( + MSYS* | MINGW* ) msys=true ;; #( + NONSTOP* ) nonstop=true ;; +esac + +CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar + + +# Determine the Java command to use to start the JVM. +if [ -n "$JAVA_HOME" ] ; then + if [ -x "$JAVA_HOME/jre/sh/java" ] ; then + # IBM's JDK on AIX uses strange locations for the executables + JAVACMD=$JAVA_HOME/jre/sh/java + else + JAVACMD=$JAVA_HOME/bin/java + fi + if [ ! 
-x "$JAVACMD" ] ; then + die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME + +Please set the JAVA_HOME variable in your environment to match the +location of your Java installation." + fi +else + JAVACMD=java + if ! command -v java >/dev/null 2>&1 + then + die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. + +Please set the JAVA_HOME variable in your environment to match the +location of your Java installation." + fi +fi + +# Increase the maximum file descriptors if we can. +if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then + case $MAX_FD in #( + max*) + # In POSIX sh, ulimit -H is undefined. That's why the result is checked to see if it worked. + # shellcheck disable=SC2039,SC3045 + MAX_FD=$( ulimit -H -n ) || + warn "Could not query maximum file descriptor limit" + esac + case $MAX_FD in #( + '' | soft) :;; #( + *) + # In POSIX sh, ulimit -n is undefined. That's why the result is checked to see if it worked. + # shellcheck disable=SC2039,SC3045 + ulimit -n "$MAX_FD" || + warn "Could not set maximum file descriptor limit to $MAX_FD" + esac +fi + +# Collect all arguments for the java command, stacking in reverse order: +# * args from the command line +# * the main class name +# * -classpath +# * -D...appname settings +# * --module-path (only if needed) +# * DEFAULT_JVM_OPTS, JAVA_OPTS, and GRADLE_OPTS environment variables. 
+ +# For Cygwin or MSYS, switch paths to Windows format before running java +if "$cygwin" || "$msys" ; then + APP_HOME=$( cygpath --path --mixed "$APP_HOME" ) + CLASSPATH=$( cygpath --path --mixed "$CLASSPATH" ) + + JAVACMD=$( cygpath --unix "$JAVACMD" ) + + # Now convert the arguments - kludge to limit ourselves to /bin/sh + for arg do + if + case $arg in #( + -*) false ;; # don't mess with options #( + /?*) t=${arg#/} t=/${t%%/*} # looks like a POSIX filepath + [ -e "$t" ] ;; #( + *) false ;; + esac + then + arg=$( cygpath --path --ignore --mixed "$arg" ) + fi + # Roll the args list around exactly as many times as the number of + # args, so each arg winds up back in the position where it started, but + # possibly modified. + # + # NB: a `for` loop captures its iteration list before it begins, so + # changing the positional parameters here affects neither the number of + # iterations, nor the values presented in `arg`. + shift # remove old arg + set -- "$@" "$arg" # push replacement arg + done +fi + + +# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. +DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"' + +# Collect all arguments for the java command: +# * DEFAULT_JVM_OPTS, JAVA_OPTS, JAVA_OPTS, and optsEnvironmentVar are not allowed to contain shell fragments, +# and any embedded shellness will be escaped. +# * For example: A user cannot expect ${Hostname} to be expanded, as it is an environment variable and will be +# treated as '${Hostname}' itself on the command line. + +set -- \ + "-Dorg.gradle.appname=$APP_BASE_NAME" \ + -classpath "$CLASSPATH" \ + org.gradle.wrapper.GradleWrapperMain \ + "$@" + +# Stop when "xargs" is not available. +if ! command -v xargs >/dev/null 2>&1 +then + die "xargs is not available" +fi + +# Use "xargs" to parse quoted args. +# +# With -n1 it outputs one arg per line, with the quotes and backslashes removed. 
+# +# In Bash we could simply go: +# +# readarray ARGS < <( xargs -n1 <<<"$var" ) && +# set -- "${ARGS[@]}" "$@" +# +# but POSIX shell has neither arrays nor command substitution, so instead we +# post-process each arg (as a line of input to sed) to backslash-escape any +# character that might be a shell metacharacter, then use eval to reverse +# that process (while maintaining the separation between arguments), and wrap +# the whole thing up as a single "set" statement. +# +# This will of course break if any of these variables contains a newline or +# an unmatched quote. +# + +eval "set -- $( + printf '%s\n' "$DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS" | + xargs -n1 | + sed ' s~[^-[:alnum:]+,./:=@_]~\\&~g; ' | + tr '\n' ' ' + )" '"$@"' + +exec "$JAVACMD" "$@" diff --git a/ide-name.txt b/ide-name.txt new file mode 100644 index 0000000000..fa950b0fa1 --- /dev/null +++ b/ide-name.txt @@ -0,0 +1 @@ +Polaris \ No newline at end of file diff --git a/kind-registry.sh b/kind-registry.sh new file mode 100755 index 0000000000..f2e153499d --- /dev/null +++ b/kind-registry.sh @@ -0,0 +1,79 @@ +#!/bin/sh +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +set -o errexit + +# 1. 
Create registry container unless it already exists +reg_name='kind-registry' +reg_port='5001' +if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then + docker run \ + -d --restart=always -p "127.0.0.1:${reg_port}:5000" --network bridge --name "${reg_name}" \ + registry:2 +fi + +# 2. Create kind cluster with containerd registry config dir enabled +# TODO: kind will eventually enable this by default and this patch will +# be unnecessary. +# +# See: +# https://github.com/kubernetes-sigs/kind/issues/2875 +# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration +# See: https://github.com/containerd/containerd/blob/main/docs/hosts.md +cat < /dev/null && pwd ) + +if [ ! -d ${SCRIPT_DIR}/polaris-venv ]; then + echo "Performing first-time setup for the Python client..." + python3 -m venv ${SCRIPT_DIR}/polaris-venv + . ${SCRIPT_DIR}/polaris-venv/bin/activate + pip install poetry==1.5.0 + + cp ${SCRIPT_DIR}/regtests/client/python/pyproject.toml ${SCRIPT_DIR} + pushd $SCRIPT_DIR && poetry install ; popd + + deactivate + echo "First time setup complete." +fi + +pushd $SCRIPT_DIR > /dev/null +PYTHONPATH=regtests/client/python ${SCRIPT_DIR}/polaris-venv/bin/python3 regtests/client/python/cli/polaris_cli.py "$@" +popd > /dev/null + diff --git a/polaris-core/build.gradle b/polaris-core/build.gradle new file mode 100644 index 0000000000..64536c1c05 --- /dev/null +++ b/polaris-core/build.gradle @@ -0,0 +1,152 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +plugins { + id "org.openapi.generator" version "7.6.0" + id("java-library") + id("java-test-fixtures") +} + +compileJava { + sourceCompatibility = 11 + targetCompatibility = 11 +} + +dependencies { + implementation(platform("org.apache.iceberg:iceberg-bom:${icebergVersion}")) + implementation("org.apache.iceberg:iceberg-api:${icebergVersion}") + implementation("org.apache.iceberg:iceberg-core:${icebergVersion}") + constraints { + implementation("io.airlift:aircompressor:0.27") { + because "Vulnerability detected in 0.25" + } + } + // TODO - this is only here for the Discoverable interface + // We should use a different mechanism to discover the plugin implementations + implementation("io.dropwizard:dropwizard-jackson:${dropwizardVersion}") + + implementation(platform("com.fasterxml.jackson:jackson-bom:${jacksonVersion}")) + implementation("com.fasterxml.jackson.core:jackson-annotations") + implementation("com.fasterxml.jackson.core:jackson-core") + implementation("com.fasterxml.jackson.core:jackson-databind") + implementation("com.github.ben-manes.caffeine:caffeine:3.1.8") + implementation("org.apache.commons:commons-lang3:3.14.0") + implementation("commons-codec:commons-codec:1.17.0") + + implementation("org.apache.hadoop:hadoop-common:${hadoopVersion}") { + exclude group: "org.slf4j", module: "slf4j-reload4j" + exclude group: "org.slf4j", module: "slf4j-log4j12" + exclude group: "ch.qos.reload4j", module: "reload4j" + exclude group: "log4j", module: "log4j" + exclude group: "org.apache.zookeeper", module: "zookeeper" + } + constraints { + 
implementation("org.xerial.snappy:snappy-java:1.1.10.4") { + because "Vulnerability detected in 1.1.8.2" + } + implementation("org.codehaus.jettison:jettison:1.5.4") { + because "Vulnerability detected in 1.1" + } + implementation("org.apache.commons:commons-configuration2:2.10.1") { + because "Vulnerability detected in 2.8.0" + } + implementation("org.apache.commons:commons-compress:1.26.0") { + because "Vulnerability detected in 1.21" + } + implementation("com.nimbusds:nimbus-jose-jwt:9.37.2") { + because "Vulnerability detected in 9.8.1" + } + + } + implementation("org.apache.hadoop:hadoop-hdfs-client:${hadoopVersion}") + + implementation("javax.inject:javax.inject:1") + implementation("io.swagger:swagger-annotations:1.6.14") + implementation("io.swagger:swagger-jaxrs:1.6.14") + implementation("jakarta.validation:jakarta.validation-api:3.0.2") + + implementation("org.apache.iceberg:iceberg-aws") + implementation(platform("software.amazon.awssdk:bom:2.26.25")) + implementation("software.amazon.awssdk:sts") + implementation("software.amazon.awssdk:iam-policy-builder") + implementation("software.amazon.awssdk:s3") + + implementation("org.apache.iceberg:iceberg-azure") + implementation("com.azure:azure-storage-blob:12.18.0") + implementation("com.azure:azure-storage-common:12.14.2") + implementation("com.azure:azure-identity:1.12.2") + implementation("com.azure:azure-storage-file-datalake:12.19.0") + constraints { + implementation("io.netty:netty-codec-http2:4.1.100") { + because "Vulnerability detected in 4.1.72" + } + implementation("io.projectreactor.netty:reactor-netty-http:1.1.13") { + because "Vulnerability detected in 1.0.45" + } + } + + implementation("org.apache.iceberg:iceberg-gcp") + implementation(platform("com.google.cloud:google-cloud-storage-bom:2.39.0")) + implementation("com.google.cloud:google-cloud-storage") + + implementation(platform("io.micrometer:micrometer-bom:1.13.2")) + implementation("io.micrometer:micrometer-core") + + 
testFixturesApi(platform("org.junit:junit-bom:5.10.3")) + testFixturesApi("org.junit.jupiter:junit-jupiter") + testFixturesApi("org.assertj:assertj-core:3.25.3") + testFixturesApi("org.mockito:mockito-core:5.11.0") + testFixturesApi("com.fasterxml.jackson.core:jackson-core") + testFixturesApi("com.fasterxml.jackson.core:jackson-databind") + testFixturesApi("org.apache.commons:commons-lang3:3.14.0") + testFixturesApi("org.jetbrains:annotations:24.0.0") + testFixturesApi(platform("com.fasterxml.jackson:jackson-bom:${jacksonVersion}")) + + compileOnly("jakarta.annotation:jakarta.annotation-api:2.1.1") + compileOnly("jakarta.persistence:jakarta.persistence-api:3.1.0") +} + +openApiValidate { + inputSpec = "$rootDir/spec/polaris-management-service.yml" +} + +task generatePolarisService(type: org.openapitools.generator.gradle.plugin.tasks.GenerateTask) { + inputSpec = "$rootDir/spec/polaris-management-service.yml" + generatorName = "jaxrs-resteasy" + outputDir = "$buildDir/generated" + modelPackage = "io.polaris.core.admin.model" + ignoreFileOverride = "$rootDir/.openapi-generator-ignore" + removeOperationIdPrefix = true + templateDir = "$rootDir/server-templates" + globalProperties = [ + apis : "false", + models : "", + apiDocs : "false", + modelTests: "false" + ] + configOptions = [ + useBeanValidation : "true", + sourceFolder : "src/main/java", + useJakartaEe : "true", + generateBuilders : "true", + generateConstructorWithAllArgs: "true", + ] + additionalProperties = [apiNamePrefix: "Polaris", apiNameSuffix: "Api", metricsPrefix: "polaris"] + serverVariables = [basePath: "api/v1"] +} + +compileJava.dependsOn tasks.generatePolarisService +sourceSets.main.java.srcDirs += ["$buildDir/generated/src/main/java"] diff --git a/polaris-core/src/main/java/io/polaris/core/PolarisCallContext.java b/polaris-core/src/main/java/io/polaris/core/PolarisCallContext.java new file mode 100644 index 0000000000..a64663d650 --- /dev/null +++ 
b/polaris-core/src/main/java/io/polaris/core/PolarisCallContext.java @@ -0,0 +1,73 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core; + +import io.polaris.core.persistence.PolarisMetaStoreSession; +import java.time.Clock; +import java.time.ZoneId; +import org.jetbrains.annotations.NotNull; + +/** + * The Call context is allocated each time a new REST request is processed. It contains instances of + * low-level services required to process that request + */ +public class PolarisCallContext { + + // meta store which is used to persist Polaris entity metadata + private final PolarisMetaStoreSession metaStore; + + // diag services + private final PolarisDiagnostics diagServices; + + private final PolarisConfigurationStore configurationStore; + + private final Clock clock; + + public PolarisCallContext( + @NotNull PolarisMetaStoreSession metaStore, + @NotNull PolarisDiagnostics diagServices, + @NotNull PolarisConfigurationStore configurationStore, + @NotNull Clock clock) { + this.metaStore = metaStore; + this.diagServices = diagServices; + this.configurationStore = configurationStore; + this.clock = clock; + } + + public PolarisCallContext( + @NotNull PolarisMetaStoreSession metaStore, @NotNull PolarisDiagnostics diagServices) { + this.metaStore = metaStore; + this.diagServices = diagServices; + this.configurationStore = new PolarisConfigurationStore() {}; + this.clock = 
Clock.system(ZoneId.systemDefault()); + } + + public PolarisMetaStoreSession getMetaStore() { + return metaStore; + } + + public PolarisDiagnostics getDiagServices() { + return diagServices; + } + + public PolarisConfigurationStore getConfigurationStore() { + return configurationStore; + } + + public Clock getClock() { + return clock; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/PolarisConfiguration.java b/polaris-core/src/main/java/io/polaris/core/PolarisConfiguration.java new file mode 100644 index 0000000000..ceefb83ff9 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/PolarisConfiguration.java @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core; + +public class PolarisConfiguration { + + public static final String ENFORCE_PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_CHECKING = + "ENFORCE_PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_CHECKING"; + public static final String ALLOW_TABLE_LOCATION_OVERLAP = "ALLOW_TABLE_LOCATION_OVERLAP"; + public static final String ALLOW_NAMESPACE_LOCATION_OVERLAP = "ALLOW_NAMESPACE_LOCATION_OVERLAP"; + public static final String ALLOW_EXTERNAL_METADATA_FILE_LOCATION = + "ALLOW_EXTERNAL_METADATA_FILE_LOCATION"; + + public static final String ALLOW_OVERLAPPING_CATALOG_URLS = "ALLOW_OVERLAPPING_CATALOG_URLS"; + + public static final String CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION = + "allow.unstructured.table.location"; + public static final String CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION = + "allow.external.table.location"; + + /* + * Default values for the configuration properties + */ + + public static final boolean DEFAULT_ALLOW_OVERLAPPING_CATALOG_URLS = false; + public static final boolean DEFAULT_ALLOW_TABLE_LOCATION_OVERLAP = false; + public static final boolean DEFAULT_ALLOW_EXTERNAL_METADATA_FILE_LOCATION = false; + public static final boolean DEFAULT_ALLOW_NAMESPACE_LOCATION_OVERLAP = false; + + private PolarisConfiguration() {} +} diff --git a/polaris-core/src/main/java/io/polaris/core/PolarisConfigurationStore.java b/polaris-core/src/main/java/io/polaris/core/PolarisConfigurationStore.java new file mode 100644 index 0000000000..f2e38c2ddd --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/PolarisConfigurationStore.java @@ -0,0 +1,56 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core; + +import com.google.common.base.Preconditions; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** + * Dynamic configuration store used to retrieve runtime parameters, which may vary by realm or by + * request. + */ +public interface PolarisConfigurationStore { + + /** + * Retrieve the current value for a configuration key. May be null if not set. + * + * @param ctx the current call context + * @param configName the name of the configuration key to check + * @return the current value set for the configuration key or null if not set + * @param <T> the type of the configuration value + */ + default @Nullable <T> T getConfiguration(PolarisCallContext ctx, String configName) { + return null; + } + + /** + * Retrieve the current value for a configuration key. If not set, return the non-null default + * value. + * + * @param ctx the current call context + * @param configName the name of the configuration key to check + * @param defaultValue the default value if the configuration key has no value + * @return the current value or the supplied default value + * @param <T> the type of the configuration value + */ + default @NotNull <T> T getConfiguration( + PolarisCallContext ctx, String configName, @NotNull T defaultValue) { + Preconditions.checkNotNull(defaultValue, "Cannot pass null as a default value"); + T configValue = getConfiguration(ctx, configName); + return configValue != null ?
configValue : defaultValue; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/PolarisDefaultDiagServiceImpl.java b/polaris-core/src/main/java/io/polaris/core/PolarisDefaultDiagServiceImpl.java new file mode 100644 index 0000000000..74acd06cd7 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/PolarisDefaultDiagServiceImpl.java @@ -0,0 +1,127 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core; + +import com.google.common.base.Preconditions; +import java.util.Arrays; +import org.jetbrains.annotations.Contract; + +/** Default implementation of the PolarisDiagServices. */ +public class PolarisDefaultDiagServiceImpl implements PolarisDiagnostics { + + /** + * Fail with an exception + * + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param extraInfoFormat extra information regarding the assertion. Generally a set of name/value + * pairs: "id={} fileName={}" + * @param extraInfoArgs extra information arguments + */ + @Override + public RuntimeException fail(String signature, String extraInfoFormat, Object... 
extraInfoArgs) { + Preconditions.checkState(false, "%s: %s, %s", signature, extraInfoFormat, extraInfoArgs); + throw new RuntimeException(signature); + } + + /** + * Fail because of an exception + * + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param cause exception which caused the issue + * @param extraInfoFormat extra information regarding the assertion. Generally a set of name/value + * pairs: "id={} fileName={}" + * @param extraInfoArgs extra information arguments + */ + @Override + public RuntimeException fail( + String signature, Throwable cause, String extraInfoFormat, Object... extraInfoArgs) { + Preconditions.checkState( + false, "%s: %s, %s (cause: %s)", signature, extraInfoFormat, extraInfoArgs, cause); + throw new RuntimeException(cause.getMessage()); + } + + /** + * Ensures that an object reference passed as a parameter to the calling method is not null + * + * @param reference an object reference + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @return the non-null reference that was validated + * @throws RuntimeException if `reference` is null + */ + @Contract("null, _ -> fail") + public <T> T checkNotNull(final T reference, final String signature) { + return Preconditions.checkNotNull(reference, signature); + } + + /** + * Ensures that an object reference passed as a parameter to the calling method is not null + * + * @param reference an object reference + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param extraInfoFormat extra information regarding the assertion.
Generally a set of name/value + * pairs: "id={} fileName={}" + * @param extraInfoArgs extra information arguments + * @return the non-null reference that was validated + * @throws RuntimeException if `reference` is null + */ + @Contract("null, _, _, _ -> fail") + public <T> T checkNotNull( + final T reference, + final String signature, + final String extraInfoFormat, + final Object... extraInfoArgs) { + return Preconditions.checkNotNull( + reference, "%s: %s, %s", signature, extraInfoFormat, Arrays.toString(extraInfoArgs)); + } + + /** + * Create a fatal incident if expression is false + * + * @param expression condition to test for + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @throws RuntimeException if `expression` is not true + */ + @Contract("false, _ -> fail") + public void check(final boolean expression, final String signature) { + Preconditions.checkState(expression, signature); + } + + /** + * Create a fatal incident if expression is false + * + * @param expression condition to test for + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param extraInfoFormat extra information regarding the incident. Generally a set of name/value + * pairs: "fileId={} accountId={} fileName={}" + * @param extraInfoArgs extra information arguments + * @throws RuntimeException if `expression` is not true + */ + @Contract("false, _, _, _ -> fail") + public void check( + final boolean expression, + final String signature, + final String extraInfoFormat, + final Object...
extraInfoArgs) { + Preconditions.checkState( + expression, "%s: %s, %s", signature, extraInfoFormat, Arrays.toString(extraInfoArgs)); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/PolarisDiagnostics.java b/polaris-core/src/main/java/io/polaris/core/PolarisDiagnostics.java new file mode 100644 index 0000000000..f85c31b844 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/PolarisDiagnostics.java @@ -0,0 +1,111 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core; + +import org.jetbrains.annotations.Contract; + +public interface PolarisDiagnostics { + + /** + * Fail with an exception + * + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param extraInfoFormat extra information regarding the assertion. Generally a set of name/value + * pairs: "id={} fileName={}" + * @param extraInfoArgs extra information arguments + */ + @Contract("_, _, _ -> fail") + RuntimeException fail( + final String signature, final String extraInfoFormat, final Object... extraInfoArgs); + + /** + * Fail because of an exception + * + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param cause exception which caused the issue + * @param extraInfoFormat extra information regarding the assertion.
Generally a set of name/value + * pairs: "id={} fileName={}" + * @param extraInfoArgs extra information arguments + */ + @Contract("_, _, _, _ -> fail") + RuntimeException fail( + final String signature, + final Throwable cause, + final String extraInfoFormat, + final Object... extraInfoArgs); + + /** + * Ensures that an object reference passed as a parameter to the calling method is not null + * + * @param reference an object reference + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @return the non-null reference that was validated + * @throws RuntimeException if `reference` is null + */ + @Contract("null, _ -> fail") + <T> T checkNotNull(final T reference, final String signature); + + /** + * Ensures that an object reference passed as a parameter to the calling method is not null + * + * @param reference an object reference + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param extraInfoFormat extra information regarding the assertion. Generally a set of name/value + * pairs: "id={} fileName={}" + * @param extraInfoArgs extra information arguments + * @return the non-null reference that was validated + * @throws RuntimeException if `reference` is null + */ + @Contract("null, _, _, _ -> fail") + <T> T checkNotNull( + final T reference, + final String signature, + final String extraInfoFormat, + final Object...
extraInfoArgs); + + /** + * Create a fatal incident if expression is false + * + * @param expression condition to test for + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @throws RuntimeException if `condition` is not true + */ + @Contract("false, _ -> fail") + void check(final boolean expression, final String signature); + + /** + * Create a fatal incident if expression is false + * + * @param expression condition to test for + * @param signature signature, small unique string to identify this assertion within the method, + * like "path_cannot_be_null" + * @param extraInfoFormat extra information regarding the incident. Generally a set of name/value + * pairs: "fileId={} accountId={} fileName={}" + * @param extraInfoArgs extra information arguments + * @throws RuntimeException if `condition` is not true + */ + @Contract("false, _, _, _ -> fail") + void check( + final boolean expression, + final String signature, + final String extraInfoFormat, + final Object... extraInfoArgs); +} diff --git a/polaris-core/src/main/java/io/polaris/core/auth/AuthenticatedPolarisPrincipal.java b/polaris-core/src/main/java/io/polaris/core/auth/AuthenticatedPolarisPrincipal.java new file mode 100644 index 0000000000..d40149660d --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/auth/AuthenticatedPolarisPrincipal.java @@ -0,0 +1,68 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.auth; + +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PrincipalRoleEntity; +import java.util.List; +import java.util.Set; +import org.jetbrains.annotations.NotNull; + +/** Holds the results of request authentication. */ +public class AuthenticatedPolarisPrincipal implements java.security.Principal { + private final PolarisEntity principalEntity; + private final Set<String> activatedPrincipalRoleNames; + // only known and set after the above set of principal role names have been resolved. Before + // this, this list is null + private List<PrincipalRoleEntity> activatedPrincipalRoles; + + public AuthenticatedPolarisPrincipal( + @NotNull PolarisEntity principalEntity, @NotNull Set<String> activatedPrincipalRoles) { + this.principalEntity = principalEntity; + this.activatedPrincipalRoleNames = activatedPrincipalRoles; + this.activatedPrincipalRoles = null; + } + + @Override + public String getName() { + return principalEntity.getName(); + } + + public PolarisEntity getPrincipalEntity() { + return principalEntity; + } + + public Set<String> getActivatedPrincipalRoleNames() { + return activatedPrincipalRoleNames; + } + + public List<PrincipalRoleEntity> getActivatedPrincipalRoles() { + return activatedPrincipalRoles; + } + + public void setActivatedPrincipalRoles(List<PrincipalRoleEntity> activatedPrincipalRoles) { + this.activatedPrincipalRoles = activatedPrincipalRoles; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("principalEntity=" + getPrincipalEntity()); + sb.append(";activatedPrincipalRoleNames=" + getActivatedPrincipalRoleNames()); + sb.append(";activatedPrincipalRoles=" + getActivatedPrincipalRoles()); + return sb.toString(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/auth/PolarisAuthorizableOperation.java b/polaris-core/src/main/java/io/polaris/core/auth/PolarisAuthorizableOperation.java new file mode 100644
index 0000000000..1b82ff2fb9 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/auth/PolarisAuthorizableOperation.java @@ -0,0 +1,238 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.auth; + +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_DROP; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_LIST; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES; +import static 
io.polaris.core.entity.PolarisPrivilege.CATALOG_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_DROP; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_LIST; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_RESET_CREDENTIALS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES; +import static 
io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROTATE_CREDENTIALS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.SERVICE_MANAGE_ACCESS; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_READ_DATA; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_WRITE_DATA; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_DROP; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_LIST; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_WRITE_PROPERTIES; + +import io.polaris.core.entity.PolarisPrivilege; +import java.util.EnumSet; + +/** + * Denotes the fine-grained expansion of all Polaris operations that are associated with some set of + * authorization requirements to enact. 
+ */ +public enum PolarisAuthorizableOperation { + LIST_NAMESPACES(NAMESPACE_LIST), + CREATE_NAMESPACE(NAMESPACE_CREATE), + LOAD_NAMESPACE_METADATA(NAMESPACE_READ_PROPERTIES), + NAMESPACE_EXISTS(NAMESPACE_LIST), + DROP_NAMESPACE(NAMESPACE_DROP), + UPDATE_NAMESPACE_PROPERTIES(NAMESPACE_WRITE_PROPERTIES), + LIST_TABLES(TABLE_LIST), + CREATE_TABLE_DIRECT(TABLE_CREATE), + CREATE_TABLE_STAGED(TABLE_CREATE), + CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION(EnumSet.of(TABLE_CREATE, TABLE_WRITE_DATA)), + REGISTER_TABLE(TABLE_CREATE), + LOAD_TABLE(TABLE_READ_PROPERTIES), + LOAD_TABLE_WITH_READ_DELEGATION(TABLE_READ_DATA), + LOAD_TABLE_WITH_WRITE_DELEGATION(TABLE_WRITE_DATA), + UPDATE_TABLE(TABLE_WRITE_PROPERTIES), + UPDATE_TABLE_FOR_STAGED_CREATE(TABLE_CREATE), + DROP_TABLE_WITHOUT_PURGE(TABLE_DROP), + DROP_TABLE_WITH_PURGE(EnumSet.of(TABLE_DROP, TABLE_WRITE_DATA)), + TABLE_EXISTS(TABLE_LIST), + RENAME_TABLE(TABLE_DROP, EnumSet.of(TABLE_LIST, TABLE_CREATE)), + COMMIT_TRANSACTION(EnumSet.of(TABLE_WRITE_PROPERTIES, TABLE_CREATE)), + LIST_VIEWS(VIEW_LIST), + CREATE_VIEW(VIEW_CREATE), + LOAD_VIEW(VIEW_READ_PROPERTIES), + REPLACE_VIEW(VIEW_WRITE_PROPERTIES), + DROP_VIEW(VIEW_DROP), + VIEW_EXISTS(VIEW_LIST), + RENAME_VIEW(VIEW_DROP, EnumSet.of(VIEW_LIST, VIEW_CREATE)), + REPORT_METRICS(EnumSet.noneOf(PolarisPrivilege.class)), + SEND_NOTIFICATIONS( + EnumSet.of( + TABLE_CREATE, TABLE_WRITE_PROPERTIES, TABLE_DROP, NAMESPACE_CREATE, NAMESPACE_DROP)), + LIST_CATALOGS(CATALOG_LIST), + CREATE_CATALOG(CATALOG_CREATE), + GET_CATALOG(CATALOG_READ_PROPERTIES), + UPDATE_CATALOG(CATALOG_WRITE_PROPERTIES), + DELETE_CATALOG(CATALOG_DROP), + LIST_PRINCIPALS(PRINCIPAL_LIST), + CREATE_PRINCIPAL(PRINCIPAL_CREATE), + GET_PRINCIPAL(PRINCIPAL_READ_PROPERTIES), + UPDATE_PRINCIPAL(PRINCIPAL_WRITE_PROPERTIES), + DELETE_PRINCIPAL(PRINCIPAL_DROP), + ROTATE_CREDENTIALS(PRINCIPAL_ROTATE_CREDENTIALS), + RESET_CREDENTIALS(PRINCIPAL_RESET_CREDENTIALS), + LIST_PRINCIPAL_ROLES_ASSIGNED(PRINCIPAL_LIST_GRANTS), + 
ASSIGN_PRINCIPAL_ROLE( + PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_PRINCIPAL_ROLE( + PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE), + LIST_PRINCIPAL_ROLES(PRINCIPAL_ROLE_LIST), + CREATE_PRINCIPAL_ROLE(PRINCIPAL_ROLE_CREATE), + GET_PRINCIPAL_ROLE(PRINCIPAL_ROLE_READ_PROPERTIES), + UPDATE_PRINCIPAL_ROLE(PRINCIPAL_ROLE_WRITE_PROPERTIES), + DELETE_PRINCIPAL_ROLE(PRINCIPAL_ROLE_DROP), + LIST_ASSIGNEE_PRINCIPALS_FOR_PRINCIPAL_ROLE(PRINCIPAL_ROLE_LIST_GRANTS), + LIST_CATALOG_ROLES_FOR_PRINCIPAL_ROLE(PRINCIPAL_ROLE_LIST_GRANTS), + ASSIGN_CATALOG_ROLE_TO_PRINCIPAL_ROLE( + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_CATALOG_ROLE_FROM_PRINCIPAL_ROLE( + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_CATALOG_ROLES(CATALOG_ROLE_LIST), + CREATE_CATALOG_ROLE(CATALOG_ROLE_CREATE), + GET_CATALOG_ROLE(CATALOG_ROLE_READ_PROPERTIES), + UPDATE_CATALOG_ROLE(CATALOG_ROLE_WRITE_PROPERTIES), + DELETE_CATALOG_ROLE(CATALOG_ROLE_DROP), + LIST_ASSIGNEE_PRINCIPAL_ROLES_FOR_CATALOG_ROLE(CATALOG_ROLE_LIST_GRANTS), + LIST_GRANTS_FOR_CATALOG_ROLE(CATALOG_ROLE_LIST_GRANTS), + ADD_ROOT_GRANT_TO_PRINCIPAL_ROLE(SERVICE_MANAGE_ACCESS, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_ROOT_GRANT_FROM_PRINCIPAL_ROLE( + SERVICE_MANAGE_ACCESS, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_ROOT(SERVICE_MANAGE_ACCESS), + ADD_PRINCIPAL_GRANT_TO_PRINCIPAL_ROLE( + PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_PRINCIPAL_GRANT_FROM_PRINCIPAL_ROLE( + PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_PRINCIPAL(PRINCIPAL_LIST_GRANTS), + ADD_PRINCIPAL_ROLE_GRANT_TO_PRINCIPAL_ROLE( + PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_PRINCIPAL_ROLE_GRANT_FROM_PRINCIPAL_ROLE( + 
PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_PRINCIPAL_ROLE(PRINCIPAL_ROLE_LIST_GRANTS), + ADD_CATALOG_ROLE_GRANT_TO_CATALOG_ROLE( + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_CATALOG_ROLE_GRANT_FROM_CATALOG_ROLE( + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_CATALOG_ROLE(CATALOG_ROLE_LIST_GRANTS), + ADD_CATALOG_GRANT_TO_CATALOG_ROLE( + CATALOG_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_CATALOG_GRANT_FROM_CATALOG_ROLE( + CATALOG_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_CATALOG(CATALOG_LIST_GRANTS), + ADD_NAMESPACE_GRANT_TO_CATALOG_ROLE( + NAMESPACE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_NAMESPACE_GRANT_FROM_CATALOG_ROLE( + NAMESPACE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_NAMESPACE(NAMESPACE_LIST_GRANTS), + ADD_TABLE_GRANT_TO_CATALOG_ROLE( + TABLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_TABLE_GRANT_FROM_CATALOG_ROLE( + TABLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_TABLE(TABLE_LIST_GRANTS), + ADD_VIEW_GRANT_TO_CATALOG_ROLE( + VIEW_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + REVOKE_VIEW_GRANT_FROM_CATALOG_ROLE( + VIEW_MANAGE_GRANTS_ON_SECURABLE, CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE), + LIST_GRANTS_ON_VIEW(VIEW_LIST_GRANTS), + ; + + private final EnumSet<PolarisPrivilege> privilegesOnTarget; + private final EnumSet<PolarisPrivilege> privilegesOnSecondary; + + /** Most common case -- single privilege on target entities. */ + PolarisAuthorizableOperation(PolarisPrivilege targetPrivilege) { + this(targetPrivilege == null ? null : EnumSet.of(targetPrivilege), null); + } + + /** Require multiple simultaneous privileges on target entities.
*/ + PolarisAuthorizableOperation(EnumSet<PolarisPrivilege> privilegesOnTarget) { + this(privilegesOnTarget, null); + } + + /** Single privilege on target entities, multiple privileges on secondary. */ + PolarisAuthorizableOperation( + PolarisPrivilege targetPrivilege, EnumSet<PolarisPrivilege> privilegesOnSecondary) { + this(targetPrivilege == null ? null : EnumSet.of(targetPrivilege), privilegesOnSecondary); + } + + /** Single privilege on target, single privilege on targetParent. */ + PolarisAuthorizableOperation( + PolarisPrivilege targetPrivilege, PolarisPrivilege secondaryPrivilege) { + this( + targetPrivilege == null ? null : EnumSet.of(targetPrivilege), + secondaryPrivilege == null ? null : EnumSet.of(secondaryPrivilege)); + } + + /** EnumSets on target, targetParent. */ + PolarisAuthorizableOperation( + EnumSet<PolarisPrivilege> privilegesOnTarget, + EnumSet<PolarisPrivilege> privilegesOnSecondary) { + this.privilegesOnTarget = + privilegesOnTarget == null ? EnumSet.noneOf(PolarisPrivilege.class) : privilegesOnTarget; + this.privilegesOnSecondary = + privilegesOnSecondary == null + ? EnumSet.noneOf(PolarisPrivilege.class) + : privilegesOnSecondary; + } + + public EnumSet<PolarisPrivilege> getPrivilegesOnTarget() { + return privilegesOnTarget; + } + + public EnumSet<PolarisPrivilege> getPrivilegesOnSecondary() { + return privilegesOnSecondary; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/auth/PolarisAuthorizer.java b/polaris-core/src/main/java/io/polaris/core/auth/PolarisAuthorizer.java new file mode 100644 index 0000000000..29f40daaba --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/auth/PolarisAuthorizer.java @@ -0,0 +1,630 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.auth; + +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_DROP; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_LIST; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_MANAGE_ACCESS; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_MANAGE_CONTENT; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_MANAGE_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_USAGE; +import static 
io.polaris.core.entity.PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.CATALOG_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_DROP; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_LIST; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_RESET_CREDENTIALS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_LIST_GRANTS; +import static 
io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_USAGE; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_ROTATE_CREDENTIALS; +import static io.polaris.core.entity.PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.SERVICE_MANAGE_ACCESS; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_DROP; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_LIST; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_READ_DATA; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_READ_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_WRITE_DATA; +import static io.polaris.core.entity.PolarisPrivilege.TABLE_WRITE_PROPERTIES; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_CREATE; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_DROP; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_FULL_METADATA; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_LIST; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_LIST_GRANTS; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_MANAGE_GRANTS_ON_SECURABLE; +import static io.polaris.core.entity.PolarisPrivilege.VIEW_READ_PROPERTIES; +import static 
io.polaris.core.entity.PolarisPrivilege.VIEW_WRITE_PROPERTIES; + +import com.google.common.base.Preconditions; +import com.google.common.collect.HashMultimap; +import com.google.common.collect.SetMultimap; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; +import io.polaris.core.persistence.ResolvedPolarisEntity; +import java.util.List; +import java.util.Set; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Performs hierarchical resolution logic by matching the transitively expanded set of grants to a + * calling principal against the cascading permissions over the parent hierarchy of a target + * Securable. + *

Additionally, encompasses "specialty" permission resolution logic, such as checking whether + * the expanded roles of the calling Principal hold SERVICE_MANAGE_ACCESS on the "root" catalog, + * which translates into a cross-catalog permission. + */ +public class PolarisAuthorizer { + private static final Logger LOG = LoggerFactory.getLogger(PolarisAuthorizer.class); + + private static final SetMultimap<PolarisPrivilege, PolarisPrivilege> SUPER_PRIVILEGES = + HashMultimap.create(); + + static { + SUPER_PRIVILEGES.putAll(SERVICE_MANAGE_ACCESS, List.of(SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll(CATALOG_MANAGE_ACCESS, List.of(CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll(CATALOG_ROLE_USAGE, List.of(CATALOG_ROLE_USAGE)); + SUPER_PRIVILEGES.putAll(PRINCIPAL_ROLE_USAGE, List.of(PRINCIPAL_ROLE_USAGE)); + + // Namespace, Table, View privileges + SUPER_PRIVILEGES.putAll( + NAMESPACE_CREATE, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + NAMESPACE_CREATE, + NAMESPACE_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + TABLE_CREATE, + List.of( + CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, TABLE_CREATE, TABLE_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + VIEW_CREATE, + List.of(CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, VIEW_CREATE, VIEW_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_DROP, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + NAMESPACE_DROP, + NAMESPACE_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + TABLE_DROP, + List.of(CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, TABLE_DROP, TABLE_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + VIEW_DROP, + List.of(CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, VIEW_DROP, VIEW_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_LIST, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + NAMESPACE_CREATE, + NAMESPACE_FULL_METADATA, + NAMESPACE_LIST, + NAMESPACE_READ_PROPERTIES, + NAMESPACE_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + TABLE_LIST, + List.of( + CATALOG_MANAGE_CONTENT, +
CATALOG_MANAGE_METADATA, + TABLE_CREATE, + TABLE_FULL_METADATA, + TABLE_LIST, + TABLE_READ_DATA, + TABLE_READ_PROPERTIES, + TABLE_WRITE_DATA, + TABLE_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + VIEW_LIST, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + VIEW_CREATE, + VIEW_FULL_METADATA, + VIEW_LIST, + VIEW_READ_PROPERTIES, + VIEW_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_READ_PROPERTIES, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + NAMESPACE_FULL_METADATA, + NAMESPACE_READ_PROPERTIES, + NAMESPACE_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + TABLE_READ_PROPERTIES, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + TABLE_FULL_METADATA, + TABLE_READ_DATA, + TABLE_READ_PROPERTIES, + TABLE_WRITE_DATA, + TABLE_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + VIEW_READ_PROPERTIES, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + VIEW_FULL_METADATA, + VIEW_READ_PROPERTIES, + VIEW_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_WRITE_PROPERTIES, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + NAMESPACE_FULL_METADATA, + NAMESPACE_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + TABLE_WRITE_PROPERTIES, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + TABLE_FULL_METADATA, + TABLE_WRITE_DATA, + TABLE_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + VIEW_WRITE_PROPERTIES, + List.of( + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + VIEW_FULL_METADATA, + VIEW_WRITE_PROPERTIES)); + SUPER_PRIVILEGES.putAll( + TABLE_READ_DATA, List.of(CATALOG_MANAGE_CONTENT, TABLE_READ_DATA, TABLE_WRITE_DATA)); + SUPER_PRIVILEGES.putAll(TABLE_WRITE_DATA, List.of(CATALOG_MANAGE_CONTENT, TABLE_WRITE_DATA)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_FULL_METADATA, + List.of(CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, NAMESPACE_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + TABLE_FULL_METADATA, + List.of(CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, 
TABLE_FULL_METADATA)); + SUPER_PRIVILEGES.putAll( + VIEW_FULL_METADATA, + List.of(CATALOG_MANAGE_CONTENT, CATALOG_MANAGE_METADATA, VIEW_FULL_METADATA)); + + // Catalog privileges + SUPER_PRIVILEGES.putAll( + CATALOG_MANAGE_METADATA, List.of(CATALOG_MANAGE_METADATA, CATALOG_MANAGE_CONTENT)); + SUPER_PRIVILEGES.putAll(CATALOG_MANAGE_CONTENT, List.of(CATALOG_MANAGE_CONTENT)); + SUPER_PRIVILEGES.putAll( + CATALOG_CREATE, List.of(CATALOG_CREATE, CATALOG_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_DROP, List.of(CATALOG_DROP, CATALOG_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_LIST, + List.of( + CATALOG_CREATE, + CATALOG_FULL_METADATA, + CATALOG_LIST, + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + CATALOG_READ_PROPERTIES, + CATALOG_WRITE_PROPERTIES, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_READ_PROPERTIES, + List.of( + CATALOG_FULL_METADATA, + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + CATALOG_READ_PROPERTIES, + CATALOG_WRITE_PROPERTIES, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_WRITE_PROPERTIES, + List.of( + CATALOG_FULL_METADATA, + CATALOG_MANAGE_CONTENT, + CATALOG_MANAGE_METADATA, + CATALOG_WRITE_PROPERTIES, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_FULL_METADATA, List.of(CATALOG_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + + // _LIST_GRANTS + SUPER_PRIVILEGES.putAll( + PRINCIPAL_LIST_GRANTS, + List.of( + PRINCIPAL_LIST_GRANTS, + PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE, + PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_LIST_GRANTS, + List.of( + PRINCIPAL_ROLE_LIST_GRANTS, + PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, + PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_LIST_GRANTS, + List.of( + CATALOG_ROLE_LIST_GRANTS, + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, + CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE, + 
CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_LIST_GRANTS, + List.of(CATALOG_LIST_GRANTS, CATALOG_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_LIST_GRANTS, + List.of( + NAMESPACE_LIST_GRANTS, NAMESPACE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + TABLE_LIST_GRANTS, + List.of(TABLE_LIST_GRANTS, TABLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + VIEW_LIST_GRANTS, + List.of(VIEW_LIST_GRANTS, VIEW_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + + // _MANAGE_GRANTS_ON_SECURABLE for CATALOG, NAMESPACE, TABLE, VIEW + SUPER_PRIVILEGES.putAll( + CATALOG_MANAGE_GRANTS_ON_SECURABLE, + List.of(CATALOG_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + NAMESPACE_MANAGE_GRANTS_ON_SECURABLE, + List.of(NAMESPACE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + TABLE_MANAGE_GRANTS_ON_SECURABLE, + List.of(TABLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + VIEW_MANAGE_GRANTS_ON_SECURABLE, + List.of(VIEW_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + + // PRINCIPAL CRUDL + SUPER_PRIVILEGES.putAll( + PRINCIPAL_CREATE, + List.of(PRINCIPAL_CREATE, PRINCIPAL_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_DROP, List.of(PRINCIPAL_DROP, PRINCIPAL_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_LIST, + List.of( + PRINCIPAL_LIST, + PRINCIPAL_CREATE, + PRINCIPAL_READ_PROPERTIES, + PRINCIPAL_WRITE_PROPERTIES, + PRINCIPAL_FULL_METADATA, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_READ_PROPERTIES, + List.of( + PRINCIPAL_READ_PROPERTIES, + PRINCIPAL_WRITE_PROPERTIES, + PRINCIPAL_FULL_METADATA, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_WRITE_PROPERTIES, + List.of(PRINCIPAL_WRITE_PROPERTIES, PRINCIPAL_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + 
SUPER_PRIVILEGES.putAll( + PRINCIPAL_FULL_METADATA, List.of(PRINCIPAL_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + + // PRINCIPAL MANAGE_GRANTS + SUPER_PRIVILEGES.putAll( + PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE, + List.of(PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE, + List.of(PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE, SERVICE_MANAGE_ACCESS)); + + // PRINCIPAL special privileges + SUPER_PRIVILEGES.putAll(PRINCIPAL_ROTATE_CREDENTIALS, List.of(PRINCIPAL_ROTATE_CREDENTIALS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_RESET_CREDENTIALS, List.of(PRINCIPAL_RESET_CREDENTIALS, SERVICE_MANAGE_ACCESS)); + + // PRINCIPAL_ROLE CRUDL + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_CREATE, + List.of(PRINCIPAL_ROLE_CREATE, PRINCIPAL_ROLE_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_DROP, + List.of(PRINCIPAL_ROLE_DROP, PRINCIPAL_ROLE_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_LIST, + List.of( + PRINCIPAL_ROLE_LIST, + PRINCIPAL_ROLE_CREATE, + PRINCIPAL_ROLE_READ_PROPERTIES, + PRINCIPAL_ROLE_WRITE_PROPERTIES, + PRINCIPAL_ROLE_FULL_METADATA, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_READ_PROPERTIES, + List.of( + PRINCIPAL_ROLE_READ_PROPERTIES, + PRINCIPAL_ROLE_WRITE_PROPERTIES, + PRINCIPAL_ROLE_FULL_METADATA, + SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_WRITE_PROPERTIES, + List.of( + PRINCIPAL_ROLE_WRITE_PROPERTIES, PRINCIPAL_ROLE_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_FULL_METADATA, List.of(PRINCIPAL_ROLE_FULL_METADATA, SERVICE_MANAGE_ACCESS)); + + // PRINCIPAL_ROLE_ROLE MANAGE_GRANTS + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, + List.of(PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE, SERVICE_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE, + 
List.of(PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE, SERVICE_MANAGE_ACCESS)); + + // CATALOG_ROLE CRUDL + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_CREATE, + List.of(CATALOG_ROLE_CREATE, CATALOG_ROLE_FULL_METADATA, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_DROP, + List.of(CATALOG_ROLE_DROP, CATALOG_ROLE_FULL_METADATA, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_LIST, + List.of( + CATALOG_ROLE_LIST, + CATALOG_ROLE_CREATE, + CATALOG_ROLE_READ_PROPERTIES, + CATALOG_ROLE_WRITE_PROPERTIES, + CATALOG_ROLE_FULL_METADATA, + CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_READ_PROPERTIES, + List.of( + CATALOG_ROLE_READ_PROPERTIES, + CATALOG_ROLE_WRITE_PROPERTIES, + CATALOG_ROLE_FULL_METADATA, + CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_WRITE_PROPERTIES, + List.of(CATALOG_ROLE_WRITE_PROPERTIES, CATALOG_ROLE_FULL_METADATA, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_FULL_METADATA, List.of(CATALOG_ROLE_FULL_METADATA, CATALOG_MANAGE_ACCESS)); + + // CATALOG_ROLE_ROLE MANAGE_GRANTS + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, + List.of(CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE, CATALOG_MANAGE_ACCESS)); + SUPER_PRIVILEGES.putAll( + CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE, + List.of(CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE, CATALOG_MANAGE_ACCESS)); + } + + private final PolarisConfigurationStore featureConfig; + + public PolarisAuthorizer(PolarisConfigurationStore featureConfig) { + this.featureConfig = featureConfig; + } + + /** + * Checks whether the {@code grantedPrivilege} is sufficient to confer {@code desiredPrivilege}, + * assuming the privileges are referring to the same securable object. In other words, whether the + * grantedPrivilege is "better than or equal to" the desiredPrivilege. 
+ */ + public boolean matchesOrIsSubsumedBy( + PolarisPrivilege desiredPrivilege, PolarisPrivilege grantedPrivilege) { + if (grantedPrivilege == desiredPrivilege) { + return true; + } + + if (SUPER_PRIVILEGES.containsKey(desiredPrivilege) + && SUPER_PRIVILEGES.get(desiredPrivilege).contains(grantedPrivilege)) { + return true; + } + // TODO: Fill out the map, maybe in the PolarisPrivilege enum definition itself. + return false; + } + + public void authorizeOrThrow( + @NotNull AuthenticatedPolarisPrincipal authenticatedPrincipal, + @NotNull Set activatedGranteeIds, + @NotNull PolarisAuthorizableOperation authzOp, + @Nullable PolarisResolvedPathWrapper target, + @Nullable PolarisResolvedPathWrapper secondary) { + authorizeOrThrow( + authenticatedPrincipal, + activatedGranteeIds, + authzOp, + target == null ? null : List.of(target), + secondary == null ? null : List.of(secondary)); + } + + public void authorizeOrThrow( + @NotNull AuthenticatedPolarisPrincipal authenticatedPrincipal, + @NotNull Set activatedGranteeIds, + @NotNull PolarisAuthorizableOperation authzOp, + @Nullable List targets, + @Nullable List secondaries) { + boolean enforceCredentialRotationRequiredState = + featureConfig.getConfiguration( + CallContext.getCurrentContext().getPolarisCallContext(), + PolarisConfiguration.ENFORCE_PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_CHECKING, + false); + if (enforceCredentialRotationRequiredState + && authenticatedPrincipal + .getPrincipalEntity() + .getInternalPropertiesAsMap() + .containsKey(PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE) + && authzOp != PolarisAuthorizableOperation.ROTATE_CREDENTIALS) { + throw new ForbiddenException( + "Principal '%s' is not authorized for op %s due to PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE", + authenticatedPrincipal.getName(), authzOp); + } else if (!isAuthorized( + authenticatedPrincipal, activatedGranteeIds, authzOp, targets, secondaries)) { + throw new ForbiddenException( + "Principal '%s' with 
activated PrincipalRoles '%s' and activated ids '%s' is not authorized for op %s", + authenticatedPrincipal.getName(), + authenticatedPrincipal.getActivatedPrincipalRoleNames(), + activatedGranteeIds, + authzOp); + } + } + + /** + * Based on the required target/targetParent/secondary/secondaryParent privileges mapped from + * {@code authzOp}, determines whether the caller's set of activatedGranteeIds is authorized for + * the operation. + */ + public boolean isAuthorized( + @NotNull AuthenticatedPolarisPrincipal authenticatedPolarisPrincipal, + @NotNull Set activatedGranteeIds, + @NotNull PolarisAuthorizableOperation authzOp, + @Nullable PolarisResolvedPathWrapper target, + @Nullable PolarisResolvedPathWrapper secondary) { + return isAuthorized( + authenticatedPolarisPrincipal, + activatedGranteeIds, + authzOp, + target == null ? null : List.of(target), + secondary == null ? null : List.of(secondary)); + } + + public boolean isAuthorized( + @NotNull AuthenticatedPolarisPrincipal authenticatedPolarisPrincipal, + @NotNull Set activatedGranteeIds, + @NotNull PolarisAuthorizableOperation authzOp, + @Nullable List targets, + @Nullable List secondaries) { + for (PolarisPrivilege privilegeOnTarget : authzOp.getPrivilegesOnTarget()) { + // If any privileges are required on target, the target must be non-null. + Preconditions.checkState( + targets != null, + "Got null target when authorizing authzOp %s for privilege %s", + authzOp, + privilegeOnTarget); + for (PolarisResolvedPathWrapper target : targets) { + if (!hasTransitivePrivilege( + authenticatedPolarisPrincipal, activatedGranteeIds, privilegeOnTarget, target)) { + // TODO: Collect missing privileges to report all at the end and/or return to code + // that throws NotAuthorizedException for more useful messages. 
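The subsumption check described above reduces to a lookup in the static privilege map. A dependency-free sketch, with string names standing in for the real `PolarisPrivilege` enum constants (the `SUPER_PRIVILEGES` entries here are illustrative excerpts, not the full map):

```java
import java.util.List;
import java.util.Map;

public class PrivilegeSubsumption {
    // Simplified stand-in for the SUPER_PRIVILEGES multimap: for each desired
    // privilege, the granted privileges that are sufficient to confer it.
    static final Map<String, List<String>> SUPER_PRIVILEGES =
        Map.of(
            "TABLE_READ_DATA",
            List.of("CATALOG_MANAGE_CONTENT", "TABLE_READ_DATA", "TABLE_WRITE_DATA"),
            "TABLE_WRITE_DATA",
            List.of("CATALOG_MANAGE_CONTENT", "TABLE_WRITE_DATA"));

    // Mirrors matchesOrIsSubsumedBy: the granted privilege satisfies the desired
    // one when they are identical, or when it appears among the desired
    // privilege's "super" privileges.
    static boolean matchesOrIsSubsumedBy(String desired, String granted) {
        if (desired.equals(granted)) {
            return true;
        }
        List<String> supers = SUPER_PRIVILEGES.get(desired);
        return supers != null && supers.contains(granted);
    }
}
```

Note the asymmetry: a write privilege subsumes the corresponding read, but never the reverse.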
+ return false; + } + } + for (PolarisPrivilege privilegeOnSecondary : authzOp.getPrivilegesOnSecondary()) { + Preconditions.checkState( + secondaries != null, + "Got null secondary when authorizing authzOp %s for privilege %s", + authzOp, + privilegeOnSecondary); + for (PolarisResolvedPathWrapper secondary : secondaries) { + if (!hasTransitivePrivilege( + authenticatedPolarisPrincipal, activatedGranteeIds, privilegeOnSecondary, secondary)) { + return false; + } + } + } + return true; + } + + /** + * Checks whether the caller, via the role-expanded {@code activatedGranteeIds}, holds + * permissions matching {@code desiredPrivilege} on any entity in the {@code resolvedPath}. + * + *

The caller is responsible for translating these checks into either behavioral actions (e.g. + * returning 404 instead of 403, checking other root privileges that supersede the checked + * privilege, choosing whether to vend credentials) or throwing relevant Unauthorized + * errors/exceptions. + */ + public boolean hasTransitivePrivilege( + @NotNull AuthenticatedPolarisPrincipal authenticatedPolarisPrincipal, + Set<Long> activatedGranteeIds, + PolarisPrivilege desiredPrivilege, + PolarisResolvedPathWrapper resolvedPath) { + + // Iterate starting at the parent, since the most common case should be to manage grants as + // high up in the resource hierarchy as possible, so we expect earlier termination. + for (ResolvedPolarisEntity resolvedSecurableEntity : resolvedPath.getResolvedFullPath()) { + Preconditions.checkState( + resolvedSecurableEntity.getGrantRecordsAsSecurable() != null, + "Got null grantRecordsAsSecurable for resolvedSecurableEntity %s", + resolvedSecurableEntity); + for (PolarisGrantRecord grantRecord : resolvedSecurableEntity.getGrantRecordsAsSecurable()) { + if (matchesOrIsSubsumedBy( + desiredPrivilege, PolarisPrivilege.fromCode(grantRecord.getPrivilegeCode()))) { + // Found a potential candidate for satisfying our authz goal.
+ if (activatedGranteeIds.contains(grantRecord.getGranteeId())) { + LOG.debug( + "Satisfied privilege {} with grantRecord {} from securable {} for " + + "principalName {} and activatedIds {}", + desiredPrivilege, + grantRecord, + resolvedSecurableEntity, + authenticatedPolarisPrincipal.getName(), + activatedGranteeIds); + return true; + } + } + } + } + + LOG.debug( + "Failed to satisfy privilege {} for principalName {} on resolvedPath {}", + desiredPrivilege, + authenticatedPolarisPrincipal.getName(), + resolvedPath); + return false; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/catalog/PolarisCatalogHelpers.java b/polaris-core/src/main/java/io/polaris/core/catalog/PolarisCatalogHelpers.java new file mode 100644 index 0000000000..cf3e94052d --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/catalog/PolarisCatalogHelpers.java @@ -0,0 +1,92 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
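`hasTransitivePrivilege` walks the resolved path from the top of the hierarchy down, so a grant placed high up (e.g. on the catalog) short-circuits the search early. A minimal sketch of that walk, modeling each path element's grant records as a map from privilege name to the grantee ids holding it (these types are illustrative stand-ins, not the real entity classes, and the sketch checks exact privilege matches rather than applying subsumption):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TransitivePrivilegeWalk {
    // resolvedPath: one map per securable entity, ordered from the root of the
    // hierarchy down to the target entity.
    static boolean hasTransitivePrivilege(
            Set<Long> activatedGranteeIds,
            String desiredPrivilege,
            List<Map<String, Set<Long>>> resolvedPath) {
        for (Map<String, Set<Long>> grantsOnSecurable : resolvedPath) {
            Set<Long> grantees = grantsOnSecurable.get(desiredPrivilege);
            if (grantees != null
                    && grantees.stream().anyMatch(activatedGranteeIds::contains)) {
                return true; // satisfied as high up the hierarchy as possible
            }
        }
        return false; // no entity on the path grants the privilege to an activated id
    }
}
```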
+ */ +package io.polaris.core.catalog; + +import io.polaris.core.entity.PolarisEntity; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.stream.Collectors; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Holds helper methods translating between persistence-layer structs and Iceberg objects shared by + * different Polaris components. + */ +public class PolarisCatalogHelpers { + private static final Logger LOG = LoggerFactory.getLogger(PolarisCatalogHelpers.class); + + /** Not intended for instantiation. */ + private PolarisCatalogHelpers() {} + + public static List tableIdentifierToList(TableIdentifier identifier) { + List fullList = new ArrayList<>(); + fullList.addAll(Arrays.asList(identifier.namespace().levels())); + fullList.add(identifier.name()); + return fullList; + } + + public static TableIdentifier listToTableIdentifier(List ids) { + return TableIdentifier.of(ids.toArray(new String[0])); + } + + public static Namespace getParentNamespace(Namespace namespace) { + if (namespace.isEmpty() || namespace.length() == 1) { + return Namespace.empty(); + } + String[] parentLevels = new String[namespace.length() - 1]; + for (int i = 0; i < parentLevels.length; ++i) { + parentLevels[i] = namespace.level(i); + } + return Namespace.of(parentLevels); + } + + public static List nameAndIdToNamespaces( + List catalogPath, List entities) { + // Skip element 0 which is the catalog entity + String[] parentNamespaces = new String[catalogPath.size() - 1]; + for (int i = 0; i < parentNamespaces.length; ++i) { + parentNamespaces[i] = catalogPath.get(i + 1).getName(); + } + List namespaces = new ArrayList<>(); + for (PolarisEntity.NameAndId entity : entities) { + String[] fullName = Arrays.copyOf(parentNamespaces, parentNamespaces.length + 1); + fullName[fullName.length - 1] = entity.getName(); + 
namespaces.add(Namespace.of(fullName)); + } + return namespaces; + } + + /** + * Given the shortnames/ids of entities that all live under the given catalogPath, reconstructs + * TableIdentifier objects for each that all hold the catalogPath excluding the catalog entity. + */ + public static List nameAndIdToTableIdentifiers( + List catalogPath, List entities) { + // Skip element 0 which is the catalog entity + String[] parentNamespaces = new String[catalogPath.size() - 1]; + for (int i = 0; i < parentNamespaces.length; ++i) { + parentNamespaces[i] = catalogPath.get(i + 1).getName(); + } + Namespace sharedNamespace = Namespace.of(parentNamespaces); + return entities.stream() + .map(entity -> TableIdentifier.of(sharedNamespace, entity.getName())) + .collect(Collectors.toList()); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/context/CallContext.java b/polaris-core/src/main/java/io/polaris/core/context/CallContext.java new file mode 100644 index 0000000000..39940b9340 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/context/CallContext.java @@ -0,0 +1,156 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
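The helpers in `PolarisCatalogHelpers` are thin translations between flat name lists and Iceberg's `Namespace`/`TableIdentifier` shapes. Their core array handling can be sketched without the Iceberg dependency (the method names here paraphrase the real helpers):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NamespaceListHelpers {
    // Like getParentNamespace: drop the last level; 0- and 1-level namespaces
    // both have the empty namespace as parent.
    static String[] parentNamespace(String[] levels) {
        return levels.length <= 1
            ? new String[0]
            : Arrays.copyOf(levels, levels.length - 1);
    }

    // Like tableIdentifierToList: the namespace levels followed by the table name.
    static List<String> identifierToList(String[] namespaceLevels, String tableName) {
        List<String> full = new ArrayList<>(Arrays.asList(namespaceLevels));
        full.add(tableName);
        return full;
    }
}
```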
+ */ +package io.polaris.core.context; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.stream.Collectors; +import org.apache.iceberg.io.CloseableGroup; +import org.jetbrains.annotations.NotNull; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Stores elements associated with an individual REST request such as RealmContext, caller + * identity/role, authn/authz, etc. This class is distinct from RealmContext because implementations + * may need to first independently resolve a RealmContext before resolving the identity/role + * elements of the CallContext that reside exclusively within the resolved Realm. For example, the + * principal/role entities may be defined within a Realm-specific persistence layer, and the + * underlying nature of the persistence layer may differ between different realms. + */ +public interface CallContext extends AutoCloseable { + InheritableThreadLocal CURRENT_CONTEXT = new InheritableThreadLocal<>(); + + // For requests that make use of a Catalog instance, this holds the instance that was + // created, scoped to the current call context. + public static final String REQUEST_PATH_CATALOG_INSTANCE_KEY = "REQUEST_PATH_CATALOG_INSTANCE"; + + // Authenticator filters should populate this field alongside resolving a SecurityContext. 
+ // Value type: AuthenticatedPolarisPrincipal + String AUTHENTICATED_PRINCIPAL = "AUTHENTICATED_PRINCIPAL"; + String CLOSEABLES = "closeables"; + + static CallContext setCurrentContext(CallContext context) { + CURRENT_CONTEXT.set(context); + return context; + } + + static CallContext getCurrentContext() { + return CURRENT_CONTEXT.get(); + } + + static PolarisDiagnostics getDiagnostics() { + return CURRENT_CONTEXT.get().getPolarisCallContext().getDiagServices(); + } + + static AuthenticatedPolarisPrincipal getAuthenticatedPrincipal() { + return (AuthenticatedPolarisPrincipal) + CallContext.getCurrentContext().contextVariables().get(CallContext.AUTHENTICATED_PRINCIPAL); + } + + static void unsetCurrentContext() { + CURRENT_CONTEXT.remove(); + } + + static CallContext of( + final RealmContext realmContext, final PolarisCallContext polarisCallContext) { + Map map = new HashMap<>(); + return new CallContext() { + @Override + public RealmContext getRealmContext() { + return realmContext; + } + + @Override + public PolarisCallContext getPolarisCallContext() { + return polarisCallContext; + } + + @Override + public Map contextVariables() { + return map; + } + }; + } + + /** + * Copy the {@link CallContext}. {@link #contextVariables()} will be copied except for {@link + * #closeables()}. The original {@link #contextVariables()} map is untouched and {@link + * #closeables()} in the original {@link CallContext} should be closed along with the {@link + * CallContext}. 
+ * + * @param base + * @return + */ + static CallContext copyOf(CallContext base) { + RealmContext realmContext = base.getRealmContext(); + PolarisCallContext polarisCallContext = base.getPolarisCallContext(); + Map contextVariables = + base.contextVariables().entrySet().stream() + .filter(e -> !e.getKey().equals(CLOSEABLES)) + .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); + return new CallContext() { + @Override + public RealmContext getRealmContext() { + return realmContext; + } + + @Override + public PolarisCallContext getPolarisCallContext() { + return polarisCallContext; + } + + @Override + public Map contextVariables() { + return contextVariables; + } + }; + } + + RealmContext getRealmContext(); + + /** + * @return the inner context used for delegating services + */ + PolarisCallContext getPolarisCallContext(); + + Map contextVariables(); + + default @NotNull CloseableGroup closeables() { + return (CloseableGroup) + contextVariables().computeIfAbsent(CLOSEABLES, key -> new CloseableGroup()); + } + + default void close() { + if (CURRENT_CONTEXT.get() == this) { + unsetCurrentContext(); + CloseableGroup closeables = closeables(); + try { + closeables.close(); + } catch (IOException e) { + Logger logger = LoggerFactory.getLogger(CallContext.class); + logger + .atWarn() + .addKeyValue("closeableGroup", closeables) + .log("Unable to close closeable group", e); + } + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/context/RealmContext.java b/polaris-core/src/main/java/io/polaris/core/context/RealmContext.java new file mode 100644 index 0000000000..ad5f7445e0 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/context/RealmContext.java @@ -0,0 +1,25 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
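`CallContext` is bound per request through an `InheritableThreadLocal`, so child threads inherit the parent's binding and the context must be explicitly cleared when the request completes. A minimal sketch of that lifecycle, using `String` as a stand-in for the `CallContext` interface:

```java
public class ContextBinding {
    // Mirrors CallContext.CURRENT_CONTEXT: child threads inherit the binding.
    static final InheritableThreadLocal<String> CURRENT = new InheritableThreadLocal<>();

    // Bind, run, and always unbind -- the pattern behind setCurrentContext /
    // getCurrentContext / unsetCurrentContext.
    static String withContext(String context) {
        CURRENT.set(context);
        try {
            return CURRENT.get(); // any code on this thread can read the binding
        } finally {
            CURRENT.remove(); // like unsetCurrentContext(): avoid leaking state
        }
    }
}
```

The `finally` block matters: request threads are typically pooled, so a binding left behind would leak into the next request served by the same thread.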
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.context; + +/** + * Represents the elements of a REST request associated with routing to independent and isolated + * "universes". This may include properties such as region, deployment environment (e.g. dev, qa, + * prod), and/or account. + */ +public interface RealmContext { + String getRealmIdentifier(); +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/AsyncTaskType.java b/polaris-core/src/main/java/io/polaris/core/entity/AsyncTaskType.java new file mode 100644 index 0000000000..f710a4dbe9 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/AsyncTaskType.java @@ -0,0 +1,45 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonValue; + +public enum AsyncTaskType { + ENTITY_CLEANUP_SCHEDULER(1), + FILE_CLEANUP(2); + + private final int typeCode; + + AsyncTaskType(int typeCode) { + this.typeCode = typeCode; + } + + @JsonValue + public int typeCode() { + return typeCode; + } + + @JsonCreator + public static AsyncTaskType fromTypeCode(int typeCode) { + for (AsyncTaskType taskType : AsyncTaskType.values()) { + if (taskType.typeCode == typeCode) { + return taskType; + } + } + return null; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/CatalogEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/CatalogEntity.java new file mode 100644 index 0000000000..1b01ab6559 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/CatalogEntity.java @@ -0,0 +1,283 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
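`AsyncTaskType` pins each constant to a stable integer code so the serialized form survives enum renames: `@JsonValue` writes the code and `@JsonCreator` resolves it back. The round trip, minus the Jackson annotations, looks like:

```java
public class TaskTypeCodes {
    enum TaskType {
        ENTITY_CLEANUP_SCHEDULER(1),
        FILE_CLEANUP(2);

        final int typeCode;

        TaskType(int typeCode) {
            this.typeCode = typeCode;
        }

        // Mirrors AsyncTaskType.fromTypeCode: unknown codes map to null rather
        // than throwing, so callers must null-check the result.
        static TaskType fromTypeCode(int code) {
            for (TaskType t : values()) {
                if (t.typeCode == code) {
                    return t;
                }
            }
            return null;
        }
    }
}
```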
+ */ +package io.polaris.core.entity; + +import static io.polaris.core.admin.model.StorageConfigInfo.StorageTypeEnum.AZURE; + +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.AzureStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogProperties; +import io.polaris.core.admin.model.ExternalCatalog; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.GcpStorageConfigInfo; +import io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.storage.FileStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.aws.AwsStorageConfigurationInfo; +import io.polaris.core.storage.azure.AzureStorageConfigurationInfo; +import io.polaris.core.storage.gcp.GcpStorageConfigurationInfo; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import org.apache.iceberg.exceptions.BadRequestException; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Catalog specific subclass of the {@link PolarisEntity} that handles conversion from the {@link + * Catalog} model to the persistent entity model. + */ +public class CatalogEntity extends PolarisEntity { + private static final Logger LOG = LoggerFactory.getLogger(CatalogEntity.class); + + public static final long ROOT_CATALOG_ID = 0; + public static final String CATALOG_TYPE_PROPERTY = "catalogType"; + + // Specifies the object-store base location used for all Table file locations under the + // catalog, stored in the "properties" map. 
+ public static final String DEFAULT_BASE_LOCATION_KEY = "default-base-location"; + + // Specifies a prefix that will be replaced with the catalog's default-base-location whenever + // it matches a specified new table or view location. For example, if the catalog base location + // is "s3://my-bucket/base/location" and the prefix specified here is "file:/tmp" then any + // new table attempting to specify a base location of "file:/tmp/ns1/ns2/table1" will be + // translated into "s3://my-bucket/base/location/ns1/ns2/table1". + public static final String REPLACE_NEW_LOCATION_PREFIX_WITH_CATALOG_DEFAULT_KEY = + "replace-new-location-prefix-with-catalog-default"; + public static final String REMOTE_URL = "remoteUrl"; + + public CatalogEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static CatalogEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return new CatalogEntity(sourceEntity); + } + return null; + } + + public static CatalogEntity fromCatalog(Catalog catalog) { + + Builder builder = + new Builder() + .setName(catalog.getName()) + .setProperties(catalog.getProperties().toMap()) + .setCatalogType(catalog.getType().name()); + Map internalProperties = new HashMap<>(); + if (catalog instanceof ExternalCatalog) { + internalProperties.put(REMOTE_URL, ((ExternalCatalog) catalog).getRemoteUrl()); + } + internalProperties.put(CATALOG_TYPE_PROPERTY, catalog.getType().name()); + builder.setInternalProperties(internalProperties); + builder.setStorageConfigurationInfo( + catalog.getStorageConfigInfo(), getDefaultBaseLocation(catalog)); + return builder.build(); + } + + public Catalog asCatalog() { + Map internalProperties = getInternalPropertiesAsMap(); + Catalog.TypeEnum catalogType = + Optional.ofNullable(internalProperties.get(CATALOG_TYPE_PROPERTY)) + .map(Catalog.TypeEnum::valueOf) + .orElseGet(() -> getName().equalsIgnoreCase("ROOT") ? 
Catalog.TypeEnum.INTERNAL : null); + Map propertiesMap = getPropertiesAsMap(); + CatalogProperties catalogProps = + CatalogProperties.builder(propertiesMap.get(DEFAULT_BASE_LOCATION_KEY)) + .putAll(propertiesMap) + .build(); + return catalogType == Catalog.TypeEnum.INTERNAL + ? PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(getName()) + .setProperties(catalogProps) + .setCreateTimestamp(getCreateTimestamp()) + .setLastUpdateTimestamp(getLastUpdateTimestamp()) + .setEntityVersion(getEntityVersion()) + .setStorageConfigInfo(getStorageInfo(internalProperties)) + .build() + : ExternalCatalog.builder() + .setType(Catalog.TypeEnum.EXTERNAL) + .setName(getName()) + .setRemoteUrl(getInternalPropertiesAsMap().get(REMOTE_URL)) + .setProperties(catalogProps) + .setCreateTimestamp(getCreateTimestamp()) + .setLastUpdateTimestamp(getLastUpdateTimestamp()) + .setEntityVersion(getEntityVersion()) + .setStorageConfigInfo(getStorageInfo(internalProperties)) + .build(); + } + + private StorageConfigInfo getStorageInfo(Map internalProperties) { + if (internalProperties.containsKey(PolarisEntityConstants.getStorageConfigInfoPropertyName())) { + PolarisStorageConfigurationInfo configInfo = getStorageConfigurationInfo(); + PolarisStorageConfigurationInfo.StorageType storageType = configInfo.getStorageType(); + if (configInfo instanceof AwsStorageConfigurationInfo) { + AwsStorageConfigurationInfo awsConfig = (AwsStorageConfigurationInfo) configInfo; + return AwsStorageConfigInfo.builder() + .setRoleArn(awsConfig.getRoleARN()) + .setExternalId(awsConfig.getExternalId()) + .setUserArn(awsConfig.getUserARN()) + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(awsConfig.getAllowedLocations()) + .build(); + } + if (configInfo instanceof AzureStorageConfigurationInfo) { + AzureStorageConfigurationInfo azureConfig = (AzureStorageConfigurationInfo) configInfo; + return AzureStorageConfigInfo.builder() + .setTenantId(azureConfig.getTenantId()) 
+ .setMultiTenantAppName(azureConfig.getMultiTenantAppName()) + .setConsentUrl(azureConfig.getConsentUrl()) + .setStorageType(AZURE) + .setAllowedLocations(azureConfig.getAllowedLocations()) + .build(); + } + if (configInfo instanceof GcpStorageConfigurationInfo) { + GcpStorageConfigurationInfo gcpConfigModel = (GcpStorageConfigurationInfo) configInfo; + return GcpStorageConfigInfo.builder() + .setGcsServiceAccount(gcpConfigModel.getGcpServiceAccount()) + .setStorageType(StorageConfigInfo.StorageTypeEnum.GCS) + .setAllowedLocations(gcpConfigModel.getAllowedLocations()) + .build(); + } + if (configInfo instanceof FileStorageConfigurationInfo) { + FileStorageConfigurationInfo fileConfigModel = (FileStorageConfigurationInfo) configInfo; + return new FileStorageConfigInfo( + StorageConfigInfo.StorageTypeEnum.FILE, fileConfigModel.getAllowedLocations()); + } + return null; + } + return null; + } + + public String getDefaultBaseLocation() { + return getPropertiesAsMap().get(DEFAULT_BASE_LOCATION_KEY); + } + + public String getReplaceNewLocationPrefixWithCatalogDefault() { + return getPropertiesAsMap().get(REPLACE_NEW_LOCATION_PREFIX_WITH_CATALOG_DEFAULT_KEY); + } + + public @Nullable PolarisStorageConfigurationInfo getStorageConfigurationInfo() { + String configStr = + getInternalPropertiesAsMap().get(PolarisEntityConstants.getStorageConfigInfoPropertyName()); + if (configStr != null) { + return PolarisStorageConfigurationInfo.deserialize( + new PolarisDefaultDiagServiceImpl(), configStr); + } + return null; + } + + public Catalog.TypeEnum getCatalogType() { + return Optional.ofNullable(getInternalPropertiesAsMap().get(CATALOG_TYPE_PROPERTY)) + .map(Catalog.TypeEnum::valueOf) + .orElse(null); + } + + public static class Builder extends PolarisEntity.BaseBuilder { + public Builder() { + super(); + setType(PolarisEntityType.CATALOG); + setCatalogId(PolarisEntityConstants.getNullId()); + setParentId(PolarisEntityConstants.getRootEntityId()); + } + + public 
Builder(CatalogEntity original) { + super(original); + } + + public Builder setCatalogType(String type) { + internalProperties.put(CATALOG_TYPE_PROPERTY, type); + return this; + } + + public Builder setDefaultBaseLocation(String defaultBaseLocation) { + // Note that this member lives in the main 'properties' map rather than internalProperties. + properties.put(DEFAULT_BASE_LOCATION_KEY, defaultBaseLocation); + return this; + } + + public Builder setReplaceNewLocationPrefixWithCatalogDefault(String value) { + // Note that this member lives in the main 'properties' map rather than internalProperties. + properties.put(REPLACE_NEW_LOCATION_PREFIX_WITH_CATALOG_DEFAULT_KEY, value); + return this; + } + + public Builder setStorageConfigurationInfo( + StorageConfigInfo storageConfigModel, String defaultBaseLocation) { + if (storageConfigModel != null) { + PolarisStorageConfigurationInfo config; + Set<String> allowedLocations = new HashSet<>(storageConfigModel.getAllowedLocations()); + + // TODO: Reconsider whether this should actually just be a check up-front or if we + // actually want to silently add to the allowed locations. Maybe ideally we only + // add to the allowedLocations if allowedLocations is empty for the simple case, + // but if the caller provided allowedLocations explicitly, then we just verify that + // the defaultBaseLocation is at least a subpath of one of the allowedLocations.
+ if (defaultBaseLocation == null) { + throw new BadRequestException("Must specify default base location"); + } + allowedLocations.add(defaultBaseLocation); + switch (storageConfigModel.getStorageType()) { + case S3: + AwsStorageConfigInfo awsConfigModel = (AwsStorageConfigInfo) storageConfigModel; + config = + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + new ArrayList<>(allowedLocations), + awsConfigModel.getRoleArn(), + awsConfigModel.getExternalId()); + ((AwsStorageConfigurationInfo) config).validateArn(awsConfigModel.getRoleArn()); + break; + case AZURE: + AzureStorageConfigInfo azureConfigModel = (AzureStorageConfigInfo) storageConfigModel; + config = + new AzureStorageConfigurationInfo( + new ArrayList<>(allowedLocations), azureConfigModel.getTenantId()); + break; + case GCS: + config = new GcpStorageConfigurationInfo(new ArrayList<>(allowedLocations)); + break; + case FILE: + config = new FileStorageConfigurationInfo(new ArrayList<>(allowedLocations)); + break; + default: + throw new IllegalStateException( + "Unsupported storage type: " + storageConfigModel.getStorageType()); + } + internalProperties.put( + PolarisEntityConstants.getStorageConfigInfoPropertyName(), config.serialize()); + } + return this; + } + + public CatalogEntity build() { + return new CatalogEntity(buildBase()); + } + } + + protected static @NotNull String getDefaultBaseLocation(Catalog catalog) { + return catalog.getProperties().getDefaultBaseLocation(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/CatalogRoleEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/CatalogRoleEntity.java new file mode 100644 index 0000000000..043e1bb285 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/CatalogRoleEntity.java @@ -0,0 +1,65 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import io.polaris.core.admin.model.CatalogRole; + +/** Wrapper for translating between the REST CatalogRole object and the base PolarisEntity type. */ +public class CatalogRoleEntity extends PolarisEntity { + public CatalogRoleEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static CatalogRoleEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return new CatalogRoleEntity(sourceEntity); + } + return null; + } + + public static CatalogRoleEntity fromCatalogRole(CatalogRole catalogRole) { + return new Builder() + .setName(catalogRole.getName()) + .setProperties(catalogRole.getProperties()) + .build(); + } + + public CatalogRole asCatalogRole() { + CatalogRole catalogRole = + new CatalogRole( + getName(), + getPropertiesAsMap(), + getCreateTimestamp(), + getLastUpdateTimestamp(), + getEntityVersion()); + return catalogRole; + } + + public static class Builder extends PolarisEntity.BaseBuilder<CatalogRoleEntity, Builder> { + public Builder() { + super(); + setType(PolarisEntityType.CATALOG_ROLE); + } + + public Builder(CatalogRoleEntity original) { + super(original); + } + + public CatalogRoleEntity build() { + return new CatalogRoleEntity(buildBase()); + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/NamespaceEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/NamespaceEntity.java new file mode 100644 index
0000000000..523b47cb42 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/NamespaceEntity.java @@ -0,0 +1,89 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonIgnore; +import io.polaris.core.catalog.PolarisCatalogHelpers; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.rest.RESTUtil; + +/** + * Namespace-specific subclass of the {@link PolarisEntity} that provides accessors interacting with + * internalProperties specific to the NAMESPACE type. + */ +public class NamespaceEntity extends PolarisEntity { + // RESTUtil-encoded parent namespace. 
+ public static final String PARENT_NAMESPACE_KEY = "parent-namespace"; + + public NamespaceEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static NamespaceEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return new NamespaceEntity(sourceEntity); + } + return null; + } + + public Namespace getParentNamespace() { + String encodedNamespace = getInternalPropertiesAsMap().get(PARENT_NAMESPACE_KEY); + if (encodedNamespace == null) { + return Namespace.empty(); + } + return RESTUtil.decodeNamespace(encodedNamespace); + } + + public Namespace asNamespace() { + Namespace parent = getParentNamespace(); + String[] levels = new String[parent.length() + 1]; + for (int i = 0; i < parent.length(); ++i) { + levels[i] = parent.level(i); + } + levels[levels.length - 1] = getName(); + return Namespace.of(levels); + } + + @JsonIgnore + public String getBaseLocation() { + return getPropertiesAsMap().get(PolarisEntityConstants.ENTITY_BASE_LOCATION); + } + + public static class Builder extends PolarisEntity.BaseBuilder<NamespaceEntity, Builder> { + public Builder(Namespace namespace) { + super(); + setType(PolarisEntityType.NAMESPACE); + setParentNamespace(PolarisCatalogHelpers.getParentNamespace(namespace)); + setName(namespace.level(namespace.length() - 1)); + } + + public Builder setBaseLocation(String baseLocation) { + properties.put(PolarisEntityConstants.ENTITY_BASE_LOCATION, baseLocation); + return this; + } + + public Builder setParentNamespace(Namespace namespace) { + if (namespace != null && !namespace.isEmpty()) { + internalProperties.put(PARENT_NAMESPACE_KEY, RESTUtil.encodeNamespace(namespace)); + } + return this; + } + + public NamespaceEntity build() { + return new NamespaceEntity(buildBase()); + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisBaseEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisBaseEntity.java new file mode 100644 index 0000000000..0209c9951c --- /dev/null +++
b/polaris-core/src/main/java/io/polaris/core/entity/PolarisBaseEntity.java @@ -0,0 +1,352 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; + +/** + * Base polaris entity representing all attributes of a Polaris Entity. This is used to exchange + * full entity information between the client and the GS backend + */ +public class PolarisBaseEntity extends PolarisEntityCore { + + public static final String EMPTY_MAP_STRING = "{}"; + + // to serialize/deserialize properties + private static final ObjectMapper MAPPER = new ObjectMapper(); + + // the type of the entity when it was resolved + protected int subTypeCode; + + // timestamp when this entity was created + protected long createTimestamp; + + // when this entity was dropped. Null if was never dropped + protected long dropTimestamp; + + // when did we start purging this entity. 
When not null, un-drop is no longer possible + protected long purgeTimestamp; + + // when should we start purging this entity + protected long toPurgeTimestamp; + + // last time this entity was updated, only for troubleshooting + protected long lastUpdateTimestamp; + + // properties, serialized as a JSON string + protected String properties; + + // internal properties, serialized as a JSON string + protected String internalProperties; + + // version of this entity's grant records, will be monotonically incremented + protected int grantRecordsVersion; + + public int getSubTypeCode() { + return subTypeCode; + } + + public void setSubTypeCode(int subTypeCode) { + this.subTypeCode = subTypeCode; + } + + public long getCreateTimestamp() { + return createTimestamp; + } + + public void setCreateTimestamp(long createTimestamp) { + this.createTimestamp = createTimestamp; + } + + public long getDropTimestamp() { + return dropTimestamp; + } + + public void setDropTimestamp(long dropTimestamp) { + this.dropTimestamp = dropTimestamp; + } + + public long getPurgeTimestamp() { + return purgeTimestamp; + } + + public void setPurgeTimestamp(long purgeTimestamp) { + this.purgeTimestamp = purgeTimestamp; + } + + public long getToPurgeTimestamp() { + return toPurgeTimestamp; + } + + public void setToPurgeTimestamp(long toPurgeTimestamp) { + this.toPurgeTimestamp = toPurgeTimestamp; + } + + public long getLastUpdateTimestamp() { + return lastUpdateTimestamp; + } + + public void setLastUpdateTimestamp(long lastUpdateTimestamp) { + this.lastUpdateTimestamp = lastUpdateTimestamp; + } + + public String getProperties() { + return properties != null ? properties : EMPTY_MAP_STRING; + } + + @JsonIgnore + public Map<String, String> getPropertiesAsMap() { + if (properties == null) { + return new HashMap<>(); + } + try { + return MAPPER.readValue(properties, new TypeReference<>() {}); + } catch (JsonProcessingException ex) { + throw new IllegalStateException( + String.format("Failed to deserialize json.
properties %s", properties), ex); + } + } + + /** + * Set one single property + * + * @param propName name of the property + * @param propValue value of that property + */ + public void addProperty(String propName, String propValue) { + Map<String, String> props = this.getPropertiesAsMap(); + props.put(propName, propValue); + this.setPropertiesAsMap(props); + } + + public void setProperties(String properties) { + this.properties = properties; + } + + @JsonIgnore + public void setPropertiesAsMap(Map<String, String> properties) { + try { + this.properties = properties == null ? null : MAPPER.writeValueAsString(properties); + } catch (JsonProcessingException ex) { + throw new IllegalStateException( + String.format("Failed to serialize json. properties %s", properties), ex); + } + } + + public String getInternalProperties() { + return internalProperties != null ? internalProperties : EMPTY_MAP_STRING; + } + + @JsonIgnore + public Map<String, String> getInternalPropertiesAsMap() { + if (this.internalProperties == null) { + return new HashMap<>(); + } + try { + return MAPPER.readValue(this.internalProperties, new TypeReference<>() {}); + } catch (JsonProcessingException ex) { + throw new IllegalStateException( + String.format( + "Failed to deserialize json. internalProperties %s", this.internalProperties), + ex); + } + } + + /** + * Set one single internal property + * + * @param propName name of the property + * @param propValue value of that property + */ + public void addInternalProperty(String propName, String propValue) { + Map<String, String> props = this.getInternalPropertiesAsMap(); + props.put(propName, propValue); + this.setInternalPropertiesAsMap(props); + } + + public void setInternalProperties(String internalProperties) { + this.internalProperties = internalProperties; + } + + @JsonIgnore + public void setInternalPropertiesAsMap(Map<String, String> internalProperties) { + try { + this.internalProperties = + internalProperties == null ?
null : MAPPER.writeValueAsString(internalProperties); + } catch (JsonProcessingException ex) { + throw new IllegalStateException( + String.format("Failed to serialize json. internalProperties %s", internalProperties), ex); + } + } + + public int getGrantRecordsVersion() { + return grantRecordsVersion; + } + + public void setGrantRecordsVersion(int grantRecordsVersion) { + this.grantRecordsVersion = grantRecordsVersion; + } + + public static PolarisBaseEntity fromCore( + PolarisEntityCore coreEntity, PolarisEntityType entityType, PolarisEntitySubType subType) { + return new PolarisBaseEntity( + coreEntity.getCatalogId(), + coreEntity.getId(), + entityType, + subType, + coreEntity.getParentId(), + coreEntity.getName()); + } + + /** + * Copy constructor + * + * @param entity entity to copy + */ + public PolarisBaseEntity(PolarisBaseEntity entity) { + super( + entity.getCatalogId(), + entity.getId(), + entity.getParentId(), + entity.getTypeCode(), + entity.getName(), + entity.getEntityVersion()); + this.subTypeCode = entity.getSubTypeCode(); + this.createTimestamp = entity.getCreateTimestamp(); + this.dropTimestamp = entity.getDropTimestamp(); + this.purgeTimestamp = entity.getPurgeTimestamp(); + this.toPurgeTimestamp = entity.getToPurgeTimestamp(); + this.lastUpdateTimestamp = entity.getLastUpdateTimestamp(); + this.properties = entity.getProperties(); + this.internalProperties = entity.getInternalProperties(); + this.grantRecordsVersion = entity.getGrantRecordsVersion(); + } + + /** Build the DTO for a new entity */ + public PolarisBaseEntity( + long catalogId, + long id, + PolarisEntityType type, + PolarisEntitySubType subType, + long parentId, + String name) { + this(catalogId, id, type.getCode(), subType.getCode(), parentId, name); + } + + /** Build the DTO for a new entity */ + protected PolarisBaseEntity( + long catalogId, long id, int typeCode, int subTypeCode, long parentId, String name) { + super(catalogId, id, parentId, typeCode, name, 1); + this.subTypeCode 
= subTypeCode; + this.createTimestamp = System.currentTimeMillis(); + this.dropTimestamp = 0; + this.purgeTimestamp = 0; + this.toPurgeTimestamp = 0; + this.lastUpdateTimestamp = this.createTimestamp; + this.properties = null; + this.internalProperties = null; + this.grantRecordsVersion = 1; + } + + /** Build the DTO for a new entity */ + protected PolarisBaseEntity() { + super(); + } + + /** + * @return the subtype of this entity + */ + public @JsonIgnore PolarisEntitySubType getSubType() { + return PolarisEntitySubType.fromCode(this.subTypeCode); + } + + /** + * @return true if this entity has been dropped + */ + public @JsonIgnore boolean isDropped() { + return this.dropTimestamp != 0; + } + + @Override + public boolean equals(Object o) { + if (!super.equals(o)) { + return false; + } + if (this == o) { + return true; + } + if (!(o instanceof PolarisBaseEntity)) { + return false; + } + PolarisBaseEntity that = (PolarisBaseEntity) o; + return subTypeCode == that.subTypeCode + && createTimestamp == that.createTimestamp + && dropTimestamp == that.dropTimestamp + && purgeTimestamp == that.purgeTimestamp + && toPurgeTimestamp == that.toPurgeTimestamp + && lastUpdateTimestamp == that.lastUpdateTimestamp + && grantRecordsVersion == that.grantRecordsVersion + && Objects.equals(properties, that.properties) + && Objects.equals(internalProperties, that.internalProperties); + } + + @Override + public int hashCode() { + return Objects.hash( + catalogId, + id, + parentId, + typeCode, + name, + entityVersion, + subTypeCode, + createTimestamp, + dropTimestamp, + purgeTimestamp, + toPurgeTimestamp, + lastUpdateTimestamp, + properties, + internalProperties, + grantRecordsVersion); + } + + @Override + public String toString() { + return "PolarisBaseEntity{" + + super.toString() + + ", subTypeCode=" + + subTypeCode + + ", createTimestamp=" + + createTimestamp + + ", dropTimestamp=" + + dropTimestamp + + ", purgeTimestamp=" + + purgeTimestamp + + ", toPurgeTimestamp=" + + 
toPurgeTimestamp + + ", lastUpdateTimestamp=" + + lastUpdateTimestamp + + ", grantRecordsVersion=" + + grantRecordsVersion + + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisChangeTrackingVersions.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisChangeTrackingVersions.java new file mode 100644 index 0000000000..09808aa285 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisChangeTrackingVersions.java @@ -0,0 +1,60 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; + +/** Simple class to represent the version and grant records version associated to an entity */ +public class PolarisChangeTrackingVersions { + // entity version + private final int entityVersion; + + // entity grant records version + private final int grantRecordsVersion; + + /** + * Constructor + * + * @param entityVersion entity version + * @param grantRecordsVersion entity grant records version + */ + @JsonCreator + public PolarisChangeTrackingVersions( + @JsonProperty("entityVersion") int entityVersion, + @JsonProperty("grantRecordsVersion") int grantRecordsVersion) { + this.entityVersion = entityVersion; + this.grantRecordsVersion = grantRecordsVersion; + } + + public int getEntityVersion() { + return entityVersion; + } + + public int getGrantRecordsVersion() { + return grantRecordsVersion; + } + + @Override + public String toString() { + return "PolarisChangeTrackingVersions{" + + "entityVersion=" + + entityVersion + + ", grantRecordsVersion=" + + grantRecordsVersion + + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntitiesActiveKey.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntitiesActiveKey.java new file mode 100644 index 0000000000..2d8d3a8d64 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntitiesActiveKey.java @@ -0,0 +1,77 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +public class PolarisEntitiesActiveKey { + + // entity catalog id + private final long catalogId; + + // parent id of the entity + private final long parentId; + + // code representing the type of that entity + private final int typeCode; + + // name of the entity + private final String name; + + public PolarisEntitiesActiveKey(long catalogId, long parentId, int typeCode, String name) { + this.catalogId = catalogId; + this.parentId = parentId; + this.typeCode = typeCode; + this.name = name; + } + + /** Constructor to create the object with provided entity */ + public PolarisEntitiesActiveKey(PolarisEntityCore entity) { + this.catalogId = entity.getCatalogId(); + this.parentId = entity.getParentId(); + this.typeCode = entity.getTypeCode(); + this.name = entity.getName(); + } + + public long getCatalogId() { + return catalogId; + } + + public long getParentId() { + return parentId; + } + + public int getTypeCode() { + return typeCode; + } + + public String getName() { + return name; + } + + @Override + public String toString() { + return "PolarisEntitiesActiveKey{" + + "catalogId=" + + catalogId + + ", parentId=" + + parentId + + ", typeCode=" + + typeCode + + ", name='" + + name + + '\'' + + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntity.java new file mode 100644 index 0000000000..9dbf8b97d9 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntity.java @@ -0,0 +1,423 @@ 
+/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.annotation.JsonProperty; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Optional; +import java.util.function.Predicate; +import java.util.stream.Collectors; +import org.jetbrains.annotations.NotNull; + +public class PolarisEntity extends PolarisBaseEntity { + + public static class NameAndId { + private final String name; + private final long id; + + public NameAndId(String name, long id) { + this.name = name; + this.id = id; + } + + public String getName() { + return name; + } + + public long getId() { + return id; + } + } + + public static class TypeSubTypeAndName { + private final PolarisEntityType type; + private final PolarisEntitySubType subType; + private final String name; + + public TypeSubTypeAndName(PolarisEntityType type, PolarisEntitySubType subType, String name) { + this.type = type; + this.subType = subType; + this.name = name; + } + + public PolarisEntityType getType() { + return type; + } + + public PolarisEntitySubType getSubType() { + return subType; + } + + public String getName() { + return name; + } + } + + @JsonCreator + private 
PolarisEntity( + @JsonProperty("catalogId") long catalogId, + @JsonProperty("typeCode") PolarisEntityType type, + @JsonProperty("subTypeCode") PolarisEntitySubType subType, + @JsonProperty("id") long id, + @JsonProperty("parentId") long parentId, + @JsonProperty("name") String name, + @JsonProperty("createTimestamp") long createTimestamp, + @JsonProperty("dropTimestamp") long dropTimestamp, + @JsonProperty("purgeTimestamp") long purgeTimestamp, + @JsonProperty("lastUpdateTimestamp") long lastUpdateTimestamp, + @JsonProperty("properties") String properties, + @JsonProperty("internalProperties") String internalProperties, + @JsonProperty("entityVersion") int entityVersion, + @JsonProperty("grantRecordsVersion") int grantRecordsVersion) { + super(catalogId, id, type, subType, parentId, name); + this.createTimestamp = createTimestamp; + this.dropTimestamp = dropTimestamp; + this.purgeTimestamp = purgeTimestamp; + this.lastUpdateTimestamp = lastUpdateTimestamp; + this.properties = properties; + this.internalProperties = internalProperties; + this.entityVersion = entityVersion; + this.grantRecordsVersion = grantRecordsVersion; + } + + public PolarisEntity( + long catalogId, + PolarisEntityType type, + PolarisEntitySubType subType, + long id, + long parentId, + String name, + long createTimestamp, + long dropTimestamp, + long purgeTimestamp, + long lastUpdateTimestamp, + Map<String, String> properties, + Map<String, String> internalProperties, + int entityVersion, + int grantRecordsVersion) { + super(catalogId, id, type, subType, parentId, name); + this.createTimestamp = createTimestamp; + this.dropTimestamp = dropTimestamp; + this.purgeTimestamp = purgeTimestamp; + this.lastUpdateTimestamp = lastUpdateTimestamp; + this.setPropertiesAsMap(properties); + this.setInternalPropertiesAsMap(internalProperties); + this.entityVersion = entityVersion; + this.grantRecordsVersion = grantRecordsVersion; + } + + public static PolarisEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return
new PolarisEntity(sourceEntity); + } + return null; + } + + public static PolarisEntity of(PolarisMetaStoreManager.EntityResult result) { + if (result.isSuccess()) { + return new PolarisEntity(result.getEntity()); + } + return null; + } + + public static PolarisEntityCore toCore(PolarisBaseEntity entity) { + PolarisEntityCore entityCore = + new PolarisEntityCore( + entity.getCatalogId(), + entity.getId(), + entity.getParentId(), + entity.getTypeCode(), + entity.getName(), + entity.getEntityVersion()); + return entityCore; + } + + public static List<PolarisEntityCore> toCoreList(List<PolarisEntity> path) { + return Optional.ofNullable(path) + .filter(Predicate.not(List::isEmpty)) + .map(list -> list.stream().map(PolarisEntity::toCore).collect(Collectors.toList())) + .orElse(null); + } + + public static List<NameAndId> toNameAndIdList(List<PolarisEntity> entities) { + return Optional.ofNullable(entities) + .map( + list -> + list.stream() + .map(record -> new NameAndId(record.getName(), record.getId())) + .collect(Collectors.toList())) + .orElse(null); + } + + public PolarisEntity(@NotNull PolarisBaseEntity sourceEntity) { + super( + sourceEntity.getCatalogId(), + sourceEntity.getId(), + sourceEntity.getTypeCode(), + sourceEntity.getSubTypeCode(), + sourceEntity.getParentId(), + sourceEntity.getName()); + this.createTimestamp = sourceEntity.getCreateTimestamp(); + this.dropTimestamp = sourceEntity.getDropTimestamp(); + this.purgeTimestamp = sourceEntity.getPurgeTimestamp(); + this.lastUpdateTimestamp = sourceEntity.getLastUpdateTimestamp(); + this.properties = sourceEntity.getProperties(); + this.internalProperties = sourceEntity.getInternalProperties(); + this.entityVersion = sourceEntity.getEntityVersion(); + this.grantRecordsVersion = sourceEntity.getGrantRecordsVersion(); + } + + @JsonIgnore + public PolarisEntityType getType() { + return PolarisEntityType.fromCode(getTypeCode()); + } + + @JsonIgnore + public PolarisEntitySubType getSubType() { + return PolarisEntitySubType.fromCode(getSubTypeCode()); + } + + @JsonIgnore + 
public NameAndId nameAndId() { + return new NameAndId(name, id); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("name=" + getName()); + sb.append(";id=" + getId()); + sb.append(";parentId=" + getParentId()); + sb.append(";entityVersion=" + getEntityVersion()); + sb.append(";type=" + getType()); + sb.append(";subType=" + getSubType()); + sb.append(";internalProperties=" + getInternalPropertiesAsMap()); + return sb.toString(); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (!(o instanceof PolarisEntity)) return false; + PolarisEntity that = (PolarisEntity) o; + return catalogId == that.catalogId + && id == that.id + && parentId == that.parentId + && createTimestamp == that.createTimestamp + && dropTimestamp == that.dropTimestamp + && purgeTimestamp == that.purgeTimestamp + && lastUpdateTimestamp == that.lastUpdateTimestamp + && entityVersion == that.entityVersion + && grantRecordsVersion == that.grantRecordsVersion + && typeCode == that.typeCode + && subTypeCode == that.subTypeCode + && Objects.equals(name, that.name) + && Objects.equals(properties, that.properties) + && Objects.equals(internalProperties, that.internalProperties); + } + + @Override + public int hashCode() { + return Objects.hash( + typeCode, + subTypeCode, + catalogId, + id, + parentId, + name, + createTimestamp, + dropTimestamp, + purgeTimestamp, + lastUpdateTimestamp, + properties, + internalProperties, + entityVersion, + grantRecordsVersion); + } + + public static class Builder extends BaseBuilder<PolarisEntity, Builder> { + public Builder() { + super(); + } + + public Builder(PolarisEntity original) { + super(original); + } + + public PolarisEntity build() { + return buildBase(); + } + } + + @SuppressWarnings("unchecked") + public abstract static class BaseBuilder<T extends PolarisEntity, B extends BaseBuilder<T, B>> { + protected long catalogId; + protected PolarisEntityType type; + protected PolarisEntitySubType subType; + protected long id; + protected long parentId; + 
protected String name; + protected long createTimestamp; + protected long dropTimestamp; + protected long purgeTimestamp; + protected long lastUpdateTimestamp; + protected Map<String, String> properties; + protected Map<String, String> internalProperties; + protected int entityVersion; + protected int grantRecordsVersion; + + protected BaseBuilder() { + this.catalogId = -1; + this.type = PolarisEntityType.NULL_TYPE; + this.subType = PolarisEntitySubType.NULL_SUBTYPE; + this.id = -1; + this.parentId = 0; + this.name = null; + this.createTimestamp = 0; + this.dropTimestamp = 0; + this.purgeTimestamp = 0; + this.lastUpdateTimestamp = 0; + this.properties = new HashMap<>(); + this.internalProperties = new HashMap<>(); + this.entityVersion = 1; + this.grantRecordsVersion = 1; + } + + protected BaseBuilder(T original) { + this.catalogId = original.catalogId; + this.type = original.getType(); + this.subType = original.getSubType(); + this.id = original.id; + this.parentId = original.parentId; + this.name = original.name; + this.createTimestamp = original.createTimestamp; + this.dropTimestamp = original.dropTimestamp; + this.purgeTimestamp = original.purgeTimestamp; + this.lastUpdateTimestamp = original.lastUpdateTimestamp; + this.properties = new HashMap<>(original.getPropertiesAsMap()); + this.internalProperties = new HashMap<>(original.getInternalPropertiesAsMap()); + this.entityVersion = original.entityVersion; + this.grantRecordsVersion = original.grantRecordsVersion; + } + + public abstract T build(); + + public PolarisEntity buildBase() { + // TODO: Validate required fields + // id > 0 already -- client must always supply id for idempotency purposes.
+ return new PolarisEntity( + catalogId, + type, + subType, + id, + parentId, + name, + createTimestamp, + dropTimestamp, + purgeTimestamp, + lastUpdateTimestamp, + properties, + internalProperties, + entityVersion, + grantRecordsVersion); + } + + public B setCatalogId(long catalogId) { + this.catalogId = catalogId; + return (B) this; + } + + public B setType(PolarisEntityType type) { + this.type = type; + return (B) this; + } + + public B setSubType(PolarisEntitySubType subType) { + this.subType = subType; + return (B) this; + } + + public B setId(long id) { + // TODO: Maybe block this one whenever builder is created from previously-existing entity + // since re-opening an entity should only be for modifying the mutable fields for a given + // logical entity. Would require separate builder type for "clone"-style copies, but + // usually when creating from other entity we want to preserve the id. + this.id = id; + return (B) this; + } + + public B setParentId(long parentId) { + this.parentId = parentId; + return (B) this; + } + + public B setName(String name) { + this.name = name; + return (B) this; + } + + public B setCreateTimestamp(long createTimestamp) { + this.createTimestamp = createTimestamp; + return (B) this; + } + + public B setDropTimestamp(long dropTimestamp) { + this.dropTimestamp = dropTimestamp; + return (B) this; + } + + public B setPurgeTimestamp(long purgeTimestamp) { + this.purgeTimestamp = purgeTimestamp; + return (B) this; + } + + public B setLastUpdateTimestamp(long lastUpdateTimestamp) { + this.lastUpdateTimestamp = lastUpdateTimestamp; + return (B) this; + } + + public B setProperties(Map<String, String> properties) { + this.properties = new HashMap<>(properties); + return (B) this; + } + + public B addProperty(String key, String value) { + this.properties.put(key, value); + return (B) this; + } + + public B setInternalProperties(Map<String, String> internalProperties) { + this.internalProperties = new HashMap<>(internalProperties); + return (B) this; + } + + public B
setEntityVersion(int entityVersion) { + this.entityVersion = entityVersion; + return (B) this; + } + + public B setGrantRecordsVersion(int grantRecordsVersion) { + this.grantRecordsVersion = grantRecordsVersion; + return (B) this; + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityActiveRecord.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityActiveRecord.java new file mode 100644 index 0000000000..10461d6501 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityActiveRecord.java @@ -0,0 +1,135 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
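The `BaseBuilder` above relies on the self-typed ("curiously recurring") builder idiom: every setter returns `B`, the concrete builder type, so chained calls on a subclass never lose their type. A minimal standalone sketch of that idiom, with illustrative stand-in names (`BuilderSketch`, `Entity` are not Polaris classes):

```java
// Self-typed fluent builder: setters return B so a concrete subclass
// can chain calls without losing its type. Names are illustrative.
public class BuilderSketch {
  static class Entity {
    final long id;
    final String name;

    Entity(long id, String name) {
      this.id = id;
      this.name = name;
    }
  }

  abstract static class BaseBuilder<T, B extends BaseBuilder<T, B>> {
    protected long id = -1;
    protected String name;

    @SuppressWarnings("unchecked")
    public B setId(long id) {
      this.id = id;
      return (B) this; // safe as long as B is the concrete subclass type
    }

    @SuppressWarnings("unchecked")
    public B setName(String name) {
      this.name = name;
      return (B) this;
    }

    public abstract T build();
  }

  static class Builder extends BaseBuilder<Entity, Builder> {
    @Override
    public Entity build() {
      return new Entity(id, name);
    }
  }

  public static void main(String[] args) {
    // chained setters stay typed as Builder, so build() returns Entity
    Entity e = new Builder().setId(42).setName("my_catalog").build();
    System.out.println(e.id + " " + e.name); // prints: 42 my_catalog
  }
}
```

The Polaris builder puts `@SuppressWarnings("unchecked")` on the class instead of each setter; the `(B) this` cast is the same either way.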
+ */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; +import java.util.Objects; + +public class PolarisEntityActiveRecord { + // entity catalog id + private final long catalogId; + + // id of the entity + private final long id; + + // parent id of the entity + private final long parentId; + + // name of the entity + private final String name; + + // code representing the type of that entity + private final int typeCode; + + // code representing the subtype of that entity + private final int subTypeCode; + + public long getCatalogId() { + return catalogId; + } + + public long getId() { + return id; + } + + public long getParentId() { + return parentId; + } + + public String getName() { + return name; + } + + public int getTypeCode() { + return typeCode; + } + + public PolarisEntityType getType() { + return PolarisEntityType.fromCode(this.typeCode); + } + + public int getSubTypeCode() { + return subTypeCode; + } + + public PolarisEntitySubType getSubType() { + return PolarisEntitySubType.fromCode(this.subTypeCode); + } + + @JsonCreator + public PolarisEntityActiveRecord( + @JsonProperty("catalogId") long catalogId, + @JsonProperty("id") long id, + @JsonProperty("parentId") long parentId, + @JsonProperty("name") String name, + @JsonProperty("typeCode") int typeCode, + @JsonProperty("subTypeCode") int subTypeCode) { + this.catalogId = catalogId; + this.id = id; + this.parentId = parentId; + this.name = name; + this.typeCode = typeCode; + this.subTypeCode = subTypeCode; + } + + /** Constructor to create the object with provided entity */ + public PolarisEntityActiveRecord(PolarisBaseEntity entity) { + this.catalogId = entity.getCatalogId(); + this.id = entity.getId(); + this.parentId = entity.getParentId(); + this.typeCode = entity.getTypeCode(); + this.name = entity.getName(); + this.subTypeCode = entity.getSubTypeCode(); + } + + @Override + public boolean equals(Object o) { 
+ if (this == o) return true; + if (!(o instanceof PolarisEntityActiveRecord)) return false; + PolarisEntityActiveRecord that = (PolarisEntityActiveRecord) o; + return catalogId == that.catalogId + && id == that.id + && parentId == that.parentId + && typeCode == that.typeCode + && subTypeCode == that.subTypeCode + && Objects.equals(name, that.name); + } + + @Override + public int hashCode() { + return Objects.hash(catalogId, id, parentId, name, typeCode, subTypeCode); + } + + @Override + public String toString() { + return "PolarisEntityActiveRecord{" + + "catalogId=" + + catalogId + + ", id=" + + id + + ", parentId=" + + parentId + + ", name='" + + name + + '\'' + + ", typeCode=" + + typeCode + + ", subTypeCode=" + + subTypeCode + + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityConstants.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityConstants.java new file mode 100644 index 0000000000..d3f562ddf4 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityConstants.java @@ -0,0 +1,110 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.core.entity; + +public class PolarisEntityConstants { + + public static final String ENTITY_BASE_LOCATION = "location"; + // the key for the client_id property associated with a principal + private static final String CLIENT_ID_PROPERTY_NAME = "client_id"; + + // id of the root entity + private static final long ROOT_ENTITY_ID = 0L; + + // special 0 value to represent a NULL value. For example the catalog id is null for a top-level + // entity like a catalog + private static final long NULL_ID = 0L; + + // the name of the single root container representing an entire realm + private static final String ROOT_CONTAINER_NAME = "root_container"; + + // the name of the catalog/root admin role + private static final String ADMIN_CATALOG_ROLE_NAME = "catalog_admin"; + + // the name of the root principal we create at bootstrap time + private static final String ROOT_PRINCIPAL_NAME = "root"; + + // the name of the principal role we create to manage the entire Polaris service + private static final String ADMIN_PRINCIPAL_ROLE_NAME = "service_admin"; + + // 24 hours retention before purging. This should be a config + private static final long RETENTION_TIME_IN_MS = 24 * 3600_000; + + private static final String STORAGE_CONFIGURATION_INFO_PROPERTY_NAME = + "storage_configuration_info"; + + private static final String STORAGE_INTEGRATION_IDENTIFIER_PROPERTY_NAME = + "storage_integration_identifier"; + + private static final String PRINCIPAL_TYPE_NAME = "principal_type_name"; + + public static final String PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE = + "CREDENTIAL_ROTATION_REQUIRED"; + + /** + * Name format of storage integration for a polaris entity: POLARIS_%s_%s. This + * name format gives us flexibility to switch to use integration name in the future if we want.
+ */ + public static final String POLARIS_STORAGE_INT_NAME_FORMAT = "POLARIS_%s_%s"; + + public static long getRootEntityId() { + return ROOT_ENTITY_ID; + } + + public static long getNullId() { + return NULL_ID; + } + + public static String getRootContainerName() { + return ROOT_CONTAINER_NAME; + } + + public static String getNameOfCatalogAdminRole() { + return ADMIN_CATALOG_ROLE_NAME; + } + + public static String getRootPrincipalName() { + return ROOT_PRINCIPAL_NAME; + } + + public static String getNameOfPrincipalServiceAdminRole() { + return ADMIN_PRINCIPAL_ROLE_NAME; + } + + public static long getRetentionTimeInMs() { + return RETENTION_TIME_IN_MS; + } + + public static String getClientIdPropertyName() { + return CLIENT_ID_PROPERTY_NAME; + } + + public static String getStorageIntegrationIdentifierPropertyName() { + return STORAGE_INTEGRATION_IDENTIFIER_PROPERTY_NAME; + } + + public static String getStorageConfigInfoPropertyName() { + return STORAGE_CONFIGURATION_INFO_PROPERTY_NAME; + } + + public static String getPolarisStorageIntegrationNameFormat() { + return POLARIS_STORAGE_INT_NAME_FORMAT; + } + + public static String getPrincipalTypeName() { + return PRINCIPAL_TYPE_NAME; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityCore.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityCore.java new file mode 100644 index 0000000000..f084f42f58 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityCore.java @@ -0,0 +1,185 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonIgnore; +import java.util.Objects; + +/** + * Core attributes representing basic information about an entity. Change generally means that the + * entity will be renamed, dropped, re-created, re-parented. Basically any change to the structure + * of the entity tree. For some operations like updating the entity, change will mean any change, + * i.e. entity version mismatch. + */ +public class PolarisEntityCore { + + // the id of the catalog associated to that entity. NULL_ID if this entity is top-level like + // a catalog + protected long catalogId; + + // the id of the entity which was resolved + protected long id; + + // the id of the parent of this entity, use 0 for a top-level entity whose parent is the account + protected long parentId; + + // the type of the entity when it was resolved + protected int typeCode; + + // the name that this entity had when it was resolved + protected String name; + + // the version that this entity had when it was resolved + protected int entityVersion; + + public PolarisEntityCore() {} + + public PolarisEntityCore( + long catalogId, long id, long parentId, int typeCode, String name, int entityVersion) { + this.catalogId = catalogId; + this.id = id; + this.parentId = parentId; + this.typeCode = typeCode; + this.name = name; + this.entityVersion = entityVersion; + } + + public PolarisEntityCore(PolarisBaseEntity entity) { + this.catalogId = entity.getCatalogId(); + this.id = entity.getId(); + this.parentId = entity.getParentId(); + 
this.typeCode = entity.getTypeCode(); + this.name = entity.getName(); + this.entityVersion = entity.getEntityVersion(); + } + + public long getId() { + return id; + } + + public void setId(long id) { + this.id = id; + } + + public long getParentId() { + return parentId; + } + + public void setParentId(long parentId) { + this.parentId = parentId; + } + + public int getTypeCode() { + return typeCode; + } + + public void setTypeCode(int typeCode) { + this.typeCode = typeCode; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public int getEntityVersion() { + return entityVersion; + } + + public long getCatalogId() { + return catalogId; + } + + public void setCatalogId(long catalogId) { + this.catalogId = catalogId; + } + + public void setEntityVersion(int entityVersion) { + this.entityVersion = entityVersion; + } + + /** + * @return the type of this entity + */ + @JsonIgnore + public PolarisEntityType getType() { + return PolarisEntityType.fromCode(this.typeCode); + } + + /** + * @return true if this entity cannot be dropped or renamed. Applies to the admin catalog role and + * the polaris service admin principal role. 
+ */ + @JsonIgnore + public boolean cannotBeDroppedOrRenamed() { + return (this.typeCode == PolarisEntityType.CATALOG_ROLE.getCode() + && this.name.equals(PolarisEntityConstants.getNameOfCatalogAdminRole())) + || (this.typeCode == PolarisEntityType.PRINCIPAL_ROLE.getCode() + && this.name.equals(PolarisEntityConstants.getNameOfPrincipalServiceAdminRole())); + } + + /** + * @return true if this entity is top-level, like a catalog or a principal + */ + @JsonIgnore + public boolean isTopLevel() { + return this.getType().isTopLevel(); + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (!(o instanceof PolarisEntityCore)) { + return false; + } + PolarisEntityCore that = (PolarisEntityCore) o; + return catalogId == that.catalogId + && id == that.id + && parentId == that.parentId + && typeCode == that.typeCode + && entityVersion == that.entityVersion + && Objects.equals(name, that.name); + } + + @Override + public int hashCode() { + return Objects.hash(catalogId, id, parentId, typeCode, name, entityVersion); + } + + @Override + public String toString() { + return "PolarisEntityCore{" + + "catalogId=" + + catalogId + + ", id=" + + id + + ", parentId=" + + parentId + + ", typeCode=" + + typeCode + + ", name='" + + name + + '\'' + + ", entityVersion=" + + entityVersion + + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityId.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityId.java new file mode 100644 index 0000000000..74e2b57b21 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityId.java @@ -0,0 +1,63 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; +import java.util.Objects; + +/** Simple record like class to represent the unique identifier of an entity */ +public class PolarisEntityId { + + // id of the catalog for this entity. If this entity is top-level, this will be NULL. Only not + // null if this entity is a catalog entity like a namespace, a role, a table, a view, ... + private final long catalogId; + + // entity id + private final long id; + + @JsonCreator + public PolarisEntityId(@JsonProperty("catalogId") long catalogId, @JsonProperty("id") long id) { + this.catalogId = catalogId; + this.id = id; + } + + public long getCatalogId() { + return catalogId; + } + + public long getId() { + return id; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + PolarisEntityId that = (PolarisEntityId) o; + return catalogId == that.catalogId && id == that.id; + } + + @Override + public int hashCode() { + return Objects.hash(catalogId, id); + } + + @Override + public String toString() { + return "PolarisEntityId{" + "catalogId=" + catalogId + ", id=" + id + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntitySubType.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntitySubType.java new file mode 100644 index 0000000000..c36b75bdc0 --- /dev/null +++ 
b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntitySubType.java @@ -0,0 +1,111 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonValue; +import org.jetbrains.annotations.Nullable; + +/** Subtype for an entity */ +public enum PolarisEntitySubType { + // ANY_SUBTYPE is not stored but is used to indicate that any subtype entities should be + // returned, for example when doing a list operation or checking if a table like object of + // name X exists + ANY_SUBTYPE(-1, null), + // the NULL value is used when an entity has no subtype, i.e. 
NOT_APPLICABLE really + NULL_SUBTYPE(0, null), + TABLE(2, PolarisEntityType.TABLE_LIKE), + VIEW(3, PolarisEntityType.TABLE_LIKE); + + // to efficiently map the code of a subtype to its corresponding subtype enum, use a reverse + // array which is initialized below + private static final PolarisEntitySubType[] REVERSE_MAPPING_ARRAY; + + static { + // find max array size + int maxId = 0; + for (PolarisEntitySubType entitySubType : PolarisEntitySubType.values()) { + if (maxId < entitySubType.code) { + maxId = entitySubType.code; + } + } + + // allocate mapping array + REVERSE_MAPPING_ARRAY = new PolarisEntitySubType[maxId + 1]; + + // populate mapping array, only for positive indices + for (PolarisEntitySubType entitySubType : PolarisEntitySubType.values()) { + if (entitySubType.code >= 0) { + REVERSE_MAPPING_ARRAY[entitySubType.code] = entitySubType; + } + } + } + + // unique code associated to that entity subtype + private final int code; + + // parent type for this entity + private final PolarisEntityType parentType; + + PolarisEntitySubType(int code, PolarisEntityType parentType) { + // remember the id of this entity + this.code = code; + this.parentType = parentType; + } + + /** + * @return the code associated to a subtype, will be stored in FDB + */ + @JsonValue + public int getCode() { + return code; + } + + /** + * @return parent type of that entity + */ + public PolarisEntityType getParentType() { + return this.parentType; + } + + /** + * Given the id of the subtype of an entity, return the subtype associated to it. 
Return null if + * not found + * + * @param entitySubTypeCode code associated to the entity type + * @return entity subtype corresponding to that code or null if mapping not found + */ + @JsonCreator + public static @Nullable PolarisEntitySubType fromCode(int entitySubTypeCode) { + // ensure it is within bounds + if (entitySubTypeCode >= REVERSE_MAPPING_ARRAY.length) { + return null; + } + + // get value + if (entitySubTypeCode >= 0) { + return REVERSE_MAPPING_ARRAY[entitySubTypeCode]; + } else { + for (PolarisEntitySubType entitySubType : PolarisEntitySubType.values()) { + if (entitySubType.code == entitySubTypeCode) { + return entitySubType; + } + } + } + + return null; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityType.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityType.java new file mode 100644 index 0000000000..f920efbfd1 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisEntityType.java @@ -0,0 +1,132 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonValue; +import org.jetbrains.annotations.Nullable; + +/** Types of entities with their id */ +public enum PolarisEntityType { + NULL_TYPE(0, null, false, false), + ROOT(1, null, false, false), + PRINCIPAL(2, ROOT, true, false), + PRINCIPAL_ROLE(3, ROOT, true, false), + CATALOG(4, ROOT, false, false), + CATALOG_ROLE(5, CATALOG, true, false), + NAMESPACE(6, CATALOG, false, true), + // generic table is either a view or a real table + TABLE_LIKE(7, NAMESPACE, false, false), + TASK(8, ROOT, false, false), + FILE(9, TABLE_LIKE, false, false); + + // to efficiently map a code to its corresponding entity type, use a reverse array which + // is initialized below + private static final PolarisEntityType[] REVERSE_MAPPING_ARRAY; + + static { + // find max array size + int maxId = 0; + for (PolarisEntityType entityType : PolarisEntityType.values()) { + if (maxId < entityType.code) { + maxId = entityType.code; + } + } + + // allocate mapping array + REVERSE_MAPPING_ARRAY = new PolarisEntityType[maxId + 1]; + + // populate mapping array + for (PolarisEntityType entityType : PolarisEntityType.values()) { + REVERSE_MAPPING_ARRAY[entityType.code] = entityType; + } + } + + // unique id for an entity type + private final int code; + + // true if this entity is a grantee, i.e. is an entity which can be on the receiving end of + // a grant. Only roles and principals are grantees + private final boolean isGrantee; + + // true if the parent entity type can also be the same type (e.g. 
namespaces) + private final boolean parentSelfReference; + + // parent entity type, null for an ACCOUNT + private final PolarisEntityType parentType; + + PolarisEntityType(int id, PolarisEntityType parentType, boolean isGrantee, boolean selfRef) { + // remember the id of this entity + this.code = id; + this.isGrantee = isGrantee; + this.parentType = parentType; + this.parentSelfReference = selfRef; + } + + /** + * @return the code associated to the specified entity type, will be stored in FDB + */ + @JsonValue + public int getCode() { + return code; + } + + /** + * @return true if this entity is a grantee, i.e. an entity which can receive grants + */ + public boolean isGrantee() { + return this.isGrantee; + } + + /** + * @return true if this entity can be nested with itself (like a NAMESPACE) + */ + public boolean isParentSelfReference() { + return parentSelfReference; + } + + /** + * Given the code associated to the type of entity, return the type associated to it. Return + * null if not found + * + * @param entityTypeCode code associated to the entity type + * @return entity type corresponding to that code or null if mapping not found + */ + @JsonCreator + public static @Nullable PolarisEntityType fromCode(int entityTypeCode) { + // ensure it is within bounds + if (entityTypeCode < 0 || entityTypeCode >= REVERSE_MAPPING_ARRAY.length) { + return null; + } + + // get value + return REVERSE_MAPPING_ARRAY[entityTypeCode]; + } + + /** + * @return TRUE if this entity is top-level + */ + public boolean isTopLevel() { + return (this.parentType == ROOT || this == ROOT); + } + + /** + * @return the parent type of this type in the entity hierarchy + */ + public PolarisEntityType getParentType() { + return this.parentType; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisGrantRecord.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisGrantRecord.java new file mode 100644 index 0000000000..7af2a2ee38 --- /dev/null +++
b/polaris-core/src/main/java/io/polaris/core/entity/PolarisGrantRecord.java @@ -0,0 +1,153 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; + +public class PolarisGrantRecord { + + // id of the catalog where the securable entity resides, NULL_ID if this entity is a top-level + // account entity + private long securableCatalogId; + + // id of the securable + private long securableId; + + // id of the catalog where the grantee entity resides, NULL_ID if this entity is a top-level + // account entity + private long granteeCatalogId; + + // id of the grantee + private long granteeId; + + // id associated to the privilege + private int privilegeCode; + + public PolarisGrantRecord() {} + + public long getSecurableCatalogId() { + return securableCatalogId; + } + + public void setSecurableCatalogId(long securableCatalogId) { + this.securableCatalogId = securableCatalogId; + } + + public long getSecurableId() { + return securableId; + } + + public void setSecurableId(long securableId) { + this.securableId = securableId; + } + + public long getGranteeCatalogId() { + return granteeCatalogId; + } + + public void setGranteeCatalogId(long granteeCatalogId) { + this.granteeCatalogId = granteeCatalogId; + } + + public long getGranteeId() { + return granteeId; + } + + public 
void setGranteeId(long granteeId) { + this.granteeId = granteeId; + } + + public int getPrivilegeCode() { + return privilegeCode; + } + + public void setPrivilegeCode(int privilegeCode) { + this.privilegeCode = privilegeCode; + } + + /** + * Constructor + * + * @param securableCatalogId catalog id for the securable. Can be NULL_ID if securable is + * top-level account entity + * @param securableId id of the securable + * @param granteeCatalogId catalog id for the grantee. Can be NULL_ID if grantee is top-level + * account entity + * @param granteeId id of the grantee + * @param privilegeCode privilege being granted to the grantee on the securable + */ + @JsonCreator + public PolarisGrantRecord( + @JsonProperty("securableCatalogId") long securableCatalogId, + @JsonProperty("securableId") long securableId, + @JsonProperty("granteeCatalogId") long granteeCatalogId, + @JsonProperty("granteeId") long granteeId, + @JsonProperty("privilegeCode") int privilegeCode) { + this.securableCatalogId = securableCatalogId; + this.securableId = securableId; + this.granteeCatalogId = granteeCatalogId; + this.granteeId = granteeId; + this.privilegeCode = privilegeCode; + } + + /** + * Copy constructor + * + * @param grantRec grant rec to copy + */ + public PolarisGrantRecord(PolarisGrantRecord grantRec) { + this.securableCatalogId = grantRec.getSecurableCatalogId(); + this.securableId = grantRec.getSecurableId(); + this.granteeCatalogId = grantRec.getGranteeCatalogId(); + this.granteeId = grantRec.getGranteeId(); + this.privilegeCode = grantRec.getPrivilegeCode(); + } + + @Override + public String toString() { + return "PolarisGrantRecord{" + + "securableCatalogId=" + + securableCatalogId + + ", securableId=" + + securableId + + ", granteeCatalogId=" + + granteeCatalogId + + ", granteeId=" + + granteeId + + ", privilegeCode=" + + privilegeCode + + '}'; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return
false; + PolarisGrantRecord that = (PolarisGrantRecord) o; + return securableCatalogId == that.securableCatalogId + && securableId == that.securableId + && granteeCatalogId == that.granteeCatalogId + && granteeId == that.granteeId + && privilegeCode == that.privilegeCode; + } + + @Override + public int hashCode() { + return java.util.Objects.hash( + securableCatalogId, securableId, granteeCatalogId, granteeId, privilegeCode); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisPrincipalSecrets.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisPrincipalSecrets.java new file mode 100644 index 0000000000..7efab8f530 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisPrincipalSecrets.java @@ -0,0 +1,115 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; +import java.security.SecureRandom; + +/** + * Simple class to represent the secrets used to authenticate a catalog principal. These secrets are + * managed separately.
+ */ +public class PolarisPrincipalSecrets { + + // secure random number generator + private static final SecureRandom secureRandom = new SecureRandom(); + + // the id of the principal + private final long principalId; + + // the client id for that principal + private final String principalClientId; + + // the main secret for that principal + private String mainSecret; + + // the secondary secret for that principal + private String secondarySecret; + + /** + * Generate a secure random string + * + * @return the secure random string we generated + */ + private String generateRandomHexString(int stringLength) { + + // generate random byte array + byte[] randomBytes = + new byte[stringLength / 2]; // Each byte will be represented by two hex characters + secureRandom.nextBytes(randomBytes); + + // build string + StringBuilder sb = new StringBuilder(); + for (byte randomByte : randomBytes) { + sb.append(String.format("%02x", randomByte)); + } + + return sb.toString(); + } + + @JsonCreator + public PolarisPrincipalSecrets( + @JsonProperty("principalId") long principalId, + @JsonProperty("principalClientId") String principalClientId, + @JsonProperty("mainSecret") String mainSecret, + @JsonProperty("secondarySecret") String secondarySecret) { + this.principalId = principalId; + this.principalClientId = principalClientId; + this.mainSecret = mainSecret; + this.secondarySecret = secondarySecret; + } + + public PolarisPrincipalSecrets(PolarisPrincipalSecrets principalSecrets) { + this.principalId = principalSecrets.getPrincipalId(); + this.principalClientId = principalSecrets.getPrincipalClientId(); + this.mainSecret = principalSecrets.getMainSecret(); + this.secondarySecret = principalSecrets.getSecondarySecret(); + } + + public PolarisPrincipalSecrets(long principalId) { + this.principalId = principalId; + this.principalClientId = this.generateRandomHexString(16); + this.mainSecret = this.generateRandomHexString(32); + this.secondarySecret = 
this.generateRandomHexString(32); + } + + /** + * Rotate the main secrets + * + * @param mainSecretToRotate the main secrets to rotate + */ + public void rotateSecrets(String mainSecretToRotate) { + this.secondarySecret = mainSecretToRotate; + this.mainSecret = this.generateRandomHexString(32); + } + + public long getPrincipalId() { + return principalId; + } + + public String getPrincipalClientId() { + return principalClientId; + } + + public String getMainSecret() { + return mainSecret; + } + + public String getSecondarySecret() { + return secondarySecret; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisPrivilege.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisPrivilege.java new file mode 100644 index 0000000000..3215825d76 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisPrivilege.java @@ -0,0 +1,212 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonValue; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** List of privileges */ +public enum PolarisPrivilege { + SERVICE_MANAGE_ACCESS(1, PolarisEntityType.ROOT), + CATALOG_MANAGE_ACCESS(2, PolarisEntityType.CATALOG), + CATALOG_ROLE_USAGE( + 3, + PolarisEntityType.CATALOG_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityType.PRINCIPAL_ROLE), + PRINCIPAL_ROLE_USAGE( + 4, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityType.PRINCIPAL), + NAMESPACE_CREATE(5, PolarisEntityType.NAMESPACE), + TABLE_CREATE(6, PolarisEntityType.NAMESPACE), + VIEW_CREATE(7, PolarisEntityType.NAMESPACE), + NAMESPACE_DROP(8, PolarisEntityType.NAMESPACE), + TABLE_DROP(9, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + VIEW_DROP(10, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW), + NAMESPACE_LIST(11, PolarisEntityType.NAMESPACE), + TABLE_LIST(12, PolarisEntityType.NAMESPACE), + VIEW_LIST(13, PolarisEntityType.NAMESPACE), + NAMESPACE_READ_PROPERTIES(14, PolarisEntityType.NAMESPACE), + TABLE_READ_PROPERTIES(15, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + VIEW_READ_PROPERTIES(16, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW), + NAMESPACE_WRITE_PROPERTIES(17, PolarisEntityType.NAMESPACE), + TABLE_WRITE_PROPERTIES(18, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + VIEW_WRITE_PROPERTIES(19, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW), + TABLE_READ_DATA(20, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + TABLE_WRITE_DATA(21, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + NAMESPACE_FULL_METADATA(22, PolarisEntityType.NAMESPACE), + TABLE_FULL_METADATA(23, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + VIEW_FULL_METADATA(24, PolarisEntityType.TABLE_LIKE, 
PolarisEntitySubType.VIEW), + CATALOG_CREATE(25, PolarisEntityType.ROOT), + CATALOG_DROP(26, PolarisEntityType.CATALOG), + CATALOG_LIST(27, PolarisEntityType.ROOT), + CATALOG_READ_PROPERTIES(28, PolarisEntityType.CATALOG), + CATALOG_WRITE_PROPERTIES(29, PolarisEntityType.CATALOG), + CATALOG_FULL_METADATA(30, PolarisEntityType.CATALOG), + CATALOG_MANAGE_METADATA(31, PolarisEntityType.CATALOG), + CATALOG_MANAGE_CONTENT(32, PolarisEntityType.CATALOG), + PRINCIPAL_LIST_GRANTS(33, PolarisEntityType.PRINCIPAL), + PRINCIPAL_ROLE_LIST_GRANTS(34, PolarisEntityType.PRINCIPAL), + CATALOG_ROLE_LIST_GRANTS(35, PolarisEntityType.PRINCIPAL), + CATALOG_LIST_GRANTS(36, PolarisEntityType.CATALOG), + NAMESPACE_LIST_GRANTS(37, PolarisEntityType.NAMESPACE), + TABLE_LIST_GRANTS(38, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + VIEW_LIST_GRANTS(39, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW), + CATALOG_MANAGE_GRANTS_ON_SECURABLE(40, PolarisEntityType.CATALOG), + NAMESPACE_MANAGE_GRANTS_ON_SECURABLE(41, PolarisEntityType.NAMESPACE), + TABLE_MANAGE_GRANTS_ON_SECURABLE(42, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE), + VIEW_MANAGE_GRANTS_ON_SECURABLE(43, PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW), + PRINCIPAL_CREATE(44, PolarisEntityType.ROOT), + PRINCIPAL_DROP(45, PolarisEntityType.PRINCIPAL), + PRINCIPAL_LIST(46, PolarisEntityType.ROOT), + PRINCIPAL_READ_PROPERTIES(47, PolarisEntityType.PRINCIPAL), + PRINCIPAL_WRITE_PROPERTIES(48, PolarisEntityType.PRINCIPAL), + PRINCIPAL_FULL_METADATA(49, PolarisEntityType.PRINCIPAL), + PRINCIPAL_MANAGE_GRANTS_ON_SECURABLE(50, PolarisEntityType.PRINCIPAL), + PRINCIPAL_MANAGE_GRANTS_FOR_GRANTEE(51, PolarisEntityType.PRINCIPAL), + PRINCIPAL_ROTATE_CREDENTIALS(52, PolarisEntityType.PRINCIPAL), + PRINCIPAL_RESET_CREDENTIALS(53, PolarisEntityType.PRINCIPAL), + PRINCIPAL_ROLE_CREATE(54, PolarisEntityType.ROOT), + PRINCIPAL_ROLE_DROP(55, PolarisEntityType.PRINCIPAL_ROLE), + PRINCIPAL_ROLE_LIST(56, 
PolarisEntityType.ROOT), + PRINCIPAL_ROLE_READ_PROPERTIES(57, PolarisEntityType.PRINCIPAL_ROLE), + PRINCIPAL_ROLE_WRITE_PROPERTIES(58, PolarisEntityType.PRINCIPAL_ROLE), + PRINCIPAL_ROLE_FULL_METADATA(59, PolarisEntityType.PRINCIPAL_ROLE), + PRINCIPAL_ROLE_MANAGE_GRANTS_ON_SECURABLE(60, PolarisEntityType.PRINCIPAL_ROLE), + PRINCIPAL_ROLE_MANAGE_GRANTS_FOR_GRANTEE(61, PolarisEntityType.PRINCIPAL_ROLE), + CATALOG_ROLE_CREATE(62, PolarisEntityType.CATALOG), + CATALOG_ROLE_DROP(63, PolarisEntityType.CATALOG_ROLE), + CATALOG_ROLE_LIST(64, PolarisEntityType.CATALOG), + CATALOG_ROLE_READ_PROPERTIES(65, PolarisEntityType.CATALOG_ROLE), + CATALOG_ROLE_WRITE_PROPERTIES(66, PolarisEntityType.CATALOG_ROLE), + CATALOG_ROLE_FULL_METADATA(67, PolarisEntityType.CATALOG_ROLE), + CATALOG_ROLE_MANAGE_GRANTS_ON_SECURABLE(68, PolarisEntityType.CATALOG_ROLE), + CATALOG_ROLE_MANAGE_GRANTS_FOR_GRANTEE(69, PolarisEntityType.CATALOG_ROLE), + ; + + /** + * Full constructor + * + * @param code internal code associated to this privilege + * @param securableType securable type + * @param securableSubType securable subtype, mostly NULL_SUBTYPE + * @param granteeType grantee type, generally a ROLE + */ + PolarisPrivilege( + int code, + @NotNull PolarisEntityType securableType, + @NotNull PolarisEntitySubType securableSubType, + @NotNull PolarisEntityType granteeType) { + this.code = code; + this.securableType = securableType; + this.securableSubType = securableSubType; + this.granteeType = granteeType; + } + + /** + * Simple constructor, when the grantee is a role and the securable subtype is NULL_SUBTYPE + * + * @param code internal code associated to this privilege + * @param securableType securable type + */ + PolarisPrivilege(int code, @NotNull PolarisEntityType securableType) { + this.code = code; + this.securableType = securableType; + this.securableSubType = PolarisEntitySubType.NULL_SUBTYPE; + this.granteeType = PolarisEntityType.CATALOG_ROLE; + } + + /** + * Constructor when the grantee 
is a ROLE + * + * @param code internal code associated to this privilege + * @param securableType securable type + * @param securableSubType securable subtype, mostly NULL_SUBTYPE + */ + PolarisPrivilege( + int code, + @NotNull PolarisEntityType securableType, + @NotNull PolarisEntitySubType securableSubType) { + this.code = code; + this.securableType = securableType; + this.securableSubType = securableSubType; + this.granteeType = PolarisEntityType.CATALOG_ROLE; + } + + // internal code used to represent this privilege + private final int code; + + // the type of the securable for this privilege + private final PolarisEntityType securableType; + + // the subtype of the securable for this privilege + private final PolarisEntitySubType securableSubType; + + // the type of the grantee for this privilege + private final PolarisEntityType granteeType; + + // to efficiently map a code to its corresponding privilege, use a reverse array which + // is initialized below + private static final PolarisPrivilege[] REVERSE_MAPPING_ARRAY; + + static { + // find max array size + int maxId = 0; + for (PolarisPrivilege privilegeDef : PolarisPrivilege.values()) { + if (maxId < privilegeDef.code) { + maxId = privilegeDef.code; + } + } + + // allocate mapping array + REVERSE_MAPPING_ARRAY = new PolarisPrivilege[maxId + 1]; + + // populate mapping array + for (PolarisPrivilege privilegeDef : PolarisPrivilege.values()) { + REVERSE_MAPPING_ARRAY[privilegeDef.code] = privilegeDef; + } + } + + /** + * @return the code associated to the specified privilege + */ + @JsonValue + public int getCode() { + return code; + } + + /** + * Given the code associated to a privilege, return the privilege associated to it.
Return null if + * not found + * + * @param code code associated to the entity type + * @return entity type corresponding to that code or null if mapping not found + */ + @JsonCreator + public static @Nullable PolarisPrivilege fromCode(int code) { + // ensure it is within bounds + if (code >= REVERSE_MAPPING_ARRAY.length) { + return null; + } + + // get value + return REVERSE_MAPPING_ARRAY[code]; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PolarisTaskConstants.java b/polaris-core/src/main/java/io/polaris/core/entity/PolarisTaskConstants.java new file mode 100644 index 0000000000..36961eb9f8 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PolarisTaskConstants.java @@ -0,0 +1,28 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.entity; + +/** Constants used to store task properties and configuration parameters */ +public class PolarisTaskConstants { + public static final long TASK_TIMEOUT_MILLIS = 300000; + public static final String TASK_TIMEOUT_MILLIS_CONFIG = "POLARIS_TASK_TIMEOUT_MILLIS"; + public static final String LAST_ATTEMPT_EXECUTOR_ID = "lastAttemptExecutorId"; + public static final String LAST_ATTEMPT_START_TIME = "lastAttemptStartTime"; + public static final String ATTEMPT_COUNT = "attemptCount"; + public static final String TASK_DATA = "data"; + public static final String TASK_TYPE = "taskType"; + public static final String STORAGE_LOCATION = "storageLocation"; +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PrincipalEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/PrincipalEntity.java new file mode 100644 index 0000000000..eaa8bfc7e3 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PrincipalEntity.java @@ -0,0 +1,82 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import io.polaris.core.admin.model.Principal; + +/** Wrapper for translating between the REST Principal object and the base PolarisEntity type. 
*/ +public class PrincipalEntity extends PolarisEntity { + public PrincipalEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static PrincipalEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return new PrincipalEntity(sourceEntity); + } + return null; + } + + public static PrincipalEntity fromPrincipal(Principal principal) { + return new Builder() + .setName(principal.getName()) + .setProperties(principal.getProperties()) + .setClientId(principal.getClientId()) + .build(); + } + + public Principal asPrincipal() { + return new Principal( + getName(), + getClientId(), + getPropertiesAsMap(), + getCreateTimestamp(), + getLastUpdateTimestamp(), + getEntityVersion()); + } + + public String getClientId() { + return getInternalPropertiesAsMap().get(PolarisEntityConstants.getClientIdPropertyName()); + } + + public static class Builder extends PolarisEntity.BaseBuilder { + public Builder() { + super(); + setType(PolarisEntityType.PRINCIPAL); + setCatalogId(PolarisEntityConstants.getNullId()); + setParentId(PolarisEntityConstants.getRootEntityId()); + } + + public Builder(PrincipalEntity original) { + super(original); + } + + public Builder setClientId(String clientId) { + internalProperties.put(PolarisEntityConstants.getClientIdPropertyName(), clientId); + return this; + } + + public Builder setCredentialRotationRequiredState() { + internalProperties.put( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE, "true"); + return this; + } + + public PrincipalEntity build() { + return new PrincipalEntity(buildBase()); + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/PrincipalRoleEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/PrincipalRoleEntity.java new file mode 100644 index 0000000000..44732e875b --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/PrincipalRoleEntity.java @@ -0,0 +1,69 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import io.polaris.core.admin.model.PrincipalRole; + +/** + * Wrapper for translating between the REST PrincipalRole object and the base PolarisEntity type. + */ +public class PrincipalRoleEntity extends PolarisEntity { + public PrincipalRoleEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static PrincipalRoleEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return new PrincipalRoleEntity(sourceEntity); + } + return null; + } + + public static PrincipalRoleEntity fromPrincipalRole(PrincipalRole principalRole) { + return new Builder() + .setName(principalRole.getName()) + .setProperties(principalRole.getProperties()) + .build(); + } + + public PrincipalRole asPrincipalRole() { + PrincipalRole principalRole = + new PrincipalRole( + getName(), + getPropertiesAsMap(), + getCreateTimestamp(), + getLastUpdateTimestamp(), + getEntityVersion()); + return principalRole; + } + + public static class Builder extends PolarisEntity.BaseBuilder { + public Builder() { + super(); + setType(PolarisEntityType.PRINCIPAL_ROLE); + setCatalogId(PolarisEntityConstants.getNullId()); + setParentId(PolarisEntityConstants.getRootEntityId()); + } + + public Builder(PrincipalRoleEntity original) { + super(original); + } + + public PrincipalRoleEntity build() { + return new PrincipalRoleEntity(buildBase()); + } + } +} diff --git 
a/polaris-core/src/main/java/io/polaris/core/entity/TableLikeEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/TableLikeEntity.java new file mode 100644 index 0000000000..6aab5d2c6f --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/TableLikeEntity.java @@ -0,0 +1,109 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import com.fasterxml.jackson.annotation.JsonIgnore; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.rest.RESTUtil; + +public class TableLikeEntity extends PolarisEntity { + // For applicable types, this key on the "internalProperties" map will return the location + // of the internalProperties JSON file. 
+ public static final String METADATA_LOCATION_KEY = "metadata-location"; + + public static final String USER_SPECIFIED_WRITE_DATA_LOCATION_KEY = "write.data.path"; + public static final String USER_SPECIFIED_WRITE_METADATA_LOCATION_KEY = "write.metadata.path"; + + public TableLikeEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static TableLikeEntity of(PolarisBaseEntity sourceEntity) { + if (sourceEntity != null) { + return new TableLikeEntity(sourceEntity); + } + return null; + } + + @JsonIgnore + public TableIdentifier getTableIdentifier() { + Namespace parent = getParentNamespace(); + return TableIdentifier.of(parent, getName()); + } + + @JsonIgnore + public Namespace getParentNamespace() { + String encodedNamespace = + getInternalPropertiesAsMap().get(NamespaceEntity.PARENT_NAMESPACE_KEY); + if (encodedNamespace == null) { + return Namespace.empty(); + } + return RESTUtil.decodeNamespace(encodedNamespace); + } + + @JsonIgnore + public String getMetadataLocation() { + return getInternalPropertiesAsMap().get(METADATA_LOCATION_KEY); + } + + @JsonIgnore + public String getBaseLocation() { + return getPropertiesAsMap().get(PolarisEntityConstants.ENTITY_BASE_LOCATION); + } + + public static class Builder extends PolarisEntity.BaseBuilder { + public Builder(TableIdentifier identifier, String metadataLocation) { + super(); + setType(PolarisEntityType.TABLE_LIKE); + setTableIdentifier(identifier); + setMetadataLocation(metadataLocation); + } + + public Builder(TableLikeEntity original) { + super(original); + } + + public TableLikeEntity build() { + return new TableLikeEntity(buildBase()); + } + + public Builder setTableIdentifier(TableIdentifier identifier) { + Namespace namespace = identifier.namespace(); + setParentNamespace(namespace); + setName(identifier.name()); + return this; + } + + public Builder setParentNamespace(Namespace namespace) { + if (namespace != null && !namespace.isEmpty()) { + internalProperties.put( + 
NamespaceEntity.PARENT_NAMESPACE_KEY, RESTUtil.encodeNamespace(namespace)); + } + return this; + } + + public Builder setBaseLocation(String location) { + properties.put(PolarisEntityConstants.ENTITY_BASE_LOCATION, location); + return this; + } + + public Builder setMetadataLocation(String location) { + internalProperties.put(METADATA_LOCATION_KEY, location); + return this; + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/entity/TaskEntity.java b/polaris-core/src/main/java/io/polaris/core/entity/TaskEntity.java new file mode 100644 index 0000000000..ca2d7d17c2 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/entity/TaskEntity.java @@ -0,0 +1,102 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.entity; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.persistence.PolarisObjectMapperUtil; + +/** + * Represents an asynchronous task entity in the persistence layer. 
A task executor is responsible + for constructing the actual task instance based on the "data" and "taskType" properties + */ +public class TaskEntity extends PolarisEntity { + public TaskEntity(PolarisBaseEntity sourceEntity) { + super(sourceEntity); + } + + public static TaskEntity of(PolarisBaseEntity polarisEntity) { + if (polarisEntity != null) { + return new TaskEntity(polarisEntity); + } else { + return null; + } + } + + public <T> T readData(Class<T> klass) { + PolarisCallContext polarisCallContext = CallContext.getCurrentContext().getPolarisCallContext(); + return PolarisObjectMapperUtil.deserialize( + polarisCallContext, getPropertiesAsMap().get(PolarisTaskConstants.TASK_DATA), klass); + } + + public AsyncTaskType getTaskType() { + PolarisCallContext polarisCallContext = CallContext.getCurrentContext().getPolarisCallContext(); + return PolarisObjectMapperUtil.deserialize( + polarisCallContext, + getPropertiesAsMap().get(PolarisTaskConstants.TASK_TYPE), + AsyncTaskType.class); + } + + public static class Builder extends PolarisEntity.BaseBuilder<TaskEntity, Builder> { + public Builder() { + super(); + setType(PolarisEntityType.TASK); + setCatalogId(PolarisEntityConstants.getNullId()); + setParentId(PolarisEntityConstants.getRootEntityId()); + } + + public Builder(TaskEntity original) { + super(original); + } + + public Builder withTaskType(AsyncTaskType taskType) { + PolarisCallContext polarisCallContext = + CallContext.getCurrentContext().getPolarisCallContext(); + properties.put( + PolarisTaskConstants.TASK_TYPE, + PolarisObjectMapperUtil.serialize(polarisCallContext, taskType)); + return this; + } + + public Builder withData(Object data) { + PolarisCallContext polarisCallContext = + CallContext.getCurrentContext().getPolarisCallContext(); + properties.put( + PolarisTaskConstants.TASK_DATA, + PolarisObjectMapperUtil.serialize(polarisCallContext, data)); + return this; + } + + public Builder withLastAttemptExecutorId(String executorId) {
properties.put(PolarisTaskConstants.LAST_ATTEMPT_EXECUTOR_ID, executorId); + return this; + } + + public Builder withAttemptCount(int count) { + properties.put(PolarisTaskConstants.ATTEMPT_COUNT, String.valueOf(count)); + return this; + } + + public Builder withLastAttemptStartedTimestamp(long timestamp) { + properties.put(PolarisTaskConstants.LAST_ATTEMPT_START_TIME, String.valueOf(timestamp)); + return this; + } + + public TaskEntity build() { + return new TaskEntity(buildBase()); + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/monitor/PolarisMetricRegistry.java b/polaris-core/src/main/java/io/polaris/core/monitor/PolarisMetricRegistry.java new file mode 100644 index 0000000000..b3b8779f1d --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/monitor/PolarisMetricRegistry.java @@ -0,0 +1,117 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.monitor; + +import io.micrometer.core.instrument.Counter; +import io.micrometer.core.instrument.MeterRegistry; +import io.micrometer.core.instrument.Timer; +import io.polaris.core.resource.TimedApi; +import java.lang.reflect.Method; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.TimeUnit; + +/** + * Wrapper around the Micrometer {@link MeterRegistry} providing additional metric management + * functions for the Polaris application. 
Implements in-memory caching of timers and counters. + * Records two metrics for each instrument with one tagged by the realm ID (realm-specific metric) + * and one without. The realm-specific metric is suffixed with ".realm". + */ +public class PolarisMetricRegistry { + private final MeterRegistry meterRegistry; + private final ConcurrentMap<String, Timer> timers = new ConcurrentHashMap<>(); + private final ConcurrentMap<String, Counter> counters = new ConcurrentHashMap<>(); + private static final String TAG_REALM = "REALM_ID"; + private static final String TAG_RESP_CODE = "HTTP_RESPONSE_CODE"; + private static final String SUFFIX_COUNTER = ".count"; + private static final String SUFFIX_ERROR = ".error"; + private static final String SUFFIX_REALM = ".realm"; + + public PolarisMetricRegistry(MeterRegistry meterRegistry) { + this.meterRegistry = meterRegistry; + } + + public MeterRegistry getMeterRegistry() { + return meterRegistry; + } + + public void init(Class<?>... classes) { + for (Class<?> clazz : classes) { + Method[] methods = clazz.getDeclaredMethods(); + for (Method method : methods) { + if (method.isAnnotationPresent(TimedApi.class)) { + TimedApi timedApi = method.getAnnotation(TimedApi.class); + String metric = timedApi.value(); + timers.put(metric, Timer.builder(metric).register(meterRegistry)); + counters.put( + metric + SUFFIX_COUNTER, + Counter.builder(metric + SUFFIX_COUNTER).register(meterRegistry)); + + // Error counters contain the HTTP response code in a tag, thus caching them would not be + // meaningful.
+ Counter.builder(metric + SUFFIX_ERROR).tags(TAG_RESP_CODE, "400").register(meterRegistry); + Counter.builder(metric + SUFFIX_ERROR).tags(TAG_RESP_CODE, "500").register(meterRegistry); + } + } + } + } + + public void recordTimer(String metric, long elapsedTimeMs, String realmId) { + Timer timer = + timers.computeIfAbsent(metric, m -> Timer.builder(metric).register(meterRegistry)); + timer.record(elapsedTimeMs, TimeUnit.MILLISECONDS); + + Timer timerRealm = + timers.computeIfAbsent( + metric + SUFFIX_REALM, + m -> + Timer.builder(metric + SUFFIX_REALM) + .tag(TAG_REALM, realmId) + .register(meterRegistry)); + timerRealm.record(elapsedTimeMs, TimeUnit.MILLISECONDS); + } + + public void incrementCounter(String metric, String realmId) { + String counterMetric = metric + SUFFIX_COUNTER; + Counter counter = + counters.computeIfAbsent( + counterMetric, m -> Counter.builder(counterMetric).register(meterRegistry)); + counter.increment(); + + Counter counterRealm = + counters.computeIfAbsent( + counterMetric + SUFFIX_REALM, + m -> + Counter.builder(counterMetric + SUFFIX_REALM) + .tag(TAG_REALM, realmId) + .register(meterRegistry)); + counterRealm.increment(); + } + + public void incrementErrorCounter(String metric, int statusCode, String realmId) { + String errorMetric = metric + SUFFIX_ERROR; + Counter.builder(errorMetric) + .tag(TAG_RESP_CODE, String.valueOf(statusCode)) + .register(meterRegistry) + .increment(); + + Counter.builder(errorMetric + SUFFIX_REALM) + .tag(TAG_RESP_CODE, String.valueOf(statusCode)) + .tag(TAG_REALM, realmId) + .register(meterRegistry) + .increment(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/LocalPolarisMetaStoreManagerFactory.java b/polaris-core/src/main/java/io/polaris/core/persistence/LocalPolarisMetaStoreManagerFactory.java new file mode 100644 index 0000000000..70a92331a8 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/LocalPolarisMetaStoreManagerFactory.java @@ -0,0 +1,214 @@ 
+/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.monitor.PolarisMetricRegistry; +import io.polaris.core.storage.PolarisStorageIntegrationProvider; +import io.polaris.core.storage.cache.StorageCredentialCache; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.function.Supplier; +import org.jetbrains.annotations.NotNull; +import org.slf4j.Logger; + +/** + * The common implementation of Configuration interface for configuring the {@link + * PolarisMetaStoreManager} using an underlying meta store to store and retrieve all Polaris + * metadata. 
+ */ +public abstract class LocalPolarisMetaStoreManagerFactory< + StoreType, SessionType extends PolarisMetaStoreSession> + implements MetaStoreManagerFactory { + + Map<String, PolarisMetaStoreManager> metaStoreManagerMap = new HashMap<>(); + Map<String, StorageCredentialCache> storageCredentialCacheMap = new HashMap<>(); + Map<String, StoreType> backingStoreMap = new HashMap<>(); + Map<String, Supplier<PolarisMetaStoreSession>> sessionSupplierMap = new HashMap<>(); + protected PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + + protected PolarisStorageIntegrationProvider storageIntegration; + + private Logger logger = + org.slf4j.LoggerFactory.getLogger(LocalPolarisMetaStoreManagerFactory.class); + + protected abstract StoreType createBackingStore(@NotNull PolarisDiagnostics diagnostics); + + protected abstract PolarisMetaStoreSession createMetaStoreSession( + @NotNull StoreType store, @NotNull RealmContext realmContext); + + private void initializeForRealm(RealmContext realmContext) { + final StoreType backingStore = createBackingStore(diagServices); + backingStoreMap.put(realmContext.getRealmIdentifier(), backingStore); + sessionSupplierMap.put( + realmContext.getRealmIdentifier(), + () -> createMetaStoreSession(backingStore, realmContext)); + + PolarisMetaStoreManager metaStoreManager = new PolarisMetaStoreManagerImpl(); + metaStoreManagerMap.put(realmContext.getRealmIdentifier(), metaStoreManager); + } + + @Override + public synchronized Map<String, PolarisMetaStoreManager.PrincipalSecretsResult> bootstrapRealms( + List<String> realms) { + Map<String, PolarisMetaStoreManager.PrincipalSecretsResult> results = new HashMap<>(); + + for (String realm : realms) { + RealmContext realmContext = () -> realm; + if (!metaStoreManagerMap.containsKey(realmContext.getRealmIdentifier())) { + initializeForRealm(realmContext); + PolarisMetaStoreManager.PrincipalSecretsResult secretsResult = + bootstrapServiceAndCreatePolarisPrincipalForRealm( + realmContext, metaStoreManagerMap.get(realmContext.getRealmIdentifier())); + results.put(realmContext.getRealmIdentifier(), secretsResult); + } + } + + return results; + } + + @Override + public synchronized PolarisMetaStoreManager
 getOrCreateMetaStoreManager(
+      RealmContext realmContext) {
+    if (!metaStoreManagerMap.containsKey(realmContext.getRealmIdentifier())) {
+      initializeForRealm(realmContext);
+      checkPolarisServiceBootstrappedForRealm(
+          realmContext, metaStoreManagerMap.get(realmContext.getRealmIdentifier()));
+    }
+    return metaStoreManagerMap.get(realmContext.getRealmIdentifier());
+  }
+
+  @Override
+  public synchronized Supplier<PolarisMetaStoreSession> getOrCreateSessionSupplier(
+      RealmContext realmContext) {
+    if (!sessionSupplierMap.containsKey(realmContext.getRealmIdentifier())) {
+      initializeForRealm(realmContext);
+      checkPolarisServiceBootstrappedForRealm(
+          realmContext, metaStoreManagerMap.get(realmContext.getRealmIdentifier()));
+    }
+    return sessionSupplierMap.get(realmContext.getRealmIdentifier());
+  }
+
+  @Override
+  public synchronized StorageCredentialCache getOrCreateStorageCredentialCache(
+      RealmContext realmContext) {
+    if (!storageCredentialCacheMap.containsKey(realmContext.getRealmIdentifier())) {
+      storageCredentialCacheMap.put(
+          realmContext.getRealmIdentifier(), new StorageCredentialCache());
+    }
+
+    return storageCredentialCacheMap.get(realmContext.getRealmIdentifier());
+  }
+
+  @Override
+  public void setMetricRegistry(PolarisMetricRegistry metricRegistry) {
+    // no-op
+  }
+
+  @Override
+  public void setStorageIntegrationProvider(PolarisStorageIntegrationProvider storageIntegration) {
+    this.storageIntegration = storageIntegration;
+  }
+
+  /**
+   * This method bootstraps the service for a given realm: i.e. creates all the needed entities in
+   * the metastore and creates a root service principal.
 After that, we rotate the root
+   * principal credentials and print them to stdout.
+   *
+   * @param realmContext the realm being bootstrapped
+   * @param metaStoreManager the metastore manager for that realm
+   */
+  private PolarisMetaStoreManager.PrincipalSecretsResult
+      bootstrapServiceAndCreatePolarisPrincipalForRealm(
+          RealmContext realmContext, PolarisMetaStoreManager metaStoreManager) {
+    // While bootstrapping we need to act as a fake privileged context since the real
+    // CallContext hasn't even been resolved yet.
+    PolarisCallContext polarisContext =
+        new PolarisCallContext(
+            sessionSupplierMap.get(realmContext.getRealmIdentifier()).get(), diagServices);
+    CallContext.setCurrentContext(CallContext.of(realmContext, polarisContext));
+
+    metaStoreManager.bootstrapPolarisService(polarisContext);
+
+    PolarisMetaStoreManager.EntityResult rootPrincipalLookup =
+        metaStoreManager.readEntityByName(
+            polarisContext,
+            null,
+            PolarisEntityType.PRINCIPAL,
+            PolarisEntitySubType.NULL_SUBTYPE,
+            PolarisEntityConstants.getRootPrincipalName());
+    PolarisPrincipalSecrets secrets =
+        metaStoreManager
+            .loadPrincipalSecrets(
+                polarisContext,
+                PolarisEntity.of(rootPrincipalLookup.getEntity())
+                    .getInternalPropertiesAsMap()
+                    .get(PolarisEntityConstants.getClientIdPropertyName()))
+            .getPrincipalSecrets();
+    PolarisMetaStoreManager.PrincipalSecretsResult rotatedSecrets =
+        metaStoreManager.rotatePrincipalSecrets(
+            polarisContext,
+            secrets.getPrincipalClientId(),
+            secrets.getPrincipalId(),
+            secrets.getMainSecret(),
+            false);
+    return rotatedSecrets;
+  }
+
+  /**
+   * In this method we check if the service was bootstrapped for a given realm, i.e.
 that all the
+   * entities (root principal, root principal role, etc.) were created. If the service was not
+   * bootstrapped, an IllegalStateException is thrown; that will cause the service to crash and
+   * force the user to run the Bootstrap command to initialize the metastore and create all the
+   * required entities.
+   *
+   * @param realmContext the realm being checked
+   * @param metaStoreManager the metastore manager for that realm
+   */
+  private void checkPolarisServiceBootstrappedForRealm(
+      RealmContext realmContext, PolarisMetaStoreManager metaStoreManager) {
+    PolarisCallContext polarisContext =
+        new PolarisCallContext(
+            sessionSupplierMap.get(realmContext.getRealmIdentifier()).get(), diagServices);
+    CallContext.setCurrentContext(CallContext.of(realmContext, polarisContext));
+
+    PolarisMetaStoreManager.EntityResult rootPrincipalLookup =
+        metaStoreManager.readEntityByName(
+            polarisContext,
+            null,
+            PolarisEntityType.PRINCIPAL,
+            PolarisEntitySubType.NULL_SUBTYPE,
+            PolarisEntityConstants.getRootPrincipalName());
+
+    if (!rootPrincipalLookup.isSuccess()) {
+      logger.error(
+          "\n\n Realm {} is not bootstrapped, could not load root principal. Please run Bootstrap command. \n\n",
+          realmContext.getRealmIdentifier());
+      throw new IllegalStateException(
+          "Realm is not bootstrapped, please run server in bootstrap mode.");
+    }
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/MetaStoreManagerFactory.java b/polaris-core/src/main/java/io/polaris/core/persistence/MetaStoreManagerFactory.java
new file mode 100644
index 0000000000..199778dedb
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/persistence/MetaStoreManagerFactory.java
@@ -0,0 +1,46 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.persistence;
+
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import io.dropwizard.jackson.Discoverable;
+import io.polaris.core.context.RealmContext;
+import io.polaris.core.monitor.PolarisMetricRegistry;
+import io.polaris.core.storage.PolarisStorageIntegrationProvider;
+import io.polaris.core.storage.cache.StorageCredentialCache;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Supplier;
+
+/**
+ * Configuration interface for configuring the {@link PolarisMetaStoreManager} via Dropwizard
+ * configuration
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type")
+public interface MetaStoreManagerFactory extends Discoverable {
+
+  PolarisMetaStoreManager getOrCreateMetaStoreManager(RealmContext realmContext);
+
+  Supplier<PolarisMetaStoreSession> getOrCreateSessionSupplier(RealmContext realmContext);
+
+  StorageCredentialCache getOrCreateStorageCredentialCache(RealmContext realmContext);
+
+  void setStorageIntegrationProvider(PolarisStorageIntegrationProvider storageIntegrationProvider);
+
+  void setMetricRegistry(PolarisMetricRegistry metricRegistry);
+
+  Map<String, PolarisMetaStoreManager.PrincipalSecretsResult> bootstrapRealms(List<String> realms);
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisEntityManager.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisEntityManager.java
new file mode 100644
index 0000000000..79739813da
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisEntityManager.java
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) 2024
Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.persistence.cache.EntityCache; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.core.persistence.resolver.Resolver; +import io.polaris.core.storage.cache.StorageCredentialCache; +import java.util.List; +import java.util.function.Supplier; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Wraps logic of handling name-caching and entity-caching against a concrete underlying entity + * store while exposing methods more natural for the Catalog layer to use. Encapsulates the various + * id and name resolution mechanics around PolarisEntities. 
+ */
+public class PolarisEntityManager {
+  private static final Logger LOG = LoggerFactory.getLogger(PolarisEntityManager.class);
+
+  private final PolarisMetaStoreManager metaStoreManager;
+  private final Supplier<PolarisMetaStoreSession> sessionSupplier;
+  private final EntityCache entityCache;
+
+  private final StorageCredentialCache credentialCache;
+
+  // Lazily instantiated only a single time per entity manager.
+  private ResolvedPolarisEntity implicitResolvedRootContainerEntity = null;
+
+  /**
+   * @param sessionSupplier must return a new independent metastore session affiliated with the
+   *     backing store under the {@code delegate} on each invocation.
+   */
+  public PolarisEntityManager(
+      PolarisMetaStoreManager metaStoreManager,
+      Supplier<PolarisMetaStoreSession> sessionSupplier,
+      StorageCredentialCache credentialCache) {
+    this.metaStoreManager = metaStoreManager;
+    this.sessionSupplier = sessionSupplier;
+    this.entityCache = new EntityCache(metaStoreManager);
+    this.credentialCache = credentialCache;
+  }
+
+  public PolarisMetaStoreSession newMetaStoreSession() {
+    return sessionSupplier.get();
+  }
+
+  public PolarisMetaStoreManager getMetaStoreManager() {
+    return metaStoreManager;
+  }
+
+  public Resolver prepareResolver(
+      @NotNull CallContext callContext,
+      @NotNull AuthenticatedPolarisPrincipal authenticatedPrincipal,
+      @Nullable String referenceCatalogName) {
+    return new Resolver(
+        callContext.getPolarisCallContext(),
+        metaStoreManager,
+        authenticatedPrincipal.getPrincipalEntity().getId(),
+        null, /* callerPrincipalName */
+        authenticatedPrincipal.getActivatedPrincipalRoleNames().isEmpty()
+            ?
null + : authenticatedPrincipal.getActivatedPrincipalRoleNames(), + entityCache, + referenceCatalogName); + } + + public PolarisResolutionManifest prepareResolutionManifest( + @NotNull CallContext callContext, + @NotNull AuthenticatedPolarisPrincipal authenticatedPrincipal, + @Nullable String referenceCatalogName) { + PolarisResolutionManifest manifest = + new PolarisResolutionManifest( + callContext, this, authenticatedPrincipal, referenceCatalogName); + manifest.setSimulatedResolvedRootContainerEntity( + getSimulatedResolvedRootContainerEntity(callContext)); + return manifest; + } + + /** + * Returns a ResolvedPolarisEntity representing the realm-level "root" entity that is the implicit + * parent container of all things in this realm. + */ + private synchronized ResolvedPolarisEntity getSimulatedResolvedRootContainerEntity( + CallContext callContext) { + if (implicitResolvedRootContainerEntity == null) { + // For now, the root container is only implicit and doesn't exist in the entity store, and + // only + // the service_admin PrincipalRole has the SERVICE_MANAGE_ACCESS grant on this entity. If it + // becomes + // possible to grant other PrincipalRoles with SERVICE_MANAGE_ACCESS or other privileges on + // this + // root entity, then we must actually create a representation of this root entity in the + // entity store itself. 
+ PolarisEntity serviceAdminPrincipalRole = + PolarisEntity.of( + metaStoreManager + .readEntityByName( + callContext.getPolarisCallContext(), + null, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole()) + .getEntity()); + if (serviceAdminPrincipalRole == null) { + throw new IllegalStateException("Failed to resolve service_admin PrincipalRole"); + } + PolarisEntity rootContainerEntity = + new PolarisEntity.Builder() + .setId(0L) + .setCatalogId(0L) + .setType(PolarisEntityType.ROOT) + .setName("root") + .build(); + PolarisGrantRecord serviceAdminGrant = + new PolarisGrantRecord( + 0L, + 0L, + serviceAdminPrincipalRole.getCatalogId(), + serviceAdminPrincipalRole.getId(), + PolarisPrivilege.SERVICE_MANAGE_ACCESS.getCode()); + + implicitResolvedRootContainerEntity = + new ResolvedPolarisEntity(rootContainerEntity, null, List.of(serviceAdminGrant)); + } + return implicitResolvedRootContainerEntity; + } + + public StorageCredentialCache getCredentialCache() { + return credentialCache; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisEntityResolver.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisEntityResolver.java new file mode 100644 index 0000000000..a5a87731b5 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisEntityResolver.java @@ -0,0 +1,299 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.persistence;
+
+import io.polaris.core.PolarisCallContext;
+import io.polaris.core.PolarisDiagnostics;
+import io.polaris.core.entity.PolarisBaseEntity;
+import io.polaris.core.entity.PolarisEntitiesActiveKey;
+import io.polaris.core.entity.PolarisEntityActiveRecord;
+import io.polaris.core.entity.PolarisEntityConstants;
+import io.polaris.core.entity.PolarisEntityCore;
+import io.polaris.core.entity.PolarisEntityType;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.stream.Collectors;
+import org.jetbrains.annotations.NotNull;
+import org.jetbrains.annotations.Nullable;
+
+/**
+ * Utility class used by the meta store manager to ensure that all entities which had been resolved
+ * by the Polaris service outside a transaction have not been changed by a concurrent operation. In
+ * particular, we will ensure that all entities resolved outside the transaction are still active
+ * and have not been renamed, re-parented, or replaced by another entity with the same name.
+ */
+public class PolarisEntityResolver {
+
+  // cache diagnostics services
+  private final PolarisDiagnostics diagnostics;
+
+  // result of the resolution
+  private final boolean isSuccess;
+
+  // the catalog entity on the path. Only set if a catalog path is specified, i.e. if the entity
+  // being resolved is contained within a top-level catalog
+  private final PolarisEntityCore catalogEntity;
+
+  // the parent id of the entity. We have 2 cases here:
+  // - a path was specified, in which case the parent is the last element in that path
+  // - a path was not specified, in which case the parent id is the account.
+  private final long parentEntityId;
+
+  /**
+   * Full constructor for the resolver. The caller can specify a path inside a catalog which MUST
+   * start with the catalog itself. Then an optional entity to also resolve.
This entity will be + * top-level if the catalogPath is null, else it will be under that path. Finally, the caller can + * specify other top-level entities to resolve, either catalog or account top-level. If a catalog + * top-level entity is specified, the catalogPath should be specified in order to know the parent + * catalog. + * + *

 The resolver will ensure that none of the entities which are passed in have been dropped or
+   * were renamed or moved.
+   *
+   * @param callCtx call context
+   * @param ms meta store in read mode
+   * @param catalogPath path within the catalog. The first element MUST be a catalog entity.
+   * @param resolvedEntity optional entity to resolve under that catalog path. If a non-null value
+   *     is supplied, we will resolve it with the rest, as if it had been concatenated to the input
+   *     path. If catalogPath is null, this MUST be a top-level entity
+   * @param otherTopLevelEntities any other top-level entities like a catalog role, a principal role
+   *     or a principal can be specified here
+   */
+  PolarisEntityResolver(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisMetaStoreSession ms,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @Nullable PolarisEntityCore resolvedEntity,
+      @Nullable List<PolarisEntityCore> otherTopLevelEntities) {
+
+    // cache diagnostics services
+    this.diagnostics = callCtx.getDiagServices();
+
+    // validate path if one was specified
+    if (catalogPath != null) {
+      // cannot be an empty list
+      callCtx.getDiagServices().check(!catalogPath.isEmpty(), "catalogPath_cannot_be_empty");
+      // first in the path should be the catalog
+      callCtx
+          .getDiagServices()
+          .check(
+              catalogPath.get(0).getTypeCode() == PolarisEntityType.CATALOG.getCode(),
+              "entity_is_not_catalog",
+              "entity={}",
+              this);
+    } else if (resolvedEntity != null) {
+      // if an entity is specified without any path, it better be a top-level entity
+      callCtx
+          .getDiagServices()
+          .check(
+              resolvedEntity.getType().isTopLevel(),
+              "not_top_level_entity",
+              "resolvedEntity={}",
+              resolvedEntity);
+    }
+
+    // validate the otherTopLevelCatalogEntities list.
 Must be top-level catalog entities
+    if (otherTopLevelEntities != null) {
+      // ensure all entities are top-level
+      for (PolarisEntityCore topLevelCatalogEntityDto : otherTopLevelEntities) {
+        // top-level (catalog or account) and is catalog, catalog path must be specified
+        callCtx
+            .getDiagServices()
+            .check(
+                topLevelCatalogEntityDto.isTopLevel()
+                    || (topLevelCatalogEntityDto.getType().getParentType()
+                            == PolarisEntityType.CATALOG
+                        && catalogPath != null),
+                "not_top_level_or_missing_catalog_path",
+                "entity={} catalogPath={}",
+                topLevelCatalogEntityDto,
+                catalogPath);
+      }
+    }
+
+    // call the resolution logic
+    this.isSuccess =
+        this.resolveEntitiesIfNeeded(
+            callCtx, ms, catalogPath, resolvedEntity, otherTopLevelEntities);
+
+    // process result
+    if (!this.isSuccess) {
+      // if failed, initialize with N/A values
+      this.catalogEntity = null;
+      this.parentEntityId = PolarisEntityConstants.getNullId();
+    } else if (catalogPath != null) {
+      this.catalogEntity = catalogPath.get(0);
+      this.parentEntityId = catalogPath.get(catalogPath.size() - 1).getId();
+    } else {
+      this.catalogEntity = null;
+      this.parentEntityId = PolarisEntityConstants.getRootEntityId();
+    }
+  }
+
+  /**
+   * Constructor for the resolver, when we only need to resolve a path
+   *
+   * @param callCtx call context
+   * @param ms meta store in read mode
+   * @param catalogPath input path, can be null or empty list if the entity is a top-level entity
+   *     like a catalog.
+   */
+  PolarisEntityResolver(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisMetaStoreSession ms,
+      @Nullable List<PolarisEntityCore> catalogPath) {
+    this(callCtx, ms, catalogPath, null, null);
+  }
+
+  /**
+   * Constructor for the resolver, when we only need to resolve a path
+   *
+   * @param callCtx call context
+   * @param ms meta store in read mode
+   * @param catalogPath input path, can be null or empty list if the entity is a top-level entity
+   *     like a catalog.
+   * @param resolvedEntityDto resolved entity DTO
+   */
+  PolarisEntityResolver(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisMetaStoreSession ms,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      PolarisEntityCore resolvedEntityDto) {
+    this(callCtx, ms, catalogPath, resolvedEntityDto, null);
+  }
+
+  /**
+   * Constructor for the resolver, when we only need to resolve a path
+   *
+   * @param callCtx call context
+   * @param ms meta store in read mode
+   * @param catalogPath input path, can be null or empty list if the entity is a top-level entity
+   *     like a catalog.
+   * @param entity Polaris base entity
+   */
+  PolarisEntityResolver(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisMetaStoreSession ms,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @NotNull PolarisBaseEntity entity) {
+    this(callCtx, ms, catalogPath, new PolarisEntityCore(entity), null);
+  }
+
+  /**
+   * @return true if the resolution failed, i.e. not all entities could be resolved
+   */
+  boolean isFailure() {
+    return !this.isSuccess;
+  }
+
+  /**
+   * @return If a non-null catalog path was specified at construction time, the id of the last
+   *     entity in this path, else the pseudo account id, i.e. 0
+   */
+  long getParentId() {
+    this.diagnostics.check(this.isSuccess, "resolver_failed");
+    return this.parentEntityId;
+  }
+
+  /**
+   * @return id of the catalog or the "NULL" id if the entity is top-level
+   */
+  long getCatalogIdOrNull() {
+    this.diagnostics.check(this.isSuccess, "resolver_failed");
+    return this.catalogEntity == null
+        ? PolarisEntityConstants.getNullId()
+        : this.catalogEntity.getId();
+  }
+
+  /**
+   * Ensure all specified entities are still active, have not been renamed or re-parented.
+   *
+   * @param callCtx call context
+   * @param ms meta store in read mode
+   * @param catalogPath path within the catalog. The first element MUST be a catalog. Null or empty
+   *     for top-level entities like catalog
+   * @param resolvedEntity optional entity to resolve under that catalog path.
 If a non-null value
+   *     is supplied, we will resolve it with the rest, as if it had been concatenated to the input
+   *     path.
+   * @param otherTopLevelEntities if non-null, these are top-level catalog entities under the
+   *     catalog rooting the catalogPath. Hence, this can be specified only if catalogPath is not
+   *     null
+   * @return true if all entities have been resolved successfully
+   */
+  private boolean resolveEntitiesIfNeeded(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisMetaStoreSession ms,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @Nullable PolarisEntityCore resolvedEntity,
+      @Nullable List<PolarisEntityCore> otherTopLevelEntities) {
+
+    // determine the number of entities to resolve
+    int resolveCount =
+        ((catalogPath != null) ? catalogPath.size() : 0)
+            + ((resolvedEntity != null) ? 1 : 0)
+            + ((otherTopLevelEntities != null) ? otherTopLevelEntities.size() : 0);
+
+    // nothing to do if 0
+    if (resolveCount == 0) {
+      return true;
+    }
+
+    // construct full list of entities to resolve
+    final List<PolarisEntityCore> toResolve = new ArrayList<>(resolveCount);
+
+    // first add the other top-level catalog entities, then the catalog path, then the entity
+    if (otherTopLevelEntities != null) {
+      toResolve.addAll(otherTopLevelEntities);
+    }
+    if (catalogPath != null) {
+      toResolve.addAll(catalogPath);
+    }
+    if (resolvedEntity != null) {
+      toResolve.add(resolvedEntity);
+    }
+
+    // now build a list of entity active keys
+    List<PolarisEntitiesActiveKey> entityActiveKeys =
+        toResolve.stream()
+            .map(
+                entityCore ->
+                    new PolarisEntitiesActiveKey(
+                        entityCore.getCatalogId(),
+                        entityCore.getParentId(),
+                        entityCore.getTypeCode(),
+                        entityCore.getName()))
+            .collect(Collectors.toList());
+
+    // now lookup all these entities by name
+    Iterator<PolarisEntityActiveRecord> activeRecordIt =
+        ms.lookupEntityActiveBatch(callCtx, entityActiveKeys).iterator();
+
+    // now validate each entity; if anything changed, fail the validation
+    for (PolarisEntityCore resolveEntity : toResolve) {
+      // get the associated active record
+      PolarisEntityActiveRecord activeEntityRecord =
activeRecordIt.next(); + + // if this entity has been dropped (null) or replaced (<> ids), then fail validation + if (activeEntityRecord == null || activeEntityRecord.getId() != resolveEntity.getId()) { + return false; + } + } + + // all good, everything was resolved successfully + return true; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreManager.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreManager.java new file mode 100644 index 0000000000..278dea10af --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreManager.java @@ -0,0 +1,1482 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.annotation.JsonProperty; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisChangeTrackingVersions; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageActions; +import java.util.EnumMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** + * Polaris Metastore Manager manages all Polaris entities and associated grant records metadata for + * authorization. It uses the underlying persistent metastore to store and retrieve Polaris metadata + */ +public interface PolarisMetaStoreManager { + + /** Possible return code for the various API calls. */ + enum ReturnStatus { + // all good + SUCCESS(1), + + // an unexpected error was thrown, should result in a 500 error to the client + UNEXPECTED_ERROR_SIGNALED(2), + + // the specified catalog path cannot be resolved. There is a possibility that by the time a call + // is made by the client to the persistent storage, something has changed due to concurrent + // modification(s). The client should retry in that case. + CATALOG_PATH_CANNOT_BE_RESOLVED(3), + + // the specified entity (and its path) cannot be resolved. 
 There is a possibility that by the
+    // time a call is made by the client to the persistent storage, something has changed due to
+    // concurrent modification(s). The client should retry in that case.
+    ENTITY_CANNOT_BE_RESOLVED(4),
+
+    // entity not found
+    ENTITY_NOT_FOUND(5),
+
+    // grant not found
+    GRANT_NOT_FOUND(6),
+
+    // entity already exists
+    ENTITY_ALREADY_EXISTS(7),
+
+    // entity cannot be dropped, it is one of the bootstrap objects, like the catalog admin role or
+    // the service admin principal role
+    ENTITY_UNDROPPABLE(8),
+
+    // Namespace is not empty and cannot be dropped
+    NAMESPACE_NOT_EMPTY(9),
+
+    // Catalog is not empty and cannot be dropped. All catalog roles (except the admin catalog
+    // role) and all namespaces in the catalog must be dropped before the catalog can be dropped
+    CATALOG_NOT_EMPTY(10),
+
+    // The target entity was concurrently modified
+    TARGET_ENTITY_CONCURRENTLY_MODIFIED(11),
+
+    // entity cannot be renamed
+    ENTITY_CANNOT_BE_RENAMED(12),
+
+    // error caught while sub-scoping credentials. Error message will be returned
+    SUBSCOPE_CREDS_ERROR(13),
+    ;
+
+    // code for the enum
+    private final int code;
+
+    /** constructor */
+    ReturnStatus(int code) {
+      this.code = code;
+    }
+
+    int getCode() {
+      return this.code;
+    }
+
+    // to efficiently map a code to its corresponding return status
+    private static final ReturnStatus[] REVERSE_MAPPING_ARRAY;
+
+    static {
+      // find max array size
+      int maxCode = 0;
+      for (ReturnStatus returnStatus : ReturnStatus.values()) {
+        if (maxCode < returnStatus.code) {
+          maxCode = returnStatus.code;
+        }
+      }
+
+      // allocate mapping array
+      REVERSE_MAPPING_ARRAY = new ReturnStatus[maxCode + 1];
+
+      // populate mapping array
+      for (ReturnStatus returnStatus : ReturnStatus.values()) {
+        REVERSE_MAPPING_ARRAY[returnStatus.code] = returnStatus;
+      }
+    }
+
+    static ReturnStatus getStatus(int code) {
+      return code >= REVERSE_MAPPING_ARRAY.length ?
null : REVERSE_MAPPING_ARRAY[code]; + } + } + + /** Base result class for any call to the persistence layer */ + class BaseResult { + // return code, indicates success or failure + private final int returnStatusCode; + + // additional information for some error return code + private final String extraInformation; + + public BaseResult() { + this.returnStatusCode = ReturnStatus.SUCCESS.getCode(); + this.extraInformation = null; + } + + public BaseResult(@NotNull PolarisMetaStoreManager.ReturnStatus returnStatus) { + this.returnStatusCode = returnStatus.getCode(); + this.extraInformation = null; + } + + @JsonCreator + public BaseResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") @Nullable String extraInformation) { + this.returnStatusCode = returnStatus.getCode(); + this.extraInformation = extraInformation; + } + + public ReturnStatus getReturnStatus() { + return ReturnStatus.getStatus(this.returnStatusCode); + } + + public String getExtraInformation() { + return extraInformation; + } + + public boolean isSuccess() { + return this.returnStatusCode == ReturnStatus.SUCCESS.getCode(); + } + + public boolean alreadyExists() { + return this.returnStatusCode == ReturnStatus.ENTITY_ALREADY_EXISTS.getCode(); + } + } + + /** + * Bootstrap the Polaris service, will remove ALL existing persisted entities, then will create + * the root catalog, root principal and associated service admin role. + * + *

*************************** WARNING ************************ + * + *

This will destroy whatever Polaris metadata exists in this account + * + * @param callCtx call context + * @return always success or unexpected error + */ + @NotNull + BaseResult bootstrapPolarisService(@NotNull PolarisCallContext callCtx); + + /** the return for an entity lookup call */ + class EntityResult extends BaseResult { + + // null if not success + private final PolarisBaseEntity entity; + + /** + * Constructor for an error + * + * @param errorCode error code, cannot be SUCCESS + * @param extraInformation extra information if error. Implementation specific + */ + public EntityResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorCode, + @Nullable String extraInformation) { + super(errorCode, extraInformation); + this.entity = null; + } + + /** + * Constructor for success + * + * @param entity the entity being looked-up + */ + public EntityResult(@NotNull PolarisBaseEntity entity) { + super(ReturnStatus.SUCCESS); + this.entity = entity; + } + + /** + * Constructor for an object already exists error where the subtype of the existing entity is + * returned + * + * @param errorStatus error status, cannot be SUCCESS + * @param subTypeCode existing entity subtype code + */ + public EntityResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorStatus, int subTypeCode) { + super(errorStatus, Integer.toString(subTypeCode)); + this.entity = null; + } + + /** + * For object already exist error, we use the extra information to serialize the subtype code of + * the existing object. 
Return the subtype + * + * @return object subtype or NULL (should not happen) if subtype code is missing or cannot be + * deserialized + */ + @Nullable + public PolarisEntitySubType getAlreadyExistsEntitySubType() { + if (this.getExtraInformation() == null) { + return null; + } else { + int subTypeCode; + try { + subTypeCode = Integer.parseInt(this.getExtraInformation()); + } catch (NumberFormatException e) { + return null; + } + return PolarisEntitySubType.fromCode(subTypeCode); + } + } + + @JsonCreator + private EntityResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") @Nullable String extraInformation, + @JsonProperty("entity") @Nullable PolarisBaseEntity entity) { + super(returnStatus, extraInformation); + this.entity = entity; + } + + public PolarisBaseEntity getEntity() { + return entity; + } + } + + /** + * Resolve an entity by name. Can be a top-level entity like a catalog or an entity inside a + * catalog like a namespace, a role, a table like entity, or a principal. If the entity is inside + * a catalog, the parameter catalogPath must be specified + * + * @param callCtx call context + * @param catalogPath path inside a catalog to that entity, rooted by the catalog. If null, the + * entity being resolved is a top-level account entity like a catalog. + * @param entityType entity type + * @param entitySubType entity subtype. Can be the special value ANY_SUBTYPE to match any + * subtypes. Else exact match on the subtype will be required. + * @param name name of the entity, cannot be null + * @return the result of the lookup operation. ENTITY_NOT_FOUND is returned if the specified + * entity is not found in the specified path. CONCURRENT_MODIFICATION_DETECTED_NEED_RETRY is + * returned if the specified catalog path cannot be resolved. 
+   */
+  @NotNull
+  PolarisMetaStoreManager.EntityResult readEntityByName(
+      @NotNull PolarisCallContext callCtx,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @NotNull PolarisEntityType entityType,
+      @NotNull PolarisEntitySubType entitySubType,
+      @NotNull String name);
+
+  /** the result of a list-entities call */
+  class ListEntitiesResult extends BaseResult {
+
+    // null if not success. Else the list of entities being returned
+    private final List<PolarisEntityActiveRecord> entities;
+
+    /**
+     * Constructor for an error
+     *
+     * @param errorCode error code, cannot be SUCCESS
+     * @param extraInformation extra information
+     */
+    public ListEntitiesResult(
+        @NotNull PolarisMetaStoreManager.ReturnStatus errorCode,
+        @Nullable String extraInformation) {
+      super(errorCode, extraInformation);
+      this.entities = null;
+    }
+
+    /**
+     * Constructor for success
+     *
+     * @param entities list of entities being returned, implies success
+     */
+    public ListEntitiesResult(@NotNull List<PolarisEntityActiveRecord> entities) {
+      super(ReturnStatus.SUCCESS);
+      this.entities = entities;
+    }
+
+    @JsonCreator
+    private ListEntitiesResult(
+        @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus,
+        @JsonProperty("extraInformation") String extraInformation,
+        @JsonProperty("entities") List<PolarisEntityActiveRecord> entities) {
+      super(returnStatus, extraInformation);
+      this.entities = entities;
+    }
+
+    public List<PolarisEntityActiveRecord> getEntities() {
+      return entities;
+    }
+  }
+
+  /**
+   * List all entities of the specified type under the specified catalogPath. If the catalogPath is
+   * null, listed entities will be top-level entities like catalogs.
+   *
+   * @param callCtx call context
+   * @param catalogPath path inside a catalog. If null or empty, the entities to list are top-level,
+   *     like catalogs
+   * @param entityType entity type
+   * @param entitySubType entity subtype. Can be the special value ANY_SUBTYPE to match any subtype.
+   *     Else exact match will be performed.
+   * @return the names, ids and subtypes of all entities under the specified namespace.
+   */
+  @NotNull
+  ListEntitiesResult listEntities(
+      @NotNull PolarisCallContext callCtx,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @NotNull PolarisEntityType entityType,
+      @NotNull PolarisEntitySubType entitySubType);
+
+  /** the result of a generate-new-entity-id call */
+  class GenerateEntityIdResult extends BaseResult {
+
+    // null if not success
+    private final Long id;
+
+    /**
+     * Constructor for an error
+     *
+     * @param errorCode error code, cannot be SUCCESS
+     * @param extraInformation extra information
+     */
+    public GenerateEntityIdResult(
+        @NotNull PolarisMetaStoreManager.ReturnStatus errorCode,
+        @Nullable String extraInformation) {
+      super(errorCode, extraInformation);
+      this.id = null;
+    }
+
+    /**
+     * Constructor for success
+     *
+     * @param id the new id which was generated
+     */
+    public GenerateEntityIdResult(@NotNull Long id) {
+      super(ReturnStatus.SUCCESS);
+      this.id = id;
+    }
+
+    @JsonCreator
+    private GenerateEntityIdResult(
+        @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus,
+        @JsonProperty("extraInformation") @Nullable String extraInformation,
+        @JsonProperty("id") @Nullable Long id) {
+      super(returnStatus, extraInformation);
+      this.id = id;
+    }
+
+    public Long getId() {
+      return id;
+    }
+  }
+
+  /**
+   * Generate a new unique id that can be used by the Polaris client when it needs to create a new
+   * entity
+   *
+   * @param callCtx call context
+   * @return the newly created id, not expected to fail
+   */
+  @NotNull
+  GenerateEntityIdResult generateNewEntityId(@NotNull PolarisCallContext callCtx);
+
+  /** the result of a create-principal call */
+  class CreatePrincipalResult extends BaseResult {
+    // the principal which has been created. Null if error
+    private final PolarisBaseEntity principal;
+
+    // principal client identifier and associated secrets.
Null if error + private final PolarisPrincipalSecrets principalSecrets; + + /** + * Constructor for an error + * + * @param errorCode error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public CreatePrincipalResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorCode, + @Nullable String extraInformation) { + super(errorCode, extraInformation); + this.principal = null; + this.principalSecrets = null; + } + + /** + * Constructor for success + * + * @param principal the principal + * @param principalSecrets and associated secret information + */ + public CreatePrincipalResult( + @NotNull PolarisBaseEntity principal, @NotNull PolarisPrincipalSecrets principalSecrets) { + super(ReturnStatus.SUCCESS); + this.principal = principal; + this.principalSecrets = principalSecrets; + } + + @JsonCreator + private CreatePrincipalResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") @Nullable String extraInformation, + @JsonProperty("principal") @NotNull PolarisBaseEntity principal, + @JsonProperty("principalSecrets") @NotNull PolarisPrincipalSecrets principalSecrets) { + super(returnStatus, extraInformation); + this.principal = principal; + this.principalSecrets = principalSecrets; + } + + public PolarisBaseEntity getPrincipal() { + return principal; + } + + public PolarisPrincipalSecrets getPrincipalSecrets() { + return principalSecrets; + } + } + + /** + * Create a new principal. This not only creates the new principal entity but also generates a + * client_id/secret pair for this new principal. + * + * @param callCtx call context + * @param principal the principal entity to create + * @return the client_id/secret for the new principal which was created. 
Will return + * ENTITY_ALREADY_EXISTS if the principal already exists + */ + @NotNull + CreatePrincipalResult createPrincipal( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity principal); + + /** the result of load/rotate principal secrets */ + class PrincipalSecretsResult extends BaseResult { + + // principal client identifier and associated secrets. Null if error + private final PolarisPrincipalSecrets principalSecrets; + + /** + * Constructor for an error + * + * @param errorCode error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public PrincipalSecretsResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorCode, + @Nullable String extraInformation) { + super(errorCode, extraInformation); + this.principalSecrets = null; + } + + /** + * Constructor for success + * + * @param principalSecrets and associated secret information + */ + public PrincipalSecretsResult(@NotNull PolarisPrincipalSecrets principalSecrets) { + super(ReturnStatus.SUCCESS); + this.principalSecrets = principalSecrets; + } + + @JsonCreator + private PrincipalSecretsResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") @Nullable String extraInformation, + @JsonProperty("principalSecrets") @NotNull PolarisPrincipalSecrets principalSecrets) { + super(returnStatus, extraInformation); + this.principalSecrets = principalSecrets; + } + + public PolarisPrincipalSecrets getPrincipalSecrets() { + return principalSecrets; + } + } + + /** + * Load the principal secrets given the client_id. 
+   *
+   * @param callCtx call context
+   * @param clientId principal client id
+   * @return the secrets associated to that principal, including the entity id of the principal
+   */
+  @NotNull
+  PrincipalSecretsResult loadPrincipalSecrets(
+      @NotNull PolarisCallContext callCtx, @NotNull String clientId);
+
+  /**
+   * Rotate secrets
+   *
+   * @param callCtx call context
+   * @param clientId principal client id
+   * @param principalId id of the principal
+   * @param mainSecret main secret for the principal
+   * @param reset true if the principal's secrets should be disabled and replaced with a one-time
+   *     password. If the principal's secret is already a one-time password, this flag is
+   *     automatically true
+   * @return the secrets associated to that principal and the id of the principal
+   */
+  @NotNull
+  PrincipalSecretsResult rotatePrincipalSecrets(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull String clientId,
+      long principalId,
+      @NotNull String mainSecret,
+      boolean reset);
+
+  /** the result of a create-catalog call */
+  class CreateCatalogResult extends BaseResult {
+
+    // the catalog which has been created
+    private final PolarisBaseEntity catalog;
+
+    // its associated catalog admin role
+    private final PolarisBaseEntity catalogAdminRole;
+
+    /**
+     * Constructor for an error
+     *
+     * @param errorCode error code, cannot be SUCCESS
+     * @param extraInformation extra information
+     */
+    public CreateCatalogResult(
+        @NotNull PolarisMetaStoreManager.ReturnStatus errorCode,
+        @Nullable String extraInformation) {
+      super(errorCode, extraInformation);
+      this.catalog = null;
+      this.catalogAdminRole = null;
+    }
+
+    /**
+     * Constructor for success
+     *
+     * @param catalog the catalog
+     * @param catalogAdminRole the associated admin role
+     */
+    public CreateCatalogResult(
+        @NotNull PolarisBaseEntity catalog, @NotNull PolarisBaseEntity catalogAdminRole) {
+      super(ReturnStatus.SUCCESS);
+      this.catalog = catalog;
+      this.catalogAdminRole = catalogAdminRole;
+    }
+
+
@JsonCreator + private CreateCatalogResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") @Nullable String extraInformation, + @JsonProperty("catalog") @NotNull PolarisBaseEntity catalog, + @JsonProperty("catalogAdminRole") @NotNull PolarisBaseEntity catalogAdminRole) { + super(returnStatus, extraInformation); + this.catalog = catalog; + this.catalogAdminRole = catalogAdminRole; + } + + public PolarisBaseEntity getCatalog() { + return catalog; + } + + public PolarisBaseEntity getCatalogAdminRole() { + return catalogAdminRole; + } + } + + /** + * Create a new catalog. This not only creates the new catalog entity but also the initial admin + * role required to admin this catalog. If inline storage integration property is provided, create + * a storage integration. + * + * @param callCtx call context + * @param catalog the catalog entity to create + * @param principalRoles once the catalog has been created, list of principal roles to grant its + * catalog_admin role to. If no principal role is specified, we will grant the catalog_admin + * role of the newly created catalog to the service admin role. + * @return if success, the catalog which was created and its admin role. + */ + @NotNull + CreateCatalogResult createCatalog( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisBaseEntity catalog, + @NotNull List principalRoles); + + /** + * Persist a newly created entity under the specified catalog path if specified, else this is a + * top-level entity. We will re-resolve the specified path to ensure nothing has changed since the + * Polaris app resolved the path. If the entity already exists with the same specified id, we will + * simply return it. This can happen when the client retries. If a catalogPath is specified and + * cannot be resolved, we will return null. And of course if another entity exists with the same + * name, we will fail and also return null. 
+ * + * @param callCtx call context + * @param catalogPath path inside a catalog. If null, the entity to persist is assumed to be + * top-level. + * @param entity entity to write + * @return the newly created entity. If this entity was already created, we will simply return the + * already created entity. We will return null if a different entity with the same name exists + * or if the catalogPath couldn't be resolved. If null is returned, the client app should + * retry this operation. + */ + @NotNull + EntityResult createEntityIfNotExists( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisBaseEntity entity); + + /** a set of returned entities result */ + class EntitiesResult extends BaseResult { + + // null if not success. Else the list of entities being returned + private final List entities; + + /** + * Constructor for an error + * + * @param errorStatus error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public EntitiesResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorStatus, + @Nullable String extraInformation) { + super(errorStatus, extraInformation); + this.entities = null; + } + + /** + * Constructor for success + * + * @param entities list of entities being returned, implies success + */ + public EntitiesResult(@NotNull List entities) { + super(ReturnStatus.SUCCESS); + this.entities = entities; + } + + @JsonCreator + private EntitiesResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") String extraInformation, + @JsonProperty("entities") List entities) { + super(returnStatus, extraInformation); + this.entities = entities; + } + + public List getEntities() { + return entities; + } + } + + /** + * Persist a batch of newly created entities under the specified catalog path if specified, else + * these are top-level entities. 
We will re-resolve the specified path to ensure nothing has + * changed since the Polaris app resolved the path. If any of the entities already exists with the + * same specified id, we will simply return it. This can happen when the client retries. If a + * catalogPath is specified and cannot be resolved, we will return null and none of the entities + * will be persisted. And of course if any entity conflicts with an existing entity with the same + * name, we will fail all entities and also return null. + * + * @param callCtx call context + * @param catalogPath path inside a catalog. If null, the entity to persist is assumed to be + * top-level. + * @param entities batch of entities to write + * @return the newly created entities. If the entities were already created, we will simply return + * the already created entity. We will return null if a different entity with the same name + * exists or if the catalogPath couldn't be resolved. If null is returned, the client app + * should retry this operation. + */ + @NotNull + EntitiesResult createEntitiesIfNotExist( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull List entities); + + /** + * Update some properties of this entity assuming it can still be resolved the same way and itself + * has not changed. If this is not the case we will return false. Else we will update both the + * internal and visible properties and return true + * + * @param callCtx call context + * @param catalogPath path to that entity. Could be null if this entity is top-level + * @param entity entity to update, cannot be null + * @return the entity we updated or null if the client should retry + */ + @NotNull + EntityResult updateEntityPropertiesIfNotChanged( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisBaseEntity entity); + + /** Class to represent an entity with its path */ + class EntityWithPath { + // path to that entity. 
Empty if the entity is top-level
+    private final @NotNull List<PolarisEntityCore> catalogPath;
+
+    // the base entity itself
+    private final @NotNull PolarisBaseEntity entity;
+
+    @JsonCreator
+    public EntityWithPath(
+        @JsonProperty("catalogPath") @NotNull List<PolarisEntityCore> catalogPath,
+        @JsonProperty("entity") @NotNull PolarisBaseEntity entity) {
+      this.catalogPath = catalogPath;
+      this.entity = entity;
+    }
+
+    public @NotNull List<PolarisEntityCore> getCatalogPath() {
+      return catalogPath;
+    }
+
+    public @NotNull PolarisBaseEntity getEntity() {
+      return entity;
+    }
+  }
+
+  /**
+   * This works exactly like {@link #updateEntityPropertiesIfNotChanged(PolarisCallContext, List,
+   * PolarisBaseEntity)} but allows operating on multiple entities at once. Just loop through the
+   * list, calling each entity update and return null if any of those fail.
+   *
+   * @param callCtx call context
+   * @param entities the set of entities to update
+   * @return list of all entities we updated or null if the client should retry because one update
+   *     failed
+   */
+  @NotNull
+  EntitiesResult updateEntitiesPropertiesIfNotChanged(
+      @NotNull PolarisCallContext callCtx, @NotNull List<EntityWithPath> entities);
+
+  /**
+   * Rename an entity, potentially re-parenting it.
+   *
+   * @param callCtx call context
+   * @param catalogPath path to that entity. Could be an empty list if the entity is a catalog.
+   * @param entityToRename entity to rename. This entity should have been resolved by the client
+   * @param newCatalogPath if not null, new catalog path
+   * @param renamedEntity the new renamed entity we need to persist. We will use this argument to
+   *     also update the internal and external properties as part of the rename operation. This is
This is + * required to update the namespace path of the entity if it has changed + * @return the entity after renaming it or null if the rename operation has failed + */ + @NotNull + EntityResult renameEntity( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisEntityCore entityToRename, + @Nullable List newCatalogPath, + @NotNull PolarisEntity renamedEntity); + + // the return the result of a drop entity + class DropEntityResult extends BaseResult { + + /** If cleanup was requested and a task was successfully scheduled, */ + private final Long cleanupTaskId; + + /** + * Constructor for an error + * + * @param errorStatus error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public DropEntityResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorStatus, + @Nullable String extraInformation) { + super(errorStatus, extraInformation); + this.cleanupTaskId = null; + } + + /** Constructor for success when no cleanup needs to be performed */ + public DropEntityResult() { + super(ReturnStatus.SUCCESS); + this.cleanupTaskId = null; + } + + /** + * Constructor for success when a cleanup task has been scheduled + * + * @param cleanupTaskId id of the task which was created to clean up the table drop + */ + public DropEntityResult(long cleanupTaskId) { + super(ReturnStatus.SUCCESS); + this.cleanupTaskId = cleanupTaskId; + } + + @JsonCreator + private DropEntityResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") String extraInformation, + @JsonProperty("cleanupTaskId") Long cleanupTaskId) { + super(returnStatus, extraInformation); + this.cleanupTaskId = cleanupTaskId; + } + + public Long getCleanupTaskId() { + return cleanupTaskId; + } + + @JsonIgnore + public boolean failedBecauseNotEmpty() { + ReturnStatus status = this.getReturnStatus(); + return status == ReturnStatus.CATALOG_NOT_EMPTY || status == ReturnStatus.NAMESPACE_NOT_EMPTY; + } + + public 
boolean isEntityUnDroppable() {
+      return this.getReturnStatus() == ReturnStatus.ENTITY_UNDROPPABLE;
+    }
+  }
+
+  /**
+   * Drop the specified entity assuming it exists
+   *
+   * @param callCtx call context
+   * @param catalogPath path to that entity. Could be an empty list if the entity is a catalog.
+   * @param entityToDrop entity to drop, must have been resolved by the client
+   * @param cleanupProperties if not null, properties that will be persisted with the cleanup task
+   * @param cleanup true if resources owned by this entity should be deleted as well
+   * @return the result of the drop entity call, either success or error. On error, the namespace or
+   *     catalog being dropped may still have children; in that case the operation should not be
+   *     retried and should be surfaced as a failure
+   */
+  @NotNull
+  DropEntityResult dropEntityIfExists(
+      @NotNull PolarisCallContext callCtx,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @NotNull PolarisEntityCore entityToDrop,
+      @Nullable Map<String, String> cleanupProperties,
+      boolean cleanup);
+
+  /** Result of a grant/revoke privilege call */
+  class PrivilegeResult extends BaseResult {
+
+    // null if not success.
+    private final PolarisGrantRecord grantRecord;
+
+    /**
+     * Constructor for an error
+     *
+     * @param errorCode error code, cannot be SUCCESS
+     * @param extraInformation extra information
+     */
+    public PrivilegeResult(
+        @NotNull PolarisMetaStoreManager.ReturnStatus errorCode,
+        @Nullable String extraInformation) {
+      super(errorCode, extraInformation);
+      this.grantRecord = null;
+    }
+
+    /**
+     * Constructor for success
+     *
+     * @param grantRecord grant record being granted or revoked
+     */
+    public PrivilegeResult(@NotNull PolarisGrantRecord grantRecord) {
+      super(ReturnStatus.SUCCESS);
+      this.grantRecord = grantRecord;
+    }
+
+    @JsonCreator
+    private PrivilegeResult(
+        @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus,
+        @JsonProperty("extraInformation") String extraInformation,
+        @JsonProperty("grantRecord") PolarisGrantRecord grantRecord) {
+      super(returnStatus, extraInformation);
+      this.grantRecord = grantRecord;
+    }
+
+    public PolarisGrantRecord getGrantRecord() {
+      return grantRecord;
+    }
+  }
+
+  /**
+   * Grant usage on a role to a grantee, for example granting usage on a catalog role to a principal
+   * role or granting a principal role to a principal.
+   *
+   * @param callCtx call context
+   * @param catalog if the role is a catalog role, the caller needs to pass-in the catalog entity
+   *     which was used to resolve that role. Else null.
+   * @param role resolved catalog or principal role
+   * @param grantee principal role or principal as resolved by the caller
+   * @return the grant record we created for this grant. Will return ENTITY_NOT_FOUND if the
+   *     specified role couldn't be found. Should be retried in that case
+   */
+  @NotNull
+  PrivilegeResult grantUsageOnRoleToGrantee(
+      @NotNull PolarisCallContext callCtx,
+      @Nullable PolarisEntityCore catalog,
+      @NotNull PolarisEntityCore role,
+      @NotNull PolarisEntityCore grantee);
+
+  /**
+   * Revoke usage on a role (a catalog or a principal role) from a grantee (e.g. a principal role or
+   * a principal).
+   *
+   * @param callCtx call context
+   * @param catalog if the role is a catalog role, the caller needs to pass-in the catalog entity
+   *     which was used to resolve that role. Else null should be passed-in.
+   * @param role a catalog/principal role as resolved by the caller
+   * @param grantee resolved principal role or principal
+   * @return the result. Will return ENTITY_NOT_FOUND if the specified role couldn't be found.
+   *     Should be retried in that case. Will return GRANT_NOT_FOUND if the grant to revoke cannot
+   *     be found
+   */
+  @NotNull
+  PrivilegeResult revokeUsageOnRoleFromGrantee(
+      @NotNull PolarisCallContext callCtx,
+      @Nullable PolarisEntityCore catalog,
+      @NotNull PolarisEntityCore role,
+      @NotNull PolarisEntityCore grantee);
+
+  /**
+   * Grant a privilege on a catalog securable to a grantee.
+   *
+   * @param callCtx call context
+   * @param grantee resolved role, the grantee
+   * @param catalogPath path to that entity, cannot be null or empty unless securable is top-level
+   * @param securable securable entity, must have been resolved by the client. Can be the catalog
+   *     itself
+   * @param privilege privilege to grant
+   * @return the grant record we created for this grant. Will return ENTITY_NOT_FOUND if the
+   *     specified role couldn't be found. Should be retried in that case
+   */
+  @NotNull
+  PrivilegeResult grantPrivilegeOnSecurableToRole(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisEntityCore grantee,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @NotNull PolarisEntityCore securable,
+      @NotNull PolarisPrivilege privilege);
+
+  /**
+   * Revoke a privilege on a catalog securable from a grantee.
+   *
+   * @param callCtx call context
+   * @param grantee resolved role, the grantee
+   * @param catalogPath path to that entity, cannot be null or empty unless securable is top-level
+   * @param securable securable entity, must have been resolved by the client. Can be the catalog
+   *     itself.
+   * @param privilege privilege to revoke
+   * @return the result.
Will return ENTITY_NOT_FOUND if the specified role couldn't be found.
+   *     Should be retried in that case. Will return GRANT_NOT_FOUND if the grant to revoke cannot
+   *     be found
+   */
+  @NotNull
+  PrivilegeResult revokePrivilegeOnSecurableFromRole(
+      @NotNull PolarisCallContext callCtx,
+      @NotNull PolarisEntityCore grantee,
+      @Nullable List<PolarisEntityCore> catalogPath,
+      @NotNull PolarisEntityCore securable,
+      @NotNull PolarisPrivilege privilege);
+
+  /** Result of a load grants call */
+  class LoadGrantsResult extends BaseResult {
+    // version of the grant records; 0 if not success
+    private final int grantsVersion;
+
+    // null if not success. Else set of grant records on a securable or to a grantee
+    private final List<PolarisGrantRecord> grantRecords;
+
+    // null if not success. Else, for each grant record, list of securable or grantee entities
+    private final List<PolarisBaseEntity> entities;
+
+    /**
+     * Constructor for an error
+     *
+     * @param errorCode error code, cannot be SUCCESS
+     * @param extraInformation extra information
+     */
+    public LoadGrantsResult(
+        @NotNull PolarisMetaStoreManager.ReturnStatus errorCode,
+        @Nullable String extraInformation) {
+      super(errorCode, extraInformation);
+      this.grantsVersion = 0;
+      this.grantRecords = null;
+      this.entities = null;
+    }
+
+    /**
+     * Constructor for success
+     *
+     * @param grantsVersion version of the grants
+     * @param grantRecords set of grant records
+     */
+    public LoadGrantsResult(
+        int grantsVersion,
+        @NotNull List<PolarisGrantRecord> grantRecords,
+        List<PolarisBaseEntity> entities) {
+      super(ReturnStatus.SUCCESS);
+      this.grantsVersion = grantsVersion;
+      this.grantRecords = grantRecords;
+      this.entities = entities;
+    }
+
+    @JsonCreator
+    private LoadGrantsResult(
+        @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus,
+        @JsonProperty("extraInformation") String extraInformation,
+        @JsonProperty("grantsVersion") int grantsVersion,
+        @JsonProperty("grantRecords") List<PolarisGrantRecord> grantRecords,
+        @JsonProperty("entities") List<PolarisBaseEntity> entities) {
+      super(returnStatus,
extraInformation); + this.grantsVersion = grantsVersion; + this.grantRecords = grantRecords; + // old GS code might not serialize this argument + this.entities = entities; + } + + public int getGrantsVersion() { + return grantsVersion; + } + + public List getGrantRecords() { + return grantRecords; + } + + public List getEntities() { + return entities; + } + + @JsonIgnore + public Map getEntitiesAsMap() { + return (this.getEntities() == null) + ? null + : this.getEntities().stream() + .collect(Collectors.toMap(PolarisBaseEntity::getId, entity -> entity)); + } + + @Override + public String toString() { + return "LoadGrantsResult{" + + "grantsVersion=" + + grantsVersion + + ", grantRecords=" + + grantRecords + + ", entities=" + + entities + + ", returnStatus=" + + getReturnStatus() + + '}'; + } + } + + /** + * This method should be used by the Polaris app to cache all grant records on a securable. + * + * @param callCtx call context + * @param securableCatalogId id of the catalog this securable belongs to + * @param securableId id of the securable + * @return the list of grants and the version of the grant records. We will return + * ENTITY_NOT_FOUND if the securable cannot be found + */ + @NotNull + LoadGrantsResult loadGrantsOnSecurable( + @NotNull PolarisCallContext callCtx, long securableCatalogId, long securableId); + + /** + * This method should be used by the Polaris app to load all grants made to a grantee, either a + * role or a principal. + * + * @param callCtx call context + * @param granteeCatalogId id of the catalog this grantee belongs to + * @param granteeId id of the grantee + * @return the list of grants and the version of the grant records. We will return NULL if the + * grantee does not exist + */ + @NotNull + LoadGrantsResult loadGrantsToGrantee( + PolarisCallContext callCtx, long granteeCatalogId, long granteeId); + + /** Result of a loadEntitiesChangeTracking call */ + class ChangeTrackingResult extends BaseResult { + + // null if not success. 
Else the list of change tracking versions
+    private final List<PolarisChangeTrackingVersions> changeTrackingVersions;
+
+    /**
+     * Constructor for an error
+     *
+     * @param errorCode error code, cannot be SUCCESS
+     * @param extraInformation extra information
+     */
+    public ChangeTrackingResult(
+        @NotNull PolarisMetaStoreManager.ReturnStatus errorCode,
+        @Nullable String extraInformation) {
+      super(errorCode, extraInformation);
+      this.changeTrackingVersions = null;
+    }
+
+    /**
+     * Constructor for success
+     *
+     * @param changeTrackingVersions change tracking versions
+     */
+    public ChangeTrackingResult(
+        @NotNull List<PolarisChangeTrackingVersions> changeTrackingVersions) {
+      super(ReturnStatus.SUCCESS);
+      this.changeTrackingVersions = changeTrackingVersions;
+    }
+
+    @JsonCreator
+    private ChangeTrackingResult(
+        @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus,
+        @JsonProperty("extraInformation") String extraInformation,
+        @JsonProperty("changeTrackingVersions")
+            List<PolarisChangeTrackingVersions> changeTrackingVersions) {
+      super(returnStatus, extraInformation);
+      this.changeTrackingVersions = changeTrackingVersions;
+    }
+
+    public List<PolarisChangeTrackingVersions> getChangeTrackingVersions() {
+      return changeTrackingVersions;
+    }
+  }
+
+  /**
+   * Load change tracking information for a set of entities in one single shot and return for each
+   * the version for the entity itself and the version associated to its grant records.
+   *
+   * @param callCtx call context
+   * @param entityIds list of catalog/entity pair ids for which we need to efficiently load the
+   *     version information, both entity version and grant records version.
+   * @return a list of version tracking information. Order in that returned list is the same as the
+   *     input list. Some elements might be NULL if the entity has been purged. Not expected to fail
+   */
+  @NotNull
+  ChangeTrackingResult loadEntitiesChangeTracking(
+      @NotNull PolarisCallContext callCtx, @NotNull List<PolarisEntityId> entityIds);
+
+  /**
+   * Load the entity from backend store. Will return NULL if the entity does not exist, i.e.
has + * been purged. The entity being loaded might have been dropped + * + * @param callCtx call context + * @param entityCatalogId id of the catalog for that entity + * @param entityId the id of the entity to load + */ + @NotNull + EntityResult loadEntity(@NotNull PolarisCallContext callCtx, long entityCatalogId, long entityId); + + /** + * Fetch a list of tasks to be completed. Tasks + * + * @param callCtx call context + * @param executorId executor id + * @param limit limit + * @return list of tasks to be completed + */ + @NotNull + EntitiesResult loadTasks(@NotNull PolarisCallContext callCtx, String executorId, int limit); + + /** Result of a getSubscopedCredsForEntity() call */ + class ScopedCredentialsResult extends BaseResult { + + // null if not success. Else, set of name/value pairs for the credentials + private final EnumMap credentials; + + /** + * Constructor for an error + * + * @param errorCode error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public ScopedCredentialsResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorCode, + @Nullable String extraInformation) { + super(errorCode, extraInformation); + this.credentials = null; + } + + /** + * Constructor for success + * + * @param credentials credentials + */ + public ScopedCredentialsResult( + @NotNull EnumMap credentials) { + super(ReturnStatus.SUCCESS); + this.credentials = credentials; + } + + @JsonCreator + private ScopedCredentialsResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") String extraInformation, + @JsonProperty("credentials") Map credentials) { + super(returnStatus, extraInformation); + this.credentials = new EnumMap<>(PolarisCredentialProperty.class); + if (credentials != null) { + credentials.forEach( + (k, v) -> this.credentials.put(PolarisCredentialProperty.valueOf(k), v)); + } + } + + public EnumMap getCredentials() { + return credentials; + } + } + + /** + * Get a sub-scoped 
credentials for an entity against the provided allowed read and write + * locations. + * + * @param callCtx the polaris call context + * @param catalogId the catalog id + * @param entityId the entity id + * @param allowListOperation whether to allow LIST operation on the allowedReadLocations and + * allowedWriteLocations + * @param allowedReadLocations a set of allowed to read locations + * @param allowedWriteLocations a set of allowed to write locations + * @return an enum map containing the scoped credentials + */ + @NotNull + ScopedCredentialsResult getSubscopedCredsForEntity( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations); + + /** Result of a validateAccessToLocations() call */ + class ValidateAccessResult extends BaseResult { + + // null if not success. Else, set of location/validationResult pairs for each location in the + // set + private final Map validateResult; + + /** + * Constructor for an error + * + * @param errorCode error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public ValidateAccessResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorCode, + @Nullable String extraInformation) { + super(errorCode, extraInformation); + this.validateResult = null; + } + + /** + * Constructor for success + * + * @param validateResult validate result + */ + public ValidateAccessResult(@NotNull Map validateResult) { + super(ReturnStatus.SUCCESS); + this.validateResult = validateResult; + } + + @JsonCreator + private ValidateAccessResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") String extraInformation, + @JsonProperty("validateResult") Map validateResult) { + super(returnStatus, extraInformation); + this.validateResult = validateResult; + } + + public Map getValidateResult() { + return this.validateResult; + } + } + + /** + * Validate 
whether the entity has access to the locations with the provided target operations + * + * @param callCtx the polaris call context + * @param catalogId the catalog id + * @param entityId the entity id + * @param actions a set of operation actions: READ/WRITE/LIST/DELETE/ALL + * @param locations a set of locations to verify + * @return a Map of location to validation result; a validate result value looks like this + *

+   * <pre>
+   * {
+   *   "status" : "failure",
+   *   "actions" : {
+   *     "READ" : {
+   *       "message" : "The specified file was not found",
+   *       "status" : "failure"
+   *     },
+   *     "DELETE" : {
+   *       "message" : "One or more objects could not be deleted (Status Code: 200; Error Code: null)",
+   *       "status" : "failure"
+   *     },
+   *     "LIST" : {
+   *       "status" : "success"
+   *     },
+   *     "WRITE" : {
+   *       "message" : "Access Denied (Status Code: 403; Error Code: AccessDenied)",
+   *       "status" : "failure"
+   *     }
+   *   },
+   *   "message" : "Some of the integration checks failed. Check the Snowflake documentation for more information."
+   * }
+   * </pre>
+ */ + @NotNull + ValidateAccessResult validateAccessToLocations( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + @NotNull Set actions, + @NotNull Set locations); + + /** + * Represents an entry in the cache. If we refresh a cached entry, we will only refresh the + * information which have changed, based on the version of the entity + */ + class CachedEntryResult extends BaseResult { + + // the entity itself if it was loaded + private final @Nullable PolarisBaseEntity entity; + + // version for the grant records, in case the entity was not loaded + private final int grantRecordsVersion; + + private final @Nullable List entityGrantRecords; + + /** + * Constructor for an error + * + * @param errorCode error code, cannot be SUCCESS + * @param extraInformation extra information + */ + public CachedEntryResult( + @NotNull PolarisMetaStoreManager.ReturnStatus errorCode, + @Nullable String extraInformation) { + super(errorCode, extraInformation); + this.entity = null; + this.entityGrantRecords = null; + this.grantRecordsVersion = 0; + } + + /** + * Constructor with success + * + * @param entity the entity for that cached entry + * @param grantRecordsVersion the version of the grant records + * @param entityGrantRecords the list of grant records + */ + public CachedEntryResult( + @Nullable PolarisBaseEntity entity, + int grantRecordsVersion, + @Nullable List entityGrantRecords) { + super(ReturnStatus.SUCCESS); + this.entity = entity; + this.entityGrantRecords = entityGrantRecords; + this.grantRecordsVersion = grantRecordsVersion; + } + + @JsonCreator + public CachedEntryResult( + @JsonProperty("returnStatus") @NotNull ReturnStatus returnStatus, + @JsonProperty("extraInformation") String extraInformation, + @Nullable @JsonProperty("entity") PolarisBaseEntity entity, + @JsonProperty("grantRecordsVersion") int grantRecordsVersion, + @Nullable @JsonProperty("entityGrantRecords") List entityGrantRecords) { + super(returnStatus, extraInformation); + 
this.entity = entity; + this.entityGrantRecords = entityGrantRecords; + this.grantRecordsVersion = grantRecordsVersion; + } + + public @Nullable PolarisBaseEntity getEntity() { + return entity; + } + + public int getGrantRecordsVersion() { + return grantRecordsVersion; + } + + public @Nullable List getEntityGrantRecords() { + return entityGrantRecords; + } + } + + /** + * Load a cached entry, i.e. an entity definition and associated grant records, from the backend + * store. The entity is identified by its id (entity catalog id and id). + * + *
+   * <p>
For entities that can be grantees, the associated grant records will include both the grant + * records for this entity as a grantee and for this entity as a securable. + * + * @param callCtx call context + * @param entityCatalogId id of the catalog for that entity + * @param entityId id of the entity + * @return cached entry for this entity. Status will be ENTITY_NOT_FOUND if the entity was not + * found + */ + @NotNull + PolarisMetaStoreManager.CachedEntryResult loadCachedEntryById( + @NotNull PolarisCallContext callCtx, long entityCatalogId, long entityId); + + /** + * Load a cached entry, i.e. an entity definition and associated grant records, from the backend + * store. The entity is identified by its name. Will return NULL if the entity does not exist, + * i.e. has been purged or dropped. + * + *
+   * <p>
For entities that can be grantees, the associated grant records will include both the grant + * records for this entity as a grantee and for this entity as a securable. + * + * @param callCtx call context + * @param entityCatalogId id of the catalog for that entity + * @param parentId the id of the parent of that entity + * @param entityType the type of this entity + * @param entityName the name of this entity + * @return cached entry for this entity. Status will be ENTITY_NOT_FOUND if the entity was not + * found + */ + @NotNull + PolarisMetaStoreManager.CachedEntryResult loadCachedEntryByName( + @NotNull PolarisCallContext callCtx, + long entityCatalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull String entityName); + + /** + * Refresh a cached entity from the backend store. Will return NULL if the entity does not exist, + * i.e. has been purged or dropped. Else, will determine what has changed based on the version + * information sent by the caller and will return only what has changed. + * + *
+   * <p>
For entities that can be grantees, the associated grant records will include both the grant + * records for this entity as a grantee and for this entity as a securable. + * + * @param callCtx call context + * @param entityType type of the entity whose cached entry we are refreshing + * @param entityCatalogId id of the catalog for that entity + * @param entityId the id of the entity to load + * @return cached entry for this entity. Status will be ENTITY_NOT_FOUND if the entity was not * + * found + */ + @NotNull + PolarisMetaStoreManager.CachedEntryResult refreshCachedEntity( + @NotNull PolarisCallContext callCtx, + int entityVersion, + int entityGrantRecordsVersion, + @NotNull PolarisEntityType entityType, + long entityCatalogId, + long entityId); +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreManagerImpl.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreManagerImpl.java new file mode 100644 index 0000000000..710fbed1d3 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreManagerImpl.java @@ -0,0 +1,2413 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.JsonMappingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import edu.umd.cs.findbugs.annotations.SuppressFBWarnings; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.AsyncTaskType; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisChangeTrackingVersions; +import io.polaris.core.entity.PolarisEntitiesActiveKey; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PolarisTaskConstants; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageActions; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import java.util.ArrayList; +import java.util.EnumMap; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.function.Function; +import java.util.stream.Collectors; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** + * Default implementation of the Polaris Meta Store Manager. 
Uses the underlying meta store to store + * and retrieve all Polaris metadata + */ +@SuppressFBWarnings("NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE") +public class PolarisMetaStoreManagerImpl implements PolarisMetaStoreManager { + + /** mapper, allows to serialize/deserialize properties to/from JSON */ + private static final ObjectMapper MAPPER = new ObjectMapper(); + + /** use synchronous drop for entities */ + private static final boolean USE_SYNCHRONOUS_DROP = true; + + /** + * Lookup an entity by its name + * + * @param callCtx call context + * @param ms meta store + * @param entityActiveKey lookup key + * @return the entity if it exists, null otherwise + */ + private @Nullable PolarisBaseEntity lookupEntityByName( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisEntitiesActiveKey entityActiveKey) { + // ensure that the entity exists + PolarisEntityActiveRecord entityActiveRecord = ms.lookupEntityActive(callCtx, entityActiveKey); + + // if not found, return null + if (entityActiveRecord == null) { + return null; + } + + // lookup the entity, should be there + PolarisBaseEntity entity = + ms.lookupEntity(callCtx, entityActiveRecord.getCatalogId(), entityActiveRecord.getId()); + callCtx + .getDiagServices() + .checkNotNull( + entity, "unexpected_not_found_entity", "entityActiveRecord={}", entityActiveRecord); + + // return it now + return entity; + } + + /** + * Write this entity to the meta store. 
+ * + * @param callCtx call context + * @param ms meta store in read/write mode + * @param entity entity to persist + * @param writeToActive if true, write it to active + */ + private void writeEntity( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisBaseEntity entity, + boolean writeToActive) { + ms.writeToEntities(callCtx, entity); + ms.writeToEntitiesChangeTracking(callCtx, entity); + + if (writeToActive) { + ms.writeToEntitiesActive(callCtx, entity); + } + } + + /** + * Persist the specified new entity. Persist will write this entity in the ENTITIES, in the + * ENTITIES_ACTIVE and finally in the ENTITIES_CHANGE_TRACKING tables + * + * @param callCtx call context + * @param ms meta store in read/write mode + * @param entity entity we need a DPO for + */ + private void persistNewEntity( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisBaseEntity entity) { + + // validate the entity type and subtype + callCtx.getDiagServices().checkNotNull(entity, "unexpected_null_entity"); + callCtx + .getDiagServices() + .checkNotNull(entity.getName(), "unexpected_null_name", "entity={}", entity); + PolarisEntityType type = PolarisEntityType.fromCode(entity.getTypeCode()); + callCtx.getDiagServices().checkNotNull(type, "unknown_type", "entity={}", entity); + PolarisEntitySubType subType = PolarisEntitySubType.fromCode(entity.getSubTypeCode()); + callCtx.getDiagServices().checkNotNull(subType, "unexpected_null_subType", "entity={}", entity); + callCtx + .getDiagServices() + .check( + subType.getParentType() == null || subType.getParentType() == type, + "invalid_subtype", + "type={} subType={}", + type, + subType); + + // if top-level entity, its parent should be the account + callCtx + .getDiagServices() + .check( + !type.isTopLevel() || entity.getParentId() == PolarisEntityConstants.getRootEntityId(), + "top_level_parent_should_be_account", + "entity={}", + entity); + + // id should not 
be null + callCtx + .getDiagServices() + .check( + entity.getId() != 0 || type == PolarisEntityType.ROOT, + "id_not_set", + "entity={}", + entity); + + // creation timestamp must be filled + callCtx.getDiagServices().check(entity.getCreateTimestamp() != 0, "null_create_timestamp"); + + // this is the first change + entity.setLastUpdateTimestamp(entity.getCreateTimestamp()); + + // set all other timestamps to 0 + entity.setDropTimestamp(0); + entity.setPurgeTimestamp(0); + entity.setToPurgeTimestamp(0); + + // write it + this.writeEntity(callCtx, ms, entity, true); + } + + /** + * Persist the specified entity after it has been changed. We will update the last changed time, + * increment the entity version and persist it back to the ENTITIES and ENTITIES_CHANGE_TRACKING + * tables + * + * @param callCtx call context + * @param ms meta store + * @param entity the entity which has been changed + * @return the entity with its version and lastUpdateTimestamp updated + */ + private @NotNull PolarisBaseEntity persistEntityAfterChange( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisBaseEntity entity) { + + // validate the entity type and subtype + callCtx.getDiagServices().checkNotNull(entity, "unexpected_null_entity"); + callCtx + .getDiagServices() + .checkNotNull(entity.getName(), "unexpected_null_name", "entity={}", entity); + PolarisEntityType type = entity.getType(); + callCtx.getDiagServices().checkNotNull(type, "unexpected_null_type", "entity={}", entity); + PolarisEntitySubType subType = entity.getSubType(); + callCtx.getDiagServices().checkNotNull(subType, "unexpected_null_subType", "entity={}", entity); + callCtx + .getDiagServices() + .check( + subType.getParentType() == null || subType.getParentType() == type, + "invalid_subtype", + "type={} subType={} entity={}", + type, + subType, + entity); + + // entity should not have been dropped + callCtx + .getDiagServices() + .check(entity.getDropTimestamp() == 0, 
"entity_dropped", "entity={}", entity); + + // creation timestamp must be filled + long createTimestamp = entity.getCreateTimestamp(); + callCtx + .getDiagServices() + .check(createTimestamp != 0, "null_create_timestamp", "entity={}", entity); + + // ensure time is not moving backward... + long now = System.currentTimeMillis(); + if (now < entity.getCreateTimestamp()) { + now = entity.getCreateTimestamp() + 1; + } + + // update last update timestamp and increment entity version + entity.setLastUpdateTimestamp(now); + entity.setEntityVersion(entity.getEntityVersion() + 1); + + // persist it to the various slices + this.writeEntity(callCtx, ms, entity, false); + + // return it + return entity; + } + + /** + * Drop this entity. This will: + * + *

+   * <pre>
+   *   - validate that the entity has not yet been dropped
+   *   - error out if this entity is undroppable
+   *   - if this is a catalog or a namespace, error out if the entity still has children
+   *   - we will fully delete the entity from persistence store
+   * </pre>
+ * + * @param callCtx call context + * @param ms meta store + * @param entity the entity being dropped + */ + private void dropEntity( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisBaseEntity entity) { + + // validate the entity type and subtype + callCtx.getDiagServices().checkNotNull(entity, "unexpected_null_dpo"); + callCtx.getDiagServices().checkNotNull(entity.getName(), "unexpected_null_name"); + + // creation timestamp must be filled + callCtx.getDiagServices().check(entity.getDropTimestamp() == 0, "already_dropped"); + + // delete it from active slice + ms.deleteFromEntitiesActive(callCtx, entity); + + // for now drop all entities synchronously + if (USE_SYNCHRONOUS_DROP) { + // use synchronous drop + + // delete ALL grant records to (if the entity is a grantee) and from that entity + final List grantsOnGrantee = + (entity.getType().isGrantee()) + ? ms.loadAllGrantRecordsOnGrantee(callCtx, entity.getCatalogId(), entity.getId()) + : List.of(); + final List grantsOnSecurable = + ms.loadAllGrantRecordsOnSecurable(callCtx, entity.getCatalogId(), entity.getId()); + ms.deleteAllEntityGrantRecords(callCtx, entity, grantsOnGrantee, grantsOnSecurable); + + // Now determine the set of entities on the other side of the grants we just removed. Grants + // from/to these entities has been removed, hence we need to update the grant version of + // each entity. Collect the id of each. 
+ Set entityIdsGrantChanged = new HashSet<>(); + grantsOnGrantee.forEach( + gr -> + entityIdsGrantChanged.add( + new PolarisEntityId(gr.getSecurableCatalogId(), gr.getSecurableId()))); + grantsOnSecurable.forEach( + gr -> + entityIdsGrantChanged.add( + new PolarisEntityId(gr.getGranteeCatalogId(), gr.getGranteeId()))); + + // Bump up the grant version of these entities + List entities = + ms.lookupEntities(callCtx, new ArrayList<>(entityIdsGrantChanged)); + for (PolarisBaseEntity entityGrantChanged : entities) { + entityGrantChanged.setGrantRecordsVersion(entityGrantChanged.getGrantRecordsVersion() + 1); + ms.writeToEntities(callCtx, entityGrantChanged); + ms.writeToEntitiesChangeTracking(callCtx, entityGrantChanged); + } + + // remove the entity being dropped now + ms.deleteFromEntities(callCtx, entity); + ms.deleteFromEntitiesChangeTracking(callCtx, entity); + + // if it is a principal, we also need to drop the secrets + if (entity.getType() == PolarisEntityType.PRINCIPAL) { + // get internal properties + Map properties = + this.deserializeProperties(callCtx, entity.getInternalProperties()); + + // get client_id + String clientId = properties.get(PolarisEntityConstants.getClientIdPropertyName()); + + // delete it from the secret slice + ms.deletePrincipalSecrets(callCtx, clientId, entity.getId()); + } + } else { + + // update the entity to indicate it has been dropped + final long now = System.currentTimeMillis(); + entity.setDropTimestamp(now); + entity.setLastUpdateTimestamp(now); + + // schedule purge + entity.setToPurgeTimestamp(now + PolarisEntityConstants.getRetentionTimeInMs()); + + // increment version + entity.setEntityVersion(entity.getEntityVersion() + 1); + + // write to the dropped slice and to purge slice + ms.writeToEntities(callCtx, entity); + ms.writeToEntitiesDropped(callCtx, entity); + ms.writeToEntitiesChangeTracking(callCtx, entity); + } + } + + /** + * Create and persist a new grant record. 
This will at the same time invalidate the grant records + * of the grantee and the securable if the grantee is a catalog role + * + * @param callCtx call context + * @param ms meta store in read/write mode + * @param securable securable + * @param grantee grantee, either a catalog role, a principal role or a principal + * @param priv privilege + * @return new grant record which was created and persisted + */ + private @NotNull PolarisGrantRecord persistNewGrantRecord( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisEntityCore securable, + @NotNull PolarisEntityCore grantee, + @NotNull PolarisPrivilege priv) { + + // validate non null arguments + callCtx.getDiagServices().checkNotNull(securable, "unexpected_null_securable"); + callCtx.getDiagServices().checkNotNull(grantee, "unexpected_null_grantee"); + callCtx.getDiagServices().checkNotNull(priv, "unexpected_null_priv"); + + // ensure that this entity is indeed a grantee like entity + callCtx + .getDiagServices() + .check(grantee.getType().isGrantee(), "entity_must_be_grantee", "entity={}", grantee); + + // create new grant record + PolarisGrantRecord grantRecord = + new PolarisGrantRecord( + securable.getCatalogId(), + securable.getId(), + grantee.getCatalogId(), + grantee.getId(), + priv.getCode()); + + // persist the new grant + ms.writeToGrantRecords(callCtx, grantRecord); + + // load the grantee (either a catalog/principal role or a principal) and increment its grants + // version + PolarisBaseEntity granteeEntity = + ms.lookupEntity(callCtx, grantee.getCatalogId(), grantee.getId()); + callCtx + .getDiagServices() + .checkNotNull(granteeEntity, "grantee_not_found", "grantee={}", grantee); + + // grants have changed, we need to bump-up the grants version + granteeEntity.setGrantRecordsVersion(granteeEntity.getGrantRecordsVersion() + 1); + this.writeEntity(callCtx, ms, granteeEntity, false); + + // we also need to invalidate the grants on that securable so that we 
can reload them. + // load the securable and increment its grants version + PolarisBaseEntity securableEntity = + ms.lookupEntity(callCtx, securable.getCatalogId(), securable.getId()); + callCtx + .getDiagServices() + .checkNotNull(securableEntity, "securable_not_found", "securable={}", securable); + + // grants have changed, we need to bump-up the grants version + securableEntity.setGrantRecordsVersion(securableEntity.getGrantRecordsVersion() + 1); + this.writeEntity(callCtx, ms, securableEntity, false); + + // done, return the new grant record + return grantRecord; + } + + /** + * Delete the specified grant record from the GRANT_RECORDS table. This will at the same time + * invalidate the grant records of the grantee and the securable if the grantee is a role + * + * @param callCtx call context + * @param ms meta store + * @param securable the securable entity + * @param grantee the grantee entity + * @param grantRecord the grant record to remove, which was read in the same transaction + */ + private void revokeGrantRecord( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisEntityCore securable, + @NotNull PolarisEntityCore grantee, + @NotNull PolarisGrantRecord grantRecord) { + + // validate securable + callCtx + .getDiagServices() + .check( + securable.getCatalogId() == grantRecord.getSecurableCatalogId() + && securable.getId() == grantRecord.getSecurableId(), + "securable_mismatch", + "securable={} grantRec={}", + securable, + grantRecord); + + // validate grantee + callCtx + .getDiagServices() + .check( + grantee.getCatalogId() == grantRecord.getGranteeCatalogId() + && grantee.getId() == grantRecord.getGranteeId(), + "grantee_mismatch", + "grantee={} grantRec={}", + grantee, + grantRecord); + + // ensure the grantee is really a grantee + callCtx + .getDiagServices() + .check(grantee.getType().isGrantee(), "not_a_grantee", "grantee={}", grantee); + + // remove that grant + ms.deleteFromGrantRecords(callCtx, 
grantRecord); + + // load the grantee and increment its grants version + PolarisBaseEntity refreshGrantee = + ms.lookupEntity(callCtx, grantee.getCatalogId(), grantee.getId()); + callCtx + .getDiagServices() + .checkNotNull( + refreshGrantee, "missing_grantee", "grantRecord={} grantee={}", grantRecord, grantee); + + // grants have changed, we need to bump-up the grants version + refreshGrantee.setGrantRecordsVersion(refreshGrantee.getGrantRecordsVersion() + 1); + this.writeEntity(callCtx, ms, refreshGrantee, false); + + // we also need to invalidate the grants on that securable so that we can reload them. + // load the securable and increment its grants version + PolarisBaseEntity refreshSecurable = + ms.lookupEntity(callCtx, securable.getCatalogId(), securable.getId()); + callCtx + .getDiagServices() + .checkNotNull( + refreshSecurable, + "missing_securable", + "grantRecord={} securable={}", + grantRecord, + securable); + + // grants have changed, we need to bump-up the grants version + refreshSecurable.setGrantRecordsVersion(refreshSecurable.getGrantRecordsVersion() + 1); + this.writeEntity(callCtx, ms, refreshSecurable, false); + } + + /** + * Create a new catalog. This not only creates the new catalog entity but also the initial admin + * role required to admin this catalog. + * + * @param callCtx call context + * @param ms meta store in read/write mode + * @param catalog the catalog entity to create + * @param integration the storage integration that should be attached to the catalog. If null, do + * nothing, otherwise persist the integration. + * @param principalRoles once the catalog has been created, list of principal roles to grant its + * catalog_admin role to. If no principal role is specified, we will grant the catalog_admin + * role of the newly created catalog to the service admin role. 
+ * @return the catalog we just created and its associated admin catalog role or error if we failed + * to + */ + private @NotNull CreateCatalogResult createCatalog( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisBaseEntity catalog, + @Nullable PolarisStorageIntegration integration, + @NotNull List principalRoles) { + // validate input + callCtx.getDiagServices().checkNotNull(catalog, "unexpected_null_catalog"); + + // check if that catalog has already been created + PolarisBaseEntity refreshCatalog = + ms.lookupEntity(callCtx, catalog.getCatalogId(), catalog.getId()); + + // if found, probably a retry, simply return the previously created catalog + if (refreshCatalog != null) { + // if found, ensure it is indeed a catalog + callCtx + .getDiagServices() + .check( + refreshCatalog.getTypeCode() == PolarisEntityType.CATALOG.getCode(), + "not_a_catalog", + "catalog={}", + catalog); + + // lookup catalog admin role, should exist + PolarisEntitiesActiveKey adminRoleKey = + new PolarisEntitiesActiveKey( + refreshCatalog.getId(), + refreshCatalog.getId(), + PolarisEntityType.CATALOG_ROLE.getCode(), + PolarisEntityConstants.getNameOfCatalogAdminRole()); + PolarisBaseEntity catalogAdminRole = this.lookupEntityByName(callCtx, ms, adminRoleKey); + + // if found, ensure not null + callCtx + .getDiagServices() + .checkNotNull( + catalogAdminRole, "catalog_admin_role_not_found", "catalog={}", refreshCatalog); + + // done, return the existing catalog + return new CreateCatalogResult(refreshCatalog, catalogAdminRole); + } + + // check that a catalog with the same name does not exist already + PolarisEntitiesActiveKey catalogNameKey = + new PolarisEntitiesActiveKey( + PolarisEntityConstants.getNullId(), + PolarisEntityConstants.getRootEntityId(), + PolarisEntityType.CATALOG.getCode(), + catalog.getName()); + PolarisEntityActiveRecord otherCatalogRecord = ms.lookupEntityActive(callCtx, catalogNameKey); + + // if it exists, this is an 
error, the client should retry + if (otherCatalogRecord != null) { + return new CreateCatalogResult(ReturnStatus.ENTITY_ALREADY_EXISTS, null); + } + + ms.persistStorageIntegrationIfNeeded(callCtx, catalog, integration); + + // now create and persist new catalog entity + this.persistNewEntity(callCtx, ms, catalog); + + // create the catalog admin role for this new catalog + long adminRoleId = ms.generateNewId(callCtx); + PolarisBaseEntity adminRole = + new PolarisBaseEntity( + catalog.getId(), + adminRoleId, + PolarisEntityType.CATALOG_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + catalog.getId(), + PolarisEntityConstants.getNameOfCatalogAdminRole()); + this.persistNewEntity(callCtx, ms, adminRole); + + // grant the catalog admin role access-management on the catalog + this.persistNewGrantRecord( + callCtx, ms, catalog, adminRole, PolarisPrivilege.CATALOG_MANAGE_ACCESS); + + // grant the catalog admin role metadata-management on the catalog; this one + // is revocable + this.persistNewGrantRecord( + callCtx, ms, catalog, adminRole, PolarisPrivilege.CATALOG_MANAGE_METADATA); + + // immediately assign its catalog_admin role + if (principalRoles.isEmpty()) { + // lookup service admin role, should exist + PolarisEntitiesActiveKey serviceAdminRoleKey = + new PolarisEntitiesActiveKey( + PolarisEntityConstants.getNullId(), + PolarisEntityConstants.getRootEntityId(), + PolarisEntityType.PRINCIPAL_ROLE.getCode(), + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole()); + PolarisBaseEntity serviceAdminRole = + this.lookupEntityByName(callCtx, ms, serviceAdminRoleKey); + callCtx.getDiagServices().checkNotNull(serviceAdminRole, "missing_service_admin_role"); + this.persistNewGrantRecord( + callCtx, ms, adminRole, serviceAdminRole, PolarisPrivilege.CATALOG_ROLE_USAGE); + } else { + // grant to each principal role usage on its catalog_admin role + for (PolarisEntityCore principalRole : principalRoles) { + // validate not null and really a principal role + 
callCtx.getDiagServices().checkNotNull(principalRole, "null principal role"); + callCtx + .getDiagServices() + .check( + principalRole.getTypeCode() == PolarisEntityType.PRINCIPAL_ROLE.getCode(), + "not_principal_role", + "type={}", + principalRole.getType()); + + // grant usage on that catalog admin role to this principal + this.persistNewGrantRecord( + callCtx, ms, adminRole, principalRole, PolarisPrivilege.CATALOG_ROLE_USAGE); + } + } + + // success, return the two entities + return new CreateCatalogResult(catalog, adminRole); + } + + /** + * Bootstrap Polaris catalog service + * + * @param callCtx call context + * @param ms meta store in read/write mode + */ + private void bootstrapPolarisService( + @NotNull PolarisCallContext callCtx, @NotNull PolarisMetaStoreSession ms) { + + // cleanup everything, start from a blank slate + ms.deleteAll(callCtx); + + // Create a root container entity that can represent the securable for any top-level grants. + PolarisBaseEntity rootContainer = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + PolarisEntityConstants.getRootEntityId(), + PolarisEntityType.ROOT, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + PolarisEntityConstants.getRootContainerName()); + this.persistNewEntity(callCtx, ms, rootContainer); + + // Now bootstrap the service by creating the root principal and the service_admin principal + // role. The principal role will be granted to that root principal and the root catalog admin + // of the root catalog will be granted to that principal role. 
+ long rootPrincipalId = ms.generateNewId(callCtx); + PolarisBaseEntity rootPrincipal = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + rootPrincipalId, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + PolarisEntityConstants.getRootPrincipalName()); + + // create this principal + this.createPrincipal(callCtx, ms, rootPrincipal); + + // now create the account admin principal role + long serviceAdminPrincipalRoleId = ms.generateNewId(callCtx); + PolarisBaseEntity serviceAdminPrincipalRole = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + serviceAdminPrincipalRoleId, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole()); + this.persistNewEntity(callCtx, ms, serviceAdminPrincipalRole); + + // we also need to grant usage on the account-admin principal to the principal + this.persistNewGrantRecord( + callCtx, + ms, + serviceAdminPrincipalRole, + rootPrincipal, + PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + + // grant SERVICE_MANAGE_ACCESS on the rootContainer to the serviceAdminPrincipalRole + this.persistNewGrantRecord( + callCtx, + ms, + rootContainer, + serviceAdminPrincipalRole, + PolarisPrivilege.SERVICE_MANAGE_ACCESS); + } + + /** {@inheritDoc} */ + @Override + public @NotNull BaseResult bootstrapPolarisService(@NotNull PolarisCallContext callCtx) { + // get meta store we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // run operation in a read/write transaction + ms.runActionInTransaction(callCtx, () -> this.bootstrapPolarisService(callCtx, ms)); + + // all good + return new BaseResult(ReturnStatus.SUCCESS); + } + + /** + * See {@link #readEntityByName(PolarisCallContext, List, PolarisEntityType, PolarisEntitySubType, + * String)} + */ + private @NotNull PolarisMetaStoreManager.EntityResult readEntityByName( + @NotNull 
PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable List catalogPath, + @NotNull PolarisEntityType entityType, + @NotNull PolarisEntitySubType entitySubType, + @NotNull String name) { + // first resolve again the catalogPath to that entity + PolarisEntityResolver resolver = new PolarisEntityResolver(callCtx, ms, catalogPath); + + // return if we failed to resolve + if (resolver.isFailure()) { + return new EntityResult(ReturnStatus.CATALOG_PATH_CANNOT_BE_RESOLVED, null); + } + + // now look up the entity by name + PolarisEntitiesActiveKey entityActiveKey = + new PolarisEntitiesActiveKey( + resolver.getCatalogIdOrNull(), resolver.getParentId(), entityType.getCode(), name); + PolarisBaseEntity entity = this.lookupEntityByName(callCtx, ms, entityActiveKey); + + // if found, check if subType really matches + if (entity != null + && entitySubType != PolarisEntitySubType.ANY_SUBTYPE + && entity.getSubTypeCode() != entitySubType.getCode()) { + entity = null; + } + + // success, return what we found + return (entity == null) + ?
new EntityResult(ReturnStatus.ENTITY_NOT_FOUND, null) + : new EntityResult(entity); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PolarisMetaStoreManager.EntityResult readEntityByName( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisEntityType entityType, + @NotNull PolarisEntitySubType entitySubType, + @NotNull String name) { + // get meta store we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // run operation in a read transaction + return ms.runInReadTransaction( + callCtx, () -> readEntityByName(callCtx, ms, catalogPath, entityType, entitySubType, name)); + } + + /** + * See {@link #listEntities(PolarisCallContext, List, PolarisEntityType, PolarisEntitySubType)} + */ + private @NotNull ListEntitiesResult listEntities( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable List catalogPath, + @NotNull PolarisEntityType entityType, + @NotNull PolarisEntitySubType entitySubType) { + // first resolve again the catalogPath to that entity + PolarisEntityResolver resolver = new PolarisEntityResolver(callCtx, ms, catalogPath); + + // return if we failed to resolve + if (resolver.isFailure()) { + return new ListEntitiesResult(ReturnStatus.CATALOG_PATH_CANNOT_BE_RESOLVED, null); + } + + // return list of active entities + List returnedEntities = + ms.listActiveEntities( + callCtx, resolver.getCatalogIdOrNull(), resolver.getParentId(), entityType); + + // prune the returned list to keep only entities matching the entity subtype + if (entitySubType != PolarisEntitySubType.ANY_SUBTYPE) { + returnedEntities = + returnedEntities.stream() + .filter(rec -> rec.getSubTypeCode() == entitySubType.getCode()) + .collect(Collectors.toList()); + } + + // done + return new ListEntitiesResult(returnedEntities); + } + + /** {@inheritDoc} */ + @Override + public @NotNull ListEntitiesResult listEntities( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull
PolarisEntityType entityType, + @NotNull PolarisEntitySubType entitySubType) { + // get meta store we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // run operation in a read transaction + return ms.runInReadTransaction( + callCtx, () -> listEntities(callCtx, ms, catalogPath, entityType, entitySubType)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull GenerateEntityIdResult generateNewEntityId(@NotNull PolarisCallContext callCtx) { + // get meta store we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + return new GenerateEntityIdResult(ms.generateNewId(callCtx)); + } + + /** + * Given the internal properties as a map of key/value pairs, serialize them to a String + * + * @param callCtx the polaris call context + * @param properties a map of key/value pairs + * @return a String, the JSON representation of the map + */ + public String serializeProperties(PolarisCallContext callCtx, Map properties) { + + String jsonString = null; + try { + // Serialize the map to a JSON string + jsonString = MAPPER.writeValueAsString(properties); + } catch (JsonProcessingException ex) { + callCtx.getDiagServices().fail("got_json_processing_exception", "ex={}", ex); + } + + return jsonString; + } + + /** + * Given the serialized properties, deserialize those to a Map + * + * @param callCtx the polaris call context + * @param properties a JSON string representing the set of properties + * @return a Map of key/value pairs + */ + public Map deserializeProperties(PolarisCallContext callCtx, String properties) { + + Map retProperties = null; + try { + // Deserialize the JSON string to a Map + retProperties = MAPPER.readValue(properties, new TypeReference<>() {}); + } catch (JsonMappingException ex) { + callCtx.getDiagServices().fail("got_json_mapping_exception", "ex={}", ex); + } catch (JsonProcessingException ex) { + callCtx.getDiagServices().fail("got_json_processing_exception", "ex={}", ex); + } + + return retProperties; + } + + /** See {@link #createPrincipal(PolarisCallContext, PolarisBaseEntity)} */ + private @NotNull
CreatePrincipalResult createPrincipal( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisBaseEntity principal) { + // validate input + callCtx.getDiagServices().checkNotNull(principal, "unexpected_null_principal"); + + // check if that principal has already been created + PolarisBaseEntity refreshPrincipal = + ms.lookupEntity(callCtx, principal.getCatalogId(), principal.getId()); + + // if found, probably a retry, simply return the previously created principal + if (refreshPrincipal != null) { + // if found, ensure it is indeed a principal + callCtx + .getDiagServices() + .check( + refreshPrincipal.getTypeCode() == PolarisEntityType.PRINCIPAL.getCode(), + "not_a_principal", + "principal={}", + refreshPrincipal); + + // get internal properties + Map properties = + this.deserializeProperties(callCtx, refreshPrincipal.getInternalProperties()); + + // get client_id + String clientId = properties.get(PolarisEntityConstants.getClientIdPropertyName()); + + // should not be null + callCtx + .getDiagServices() + .checkNotNull( + clientId, + "null_client_id", + "properties={}", + refreshPrincipal.getInternalProperties()); + // ensure non null and non empty + callCtx + .getDiagServices() + .check( + !clientId.isEmpty(), + "empty_client_id", + "properties={}", + refreshPrincipal.getInternalProperties()); + + // get the main and secondary secrets for that client + PolarisPrincipalSecrets principalSecrets = ms.loadPrincipalSecrets(callCtx, clientId); + + // should not be null + callCtx + .getDiagServices() + .checkNotNull( + principalSecrets, + "missing_principal_secrets", + "clientId={} principal={}", + clientId, + refreshPrincipal); + + // done, return the previously created principal + return new CreatePrincipalResult(refreshPrincipal, principalSecrets); + } + + // check that a principal with the same name does not exist already + PolarisEntitiesActiveKey principalNameKey = + new PolarisEntitiesActiveKey( + PolarisEntityConstants.getNullId(),
PolarisEntityConstants.getRootEntityId(), + PolarisEntityType.PRINCIPAL.getCode(), + principal.getName()); + PolarisEntityActiveRecord otherPrincipalRecord = + ms.lookupEntityActive(callCtx, principalNameKey); + + // if it exists, this is an error, the client should retry + if (otherPrincipalRecord != null) { + return new CreatePrincipalResult(ReturnStatus.ENTITY_ALREADY_EXISTS, null); + } + + // generate new secrets for this principal + PolarisPrincipalSecrets principalSecrets = + ms.generateNewPrincipalSecrets(callCtx, principal.getName(), principal.getId()); + + // generate properties + Map internalProperties = getInternalPropertyMap(callCtx, principal); + internalProperties.put( + PolarisEntityConstants.getClientIdPropertyName(), principalSecrets.getPrincipalClientId()); + + // remember client id + principal.setInternalProperties(this.serializeProperties(callCtx, internalProperties)); + + // now create and persist the new principal entity + this.persistNewEntity(callCtx, ms, principal); + + // success, return the two entities + return new CreatePrincipalResult(principal, principalSecrets); + } + + /** {@inheritDoc} */ + @Override + public @NotNull CreatePrincipalResult createPrincipal( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity principal) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction(callCtx, () -> this.createPrincipal(callCtx, ms, principal)); + } + + /** See {@link #loadPrincipalSecrets(PolarisCallContext, String)} */ + private @Nullable PolarisPrincipalSecrets loadPrincipalSecrets( + @NotNull PolarisCallContext callCtx, PolarisMetaStoreSession ms, @NotNull String clientId) { + return ms.loadPrincipalSecrets(callCtx, clientId); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PrincipalSecretsResult loadPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId) { + // get metastore we should be
using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + PolarisPrincipalSecrets secrets = + ms.runInTransaction(callCtx, () -> this.loadPrincipalSecrets(callCtx, ms, clientId)); + + return (secrets == null) + ? new PrincipalSecretsResult(ReturnStatus.ENTITY_NOT_FOUND, null) + : new PrincipalSecretsResult(secrets); + } + + /** See {@link #rotatePrincipalSecrets(PolarisCallContext, String, long, String, boolean)} */ + private @Nullable PolarisPrincipalSecrets rotatePrincipalSecrets( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull String clientId, + long principalId, + @NotNull String masterSecret, + boolean reset) { + // if not found, the principal must have been dropped + EntityResult loadEntityResult = + loadEntity(callCtx, ms, PolarisEntityConstants.getNullId(), principalId); + if (loadEntityResult.getReturnStatus() != ReturnStatus.SUCCESS) { + return null; + } + + PolarisBaseEntity principal = loadEntityResult.getEntity(); + Map internalProps = + PolarisObjectMapperUtil.deserializeProperties( + callCtx, + principal.getInternalProperties() == null ?
"{}" : principal.getInternalProperties()); + + boolean doReset = + reset + || internalProps.get( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE) + != null; + PolarisPrincipalSecrets secrets = + ms.rotatePrincipalSecrets(callCtx, clientId, principalId, masterSecret, doReset); + + if (reset + && !internalProps.containsKey( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE)) { + internalProps.put( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE, "true"); + principal.setInternalProperties( + PolarisObjectMapperUtil.serializeProperties(callCtx, internalProps)); + principal.setEntityVersion(principal.getEntityVersion() + 1); + writeEntity(callCtx, ms, principal, true); + } else if (internalProps.containsKey( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE)) { + internalProps.remove(PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE); + principal.setInternalProperties( + PolarisObjectMapperUtil.serializeProperties(callCtx, internalProps)); + principal.setEntityVersion(principal.getEntityVersion() + 1); + writeEntity(callCtx, ms, principal, true); + } + return secrets; + } + + /** {@inheritDoc} */ + @Override + public @NotNull PrincipalSecretsResult rotatePrincipalSecrets( + @NotNull PolarisCallContext callCtx, + @NotNull String clientId, + long principalId, + @NotNull String mainSecret, + boolean reset) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + PolarisPrincipalSecrets secrets = + ms.runInTransaction( + callCtx, + () -> + this.rotatePrincipalSecrets(callCtx, ms, clientId, principalId, mainSecret, reset)); + + return (secrets == null) + ? 
new PrincipalSecretsResult(ReturnStatus.ENTITY_NOT_FOUND, null) + : new PrincipalSecretsResult(secrets); + } + + /** {@inheritDoc} */ + @Override + public @NotNull CreateCatalogResult createCatalog( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisBaseEntity catalog, + @NotNull List principalRoles) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + Map internalProp = getInternalPropertyMap(callCtx, catalog); + String integrationIdentifierOrId = + internalProp.get(PolarisEntityConstants.getStorageIntegrationIdentifierPropertyName()); + String storageConfigInfoStr = + internalProp.get(PolarisEntityConstants.getStorageConfigInfoPropertyName()); + PolarisStorageIntegration integration; + // storageConfigInfo's presence is needed to create a storage integration + // and the catalog should not have an internal property of storage identifier or id yet + if (storageConfigInfoStr != null && integrationIdentifierOrId == null) { + integration = + ms.createStorageIntegration( + callCtx, + catalog.getCatalogId(), + catalog.getId(), + PolarisStorageConfigurationInfo.deserialize( + callCtx.getDiagServices(), storageConfigInfoStr)); + } else { + integration = null; + } + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, () -> this.createCatalog(callCtx, ms, catalog, integration, principalRoles)); + } + + /** {@link #createEntityIfNotExists(PolarisCallContext, List, PolarisBaseEntity)} */ + private @NotNull EntityResult createEntityIfNotExists( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable List catalogPath, + @NotNull PolarisBaseEntity entity) { + + // entity cannot be null + callCtx.getDiagServices().checkNotNull(entity, "unexpected_null_entity"); + + // entity name must be specified + callCtx.getDiagServices().checkNotNull(entity.getName(), "unexpected_null_entity_name"); + + // first, check if the entity has already been created, in which case 
we will simply return it + PolarisBaseEntity entityFound = ms.lookupEntity(callCtx, entity.getCatalogId(), entity.getId()); + if (entityFound != null) { + // probably the client retried, simply return it + return new EntityResult(entityFound); + } + + // next, resolve the catalogPath + PolarisEntityResolver resolver = new PolarisEntityResolver(callCtx, ms, catalogPath); + + // return if we failed to resolve + if (resolver.isFailure()) { + return new EntityResult(ReturnStatus.CATALOG_PATH_CANNOT_BE_RESOLVED, null); + } + + // check that no entity with the same name already exists; if one does, this is an error + PolarisEntitiesActiveKey entityActiveKey = + new PolarisEntitiesActiveKey( + entity.getCatalogId(), + entity.getParentId(), + entity.getType().getCode(), + entity.getName()); + PolarisEntityActiveRecord entityActiveRecord = ms.lookupEntityActive(callCtx, entityActiveKey); + if (entityActiveRecord != null) { + return new EntityResult( + ReturnStatus.ENTITY_ALREADY_EXISTS, entityActiveRecord.getSubTypeCode()); + } + + // persist that new entity + this.persistNewEntity(callCtx, ms, entity); + + // done, return that newly created entity + return new EntityResult(entity); + } + + /** {@inheritDoc} */ + @Override + public @NotNull EntityResult createEntityIfNotExists( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisBaseEntity entity) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, () -> this.createEntityIfNotExists(callCtx, ms, catalogPath, entity)); + } + + @Override + public @NotNull EntitiesResult createEntitiesIfNotExist( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull List entities) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return
ms.runInTransaction( + callCtx, + () -> { + List createdEntities = new ArrayList<>(entities.size()); + for (PolarisBaseEntity entity : entities) { + EntityResult entityCreateResult = + createEntityIfNotExists(callCtx, ms, catalogPath, entity); + // abort everything if error + if (entityCreateResult.getReturnStatus() != ReturnStatus.SUCCESS) { + ms.rollback(); + return new EntitiesResult( + entityCreateResult.getReturnStatus(), entityCreateResult.getExtraInformation()); + } + createdEntities.add(entityCreateResult.getEntity()); + } + return new EntitiesResult(createdEntities); + }); + } + + /** + * See {@link #updateEntityPropertiesIfNotChanged(PolarisCallContext, List, PolarisBaseEntity)} + */ + private @NotNull EntityResult updateEntityPropertiesIfNotChanged( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable List catalogPath, + @NotNull PolarisBaseEntity entity) { + // entity cannot be null + callCtx.getDiagServices().checkNotNull(entity, "unexpected_null_entity"); + + // re-resolve everything including that entity + PolarisEntityResolver resolver = new PolarisEntityResolver(callCtx, ms, catalogPath, entity); + + // if resolution failed, return an error + if (resolver.isFailure()) { + return new EntityResult(ReturnStatus.CATALOG_PATH_CANNOT_BE_RESOLVED, null); + } + + // lookup the entity, cannot be null + PolarisBaseEntity entityRefreshed = + ms.lookupEntity(callCtx, entity.getCatalogId(), entity.getId()); + callCtx + .getDiagServices() + .checkNotNull(entityRefreshed, "unexpected_entity_not_found", "entity={}", entity); + + // check that the version of the entity has not changed at all to avoid concurrent updates + if (entityRefreshed.getEntityVersion() != entity.getEntityVersion()) { + return new EntityResult(ReturnStatus.TARGET_ENTITY_CONCURRENTLY_MODIFIED, null); + } + + // update the two properties + entityRefreshed.setInternalProperties(entity.getInternalProperties()); + 
entityRefreshed.setProperties(entity.getProperties()); + + // persist this entity after changing it. This will update the version and update the last + // updated time. Because the entity version is changed, we will update the change tracking table + PolarisBaseEntity persistedEntity = this.persistEntityAfterChange(callCtx, ms, entityRefreshed); + return new EntityResult(persistedEntity); + } + + /** {@inheritDoc} */ + @Override + public @NotNull EntityResult updateEntityPropertiesIfNotChanged( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisBaseEntity entity) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, () -> this.updateEntityPropertiesIfNotChanged(callCtx, ms, catalogPath, entity)); + } + + /** See {@link #updateEntitiesPropertiesIfNotChanged(PolarisCallContext, List)} */ + private @NotNull EntitiesResult updateEntitiesPropertiesIfNotChanged( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull List entities) { + // ensure that the entities list is not null + callCtx.getDiagServices().checkNotNull(entities, "unexpected_null_entities"); + + // list of all updated entities + List updatedEntities = new ArrayList<>(entities.size()); + + // iterate over the list and update each, one at a time + for (EntityWithPath entityWithPath : entities) { + // update that entity, abort if it fails + EntityResult updatedEntityResult = + this.updateEntityPropertiesIfNotChanged( + callCtx, ms, entityWithPath.getCatalogPath(), entityWithPath.getEntity()); + + // if failed, rollback and return the last error + if (updatedEntityResult.getReturnStatus() != ReturnStatus.SUCCESS) { + ms.rollback(); + return new EntitiesResult( + updatedEntityResult.getReturnStatus(), updatedEntityResult.getExtraInformation()); + } + + // one more was updated + 
updatedEntities.add(updatedEntityResult.getEntity()); + } + + // good, all success + return new EntitiesResult(updatedEntities); + } + + /** {@inheritDoc} */ + @Override + public @NotNull EntitiesResult updateEntitiesPropertiesIfNotChanged( + @NotNull PolarisCallContext callCtx, @NotNull List entities) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, () -> this.updateEntitiesPropertiesIfNotChanged(callCtx, ms, entities)); + } + + /** + * See {@link PolarisMetaStoreManager#renameEntity(PolarisCallContext, List, PolarisEntityCore, + * List, PolarisEntity)} + */ + private @NotNull EntityResult renameEntity( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable List catalogPath, + @NotNull PolarisEntityCore entityToRename, + @Nullable List newCatalogPath, + @NotNull PolarisBaseEntity renamedEntity) { + + // entity and new name cannot be null + callCtx.getDiagServices().checkNotNull(entityToRename, "unexpected_null_entityToRename"); + callCtx.getDiagServices().checkNotNull(renamedEntity, "unexpected_null_renamedEntity"); + + // if a new catalog path is specified (i.e. 
re-parent operation), a catalog path should be + // specified too + callCtx + .getDiagServices() + .check( + (newCatalogPath == null) || (catalogPath != null), + "newCatalogPath_specified_without_catalogPath"); + + // null is shorthand for saying the path isn't changing + if (newCatalogPath == null) { + newCatalogPath = catalogPath; + } + + // re-resolve everything including that entity + PolarisEntityResolver resolver = + new PolarisEntityResolver(callCtx, ms, catalogPath, entityToRename); + + // if resolution failed, return an error + if (resolver.isFailure()) { + return new EntityResult(ReturnStatus.ENTITY_CANNOT_BE_RESOLVED, null); + } + + // find the entity to rename + PolarisBaseEntity refreshEntityToRename = + ms.lookupEntity(callCtx, entityToRename.getCatalogId(), entityToRename.getId()); + + // if this entity was not found, return failure. Not expected here because it was + // resolved successfully (see above) + if (refreshEntityToRename == null) { + return new EntityResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // check that the source entity has not changed since it was updated by the caller + if (refreshEntityToRename.getEntityVersion() != renamedEntity.getEntityVersion()) { + return new EntityResult(ReturnStatus.TARGET_ENTITY_CONCURRENTLY_MODIFIED, null); + } + + // ensure it can be renamed + if (refreshEntityToRename.cannotBeDroppedOrRenamed()) { + return new EntityResult(ReturnStatus.ENTITY_CANNOT_BE_RENAMED, null); + } + + // re-resolve the new catalog path if this entity is going to be moved + if (newCatalogPath != null) { + resolver = new PolarisEntityResolver(callCtx, ms, newCatalogPath); + + // if resolution failed, return an error + if (resolver.isFailure()) { + return new EntityResult(ReturnStatus.CATALOG_PATH_CANNOT_BE_RESOLVED, null); + } + } + + // ensure that no entity already exists at the destination + PolarisEntitiesActiveKey entityActiveKey = + new PolarisEntitiesActiveKey( + resolver.getCatalogIdOrNull(), + resolver.getParentId(), + 
refreshEntityToRename.getTypeCode(), + renamedEntity.getName()); + // if this entity already exists, this is an error + PolarisEntityActiveRecord entityActiveRecord = ms.lookupEntityActive(callCtx, entityActiveKey); + if (entityActiveRecord != null) { + return new EntityResult( + ReturnStatus.ENTITY_ALREADY_EXISTS, entityActiveRecord.getSubTypeCode()); + } + + // all good, delete the existing entity from the active slice + ms.deleteFromEntitiesActive(callCtx, refreshEntityToRename); + + // change its name now + refreshEntityToRename.setName(renamedEntity.getName()); + refreshEntityToRename.setProperties(renamedEntity.getProperties()); + refreshEntityToRename.setInternalProperties(renamedEntity.getInternalProperties()); + + // re-parent if a new catalog path was specified + if (newCatalogPath != null) { + refreshEntityToRename.setParentId(resolver.getParentId()); + } + + // persist back to the active slice with its new name and parent + ms.writeToEntitiesActive(callCtx, refreshEntityToRename); + + // persist the entity after change. This will update the lastUpdateTimestamp and bump up the + // version + PolarisBaseEntity renamedEntityToReturn = + this.persistEntityAfterChange(callCtx, ms, refreshEntityToRename); + return new EntityResult(renamedEntityToReturn); + } + + /** {@inheritDoc} */ + @Override + public @NotNull EntityResult renameEntity( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisEntityCore entityToRename, + @Nullable List newCatalogPath, + @NotNull PolarisEntity renamedEntity) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, + () -> + this.renameEntity( + callCtx, ms, catalogPath, entityToRename, newCatalogPath, renamedEntity)); + } + + /** + * See + * + *

{@link #dropEntityIfExists(PolarisCallContext, List, PolarisEntityCore, Map, boolean)} + */ + private @NotNull DropEntityResult dropEntityIfExists( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable List catalogPath, + @NotNull PolarisEntityCore entityToDrop, + @Nullable Map cleanupProperties, + boolean cleanup) { + // entity cannot be null + callCtx.getDiagServices().checkNotNull(entityToDrop, "unexpected_null_entity"); + + // re-resolve everything including that entity + PolarisEntityResolver resolver = + new PolarisEntityResolver(callCtx, ms, catalogPath, entityToDrop); + + // if resolution failed, return an error + if (resolver.isFailure()) { + return new DropEntityResult(ReturnStatus.CATALOG_PATH_CANNOT_BE_RESOLVED, null); + } + + // first find the entity to drop + PolarisBaseEntity refreshEntityToDrop = + ms.lookupEntity(callCtx, entityToDrop.getCatalogId(), entityToDrop.getId()); + + // if this entity was not found, return failure + if (refreshEntityToDrop == null) { + return new DropEntityResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // ensure that this entity is droppable + if (refreshEntityToDrop.cannotBeDroppedOrRenamed()) { + return new DropEntityResult(ReturnStatus.ENTITY_UNDROPPABLE, null); + } + + // check whether the entity has children; if so, it is an error.
This only applies to + // a namespace or a catalog + if (refreshEntityToDrop.getType() == PolarisEntityType.CATALOG) { + // the id of the catalog + long catalogId = refreshEntityToDrop.getId(); + + // if not all namespaces are dropped, we cannot drop this catalog + if (ms.hasChildren(callCtx, PolarisEntityType.NAMESPACE, catalogId, catalogId)) { + return new DropEntityResult(ReturnStatus.NAMESPACE_NOT_EMPTY, null); + } + + // get the list of catalog roles, at most 2 + List catalogRoles = + ms.listActiveEntities( + callCtx, + catalogId, + catalogId, + PolarisEntityType.CATALOG_ROLE, + 2, + entity -> true, + Function.identity()); + + // if we have 2, we cannot drop the catalog. If only one is left, it should be the admin role + if (catalogRoles.size() > 1) { + return new DropEntityResult(ReturnStatus.CATALOG_NOT_EMPTY, null); + } + + // if 1, drop the last catalog role. Should be the catalog admin role but don't validate this + if (!catalogRoles.isEmpty()) { + // drop the last catalog role in that catalog, should be the admin catalog role + this.dropEntity(callCtx, ms, catalogRoles.get(0)); + } + } else if (refreshEntityToDrop.getType() == PolarisEntityType.NAMESPACE) { + if (ms.hasChildren( + callCtx, null, refreshEntityToDrop.getCatalogId(), refreshEntityToDrop.getId())) { + return new DropEntityResult(ReturnStatus.NAMESPACE_NOT_EMPTY, null); + } + } + + // simply delete that entity. Will be removed from entities_active, added to the + // entities_dropped and its version will be changed. + this.dropEntity(callCtx, ms, refreshEntityToDrop); + + // if cleanup, schedule a cleanup task for the entity. Do this here, so that drop and scheduling + // the cleanup task is transactional.
Otherwise, we'll be unable to schedule the cleanup task + // later + if (cleanup) { + PolarisBaseEntity taskEntity = + new PolarisEntity.Builder() + .setId(generateNewEntityId(callCtx).getId()) + .setCatalogId(0L) + .setName("entityCleanup_" + entityToDrop.getId()) + .setType(PolarisEntityType.TASK) + .setSubType(PolarisEntitySubType.NULL_SUBTYPE) + .setCreateTimestamp(callCtx.getClock().millis()) + .build(); + + Map properties = new HashMap<>(); + properties.put( + PolarisTaskConstants.TASK_TYPE, + String.valueOf(AsyncTaskType.ENTITY_CLEANUP_SCHEDULER.typeCode())); + properties.put("data", PolarisObjectMapperUtil.serialize(callCtx, refreshEntityToDrop)); + taskEntity.setProperties(PolarisObjectMapperUtil.serializeProperties(callCtx, properties)); + if (cleanupProperties != null) { + taskEntity.setInternalProperties( + PolarisObjectMapperUtil.serializeProperties(callCtx, cleanupProperties)); + } + createEntityIfNotExists(callCtx, ms, null, taskEntity); + return new DropEntityResult(taskEntity.getId()); + } + + // done, return success + return new DropEntityResult(); + } + + /** {@inheritDoc} */ + @Override + public @NotNull DropEntityResult dropEntityIfExists( + @NotNull PolarisCallContext callCtx, + @Nullable List catalogPath, + @NotNull PolarisEntityCore entityToDrop, + @Nullable Map cleanupProperties, + boolean cleanup) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, + () -> + this.dropEntityIfExists( + callCtx, ms, catalogPath, entityToDrop, cleanupProperties, cleanup)); + } + + /** + * Resolve the arguments of granting/revoking a usage grant between a role (catalog or principal + * role) and a grantee (either a principal role or a principal) + * + * @param callCtx call context + * @param ms meta store in read/write mode + * @param catalog if the role is a catalog role, the caller needs to pass-in the catalog entity + * 
which was used to resolve that role. Else null. + * @param role the role, either a catalog or principal role + * @param grantee the grantee + * @return resolver for the specified entities + */ + private @NotNull PolarisEntityResolver resolveRoleToGranteeUsageGrant( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable PolarisEntityCore catalog, + @NotNull PolarisEntityCore role, + @NotNull PolarisEntityCore grantee) { + + // validate the grantee input + callCtx.getDiagServices().checkNotNull(grantee, "unexpected_null_grantee"); + callCtx + .getDiagServices() + .check(grantee.getType().isGrantee(), "not_a_grantee", "grantee={}", grantee); + + // validate role + callCtx.getDiagServices().checkNotNull(role, "unexpected_null_role"); + + // role should be a catalog or a principal role + boolean isCatalogRole = role.getTypeCode() == PolarisEntityType.CATALOG_ROLE.getCode(); + boolean isPrincipalRole = role.getTypeCode() == PolarisEntityType.PRINCIPAL_ROLE.getCode(); + callCtx.getDiagServices().check(isCatalogRole || isPrincipalRole, "not_a_role"); + + // if the role is a catalog role, ensure a catalog is specified and + // vice-versa, catalog should be null if the role is a principal role + callCtx + .getDiagServices() + .check( + (catalog == null && isPrincipalRole) || (catalog != null && isCatalogRole), + "catalog_mismatch", + "catalog={} role={}", + catalog, + role); + + // re-resolve now all these entities + List otherTopLevelEntities = new ArrayList<>(2); + otherTopLevelEntities.add(role); + otherTopLevelEntities.add(grantee); + + // ensure these entities have not changed + return new PolarisEntityResolver( + callCtx, ms, catalog != null ? 
List.of(catalog) : null, null, otherTopLevelEntities); + } + + /** + * Helper function to resolve the securable to role grant privilege + * + * @param grantee resolved grantee + * @param catalogPath path to that entity, cannot be null or empty if securable has a catalogId + * @param securable securable entity, must have been resolved by the client + * @return a resolver for the role, the catalog path and the securable + */ + private PolarisEntityResolver resolveSecurableToRoleGrant( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisEntityCore grantee, + @Nullable List catalogPath, + @NotNull PolarisEntityCore securable) { + // validate the grantee input + callCtx.getDiagServices().checkNotNull(grantee, "unexpected_null_grantee"); + callCtx + .getDiagServices() + .check(grantee.getType().isGrantee(), "not_grantee_type", "grantee={}", grantee); + + // securable must be supplied + callCtx.getDiagServices().checkNotNull(securable, "unexpected_null_securable"); + if (securable.getCatalogId() > 0) { + // catalogPath must be supplied if the securable has a catalogId + callCtx.getDiagServices().checkNotNull(catalogPath, "unexpected_null_catalogPath"); + } + + // re-resolve now all these entities + return new PolarisEntityResolver(callCtx, ms, catalogPath, securable, List.of(grantee)); + } + + /** + * See {@link #grantUsageOnRoleToGrantee(PolarisCallContext, PolarisEntityCore, PolarisEntityCore, + * PolarisEntityCore)} + */ + private @NotNull PrivilegeResult grantUsageOnRoleToGrantee( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable PolarisEntityCore catalog, + @NotNull PolarisEntityCore role, + @NotNull PolarisEntityCore grantee) { + + // ensure these entities have not changed + PolarisEntityResolver resolver = + this.resolveRoleToGranteeUsageGrant(callCtx, ms, catalog, role, grantee); + + // if failure to resolve, let the caller know + if (resolver.isFailure()) { + return new
PrivilegeResult(ReturnStatus.ENTITY_CANNOT_BE_RESOLVED, null); + } + + // the usage privilege to grant + PolarisPrivilege usagePriv = + (grantee.getType() == PolarisEntityType.PRINCIPAL_ROLE) + ? PolarisPrivilege.CATALOG_ROLE_USAGE + : PolarisPrivilege.PRINCIPAL_ROLE_USAGE; + + // grant usage on this role to this principal + callCtx + .getDiagServices() + .check(grantee.getType().isGrantee(), "not_a_grantee", "grantee={}", grantee); + PolarisGrantRecord grantRecord = + this.persistNewGrantRecord(callCtx, ms, role, grantee, usagePriv); + return new PrivilegeResult(grantRecord); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PrivilegeResult grantUsageOnRoleToGrantee( + @NotNull PolarisCallContext callCtx, + @Nullable PolarisEntityCore catalog, + @NotNull PolarisEntityCore role, + @NotNull PolarisEntityCore grantee) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, () -> this.grantUsageOnRoleToGrantee(callCtx, ms, catalog, role, grantee)); + } + + /** + * See {@link #revokeUsageOnRoleFromGrantee(PolarisCallContext, PolarisEntityCore, + * PolarisEntityCore, PolarisEntityCore)} + */ + private @NotNull PrivilegeResult revokeUsageOnRoleFromGrantee( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @Nullable PolarisEntityCore catalog, + @NotNull PolarisEntityCore role, + @NotNull PolarisEntityCore grantee) { + + // ensure these entities have not changed + PolarisEntityResolver resolver = + this.resolveRoleToGranteeUsageGrant(callCtx, ms, catalog, role, grantee); + + // if failure to resolve, let the caller know + if (resolver.isFailure()) { + return new PrivilegeResult(ReturnStatus.ENTITY_CANNOT_BE_RESOLVED, null); + } + + // the usage privilege to revoke + PolarisPrivilege usagePriv = + (grantee.getType() == PolarisEntityType.PRINCIPAL_ROLE) + ? 
PolarisPrivilege.CATALOG_ROLE_USAGE + : PolarisPrivilege.PRINCIPAL_ROLE_USAGE; + + // first, ensure that this privilege has been granted + PolarisGrantRecord grantRecord = + ms.lookupGrantRecord( + callCtx, + role.getCatalogId(), + role.getId(), + grantee.getCatalogId(), + grantee.getId(), + usagePriv.getCode()); + + // this is not a really bad error, no-op really + if (grantRecord == null) { + return new PrivilegeResult(ReturnStatus.GRANT_NOT_FOUND, null); + } + + // revoke usage on the role from the grantee + this.revokeGrantRecord(callCtx, ms, role, grantee, grantRecord); + + return new PrivilegeResult(grantRecord); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PrivilegeResult revokeUsageOnRoleFromGrantee( + @NotNull PolarisCallContext callCtx, + @Nullable PolarisEntityCore catalog, + @NotNull PolarisEntityCore role, + @NotNull PolarisEntityCore grantee) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, () -> this.revokeUsageOnRoleFromGrantee(callCtx, ms, catalog, role, grantee)); + } + + /** + * See {@link #grantPrivilegeOnSecurableToRole(PolarisCallContext, PolarisEntityCore, List, + * PolarisEntityCore, PolarisPrivilege)} + */ + private @NotNull PrivilegeResult grantPrivilegeOnSecurableToRole( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisEntityCore grantee, + @Nullable List<PolarisEntityCore> catalogPath, + @NotNull PolarisEntityCore securable, + @NotNull PolarisPrivilege priv) { + + // re-resolve now all these entities + PolarisEntityResolver resolver = + this.resolveSecurableToRoleGrant(callCtx, ms, grantee, catalogPath, securable); + + // if failure to resolve, let the caller know + if (resolver.isFailure()) { + return new PrivilegeResult(ReturnStatus.ENTITY_CANNOT_BE_RESOLVED, null); + } + + // grant specified privilege on this securable to this role and return the grant +
PolarisGrantRecord grantRecord = + this.persistNewGrantRecord(callCtx, ms, securable, grantee, priv); + return new PrivilegeResult(grantRecord); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PrivilegeResult grantPrivilegeOnSecurableToRole( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisEntityCore grantee, + @Nullable List<PolarisEntityCore> catalogPath, + @NotNull PolarisEntityCore securable, + @NotNull PolarisPrivilege privilege) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, + () -> + this.grantPrivilegeOnSecurableToRole( + callCtx, ms, grantee, catalogPath, securable, privilege)); + } + + /** + * See {@link #revokePrivilegeOnSecurableFromRole(PolarisCallContext, PolarisEntityCore, List, + * PolarisEntityCore, PolarisPrivilege)} + */ + private @NotNull PrivilegeResult revokePrivilegeOnSecurableFromRole( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull PolarisEntityCore grantee, + @Nullable List<PolarisEntityCore> catalogPath, + @NotNull PolarisEntityCore securable, + @NotNull PolarisPrivilege priv) { + + // re-resolve now all these entities + PolarisEntityResolver resolver = + this.resolveSecurableToRoleGrant(callCtx, ms, grantee, catalogPath, securable); + + // if failure to resolve, let the caller know + if (resolver.isFailure()) { + return new PrivilegeResult(ReturnStatus.ENTITY_CANNOT_BE_RESOLVED, null); + } + + // lookup the grants records to find this grant + PolarisGrantRecord grantRecord = + ms.lookupGrantRecord( + callCtx, + securable.getCatalogId(), + securable.getId(), + grantee.getCatalogId(), + grantee.getId(), + priv.getCode()); + + // the grant does not exist, nothing to do really + if (grantRecord == null) { + return new PrivilegeResult(ReturnStatus.GRANT_NOT_FOUND, null); + } + + // revoke the specified privilege on this securable from this role + this.revokeGrantRecord(callCtx, ms,
securable, grantee, grantRecord); + + // success! + return new PrivilegeResult(grantRecord); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PrivilegeResult revokePrivilegeOnSecurableFromRole( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisEntityCore grantee, + @Nullable List<PolarisEntityCore> catalogPath, + @NotNull PolarisEntityCore securable, + @NotNull PolarisPrivilege privilege) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read/write transaction + return ms.runInTransaction( + callCtx, + () -> + this.revokePrivilegeOnSecurableFromRole( + callCtx, ms, grantee, catalogPath, securable, privilege)); + } + + /** {@link #loadGrantsOnSecurable(PolarisCallContext, long, long)} */ + private @NotNull LoadGrantsResult loadGrantsOnSecurable( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + long securableCatalogId, + long securableId) { + + // lookup grants version for this securable entity + int grantsVersion = + ms.lookupEntityGrantRecordsVersion(callCtx, securableCatalogId, securableId); + + // return null if securable does not exist + if (grantsVersion == 0) { + return new LoadGrantsResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // now fetch all grants for this securable + final List<PolarisGrantRecord> returnGrantRecords = + ms.loadAllGrantRecordsOnSecurable(callCtx, securableCatalogId, securableId); + + // find all unique grantees + List<PolarisEntityId> entityIds = + returnGrantRecords.stream() + .map( + grantRecord -> + new PolarisEntityId( + grantRecord.getGranteeCatalogId(), grantRecord.getGranteeId())) + .distinct() + .collect(Collectors.toList()); + List<PolarisBaseEntity> entities = ms.lookupEntities(callCtx, entityIds); + + // done, return the list of grants and their version + return new LoadGrantsResult( + grantsVersion, + returnGrantRecords, + entities.stream().filter(Objects::nonNull).collect(Collectors.toList())); + } + + /** {@inheritDoc} */ + @Override + public @NotNull LoadGrantsResult
loadGrantsOnSecurable( + @NotNull PolarisCallContext callCtx, long securableCatalogId, long securableId) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + return ms.runInReadTransaction( + callCtx, () -> this.loadGrantsOnSecurable(callCtx, ms, securableCatalogId, securableId)); + } + + /** {@link #loadGrantsToGrantee(PolarisCallContext, long, long)} */ + public @NotNull LoadGrantsResult loadGrantsToGrantee( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + long granteeCatalogId, + long granteeId) { + + // lookup grants version for this grantee entity + int grantsVersion = ms.lookupEntityGrantRecordsVersion(callCtx, granteeCatalogId, granteeId); + + // return null if grantee does not exist + if (grantsVersion == 0) { + return new LoadGrantsResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // now fetch all grants for this grantee + final List<PolarisGrantRecord> returnGrantRecords = + ms.loadAllGrantRecordsOnGrantee(callCtx, granteeCatalogId, granteeId); + + // find all unique securables + List<PolarisEntityId> entityIds = + returnGrantRecords.stream() + .map( + grantRecord -> + new PolarisEntityId( + grantRecord.getSecurableCatalogId(), grantRecord.getSecurableId())) + .distinct() + .collect(Collectors.toList()); + List<PolarisBaseEntity> entities = ms.lookupEntities(callCtx, entityIds); + + // done, return the list of grants and their version + return new LoadGrantsResult( + grantsVersion, + returnGrantRecords, + entities.stream().filter(Objects::nonNull).collect(Collectors.toList())); + } + + /** {@inheritDoc} */ + @Override + public @NotNull LoadGrantsResult loadGrantsToGrantee( + @NotNull PolarisCallContext callCtx, long granteeCatalogId, long granteeId) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + return ms.runInReadTransaction( + callCtx, () -> this.loadGrantsToGrantee(callCtx, ms,
granteeCatalogId, granteeId)); + } + + /** {@link PolarisMetaStoreManager#loadEntitiesChangeTracking(PolarisCallContext, List)} */ + private @NotNull ChangeTrackingResult loadEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + @NotNull List<PolarisEntityId> entityIds) { + List<PolarisChangeTrackingVersions> changeTracking = + ms.lookupEntityVersions(callCtx, entityIds); + return new ChangeTrackingResult(changeTracking); + } + + /** {@inheritDoc} */ + @Override + public @NotNull ChangeTrackingResult loadEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull List<PolarisEntityId> entityIds) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + return ms.runInReadTransaction( + callCtx, () -> this.loadEntitiesChangeTracking(callCtx, ms, entityIds)); + } + + /** Refer to {@link #loadEntity(PolarisCallContext, long, long)} */ + private @NotNull EntityResult loadEntity( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + long entityCatalogId, + long entityId) { + // this is an easy one + PolarisBaseEntity entity = ms.lookupEntity(callCtx, entityCatalogId, entityId); + return (entity != null) + ?
new EntityResult(entity) + : new EntityResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + /** {@inheritDoc} */ + @Override + public @NotNull EntityResult loadEntity( + @NotNull PolarisCallContext callCtx, long entityCatalogId, long entityId) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + return ms.runInReadTransaction( + callCtx, () -> this.loadEntity(callCtx, ms, entityCatalogId, entityId)); + } + + /** Refer to {@link #loadTasks(PolarisCallContext, String, int)} */ + private @NotNull EntitiesResult loadTasks( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + String executorId, + int limit) { + + // find all available tasks + List<PolarisBaseEntity> availableTasks = + ms.listActiveEntities( + callCtx, + PolarisEntityConstants.getRootEntityId(), + PolarisEntityConstants.getRootEntityId(), + PolarisEntityType.TASK, + limit, + entity -> { + PolarisObjectMapperUtil.TaskExecutionState taskState = + PolarisObjectMapperUtil.parseTaskState(entity); + long taskAgeTimeout = + callCtx + .getConfigurationStore() + .getConfiguration( + callCtx, + PolarisTaskConstants.TASK_TIMEOUT_MILLIS_CONFIG, + PolarisTaskConstants.TASK_TIMEOUT_MILLIS); + return taskState == null + || taskState.executor == null + || callCtx.getClock().millis() - taskState.lastAttemptStartTime > taskAgeTimeout; + }, + Function.identity()); + + availableTasks.forEach( + task -> { + Map<String, String> properties = + PolarisObjectMapperUtil.deserializeProperties(callCtx, task.getProperties()); + properties.put(PolarisTaskConstants.LAST_ATTEMPT_EXECUTOR_ID, executorId); + properties.put( + PolarisTaskConstants.LAST_ATTEMPT_START_TIME, + String.valueOf(callCtx.getClock().millis())); + properties.put( + PolarisTaskConstants.ATTEMPT_COUNT, + String.valueOf( + Integer.parseInt(properties.getOrDefault(PolarisTaskConstants.ATTEMPT_COUNT, "0")) + + 1)); + task.setEntityVersion(task.getEntityVersion() + 1); +
task.setProperties(PolarisObjectMapperUtil.serializeProperties(callCtx, properties)); + writeEntity(callCtx, ms, task, false); + }); + return new EntitiesResult(availableTasks); + } + + @Override + public @NotNull EntitiesResult loadTasks( + @NotNull PolarisCallContext callCtx, String executorId, int limit) { + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + return ms.runInTransaction(callCtx, () -> this.loadTasks(callCtx, ms, executorId, limit)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull ScopedCredentialsResult getSubscopedCredsForEntity( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + boolean allowListOperation, + @NotNull Set<String> allowedReadLocations, + @NotNull Set<String> allowedWriteLocations) { + + // get meta store session we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + callCtx + .getDiagServices() + .check( + !allowedReadLocations.isEmpty() || !allowedWriteLocations.isEmpty(), + "allowed_locations_to_subscope_is_required"); + + // reload the entity, error out if not found + EntityResult reloadedEntity = loadEntity(callCtx, catalogId, entityId); + if (reloadedEntity.getReturnStatus() != ReturnStatus.SUCCESS) { + return new ScopedCredentialsResult( + reloadedEntity.getReturnStatus(), reloadedEntity.getExtraInformation()); + } + + // get storage integration + PolarisStorageIntegration<PolarisStorageConfigurationInfo> storageIntegration = + ms.loadPolarisStorageIntegration(callCtx, reloadedEntity.getEntity()); + + // cannot be null + callCtx + .getDiagServices() + .checkNotNull( + storageIntegration, + "storage_integration_not_exists", + "catalogId={}, entityId={}", + catalogId, + entityId); + + PolarisStorageConfigurationInfo storageConfigurationInfo = + readStorageConfiguration(callCtx, reloadedEntity.getEntity()); + try { + EnumMap<PolarisCredentialProperty, String> creds = + storageIntegration.getSubscopedCreds( + callCtx.getDiagServices(), + storageConfigurationInfo, + allowListOperation, + allowedReadLocations, + allowedWriteLocations); + return new
ScopedCredentialsResult(creds); + } catch (Exception ex) { + return new ScopedCredentialsResult(ReturnStatus.SUBSCOPE_CREDS_ERROR, ex.getMessage()); + } + } + + /** {@inheritDoc} */ + @Override + public @NotNull ValidateAccessResult validateAccessToLocations( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + @NotNull Set<PolarisStorageActions> actions, + @NotNull Set<String> locations) { + // get meta store we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + callCtx + .getDiagServices() + .check( + !actions.isEmpty() && !locations.isEmpty(), + "locations_and_operations_privileges_are_required"); + // reload the entity, error out if not found + EntityResult reloadedEntity = loadEntity(callCtx, catalogId, entityId); + if (reloadedEntity.getReturnStatus() != ReturnStatus.SUCCESS) { + return new ValidateAccessResult( + reloadedEntity.getReturnStatus(), reloadedEntity.getExtraInformation()); + } + + // get storage integration, expect not null + PolarisStorageIntegration<PolarisStorageConfigurationInfo> storageIntegration = + ms.loadPolarisStorageIntegration(callCtx, reloadedEntity.getEntity()); + callCtx + .getDiagServices() + .checkNotNull( + storageIntegration, + "storage_integration_not_exists", + "catalogId={}, entityId={}", + catalogId, + entityId); + + // validate access + PolarisStorageConfigurationInfo storageConfigurationInfo = + readStorageConfiguration(callCtx, reloadedEntity.getEntity()); + Map<String, String> validateLocationAccess = + storageIntegration + .validateAccessToLocations(storageConfigurationInfo, actions, locations) + .entrySet() + .stream() + .collect( + Collectors.toMap( + Map.Entry::getKey, + e -> PolarisObjectMapperUtil.serialize(callCtx, e.getValue()))); + + // done, return result + return new ValidateAccessResult(validateLocationAccess); + } + + public static PolarisStorageConfigurationInfo readStorageConfiguration( + @NotNull PolarisCallContext callCtx, PolarisBaseEntity reloadedEntity) { + Map<String, String> propMap = + PolarisObjectMapperUtil.deserializeProperties( + callCtx,
reloadedEntity.getInternalProperties()); + String storageConfigInfoStr = + propMap.get(PolarisEntityConstants.getStorageConfigInfoPropertyName()); + + callCtx + .getDiagServices() + .check( + storageConfigInfoStr != null, + "missing_storage_configuration_info", + "catalogId={}, entityId={}", + reloadedEntity.getCatalogId(), + reloadedEntity.getId()); + return PolarisStorageConfigurationInfo.deserialize( + callCtx.getDiagServices(), storageConfigInfoStr); + } + + /** + * Get the internal property map for an entity + * + * @param callCtx the polaris call context + * @param entity the target entity + * @return a map of string representing the internal properties + */ + public Map<String, String> getInternalPropertyMap( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + String internalPropStr = entity.getInternalProperties(); + Map<String, String> res = new HashMap<>(); + if (internalPropStr == null) { + return res; + } + return deserializeProperties(callCtx, internalPropStr); + } + + /** {@link #loadCachedEntryById(PolarisCallContext, long, long)} */ + private @NotNull CachedEntryResult loadCachedEntryById( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + long entityCatalogId, + long entityId) { + + // load that entity + PolarisBaseEntity entity = ms.lookupEntity(callCtx, entityCatalogId, entityId); + + // if entity not found, return null + if (entity == null) { + return new CachedEntryResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // load the grant records + final List<PolarisGrantRecord> grantRecords; + if (entity.getType().isGrantee()) { + grantRecords = + new ArrayList<>(ms.loadAllGrantRecordsOnGrantee(callCtx, entityCatalogId, entityId)); + grantRecords.addAll(ms.loadAllGrantRecordsOnSecurable(callCtx, entityCatalogId, entityId)); + } else { + grantRecords = ms.loadAllGrantRecordsOnSecurable(callCtx, entityCatalogId, entityId); + } + + // return the result + return new CachedEntryResult(entity, entity.getGrantRecordsVersion(), grantRecords); + } + +
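The grant-record loading rule used in loadCachedEntryById (and again in loadCachedEntryByName and refreshCachedEntity below) is direction-sensitive: a grantee entity (principal, principal role, catalog role) aggregates both the grants made to it and the grants made on it as a securable, while a plain securable only carries the grants made on it. A minimal standalone sketch of that rule, using simplified stand-in types rather than the actual Polaris classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins (not the actual Polaris classes) illustrating the
// direction-sensitive grant-record loading rule.
public class GrantLoadSketch {
    record Grant(String securable, String grantee, String privilege) {}

    // A grantee merges the grants it holds with the grants made on it as a
    // securable; a plain securable only returns the grants made on it.
    static List<Grant> loadGrantRecords(
            boolean isGrantee, List<Grant> onGrantee, List<Grant> onSecurable) {
        if (isGrantee) {
            List<Grant> all = new ArrayList<>(onGrantee);
            all.addAll(onSecurable);
            return all;
        }
        return onSecurable;
    }

    public static void main(String[] args) {
        List<Grant> onGrantee = List.of(new Grant("tbl1", "role1", "TABLE_READ"));
        List<Grant> onSecurable = List.of(new Grant("role1", "principal1", "CATALOG_ROLE_USAGE"));
        // A catalog role is a grantee: both directions are merged.
        System.out.println(loadGrantRecords(true, onGrantee, onSecurable).size());  // prints 2
        // A table is not a grantee: only grants on it are returned.
        System.out.println(loadGrantRecords(false, onGrantee, onSecurable).size()); // prints 1
    }
}
```

This is why the cache entry for a role must be invalidated when either side of its grants changes, which the grants version on the entity tracks.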
/** {@inheritDoc} */ + @Override + public @NotNull CachedEntryResult loadCachedEntryById( + @NotNull PolarisCallContext callCtx, long entityCatalogId, long entityId) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + return ms.runInReadTransaction( + callCtx, () -> this.loadCachedEntryById(callCtx, ms, entityCatalogId, entityId)); + } + + /** {@link #loadCachedEntryById(PolarisCallContext, long, long)} */ + private @NotNull PolarisMetaStoreManager.CachedEntryResult loadCachedEntryByName( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + long entityCatalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull String entityName) { + + // load that entity + PolarisEntitiesActiveKey entityActiveKey = + new PolarisEntitiesActiveKey(entityCatalogId, parentId, entityType.getCode(), entityName); + PolarisBaseEntity entity = this.lookupEntityByName(callCtx, ms, entityActiveKey); + + // null if entity not found + if (entity == null) { + return new CachedEntryResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // load the grant records + final List<PolarisGrantRecord> grantRecords; + if (entity.getType().isGrantee()) { + grantRecords = + new ArrayList<>( + ms.loadAllGrantRecordsOnGrantee(callCtx, entityCatalogId, entity.getId())); + grantRecords.addAll( + ms.loadAllGrantRecordsOnSecurable(callCtx, entityCatalogId, entity.getId())); + } else { + grantRecords = ms.loadAllGrantRecordsOnSecurable(callCtx, entityCatalogId, entity.getId()); + } + + // return the result + return new CachedEntryResult(entity, entity.getGrantRecordsVersion(), grantRecords); + } + + /** {@inheritDoc} */ + @Override + public @NotNull CachedEntryResult loadCachedEntryByName( + @NotNull PolarisCallContext callCtx, + long entityCatalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull String entityName) { + // get metastore we should be using +
PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + CachedEntryResult result = + ms.runInReadTransaction( + callCtx, + () -> + this.loadCachedEntryByName( + callCtx, ms, entityCatalogId, parentId, entityType, entityName)); + if (PolarisEntityConstants.getRootContainerName().equals(entityName) + && entityType == PolarisEntityType.ROOT + && !result.isSuccess()) { + // Backfill rootContainer if needed. + ms.runActionInTransaction( + callCtx, + () -> { + PolarisBaseEntity rootContainer = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + PolarisEntityConstants.getRootEntityId(), + PolarisEntityType.ROOT, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + PolarisEntityConstants.getRootContainerName()); + EntityResult backfillResult = + this.createEntityIfNotExists(callCtx, ms, null, rootContainer); + if (backfillResult.isSuccess()) { + PolarisEntitiesActiveKey serviceAdminRoleKey = + new PolarisEntitiesActiveKey( + 0L, + 0L, + PolarisEntityType.PRINCIPAL_ROLE.getCode(), + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole()); + PolarisBaseEntity serviceAdminRole = + this.lookupEntityByName(callCtx, ms, serviceAdminRoleKey); + if (serviceAdminRole != null) { + this.persistNewGrantRecord( + callCtx, + ms, + rootContainer, + serviceAdminRole, + PolarisPrivilege.SERVICE_MANAGE_ACCESS); + } + } + }); + + // Redo the lookup in a separate read transaction. 
+ result = + ms.runInReadTransaction( + callCtx, + () -> + this.loadCachedEntryByName( + callCtx, ms, entityCatalogId, parentId, entityType, entityName)); + } + return result; + } + + /** {@inheritDoc} */ + private @NotNull CachedEntryResult refreshCachedEntity( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisMetaStoreSession ms, + int entityVersion, + int entityGrantRecordsVersion, + @NotNull PolarisEntityType entityType, + long entityCatalogId, + long entityId) { + + // load version information + PolarisChangeTrackingVersions entityVersions = + ms.lookupEntityVersions(callCtx, List.of(new PolarisEntityId(entityCatalogId, entityId))) + .get(0); + + // if null, the entity has been purged + if (entityVersions == null) { + return new CachedEntryResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + + // load the entity if something changed + final PolarisBaseEntity entity; + if (entityVersion != entityVersions.getEntityVersion()) { + entity = ms.lookupEntity(callCtx, entityCatalogId, entityId); + + // if not found, return null + if (entity == null) { + return new CachedEntryResult(ReturnStatus.ENTITY_NOT_FOUND, null); + } + } else { + // entity has not changed, no need to reload it + entity = null; + } + + // load the grant records if required + final List<PolarisGrantRecord> grantRecords; + if (entityVersions.getGrantRecordsVersion() != entityGrantRecordsVersion) { + if (entityType.isGrantee()) { + grantRecords = + new ArrayList<>(ms.loadAllGrantRecordsOnGrantee(callCtx, entityCatalogId, entityId)); + grantRecords.addAll(ms.loadAllGrantRecordsOnSecurable(callCtx, entityCatalogId, entityId)); + } else { + grantRecords = ms.loadAllGrantRecordsOnSecurable(callCtx, entityCatalogId, entityId); + } + } else { + grantRecords = null; + } + + // return the result + return new CachedEntryResult(entity, entityVersions.getGrantRecordsVersion(), grantRecords); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PolarisMetaStoreManager.CachedEntryResult refreshCachedEntity( + @NotNull
PolarisCallContext callCtx, + int entityVersion, + int entityGrantRecordsVersion, + @NotNull PolarisEntityType entityType, + long entityCatalogId, + long entityId) { + // get metastore we should be using + PolarisMetaStoreSession ms = callCtx.getMetaStore(); + + // need to run inside a read transaction + return ms.runInReadTransaction( + callCtx, + () -> + this.refreshCachedEntity( + callCtx, + ms, + entityVersion, + entityGrantRecordsVersion, + entityType, + entityCatalogId, + entityId)); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreSession.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreSession.java new file mode 100644 index 0000000000..8034887155 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisMetaStoreSession.java @@ -0,0 +1,524 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisChangeTrackingVersions; +import io.polaris.core.entity.PolarisEntitiesActiveKey; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import java.util.List; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** + * Interface to the Polaris metadata store, allows to persist and retrieve all Polaris metadata like + * metadata for Polaris entities and metadata about grants between these entities which is the + * foundation of our role base access control model. + * + *

<p>Note that APIs to the actual persistence store are very basic, often point read or write to + * the underlying data store. The goal is to make it really easy to back this using databases like + * Postgres or simpler KV store. + */ +public interface PolarisMetaStoreSession { + + /** + * Run the specified transaction code (a Supplier lambda type) in a database read/write + * transaction. If the code of the transaction does not throw any exception and returns normally, + * the transaction will be committed, else the transaction will be automatically rolled-back on + * error. The result of the supplier lambda is returned if success, else the error will be + * re-thrown. + * + * @param callCtx call context + * @param transactionCode code of the transaction being executed, a supplier lambda + */ + <T> T runInTransaction(@NotNull PolarisCallContext callCtx, @NotNull Supplier<T> transactionCode); + + /** + * Run the specified transaction code (a runnable lambda type) in a database read/write + * transaction. If the code of the transaction does not throw any exception and returns normally, + * the transaction will be committed, else the transaction will be automatically rolled-back on + * error. + * + * @param callCtx call context + * @param transactionCode code of the transaction being executed, a runnable lambda + */ + void runActionInTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode); + + /** + * Run the specified transaction code (a Supplier lambda type) in a database read transaction. If + * the code of the transaction does not throw any exception and returns normally, the transaction + * will be committed, else the transaction will be automatically rolled-back on error. The result + * of the supplier lambda is returned if success, else the error will be re-thrown.
+ * + * @param callCtx call context + * @param transactionCode code of the transaction being executed, a supplier lambda + */ + <T> T runInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Supplier<T> transactionCode); + + /** + * Run the specified transaction code (a runnable lambda type) in a database read transaction. If + * the code of the transaction does not throw any exception and returns normally, the transaction + * will be committed, else the transaction will be automatically rolled-back on error. + * + * @param callCtx call context + * @param transactionCode code of the transaction being executed, a runnable lambda + */ + void runActionInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode); + + /** + * @param callCtx call context + * @return new unique entity identifier + */ + long generateNewId(@NotNull PolarisCallContext callCtx); + + /** + * Write the base entity to the entities table. If there is a conflict (existing record with the + * same id), all attributes of the new record will replace the existing one. + * + * @param callCtx call context + * @param entity entity record to write, potentially replacing an existing entity record with the + * same key + */ + void writeToEntities(@NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity); + + /** + * Write the base entity to the entities_active table. If there is a conflict (existing record + * with the same PK), all attributes of the new record will replace the existing one. + * + * @param callCtx call context + * @param entity entity record to write, potentially replacing an existing entity record with the + * same key + */ + void writeToEntitiesActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity); + + /** + * Write the base entity to the entities_dropped table. If there is a conflict (existing record + * with the same PK), all attributes of the new record will replace the existing one.
+ * + * @param callCtx call context + * @param entity entity record to write, potentially replacing an existing entity record with the + * same key + */ + void writeToEntitiesDropped( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity); + + /** + * Write the base entity to the entities change tracking table. If there is a conflict (existing + * record with the same id), all attributes of the new record will replace the existing one. + * + * @param callCtx call context + * @param entity entity record to write, potentially replacing an existing entity record with the + * same key + */ + void writeToEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity); + + /** + * Write the specified grantRecord to the grant_records table. If there is a conflict (existing + * record with the same PK), all attributes of the new record will replace the existing one. + * + * @param callCtx call context + * @param grantRec entity record to write, potentially replacing an existing entity record with + * the same key + */ + void writeToGrantRecords( + @NotNull PolarisCallContext callCtx, @NotNull PolarisGrantRecord grantRec); + + /** + * Delete the base entity from the entities table. + * + * @param callCtx call context + * @param entity entity record to delete + */ + void deleteFromEntities(@NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity); + + /** + * Delete the base entity from the entities_active table. 
+ * + * @param callCtx call context + * @param entity entity record to delete + */ + void deleteFromEntitiesActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity); + + /** + * Delete the base entity from the entities_dropped table. + * + * @param callCtx call context + * @param entity entity record to delete + */ + void deleteFromEntitiesDropped( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity); + + /** + * Delete the base entity from the entities change tracking table. + * + * @param callCtx call context + * @param entity entity record to delete + */ + void deleteFromEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity); + + /** + * Delete the specified grantRecord from the grant_records table. + * + * @param callCtx call context + * @param grantRec grant record to delete. + */ + void deleteFromGrantRecords( + @NotNull PolarisCallContext callCtx, @NotNull PolarisGrantRecord grantRec); + + /** + * Delete all grant records in the grant_records table for the specified entity. This method + * will delete all grant records on that securable entity and also all grants to that grantee + * entity assuming that the entity is a grantee (catalog role, principal role or principal). + * + * @param callCtx call context + * @param entity entity whose grant records to and from should be deleted + * @param grantsOnGrantee all grants to that grantee entity. Empty list if that entity is not a + * grantee + * @param grantsOnSecurable all grants on that securable entity + */ + void deleteAllEntityGrantRecords( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisEntityCore entity, + @NotNull List grantsOnGrantee, + @NotNull List grantsOnSecurable); + + /** + * Delete Polaris entity and grant record metadata from all tables.
This is used during metadata + * bootstrap to reset all tables to their original state. + * + * @param callCtx call context + */ + void deleteAll(@NotNull PolarisCallContext callCtx); + + /** + * Lookup an entity given its catalog id (which can be NULL_ID for top-level entities) and its + * unique id. + * + * @param callCtx call context + * @param catalogId catalog id or NULL_ID + * @param entityId unique entity id + * @return NULL if the entity was not found, else the base entity. + */ + @Nullable + PolarisBaseEntity lookupEntity( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId); + + /** + * Lookup a set of entities given their catalog id/entity id unique identifier. + * + * @param callCtx call context + * @param entityIds list of entity ids + * @return list of polaris base entities, parallel to the input list of ids. An entity in the list + * will be null if the corresponding entity could not be found. + */ + @NotNull + List lookupEntities( + @NotNull PolarisCallContext callCtx, List entityIds); + + /** + * Lookup in the entities_change_tracking table the current version of an entity given its catalog + * id (which can be NULL_ID for top-level entities) and its unique id. Will return 0 if the entity + * does not exist. + * + * @param callCtx call context + * @param catalogId catalog id or NULL_ID + * @param entityId unique entity id + * @return current version for that entity or 0 if entity was not found. + */ + int lookupEntityVersion(@NotNull PolarisCallContext callCtx, long catalogId, long entityId); + + /** + * Get change tracking versions for all specified entity ids. + * + * @param callCtx call context + * @param entityIds list of entity ids + * @return list parallel to the input list of entity versions.
If an entity cannot be found, the + * corresponding element in the list will be null + */ + @NotNull + List lookupEntityVersions( + @NotNull PolarisCallContext callCtx, List entityIds); + + /** + * Lookup in the entities_active table to determine if the specified entity exists. Return the + * result of that lookup. + * + * @param callCtx call context + * @param entityActiveKey key in the ENTITIES_ACTIVE table + * @return the active record if the entity exists, null if it does not exist or has been dropped. + */ + @Nullable + PolarisEntityActiveRecord lookupEntityActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntitiesActiveKey entityActiveKey); + + /** + * Lookup in the entities_active table to determine if the specified set of entities exist. Return + * the result, a parallel list of active records. A record in that list will be null if its + * associated lookup failed. + * + * @return the list of entities_active records for the specified lookup operation + */ + @NotNull + List lookupEntityActiveBatch( + @NotNull PolarisCallContext callCtx, List entityActiveKeys); + + /** + * List all active entities of the specified type which are child entities of the specified parent. + * + * @param callCtx call context + * @param catalogId catalog id for that entity, NULL_ID if the entity is top-level + * @param parentId id of the parent, can be the special 0 value representing the root entity + * @param entityType type of entities to list + * @return the list of entities_active records for the specified list operation + */ + @NotNull + List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType); + + /** + * List active entities where some predicate returns true + * + * @param callCtx call context + * @param catalogId catalog id for that entity, NULL_ID if the entity is top-level + * @param parentId id of the parent, can be the special 0 value representing the root entity + * @param entityType type of
entities to list + * @param entityFilter the filter to be applied to each entity. Only entities where the predicate + * returns true are returned in the list + * @return the list of entities for which the predicate returns true + */ + @NotNull + List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull Predicate entityFilter); + + /** + * List active entities where some predicate returns true and transform the entities with a + * function + * + * @param callCtx call context + * @param catalogId catalog id for that entity, NULL_ID if the entity is top-level + * @param parentId id of the parent, can be the special 0 value representing the root entity + * @param entityType type of entities to list + * @param limit the max number of items to return + * @param entityFilter the filter to be applied to each entity. Only entities where the predicate + * returns true are returned in the list + * @param transformer the transformation function applied to the {@link PolarisBaseEntity} before + * returning + * @return the list of entities for which the predicate returns true + */ + @NotNull + List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType, + int limit, + @NotNull Predicate entityFilter, + @NotNull Function transformer); + + /** + * Lookup in the entities_change_tracking table the current version of the grant records for this + * entity. That version is changed every time a grant record is added or removed on a base + * securable or added to a grantee. + * + * @param callCtx call context + * @param catalogId catalog id or NULL_ID + * @param entityId unique entity id + * @return current grant records version for that entity.
+ */ + int lookupEntityGrantRecordsVersion( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId); + + /** + * Lookup the specified grant record from the grant_records table. Return NULL if not found + * + * @param callCtx call context + * @param securableCatalogId catalog id of the securable entity, NULL_ID if the entity is + * top-level + * @param securableId id of the securable entity + * @param granteeCatalogId catalog id of the grantee entity, NULL_ID if the entity is top-level + * @param granteeId id of the grantee entity + * @param privilegeCode code for the privilege we are looking up + * @return the grant record if found, NULL if not found + */ + @Nullable + PolarisGrantRecord lookupGrantRecord( + @NotNull PolarisCallContext callCtx, + long securableCatalogId, + long securableId, + long granteeCatalogId, + long granteeId, + int privilegeCode); + + /** + * Get all grant records on the specified securable entity. + * + * @param callCtx call context + * @param securableCatalogId catalog id of the securable entity, NULL_ID if the entity is + * top-level + * @param securableId id of the securable entity + * @return the list of grant records for the specified securable + */ + @NotNull + List loadAllGrantRecordsOnSecurable( + @NotNull PolarisCallContext callCtx, long securableCatalogId, long securableId); + + /** + * Get all grant records granted to the specified grantee entity. 
+ * + * @param callCtx call context + * @param granteeCatalogId catalog id of the grantee entity, NULL_ID if the entity is top-level + * @param granteeId id of the grantee entity + * @return the list of grant records for the specified grantee + */ + @NotNull + List loadAllGrantRecordsOnGrantee( + @NotNull PolarisCallContext callCtx, long granteeCatalogId, long granteeId); + + /** + * Retrieve the secrets of a principal given its unique client id. + * + * @param callCtx call context + * @param clientId principal client id + * @return the secrets + */ + @Nullable + PolarisPrincipalSecrets loadPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId); + + /** + * Generate and store a client id and associated secrets for a newly created principal entity. + * + * @param callCtx call context + * @param principalName name of the principal + * @param principalId principal id + * @return the generated principal secrets + */ + @NotNull + PolarisPrincipalSecrets generateNewPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String principalName, long principalId); + + /** + * Rotate the secrets of a principal entity, i.e.
make the specified main secrets the secondary + * and generate a new main secret + * + * @param callCtx call context + * @param clientId principal client id + * @param principalId principal id + * @param mainSecretToRotate main secret for comparison with the current entity version + * @param reset true if the principal secrets should be disabled and replaced with a one-time + * password + */ + @Nullable + PolarisPrincipalSecrets rotatePrincipalSecrets( + @NotNull PolarisCallContext callCtx, + @NotNull String clientId, + long principalId, + @NotNull String mainSecretToRotate, + boolean reset); + + /** + * When dropping a principal, we also need to drop the secrets of that principal. + * + * @param callCtx the call context + * @param clientId principal client id + * @param principalId the id of the principal whose secrets are dropped + */ + void deletePrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId, long principalId); + + /** + * Create an in-memory storage integration. + * + * @param callCtx the polaris call context + * @param catalogId the catalog id + * @param entityId the entity id + * @param polarisStorageConfigurationInfo the storage configuration information + * @return a storage integration object + */ + @Nullable + PolarisStorageIntegration createStorageIntegration( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + PolarisStorageConfigurationInfo polarisStorageConfigurationInfo); + + /** + * Persist a storage integration in the metastore. + * + * @param callContext the polaris call context + * @param entity the entity of the object + * @param storageIntegration the storage integration to persist + */ + void persistStorageIntegrationIfNeeded( + @NotNull PolarisCallContext callContext, + @NotNull PolarisBaseEntity entity, + @Nullable PolarisStorageIntegration storageIntegration); + + /** + * Load the polaris storage integration for a polaris entity (Catalog, Namespace, Table, View). + * + * @param callContext
the polaris call context + * @param entity the polaris entity + * @return a polaris storage integration + */ + @Nullable + PolarisStorageIntegration loadPolarisStorageIntegration( + @NotNull PolarisCallContext callContext, @NotNull PolarisBaseEntity entity); + + /** + * Check if the specified parent entity has children. + * + * @param callContext the polaris call context + * @param optionalEntityType if not null, only check for the specified type, else check for all + * types of children entities + * @param catalogId id of the catalog + * @param parentId id of the parent, either a namespace or a catalog + * @return true if the parent entity has children + */ + boolean hasChildren( + @NotNull PolarisCallContext callContext, + @Nullable PolarisEntityType optionalEntityType, + long catalogId, + long parentId); + + /** Roll back the current transaction. */ + void rollback(); +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisObjectMapperUtil.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisObjectMapperUtil.java new file mode 100644 index 0000000000..35a3f05d56 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisObjectMapperUtil.java @@ -0,0 +1,185 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.core.persistence; + +import com.fasterxml.jackson.core.JsonFactory; +import com.fasterxml.jackson.core.JsonParser; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.core.JsonToken; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.JsonMappingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisTaskConstants; +import java.io.IOException; +import java.util.Map; +import org.apache.iceberg.rest.RESTSerializers; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** A mapper to serialize/deserialize polaris objects. */ +public class PolarisObjectMapperUtil { + /** mapper, allows to serialize/deserialize properties to/from JSON */ + private static final ObjectMapper MAPPER = configureMapper(); + + private static ObjectMapper configureMapper() { + ObjectMapper mapper = new ObjectMapper(); + mapper.configure(DeserializationFeature.FAIL_ON_IGNORED_PROPERTIES, false); + RESTSerializers.registerAll(mapper); + return mapper; + } + + /** + * Given the internal property as a map of key/value pairs, serialize it to a String + * + * @param properties a map of key/value pairs + * @return a String, the JSON representation of the map + */ + public static String serializeProperties( + PolarisCallContext callCtx, Map properties) { + + String jsonString = null; + try { + // Serialize the map to a JSON string + jsonString = MAPPER.writeValueAsString(properties); + } catch (JsonProcessingException ex) { + callCtx.getDiagServices().fail("got_json_processing_exception", ex.getMessage()); + } + + return jsonString; + } + + public static String serialize(PolarisCallContext callCtx, Object object) { + try { + return
MAPPER.writeValueAsString(object); + } catch (JsonProcessingException e) { + callCtx.getDiagServices().fail("got_json_processing_exception", e.getMessage()); + } + return ""; + } + + public static T deserialize(PolarisCallContext callCtx, String text, Class klass) { + try { + return MAPPER.readValue(text, klass); + } catch (JsonProcessingException e) { + callCtx.getDiagServices().fail("got_json_processing_exception", e.getMessage()); + } + return null; + } + + /** + * Given the serialized properties, deserialize those to a Map + * + * @param properties a JSON string representing the set of properties + * @return a Map of string + */ + public static Map deserializeProperties( + PolarisCallContext callCtx, String properties) { + + Map retProperties = null; + try { + // Deserialize the JSON string to a Map + retProperties = MAPPER.readValue(properties, new TypeReference<>() {}); + } catch (JsonMappingException ex) { + callCtx + .getDiagServices() + .fail("got_json_mapping_exception", "properties={}, ex={}", properties, ex); + } catch (JsonProcessingException ex) { + callCtx + .getDiagServices() + .fail("got_json_processing_exception", "properties={}, ex={}", properties, ex); + } + + return retProperties; + } + + static class TaskExecutionState { + final String executor; + final long lastAttemptStartTime; + final int attemptCount; + + TaskExecutionState(String executor, long lastAttemptStartTime, int attemptCount) { + this.executor = executor; + this.lastAttemptStartTime = lastAttemptStartTime; + this.attemptCount = attemptCount; + } + + public String getExecutor() { + return executor; + } + + public long getLastAttemptStartTime() { + return lastAttemptStartTime; + } + + public int getAttemptCount() { + return attemptCount; + } + } + + /** + * Parse a task entity's properties field in order to find the current {@link TaskExecutionState}. + * Avoids parsing most of the data in the properties field, so we can look at just the fields we + * need. 
+ * + * @param entity entity + * @return TaskExecutionState + */ + static @Nullable TaskExecutionState parseTaskState(PolarisBaseEntity entity) { + JsonFactory jfactory = new JsonFactory(); + try (JsonParser jParser = jfactory.createParser(entity.getProperties())) { + String executorId = null; + long lastAttemptStartTime = 0; + int attemptCount = 0; + while (jParser.nextToken() != JsonToken.END_OBJECT) { + if (jParser.getCurrentToken() == JsonToken.FIELD_NAME) { + String fieldName = jParser.currentName(); + if (fieldName.equals(PolarisTaskConstants.LAST_ATTEMPT_EXECUTOR_ID)) { + jParser.nextToken(); + executorId = jParser.getText(); + } else if (fieldName.equals(PolarisTaskConstants.LAST_ATTEMPT_START_TIME)) { + jParser.nextToken(); + lastAttemptStartTime = Long.parseLong(jParser.getText()); + } else if (fieldName.equals(PolarisTaskConstants.ATTEMPT_COUNT)) { + jParser.nextToken(); + attemptCount = Integer.parseInt(jParser.getText()); + } else { + JsonToken next = jParser.nextToken(); + if (next == JsonToken.START_OBJECT || next == JsonToken.START_ARRAY) { + jParser.skipChildren(); + } + } + } + } + return new TaskExecutionState(executorId, lastAttemptStartTime, attemptCount); + } catch (IOException e) { + Logger logger = LoggerFactory.getLogger(PolarisObjectMapperUtil.class); + logger + .atWarn() + .addKeyValue("json", entity.getProperties()) + .addKeyValue("error", e.getMessage()) + .log("Unable to parse task properties"); + return null; + } + } + + long now() { + return 0; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisResolvedPathWrapper.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisResolvedPathWrapper.java new file mode 100644 index 0000000000..cf3e99c49e --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisResolvedPathWrapper.java @@ -0,0 +1,81 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.entity.PolarisEntity; +import java.util.List; + +/** + * Holds fully-resolved path of PolarisEntities representing the targetEntity with all its grants + * and grant records. + */ +public class PolarisResolvedPathWrapper { + private final List resolvedPath; + + // TODO: Distinguish between whether parentPath had a null in the chain or whether only + // the leaf element was null. 
+ public PolarisResolvedPathWrapper(List resolvedPath) { + this.resolvedPath = resolvedPath; + } + + public ResolvedPolarisEntity getResolvedLeafEntity() { + if (resolvedPath == null || resolvedPath.isEmpty()) { + return null; + } + return resolvedPath.get(resolvedPath.size() - 1); + } + + public PolarisEntity getRawLeafEntity() { + ResolvedPolarisEntity resolvedEntity = getResolvedLeafEntity(); + if (resolvedEntity != null) { + return resolvedEntity.getEntity(); + } + return null; + } + + public List getResolvedFullPath() { + return resolvedPath; + } + + public List getRawFullPath() { + if (resolvedPath == null) { + return null; + } + return resolvedPath.stream().map(resolved -> resolved.getEntity()).toList(); + } + + public List getResolvedParentPath() { + if (resolvedPath == null) { + return null; + } + return resolvedPath.subList(0, resolvedPath.size() - 1); + } + + public List getRawParentPath() { + if (resolvedPath == null) { + return null; + } + return getResolvedParentPath().stream().map(resolved -> resolved.getEntity()).toList(); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("resolvedPath:"); + sb.append(resolvedPath); + return sb.toString(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisTreeMapMetaStoreSessionImpl.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisTreeMapMetaStoreSessionImpl.java new file mode 100644 index 0000000000..8374fab262 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisTreeMapMetaStoreSessionImpl.java @@ -0,0 +1,568 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import com.google.common.base.Predicates; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisChangeTrackingVersions; +import io.polaris.core.entity.PolarisEntitiesActiveKey; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import io.polaris.core.storage.PolarisStorageIntegrationProvider; +import java.util.List; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; +import java.util.stream.Collectors; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +public class PolarisTreeMapMetaStoreSessionImpl implements PolarisMetaStoreSession { + + // the TreeMap store to use + private final PolarisTreeMapStore store; + private final PolarisStorageIntegrationProvider storageIntegrationProvider; + + public PolarisTreeMapMetaStoreSessionImpl( + @NotNull PolarisTreeMapStore store, + @NotNull PolarisStorageIntegrationProvider storageIntegrationProvider) { + + // init store + this.store = store; + this.storageIntegrationProvider = storageIntegrationProvider; + } 
+ + /** {@inheritDoc} */ + @Override + public T runInTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Supplier transactionCode) { + + // run transaction on our underlying store + return store.runInTransaction(callCtx, transactionCode); + } + + /** {@inheritDoc} */ + @Override + public void runActionInTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode) { + + // run transaction on our underlying store + store.runActionInTransaction(callCtx, transactionCode); + } + + /** {@inheritDoc} */ + @Override + public T runInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Supplier transactionCode) { + // run transaction on our underlying store + return store.runInReadTransaction(callCtx, transactionCode); + } + + /** {@inheritDoc} */ + @Override + public void runActionInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode) { + + // run transaction on our underlying store + store.runActionInReadTransaction(callCtx, transactionCode); + } + + /** + * @return new unique entity identifier + */ + public long generateNewId(@NotNull PolarisCallContext callCtx) { + return this.store.getNextSequence(); + } + + /** {@inheritDoc} */ + @Override + public void writeToEntities( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.getSliceEntities().write(entity); + } + + /** {@inheritDoc} */ + @Override + public void persistStorageIntegrationIfNeeded( + @NotNull PolarisCallContext callContext, + @NotNull PolarisBaseEntity entity, + @Nullable PolarisStorageIntegration storageIntegration) { + // not implemented for in-memory store + } + + /** {@inheritDoc} */ + @Override + public void writeToEntitiesActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.getSliceEntitiesActive().write(entity); + } + + /** {@inheritDoc} */ + @Override + public void writeToEntitiesDropped( + @NotNull 
PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.getSliceEntitiesDropped().write(entity); + this.store.getSliceEntitiesDroppedToPurge().write(entity); + } + + /** {@inheritDoc} */ + @Override + public void writeToEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // write it + this.store.getSliceEntitiesChangeTracking().write(entity); + } + + /** {@inheritDoc} */ + @Override + public void writeToGrantRecords( + @NotNull PolarisCallContext callCtx, @NotNull PolarisGrantRecord grantRec) { + // write it + this.store.getSliceGrantRecords().write(grantRec); + this.store.getSliceGrantRecordsByGrantee().write(grantRec); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromEntities( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity) { + + // delete it + this.store.getSliceEntities().delete(this.store.buildEntitiesKey(entity)); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromEntitiesActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity) { + // delete it + this.store.getSliceEntitiesActive().delete(this.store.buildEntitiesActiveKey(entity)); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromEntitiesDropped( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + // delete it + this.store.getSliceEntitiesDropped().delete(entity); + this.store.getSliceEntitiesDroppedToPurge().delete(entity); + } + + /** + * {@inheritDoc} + * + * @param callCtx + * @param entity entity record to delete + */ + @Override + public void deleteFromEntitiesChangeTracking( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntityCore entity) { + // delete it + this.store.getSliceEntitiesChangeTracking().delete(this.store.buildEntitiesKey(entity)); + } + + /** {@inheritDoc} */ + @Override + public void deleteFromGrantRecords( + @NotNull PolarisCallContext callCtx, @NotNull PolarisGrantRecord 
grantRec) { + + // delete it + this.store.getSliceGrantRecords().delete(grantRec); + this.store.getSliceGrantRecordsByGrantee().delete(grantRec); + } + + /** {@inheritDoc} */ + @Override + public void deleteAllEntityGrantRecords( + @NotNull PolarisCallContext callCtx, + @NotNull PolarisEntityCore entity, + @NotNull List grantsOnGrantee, + @NotNull List grantsOnSecurable) { + + // build composite prefix key and delete grant records on the indexed side of each grant table + String prefix = this.store.buildPrefixKeyComposite(entity.getCatalogId(), entity.getId()); + this.store.getSliceGrantRecords().deleteRange(prefix); + this.store.getSliceGrantRecordsByGrantee().deleteRange(prefix); + + // also delete the other side. We need to delete these grants one at a time versus doing a + // range delete + grantsOnGrantee.forEach(gr -> this.store.getSliceGrantRecords().delete(gr)); + grantsOnSecurable.forEach(gr -> this.store.getSliceGrantRecordsByGrantee().delete(gr)); + } + + /** {@inheritDoc} */ + @Override + public void deleteAll(@NotNull PolarisCallContext callCtx) { + // clear all slices + this.store.deleteAll(); + } + + /** {@inheritDoc} */ + @Override + public @Nullable PolarisBaseEntity lookupEntity( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId) { + return this.store.getSliceEntities().read(this.store.buildKeyComposite(catalogId, entityId)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List lookupEntities( + @NotNull PolarisCallContext callCtx, List entityIds) { + // allocate return list + return entityIds.stream() + .map( + id -> + this.store + .getSliceEntities() + .read(this.store.buildKeyComposite(id.getCatalogId(), id.getId()))) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} */ + @Override + public int lookupEntityVersion( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId) { + PolarisBaseEntity baseEntity = + this.store + .getSliceEntitiesChangeTracking() + 
.read(this.store.buildKeyComposite(catalogId, entityId)); + + return baseEntity == null ? 0 : baseEntity.getEntityVersion(); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List lookupEntityVersions( + @NotNull PolarisCallContext callCtx, List entityIds) { + // allocate return list + return entityIds.stream() + .map( + id -> + this.store + .getSliceEntitiesChangeTracking() + .read(this.store.buildKeyComposite(id.getCatalogId(), id.getId()))) + .map( + entity -> + (entity != null) + ? new PolarisChangeTrackingVersions( + entity.getEntityVersion(), entity.getGrantRecordsVersion()) + : null) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} */ + @Override + @Nullable + public PolarisEntityActiveRecord lookupEntityActive( + @NotNull PolarisCallContext callCtx, @NotNull PolarisEntitiesActiveKey entityActiveKey) { + // lookup the active entity slice + PolarisBaseEntity entity = + this.store + .getSliceEntitiesActive() + .read( + this.store.buildKeyComposite( + entityActiveKey.getCatalogId(), + entityActiveKey.getParentId(), + entityActiveKey.getTypeCode(), + entityActiveKey.getName())); + + // return record + return (entity == null) + ? 
null + : new PolarisEntityActiveRecord( + entity.getCatalogId(), + entity.getId(), + entity.getParentId(), + entity.getName(), + entity.getTypeCode(), + entity.getSubTypeCode()); + } + + /** {@inheritDoc} */ + @Override + @NotNull + public List lookupEntityActiveBatch( + @NotNull PolarisCallContext callCtx, + @NotNull List entityActiveKeys) { + // now build a list to quickly verify that nothing has changed + return entityActiveKeys.stream() + .map(entityActiveKey -> this.lookupEntityActive(callCtx, entityActiveKey)) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType) { + return listActiveEntities(callCtx, catalogId, parentId, entityType, Predicates.alwaysTrue()); + } + + @Override + public @NotNull List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull Predicate entityFilter) { + // full range scan under the parent for that type + return listActiveEntities( + callCtx, + catalogId, + parentId, + entityType, + Integer.MAX_VALUE, + entityFilter, + entity -> + new PolarisEntityActiveRecord( + entity.getCatalogId(), + entity.getId(), + entity.getParentId(), + entity.getName(), + entity.getTypeCode(), + entity.getSubTypeCode())); + } + + @Override + public @NotNull List listActiveEntities( + @NotNull PolarisCallContext callCtx, + long catalogId, + long parentId, + @NotNull PolarisEntityType entityType, + int limit, + @NotNull Predicate entityFilter, + @NotNull Function transformer) { + // full range scan under the parent for that type + return this.store + .getSliceEntitiesActive() + .readRange(this.store.buildPrefixKeyComposite(catalogId, parentId, entityType.getCode())) + .stream() + .filter(entityFilter) + .limit(limit) + .map(transformer) + .collect(Collectors.toList()); + } + + /** {@inheritDoc} 
*/ + @Override + public boolean hasChildren( + @NotNull PolarisCallContext callContext, + @Nullable PolarisEntityType entityType, + long catalogId, + long parentId) { + // determine key prefix, add type if one is passed-in + String prefixKey = + entityType == null + ? this.store.buildPrefixKeyComposite(catalogId, parentId) + : this.store.buildPrefixKeyComposite(catalogId, parentId, entityType.getCode()); + // check if it has children + return !this.store.getSliceEntitiesActive().readRange(prefixKey).isEmpty(); + } + + /** {@inheritDoc} */ + @Override + public int lookupEntityGrantRecordsVersion( + @NotNull PolarisCallContext callCtx, long catalogId, long entityId) { + PolarisBaseEntity entity = + this.store + .getSliceEntitiesChangeTracking() + .read(this.store.buildKeyComposite(catalogId, entityId)); + + // return 0 if the entity does not exist + return entity == null ? 0 : entity.getGrantRecordsVersion(); + } + + /** {@inheritDoc} */ + @Override + public @Nullable PolarisGrantRecord lookupGrantRecord( + @NotNull PolarisCallContext callCtx, + long securableCatalogId, + long securableId, + long granteeCatalogId, + long granteeId, + int privilegeCode) { + // lookup the grant records slice to find the matching grant record + return this.store + .getSliceGrantRecords() + .read( + this.store.buildKeyComposite( + securableCatalogId, securableId, granteeCatalogId, granteeId, privilegeCode)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List loadAllGrantRecordsOnSecurable( + @NotNull PolarisCallContext callCtx, long securableCatalogId, long securableId) { + // now fetch all grants for this securable + return this.store + .getSliceGrantRecords() + .readRange(this.store.buildPrefixKeyComposite(securableCatalogId, securableId)); + } + + /** {@inheritDoc} */ + @Override + public @NotNull List loadAllGrantRecordsOnGrantee( + @NotNull PolarisCallContext callCtx, long granteeCatalogId, long granteeId) { + // now fetch all grants assigned to this grantee + return this.store +
.getSliceGrantRecordsByGrantee() + .readRange(this.store.buildPrefixKeyComposite(granteeCatalogId, granteeId)); + } + + /** {@inheritDoc} */ + @Override + public @Nullable PolarisPrincipalSecrets loadPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId) { + return this.store.getSlicePrincipalSecrets().read(clientId); + } + + /** {@inheritDoc} */ + @Override + public @NotNull PolarisPrincipalSecrets generateNewPrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String principalName, long principalId) { + // ensure principal client id is unique + PolarisPrincipalSecrets principalSecrets; + PolarisPrincipalSecrets lookupPrincipalSecrets; + do { + // generate new random client id and secrets + principalSecrets = new PolarisPrincipalSecrets(principalId); + + // load the existing secrets + lookupPrincipalSecrets = + this.store.getSlicePrincipalSecrets().read(principalSecrets.getPrincipalClientId()); + } while (lookupPrincipalSecrets != null); + + // write new principal secrets + this.store.getSlicePrincipalSecrets().write(principalSecrets); + + // return the newly generated secrets + return principalSecrets; + } + + /** {@inheritDoc} */ + @Override + public @NotNull PolarisPrincipalSecrets rotatePrincipalSecrets( + @NotNull PolarisCallContext callCtx, + @NotNull String clientId, + long principalId, + @NotNull String mainSecretToRotate, + boolean reset) { + + // load the existing secrets + PolarisPrincipalSecrets principalSecrets = this.store.getSlicePrincipalSecrets().read(clientId); + + // should be found + callCtx + .getDiagServices() + .checkNotNull( + principalSecrets, + "cannot_find_secrets", + "client_id={} principalId={}", + clientId, + principalId); + + // ensure the principal id matches + callCtx + .getDiagServices() + .check( + principalId == principalSecrets.getPrincipalId(), + "principal_id_mismatch", + "expectedId={} id={}", + principalId, + principalSecrets.getPrincipalId()); + + // rotate the secrets +
principalSecrets.rotateSecrets(mainSecretToRotate); + if (reset) { + principalSecrets.rotateSecrets(principalSecrets.getMainSecret()); + } + + // write back new secrets + this.store.getSlicePrincipalSecrets().write(principalSecrets); + + // return those + return principalSecrets; + } + + /** {@inheritDoc} */ + @Override + public void deletePrincipalSecrets( + @NotNull PolarisCallContext callCtx, @NotNull String clientId, long principalId) { + // load the existing secrets + PolarisPrincipalSecrets principalSecrets = this.store.getSlicePrincipalSecrets().read(clientId); + + // should be found + callCtx + .getDiagServices() + .checkNotNull( + principalSecrets, + "cannot_find_secrets", + "client_id={} principalId={}", + clientId, + principalId); + + // ensure principal id is matching + callCtx + .getDiagServices() + .check( + principalId == principalSecrets.getPrincipalId(), + "principal_id_mismatch", + "expectedId={} id={}", + principalId, + principalSecrets.getPrincipalId()); + + // delete these secrets + this.store.getSlicePrincipalSecrets().delete(clientId); + } + + /** {@inheritDoc} */ + @Override + public @Nullable + PolarisStorageIntegration createStorageIntegration( + @NotNull PolarisCallContext callCtx, + long catalogId, + long entityId, + PolarisStorageConfigurationInfo polarisStorageConfigurationInfo) { + return storageIntegrationProvider.getStorageIntegrationForConfig( + polarisStorageConfigurationInfo); + } + + /** {@inheritDoc} */ + @Override + public @Nullable + PolarisStorageIntegration loadPolarisStorageIntegration( + @NotNull PolarisCallContext callCtx, @NotNull PolarisBaseEntity entity) { + PolarisStorageConfigurationInfo storageConfig = + PolarisMetaStoreManagerImpl.readStorageConfiguration(callCtx, entity); + return storageIntegrationProvider.getStorageIntegrationForConfig(storageConfig); + } + + @Override + public void rollback() { + this.store.rollback(); + } +} diff --git 
a/polaris-core/src/main/java/io/polaris/core/persistence/PolarisTreeMapStore.java b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisTreeMapStore.java new file mode 100644 index 0000000000..366c04c277 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/PolarisTreeMapStore.java @@ -0,0 +1,555 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import java.util.ArrayList; +import java.util.List; +import java.util.TreeMap; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Function; +import java.util.function.Supplier; +import org.jetbrains.annotations.NotNull; + +/** Implements a simple in-memory store for Polaris, using tree-map */ +public class PolarisTreeMapStore { + + /** Slice of data, simple KV store. 
*/ + public class Slice<T> { + // main KV slice + private final TreeMap<String, T> slice; + + // if we need to rollback + private final TreeMap<String, T> undoSlice; + + // the key builder + private final Function<T, String> buildKey; + + // the record copier + private final Function<T, T> copyRecord; + + private Slice(Function<T, String> buildKey, Function<T, T> copyRecord) { + this.slice = new TreeMap<>(); + this.undoSlice = new TreeMap<>(); + this.buildKey = buildKey; + this.copyRecord = copyRecord; + } + + public String buildKey(T value) { + return this.buildKey.apply(value); + } + + /** + * read a value in the slice, will return null if not found + * + *

TODO: return a copy of each object to avoid mutating the records + * + * @param key key for that value + */ + public T read(String key) { + PolarisTreeMapStore.this.ensureReadTr(); + T value = this.slice.getOrDefault(key, null); + return (value != null) ? this.copyRecord.apply(value) : null; + } + + /** + * read a range of values in the slice corresponding to a key prefix + * + * @param prefix key prefix + */ + public List readRange(String prefix) { + PolarisTreeMapStore.this.ensureReadTr(); + // end of the key + String endKey = + prefix.substring(0, prefix.length() - 1) + + (char) (prefix.charAt(prefix.length() - 1) + 1); + + // Get the sub-map with keys in the range [prefix, endKey) + return new ArrayList<>(slice.subMap(prefix, true, endKey, false).values()); + } + + /** + * write a value in the slice + * + * @param value value to write + */ + public void write(T value) { + PolarisTreeMapStore.this.ensureReadWriteTr(); + T valueToWrite = (value != null) ? this.copyRecord.apply(value) : null; + String key = this.buildKey(valueToWrite); + // write undo if needs be + if (!this.undoSlice.containsKey(key)) { + this.undoSlice.put(key, this.slice.getOrDefault(key, null)); + } + this.slice.put(key, valueToWrite); + } + + /** + * delete the specified record from the slice + * + * @param key key for the record to remove + */ + public void delete(String key) { + PolarisTreeMapStore.this.ensureReadWriteTr(); + if (slice.containsKey(key)) { + // write undo if needs be + if (!this.undoSlice.containsKey(key)) { + this.undoSlice.put(key, this.slice.getOrDefault(key, null)); + } + this.slice.remove(key); + } + } + + /** + * delete range of values + * + * @param prefix key prefix for the record to remove + */ + public void deleteRange(String prefix) { + PolarisTreeMapStore.this.ensureReadWriteTr(); + List elements = this.readRange(prefix); + for (T element : elements) { + this.delete(element); + } + } + + void deleteAll() { + PolarisTreeMapStore.this.ensureReadWriteTr(); + 
slice.clear(); + undoSlice.clear(); + } + + /** + * delete the specified record from the slice + * + * @param value value to remove + */ + public void delete(T value) { + this.delete(this.buildKey(value)); + } + + /** Rollback all changes made to this slice since transaction started */ + private void rollback() { + PolarisTreeMapStore.this.ensureReadWriteTr(); + undoSlice.forEach( + (key, value) -> { + if (value == null) { + slice.remove(key); + } else { + slice.put(key, value); + } + }); + } + + private void startWriteTransaction() { + undoSlice.clear(); + } + } + + /** Transaction on the tree-map store */ + private static class Transaction { + // if true, we have opened a read/write transaction + private final boolean isWrite; + + /** Constructor */ + private Transaction(boolean isWrite) { + this.isWrite = isWrite; + } + + public boolean isWrite() { + return isWrite; + } + } + + // synchronization lock to ensure that only one transaction can be started + private final Object lock; + + // transaction which was started, will be null if no transaction started + private Transaction tr; + + // diagnostic services + private PolarisDiagnostics diagnosticServices; + + // all entities + private final Slice<PolarisBaseEntity> sliceEntities; + + // all active entities + private final Slice<PolarisBaseEntity> sliceEntitiesActive; + + // all entities dropped + private final Slice<PolarisBaseEntity> sliceEntitiesDropped; + + // dropped entities to purge + private final Slice<PolarisBaseEntity> sliceEntitiesDroppedToPurge; + + // change tracking for all entities + private final Slice<PolarisBaseEntity> sliceEntitiesChangeTracking; + + // all grant records indexed by securable + private final Slice<PolarisGrantRecord> sliceGrantRecords; + + // all grant records indexed by grantees + private final Slice<PolarisGrantRecord> sliceGrantRecordsByGrantee; + + // slice to store principal secrets + private final Slice<PolarisPrincipalSecrets> slicePrincipalSecrets; + + // next id generator + private final AtomicLong nextId = new AtomicLong(); + + /** + * Constructor, allocate everything at once + * + * @param diagnostics diagnostic services + */ + public
PolarisTreeMapStore(@NotNull PolarisDiagnostics diagnostics) { + + // the entities slice + this.sliceEntities = + new Slice<>( + entity -> String.format("%d::%d", entity.getCatalogId(), entity.getId()), + PolarisBaseEntity::new); + + // the entities active slice + this.sliceEntitiesActive = new Slice<>(this::buildEntitiesActiveKey, PolarisBaseEntity::new); + + // the entities dropped slice + this.sliceEntitiesDropped = + new Slice<>( + entity -> + String.format( + "%d::%d::%s::%d::%d::%d", + entity.getCatalogId(), + entity.getParentId(), + entity.getName(), + entity.getTypeCode(), + entity.getSubTypeCode(), + entity.getDropTimestamp()), + PolarisBaseEntity::new); + + // the entities dropped-to-purge slice + this.sliceEntitiesDroppedToPurge = + new Slice<>( + entity -> + String.format( + "%d::%d::%s", + entity.getToPurgeTimestamp(), entity.getCatalogId(), entity.getId()), + PolarisBaseEntity::new); + + // change tracking + this.sliceEntitiesChangeTracking = + new Slice<>( + entity -> String.format("%d::%d", entity.getCatalogId(), entity.getId()), + PolarisBaseEntity::new); + + // grant records by securable + this.sliceGrantRecords = + new Slice<>( + grantRecord -> + String.format( + "%d::%d::%d::%d::%d", + grantRecord.getSecurableCatalogId(), + grantRecord.getSecurableId(), + grantRecord.getGranteeCatalogId(), + grantRecord.getGranteeId(), + grantRecord.getPrivilegeCode()), + PolarisGrantRecord::new); + + // grant records by grantee + this.sliceGrantRecordsByGrantee = + new Slice<>( + grantRecord -> + String.format( + "%d::%d::%d::%d::%d", + grantRecord.getGranteeCatalogId(), + grantRecord.getGranteeId(), + grantRecord.getSecurableCatalogId(), + grantRecord.getSecurableId(), + grantRecord.getPrivilegeCode()), + PolarisGrantRecord::new); + + // principal secrets + slicePrincipalSecrets = + new Slice<>( + principalSecrets -> String.format("%s", principalSecrets.getPrincipalClientId()), + PolarisPrincipalSecrets::new); + + // no transaction open yet + this.diagnosticServices =
diagnostics; + this.tr = null; + this.lock = new Object(); + } + + /** + * Key for the entities_active slice + * + * @param coreEntity core entity + * @return the key + */ + String buildEntitiesActiveKey(PolarisEntityCore coreEntity) { + return String.format( + "%d::%d::%d::%s", + coreEntity.getCatalogId(), + coreEntity.getParentId(), + coreEntity.getTypeCode(), + coreEntity.getName()); + } + + /** + * Key for the entities slice + * + * @param coreEntity core entity + * @return the key + */ + String buildEntitiesKey(PolarisEntityCore coreEntity) { + return String.format("%d::%d", coreEntity.getCatalogId(), coreEntity.getId()); + } + + /** + * Build key from a set of value pairs + * + * @param keys string/long/integer values + * @return unique string identifier + */ + String buildKeyComposite(Object... keys) { + StringBuilder result = new StringBuilder(); + for (Object key : keys) { + if (result.length() != 0) { + result.append("::"); + } + result.append(key.toString()); + } + return result.toString(); + } + + /** + * Build prefix key from a set of value pairs; prefix key will end with the key separator + * + * @param keys string/long/integer values + * @return unique string identifier + */ + String buildPrefixKeyComposite(Object... 
keys) { + StringBuilder result = new StringBuilder(); + for (Object key : keys) { + result.append(key.toString()); + result.append("::"); + } + return result.toString(); + } + + /** Start a read transaction */ + private void startReadTransaction() { + this.diagnosticServices.check(this.tr == null, "cannot nest transaction"); + this.tr = new Transaction(false); + } + + /** Start a write transaction */ + private void startWriteTransaction() { + this.diagnosticServices.check(this.tr == null, "cannot nest transaction"); + this.tr = new Transaction(true); + this.sliceEntities.startWriteTransaction(); + this.sliceEntitiesActive.startWriteTransaction(); + this.sliceEntitiesDropped.startWriteTransaction(); + this.sliceEntitiesDroppedToPurge.startWriteTransaction(); + this.sliceEntitiesChangeTracking.startWriteTransaction(); + this.sliceGrantRecords.startWriteTransaction(); + this.sliceGrantRecordsByGrantee.startWriteTransaction(); + this.slicePrincipalSecrets.startWriteTransaction(); + } + + /** Rollback transaction */ + void rollback() { + this.sliceEntities.rollback(); + this.sliceEntitiesActive.rollback(); + this.sliceEntitiesDropped.rollback(); + this.sliceEntitiesDroppedToPurge.rollback(); + this.sliceEntitiesChangeTracking.rollback(); + this.sliceGrantRecords.rollback(); + this.sliceGrantRecordsByGrantee.rollback(); + this.slicePrincipalSecrets.rollback(); + } + + /** Ensure that a read/write transaction has been started */ + public void ensureReadWriteTr() { + this.diagnosticServices.check( + this.tr != null && this.tr.isWrite(), "no_write_transaction_started"); + } + + /** Ensure that a read transaction has been started */ + private void ensureReadTr() { + this.diagnosticServices.checkNotNull(this.tr, "no_read_transaction_started"); + } + + /** + * Run inside a read/write transaction + * + * @param callCtx call context to use + * @param transactionCode transaction code + * @return the result of the execution + */ + public <T> T runInTransaction( + @NotNull
PolarisCallContext callCtx, @NotNull Supplier transactionCode) { + + synchronized (lock) { + // execute transaction + try { + // init diagnostic services + this.diagnosticServices = callCtx.getDiagServices(); + this.startWriteTransaction(); + return transactionCode.get(); + } catch (Throwable e) { + this.rollback(); + throw e; + } finally { + this.tr = null; + this.diagnosticServices = null; + } + } + } + + /** + * Run inside a read/write transaction + * + * @param callCtx call context to use + * @param transactionCode transaction code + */ + public void runActionInTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode) { + + synchronized (lock) { + + // execute transaction + try { + // init diagnostic services + this.diagnosticServices = callCtx.getDiagServices(); + this.startWriteTransaction(); + transactionCode.run(); + } catch (Throwable e) { + this.rollback(); + throw e; + } finally { + this.tr = null; + this.diagnosticServices = null; + } + } + } + + /** + * Run inside a read only transaction + * + * @param callCtx call context to use + * @param transactionCode transaction code + * @return the result of the execution + */ + public T runInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Supplier transactionCode) { + synchronized (lock) { + + // execute transaction + try { + // init diagnostic services + this.diagnosticServices = callCtx.getDiagServices(); + this.startReadTransaction(); + return transactionCode.get(); + } finally { + this.tr = null; + this.diagnosticServices = null; + } + } + } + + /** + * Run inside a read only transaction + * + * @param callCtx call context to use + * @param transactionCode transaction code + */ + public void runActionInReadTransaction( + @NotNull PolarisCallContext callCtx, @NotNull Runnable transactionCode) { + synchronized (lock) { + + // execute transaction + try { + // init diagnostic services + this.diagnosticServices = callCtx.getDiagServices(); + 
this.startReadTransaction(); + transactionCode.run(); + } finally { + this.tr = null; + this.diagnosticServices = null; + } + } + } + + public Slice getSliceEntities() { + return sliceEntities; + } + + public Slice getSliceEntitiesActive() { + return sliceEntitiesActive; + } + + public Slice getSliceEntitiesDropped() { + return sliceEntitiesDropped; + } + + public Slice getSliceEntitiesDroppedToPurge() { + return sliceEntitiesDroppedToPurge; + } + + public Slice getSliceEntitiesChangeTracking() { + return sliceEntitiesChangeTracking; + } + + public Slice getSliceGrantRecords() { + return sliceGrantRecords; + } + + public Slice getSliceGrantRecordsByGrantee() { + return sliceGrantRecordsByGrantee; + } + + public Slice getSlicePrincipalSecrets() { + return slicePrincipalSecrets; + } + + /** + * Next sequence number generator + * + * @return next id, must be in a read/write transaction + */ + public long getNextSequence() { + return this.nextId.incrementAndGet(); + } + + /** Clear all slices from data */ + void deleteAll() { + this.ensureReadWriteTr(); + this.sliceEntities.deleteAll(); + this.sliceEntitiesActive.deleteAll(); + this.sliceEntitiesDropped.deleteAll(); + this.sliceEntitiesDroppedToPurge.deleteAll(); + this.sliceEntitiesChangeTracking.deleteAll(); + this.sliceGrantRecordsByGrantee.deleteAll(); + this.sliceGrantRecords.deleteAll(); + this.slicePrincipalSecrets.deleteAll(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/ResolvedPolarisEntity.java b/polaris-core/src/main/java/io/polaris/core/persistence/ResolvedPolarisEntity.java new file mode 100644 index 0000000000..f5b5c674c5 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/ResolvedPolarisEntity.java @@ -0,0 +1,78 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import com.google.common.collect.ImmutableList; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.persistence.cache.EntityCacheEntry; +import java.util.List; + +public class ResolvedPolarisEntity { + private final PolarisEntity entity; + + // only non-empty if this entity can be a grantee; these are the grants on other + // roles/securables granted to this entity. + private final List grantRecordsAsGrantee; + + // grants associated to this entity as the securable; for a principal role or catalog role + // these may be ROLE_USAGE or other permission-management privileges. For a catalog securable, + // these are the grants like TABLE_READ_PROPERTIES, NAMESPACE_LIST, etc. + private final List grantRecordsAsSecurable; + + public ResolvedPolarisEntity( + PolarisEntity entity, + List grantRecordsAsGrantee, + List grantRecordsAsSecurable) { + this.entity = entity; + // TODO: Precondition checks that grantee or securable ids in grant records match entity as + // expected. 
+ this.grantRecordsAsGrantee = grantRecordsAsGrantee; + this.grantRecordsAsSecurable = grantRecordsAsSecurable; + } + + public ResolvedPolarisEntity(EntityCacheEntry cacheEntry) { + this.entity = PolarisEntity.of(cacheEntry.getEntity()); + this.grantRecordsAsGrantee = ImmutableList.copyOf(cacheEntry.getGrantRecordsAsGrantee()); + this.grantRecordsAsSecurable = ImmutableList.copyOf(cacheEntry.getGrantRecordsAsSecurable()); + } + + public PolarisEntity getEntity() { + return entity; + } + + /** The grant records associated with this entity being the grantee of the record. */ + public List getGrantRecordsAsGrantee() { + return grantRecordsAsGrantee; + } + + /** The grant records associated with this entity being the securable of the record. */ + public List getGrantRecordsAsSecurable() { + return grantRecordsAsSecurable; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("entity:"); + sb.append(entity); + sb.append(";grantRecordsAsGrantee:"); + sb.append(grantRecordsAsGrantee); + sb.append(";grantRecordsAsSecurable:"); + sb.append(grantRecordsAsSecurable); + return sb.toString(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/RetryOnConcurrencyException.java b/polaris-core/src/main/java/io/polaris/core/persistence/RetryOnConcurrencyException.java new file mode 100644 index 0000000000..d7081c0208 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/RetryOnConcurrencyException.java @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import com.google.errorprone.annotations.FormatMethod; + +/** Exception raised when the data is accessed concurrently with conflict. */ +public class RetryOnConcurrencyException extends RuntimeException { + @FormatMethod + public RetryOnConcurrencyException(String message, Object... args) { + super(String.format(message, args)); + } + + @FormatMethod + public RetryOnConcurrencyException(Throwable cause, String message, Object... args) { + super(String.format(message, args), cause); + } + + public RetryOnConcurrencyException(Throwable cause) { + super(cause); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCache.java b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCache.java new file mode 100644 index 0000000000..85be4d4860 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCache.java @@ -0,0 +1,467 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.cache; + +import com.github.benmanes.caffeine.cache.Cache; +import com.github.benmanes.caffeine.cache.Caffeine; +import com.github.benmanes.caffeine.cache.RemovalListener; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.AbstractMap; +import java.util.List; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** The entity cache, can be private or shared */ +public class EntityCache { + + // cache mode + private EntityCacheMode cacheMode; + + // the meta store manager + private final PolarisMetaStoreManager metaStoreManager; + + // Caffeine cache to keep entries by id + private final Cache<Long, EntityCacheEntry> byId; + + // index by name + private final AbstractMap<EntityCacheByNameKey, EntityCacheEntry> byName; + + /** + * Constructor. Cache can be private or shared + * + * @param metaStoreManager the meta store manager implementation + */ + public EntityCache(@NotNull PolarisMetaStoreManager metaStoreManager) { + + // by name cache + this.byName = new ConcurrentHashMap<>(); + + // When an entry is removed, we simply remove it from the byName map + RemovalListener<Long, EntityCacheEntry> removalListener = + (key, value, cause) -> { + if (value != null) { + // compute name key + EntityCacheByNameKey nameKey = new EntityCacheByNameKey(value.getEntity()); + + // if it is still active, remove it from the name key + this.byName.remove(nameKey, value); + } + }; + + // use a Caffeine cache to purge entries when they have not been used for a long time. + // Assuming 1KB per entry, 100K entries is about 100MB.
+ this.byId = + Caffeine.newBuilder() + .maximumSize(100_000) // Set maximum size to 100,000 elements + .expireAfterAccess(1, TimeUnit.HOURS) // Expire entries after 1 hour of no access + .removalListener(removalListener) // Set the removal listener + .build(); + + // remember the meta store manager + this.metaStoreManager = metaStoreManager; + + // enabled by default + this.cacheMode = EntityCacheMode.ENABLE; + } + + /** + * Remove the specified cache entry from the cache + * + * @param cacheEntry cache entry to remove + */ + public void removeCacheEntry(@NotNull EntityCacheEntry cacheEntry) { + // compute name key + EntityCacheByNameKey nameKey = new EntityCacheByNameKey(cacheEntry.getEntity()); + + // remove this old entry, this will immediately remove the named entry + this.byId.asMap().remove(cacheEntry.getEntity().getId(), cacheEntry); + + // remove it from the name key + this.byName.remove(nameKey, cacheEntry); + } + + /** + * Cache new entry + * + * @param cacheEntry new cache entry + */ + private void cacheNewEntry(@NotNull EntityCacheEntry cacheEntry) { + + // compute name key + EntityCacheByNameKey nameKey = new EntityCacheByNameKey(cacheEntry.getEntity()); + + // get old value if one exist + EntityCacheEntry oldCacheEntry = this.byId.getIfPresent(cacheEntry.getEntity().getId()); + + // put new entry, only if really newer one + this.byId + .asMap() + .merge( + cacheEntry.getEntity().getId(), + cacheEntry, + (oldValue, newValue) -> this.isNewer(newValue, oldValue) ? newValue : oldValue); + + // only update the name key if this entity was not dropped + if (!cacheEntry.getEntity().isDropped()) { + // here we don't really care about concurrent update to the key. 
Basically if we are + // pointing to the wrong entry, we will detect this and fix the issue + this.byName.put(nameKey, cacheEntry); + } + + // remove old name if it has changed + if (oldCacheEntry != null) { + // old name + EntityCacheByNameKey oldNameKey = new EntityCacheByNameKey(oldCacheEntry.getEntity()); + if (!oldNameKey.equals(nameKey)) { + this.byName.remove(oldNameKey, oldCacheEntry); + } + } + } + + /** + * Determine if the newer value is really newer + * + * @param newValue new cache entry + * @param oldValue old cache entry + * @return true if newValue is strictly newer than oldValue + */ + private boolean isNewer(EntityCacheEntry newValue, EntityCacheEntry oldValue) { + return (newValue.getEntity().getEntityVersion() > oldValue.getEntity().getEntityVersion() + || newValue.getEntity().getGrantRecordsVersion() + > oldValue.getEntity().getGrantRecordsVersion()); + } + + /** + * Replace an old entry with a new one + * + * @param oldCacheEntry old entry + * @param newCacheEntry new entry + */ + private void replaceCacheEntry( + @Nullable EntityCacheEntry oldCacheEntry, @NotNull EntityCacheEntry newCacheEntry) { + + // do we need to remove the old entry?
+ if (oldCacheEntry != null) { + // only replace if there is a difference + if (this.entityNameKeyMismatch(oldCacheEntry.getEntity(), newCacheEntry.getEntity()) + || oldCacheEntry.getEntity().getEntityVersion() + < newCacheEntry.getEntity().getEntityVersion() + || oldCacheEntry.getEntity().getGrantRecordsVersion() + < newCacheEntry.getEntity().getGrantRecordsVersion()) { + // write new one + this.cacheNewEntry(newCacheEntry); + + // delete the old one assuming it has not been replaced by the above new entry + this.removeCacheEntry(oldCacheEntry); + } else { + oldCacheEntry.updateLastAccess(); + } + } else { + // write new one + this.cacheNewEntry(newCacheEntry); + } + } + + /** + * Check if two entities have different cache keys (either by id or by name) + * + * @param entity the entity + * @param otherEntity the other entity + * @return true if there is a mismatch + */ + private boolean entityNameKeyMismatch( + @NotNull PolarisBaseEntity entity, @NotNull PolarisBaseEntity otherEntity) { + return entity.getId() != otherEntity.getId() + || entity.getParentId() != otherEntity.getParentId() + || !entity.getName().equals(otherEntity.getName()) + || entity.getTypeCode() != otherEntity.getTypeCode(); + } + + /** + * Get the current cache mode + * + * @return the cache mode + */ + public EntityCacheMode getCacheMode() { + return cacheMode; + } + + /** + * Allows to change the caching mode for testing + * + * @param cacheMode the cache mode + */ + public void setCacheMode(EntityCacheMode cacheMode) { + this.cacheMode = cacheMode; + } + + /** + * Get a cache entity entry given the id of the entity + * + * @param entityId entity id + * @return the cache entry or null if not found + */ + public @Nullable EntityCacheEntry getEntityById(long entityId) { + return byId.getIfPresent(entityId); + } + + /** + * Get a cache entity entry given the name key of the entity + * + * @param entityNameKey entity name key + * @return the cache entry or null if not found + */ + public 
@Nullable EntityCacheEntry getEntityByName(@NotNull EntityCacheByNameKey entityNameKey) { + return byName.get(entityNameKey); + } + + /** + * Refresh the cache if needs be with a version of the entity/grant records matching the minimum + * specified version. + * + * @param callContext the Polaris call context + * @param entityToValidate copy of the entity held by the caller to validate + * @param entityMinVersion minimum expected version. Should be reloaded if found in a cache with a + * version less than this one + * @param entityGrantRecordsMinVersion minimum grant records version which is expected, grants + * records should be reloaded if needed + * @return the cache entry for the entity or null if the specified entity does not exist + */ + public @Nullable EntityCacheEntry getAndRefreshIfNeeded( + @NotNull PolarisCallContext callContext, + @NotNull PolarisBaseEntity entityToValidate, + int entityMinVersion, + int entityGrantRecordsMinVersion) { + long entityCatalogId = entityToValidate.getCatalogId(); + long entityId = entityToValidate.getId(); + PolarisEntityType entityType = entityToValidate.getType(); + + // first lookup the cache to find the existing cache entry + EntityCacheEntry existingCacheEntry = this.getEntityById(entityId); + + // the caller's fetched entity may have come from a stale lookup byName; we should consider + // the existingCacheEntry to be the older of the two for purposes of invalidation to make + // sure when we replaceCacheEntry we're also removing the old name if it's no longer valid + EntityCacheByNameKey nameKey = new EntityCacheByNameKey(entityToValidate); + EntityCacheEntry existingCacheEntryByName = this.getEntityByName(nameKey); + if (existingCacheEntryByName != null + && existingCacheEntry != null + && isNewer(existingCacheEntry, existingCacheEntryByName)) { + existingCacheEntry = existingCacheEntryByName; + } + + // the new one to be returned + final EntityCacheEntry newCacheEntry; + + // see if we need to load or refresh that 
entity + if (existingCacheEntry == null + || existingCacheEntry.getEntity().getEntityVersion() < entityMinVersion + || existingCacheEntry.getEntity().getGrantRecordsVersion() < entityGrantRecordsMinVersion) { + + // the refreshed entity + final PolarisMetaStoreManager.CachedEntryResult refreshedCacheEntry; + + // was not found in the cache? + final PolarisBaseEntity entity; + final List<PolarisGrantRecord> grantRecords; + final int grantRecordsVersion; + if (existingCacheEntry == null) { + // try to load it + refreshedCacheEntry = + this.metaStoreManager.loadCachedEntryById(callContext, entityCatalogId, entityId); + if (refreshedCacheEntry.isSuccess()) { + entity = refreshedCacheEntry.getEntity(); + grantRecords = refreshedCacheEntry.getEntityGrantRecords(); + grantRecordsVersion = refreshedCacheEntry.getGrantRecordsVersion(); + } else { + return null; + } + } else { + // refresh it + refreshedCacheEntry = + this.metaStoreManager.refreshCachedEntity( + callContext, + existingCacheEntry.getEntity().getEntityVersion(), + existingCacheEntry.getEntity().getGrantRecordsVersion(), + entityType, + entityCatalogId, + entityId); + if (refreshedCacheEntry.isSuccess()) { + entity = + (refreshedCacheEntry.getEntity() != null) + ? 
refreshedCacheEntry.getEntity() + : existingCacheEntry.getEntity(); + if (refreshedCacheEntry.getEntityGrantRecords() != null) { + grantRecords = refreshedCacheEntry.getEntityGrantRecords(); + grantRecordsVersion = refreshedCacheEntry.getGrantRecordsVersion(); + } else { + grantRecords = existingCacheEntry.getAllGrantRecords(); + grantRecordsVersion = existingCacheEntry.getEntity().getGrantRecordsVersion(); + } + } else { + // entity has been purged, remove it + this.removeCacheEntry(existingCacheEntry); + return null; + } + } + + // assert that entity, grant records and version are all set + callContext.getDiagServices().checkNotNull(entity, "unexpected_null_entity"); + callContext.getDiagServices().checkNotNull(grantRecords, "unexpected_null_grant_records"); + callContext + .getDiagServices() + .check(grantRecordsVersion > 0, "unexpected_null_grant_records_version"); + + // create new cache entry + newCacheEntry = + new EntityCacheEntry( + callContext.getDiagServices(), + existingCacheEntry == null + ? System.nanoTime() + : existingCacheEntry.getCreatedOnNanoTimestamp(), + entity, + grantRecords, + grantRecordsVersion); + + // insert cache entry + this.replaceCacheEntry(existingCacheEntry, newCacheEntry); + } else { + // found it in the cache and it is up-to-date, simply return it + existingCacheEntry.updateLastAccess(); + newCacheEntry = existingCacheEntry; + } + + return newCacheEntry; + } + + /** + * Get the specified entity by id and load it if it is not found. + * + * @param callContext the Polaris call context + * @param entityCatalogId id of the catalog where this entity resides or NULL_ID if top-level + * @param entityId id of the entity to lookup + * @return null if the entity does not exist or was dropped. 
Else return the entry for that + * entity, either as found in the cache or loaded from the backend + */ + public @Nullable EntityCacheLookupResult getOrLoadEntityById( + @NotNull PolarisCallContext callContext, long entityCatalogId, long entityId) { + + // if it exists, we are set + EntityCacheEntry entry = this.getEntityById(entityId); + final boolean cacheHit; + + // we need to load it if it does not exist + if (entry == null) { + // this is a miss + cacheHit = false; + + // load it + PolarisMetaStoreManager.CachedEntryResult result = + metaStoreManager.loadCachedEntryById(callContext, entityCatalogId, entityId); + + // not found, exit + if (!result.isSuccess()) { + return null; + } + + // if found, setup entry + callContext.getDiagServices().checkNotNull(result.getEntity(), "entity_should_loaded"); + callContext + .getDiagServices() + .checkNotNull(result.getEntityGrantRecords(), "entity_grant_records_should_loaded"); + entry = + new EntityCacheEntry( + callContext.getDiagServices(), + System.nanoTime(), + result.getEntity(), + result.getEntityGrantRecords(), + result.getGrantRecordsVersion()); + + // the above loading could take a long time so check again if the entry exists and only + // insert it if it is still missing + this.cacheNewEntry(entry); + } else { + cacheHit = true; + } + + // return what we found + return new EntityCacheLookupResult(entry, cacheHit); + } + + /** + * Get the specified entity by name and load it if it is not found. + * + * @param callContext the Polaris call context + * @param entityNameKey name of the entity to load + * @return null if the entity does not exist or was dropped. 
Else return the entry for that + * entity, either as found in the cache or loaded from the backend + */ + public @Nullable EntityCacheLookupResult getOrLoadEntityByName( + @NotNull PolarisCallContext callContext, @NotNull EntityCacheByNameKey entityNameKey) { + + // if it exists, we are set + EntityCacheEntry entry = this.getEntityByName(entityNameKey); + final boolean cacheHit; + + // we need to load it if it does not exist + if (entry == null) { + // this is a miss + cacheHit = false; + + // load it + PolarisMetaStoreManager.CachedEntryResult result = + metaStoreManager.loadCachedEntryByName( + callContext, + entityNameKey.getCatalogId(), + entityNameKey.getParentId(), + entityNameKey.getType(), + entityNameKey.getName()); + + // not found, exit + if (!result.isSuccess()) { + return null; + } + + // validate return + callContext.getDiagServices().checkNotNull(result.getEntity(), "entity_should_loaded"); + callContext + .getDiagServices() + .checkNotNull(result.getEntityGrantRecords(), "entity_grant_records_should_loaded"); + + // if found, setup entry + entry = + new EntityCacheEntry( + callContext.getDiagServices(), + System.nanoTime(), + result.getEntity(), + result.getEntityGrantRecords(), + result.getGrantRecordsVersion()); + + // the above loading could take a long time so check again if the entry exists and only + // insert it if it is still missing + this.cacheNewEntry(entry); + } else { + cacheHit = true; + } + + // return what we found + return new EntityCacheLookupResult(entry, cacheHit); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheByNameKey.java b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheByNameKey.java new file mode 100644 index 0000000000..d41581a8d7 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheByNameKey.java @@ -0,0 +1,112 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.cache; + +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntityType; +import java.util.Objects; + +/** Key on the name of an entity */ +public class EntityCacheByNameKey { + + // id of the catalog where this entity resides + private final long catalogId; + + // id of the parent of that entity + private final long parentId; + + // entity type code + private final int typeCode; + + // entity name + private final String name; + + /** + * Constructor for a top-level service entity (principal, principal role or catalog) + * + * @param type entity type + * @param name name of that entity + */ + public EntityCacheByNameKey(PolarisEntityType type, String name) { + this.catalogId = PolarisEntityConstants.getNullId(); + this.parentId = PolarisEntityConstants.getRootEntityId(); + this.typeCode = type.getCode(); + this.name = name; + } + + /** + * Constructor for a non-top-level entity + * + * @param catalogId id of the catalog where this entity is located + * @param parentId id of the parent of this entity + * @param type entity type + * @param name name of that entity + */ + public EntityCacheByNameKey(long catalogId, long parentId, PolarisEntityType type, String name) { + this.catalogId = catalogId; + this.parentId = parentId; + this.typeCode = type.getCode(); + this.name = name; + } 
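The constructors above distinguish top-level service entities (null catalog id, root parent id) from catalog-scoped ones, and the resulting key is what the `byName` map hashes on. As a standalone sketch of how such a composite name key behaves as a map key (this uses a hypothetical `NameKey` record, not the Polaris class itself, and `0L` merely stands in for the null/root ids):

```java
import java.util.HashMap;
import java.util.Map;

class NameKeySketch {
  // Hypothetical stand-in for EntityCacheByNameKey: a record derives
  // equals/hashCode from all components, so two keys built independently
  // from the same (catalogId, parentId, typeCode, name) tuple collide.
  record NameKey(long catalogId, long parentId, int typeCode, String name) {}

  public static void main(String[] args) {
    Map<NameKey, String> byName = new HashMap<>();

    // 0L stands in for the null catalog id / root parent id of a top-level entity
    byName.put(new NameKey(0L, 0L, 1, "prod"), "catalog");

    // the same name under a catalog/namespace produces a distinct key
    byName.put(new NameKey(42L, 7L, 2, "prod"), "namespace");

    // an equal key built later still finds the original entry
    System.out.println(byName.get(new NameKey(0L, 0L, 1, "prod"))); // prints "catalog"
  }
}
```

This is why every field that participates in lookup identity must also participate in `equals` and `hashCode`: a key built from a freshly loaded entity has to hash to the same bucket as the key under which the entry was originally cached.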
+ + /** + * Constructor of a key from an existing base entity + * + * @param baseEntity base entity + */ + public EntityCacheByNameKey(PolarisBaseEntity baseEntity) { + this.catalogId = baseEntity.getCatalogId(); + this.parentId = baseEntity.getParentId(); + this.typeCode = baseEntity.getTypeCode(); + this.name = baseEntity.getName(); + } + + public long getCatalogId() { + return catalogId; + } + + public long getParentId() { + return parentId; + } + + public int getTypeCode() { + return typeCode; + } + + public PolarisEntityType getType() { + return PolarisEntityType.fromCode(typeCode); + } + + public String getName() { + return name; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + EntityCacheByNameKey that = (EntityCacheByNameKey) o; + return parentId == that.parentId + && typeCode == that.typeCode + && Objects.equals(name, that.name); + } + + @Override + public int hashCode() { + return Objects.hash(parentId, typeCode, name); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheEntry.java b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheEntry.java new file mode 100644 index 0000000000..294d835b74 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheEntry.java @@ -0,0 +1,123 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.cache; + +import com.google.common.collect.ImmutableList; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisGrantRecord; +import java.util.List; +import org.jetbrains.annotations.NotNull; + +/** An entry in our entity cache. Note, this is immutable except for the last accessed timestamp */ +public class EntityCacheEntry { + + // epoch time (ns) when the cache entry was added to the cache + private long createdOnNanoTimestamp; + + // epoch time (ns) when the cache entry was last accessed + private long lastAccessedNanoTimestamp; + + // the entity which has been cached. + private PolarisBaseEntity entity; + + // grants associated to this entity, for a principal, a principal role, or a catalog role these + // are role usage + // grants on that entity. For a catalog securable (i.e. a catalog, namespace, or table_like + // securable), these are + // the grants on this securable. + private List<PolarisGrantRecord> grantRecords; + + /** + * Constructor used when an entry is initially created after loading the entity and its grants + * from the backend. 
+ * + * @param diagnostics diagnostic services + * @param createdOnNanoTimestamp when the entity was created + * @param entity the entity which has just been loaded + * @param grantRecords associated grant records, including grants for this entity as a securable + * as well as grants for this entity as a grantee if applicable + * @param grantsVersion version of the grants when they were loaded + */ + EntityCacheEntry( + @NotNull PolarisDiagnostics diagnostics, + long createdOnNanoTimestamp, + @NotNull PolarisBaseEntity entity, + @NotNull List<PolarisGrantRecord> grantRecords, + int grantsVersion) { + // validate not null + diagnostics.checkNotNull(entity, "entity_null"); + diagnostics.checkNotNull(grantRecords, "grant_records_null"); + + // when this entry has been created + this.createdOnNanoTimestamp = createdOnNanoTimestamp; + + // last accessed time is now + this.lastAccessedNanoTimestamp = System.nanoTime(); + + // we copy all attributes of the entity to avoid any contamination + this.entity = new PolarisBaseEntity(entity); + + // if only the grant records have been reloaded because they were changed, the entity will + // have an old version for those. Patch the entity if this is the case, as if we had reloaded it + if (this.entity.getGrantRecordsVersion() != grantsVersion) { + // remember the grants versions. 
For now grants should be loaded after the entity, so expect + // grants version to be same or higher + diagnostics.check( + this.entity.getGrantRecordsVersion() <= grantsVersion, + "grants_version_going_backward", + "entity={} grantsVersion={}", + entity, + grantsVersion); + + // patch grant records version + this.entity.setGrantRecordsVersion(grantsVersion); + } + + // the grants + this.grantRecords = ImmutableList.copyOf(grantRecords); + } + + public long getCreatedOnNanoTimestamp() { + return createdOnNanoTimestamp; + } + + public long getLastAccessedNanoTimestamp() { + return lastAccessedNanoTimestamp; + } + + public @NotNull PolarisBaseEntity getEntity() { + return entity; + } + + public @NotNull List<PolarisGrantRecord> getAllGrantRecords() { + return grantRecords; + } + + public @NotNull List<PolarisGrantRecord> getGrantRecordsAsGrantee() { + return grantRecords.stream().filter(record -> record.getGranteeId() == entity.getId()).toList(); + } + + public @NotNull List<PolarisGrantRecord> getGrantRecordsAsSecurable() { + return grantRecords.stream() + .filter(record -> record.getSecurableId() == entity.getId()) + .toList(); + } + + public void updateLastAccess() { + this.lastAccessedNanoTimestamp = System.nanoTime(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheLookupResult.java b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheLookupResult.java new file mode 100644 index 0000000000..c8fe77f953 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheLookupResult.java @@ -0,0 +1,42 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.cache; + +import org.jetbrains.annotations.Nullable; + +/** Result of a lookup operation */ +public class EntityCacheLookupResult { + + // if not null, we found the entity and this is the entry. If not found, the entity was dropped or + // does not exist + private final @Nullable EntityCacheEntry cacheEntry; + + // true if the entity was found in the cache + private final boolean cacheHit; + + public EntityCacheLookupResult(@Nullable EntityCacheEntry cacheEntry, boolean cacheHit) { + this.cacheEntry = cacheEntry; + this.cacheHit = cacheHit; + } + + public @Nullable EntityCacheEntry getCacheEntry() { + return cacheEntry; + } + + public boolean isCacheHit() { + return cacheHit; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheMode.java b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheMode.java new file mode 100644 index 0000000000..f2addb9622 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/cache/EntityCacheMode.java @@ -0,0 +1,28 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.cache; + +/** Cache mode, the default is ENABLE. */ +public enum EntityCacheMode { + // bypass the cache, always load + BYPASS, + // enable the cache, this is the default + ENABLE, + // enable but verify that the cache content is consistent. Used in QA mode to detect when + // versioning information is + // not properly maintained + ENABLE_BUT_VERIFY +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntity.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntity.java new file mode 100644 index 0000000000..4b85a15867 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntity.java @@ -0,0 +1,301 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.models; + +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import jakarta.persistence.Column; +import jakarta.persistence.Entity; +import jakarta.persistence.Id; +import jakarta.persistence.Table; +import jakarta.persistence.Version; + +/** + * Entity model representing all attributes of a Polaris Entity. This is used to exchange full + * entity information with ENTITIES table + */ +@Entity +@Table(name = "ENTITIES") +public class ModelEntity { + // the id of the catalog associated to that entity. NULL_ID if this entity is top-level like + // a catalog + @Id private long catalogId; + + // the id of the entity which was resolved + @Id private long id; + + // the id of the parent of this entity, use 0 for a top-level entity whose parent is the account + private long parentId; + + // the type of the entity when it was resolved + private int typeCode; + + // the name that this entity had when it was resolved + private String name; + + // the version that this entity had when it was resolved + private int entityVersion; + + public static final String EMPTY_MAP_STRING = "{}"; + // the subtype of the entity when it was resolved + private int subTypeCode; + + // timestamp when this entity was created + private long createTimestamp; + + // when this entity was dropped. 0 if it was never dropped + private long dropTimestamp; + + // when did we start purging this entity. 
When not null, un-drop is no longer possible + private long purgeTimestamp; + + // when should we start purging this entity + private long toPurgeTimestamp; + + // last time this entity was updated, only for troubleshooting + private long lastUpdateTimestamp; + + // properties, serialized as a JSON string + @Column(length = 65535) + private String properties; + + // internal properties, serialized as a JSON string + @Column(length = 65535) + private String internalProperties; + + // current version for the grant records of that entity, will be monotonically incremented + private int grantRecordsVersion; + + // Used for Optimistic Locking to handle concurrent reads and updates + @Version private long version; + + public long getId() { + return id; + } + + public long getParentId() { + return parentId; + } + + public int getTypeCode() { + return typeCode; + } + + public String getName() { + return name; + } + + public int getEntityVersion() { + return entityVersion; + } + + public long getCatalogId() { + return catalogId; + } + + public int getSubTypeCode() { + return subTypeCode; + } + + public long getCreateTimestamp() { + return createTimestamp; + } + + public long getDropTimestamp() { + return dropTimestamp; + } + + public long getPurgeTimestamp() { + return purgeTimestamp; + } + + public long getToPurgeTimestamp() { + return toPurgeTimestamp; + } + + public long getLastUpdateTimestamp() { + return lastUpdateTimestamp; + } + + public String getProperties() { + return properties != null ? properties : EMPTY_MAP_STRING; + } + + public String getInternalProperties() { + return internalProperties != null ? 
internalProperties : EMPTY_MAP_STRING; + } + + public int getGrantRecordsVersion() { + return grantRecordsVersion; + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private final ModelEntity entity; + + private Builder() { + entity = new ModelEntity(); + } + + public Builder catalogId(long catalogId) { + entity.catalogId = catalogId; + return this; + } + + public Builder id(long id) { + entity.id = id; + return this; + } + + public Builder parentId(long parentId) { + entity.parentId = parentId; + return this; + } + + public Builder typeCode(int typeCode) { + entity.typeCode = typeCode; + return this; + } + + public Builder name(String name) { + entity.name = name; + return this; + } + + public Builder entityVersion(int entityVersion) { + entity.entityVersion = entityVersion; + return this; + } + + public Builder subTypeCode(int subTypeCode) { + entity.subTypeCode = subTypeCode; + return this; + } + + public Builder createTimestamp(long createTimestamp) { + entity.createTimestamp = createTimestamp; + return this; + } + + public Builder dropTimestamp(long dropTimestamp) { + entity.dropTimestamp = dropTimestamp; + return this; + } + + public Builder purgeTimestamp(long purgeTimestamp) { + entity.purgeTimestamp = purgeTimestamp; + return this; + } + + public Builder toPurgeTimestamp(long toPurgeTimestamp) { + entity.toPurgeTimestamp = toPurgeTimestamp; + return this; + } + + public Builder lastUpdateTimestamp(long lastUpdateTimestamp) { + entity.lastUpdateTimestamp = lastUpdateTimestamp; + return this; + } + + public Builder properties(String properties) { + entity.properties = properties; + return this; + } + + public Builder internalProperties(String internalProperties) { + entity.internalProperties = internalProperties; + return this; + } + + public Builder grantRecordsVersion(int grantRecordsVersion) { + entity.grantRecordsVersion = grantRecordsVersion; + return this; + } + + public ModelEntity build() { 
+ return entity; + } + } + + public static ModelEntity fromEntity(PolarisBaseEntity entity) { + return ModelEntity.builder() + .catalogId(entity.getCatalogId()) + .id(entity.getId()) + .parentId(entity.getParentId()) + .typeCode(entity.getTypeCode()) + .name(entity.getName()) + .entityVersion(entity.getEntityVersion()) + .subTypeCode(entity.getSubTypeCode()) + .createTimestamp(entity.getCreateTimestamp()) + .dropTimestamp(entity.getDropTimestamp()) + .purgeTimestamp(entity.getPurgeTimestamp()) + .toPurgeTimestamp(entity.getToPurgeTimestamp()) + .lastUpdateTimestamp(entity.getLastUpdateTimestamp()) + .properties(entity.getProperties()) + .internalProperties(entity.getInternalProperties()) + .grantRecordsVersion(entity.getGrantRecordsVersion()) + .build(); + } + + public static PolarisBaseEntity toEntity(ModelEntity model) { + if (model == null) { + return null; + } + + var entity = + new PolarisBaseEntity( + model.getCatalogId(), + model.getId(), + PolarisEntityType.fromCode(model.getTypeCode()), + PolarisEntitySubType.fromCode(model.getSubTypeCode()), + model.getParentId(), + model.getName()); + entity.setEntityVersion(model.getEntityVersion()); + entity.setCreateTimestamp(model.getCreateTimestamp()); + entity.setDropTimestamp(model.getDropTimestamp()); + entity.setPurgeTimestamp(model.getPurgeTimestamp()); + entity.setToPurgeTimestamp(model.getToPurgeTimestamp()); + entity.setLastUpdateTimestamp(model.getLastUpdateTimestamp()); + entity.setProperties(model.getProperties()); + entity.setInternalProperties(model.getInternalProperties()); + entity.setGrantRecordsVersion(model.getGrantRecordsVersion()); + return entity; + } + + public void update(PolarisBaseEntity entity) { + if (entity == null) return; + + this.catalogId = entity.getCatalogId(); + this.id = entity.getId(); + this.parentId = entity.getParentId(); + this.typeCode = entity.getTypeCode(); + this.name = entity.getName(); + this.entityVersion = entity.getEntityVersion(); + this.subTypeCode = 
entity.getSubTypeCode(); + this.createTimestamp = entity.getCreateTimestamp(); + this.dropTimestamp = entity.getDropTimestamp(); + this.purgeTimestamp = entity.getPurgeTimestamp(); + this.toPurgeTimestamp = entity.getToPurgeTimestamp(); + this.lastUpdateTimestamp = entity.getLastUpdateTimestamp(); + this.properties = entity.getProperties(); + this.internalProperties = entity.getInternalProperties(); + this.grantRecordsVersion = entity.getGrantRecordsVersion(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityActive.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityActive.java new file mode 100644 index 0000000000..58ed614556 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityActive.java @@ -0,0 +1,147 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.models; + +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import jakarta.persistence.Entity; +import jakarta.persistence.Id; +import jakarta.persistence.Table; + +/** + * EntityActive model representing some attributes of a Polaris Entity. 
This is used to exchange + * entity information with ENTITIES_ACTIVE table + */ +@Entity +@Table(name = "ENTITIES_ACTIVE") +public class ModelEntityActive { + // entity catalog id + @Id private long catalogId; + + // id of the entity + @Id private long id; + + // parent id of the entity + @Id private long parentId; + + // name of the entity + private String name; + + // code representing the type of that entity + @Id private int typeCode; + + // code representing the subtype of that entity + private int subTypeCode; + + public long getCatalogId() { + return catalogId; + } + + public long getId() { + return id; + } + + public long getParentId() { + return parentId; + } + + public String getName() { + return name; + } + + public int getTypeCode() { + return typeCode; + } + + public PolarisEntityType getType() { + return PolarisEntityType.fromCode(this.typeCode); + } + + public int getSubTypeCode() { + return subTypeCode; + } + + public PolarisEntitySubType getSubType() { + return PolarisEntitySubType.fromCode(this.subTypeCode); + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private final ModelEntityActive entity; + + private Builder() { + entity = new ModelEntityActive(); + } + + public Builder catalogId(long catalogId) { + entity.catalogId = catalogId; + return this; + } + + public Builder id(long id) { + entity.id = id; + return this; + } + + public Builder parentId(long parentId) { + entity.parentId = parentId; + return this; + } + + public Builder typeCode(int typeCode) { + entity.typeCode = typeCode; + return this; + } + + public Builder name(String name) { + entity.name = name; + return this; + } + + public Builder subTypeCode(int subTypeCode) { + entity.subTypeCode = subTypeCode; + return this; + } + + public ModelEntityActive build() { + return entity; + } + } + + public static ModelEntityActive fromEntityActive(PolarisEntityActiveRecord record) { + return ModelEntityActive.builder() + 
.catalogId(record.getCatalogId()) + .id(record.getId()) + .parentId(record.getParentId()) + .name(record.getName()) + .typeCode(record.getTypeCode()) + .subTypeCode(record.getSubTypeCode()) + .build(); + } + + public static PolarisEntityActiveRecord toEntityActive(ModelEntityActive model) { + if (model == null) { + return null; + } + + return new PolarisEntityActiveRecord( + model.catalogId, model.id, model.parentId, model.name, model.typeCode, model.subTypeCode); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityChangeTracking.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityChangeTracking.java new file mode 100644 index 0000000000..857d5a6e3a --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityChangeTracking.java @@ -0,0 +1,76 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.models; + +import io.polaris.core.entity.PolarisBaseEntity; +import jakarta.persistence.Entity; +import jakarta.persistence.Id; +import jakarta.persistence.Table; +import jakarta.persistence.Version; + +/** + * EntityChangeTracking model representing some attributes of a Polaris Entity. 
This is used to + * exchange entity information with ENTITIES_CHANGE_TRACKING table + */ +@Entity +@Table(name = "ENTITIES_CHANGE_TRACKING") +public class ModelEntityChangeTracking { + // the id of the catalog associated to that entity. NULL_ID if this entity is top-level like + // a catalog + @Id private long catalogId; + + // the id of the entity which was resolved + @Id private long id; + + // the version that this entity had when it was resolved + private int entityVersion; + + // current version for that entity, will be monotonically incremented + private int grantRecordsVersion; + + // Used for Optimistic Locking to handle concurrent reads and updates + @Version private long version; + + public ModelEntityChangeTracking() {} + + public ModelEntityChangeTracking(PolarisBaseEntity entity) { + this.catalogId = entity.getCatalogId(); + this.id = entity.getId(); + this.entityVersion = entity.getEntityVersion(); + this.grantRecordsVersion = entity.getGrantRecordsVersion(); + } + + public long getCatalogId() { + return catalogId; + } + + public long getId() { + return id; + } + + public int getEntityVersion() { + return entityVersion; + } + + public int getGrantRecordsVersion() { + return grantRecordsVersion; + } + + public void update(PolarisBaseEntity entity) { + this.entityVersion = entity.getEntityVersion(); + this.grantRecordsVersion = entity.getGrantRecordsVersion(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityDropped.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityDropped.java new file mode 100644 index 0000000000..44ada31438 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelEntityDropped.java @@ -0,0 +1,161 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.models; + +import io.polaris.core.entity.PolarisBaseEntity; +import jakarta.persistence.Entity; +import jakarta.persistence.Id; +import jakarta.persistence.Table; +import jakarta.persistence.Version; + +/** + * EntityDropped model representing some attributes of a Polaris Entity. This is used to exchange + * entity information with ENTITIES_DROPPED table + */ +@Entity +@Table(name = "ENTITIES_DROPPED") +public class ModelEntityDropped { + // the id of the catalog associated to that entity. NULL_ID if this entity is top-level like + // a catalog + @Id private long catalogId; + + // the id of the entity which was resolved + private long id; + + // the id of the parent of this entity, use 0 for a top-level entity whose parent is the account + @Id private long parentId; + + // the type of the entity when it was resolved + @Id private int typeCode; + + // the name that this entity had when it was resolved + @Id private String name; + + // the type of the entity when it was resolved + @Id private int subTypeCode; + + // when this entity was dropped. 
Null if was never dropped + @Id private long dropTimestamp; + + // when should we start purging this entity + private long toPurgeTimestamp; + + // Used for Optimistic Locking to handle concurrent reads and updates + @Version private long version; + + public long getCatalogId() { + return catalogId; + } + + public long getId() { + return id; + } + + public long getParentId() { + return parentId; + } + + public int getTypeCode() { + return typeCode; + } + + public String getName() { + return name; + } + + public int getSubTypeCode() { + return subTypeCode; + } + + public long getDropTimestamp() { + return dropTimestamp; + } + + public long getToPurgeTimestamp() { + return toPurgeTimestamp; + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private final ModelEntityDropped entity; + + private Builder() { + entity = new ModelEntityDropped(); + } + + public Builder catalogId(long catalogId) { + entity.catalogId = catalogId; + return this; + } + + public Builder id(long id) { + entity.id = id; + return this; + } + + public Builder parentId(long parentId) { + entity.parentId = parentId; + return this; + } + + public Builder typeCode(int typeCode) { + entity.typeCode = typeCode; + return this; + } + + public Builder name(String name) { + entity.name = name; + return this; + } + + public Builder subTypeCode(int subTypeCode) { + entity.subTypeCode = subTypeCode; + return this; + } + + public Builder dropTimestamp(long dropTimestamp) { + entity.dropTimestamp = dropTimestamp; + return this; + } + + public Builder toPurgeTimestamp(long toPurgeTimestamp) { + entity.toPurgeTimestamp = toPurgeTimestamp; + return this; + } + + public ModelEntityDropped build() { + return entity; + } + } + + public static ModelEntityDropped fromEntity(PolarisBaseEntity entity) { + if (entity == null) return null; + + return ModelEntityDropped.builder() + .catalogId(entity.getCatalogId()) + .id(entity.getId()) + 
.parentId(entity.getParentId()) + .typeCode(entity.getTypeCode()) + .name(entity.getName()) + .subTypeCode(entity.getSubTypeCode()) + .dropTimestamp(entity.getDropTimestamp()) + .toPurgeTimestamp(entity.getToPurgeTimestamp()) + .build(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelGrantRecord.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelGrantRecord.java new file mode 100644 index 0000000000..b464a4eead --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelGrantRecord.java @@ -0,0 +1,142 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.models; + +import io.polaris.core.entity.PolarisGrantRecord; +import jakarta.persistence.Entity; +import jakarta.persistence.Id; +import jakarta.persistence.Index; +import jakarta.persistence.Table; +import jakarta.persistence.Version; + +/** + * GrantRecord model representing a privilege record of a securable granted to grantee. 
This is used + * to exchange the information with GRANT_RECORDS table + */ +@Entity +@Table( + name = "GRANT_RECORDS", + indexes = { + @Index( + name = "GRANT_RECORDS_BY_GRANTEE_INDEX", + columnList = "granteeCatalogId,granteeId,securableCatalogId,securableId,privilegeCode") + }) +public class ModelGrantRecord { + + // id of the catalog where the securable entity resides, NULL_ID if this entity is a top-level + // account entity + @Id private long securableCatalogId; + + // id of the securable + @Id private long securableId; + + // id of the catalog where the grantee entity resides, NULL_ID if this entity is a top-level + // account entity + @Id private long granteeCatalogId; + + // id of the grantee + @Id private long granteeId; + + // id associated to the privilege + @Id private int privilegeCode; + + // Used for Optimistic Locking to handle concurrent reads and updates + @Version private long version; + + public long getSecurableCatalogId() { + return securableCatalogId; + } + + public long getSecurableId() { + return securableId; + } + + public long getGranteeCatalogId() { + return granteeCatalogId; + } + + public long getGranteeId() { + return granteeId; + } + + public int getPrivilegeCode() { + return privilegeCode; + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private final ModelGrantRecord grantRecord; + + private Builder() { + grantRecord = new ModelGrantRecord(); + } + + public Builder securableCatalogId(long securableCatalogId) { + grantRecord.securableCatalogId = securableCatalogId; + return this; + } + + public Builder securableId(long securableId) { + grantRecord.securableId = securableId; + return this; + } + + public Builder granteeCatalogId(long granteeCatalogId) { + grantRecord.granteeCatalogId = granteeCatalogId; + return this; + } + + public Builder granteeId(long granteeId) { + grantRecord.granteeId = granteeId; + return this; + } + + public Builder privilegeCode(int 
privilegeCode) { + grantRecord.privilegeCode = privilegeCode; + return this; + } + + public ModelGrantRecord build() { + return grantRecord; + } + } + + public static ModelGrantRecord fromGrantRecord(PolarisGrantRecord record) { + if (record == null) return null; + + return ModelGrantRecord.builder() + .securableCatalogId(record.getSecurableCatalogId()) + .securableId(record.getSecurableId()) + .granteeCatalogId(record.getGranteeCatalogId()) + .granteeId(record.getGranteeId()) + .privilegeCode(record.getPrivilegeCode()) + .build(); + } + + public static PolarisGrantRecord toGrantRecord(ModelGrantRecord model) { + if (model == null) return null; + + return new PolarisGrantRecord( + model.getSecurableCatalogId(), + model.getSecurableId(), + model.getGranteeCatalogId(), + model.getGranteeId(), + model.getPrivilegeCode()); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelPrincipalSecrets.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelPrincipalSecrets.java new file mode 100644 index 0000000000..c0f35dec9a --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelPrincipalSecrets.java @@ -0,0 +1,127 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.models; + +import io.polaris.core.entity.PolarisPrincipalSecrets; +import jakarta.persistence.Entity; +import jakarta.persistence.Id; +import jakarta.persistence.Table; +import jakarta.persistence.Version; + +/** + * PrincipalSecrets model representing the secrets used to authenticate a catalog principal. This is + * used to exchange the information with PRINCIPAL_SECRETS table + */ +@Entity +@Table(name = "PRINCIPAL_SECRETS") +public class ModelPrincipalSecrets { + // the id of the principal + private long principalId; + + // the client id for that principal + @Id private String principalClientId; + + // the main secret for that principal + private String mainSecret; + + // the secondary secret for that principal + private String secondarySecret; + + // Used for Optimistic Locking to handle concurrent reads and updates + @Version private long version; + + public long getPrincipalId() { + return principalId; + } + + public String getPrincipalClientId() { + return principalClientId; + } + + public String getMainSecret() { + return mainSecret; + } + + public String getSecondarySecret() { + return secondarySecret; + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private final ModelPrincipalSecrets principalSecrets; + + private Builder() { + principalSecrets = new ModelPrincipalSecrets(); + } + + public Builder principalId(long principalId) { + principalSecrets.principalId = principalId; + return this; + } + + public Builder principalClientId(String principalClientId) { + principalSecrets.principalClientId = principalClientId; + return this; + } + + public Builder mainSecret(String mainSecret) { + principalSecrets.mainSecret = mainSecret; + return this; + } + + public Builder secondarySecret(String secondarySecret) { + principalSecrets.secondarySecret = secondarySecret; + return this; + } + + public ModelPrincipalSecrets build() { + return principalSecrets; + } + } 
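Note for reviewers: the model classes in this PR all share the same mutating-builder idiom — the Builder holds a single backing instance, each setter mutates it and returns `this`, and `build()` returns that same instance (so a builder should not be reused after `build()`). A minimal self-contained sketch of the idiom, using a hypothetical `Secrets` class rather than the JPA entities in this diff:

```java
// Hypothetical stand-in for the ModelPrincipalSecrets-style builders in this PR;
// not part of the diff, just an illustration of the pattern.
public class BuilderIdiom {
  static final class Secrets {
    String clientId;
    String mainSecret;

    static Builder builder() {
      return new Builder();
    }

    // The builder mutates one backing instance and returns it from build().
    static final class Builder {
      private final Secrets secrets = new Secrets();

      Builder clientId(String clientId) {
        this.secrets.clientId = clientId;
        return this;
      }

      Builder mainSecret(String mainSecret) {
        this.secrets.mainSecret = mainSecret;
        return this;
      }

      Secrets build() {
        return secrets;
      }
    }
  }

  public static void main(String[] args) {
    Secrets s = Secrets.builder().clientId("client-1").mainSecret("secret").build();
    System.out.println(s.clientId + "/" + s.mainSecret);
  }
}
```

Because `build()` hands out the mutable backing instance rather than a copy, two calls to `build()` on one builder would return the same object — a deliberate simplicity trade-off in these persistence models.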
+ + public static ModelPrincipalSecrets fromPrincipalSecrets(PolarisPrincipalSecrets record) { + if (record == null) return null; + + return ModelPrincipalSecrets.builder() + .principalId(record.getPrincipalId()) + .principalClientId(record.getPrincipalClientId()) + .mainSecret(record.getMainSecret()) + .secondarySecret(record.getSecondarySecret()) + .build(); + } + + public static PolarisPrincipalSecrets toPrincipalSecrets(ModelPrincipalSecrets model) { + if (model == null) return null; + + return new PolarisPrincipalSecrets( + model.getPrincipalId(), + model.getPrincipalClientId(), + model.getMainSecret(), + model.getSecondarySecret()); + } + + public void update(PolarisPrincipalSecrets principalSecrets) { + if (principalSecrets == null) return; + + this.principalId = principalSecrets.getPrincipalId(); + this.principalClientId = principalSecrets.getPrincipalClientId(); + this.mainSecret = principalSecrets.getMainSecret(); + this.secondarySecret = principalSecrets.getSecondarySecret(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelSequenceId.java b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelSequenceId.java new file mode 100644 index 0000000000..52e6c8f44b --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/models/ModelSequenceId.java @@ -0,0 +1,36 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.models; + +import jakarta.persistence.Entity; +import jakarta.persistence.GeneratedValue; +import jakarta.persistence.GenerationType; +import jakarta.persistence.Id; +import jakarta.persistence.SequenceGenerator; +import jakarta.persistence.Table; + +@Entity +@Table(name = "POLARIS_SEQUENCE") +public class ModelSequenceId { + @Id + @SequenceGenerator( + name = "sequenceGen", + sequenceName = "POLARIS_SEQ", + initialValue = 1000, + allocationSize = 25) + @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequenceGen") + private Long id; +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/PolarisResolutionManifest.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/PolarisResolutionManifest.java new file mode 100644 index 0000000000..7c50039411 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/PolarisResolutionManifest.java @@ -0,0 +1,409 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.resolver; + +import com.google.common.collect.HashMultimap; +import com.google.common.collect.Multimap; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PrincipalRoleEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; +import io.polaris.core.persistence.ResolvedPolarisEntity; +import io.polaris.core.persistence.cache.EntityCacheEntry; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Holds a collection of related resolved PolarisEntity and associated grants including caller + * Principal/PrincipalRoles/CatalogRoles and target securables that will participate in any given + * operation. + * + *

Implemented as a wrapper around a Resolver with helper methods and book-keeping to better + * function as a lookup manifest for downstream callers. + */ +public class PolarisResolutionManifest implements PolarisResolutionManifestCatalogView { + private static final Logger LOG = LoggerFactory.getLogger(PolarisResolutionManifest.class); + + private final PolarisEntityManager entityManager; + private final CallContext callContext; + private final AuthenticatedPolarisPrincipal authenticatedPrincipal; + private final String catalogName; + private final Resolver primaryResolver; + private final PolarisDiagnostics diagnostics; + + private final Map<Object, Integer> pathLookup = new HashMap<>(); + private final List<ResolverPath> addedPaths = new ArrayList<>(); + private final Multimap<String, PolarisEntityType> addedTopLevelNames = HashMultimap.create(); + + private final Map<Object, ResolverPath> passthroughPaths = new HashMap<>(); + + // For applicable operations, this represents the topmost root entity which serves as an + // authorization parent for all other entities that reside at the root level, such as + // Catalog, Principal, and PrincipalRole. + // This simulated entity will be used if the actual resolver fails to resolve the rootContainer + // on the backend due to compatibility mismatches.
+ private ResolvedPolarisEntity simulatedResolvedRootContainerEntity = null; + + private int currentPathIndex = 0; + + // Set when resolveAll is called + private ResolverStatus primaryResolverStatus = null; + + public PolarisResolutionManifest( + CallContext callContext, + PolarisEntityManager entityManager, + AuthenticatedPolarisPrincipal authenticatedPrincipal, + String catalogName) { + this.entityManager = entityManager; + this.callContext = callContext; + this.authenticatedPrincipal = authenticatedPrincipal; + this.catalogName = catalogName; + this.primaryResolver = + entityManager.prepareResolver(callContext, authenticatedPrincipal, catalogName); + this.diagnostics = callContext.getPolarisCallContext().getDiagServices(); + + // TODO: Make the rootContainer lookup no longer optional in the persistence store. + // For now, we'll try to resolve the rootContainer as "optional", and only if we fail to find + // it, we'll use the "simulated" rootContainer entity. + addTopLevelName(PolarisEntityConstants.getRootContainerName(), PolarisEntityType.ROOT, true); + } + + /** Adds a name of a top-level entity (Catalog, Principal, PrincipalRole) to be resolved. */ + public void addTopLevelName(String entityName, PolarisEntityType entityType, boolean isOptional) { + addedTopLevelNames.put(entityName, entityType); + if (isOptional) { + primaryResolver.addOptionalEntityByName(entityType, entityName); + } else { + primaryResolver.addEntityByName(entityType, entityName); + } + } + + /** + * Adds a path that will be statically resolved with the primary Resolver when resolveAll() is + * called, and which contributes to the resolution status of whether all paths have successfully + * resolved. + * + * @param key the friendly lookup key for retrieving resolvedPaths after resolveAll(); typically + * might be a Namespace or TableIdentifier object. 
+ */ + public void addPath(ResolverPath path, Object key) { + primaryResolver.addPath(path); + pathLookup.put(key, currentPathIndex); + addedPaths.add(path); + ++currentPathIndex; + } + + /** + * Adds a path that is allowed to be dynamically resolved with a new Resolver when + * getPassthroughResolvedPath is called. These paths are also included in the primary static + * resolution set resolved during resolveAll(). + */ + public void addPassthroughPath(ResolverPath path, Object key) { + addPath(path, key); + passthroughPaths.put(key, path); + } + + public ResolverStatus resolveAll() { + primaryResolverStatus = primaryResolver.resolveAll(); + // TODO: This could be a race condition where a Principal is dropped after initial authn + // but before the resolution attempt; consider whether 403 forbidden is more appropriate. + diagnostics.check( + primaryResolverStatus.getStatus() + != ResolverStatus.StatusEnum.CALLER_PRINCIPAL_DOES_NOT_EXIST, + "caller_principal_does_not_exist_at_resolution_time"); + + // activated principal roles are known, add them to the call context + if (primaryResolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS) { + List<PrincipalRoleEntity> activatedPrincipalRoles = + primaryResolver.getResolvedCallerPrincipalRoles().stream() + .map(ce -> PrincipalRoleEntity.of(ce.getEntity())) + .collect(Collectors.toList()); + this.authenticatedPrincipal.setActivatedPrincipalRoles(activatedPrincipalRoles); + } + return primaryResolverStatus; + } + + @Override + public PolarisResolvedPathWrapper getResolvedReferenceCatalogEntity() { + return getResolvedReferenceCatalogEntity(false); + } + + /** + * @param key the key associated with the path to retrieve that was specified in addPath + * @return null if the path resolved for {@code key} isn't fully-resolved when specified as + * "optional" + */ + @Override + public PolarisResolvedPathWrapper getResolvedPath(Object key) { + return getResolvedPath(key, false); + } + + /** + * @return null if the path resolved for {@code 
key} isn't fully-resolved when specified as + * "optional", or if it was resolved but the subType doesn't match the specified subType. + */ + @Override + public PolarisResolvedPathWrapper getResolvedPath(Object key, PolarisEntitySubType subType) { + return getResolvedPath(key, subType, false); + } + + /** + * @param key the key associated with the path to retrieve that was specified in addPath + * @return null if the path resolved for {@code key} isn't fully-resolved when specified as + * "optional" + */ + @Override + public PolarisResolvedPathWrapper getPassthroughResolvedPath(Object key) { + diagnostics.check( + passthroughPaths.containsKey(key), + "invalid_key_for_passthrough_resolved_path", + "key={} passthroughPaths={}", + key, + passthroughPaths); + ResolverPath requestedPath = passthroughPaths.get(key); + + // Run a single-use Resolver for this path. + Resolver passthroughResolver = + entityManager.prepareResolver(callContext, authenticatedPrincipal, catalogName); + passthroughResolver.addPath(requestedPath); + ResolverStatus status = passthroughResolver.resolveAll(); + + if (status.getStatus() != ResolverStatus.StatusEnum.SUCCESS) { + LOG.debug("Returning null for key {} due to resolver status {}", key, status.getStatus()); + return null; + } + + List<EntityCacheEntry> resolvedPath = passthroughResolver.getResolvedPath(); + if (requestedPath.isOptional()) { + if (resolvedPath.size() != requestedPath.getEntityNames().size()) { + LOG.debug( + "Returning null for key {} due to size mismatch from getPassthroughResolvedPath " + + "resolvedPath: {}, requestedPath.getEntityNames(): {}", + key, + resolvedPath.stream().map(ResolvedPolarisEntity::new).toList(), + requestedPath.getEntityNames()); + return null; + } + } + + List<ResolvedPolarisEntity> resolvedEntities = new ArrayList<>(); + resolvedEntities.add( + new ResolvedPolarisEntity(passthroughResolver.getResolvedReferenceCatalog())); + resolvedPath.stream() + .forEach(cacheEntry -> resolvedEntities.add(new ResolvedPolarisEntity(cacheEntry))); + 
LOG.debug("Returning resolvedEntities from getPassthroughResolvedPath: {}", resolvedEntities); + return new PolarisResolvedPathWrapper(resolvedEntities); + } + + /** + * @return null if the path resolved for {@code key} isn't fully-resolved when specified as + * "optional", or if it was resolved but the subType doesn't match the specified subType. + */ + @Override + public PolarisResolvedPathWrapper getPassthroughResolvedPath( + Object key, PolarisEntitySubType subType) { + PolarisResolvedPathWrapper resolvedPath = getPassthroughResolvedPath(key); + if (resolvedPath == null) { + return null; + } + if (resolvedPath.getRawLeafEntity() != null + && subType != PolarisEntitySubType.ANY_SUBTYPE + && resolvedPath.getRawLeafEntity().getSubType() != subType) { + return null; + } + return resolvedPath; + } + + public Set<Long> getAllActivatedCatalogRoleAndPrincipalRoleIds() { + Set<Long> activatedIds = new HashSet<>(); + primaryResolver.getResolvedCallerPrincipalRoles().stream() + .map(EntityCacheEntry::getEntity) + .map(PolarisBaseEntity::getId) + .forEach(activatedIds::add); + if (primaryResolver.getResolvedCatalogRoles() != null) { + primaryResolver.getResolvedCatalogRoles().values().stream() + .map(EntityCacheEntry::getEntity) + .map(PolarisBaseEntity::getId) + .forEach(activatedIds::add); + } + return activatedIds; + } + + public Set<Long> getAllActivatedPrincipalRoleIds() { + Set<Long> activatedIds = new HashSet<>(); + primaryResolver.getResolvedCallerPrincipalRoles().stream() + .map(EntityCacheEntry::getEntity) + .map(PolarisBaseEntity::getId) + .forEach(activatedIds::add); + return activatedIds; + } + + public void setSimulatedResolvedRootContainerEntity( + ResolvedPolarisEntity simulatedResolvedRootContainerEntity) { + this.simulatedResolvedRootContainerEntity = simulatedResolvedRootContainerEntity; + } + + private ResolvedPolarisEntity getResolvedRootContainerEntity() { + if (primaryResolverStatus.getStatus() != ResolverStatus.StatusEnum.SUCCESS) { + return null; + } + EntityCacheEntry 
resolvedCacheEntry = + primaryResolver.getResolvedEntity( + PolarisEntityType.ROOT, PolarisEntityConstants.getRootContainerName()); + if (resolvedCacheEntry == null) { + LOG.debug("Failed to find rootContainer, so using simulated rootContainer instead."); + return simulatedResolvedRootContainerEntity; + } + return new ResolvedPolarisEntity(resolvedCacheEntry); + } + + public PolarisResolvedPathWrapper getResolvedRootContainerEntityAsPath() { + return new PolarisResolvedPathWrapper(List.of(getResolvedRootContainerEntity())); + } + + public PolarisResolvedPathWrapper getResolvedReferenceCatalogEntity( + boolean prependRootContainer) { + // This is a server error instead of being able to legitimately return null, since this means + // a callsite failed to incorporate a reference catalog into its authorization flow but is + // still trying to perform operations on the (nonexistent) reference catalog. + diagnostics.checkNotNull(catalogName, "null_catalog_name_for_resolved_reference_catalog"); + EntityCacheEntry resolvedCachedCatalog = primaryResolver.getResolvedReferenceCatalog(); + if (resolvedCachedCatalog == null) { + return null; + } + if (prependRootContainer) { + // Operations directly on Catalogs also consider the root container to be a parent of its + // authorization chain. + // TODO: Throw appropriate Catalog NOT_FOUND exception before any call to + // getResolvedReferenceCatalogEntity().
+ return new PolarisResolvedPathWrapper( + List.of( + getResolvedRootContainerEntity(), new ResolvedPolarisEntity(resolvedCachedCatalog))); + } else { + return new PolarisResolvedPathWrapper( + List.of(new ResolvedPolarisEntity(resolvedCachedCatalog))); + } + } + + public PolarisEntitySubType getLeafSubType(Object key) { + diagnostics.check( + pathLookup.containsKey(key), + "never_registered_key_for_resolved_path", + "key={} pathLookup={}", + key, + pathLookup); + int index = pathLookup.get(key); + List<EntityCacheEntry> resolved = primaryResolver.getResolvedPaths().get(index); + if (resolved.isEmpty()) { + return PolarisEntitySubType.NULL_SUBTYPE; + } + return resolved.get(resolved.size() - 1).getEntity().getSubType(); + } + + /** + * @param key the key associated with the path to retrieve that was specified in addPath + * @param prependRootContainer if true, also includes the rootContainer as the first element of + * the path; otherwise, the first element begins with the referenceCatalog. + * @return null if the path resolved for {@code key} isn't fully-resolved when specified as + * "optional" + */ + public PolarisResolvedPathWrapper getResolvedPath(Object key, boolean prependRootContainer) { + diagnostics.check( + pathLookup.containsKey(key), + "never_registered_key_for_resolved_path", + "key={} pathLookup={}", + key, + pathLookup); + + if (primaryResolverStatus.getStatus() != ResolverStatus.StatusEnum.SUCCESS) { + return null; + } + int index = pathLookup.get(key); + + // Return null for a partially-resolved "optional" path.
ResolverPath requestedPath = addedPaths.get(index); + List<EntityCacheEntry> resolvedPath = primaryResolver.getResolvedPaths().get(index); + if (requestedPath.isOptional()) { + if (resolvedPath.size() != requestedPath.getEntityNames().size()) { + return null; + } + } + + List<ResolvedPolarisEntity> resolvedEntities = new ArrayList<>(); + if (prependRootContainer) { + resolvedEntities.add(getResolvedRootContainerEntity()); + } + resolvedEntities.add(new ResolvedPolarisEntity(primaryResolver.getResolvedReferenceCatalog())); + resolvedPath.stream() + .forEach(cacheEntry -> resolvedEntities.add(new ResolvedPolarisEntity(cacheEntry))); + return new PolarisResolvedPathWrapper(resolvedEntities); + } + + /** + * @return null if the path resolved for {@code key} isn't fully-resolved when specified as + * "optional", or if it was resolved but the subType doesn't match the specified subType. + */ + public PolarisResolvedPathWrapper getResolvedPath( + Object key, PolarisEntitySubType subType, boolean prependRootContainer) { + PolarisResolvedPathWrapper resolvedPath = getResolvedPath(key, prependRootContainer); + if (resolvedPath == null) { + return null; + } + if (resolvedPath.getRawLeafEntity() != null + && subType != PolarisEntitySubType.ANY_SUBTYPE + && resolvedPath.getRawLeafEntity().getSubType() != subType) { + return null; + } + return resolvedPath; + } + + public PolarisResolvedPathWrapper getResolvedTopLevelEntity( + String entityName, PolarisEntityType entityType) { + // For now, all top-level entities will have the root container prepended so we don't have + // a variation of this method that allows specifying whether to prepend the root container.
+ diagnostics.check( + addedTopLevelNames.containsEntry(entityName, entityType), + "never_registered_top_level_name_and_type_for_resolved_entity", + "entityName={} entityType={} addedTopLevelNames={}", + entityName, + entityType, + addedTopLevelNames); + + if (primaryResolverStatus.getStatus() != ResolverStatus.StatusEnum.SUCCESS) { + return null; + } + + EntityCacheEntry resolvedCacheEntry = primaryResolver.getResolvedEntity(entityType, entityName); + if (resolvedCacheEntry == null) { + return null; + } + return new PolarisResolvedPathWrapper( + List.of(getResolvedRootContainerEntity(), new ResolvedPolarisEntity(resolvedCacheEntry))); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/PolarisResolutionManifestCatalogView.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/PolarisResolutionManifestCatalogView.java new file mode 100644 index 0000000000..08865dd80c --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/PolarisResolutionManifestCatalogView.java @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.resolver; + +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; + +/** + * Defines the methods by which a Catalog is expected to access resolved catalog-path entities, + * typically backed by a PolarisResolutionManifest. + */ +public interface PolarisResolutionManifestCatalogView { + PolarisResolvedPathWrapper getResolvedReferenceCatalogEntity(); + + PolarisResolvedPathWrapper getResolvedPath(Object key); + + PolarisResolvedPathWrapper getResolvedPath(Object key, PolarisEntitySubType subType); + + PolarisResolvedPathWrapper getPassthroughResolvedPath(Object key); + + PolarisResolvedPathWrapper getPassthroughResolvedPath(Object key, PolarisEntitySubType subType); +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/Resolver.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/Resolver.java new file mode 100644 index 0000000000..ca8d87dccf --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/Resolver.java @@ -0,0 +1,983 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package io.polaris.core.persistence.resolver;
+
+import io.polaris.core.PolarisCallContext;
+import io.polaris.core.PolarisDiagnostics;
+import io.polaris.core.entity.PolarisBaseEntity;
+import io.polaris.core.entity.PolarisChangeTrackingVersions;
+import io.polaris.core.entity.PolarisEntityConstants;
+import io.polaris.core.entity.PolarisEntityId;
+import io.polaris.core.entity.PolarisEntityType;
+import io.polaris.core.entity.PolarisGrantRecord;
+import io.polaris.core.entity.PolarisPrivilege;
+import io.polaris.core.persistence.PolarisMetaStoreManager;
+import io.polaris.core.persistence.cache.EntityCache;
+import io.polaris.core.persistence.cache.EntityCacheByNameKey;
+import io.polaris.core.persistence.cache.EntityCacheEntry;
+import io.polaris.core.persistence.cache.EntityCacheLookupResult;
+import java.util.AbstractSet;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.jetbrains.annotations.NotNull;
+import org.jetbrains.annotations.Nullable;
+
+/**
+ * REST request resolver: resolves all entities referenced directly or indirectly by an incoming
+ * REST request. Once resolved, the request can be authorized.
+ */
+public class Resolver {
+
+  // we stash the Polaris call context here
+  private final @NotNull PolarisCallContext polarisCallContext;
+
+  // the diagnostic services
+  private final @NotNull PolarisDiagnostics diagnostics;
+
+  // the polaris metastore manager
+  private final @NotNull PolarisMetaStoreManager metaStoreManager;
+
+  // the cache of entities
+  private final @NotNull EntityCache cache;
+
+  // the id of the principal making the call or 0 if unknown
+  private final long callerPrincipalId;
+
+  // the name of the principal making the call or null if unknown. If the principal id is 0, the
+  // principal name will not be null
+  private final String callerPrincipalName;
+
+  // reference catalog name for name resolution
+  private final String referenceCatalogName;
+
+  // if not null, subset of principal roles to activate
+  private final @Nullable Set<String> callerPrincipalRoleNamesScope;
+
+  // set of entities to resolve given their name. This does not include namespaces or table_like
+  // entities which are part of a path
+  private final AbstractSet<ResolverEntityName> entitiesToResolve;
+
+  // list of paths to resolve
+  private final List<ResolverPath> pathsToResolve;
+
+  // caller principal
+  private EntityCacheEntry resolvedCallerPrincipal;
+
+  // all principal roles which have been resolved
+  private List<EntityCacheEntry> resolvedCallerPrincipalRoles;
+
+  // catalog to use as the reference catalog for role activation
+  private EntityCacheEntry resolvedReferenceCatalog;
+
+  // all catalog roles which have been activated
+  private final Map<Long, EntityCacheEntry> resolvedCatalogRoles;
+
+  // all resolved paths
+  private List<List<EntityCacheEntry>> resolvedPaths;
+
+  // all entities which have been successfully resolved, by name
+  private final Map<EntityCacheByNameKey, EntityCacheEntry> resolvedEntriesByName;
+
+  // all entities which have been fully resolved, by id
+  private final Map<Long, EntityCacheEntry> resolvedEntriesById;
+
+  // status of the resolver, null until resolveAll() has been called
+  private ResolverStatus resolverStatus;
+
+  /**
+   * Constructor, effectively starts an entity resolver session
+   *
+   * @param polarisCallContext the polaris call context
+   * @param metaStoreManager meta store manager
+   * @param callerPrincipalId if not 0, the id of the principal calling the service
+   * @param callerPrincipalName if callerPrincipalId is 0, the name of the principal calling the
+   *     service
+   * @param callerPrincipalRoleNamesScope if not null, scope principal roles
+   * @param cache shared entity cache
+   * @param referenceCatalogName if not null, specifies the name of the reference catalog. The
+   *     reference catalog is the catalog used to resolve catalog roles and catalog paths.
Also, if a catalog reference is added, we will determine all catalog roles which are
+   *     activated by the caller. Note that when a catalog name needs to be resolved because the
+   *     principal creates or drops a catalog, it should not be specified here. Instead, it should
+   *     be resolved by calling {@link #addEntityByName(PolarisEntityType, String)}. Generally,
+   *     any DDL executed as a service admin should use null for that parameter.
+   */
+  public Resolver(
+      @NotNull PolarisCallContext polarisCallContext,
+      @NotNull PolarisMetaStoreManager metaStoreManager,
+      long callerPrincipalId,
+      @Nullable String callerPrincipalName,
+      @Nullable Set<String> callerPrincipalRoleNamesScope,
+      @NotNull EntityCache cache,
+      @Nullable String referenceCatalogName) {
+    this.polarisCallContext = polarisCallContext;
+    this.diagnostics = polarisCallContext.getDiagServices();
+    this.metaStoreManager = metaStoreManager;
+    this.cache = cache;
+    this.callerPrincipalName = callerPrincipalName;
+    this.callerPrincipalId = callerPrincipalId;
+    this.referenceCatalogName = referenceCatalogName;
+
+    // scoped principal role names
+    this.callerPrincipalRoleNamesScope = callerPrincipalRoleNamesScope;
+
+    // validate inputs
+    this.diagnostics.checkNotNull(metaStoreManager, "unexpected_null_metaStoreManager");
+    this.diagnostics.checkNotNull(cache, "unexpected_null_cache");
+    this.diagnostics.check(
+        callerPrincipalId != 0 || callerPrincipalName != null, "principal_must_be_specified");
+
+    // paths to resolve
+    this.pathsToResolve = new ArrayList<>();
+    this.resolvedPaths = new ArrayList<>();
+
+    // all entities we need to resolve by name
+    this.entitiesToResolve = new HashSet<>();
+
+    // will contain all principal roles which we were able to resolve
+    this.resolvedCallerPrincipalRoles = new ArrayList<>();
+
+    // remember if a reference catalog name was specified
+    if (referenceCatalogName != null) {
+      this.resolvedCatalogRoles = new HashMap<>();
+    } else {
+      this.resolvedCatalogRoles = null;
+    }
+
+    // all resolved entities, by name and by id
+    this.resolvedEntriesByName = new HashMap<>();
+    this.resolvedEntriesById = new HashMap<>();
+
+    // the resolver has not yet been called
+    this.resolverStatus = null;
+  }
+
+  /**
+   * Add a top-level entity to resolve. If the entity type is a catalog role, we also expect that a
+   * reference catalog entity was specified at creation time, else we will assert. That catalog
+   * role entity will be resolved from there. We will fail the entire resolution process if that
+   * entity cannot be resolved. If this is not expected, use addOptionalEntityByName() instead.
+   *
+   * @param entityType the type of the entity, either a principal, a principal role, a catalog or
+   *     a catalog role.
+ * @param entityName the name of the entity + */ + public void addOptionalEntityByName( + @NotNull PolarisEntityType entityType, @NotNull String entityName) { + diagnostics.checkNotNull(entityType, "entity_type_is_null"); + diagnostics.checkNotNull(entityName, "entity_name_is_null"); + // can only be called if the resolver has not yet been called + this.diagnostics.check(resolverStatus == null, "resolver_called"); + this.addEntityByName(entityType, entityName, true); + } + + /** + * Add a path to resolve + * + * @param path path to resolve + */ + public void addPath(@NotNull ResolverPath path) { + // can only be called if the resolver has not yet been called + this.diagnostics.check(resolverStatus == null, "resolver_called"); + diagnostics.checkNotNull(path, "unexpected_null_entity_path"); + this.pathsToResolve.add(path); + } + + /** + * Run the resolution process and return the status, either an error or success + * + *

+   * <p>Resolution might require multiple passes when using the cache, since anything we find in
+   * the cache might have changed in the backend store. For each pass we will:
+   *
+   * <ul>
+   *   <li>go over all entities, including all paths, and call EntityCache.getOrLoad...() on them
+   *   <li>split these entities into three groups:
+   *       <ul>
+   *         <li>dropped or purged entities: we will return an error for these
+   *         <li>entities to be validated, found in the cache: for those we need to ensure that
+   *             the entity id, name and parent id have not changed; if they have, we need to
+   *             perform another pass
+   *         <li>entities reloaded from the backend store, hence validated: these will not be
+   *             validated again
+   *       </ul>
+   * </ul>
+   *
+   * @return the status of the resolver. If success, all entities have been resolved and the
+   *     getResolvedXYZ() methods can be called.
+   */
+  public ResolverStatus resolveAll() {
+    // can only be called if the resolver has not yet been called
+    this.diagnostics.check(resolverStatus == null, "resolver_called");
+
+    // retry until a pass terminates, or we reach the maximum iteration count. Note that we should
+    // normally finish in no more than a few passes, so the 1000 limit is really to avoid spinning
+    // forever if there is a bug
+    int count = 0;
+    ResolverStatus status;
+    do {
+      status = runResolvePass();
+    } while (status == null && ++count < 1000);
+
+    // assert if status is null
+    this.diagnostics.checkNotNull(status, "cannot_resolve_all_entities");
+
+    // remember the resolver status
+    this.resolverStatus = status;
+
+    // all has been resolved
+    return status;
+  }
+
+  /**
+   * @return the principal we resolved
+   */
+  public @NotNull EntityCacheEntry getResolvedCallerPrincipal() {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+
+    return resolvedCallerPrincipal;
+  }
+
+  /**
+   * @return all principal roles which were activated. The list can be empty
+   */
+  public @NotNull List<EntityCacheEntry> getResolvedCallerPrincipalRoles() {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+
+    return resolvedCallerPrincipalRoles;
+  }
+
+  /**
+   * @return the reference catalog which has been resolved. Will be null if null was passed in for
+   *     the parameter referenceCatalogName when the Resolver was constructed.
+   */
+  public @Nullable EntityCacheEntry getResolvedReferenceCatalog() {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+
+    return resolvedReferenceCatalog;
+  }
+
+  /**
+   * Empty map if no catalog role was activated. Else the map of catalog roles which are activated
+   * by the caller
+   *
+   * @return map of activated catalog roles or null if no referenceCatalogName was specified
+   */
+  public @Nullable Map<Long, EntityCacheEntry> getResolvedCatalogRoles() {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+
+    return resolvedCatalogRoles;
+  }
+
+  /**
+   * Get the path which has been resolved; should be used only when a single path was added to the
+   * resolver. If the path to resolve was optional, only the prefix that was resolved will be
+   * returned.
+   *
+   * @return single resolved path
+   */
+  public @NotNull List<EntityCacheEntry> getResolvedPath() {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+    this.diagnostics.check(this.resolvedPaths.size() == 1, "only_if_single");
+
+    return resolvedPaths.getFirst();
+  }
+
+  /**
+   * One or more resolved paths, in the order they were added to the resolver.
+   *
+   * @return list of resolved paths
+   */
+  public @NotNull List<List<EntityCacheEntry>> getResolvedPaths() {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+    this.diagnostics.check(!this.resolvedPaths.isEmpty(), "no_path_resolved");
+
+    return resolvedPaths;
+  }
+
+  /**
+   * Get the resolved entity associated with the specified type and name, or null if not found
+   *
+   * @param entityType type of the entity; cannot be a NAMESPACE or a TABLE_LIKE entity. If it is a
+   *     top-level catalog entity (i.e. CATALOG_ROLE), a reference catalog must have been specified
+   *     at construction time.
+   * @param entityName name of the entity
+   * @return the entity which has been resolved or null if that entity does not exist
+   */
+  public @Nullable EntityCacheEntry getResolvedEntity(
+      @NotNull PolarisEntityType entityType, @NotNull String entityName) {
+    // can only be called if the resolver has been called and was successful
+    this.diagnostics.checkNotNull(resolverStatus, "resolver_must_be_called_first");
+    this.diagnostics.check(
+        resolverStatus.getStatus() == ResolverStatus.StatusEnum.SUCCESS,
+        "resolver_must_be_successful");
+
+    // validate input
+    diagnostics.check(
+        entityType != PolarisEntityType.NAMESPACE && entityType != PolarisEntityType.TABLE_LIKE,
+        "cannot_be_path");
+    diagnostics.check(
+        entityType.isTopLevel() || this.referenceCatalogName != null, "reference_catalog_expected");
+
+    if (entityType.isTopLevel()) {
+      return this.resolvedEntriesByName.get(new EntityCacheByNameKey(entityType, entityName));
+    } else {
+      long catalogId = this.resolvedReferenceCatalog.getEntity().getId();
+      return this.resolvedEntriesByName.get(
+          new EntityCacheByNameKey(catalogId, catalogId, entityType, entityName));
+    }
+  }
+
+  /**
+   * Execute one resolve pass on all entities
+   *
+   *
@return status of the resolve pass
+   */
+  private ResolverStatus runResolvePass() {
+
+    // we will resolve those again
+    this.resolvedCallerPrincipal = null;
+    this.resolvedReferenceCatalog = null;
+    if (this.resolvedCatalogRoles != null) {
+      this.resolvedCatalogRoles.clear();
+    }
+    this.resolvedCallerPrincipalRoles.clear();
+    this.resolvedPaths.clear();
+
+    // all entries we found in the cache but that we need to validate since they might be stale
+    List<EntityCacheEntry> toValidate = new ArrayList<>();
+
+    // first resolve the principal and determine the set of activated principal roles
+    ResolverStatus status =
+        this.resolveCallerPrincipalAndPrincipalRoles(
+            toValidate,
+            this.callerPrincipalId,
+            this.callerPrincipalName,
+            this.callerPrincipalRoleNamesScope);
+
+    // if success, continue resolving
+    if (status.getStatus() == ResolverStatus.StatusEnum.SUCCESS) {
+      // then resolve the reference catalog if one was specified
+      if (this.referenceCatalogName != null) {
+        status = this.resolveReferenceCatalog(toValidate, this.referenceCatalogName);
+      }
+
+      // if success, continue resolving
+      if (status.getStatus() == ResolverStatus.StatusEnum.SUCCESS) {
+        // then resolve all the additional entities we were asked to resolve
+        status = this.resolveEntities(toValidate, this.entitiesToResolve);
+
+        // if success, continue resolving
+        if (status.getStatus() == ResolverStatus.StatusEnum.SUCCESS
+            && this.referenceCatalogName != null) {
+          // finally, resolve all paths we need to resolve
+          status = this.resolvePaths(toValidate, this.pathsToResolve);
+        }
+      }
+    }
+
+    // all the above resolution was optimistic, i.e. when we probe the cache and find an entity,
+    // we don't validate that this entity has not changed in the backend. So validate all these
+    // entities now in a single bulk operation
+    boolean validationSuccess = this.bulkValidate(toValidate);
+
+    if (validationSuccess) {
+      this.updateResolved();
+    }
+
+    // if validation succeeded, we are done, simply return the status; else another pass is needed
+    return validationSuccess ? status : null;
+  }
+
+  /**
+   * Update all entities which have been resolved since, after validation, some might have changed
+   */
+  private void updateResolved() {
+
+    // if success, we need to get the validated entries
+    // we will resolve those again
+    this.resolvedCallerPrincipal = this.getResolved(this.resolvedCallerPrincipal);
+
+    // update all principal roles with the latest versions
+    if (!this.resolvedCallerPrincipalRoles.isEmpty()) {
+      List<EntityCacheEntry> refreshedResolvedCallerPrincipalRoles =
+          new ArrayList<>(this.resolvedCallerPrincipalRoles.size());
+      this.resolvedCallerPrincipalRoles.forEach(
+          ce -> refreshedResolvedCallerPrincipalRoles.add(this.getResolved(ce)));
+      this.resolvedCallerPrincipalRoles = refreshedResolvedCallerPrincipalRoles;
+    }
+
+    // update the referenced catalog
+    this.resolvedReferenceCatalog = this.getResolved(this.resolvedReferenceCatalog);
+
+    // update all resolved catalog roles
+    if (this.resolvedCatalogRoles != null) {
+      for (EntityCacheEntry catalogCacheEntry : this.resolvedCatalogRoles.values()) {
+        this.resolvedCatalogRoles.put(
+            catalogCacheEntry.getEntity().getId(), this.getResolved(catalogCacheEntry));
+      }
+    }
+
+    // update all resolved paths
+    if (!this.resolvedPaths.isEmpty()) {
+      List<List<EntityCacheEntry>> refreshedResolvedPaths =
+          new ArrayList<>(this.resolvedPaths.size());
+      this.resolvedPaths.forEach(
+          rp -> {
+            List<EntityCacheEntry> refreshedRp = new ArrayList<>(rp.size());
+            rp.forEach(ce -> refreshedRp.add(this.getResolved(ce)));
+            refreshedResolvedPaths.add(refreshedRp);
+          });
+      this.resolvedPaths = refreshedResolvedPaths;
+    }
+  }
+
+  /**
+   * Get the fully resolved cache entry for the specified cache entry
+   *
+   * @param cacheEntry input cache entry
+   * @return the fully resolved cache entry, which will often be the same
+   */
+  private EntityCacheEntry getResolved(EntityCacheEntry cacheEntry) {
+    final EntityCacheEntry refreshedEntry;
+    if (cacheEntry == null) {
+      refreshedEntry = null;
+    } else {
+      // the latest refreshed entry
+      refreshedEntry =
this.resolvedEntriesById.get(cacheEntry.getEntity().getId());
+      this.diagnostics.checkNotNull(
+          refreshedEntry, "cache_entry_should_be_resolved", "entity={}", cacheEntry.getEntity());
+    }
+    return refreshedEntry;
+  }
+
+  /**
+   * Bulk validate the set of entities we didn't validate when we were accessing the entity cache
+   *
+   * @param toValidate entities to validate
+   * @return true if none of the entities in the cache has changed
+   */
+  private boolean bulkValidate(List<EntityCacheEntry> toValidate) {
+    // assume everything is good
+    boolean validationStatus = true;
+
+    // bulk validate
+    if (!toValidate.isEmpty()) {
+      List<PolarisEntityId> entityIds =
+          toValidate.stream()
+              .map(
+                  cacheEntry ->
+                      new PolarisEntityId(
+                          cacheEntry.getEntity().getCatalogId(), cacheEntry.getEntity().getId()))
+              .collect(Collectors.toList());
+
+      // now get the current backend versions of all these entities
+      PolarisMetaStoreManager.ChangeTrackingResult changeTrackingResult =
+          this.metaStoreManager.loadEntitiesChangeTracking(this.polarisCallContext, entityIds);
+
+      // refresh any entity which is not fresh; if an entity is missing, reload it
+      Iterator<EntityCacheEntry> entityIterator = toValidate.iterator();
+      Iterator<PolarisChangeTrackingVersions> versionIterator =
+          changeTrackingResult.getChangeTrackingVersions().iterator();
+
+      // determine the ones we need to reload or refresh and the ones which are up-to-date
+      while (entityIterator.hasNext()) {
+        // get cache entry and associated versions
+        EntityCacheEntry cacheEntry = entityIterator.next();
+        PolarisChangeTrackingVersions versions = versionIterator.next();
+
+        // entity we found in the cache
+        PolarisBaseEntity entity = cacheEntry.getEntity();
+
+        // refresh cache entry if the entity or grant records version is different
+        final EntityCacheEntry refreshedCacheEntry;
+        if (versions == null
+            || entity.getEntityVersion() != versions.getEntityVersion()
+            || entity.getGrantRecordsVersion() != versions.getGrantRecordsVersion()) {
+          // if versions is null we need to invalidate the cached entry since it has probably been
+          // dropped
+          if (versions == null) {
+            this.cache.removeCacheEntry(cacheEntry);
+            refreshedCacheEntry = null;
+          } else {
+            // refresh that entity
+            refreshedCacheEntry =
+                this.cache.getAndRefreshIfNeeded(
+                    this.polarisCallContext,
+                    entity,
+                    versions.getEntityVersion(),
+                    versions.getGrantRecordsVersion());
+          }
+
+          // get the refreshed entity
+          PolarisBaseEntity refreshedEntity =
+              (refreshedCacheEntry == null) ?
null : refreshedCacheEntry.getEntity();
+
+          // if the entity has been removed, or its name has changed, or it was re-parented, or it
+          // was dropped, we will have to perform another pass
+          if (refreshedEntity == null
+              || refreshedEntity.getParentId() != entity.getParentId()
+              || refreshedEntity.isDropped() != entity.isDropped()
+              || !refreshedEntity.getName().equals(entity.getName())) {
+            validationStatus = false;
+          }
+
+          // special cases: the set of principal roles or catalog roles which have been
+          // activated might change if usage grants to a principal or a principal role have
+          // changed. Hence, force another pass if we are in that scenario
+          if (entity.getTypeCode() == PolarisEntityType.PRINCIPAL.getCode()
+              || entity.getTypeCode() == PolarisEntityType.PRINCIPAL_ROLE.getCode()) {
+            validationStatus = false;
+          }
+        } else {
+          // no need to refresh, it is up-to-date
+          refreshedCacheEntry = cacheEntry;
+        }
+
+        // if it was found, it has been resolved, so if there is another pass, we will not have to
+        // resolve it again
+        if (refreshedCacheEntry != null) {
+          this.addToResolved(refreshedCacheEntry);
+        }
+      }
+    }
+
+    // done, return final validation status
+    return validationStatus;
+  }
+
+  /**
+   * Resolve a set of top-level service or catalog entities
+   *
+   * @param toValidate all entities we have resolved from the cache; we will have to verify that
+   *     these entities have not changed in the backend
+   * @param entitiesToResolve the set of entities to resolve
+   * @return the status of resolution
+   */
+  private ResolverStatus resolveEntities(
+      List<EntityCacheEntry> toValidate, AbstractSet<ResolverEntityName> entitiesToResolve) {
+    // resolve each entity
+    for (ResolverEntityName entityName : entitiesToResolve) {
+      // resolve that entity
+      EntityCacheEntry resolvedEntity =
+          this.resolveByName(toValidate, entityName.getEntityType(), entityName.getEntityName());
+
+      // if not found, we can exit unless the entity is optional
+      if (!entityName.isOptional()
+          && (resolvedEntity == null || resolvedEntity.getEntity().isDropped())) {
+        return new ResolverStatus(entityName.getEntityType(), entityName.getEntityName());
+      }
+    }
+
+    // complete success
+    return new ResolverStatus(ResolverStatus.StatusEnum.SUCCESS);
+  }
+
+  /**
+   * Resolve a set of paths inside the referenced catalog
+   *
+   * @param toValidate all entities we have resolved from the cache; we will have to verify that
+   *     these entities have not changed in the backend
+   * @param pathsToResolve the set of paths to resolve
+   * @return the status of resolution
+   */
+  private ResolverStatus resolvePaths(
+      List<EntityCacheEntry> toValidate, List<ResolverPath> pathsToResolve) {
+
+    // id of the catalog for all these paths
+    final long catalogId = this.resolvedReferenceCatalog.getEntity().getId();
+
+    // resolve each path
+    for (ResolverPath path : pathsToResolve) {
+
+      // path we are resolving
+      List<EntityCacheEntry> resolvedPath = new ArrayList<>();
+
+      // initial parent id is the catalog itself
+      long parentId = catalogId;
+
+      // resolve each segment
+      Iterator<String> pathIt = path.getEntityNames().iterator();
+      for (int segmentIndex = 0; segmentIndex < path.getEntityNames().size(); segmentIndex++) {
+        // get segment name
+        String segmentName = pathIt.next();
+
+        // determine the segment type
+        PolarisEntityType segmentType =
+            pathIt.hasNext() ? PolarisEntityType.NAMESPACE : path.getLastEntityType();
+
+        // resolve that entity
+        EntityCacheEntry segment =
+            this.resolveByName(toValidate, catalogId, segmentType, parentId, segmentName);
+
+        // if not found, abort
+        if (segment == null || segment.getEntity().isDropped()) {
+          if (path.isOptional()) {
+            // we have resolved as much as we could
+            break;
+          } else {
+            return new ResolverStatus(path, segmentIndex);
+          }
+        }
+
+        // this is the parent of the next segment
+        parentId = segment.getEntity().getId();
+
+        // add it to the path we are resolving
+        resolvedPath.add(segment);
+      }
+
+      // one more path has been resolved
+      this.resolvedPaths.add(resolvedPath);
+    }
+
+    // complete success
+    return new ResolverStatus(ResolverStatus.StatusEnum.SUCCESS);
+  }
+
+  /**
+   * Resolve the principal and determine which principal roles are activated; resolve those too.
+   *
+   * @param toValidate all entities we have resolved from the cache; we will have to verify that
+   *     these entities have not changed in the backend
+   * @param callerPrincipalId the id of the principal which made the call
+   * @param callerPrincipalName the name of the principal which made the call
+   * @param callerPrincipalRoleNamesScope if not null, subset of roles activated by this call
+   * @return the status of resolution
+   */
+  private ResolverStatus resolveCallerPrincipalAndPrincipalRoles(
+      List<EntityCacheEntry> toValidate,
+      long callerPrincipalId,
+      String callerPrincipalName,
+      Set<String> callerPrincipalRoleNamesScope) {
+
+    // resolve the principal, by name or id
+    this.resolvedCallerPrincipal =
+        (callerPrincipalId != PolarisEntityConstants.getNullId())
+            ?
+        this.resolveById(
+                toValidate,
+                PolarisEntityType.PRINCIPAL,
+                PolarisEntityConstants.getNullId(),
+                callerPrincipalId)
+            : this.resolveByName(toValidate, PolarisEntityType.PRINCIPAL, callerPrincipalName);
+
+    // if the principal was not found, we can end right there
+    if (this.resolvedCallerPrincipal == null
+        || this.resolvedCallerPrincipal.getEntity().isDropped()) {
+      return new ResolverStatus(ResolverStatus.StatusEnum.CALLER_PRINCIPAL_DOES_NOT_EXIST);
+    }
+
+    // activate all principal roles which still exist
+    for (PolarisGrantRecord grantRecord : this.resolvedCallerPrincipal.getGrantRecordsAsGrantee()) {
+      if (grantRecord.getPrivilegeCode() == PolarisPrivilege.PRINCIPAL_ROLE_USAGE.getCode()) {
+
+        // resolve the principal role granted to that principal
+        EntityCacheEntry principalRole =
+            this.resolveById(
+                toValidate,
+                PolarisEntityType.PRINCIPAL_ROLE,
+                PolarisEntityConstants.getNullId(),
+                grantRecord.getSecurableId());
+
+        // skip if purged or dropped
+        if (principalRole != null && !principalRole.getEntity().isDropped()) {
+          // add it to the activated list if there is no scoping or this principal role is in scope
+          if (callerPrincipalRoleNamesScope == null
+              || callerPrincipalRoleNamesScope.contains(principalRole.getEntity().getName())) {
+            // this principal role is activated
+            this.resolvedCallerPrincipalRoles.add(principalRole);
+          }
+        }
+      }
+    }
+
+    // total success
+    return new ResolverStatus(ResolverStatus.StatusEnum.SUCCESS);
+  }
+
+  /**
+   * Resolve the reference catalog and determine all activated roles. The principal and principal
+   * roles should have already been resolved.
+   *
+   * @param toValidate all entities we have resolved from the cache; we will have to verify that
+   *     these entities have not changed in the backend
+   * @param referenceCatalogName name of the reference catalog to resolve, along with all catalog
+   *     roles which are activated
+   * @return the status of resolution
+   */
+  private ResolverStatus resolveReferenceCatalog(
+      @NotNull List<EntityCacheEntry> toValidate, @NotNull String referenceCatalogName) {
+    // resolve the catalog
+    this.resolvedReferenceCatalog =
+        this.resolveByName(toValidate, PolarisEntityType.CATALOG, referenceCatalogName);
+
+    // error out if we couldn't find it
+    if (this.resolvedReferenceCatalog == null
+        || this.resolvedReferenceCatalog.getEntity().isDropped()) {
+      return new ResolverStatus(PolarisEntityType.CATALOG, this.referenceCatalogName);
+    }
+
+    // determine the set of catalog roles which have been activated
+    long catalogId = this.resolvedReferenceCatalog.getEntity().getId();
+    for (EntityCacheEntry principalRole : resolvedCallerPrincipalRoles) {
+      for (PolarisGrantRecord grantRecord : principalRole.getGrantRecordsAsGrantee()) {
+        // the securable is a catalog role belonging to the reference catalog
+        if (grantRecord.getPrivilegeCode() == PolarisPrivilege.CATALOG_ROLE_USAGE.getCode()
+            && grantRecord.getSecurableCatalogId() == catalogId) {
+          // the id of the catalog role
+          long catalogRoleId = grantRecord.getSecurableId();
+
+          // skip if it has already been added
+          if (!this.resolvedCatalogRoles.containsKey(catalogRoleId)) {
+            // see if this catalog role can be resolved
+            EntityCacheEntry catalogRole =
+                this.resolveById(
+                    toValidate, PolarisEntityType.CATALOG_ROLE, catalogId, catalogRoleId);
+
+            // if found and not dropped, add it to the set of activated catalog roles
+            if (catalogRole != null && !catalogRole.getEntity().isDropped()) {
+              this.resolvedCatalogRoles.put(catalogRoleId, catalogRole);
+            }
+          }
+        }
+      }
+    }
+
+    // all good
+    return new
ResolverStatus(ResolverStatus.StatusEnum.SUCCESS); + } + + /** + * Add a cache entry to the set of resolved entities + * + * @param refreshedCacheEntry refreshed cache entry + */ + private void addToResolved(EntityCacheEntry refreshedCacheEntry) { + // underlying entity + PolarisBaseEntity entity = refreshedCacheEntry.getEntity(); + + // add it by ID + this.resolvedEntriesById.put(entity.getId(), refreshedCacheEntry); + + // in the by name map, only add it if it has not been dropped + if (!entity.isDropped()) { + this.resolvedEntriesByName.put( + new EntityCacheByNameKey( + entity.getCatalogId(), entity.getParentId(), entity.getType(), entity.getName()), + refreshedCacheEntry); + } + } + + /** + * Add a top-level entity to resolve. If the entity type is a catalog role, we also expect that a + * reference catalog entity was specified at creation time, else we will assert. That catalog role + * entity will be resolved from there. We will fail the entire resolution process if that entity + * cannot be resolved. If this is not expected, use addOptionalEntityByName() instead. + * + * @param entityType the type of the entity, either a principal, a principal role, a catalog or a + * catalog role. 
+ * @param entityName the name of the entity + * @param optional if true, the entity is optional + */ + private void addEntityByName( + @NotNull PolarisEntityType entityType, @NotNull String entityName, boolean optional) { + + // can only be called if the resolver has not yet been called + this.diagnostics.check(resolverStatus == null, "resolver_called"); + + // ensure everything was specified + diagnostics.checkNotNull(entityType, "unexpected_null_entity_type"); + diagnostics.checkNotNull(entityName, "unexpected_null_entity_name"); + + // ensure that a reference catalog has been specified if this entity is a catalog role + diagnostics.check( + entityType != PolarisEntityType.CATALOG_ROLE || this.referenceCatalogName != null, + "reference_catalog_must_be_specified"); + + // one more to resolve + this.entitiesToResolve.add(new ResolverEntityName(entityType, entityName, optional)); + } + + /** + * Resolve a top-level entity by name + * + * @param toValidate set of entries we will have to validate + * @param entityType entity type + * @param entityName name of the entity to resolve + * @return cache entry created for that entity + */ + private EntityCacheEntry resolveByName( + List toValidate, PolarisEntityType entityType, String entityName) { + if (entityType.isTopLevel()) { + return this.resolveByName( + toValidate, + PolarisEntityConstants.getNullId(), + entityType, + PolarisEntityConstants.getNullId(), + entityName); + } else { + // only top-level catalog entity + long catalogId = this.resolvedReferenceCatalog.getEntity().getId(); + this.diagnostics.check(entityType == PolarisEntityType.CATALOG_ROLE, "catalog_role_expected"); + return this.resolveByName(toValidate, catalogId, entityType, catalogId, entityName); + } + } + + /** + * Resolve a top-level entity by name + * + * @param toValidate (IN/OUT) list of entities we will have to validate + * @param entityType entity type + * @param entityName name of the entity to resolve + * @return the resolve entity. 
Potentially update the toValidate list if we will have to validate + * that this entity is up-to-date + */ + private EntityCacheEntry resolveByName( + @NotNull List toValidate, + long catalogId, + @NotNull PolarisEntityType entityType, + long parentId, + @NotNull String entityName) { + + // key for that entity + EntityCacheByNameKey nameKey = + new EntityCacheByNameKey(catalogId, parentId, entityType, entityName); + + // first check if this entity has not yet been resolved + EntityCacheEntry cacheEntry = this.resolvedEntriesByName.get(nameKey); + if (cacheEntry != null) { + return cacheEntry; + } + + // then check if it does not exist in the toValidate list. The same entity might be resolved + // several times with multi-path resolution + for (EntityCacheEntry ce : toValidate) { + PolarisBaseEntity entity = ce.getEntity(); + if (entity.getCatalogId() == catalogId + && entity.getParentId() == parentId + && entity.getType() == entityType + && entity.getName().equals(entityName)) { + return ce; + } + } + + // get or load by name + EntityCacheLookupResult lookupResult = + this.cache.getOrLoadEntityByName( + this.polarisCallContext, + new EntityCacheByNameKey(catalogId, parentId, entityType, entityName)); + + // if not found + if (lookupResult == null) { + // not found + return null; + } else if (lookupResult.isCacheHit()) { + // found in the cache, we will have to validate this entity + toValidate.add(lookupResult.getCacheEntry()); + } else { + // entry cannot be null + this.diagnostics.checkNotNull(lookupResult.getCacheEntry(), "cache_entry_is_null"); + // if not found in cache, it was loaded from backend, hence it has been resolved + this.addToResolved(lookupResult.getCacheEntry()); + } + + // return the cache entry + return lookupResult.getCacheEntry(); + } + + /** + * Resolve an entity by id + * + * @param toValidate (IN/OUT) list of entities we will have to validate + * @param entityType type of the entity to resolve + * @param catalogId entity catalog id + * 
@param entityId entity id + * @return the resolve entity. Potentially update the toValidate list if we will have to validate + * that this entity is up-to-date + */ + private EntityCacheEntry resolveById( + @NotNull List toValidate, + @NotNull PolarisEntityType entityType, + long catalogId, + long entityId) { + // get or load by name + EntityCacheLookupResult lookupResult = + this.cache.getOrLoadEntityById(this.polarisCallContext, catalogId, entityId); + + // if not found, return null + if (lookupResult == null) { + return null; + } else if (lookupResult.isCacheHit()) { + // found in the cache, we will have to validate this entity + toValidate.add(lookupResult.getCacheEntry()); + } else { + // entry cannot be null + this.diagnostics.checkNotNull(lookupResult.getCacheEntry(), "cache_entry_is_null"); + + // if not found in cache, it was loaded from backend, hence it has been resolved + this.addToResolved(lookupResult.getCacheEntry()); + } + + // return the cache entry + return lookupResult.getCacheEntry(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverEntityName.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverEntityName.java new file mode 100644 index 0000000000..3ec5c0a5b3 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverEntityName.java @@ -0,0 +1,64 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.resolver; + +import io.polaris.core.entity.PolarisEntityType; +import java.util.Objects; + +/** Simple class to represent the name of an entity to resolve */ +public class ResolverEntityName { + + // type of the entity + private final PolarisEntityType entityType; + + // the name of the entity + private final String entityName; + + // true if we should not fail while resolving this entity + private final boolean isOptional; + + public ResolverEntityName(PolarisEntityType entityType, String entityName, boolean isOptional) { + this.entityType = entityType; + this.entityName = entityName; + this.isOptional = isOptional; + } + + public PolarisEntityType getEntityType() { + return entityType; + } + + public String getEntityName() { + return entityName; + } + + public boolean isOptional() { + return isOptional; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + ResolverEntityName that = (ResolverEntityName) o; + return getEntityType() == that.getEntityType() + && Objects.equals(getEntityName(), that.getEntityName()); + } + + @Override + public int hashCode() { + return Objects.hash(getEntityType(), getEntityName()); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverPath.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverPath.java new file mode 100644 index 0000000000..2f926a5398 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverPath.java @@ -0,0 +1,85 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.persistence.resolver;
+
+import com.google.common.collect.ImmutableList;
+import io.polaris.core.entity.PolarisEntityType;
+import java.util.List;
+
+/** Simple class to represent a path within a catalog */
+public class ResolverPath {
+
+  // names of the entities in that path. The parent of the first named entity in the path is the
+  // catalog
+  private final List<String> entityNames;
+
+  // all entities in a path are namespaces except the last one which can be a table_like entity
+  // versus a namespace
+  private final PolarisEntityType lastEntityType;
+
+  // true if this path is optional, i.e. failing to fully resolve it is not an error
+  private final boolean isOptional;
+
+  /**
+   * Constructor for a non-optional path
+   *
+   * @param entityNames set of entity names, all are namespaces except the last one which is either
+   *     a namespace or a table_like entity
+   * @param lastEntityType type of the last entity, either namespace or table_like
+   */
+  public ResolverPath(List<String> entityNames, PolarisEntityType lastEntityType) {
+    this(entityNames, lastEntityType, false);
+  }
+
+  /**
+   * Constructor for a possibly optional path
+   *
+   * @param entityNames set of entity names, all are namespaces except the last one which is either
+   *     a namespace or a table_like entity
+   * @param lastEntityType type of the last entity, either namespace or table_like
+   * @param isOptional true if optional
+   */
+  public ResolverPath(
+      List<String> entityNames, PolarisEntityType lastEntityType, boolean isOptional) {
+    this.entityNames = ImmutableList.copyOf(entityNames);
+    this.lastEntityType = lastEntityType;
+    this.isOptional = isOptional;
+  }
+
+  public List<String> getEntityNames() {
+    return entityNames;
+  }
+
+  public PolarisEntityType getLastEntityType() {
+    return lastEntityType;
+  }
+
+  public boolean isOptional() {
+    return isOptional;
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("entityNames:");
+    sb.append(entityNames.toString());
+    sb.append(";lastEntityType:");
+    sb.append(lastEntityType.toString());
+    sb.append(";isOptional:");
+    sb.append(isOptional);
+    return sb.toString();
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverPrincipalRole.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverPrincipalRole.java
new file mode 100644
index 0000000000..2c34f89f41
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverPrincipalRole.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence.resolver; + +/** Expected principal type for the principal. Expectation depends on the REST request type */ +public enum ResolverPrincipalRole { + ANY_PRINCIPAL, + CATALOG_ADMIN_PRINCIPAL, + SERVICE_ADMIN_PRINCIPAL +} diff --git a/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverStatus.java b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverStatus.java new file mode 100644 index 0000000000..180043bbf3 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/persistence/resolver/ResolverStatus.java @@ -0,0 +1,103 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.persistence.resolver; + +import io.polaris.core.entity.PolarisEntityType; + +public class ResolverStatus { + + /** + * Status code for the caller to know if all entities were resolved successfully or if resolution + * failed. Anything but success is a failure + */ + public enum StatusEnum { + // success + SUCCESS, + + // error, principal making the call does not exist + CALLER_PRINCIPAL_DOES_NOT_EXIST, + + // error, the path could not be resolved. The payload of the status will provide the path and + // the index in that + // path for the segment of the path which could not be resolved + PATH_COULD_NOT_BE_FULLY_RESOLVED, + + // error, an entity could not be resolved + ENTITY_COULD_NOT_BE_RESOLVED, + }; + + private final StatusEnum status; + + // if status is ENTITY_COULD_NOT_BE_RESOLVED, will be set to the entity type which couldn't be + // resolved + private final PolarisEntityType failedToResolvedEntityType; + + // if status is ENTITY_COULD_NOT_BE_RESOLVED, will be set to the entity name which couldn't be + // resolved + private final String failedToResolvedEntityName; + + // if status is PATH_COULD_NOT_BE_FULLY_RESOLVED, path which we failed to resolve + private final ResolverPath failedToResolvePath; + + // if status is PATH_COULD_NOT_BE_FULLY_RESOLVED, index in the path which we failed to + // resolve + private final int failedToResolvedEntityIndex; + + public ResolverStatus(StatusEnum status) { + this.status = status; + this.failedToResolvedEntityType = null; + this.failedToResolvedEntityName = null; + this.failedToResolvePath = null; + this.failedToResolvedEntityIndex = 0; + } + + public ResolverStatus( + PolarisEntityType failedToResolvedEntityType, String failedToResolvedEntityName) { + this.status = StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED; + this.failedToResolvedEntityType = failedToResolvedEntityType; + this.failedToResolvedEntityName = failedToResolvedEntityName; + this.failedToResolvePath = null; + 
this.failedToResolvedEntityIndex = 0; + } + + public ResolverStatus(ResolverPath failedToResolvePath, int failedToResolvedEntityIndex) { + this.status = StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED; + this.failedToResolvedEntityType = null; + this.failedToResolvedEntityName = null; + this.failedToResolvePath = failedToResolvePath; + this.failedToResolvedEntityIndex = failedToResolvedEntityIndex; + } + + public StatusEnum getStatus() { + return status; + } + + public PolarisEntityType getFailedToResolvedEntityType() { + return failedToResolvedEntityType; + } + + public String getFailedToResolvedEntityName() { + return failedToResolvedEntityName; + } + + public ResolverPath getFailedToResolvePath() { + return failedToResolvePath; + } + + public int getFailedToResolvedEntityIndex() { + return failedToResolvedEntityIndex; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/resource/TimedApi.java b/polaris-core/src/main/java/io/polaris/core/resource/TimedApi.java new file mode 100644 index 0000000000..d7514be3fa --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/resource/TimedApi.java @@ -0,0 +1,41 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.resource; + +import io.polaris.core.monitor.PolarisMetricRegistry; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; + +/** + * Annotation to specify metrics to be registered on initialization. Users need to explicitly call + * {@link PolarisMetricRegistry#init} to register the metrics. + * + *

If used on a Jersey resource method, this annotation also serves as a marker for the {@link + * io.polaris.service.TimedApplicationEventListener} to time the underlying method and count errors + * on failures. + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.METHOD) +public @interface TimedApi { + /** + * The name of the metric to be recorded. + * + * @return the metric name + */ + String value(); +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/FileStorageConfigurationInfo.java b/polaris-core/src/main/java/io/polaris/core/storage/FileStorageConfigurationInfo.java new file mode 100644 index 0000000000..b602d0e847 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/FileStorageConfigurationInfo.java @@ -0,0 +1,51 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage; + +import com.fasterxml.jackson.annotation.JsonProperty; +import java.util.List; +import org.jetbrains.annotations.NotNull; + +/** + * Support for file:// URLs in storage configuration. This is pretty-much only used for testing. + * Supports URLs that start with file:// or /, but also supports wildcard (*) to support certain + * test cases. 
+ */
+public class FileStorageConfigurationInfo extends PolarisStorageConfigurationInfo {
+
+  public FileStorageConfigurationInfo(
+      @JsonProperty(value = "allowedLocations", required = true) @NotNull
+          List<String> allowedLocations) {
+    super(StorageType.FILE, allowedLocations);
+  }
+
+  @Override
+  public String getFileIoImplClassName() {
+    return "org.apache.iceberg.hadoop.HadoopFileIO";
+  }
+
+  @Override
+  public void validatePrefixForStorageType(String loc) {
+    if (!loc.startsWith(getStorageType().getPrefix())
+        && !loc.startsWith("file:/")
+        && !loc.startsWith("/")
+        && !loc.equals("*")) {
+      throw new IllegalArgumentException(
+          String.format(
+              "Location prefix not allowed: '%s', expected prefix: file:// or / or *", loc));
+    }
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/storage/InMemoryStorageIntegration.java b/polaris-core/src/main/java/io/polaris/core/storage/InMemoryStorageIntegration.java
new file mode 100644
index 0000000000..5e107b144e
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/storage/InMemoryStorageIntegration.java
@@ -0,0 +1,143 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.storage;
+
+import io.polaris.core.context.CallContext;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Optional;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import org.jetbrains.annotations.NotNull;
+
+/**
+ * Base class for in-memory implementations of {@link PolarisStorageIntegration}. A basic
+ * implementation of {@link #validateAccessToLocations(PolarisStorageConfigurationInfo, Set, Set)}
+ * is provided that checks to see that the list of locations being accessed is among the list of
+ * {@link PolarisStorageConfigurationInfo#allowedLocations}. Locations being accessed must be equal
+ * to or a subdirectory of at least one of the allowed locations.
+ *
+ * @param <T> the concrete storage configuration type handled by this integration
+ */
+public abstract class InMemoryStorageIntegration<T extends PolarisStorageConfigurationInfo>
+    extends PolarisStorageIntegration<T> {
+
+  public InMemoryStorageIntegration(String identifierOrId) {
+    super(identifierOrId);
+  }
+
+  /**
+   * Check that the locations being accessed are all equal to or subdirectories of at least one of
+   * the {@link PolarisStorageConfigurationInfo#allowedLocations}.
+   *
+   * @param storageConfig the storage configuration holding the allowed locations
+   * @param actions a set of operation actions to validate, like LIST/READ/DELETE/WRITE/ALL
+   * @param locations a set of locations to get access to
+   * @return a map of location to a validation result for each action passed in. In this
+   *     implementation, all actions have the same validation result, as we only verify the
+   *     locations are equal to or subdirectories of the allowed locations.
+ */
+  public static Map<String, Map<PolarisStorageActions, ValidationResult>>
+      validateSubpathsOfAllowedLocations(
+          @NotNull PolarisStorageConfigurationInfo storageConfig,
+          @NotNull Set<PolarisStorageActions> actions,
+          @NotNull Set<String> locations) {
+    // trim trailing / from allowed locations so that locations missing the trailing slash still
+    // match
+    // TODO: Canonicalize with URI and compare scheme/authority/path components separately
+    TreeSet<String> allowedLocations =
+        storageConfig.getAllowedLocations().stream()
+            .map(
+                str -> {
+                  if (str.endsWith("/") && str.length() > 1) {
+                    return str.substring(0, str.length() - 1);
+                  } else {
+                    return str;
+                  }
+                })
+            .map(str -> str.replace("file:///", "file:/"))
+            .collect(Collectors.toCollection(TreeSet::new));
+    boolean allowWildcardLocation =
+        Optional.ofNullable(CallContext.getCurrentContext())
+            .flatMap(c -> Optional.ofNullable(c.getPolarisCallContext()))
+            .map(
+                pc ->
+                    pc.getConfigurationStore()
+                        .getConfiguration(pc, "ALLOW_WILDCARD_LOCATION", false))
+            .orElse(false);
+
+    if (allowWildcardLocation && allowedLocations.contains("*")) {
+      return locations.stream()
+          .collect(
+              Collectors.toMap(
+                  Function.identity(),
+                  loc ->
+                      actions.stream()
+                          .collect(
+                              Collectors.toMap(
+                                  Function.identity(),
+                                  a ->
+                                      new ValidationResult(
+                                          true, loc + " in the list of allowed locations")))));
+    }
+    Map<String, Map<PolarisStorageActions, ValidationResult>> resultMap = new HashMap<>();
+    for (String rawLocation : locations) {
+      String location = rawLocation.replace("file:///", "file:/");
+      StringBuilder builder = new StringBuilder();
+      NavigableSet<String> prefixes = allowedLocations;
+      boolean validLocation = false;
+      for (char c : location.toCharArray()) {
+        builder.append(c);
+        prefixes = allowedLocations.tailSet(builder.toString(), true);
+        if (prefixes.isEmpty()) {
+          break;
+        } else if (prefixes.first().equals(builder.toString())) {
+          validLocation = true;
+          break;
+        }
+      }
+      final boolean isValidLocation = validLocation;
+      Map<PolarisStorageActions, ValidationResult> locationResult =
+          actions.stream()
+              .collect(
+                  Collectors.toMap(
+                      Function.identity(),
+                      a ->
+                          new ValidationResult(
+                              isValidLocation,
+                              rawLocation
+                                  + " is "
+                                  + (isValidLocation ? "" : "not ")
+                                  + "in the list of allowed locations: "
+                                  + allowedLocations)));
+
+      resultMap.put(rawLocation, locationResult);
+    }
+    return resultMap;
+  }
+
+  @Override
+  @NotNull
+  public Map<String, Map<PolarisStorageActions, ValidationResult>> validateAccessToLocations(
+      @NotNull T storageConfig,
+      @NotNull Set<PolarisStorageActions> actions,
+      @NotNull Set<String> locations) {
+    return validateSubpathsOfAllowedLocations(storageConfig, actions, locations);
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/storage/PolarisCredentialProperty.java b/polaris-core/src/main/java/io/polaris/core/storage/PolarisCredentialProperty.java
new file mode 100644
index 0000000000..ee3e3fd60d
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/storage/PolarisCredentialProperty.java
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.storage;
+
+/** Enum of polaris supported credential properties */
+public enum PolarisCredentialProperty {
+  AWS_KEY_ID(String.class, "s3.access-key-id", "the aws access key id"),
+  AWS_SECRET_KEY(String.class, "s3.secret-access-key", "the aws access key secret"),
+  AWS_TOKEN(String.class, "s3.session-token", "the aws scoped access token"),
+
+  GCS_ACCESS_TOKEN(String.class, "gcs.oauth2.token", "the gcs scoped access token"),
+  GCS_ACCESS_TOKEN_EXPIRES_AT(
+      String.class,
+      "gcs.oauth2.token-expires-at",
+      "the time the gcs access token expires, in milliseconds"),
+
+  // Currently not using ACCESS_TOKEN as the ResolvingFileIO is using ADLSFileIO for the azure
+  // case and it expects a SAS
+  AZURE_ACCESS_TOKEN(String.class, "", "the azure scoped access token"),
+  AZURE_SAS_TOKEN(String.class, "adls.sas-token.", "an azure shared access signature token"),
+  AZURE_ACCOUNT_HOST(
+      String.class,
+      "the azure storage account host",
+      "the azure account name + endpoint that will append to the ADLS_SAS_TOKEN_PREFIX"),
+  EXPIRATION_TIME(
+      Long.class, "expiration-time", "the expiration time for the access token, in milliseconds");
+
+  private final Class<?> valueType;
+  private final String propertyName;
+  private final String description;
+
+  /*
+   * - `s3.access-key-id`: id for credentials that provide access to the data in S3
+   * - `s3.secret-access-key`: secret for credentials that provide access to data in S3
+   * - `s3.session-token`
+   */
+  PolarisCredentialProperty(Class<?> valueType, String propertyName, String description) {
+    this.valueType = valueType;
+    this.propertyName = propertyName;
+    this.description = description;
+  }
+
+  public String getPropertyName() {
+    return propertyName;
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageActions.java b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageActions.java
new file mode 100644
index 0000000000..fe0f562b86
--- /dev/null
+++ 
b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageActions.java @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage; + +public enum PolarisStorageActions { + READ, + WRITE, + LIST, + DELETE, + ALL, + ; + + /** check if the provided string is a valid action. */ + public static boolean isValidAction(String s) { + for (PolarisStorageActions action : PolarisStorageActions.values()) { + if (action.name().equalsIgnoreCase(s)) { + return true; + } + } + return false; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageConfigurationInfo.java b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageConfigurationInfo.java new file mode 100644 index 0000000000..ab53b80ae0 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageConfigurationInfo.java @@ -0,0 +1,236 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.storage;
+
+import com.fasterxml.jackson.annotation.JsonInclude;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import io.polaris.core.PolarisConfiguration;
+import io.polaris.core.PolarisDiagnostics;
+import io.polaris.core.admin.model.Catalog;
+import io.polaris.core.entity.CatalogEntity;
+import io.polaris.core.entity.PolarisEntity;
+import io.polaris.core.entity.PolarisEntityConstants;
+import io.polaris.core.storage.aws.AwsStorageConfigurationInfo;
+import io.polaris.core.storage.azure.AzureStorageConfigurationInfo;
+import io.polaris.core.storage.gcp.GcpStorageConfigurationInfo;
+import java.util.List;
+import java.util.Optional;
+import org.jetbrains.annotations.NotNull;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The Polaris storage configuration information is part of a Polaris entity's internal properties
+ * and holds necessary information, including
+ *
+ * 1. locations that Polaris is allowed to access
+ * 2. cloud identity info with which a service principal can request access tokens to those locations
+ * 
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME)
+@JsonSubTypes({
+  @JsonSubTypes.Type(value = AwsStorageConfigurationInfo.class),
+  @JsonSubTypes.Type(value = AzureStorageConfigurationInfo.class),
+  @JsonSubTypes.Type(value = GcpStorageConfigurationInfo.class),
+  @JsonSubTypes.Type(value = FileStorageConfigurationInfo.class),
+})
+public abstract class PolarisStorageConfigurationInfo {
+
+  private static final Logger LOGGER =
+      LoggerFactory.getLogger(PolarisStorageConfigurationInfo.class);
+
+  // a list of allowed locations
+  private final List<String> allowedLocations;
+
+  // storage type
+  private final StorageType storageType;
+
+  public PolarisStorageConfigurationInfo(
+      @JsonProperty(value = "storageType", required = true) @NotNull StorageType storageType,
+      @JsonProperty(value = "allowedLocations", required = true) @NotNull
+          List<String> allowedLocations) {
+    this(storageType, allowedLocations, true);
+  }
+
+  protected PolarisStorageConfigurationInfo(
+      StorageType storageType, List<String> allowedLocations, boolean validatePrefix) {
+    this.allowedLocations = allowedLocations;
+    this.storageType = storageType;
+    if (validatePrefix) {
+      allowedLocations.forEach(this::validatePrefixForStorageType);
+    }
+  }
+
+  public List<String> getAllowedLocations() {
+    return allowedLocations;
+  }
+
+  public StorageType getStorageType() {
+    return storageType;
+  }
+
+  private static final ObjectMapper DEFAULT_MAPPER;
+
+  static {
+    DEFAULT_MAPPER = new ObjectMapper();
+    DEFAULT_MAPPER.setSerializationInclusion(JsonInclude.Include.NON_NULL);
+    DEFAULT_MAPPER.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
+  }
+
+  public String serialize() {
+    try {
+      return DEFAULT_MAPPER.writeValueAsString(this);
+    } catch (JsonProcessingException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * Deserialize a JSON string into a {@link PolarisStorageConfigurationInfo} object.
+   *
+   * @param diagnostics the diagnostics instance
+   * @param jsonStr a JSON string
+   * @return the deserialized PolarisStorageConfigurationInfo object
+   */
+  public static PolarisStorageConfigurationInfo deserialize(
+      @NotNull PolarisDiagnostics diagnostics, final @NotNull String jsonStr) {
+    try {
+      return DEFAULT_MAPPER.readValue(jsonStr, PolarisStorageConfigurationInfo.class);
+    } catch (JsonProcessingException exception) {
+      diagnostics.fail(
+          "fail_to_deserialize_storage_configuration", exception, "jsonStr={}", jsonStr);
+    }
+    return null;
+  }
+
+  public static Optional<PolarisStorageConfigurationInfo> forEntityPath(
+      PolarisDiagnostics diagnostics, List<PolarisEntity> entityPath) {
+    return findStorageInfoFromHierarchy(entityPath)
+        .map(
+            storageInfo ->
+                deserialize(
+                    diagnostics,
+                    storageInfo
+                        .getInternalPropertiesAsMap()
+                        .get(PolarisEntityConstants.getStorageConfigInfoPropertyName())))
+        .map(
+            configInfo -> {
+              String baseLocation =
+                  entityPath.reversed().stream()
+                      .flatMap(
+                          e ->
+                              Optional.ofNullable(
+                                  e.getPropertiesAsMap()
+                                      .get(PolarisEntityConstants.ENTITY_BASE_LOCATION))
+                                  .stream())
+                      .findFirst()
+                      .orElse(null);
+              CatalogEntity catalog = CatalogEntity.of(entityPath.get(0));
+              boolean allowEscape =
+                  Optional.ofNullable(
+                          catalog
+                              .getPropertiesAsMap()
+                              .get(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION))
+                      .map(
+                          val -> {
+                            LOGGER.debug(
+                                "Found catalog level property to allow unstructured table location: {}",
+                                val);
+                            return Boolean.parseBoolean(val);
+                          })
+                      .orElseGet(() -> Catalog.TypeEnum.EXTERNAL.equals(catalog.getCatalogType()));
+              if (!allowEscape && baseLocation != null) {
+                LOGGER.debug(
+                    "Not allowing unstructured table location for entity: {}",
+                    entityPath.getLast().getName());
+                return new StorageConfigurationOverride(configInfo, List.of(baseLocation));
+              } else {
+                LOGGER.debug(
+                    "Allowing unstructured table location for entity: {}",
+                    entityPath.getLast().getName());
+                return configInfo;
+              }
+            });
+  }
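The base-location lookup above walks the entity path from leaf to root and takes the first `base-location` it finds, so the innermost override wins. A minimal stand-alone sketch of that selection logic; the `List<Map<String, String>>` representation and the `"base-location"` key are illustrative stand-ins, not the real `PolarisEntity` API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

public class BaseLocationLookup {
  // Given entity properties ordered root -> leaf, return the innermost
  // base-location: reverse the path and take the first entity that sets one.
  static Optional<String> innermostBaseLocation(List<Map<String, String>> pathRootToLeaf) {
    List<Map<String, String>> leafToRoot = new ArrayList<>(pathRootToLeaf);
    Collections.reverse(leafToRoot);
    return leafToRoot.stream()
        .map(props -> props.get("base-location"))
        .filter(Objects::nonNull)
        .findFirst();
  }

  public static void main(String[] args) {
    List<Map<String, String>> path =
        List.of(
            Map.of("base-location", "s3://bucket/catalog"), // catalog
            Map.of(), // namespace without its own base location
            Map.of("base-location", "s3://bucket/catalog/ns/table")); // table
    // Prints the table-level location, the innermost one on the path.
    System.out.println(innermostBaseLocation(path).orElse("none"));
  }
}
```

When no entity on the path declares a base location, the result is empty and the patch falls back to `null`, leaving the parent configuration's allowed locations in force.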
+
+  private static @NotNull Optional<PolarisEntity> findStorageInfoFromHierarchy(
+      List<PolarisEntity> entityPath) {
+    return entityPath.reversed().stream()
+        .filter(
+            e ->
+                e.getInternalPropertiesAsMap()
+                    .containsKey(PolarisEntityConstants.getStorageConfigInfoPropertyName()))
+        .findFirst();
+  }
+
+  /** Subclasses must provide the Iceberg FileIO impl associated with their type in this method. */
+  public abstract String getFileIoImplClassName();
+
+  /** Validate that the provided location starts with the expected prefix for this storage type. */
+  protected void validatePrefixForStorageType(String loc) {
+    if (!loc.toLowerCase().startsWith(storageType.prefix)) {
+      throw new IllegalArgumentException(
+          String.format(
+              "Location prefix not allowed: '%s', expected prefix: '%s'", loc, storageType.prefix));
+    }
+  }
+
+  /** Validate that the number of allowed locations does not exceed the given maximum. */
+  public void validateMaxAllowedLocations(int maxAllowedLocations) {
+    if (allowedLocations.size() > maxAllowedLocations) {
+      throw new IllegalArgumentException(
+          "Number of allowed locations exceeds " + maxAllowedLocations);
+    }
+  }
+
+  /** Polaris storage types; each has a fixed prefix for its locations. */
+  public enum StorageType {
+    S3("s3://"),
+    AZURE("abfs"), // abfs or abfss
+    GCS("gs://"),
+    FILE("file://"),
+    ;
+
+    final String prefix;
+
+    StorageType(String prefix) {
+      this.prefix = prefix;
+    }
+
+    public String getPrefix() {
+      return prefix;
+    }
+  }
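The prefix rule this enum encodes can be sketched in isolation. `PrefixCheck` and `hasValidPrefix` below are hypothetical stand-ins that mirror `validatePrefixForStorageType` (the patch uses a bare `toLowerCase()`; `Locale.ROOT` is used here to keep the sketch locale-independent):

```java
import java.util.Locale;

public class PrefixCheck {
  // Each storage type owns one fixed location prefix, as in the enum above.
  enum StorageType {
    S3("s3://"),
    AZURE("abfs"), // covers both abfs:// and abfss://
    GCS("gs://"),
    FILE("file://");

    final String prefix;

    StorageType(String prefix) {
      this.prefix = prefix;
    }
  }

  // Case-insensitive prefix test; a failing location would be rejected
  // with an IllegalArgumentException in the real validation method.
  static boolean hasValidPrefix(StorageType type, String location) {
    return location.toLowerCase(Locale.ROOT).startsWith(type.prefix);
  }

  public static void main(String[] args) {
    System.out.println(hasValidPrefix(StorageType.S3, "s3://bucket/warehouse")); // true
    System.out.println(hasValidPrefix(StorageType.AZURE, "abfss://c@acct.dfs.core.windows.net/x")); // true
    System.out.println(hasValidPrefix(StorageType.GCS, "s3://bucket/warehouse")); // false
  }
}
```

Note how the deliberately scheme-less `"abfs"` prefix is what lets a single `AZURE` entry accept both `abfs://` and `abfss://` locations.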
+
+  /** Enum of properties used when describing a storage integration for configuration purposes. */
+  public enum DescribeProperty {
+    STORAGE_PROVIDER,
+    STORAGE_ALLOWED_LOCATIONS,
+    STORAGE_AWS_ROLE_ARN,
+    STORAGE_AWS_IAM_USER_ARN,
+    STORAGE_AWS_EXTERNAL_ID,
+    STORAGE_GCP_SERVICE_ACCOUNT,
+    AZURE_TENANT_ID,
+    AZURE_CONSENT_URL,
+    AZURE_MULTI_TENANT_APP_NAME,
+  }
+}
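Both `getSubscopedCreds` and `descPolarisStorageConfiguration` in the next file return `EnumMap`s keyed by one of these enums. A tiny self-contained illustration of that pattern; the trimmed-down enum and the sample values are hypothetical, not taken from the patch:

```java
import java.util.EnumMap;

public class EnumMapDemo {
  // Trimmed-down stand-in for the DescribeProperty enum above.
  enum DescribeProperty {
    STORAGE_PROVIDER,
    STORAGE_ALLOWED_LOCATIONS
  }

  // EnumMap keys are restricted to one enum type, so callers can only
  // ask for well-known properties; lookups are array-backed and cheap.
  static EnumMap<DescribeProperty, String> describe() {
    EnumMap<DescribeProperty, String> config = new EnumMap<>(DescribeProperty.class);
    config.put(DescribeProperty.STORAGE_PROVIDER, "S3");
    config.put(DescribeProperty.STORAGE_ALLOWED_LOCATIONS, "s3://bucket/warehouse/");
    return config;
  }

  public static void main(String[] args) {
    System.out.println(describe().get(DescribeProperty.STORAGE_PROVIDER)); // S3
  }
}
```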
diff --git a/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageIntegration.java b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageIntegration.java
new file mode 100644
index 0000000000..4f89382bc5
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageIntegration.java
@@ -0,0 +1,146 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.storage;
+
+import io.polaris.core.PolarisDiagnostics;
+import java.util.EnumMap;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import org.jetbrains.annotations.NotNull;
+
+/**
+ * Abstraction over a Polaris storage integration. It holds a reference to an object carrying the
+ * service principal information.
+ *
+ * @param <T> the concrete type of {@link PolarisStorageConfigurationInfo} this integration supports
+ */
+public abstract class PolarisStorageIntegration<T extends PolarisStorageConfigurationInfo> {
+
+  private final String integrationIdentifierOrId;
+
+  public PolarisStorageIntegration(String identifierOrId) {
+    this.integrationIdentifierOrId = identifierOrId;
+  }
+
+  public String getStorageIdentifierOrId() {
+    return integrationIdentifierOrId;
+  }
+
+  /**
+   * Subscope the creds against the allowed read and write locations.
+   *
+   * @param diagnostics the diagnostics service
+   * @param storageConfig storage configuration
+   * @param allowListOperation whether to allow LIST on all the provided allowed read/write
+   *     locations
+   * @param allowedReadLocations a set of locations allowed for reading
+   * @param allowedWriteLocations a set of locations allowed for writing
+   * @return An enum map including the scoped credentials
+   */
+  public abstract EnumMap<PolarisCredentialProperty, String> getSubscopedCreds(
+      @NotNull PolarisDiagnostics diagnostics,
+      @NotNull T storageConfig,
+      boolean allowListOperation,
+      @NotNull Set<String> allowedReadLocations,
+      @NotNull Set<String> allowedWriteLocations);
+
+  /**
+   * Describe the configuration for the current storage integration.
+   *
+   * @param storageConfigInfo the configuration info provided by the user.
+   * @return an enum map of describe properties to their values
+   */
+  public abstract EnumMap<DescribeProperty, String>
+      descPolarisStorageConfiguration(@NotNull PolarisStorageConfigurationInfo storageConfigInfo);
+
+  /**
+   * Validate access for the provided operation actions and locations.
+   *
+   * @param actions a set of operation actions to validate, like LIST/READ/DELETE/WRITE/ALL
+   * @param locations a set of locations to get access to
+   * @return a map keyed by location, mapping each operation action to its validation result. A
+   *     validation result looks like this:
+   *     
+   * {
+   *   "status" : "failure",
+   *   "actions" : {
+   *     "READ" : {
+   *       "message" : "The specified file was not found",
+   *       "status" : "failure"
+   *     },
+   *     "DELETE" : {
+   *       "message" : "One or more objects could not be deleted (Status Code: 200; Error Code: null)",
+   *       "status" : "failure"
+   *     },
+   *     "LIST" : {
+   *       "status" : "success"
+   *     },
+   *     "WRITE" : {
+   *       "message" : "Access Denied (Status Code: 403; Error Code: AccessDenied)",
+   *       "status" : "failure"
+   *     }
+   *   },
+   *   "message" : "Some of the integration checks failed. Check the Snowflake documentation for more information."
+   * }
+   * 
+ */ + @NotNull + public abstract Map> + validateAccessToLocations( + @NotNull T storageConfig, + @NotNull Set actions, + @NotNull Set locations); + + /** + * Result of calling {@link #validateAccessToLocations(PolarisStorageConfigurationInfo, Set, Set)} + */ + public static final class ValidationResult { + private final boolean success; + private final String message; + + public ValidationResult(boolean success, String message) { + this.success = success; + this.message = message; + } + + public boolean isSuccess() { + return success; + } + + public String getMessage() { + return message; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (!(o instanceof ValidationResult)) return false; + ValidationResult that = (ValidationResult) o; + return success == that.success; + } + + @Override + public int hashCode() { + return Objects.hashCode(success); + } + + @Override + public String toString() { + return "ValidationResult{" + "success=" + success + ", message='" + message + '\'' + '}'; + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageIntegrationProvider.java b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageIntegrationProvider.java new file mode 100644 index 0000000000..12e2fc39c9 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/PolarisStorageIntegrationProvider.java @@ -0,0 +1,29 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage; + +import org.jetbrains.annotations.Nullable; + +/** + * Factory interface that knows how to construct a {@link PolarisStorageIntegration} given a {@link + * PolarisStorageConfigurationInfo}. + */ +public interface PolarisStorageIntegrationProvider { + @SuppressWarnings("unchecked") + @Nullable + PolarisStorageIntegration getStorageIntegrationForConfig( + PolarisStorageConfigurationInfo polarisStorageConfigurationInfo); +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/StorageConfigurationOverride.java b/polaris-core/src/main/java/io/polaris/core/storage/StorageConfigurationOverride.java new file mode 100644 index 0000000000..1116a7e553 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/StorageConfigurationOverride.java @@ -0,0 +1,54 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage; + +import java.util.List; +import org.jetbrains.annotations.NotNull; + +/** + * Allows overriding the allowed locations for specific entities. Only the allowedLocations + * specified in the constructor are allowed. allowedLocations are not inherited from the parent + * storage configuration. All other storage configuration is inherited from the parent configuration + * and cannot be overridden. 
+ */ +public class StorageConfigurationOverride extends PolarisStorageConfigurationInfo { + + private final PolarisStorageConfigurationInfo parentStorageConfiguration; + + public StorageConfigurationOverride( + @NotNull PolarisStorageConfigurationInfo parentStorageConfiguration, + List allowedLocations) { + super(parentStorageConfiguration.getStorageType(), allowedLocations, false); + this.parentStorageConfiguration = parentStorageConfiguration; + allowedLocations.forEach(this::validatePrefixForStorageType); + } + + @Override + public String getFileIoImplClassName() { + return parentStorageConfiguration.getFileIoImplClassName(); + } + + // delegate to the wrapped class in case they override the parent behavior + @Override + protected void validatePrefixForStorageType(String loc) { + parentStorageConfiguration.validatePrefixForStorageType(loc); + } + + @Override + public void validateMaxAllowedLocations(int maxAllowedLocations) { + parentStorageConfiguration.validateMaxAllowedLocations(maxAllowedLocations); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/StorageUtil.java b/polaris-core/src/main/java/io/polaris/core/storage/StorageUtil.java new file mode 100644 index 0000000000..3e16ced537 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/StorageUtil.java @@ -0,0 +1,40 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.storage; + +import org.jetbrains.annotations.NotNull; + +public class StorageUtil { + /** + * Concatenating two file paths by making sure one and only one path separator is placed between + * the two paths. + * + * @param leftPath left path + * @param rightPath right path + * @param fileSep File separator to use. + * @return Well formatted file path. + */ + public static @NotNull String concatFilePrefixes( + @NotNull String leftPath, String rightPath, String fileSep) { + if (leftPath.endsWith(fileSep) && rightPath.startsWith(fileSep)) { + return leftPath + rightPath.substring(1); + } else if (!leftPath.endsWith(fileSep) && !rightPath.startsWith(fileSep)) { + return leftPath + fileSep + rightPath; + } else { + return leftPath + rightPath; + } + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/aws/AwsCredentialsStorageIntegration.java b/polaris-core/src/main/java/io/polaris/core/storage/aws/AwsCredentialsStorageIntegration.java new file mode 100644 index 0000000000..c64e6a2352 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/aws/AwsCredentialsStorageIntegration.java @@ -0,0 +1,189 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.storage.aws; + +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.storage.InMemoryStorageIntegration; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.StorageUtil; +import java.net.URI; +import java.util.EnumMap; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.stream.Stream; +import org.jetbrains.annotations.NotNull; +import software.amazon.awssdk.policybuilder.iam.IamConditionOperator; +import software.amazon.awssdk.policybuilder.iam.IamEffect; +import software.amazon.awssdk.policybuilder.iam.IamPolicy; +import software.amazon.awssdk.policybuilder.iam.IamResource; +import software.amazon.awssdk.policybuilder.iam.IamStatement; +import software.amazon.awssdk.services.sts.StsClient; +import software.amazon.awssdk.services.sts.model.AssumeRoleRequest; +import software.amazon.awssdk.services.sts.model.AssumeRoleResponse; + +/** Credential vendor that supports generating */ +public class AwsCredentialsStorageIntegration + extends InMemoryStorageIntegration { + private final StsClient stsClient; + + public AwsCredentialsStorageIntegration(StsClient stsClient) { + super(AwsCredentialsStorageIntegration.class.getName()); + this.stsClient = stsClient; + } + + /** {@inheritDoc} */ + @Override + public EnumMap getSubscopedCreds( + @NotNull PolarisDiagnostics diagnostics, + @NotNull AwsStorageConfigurationInfo storageConfig, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations) { + AssumeRoleResponse response = + stsClient.assumeRole( + AssumeRoleRequest.builder() + .externalId(storageConfig.getExternalId()) + .roleArn(storageConfig.getRoleARN()) + .roleSessionName("PolarisAwsCredentialsStorageIntegration") + .policy( + policyString( + storageConfig.getRoleARN(), + allowListOperation, + 
allowedReadLocations, + allowedWriteLocations) + .toJson()) + .build()); + EnumMap credentialMap = + new EnumMap<>(PolarisCredentialProperty.class); + credentialMap.put(PolarisCredentialProperty.AWS_KEY_ID, response.credentials().accessKeyId()); + credentialMap.put( + PolarisCredentialProperty.AWS_SECRET_KEY, response.credentials().secretAccessKey()); + credentialMap.put(PolarisCredentialProperty.AWS_TOKEN, response.credentials().sessionToken()); + return credentialMap; + } + + /** + * generate an IamPolicy from the input readLocations and writeLocations, optionally with list + * support. Credentials will be scoped to exactly the resources provided. If read and write + * locations are empty, a non-empty policy will be generated that grants GetObject and (optionally + * ListBucket privileges with no resources. This prevents us from sending an empty policy to AWS + * and just assuming the role with full privileges. + * + * @param roleArn + * @param allowList + * @param readLocations + * @param writeLocations + * @return + */ + // TODO - add KMS key access + private IamPolicy policyString( + String roleArn, boolean allowList, Set readLocations, Set writeLocations) { + IamPolicy.Builder policyBuilder = IamPolicy.builder(); + IamStatement.Builder allowGetObjectStatementBuilder = + IamStatement.builder() + .effect(IamEffect.ALLOW) + .addAction("s3:GetObject") + .addAction("s3:GetObjectVersion"); + Map bucketListStatmentBuilder = new HashMap<>(); + + String arnPrefix = getArnPrefixFor(roleArn); + Stream.concat(readLocations.stream(), writeLocations.stream()) + .distinct() + .forEach( + location -> { + URI uri = URI.create(location); + allowGetObjectStatementBuilder.addResource( + // TODO add support for CN and GOV + IamResource.create( + arnPrefix + StorageUtil.concatFilePrefixes(parseS3Path(uri), "*", "/"))); + if (allowList) { + bucketListStatmentBuilder + .computeIfAbsent( + arnPrefix + uri.getHost(), + (String key) -> + IamStatement.builder() + 
.effect(IamEffect.ALLOW) + .addAction("s3:ListBucket") + .addResource(key)) + .addCondition( + IamConditionOperator.STRING_LIKE, + "s3:prefix", + StorageUtil.concatFilePrefixes(trimLeadingSlash(uri.getPath()), "*", "/")); + } + }); + + if (!writeLocations.isEmpty()) { + IamStatement.Builder allowPutObjectStatementBuilder = + IamStatement.builder() + .effect(IamEffect.ALLOW) + .addAction("s3:PutObject") + .addAction("s3:DeleteObject"); + writeLocations.forEach( + location -> { + URI uri = URI.create(location); + // TODO add support for CN and GOV + allowPutObjectStatementBuilder.addResource( + IamResource.create( + arnPrefix + StorageUtil.concatFilePrefixes(parseS3Path(uri), "*", "/"))); + }); + policyBuilder.addStatement(allowPutObjectStatementBuilder.build()); + } + if (!bucketListStatmentBuilder.isEmpty()) { + bucketListStatmentBuilder + .values() + .forEach(statementBuilder -> policyBuilder.addStatement(statementBuilder.build())); + } else if (allowList) { + // add list privilege with 0 resources + policyBuilder.addStatement( + IamStatement.builder().effect(IamEffect.ALLOW).addAction("s3:ListBucket").build()); + } + return policyBuilder.addStatement(allowGetObjectStatementBuilder.build()).build(); + } + + private String getArnPrefixFor(String roleArn) { + if (roleArn.contains("aws-cn")) { + return "arn:aws-cn:s3:::"; + } else if (roleArn.contains("aws-us-gov")) { + return "arn:aws-us-gov:s3:::"; + } else { + return "arn:aws:s3:::"; + } + } + + private static @NotNull String parseS3Path(URI uri) { + String bucket = uri.getHost(); + String path = trimLeadingSlash(uri.getPath()); + return String.join( + "/", Stream.of(bucket, path).filter(Objects::nonNull).toArray(String[]::new)); + } + + private static @NotNull String trimLeadingSlash(String path) { + if (path.startsWith("/")) { + path = path.substring(1); + } + return path; + } + + // FIXME - we don't need this method in the interface + @Override + public EnumMap + descPolarisStorageConfiguration(@NotNull 
PolarisStorageConfigurationInfo storageConfigInfo) { + return null; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/aws/AwsStorageConfigurationInfo.java b/polaris-core/src/main/java/io/polaris/core/storage/aws/AwsStorageConfigurationInfo.java new file mode 100644 index 0000000000..48ed960524 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/aws/AwsStorageConfigurationInfo.java @@ -0,0 +1,118 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage.aws; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.google.common.base.MoreObjects; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import java.util.List; +import java.util.regex.Pattern; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** Aws Polaris Storage Configuration information */ +public class AwsStorageConfigurationInfo extends PolarisStorageConfigurationInfo { + + // 5 is the approximate max allowed locations for the size of AccessPolicy when LIST is required + // for allowed read and write locations for subscoping creds. 
+ @JsonIgnore private static final int MAX_ALLOWED_LOCATIONS = 5; + + // Technically, it should be ^arn:(aws|aws-cn|aws-us-gov):iam::\d{12}:role/.+$, + @JsonIgnore public static String ROLE_ARN_PATTERN = "^arn:aws:iam::\\d{12}:role/.+$"; + + // AWS role to be assumed + private final @NotNull String roleARN; + + // AWS external ID, optional + @JsonProperty(value = "externalId", required = false) + private @Nullable String externalId = null; + + /** User ARN for the service principal */ + @JsonProperty(value = "userARN", required = false) + private @Nullable String userARN = null; + + @JsonCreator + public AwsStorageConfigurationInfo( + @JsonProperty(value = "storageType", required = true) @NotNull StorageType storageType, + @JsonProperty(value = "allowedLocations", required = true) @NotNull + List allowedLocations, + @JsonProperty(value = "roleARN", required = true) @NotNull String roleARN) { + this(storageType, allowedLocations, roleARN, null); + } + + public AwsStorageConfigurationInfo( + @NotNull StorageType storageType, + @NotNull List allowedLocations, + @NotNull String roleARN, + @Nullable String externalId) { + super(storageType, allowedLocations); + this.roleARN = roleARN; + this.externalId = externalId; + validateMaxAllowedLocations(MAX_ALLOWED_LOCATIONS); + } + + @Override + public String getFileIoImplClassName() { + return "org.apache.iceberg.aws.s3.S3FileIO"; + } + + public void validateArn(String arn) { + if (arn == null || arn.isEmpty()) { + throw new IllegalArgumentException("ARN cannot be null or empty"); + } + // specifically throw errors for China and Gov + if (arn.contains("aws-cn") || arn.contains("aws-us-gov")) { + throw new IllegalArgumentException("AWS China or Gov Cloud are temporarily not supported"); + } + if (!Pattern.matches(ROLE_ARN_PATTERN, arn)) { + throw new IllegalArgumentException("Invalid role ARN format"); + } + } + + public @NotNull String getRoleARN() { + return roleARN; + } + + public @Nullable String getExternalId() { + return 
externalId; + } + + public void setExternalId(String externalId) { + this.externalId = externalId; + } + + public @Nullable String getUserARN() { + return userARN; + } + + public void setUserARN(@Nullable String userARN) { + this.userARN = userARN; + } + + @Override + public String toString() { + return MoreObjects.toStringHelper(this) + .add("storageType", getStorageType()) + .add("storageType", getStorageType().name()) + .add("roleARN", roleARN) + .add("userARN", userARN) + .add("externalId", externalId) + .add("allowedLocation", getAllowedLocations()) + .toString(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/aws/PolarisS3FileIOClientFactory.java b/polaris-core/src/main/java/io/polaris/core/storage/aws/PolarisS3FileIOClientFactory.java new file mode 100644 index 0000000000..93afe887a2 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/aws/PolarisS3FileIOClientFactory.java @@ -0,0 +1,66 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.storage.aws; + +import java.util.Map; +import org.apache.iceberg.aws.AwsClientProperties; +import org.apache.iceberg.aws.HttpClientProperties; +import org.apache.iceberg.aws.s3.S3FileIOAwsClientFactory; +import org.apache.iceberg.aws.s3.S3FileIOProperties; +import software.amazon.awssdk.services.s3.S3Client; + +/** + * A S3FileIOAwsClientFactory that will be used by the S3FileIO to initialize S3 client. The + * difference of this factory and DefaultS3FileIOAwsClientFactory is that this one enables cross + * region access. The S3FileIO is not supporting cross region access due to the issue described here + * https://github.com/apache/iceberg/issues/9785 + */ +public class PolarisS3FileIOClientFactory implements S3FileIOAwsClientFactory { + private S3FileIOProperties s3FileIOProperties; + private HttpClientProperties httpClientProperties; + private AwsClientProperties awsClientProperties; + + PolarisS3FileIOClientFactory() { + this.s3FileIOProperties = new S3FileIOProperties(); + this.httpClientProperties = new HttpClientProperties(); + this.awsClientProperties = new AwsClientProperties(); + } + + @Override + public void initialize(Map properties) { + this.s3FileIOProperties = new S3FileIOProperties(properties); + this.awsClientProperties = new AwsClientProperties(properties); + this.httpClientProperties = new HttpClientProperties(properties); + } + + @Override + public S3Client s3() { + return S3Client.builder() + .applyMutation(awsClientProperties::applyClientRegionConfiguration) + .applyMutation(httpClientProperties::applyHttpClientConfigurations) + .applyMutation(s3FileIOProperties::applyEndpointConfigurations) + .applyMutation(s3FileIOProperties::applyServiceConfigurations) + .applyMutation( + s3ClientBuilder -> { + s3FileIOProperties.applyCredentialConfigurations( + awsClientProperties, s3ClientBuilder); + }) + .applyMutation(s3FileIOProperties::applySignerConfiguration) + 
.applyMutation(s3FileIOProperties::applyS3AccessGrantsConfigurations) + .applyMutation(s3ClientBuilder -> s3ClientBuilder.crossRegionAccessEnabled(true)) + .build(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureCredentialsStorageIntegration.java b/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureCredentialsStorageIntegration.java new file mode 100644 index 0000000000..97bcbe0095 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureCredentialsStorageIntegration.java @@ -0,0 +1,285 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.storage.azure; + +import com.azure.core.credential.AccessToken; +import com.azure.core.credential.TokenRequestContext; +import com.azure.identity.DefaultAzureCredential; +import com.azure.identity.DefaultAzureCredentialBuilder; +import com.azure.storage.blob.BlobContainerClientBuilder; +import com.azure.storage.blob.BlobServiceClient; +import com.azure.storage.blob.BlobServiceClientBuilder; +import com.azure.storage.blob.models.BlobStorageException; +import com.azure.storage.blob.models.UserDelegationKey; +import com.azure.storage.blob.sas.BlobSasPermission; +import com.azure.storage.blob.sas.BlobServiceSasSignatureValues; +import com.azure.storage.file.datalake.DataLakeFileSystemClientBuilder; +import com.azure.storage.file.datalake.DataLakeServiceClient; +import com.azure.storage.file.datalake.DataLakeServiceClientBuilder; +import com.azure.storage.file.datalake.models.DataLakeStorageException; +import com.azure.storage.file.datalake.sas.DataLakeServiceSasSignatureValues; +import com.azure.storage.file.datalake.sas.PathSasPermission; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.storage.InMemoryStorageIntegration; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import java.time.Instant; +import java.time.OffsetDateTime; +import java.time.Period; +import java.time.ZoneOffset; +import java.time.temporal.ChronoUnit; +import java.util.EnumMap; +import java.util.HashSet; +import java.util.Objects; +import java.util.Set; +import org.jetbrains.annotations.NotNull; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import reactor.core.publisher.Mono; + +/** Azure credential vendor that supports generating SAS token */ +public class AzureCredentialsStorageIntegration + extends InMemoryStorageIntegration { + + private final Logger LOGGER = LoggerFactory.getLogger(AzureCredentialsStorageIntegration.class); + + final DefaultAzureCredential 
defaultAzureCredential;
+
+  public AzureCredentialsStorageIntegration() {
+    super(AzureCredentialsStorageIntegration.class.getName());
+    // By default, DefaultAzureCredential loads the client id, client secret, and tenant id
+    // from environment variables
+    defaultAzureCredential = new DefaultAzureCredentialBuilder().build();
+  }
+
+  @Override
+  public EnumMap<PolarisCredentialProperty, String> getSubscopedCreds(
+      @NotNull PolarisDiagnostics diagnostics,
+      @NotNull AzureStorageConfigurationInfo storageConfig,
+      boolean allowListOperation,
+      @NotNull Set<String> allowedReadLocations,
+      @NotNull Set<String> allowedWriteLocations) {
+    EnumMap<PolarisCredentialProperty, String> credentialMap =
+        new EnumMap<>(PolarisCredentialProperty.class);
+    String loc =
+        !allowedWriteLocations.isEmpty()
+            ? allowedWriteLocations.stream().findAny().orElse(null)
+            : allowedReadLocations.stream().findAny().orElse(null);
+    if (loc == null) {
+      throw new IllegalArgumentException("Expect valid location");
+    }
+    // Expected format: <scheme>://<container>@<storageaccount>.<endpoint>/<filepath>
+    AzureLocation location = new AzureLocation(loc);
+    validateAccountAndContainer(location, allowedReadLocations, allowedWriteLocations);
+
+    String storageDnsName = location.getStorageAccount() + "."
+ location.getEndpoint(); + String endpoint = "https://" + storageDnsName; + String filePath = location.getFilePath(); + + BlobSasPermission blobSasPermission = new BlobSasPermission(); + // pathSasPermission is for Data lake storage + PathSasPermission pathSasPermission = new PathSasPermission(); + + if (allowListOperation) { + // container level + blobSasPermission.setListPermission(true); + pathSasPermission.setListPermission(true); + } + if (!allowedReadLocations.isEmpty()) { + blobSasPermission.setReadPermission(true); + pathSasPermission.setReadPermission(true); + } + if (!allowedWriteLocations.isEmpty()) { + blobSasPermission.setAddPermission(true); + blobSasPermission.setWritePermission(true); + blobSasPermission.setDeletePermission(true); + pathSasPermission.setAddPermission(true); + pathSasPermission.setWritePermission(true); + pathSasPermission.setDeletePermission(true); + } + + Instant start = Instant.now(); + OffsetDateTime expiry = + OffsetDateTime.ofInstant( + start.plusSeconds(3600), ZoneOffset.UTC); // 1 hr to sync with AWS and GCP Access token + + AccessToken accessToken = getAccessToken(storageConfig.getTenantId()); + // Get user delegation key. 
+    // Set the newly generated user delegation key expiry to 7 days minus 1 minute.
+    // Azure strictly requires the end time to be <= 7 days from the current time; subtract
+    // 1 minute to avoid clock skew between the client and server.
+    OffsetDateTime startTime = start.truncatedTo(ChronoUnit.SECONDS).atOffset(ZoneOffset.UTC);
+    OffsetDateTime sanitizedEndTime =
+        start.plus(Period.ofDays(7)).minusSeconds(60).atOffset(ZoneOffset.UTC);
+    LOGGER
+        .atDebug()
+        .addKeyValue("allowedListAction", allowListOperation)
+        .addKeyValue("allowedReadLoc", allowedReadLocations)
+        .addKeyValue("allowedWriteLoc", allowedWriteLocations)
+        .addKeyValue("location", loc)
+        .addKeyValue("storageAccount", location.getStorageAccount())
+        .addKeyValue("endpoint", location.getEndpoint())
+        .addKeyValue("container", location.getContainer())
+        .addKeyValue("filePath", filePath)
+        .log("Subscope Azure SAS");
+    String sasToken = "";
+    if (location.getEndpoint().equalsIgnoreCase(AzureLocation.BLOB_ENDPOINT)) {
+      sasToken =
+          getBlobUserDelegationSas(
+              startTime,
+              sanitizedEndTime,
+              expiry,
+              storageDnsName,
+              location.getContainer(),
+              blobSasPermission,
+              Mono.just(accessToken));
+    } else if (location.getEndpoint().equalsIgnoreCase(AzureLocation.ADLS_ENDPOINT)) {
+      sasToken =
+          getAdlsUserDelegationSas(
+              startTime,
+              sanitizedEndTime,
+              expiry,
+              storageDnsName,
+              location.getContainer(),
+              pathSasPermission,
+              Mono.just(accessToken));
+    } else {
+      throw new RuntimeException(
+          String.format("Endpoint %s not supported", location.getEndpoint()));
+    }
+    credentialMap.put(PolarisCredentialProperty.AZURE_SAS_TOKEN, sasToken);
+    credentialMap.put(PolarisCredentialProperty.AZURE_ACCOUNT_HOST, storageDnsName);
+    return credentialMap;
+  }
+
+  private String getBlobUserDelegationSas(
+      OffsetDateTime startTime,
+      OffsetDateTime keyEndtime,
+      OffsetDateTime sasExpiry,
+      String storageDnsName,
+      String container,
+      BlobSasPermission blobSasPermission,
+      Mono<AccessToken> accessTokenMono) {
+    String endpoint = "https://" +
storageDnsName;
+    try {
+      BlobServiceClient serviceClient =
+          new BlobServiceClientBuilder()
+              .endpoint(endpoint)
+              .credential(c -> accessTokenMono)
+              .buildClient();
+      UserDelegationKey userDelegationKey =
+          serviceClient.getUserDelegationKey(startTime, keyEndtime);
+      BlobServiceSasSignatureValues sigValues =
+          new BlobServiceSasSignatureValues(sasExpiry, blobSasPermission);
+      // scoped to the container
+      return new BlobContainerClientBuilder()
+          .endpoint(endpoint)
+          .containerName(container)
+          .buildClient()
+          .generateUserDelegationSas(sigValues, userDelegationKey);
+    } catch (BlobStorageException ex) {
+      LOGGER.debug(
+          "Azure BlobStorageException for getBlobUserDelegationSas. keyStart={} keyEnd={}, storageDns={}, container={}",
+          startTime,
+          keyEndtime,
+          storageDnsName,
+          container,
+          ex);
+      throw ex;
+    }
+  }
+
+  private String getAdlsUserDelegationSas(
+      OffsetDateTime startTime,
+      OffsetDateTime endTime,
+      OffsetDateTime sasExpiry,
+      String storageDnsName,
+      String fileSystemNameOrContainer,
+      PathSasPermission pathSasPermission,
+      Mono<AccessToken> accessTokenMono) {
+    String endpoint = "https://" + storageDnsName;
+    try {
+      DataLakeServiceClient dataLakeServiceClient =
+          new DataLakeServiceClientBuilder()
+              .endpoint(endpoint)
+              .credential(c -> accessTokenMono)
+              .buildClient();
+      com.azure.storage.file.datalake.models.UserDelegationKey userDelegationKey =
+          dataLakeServiceClient.getUserDelegationKey(startTime, endTime);
+
+      DataLakeServiceSasSignatureValues signatureValues =
+          new DataLakeServiceSasSignatureValues(sasExpiry, pathSasPermission);
+
+      return new DataLakeFileSystemClientBuilder()
+          .endpoint(endpoint)
+          .fileSystemName(fileSystemNameOrContainer)
+          .buildClient()
+          .generateUserDelegationSas(signatureValues, userDelegationKey);
+    } catch (DataLakeStorageException ex) {
+      LOGGER.debug(
+          "Azure DataLakeStorageException for getAdlsUserDelegationSas.
keyStart={} keyEnd={}, storageDns={}, fileSystemName={}", + startTime, + endTime, + storageDnsName, + fileSystemNameOrContainer, + ex); + throw ex; + } + } + + /** + * Verify that storage accounts, containers and endpoint are the same + * + * @param target + * @param readLocations + * @param writeLocations + */ + private void validateAccountAndContainer( + AzureLocation target, Set readLocations, Set writeLocations) { + Set allLocations = new HashSet<>(); + allLocations.addAll(readLocations); + allLocations.addAll(writeLocations); + allLocations.forEach( + loc -> { + AzureLocation location = new AzureLocation(loc); + if (!Objects.equals(location.getStorageAccount(), target.getStorageAccount()) + || !Objects.equals(location.getContainer(), target.getContainer()) + || !Objects.equals(location.getEndpoint(), target.getEndpoint())) { + throw new RuntimeException( + "Expect allowed read write locations belong to the same storage account and container"); + } + }); + } + + private AccessToken getAccessToken(String tenantId) { + String scope = "https://storage.azure.com/.default"; + AccessToken accessToken = + defaultAzureCredential + .getToken(new TokenRequestContext().addScopes(scope).setTenantId(tenantId)) + .blockOptional() + .orElse(null); + if (accessToken == null) { + throw new RuntimeException("No access token fetched!"); + } + return accessToken; + } + + @Override + public EnumMap + descPolarisStorageConfiguration(@NotNull PolarisStorageConfigurationInfo storageConfigInfo) { + return null; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureLocation.java b/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureLocation.java new file mode 100644 index 0000000000..0e6fa0d641 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureLocation.java @@ -0,0 +1,92 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.core.storage.azure;
+
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import org.jetbrains.annotations.NotNull;
+
+/** This class represents all information for an Azure location */
+public class AzureLocation {
+  /** The pattern only allows abfs[s] for now because ResolvingFileIO only accepts ADLSFileIO */
+  private static final Pattern URI_PATTERN = Pattern.compile("^abfss?://([^/?#]+)(.*)?$");
+
+  public static final String ADLS_ENDPOINT = "dfs.core.windows.net";
+
+  public static final String BLOB_ENDPOINT = "blob.core.windows.net";
+
+  private final String storageAccount;
+  private final String container;
+
+  private final String endpoint;
+  private final String filePath;
+
+  /**
+   * Construct an Azure location object from a location URI; it should follow this pattern:
+   *
+   *
<pre>abfs[s]://[{container}@]{storageAccount}.{endpoint}/{filePath}</pre>
+   *
+   * @param location a location URI
+   */
+  public AzureLocation(@NotNull String location) {
+    Matcher matcher = URI_PATTERN.matcher(location);
+    if (!matcher.matches()) {
+      throw new IllegalArgumentException("Invalid Azure ADLS location URI " + location);
+    }
+    String authority = matcher.group(1);
+    // look for the '@' separating container from account host
+    String[] parts = authority.split("@", -1);
+
+    // expect both container and account to exist
+    if (parts.length != 2) {
+      throw new IllegalArgumentException("container and account name must both be provided");
+    }
+    this.container = parts[0];
+    String accountHost = parts[1];
+    String[] hostParts = accountHost.split("\\.", 2);
+    if (hostParts.length != 2) {
+      throw new IllegalArgumentException("storage account and endpoint must both be provided");
+    }
+    this.storageAccount = hostParts[0];
+    this.endpoint = hostParts[1];
+    String path = matcher.group(2);
+    filePath = path == null ? "" : path.startsWith("/") ? path.substring(1) : path;
+  }
+
+  /**
+   * Get the storage account
+   *
+   * @return the storage account name
+   */
+  public String getStorageAccount() {
+    return storageAccount;
+  }
+
+  /** Get the container name */
+  public String getContainer() {
+    return container;
+  }
+
+  /** Get the endpoint, for example: blob.core.windows.net */
+  public String getEndpoint() {
+    return endpoint;
+  }
+
+  /** Get the file path */
+  public String getFilePath() {
+    return filePath;
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureStorageConfigurationInfo.java b/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureStorageConfigurationInfo.java
new file mode 100644
index 0000000000..dbdf843cce
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/storage/azure/AzureStorageConfigurationInfo.java
@@ -0,0 +1,97 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
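The parsing performed by `AzureLocation` above can be illustrated with a standalone sketch; the class and method names here (`AzureLocationSketch.parse`) are illustrative only and are not part of the Polaris code in this diff:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone sketch of AzureLocation's parsing: split an abfs[s] URI into
// container, storage account, endpoint, and file path.
public class AzureLocationSketch {
  private static final Pattern URI_PATTERN = Pattern.compile("^abfss?://([^/?#]+)(.*)?$");

  /** Returns {container, storageAccount, endpoint, filePath}. */
  public static String[] parse(String location) {
    Matcher matcher = URI_PATTERN.matcher(location);
    if (!matcher.matches()) {
      throw new IllegalArgumentException("Invalid Azure ADLS location URI " + location);
    }
    // authority is "<container>@<account>.<endpoint>"
    String[] parts = matcher.group(1).split("@", -1);
    if (parts.length != 2) {
      throw new IllegalArgumentException("container and account name must both be provided");
    }
    // split "<account>.<endpoint>" on the first dot only
    String[] hostParts = parts[1].split("\\.", 2);
    if (hostParts.length != 2) {
      throw new IllegalArgumentException("storage account and endpoint must both be provided");
    }
    String path = matcher.group(2);
    String filePath = path == null ? "" : path.startsWith("/") ? path.substring(1) : path;
    return new String[] {parts[0], hostParts[0], hostParts[1], filePath};
  }

  public static void main(String[] args) {
    String[] r = parse("abfss://mycontainer@myaccount.dfs.core.windows.net/warehouse/table");
    System.out.println(String.join("|", r));
  }
}
```

Note that the endpoint component is everything after the first dot of the account host, which is how `AzureLocation` distinguishes `dfs.core.windows.net` (ADLS) from `blob.core.windows.net` (Blob).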
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage.azure; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.google.common.base.MoreObjects; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import java.util.List; +import java.util.Objects; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** Azure storage configuration information. */ +public class AzureStorageConfigurationInfo extends PolarisStorageConfigurationInfo { + // technically there is no limitation since expectation for Azure locations are for the same + // storage account and same container + @JsonIgnore private static final int MAX_ALLOWED_LOCATIONS = 20; + + // Azure tenant id + private final @NotNull String tenantId; + + /** The multi tenant app name for the service principal */ + @JsonProperty(value = "multiTenantAppName", required = false) + private @Nullable String multiTenantAppName = null; + + /** The consent url to the Azure permissions request page */ + @JsonProperty(value = "consentUrl", required = false) + private @Nullable String consentUrl = null; + + @JsonCreator + public AzureStorageConfigurationInfo( + @JsonProperty(value = "allowedLocations", required = true) @NotNull + List allowedLocations, + @JsonProperty(value = "tenantId", required = true) @NotNull String tenantId) { + super(StorageType.AZURE, allowedLocations); + this.tenantId = tenantId; + validateMaxAllowedLocations(MAX_ALLOWED_LOCATIONS); + } + 
+ @Override + public String getFileIoImplClassName() { + return "org.apache.iceberg.azure.adlsv2.ADLSFileIO"; + } + + public @NotNull String getTenantId() { + return tenantId; + } + + public String getMultiTenantAppName() { + return multiTenantAppName; + } + + public void setMultiTenantAppName(String multiTenantAppName) { + this.multiTenantAppName = multiTenantAppName; + } + + public String getConsentUrl() { + return consentUrl; + } + + public void setConsentUrl(String consentUrl) { + this.consentUrl = consentUrl; + } + + @Override + public String toString() { + return MoreObjects.toStringHelper(this) + .add("storageType", getStorageType()) + .add("tenantId", tenantId) + .add("allowedLocation", getAllowedLocations()) + .add("multiTenantAppName", multiTenantAppName) + .add("consentUrl", consentUrl) + .toString(); + } + + @Override + public void validatePrefixForStorageType(String loc) { + AzureLocation location = new AzureLocation(loc); + Objects.requireNonNull( + location); // do something with the variable so the JVM doesn't optimize out the check + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCache.java b/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCache.java new file mode 100644 index 0000000000..a3130e4ba9 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCache.java @@ -0,0 +1,168 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage.cache; + +import com.github.benmanes.caffeine.cache.CacheLoader; +import com.github.benmanes.caffeine.cache.Caffeine; +import com.github.benmanes.caffeine.cache.Expiry; +import com.github.benmanes.caffeine.cache.LoadingCache; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import org.apache.iceberg.exceptions.UnprocessableEntityException; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.VisibleForTesting; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** Storage subscoped credential cache. */ +public class StorageCredentialCache { + + private static final Logger LOGGER = LoggerFactory.getLogger(StorageCredentialCache.class); + + private static final long CACHE_MAX_DURATION_MS = 30 * 60 * 1000L; // 30 minutes + private static final long CACHE_MAX_NUMBER_OF_ENTRIES = 10_000L; + private final LoadingCache cache; + + /** Initialize the creds cache, max cache duration is half an hr. 
*/ + public StorageCredentialCache() { + cache = + Caffeine.newBuilder() + .maximumSize(CACHE_MAX_NUMBER_OF_ENTRIES) + .expireAfter( + new Expiry() { + @Override + public long expireAfterCreate( + StorageCredentialCacheKey key, + StorageCredentialCacheEntry entry, + long currentTime) { + long expireAfterMillis = + Math.max( + 0, + Math.min( + (entry.getExpirationTime() - System.currentTimeMillis()) / 2, + CACHE_MAX_DURATION_MS)); + return TimeUnit.MILLISECONDS.toNanos(expireAfterMillis); + } + + @Override + public long expireAfterUpdate( + StorageCredentialCacheKey key, + StorageCredentialCacheEntry entry, + long currentTime, + long currentDuration) { + return currentDuration; + } + + @Override + public long expireAfterRead( + StorageCredentialCacheKey key, + StorageCredentialCacheEntry entry, + long currentTime, + long currentDuration) { + return currentDuration; + } + }) + .build( + new CacheLoader() { + @Override + public StorageCredentialCacheEntry load(StorageCredentialCacheKey key) { + // the load happen at getOrGenerateSubScopeCreds() + return null; + } + }); + } + + /** + * Either get from the cache or generate a new entry for a scoped creds + * + * @param metaStoreManager the meta storage manager used to generate a new scoped creds if needed + * @param callCtx the call context + * @param polarisEntity the polaris entity that is going to scoped creds + * @param allowListOperation whether allow list action on the provided read and write locations + * @param allowedReadLocations a set of allowed to read locations + * @param allowedWriteLocations a set of allowed to write locations. 
+ * @return the a map of string containing the scoped creds information + */ + public Map getOrGenerateSubScopeCreds( + @NotNull PolarisMetaStoreManager metaStoreManager, + @NotNull PolarisCallContext callCtx, + @NotNull PolarisEntity polarisEntity, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations) { + if (!isTypeSupported(polarisEntity.getType())) { + callCtx + .getDiagServices() + .fail("entity_type_not_suppported_to_scope_creds", "type={}", polarisEntity.getType()); + } + StorageCredentialCacheKey key = + new StorageCredentialCacheKey( + polarisEntity, + allowListOperation, + allowedReadLocations, + allowedWriteLocations, + callCtx); + LOGGER.atDebug().addKeyValue("key", key).log("subscopedCredsCache"); + Function loader = + k -> { + LOGGER.atDebug().log("StorageCredentialCache::load"); + PolarisMetaStoreManager.ScopedCredentialsResult scopedCredentialsResult = + metaStoreManager.getSubscopedCredsForEntity( + k.getCallContext(), + k.getCatalogId(), + k.getEntityId(), + k.isAllowedListAction(), + k.getAllowedReadLocations(), + k.getAllowedWriteLocations()); + if (scopedCredentialsResult.isSuccess()) { + return new StorageCredentialCacheEntry(scopedCredentialsResult); + } + LOGGER + .atDebug() + .addKeyValue("errorMessage", scopedCredentialsResult.getExtraInformation()) + .log("Failed to get subscoped credentials"); + throw new UnprocessableEntityException( + "Failed to get subscoped credentials: " + + scopedCredentialsResult.getExtraInformation()); + }; + return cache.get(key, loader).convertToMapOfString(); + } + + public Map getIfPresent(StorageCredentialCacheKey key) { + return Optional.ofNullable(cache.getIfPresent(key)) + .map(value -> value.convertToMapOfString()) + .orElse(null); + } + + private boolean isTypeSupported(PolarisEntityType type) { + return type == PolarisEntityType.CATALOG + || type == PolarisEntityType.NAMESPACE + || type == PolarisEntityType.TABLE_LIKE + || type == 
PolarisEntityType.TASK; + } + + @VisibleForTesting + public long getEstimatedSize() { + return this.cache.estimatedSize(); + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCacheEntry.java b/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCacheEntry.java new file mode 100644 index 0000000000..4f0eefd7bf --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCacheEntry.java @@ -0,0 +1,76 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage.cache; + +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.storage.PolarisCredentialProperty; +import java.util.EnumMap; +import java.util.HashMap; +import java.util.Map; + +/** A storage credential cached entry. 
 */
+public class StorageCredentialCacheEntry {
+  /** The scoped creds map that is fetched from a creds vending service */
+  public final EnumMap<PolarisCredentialProperty, String> credsMap;
+
+  private final PolarisMetaStoreManager.ScopedCredentialsResult scopedCredentialsResult;
+
+  public StorageCredentialCacheEntry(
+      PolarisMetaStoreManager.ScopedCredentialsResult scopedCredentialsResult) {
+    this.scopedCredentialsResult = scopedCredentialsResult;
+    this.credsMap = scopedCredentialsResult.getCredentials();
+  }
+
+  /**
+   * Get the expiration time in milliseconds for the cached entry
+   *
+   * @return the expiration time in epoch milliseconds, or {@code Long.MAX_VALUE} if none is set
+   */
+  public long getExpirationTime() {
+    if (credsMap.containsKey(PolarisCredentialProperty.GCS_ACCESS_TOKEN_EXPIRES_AT)) {
+      return Long.parseLong(credsMap.get(PolarisCredentialProperty.GCS_ACCESS_TOKEN_EXPIRES_AT));
+    }
+    if (credsMap.containsKey(PolarisCredentialProperty.EXPIRATION_TIME)) {
+      return Long.parseLong(credsMap.get(PolarisCredentialProperty.EXPIRATION_TIME));
+    }
+    return Long.MAX_VALUE;
+  }
+
+  /**
+   * Get the map of string creds that is needed for the query engine.
+   *
+   * @return a map of strings representing the subscoped creds info.
 */
+  public Map<String, String> convertToMapOfString() {
+    Map<String, String> resCredsMap = new HashMap<>();
+    if (!credsMap.isEmpty()) {
+      credsMap.forEach(
+          (key, value) -> {
+            // Only Azure needs special handling: the target key is composed dynamically, with
+            // the storage account endpoint appended to the property name
+            if (key.equals(PolarisCredentialProperty.AZURE_SAS_TOKEN)) {
+              resCredsMap.put(
+                  key.getPropertyName()
+                      + credsMap.get(PolarisCredentialProperty.AZURE_ACCOUNT_HOST),
+                  value);
+            } else if (!key.equals(PolarisCredentialProperty.AZURE_ACCOUNT_HOST)) {
+              resCredsMap.put(key.getPropertyName(), value);
+            }
+          });
+    }
+    return resCredsMap;
+  }
+}
diff --git a/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCacheKey.java b/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCacheKey.java
new file mode 100644
index 0000000000..1791ceac29
--- /dev/null
+++ b/polaris-core/src/main/java/io/polaris/core/storage/cache/StorageCredentialCacheKey.java
@@ -0,0 +1,139 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
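The Azure special case in `convertToMapOfString` can be sketched in isolation with plain strings. The property key strings below (`adls.sas-token.`, `azure.account-host`) are placeholder assumptions, since the actual `PolarisCredentialProperty` names are not shown in this diff:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the credential-map flattening: the SAS token is exposed under a key
// composed dynamically by appending the storage account host to the property
// prefix, while the host entry itself stays internal and is not emitted.
public class CredsMapSketch {
  static final String SAS_TOKEN_KEY = "azure.sas-token";        // assumed name
  static final String SAS_PREFIX = "adls.sas-token.";           // assumed prefix
  static final String ACCOUNT_HOST_KEY = "azure.account-host";  // assumed name

  public static Map<String, String> convert(Map<String, String> credsMap) {
    Map<String, String> res = new HashMap<>();
    credsMap.forEach(
        (key, value) -> {
          if (key.equals(SAS_TOKEN_KEY)) {
            // dynamic key: prefix + storage account host
            res.put(SAS_PREFIX + credsMap.get(ACCOUNT_HOST_KEY), value);
          } else if (!key.equals(ACCOUNT_HOST_KEY)) {
            res.put(key, value);
          }
        });
    return res;
  }
}
```

This mirrors how query engines expect per-account ADLS SAS configuration keys, with the account host baked into the key name rather than the value.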
+ */ +package io.polaris.core.storage.cache; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import java.util.Objects; +import java.util.Set; +import org.jetbrains.annotations.Nullable; + +public class StorageCredentialCacheKey { + + private final long catalogId; + + /** The serialized string of the storage config. */ + private final String storageConfigSerializedStr; + + /** + * The entity id is passed to be used to fetch subscoped creds, but is not used to do hash/equals + * as part of the cache key. + */ + private final long entityId; + + private final boolean allowedListAction; + private final Set allowedReadLocations; + + private final Set allowedWriteLocations; + + /** + * The callContext is passed to be used to fetch subscoped creds, but is not used to hash/equals + * as part of the cache key. + */ + private @Nullable PolarisCallContext callContext; + + public StorageCredentialCacheKey( + PolarisEntity entity, + boolean allowedListAction, + Set allowedReadLocations, + Set allowedWriteLocations, + PolarisCallContext callContext) { + this.catalogId = entity.getCatalogId(); + this.storageConfigSerializedStr = + entity + .getInternalPropertiesAsMap() + .get(PolarisEntityConstants.getStorageConfigInfoPropertyName()); + this.entityId = entity.getId(); + this.allowedListAction = allowedListAction; + this.allowedReadLocations = allowedReadLocations; + this.allowedWriteLocations = allowedWriteLocations; + this.callContext = callContext; + if (this.callContext == null) { + this.callContext = CallContext.getCurrentContext().getPolarisCallContext(); + } + } + + public long getCatalogId() { + return catalogId; + } + + public String getStorageConfigSerializedStr() { + return storageConfigSerializedStr; + } + + public long getEntityId() { + return entityId; + } + + public boolean isAllowedListAction() { + return allowedListAction; 
+ } + + public Set getAllowedReadLocations() { + return allowedReadLocations; + } + + public Set getAllowedWriteLocations() { + return allowedWriteLocations; + } + + public PolarisCallContext getCallContext() { + return callContext; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + StorageCredentialCacheKey cacheKey = (StorageCredentialCacheKey) o; + return catalogId == cacheKey.getCatalogId() + && Objects.equals(storageConfigSerializedStr, cacheKey.getStorageConfigSerializedStr()) + && allowedListAction == cacheKey.allowedListAction + && Objects.equals(allowedReadLocations, cacheKey.allowedReadLocations) + && Objects.equals(allowedWriteLocations, cacheKey.allowedWriteLocations); + } + + @Override + public int hashCode() { + return Objects.hash( + catalogId, + storageConfigSerializedStr, + allowedListAction, + allowedReadLocations, + allowedWriteLocations); + } + + @Override + public String toString() { + return "StorageCredentialCacheKey{" + + "catalogId=" + + catalogId + + ", storageConfigSerializedStr='" + + storageConfigSerializedStr + + '\'' + + ", entityId=" + + entityId + + ", allowedListAction=" + + allowedListAction + + ", allowedReadLocations=" + + allowedReadLocations + + ", allowedWriteLocations=" + + allowedWriteLocations + + '}'; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/gcp/GcpCredentialsStorageIntegration.java b/polaris-core/src/main/java/io/polaris/core/storage/gcp/GcpCredentialsStorageIntegration.java new file mode 100644 index 0000000000..88d322f64f --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/gcp/GcpCredentialsStorageIntegration.java @@ -0,0 +1,216 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage.gcp; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.auth.http.HttpTransportFactory; +import com.google.auth.oauth2.AccessToken; +import com.google.auth.oauth2.CredentialAccessBoundary; +import com.google.auth.oauth2.DownscopedCredentials; +import com.google.auth.oauth2.GoogleCredentials; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.storage.InMemoryStorageIntegration; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import java.io.IOException; +import java.net.URI; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.EnumMap; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.stream.Stream; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.VisibleForTesting; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * GCS implementation of {@link PolarisStorageIntegration} with support for scoping credentials for + * input read/write locations + */ +public class GcpCredentialsStorageIntegration + extends InMemoryStorageIntegration { + public static final String TOKEN_URL = "https://sts.googleapis.com/v1/token"; + private final Logger LOGGER = 
LoggerFactory.getLogger(GcpCredentialsStorageIntegration.class); + + private final GoogleCredentials sourceCredentials; + private final HttpTransportFactory transportFactory; + public static final Set TOKEN_ENDPOINT_RETRYABLE_STATUS_CODES = + new HashSet<>(Arrays.asList(500, 503, 408, 429)); + + public GcpCredentialsStorageIntegration( + GoogleCredentials sourceCredentials, HttpTransportFactory transportFactory) { + super(GcpCredentialsStorageIntegration.class.getName()); + // Needed for when environment variable GOOGLE_APPLICATION_CREDENTIALS points to google service + // account key json + this.sourceCredentials = + sourceCredentials.createScoped("https://www.googleapis.com/auth/cloud-platform"); + this.transportFactory = transportFactory; + } + + @Override + public EnumMap getSubscopedCreds( + @NotNull PolarisDiagnostics diagnostics, + @NotNull GcpStorageConfigurationInfo storageConfig, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations) { + try { + sourceCredentials.refresh(); + } catch (IOException e) { + throw new RuntimeException("Unable to refresh GCP credentials", e); + } + AccessToken sourceCredentialsAccessToken = this.sourceCredentials.getAccessToken(); + + CredentialAccessBoundary accessBoundary = + generateAccessBoundaryRules( + allowListOperation, allowedReadLocations, allowedWriteLocations); + DownscopedCredentials credentials = + DownscopedCredentials.newBuilder() + .setHttpTransportFactory(transportFactory) + .setSourceCredential(sourceCredentials) + .setCredentialAccessBoundary(accessBoundary) + .build(); + AccessToken token; + try { + token = credentials.refreshAccessToken(); + } catch (IOException e) { + LOGGER + .atError() + .addKeyValue("readLocations", allowedReadLocations) + .addKeyValue("writeLocations", allowedWriteLocations) + .addKeyValue("includesList", allowListOperation) + .addKeyValue("accessBoundary", convertToString(accessBoundary)) + .log("Unable to refresh access credentials", 
e); + throw new RuntimeException("Unable to fetch access credentials " + e.getMessage(), e); + } + + // If expires_in is missing, use the source credential's expiration time, which requires + // another API call to get. + EnumMap<PolarisCredentialProperty, String> propertyMap = + new EnumMap<>(PolarisCredentialProperty.class); + propertyMap.put(PolarisCredentialProperty.GCS_ACCESS_TOKEN, token.getTokenValue()); + propertyMap.put( + PolarisCredentialProperty.GCS_ACCESS_TOKEN_EXPIRES_AT, + String.valueOf(token.getExpirationTime().getTime())); + return propertyMap; + } + + private String convertToString(CredentialAccessBoundary accessBoundary) { + try { + return new ObjectMapper().writeValueAsString(accessBoundary); + } catch (JsonProcessingException e) { + LOGGER.warn("Unable to convert access boundary to json", e); + return Objects.toString(accessBoundary); + } + } + + @VisibleForTesting + public static CredentialAccessBoundary generateAccessBoundaryRules( + boolean allowListOperation, + @NotNull Set<String> allowedReadLocations, + @NotNull Set<String> allowedWriteLocations) { + Map<String, List<String>> readConditionsMap = new HashMap<>(); + Map<String, List<String>> writeConditionsMap = new HashMap<>(); + + HashSet<String> readBuckets = new HashSet<>(); + HashSet<String> writeBuckets = new HashSet<>(); + Stream.concat(allowedReadLocations.stream(), allowedWriteLocations.stream()) + .distinct() + .forEach( + location -> { + URI uri = URI.create(location); + String bucket = uri.getHost(); + readBuckets.add(bucket); + String path = uri.getPath().substring(1); + List<String> resourceExpressions = + readConditionsMap.computeIfAbsent(bucket, key -> new ArrayList<>()); + resourceExpressions.add( + String.format( + "resource.name.startsWith('projects/_/buckets/%s/objects/%s')", + bucket, path)); + if (allowListOperation) { + resourceExpressions.add( + String.format( + "api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('%s')", + path)); + } + if (allowedWriteLocations.contains(location)) { + writeBuckets.add(bucket); + List<String> writeExpressions = +
writeConditionsMap.computeIfAbsent(bucket, key -> new ArrayList<>()); + writeExpressions.add( + String.format( + "resource.name.startsWith('projects/_/buckets/%s/objects/%s')", + bucket, path)); + } + }); + CredentialAccessBoundary.Builder accessBoundaryBuilder = CredentialAccessBoundary.newBuilder(); + readBuckets.forEach( + bucket -> { + List<String> readConditions = readConditionsMap.get(bucket); + if (readConditions == null || readConditions.isEmpty()) { + return; + } + CredentialAccessBoundary.AccessBoundaryRule.Builder builder = + CredentialAccessBoundary.AccessBoundaryRule.newBuilder(); + builder.setAvailableResource(bucketResource(bucket)); + builder.setAvailabilityCondition( + CredentialAccessBoundary.AccessBoundaryRule.AvailabilityCondition.newBuilder() + .setExpression(String.join(" || ", readConditions)) + .build()); + builder.setAvailablePermissions(List.of("inRole:roles/storage.legacyObjectReader")); + if (allowListOperation) { + builder.addAvailablePermission("inRole:roles/storage.objectViewer"); + } + accessBoundaryBuilder.addRule(builder.build()); + }); + writeBuckets.forEach( + bucket -> { + List<String> writeConditions = writeConditionsMap.get(bucket); + if (writeConditions == null || writeConditions.isEmpty()) { + return; + } + CredentialAccessBoundary.AccessBoundaryRule.Builder builder = + CredentialAccessBoundary.AccessBoundaryRule.newBuilder(); + builder.setAvailableResource(bucketResource(bucket)); + builder.setAvailabilityCondition( + CredentialAccessBoundary.AccessBoundaryRule.AvailabilityCondition.newBuilder() + .setExpression(String.join(" || ", writeConditions)) + .build()); + builder.setAvailablePermissions(List.of("inRole:roles/storage.legacyBucketWriter")); + accessBoundaryBuilder.addRule(builder.build()); + }); + return accessBoundaryBuilder.build(); + } + + private static String bucketResource(String bucket) { + return "//storage.googleapis.com/projects/_/buckets/" + bucket; + } + + @Override + public EnumMap +
descPolarisStorageConfiguration(@NotNull PolarisStorageConfigurationInfo storageConfigInfo) { + return null; + } +} diff --git a/polaris-core/src/main/java/io/polaris/core/storage/gcp/GcpStorageConfigurationInfo.java b/polaris-core/src/main/java/io/polaris/core/storage/gcp/GcpStorageConfigurationInfo.java new file mode 100644 index 0000000000..2275ce7a66 --- /dev/null +++ b/polaris-core/src/main/java/io/polaris/core/storage/gcp/GcpStorageConfigurationInfo.java @@ -0,0 +1,68 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.storage.gcp; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.google.common.base.MoreObjects; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import java.util.List; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; + +/** GCP storage configuration information.
*/ +public class GcpStorageConfigurationInfo extends PolarisStorageConfigurationInfo { + + // 8 is an experimental limit derived from generating GCP accessBoundaryRules when subscoping creds; + // when the rule is too large, GCS only returns the error: 400 bad request "Invalid arguments + // provided in the request" + @JsonIgnore private static final int MAX_ALLOWED_LOCATIONS = 8; + + /** The GCP service account */ + @JsonProperty(value = "gcpServiceAccount", required = false) + private @Nullable String gcpServiceAccount = null; + + @JsonCreator + public GcpStorageConfigurationInfo( + @JsonProperty(value = "allowedLocations", required = true) @NotNull + List<String> allowedLocations) { + super(StorageType.GCS, allowedLocations); + validateMaxAllowedLocations(MAX_ALLOWED_LOCATIONS); + } + + @Override + public String getFileIoImplClassName() { + return "org.apache.iceberg.gcp.gcs.GCSFileIO"; + } + + public void setGcpServiceAccount(String gcpServiceAccount) { + this.gcpServiceAccount = gcpServiceAccount; + } + + public String getGcpServiceAccount() { + return gcpServiceAccount; + } + + @Override + public String toString() { + return MoreObjects.toStringHelper(this) + .add("storageType", getStorageType()) + .add("allowedLocation", getAllowedLocations()) + .add("gcpServiceAccount", gcpServiceAccount) + .toString(); + } +} diff --git a/polaris-core/src/test/java/io/polaris/core/persistence/EntityCacheTest.java b/polaris-core/src/test/java/io/polaris/core/persistence/EntityCacheTest.java new file mode 100644 index 0000000000..4d486d5278 --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/core/persistence/EntityCacheTest.java @@ -0,0 +1,463 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.persistence.cache.EntityCache; +import io.polaris.core.persistence.cache.EntityCacheByNameKey; +import io.polaris.core.persistence.cache.EntityCacheEntry; +import io.polaris.core.persistence.cache.EntityCacheLookupResult; +import java.util.List; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +/** Unit testing of the entity cache */ +public class EntityCacheTest { + + // diag services + private final PolarisDiagnostics diagServices; + + // the entity store, use treemap implementation + private final PolarisTreeMapStore store; + + // to interact with the metastore + private final PolarisMetaStoreSession metaStore; + + // polaris call context + private final PolarisCallContext callCtx; + + // utility to bootstrap the meta store + private final PolarisTestMetaStoreManager tm; + + // the meta store manager + private final PolarisMetaStoreManager metaStoreManager; + + /** + * Initialize and create the test metadata + * + *
+   * - test
+   * - (N1/N2/T1)
+   * - (N1/N2/T2)
+   * - (N1/N2/V1)
+   * - (N1/N3/T3)
+   * - (N1/N3/V2)
+   * - (N1/T4)
+   * - (N1/N4)
+   * - N5/N6/T5
+   * - N5/N6/T6
+   * - R1(TABLE_READ on N1/N2, VIEW_CREATE on C, TABLE_LIST on N2, TABLE_DROP on N5/N6/T5)
+   * - R2(TABLE_WRITE_DATA on N5, VIEW_LIST on C)
+   * - PR1(R1, R2)
+   * - PR2(R2)
+   * - P1(PR1, PR2)
+   * - P2(PR2)
+   * 
+ */ + public EntityCacheTest() { + diagServices = new PolarisDefaultDiagServiceImpl(); + store = new PolarisTreeMapStore(diagServices); + metaStore = new PolarisTreeMapMetaStoreSessionImpl(store, Mockito.mock()); + callCtx = new PolarisCallContext(metaStore, diagServices); + metaStoreManager = new PolarisMetaStoreManagerImpl(); + + // bootstrap the meta store with our test schema + tm = new PolarisTestMetaStoreManager(metaStoreManager, callCtx); + tm.testCreateTestCatalog(); + } + + /** + * @return new cache for the entity store + */ + EntityCache allocateNewCache() { + return new EntityCache(this.metaStoreManager); + } + + @Test + void testGetOrLoadEntityByName() { + // get a new cache + EntityCache cache = this.allocateNewCache(); + + // should exist and no cache hit + EntityCacheLookupResult lookup = + cache.getOrLoadEntityByName( + this.callCtx, new EntityCacheByNameKey(PolarisEntityType.CATALOG, "test")); + Assertions.assertNotNull(lookup); + Assertions.assertFalse(lookup.isCacheHit()); + Assertions.assertNotNull(lookup.getCacheEntry()); + + // validate the cache entry + PolarisBaseEntity catalog = lookup.getCacheEntry().getEntity(); + Assertions.assertNotNull(catalog); + Assertions.assertEquals(PolarisEntityType.CATALOG, catalog.getType()); + + // do it again, should be found in the cache + lookup = + cache.getOrLoadEntityByName( + this.callCtx, new EntityCacheByNameKey(PolarisEntityType.CATALOG, "test")); + Assertions.assertNotNull(lookup); + Assertions.assertTrue(lookup.isCacheHit()); + + // do it again by id, should be found in the cache + lookup = cache.getOrLoadEntityById(this.callCtx, catalog.getCatalogId(), catalog.getId()); + Assertions.assertNotNull(lookup); + Assertions.assertTrue(lookup.isCacheHit()); + Assertions.assertNotNull(lookup.getCacheEntry()); + Assertions.assertNotNull(lookup.getCacheEntry().getEntity()); + Assertions.assertNotNull(lookup.getCacheEntry().getGrantRecordsAsSecurable()); + + // get N1 + PolarisBaseEntity N1 = +
this.tm.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + + // get it directly from the cache, should not be there + EntityCacheByNameKey N1_name = + new EntityCacheByNameKey( + catalog.getId(), catalog.getId(), PolarisEntityType.NAMESPACE, "N1"); + EntityCacheEntry cacheEntry = cache.getEntityByName(N1_name); + Assertions.assertNull(cacheEntry); + + // try to find it in the cache by id. Should not be there, i.e. no cache hit + lookup = cache.getOrLoadEntityById(this.callCtx, N1.getCatalogId(), N1.getId()); + Assertions.assertNotNull(lookup); + Assertions.assertFalse(lookup.isCacheHit()); + + // should be there now, by name + cacheEntry = cache.getEntityByName(N1_name); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + + // should be there now, by id + cacheEntry = cache.getEntityById(N1.getId()); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + + // lookup N1 + EntityCacheEntry N1_entry = cache.getEntityById(N1.getId()); + Assertions.assertNotNull(N1_entry); + Assertions.assertNotNull(N1_entry.getEntity()); + Assertions.assertNotNull(N1_entry.getGrantRecordsAsSecurable()); + + // negative tests, load an entity which does not exist + lookup = cache.getOrLoadEntityById(this.callCtx, N1.getCatalogId(), 10000); + Assertions.assertNull(lookup); + lookup = + cache.getOrLoadEntityByName( + this.callCtx, + new EntityCacheByNameKey(PolarisEntityType.CATALOG, "non_existant_catalog")); + Assertions.assertNull(lookup); + + // lookup N2 to validate grants + EntityCacheByNameKey N2_name = + new EntityCacheByNameKey(catalog.getId(), N1.getId(), PolarisEntityType.NAMESPACE, "N2"); + lookup = cache.getOrLoadEntityByName(callCtx, N2_name); + Assertions.assertNotNull(lookup); + EntityCacheEntry cacheEntry_N1 = lookup.getCacheEntry(); + 
Assertions.assertNotNull(cacheEntry_N1); + Assertions.assertNotNull(cacheEntry_N1.getEntity()); + Assertions.assertNotNull(cacheEntry_N1.getGrantRecordsAsSecurable()); + + // lookup catalog role R1 + EntityCacheByNameKey R1_name = + new EntityCacheByNameKey( + catalog.getId(), catalog.getId(), PolarisEntityType.CATALOG_ROLE, "R1"); + lookup = cache.getOrLoadEntityByName(callCtx, R1_name); + Assertions.assertNotNull(lookup); + EntityCacheEntry cacheEntry_R1 = lookup.getCacheEntry(); + Assertions.assertNotNull(cacheEntry_R1); + Assertions.assertNotNull(cacheEntry_R1.getEntity()); + Assertions.assertNotNull(cacheEntry_R1.getGrantRecordsAsSecurable()); + Assertions.assertNotNull(cacheEntry_R1.getGrantRecordsAsGrantee()); + + // we expect one TABLE_READ grant on that securable granted to the catalog role R1 + Assertions.assertEquals(1, cacheEntry_N1.getGrantRecordsAsSecurable().size()); + PolarisGrantRecord gr = cacheEntry_N1.getGrantRecordsAsSecurable().get(0); + + // securable is N1, grantee is R1 + Assertions.assertEquals(cacheEntry_R1.getEntity().getId(), gr.getGranteeId()); + Assertions.assertEquals(cacheEntry_R1.getEntity().getCatalogId(), gr.getGranteeCatalogId()); + Assertions.assertEquals(cacheEntry_N1.getEntity().getId(), gr.getSecurableId()); + Assertions.assertEquals(cacheEntry_N1.getEntity().getCatalogId(), gr.getSecurableCatalogId()); + Assertions.assertEquals(PolarisPrivilege.TABLE_READ_DATA.getCode(), gr.getPrivilegeCode()); + + // R1 should have 4 privileges granted to it + Assertions.assertEquals(4, cacheEntry_R1.getGrantRecordsAsGrantee().size()); + List<PolarisGrantRecord> matchPriv = + cacheEntry_R1.getGrantRecordsAsGrantee().stream() + .filter( + grantRecord -> + grantRecord.getPrivilegeCode() == PolarisPrivilege.TABLE_READ_DATA.getCode()) + .toList(); + Assertions.assertEquals(1, matchPriv.size()); + gr = matchPriv.getFirst(); + Assertions.assertEquals(cacheEntry_R1.getEntity().getId(), gr.getGranteeId()); +
Assertions.assertEquals(cacheEntry_R1.getEntity().getCatalogId(), gr.getGranteeCatalogId()); + Assertions.assertEquals(cacheEntry_N1.getEntity().getId(), gr.getSecurableId()); + Assertions.assertEquals(cacheEntry_N1.getEntity().getCatalogId(), gr.getSecurableCatalogId()); + Assertions.assertEquals(PolarisPrivilege.TABLE_READ_DATA.getCode(), gr.getPrivilegeCode()); + + // lookup principal role PR1 + EntityCacheByNameKey PR1_name = + new EntityCacheByNameKey(PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + lookup = cache.getOrLoadEntityByName(callCtx, PR1_name); + Assertions.assertNotNull(lookup); + EntityCacheEntry cacheEntry_PR1 = lookup.getCacheEntry(); + Assertions.assertNotNull(cacheEntry_PR1); + Assertions.assertNotNull(cacheEntry_PR1.getEntity()); + Assertions.assertNotNull(cacheEntry_PR1.getGrantRecordsAsSecurable()); + Assertions.assertNotNull(cacheEntry_PR1.getGrantRecordsAsGrantee()); + + // R1 should have 1 CATALOG_ROLE_USAGE privilege granted *on* it to PR1 + Assertions.assertEquals(1, cacheEntry_R1.getGrantRecordsAsSecurable().size()); + gr = cacheEntry_R1.getGrantRecordsAsSecurable().get(0); + Assertions.assertEquals(cacheEntry_R1.getEntity().getId(), gr.getSecurableId()); + Assertions.assertEquals(cacheEntry_R1.getEntity().getCatalogId(), gr.getSecurableCatalogId()); + Assertions.assertEquals(cacheEntry_PR1.getEntity().getId(), gr.getGranteeId()); + Assertions.assertEquals(cacheEntry_PR1.getEntity().getCatalogId(), gr.getGranteeCatalogId()); + Assertions.assertEquals(PolarisPrivilege.CATALOG_ROLE_USAGE.getCode(), gr.getPrivilegeCode()); + + // PR1 should have 1 grant on it to P1. 
+ Assertions.assertEquals(1, cacheEntry_PR1.getGrantRecordsAsSecurable().size()); + Assertions.assertEquals( + PolarisPrivilege.PRINCIPAL_ROLE_USAGE.getCode(), + cacheEntry_PR1.getGrantRecordsAsSecurable().get(0).getPrivilegeCode()); + + // PR1 should have 2 grants to it, on R1 and R2 + Assertions.assertEquals(2, cacheEntry_PR1.getGrantRecordsAsGrantee().size()); + Assertions.assertEquals( + PolarisPrivilege.CATALOG_ROLE_USAGE.getCode(), + cacheEntry_PR1.getGrantRecordsAsGrantee().get(0).getPrivilegeCode()); + Assertions.assertEquals( + PolarisPrivilege.CATALOG_ROLE_USAGE.getCode(), + cacheEntry_PR1.getGrantRecordsAsGrantee().get(1).getPrivilegeCode()); + } + + @Test + void testRefresh() { + // allocate a new cache + EntityCache cache = this.allocateNewCache(); + + // should exist and no cache hit + EntityCacheLookupResult lookup = + cache.getOrLoadEntityByName( + this.callCtx, new EntityCacheByNameKey(PolarisEntityType.CATALOG, "test")); + Assertions.assertNotNull(lookup); + Assertions.assertFalse(lookup.isCacheHit()); + + // the catalog + Assertions.assertNotNull(lookup.getCacheEntry()); + PolarisBaseEntity catalog = lookup.getCacheEntry().getEntity(); + Assertions.assertNotNull(catalog); + Assertions.assertEquals(PolarisEntityType.CATALOG, catalog.getType()); + + // find table N5/N6/T6 + PolarisBaseEntity N5 = + this.tm.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + PolarisBaseEntity N5_N6 = + this.tm.ensureExistsByName(List.of(catalog, N5), PolarisEntityType.NAMESPACE, "N6"); + PolarisBaseEntity T6v1 = + this.tm.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T6"); + Assertions.assertNotNull(T6v1); + + // that table is not in the cache + EntityCacheEntry cacheEntry = cache.getEntityById(T6v1.getId()); + Assertions.assertNull(cacheEntry); + + // now load that table in the cache + cacheEntry = + cache.getAndRefreshIfNeeded( + this.callCtx, T6v1, 
T6v1.getEntityVersion(), T6v1.getGrantRecordsVersion()); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + PolarisBaseEntity table = cacheEntry.getEntity(); + Assertions.assertEquals(T6v1.getId(), table.getId()); + Assertions.assertEquals(T6v1.getEntityVersion(), table.getEntityVersion()); + Assertions.assertEquals(T6v1.getGrantRecordsVersion(), table.getGrantRecordsVersion()); + + // update the entity + PolarisBaseEntity T6v2 = + this.tm.updateEntity( + List.of(catalog, N5, N5_N6), + T6v1, + "{\"v2_properties\": \"some value\"}", + "{\"v2_internal_properties\": \"internal value\"}"); + Assertions.assertNotNull(T6v2); + + // now refresh that entity. But because we don't change the versions, nothing should be reloaded + cacheEntry = + cache.getAndRefreshIfNeeded( + this.callCtx, T6v1, T6v1.getEntityVersion(), T6v1.getGrantRecordsVersion()); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + table = cacheEntry.getEntity(); + Assertions.assertEquals(T6v1.getId(), table.getId()); + Assertions.assertEquals(T6v1.getEntityVersion(), table.getEntityVersion()); + Assertions.assertEquals(T6v1.getGrantRecordsVersion(), table.getGrantRecordsVersion()); + + // now refresh again, this time with the new versions. 
Should be reloaded + cacheEntry = + cache.getAndRefreshIfNeeded( + this.callCtx, T6v2, T6v2.getEntityVersion(), T6v2.getGrantRecordsVersion()); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + table = cacheEntry.getEntity(); + Assertions.assertEquals(T6v2.getId(), table.getId()); + Assertions.assertEquals(T6v2.getEntityVersion(), table.getEntityVersion()); + Assertions.assertEquals(T6v2.getGrantRecordsVersion(), table.getGrantRecordsVersion()); + + // update it again + PolarisBaseEntity T6v3 = + this.tm.updateEntity( + List.of(catalog, N5, N5_N6), + T6v2, + "{\"v3_properties\": \"some value\"}", + "{\"v3_internal_properties\": \"internal value\"}"); + Assertions.assertNotNull(T6v3); + + // the two catalog roles + PolarisBaseEntity R1 = + this.tm.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R1"); + PolarisBaseEntity N1 = + this.tm.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + PolarisBaseEntity N2 = + this.tm.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + + // load that namespace + cacheEntry = + cache.getAndRefreshIfNeeded( + this.callCtx, N2, N2.getEntityVersion(), N2.getGrantRecordsVersion()); + + // should have one single grant + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + Assertions.assertEquals(1, cacheEntry.getGrantRecordsAsSecurable().size()); + + // perform an additional grant to R1 + this.tm.grantPrivilege(R1, List.of(catalog, N1), N2, PolarisPrivilege.NAMESPACE_FULL_METADATA); + + // now reload N2, grant records version should have changed + PolarisBaseEntity N2v2 = + this.tm.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + + // same entity version but different grant records + Assertions.assertNotNull(N2v2); + Assertions.assertEquals(N2.getGrantRecordsVersion() + 1, 
N2v2.getGrantRecordsVersion()); + + // the cache is outdated now + lookup = + cache.getOrLoadEntityByName( + this.callCtx, + new EntityCacheByNameKey( + catalog.getId(), N1.getId(), PolarisEntityType.NAMESPACE, "N2")); + Assertions.assertNotNull(lookup); + cacheEntry = lookup.getCacheEntry(); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + Assertions.assertEquals(1, cacheEntry.getGrantRecordsAsSecurable().size()); + Assertions.assertEquals( + N2.getGrantRecordsVersion(), cacheEntry.getEntity().getGrantRecordsVersion()); + + // now refresh + cacheEntry = + cache.getAndRefreshIfNeeded( + this.callCtx, N2, N2v2.getEntityVersion(), N2v2.getGrantRecordsVersion()); + Assertions.assertNotNull(cacheEntry); + Assertions.assertNotNull(cacheEntry.getEntity()); + Assertions.assertNotNull(cacheEntry.getGrantRecordsAsSecurable()); + Assertions.assertEquals(2, cacheEntry.getGrantRecordsAsSecurable().size()); + Assertions.assertEquals( + N2v2.getGrantRecordsVersion(), cacheEntry.getEntity().getGrantRecordsVersion()); + } + + @Test + void testRenameAndCacheDestinationBeforeLoadingSource() { + // get a new cache + EntityCache cache = this.allocateNewCache(); + + EntityCacheLookupResult lookup = + cache.getOrLoadEntityByName( + this.callCtx, new EntityCacheByNameKey(PolarisEntityType.CATALOG, "test")); + Assertions.assertNotNull(lookup); + Assertions.assertNotNull(lookup.getCacheEntry()); + PolarisBaseEntity catalog = lookup.getCacheEntry().getEntity(); + + PolarisBaseEntity N1 = + this.tm.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + lookup = cache.getOrLoadEntityById(this.callCtx, N1.getCatalogId(), N1.getId()); + Assertions.assertNotNull(lookup); + + EntityCacheByNameKey T4_name = + new EntityCacheByNameKey(N1.getCatalogId(), N1.getId(), PolarisEntityType.TABLE_LIKE, "T4"); + lookup = cache.getOrLoadEntityByName(callCtx, T4_name); + 
Assertions.assertNotNull(lookup); + EntityCacheEntry cacheEntry_T4 = lookup.getCacheEntry(); + Assertions.assertNotNull(cacheEntry_T4); + Assertions.assertNotNull(cacheEntry_T4.getEntity()); + Assertions.assertNotNull(cacheEntry_T4.getGrantRecordsAsSecurable()); + + PolarisBaseEntity T4 = cacheEntry_T4.getEntity(); + + this.tm.renameEntity(List.of(catalog, N1), T4, null, "T4_renamed"); + + // load the renamed entity into cache + EntityCacheByNameKey T4_renamed = + new EntityCacheByNameKey( + N1.getCatalogId(), N1.getId(), PolarisEntityType.TABLE_LIKE, "T4_renamed"); + lookup = cache.getOrLoadEntityByName(callCtx, T4_renamed); + Assertions.assertNotNull(lookup); + EntityCacheEntry cacheEntry_T4_renamed = lookup.getCacheEntry(); + Assertions.assertNotNull(cacheEntry_T4_renamed); + PolarisBaseEntity T4_renamed_entity = cacheEntry_T4_renamed.getEntity(); + + // new entry if lookup by id + EntityCacheLookupResult lookupResult = + cache.getOrLoadEntityById(callCtx, T4.getCatalogId(), T4.getId()); + Assertions.assertNotNull(lookupResult); + Assertions.assertNotNull(lookupResult.getCacheEntry()); + Assertions.assertEquals("T4_renamed", lookupResult.getCacheEntry().getEntity().getName()); + + // old name is gone, replaced by new name + // Assertions.assertNull(cache.getOrLoadEntityByName(callCtx, T4_name)); + + // refreshing should return null since our current held T4 is outdated + cache.getAndRefreshIfNeeded( + callCtx, + T4, + T4_renamed_entity.getEntityVersion(), + T4_renamed_entity.getGrantRecordsVersion()); + + // now the loading by the old name should return null + Assertions.assertNull(cache.getOrLoadEntityByName(callCtx, T4_name)); + } +} diff --git a/polaris-core/src/test/java/io/polaris/core/persistence/PolarisObjectMapperUtilTest.java b/polaris-core/src/test/java/io/polaris/core/persistence/PolarisObjectMapperUtilTest.java new file mode 100644 index 0000000000..3a89b31d91 --- /dev/null +++ 
b/polaris-core/src/test/java/io/polaris/core/persistence/PolarisObjectMapperUtilTest.java @@ -0,0 +1,74 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import org.assertj.core.api.Assertions; +import org.junit.jupiter.api.Test; + +class PolarisObjectMapperUtilTest { + + @Test + public void testParseTaskState() { + PolarisBaseEntity entity = + new PolarisBaseEntity( + 0L, 1L, PolarisEntityType.TASK, PolarisEntitySubType.NULL_SUBTYPE, 0L, "task"); + entity.setProperties( + "{\"name\": \"my name\", \"lastAttemptExecutorId\": \"the_executor\", \"data\": {\"nestedFields\": " + + "{\"further_nesting\": \"astring\", \"anArray\": [1, 2, 3, 4]}, \"anotherNestedField\": \"simple string\"}, " + + "\"lastAttemptStartTime\": \"100\", \"attemptCount\": \"9\"}"); + PolarisObjectMapperUtil.TaskExecutionState state = + PolarisObjectMapperUtil.parseTaskState(entity); + Assertions.assertThat(state) + .isNotNull() + .returns(100L, PolarisObjectMapperUtil.TaskExecutionState::getLastAttemptStartTime) + .returns(9, PolarisObjectMapperUtil.TaskExecutionState::getAttemptCount) + .returns("the_executor", PolarisObjectMapperUtil.TaskExecutionState::getExecutor); + } + + @Test + public void testParseTaskStateWithMissingFields() { + 
PolarisBaseEntity entity = + new PolarisBaseEntity( + 0L, 1L, PolarisEntityType.TASK, PolarisEntitySubType.NULL_SUBTYPE, 0L, "task"); + entity.setProperties( + "{\"name\": \"my name\", \"data\": {\"nestedFields\": " + + "{\"further_nesting\": \"astring\", \"anArray\": [1, 2, 3, 4]}, \"anotherNestedField\": \"simple string\"}, " + + "\"attemptCount\": \"5\"}"); + PolarisObjectMapperUtil.TaskExecutionState state = + PolarisObjectMapperUtil.parseTaskState(entity); + Assertions.assertThat(state) + .isNotNull() + .returns(0L, PolarisObjectMapperUtil.TaskExecutionState::getLastAttemptStartTime) + .returns(5, PolarisObjectMapperUtil.TaskExecutionState::getAttemptCount) + .returns(null, PolarisObjectMapperUtil.TaskExecutionState::getExecutor); + } + + @Test + public void testParseTaskStateWithInvalidJson() { + PolarisBaseEntity entity = + new PolarisBaseEntity( + 0L, 1L, PolarisEntityType.TASK, PolarisEntitySubType.NULL_SUBTYPE, 0L, "task"); + entity.setProperties( + "{\"name\": \"my name\", \"data\": {\"nestedFields\": " + + "{\"further_nesting\": \"astring\", \"anArray\": , : \"simple string\"}, "); + PolarisObjectMapperUtil.TaskExecutionState state = + PolarisObjectMapperUtil.parseTaskState(entity); + Assertions.assertThat(state).isNull(); + } +} diff --git a/polaris-core/src/test/java/io/polaris/core/persistence/PolarisTreeMapMetaStoreManagerTest.java b/polaris-core/src/test/java/io/polaris/core/persistence/PolarisTreeMapMetaStoreManagerTest.java new file mode 100644 index 0000000000..f910fa6d4d --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/core/persistence/PolarisTreeMapMetaStoreManagerTest.java @@ -0,0 +1,39 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import java.time.ZoneId; +import org.mockito.Mockito; + +public class PolarisTreeMapMetaStoreManagerTest extends PolarisMetaStoreManagerTest { + @Override + public PolarisTestMetaStoreManager createPolarisTestMetaStoreManager() { + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + PolarisTreeMapStore store = new PolarisTreeMapStore(diagServices); + PolarisCallContext callCtx = + new PolarisCallContext( + new PolarisTreeMapMetaStoreSessionImpl(store, Mockito.mock()), + diagServices, + new PolarisConfigurationStore() {}, + timeSource.withZone(ZoneId.systemDefault())); + + return new PolarisTestMetaStoreManager(new PolarisMetaStoreManagerImpl(), callCtx); + } +} diff --git a/polaris-core/src/test/java/io/polaris/core/persistence/ResolverTest.java b/polaris-core/src/test/java/io/polaris/core/persistence/ResolverTest.java new file mode 100644 index 0000000000..e077cd9d0e --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/core/persistence/ResolverTest.java @@ -0,0 +1,938 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.persistence.cache.EntityCache; +import io.polaris.core.persistence.cache.EntityCacheEntry; +import io.polaris.core.persistence.resolver.Resolver; +import io.polaris.core.persistence.resolver.ResolverPath; +import io.polaris.core.persistence.resolver.ResolverStatus; +import java.util.ArrayList; +import java.util.Comparator; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +public class ResolverTest { + + // diag services + private final PolarisDiagnostics diagServices; + + // the entity store, use treemap implementation + private final PolarisTreeMapStore store; + + // to interact with the metastore + private final PolarisMetaStoreSession metaStore; + + // polaris call context + private final PolarisCallContext callCtx; + + // utility to bootstrap the meta store + private final
PolarisTestMetaStoreManager tm; + + // the meta store manager + private final PolarisMetaStoreManager metaStoreManager; + + // Principal P1 + private final PolarisBaseEntity P1; + + // cache we are using + private EntityCache cache; + + /** + * Initialize and create the test metadata + * + *
+   * - test
+   * - (N1/N2/T1)
+   * - (N1/N2/T2)
+   * - (N1/N2/V1)
+   * - (N1/N3/T3)
+   * - (N1/N3/V2)
+   * - (N1/T4)
+   * - (N1/N4)
+   * - N5/N6/T5
+   * - N5/N6/T6
+   * - R1(TABLE_READ on N1/N2, VIEW_CREATE on C, TABLE_LIST on N2, TABLE_DROP on N5/N6/T5)
+   * - R2(TABLE_WRITE_DATA on N5, VIEW_LIST on C)
+   * - PR1(R1, R2)
+   * - PR2(R2)
+   * - P1(PR1, PR2)
+   * - P2(PR1)
+   * 
+ */ + public ResolverTest() { + diagServices = new PolarisDefaultDiagServiceImpl(); + store = new PolarisTreeMapStore(diagServices); + metaStore = new PolarisTreeMapMetaStoreSessionImpl(store, Mockito.mock()); + callCtx = new PolarisCallContext(metaStore, diagServices); + metaStoreManager = new PolarisMetaStoreManagerImpl(); + + // bootstrap the meta store with our test schema + tm = new PolarisTestMetaStoreManager(metaStoreManager, callCtx); + tm.testCreateTestCatalog(); + + // principal P1 + this.P1 = tm.ensureExistsByName(null, PolarisEntityType.PRINCIPAL, "P1"); + } + + /** This tests the resolver for a create-principal scenario */ + @Test + void testResolvePrincipal() { + + // resolve a principal which does not exist, but make it optional so will succeed + this.resolveDriver(null, null, "P3", true, null, null); + + // resolve same principal but now make it non-optional, so should fail + this.resolveDriver( + null, null, "P3", false, null, ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED); + + // then resolve a principal which does exist + this.resolveDriver(null, null, "P2", false, null, null); + + // do it again, but this time using the primed cache + this.resolveDriver(this.cache, null, "P2", false, null, null); + + // now add a principal role + this.resolveDriver(this.cache, null, "P2", false, "PR1", null); + + // do it again, everything in the cache + this.resolveDriver(this.cache, null, "P2", false, "PR1", null); + + // do it again on a cold cache + this.resolveDriver(this.cache, null, "P2", false, "PR1", null); + } + + /** Test that we can specify a subset of principal role names */ + @Test + void testScopedPrincipalRole() { + + // start without a scope + this.resolveDriver(null, null, "P2", false, "PR1", null); + + // specify various scopes + this.resolveDriver(this.cache, Set.of("PR1"), "P2", false, "PR1", null); + this.resolveDriver(this.cache, Set.of("PR2"), "P2", false, "PR1", null); + this.resolveDriver(this.cache, Set.of("PR2", "PR3"), "P2",
false, "PR1", null); + this.resolveDriver(null, Set.of("PR2", "PR3"), "P2", false, "PR1", null); + this.resolveDriver(null, Set.of("PR3"), "P2", false, "PR1", null); + this.resolveDriver(this.cache, Set.of("PR1", "PR2"), "P2", false, "PR1", null); + } + + /** + * Test that the set of catalog roles being activated is correctly inferred, based on a set of + * principal roles + */ + @Test + void testCatalogRolesActivation() { + + // start simple, with both PR1 and PR2, you get R1 and R2 + this.resolveDriver(null, Set.of("PR1", "PR2"), "test", Set.of("R1", "R2")); + + // PR1 itself is enough to activate both R1 and R2 + this.resolveDriver(this.cache, Set.of("PR1"), "test", Set.of("R1", "R2")); + + // PR2 only activates R2 + this.resolveDriver(this.cache, Set.of("PR2"), "test", Set.of("R2")); + + // With a non-existing principal role, nothing gets activated + this.resolveDriver(this.cache, Set.of("NOT_EXISTING"), "test", Set.of()); + } + + /** Test that paths, one or more, are properly resolved */ + @Test + void testResolvePath() { + // N1 which exists + ResolverPath N1 = new ResolverPath(List.of("N1"), PolarisEntityType.NAMESPACE); + this.resolveDriver(null, "test", N1, null, null); + + // N1/N2 which exists + ResolverPath N1_N2 = new ResolverPath(List.of("N1", "N2"), PolarisEntityType.NAMESPACE); + this.resolveDriver(null, "test", N1_N2, null, null); + + // N1/N2/T1 which exists + ResolverPath N1_N2_T1 = + new ResolverPath(List.of("N1", "N2", "T1"), PolarisEntityType.TABLE_LIKE); + this.resolveDriver(this.cache, "test", N1_N2_T1, null, null); + + // N1/N2/V1 which exists + ResolverPath N1_N2_V1 = + new ResolverPath(List.of("N1", "N2", "V1"), PolarisEntityType.TABLE_LIKE); + this.resolveDriver(this.cache, "test", N1_N2_V1, null, null); + + // N5/N6 which exists + ResolverPath N5_N6 = new ResolverPath(List.of("N5", "N6"), PolarisEntityType.NAMESPACE); + this.resolveDriver(this.cache, "test", N5_N6, null, null); + + // N5/N6/T5 which exists + ResolverPath N5_N6_T5 = +
new ResolverPath(List.of("N5", "N6", "T5"), PolarisEntityType.TABLE_LIKE); + this.resolveDriver(this.cache, "test", N5_N6_T5, null, null); + + // Error scenarios: N5/N6/T8 which does not exist + ResolverPath N5_N6_T8 = + new ResolverPath(List.of("N5", "N6", "T8"), PolarisEntityType.TABLE_LIKE); + this.resolveDriver( + this.cache, + "test", + N5_N6_T8, + null, + ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED); + + // Error scenarios: N8/N6/T8 which does not exist + ResolverPath N8_N6_T8 = + new ResolverPath(List.of("N8", "N6", "T8"), PolarisEntityType.TABLE_LIKE); + this.resolveDriver( + this.cache, + "test", + N8_N6_T8, + null, + ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED); + + // now test multiple paths + this.resolveDriver( + this.cache, "test", null, List.of(N1, N5_N6, N1, N1_N2, N5_N6_T5, N1_N2), null); + this.resolveDriver( + this.cache, + "test", + null, + List.of(N1, N5_N6_T8, N5_N6_T5, N1_N2), + ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED); + + // except if the optional flag is specified + N5_N6_T8 = new ResolverPath(List.of("N5", "N6", "T8"), PolarisEntityType.TABLE_LIKE, true); + Resolver resolver = + this.resolveDriver(this.cache, "test", null, List.of(N1, N5_N6_T8, N5_N6_T5, N1_N2), null); + // get all the resolved paths + List<List<EntityCacheEntry>> resolvedPath = resolver.getResolvedPaths(); + Assertions.assertEquals(1, resolvedPath.get(0).size()); + Assertions.assertEquals(2, resolvedPath.get(1).size()); + Assertions.assertEquals(3, resolvedPath.get(2).size()); + Assertions.assertEquals(2, resolvedPath.get(3).size()); + } + + /** + * Ensure that if data changes while entities are cached, we will always resolve to the latest + * version + */ + @Test + void testConsistency() { + + // resolve principal "P2" + this.resolveDriver(null, null, "P2", false, null, null); + this.resolveDriver(this.cache, null, "P2", false, null, null); + + // now drop this principal.
It is still cached + PolarisBaseEntity P2 = this.tm.ensureExistsByName(null, PolarisEntityType.PRINCIPAL, "P2"); + this.tm.dropEntity(null, P2); + + // now resolve it again. Should fail because the entity was dropped + this.resolveDriver( + this.cache, + null, + "P2", + false, + null, + ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED); + + // recreate P2 + this.tm.createPrincipal("P2"); + + // now resolve it again. Should succeed because the entity has been re-created + this.resolveDriver(this.cache, null, "P2", false, null, ResolverStatus.StatusEnum.SUCCESS); + + // resolve existing grants on catalog + this.resolveDriver(this.cache, Set.of("PR1", "PR2"), "test", Set.of("R1", "R2")); + + // with only PR2, we will only activate R2 + Resolver resolver = this.resolveDriver(this.cache, Set.of("PR2"), "test", Set.of("R2")); + + // Now add a new catalog role and see if the changes are reflected + Assertions.assertNotNull(resolver.getResolvedReferenceCatalog()); + PolarisBaseEntity TEST = resolver.getResolvedReferenceCatalog().getEntity(); + PolarisBaseEntity R3 = + this.tm.createEntity(List.of(TEST), PolarisEntityType.CATALOG_ROLE, "R3"); + + // now grant R3 to PR2 + Assertions.assertEquals(1, resolver.getResolvedCallerPrincipalRoles().size()); + PolarisBaseEntity PR2 = resolver.getResolvedCallerPrincipalRoles().getFirst().getEntity(); + this.tm.grantToGrantee(TEST, R3, PR2, PolarisPrivilege.CATALOG_ROLE_USAGE); + + // now resolve again with only PR2 activated, should see the new catalog role R3 + this.resolveDriver(this.cache, Set.of("PR2"), "test", Set.of("R2", "R3")); + + // now drop that role and then recreate it. 
The new incarnation should be used + this.tm.dropEntity(List.of(TEST), R3); + PolarisBaseEntity R3_NEW = + this.tm.createEntity(List.of(TEST), PolarisEntityType.CATALOG_ROLE, "R3"); + + // now grant R3_NEW to PR2 and resolve it again + this.tm.grantToGrantee(TEST, R3_NEW, PR2, PolarisPrivilege.CATALOG_ROLE_USAGE); + resolver = this.resolveDriver(this.cache, Set.of("PR2"), "test", Set.of("R2", "R3")); + + // ensure that the correct catalog role was resolved + Assertions.assertTrue(resolver.getResolvedCatalogRoles().containsKey(R3_NEW.getId())); + } + + /** Check resolving paths when the cache is inconsistent */ + @Test + void testPathConsistency() { + // resolve a few paths + ResolverPath N1_PATH = new ResolverPath(List.of("N1"), PolarisEntityType.NAMESPACE); + this.resolveDriver(null, "test", N1_PATH, null, null); + ResolverPath N1_N2_PATH = new ResolverPath(List.of("N1", "N2"), PolarisEntityType.NAMESPACE); + this.resolveDriver(this.cache, "test", N1_N2_PATH, null, null); + ResolverPath N1_N2_T1_PATH = + new ResolverPath(List.of("N1", "N2", "T1"), PolarisEntityType.TABLE_LIKE); + Resolver resolver = this.resolveDriver(this.cache, "test", N1_N2_T1_PATH, null, null); + + // get the catalog + Assertions.assertNotNull(resolver.getResolvedReferenceCatalog()); + PolarisBaseEntity TEST = resolver.getResolvedReferenceCatalog().getEntity(); + + // get the various entities in the path + Assertions.assertNotNull(resolver.getResolvedPath()); + Assertions.assertEquals(3, resolver.getResolvedPath().size()); + PolarisBaseEntity N1 = resolver.getResolvedPath().getFirst().getEntity(); + PolarisBaseEntity N2 = resolver.getResolvedPath().get(1).getEntity(); + PolarisBaseEntity T1 = resolver.getResolvedPath().get(2).getEntity(); + + // resolve N3 + ResolverPath N1_N3_PATH = new ResolverPath(List.of("N1", "N3"), PolarisEntityType.NAMESPACE); + resolver = this.resolveDriver(this.cache, "test", N1_N3_PATH, null, null); + Assertions.assertNotNull(resolver.getResolvedPath()); +
Assertions.assertEquals(2, resolver.getResolvedPath().size()); + PolarisBaseEntity N3 = resolver.getResolvedPath().get(1).getEntity(); + + // now re-parent T1 under N3, keeping the same name + this.tm.renameEntity(List.of(TEST, N1, N2), T1, List.of(TEST, N1, N3), "T1"); + + // now expect to fail resolving T1 under N1/N2 + this.resolveDriver( + this.cache, + "test", + N1_N2_T1_PATH, + null, + ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED); + + // but we should be able to resolve it under N1/N3 + ResolverPath N1_N3_T1_PATH = + new ResolverPath(List.of("N1", "N3", "T1"), PolarisEntityType.TABLE_LIKE); + this.resolveDriver(this.cache, "test", N1_N3_T1_PATH, null, null); + } + + /** Resolve catalog roles */ + @Test + void testResolveCatalogRole() { + + // resolve catalog role + this.resolveDriver(null, "test", "R1", null); + + // do it again + this.resolveDriver(this.cache, "test", "R1", null); + this.resolveDriver(this.cache, "test", "R1", null); + + // failure scenario + this.resolveDriver( + this.cache, "test", "R5", ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED); + } + + /** + * Create a simple resolver without a reference catalog or principal roles sub-scope, using + * P1 as the caller principal + * + * @return new resolver to test with + */ + @NotNull + private Resolver allocateResolver() { + return this.allocateResolver(null, null); + } + + /** + * Create a simple resolver without any principal roles sub-scope and using P1 as the caller + * principal + * + * @param referenceCatalogName the reference catalog name, can be null + * @return new resolver to test with + */ + @NotNull + private Resolver allocateResolver(@Nullable String referenceCatalogName) { + return this.allocateResolver(null, referenceCatalogName); + } + + /** + * Create a simple resolver without any principal roles sub-scope and using P1 as the caller + * principal + * + * @param cache if not null, cache to use, else one will be created + * @return new resolver to test
with + */ + @NotNull + private Resolver allocateResolver(@Nullable EntityCache cache) { + return this.allocateResolver(cache, null); + } + + /** + * Create a simple resolver without any principal roles sub-scope and using P1 as the caller + * principal + * + * @param cache if not null, cache to use, else one will be created + * @param referenceCatalogName the reference catalog name, can be null + * @return new resolver to test with + */ + @NotNull + private Resolver allocateResolver( + @Nullable EntityCache cache, @Nullable String referenceCatalogName) { + return this.allocateResolver(cache, null, referenceCatalogName); + } + + /** + * Create a simple resolver using P1 as the caller principal + * + * @param cache if not null, cache to use, else one will be created + * @param principalRolesScope if not null, scoped principal roles + * @param referenceCatalogName the reference catalog name, can be null + * @return new resolver to test with + */ + @NotNull + private Resolver allocateResolver( + @Nullable EntityCache cache, + Set<String> principalRolesScope, + @Nullable String referenceCatalogName) { + + // create a new cache if need be + if (cache == null) { + this.cache = new EntityCache(this.metaStoreManager); + } + return new Resolver( + this.callCtx, + this.metaStoreManager, + this.P1.getId(), + null, + principalRolesScope, + this.cache, + referenceCatalogName); + } + + /** + * Resolve a principal and optionally a principal role + * + * @param cache if not null, cache to use + * @param principalName name of the principal being created + * @param exists true if this principal already exists + * @param principalRoleName name of the principal role, should exist + */ + private void resolvePrincipalAndPrincipalRole( + EntityCache cache, String principalName, boolean exists, String principalRoleName) { + Resolver resolver = allocateResolver(cache); + + // for a principal creation, we simply want to test if the principal we
are creating exists + // or not + resolver.addOptionalEntityByName(PolarisEntityType.PRINCIPAL, principalName); + + // add principal role if one passed-in + if (principalRoleName != null) { + resolver.addOptionalEntityByName(PolarisEntityType.PRINCIPAL_ROLE, principalRoleName); + } + + // done, run resolve + ResolverStatus status = resolver.resolveAll(); + + // we expect success + Assertions.assertEquals(ResolverStatus.StatusEnum.SUCCESS, status.getStatus()); + + // check whether the principal exists + if (exists) { + // the principal exists, check that this is the case + this.ensureResolved( + resolver.getResolvedEntity(PolarisEntityType.PRINCIPAL, principalName), + PolarisEntityType.PRINCIPAL, + principalName); + } else { + // not found + Assertions.assertNull(resolver.getResolvedEntity(PolarisEntityType.PRINCIPAL, principalName)); + } + + // validate that we were able to resolve the caller principal + this.ensureResolved(resolver.getResolvedCallerPrincipal(), PolarisEntityType.PRINCIPAL, "P1"); + + // validate that the two principal roles have been activated + List<EntityCacheEntry> principalRolesResolved = resolver.getResolvedCallerPrincipalRoles(); + + // expect two principal roles + Assertions.assertEquals(2, principalRolesResolved.size()); + principalRolesResolved.sort(Comparator.comparing(p -> p.getEntity().getName())); + + // ensure they are PR1 and PR2 + this.ensureResolved(principalRolesResolved.getFirst(), PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + this.ensureResolved(principalRolesResolved.getLast(), PolarisEntityType.PRINCIPAL_ROLE, "PR2"); + + // if a principal role was passed-in, ensure it exists + if (principalRoleName != null) { + this.ensureResolved( + resolver.getResolvedEntity(PolarisEntityType.PRINCIPAL_ROLE, principalRoleName), + PolarisEntityType.PRINCIPAL_ROLE, + principalRoleName); + } + } + + /** + * Main resolve driver + * + * @param cache if not null, cache we can use + * @param principalRolesScope if not null,
scoped roles + * @param principalName if not null, name of the principal to resolve + * @param isPrincipalNameOptional if true, the name of the principal is optional + * @param principalRoleName if not null, name of the principal role to resolve + * @param expectedStatus the expected status if not success + * @return resolver we created and which has been validated. + */ + private Resolver resolveDriver( + EntityCache cache, + Set<String> principalRolesScope, + String principalName, + boolean isPrincipalNameOptional, + String principalRoleName, + ResolverStatus.StatusEnum expectedStatus) { + return this.resolveDriver( + cache, + principalRolesScope, + principalName, + isPrincipalNameOptional, + principalRoleName, + null, + null, + null, + null, + expectedStatus, + null); + } + + /** + * Main resolve driver + * + * @param cache if not null, cache we can use + * @param catalogName if not null, name of the catalog to resolve + * @param path if not null, single path in that catalog + * @param paths if not null, set of paths in that catalog; path and paths are mutually exclusive + * @param expectedStatus the expected status if not success + * @return resolver we created and which has been validated. + */ + private Resolver resolveDriver( + EntityCache cache, + String catalogName, + ResolverPath path, + List<ResolverPath> paths, + ResolverStatus.StatusEnum expectedStatus) { + return this.resolveDriver( + cache, null, null, false, null, catalogName, null, path, paths, expectedStatus, null); + } + + /** + * Main resolve driver for testing catalog role activation + * + * @param cache if not null, cache we can use + * @param principalRolesScope if not null, scoped roles + * @param catalogName if not null, name of the catalog to resolve + * @param expectedActivatedCatalogRoles set of catalog role names the caller expects to be + * activated + * @return resolver we created and which has been validated.
+ */ + private Resolver resolveDriver( + EntityCache cache, + Set<String> principalRolesScope, + String catalogName, + Set<String> expectedActivatedCatalogRoles) { + return this.resolveDriver( + cache, + principalRolesScope, + null, + false, + null, + catalogName, + null, + null, + null, + null, + expectedActivatedCatalogRoles); + } + + /** + * Main resolve driver for resolving catalog roles + * + * @param cache if not null, cache we can use + * @param catalogName if not null, name of the catalog to resolve + * @param catalogRoleName if not null, name of the catalog role to resolve + * @param expectedStatus the expected status if not success + * @return resolver we created and which has been validated. + */ + private Resolver resolveDriver( + EntityCache cache, + String catalogName, + String catalogRoleName, + ResolverStatus.StatusEnum expectedStatus) { + return this.resolveDriver( + cache, + null, + null, + false, + null, + catalogName, + catalogRoleName, + null, + null, + expectedStatus, + null); + } + + /** + * Main resolve driver + * + * @param cache if not null, cache we can use + * @param principalRolesScope if not null, scoped roles + * @param principalName if not null, name of the principal to resolve + * @param isPrincipalNameOptional if true, the name of the principal is optional + * @param principalRoleName if not null, name of the principal role to resolve + * @param catalogName if not null, name of the catalog to resolve + * @param catalogRoleName if not null, name of the catalog role to resolve + * @param path if not null, single path in that catalog + * @param paths if not null, set of paths in that catalog; path and paths are mutually exclusive + * @param expectedStatus the expected status if not success + * @param expectedActivatedCatalogRoles set of catalog role names the caller expects to be + * activated + * @return resolver we created and which has been validated.
+ */ + private Resolver resolveDriver( + EntityCache cache, + Set<String> principalRolesScope, + String principalName, + boolean isPrincipalNameOptional, + String principalRoleName, + String catalogName, + String catalogRoleName, + ResolverPath path, + List<ResolverPath> paths, + ResolverStatus.StatusEnum expectedStatus, + Set<String> expectedActivatedCatalogRoles) { + + // if null we expect success + if (expectedStatus == null) { + expectedStatus = ResolverStatus.StatusEnum.SUCCESS; + } + + // allocate resolver + Resolver resolver = allocateResolver(cache, principalRolesScope, catalogName); + + // principal name? + if (principalName != null) { + if (isPrincipalNameOptional) { + resolver.addOptionalEntityByName(PolarisEntityType.PRINCIPAL, principalName); + } else { + resolver.addEntityByName(PolarisEntityType.PRINCIPAL, principalName); + } + } + + // add principal role if one passed-in + if (principalRoleName != null) { + resolver.addEntityByName(PolarisEntityType.PRINCIPAL_ROLE, principalRoleName); + } + + // add catalog role if one passed-in + if (catalogRoleName != null) { + resolver.addEntityByName(PolarisEntityType.CATALOG_ROLE, catalogRoleName); + } + + // add all paths + if (path != null) { + resolver.addPath(path); + } else if (paths != null) { + paths.forEach(resolver::addPath); + } + + // done, run resolve + ResolverStatus status = resolver.resolveAll(); + + // we expect success unless an expected status was provided + Assertions.assertNotNull(status); + Assertions.assertEquals(expectedStatus, status.getStatus()); + + // validate if status is success + if (status.getStatus() == ResolverStatus.StatusEnum.SUCCESS) { + + // if a principal name was passed-in, check whether it was resolved + if (principalName != null) { + // see if the principal exists + PolarisMetaStoreManager.EntityResult result = + this.metaStoreManager.readEntityByName( + this.callCtx, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + principalName); + // if found, ensure properly resolved + if (result.getEntity() !=
null) { + // the principal exists, check that this is the case + this.ensureResolved( + resolver.getResolvedEntity(PolarisEntityType.PRINCIPAL, principalName), + PolarisEntityType.PRINCIPAL, + principalName); + } else { + // principal was optional + Assertions.assertTrue(isPrincipalNameOptional); + // not found + Assertions.assertNull( + resolver.getResolvedEntity(PolarisEntityType.PRINCIPAL, principalName)); + } + } + + // validate that we were able to resolve the caller principal + this.ensureResolved(resolver.getResolvedCallerPrincipal(), PolarisEntityType.PRINCIPAL, "P1"); + + // validate that the correct set of principal roles have been activated + List<EntityCacheEntry> principalRolesResolved = resolver.getResolvedCallerPrincipalRoles(); + principalRolesResolved.sort(Comparator.comparing(p -> p.getEntity().getName())); + + // expect two principal roles if not scoped + int expectedSize; + if (principalRolesScope != null) { + expectedSize = 0; + for (String pr : principalRolesScope) { + if (pr.equals("PR1") || pr.equals("PR2")) { + expectedSize++; + } + } + } else { + // both PR1 and PR2 + expectedSize = 2; + } + + // ensure the right set of principal roles were activated + Assertions.assertEquals(expectedSize, principalRolesResolved.size()); + + // expect either PR1 or PR2 + for (EntityCacheEntry principalRoleResolved : principalRolesResolved) { + Assertions.assertNotNull(principalRoleResolved); + Assertions.assertNotNull(principalRoleResolved.getEntity()); + String roleName = principalRoleResolved.getEntity().getName(); + + // should be either PR1 or PR2 + Assertions.assertTrue(roleName.equals("PR1") || roleName.equals("PR2")); + + // ensure it was properly resolved + this.ensureResolved(principalRoleResolved, PolarisEntityType.PRINCIPAL_ROLE, roleName); + } + + // if a principal role was passed-in, ensure it exists + if (principalRoleName != null) { + this.ensureResolved( + resolver.getResolvedEntity(PolarisEntityType.PRINCIPAL_ROLE, principalRoleName), +
PolarisEntityType.PRINCIPAL_ROLE, + principalRoleName); + } + + // if a catalog was passed-in, ensure it exists + if (catalogName != null) { + EntityCacheEntry catalogEntry = + resolver.getResolvedEntity(PolarisEntityType.CATALOG, catalogName); + Assertions.assertNotNull(catalogEntry); + this.ensureResolved(catalogEntry, PolarisEntityType.CATALOG, catalogName); + + // if a catalog role was passed-in, ensure that it was properly resolved + if (catalogRoleName != null) { + EntityCacheEntry catalogRoleEntry = + resolver.getResolvedEntity(PolarisEntityType.CATALOG_ROLE, catalogRoleName); + this.ensureResolved( + catalogRoleEntry, + List.of(catalogEntry.getEntity()), + PolarisEntityType.CATALOG_ROLE, + catalogRoleName); + } + + // validate activated catalog roles + Map<Long, EntityCacheEntry> activatedCatalogs = resolver.getResolvedCatalogRoles(); + + // if there is an expected set, ensure we have the same set + if (expectedActivatedCatalogRoles != null) { + Assertions.assertEquals(expectedActivatedCatalogRoles.size(), activatedCatalogs.size()); + } + + // process each of those + for (EntityCacheEntry resolvedActivatedCatalogEntry : activatedCatalogs.values()) { + // must be in the expected list + Assertions.assertNotNull(resolvedActivatedCatalogEntry); + PolarisBaseEntity activatedCatalogRole = resolvedActivatedCatalogEntry.getEntity(); + Assertions.assertNotNull(activatedCatalogRole); + // ensure well resolved + this.ensureResolved( + resolvedActivatedCatalogEntry, + List.of(catalogEntry.getEntity()), + PolarisEntityType.CATALOG_ROLE, + activatedCatalogRole.getName()); + + // in the set of expected catalog roles + Assertions.assertTrue( + expectedActivatedCatalogRoles == null + || expectedActivatedCatalogRoles.contains(activatedCatalogRole.getName())); + } + + // resolve each path + if (path != null || paths != null) { + // path to validate + List<ResolverPath> allPathsToCheck = (paths == null) ?
List.of(path) : paths; + + // all resolved paths + List<List<EntityCacheEntry>> allResolvedPaths = resolver.getResolvedPaths(); + + // same size + Assertions.assertEquals(allPathsToCheck.size(), allResolvedPaths.size()); + + // check that each path was properly resolved + int pathCount = 0; + Iterator<ResolverPath> allPathsToCheckIt = allPathsToCheck.iterator(); + for (List<EntityCacheEntry> resolvedPath : allResolvedPaths) { + this.ensurePathResolved( + pathCount++, catalogEntry.getEntity(), allPathsToCheckIt.next(), resolvedPath); + } + } + } + } + return resolver; + } + + /** + * Ensure a path has been properly resolved + * + * @param pathCount index of the path being checked + * @param catalog the catalog entity + * @param pathToResolve the path to resolve + * @param resolvedPath resolved path + */ + private void ensurePathResolved( + int pathCount, + PolarisBaseEntity catalog, + ResolverPath pathToResolve, + List<EntityCacheEntry> resolvedPath) { + + // ensure same cardinality + if (!pathToResolve.isOptional()) { + Assertions.assertEquals(pathToResolve.getEntityNames().size(), resolvedPath.size()); + } + + // catalog path + List<PolarisEntityCore> catalogPath = new ArrayList<>(); + catalogPath.add(catalog); + + // loop and validate each element + for (int index = 0; index < resolvedPath.size(); index++) { + EntityCacheEntry cacheEntry = resolvedPath.get(index); + String entityName = pathToResolve.getEntityNames().get(index); + PolarisEntityType entityType = + (index == pathToResolve.getEntityNames().size() - 1) + ?
pathToResolve.getLastEntityType() + : PolarisEntityType.NAMESPACE; + + // ensure that this entity has been properly resolved + this.ensureResolved(cacheEntry, catalogPath, entityType, entityName); + + // add to the path under construction + catalogPath.add(cacheEntry.getEntity()); + } + } + + /** + * Ensure that an entity has been properly resolved + * + * @param cacheEntry the entity as resolved by the resolver + * @param catalogPath path to that entity, can be null for top-level entities + * @param entityType entity type + * @param entityName entity name + */ + private void ensureResolved( + EntityCacheEntry cacheEntry, + List<PolarisEntityCore> catalogPath, + PolarisEntityType entityType, + String entityName) { + // everything was resolved + Assertions.assertNotNull(cacheEntry); + PolarisBaseEntity entity = cacheEntry.getEntity(); + Assertions.assertNotNull(entity); + List<PolarisGrantRecord> grantRecords = cacheEntry.getAllGrantRecords(); + Assertions.assertNotNull(grantRecords); + + // reference entity cannot be null + PolarisBaseEntity refEntity = + this.tm.ensureExistsByName( + catalogPath, entityType, PolarisEntitySubType.ANY_SUBTYPE, entityName); + Assertions.assertNotNull(refEntity); + + // reload the cached entry from the backend + PolarisMetaStoreManager.CachedEntryResult refCachedEntry = + this.metaStoreManager.loadCachedEntryById( + this.callCtx, refEntity.getCatalogId(), refEntity.getId()); + + // should exist + Assertions.assertNotNull(refCachedEntry); + + // ensure same entity + refEntity = refCachedEntry.getEntity(); + List<PolarisGrantRecord> refGrantRecords = refCachedEntry.getEntityGrantRecords(); + Assertions.assertNotNull(refEntity); + Assertions.assertNotNull(refGrantRecords); + Assertions.assertEquals(refEntity, entity); + Assertions.assertEquals(refEntity.getEntityVersion(), entity.getEntityVersion()); + + // ensure it has not been dropped + Assertions.assertEquals(0, entity.getDropTimestamp()); + + // same number of grants + Assertions.assertEquals(refGrantRecords.size(), grantRecords.size()); +
+ // ensure same grant records. The order in the list should be deterministic + Iterator<PolarisGrantRecord> refGrantRecordsIt = refGrantRecords.iterator(); + for (PolarisGrantRecord grantRecord : grantRecords) { + PolarisGrantRecord refGrantRecord = refGrantRecordsIt.next(); + Assertions.assertEquals(refGrantRecord, grantRecord); + } + } + + /** + * Ensure that an entity has been properly resolved + * + * @param cacheEntry the entity as resolved by the resolver + * @param entityType entity type + * @param entityName entity name + */ + private void ensureResolved( + EntityCacheEntry cacheEntry, PolarisEntityType entityType, String entityName) { + this.ensureResolved(cacheEntry, null, entityType, entityName); + } +} diff --git a/polaris-core/src/test/java/io/polaris/core/storage/InMemoryStorageIntegrationTest.java b/polaris-core/src/test/java/io/polaris/core/storage/InMemoryStorageIntegrationTest.java new file mode 100644 index 0000000000..f4749fcbe4 --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/core/storage/InMemoryStorageIntegrationTest.java @@ -0,0 +1,205 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.core.storage; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.context.CallContext; +import io.polaris.core.storage.aws.AwsStorageConfigurationInfo; +import java.time.Clock; +import java.util.EnumMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.assertj.core.api.Assertions; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +class InMemoryStorageIntegrationTest { + + @Test + public void testValidateAccessToLocations() { + MockInMemoryStorageIntegration storage = new MockInMemoryStorageIntegration(); + Map> result = + storage.validateAccessToLocations( + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + List.of( + "s3://bucket/path/to/warehouse", + "s3://bucket/anotherpath/to/warehouse", + "s3://bucket2/warehouse/"), + "arn:aws:iam::012345678901:role/jdoe"), + Set.of(PolarisStorageActions.READ), + Set.of( + "s3://bucket/path/to/warehouse/namespace/table", + "s3://bucket2/warehouse", + "s3://arandombucket/path/to/warehouse/namespace/table")); + Assertions.assertThat(result) + .hasSize(3) + .containsEntry( + "s3://bucket/path/to/warehouse/namespace/table", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(true, ""))) + .containsEntry( + "s3://bucket2/warehouse", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(true, ""))) + .containsEntry( + "s3://arandombucket/path/to/warehouse/namespace/table", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(false, ""))); + } + + @Test + public void testValidateAccessToLocationsWithWildcard() { + MockInMemoryStorageIntegration storage = new 
MockInMemoryStorageIntegration(); + Map config = Map.of("ALLOW_WILDCARD_LOCATION", true); + PolarisCallContext polarisCallContext = + new PolarisCallContext( + Mockito.mock(), + new PolarisDefaultDiagServiceImpl(), + new PolarisConfigurationStore() { + @SuppressWarnings("unchecked") + @Override + public @Nullable T getConfiguration(PolarisCallContext ctx, String configName) { + return (T) config.get(configName); + } + }, + Clock.systemUTC()); + try (CallContext cc = + CallContext.setCurrentContext(CallContext.of(() -> "realm", polarisCallContext))) { + Map> result = + storage.validateAccessToLocations( + new FileStorageConfigurationInfo(List.of("file://", "*")), + Set.of(PolarisStorageActions.READ), + Set.of( + "s3://bucket/path/to/warehouse/namespace/table", + "file:///etc/passwd", + "a/relative/subdirectory")); + Assertions.assertThat(result) + .hasSize(3) + .hasEntrySatisfying( + "s3://bucket/path/to/warehouse/namespace/table", + val -> + Assertions.assertThat(val) + .hasSize(1) + .containsKey(PolarisStorageActions.READ) + .extractingByKey(PolarisStorageActions.READ) + .returns(true, PolarisStorageIntegration.ValidationResult::isSuccess)) + .hasEntrySatisfying( + "file:///etc/passwd", + val -> + Assertions.assertThat(val) + .hasSize(1) + .containsKey(PolarisStorageActions.READ) + .extractingByKey(PolarisStorageActions.READ) + .returns(true, PolarisStorageIntegration.ValidationResult::isSuccess)) + .hasEntrySatisfying( + "a/relative/subdirectory", + val -> + Assertions.assertThat(val) + .hasSize(1) + .containsKey(PolarisStorageActions.READ) + .extractingByKey(PolarisStorageActions.READ) + .returns(true, PolarisStorageIntegration.ValidationResult::isSuccess)); + } + } + + @Test + public void testValidateAccessToLocationsNoAllowedLocations() { + MockInMemoryStorageIntegration storage = new MockInMemoryStorageIntegration(); + Map> result = + storage.validateAccessToLocations( + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + 
List.of(), + "arn:aws:iam::012345678901:role/jdoe"), + Set.of(PolarisStorageActions.READ), + Set.of( + "s3://bucket/path/to/warehouse/namespace/table", + "s3://bucket2/warehouse/namespace/table", + "s3://arandombucket/path/to/warehouse/namespace/table")); + Assertions.assertThat(result) + .hasSize(3) + .containsEntry( + "s3://bucket/path/to/warehouse/namespace/table", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(false, ""))) + .containsEntry( + "s3://bucket2/warehouse/namespace/table", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(false, ""))) + .containsEntry( + "s3://arandombucket/path/to/warehouse/namespace/table", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(false, ""))); + } + + @Test + public void testValidateAccessToLocationsWithPrefixOfAllowedLocation() { + MockInMemoryStorageIntegration storage = new MockInMemoryStorageIntegration(); + Map> result = + storage.validateAccessToLocations( + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + List.of("s3://bucket/path/to/warehouse"), + "arn:aws:iam::012345678901:role/jdoe"), + Set.of(PolarisStorageActions.READ), + // trying to read a prefix under the allowed location + Set.of("s3://bucket/path/to")); + Assertions.assertThat(result) + .hasSize(1) + .containsEntry( + "s3://bucket/path/to", + Map.of( + PolarisStorageActions.READ, + new PolarisStorageIntegration.ValidationResult(false, ""))); + } + + private static final class MockInMemoryStorageIntegration + extends InMemoryStorageIntegration { + public MockInMemoryStorageIntegration() { + super(MockInMemoryStorageIntegration.class.getName()); + } + + @Override + public EnumMap getSubscopedCreds( + @NotNull PolarisDiagnostics diagnostics, + @NotNull PolarisStorageConfigurationInfo storageConfig, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations) { + 
return null; + } + + @Override + public EnumMap + descPolarisStorageConfiguration( + @NotNull PolarisStorageConfigurationInfo storageConfigInfo) { + return null; + } + } +} diff --git a/polaris-core/src/test/java/io/polaris/core/storage/cache/StorageCredentialCacheTest.java b/polaris-core/src/test/java/io/polaris/core/storage/cache/StorageCredentialCacheTest.java new file mode 100644 index 0000000000..efbaf19c82 --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/core/storage/cache/StorageCredentialCacheTest.java @@ -0,0 +1,433 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.core.storage.cache; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisMetaStoreManagerImpl; +import io.polaris.core.persistence.PolarisMetaStoreSession; +import io.polaris.core.persistence.PolarisObjectMapperUtil; +import io.polaris.core.persistence.PolarisTreeMapMetaStoreSessionImpl; +import io.polaris.core.persistence.PolarisTreeMapStore; +import io.polaris.core.storage.PolarisCredentialProperty; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.EnumMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import org.jetbrains.annotations.NotNull; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.RepeatedTest; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +public class StorageCredentialCacheTest { + + // polaris call context + private final PolarisCallContext callCtx; + + // the meta store manager + private final PolarisMetaStoreManager metaStoreManager; + + StorageCredentialCache storageCredentialCache; + + public StorageCredentialCacheTest() { + // diag services + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + // the entity store, use treemap implementation + PolarisTreeMapStore store = new PolarisTreeMapStore(diagServices); + // to interact with the metastore + PolarisMetaStoreSession metaStore = + new PolarisTreeMapMetaStoreSessionImpl(store, Mockito.mock()); + callCtx = new PolarisCallContext(metaStore, diagServices); + metaStoreManager = 
Mockito.mock(PolarisMetaStoreManagerImpl.class); + storageCredentialCache = new StorageCredentialCache(); + } + + @Test + public void testBadResult() { + storageCredentialCache = new StorageCredentialCache(); + PolarisMetaStoreManager.ScopedCredentialsResult badResult = + new PolarisMetaStoreManager.ScopedCredentialsResult( + PolarisMetaStoreManager.ReturnStatus.SUBSCOPE_CREDS_ERROR, "extra_error_info"); + Mockito.when( + metaStoreManager.getSubscopedCredsForEntity( + Mockito.any(), + Mockito.anyLong(), + Mockito.anyLong(), + Mockito.anyBoolean(), + Mockito.anySet(), + Mockito.anySet())) + .thenReturn(badResult); + PolarisEntity polarisEntity = + new PolarisEntity( + new PolarisBaseEntity( + 1, 2, PolarisEntityType.CATALOG, PolarisEntitySubType.TABLE, 0, "name")); + Assertions.assertThrows( + RuntimeException.class, + () -> + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path")), + new HashSet<>(Arrays.asList("s3://bucket3/path")))); + } + + @Test + public void testCacheHit() { + storageCredentialCache = new StorageCredentialCache(); + List mockedScopedCreds = + getFakeScopedCreds(3, /* expireSoon= */ false); + Mockito.when( + metaStoreManager.getSubscopedCredsForEntity( + Mockito.any(), + Mockito.anyLong(), + Mockito.anyLong(), + Mockito.anyBoolean(), + Mockito.anySet(), + Mockito.anySet())) + .thenReturn(mockedScopedCreds.get(0)) + .thenReturn(mockedScopedCreds.get(1)) + .thenReturn(mockedScopedCreds.get(1)); + PolarisBaseEntity baseEntity = + new PolarisBaseEntity( + 1, 2, PolarisEntityType.CATALOG, PolarisEntitySubType.TABLE, 0, "name"); + PolarisEntity polarisEntity = new PolarisEntity(baseEntity); + + // add an item to the cache + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new 
HashSet<>(Arrays.asList("s3://bucket3/path", "s3://bucket4/path"))); + Assertions.assertEquals(1, storageCredentialCache.getEstimatedSize()); + + // subscope for the same entity and same allowed locations, will hit the cache + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket3/path", "s3://bucket4/path"))); + Assertions.assertEquals(1, storageCredentialCache.getEstimatedSize()); + } + + @RepeatedTest(10) + public void testCacheEvict() throws InterruptedException { + storageCredentialCache = new StorageCredentialCache(); + List mockedScopedCreds = + getFakeScopedCreds(3, /* expireSoon= */ true); + + Mockito.when( + metaStoreManager.getSubscopedCredsForEntity( + Mockito.any(), + Mockito.anyLong(), + Mockito.anyLong(), + Mockito.anyBoolean(), + Mockito.anySet(), + Mockito.anySet())) + .thenReturn(mockedScopedCreds.get(0)) + .thenReturn(mockedScopedCreds.get(1)) + .thenReturn(mockedScopedCreds.get(2)); + PolarisBaseEntity baseEntity = + new PolarisBaseEntity( + 1, 2, PolarisEntityType.CATALOG, PolarisEntitySubType.TABLE, 0, "name"); + PolarisEntity polarisEntity = new PolarisEntity(baseEntity); + StorageCredentialCacheKey cacheKey = + new StorageCredentialCacheKey( + polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path")), + callCtx); + + // the entry will be evicted immediately because the token is expired + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertNull(storageCredentialCache.getIfPresent(cacheKey)); + + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + 
polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertNull(storageCredentialCache.getIfPresent(cacheKey)); + + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + polarisEntity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertNull(storageCredentialCache.getIfPresent(cacheKey)); + } + + @Test + public void testCacheGenerateNewEntries() { + storageCredentialCache = new StorageCredentialCache(); + List mockedScopedCreds = + getFakeScopedCreds(3, /* expireSoon= */ false); + Mockito.when( + metaStoreManager.getSubscopedCredsForEntity( + Mockito.any(), + Mockito.anyLong(), + Mockito.anyLong(), + Mockito.anyBoolean(), + Mockito.anySet(), + Mockito.anySet())) + .thenReturn(mockedScopedCreds.get(0)) + .thenReturn(mockedScopedCreds.get(1)) + .thenReturn(mockedScopedCreds.get(2)); + List entityList = getPolarisEntities(); + int cacheSize = 0; + // different catalog will generate new cache entries + for (PolarisEntity entity : entityList) { + Map res = + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertEquals(++cacheSize, storageCredentialCache.getEstimatedSize()); + } + // update the entity's storage config, since StorageConfig changed, cache will generate new + // entry + for (PolarisEntity entity : entityList) { + Map internalMap = entity.getPropertiesAsMap(); + internalMap.put( + PolarisEntityConstants.getStorageConfigInfoPropertyName(), "newStorageConfig"); + entity.setInternalProperties( + PolarisObjectMapperUtil.serializeProperties(callCtx, internalMap)); + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + 
callCtx, + entity, + /* allowedListAction= */ true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertEquals(++cacheSize, storageCredentialCache.getEstimatedSize()); + } + // allowedListAction changed to different value FALSE, will generate new entry + for (PolarisEntity entity : entityList) { + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + /* allowedListAction= */ false, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertEquals(++cacheSize, storageCredentialCache.getEstimatedSize()); + } + // different allowedWriteLocations, will generate new entry + for (PolarisEntity entity : entityList) { + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + /* allowedListAction= */ false, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://differentbucket/path"))); + Assertions.assertEquals(++cacheSize, storageCredentialCache.getEstimatedSize()); + } + // different allowedReadLocations, will generate new entry + for (PolarisEntity entity : entityList) { + Map<String, String> internalMap = entity.getPropertiesAsMap(); + internalMap.put( + PolarisEntityConstants.getStorageConfigInfoPropertyName(), "newStorageConfig"); + entity.setInternalProperties( + PolarisObjectMapperUtil.serializeProperties(callCtx, internalMap)); + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + /* allowedListAction= */ false, + new HashSet<>(Arrays.asList("s3://differentbucket/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket/path"))); + Assertions.assertEquals(++cacheSize, storageCredentialCache.getEstimatedSize()); + } + } + + @Test + public void testCacheNotAffectedBy() { + storageCredentialCache = new
StorageCredentialCache(); + List<PolarisMetaStoreManager.ScopedCredentialsResult> mockedScopedCreds = + getFakeScopedCreds(3, /* expireSoon= */ false); + + Mockito.when( + metaStoreManager.getSubscopedCredsForEntity( + Mockito.any(), + Mockito.anyLong(), + Mockito.anyLong(), + Mockito.anyBoolean(), + Mockito.anySet(), + Mockito.anySet())) + .thenReturn(mockedScopedCreds.get(0)) + .thenReturn(mockedScopedCreds.get(1)) + .thenReturn(mockedScopedCreds.get(2)); + List<PolarisEntity> entityList = getPolarisEntities(); + for (PolarisEntity entity : entityList) { + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket3/path", "s3://bucket4/path"))); + } + Assertions.assertEquals(entityList.size(), storageCredentialCache.getEstimatedSize()); + + // entity ID does not affect the cache + for (PolarisEntity entity : entityList) { + entity.setId(1234); + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket3/path", "s3://bucket4/path"))); + Assertions.assertEquals(entityList.size(), storageCredentialCache.getEstimatedSize()); + } + + // other property changes do not affect the cache + for (PolarisEntity entity : entityList) { + entity.setEntityVersion(5); + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + true, + new HashSet<>(Arrays.asList("s3://bucket1/path", "s3://bucket2/path")), + new HashSet<>(Arrays.asList("s3://bucket3/path", "s3://bucket4/path"))); + Assertions.assertEquals(entityList.size(), storageCredentialCache.getEstimatedSize()); + } + // order of the allowedReadLocations does not affect the cache + for (PolarisEntity entity : entityList) { + entity.setEntityVersion(5); + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, +
entity, + true, + new HashSet<>(Arrays.asList("s3://bucket2/path", "s3://bucket1/path")), + new HashSet<>(Arrays.asList("s3://bucket3/path", "s3://bucket4/path"))); + Assertions.assertEquals(entityList.size(), storageCredentialCache.getEstimatedSize()); + } + + // order of the allowedWriteLocations does not affect the cache + for (PolarisEntity entity : entityList) { + entity.setEntityVersion(5); + storageCredentialCache.getOrGenerateSubScopeCreds( + metaStoreManager, + callCtx, + entity, + true, + new HashSet<>(Arrays.asList("s3://bucket2/path", "s3://bucket1/path")), + new HashSet<>(Arrays.asList("s3://bucket4/path", "s3://bucket3/path"))); + Assertions.assertEquals(entityList.size(), storageCredentialCache.getEstimatedSize()); + } + } + + private static List getFakeScopedCreds( + int number, boolean expireSoon) { + List res = new ArrayList<>(); + for (int i = 1; i <= number; i = i + 3) { + int finalI = i; + // NOTE: The default behavior of the Caffeine cache seems to have a bug; if our + // expireAfter definition in the StorageCredentialCache constructor doesn't clip + // the returned time to minimum of 0, and we set the expiration time to more than + // 1 second in the past, it seems the cache fails to remove the expired entries + // no matter how long we wait. This is possibly related to the implementation-specific + // "minimum difference between the scheduled executions" documented in Caffeine.java + // to be 1 second. + String expireTime = + expireSoon + ? 
String.valueOf(System.currentTimeMillis() - 100) + : String.valueOf(Long.MAX_VALUE); + res.add( + new PolarisMetaStoreManager.ScopedCredentialsResult( + new EnumMap<>(PolarisCredentialProperty.class) { + { + put(PolarisCredentialProperty.AWS_KEY_ID, "key_id_" + finalI); + put(PolarisCredentialProperty.AWS_SECRET_KEY, "key_secret_" + finalI); + put(PolarisCredentialProperty.EXPIRATION_TIME, expireTime); + } + })); + if (res.size() == number) return res; + res.add( + new PolarisMetaStoreManager.ScopedCredentialsResult( + new EnumMap<>(PolarisCredentialProperty.class) { + { + put(PolarisCredentialProperty.AZURE_SAS_TOKEN, "sas_token_" + finalI); + put(PolarisCredentialProperty.AZURE_ACCOUNT_HOST, "account_host"); + put(PolarisCredentialProperty.EXPIRATION_TIME, expireTime); + } + })); + if (res.size() == number) return res; + res.add( + new PolarisMetaStoreManager.ScopedCredentialsResult( + new EnumMap<>(PolarisCredentialProperty.class) { + { + put(PolarisCredentialProperty.GCS_ACCESS_TOKEN, "gcs_token_" + finalI); + put(PolarisCredentialProperty.GCS_ACCESS_TOKEN_EXPIRES_AT, expireTime); + } + })); + } + return res; + } + + @NotNull + private static List getPolarisEntities() { + PolarisEntity polarisEntity1 = + new PolarisEntity( + new PolarisBaseEntity( + 1, 2, PolarisEntityType.CATALOG, PolarisEntitySubType.TABLE, 0, "name")); + PolarisEntity polarisEntity2 = + new PolarisEntity( + new PolarisBaseEntity( + 2, 2, PolarisEntityType.CATALOG, PolarisEntitySubType.TABLE, 0, "name")); + PolarisEntity polarisEntity3 = + new PolarisEntity( + new PolarisBaseEntity( + 3, 2, PolarisEntityType.CATALOG, PolarisEntitySubType.TABLE, 0, "name")); + + List entityList = Arrays.asList(polarisEntity1, polarisEntity2, polarisEntity3); + return entityList; + } +} diff --git a/polaris-core/src/test/java/io/polaris/service/storage/aws/AwsCredentialsStorageIntegrationTest.java b/polaris-core/src/test/java/io/polaris/service/storage/aws/AwsCredentialsStorageIntegrationTest.java new file mode 
100644 index 0000000000..44acbb8807 --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/service/storage/aws/AwsCredentialsStorageIntegrationTest.java @@ -0,0 +1,464 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.storage.aws; + +import static org.assertj.core.api.Assertions.assertThat; + +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.aws.AwsCredentialsStorageIntegration; +import io.polaris.core.storage.aws.AwsStorageConfigurationInfo; +import java.util.EnumMap; +import java.util.List; +import java.util.Set; +import org.assertj.core.api.InstanceOfAssertFactories; +import org.jetbrains.annotations.NotNull; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.ValueSource; +import org.mockito.Mockito; +import software.amazon.awssdk.policybuilder.iam.IamAction; +import software.amazon.awssdk.policybuilder.iam.IamCondition; +import software.amazon.awssdk.policybuilder.iam.IamConditionOperator; +import software.amazon.awssdk.policybuilder.iam.IamEffect; +import software.amazon.awssdk.policybuilder.iam.IamPolicy; +import software.amazon.awssdk.policybuilder.iam.IamResource; +import software.amazon.awssdk.policybuilder.iam.IamStatement; +import 
software.amazon.awssdk.services.sts.StsClient; +import software.amazon.awssdk.services.sts.model.AssumeRoleRequest; +import software.amazon.awssdk.services.sts.model.AssumeRoleResponse; +import software.amazon.awssdk.services.sts.model.Credentials; + +class AwsCredentialsStorageIntegrationTest { + + public static final AssumeRoleResponse ASSUME_ROLE_RESPONSE = + AssumeRoleResponse.builder() + .credentials( + Credentials.builder() + .accessKeyId("accessKey") + .secretAccessKey("secretKey") + .sessionToken("sess") + .build()) + .build(); + public static final String AWS_PARTITION = "aws"; + + @Test + public void testGetSubscopedCreds() { + StsClient stsClient = Mockito.mock(StsClient.class); + String roleARN = "arn:aws:iam::012345678901:role/jdoe"; + String externalId = "externalId"; + Mockito.when(stsClient.assumeRole(Mockito.isA(AssumeRoleRequest.class))) + .thenAnswer( + invocation -> { + assertThat(invocation.getArguments()[0]) + .isInstanceOf(AssumeRoleRequest.class) + .asInstanceOf(InstanceOfAssertFactories.type(AssumeRoleRequest.class)) + .returns(externalId, AssumeRoleRequest::externalId) + .returns(roleARN, AssumeRoleRequest::roleArn); + return ASSUME_ROLE_RESPONSE; + }); + String warehouseDir = "s3://bucket/path/to/warehouse"; + EnumMap credentials = + new AwsCredentialsStorageIntegration(stsClient) + .getSubscopedCreds( + Mockito.mock(PolarisDiagnostics.class), + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + List.of(warehouseDir), + roleARN, + externalId), + true, + Set.of(warehouseDir + "/namespace/table"), + Set.of(warehouseDir + "/namespace/table")); + assertThat(credentials) + .isNotEmpty() + .containsEntry(PolarisCredentialProperty.AWS_TOKEN, "sess") + .containsEntry(PolarisCredentialProperty.AWS_KEY_ID, "accessKey") + .containsEntry(PolarisCredentialProperty.AWS_SECRET_KEY, "secretKey"); + } + + @ParameterizedTest + @ValueSource(strings = {AWS_PARTITION, "aws-cn", "aws-us-gov"}) + public void 
testGetSubscopedCredsInlinePolicy(String awsPartition) { + PolarisStorageConfigurationInfo.StorageType storageType = + PolarisStorageConfigurationInfo.StorageType.S3; + String roleARN = + switch (awsPartition) { + case AWS_PARTITION -> "arn:aws:iam::012345678901:role/jdoe"; + case "aws-cn" -> "arn:aws-cn:iam::012345678901:role/jdoe"; + case "aws-us-gov" -> "arn:aws-us-gov:iam::012345678901:role/jdoe"; + default -> throw new IllegalArgumentException("Unknown aws partition: " + awsPartition); + }; + StsClient stsClient = Mockito.mock(StsClient.class); + String externalId = "externalId"; + String bucket = "bucket"; + String warehouseKeyPrefix = "path/to/warehouse"; + String firstPath = warehouseKeyPrefix + "/namespace/table"; + String secondPath = warehouseKeyPrefix + "/oldnamespace/table"; + Mockito.when(stsClient.assumeRole(Mockito.isA(AssumeRoleRequest.class))) + .thenAnswer( + invocation -> { + assertThat(invocation.getArguments()[0]) + .isInstanceOf(AssumeRoleRequest.class) + .asInstanceOf(InstanceOfAssertFactories.type(AssumeRoleRequest.class)) + .extracting(AssumeRoleRequest::policy) + .extracting(IamPolicy::fromJson) + .satisfies( + policy -> { + assertThat(policy) + .extracting(IamPolicy::statements) + .asInstanceOf(InstanceOfAssertFactories.list(IamStatement.class)) + .hasSize(3) + .satisfiesExactly( + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .returns( + List.of( + IamResource.create( + s3Arn(awsPartition, bucket, firstPath))), + IamStatement::resources) + .returns( + List.of( + IamAction.create("s3:PutObject"), + IamAction.create("s3:DeleteObject")), + IamStatement::actions), + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .returns( + List.of( + IamResource.create( + s3Arn(awsPartition, bucket, null))), + IamStatement::resources) + .returns( + List.of(IamAction.create("s3:ListBucket")), + IamStatement::actions) + .returns( + List.of( + IamResource.create( + 
s3Arn(awsPartition, bucket, null))), + IamStatement::resources) + .satisfies( + st -> + assertThat(st.conditions()) + .containsExactlyInAnyOrder( + IamCondition.builder() + .operator( + IamConditionOperator.STRING_LIKE) + .key("s3:prefix") + .value(secondPath + "/*") + .build(), + IamCondition.builder() + .operator( + IamConditionOperator.STRING_LIKE) + .key("s3:prefix") + .value(firstPath + "/*") + .build())), + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .satisfies( + st -> + assertThat(st.resources()) + .containsExactlyInAnyOrder( + IamResource.create( + s3Arn(awsPartition, bucket, firstPath)), + IamResource.create( + s3Arn( + awsPartition, bucket, secondPath)))) + .returns( + List.of( + IamAction.create("s3:GetObject"), + IamAction.create("s3:GetObjectVersion")), + IamStatement::actions)); + }); + return ASSUME_ROLE_RESPONSE; + }); + EnumMap credentials = + new AwsCredentialsStorageIntegration(stsClient) + .getSubscopedCreds( + Mockito.mock(PolarisDiagnostics.class), + new AwsStorageConfigurationInfo( + storageType, + List.of(s3Path(bucket, warehouseKeyPrefix, storageType)), + roleARN, + externalId), + true, + Set.of( + s3Path(bucket, firstPath, storageType), + s3Path(bucket, secondPath, storageType)), + Set.of(s3Path(bucket, firstPath, storageType))); + assertThat(credentials) + .isNotEmpty() + .containsEntry(PolarisCredentialProperty.AWS_TOKEN, "sess") + .containsEntry(PolarisCredentialProperty.AWS_KEY_ID, "accessKey") + .containsEntry(PolarisCredentialProperty.AWS_SECRET_KEY, "secretKey"); + } + + @Test + public void testGetSubscopedCredsInlinePolicyWithoutList() { + StsClient stsClient = Mockito.mock(StsClient.class); + String roleARN = "arn:aws:iam::012345678901:role/jdoe"; + String externalId = "externalId"; + String bucket = "bucket"; + String warehouseKeyPrefix = "path/to/warehouse"; + String firstPath = warehouseKeyPrefix + "/namespace/table"; + String secondPath = warehouseKeyPrefix + "/oldnamespace/table"; 
+ Mockito.when(stsClient.assumeRole(Mockito.isA(AssumeRoleRequest.class))) + .thenAnswer( + invocation -> { + assertThat(invocation.getArguments()[0]) + .isInstanceOf(AssumeRoleRequest.class) + .asInstanceOf(InstanceOfAssertFactories.type(AssumeRoleRequest.class)) + .extracting(AssumeRoleRequest::policy) + .extracting(IamPolicy::fromJson) + .satisfies( + policy -> { + assertThat(policy) + .extracting(IamPolicy::statements) + .asInstanceOf(InstanceOfAssertFactories.list(IamStatement.class)) + .hasSize(2) + .satisfiesExactly( + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .returns( + List.of( + IamResource.create( + s3Arn(AWS_PARTITION, bucket, firstPath))), + IamStatement::resources) + .returns( + List.of( + IamAction.create("s3:PutObject"), + IamAction.create("s3:DeleteObject")), + IamStatement::actions), + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .satisfies( + st -> + assertThat(st.resources()) + .containsExactlyInAnyOrder( + IamResource.create( + s3Arn( + AWS_PARTITION, bucket, firstPath)), + IamResource.create( + s3Arn( + AWS_PARTITION, + bucket, + secondPath)))) + .returns( + List.of( + IamAction.create("s3:GetObject"), + IamAction.create("s3:GetObjectVersion")), + IamStatement::actions)); + }); + return ASSUME_ROLE_RESPONSE; + }); + PolarisStorageConfigurationInfo.StorageType storageType = + PolarisStorageConfigurationInfo.StorageType.S3; + EnumMap credentials = + new AwsCredentialsStorageIntegration(stsClient) + .getSubscopedCreds( + Mockito.mock(PolarisDiagnostics.class), + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + List.of(s3Path(bucket, warehouseKeyPrefix, storageType)), + roleARN, + externalId), + false, /* allowList = false*/ + Set.of( + s3Path(bucket, firstPath, storageType), + s3Path(bucket, secondPath, storageType)), + Set.of(s3Path(bucket, firstPath, storageType))); + assertThat(credentials) + .isNotEmpty() + 
.containsEntry(PolarisCredentialProperty.AWS_TOKEN, "sess") + .containsEntry(PolarisCredentialProperty.AWS_KEY_ID, "accessKey") + .containsEntry(PolarisCredentialProperty.AWS_SECRET_KEY, "secretKey"); + } + + @Test + public void testGetSubscopedCredsInlinePolicyWithoutWrites() { + StsClient stsClient = Mockito.mock(StsClient.class); + String roleARN = "arn:aws:iam::012345678901:role/jdoe"; + String externalId = "externalId"; + String bucket = "bucket"; + String warehouseKeyPrefix = "path/to/warehouse"; + String firstPath = warehouseKeyPrefix + "/namespace/table"; + String secondPath = warehouseKeyPrefix + "/oldnamespace/table"; + Mockito.when(stsClient.assumeRole(Mockito.isA(AssumeRoleRequest.class))) + .thenAnswer( + invocation -> { + assertThat(invocation.getArguments()[0]) + .isInstanceOf(AssumeRoleRequest.class) + .asInstanceOf(InstanceOfAssertFactories.type(AssumeRoleRequest.class)) + .extracting(AssumeRoleRequest::policy) + .extracting(IamPolicy::fromJson) + .satisfies( + policy -> { + assertThat(policy) + .extracting(IamPolicy::statements) + .asInstanceOf(InstanceOfAssertFactories.list(IamStatement.class)) + .hasSize(2) + .satisfiesExactly( + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .returns( + List.of( + IamResource.create( + s3Arn(AWS_PARTITION, bucket, null))), + IamStatement::resources) + .returns( + List.of(IamAction.create("s3:ListBucket")), + IamStatement::actions), + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .satisfies( + st -> + assertThat(st.resources()) + .containsExactlyInAnyOrder( + IamResource.create( + s3Arn( + AWS_PARTITION, bucket, firstPath)), + IamResource.create( + s3Arn( + AWS_PARTITION, + bucket, + secondPath)))) + .returns( + List.of( + IamAction.create("s3:GetObject"), + IamAction.create("s3:GetObjectVersion")), + IamStatement::actions)); + }); + return ASSUME_ROLE_RESPONSE; + }); + PolarisStorageConfigurationInfo.StorageType storageType = + 
PolarisStorageConfigurationInfo.StorageType.S3; + EnumMap<PolarisCredentialProperty, String> credentials = + new AwsCredentialsStorageIntegration(stsClient) + .getSubscopedCreds( + Mockito.mock(PolarisDiagnostics.class), + new AwsStorageConfigurationInfo( + storageType, + List.of(s3Path(bucket, warehouseKeyPrefix, storageType)), + roleARN, + externalId), + true, /* allowList = true */ + Set.of( + s3Path(bucket, firstPath, storageType), + s3Path(bucket, secondPath, storageType)), + Set.of()); + assertThat(credentials) + .isNotEmpty() + .containsEntry(PolarisCredentialProperty.AWS_TOKEN, "sess") + .containsEntry(PolarisCredentialProperty.AWS_KEY_ID, "accessKey") + .containsEntry(PolarisCredentialProperty.AWS_SECRET_KEY, "secretKey"); + } + + @Test + public void testGetSubscopedCredsInlinePolicyWithEmptyReadAndWrite() { + StsClient stsClient = Mockito.mock(StsClient.class); + String roleARN = "arn:aws:iam::012345678901:role/jdoe"; + String externalId = "externalId"; + String bucket = "bucket"; + String warehouseKeyPrefix = "path/to/warehouse"; + String firstPath = warehouseKeyPrefix + "/namespace/table"; + String secondPath = warehouseKeyPrefix + "/oldnamespace/table"; + Mockito.when(stsClient.assumeRole(Mockito.isA(AssumeRoleRequest.class))) + .thenAnswer( + invocation -> { + assertThat(invocation.getArguments()[0]) + .isInstanceOf(AssumeRoleRequest.class) + .asInstanceOf(InstanceOfAssertFactories.type(AssumeRoleRequest.class)) + .extracting(AssumeRoleRequest::policy) + .extracting(IamPolicy::fromJson) + .satisfies( + policy -> { + assertThat(policy) + .extracting(IamPolicy::statements) + .asInstanceOf(InstanceOfAssertFactories.list(IamStatement.class)) + .hasSize(2) + .satisfiesExactly( + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) + .returns(List.of(), IamStatement::resources) + .returns( + List.of(IamAction.create("s3:ListBucket")), + IamStatement::actions), + statement -> + assertThat(statement) + .returns(IamEffect.ALLOW, IamStatement::effect) +
.satisfies( + st -> assertThat(st.resources()).containsExactly()) + .returns( + List.of( + IamAction.create("s3:GetObject"), + IamAction.create("s3:GetObjectVersion")), + IamStatement::actions)); + }); + return ASSUME_ROLE_RESPONSE; + }); + EnumMap<PolarisCredentialProperty, String> credentials = + new AwsCredentialsStorageIntegration(stsClient) + .getSubscopedCreds( + Mockito.mock(PolarisDiagnostics.class), + new AwsStorageConfigurationInfo( + PolarisStorageConfigurationInfo.StorageType.S3, + List.of( + s3Path( + bucket, + warehouseKeyPrefix, + PolarisStorageConfigurationInfo.StorageType.S3)), + roleARN, + externalId), + true, /* allowList = true */ + Set.of(), + Set.of()); + assertThat(credentials) + .isNotEmpty() + .containsEntry(PolarisCredentialProperty.AWS_TOKEN, "sess") + .containsEntry(PolarisCredentialProperty.AWS_KEY_ID, "accessKey") + .containsEntry(PolarisCredentialProperty.AWS_SECRET_KEY, "secretKey"); + } + + private static @NotNull String s3Arn(String partition, String bucket, String keyPrefix) { + String bucketArn = "arn:" + partition + ":s3:::" + bucket; + if (keyPrefix == null) { + return bucketArn; + } + return bucketArn + "/" + keyPrefix + "/*"; + } + + private static @NotNull String s3CnArn(String bucket, String keyPrefix) { + String bucketArn = "arn:aws-cn:s3:::" + bucket; + if (keyPrefix == null) { + return bucketArn; + } + return bucketArn + "/" + keyPrefix + "/*"; + } + + private static @NotNull String s3Path( + String bucket, String keyPrefix, PolarisStorageConfigurationInfo.StorageType storageType) { + return storageType.getPrefix() + bucket + "/" + keyPrefix; + } +} diff --git a/polaris-core/src/test/java/io/polaris/service/storage/azure/AzureCredentialStorageIntegrationTest.java b/polaris-core/src/test/java/io/polaris/service/storage/azure/AzureCredentialStorageIntegrationTest.java new file mode 100644 index 0000000000..d80957bdc4 --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/service/storage/azure/AzureCredentialStorageIntegrationTest.java @@ -0,0 +1,401 @@
+/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.storage.azure; + +import com.azure.storage.blob.BlobClient; +import com.azure.storage.blob.BlobClientBuilder; +import com.azure.storage.blob.BlobContainerClient; +import com.azure.storage.blob.BlobServiceClient; +import com.azure.storage.blob.BlobServiceClientBuilder; +import com.azure.storage.blob.models.BlobStorageException; +import com.azure.storage.blob.models.ListBlobsOptions; +import com.azure.storage.blob.options.BlobParallelUploadOptions; +import com.azure.storage.common.Utility; +import com.azure.storage.file.datalake.DataLakeFileClient; +import com.azure.storage.file.datalake.DataLakeFileSystemClient; +import com.azure.storage.file.datalake.DataLakeFileSystemClientBuilder; +import com.azure.storage.file.datalake.models.DataLakeStorageException; +import com.azure.storage.file.datalake.models.PathItem; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.azure.AzureCredentialsStorageIntegration; +import io.polaris.core.storage.azure.AzureStorageConfigurationInfo; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; +import 
java.nio.charset.StandardCharsets; +import java.time.Duration; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.EnumMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.stream.Stream; +import org.assertj.core.util.Strings; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtensionContext; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.Arguments; +import org.junit.jupiter.params.provider.ArgumentsProvider; +import org.junit.jupiter.params.provider.ArgumentsSource; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class AzureCredentialStorageIntegrationTest { + + private final Logger LOGGER = + LoggerFactory.getLogger(AzureCredentialStorageIntegrationTest.class); + + private final String clientId = System.getenv("AZURE_CLIENT_ID"); + private final String clientSecret = System.getenv("AZURE_CLIENT_SECRET"); + private final String tenantId = "d479c7c9-2632-445a-b22d-7c19e68774f6"; + + private boolean checkEnvNullVariables() { + if (Strings.isNullOrEmpty(clientId) || Strings.isNullOrEmpty(clientSecret)) { + LOGGER.debug("Null Azure testing environment variables! 
Skip " + this.getClass().getName()); + return true; + } + return false; + } + + @Test + public void testNegativeCases() { + List<String> differentEndpointList = + Arrays.asList( + "abfss://container@icebergdfsstorageacct.dfs.core.windows.net/polaris-test/", + "abfss://container@icebergdfsstorageacct.blob.core.windows.net/polaris-test/"); + Assertions.assertThrows( + RuntimeException.class, + () -> + subscopedCredsForOperations( + differentEndpointList, /* allowedWriteLoc= */ new ArrayList<>(), true)); + + List<String> differentStorageAccts = + Arrays.asList( + "abfss://container@polarisadls.dfs.core.windows.net/polaris-test/", + "abfss://container@icebergdfsstorageacct.dfs.core.windows.net/polaris-test/"); + Assertions.assertThrows( + RuntimeException.class, + () -> + subscopedCredsForOperations( + differentStorageAccts, /* allowedWriteLoc= */ new ArrayList<>(), true)); + List<String> differentContainers = + Arrays.asList( + "abfss://container1@icebergdfsstorageacct.dfs.core.windows.net/polaris-test/", + "abfss://container2@icebergdfsstorageacct.dfs.core.windows.net/polaris-test/"); + + Assertions.assertThrows( + RuntimeException.class, + () -> + subscopedCredsForOperations( + differentContainers, /* allowedWriteLoc= */ new ArrayList<>(), true)); + } + + @TestWithAzureArgs + public void testGetSubscopedTokenList(boolean allowListAction, String service) { + + if (checkEnvNullVariables()) { + return; + } + boolean isBlobService = service.equals("blob"); + List<String> allowedLoc = + Arrays.asList( + String.format( + "abfss://container@icebergdfsstorageacct.%s.core.windows.net/polaris-test/", + service)); + Map<PolarisCredentialProperty, String> credsMap = + subscopedCredsForOperations( + /* allowedReadLoc= */ allowedLoc, + /* allowedWriteLoc= */ new ArrayList<>(), + allowListAction); + Assertions.assertEquals(2, credsMap.size()); + String sasToken = credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN); + Assertions.assertNotNull(sasToken); + String serviceEndpoint = +
String.format("https://icebergdfsstorageacct.%s.core.windows.net", service); + BlobContainerClient containerClient = + createContainerClient(sasToken, serviceEndpoint, "container"); + DataLakeFileSystemClient fileSystemClient = + createDatalakeFileSystemClient(sasToken, serviceEndpoint, "container"); + + if (allowListAction) { + // LIST succeeds + Assertions.assertDoesNotThrow( + () -> { + if (isBlobService) { + containerClient + .listBlobs( + new ListBlobsOptions().setPrefix(Utility.urlEncode("polaris-test/")), + Duration.ofSeconds(5)) + .streamByPage() + .findFirst() + .orElse(null); + } else { + fileSystemClient + .getDirectoryClient("polaris-test") + .listPaths() + .forEach(PathItem::getName); + } + }); + } else { + if (isBlobService) { + Assertions.assertThrows( + BlobStorageException.class, + () -> + containerClient + .listBlobs( + new ListBlobsOptions().setPrefix(Utility.urlEncode("polaris-test/")), + Duration.ofSeconds(5)) + .streamByPage() + .findFirst() + .orElse(null)); + } else { + Assertions.assertThrows( + DataLakeStorageException.class, + () -> + fileSystemClient + .getDirectoryClient("polaris-test") + .listPaths() + .forEach(PathItem::getName)); + } + } + } + + @TestWithAzureArgs + public void testGetSubscopedTokenRead(boolean allowListAction, String service) { + if (checkEnvNullVariables()) { + return; + } + String allowedPrefix = "polaris-test"; + String blockedPrefix = "blocked-prefix"; + List<String> allowedLoc = + Arrays.asList( + String.format( + "abfss://container@icebergdfsstorageacct.%s.core.windows.net/%s", + service, allowedPrefix)); + Map<PolarisCredentialProperty, String> credsMap = + subscopedCredsForOperations( + /* allowedReadLoc= */ allowedLoc, + /* allowedWriteLoc= */ new ArrayList<>(), + /* allowListAction= */ false); + + BlobClient blobClient = + createBlobClient( + credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN), + "https://icebergdfsstorageacct.dfs.core.windows.net", + "container", + allowedPrefix); + + // READ succeeds + Assertions.assertDoesNotThrow( + () -> +
blobClient.downloadStreamWithResponse( + new ByteArrayOutputStream(), null, null, null, false, Duration.ofSeconds(5), null)); + + // write will fail because only READ permission is allowed + Assertions.assertThrows( + BlobStorageException.class, + () -> + blobClient.uploadWithResponse( + new BlobParallelUploadOptions( + new ByteArrayInputStream("polaris".getBytes(StandardCharsets.UTF_8))), + Duration.ofSeconds(5), + null)); + + // read fails because the container is blocked + BlobClient blobClientReadFail = + createBlobClient( + credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN), + String.format("https://icebergdfsstorageacct.%s.core.windows.net", service), + "regtest", + blockedPrefix); + + Assertions.assertThrows( + BlobStorageException.class, + () -> + blobClientReadFail.downloadStreamWithResponse( + new ByteArrayOutputStream(), null, null, null, false, Duration.ofSeconds(5), null)); + } + + @TestWithAzureArgs + public void testGetSubscopedTokenWrite(boolean allowListAction, String service) { + if (checkEnvNullVariables()) { + return; + } + boolean isBlobService = service.equals("blob"); + String allowedPrefix = "polaris-test/scopedcreds/"; + String blockedPrefix = "blocked-prefix"; + List<String> allowedLoc = + Arrays.asList( + String.format( + "abfss://container@icebergdfsstorageacct.%s.core.windows.net/%s", + service, allowedPrefix)); + Map<PolarisCredentialProperty, String> credsMap = + subscopedCredsForOperations( + /* allowedReadLoc= */ new ArrayList<>(), + /* allowedWriteLoc= */ allowedLoc, + /* allowListAction= */ false); + String serviceEndpoint = + String.format("https://icebergdfsstorageacct.%s.core.windows.net", service); + BlobClient blobClient = + createBlobClient( + credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN), + serviceEndpoint, + "container", + allowedPrefix + "metadata/00000-65ffa17b-fe64-4c38-bcb9-06f9bd12aa2a.metadata.json"); + DataLakeFileClient fileClient = + createDatalakeFileClient( + credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN), + serviceEndpoint, + "container",
+ "polaris-test/scopedcreds/metadata", + "00000-65ffa17b-fe64-4c38-bcb9-06f9bd12aa2a.metadata.json"); + // upload succeeds + ByteArrayInputStream inputStream = + new ByteArrayInputStream("polaris".getBytes(StandardCharsets.UTF_8)); + Assertions.assertDoesNotThrow( + () -> { + if (isBlobService) { + blobClient.uploadWithResponse( + new BlobParallelUploadOptions(inputStream), Duration.ofSeconds(5), null); + } else { + fileClient.upload(inputStream, "polaris".length(), /* overwrite= */ true); + } + }); + ByteArrayOutputStream outStream = new ByteArrayOutputStream(); + // READ not allowed + if (isBlobService) { + Assertions.assertThrows( + BlobStorageException.class, + () -> + blobClient.downloadStreamWithResponse( + outStream, null, null, null, false, Duration.ofSeconds(5), null)); + } else { + Assertions.assertThrows(DataLakeStorageException.class, () -> fileClient.read(outStream)); + } + + // upload fails because the container is not allowed + String blockedContainer = "regtest"; + BlobClient blobClientWriteFail = + createBlobClient( + credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN), + serviceEndpoint, + blockedContainer, + blockedPrefix); + DataLakeFileClient fileClientFail = + createDatalakeFileClient( + credsMap.get(PolarisCredentialProperty.AZURE_SAS_TOKEN), + serviceEndpoint, + blockedContainer, + "polaris-test/scopedcreds/metadata", + "00000-65ffa17b-fe64-4c38-bcb9-06f9bd12aa2a.metadata.json"); + + if (isBlobService) { + Assertions.assertThrows( + BlobStorageException.class, + () -> + blobClientWriteFail.uploadWithResponse( + new BlobParallelUploadOptions( + new ByteArrayInputStream("polaris".getBytes(StandardCharsets.UTF_8))), + Duration.ofSeconds(5), + null)); + } else { + Assertions.assertThrows( + DataLakeStorageException.class, + () -> fileClientFail.upload(inputStream, "polaris".length())); + } + } + + private Map<PolarisCredentialProperty, String> subscopedCredsForOperations( + List<String> allowedReadLoc, List<String> allowedWriteLoc, boolean allowListAction) { + List<String> allowedLoc = new ArrayList<>(); +
allowedLoc.addAll(allowedReadLoc); + allowedLoc.addAll(allowedWriteLoc); + AzureStorageConfigurationInfo azureConfig = + new AzureStorageConfigurationInfo(allowedLoc, tenantId); + AzureCredentialsStorageIntegration azureCredsIntegration = + new AzureCredentialsStorageIntegration(); + EnumMap<PolarisCredentialProperty, String> credsMap = + azureCredsIntegration.getSubscopedCreds( + new PolarisDefaultDiagServiceImpl(), + azureConfig, + allowListAction, + new HashSet<>(allowedReadLoc), + new HashSet<>(allowedWriteLoc)); + return credsMap; + } + + private BlobContainerClient createContainerClient( + String sasToken, String endpoint, String container) { + BlobServiceClient blobServiceClient = + new BlobServiceClientBuilder().sasToken(sasToken).endpoint(endpoint).buildClient(); + return blobServiceClient.getBlobContainerClient(container); + } + + private DataLakeFileSystemClient createDatalakeFileSystemClient( + String sasToken, String endpoint, String containerOrFileSystem) { + return new DataLakeFileSystemClientBuilder() + .sasToken(sasToken) + .endpoint(endpoint) + .fileSystemName(containerOrFileSystem) + .buildClient(); + } + + private BlobClient createBlobClient( + String sasToken, String endpoint, String container, String filePath) { + BlobServiceClient blobServiceClient = + new BlobServiceClientBuilder().sasToken(sasToken).endpoint(endpoint).buildClient(); + return new BlobClientBuilder() + .endpoint(blobServiceClient.getAccountUrl()) + .pipeline(blobServiceClient.getHttpPipeline()) + .containerName(container) + .blobName(filePath) + .buildClient(); + } + + private DataLakeFileClient createDatalakeFileClient( + String sasToken, + String endpoint, + String containerOrFileSystem, + String directory, + String fileName) { + DataLakeFileSystemClient dataLakeFileSystemClient = + createDatalakeFileSystemClient(sasToken, endpoint, containerOrFileSystem); + return dataLakeFileSystemClient.getDirectoryClient(directory).getFileClient(fileName); + } + + @Target({ElementType.METHOD}) +
@Retention(RetentionPolicy.RUNTIME) + @ParameterizedTest + @ArgumentsSource(AzureTestArgs.class) + protected @interface TestWithAzureArgs {} + + protected static class AzureTestArgs implements ArgumentsProvider { + @Override + public Stream<? extends Arguments> provideArguments(ExtensionContext extensionContext) { + return Stream.of( + Arguments.of(/* allowedList= */ true, "blob"), + Arguments.of(/* allowedList= */ false, "blob"), + Arguments.of(/* allowedList= */ true, "dfs"), + Arguments.of(/* allowedList= */ false, "dfs")); + } + } +} diff --git a/polaris-core/src/test/java/io/polaris/service/storage/azure/AzureLocationTest.java b/polaris-core/src/test/java/io/polaris/service/storage/azure/AzureLocationTest.java new file mode 100644 index 0000000000..973357e3d2 --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/service/storage/azure/AzureLocationTest.java @@ -0,0 +1,46 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.storage.azure; + +import io.polaris.core.storage.azure.AzureLocation; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; + +public class AzureLocationTest { + + @Test + public void testLocation() { + String uri = "abfss://container@storageaccount.blob.core.windows.net/myfile"; + AzureLocation azureLocation = new AzureLocation(uri); + Assertions.assertEquals("container", azureLocation.getContainer()); + Assertions.assertEquals("storageaccount", azureLocation.getStorageAccount()); + Assertions.assertEquals("blob.core.windows.net", azureLocation.getEndpoint()); + Assertions.assertEquals("myfile", azureLocation.getFilePath()); + } + + @Test + public void testLocation_negative_cases() { + Assertions.assertThrows( + IllegalArgumentException.class, + () -> new AzureLocation("wasbs://container@storageaccount.blob.core.windows.net/myfile")); + Assertions.assertThrows( + IllegalArgumentException.class, + () -> new AzureLocation("abfss://storageaccount.blob.core.windows.net/myfile")); + Assertions.assertThrows( + IllegalArgumentException.class, + () -> new AzureLocation("abfss://container@storageaccount/myfile")); + } +} diff --git a/polaris-core/src/test/java/io/polaris/service/storage/gcp/GcpCredentialsStorageIntegrationTest.java b/polaris-core/src/test/java/io/polaris/service/storage/gcp/GcpCredentialsStorageIntegrationTest.java new file mode 100644 index 0000000000..8aedf9bcbc --- /dev/null +++ b/polaris-core/src/test/java/io/polaris/service/storage/gcp/GcpCredentialsStorageIntegrationTest.java @@ -0,0 +1,429 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.storage.gcp; + +import static org.assertj.core.api.Assertions.assertThat; + +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ContainerNode; +import com.fasterxml.jackson.databind.node.ObjectNode; +import com.google.api.client.http.javanet.NetHttpTransport; +import com.google.auth.http.HttpTransportFactory; +import com.google.auth.oauth2.AccessToken; +import com.google.auth.oauth2.CredentialAccessBoundary; +import com.google.auth.oauth2.GoogleCredentials; +import com.google.cloud.ServiceOptions; +import com.google.cloud.http.HttpTransportOptions; +import com.google.cloud.storage.BlobId; +import com.google.cloud.storage.BlobInfo; +import com.google.cloud.storage.Storage; +import com.google.cloud.storage.StorageException; +import com.google.cloud.storage.StorageOptions; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.gcp.GcpCredentialsStorageIntegration; +import io.polaris.core.storage.gcp.GcpStorageConfigurationInfo; +import java.io.IOException; +import java.time.Instant; +import java.time.temporal.ChronoUnit; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Date; +import java.util.EnumMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.assertj.core.api.recursive.comparison.RecursiveComparisonConfiguration; +import 
org.assertj.core.util.Strings; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.ValueSource; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +class GcpCredentialsStorageIntegrationTest { + + private final String gcsServiceKeyJsonFileLocation = + System.getenv("GOOGLE_APPLICATION_CREDENTIALS"); + + private final Logger LOGGER = LoggerFactory.getLogger(GcpCredentialsStorageIntegrationTest.class); + + @ParameterizedTest + @ValueSource(booleans = {true, false}) + public void testSubscope(boolean allowedListAction) throws IOException { + if (Strings.isNullOrEmpty(gcsServiceKeyJsonFileLocation)) { + LOGGER.debug( + "Environment variable GOOGLE_APPLICATION_CREDENTIALS does not exist, skipping test " + + getClass().getName()); + return; + } + List<String> allowedRead = + Arrays.asList( + "gs://sfc-dev1-regtest/polaris-test/subscoped-test/read1/", + "gs://sfc-dev1-regtest/polaris-test/subscoped-test/read2/"); + List<String> allowedWrite = + Arrays.asList( + "gs://sfc-dev1-regtest/polaris-test/subscoped-test/write1/", + "gs://sfc-dev1-regtest/polaris-test/subscoped-test/write2/"); + Storage storageClient = setupStorageClient(allowedRead, allowedWrite, allowedListAction); + BlobInfo blobInfoGoodWrite = + createStorageBlob("sfc-dev1-regtest", "polaris-test/subscoped-test/write1/", "file.txt"); + BlobInfo blobInfoBad = + createStorageBlob("sfc-dev1-regtest", "polaris-test/subscoped-test/write3/", "file.txt"); + BlobInfo blobInfoGoodRead = + createStorageBlob("sfc-dev1-regtest", "polaris-test/subscoped-test/read1/", "file.txt"); + final byte[] fileContent = "hello-polaris".getBytes(); + // GOOD WRITE + Assertions.assertDoesNotThrow(() -> storageClient.create(blobInfoGoodWrite, fileContent)); + + // BAD WRITE + Assertions.assertThrows( + StorageException.class, () -> storageClient.create(blobInfoBad, fileContent)); + + Assertions.assertDoesNotThrow(() ->
storageClient.get(blobInfoGoodRead.getBlobId())); + Assertions.assertThrows( + StorageException.class, () -> storageClient.get(blobInfoBad.getBlobId())); + + // LIST + if (allowedListAction) { + Assertions.assertDoesNotThrow( + () -> + storageClient.list( + "sfc-dev1-regtest", + Storage.BlobListOption.prefix("polaris-test/subscoped-test/read1/"))); + } else { + Assertions.assertThrows( + StorageException.class, + () -> + storageClient.list( + "sfc-dev1-regtest", + Storage.BlobListOption.prefix("polaris-test/subscoped-test/read1/"))); + } + // DELETE + List<String> allowedWrite2 = + Arrays.asList( + "gs://sfc-dev1-regtest/polaris-test/subscoped-test/write2/", + "gs://sfc-dev1-regtest/polaris-test/subscoped-test/write3/"); + Storage clientForDelete = setupStorageClient(List.of(), allowedWrite2, allowedListAction); + + // cannot delete because it is not in an allowed write path for this client + Assertions.assertThrows( + StorageException.class, () -> clientForDelete.delete(blobInfoGoodWrite.getBlobId())); + + // delete succeeds for an allowed location + Assertions.assertDoesNotThrow(() -> storageClient.delete(blobInfoGoodWrite.getBlobId())); + } + + private Storage setupStorageClient( + List<String> allowedReadLoc, List<String> allowedWriteLoc, boolean allowListAction) + throws IOException { + Map<PolarisCredentialProperty, String> credsMap = + subscopedCredsForOperations(allowedReadLoc, allowedWriteLoc, allowListAction); + return createStorageClient(credsMap); + } + + BlobInfo createStorageBlob(String bucket, String prefix, String fileName) { + BlobId blobId = BlobId.of(bucket, prefix + fileName); + return BlobInfo.newBuilder(blobId).build(); + } + + private Storage createStorageClient(Map<PolarisCredentialProperty, String> credsMap) { + AccessToken accessToken = + new AccessToken( + credsMap.get(PolarisCredentialProperty.GCS_ACCESS_TOKEN), + new Date( + Long.parseLong( + credsMap.get(PolarisCredentialProperty.GCS_ACCESS_TOKEN_EXPIRES_AT)))); + return StorageOptions.newBuilder() + .setCredentials(GoogleCredentials.create(accessToken)) + .build() + .getService(); + } +
+ private Map<PolarisCredentialProperty, String> subscopedCredsForOperations( + List<String> allowedReadLoc, List<String> allowedWriteLoc, boolean allowListAction) + throws IOException { + List<String> allowedLoc = new ArrayList<>(); + allowedLoc.addAll(allowedReadLoc); + allowedLoc.addAll(allowedWriteLoc); + GcpStorageConfigurationInfo gcpConfig = new GcpStorageConfigurationInfo(allowedLoc); + GcpCredentialsStorageIntegration gcpCredsIntegration = + new GcpCredentialsStorageIntegration( + GoogleCredentials.getApplicationDefault(), + ServiceOptions.getFromServiceLoader(HttpTransportFactory.class, NetHttpTransport::new)); + EnumMap<PolarisCredentialProperty, String> credsMap = + gcpCredsIntegration.getSubscopedCreds( + new PolarisDefaultDiagServiceImpl(), + gcpConfig, + allowListAction, + new HashSet<>(allowedReadLoc), + new HashSet<>(allowedWriteLoc)); + return credsMap; + } + + @Test + public void testGenerateAccessBoundary() throws IOException { + GcpCredentialsStorageIntegration integration = + new GcpCredentialsStorageIntegration( + GoogleCredentials.newBuilder() + .setAccessToken( + new AccessToken( + "my_token", + new Date(Instant.now().plus(10, ChronoUnit.MINUTES).toEpochMilli()))) + .build(), + new HttpTransportOptions.DefaultHttpTransportFactory()); + CredentialAccessBoundary credentialAccessBoundary = + integration.generateAccessBoundaryRules( + true, Set.of("gs://bucket1/path/to/data"), Set.of("gs://bucket1/path/to/data")); + assertThat(credentialAccessBoundary).isNotNull(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode parsedRules = mapper.convertValue(credentialAccessBoundary, JsonNode.class); + JsonNode refRules = + mapper.readTree( + """ +{ + "accessBoundaryRules": [ + { + "availablePermissions": [ + "inRole:roles/storage.objectViewer" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/path/to/data') || api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('path/to/data')" + }
+ }, + { + "availablePermissions": [ + "inRole:roles/storage.objectCreator" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/path/to/data')" + } + } + ] +} + """); + assertThat(parsedRules) + .usingRecursiveComparison( + RecursiveComparisonConfiguration.builder() + .withEqualsForType(this::recursiveEquals, ObjectNode.class) + .build()) + .isEqualTo(refRules); + } + + @Test + public void testGenerateAccessBoundaryWithMultipleBuckets() throws IOException { + GcpCredentialsStorageIntegration integration = + new GcpCredentialsStorageIntegration( + GoogleCredentials.newBuilder() + .setAccessToken( + new AccessToken( + "my_token", + new Date(Instant.now().plus(10, ChronoUnit.MINUTES).toEpochMilli()))) + .build(), + new HttpTransportOptions.DefaultHttpTransportFactory()); + CredentialAccessBoundary credentialAccessBoundary = + integration.generateAccessBoundaryRules( + true, + Set.of( + "gs://bucket1/normal/path/to/data", + "gs://bucket1/awesome/path/to/data", + "gs://bucket2/a/super/path/to/data"), + Set.of("gs://bucket1/normal/path/to/data")); + assertThat(credentialAccessBoundary).isNotNull(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode parsedRules = mapper.convertValue(credentialAccessBoundary, JsonNode.class); + JsonNode refRules = + mapper.readTree( + """ +{ + "accessBoundaryRules": [ + { + "availablePermissions": [ + "inRole:roles/storage.objectViewer" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/normal/path/to/data') || api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('normal/path/to/data') || resource.name.startsWith('projects/_/buckets/bucket1/objects/awesome/path/to/data') || api.getAttribute('storage.googleapis.com/objectListPrefix', 
'').startsWith('awesome/path/to/data')" + } + }, + { + "availablePermissions": [ + "inRole:roles/storage.objectViewer" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket2", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket2/objects/a/super/path/to/data') || api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('a/super/path/to/data')" + } + }, + { + "availablePermissions": [ + "inRole:roles/storage.objectCreator" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/path/to/data')" + } + } + ] +} + """); + assertThat(parsedRules) + .usingRecursiveComparison( + RecursiveComparisonConfiguration.builder() + .withEqualsForType(this::recursiveEquals, ObjectNode.class) + .build()) + .isEqualTo(refRules); + } + + @Test + public void testGenerateAccessBoundaryWithoutList() throws IOException { + GcpCredentialsStorageIntegration integration = + new GcpCredentialsStorageIntegration( + GoogleCredentials.newBuilder() + .setAccessToken( + new AccessToken( + "my_token", + new Date(Instant.now().plus(10, ChronoUnit.MINUTES).toEpochMilli()))) + .build(), + new HttpTransportOptions.DefaultHttpTransportFactory()); + CredentialAccessBoundary credentialAccessBoundary = + integration.generateAccessBoundaryRules( + false, + Set.of("gs://bucket1/path/to/data", "gs://bucket1/another/path/to/data"), + Set.of("gs://bucket1/path/to/data")); + assertThat(credentialAccessBoundary).isNotNull(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode parsedRules = mapper.convertValue(credentialAccessBoundary, JsonNode.class); + JsonNode refRules = + mapper.readTree( + """ +{ + "accessBoundaryRules": [ + { + "availablePermissions": [ + "inRole:roles/storage.objectViewer" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + 
"availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/path/to/data') || resource.name.startsWith('projects/_/buckets/bucket1/objects/another/path/to/data')" + } + }, + { + "availablePermissions": [ + "inRole:roles/storage.objectCreator" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/path/to/data')" + } + } + ] +} + """); + assertThat(parsedRules) + .usingRecursiveComparison( + RecursiveComparisonConfiguration.builder() + .withEqualsForType(this::recursiveEquals, ObjectNode.class) + .build()) + .isEqualTo(refRules); + } + + @Test + public void testGenerateAccessBoundaryWithoutWrites() throws IOException { + GcpCredentialsStorageIntegration integration = + new GcpCredentialsStorageIntegration( + GoogleCredentials.newBuilder() + .setAccessToken( + new AccessToken( + "my_token", + new Date(Instant.now().plus(10, ChronoUnit.MINUTES).toEpochMilli()))) + .build(), + new HttpTransportOptions.DefaultHttpTransportFactory()); + CredentialAccessBoundary credentialAccessBoundary = + integration.generateAccessBoundaryRules( + false, + Set.of("gs://bucket1/normal/path/to/data", "gs://bucket1/awesome/path/to/data"), + Set.of()); + assertThat(credentialAccessBoundary).isNotNull(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode parsedRules = mapper.convertValue(credentialAccessBoundary, JsonNode.class); + JsonNode refRules = + mapper.readTree( + """ +{ + "accessBoundaryRules": [ + { + "availablePermissions": [ + "inRole:roles/storage.objectViewer" + ], + "availableResource": "//storage.googleapis.com/projects/_/buckets/bucket1", + "availabilityCondition": { + "expression": "resource.name.startsWith('projects/_/buckets/bucket1/objects/normal/path/to/data') || api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('normal/path/to/data') || 
resource.name.startsWith('projects/_/buckets/bucket1/objects/awesome/path/to/data') || api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('awesome/path/to/data')" + } + } + ] +} + """); + assertThat(parsedRules) + .usingRecursiveComparison( + RecursiveComparisonConfiguration.builder() + .withEqualsForType(this::recursiveEquals, ObjectNode.class) + .build()) + .isEqualTo(refRules); + } + + /** + * Custom comparator, as ObjectNodes are compared by field index as opposed to field name. They + * also don't equate a field that is present but set to null with a field that is omitted. + * + * @param on1 first container node to compare + * @param on2 second container node to compare + * @return true if the two container nodes are recursively equal + */ + private boolean recursiveEquals(ContainerNode<?> on1, ContainerNode<?> on2) { + Set<String> fieldNames = new HashSet<>(); + on1.fieldNames().forEachRemaining(fieldNames::add); + on2.fieldNames().forEachRemaining(fieldNames::add); + for (String fieldName : fieldNames) { + if (!on1.has(fieldName) || !on2.has(fieldName)) { + if (isNotNull(on1.get(fieldName)) || isNotNull(on2.get(fieldName))) { + return false; + } + } else { + JsonNode fieldValue = on1.get(fieldName); + JsonNode fieldValue2 = on2.get(fieldName); + if (fieldValue.isContainerNode()) { + if (!fieldValue2.isContainerNode() + || !recursiveEquals((ContainerNode<?>) fieldValue, (ContainerNode<?>) fieldValue2)) { + return false; + } + } else if (!fieldValue.equals(fieldValue2)) { + return false; + } + } + } + return true; + } + + private boolean isNotNull(JsonNode node) { + return node != null && !node.isNull(); + } +} diff --git a/polaris-core/src/testFixtures/java/io/polaris/core/persistence/PolarisMetaStoreManagerTest.java b/polaris-core/src/testFixtures/java/io/polaris/core/persistence/PolarisMetaStoreManagerTest.java new file mode 100644 index 0000000000..afb5ab0244 --- /dev/null +++ b/polaris-core/src/testFixtures/java/io/polaris/core/persistence/PolarisMetaStoreManagerTest.java @@ -0,0 +1,484 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc.
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.AsyncTaskType; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.TaskEntity; +import java.time.Clock; +import java.time.Duration; +import java.time.Instant; +import java.time.InstantSource; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import org.assertj.core.api.Assertions; +import org.assertj.core.api.InstanceOfAssertFactories; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +/** + * Integration test for the polaris persistence layer + * + *
<pre>@TODO
+ *   - Update multiple entities in one shot
+ *   - Lookup active: test non-existent entities
+ *   - Failure to resolve, i.e. something has changed
+ *   - Better status reporting
+ * </pre>
+ * + * @author bdagevil + */ +public abstract class PolarisMetaStoreManagerTest { + + protected static final MockInstantSource timeSource = new MockInstantSource(); + + private PolarisTestMetaStoreManager polarisTestMetaStoreManager; + + @BeforeEach + public void setupPolarisMetaStoreManager() { + this.polarisTestMetaStoreManager = createPolarisTestMetaStoreManager(); + } + + protected abstract PolarisTestMetaStoreManager createPolarisTestMetaStoreManager(); + + /** Validate that the root catalog was properly constructed */ + @Test + void validateBootstrap() { + // allocate test driver + polarisTestMetaStoreManager.validateBootstrap(); + } + + @Test + void testCreateTestCatalog() { + // allocate test driver + polarisTestMetaStoreManager.testCreateTestCatalog(); + } + + @Test + void testCreateTestCatalogWithRetry() { + // allocate test driver + polarisTestMetaStoreManager.forceRetry(); + polarisTestMetaStoreManager.testCreateTestCatalog(); + } + + @Test + void testBrowse() { + // allocate test driver + polarisTestMetaStoreManager.testBrowse(); + } + + @Test + void testCreateEntities() { + PolarisMetaStoreManager metaStoreManager = polarisTestMetaStoreManager.polarisMetaStoreManager; + try (CallContext callCtx = + CallContext.of(() -> "testRealm", polarisTestMetaStoreManager.polarisCallContext)) { + if (CallContext.getCurrentContext() == null) { + CallContext.setCurrentContext(callCtx); + } + TaskEntity task1 = createTask("task1", 100L); + TaskEntity task2 = createTask("task2", 101L); + List<PolarisBaseEntity> createdEntities = + metaStoreManager + .createEntitiesIfNotExist( + polarisTestMetaStoreManager.polarisCallContext, null, List.of(task1, task2)) + .getEntities(); + + Assertions.assertThat(createdEntities) + .isNotNull() + .hasSize(2) + .extracting(PolarisEntity::toCore) + .containsExactly(PolarisEntity.toCore(task1), PolarisEntity.toCore(task2)); + + List<PolarisEntityActiveRecord> listedEntities = + metaStoreManager + .listEntities( + polarisTestMetaStoreManager.polarisCallContext, + null, +
PolarisEntityType.TASK, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities(); + Assertions.assertThat(listedEntities) + .isNotNull() + .hasSize(2) + .containsExactly( + new PolarisEntityActiveRecord( + task1.getCatalogId(), + task1.getId(), + task1.getParentId(), + task1.getName(), + task1.getTypeCode(), + task1.getSubTypeCode()), + new PolarisEntityActiveRecord( + task2.getCatalogId(), + task2.getId(), + task2.getParentId(), + task2.getName(), + task2.getTypeCode(), + task2.getSubTypeCode())); + } + } + + @Test + void testCreateEntitiesAlreadyExisting() { + PolarisMetaStoreManager metaStoreManager = polarisTestMetaStoreManager.polarisMetaStoreManager; + try (CallContext callCtx = + CallContext.of(() -> "testRealm", polarisTestMetaStoreManager.polarisCallContext)) { + if (CallContext.getCurrentContext() == null) { + CallContext.setCurrentContext(callCtx); + } + TaskEntity task1 = createTask("task1", 100L); + TaskEntity task2 = createTask("task2", 101L); + List createdEntities = + metaStoreManager + .createEntitiesIfNotExist( + polarisTestMetaStoreManager.polarisCallContext, null, List.of(task1, task2)) + .getEntities(); + + Assertions.assertThat(createdEntities) + .isNotNull() + .hasSize(2) + .extracting(PolarisEntity::toCore) + .containsExactly(PolarisEntity.toCore(task1), PolarisEntity.toCore(task2)); + + TaskEntity task3 = createTask("task3", 103L); + + // entities task1 and task2 already exist with the same identifier, so the full list is + // returned + createdEntities = + metaStoreManager + .createEntitiesIfNotExist( + polarisTestMetaStoreManager.polarisCallContext, + null, + List.of(task1, task2, task3)) + .getEntities(); + Assertions.assertThat(createdEntities) + .isNotNull() + .hasSize(3) + .extracting(PolarisEntity::toCore) + .containsExactly( + PolarisEntity.toCore(task1), + PolarisEntity.toCore(task2), + PolarisEntity.toCore(task3)); + } + } + + @Test + void testCreateEntitiesWithConflict() { + PolarisMetaStoreManager metaStoreManager = 
polarisTestMetaStoreManager.polarisMetaStoreManager; + try (CallContext callCtx = + CallContext.of(() -> "testRealm", polarisTestMetaStoreManager.polarisCallContext)) { + if (CallContext.getCurrentContext() == null) { + CallContext.setCurrentContext(callCtx); + } + TaskEntity task1 = createTask("task1", 100L); + TaskEntity task2 = createTask("task2", 101L); + TaskEntity task3 = createTask("task3", 103L); + List createdEntities = + metaStoreManager + .createEntitiesIfNotExist( + polarisTestMetaStoreManager.polarisCallContext, + null, + List.of(task1, task2, task3)) + .getEntities(); + + Assertions.assertThat(createdEntities) + .isNotNull() + .hasSize(3) + .extracting(PolarisEntity::toCore) + .containsExactly( + PolarisEntity.toCore(task1), + PolarisEntity.toCore(task2), + PolarisEntity.toCore(task3)); + + TaskEntity secondTask3 = createTask("task3", 104L); + + TaskEntity task4 = createTask("task4", 105L); + createdEntities = + metaStoreManager + .createEntitiesIfNotExist( + polarisTestMetaStoreManager.polarisCallContext, null, List.of(secondTask3, task4)) + .getEntities(); + Assertions.assertThat(createdEntities).isNull(); + } + } + + private static TaskEntity createTask(String taskName, long id) { + return new TaskEntity.Builder() + .setName(taskName) + .withData("data") + .setId(id) + .withTaskType(AsyncTaskType.FILE_CLEANUP) + .setCreateTimestamp(Instant.now().toEpochMilli()) + .build(); + } + + /** Test that entity updates works well */ + @Test + void testUpdateEntities() { + // allocate test driver + polarisTestMetaStoreManager.testUpdateEntities(); + } + + /** Test that entity drop works well */ + @Test + void testDropEntities() { + // allocate test driver + polarisTestMetaStoreManager.testDropEntities(); + } + + /** Test that granting/revoking privileges works well */ + @Test + void testPrivileges() { + // allocate test driver + polarisTestMetaStoreManager.testPrivileges(); + } + + /** test entity rename */ + @Test + void testRename() { + // allocate test 
driver + polarisTestMetaStoreManager.testRename(); + } + + /** Test the set of functions for the entity cache */ + @Test + void testEntityCache() { + // allocate test driver + polarisTestMetaStoreManager.testEntityCache(); + } + + @Test + void testLoadTasks() { + for (int i = 0; i < 20; i++) { + polarisTestMetaStoreManager.createEntity( + null, PolarisEntityType.TASK, PolarisEntitySubType.NULL_SUBTYPE, "task_" + i); + } + String executorId = "testExecutor_abc"; + PolarisMetaStoreManager metaStoreManager = polarisTestMetaStoreManager.polarisMetaStoreManager; + PolarisCallContext callCtx = polarisTestMetaStoreManager.polarisCallContext; + List<PolarisBaseEntity> taskList = + metaStoreManager.loadTasks(callCtx, executorId, 5).getEntities(); + Assertions.assertThat(taskList) + .isNotNull() + .isNotEmpty() + .hasSize(5) + .allSatisfy( + entry -> + Assertions.assertThat(entry) + .extracting( + e -> + PolarisObjectMapperUtil.deserializeProperties( + callCtx, e.getProperties())) + .asInstanceOf(InstanceOfAssertFactories.map(String.class, String.class)) + .containsEntry("lastAttemptExecutorId", executorId) + .containsEntry("attemptCount", "1")); + Set<String> firstTasks = + taskList.stream().map(PolarisBaseEntity::getName).collect(Collectors.toSet()); + + // grab a second round of tasks. Assert that none of the original 5 are in the list + List<PolarisBaseEntity> newTaskList = + metaStoreManager.loadTasks(callCtx, executorId, 5).getEntities(); + Assertions.assertThat(newTaskList) + .isNotNull() + .isNotEmpty() + .hasSize(5) + .extracting(PolarisBaseEntity::getName) + .noneMatch(firstTasks::contains); + + Set<String> firstTenTaskNames = + Stream.concat(firstTasks.stream(), newTaskList.stream().map(PolarisBaseEntity::getName)) + .collect(Collectors.toSet()); + + // only 10 tasks are unassigned. 
Requesting 20, we should only receive those 10 + List<PolarisBaseEntity> lastTen = + metaStoreManager.loadTasks(callCtx, executorId, 20).getEntities(); + + Assertions.assertThat(lastTen) + .isNotNull() + .isNotEmpty() + .hasSize(10) + .extracting(PolarisBaseEntity::getName) + .noneMatch(firstTenTaskNames::contains); + + Set<String> allTaskNames = + Stream.concat(firstTenTaskNames.stream(), lastTen.stream().map(PolarisBaseEntity::getName)) + .collect(Collectors.toSet()); + + List<PolarisBaseEntity> emptyList = + metaStoreManager.loadTasks(callCtx, executorId, 20).getEntities(); + + Assertions.assertThat(emptyList).isNotNull().isEmpty(); + + timeSource.updateClock(Clock.offset(timeSource.currentClock, Duration.ofMinutes(10))); + + // all the tasks are unassigned again. Fetch them all + List<PolarisBaseEntity> allTasks = + metaStoreManager.loadTasks(callCtx, executorId, 20).getEntities(); + + Assertions.assertThat(allTasks) + .isNotNull() + .isNotEmpty() + .hasSize(20) + .extracting(PolarisBaseEntity::getName) + .allMatch(allTaskNames::contains); + + // drop all the tasks. Skip the clock forward and fetch. 
empty list expected + allTasks.forEach( + entity -> metaStoreManager.dropEntityIfExists(callCtx, null, entity, Map.of(), false)); + timeSource.updateClock(Clock.offset(timeSource.currentClock, Duration.ofMinutes(10))); + + List<PolarisBaseEntity> finalList = + metaStoreManager.loadTasks(callCtx, executorId, 20).getEntities(); + + Assertions.assertThat(finalList).isNotNull().isEmpty(); + } + + @Test + void testLoadTasksInParallel() { + for (int i = 0; i < 100; i++) { + polarisTestMetaStoreManager.createEntity( + null, PolarisEntityType.TASK, PolarisEntitySubType.NULL_SUBTYPE, "task_" + i); + } + PolarisMetaStoreManager metaStoreManager = polarisTestMetaStoreManager.polarisMetaStoreManager; + PolarisCallContext callCtx = polarisTestMetaStoreManager.polarisCallContext; + List<Future<Set<String>>> futureList = new ArrayList<>(); + List<Set<String>> responses; + try (ExecutorService executorService = Executors.newVirtualThreadPerTaskExecutor()) { + for (int i = 0; i < 3; i++) { + final String executorId = "taskExecutor_" + i; + + futureList.add( + executorService.submit( + () -> { + Set<String> taskNames = new HashSet<>(); + List<PolarisBaseEntity> taskList = List.of(); + boolean retry = false; + do { + retry = false; + try { + taskList = metaStoreManager.loadTasks(callCtx, executorId, 5).getEntities(); + taskList.stream().map(PolarisBaseEntity::getName).forEach(taskNames::add); + } catch (RetryOnConcurrencyException e) { + retry = true; + } + } while (retry || !taskList.isEmpty()); + return taskNames; + })); + } + responses = + futureList.stream() + .map( + f -> { + try { + return f.get(); + } catch (Exception e) { + throw new RuntimeException(e); + } + }) + .toList(); + } + Assertions.assertThat(responses) + .hasSize(3) + .satisfies(l -> Assertions.assertThat(l.stream().flatMap(Set::stream)).hasSize(100)); + Map<String, Integer> taskCounts = + responses.stream() + .flatMap(Set::stream) + .collect(Collectors.toMap(Function.identity(), (val) -> 1, Integer::sum)); + Assertions.assertThat(taskCounts) + .hasSize(100) + .allSatisfy((k, v) -> 
Assertions.assertThat(v).isEqualTo(1)); + } + + /** Test generateNewEntityId() function that generates unique ids by creating Tasks in parallel */ + @Test + void testCreateTasksInParallel() { + List<Future<List<Long>>> futureList = new ArrayList<>(); + Random rand = new Random(); + try (ExecutorService executorService = Executors.newVirtualThreadPerTaskExecutor()) { + for (int threadId = 0; threadId < 10; threadId++) { + Future<List<Long>> future = + executorService.submit( + () -> { + List<Long> list = new ArrayList<>(); + for (int i = 0; i < 10; i++) { + var entity = + polarisTestMetaStoreManager.createEntity( + null, + PolarisEntityType.TASK, + PolarisEntitySubType.NULL_SUBTYPE, + "task_" + rand.nextLong() + "" + i); + list.add(entity.getId()); + } + return list; + }); + futureList.add(future); + } + + List<List<Long>> responses = + futureList.stream() + .map( + f -> { + try { + return f.get(); + } catch (Exception e) { + throw new RuntimeException(e); + } + }) + .toList(); + + Assertions.assertThat(responses) + .hasSize(10) + .satisfies(l -> Assertions.assertThat(l.stream().flatMap(List::stream)).hasSize(100)); + Map<Long, Integer> idCounts = + responses.stream() + .flatMap(List::stream) + .collect(Collectors.toMap(Function.identity(), (val) -> 1, Integer::sum)); + Assertions.assertThat(idCounts) + .hasSize(100) + .allSatisfy((k, v) -> Assertions.assertThat(v).isEqualTo(1)); + } + } + + protected static final class MockInstantSource implements InstantSource { + private Clock currentClock = Clock.system(ZoneId.systemDefault()); + + @Override + public Instant instant() { + return Instant.now(currentClock); + } + + public void updateClock(Clock clock) { + this.currentClock = clock; + } + } +} diff --git a/polaris-core/src/testFixtures/java/io/polaris/core/persistence/PolarisTestMetaStoreManager.java b/polaris-core/src/testFixtures/java/io/polaris/core/persistence/PolarisTestMetaStoreManager.java new file mode 100644 index 0000000000..8ba30563df --- /dev/null +++ 
b/polaris-core/src/testFixtures/java/io/polaris/core/persistence/PolarisTestMetaStoreManager.java @@ -0,0 +1,2392 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.core.persistence; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisChangeTrackingVersions; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityActiveRecord; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntityCore; +import io.polaris.core.entity.PolarisEntityId; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PolarisTaskConstants; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.commons.lang3.tuple.Pair; +import org.jetbrains.annotations.NotNull; +import org.junit.jupiter.api.Assertions; + +/** Test the Polaris persistence layer */ 
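The `testLoadTasks`/`testLoadTasksInParallel` cases above revolve around lease-style task claiming: `loadTasks()` stamps `lastAttemptExecutorId` and `attemptCount` on each claimed task, and a claimed task only becomes claimable again once the mock clock is skipped past the lease window. The following is a self-contained, JDK-only sketch of that leasing idea; all names here (`TaskLeaser`, `LEASE`) are illustrative, not part of Polaris — the real claiming logic and timeout live inside `PolarisMetaStoreManager.loadTasks()`.

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stdlib-only model of lease-based task claiming.
class TaskLeaser {
  // illustrative lease duration; the tests above skip the clock forward
  // 10 minutes to force expiry, so anything below that works for the demo
  static final Duration LEASE = Duration.ofMinutes(5);

  private final List<String> tasks;
  private final Map<String, Instant> leaseExpiry = new HashMap<>(); // task -> lease expiry
  private final Map<String, Integer> attemptCount = new HashMap<>();
  private Clock clock;

  TaskLeaser(List<String> tasks, Clock clock) {
    this.tasks = new ArrayList<>(tasks);
    this.clock = clock;
  }

  // test hook, analogous to MockInstantSource.updateClock in the fixture
  public void updateClock(Clock clock) {
    this.clock = clock;
  }

  // claim up to 'limit' tasks whose lease is absent or expired, stamping the
  // attempt, analogous to loadTasks(callCtx, executorId, limit)
  public List<String> loadTasks(String executorId, int limit) {
    Instant now = clock.instant();
    List<String> claimed = new ArrayList<>();
    for (String task : tasks) {
      if (claimed.size() >= limit) {
        break;
      }
      Instant expiry = leaseExpiry.get(task);
      if (expiry == null || expiry.isBefore(now)) {
        leaseExpiry.put(task, now.plus(LEASE));
        attemptCount.merge(task, 1, Integer::sum);
        claimed.add(task);
      }
    }
    return claimed;
  }

  public int attempts(String task) {
    return attemptCount.getOrDefault(task, 0);
  }

  public static void main(String[] args) {
    List<String> names = new ArrayList<>();
    for (int i = 0; i < 10; i++) {
      names.add("task_" + i);
    }
    TaskLeaser leaser = new TaskLeaser(names, Clock.fixed(Instant.EPOCH, ZoneOffset.UTC));
    System.out.println(leaser.loadTasks("e1", 5).size());  // 5: first batch claimed
    System.out.println(leaser.loadTasks("e1", 5).size());  // 5: the disjoint second batch
    System.out.println(leaser.loadTasks("e1", 20).size()); // 0: every lease still live
    // skip the clock past the lease, as the test does with updateClock()
    leaser.updateClock(
        Clock.fixed(Instant.EPOCH.plus(Duration.ofMinutes(10)), ZoneOffset.UTC));
    System.out.println(leaser.loadTasks("e1", 20).size()); // 10: all claimable again
    System.out.println(leaser.attempts("task_0"));         // 2: claimed twice
  }
}
```

Running `main` mirrors the shape of `testLoadTasks` above: two disjoint batches of five, an empty claim while every lease is live, then a full re-claim (with incremented attempt counts) once the injected clock passes the lease window.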
+public class PolarisTestMetaStoreManager { + + // call context + final PolarisCallContext polarisCallContext; + + // call metastore manager + final PolarisMetaStoreManager polarisMetaStoreManager; + + // the start time + private final long testStartTime = System.currentTimeMillis(); + private final ObjectMapper objectMapper = new ObjectMapper(); + + // if true, simulate retries by client + private boolean doRetry; + + // initialize the test + public PolarisTestMetaStoreManager( + PolarisMetaStoreManager polarisMetaStoreManager, PolarisCallContext polarisCallContext) { + this.polarisCallContext = polarisCallContext; + this.polarisMetaStoreManager = polarisMetaStoreManager; + this.doRetry = false; + + // bootstrap the Polaris service + polarisMetaStoreManager.bootstrapPolarisService(polarisCallContext); + } + + public void forceRetry() { + this.doRetry = true; + } + + /** + * Validate that the specified identity identified by the pair catalogId, id has been properly + * persisted. + * + * @param catalogPath path of that entity in the catalog. 
If null, this entity is top-level + * @param entityId id + * @param expectedActive true if this entity should be active + * @param expectedName its expected name + * @param expectedType its expected type + * @param expectedSubType its expected subtype + * @return the persisted entity as a DPO + */ + private PolarisBaseEntity ensureExistsById( + List catalogPath, + long entityId, + boolean expectedActive, + String expectedName, + PolarisEntityType expectedType, + PolarisEntitySubType expectedSubType) { + + // derive id of the catalog for that entity as well as its parent id + final long catalogId; + final long parentId; + if (catalogPath == null) { + // top-level entity + catalogId = PolarisEntityConstants.getNullId(); + parentId = PolarisEntityConstants.getRootEntityId(); + } else { + catalogId = catalogPath.get(0).getId(); + parentId = catalogPath.get(catalogPath.size() - 1).getId(); + } + + // make sure this entity was persisted + PolarisBaseEntity entity = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, catalogId, entityId) + .getEntity(); + + // assert all expected values + Assertions.assertNotNull(entity); + Assertions.assertEquals(expectedName, entity.getName()); + Assertions.assertEquals(parentId, entity.getParentId()); + Assertions.assertEquals(expectedType.getCode(), entity.getTypeCode()); + Assertions.assertEquals(expectedSubType.getCode(), entity.getSubTypeCode()); + + // ensure creation time set + Assertions.assertTrue(entity.getCreateTimestamp() >= this.testStartTime); + Assertions.assertTrue(entity.getLastUpdateTimestamp() >= this.testStartTime); + + // test active + if (expectedActive) { + // make sure any other timestamps are 0 + Assertions.assertEquals(0, entity.getPurgeTimestamp()); + Assertions.assertEquals(0, entity.getDropTimestamp()); + Assertions.assertEquals(0, entity.getPurgeTimestamp()); + + // we should find it + PolarisMetaStoreManager.EntityResult result = + polarisMetaStoreManager.readEntityByName( + 
this.polarisCallContext, catalogPath, expectedType, expectedSubType, expectedName); + + // should be success, nothing changed + Assertions.assertNotNull(result); + + // should be success + Assertions.assertTrue(result.isSuccess()); + + // same id + Assertions.assertEquals(entity.getId(), result.getEntity().getId()); + } else { + // make sure any other timestamps are 0 + Assertions.assertNotEquals(0, entity.getDropTimestamp()); + + // we should not find it + PolarisMetaStoreManager.EntityResult result = + polarisMetaStoreManager.readEntityByName( + this.polarisCallContext, catalogPath, expectedType, expectedSubType, expectedName); + + // lookup must be success, nothing changed + Assertions.assertNotNull(result); + + // should be success + Assertions.assertTrue(result.isSuccess()); + + // should be null, not found + Assertions.assertNull(result.getEntity()); + } + + return entity; + } + + /** + * Check if the specified grant record exists + * + * @param grantRecords list of grant records + * @param securable the securable + * @param grantee the grantee + * @param priv privilege that was granted + */ + boolean isGrantRecordExists( + List grantRecords, + PolarisEntityCore securable, + PolarisEntityCore grantee, + PolarisPrivilege priv) { + // ensure that this grant record is present + long grantCount = + grantRecords.stream() + .filter( + gr -> + gr.getSecurableCatalogId() == securable.getCatalogId() + && gr.getSecurableId() == securable.getId() + && gr.getGranteeCatalogId() == grantee.getCatalogId() + && gr.getGranteeId() == grantee.getId() + && gr.getPrivilegeCode() == priv.getCode()) + .count(); + return grantCount == 1; + } + + /** + * Ensure that the specified grant record exists + * + * @param grantRecords list of grant records + * @param securable the securable + * @param grantee the grantee + * @param priv privilege that was granted + */ + void checkGrantRecordExists( + List grantRecords, + PolarisEntityCore securable, + PolarisEntityCore grantee, + 
PolarisPrivilege priv) { + // ensure that this grant record is present + boolean exists = this.isGrantRecordExists(grantRecords, securable, grantee, priv); + Assertions.assertTrue(exists); + } + + /** + * Ensure that the specified grant record has been removed + * + * @param grantRecords list of grant records + * @param securable the securable + * @param grantee the grantee + * @param priv privilege that was granted + */ + void checkGrantRecordRemoved( + List grantRecords, + PolarisEntityCore securable, + PolarisEntityCore grantee, + PolarisPrivilege priv) { + // ensure that this grant record is absent + boolean exists = this.isGrantRecordExists(grantRecords, securable, grantee, priv); + Assertions.assertFalse(exists); + } + + /** + * Ensure that the specified grant record has been properly persisted + * + * @param securable the securable + * @param grantee the grantee + * @param priv privilege that was granted + */ + void ensureGrantRecordExists( + PolarisEntityCore securable, PolarisEntityCore grantee, PolarisPrivilege priv) { + // re-load both entities, ensure not null + securable = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, securable.getCatalogId(), securable.getId()) + .getEntity(); + Assertions.assertNotNull(securable); + grantee = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, grantee.getCatalogId(), grantee.getId()) + .getEntity(); + Assertions.assertNotNull(grantee); + + // the grantee better be a grantee + Assertions.assertTrue(grantee.getType().isGrantee()); + + // load all grant records on that securable, should not fail + PolarisMetaStoreManager.LoadGrantsResult loadGrantsOnSecurable = + polarisMetaStoreManager.loadGrantsOnSecurable( + this.polarisCallContext, securable.getCatalogId(), securable.getId()); + // ensure entities for these grant records have been properly loaded + this.validateLoadedGrants(loadGrantsOnSecurable, false); + + // check that the grant record exists in the list + 
this.checkGrantRecordExists(loadGrantsOnSecurable.getGrantRecords(), securable, grantee, priv); + + // load all grant records on that grantee, should not fail + PolarisMetaStoreManager.LoadGrantsResult loadGrantsOnGrantee = + polarisMetaStoreManager.loadGrantsToGrantee( + this.polarisCallContext, grantee.getCatalogId(), grantee.getId()); + // ensure entities for these grant records have been properly loaded + this.validateLoadedGrants(loadGrantsOnGrantee, true); + + // check that the grant record exists + this.checkGrantRecordExists(loadGrantsOnGrantee.getGrantRecords(), securable, grantee, priv); + } + + /** + * Validate the return of loadGrantsToGrantee() or loadGrantsOnSecurable() + * + * @param loadGrantRecords return from calling loadGrantsToGrantee()/loadGrantsOnSecurable() + * @param isGrantee if true, loadGrantsToGrantee() was called, else loadGrantsOnSecurable() was + * called + */ + private void validateLoadedGrants( + PolarisMetaStoreManager.LoadGrantsResult loadGrantRecords, boolean isGrantee) { + // ensure not null + Assertions.assertNotNull(loadGrantRecords); + + // ensure that entities have been populated + Map entities = loadGrantRecords.getEntitiesAsMap(); + Assertions.assertNotNull(entities); + + // ensure all present + for (PolarisGrantRecord grantRecord : loadGrantRecords.getGrantRecords()) { + + long catalogId = + isGrantee ? grantRecord.getSecurableCatalogId() : grantRecord.getGranteeCatalogId(); + long entityId = isGrantee ? 
grantRecord.getSecurableId() : grantRecord.getGranteeId(); + + // load that entity + PolarisBaseEntity entity = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, catalogId, entityId) + .getEntity(); + Assertions.assertNotNull(entity); + Assertions.assertEquals(entity, entities.get(entityId)); + } + } + + /** + * Ensure that the specified grant record has been properly removed + * + * @param securable the securable + * @param grantee the grantee + * @param priv privilege that was granted + */ + void ensureGrantRecordRemoved( + PolarisEntityCore securable, PolarisEntityCore grantee, PolarisPrivilege priv) { + // re-load both entities, ensure not null + securable = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, securable.getCatalogId(), securable.getId()) + .getEntity(); + Assertions.assertNotNull(securable); + grantee = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, grantee.getCatalogId(), grantee.getId()) + .getEntity(); + Assertions.assertNotNull(grantee); + + // the grantee better be a grantee + Assertions.assertTrue(grantee.getType().isGrantee()); + + // load all grant records on that securable, should not fail + PolarisMetaStoreManager.LoadGrantsResult loadGrantsOnSecurable = + polarisMetaStoreManager.loadGrantsOnSecurable( + this.polarisCallContext, securable.getCatalogId(), securable.getId()); + // ensure entities for these grant records have been properly loaded + this.validateLoadedGrants(loadGrantsOnSecurable, false); + + // check that the grant record no longer exists + this.checkGrantRecordRemoved(loadGrantsOnSecurable.getGrantRecords(), securable, grantee, priv); + + // load all grant records on that grantee, should not fail + PolarisMetaStoreManager.LoadGrantsResult loadGrantsOnGrantee = + polarisMetaStoreManager.loadGrantsToGrantee( + this.polarisCallContext, grantee.getCatalogId(), grantee.getId()); + this.validateLoadedGrants(loadGrantsOnGrantee, true); + + // check that the grant record has been 
removed + this.checkGrantRecordRemoved(loadGrantsOnGrantee.getGrantRecords(), securable, grantee, priv); + } + + /** + * Ensure that the specified catalog has been properly created. + * + * @param catalogName name of the catalog + */ + Pair validateCatalogCreated(String catalogName) { + // load all catalogs + List catalogs = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + null, + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities(); + + // cannot be null + Assertions.assertNotNull(catalogs); + + // iterate to find our catalog + PolarisEntityActiveRecord catalogListInfo = null; + for (PolarisEntityActiveRecord cat : catalogs) { + if (cat.getName().equals(catalogName)) { + catalogListInfo = cat; + break; + } + } + + // we must find it + Assertions.assertNotNull(catalogListInfo); + + // now make sure this catalog was properly persisted + PolarisBaseEntity catalog = + this.ensureExistsById( + null, + catalogListInfo.getId(), + true, + catalogName, + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE); + + // build catalog path to our catalog + List catalogPath = new ArrayList<>(); + catalogPath.add(catalog); + + // load all roles + List roles = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + catalogPath, + PolarisEntityType.CATALOG_ROLE, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities(); + + // ensure not null, one element only + Assertions.assertNotNull(roles); + Assertions.assertEquals(1, roles.size()); + + // get the role list information + PolarisEntityActiveRecord roleListInfo = roles.get(0); + + // now make sure this role was properly persisted + PolarisBaseEntity role = + this.ensureExistsById( + catalogPath, + roleListInfo.getId(), + true, + PolarisEntityConstants.getNameOfCatalogAdminRole(), + PolarisEntityType.CATALOG_ROLE, + PolarisEntitySubType.NULL_SUBTYPE); + + // ensure that the admin role has been granted CATALOG_MANAGE_ACCESS and + // CATALOG_MANAGE_METADATA
priv on the catalog + this.ensureGrantRecordExists(catalog, role, PolarisPrivilege.CATALOG_MANAGE_ACCESS); + this.ensureGrantRecordExists(catalog, role, PolarisPrivilege.CATALOG_MANAGE_METADATA); + + // success, return result + return new ImmutablePair<>(catalog, role); + } + + /** Create a principal */ + PolarisBaseEntity createPrincipal(String name) { + // create new principal identity + PolarisBaseEntity principalEntity = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + polarisMetaStoreManager.generateNewEntityId(this.polarisCallContext).getId(), + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + name); + principalEntity.setInternalProperties( + PolarisObjectMapperUtil.serializeProperties( + this.polarisCallContext, + Map.of(PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE, "true"))); + PolarisMetaStoreManager.CreatePrincipalResult createPrincipalResult = + polarisMetaStoreManager.createPrincipal(this.polarisCallContext, principalEntity); + Assertions.assertNotNull(createPrincipalResult); + + // ensure well created + this.ensureExistsById( + null, + createPrincipalResult.getPrincipal().getId(), + true, + name, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE); + + // the client id + PolarisPrincipalSecrets secrets = createPrincipalResult.getPrincipalSecrets(); + String clientId = secrets.getPrincipalClientId(); + + // ensure secrets are properly populated + Assertions.assertNotNull(secrets.getMainSecret()); + Assertions.assertTrue(secrets.getMainSecret().length() >= 32); + Assertions.assertNotNull(secrets.getSecondarySecret()); + Assertions.assertTrue(secrets.getSecondarySecret().length() >= 32); + + // should be same principal id + Assertions.assertEquals(principalEntity.getId(), secrets.getPrincipalId()); + + // ensure that the secrets have been properly saved and match + PolarisPrincipalSecrets reloadSecrets = + polarisMetaStoreManager + 
.loadPrincipalSecrets(this.polarisCallContext, clientId) + .getPrincipalSecrets(); + Assertions.assertNotNull(reloadSecrets); + Assertions.assertEquals(secrets.getPrincipalId(), reloadSecrets.getPrincipalId()); + Assertions.assertEquals(secrets.getPrincipalClientId(), reloadSecrets.getPrincipalClientId()); + Assertions.assertEquals(secrets.getMainSecret(), reloadSecrets.getMainSecret()); + Assertions.assertEquals(secrets.getSecondarySecret(), reloadSecrets.getSecondarySecret()); + + Map internalProperties = + PolarisObjectMapperUtil.deserializeProperties( + this.polarisCallContext, createPrincipalResult.getPrincipal().getInternalProperties()); + Assertions.assertNotNull( + internalProperties.get( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE)); + + // simulate retry if we are asked to + if (this.doRetry) { + // simulate that we retried + PolarisMetaStoreManager.CreatePrincipalResult newCreatePrincipalResult = + polarisMetaStoreManager.createPrincipal(this.polarisCallContext, principalEntity); + Assertions.assertNotNull(newCreatePrincipalResult); + + // ensure same + Assertions.assertEquals( + createPrincipalResult.getPrincipal().getId(), + newCreatePrincipalResult.getPrincipal().getId()); + PolarisPrincipalSecrets newSecrets = newCreatePrincipalResult.getPrincipalSecrets(); + Assertions.assertEquals(secrets.getPrincipalId(), newSecrets.getPrincipalId()); + Assertions.assertEquals(secrets.getPrincipalClientId(), newSecrets.getPrincipalClientId()); + Assertions.assertEquals(secrets.getMainSecret(), newSecrets.getMainSecret()); + Assertions.assertEquals(secrets.getSecondarySecret(), newSecrets.getSecondarySecret()); + } + + secrets = + polarisMetaStoreManager + .rotatePrincipalSecrets( + this.polarisCallContext, + clientId, + principalEntity.getId(), + secrets.getMainSecret(), + false) + .getPrincipalSecrets(); + Assertions.assertNotEquals(reloadSecrets.getMainSecret(), secrets.getMainSecret()); + Assertions.assertNotEquals(reloadSecrets.getMainSecret(),
secrets.getMainSecret()); + + PolarisBaseEntity reloadPrincipal = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, 0L, createPrincipalResult.getPrincipal().getId()) + .getEntity(); + internalProperties = + PolarisObjectMapperUtil.deserializeProperties( + this.polarisCallContext, reloadPrincipal.getInternalProperties()); + Assertions.assertNull( + internalProperties.get( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE)); + + // rotate the secrets, twice! + polarisMetaStoreManager.rotatePrincipalSecrets( + this.polarisCallContext, clientId, principalEntity.getId(), secrets.getMainSecret(), false); + polarisMetaStoreManager.rotatePrincipalSecrets( + this.polarisCallContext, clientId, principalEntity.getId(), secrets.getMainSecret(), false); + + // reload and check that now the main should be secondary + reloadSecrets = + polarisMetaStoreManager + .loadPrincipalSecrets(this.polarisCallContext, clientId) + .getPrincipalSecrets(); + Assertions.assertNotNull(reloadSecrets); + Assertions.assertEquals(secrets.getPrincipalId(), reloadSecrets.getPrincipalId()); + Assertions.assertEquals(secrets.getPrincipalClientId(), reloadSecrets.getPrincipalClientId()); + Assertions.assertEquals(secrets.getMainSecret(), reloadSecrets.getSecondarySecret()); + String newMainSecret = reloadSecrets.getMainSecret(); + + // reset - the previous main secret is no longer one of the secrets + polarisMetaStoreManager.rotatePrincipalSecrets( + this.polarisCallContext, + clientId, + principalEntity.getId(), + reloadSecrets.getMainSecret(), + true); + reloadSecrets = + polarisMetaStoreManager + .loadPrincipalSecrets(this.polarisCallContext, clientId) + .getPrincipalSecrets(); + Assertions.assertNotNull(reloadSecrets); + Assertions.assertEquals(secrets.getPrincipalId(), reloadSecrets.getPrincipalId()); + Assertions.assertEquals(secrets.getPrincipalClientId(), reloadSecrets.getPrincipalClientId()); + Assertions.assertNotEquals(newMainSecret, 
reloadSecrets.getMainSecret()); + Assertions.assertNotEquals(newMainSecret, reloadSecrets.getSecondarySecret()); + + PolarisBaseEntity newPrincipal = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, 0L, principalEntity.getId()) + .getEntity(); + internalProperties = + PolarisObjectMapperUtil.deserializeProperties( + this.polarisCallContext, newPrincipal.getInternalProperties()); + Assertions.assertNotNull( + internalProperties.get( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE)); + + // reset again. we should get new secrets and the CREDENTIAL_ROTATION_REQUIRED flag should be + // gone + polarisMetaStoreManager.rotatePrincipalSecrets( + this.polarisCallContext, + clientId, + principalEntity.getId(), + reloadSecrets.getMainSecret(), + true); + PolarisPrincipalSecrets postResetCredentials = + polarisMetaStoreManager + .loadPrincipalSecrets(this.polarisCallContext, clientId) + .getPrincipalSecrets(); + Assertions.assertNotNull(postResetCredentials); + Assertions.assertEquals(reloadSecrets.getPrincipalId(), postResetCredentials.getPrincipalId()); + Assertions.assertEquals( + reloadSecrets.getPrincipalClientId(), postResetCredentials.getPrincipalClientId()); + Assertions.assertNotEquals(reloadSecrets.getMainSecret(), postResetCredentials.getMainSecret()); + Assertions.assertNotEquals( + reloadSecrets.getSecondarySecret(), postResetCredentials.getSecondarySecret()); + + PolarisBaseEntity finalPrincipal = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, 0L, principalEntity.getId()) + .getEntity(); + internalProperties = + PolarisObjectMapperUtil.deserializeProperties( + this.polarisCallContext, finalPrincipal.getInternalProperties()); + Assertions.assertNull( + internalProperties.get( + PolarisEntityConstants.PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE)); + + // return it + return finalPrincipal; + } + + /** Create an entity */ + public PolarisBaseEntity createEntity( + List catalogPath, + PolarisEntityType entityType, +
PolarisEntitySubType entitySubType, + String name) { + return createEntity( + catalogPath, + entityType, + entitySubType, + name, + polarisMetaStoreManager.generateNewEntityId(this.polarisCallContext).getId()); + } + + PolarisBaseEntity createEntity( + List catalogPath, + PolarisEntityType entityType, + PolarisEntitySubType entitySubType, + String name, + long entityId) { + long parentId; + long catalogId; + if (catalogPath != null) { + catalogId = catalogPath.get(0).getId(); + parentId = catalogPath.get(catalogPath.size() - 1).getId(); + } else { + catalogId = PolarisEntityConstants.getNullId(); + parentId = PolarisEntityConstants.getRootEntityId(); + } + PolarisBaseEntity newEntity = + new PolarisBaseEntity(catalogId, entityId, entityType, entitySubType, parentId, name); + PolarisBaseEntity entity = + polarisMetaStoreManager + .createEntityIfNotExists(this.polarisCallContext, catalogPath, newEntity) + .getEntity(); + Assertions.assertNotNull(entity); + + // same id + Assertions.assertEquals(newEntity.getId(), entity.getId()); + + // ensure well created + this.ensureExistsById(catalogPath, entity.getId(), true, name, entityType, entitySubType); + + // retry if we are asked to + if (this.doRetry) { + PolarisBaseEntity retryEntity = + polarisMetaStoreManager + .createEntityIfNotExists(this.polarisCallContext, catalogPath, newEntity) + .getEntity(); + Assertions.assertNotNull(retryEntity); + + // same id + Assertions.assertEquals(retryEntity.getId(), entity.getId()); + + // ensure well created + this.ensureExistsById( + catalogPath, retryEntity.getId(), true, name, entityType, entitySubType); + } + + // return it + return entity; + } + + /** + * Create an entity with a null subtype + * + * @return the entity + */ + PolarisBaseEntity createEntity( + List catalogPath, PolarisEntityType entityType, String name) { + return createEntity(catalogPath, entityType, PolarisEntitySubType.NULL_SUBTYPE, name); + } + + /** Drop the entity if it exists. 
*/ + void dropEntity(List catalogPath, PolarisEntityCore entityToDrop) { + // see if the entity exists + final boolean exists; + boolean hasChildren = false; + + // check if it exists + PolarisBaseEntity entity = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, entityToDrop.getCatalogId(), entityToDrop.getId()) + .getEntity(); + if (entity != null) { + PolarisMetaStoreManager.EntityResult entityFound = + polarisMetaStoreManager.readEntityByName( + this.polarisCallContext, + catalogPath, + entity.getType(), + entity.getSubType(), + entity.getName()); + exists = entityFound.isSuccess(); + + // if exists, see if empty + if (exists + && (entity.getType() == PolarisEntityType.CATALOG + || entity.getType() == PolarisEntityType.NAMESPACE)) { + // build path + List path = new ArrayList<>(); + if (catalogPath != null) { + path.addAll(catalogPath); + } + path.add(entityToDrop); + + // get all children, cannot be null + List children = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + path, + PolarisEntityType.NAMESPACE, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities(); + Assertions.assertNotNull(children); + if (children.isEmpty() && entity.getType() == PolarisEntityType.NAMESPACE) { + children = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + path, + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE) + .getEntities(); + Assertions.assertNotNull(children); + } else if (children.isEmpty()) { + children = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + path, + PolarisEntityType.CATALOG_ROLE, + PolarisEntitySubType.ANY_SUBTYPE) + .getEntities(); + Assertions.assertNotNull(children); + // if only one left, it can be dropped. 
+ if (children.size() == 1) { + children.clear(); + } + } + hasChildren = !children.isEmpty(); + } + } else { + exists = false; + } + + // load all the grants to ensure they are properly cleaned + final List granteeEntities; + final List securableEntities; + if (exists) { + granteeEntities = + new ArrayList<>( + polarisMetaStoreManager + .loadGrantsOnSecurable( + this.polarisCallContext, entity.getCatalogId(), entity.getId()) + .getEntities()); + securableEntities = + new ArrayList<>( + polarisMetaStoreManager + .loadGrantsToGrantee( + this.polarisCallContext, entity.getCatalogId(), entity.getId()) + .getEntities()); + } else { + granteeEntities = List.of(); + securableEntities = List.of(); + } + + // now drop it + Map cleanupProperties = + Map.of("taskId", String.valueOf(entity.getId()), "cleanupProperty", "cleanupValue"); + PolarisMetaStoreManager.DropEntityResult dropResult = + polarisMetaStoreManager.dropEntityIfExists( + this.polarisCallContext, catalogPath, entityToDrop, cleanupProperties, true); + + // should have been dropped if exists + if (entityToDrop.cannotBeDroppedOrRenamed()) { + Assertions.assertFalse(dropResult.isSuccess()); + Assertions.assertFalse(dropResult.failedBecauseNotEmpty()); + Assertions.assertTrue(dropResult.isEntityUnDroppable()); + } else if (exists && hasChildren) { + Assertions.assertFalse(dropResult.isSuccess()); + Assertions.assertTrue(dropResult.failedBecauseNotEmpty()); + Assertions.assertFalse(dropResult.isEntityUnDroppable()); + } else { + Assertions.assertEquals(exists, dropResult.isSuccess()); + Assertions.assertFalse(dropResult.failedBecauseNotEmpty()); + Assertions.assertFalse(dropResult.isEntityUnDroppable()); + Assertions.assertNotNull(dropResult.getCleanupTaskId()); + PolarisBaseEntity cleanupTask = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, 0L, dropResult.getCleanupTaskId()) + .getEntity(); + Assertions.assertNotNull(cleanupTask); + Assertions.assertEquals(PolarisEntityType.TASK, 
cleanupTask.getType()); + Assertions.assertNotNull(cleanupTask.getInternalProperties()); + Map internalProperties = + PolarisObjectMapperUtil.deserializeProperties( + polarisCallContext, cleanupTask.getInternalProperties()); + Assertions.assertEquals(cleanupProperties, internalProperties); + Map properties = + PolarisObjectMapperUtil.deserializeProperties( + polarisCallContext, cleanupTask.getProperties()); + Assertions.assertNotNull(properties); + Assertions.assertNotNull(properties.get(PolarisTaskConstants.TASK_DATA)); + PolarisBaseEntity droppedEntity = + PolarisObjectMapperUtil.deserialize( + polarisCallContext, + properties.get(PolarisTaskConstants.TASK_DATA), + PolarisBaseEntity.class); + Assertions.assertNotNull(droppedEntity); + Assertions.assertEquals(entity.getId(), droppedEntity.getId()); + } + + // verify gone if it was dropped + if (dropResult.isSuccess()) { + // reload it, should no longer be found + PolarisBaseEntity entityAfterDrop = + polarisMetaStoreManager + .loadEntity( + this.polarisCallContext, entityToDrop.getCatalogId(), entityToDrop.getId()) + .getEntity(); + + // ensure dropped + Assertions.assertNull(entityAfterDrop); + + // should no longer exist by name either + Assertions.assertNotNull(entity); + PolarisMetaStoreManager.EntityResult entityFound = + polarisMetaStoreManager.readEntityByName( + this.polarisCallContext, + catalogPath, + entity.getType(), + entity.getSubType(), + entity.getName()); + + // should not be found + Assertions.assertEquals( + entityFound.getReturnStatus(), PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND); + + // make sure that the entity which was dropped is no longer referenced by a grant with any + // of the entities it was connected with before being dropped + for (PolarisBaseEntity connectedEntity : granteeEntities) { + PolarisMetaStoreManager.LoadGrantsResult grantResult = + polarisMetaStoreManager.loadGrantsToGrantee( + this.polarisCallContext, connectedEntity.getCatalogId(), connectedEntity.getId()); + if
(grantResult.isSuccess()) { + long cnt = + grantResult.getGrantRecords().stream() + .filter(gr -> gr.getSecurableId() == entityToDrop.getId()) + .count(); + Assertions.assertEquals(0, cnt); + } else { + // special case when a catalog is dropped, the catalog_admin role is also dropped with it + Assertions.assertTrue( + grantResult.getReturnStatus() == PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND + && entityToDrop.getType() == PolarisEntityType.CATALOG + && connectedEntity.getType() == PolarisEntityType.CATALOG_ROLE + && connectedEntity + .getName() + .equals(PolarisEntityConstants.getNameOfCatalogAdminRole())); + } + } + for (PolarisBaseEntity connectedEntity : securableEntities) { + PolarisMetaStoreManager.LoadGrantsResult grantResult = + polarisMetaStoreManager.loadGrantsOnSecurable( + this.polarisCallContext, connectedEntity.getCatalogId(), connectedEntity.getId()); + long cnt = + grantResult.getGrantRecords().stream() + .filter(gr -> gr.getGranteeId() == entityToDrop.getId()) + .count(); + Assertions.assertEquals(0, cnt); + } + } + } + + /** Grant a privilege to a catalog role */ + void grantPrivilege( + PolarisBaseEntity role, + List catalogPath, + PolarisBaseEntity securable, + PolarisPrivilege priv) { + // grant the privilege + polarisMetaStoreManager.grantPrivilegeOnSecurableToRole( + this.polarisCallContext, role, catalogPath, securable, priv); + + // now validate the privilege + this.ensureGrantRecordExists(securable, role, priv); + } + + /** Revoke a privilege from a catalog role */ + void revokePrivilege( + PolarisBaseEntity role, + List catalogPath, + PolarisBaseEntity securable, + PolarisPrivilege priv) { + // revoke the privilege + polarisMetaStoreManager.revokePrivilegeOnSecurableFromRole( + this.polarisCallContext, role, catalogPath, securable, priv); + + // now validate that the privilege is gone + this.ensureGrantRecordRemoved(securable, role, priv); + } + + /** Grant usage on a role to a grantee */ + void grantToGrantee( + PolarisEntityCore catalog,
+ PolarisBaseEntity granted, + PolarisBaseEntity grantee, + PolarisPrivilege priv) { + // grant the privilege + polarisMetaStoreManager.grantUsageOnRoleToGrantee( + this.polarisCallContext, catalog, granted, grantee); + + // now validate the privilege + this.ensureGrantRecordExists(granted, grantee, priv); + } + + /** Revoke usage on a role from a grantee */ + void revokeToGrantee( + PolarisEntityCore catalog, + PolarisBaseEntity granted, + PolarisBaseEntity grantee, + PolarisPrivilege priv) { + // revoke the privilege + polarisMetaStoreManager.revokeUsageOnRoleFromGrantee( + this.polarisCallContext, catalog, granted, grantee); + + // now validate that the privilege is gone + this.ensureGrantRecordRemoved(granted, grantee, priv); + } + + /** Create a new catalog */ + PolarisBaseEntity createCatalog(String catalogName) { + // create new catalog + PolarisBaseEntity catalog = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + polarisMetaStoreManager.generateNewEntityId(this.polarisCallContext).getId(), + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + catalogName); + PolarisMetaStoreManager.CreateCatalogResult catalogCreated = + polarisMetaStoreManager.createCatalog(this.polarisCallContext, catalog, List.of()); + Assertions.assertNotNull(catalogCreated); + + // ensure well created + this.ensureExistsById( + null, + catalog.getId(), + true, + catalogName, + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE); + + // retry if we are asked to + if (this.doRetry) { + PolarisMetaStoreManager.CreateCatalogResult retryCatalogCreated = + polarisMetaStoreManager.createCatalog(this.polarisCallContext, catalog, List.of()); + Assertions.assertNotNull(retryCatalogCreated); + + // ensure well created + this.ensureExistsById( + null, + catalog.getId(), + true, + catalogName, + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE); + + // should be same id as the first time around +
Assertions.assertEquals(catalog.getId(), retryCatalogCreated.getCatalog().getId()); + } + + return catalogCreated.getCatalog(); + } + + /** + * Create a test catalog. This is a new catalog which will have the following objects (N is for a + * namespace, T for a table, V for a view, R for a role, P for a principal): + * + *
+   * - C
+   * - (N1/N2/T1)
+   * - (N1/N2/T2)
+   * - (N1/N2/V1)
+   * - (N1/N3/T3)
+   * - (N1/N3/V2)
+   * - (N1/T4)
+   * - (N1/N4)
+   * - N5/N6/T5
+   * - N5/N6/T6
+   * - R1(TABLE_READ_DATA on N1/N2, VIEW_CREATE on C, TABLE_LIST on N5, TABLE_DROP on N5/N6/T5)
+   * - R2(TABLE_WRITE_DATA on N5, VIEW_LIST on C)
+   * - PR1(R1, R2)
+   * - PR2(R2)
+   * - P1(PR1, PR2)
+   * - P2(PR2)
+   * 
+ */ + PolarisBaseEntity createTestCatalog(String catalogName) { + // create new catalog + PolarisBaseEntity catalog = + new PolarisBaseEntity( + PolarisEntityConstants.getNullId(), + polarisMetaStoreManager.generateNewEntityId(this.polarisCallContext).getId(), + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootEntityId(), + catalogName); + PolarisMetaStoreManager.CreateCatalogResult catalogCreated = + polarisMetaStoreManager.createCatalog(this.polarisCallContext, catalog, List.of()); + Assertions.assertNotNull(catalogCreated); + catalog = catalogCreated.getCatalog(); + + // now create all objects + PolarisBaseEntity N1 = this.createEntity(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + PolarisBaseEntity N1_N2 = + this.createEntity(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + this.createEntity( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T1"); + this.createEntity( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T2"); + this.createEntity( + List.of(catalog, N1, N1_N2), PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW, "V1"); + PolarisBaseEntity N1_N3 = + this.createEntity(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N3"); + this.createEntity( + List.of(catalog, N1, N1_N3), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T3"); + this.createEntity( + List.of(catalog, N1, N1_N3), PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW, "V2"); + this.createEntity( + List.of(catalog, N1), PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE, "T4"); + this.createEntity(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N4"); + PolarisBaseEntity N5 = this.createEntity(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + PolarisBaseEntity N5_N6 = + this.createEntity(List.of(catalog, N5), PolarisEntityType.NAMESPACE, "N6"); + PolarisBaseEntity N5_N6_T5 = + 
this.createEntity( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T5"); + this.createEntity( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T6"); + + // the two catalog roles + PolarisBaseEntity R1 = + this.createEntity(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R1"); + PolarisBaseEntity R2 = + this.createEntity(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R2"); + + // perform the grants to R1 + grantPrivilege(R1, List.of(catalog, N1, N1_N2), N1_N2, PolarisPrivilege.TABLE_READ_DATA); + grantPrivilege(R1, List.of(catalog), catalog, PolarisPrivilege.VIEW_CREATE); + grantPrivilege(R1, List.of(catalog, N5), N5, PolarisPrivilege.TABLE_LIST); + grantPrivilege(R1, List.of(catalog, N5, N5_N6), N5_N6_T5, PolarisPrivilege.TABLE_DROP); + + // perform the grants to R2 + grantPrivilege(R2, List.of(catalog, N5), N5, PolarisPrivilege.TABLE_WRITE_DATA); + grantPrivilege(R2, List.of(catalog), catalog, PolarisPrivilege.VIEW_LIST); + + // now create two principal roles + PolarisBaseEntity PR1 = this.createEntity(null, PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + PolarisBaseEntity PR2 = this.createEntity(null, PolarisEntityType.PRINCIPAL_ROLE, "PR2"); + + // assign R1 and R2 to PR1, R2 to PR2 + grantToGrantee(catalog, R1, PR1, PolarisPrivilege.CATALOG_ROLE_USAGE); + grantToGrantee(catalog, R2, PR1, PolarisPrivilege.CATALOG_ROLE_USAGE); + grantToGrantee(catalog, R2, PR2, PolarisPrivilege.CATALOG_ROLE_USAGE); + + // also create two new principals + PolarisBaseEntity P1 = this.createPrincipal("P1"); + PolarisBaseEntity P2 = this.createPrincipal("P2"); + + // assign PR1 and PR2 to P1, PR2 to P2 + grantToGrantee(null, PR1, P1, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + grantToGrantee(null, PR2, P1, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + grantToGrantee(null, PR2, P2, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + + return catalog; + } + + /** + * Find an entity by name, ensure it is there and
has been properly initialized + * + * @return the entity we found + */ + PolarisBaseEntity ensureExistsByName( + List catalogPath, + PolarisEntityType entityType, + PolarisEntitySubType entitySubType, + String name) { + // find by name, ensure we found it + PolarisMetaStoreManager.EntityResult entityFound = + polarisMetaStoreManager.readEntityByName( + this.polarisCallContext, catalogPath, entityType, entitySubType, name); + Assertions.assertNotNull(entityFound); + Assertions.assertTrue(entityFound.isSuccess()); + + PolarisBaseEntity entity = entityFound.getEntity(); + Assertions.assertNotNull(entity); + Assertions.assertEquals(name, entity.getName()); + Assertions.assertEquals(entityType, entity.getType()); + if (entitySubType != PolarisEntitySubType.ANY_SUBTYPE) { + Assertions.assertEquals(entitySubType, entity.getSubType()); + } + Assertions.assertTrue(entity.getCreateTimestamp() >= this.testStartTime); + Assertions.assertEquals(0, entity.getDropTimestamp()); + Assertions.assertTrue(entity.getLastUpdateTimestamp() >= entity.getCreateTimestamp()); + Assertions.assertEquals(0, entity.getToPurgeTimestamp()); + Assertions.assertEquals(0, entity.getPurgeTimestamp()); + Assertions.assertEquals( + (catalogPath == null) ? PolarisEntityConstants.getNullId() : catalogPath.get(0).getId(), + entity.getCatalogId()); + Assertions.assertEquals( + (catalogPath == null) + ?
PolarisEntityConstants.getRootEntityId() + : catalogPath.get(catalogPath.size() - 1).getId(), + entity.getParentId()); + Assertions.assertTrue(entity.getEntityVersion() >= 1 && entity.getGrantRecordsVersion() >= 1); + + return entity; + } + + /** + * Find an entity by name, ensure it is there and has been properly initialized + * + * @return the entity we found + */ + PolarisBaseEntity ensureExistsByName( + List catalogPath, PolarisEntityType entityType, String name) { + return this.ensureExistsByName( + catalogPath, entityType, PolarisEntitySubType.NULL_SUBTYPE, name); + } + + /** + * Update the specified entity. Validate that versions are properly maintained + * + * @param catalogPath path to the catalog where this entity is stored + * @param entity entity to update + * @param props updated properties + * @param internalProps updated internal properties + * @return updated entity + */ + PolarisBaseEntity updateEntity( + List catalogPath, + PolarisBaseEntity entity, + String props, + String internalProps) { + // ok, remember version and grants_version + int version = entity.getEntityVersion(); + int grantRecsVersion = entity.getGrantRecordsVersion(); + + // derive the catalogId for that entity + long catalogId = + (catalogPath == null) ?
PolarisEntityConstants.getNullId() : catalogPath.get(0).getId(); + Assertions.assertEquals(entity.getCatalogId(), catalogId); + + // let's make some property updates + entity.setProperties(props); + entity.setInternalProperties(internalProps); + + // lookup that entity, ensure it exists + PolarisBaseEntity beforeUpdateEntity = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, entity.getCatalogId(), entity.getId()) + .getEntity(); + + // update that property + PolarisBaseEntity updatedEntity = + polarisMetaStoreManager + .updateEntityPropertiesIfNotChanged(this.polarisCallContext, catalogPath, entity) + .getEntity(); + + // if version mismatch, nothing should be updated + if (beforeUpdateEntity == null + || beforeUpdateEntity.getEntityVersion() != entity.getEntityVersion()) { + Assertions.assertNull(updatedEntity); + + // refresh catalog info + entity = + polarisMetaStoreManager + .loadEntity(this.polarisCallContext, entity.getCatalogId(), entity.getId()) + .getEntity(); + + // ensure nothing has changed + if (beforeUpdateEntity != null && entity != null) { + Assertions.assertEquals(beforeUpdateEntity.getEntityVersion(), entity.getEntityVersion()); + Assertions.assertEquals( + beforeUpdateEntity.getGrantRecordsVersion(), entity.getGrantRecordsVersion()); + Assertions.assertEquals(beforeUpdateEntity.getProperties(), entity.getProperties()); + Assertions.assertEquals( + beforeUpdateEntity.getInternalProperties(), entity.getInternalProperties()); + } + + return null; + } + + // entity should have been updated + Assertions.assertNotNull(updatedEntity); + + // read back this entity and ensure that the update was performed + PolarisBaseEntity afterUpdateEntity = + this.ensureExistsById( + catalogPath, + entity.getId(), + true, + entity.getName(), + entity.getType(), + entity.getSubType()); + + // verify that version has changed, but not grantRecsVersion + Assertions.assertEquals(version + 1, updatedEntity.getEntityVersion()); + 
Assertions.assertEquals(version, entity.getEntityVersion()); + Assertions.assertEquals(version + 1, afterUpdateEntity.getEntityVersion()); + + // grantRecsVersion should not have changed + Assertions.assertEquals(grantRecsVersion, updatedEntity.getGrantRecordsVersion()); + Assertions.assertEquals(grantRecsVersion, entity.getGrantRecordsVersion()); + Assertions.assertEquals(grantRecsVersion, afterUpdateEntity.getGrantRecordsVersion()); + + // update should have been performed + Assertions.assertEquals( + jsonNode(entity.getProperties()), jsonNode(updatedEntity.getProperties())); + Assertions.assertEquals( + jsonNode(entity.getProperties()), jsonNode(afterUpdateEntity.getProperties())); + Assertions.assertEquals( + jsonNode(entity.getInternalProperties()), jsonNode(updatedEntity.getInternalProperties())); + Assertions.assertEquals( + jsonNode(entity.getInternalProperties()), + jsonNode(afterUpdateEntity.getInternalProperties())); + + // lookup the tracking slice to verify this has been updated too + List versions = + polarisMetaStoreManager + .loadEntitiesChangeTracking( + this.polarisCallContext, List.of(new PolarisEntityId(catalogId, entity.getId()))) + .getChangeTrackingVersions(); + Assertions.assertEquals(1, versions.size()); + Assertions.assertEquals(updatedEntity.getEntityVersion(), versions.get(0).getEntityVersion()); + Assertions.assertEquals( + updatedEntity.getGrantRecordsVersion(), versions.get(0).getGrantRecordsVersion()); + + return updatedEntity; + } + + private JsonNode jsonNode(String json) { + if (json == null) { + return null; + } + try { + return objectMapper.readTree(json); + } catch (JsonProcessingException e) { + throw new RuntimeException(e); + } + } + + /** Execute a list operation and validate the result */ + private void validateListReturn( + List path, + PolarisEntityType entityType, + PolarisEntitySubType entitySubType, + List> expectedResult) { + + // list the entities under the specified path + List result = + polarisMetaStoreManager + 
.listEntities(this.polarisCallContext, path, entityType, entitySubType) + .getEntities(); + Assertions.assertNotNull(result); + + // now validate the result + Assertions.assertEquals(expectedResult.size(), result.size()); + + // ensure all elements are found + for (Pair<String, PolarisEntitySubType> expected : expectedResult) { + boolean found = false; + for (PolarisEntityActiveRecord res : result) { + if (res.getName().equals(expected.getLeft()) + && expected.getRight().getCode() == res.getSubTypeCode()) { + found = true; + break; + } + } + // we should find it + Assertions.assertTrue(found); + } + } + + /** Execute a list operation and validate the result */ + private void validateListReturn( + List<PolarisEntityCore> path, + PolarisEntityType entityType, + List<Pair<String, PolarisEntitySubType>> expectedResult) { + validateListReturn(path, entityType, PolarisEntitySubType.NULL_SUBTYPE, expectedResult); + } + + /** + * Validate a cached entry which has just been loaded from the store, assuming it is not null. + * + * @param cacheEntry the cached entity to validate + */ + private void validateCacheEntryLoad(PolarisMetaStoreManager.CachedEntryResult cacheEntry) { + + // cannot be null + Assertions.assertNotNull(cacheEntry); + PolarisEntity entity = PolarisEntity.of(cacheEntry.getEntity()); + Assertions.assertNotNull(entity); + List<PolarisGrantRecord> grantRecords = cacheEntry.getEntityGrantRecords(); + Assertions.assertNotNull(grantRecords); + + // same grant record version + Assertions.assertEquals(entity.getGrantRecordsVersion(), cacheEntry.getGrantRecordsVersion()); + + // reload the entity + PolarisEntity refEntity = + PolarisEntity.of( + this.polarisMetaStoreManager.loadEntity( + this.polarisCallContext, entity.getCatalogId(), entity.getId())); + Assertions.assertNotNull(refEntity); + + // same entity + Assertions.assertEquals(refEntity, entity); + // same version + Assertions.assertEquals(refEntity.getEntityVersion(), entity.getEntityVersion()); + + // reload the grants + List<PolarisGrantRecord> refGrantRecords = new ArrayList<>(); + if (refEntity.getType().isGrantee()) { + 
PolarisMetaStoreManager.LoadGrantsResult loadGrantResult = + this.polarisMetaStoreManager.loadGrantsToGrantee( + this.polarisCallContext, refEntity.getCatalogId(), refEntity.getId()); + this.validateLoadedGrants(loadGrantResult, true); + + // same version + Assertions.assertEquals( + cacheEntry.getGrantRecordsVersion(), loadGrantResult.getGrantsVersion()); + + refGrantRecords.addAll(loadGrantResult.getGrantRecords()); + } + + PolarisMetaStoreManager.LoadGrantsResult loadGrantResult = + this.polarisMetaStoreManager.loadGrantsOnSecurable( + this.polarisCallContext, refEntity.getCatalogId(), refEntity.getId()); + this.validateLoadedGrants(loadGrantResult, false); + + // same version + Assertions.assertEquals( + cacheEntry.getGrantRecordsVersion(), loadGrantResult.getGrantsVersion()); + + refGrantRecords.addAll(loadGrantResult.getGrantRecords()); + + // same grants + Assertions.assertEquals(new HashSet<>(refGrantRecords), new HashSet<>(grantRecords)); + } + + /** + * Validate a cached entry which has just been refreshed from the store, assuming it is not null. + * + * @param cacheEntry the cached entity to validate + */ + private void validateCacheEntryRefresh( + PolarisMetaStoreManager.CachedEntryResult cacheEntry, + long catalogId, + long entityId, + int entityVersion, + int entityGrantRecordsVersion) { + // cannot be null + Assertions.assertNotNull(cacheEntry); + PolarisBaseEntity entity = cacheEntry.getEntity(); + List<PolarisGrantRecord> grantRecords = cacheEntry.getEntityGrantRecords(); + + // reload the entity + PolarisBaseEntity refEntity = + this.polarisMetaStoreManager + .loadEntity(this.polarisCallContext, catalogId, entityId) + .getEntity(); + Assertions.assertNotNull(refEntity); + + // reload the grants + PolarisMetaStoreManager.LoadGrantsResult loadGrantResult = + refEntity.getType().isGrantee() + ? 
this.polarisMetaStoreManager.loadGrantsToGrantee( + this.polarisCallContext, catalogId, entityId) + : this.polarisMetaStoreManager.loadGrantsOnSecurable( + this.polarisCallContext, catalogId, entityId); + this.validateLoadedGrants(loadGrantResult, refEntity.getType().isGrantee()); + Assertions.assertEquals( + loadGrantResult.getGrantsVersion(), cacheEntry.getGrantRecordsVersion()); + + // if entity version has not changed, entity should not be loaded + if (refEntity.getEntityVersion() == entityVersion) { + // no need to reload in that case + Assertions.assertNull(entity); + } else { + // should have been reloaded + Assertions.assertNotNull(entity); + // should be same as refEntity + Assertions.assertEquals(PolarisEntity.of(refEntity), PolarisEntity.of(entity)); + // same version + Assertions.assertEquals(refEntity.getEntityVersion(), entity.getEntityVersion()); + } + + // if grant records version has not changed, grant records should not be loaded + if (refEntity.getGrantRecordsVersion() == entityGrantRecordsVersion) { + // no need to reload in that case + Assertions.assertNull(grantRecords); + } else { + List<PolarisGrantRecord> refGrantRecords = loadGrantResult.getGrantRecords(); + // should have been reloaded + Assertions.assertNotNull(grantRecords); + // should be same as refEntity + Assertions.assertEquals(new HashSet<>(refGrantRecords), new HashSet<>(grantRecords)); + // same version + Assertions.assertEquals( + loadGrantResult.getGrantsVersion(), cacheEntry.getGrantRecordsVersion()); + } + } + + /** + * Helper function to validate loading the cache by name. We will load the cache entry by name, + * check that the result is correct and return the entity or null if it cannot be found. 
+ * + * @param entityCatalogId catalog id for the entity + * @param parentId parent id of the entity + * @param entityType type of the entity + * @param entityName name of the entity + * @param expectExists if true, we should find it + * @return the entity, or null if it does not exist + */ + private PolarisBaseEntity loadCacheEntryByName( + long entityCatalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull String entityName, + boolean expectExists) { + // load cached entry + PolarisMetaStoreManager.CachedEntryResult cacheEntry = + this.polarisMetaStoreManager.loadCachedEntryByName( + this.polarisCallContext, entityCatalogId, parentId, entityType, entityName); + + // check that existence matches our expectation + Assertions.assertEquals(expectExists, cacheEntry.isSuccess()); + + // if found, validate it + if (cacheEntry.isSuccess()) { + this.validateCacheEntryLoad(cacheEntry); + return cacheEntry.getEntity(); + } else { + return null; + } + } + + /** + * Helper function to validate loading the cache by name. We will load the cache entry by name, + * check that the result exists and is correct, and return the entity. + * + * @param entityCatalogId catalog id for the entity + * @param parentId parent id of the entity + * @param entityType type of the entity + * @param entityName name of the entity + * @return the entity + */ + private PolarisBaseEntity loadCacheEntryByName( + long entityCatalogId, + long parentId, + @NotNull PolarisEntityType entityType, + @NotNull String entityName) { + return this.loadCacheEntryByName(entityCatalogId, parentId, entityType, entityName, true); + } + + /** + * Helper function to validate loading the cache by id. We will load the cache entry by id, check + * that the result is correct and return the entity or null if it cannot be found. 
+ * + * @param entityCatalogId catalog id for the entity + * @param entityId id of the entity + * @param expectExists if true, we should find it + * @return the entity, or null if it does not exist + */ + private PolarisBaseEntity loadCacheEntryById( + long entityCatalogId, long entityId, boolean expectExists) { + // load cached entry + PolarisMetaStoreManager.CachedEntryResult cacheEntry = + this.polarisMetaStoreManager.loadCachedEntryById( + this.polarisCallContext, entityCatalogId, entityId); + + // check that existence matches our expectation + Assertions.assertEquals(expectExists, cacheEntry.isSuccess()); + + // if found, validate it + if (cacheEntry.isSuccess()) { + this.validateCacheEntryLoad(cacheEntry); + return cacheEntry.getEntity(); + } else { + return null; + } + } + + /** + * Helper function to validate loading the cache by id. We will load the cache entry by id, check + * that it exists and validate the result. + * + * @param entityCatalogId catalog id for the entity + * @param entityId id of the entity + * @return the entity + */ + private PolarisBaseEntity loadCacheEntryById(long entityCatalogId, long entityId) { + return this.loadCacheEntryById(entityCatalogId, entityId, true); + } + + /** + * Helper function to validate the refresh of a cached entry. 
We will refresh the cache entry and + * check if the result exists based on "expectExists" and, if exists, validate it is correct + * + * @param entityVersion entity version in the cache + * @param entityGrantRecordsVersion entity grant records version in the cache + * @param entityType type of the entity to load + * @param entityCatalogId catalog id for the entity + * @param entityId parent id of the entity + * @param expectExists if true, we should find it + */ + private void refreshCacheEntry( + int entityVersion, + int entityGrantRecordsVersion, + PolarisEntityType entityType, + long entityCatalogId, + long entityId, + boolean expectExists) { + // load cached entry + PolarisMetaStoreManager.CachedEntryResult cacheEntry = + this.polarisMetaStoreManager.refreshCachedEntity( + this.polarisCallContext, + entityVersion, + entityGrantRecordsVersion, + entityType, + entityCatalogId, + entityId); + + // if null, validate that indeed the entry does not exist + Assertions.assertEquals(expectExists, cacheEntry.isSuccess()); + + // if not null, validate it + if (cacheEntry.isSuccess()) { + this.validateCacheEntryRefresh( + cacheEntry, entityCatalogId, entityId, entityVersion, entityGrantRecordsVersion); + } + } + + /** + * Helper function to validate the refresh of a cached entry. 
We will refresh the cache entry and + * check that the result exists and is correct + * + * @param entityVersion entity version in the cache + * @param entityGrantRecordsVersion entity grant records version in the cache + * @param entityType type of the entity to load + * @param entityCatalogId catalog id for the entity + * @param entityId parent id of the entity + */ + private void refreshCacheEntry( + int entityVersion, + int entityGrantRecordsVersion, + @NotNull PolarisEntityType entityType, + long entityCatalogId, + long entityId) { + // refresh cached entry + this.refreshCacheEntry( + entityVersion, entityGrantRecordsVersion, entityType, entityCatalogId, entityId, true); + } + + /** validate that the root catalog was properly constructed */ + void validateBootstrap() { + // load all principals + List principals = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities(); + + // ensure not null, one element only + Assertions.assertNotNull(principals); + Assertions.assertEquals(1, principals.size()); + + // get catalog list information + PolarisEntityActiveRecord principalListInfo = principals.get(0); + + // now make sure this principal was properly persisted + PolarisBaseEntity principal = + this.ensureExistsById( + null, + principalListInfo.getId(), + true, + PolarisEntityConstants.getRootPrincipalName(), + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE); + + // load all principal roles + List principalRoles = + polarisMetaStoreManager + .listEntities( + this.polarisCallContext, + null, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities(); + + // ensure not null, one element only + Assertions.assertNotNull(principalRoles); + Assertions.assertEquals(1, principalRoles.size()); + + // get catalog list information + PolarisEntityActiveRecord roleListInfo = principalRoles.get(0); + + // now make sure this 
principal role was properly persisted + PolarisBaseEntity principalRole = + this.ensureExistsById( + null, + roleListInfo.getId(), + true, + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole(), + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE); + + // also ensure the usage grant record between the principal role and the principal was persisted + this.ensureGrantRecordExists(principalRole, principal, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + } + + void testCreateTestCatalog() { + // create test catalog + this.createTestCatalog("test"); + + // validate that it has been properly created + PolarisBaseEntity catalog = this.ensureExistsByName(null, PolarisEntityType.CATALOG, "test"); + PolarisBaseEntity N1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + PolarisBaseEntity N1_N2 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T1"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T2"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + "T2"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.VIEW, "V1"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + "V1"); + PolarisBaseEntity N1_N3 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N3"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N3), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T3"); + this.ensureExistsByName( + List.of(catalog, N1, N1_N3), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + "V2"); + this.ensureExistsByName( + List.of(catalog, N1), PolarisEntityType.TABLE_LIKE, 
PolarisEntitySubType.TABLE, "T4"); + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N4"); + PolarisBaseEntity N5 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + PolarisBaseEntity N5_N6 = + this.ensureExistsByName( + List.of(catalog, N5), + PolarisEntityType.NAMESPACE, + PolarisEntitySubType.ANY_SUBTYPE, + "N6"); + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T5"); + PolarisBaseEntity N5_N6_T5 = + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + "T5"); + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T6"); + PolarisBaseEntity R1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R1"); + PolarisBaseEntity R2 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R2"); + this.ensureGrantRecordExists(N1_N2, R1, PolarisPrivilege.TABLE_READ_DATA); + this.ensureGrantRecordExists(catalog, R1, PolarisPrivilege.VIEW_CREATE); + this.ensureGrantRecordExists(N5, R1, PolarisPrivilege.TABLE_LIST); + this.ensureGrantRecordExists(N5_N6_T5, R1, PolarisPrivilege.TABLE_DROP); + this.ensureGrantRecordExists(N5, R2, PolarisPrivilege.TABLE_WRITE_DATA); + this.ensureGrantRecordExists(catalog, R2, PolarisPrivilege.VIEW_LIST); + PolarisBaseEntity PR1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + PolarisBaseEntity PR2 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL_ROLE, "PR2"); + this.ensureGrantRecordExists(R1, PR1, PolarisPrivilege.CATALOG_ROLE_USAGE); + this.ensureGrantRecordExists(R2, PR1, PolarisPrivilege.CATALOG_ROLE_USAGE); + this.ensureGrantRecordExists(R2, PR2, PolarisPrivilege.CATALOG_ROLE_USAGE); + PolarisBaseEntity P1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL, "P1"); + 
PolarisBaseEntity P2 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL, "P2"); + this.ensureGrantRecordExists(PR1, P1, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + this.ensureGrantRecordExists(PR2, P1, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + this.ensureGrantRecordExists(PR2, P2, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + } + + void testBrowse() { + // create test catalog + PolarisBaseEntity catalog = this.createTestCatalog("test"); + Assertions.assertNotNull(catalog); + + // should see 2 top-level namespaces + this.validateListReturn( + List.of(catalog), + PolarisEntityType.NAMESPACE, + List.of( + ImmutablePair.of("N1", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("N5", PolarisEntitySubType.NULL_SUBTYPE))); + + // should see 3 top-level catalog roles including the admin one + this.validateListReturn( + List.of(catalog), + PolarisEntityType.CATALOG_ROLE, + List.of( + ImmutablePair.of( + PolarisEntityConstants.getNameOfCatalogAdminRole(), + PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("R1", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("R2", PolarisEntitySubType.NULL_SUBTYPE))); + + // 3 principals including the bootstrap root principal + this.validateListReturn( + null, + PolarisEntityType.PRINCIPAL, + List.of( + ImmutablePair.of( + PolarisEntityConstants.getRootPrincipalName(), PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("P1", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("P2", PolarisEntitySubType.NULL_SUBTYPE))); + + // 3 principal roles with the bootstrap service_admin + this.validateListReturn( + null, + PolarisEntityType.PRINCIPAL_ROLE, + List.of( + ImmutablePair.of("PR1", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("PR2", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of( + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole(), + PolarisEntitySubType.NULL_SUBTYPE))); + + // three namespaces under top-level namespace N1 + PolarisBaseEntity N1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, 
"N1"); + this.validateListReturn( + List.of(catalog, N1), + PolarisEntityType.NAMESPACE, + PolarisEntitySubType.NULL_SUBTYPE, + List.of( + ImmutablePair.of("N2", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("N3", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("N4", PolarisEntitySubType.NULL_SUBTYPE))); + this.validateListReturn( + List.of(catalog, N1), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + List.of(ImmutablePair.of("T4", PolarisEntitySubType.TABLE))); + PolarisBaseEntity N5 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + this.validateListReturn( + List.of(catalog, N5), + PolarisEntityType.NAMESPACE, + List.of(ImmutablePair.of("N6", PolarisEntitySubType.NULL_SUBTYPE))); + + // two tables and one view under namespace N1/N2 + PolarisBaseEntity N1_N2 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + // table or view object + this.validateListReturn( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + List.of( + ImmutablePair.of("T1", PolarisEntitySubType.TABLE), + ImmutablePair.of("T2", PolarisEntitySubType.TABLE), + ImmutablePair.of("V1", PolarisEntitySubType.VIEW))); + // table object only + this.validateListReturn( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + List.of( + ImmutablePair.of("T1", PolarisEntitySubType.TABLE), + ImmutablePair.of("T2", PolarisEntitySubType.TABLE))); + // view object only + this.validateListReturn( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.VIEW, + List.of(ImmutablePair.of("V1", PolarisEntitySubType.VIEW))); + // list all principals + this.validateListReturn( + null, + PolarisEntityType.PRINCIPAL, + List.of( + ImmutablePair.of("root", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("P1", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("P2", 
PolarisEntitySubType.NULL_SUBTYPE))); + // list all principal roles + this.validateListReturn( + null, + PolarisEntityType.PRINCIPAL_ROLE, + List.of( + ImmutablePair.of( + PolarisEntityConstants.getNameOfPrincipalServiceAdminRole(), + PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("PR1", PolarisEntitySubType.NULL_SUBTYPE), + ImmutablePair.of("PR2", PolarisEntitySubType.NULL_SUBTYPE))); + } + + /** Test that entity updates work well */ + void testUpdateEntities() { + // create test catalog + PolarisBaseEntity catalog = this.createTestCatalog("test"); + Assertions.assertNotNull(catalog); + + // find table N5/N6/T6 + PolarisBaseEntity N5 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + PolarisBaseEntity N5_N6 = + this.ensureExistsByName(List.of(catalog, N5), PolarisEntityType.NAMESPACE, "N6"); + PolarisBaseEntity T6v1 = + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T6"); + Assertions.assertNotNull(T6v1); + + // update the entity + PolarisBaseEntity T6v2 = + this.updateEntity( + List.of(catalog, N5, N5_N6), + T6v1, + "{\"v2property\": \"some value\"}", + "{\"v2internal_property\": \"some other value\"}"); + Assertions.assertNotNull(T6v2); + + // update it again + PolarisBaseEntity T6v3 = + this.updateEntity( + List.of(catalog, N5, N5_N6), + T6v2, + "{\"v3property\": \"some value\"}", + "{\"v3internal_property\": \"some other value\"}"); + Assertions.assertNotNull(T6v3); + + // now simulate a concurrency issue where another thread tries to update T6v2 again. 
This should + // not be updated + PolarisBaseEntity T6v3p = + this.updateEntity( + List.of(catalog, N5, N5_N6), + T6v2, + "{\"v3pproperty\": \"some value\"}", + "{\"v3pinternal_property\": \"some other value\"}"); + Assertions.assertNull(T6v3p); + + // update an entity which does not exist + PolarisBaseEntity T5v1 = + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T5"); + T5v1.setId(100000L); + PolarisBaseEntity notExists = + this.updateEntity( + List.of(catalog, N5, N5_N6), + T5v1, + "{\"v3pproperty\": \"some value\"}", + "{\"v3pinternal_property\": \"some other value\"}"); + Assertions.assertNull(notExists); + } + + /** Test that dropping entities works well */ + void testDropEntities() { + // create test catalog + PolarisBaseEntity catalog = this.createTestCatalog("test"); + Assertions.assertNotNull(catalog); + + // find namespace N1/N2 + PolarisBaseEntity N1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + PolarisBaseEntity N1_N2 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + + // attempt to drop the N1/N2 namespace. Will fail because not empty + this.dropEntity(List.of(catalog, N1), N1_N2); + + // attempt to drop the N1/N4 namespace. 
Will succeed because empty + PolarisBaseEntity N1_N4 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N4"); + this.dropEntity(List.of(catalog, N1), N1_N4); + + // find table N5/N6/T6 + PolarisBaseEntity N5 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + PolarisBaseEntity N5_N6 = + this.ensureExistsByName(List.of(catalog, N5), PolarisEntityType.NAMESPACE, "N6"); + PolarisBaseEntity T6 = + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T6"); + Assertions.assertNotNull(T6); + + // drop table N5/N6/T6 + this.dropEntity(List.of(catalog, N5, N5_N6), T6); + + // drop the catalog role R2 + PolarisBaseEntity R2 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R2"); + this.dropEntity(List.of(catalog), R2); + + // attempt to drop the entire catalog, should not work since not empty + this.dropEntity(null, catalog); + + // now drop everything + PolarisBaseEntity T1 = + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T1"); + this.dropEntity(List.of(catalog, N1, N1_N2), T1); + PolarisBaseEntity T2 = + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T2"); + this.dropEntity(List.of(catalog, N1, N1_N2), T2); + PolarisBaseEntity V1 = + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.VIEW, + "V1"); + this.dropEntity(List.of(catalog, N1, N1_N2), V1); + this.dropEntity(List.of(catalog, N1), N1_N2); + + PolarisBaseEntity N1_N3 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N3"); + PolarisBaseEntity T3 = + this.ensureExistsByName( + List.of(catalog, N1, N1_N3), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T3"); + this.dropEntity(List.of(catalog, N1, N1_N3), T3); + 
PolarisBaseEntity V2 = + this.ensureExistsByName( + List.of(catalog, N1, N1_N3), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.VIEW, + "V2"); + this.dropEntity(List.of(catalog, N1, N1_N3), V2); + this.dropEntity(List.of(catalog, N1), N1_N3); + + PolarisBaseEntity T4 = + this.ensureExistsByName( + List.of(catalog, N1), PolarisEntityType.TABLE_LIKE, PolarisEntitySubType.TABLE, "T4"); + this.dropEntity(List.of(catalog, N1), T4); + this.dropEntity(List.of(catalog), N1); + + PolarisBaseEntity T5 = + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.TABLE, + "T5"); + this.dropEntity(List.of(catalog, N5, N5_N6), T5); + this.dropEntity(List.of(catalog, N5), N5_N6); + this.dropEntity(List.of(catalog), N5); + + // attempt to drop the catalog again, should fail because of role R1 + this.dropEntity(null, catalog); + + // catalog exists + PolarisMetaStoreManager.EntityResult catalogFound = + polarisMetaStoreManager.readEntityByName( + this.polarisCallContext, + null, + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE, + "test"); + // success and found + Assertions.assertTrue(catalogFound.isSuccess()); + Assertions.assertNotNull(catalogFound.getEntity()); + + // drop the last role + PolarisBaseEntity R1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R1"); + this.dropEntity(List.of(catalog), R1); + + // the catalog admin role cannot be dropped + PolarisBaseEntity CATALOG_ADMIN = + this.ensureExistsByName( + List.of(catalog), + PolarisEntityType.CATALOG_ROLE, + PolarisEntityConstants.getNameOfCatalogAdminRole()); + this.dropEntity(List.of(catalog), CATALOG_ADMIN); + // should be found since it is undroppable + this.ensureExistsByName( + List.of(catalog), + PolarisEntityType.CATALOG_ROLE, + PolarisEntityConstants.getNameOfCatalogAdminRole()); + + // drop the catalog, should work now. 
The CATALOG_ADMIN role will be dropped too + this.dropEntity(null, catalog); + + // catalog exists? + catalogFound = + polarisMetaStoreManager.readEntityByName( + this.polarisCallContext, + null, + PolarisEntityType.CATALOG, + PolarisEntitySubType.NULL_SUBTYPE, + "test"); + // expect not found + Assertions.assertEquals( + catalogFound.getReturnStatus(), PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND); + + // drop the principal role PR1 + PolarisBaseEntity PR1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + this.dropEntity(null, PR1); + + // drop the principal P1 + PolarisBaseEntity P1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL, "P1"); + this.dropEntity(null, P1); + } + + /** Test granting/revoking privileges */ + public void testPrivileges() { + // create test catalog + PolarisBaseEntity catalog = this.createTestCatalog("test"); + Assertions.assertNotNull(catalog); + + // get catalog role R1 + PolarisBaseEntity R1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R1"); + + // get principal role PR1 + PolarisBaseEntity PR1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + + // get principal P1 + PolarisBaseEntity P1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL, "P1"); + + // test revoking usage on catalog/principal roles + this.revokeToGrantee(catalog, R1, PR1, PolarisPrivilege.CATALOG_ROLE_USAGE); + this.revokeToGrantee(null, PR1, P1, PolarisPrivilege.PRINCIPAL_ROLE_USAGE); + + // remove some privileges + PolarisBaseEntity N1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + PolarisBaseEntity N1_N2 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + PolarisBaseEntity N5 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + PolarisBaseEntity N5_N6 = + this.ensureExistsByName( + List.of(catalog, N5), + PolarisEntityType.NAMESPACE, + 
PolarisEntitySubType.ANY_SUBTYPE, + "N6"); + PolarisBaseEntity N5_N6_T5 = + this.ensureExistsByName( + List.of(catalog, N5, N5_N6), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + "T5"); + + // revoke grants + this.revokePrivilege(R1, List.of(catalog, N1), N1_N2, PolarisPrivilege.TABLE_READ_DATA); + + // revoke priv from the catalog itself + this.revokePrivilege(R1, List.of(catalog), catalog, PolarisPrivilege.VIEW_CREATE); + + // revoke privs from securables inside the catalog itself + this.revokePrivilege(R1, List.of(catalog), N5, PolarisPrivilege.TABLE_LIST); + this.revokePrivilege(R1, List.of(catalog, N5, N5_N6), N5_N6_T5, PolarisPrivilege.TABLE_DROP); + + // test with some entity ids which are prefixes of other entity ids + PolarisBaseEntity PR900 = + this.createEntity( + null, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + "PR900", + 900L); + PolarisBaseEntity PR9000 = + this.createEntity( + null, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE, + "PR9000", + 9000L); + + // assign catalog role to PR9000 + grantToGrantee(catalog, R1, PR9000, PolarisPrivilege.CATALOG_ROLE_USAGE); + + PolarisMetaStoreManager.LoadGrantsResult loadGrantsResult = + polarisMetaStoreManager.loadGrantsToGrantee(this.polarisCallContext, 0L, PR9000.getId()); + this.validateLoadedGrants(loadGrantsResult, true); + Assertions.assertEquals(1, loadGrantsResult.getGrantRecords().size()); + Assertions.assertEquals( + R1.getCatalogId(), loadGrantsResult.getGrantRecords().get(0).getSecurableCatalogId()); + Assertions.assertEquals(R1.getId(), loadGrantsResult.getGrantRecords().get(0).getSecurableId()); + + loadGrantsResult = + polarisMetaStoreManager.loadGrantsToGrantee(this.polarisCallContext, 0L, PR900.getId()); + Assertions.assertNotNull(loadGrantsResult); + Assertions.assertEquals(0, loadGrantsResult.getGrantRecords().size()); + } + + /** + * Rename an entity and validate it worked + * + * @param catPath catalog path + * 
@param entity entity to rename + * @param newCatPath new catalog path + * @param newName new name + */ + void renameEntity( + List<PolarisEntityCore> catPath, + PolarisBaseEntity entity, + List<PolarisEntityCore> newCatPath, + String newName) { + + // save old name + String oldName = entity.getName(); + + // the renamed entity + PolarisEntity renamedEntityInput = new PolarisEntity(entity); + renamedEntityInput.setName(newName); + String updatedInternalPropertiesString = "updatedDataForInternalProperties1234"; + String updatedPropertiesString = "updatedDataForProperties9876"; + + // this is to test that properties are also updated during the rename operation + renamedEntityInput.setInternalProperties(updatedInternalPropertiesString); + renamedEntityInput.setProperties(updatedPropertiesString); + + // check to see if we would have a name conflict + PolarisMetaStoreManager.EntityResult newNameLookup = + polarisMetaStoreManager.readEntityByName( + polarisCallContext, + newCatPath == null ? catPath : newCatPath, + entity.getType(), + PolarisEntitySubType.ANY_SUBTYPE, + newName); + + // rename it + PolarisBaseEntity renamedEntity = + polarisMetaStoreManager + .renameEntity(polarisCallContext, catPath, entity, newCatPath, renamedEntityInput) + .getEntity(); + + // the rename should succeed when there was no name conflict + if (newNameLookup.getReturnStatus() == PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND) { + Assertions.assertNotNull(renamedEntity); + + // ensure it exists + PolarisBaseEntity renamedEntityOut = + this.ensureExistsByName( + newCatPath == null ? 
catPath : newCatPath, + entity.getType(), + entity.getSubType(), + newName); + + // what is returned should be the same as what has been loaded + Assertions.assertEquals(renamedEntityOut, renamedEntity); + + // ensure properties have been updated + Assertions.assertEquals( + updatedInternalPropertiesString, renamedEntityOut.getInternalProperties()); + Assertions.assertEquals(updatedPropertiesString, renamedEntityOut.getProperties()); + + // ensure the old one is gone + PolarisMetaStoreManager.EntityResult res = + polarisMetaStoreManager.readEntityByName( + polarisCallContext, catPath, entity.getType(), entity.getSubType(), oldName); + + // not found + Assertions.assertEquals( + res.getReturnStatus(), PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND); + } else { + // cannot rename since the entity exists + Assertions.assertNull(renamedEntity); + } + } + + /** Play with renaming entities */ + public void testRename() { + // create test catalog + PolarisBaseEntity catalog = this.createTestCatalog("test"); + Assertions.assertNotNull(catalog); + + // get catalog role R1 and rename it to R3 + PolarisBaseEntity R1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.CATALOG_ROLE, "R1"); + + // rename it to something that exists, should fail + this.renameEntity(List.of(catalog), R1, List.of(catalog), "R2"); + + // rename it to something that exists using null newCatalogPath as shorthand, should fail + this.renameEntity(List.of(catalog), R1, null, "R2"); + + // this one should succeed + this.renameEntity(List.of(catalog), R1, List.of(catalog), "R3"); + + // get principal role PR1 and rename it to PR3 + PolarisBaseEntity PR1 = this.ensureExistsByName(null, PolarisEntityType.PRINCIPAL_ROLE, "PR1"); + // exists => fails + this.renameEntity(null, PR1, null, "PR2"); + // does not exist => succeeds + this.renameEntity(null, PR1, null, "PR3"); + + // get principal P1 and rename it to P3 + PolarisBaseEntity P1 = this.ensureExistsByName(null,
PolarisEntityType.PRINCIPAL, "P1"); + // exists => fails + this.renameEntity(null, P1, null, "P2"); + // does not exist => succeeds + this.renameEntity(null, P1, null, "P3"); + + // N5 namespace + PolarisBaseEntity N5 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N5"); + + // rename N1/N2/T1 to N5/T7 + PolarisBaseEntity N1 = + this.ensureExistsByName(List.of(catalog), PolarisEntityType.NAMESPACE, "N1"); + PolarisBaseEntity N1_N2 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N2"); + PolarisBaseEntity N1_N3 = + this.ensureExistsByName(List.of(catalog, N1), PolarisEntityType.NAMESPACE, "N3"); + PolarisBaseEntity N1_N2_T1 = + this.ensureExistsByName( + List.of(catalog, N1, N1_N2), + PolarisEntityType.TABLE_LIKE, + PolarisEntitySubType.ANY_SUBTYPE, + "T1"); + // view with the same name exists, should fail + this.renameEntity(List.of(catalog, N1, N1_N2), N1_N2_T1, List.of(catalog, N1, N1_N2), "V1"); + // table with the same name exists, should fail + this.renameEntity(List.of(catalog, N1, N1_N2), N1_N2_T1, List.of(catalog, N1, N1_N2), "T2"); + // view with the same name exists, should fail + this.renameEntity(List.of(catalog, N1, N1_N2), N1_N2_T1, List.of(catalog, N1, N1_N3), "V2"); + // table with the same name exists, should fail + this.renameEntity(List.of(catalog, N1, N1_N2), N1_N2_T1, List.of(catalog, N1, N1_N3), "T3"); + + // this should work, T7 does not exist + this.renameEntity(List.of(catalog, N1, N1_N2), N1_N2_T1, List.of(catalog, N5), "T7"); + } + + /** Test the set of functions for the entity cache */ + public void testEntityCache() { + // create test catalog + PolarisBaseEntity catalog = this.createTestCatalog("test"); + Assertions.assertNotNull(catalog); + + // load catalog by name + PolarisBaseEntity TEST = + this.loadCacheEntryByName( + PolarisEntityConstants.getNullId(), + PolarisEntityConstants.getNullId(), + PolarisEntityType.CATALOG, + "test"); + + // and again by id + TEST =
this.loadCacheEntryById(TEST.getCatalogId(), TEST.getId()); + + // get namespace N1 + PolarisBaseEntity N1 = + this.loadCacheEntryByName(TEST.getId(), TEST.getId(), PolarisEntityType.NAMESPACE, "N1"); + + // refresh it, nothing changed + this.refreshCacheEntry( + N1.getEntityVersion(), + N1.getGrantRecordsVersion(), + N1.getType(), + N1.getCatalogId(), + N1.getId()); + + // now update this N1 entity + this.updateEntity(List.of(TEST), N1, "{\"v1property\": \"property value\"}", null); + + // get namespace N1 + PolarisBaseEntity N1p = + this.loadCacheEntryByName(TEST.getId(), TEST.getId(), PolarisEntityType.NAMESPACE, "N1"); + + // entity version should have changed + Assertions.assertEquals(N1.getEntityVersion() + 1, N1p.getEntityVersion()); + + // but not the grant records version + Assertions.assertEquals(N1.getGrantRecordsVersion(), N1p.getGrantRecordsVersion()); + + // refresh it, nothing changed + this.refreshCacheEntry( + N1.getEntityVersion(), + N1.getGrantRecordsVersion(), + N1.getType(), + N1.getCatalogId(), + N1.getId()); + + // load role R1 + PolarisBaseEntity R1 = + this.loadCacheEntryByName(TEST.getId(), TEST.getId(), PolarisEntityType.CATALOG_ROLE, "R1"); + R1 = this.loadCacheEntryById(R1.getCatalogId(), R1.getId()); + + // add a grant record to N1 + this.grantPrivilege(R1, List.of(TEST), N1, PolarisPrivilege.NAMESPACE_FULL_METADATA); + + // get namespace N1 again + PolarisBaseEntity N1pp = + this.loadCacheEntryByName(TEST.getId(), TEST.getId(), PolarisEntityType.NAMESPACE, "N1"); + + // entity version should not have changed compared to N1p + Assertions.assertEquals(N1p.getEntityVersion(), N1pp.getEntityVersion()); + + // but the grant records version should have + Assertions.assertEquals(N1p.getGrantRecordsVersion() + 1, N1pp.getGrantRecordsVersion()); + + // refresh it, grants should be updated + this.refreshCacheEntry( + N1.getEntityVersion(), + N1.getGrantRecordsVersion(), + N1.getType(), + N1.getCatalogId(), + N1.getId()); + + // now validate 
that loading something which does not exist also works + this.loadCacheEntryByName( + N1.getCatalogId(), N1.getId(), PolarisEntityType.TABLE_LIKE, "do_not_exists", false); + this.loadCacheEntryById(N1.getCatalogId() + 1000, N1.getId(), false); + + // refresh a purged entity + this.refreshCacheEntry( + 1, 1, PolarisEntityType.TABLE_LIKE, N1.getCatalogId() + 1000, N1.getId(), false); + } +} diff --git a/polaris-server.yml b/polaris-server.yml new file mode 100644 index 0000000000..886221937e --- /dev/null +++ b/polaris-server.yml @@ -0,0 +1,165 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +server: + # Maximum number of threads. + maxThreads: 200 + + # Minimum number of threads to keep alive. + minThreads: 10 + applicationConnectors: + # HTTP-specific options. + - type: http + + # The port on which the HTTP server listens for service requests. + port: 8181 + + adminConnectors: + - type: http + port: 8182 + + # The hostname of the interface to which the HTTP server socket will be bound. If omitted, the + # socket will listen on all interfaces. + #bindHost: localhost + + # ssl: + # keyStore: ./example.keystore + # keyStorePassword: example + # + # keyStoreType: JKS # (optional, JKS is default) + + # HTTP request log settings + requestLog: + appenders: + # Settings for logging to stdout. + - type: console + + # Settings for logging to a file. + - type: file + + # The file to which statements will be logged.
currentLogFilename: ./logs/request.log + + # When the log file rolls over, the file will be archived to requests-2012-03-15.log.gz, + # requests.log will be truncated, and new statements written to it. + archivedLogFilenamePattern: ./logs/requests-%d.log.gz + + # The maximum number of log files to archive. + archivedFileCount: 14 + + # Enable archiving if the request log entries go to their own file + archive: true + +# Either 'jdbc' or 'polaris'; specifies the underlying delegate catalog +baseCatalogType: "polaris" + +featureConfiguration: + ENFORCE_PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_CHECKING: false + DISABLE_TOKEN_GENERATION_FOR_USER_PRINCIPALS: true + SUPPORTED_CATALOG_STORAGE_TYPES: + - S3 + - GCS + - AZURE + - FILE + + +# Whether we want to enable Snowflake OAuth locally. Setting this to true requires +# that you go through the setup outlined in the `README.md` file, specifically the +# `OAuth + Snowflake: Local Testing And Then Some` section +callContextResolver: + type: default + +realmContextResolver: + type: default + +defaultRealms: + - default-realm + +metaStoreManager: + type: in-memory + +# TODO - avoid duplicating token broker config +oauth2: + type: test +# type: default # - uncomment to support Auth0 JWT tokens +# tokenBroker: +# type: symmetric-key +# secret: polaris + +authenticator: + class: io.polaris.service.auth.TestInlineBearerTokenPolarisAuthenticator +# class: io.polaris.service.auth.DefaultPolarisAuthenticator # - uncomment to support Auth0 JWT tokens +# tokenBroker: +# type: symmetric-key +# secret: polaris + +cors: + allowed-origins: + - http://localhost:8080 + allowed-timing-origins: + - http://localhost:8080 + allowed-methods: + - PATCH + - POST + - DELETE + - GET + - PUT + allowed-headers: + - "*" + exposed-headers: + - "*" + preflight-max-age: 600 + allowed-credentials: true + +# Logging settings. + +logging: + + # The default level of all loggers. Can be OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL.
level: INFO + + # Logger-specific levels. + loggers: + org.apache.iceberg.rest: DEBUG + io.polaris: DEBUG + + appenders: + + - type: console + # If true, write log statements to stdout. + # enabled: true + # Do not display log statements below this threshold to stdout. + threshold: ALL + # Custom Logback PatternLayout with threadname. + logFormat: "%-5p [%d{ISO8601} - %-6r] [%t] [%X{aid}%X{sid}%X{tid}%X{wid}%X{oid}%X{srv}%X{job}%X{rid}] %c{30}: %m %kvp%n%ex" + + # Settings for logging to a file. + - type: file + # If true, write log statements to a file. + # enabled: true + # Do not write log statements below this threshold to the file. + threshold: ALL + layout: + type: polaris + flattenKeyValues: false + includeKeyValues: true + + # The file to which statements will be logged. + currentLogFilename: ./logs/polaris.log + # When the log file rolls over, the file will be archived to polaris-2012-03-15.log.gz, + # polaris.log will be truncated, and new statements written to it. + archivedLogFilenamePattern: ./logs/polaris-%d.log.gz + # The maximum number of log files to archive. + archivedFileCount: 14 diff --git a/polaris-service/build.gradle b/polaris-service/build.gradle new file mode 100644 index 0000000000..fb99ef01ef --- /dev/null +++ b/polaris-service/build.gradle @@ -0,0 +1,212 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +plugins { + id "com.github.johnrengelman.shadow" version "8.1.1" + id "org.openapi.generator" version "7.6.0" +} + +dependencies { + implementation(project(":polaris-core")) + + implementation(platform("org.apache.iceberg:iceberg-bom:${icebergVersion}")) + implementation("org.apache.iceberg:iceberg-api") + implementation("org.apache.iceberg:iceberg-core") + implementation("org.apache.iceberg:iceberg-aws") + + implementation(platform("io.dropwizard:dropwizard-bom:${dropwizardVersion}")) + implementation("io.dropwizard:dropwizard-core") + implementation("io.dropwizard:dropwizard-auth") + implementation("io.dropwizard:dropwizard-json-logging") + + implementation(platform("com.fasterxml.jackson:jackson-bom:${jacksonVersion}")) + implementation("com.fasterxml.jackson.dataformat:jackson-dataformat-xml") + + implementation(platform("io.opentelemetry:opentelemetry-bom:1.38.0")) + implementation("io.opentelemetry:opentelemetry-api") + implementation("io.opentelemetry:opentelemetry-sdk-trace") + implementation("io.opentelemetry:opentelemetry-exporter-logging") + implementation("io.opentelemetry.semconv:opentelemetry-semconv:1.25.0-alpha") + + implementation("com.github.ben-manes.caffeine:caffeine:3.1.8") + + implementation("io.prometheus:prometheus-metrics-exporter-servlet-jakarta:1.3.0") + implementation(platform("io.micrometer:micrometer-bom:1.13.2")) + implementation("io.micrometer:micrometer-core") + implementation("io.micrometer:micrometer-registry-prometheus") + + implementation("io.swagger:swagger-annotations:1.6.14") + implementation("io.swagger:swagger-jaxrs:1.6.14") + implementation("javax.annotation:javax.annotation-api:1.3.2") + + implementation("org.apache.hadoop:hadoop-client-api:${hadoopVersion}") + + implementation("org.xerial:sqlite-jdbc:3.45.1.0") + implementation("com.auth0:java-jwt:4.2.1") + + implementation("ch.qos.logback:logback-core:1.4.14") + implementation("org.bouncycastle:bcprov-jdk18on:1.78") + + 
implementation("com.google.cloud:google-cloud-storage:2.39.0") + implementation(platform("software.amazon.awssdk:bom:2.26.25")) + implementation("software.amazon.awssdk:sts") + implementation("software.amazon.awssdk:iam-policy-builder") + implementation("software.amazon.awssdk:s3") + + testImplementation("org.apache.iceberg:iceberg-api:${icebergVersion}:tests") + testImplementation("org.apache.iceberg:iceberg-core:${icebergVersion}:tests") + testImplementation("io.dropwizard:dropwizard-testing") + testImplementation("org.testcontainers:testcontainers:1.19.8") + testImplementation("com.adobe.testing:s3mock-testcontainers:3.9.1") + + testImplementation("org.apache.iceberg:iceberg-spark-3.5_2.12") + testImplementation("org.apache.iceberg:iceberg-spark-extensions-3.5_2.12") + testImplementation("org.apache.spark:spark-sql_2.12:3.5.1") { + // exclude log4j dependencies + exclude group: "org.apache.logging.log4j", module: "log4j-slf4j2-impl" + exclude group: "org.apache.logging.log4j", module: "log4j-api" + exclude group: "org.apache.logging.log4j", module: "log4j-1.2-api" + } + + testImplementation("software.amazon.awssdk:glue") + testImplementation("software.amazon.awssdk:kms") + testImplementation("software.amazon.awssdk:dynamodb") +} + +openApiGenerate { + inputSpec = "$rootDir/spec/rest-catalog-open-api.yaml" + generatorName = "jaxrs-resteasy" + outputDir = "$buildDir/generated" + apiPackage = "io.polaris.service.catalog.api" + ignoreFileOverride = "$rootDir/.openapi-generator-ignore" + removeOperationIdPrefix = true + templateDir = "$rootDir/server-templates" + globalProperties = [ + apis : "", + models : "false", + apiDocs : "false", + modelTests: "false", + ] + configOptions = [ + resourceName : "catalog", + useTags : "true", + useBeanValidation: "false", + sourceFolder : "src/main/java", + useJakartaEe : "true" + ] + openapiNormalizer = ["REFACTOR_ALLOF_WITH_PROPERTIES_ONLY": "true"] + additionalProperties =
[apiNamePrefix: "IcebergRest", apiNameSuffix: "", metricsPrefix: "polaris"] + serverVariables = [basePath: "api/catalog"] + importMappings = [ + CatalogConfig : "org.apache.iceberg.rest.responses.ConfigResponse", + CommitTableResponse : "org.apache.iceberg.rest.responses.LoadTableResponse", + CreateNamespaceRequest : "org.apache.iceberg.rest.requests.CreateNamespaceRequest", + CreateNamespaceResponse : "org.apache.iceberg.rest.responses.CreateNamespaceResponse", + CreateTableRequest : "org.apache.iceberg.rest.requests.CreateTableRequest", + ErrorModel : "org.apache.iceberg.rest.responses.ErrorResponse", + GetNamespaceResponse : "org.apache.iceberg.rest.responses.GetNamespaceResponse", + ListNamespacesResponse : "org.apache.iceberg.rest.responses.ListNamespacesResponse", + ListTablesResponse : "org.apache.iceberg.rest.responses.ListTablesResponse", + LoadTableResult : "org.apache.iceberg.rest.responses.LoadTableResponse", + LoadViewResult : "org.apache.iceberg.rest.responses.LoadTableResponse", + OAuthTokenResponse : "org.apache.iceberg.rest.responses.OAuthTokenResponse", + OAuthErrorResponse : "org.apache.iceberg.rest.responses.OAuthErrorResponse", + RenameTableRequest : "org.apache.iceberg.rest.requests.RenameTableRequest", + ReportMetricsRequest : "org.apache.iceberg.rest.requests.ReportMetricsRequest", + UpdateNamespacePropertiesRequest : "org.apache.iceberg.rest.requests.UpdateNamespacePropertiesRequest", + UpdateNamespacePropertiesResponse: "org.apache.iceberg.rest.responses.UpdateNamespacePropertiesResponse", + CommitTransactionRequest : "org.apache.iceberg.rest.requests.CommitTransactionRequest", + CreateViewRequest : "org.apache.iceberg.rest.requests.CreateViewRequest", + RegisterTableRequest : "org.apache.iceberg.rest.requests.RegisterTableRequest", + IcebergErrorResponse : "org.apache.iceberg.rest.responses.ErrorResponse", + OAuthError : "org.apache.iceberg.rest.responses.ErrorResponse", + + // Custom types defined below + CommitViewRequest : 
"io.polaris.service.types.CommitViewRequest", + TokenType : "io.polaris.service.types.TokenType", + CommitTableRequest : "io.polaris.service.types.CommitTableRequest", + + NotificationRequest : "io.polaris.service.types.NotificationRequest", + TableUpdateNotification : "io.polaris.service.types.TableUpdateNotification", + NotificationType : "io.polaris.service.types.NotificationType" + ] +} + +task generatePolarisService(type: org.openapitools.generator.gradle.plugin.tasks.GenerateTask) { + inputSpec = "$rootDir/spec/polaris-management-service.yml" + generatorName = "jaxrs-resteasy" + outputDir = "$buildDir/generated" + apiPackage = "io.polaris.service.admin.api" + modelPackage = "io.polaris.core.admin.model" + ignoreFileOverride = "$rootDir/.openapi-generator-ignore" + removeOperationIdPrefix = true + templateDir = "$rootDir/server-templates" + globalProperties = [ + apis : "", + models : "false", + apiDocs : "false", + modelTests: "false" + ] + configOptions = [ + useBeanValidation : "true", + sourceFolder : "src/main/java", + useJakartaEe : "true", + generateBuilders : "true", + generateConstructorWithAllArgs: "true", + ] + additionalProperties = [apiNamePrefix: "Polaris", apiNameSuffix: "Api", metricsPrefix: "polaris"] + serverVariables = [basePath: "api/v1"] +} + +compileJava.dependsOn tasks.openApiGenerate, tasks.generatePolarisService +sourceSets.main.java.srcDirs += ["$buildDir/generated/src/main/java"] + +test { + if (System.getenv("AWS_REGION") == null) { + environment "AWS_REGION", "us-west-2" + } + jvmArgs "--add-exports", "java.base/sun.nio.ch=ALL-UNNAMED" + useJUnitPlatform() + maxParallelForks = 4 +} + +task runApp(type: JavaExec) { + if (System.getenv("AWS_REGION") == null) { + environment "AWS_REGION", "us-west-2" + } + classpath = sourceSets.main.runtimeClasspath + mainClass = "io.polaris.service.PolarisApplication" + args "server", "$rootDir/polaris-server.yml" +} + +application { + mainClass = "io.polaris.service.PolarisApplication" +} + +jar { 
+ manifest { + attributes "Main-Class": "io.polaris.service.PolarisApplication" + } +} + +shadowJar { + mainClassName = "io.polaris.service.PolarisApplication" + mergeServiceFiles() + zip64 true +} + +build.dependsOn(shadowJar) diff --git a/polaris-service/src/main/java/io/polaris/service/BootstrapRealmsCommand.java b/polaris-service/src/main/java/io/polaris/service/BootstrapRealmsCommand.java new file mode 100644 index 0000000000..cf17edcf77 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/BootstrapRealmsCommand.java @@ -0,0 +1,60 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service; + +import io.dropwizard.core.cli.ConfiguredCommand; +import io.dropwizard.core.setup.Bootstrap; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.service.config.ConfigurationStoreAware; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.context.CallContextResolver; +import net.sourceforge.argparse4j.inf.Namespace; + +/** + * Command for bootstrapping root level service principals for each realm. This command will invoke + * a default implementation which generates random user id and secret. These credentials will be + * printed out to the log and standard output (stdout). 
+ */ +public class BootstrapRealmsCommand extends ConfiguredCommand { + public BootstrapRealmsCommand() { + super("bootstrap", "bootstraps principal credentials for all realms and prints them to log"); + } + + @Override + protected void run( + Bootstrap bootstrap, + Namespace namespace, + PolarisApplicationConfig configuration) + throws Exception { + MetaStoreManagerFactory metaStoreManagerFactory = configuration.getMetaStoreManagerFactory(); + + PolarisConfigurationStore configurationStore = configuration.getConfigurationStore(); + if (metaStoreManagerFactory instanceof ConfigurationStoreAware) { + ((ConfigurationStoreAware) metaStoreManagerFactory).setConfigurationStore(configurationStore); + } + RealmEntityManagerFactory entityManagerFactory = + new RealmEntityManagerFactory(metaStoreManagerFactory); + CallContextResolver callContextResolver = configuration.getCallContextResolver(); + callContextResolver.setEntityManagerFactory(entityManagerFactory); + if (callContextResolver instanceof ConfigurationStoreAware csa) { + csa.setConfigurationStore(configurationStore); + } + + metaStoreManagerFactory.bootstrapRealms(configuration.getDefaultRealms()); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/IcebergExceptionMapper.java b/polaris-service/src/main/java/io/polaris/service/IcebergExceptionMapper.java new file mode 100644 index 0000000000..a623ea3e81 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/IcebergExceptionMapper.java @@ -0,0 +1,100 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service; + +import jakarta.ws.rs.WebApplicationException; +import jakarta.ws.rs.core.MediaType; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.ext.ExceptionMapper; +import org.apache.iceberg.exceptions.AlreadyExistsException; +import org.apache.iceberg.exceptions.CherrypickAncestorCommitException; +import org.apache.iceberg.exceptions.CleanableFailure; +import org.apache.iceberg.exceptions.CommitFailedException; +import org.apache.iceberg.exceptions.CommitStateUnknownException; +import org.apache.iceberg.exceptions.DuplicateWAPCommitException; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.exceptions.NamespaceNotEmptyException; +import org.apache.iceberg.exceptions.NoSuchIcebergTableException; +import org.apache.iceberg.exceptions.NoSuchNamespaceException; +import org.apache.iceberg.exceptions.NoSuchTableException; +import org.apache.iceberg.exceptions.NoSuchViewException; +import org.apache.iceberg.exceptions.NotAuthorizedException; +import org.apache.iceberg.exceptions.NotFoundException; +import org.apache.iceberg.exceptions.RESTException; +import org.apache.iceberg.exceptions.RuntimeIOException; +import org.apache.iceberg.exceptions.ServiceFailureException; +import org.apache.iceberg.exceptions.ServiceUnavailableException; +import org.apache.iceberg.exceptions.UnprocessableEntityException; +import org.apache.iceberg.exceptions.ValidationException; +import org.apache.iceberg.rest.responses.ErrorResponse; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public 
class IcebergExceptionMapper implements ExceptionMapper { + private static final Logger LOG = LoggerFactory.getLogger(IcebergExceptionMapper.class); + + public IcebergExceptionMapper() {} + + @Override + public Response toResponse(RuntimeException runtimeException) { + LOG.info("Handling runtimeException {}", runtimeException.getMessage()); + int responseCode = + switch (runtimeException) { + case NoSuchNamespaceException e -> Response.Status.NOT_FOUND.getStatusCode(); + case NoSuchIcebergTableException e -> Response.Status.NOT_FOUND.getStatusCode(); + case NoSuchTableException e -> Response.Status.NOT_FOUND.getStatusCode(); + case NoSuchViewException e -> Response.Status.NOT_FOUND.getStatusCode(); + case NotFoundException e -> Response.Status.NOT_FOUND.getStatusCode(); + case AlreadyExistsException e -> Response.Status.CONFLICT.getStatusCode(); + case CommitFailedException e -> Response.Status.CONFLICT.getStatusCode(); + case UnprocessableEntityException e -> 422; + case CherrypickAncestorCommitException e -> Response.Status.BAD_REQUEST.getStatusCode(); + case CommitStateUnknownException e -> Response.Status.BAD_REQUEST.getStatusCode(); + case DuplicateWAPCommitException e -> Response.Status.BAD_REQUEST.getStatusCode(); + case ForbiddenException e -> Response.Status.FORBIDDEN.getStatusCode(); + case jakarta.ws.rs.ForbiddenException e -> Response.Status.FORBIDDEN.getStatusCode(); + case NotAuthorizedException e -> Response.Status.UNAUTHORIZED.getStatusCode(); + case NamespaceNotEmptyException e -> Response.Status.BAD_REQUEST.getStatusCode(); + case ValidationException e -> Response.Status.BAD_REQUEST.getStatusCode(); + case ServiceUnavailableException e -> Response.Status.SERVICE_UNAVAILABLE.getStatusCode(); + case RuntimeIOException e -> Response.Status.SERVICE_UNAVAILABLE.getStatusCode(); + case ServiceFailureException e -> Response.Status.SERVICE_UNAVAILABLE.getStatusCode(); + case CleanableFailure e -> Response.Status.BAD_REQUEST.getStatusCode(); + case 
RESTException e -> Response.Status.SERVICE_UNAVAILABLE.getStatusCode(); + case IllegalArgumentException e -> Response.Status.BAD_REQUEST.getStatusCode(); + case UnsupportedOperationException e -> Response.Status.NOT_ACCEPTABLE.getStatusCode(); + case WebApplicationException e -> e.getResponse().getStatus(); + default -> Response.Status.INTERNAL_SERVER_ERROR.getStatusCode(); + }; + if (responseCode == Response.Status.INTERNAL_SERVER_ERROR.getStatusCode()) { + LOG.error("Unhandled exception returning INTERNAL_SERVER_ERROR", runtimeException); + } + + ErrorResponse icebergErrorResponse = + ErrorResponse.builder() + .responseCode(responseCode) + .withType(runtimeException.getClass().getSimpleName()) + .withMessage(runtimeException.getMessage()) + .build(); + Response errorResp = + Response.status(responseCode) + .entity(icebergErrorResponse) + .type(MediaType.APPLICATION_JSON_TYPE) + .build(); + LOG.debug("Mapped exception to errorResp: {}", errorResp); + return errorResp; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/IcebergJerseyViolationExceptionMapper.java b/polaris-service/src/main/java/io/polaris/service/IcebergJerseyViolationExceptionMapper.java new file mode 100644 index 0000000000..f80369e107 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/IcebergJerseyViolationExceptionMapper.java @@ -0,0 +1,46 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service; + +import io.dropwizard.jersey.validation.JerseyViolationException; +import jakarta.ws.rs.core.MediaType; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.ext.ExceptionMapper; +import jakarta.ws.rs.ext.Provider; +import org.apache.iceberg.rest.responses.ErrorResponse; + +/** + * Override of the default JerseyViolationExceptionMapper to provide an Iceberg ErrorResponse with + * the exception details. + */ +@Provider +public class IcebergJerseyViolationExceptionMapper + implements ExceptionMapper { + @Override + public Response toResponse(JerseyViolationException exception) { + final String message = "Invalid value: " + exception.getMessage(); + ErrorResponse icebergErrorResponse = + ErrorResponse.builder() + .responseCode(Response.Status.BAD_REQUEST.getStatusCode()) + .withType(exception.getClass().getSimpleName()) + .withMessage(message) + .build(); + return Response.status(Response.Status.BAD_REQUEST) + .type(MediaType.APPLICATION_JSON_TYPE) + .entity(icebergErrorResponse) + .build(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/IcebergJsonProcessingExceptionMapper.java b/polaris-service/src/main/java/io/polaris/service/IcebergJsonProcessingExceptionMapper.java new file mode 100644 index 0000000000..5db1aed9a2 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/IcebergJsonProcessingExceptionMapper.java @@ -0,0 +1,70 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service; + +import com.fasterxml.jackson.core.JsonGenerationException; +import com.fasterxml.jackson.core.JsonParseException; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.exc.InvalidDefinitionException; +import com.fasterxml.jackson.databind.exc.ValueInstantiationException; +import io.dropwizard.jersey.errors.LoggingExceptionMapper; +import jakarta.ws.rs.core.MediaType; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.ext.Provider; +import org.apache.iceberg.rest.responses.ErrorResponse; + +/** + * Override of the default JsonProcessingExceptionMapper to provide an Iceberg ErrorResponse with + * the exception details. This code mostly comes from Dropwizard's {@link + * io.dropwizard.jersey.jackson.JsonProcessingExceptionMapper} + */ +@Provider +public final class IcebergJsonProcessingExceptionMapper + extends LoggingExceptionMapper { + @Override + public Response toResponse(JsonProcessingException exception) { + /* + * If the error is in the JSON generation or an invalid definition, it's a server error. + */ + if (exception instanceof JsonGenerationException + || exception instanceof InvalidDefinitionException) { + return super.toResponse(exception); // LoggingExceptionMapper will log exception + } + + /* + * Otherwise, it's those pesky users. 
+ */ + logger.info("Unable to process JSON: {}", exception.getMessage()); + + String messagePrefix = + switch (exception) { + case JsonParseException e -> "Invalid JSON: "; + case ValueInstantiationException ve -> "Invalid value: "; + default -> ""; + }; + final String message = messagePrefix + exception.getOriginalMessage(); + ErrorResponse icebergErrorResponse = + ErrorResponse.builder() + .responseCode(Response.Status.BAD_REQUEST.getStatusCode()) + .withType(exception.getClass().getSimpleName()) + .withMessage(message) + .build(); + return Response.status(Response.Status.BAD_REQUEST) + .type(MediaType.APPLICATION_JSON_TYPE) + .entity(icebergErrorResponse) + .build(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/PolarisApplication.java b/polaris-service/src/main/java/io/polaris/service/PolarisApplication.java new file mode 100644 index 0000000000..74bff74ca2 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/PolarisApplication.java @@ -0,0 +1,386 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service; + +import com.fasterxml.jackson.annotation.JsonAutoDetect; +import com.fasterxml.jackson.annotation.PropertyAccessor; +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.PropertyNamingStrategies; +import io.dropwizard.auth.AuthDynamicFeature; +import io.dropwizard.auth.AuthFilter; +import io.dropwizard.auth.oauth.OAuthCredentialAuthFilter; +import io.dropwizard.configuration.EnvironmentVariableSubstitutor; +import io.dropwizard.configuration.SubstitutingSourceProvider; +import io.dropwizard.core.Application; +import io.dropwizard.core.setup.Bootstrap; +import io.dropwizard.core.setup.Environment; +import io.micrometer.prometheusmetrics.PrometheusConfig; +import io.micrometer.prometheusmetrics.PrometheusMeterRegistry; +import io.opentelemetry.api.OpenTelemetry; +import io.opentelemetry.api.baggage.propagation.W3CBaggagePropagator; +import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator; +import io.opentelemetry.context.propagation.ContextPropagators; +import io.opentelemetry.context.propagation.TextMapPropagator; +import io.opentelemetry.exporter.logging.LoggingSpanExporter; +import io.opentelemetry.sdk.OpenTelemetrySdk; +import io.opentelemetry.sdk.resources.Resource; +import io.opentelemetry.sdk.trace.SdkTracerProvider; +import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor; +import io.opentelemetry.semconv.ServiceAttributes; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.monitor.PolarisMetricRegistry; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.service.admin.PolarisServiceImpl; +import io.polaris.service.admin.api.PolarisCatalogsApi; +import 
io.polaris.service.admin.api.PolarisPrincipalRolesApi; +import io.polaris.service.admin.api.PolarisPrincipalsApi; +import io.polaris.service.auth.DiscoverableAuthenticator; +import io.polaris.service.catalog.IcebergCatalogAdapter; +import io.polaris.service.catalog.api.IcebergRestCatalogApi; +import io.polaris.service.catalog.api.IcebergRestConfigurationApi; +import io.polaris.service.catalog.api.IcebergRestOAuth2Api; +import io.polaris.service.config.ConfigurationStoreAware; +import io.polaris.service.config.HasEntityManagerFactory; +import io.polaris.service.config.OAuth2ApiService; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.config.Serializers; +import io.polaris.service.config.TaskHandlerConfiguration; +import io.polaris.service.context.CallContextCatalogFactory; +import io.polaris.service.context.CallContextResolver; +import io.polaris.service.context.PolarisCallContextCatalogFactory; +import io.polaris.service.context.RealmContextResolver; +import io.polaris.service.context.SqlliteCallContextCatalogFactory; +import io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory; +import io.polaris.service.storage.PolarisStorageIntegrationProviderImpl; +import io.polaris.service.task.ManifestFileCleanupTaskHandler; +import io.polaris.service.task.TableCleanupTaskHandler; +import io.polaris.service.task.TaskExecutorImpl; +import io.polaris.service.task.TaskFileIOSupplier; +import io.polaris.service.tracing.OpenTelemetryAware; +import io.polaris.service.tracing.TracingFilter; +import io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet; +import jakarta.servlet.DispatcherType; +import jakarta.servlet.Filter; +import jakarta.servlet.FilterChain; +import jakarta.servlet.FilterRegistration; +import jakarta.servlet.ServletException; +import jakarta.servlet.ServletRequest; +import jakarta.servlet.ServletResponse; +import 
jakarta.servlet.http.HttpServletRequest;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.Map;
+import java.util.Objects;
+import java.util.concurrent.Executors;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+import org.apache.iceberg.rest.RESTSerializers;
+import org.eclipse.jetty.servlets.CrossOriginFilter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.MDC;
+import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
+import software.amazon.awssdk.services.sts.StsClient;
+import software.amazon.awssdk.services.sts.StsClientBuilder;
+
+public class PolarisApplication extends Application<PolarisApplicationConfig> {
+  private static final Logger LOGGER = LoggerFactory.getLogger(PolarisApplication.class);
+
+  public static void main(final String[] args) throws Exception {
+    new PolarisApplication().run(args);
+    printAsciiArt();
+  }
+
+  private static void printAsciiArt() {
+    String bannerArt =
+        String.join(
+            "\n",
+            " @@@@ @@@ @ @ @@@@ @ @@@@ @@@@ @ @@@@@ @ @ @@@ @@@@ ",
+            " @ @ @ @ @ @ @ @ @ @ @@ @ @ @ @ @ @ @ @ @ @ ",
+            " @@@@ @ @ @ @@@@@ @@@@ @ @@ @ @@@@@ @ @@@@@ @ @ @ @ @@@",
+            " @ @@@ @@@@ @ @ @ @@ @ @@@@ @@@@ @ @ @ @@ @@ @@@@ @@@ @@@@ ",
+            " ",
+            " ",
+            " ",
+            " ",
+            " /////| ",
+            " //||///T||| ",
+            " ///|||////|||||| ",
+            " //||||T////||||||||| ",
+            " /T| //|||||T///T||//T|||||| ",
+            " //|||/////T||////||/////||||||| //|| ",
+            " //||||||T///////////////////T|||||||T||||| ",
+            " //||||/////T|//////////|///////T|||||T|||||||| ",
+            " //|||||/////|||T////////////////||||||/||||||||| ",
+            ",,..,,,..,,,..,//||||////////||||||||||/////////|||||///||||||||||,,,..,,..,,,..,,,.",
+            ",,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,..,,,.,,,..,,,..,");
+    System.out.println(bannerArt.replaceAll("\\|", "\\\\"));
+  }
+
+  @Override
+  public void initialize(Bootstrap<PolarisApplicationConfig> bootstrap) {
+    // Enable
variable substitution with environment variables + EnvironmentVariableSubstitutor substitutor = new EnvironmentVariableSubstitutor(false); + SubstitutingSourceProvider provider = + new SubstitutingSourceProvider(bootstrap.getConfigurationSourceProvider(), substitutor); + bootstrap.setConfigurationSourceProvider(provider); + + bootstrap.addCommand(new BootstrapRealmsCommand()); + } + + @Override + public void run(PolarisApplicationConfig configuration, Environment environment) { + // PolarisEntityManager will be used for Management APIs and optionally the core Catalog APIs + // depending on the value of the baseCatalogType config. + MetaStoreManagerFactory metaStoreManagerFactory = configuration.getMetaStoreManagerFactory(); + StsClientBuilder stsClientBuilder = StsClient.builder(); + AwsCredentialsProvider awsCredentialsProvider = configuration.credentialsProvider(); + if (awsCredentialsProvider != null) { + stsClientBuilder.credentialsProvider(awsCredentialsProvider); + } + metaStoreManagerFactory.setStorageIntegrationProvider( + new PolarisStorageIntegrationProviderImpl(stsClientBuilder::build)); + + PolarisMetricRegistry polarisMetricRegistry = + new PolarisMetricRegistry(new PrometheusMeterRegistry(PrometheusConfig.DEFAULT)); + metaStoreManagerFactory.setMetricRegistry(polarisMetricRegistry); + + OpenTelemetry openTelemetry = setupTracing(); + if (metaStoreManagerFactory instanceof OpenTelemetryAware otAware) { + otAware.setOpenTelemetry(openTelemetry); + } + PolarisConfigurationStore configurationStore = configuration.getConfigurationStore(); + if (metaStoreManagerFactory instanceof ConfigurationStoreAware) { + ((ConfigurationStoreAware) metaStoreManagerFactory).setConfigurationStore(configurationStore); + } + RealmEntityManagerFactory entityManagerFactory = + new RealmEntityManagerFactory(metaStoreManagerFactory); + CallContextResolver callContextResolver = configuration.getCallContextResolver(); + 
callContextResolver.setEntityManagerFactory(entityManagerFactory); + if (callContextResolver instanceof ConfigurationStoreAware csa) { + csa.setConfigurationStore(configurationStore); + } + + RealmContextResolver realmContextResolver = configuration.getRealmContextResolver(); + realmContextResolver.setEntityManagerFactory(entityManagerFactory); + environment + .servlets() + .addFilter( + "realmContext", new ContextResolverFilter(realmContextResolver, callContextResolver)) + .addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*"); + + TaskHandlerConfiguration taskConfig = configuration.getTaskHandler(); + TaskExecutorImpl taskExecutor = + new TaskExecutorImpl(taskConfig.executorService(), metaStoreManagerFactory); + TaskFileIOSupplier fileIOSupplier = new TaskFileIOSupplier(metaStoreManagerFactory); + taskExecutor.addTaskHandler( + new TableCleanupTaskHandler(taskExecutor, metaStoreManagerFactory, fileIOSupplier)); + taskExecutor.addTaskHandler( + new ManifestFileCleanupTaskHandler( + fileIOSupplier, Executors.newVirtualThreadPerTaskExecutor())); + + CallContextCatalogFactory catalogFactory; + if ("polaris".equals(configuration.getBaseCatalogType())) { + LOGGER.info( + "Initializing PolarisCallContextCatalogFactory for baseCatalogType {}, metaStoreManagerType {}", + configuration.getBaseCatalogType(), + metaStoreManagerFactory); + catalogFactory = new PolarisCallContextCatalogFactory(entityManagerFactory, taskExecutor); + } else if ("jdbc".equals(configuration.getBaseCatalogType())) { + LOGGER.info( + "Initializing SqlliteCallContextCatalogFactory for baseCatalogType {}", + configuration.getBaseCatalogType()); + catalogFactory = new SqlliteCallContextCatalogFactory(configuration.getSqlLiteCatalogDirs()); + } else { + LOGGER.error("Unrecognized baseCatalogType: {}", configuration.getBaseCatalogType()); + throw new RuntimeException("Invalid baseCatalogType: " + configuration.getBaseCatalogType()); + } + + PolarisAuthorizer authorizer = new 
PolarisAuthorizer(configurationStore);
+    IcebergCatalogAdapter catalogAdapter =
+        new IcebergCatalogAdapter(catalogFactory, entityManagerFactory, authorizer);
+    environment.jersey().register(new IcebergRestCatalogApi(catalogAdapter));
+    environment.jersey().register(new IcebergRestConfigurationApi(catalogAdapter));
+
+    FilterRegistration.Dynamic corsRegistration =
+        environment.servlets().addFilter("CORS", CrossOriginFilter.class);
+    corsRegistration.setInitParameter(
+        CrossOriginFilter.ALLOWED_ORIGINS_PARAM,
+        String.join(",", configuration.getCorsConfiguration().getAllowedOrigins()));
+    corsRegistration.setInitParameter(
+        CrossOriginFilter.ALLOWED_TIMING_ORIGINS_PARAM,
+        String.join(",", configuration.getCorsConfiguration().getAllowedTimingOrigins()));
+    corsRegistration.setInitParameter(
+        CrossOriginFilter.ALLOWED_METHODS_PARAM,
+        String.join(",", configuration.getCorsConfiguration().getAllowedMethods()));
+    corsRegistration.setInitParameter(
+        CrossOriginFilter.ALLOWED_HEADERS_PARAM,
+        String.join(",", configuration.getCorsConfiguration().getAllowedHeaders()));
+    corsRegistration.setInitParameter(
+        CrossOriginFilter.PREFLIGHT_MAX_AGE_PARAM,
+        Objects.toString(configuration.getCorsConfiguration().getPreflightMaxAge()));
+    corsRegistration.setInitParameter(
+        CrossOriginFilter.ALLOW_CREDENTIALS_PARAM,
+        configuration.getCorsConfiguration().getAllowCredentials());
+    corsRegistration.addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
+    environment
+        .servlets()
+        .addFilter("tracing", new TracingFilter(openTelemetry))
+        .addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
+    DiscoverableAuthenticator<String, AuthenticatedPolarisPrincipal> authenticator =
+        configuration.getPolarisAuthenticator();
+    authenticator.setEntityManagerFactory(entityManagerFactory);
+    AuthFilter<String, AuthenticatedPolarisPrincipal> oauthCredentialAuthFilter
=
+        new OAuthCredentialAuthFilter.Builder<AuthenticatedPolarisPrincipal>()
+            .setAuthenticator(authenticator)
+            .setPrefix("Bearer")
+            .buildAuthFilter();
+    environment.jersey().register(new AuthDynamicFeature(oauthCredentialAuthFilter));
+    environment.healthChecks().register("polaris", new PolarisHealthCheck());
+    OAuth2ApiService oauth2Service = configuration.getOauth2Service();
+    if (oauth2Service instanceof HasEntityManagerFactory emfAware) {
+      emfAware.setEntityManagerFactory(entityManagerFactory);
+    }
+    environment.jersey().register(new IcebergRestOAuth2Api(oauth2Service));
+    environment.jersey().register(new IcebergExceptionMapper());
+    PolarisServiceImpl polarisService = new PolarisServiceImpl(entityManagerFactory, authorizer);
+    environment.jersey().register(new PolarisCatalogsApi(polarisService));
+    environment.jersey().register(new PolarisPrincipalsApi(polarisService));
+    environment.jersey().register(new PolarisPrincipalRolesApi(polarisService));
+    ObjectMapper objectMapper = environment.getObjectMapper();
+    objectMapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);
+    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+    objectMapper.setPropertyNamingStrategy(new PropertyNamingStrategies.KebabCaseStrategy());
+    RESTSerializers.registerAll(objectMapper);
+    Serializers.registerSerializers(objectMapper);
+    environment.jersey().register(new IcebergJsonProcessingExceptionMapper());
+    environment.jersey().register(new IcebergJerseyViolationExceptionMapper());
+    environment.jersey().register(new TimedApplicationEventListener(polarisMetricRegistry));
+
+    polarisMetricRegistry.init(
+        IcebergRestCatalogApi.class,
+        IcebergRestConfigurationApi.class,
+        IcebergRestOAuth2Api.class,
+        PolarisCatalogsApi.class,
+        PolarisPrincipalsApi.class,
+        PolarisPrincipalRolesApi.class);
+
+    environment
+        .admin()
+        .addServlet(
+            "metrics",
+            new PrometheusMetricsServlet(
+                ((PrometheusMeterRegistry) polarisMetricRegistry.getMeterRegistry())
.getPrometheusRegistry()))
+        .addMapping("/metrics");
+
+    // For the in-memory metastore we need to bootstrap the service and the service principal at
+    // startup (for the default realm). We cannot use the Dropwizard Bootstrap command for this,
+    // because the command and the server run in two different processes, so in-memory state would
+    // be lost between the bootstrap invocation and server startup.
+    if (metaStoreManagerFactory instanceof InMemoryPolarisMetaStoreManagerFactory) {
+      metaStoreManagerFactory.getOrCreateMetaStoreManager(configuration::getDefaultRealm);
+    }
+  }
+
+  private static OpenTelemetry setupTracing() {
+    Resource resource =
+        Resource.getDefault().toBuilder()
+            .put(ServiceAttributes.SERVICE_NAME, "polaris")
+            .put(ServiceAttributes.SERVICE_VERSION, "0.1.0")
+            .build();
+    SdkTracerProvider sdkTracerProvider =
+        SdkTracerProvider.builder()
+            .addSpanProcessor(SimpleSpanProcessor.create(LoggingSpanExporter.create()))
+            .setResource(resource)
+            .build();
+    return OpenTelemetrySdk.builder()
+        .setTracerProvider(sdkTracerProvider)
+        .setPropagators(
+            ContextPropagators.create(
+                TextMapPropagator.composite(
+                    W3CTraceContextPropagator.getInstance(), W3CBaggagePropagator.getInstance())))
+        .build();
+  }
+
+  /** Resolves and sets ThreadLocal CallContext/RealmContext based on the request contents.
*/
+  private static class ContextResolverFilter implements Filter {
+    private final RealmContextResolver realmContextResolver;
+    private final CallContextResolver callContextResolver;
+
+    public ContextResolverFilter(
+        RealmContextResolver realmContextResolver, CallContextResolver callContextResolver) {
+      this.realmContextResolver = realmContextResolver;
+      this.callContextResolver = callContextResolver;
+    }
+
+    @Override
+    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
+        throws IOException, ServletException {
+      HttpServletRequest httpRequest = (HttpServletRequest) request;
+      Stream<String> headerNames = Collections.list(httpRequest.getHeaderNames()).stream();
+      Map<String, String> headers =
+          headerNames.collect(Collectors.toMap(Function.identity(), httpRequest::getHeader));
+      RealmContext currentRealmContext =
+          realmContextResolver.resolveRealmContext(
+              httpRequest.getRequestURL().toString(),
+              httpRequest.getMethod(),
+              httpRequest.getRequestURI().substring(1),
+              request.getParameterMap().entrySet().stream()
+                  .collect(
+                      Collectors.toMap(Map.Entry::getKey, (e) -> ((String[]) e.getValue())[0])),
+              headers);
+      CallContext currentCallContext =
+          callContextResolver.resolveCallContext(
+              currentRealmContext,
+              httpRequest.getMethod(),
+              httpRequest.getRequestURI().substring(1),
+              request.getParameterMap().entrySet().stream()
+                  .collect(
+                      Collectors.toMap(Map.Entry::getKey, (e) -> ((String[]) e.getValue())[0])),
+              headers);
+      CallContext.setCurrentContext(currentCallContext);
+      try (MDC.MDCCloseable context =
+              MDC.putCloseable("realm", currentRealmContext.getRealmIdentifier());
+          MDC.MDCCloseable requestId =
+              MDC.putCloseable("request_id", httpRequest.getHeader("request_id"))) {
+        chain.doFilter(request, response);
+      } finally {
+        Object contextCatalog =
+            currentCallContext
+                .contextVariables()
+                .get(CallContext.REQUEST_PATH_CATALOG_INSTANCE_KEY);
+        if (contextCatalog != null && contextCatalog instanceof Closeable) {
+          ((Closeable)
contextCatalog).close(); + } + currentCallContext.close(); + } + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/PolarisHealthCheck.java b/polaris-service/src/main/java/io/polaris/service/PolarisHealthCheck.java new file mode 100644 index 0000000000..3302f2b651 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/PolarisHealthCheck.java @@ -0,0 +1,26 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service; + +import com.codahale.metrics.health.HealthCheck; + +/** Default {@link HealthCheck} implementation. */ +public class PolarisHealthCheck extends HealthCheck { + @Override + protected Result check() throws Exception { + return Result.healthy(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/TimedApplicationEventListener.java b/polaris-service/src/main/java/io/polaris/service/TimedApplicationEventListener.java new file mode 100644 index 0000000000..06ba5bccaa --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/TimedApplicationEventListener.java @@ -0,0 +1,87 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.service;
+
+import com.google.common.base.Stopwatch;
+import io.polaris.core.context.CallContext;
+import io.polaris.core.monitor.PolarisMetricRegistry;
+import io.polaris.core.resource.TimedApi;
+import jakarta.ws.rs.ext.Provider;
+import java.lang.reflect.Method;
+import java.util.concurrent.TimeUnit;
+import org.glassfish.jersey.server.monitoring.ApplicationEvent;
+import org.glassfish.jersey.server.monitoring.ApplicationEventListener;
+import org.glassfish.jersey.server.monitoring.RequestEvent;
+import org.glassfish.jersey.server.monitoring.RequestEventListener;
+
+/**
+ * An ApplicationEventListener that supports timing and error counting of Jersey resource methods
+ * annotated by {@link TimedApi}. It uses the {@link PolarisMetricRegistry} for metric collection
+ * and properly times the resource on success and increments the error counter on failure.
+ */
+@Provider
+public class TimedApplicationEventListener implements ApplicationEventListener {
+
+  // The PolarisMetricRegistry instance used for recording metrics and error counters.
+ private final PolarisMetricRegistry polarisMetricRegistry; + + public TimedApplicationEventListener(PolarisMetricRegistry polarisMetricRegistry) { + this.polarisMetricRegistry = polarisMetricRegistry; + } + + @Override + public void onEvent(ApplicationEvent event) {} + + @Override + public RequestEventListener onRequest(RequestEvent event) { + return new TimedRequestEventListener(); + } + + /** + * A RequestEventListener implementation that handles timing of resource method execution and + * increments error counters on failures. The lifetime of the listener is tied to a single HTTP + * request. + */ + private class TimedRequestEventListener implements RequestEventListener { + private String metric; + private Stopwatch sw; + + /** Handles various types of RequestEvents to start timing, stop timing, and record metrics. */ + @Override + public void onEvent(RequestEvent event) { + String realmId = CallContext.getCurrentContext().getRealmContext().getRealmIdentifier(); + if (event.getType() == RequestEvent.Type.RESOURCE_METHOD_START) { + Method method = + event.getUriInfo().getMatchedResourceMethod().getInvocable().getHandlingMethod(); + if (method.isAnnotationPresent(TimedApi.class)) { + TimedApi timedApi = method.getAnnotation(TimedApi.class); + metric = timedApi.value(); + sw = Stopwatch.createStarted(); + polarisMetricRegistry.incrementCounter(metric, realmId); + } + + } else if (event.getType() == RequestEvent.Type.FINISHED && metric != null) { + if (event.isSuccess()) { + sw.stop(); + polarisMetricRegistry.recordTimer(metric, sw.elapsed(TimeUnit.MILLISECONDS), realmId); + } else { + int statusCode = event.getContainerResponse().getStatus(); + polarisMetricRegistry.incrementErrorCounter(metric, statusCode, realmId); + } + } + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/admin/PolarisAdminService.java b/polaris-service/src/main/java/io/polaris/service/admin/PolarisAdminService.java new file mode 100644 index 0000000000..119caad618 --- 
/dev/null +++ b/polaris-service/src/main/java/io/polaris/service/admin/PolarisAdminService.java @@ -0,0 +1,1786 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.admin; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.admin.model.CatalogGrant; +import io.polaris.core.admin.model.CatalogPrivilege; +import io.polaris.core.admin.model.GrantResource; +import io.polaris.core.admin.model.NamespaceGrant; +import io.polaris.core.admin.model.NamespacePrivilege; +import io.polaris.core.admin.model.PrincipalWithCredentials; +import io.polaris.core.admin.model.PrincipalWithCredentialsCredentials; +import io.polaris.core.admin.model.TableGrant; +import io.polaris.core.admin.model.TablePrivilege; +import io.polaris.core.admin.model.UpdateCatalogRequest; +import io.polaris.core.admin.model.UpdateCatalogRoleRequest; +import io.polaris.core.admin.model.UpdatePrincipalRequest; +import io.polaris.core.admin.model.UpdatePrincipalRoleRequest; +import io.polaris.core.admin.model.ViewGrant; +import io.polaris.core.admin.model.ViewPrivilege; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizableOperation; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.catalog.PolarisCatalogHelpers; +import io.polaris.core.context.CallContext; +import 
io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.CatalogRoleEntity; +import io.polaris.core.entity.NamespaceEntity; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.entity.PrincipalRoleEntity; +import io.polaris.core.entity.TableLikeEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.core.persistence.resolver.ResolverPath; +import io.polaris.core.persistence.resolver.ResolverStatus; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.aws.AwsStorageConfigurationInfo; +import io.polaris.core.storage.azure.AzureStorageConfigurationInfo; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.function.Function; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.AlreadyExistsException; +import org.apache.iceberg.exceptions.BadRequestException; +import org.apache.iceberg.exceptions.CommitFailedException; +import org.apache.iceberg.exceptions.NoSuchNamespaceException; +import org.apache.iceberg.exceptions.NoSuchTableException; +import org.apache.iceberg.exceptions.NoSuchViewException; +import org.apache.iceberg.exceptions.NotFoundException; +import 
org.apache.iceberg.exceptions.ValidationException; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Just as an Iceberg Catalog represents the logical model of Iceberg business logic to manage + * Namespaces, Tables and Views, abstracted away from Iceberg REST objects, this class represents + * the logical model for managing realm-level Catalogs, Principals, Roles, and Grants. + * + *

<p>Different API implementors could expose different REST, gRPC, etc., interfaces that delegate
+ * to this logical model without being tightly coupled to a single frontend protocol, and can
+ * provide different implementations of PolarisEntityManager to abstract away the implementation of
+ * the persistence layer.
+ */
+public class PolarisAdminService {
+  private static final Logger LOG = LoggerFactory.getLogger(PolarisAdminService.class);
+  public static final String CLEANUP_ON_CATALOG_DROP = "CLEANUP_ON_CATALOG_DROP";
+
+  private final CallContext callContext;
+  private PolarisEntityManager entityManager;
+  private final AuthenticatedPolarisPrincipal authenticatedPrincipal;
+  private final PolarisAuthorizer authorizer;
+
+  // Initialized in the authorize methods.
+  private PolarisResolutionManifest resolutionManifest = null;
+
+  public PolarisAdminService(
+      CallContext callContext,
+      PolarisEntityManager entityManager,
+      AuthenticatedPolarisPrincipal authenticatedPrincipal,
+      PolarisAuthorizer authorizer) {
+    this.callContext = callContext;
+    this.entityManager = entityManager;
+    this.authenticatedPrincipal = authenticatedPrincipal;
+    this.authorizer = authorizer;
+  }
+
+  private PolarisCallContext getCurrentPolarisContext() {
+    return callContext.getPolarisCallContext();
+  }
+
+  private Optional<CatalogEntity> findCatalogByName(String name) {
+    return Optional.ofNullable(resolutionManifest.getResolvedReferenceCatalogEntity())
+        .map(path -> CatalogEntity.of(path.getRawLeafEntity()));
+  }
+
+  private Optional<PrincipalEntity> findPrincipalByName(String name) {
+    return Optional.ofNullable(
+            resolutionManifest.getResolvedTopLevelEntity(name, PolarisEntityType.PRINCIPAL))
+        .map(path -> PrincipalEntity.of(path.getRawLeafEntity()));
+  }
+
+  private Optional<PrincipalRoleEntity> findPrincipalRoleByName(String name) {
+    return Optional.ofNullable(
+            resolutionManifest.getResolvedTopLevelEntity(name, PolarisEntityType.PRINCIPAL_ROLE))
+        .map(path -> PrincipalRoleEntity.of(path.getRawLeafEntity()));
+  }
+
+  private
Optional<CatalogRoleEntity> findCatalogRoleByName(String catalogName, String name) {
+    return Optional.ofNullable(resolutionManifest.getResolvedPath(name))
+        .map(path -> CatalogRoleEntity.of(path.getRawLeafEntity()));
+  }
+
+  private void authorizeBasicRootOperationOrThrow(PolarisAuthorizableOperation op) {
+    resolutionManifest =
+        entityManager.prepareResolutionManifest(
+            callContext, authenticatedPrincipal, null /* referenceCatalogName */);
+    resolutionManifest.resolveAll();
+    PolarisResolvedPathWrapper rootContainerWrapper =
+        resolutionManifest.getResolvedRootContainerEntityAsPath();
+    authorizer.authorizeOrThrow(
+        authenticatedPrincipal,
+        resolutionManifest.getAllActivatedPrincipalRoleIds(),
+        op,
+        rootContainerWrapper,
+        null /* secondary */);
+  }
+
+  private void authorizeBasicTopLevelEntityOperationOrThrow(
+      PolarisAuthorizableOperation op, String topLevelEntityName, PolarisEntityType entityType) {
+    String referenceCatalogName =
+        entityType == PolarisEntityType.CATALOG ? topLevelEntityName : null;
+    authorizeBasicTopLevelEntityOperationOrThrow(
+        op, topLevelEntityName, entityType, referenceCatalogName);
+  }
+
+  private void authorizeBasicTopLevelEntityOperationOrThrow(
+      PolarisAuthorizableOperation op,
+      String topLevelEntityName,
+      PolarisEntityType entityType,
+      @Nullable String referenceCatalogName) {
+    resolutionManifest =
+        entityManager.prepareResolutionManifest(
+            callContext, authenticatedPrincipal, referenceCatalogName);
+    resolutionManifest.addTopLevelName(topLevelEntityName, entityType, false /* isOptional */);
+    ResolverStatus status = resolutionManifest.resolveAll();
+    if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) {
+      throw new NotFoundException(
+          "TopLevelEntity of type %s does not exist: %s", entityType, topLevelEntityName);
+    }
+    PolarisResolvedPathWrapper topLevelEntityWrapper =
+        resolutionManifest.getResolvedTopLevelEntity(topLevelEntityName, entityType);
+
+    // TODO: If we do add more "self" privilege
operations for PRINCIPAL targets this should + // be extracted into an EnumSet and/or pushed down into PolarisAuthorizer. + if (topLevelEntityWrapper.getResolvedLeafEntity().getEntity().getId() + == authenticatedPrincipal.getPrincipalEntity().getId() + && (op.equals(PolarisAuthorizableOperation.ROTATE_CREDENTIALS) + || op.equals(PolarisAuthorizableOperation.RESET_CREDENTIALS))) { + LOG.atDebug() + .addKeyValue("principalName", topLevelEntityName) + .log("Allowing rotate own credentials"); + return; + } + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + topLevelEntityWrapper, + null /* secondary */); + } + + private void authorizeBasicCatalogRoleOperationOrThrow( + PolarisAuthorizableOperation op, String catalogName, String catalogRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + resolutionManifest.addPath( + new ResolverPath(List.of(catalogRoleName), PolarisEntityType.CATALOG_ROLE), + catalogRoleName); + resolutionManifest.resolveAll(); + PolarisResolvedPathWrapper target = resolutionManifest.getResolvedPath(catalogRoleName, true); + if (target == null) { + throw new NotFoundException("CatalogRole does not exist: %s", catalogRoleName); + } + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + target, + null /* secondary */); + } + + private void authorizeGrantOnRootContainerToPrincipalRoleOperationOrThrow( + PolarisAuthorizableOperation op, String principalRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, null); + resolutionManifest.addTopLevelName( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE, false /* isOptional */); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == 
ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException( + "Entity %s not found when trying to grant on root to %s", + status.getFailedToResolvedEntityName(), principalRoleName); + } + + // TODO: Merge this method into authorizeGrantOnTopLevelEntityToPrincipalRoleOperationOrThrow + // once we remove any special handling logic for the rootContainer. + PolarisResolvedPathWrapper rootContainerWrapper = + resolutionManifest.getResolvedRootContainerEntityAsPath(); + PolarisResolvedPathWrapper principalRoleWrapper = + resolutionManifest.getResolvedTopLevelEntity( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE); + + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + rootContainerWrapper, + principalRoleWrapper); + } + + private void authorizeGrantOnTopLevelEntityToPrincipalRoleOperationOrThrow( + PolarisAuthorizableOperation op, + String topLevelEntityName, + PolarisEntityType topLevelEntityType, + String principalRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, null); + resolutionManifest.addTopLevelName( + topLevelEntityName, topLevelEntityType, false /* isOptional */); + resolutionManifest.addTopLevelName( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE, false /* isOptional */); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException( + "Entity %s not found when trying to assign %s of type %s to %s", + status.getFailedToResolvedEntityName(), + topLevelEntityName, + topLevelEntityType, + principalRoleName); + } + + PolarisResolvedPathWrapper topLevelEntityWrapper = + resolutionManifest.getResolvedTopLevelEntity(topLevelEntityName, topLevelEntityType); + PolarisResolvedPathWrapper principalRoleWrapper = + resolutionManifest.getResolvedTopLevelEntity( + 
principalRoleName, PolarisEntityType.PRINCIPAL_ROLE); + + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + topLevelEntityWrapper, + principalRoleWrapper); + } + + private void authorizeGrantOnPrincipalRoleToPrincipalOperationOrThrow( + PolarisAuthorizableOperation op, String principalRoleName, String principalName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, null); + resolutionManifest.addTopLevelName( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE, false /* isOptional */); + resolutionManifest.addTopLevelName( + principalName, PolarisEntityType.PRINCIPAL, false /* isOptional */); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException( + "Entity %s not found when trying to assign %s to %s", + status.getFailedToResolvedEntityName(), principalRoleName, principalName); + } + + PolarisResolvedPathWrapper principalRoleWrapper = + resolutionManifest.getResolvedTopLevelEntity( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE); + PolarisResolvedPathWrapper principalWrapper = + resolutionManifest.getResolvedTopLevelEntity(principalName, PolarisEntityType.PRINCIPAL); + + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + principalRoleWrapper, + principalWrapper); + } + + private void authorizeGrantOnCatalogRoleToPrincipalRoleOperationOrThrow( + PolarisAuthorizableOperation op, + String catalogName, + String catalogRoleName, + String principalRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + resolutionManifest.addPath( + new ResolverPath(List.of(catalogRoleName), PolarisEntityType.CATALOG_ROLE), + catalogRoleName); + 
resolutionManifest.addTopLevelName( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE, false /* isOptional */); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException( + "Entity %s not found when trying to assign %s.%s to %s", + status.getFailedToResolvedEntityName(), catalogName, catalogRoleName, principalRoleName); + } else if (status.getStatus() == ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED) { + throw new NotFoundException( + "Entity %s not found when trying to assign %s.%s to %s", + status.getFailedToResolvePath(), catalogName, catalogRoleName, principalRoleName); + } + + PolarisResolvedPathWrapper principalRoleWrapper = + resolutionManifest.getResolvedTopLevelEntity( + principalRoleName, PolarisEntityType.PRINCIPAL_ROLE); + PolarisResolvedPathWrapper catalogRoleWrapper = + resolutionManifest.getResolvedPath(catalogRoleName, true); + + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + catalogRoleWrapper, + principalRoleWrapper); + } + + private void authorizeGrantOnCatalogOperationOrThrow( + PolarisAuthorizableOperation op, String catalogName, String catalogRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + resolutionManifest.addTopLevelName( + catalogName, PolarisEntityType.CATALOG, false /* isOptional */); + resolutionManifest.addPath( + new ResolverPath(List.of(catalogRoleName), PolarisEntityType.CATALOG_ROLE), + catalogRoleName); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException("Catalog not found: %s", catalogName); + } else if (status.getStatus() == ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED) { + throw new 
NotFoundException("CatalogRole not found: %s.%s", catalogName, catalogRoleName); + } + + PolarisResolvedPathWrapper catalogWrapper = + resolutionManifest.getResolvedTopLevelEntity(catalogName, PolarisEntityType.CATALOG); + PolarisResolvedPathWrapper catalogRoleWrapper = + resolutionManifest.getResolvedPath(catalogRoleName, true); + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + catalogWrapper, + catalogRoleWrapper); + } + + private void authorizeGrantOnNamespaceOperationOrThrow( + PolarisAuthorizableOperation op, + String catalogName, + Namespace namespace, + String catalogRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + resolutionManifest.addPath( + new ResolverPath(Arrays.asList(namespace.levels()), PolarisEntityType.NAMESPACE), + namespace); + resolutionManifest.addPath( + new ResolverPath(List.of(catalogRoleName), PolarisEntityType.CATALOG_ROLE), + catalogRoleName); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException("Catalog not found: %s", catalogName); + } else if (status.getStatus() == ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED) { + if (status.getFailedToResolvePath().getLastEntityType() == PolarisEntityType.NAMESPACE) { + throw new NoSuchNamespaceException( + "Namespace does not exist: %s", status.getFailedToResolvePath().getEntityNames()); + } else { + throw new NotFoundException("CatalogRole not found: %s.%s", catalogName, catalogRoleName); + } + } + + PolarisResolvedPathWrapper namespaceWrapper = + resolutionManifest.getResolvedPath(namespace, true); + PolarisResolvedPathWrapper catalogRoleWrapper = + resolutionManifest.getResolvedPath(catalogRoleName, true); + + authorizer.authorizeOrThrow( + authenticatedPrincipal, + 
resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + namespaceWrapper, + catalogRoleWrapper); + } + + private void authorizeGrantOnTableLikeOperationOrThrow( + PolarisAuthorizableOperation op, + String catalogName, + PolarisEntitySubType subType, + TableIdentifier identifier, + String catalogRoleName) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + resolutionManifest.addPath( + new ResolverPath( + PolarisCatalogHelpers.tableIdentifierToList(identifier), PolarisEntityType.TABLE_LIKE), + identifier); + resolutionManifest.addPath( + new ResolverPath(List.of(catalogRoleName), PolarisEntityType.CATALOG_ROLE), + catalogRoleName); + ResolverStatus status = resolutionManifest.resolveAll(); + + if (status.getStatus() == ResolverStatus.StatusEnum.ENTITY_COULD_NOT_BE_RESOLVED) { + throw new NotFoundException("Catalog not found: %s", catalogName); + } else if (status.getStatus() == ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED) { + if (status.getFailedToResolvePath().getLastEntityType() == PolarisEntityType.TABLE_LIKE) { + if (subType == PolarisEntitySubType.TABLE) { + throw new NoSuchTableException("Table does not exist: %s", identifier); + } else { + throw new NoSuchViewException("View does not exist: %s", identifier); + } + } else { + throw new NotFoundException("CatalogRole not found: %s.%s", catalogName, catalogRoleName); + } + } + + PolarisResolvedPathWrapper tableLikeWrapper = + resolutionManifest.getResolvedPath(identifier, subType, true); + PolarisResolvedPathWrapper catalogRoleWrapper = + resolutionManifest.getResolvedPath(catalogRoleName, true); + + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + tableLikeWrapper, + catalogRoleWrapper); + } + + /** Get all locations where data for a `CatalogEntity` may be stored */ + private Set<String> getCatalogLocations(CatalogEntity catalogEntity) 
{ + HashSet<String> catalogLocations = new HashSet<>(); + catalogLocations.add(terminateWithSlash(catalogEntity.getDefaultBaseLocation())); + if (catalogEntity.getStorageConfigurationInfo() != null) { + catalogLocations.addAll( + catalogEntity.getStorageConfigurationInfo().getAllowedLocations().stream() + .map(this::terminateWithSlash) + .toList()); + } + return catalogLocations; + } + + /** Ensure a path is terminated with a `/` */ + private String terminateWithSlash(String path) { + if (path == null) { + return null; + } else if (path.endsWith("/")) { + return path; + } + return path + "/"; + } + + /** + * True if the `CatalogEntity` has a default base location or allowed location that overlaps with + * that of any existing catalog. If `ALLOW_OVERLAPPING_CATALOG_URLS` is set to true, this check + * will be skipped. + */ + private boolean catalogOverlapsWithExistingCatalog(CatalogEntity catalogEntity) { + boolean allowOverlappingCatalogUrls = + Boolean.parseBoolean( + String.valueOf( + getCurrentPolarisContext() + .getConfigurationStore() + .getConfiguration( + getCurrentPolarisContext(), + PolarisConfiguration.ALLOW_OVERLAPPING_CATALOG_URLS, + PolarisConfiguration.DEFAULT_ALLOW_OVERLAPPING_CATALOG_URLS))); + + if (allowOverlappingCatalogUrls) { + return false; + } + + Set<String> newCatalogLocations = getCatalogLocations(catalogEntity); + return listCatalogsUnsafe().stream() + .map(CatalogEntity::new) + .anyMatch( + existingCatalog -> { + if (existingCatalog.getName().equals(catalogEntity.getName())) { + return false; + } + return getCatalogLocations(existingCatalog).stream() + .anyMatch( + existingLocation -> + newCatalogLocations.stream() + .anyMatch( + newLocation -> { + if (newLocation == null || existingLocation == null) { + return false; + } + return newLocation.startsWith(existingLocation) + || existingLocation.startsWith(newLocation); + })); + }); + } + + public PolarisEntity createCatalog(PolarisEntity entity) { + PolarisAuthorizableOperation op = 
PolarisAuthorizableOperation.CREATE_CATALOG; + authorizeBasicRootOperationOrThrow(op); + + if (catalogOverlapsWithExistingCatalog((CatalogEntity) entity)) { + throw new ValidationException( + "Cannot create Catalog %s. One or more of its locations overlaps with an existing catalog", + entity.getName()); + } + + long id = + entity.getId() <= 0 + ? entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId() + : entity.getId(); + PolarisEntity polarisEntity = + new PolarisEntity.Builder(entity) + .setId(id) + .setCreateTimestamp(System.currentTimeMillis()) + .build(); + PolarisMetaStoreManager.CreateCatalogResult catalogResult = + entityManager + .getMetaStoreManager() + .createCatalog(getCurrentPolarisContext(), polarisEntity, List.of()); + if (catalogResult.alreadyExists()) { + throw new AlreadyExistsException( + "Cannot create Catalog %s. Catalog already exists or resolution failed", + entity.getName()); + } + return PolarisEntity.of(catalogResult.getCatalog()); + } + + public void deleteCatalog(String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DELETE_CATALOG; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.CATALOG); + + PolarisEntity entity = + findCatalogByName(name) + .orElseThrow(() -> new NotFoundException("Catalog %s not found", name)); + // TODO: Handle return value in case of concurrent modification + PolarisCallContext polarisCallContext = callContext.getPolarisCallContext(); + boolean cleanup = + polarisCallContext + .getConfigurationStore() + .getConfiguration(polarisCallContext, CLEANUP_ON_CATALOG_DROP, false); + PolarisMetaStoreManager.DropEntityResult dropEntityResult = + entityManager + .getMetaStoreManager() + .dropEntityIfExists(getCurrentPolarisContext(), null, entity, Map.of(), cleanup); + + // at least some handling of error + if (!dropEntityResult.isSuccess()) { + if (dropEntityResult.failedBecauseNotEmpty()) { + throw new BadRequestException( + 
String.format("Catalog '%s' cannot be dropped, it is not empty", entity.getName())); + } else { + throw new BadRequestException( + String.format( + "Catalog '%s' cannot be dropped, concurrent modification detected. Please try " + + "again", + entity.getName())); + } + } + } + + public @NotNull CatalogEntity getCatalog(String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.GET_CATALOG; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.CATALOG); + + return findCatalogByName(name) + .orElseThrow(() -> new NotFoundException("Catalog %s not found", name)); + } + + /** + * Helper to validate business logic of what is allowed to be updated or throw a + * BadRequestException. + */ + private void validateUpdateCatalogDiffOrThrow( + CatalogEntity currentEntity, CatalogEntity newEntity) { + // TODO: Expand the set of validations if there are other fields for other cloud providers + // that we can't successfully apply changes to. + PolarisStorageConfigurationInfo currentStorageConfig = + currentEntity.getStorageConfigurationInfo(); + PolarisStorageConfigurationInfo newStorageConfig = newEntity.getStorageConfigurationInfo(); + + if (currentStorageConfig == null && newStorageConfig == null) { + return; + } + + // Treat exactly one side being null as a storage type change (also avoids an NPE below). + if (currentStorageConfig == null + || newStorageConfig == null + || !currentStorageConfig.getClass().equals(newStorageConfig.getClass())) { + throw new BadRequestException( + "Cannot modify storage type of storage config from %s to %s", + currentStorageConfig, newStorageConfig); + } + + if (currentStorageConfig instanceof AwsStorageConfigurationInfo + && newStorageConfig instanceof AwsStorageConfigurationInfo) { + AwsStorageConfigurationInfo currentAwsConfig = + (AwsStorageConfigurationInfo) currentStorageConfig; + AwsStorageConfigurationInfo newAwsConfig = (AwsStorageConfigurationInfo) newStorageConfig; + + if ((currentAwsConfig.getRoleARN() != null + && !currentAwsConfig.getRoleARN().equals(newAwsConfig.getRoleARN())) + || (newAwsConfig.getRoleARN() != null + && 
!newAwsConfig.getRoleARN().equals(currentAwsConfig.getRoleARN()))) { + throw new BadRequestException( + "Cannot modify Role ARN in storage config from %s to %s", + currentStorageConfig, newStorageConfig); + } + + if ((currentAwsConfig.getExternalId() != null + && !currentAwsConfig.getExternalId().equals(newAwsConfig.getExternalId())) + || (newAwsConfig.getExternalId() != null + && !newAwsConfig.getExternalId().equals(currentAwsConfig.getExternalId()))) { + throw new BadRequestException( + "Cannot modify ExternalId in storage config from %s to %s", + currentStorageConfig, newStorageConfig); + } + } else if (currentStorageConfig instanceof AzureStorageConfigurationInfo + && newStorageConfig instanceof AzureStorageConfigurationInfo) { + AzureStorageConfigurationInfo currentAzureConfig = + (AzureStorageConfigurationInfo) currentStorageConfig; + AzureStorageConfigurationInfo newAzureConfig = + (AzureStorageConfigurationInfo) newStorageConfig; + + if ((currentAzureConfig.getTenantId() != null + && !currentAzureConfig.getTenantId().equals(newAzureConfig.getTenantId())) + || (newAzureConfig.getTenantId() != null + && !newAzureConfig.getTenantId().equals(currentAzureConfig.getTenantId()))) { + throw new BadRequestException( + "Cannot modify TenantId in storage config from %s to %s", + currentStorageConfig, newStorageConfig); + } + } + } + + public @NotNull CatalogEntity updateCatalog(String name, UpdateCatalogRequest updateRequest) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_CATALOG; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.CATALOG); + + CatalogEntity currentCatalogEntity = + findCatalogByName(name) + .orElseThrow(() -> new NotFoundException("Catalog %s not found", name)); + + if (currentCatalogEntity.getEntityVersion() != updateRequest.getCurrentEntityVersion()) { + throw new CommitFailedException( + "Failed to update Catalog; currentEntityVersion '%s', expected '%s'", + 
currentCatalogEntity.getEntityVersion(), updateRequest.getCurrentEntityVersion()); + } + + CatalogEntity.Builder updateBuilder = new CatalogEntity.Builder(currentCatalogEntity); + String defaultBaseLocation = currentCatalogEntity.getDefaultBaseLocation(); + if (updateRequest.getProperties() != null) { + updateBuilder.setProperties(updateRequest.getProperties()); + defaultBaseLocation = + updateRequest.getProperties().get(CatalogEntity.DEFAULT_BASE_LOCATION_KEY); + } + if (updateRequest.getStorageConfigInfo() != null) { + updateBuilder.setStorageConfigurationInfo( + updateRequest.getStorageConfigInfo(), defaultBaseLocation); + } + CatalogEntity updatedEntity = updateBuilder.build(); + + validateUpdateCatalogDiffOrThrow(currentCatalogEntity, updatedEntity); + + if (catalogOverlapsWithExistingCatalog(updatedEntity)) { + throw new ValidationException( + "Cannot update Catalog %s. One or more of its new locations overlaps with an existing catalog", + updatedEntity.getName()); + } + + CatalogEntity returnedEntity = + Optional.ofNullable( + CatalogEntity.of( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), null, updatedEntity)))) + .orElseThrow( + () -> + new CommitFailedException( + "Concurrent modification on Catalog '%s'; retry later", name)); + return returnedEntity; + } + + public List<PolarisEntity> listCatalogs() { + authorizeBasicRootOperationOrThrow(PolarisAuthorizableOperation.LIST_CATALOGS); + return listCatalogsUnsafe(); + } + + /** List all catalogs without checking for permission */ + private List<PolarisEntity> listCatalogsUnsafe() { + return entityManager + .getMetaStoreManager() + .listEntities( + getCurrentPolarisContext(), + null, + PolarisEntityType.CATALOG, + PolarisEntitySubType.ANY_SUBTYPE) + .getEntities() + .stream() + .map( + nameAndId -> + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .loadEntity(getCurrentPolarisContext(), 0, nameAndId.getId()))) + .toList(); + } + + public 
PrincipalWithCredentials createPrincipal(PolarisEntity entity) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_PRINCIPAL; + authorizeBasicRootOperationOrThrow(op); + + long id = + entity.getId() <= 0 + ? entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId() + : entity.getId(); + PolarisMetaStoreManager.CreatePrincipalResult principalResult = + entityManager + .getMetaStoreManager() + .createPrincipal( + getCurrentPolarisContext(), + new PolarisEntity.Builder(entity) + .setId(id) + .setCreateTimestamp(System.currentTimeMillis()) + .build()); + if (principalResult.alreadyExists()) { + throw new AlreadyExistsException( + "Cannot create Principal %s. Principal already exists or resolution failed", + entity.getName()); + } + return new PrincipalWithCredentials( + new PrincipalEntity(principalResult.getPrincipal()).asPrincipal(), + new PrincipalWithCredentialsCredentials( + principalResult.getPrincipalSecrets().getPrincipalClientId(), + principalResult.getPrincipalSecrets().getMainSecret())); + } + + public void deletePrincipal(String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DELETE_PRINCIPAL; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.PRINCIPAL); + + PolarisEntity entity = + findPrincipalByName(name) + .orElseThrow(() -> new NotFoundException("Principal %s not found", name)); + // TODO: Handle return value in case of concurrent modification + PolarisMetaStoreManager.DropEntityResult dropEntityResult = + entityManager + .getMetaStoreManager() + .dropEntityIfExists(getCurrentPolarisContext(), null, entity, Map.of(), false); + + // at least some handling of error + if (!dropEntityResult.isSuccess()) { + if (dropEntityResult.isEntityUnDroppable()) { + throw new BadRequestException("Root principal cannot be dropped"); + } else { + throw new BadRequestException( + "Root principal cannot be dropped, concurrent modification " + + "detected. 
Please try again"); + } + } + + public @NotNull PrincipalEntity getPrincipal(String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.GET_PRINCIPAL; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.PRINCIPAL); + + return findPrincipalByName(name) + .orElseThrow(() -> new NotFoundException("Principal %s not found", name)); + } + + public @NotNull PrincipalEntity updatePrincipal( + String name, UpdatePrincipalRequest updateRequest) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_PRINCIPAL; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.PRINCIPAL); + + PrincipalEntity currentPrincipalEntity = + findPrincipalByName(name) + .orElseThrow(() -> new NotFoundException("Principal %s not found", name)); + + if (currentPrincipalEntity.getEntityVersion() != updateRequest.getCurrentEntityVersion()) { + throw new CommitFailedException( + "Failed to update Principal; currentEntityVersion '%s', expected '%s'", + currentPrincipalEntity.getEntityVersion(), updateRequest.getCurrentEntityVersion()); + } + + PrincipalEntity.Builder updateBuilder = new PrincipalEntity.Builder(currentPrincipalEntity); + if (updateRequest.getProperties() != null) { + updateBuilder.setProperties(updateRequest.getProperties()); + } + PrincipalEntity updatedEntity = updateBuilder.build(); + PrincipalEntity returnedEntity = + Optional.ofNullable( + PrincipalEntity.of( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), null, updatedEntity)))) + .orElseThrow( + () -> + new CommitFailedException( + "Concurrent modification on Principal '%s'; retry later", name)); + return returnedEntity; + } + + private @NotNull PrincipalWithCredentials rotateOrResetCredentialsHelper( + String principalName, boolean shouldReset) { + PrincipalEntity currentPrincipalEntity = + findPrincipalByName(principalName) + .orElseThrow(() -> new 
NotFoundException("Principal %s not found", principalName)); + + PolarisPrincipalSecrets currentSecrets = + entityManager + .getMetaStoreManager() + .loadPrincipalSecrets(getCurrentPolarisContext(), currentPrincipalEntity.getClientId()) + .getPrincipalSecrets(); + if (currentSecrets == null) { + throw new IllegalArgumentException( + String.format("Failed to load current secrets for principal '%s'", principalName)); + } + PolarisPrincipalSecrets newSecrets = + entityManager + .getMetaStoreManager() + .rotatePrincipalSecrets( + getCurrentPolarisContext(), + currentPrincipalEntity.getClientId(), + currentPrincipalEntity.getId(), + currentSecrets.getMainSecret(), + shouldReset) + .getPrincipalSecrets(); + if (newSecrets == null) { + throw new IllegalStateException( + String.format( + "Failed to %s secrets for principal '%s'", + shouldReset ? "reset" : "rotate", principalName)); + } + PolarisEntity newPrincipal = + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .loadEntity(getCurrentPolarisContext(), 0L, currentPrincipalEntity.getId())); + return new PrincipalWithCredentials( + PrincipalEntity.of(newPrincipal).asPrincipal(), + new PrincipalWithCredentialsCredentials( + newSecrets.getPrincipalClientId(), newSecrets.getMainSecret())); + } + + public @NotNull PrincipalWithCredentials rotateCredentials(String principalName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.ROTATE_CREDENTIALS; + authorizeBasicTopLevelEntityOperationOrThrow(op, principalName, PolarisEntityType.PRINCIPAL); + + return rotateOrResetCredentialsHelper(principalName, false); + } + + public @NotNull PrincipalWithCredentials resetCredentials(String principalName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.RESET_CREDENTIALS; + authorizeBasicTopLevelEntityOperationOrThrow(op, principalName, PolarisEntityType.PRINCIPAL); + + return rotateOrResetCredentialsHelper(principalName, true); + } + + public List<PolarisEntity> listPrincipals() { + 
PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_PRINCIPALS; + authorizeBasicRootOperationOrThrow(op); + + return entityManager + .getMetaStoreManager() + .listEntities( + getCurrentPolarisContext(), + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities() + .stream() + .map( + nameAndId -> + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .loadEntity(getCurrentPolarisContext(), 0, nameAndId.getId()))) + .toList(); + } + + public PolarisEntity createPrincipalRole(PolarisEntity entity) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_PRINCIPAL_ROLE; + authorizeBasicRootOperationOrThrow(op); + + long id = + entity.getId() <= 0 + ? entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId() + : entity.getId(); + PolarisEntity returnedEntity = + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .createEntityIfNotExists( + getCurrentPolarisContext(), + null, + new PolarisEntity.Builder(entity) + .setId(id) + .setCreateTimestamp(System.currentTimeMillis()) + .build())); + if (returnedEntity == null) { + throw new AlreadyExistsException( + "Cannot create PrincipalRole %s. 
PrincipalRole already exists or resolution failed", + entity.getName()); + } + return returnedEntity; + } + + public void deletePrincipalRole(String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DELETE_PRINCIPAL_ROLE; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.PRINCIPAL_ROLE); + + PolarisEntity entity = + findPrincipalRoleByName(name) + .orElseThrow(() -> new NotFoundException("PrincipalRole %s not found", name)); + // TODO: Handle return value in case of concurrent modification + PolarisMetaStoreManager.DropEntityResult dropEntityResult = + entityManager + .getMetaStoreManager() + .dropEntityIfExists( + getCurrentPolarisContext(), null, entity, Map.of(), true); // cleanup grants + + // at least some handling of error + if (!dropEntityResult.isSuccess()) { + if (dropEntityResult.isEntityUnDroppable()) { + throw new BadRequestException("Polaris service admin principal role cannot be dropped"); + } else { + throw new BadRequestException( + "Polaris service admin principal role cannot be dropped, " + + "concurrent modification detected. 
Please try again"); + } + } + + public @NotNull PrincipalRoleEntity getPrincipalRole(String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.GET_PRINCIPAL_ROLE; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.PRINCIPAL_ROLE); + + return findPrincipalRoleByName(name) + .orElseThrow(() -> new NotFoundException("PrincipalRole %s not found", name)); + } + + public @NotNull PrincipalRoleEntity updatePrincipalRole( + String name, UpdatePrincipalRoleRequest updateRequest) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_PRINCIPAL_ROLE; + authorizeBasicTopLevelEntityOperationOrThrow(op, name, PolarisEntityType.PRINCIPAL_ROLE); + + PrincipalRoleEntity currentPrincipalRoleEntity = + findPrincipalRoleByName(name) + .orElseThrow(() -> new NotFoundException("PrincipalRole %s not found", name)); + + if (currentPrincipalRoleEntity.getEntityVersion() != updateRequest.getCurrentEntityVersion()) { + throw new CommitFailedException( + "Failed to update PrincipalRole; currentEntityVersion '%s', expected '%s'", + currentPrincipalRoleEntity.getEntityVersion(), updateRequest.getCurrentEntityVersion()); + } + + PrincipalRoleEntity.Builder updateBuilder = + new PrincipalRoleEntity.Builder(currentPrincipalRoleEntity); + if (updateRequest.getProperties() != null) { + updateBuilder.setProperties(updateRequest.getProperties()); + } + PrincipalRoleEntity updatedEntity = updateBuilder.build(); + PrincipalRoleEntity returnedEntity = + Optional.ofNullable( + PrincipalRoleEntity.of( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), null, updatedEntity)))) + .orElseThrow( + () -> + new CommitFailedException( + "Concurrent modification on PrincipalRole '%s'; retry later", name)); + return returnedEntity; + } + + public List<PolarisEntity> listPrincipalRoles() { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_PRINCIPAL_ROLES; + 
authorizeBasicRootOperationOrThrow(op); + + return entityManager + .getMetaStoreManager() + .listEntities( + getCurrentPolarisContext(), + null, + PolarisEntityType.PRINCIPAL_ROLE, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities() + .stream() + .map( + nameAndId -> + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .loadEntity(getCurrentPolarisContext(), 0, nameAndId.getId()))) + .toList(); + } + + public PolarisEntity createCatalogRole(String catalogName, PolarisEntity entity) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_CATALOG_ROLE; + authorizeBasicTopLevelEntityOperationOrThrow(op, catalogName, PolarisEntityType.CATALOG); + + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + + long id = + entity.getId() <= 0 + ? entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId() + : entity.getId(); + PolarisEntity returnedEntity = + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .createEntityIfNotExists( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(List.of(catalogEntity)), + new PolarisEntity.Builder(entity) + .setId(id) + .setCatalogId(catalogEntity.getId()) + .setParentId(catalogEntity.getId()) + .setCreateTimestamp(System.currentTimeMillis()) + .build())); + if (returnedEntity == null) { + throw new AlreadyExistsException( + "Cannot create CatalogRole %s in %s. 
CatalogRole already exists or resolution failed", + entity.getName(), catalogName); + } + return returnedEntity; + } + + public void deleteCatalogRole(String catalogName, String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DELETE_CATALOG_ROLE; + authorizeBasicCatalogRoleOperationOrThrow(op, catalogName, name); + + PolarisResolvedPathWrapper resolvedCatalogRoleEntity = resolutionManifest.getResolvedPath(name); + if (resolvedCatalogRoleEntity == null) { + throw new NotFoundException("CatalogRole %s not found in catalog %s", name, catalogName); + } + // TODO: Handle return value in case of concurrent modification + PolarisMetaStoreManager.DropEntityResult dropEntityResult = + entityManager + .getMetaStoreManager() + .dropEntityIfExists( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(resolvedCatalogRoleEntity.getRawParentPath()), + resolvedCatalogRoleEntity.getRawLeafEntity(), + Map.of(), + true); // cleanup grants + + // at least some handling of error + if (!dropEntityResult.isSuccess()) { + if (dropEntityResult.isEntityUnDroppable()) { + throw new BadRequestException("Catalog admin role cannot be dropped"); + } else { + throw new BadRequestException( + "Catalog admin role cannot be dropped, concurrent " + + "modification detected. 
Please try again"); + } + } + } + + public @NotNull CatalogRoleEntity getCatalogRole(String catalogName, String name) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.GET_CATALOG_ROLE; + authorizeBasicCatalogRoleOperationOrThrow(op, catalogName, name); + + return findCatalogRoleByName(catalogName, name) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", name)); + } + + public @NotNull CatalogRoleEntity updateCatalogRole( + String catalogName, String name, UpdateCatalogRoleRequest updateRequest) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_CATALOG_ROLE; + authorizeBasicCatalogRoleOperationOrThrow(op, catalogName, name); + + CatalogEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Catalog %s not found", catalogName)); + CatalogRoleEntity currentCatalogRoleEntity = + findCatalogRoleByName(catalogName, name) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", name)); + + if (currentCatalogRoleEntity.getEntityVersion() != updateRequest.getCurrentEntityVersion()) { + throw new CommitFailedException( + "Failed to update CatalogRole; currentEntityVersion '%s', expected '%s'", + currentCatalogRoleEntity.getEntityVersion(), updateRequest.getCurrentEntityVersion()); + } + + CatalogRoleEntity.Builder updateBuilder = + new CatalogRoleEntity.Builder(currentCatalogRoleEntity); + if (updateRequest.getProperties() != null) { + updateBuilder.setProperties(updateRequest.getProperties()); + } + CatalogRoleEntity updatedEntity = updateBuilder.build(); + CatalogRoleEntity returnedEntity = + Optional.ofNullable( + CatalogRoleEntity.of( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(List.of(catalogEntity)), + updatedEntity)))) + .orElseThrow( + () -> + new CommitFailedException( + "Concurrent modification on CatalogRole '%s'; retry later", name)); +
return returnedEntity; + } + + public List<PolarisEntity> listCatalogRoles(String catalogName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_CATALOG_ROLES; + authorizeBasicTopLevelEntityOperationOrThrow(op, catalogName, PolarisEntityType.CATALOG); + + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + return entityManager + .getMetaStoreManager() + .listEntities( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(List.of(catalogEntity)), + PolarisEntityType.CATALOG_ROLE, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities() + .stream() + .map( + nameAndId -> + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .loadEntity( + getCurrentPolarisContext(), catalogEntity.getId(), nameAndId.getId()))) + .toList(); + } + + public boolean assignPrincipalRole(String principalName, String principalRoleName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.ASSIGN_PRINCIPAL_ROLE; + authorizeGrantOnPrincipalRoleToPrincipalOperationOrThrow(op, principalRoleName, principalName); + + PolarisEntity principalEntity = + findPrincipalByName(principalName) + .orElseThrow(() -> new NotFoundException("Principal %s not found", principalName)); + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + + return entityManager + .getMetaStoreManager() + .grantUsageOnRoleToGrantee( + getCurrentPolarisContext(), null, principalRoleEntity, principalEntity) + .isSuccess(); + } + + public boolean revokePrincipalRole(String principalName, String principalRoleName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.REVOKE_PRINCIPAL_ROLE; + authorizeGrantOnPrincipalRoleToPrincipalOperationOrThrow(op, principalRoleName, principalName); + + PolarisEntity principalEntity = + findPrincipalByName(principalName) +
.orElseThrow(() -> new NotFoundException("Principal %s not found", principalName)); + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + return entityManager + .getMetaStoreManager() + .revokeUsageOnRoleFromGrantee( + getCurrentPolarisContext(), null, principalRoleEntity, principalEntity) + .isSuccess(); + } + + public List<PolarisEntity> listPrincipalRolesAssigned(String principalName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_PRINCIPAL_ROLES_ASSIGNED; + + authorizeBasicTopLevelEntityOperationOrThrow(op, principalName, PolarisEntityType.PRINCIPAL); + + PolarisEntity principalEntity = + findPrincipalByName(principalName) + .orElseThrow(() -> new NotFoundException("Principal %s not found", principalName)); + PolarisMetaStoreManager.LoadGrantsResult grantList = + entityManager + .getMetaStoreManager() + .loadGrantsToGrantee( + getCurrentPolarisContext(), + principalEntity.getCatalogId(), + principalEntity.getId()); + return buildEntitiesFromGrantResults(grantList, false, null); + } + + public boolean assignCatalogRoleToPrincipalRole( + String principalRoleName, String catalogName, String catalogRoleName) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.ASSIGN_CATALOG_ROLE_TO_PRINCIPAL_ROLE; + authorizeGrantOnCatalogRoleToPrincipalRoleOperationOrThrow( + op, catalogName, catalogRoleName, principalRoleName); + + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found",
catalogRoleName)); + + return entityManager + .getMetaStoreManager() + .grantUsageOnRoleToGrantee( + getCurrentPolarisContext(), catalogEntity, catalogRoleEntity, principalRoleEntity) + .isSuccess(); + } + + public boolean revokeCatalogRoleFromPrincipalRole( + String principalRoleName, String catalogName, String catalogRoleName) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.REVOKE_CATALOG_ROLE_FROM_PRINCIPAL_ROLE; + authorizeGrantOnCatalogRoleToPrincipalRoleOperationOrThrow( + op, catalogName, catalogRoleName, principalRoleName); + + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + return entityManager + .getMetaStoreManager() + .revokeUsageOnRoleFromGrantee( + getCurrentPolarisContext(), catalogEntity, catalogRoleEntity, principalRoleEntity) + .isSuccess(); + } + + public List<PolarisEntity> listAssigneePrincipalsForPrincipalRole(String principalRoleName) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.LIST_ASSIGNEE_PRINCIPALS_FOR_PRINCIPAL_ROLE; + + authorizeBasicTopLevelEntityOperationOrThrow( + op, principalRoleName, PolarisEntityType.PRINCIPAL_ROLE); + + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + PolarisMetaStoreManager.LoadGrantsResult grantList = + entityManager + .getMetaStoreManager() + .loadGrantsOnSecurable( + getCurrentPolarisContext(), + principalRoleEntity.getCatalogId(), + principalRoleEntity.getId()); + return
buildEntitiesFromGrantResults(grantList, true, null); + } + + /** + * Build the list of entities matching the set of grant records returned by a grant lookup + * request. + * + * @param grantList result of a load grants on a securable or to a grantee + * @param grantees if true, return the list of grantee entities, else the list of securable + * entities + * @param grantFilter filter on the grant records, use null for all + * @return list of grantees or securables matching the filter + */ + private List<PolarisEntity> buildEntitiesFromGrantResults( + @NotNull PolarisMetaStoreManager.LoadGrantsResult grantList, + boolean grantees, + @Nullable Function<PolarisGrantRecord, Boolean> grantFilter) { + Map<Long, PolarisBaseEntity> granteeMap = grantList.getEntitiesAsMap(); + List<PolarisEntity> toReturn = new ArrayList<>(grantList.getGrantRecords().size()); + for (PolarisGrantRecord grantRecord : grantList.getGrantRecords()) { + if (grantFilter == null || grantFilter.apply(grantRecord)) { + long catalogId = + grantees ? grantRecord.getGranteeCatalogId() : grantRecord.getSecurableCatalogId(); + long entityId = grantees ?
grantRecord.getGranteeId() : grantRecord.getSecurableId(); + // get the entity associated with the grantee + PolarisBaseEntity entity = this.getOrLoadEntity(granteeMap, catalogId, entityId); + if (entity != null) { + toReturn.add(PolarisEntity.of(entity)); + } + } + } + return toReturn; + } + + public List<PolarisEntity> listCatalogRolesForPrincipalRole( + String principalRoleName, String catalogName) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.LIST_CATALOG_ROLES_FOR_PRINCIPAL_ROLE; + authorizeBasicTopLevelEntityOperationOrThrow( + op, principalRoleName, PolarisEntityType.PRINCIPAL_ROLE, catalogName); + + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + PolarisMetaStoreManager.LoadGrantsResult grantList = + entityManager + .getMetaStoreManager() + .loadGrantsToGrantee( + getCurrentPolarisContext(), + principalRoleEntity.getCatalogId(), + principalRoleEntity.getId()); + return buildEntitiesFromGrantResults( + grantList, false, grantRec -> grantRec.getSecurableCatalogId() == catalogEntity.getId()); + } + + /** Adds a grant on the root container of this realm to {@code principalRoleName}.
*/ + public boolean grantPrivilegeOnRootContainerToPrincipalRole( + String principalRoleName, PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.ADD_ROOT_GRANT_TO_PRINCIPAL_ROLE; + authorizeGrantOnRootContainerToPrincipalRoleOperationOrThrow(op, principalRoleName); + + PolarisEntity rootContainerEntity = + resolutionManifest.getResolvedRootContainerEntityAsPath().getRawLeafEntity(); + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + + return entityManager + .getMetaStoreManager() + .grantPrivilegeOnSecurableToRole( + getCurrentPolarisContext(), principalRoleEntity, null, rootContainerEntity, privilege) + .isSuccess(); + } + + /** Revokes a grant on the root container of this realm from {@code principalRoleName}. */ + public boolean revokePrivilegeOnRootContainerFromPrincipalRole( + String principalRoleName, PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.REVOKE_ROOT_GRANT_FROM_PRINCIPAL_ROLE; + authorizeGrantOnRootContainerToPrincipalRoleOperationOrThrow(op, principalRoleName); + + PolarisEntity rootContainerEntity = + resolutionManifest.getResolvedRootContainerEntityAsPath().getRawLeafEntity(); + PolarisEntity principalRoleEntity = + findPrincipalRoleByName(principalRoleName) + .orElseThrow( + () -> new NotFoundException("PrincipalRole %s not found", principalRoleName)); + + return entityManager + .getMetaStoreManager() + .revokePrivilegeOnSecurableFromRole( + getCurrentPolarisContext(), principalRoleEntity, null, rootContainerEntity, privilege) + .isSuccess(); + } + + /** + * Adds a catalog-level grant on {@code catalogName} to {@code catalogRoleName} which resides + * within the same catalog on which it is being granted the privilege. 
+ */ + public boolean grantPrivilegeOnCatalogToRole( + String catalogName, String catalogRoleName, PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.ADD_CATALOG_GRANT_TO_CATALOG_ROLE; + + authorizeGrantOnCatalogOperationOrThrow(op, catalogName, catalogRoleName); + + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + + return entityManager + .getMetaStoreManager() + .grantPrivilegeOnSecurableToRole( + getCurrentPolarisContext(), + catalogRoleEntity, + PolarisEntity.toCoreList(List.of(catalogEntity)), + catalogEntity, + privilege) + .isSuccess(); + } + + /** Removes a catalog-level grant on {@code catalogName} from {@code catalogRoleName}. */ + public boolean revokePrivilegeOnCatalogFromRole( + String catalogName, String catalogRoleName, PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.REVOKE_CATALOG_GRANT_FROM_CATALOG_ROLE; + authorizeGrantOnCatalogOperationOrThrow(op, catalogName, catalogRoleName); + + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + + return entityManager + .getMetaStoreManager() + .revokePrivilegeOnSecurableFromRole( + getCurrentPolarisContext(), + catalogRoleEntity, + PolarisEntity.toCoreList(List.of(catalogEntity)), + catalogEntity, + privilege) + .isSuccess(); + } + + /** Adds a namespace-level grant on {@code namespace} to {@code catalogRoleName}. 
*/ + public boolean grantPrivilegeOnNamespaceToRole( + String catalogName, String catalogRoleName, Namespace namespace, PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.ADD_NAMESPACE_GRANT_TO_CATALOG_ROLE; + authorizeGrantOnNamespaceOperationOrThrow(op, catalogName, namespace, catalogRoleName); + + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + + PolarisResolvedPathWrapper resolvedPathWrapper = resolutionManifest.getResolvedPath(namespace); + if (resolvedPathWrapper == null) { + throw new NotFoundException("Namespace %s not found", namespace); + } + List<PolarisEntity> catalogPath = resolvedPathWrapper.getRawParentPath(); + PolarisEntity namespaceEntity = resolvedPathWrapper.getRawLeafEntity(); + + return entityManager + .getMetaStoreManager() + .grantPrivilegeOnSecurableToRole( + getCurrentPolarisContext(), + catalogRoleEntity, + PolarisEntity.toCoreList(catalogPath), + namespaceEntity, + privilege) + .isSuccess(); + } + + /** Removes a namespace-level grant on {@code namespace} from {@code catalogRoleName}.
*/ + public boolean revokePrivilegeOnNamespaceFromRole( + String catalogName, String catalogRoleName, Namespace namespace, PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.REVOKE_NAMESPACE_GRANT_FROM_CATALOG_ROLE; + authorizeGrantOnNamespaceOperationOrThrow(op, catalogName, namespace, catalogRoleName); + + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + + PolarisResolvedPathWrapper resolvedPathWrapper = resolutionManifest.getResolvedPath(namespace); + if (resolvedPathWrapper == null) { + throw new NotFoundException("Namespace %s not found", namespace); + } + List<PolarisEntity> catalogPath = resolvedPathWrapper.getRawParentPath(); + PolarisEntity namespaceEntity = resolvedPathWrapper.getRawLeafEntity(); + + return entityManager + .getMetaStoreManager() + .revokePrivilegeOnSecurableFromRole( + getCurrentPolarisContext(), + catalogRoleEntity, + PolarisEntity.toCoreList(catalogPath), + namespaceEntity, + privilege) + .isSuccess(); + } + + public boolean grantPrivilegeOnTableToRole( + String catalogName, + String catalogRoleName, + TableIdentifier identifier, + PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.ADD_TABLE_GRANT_TO_CATALOG_ROLE; + + authorizeGrantOnTableLikeOperationOrThrow( + op, catalogName, PolarisEntitySubType.TABLE, identifier, catalogRoleName); + + return grantPrivilegeOnTableLikeToRole( + catalogName, catalogRoleName, identifier, PolarisEntitySubType.TABLE, privilege); + } + + public boolean revokePrivilegeOnTableFromRole( + String catalogName, + String catalogRoleName, + TableIdentifier identifier, + PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.REVOKE_TABLE_GRANT_FROM_CATALOG_ROLE; + + authorizeGrantOnTableLikeOperationOrThrow( + op, catalogName, PolarisEntitySubType.TABLE, identifier,
catalogRoleName); + + return revokePrivilegeOnTableLikeFromRole( + catalogName, catalogRoleName, identifier, PolarisEntitySubType.TABLE, privilege); + } + + public boolean grantPrivilegeOnViewToRole( + String catalogName, + String catalogRoleName, + TableIdentifier identifier, + PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.ADD_VIEW_GRANT_TO_CATALOG_ROLE; + + authorizeGrantOnTableLikeOperationOrThrow( + op, catalogName, PolarisEntitySubType.VIEW, identifier, catalogRoleName); + + return grantPrivilegeOnTableLikeToRole( + catalogName, catalogRoleName, identifier, PolarisEntitySubType.VIEW, privilege); + } + + public boolean revokePrivilegeOnViewFromRole( + String catalogName, + String catalogRoleName, + TableIdentifier identifier, + PolarisPrivilege privilege) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.REVOKE_VIEW_GRANT_FROM_CATALOG_ROLE; + + authorizeGrantOnTableLikeOperationOrThrow( + op, catalogName, PolarisEntitySubType.VIEW, identifier, catalogRoleName); + + return revokePrivilegeOnTableLikeFromRole( + catalogName, catalogRoleName, identifier, PolarisEntitySubType.VIEW, privilege); + } + + public List<PolarisEntity> listAssigneePrincipalRolesForCatalogRole( + String catalogName, String catalogRoleName) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.LIST_ASSIGNEE_PRINCIPAL_ROLES_FOR_CATALOG_ROLE; + authorizeBasicCatalogRoleOperationOrThrow(op, catalogName, catalogRoleName); + + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + PolarisMetaStoreManager.LoadGrantsResult grantList = + entityManager + .getMetaStoreManager() + .loadGrantsOnSecurable( + getCurrentPolarisContext(), +
catalogRoleEntity.getCatalogId(), + catalogRoleEntity.getId()); + return buildEntitiesFromGrantResults(grantList, true, null); + } + + /** + * Lists all grants on Catalog-level resources (Catalog/Namespace/Table/View) granted to the + * specified catalogRole. + */ + public List<GrantResource> listGrantsForCatalogRole(String catalogName, String catalogRoleName) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_GRANTS_FOR_CATALOG_ROLE; + authorizeBasicCatalogRoleOperationOrThrow(op, catalogName, catalogRoleName); + + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + PolarisMetaStoreManager.LoadGrantsResult grantList = + entityManager + .getMetaStoreManager() + .loadGrantsToGrantee( + getCurrentPolarisContext(), + catalogRoleEntity.getCatalogId(), + catalogRoleEntity.getId()); + List<CatalogGrant> catalogGrants = new ArrayList<>(); + List<NamespaceGrant> namespaceGrants = new ArrayList<>(); + List<TableGrant> tableGrants = new ArrayList<>(); + List<ViewGrant> viewGrants = new ArrayList<>(); + Map<Long, PolarisBaseEntity> entityMap = grantList.getEntitiesAsMap(); + for (PolarisGrantRecord record : grantList.getGrantRecords()) { + PolarisPrivilege privilege = PolarisPrivilege.fromCode(record.getPrivilegeCode()); + PolarisBaseEntity baseEntity = + this.getOrLoadEntity(entityMap, record.getSecurableCatalogId(), record.getSecurableId()); + if (baseEntity != null) { + switch (baseEntity.getType()) { + case CATALOG: + { + CatalogGrant grant = + new CatalogGrant( + CatalogPrivilege.valueOf(privilege.toString()), + GrantResource.TypeEnum.CATALOG); + catalogGrants.add(grant); + break; + } + case NAMESPACE: + { + NamespaceGrant grant = + new NamespaceGrant( + List.of(NamespaceEntity.of(baseEntity).asNamespace().levels()), + NamespacePrivilege.valueOf(privilege.toString()), + GrantResource.TypeEnum.NAMESPACE); + namespaceGrants.add(grant); + break; + } + case TABLE_LIKE: + { + if (baseEntity.getSubType() ==
PolarisEntitySubType.TABLE) { + TableIdentifier identifier = TableLikeEntity.of(baseEntity).getTableIdentifier(); + TableGrant grant = + new TableGrant( + List.of(identifier.namespace().levels()), + identifier.name(), + TablePrivilege.valueOf(privilege.toString()), + GrantResource.TypeEnum.TABLE); + tableGrants.add(grant); + } else { + TableIdentifier identifier = TableLikeEntity.of(baseEntity).getTableIdentifier(); + ViewGrant grant = + new ViewGrant( + List.of(identifier.namespace().levels()), + identifier.name(), + ViewPrivilege.valueOf(privilege.toString()), + GrantResource.TypeEnum.VIEW); + viewGrants.add(grant); + } + break; + } + default: + throw new IllegalArgumentException( + String.format( + "Unexpected entity type '%s' listing grants for catalogRole '%s' in catalog '%s'", + baseEntity.getType(), catalogRoleName, catalogName)); + } + } + } + // Assemble these at the end so that they're grouped by type. + List<GrantResource> allGrants = new ArrayList<>(); + allGrants.addAll(catalogGrants); + allGrants.addAll(namespaceGrants); + allGrants.addAll(tableGrants); + allGrants.addAll(viewGrants); + return allGrants; + } + + /** + * Get the specified entity from the input map or load it from the backend if the input map is + * null. Normally the input map is not expected to be null, except for backward compatibility + * reasons. + * + * @param entitiesMap map of entities + * @param catalogId the id of the catalog of the entity we are looking for + * @param id id of the entity we are looking for + * @return null if the entity does not exist + */ + private @Nullable PolarisBaseEntity getOrLoadEntity( + @Nullable Map<Long, PolarisBaseEntity> entitiesMap, long catalogId, long id) { + return (entitiesMap == null) + ? entityManager + .getMetaStoreManager() + .loadEntity(getCurrentPolarisContext(), catalogId, id) + .getEntity() + : entitiesMap.get(id); + } + + /** Adds a table-level or view-level grant on {@code identifier} to {@code catalogRoleName}.
*/ + private boolean grantPrivilegeOnTableLikeToRole( + String catalogName, + String catalogRoleName, + TableIdentifier identifier, + PolarisEntitySubType subType, + PolarisPrivilege privilege) { + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + + PolarisResolvedPathWrapper resolvedPathWrapper = + resolutionManifest.getResolvedPath(identifier, subType); + if (resolvedPathWrapper == null) { + if (subType == PolarisEntitySubType.VIEW) { + throw new NotFoundException("View %s not found", identifier); + } else { + throw new NotFoundException("Table %s not found", identifier); + } + } + List<PolarisEntity> catalogPath = resolvedPathWrapper.getRawParentPath(); + PolarisEntity tableLikeEntity = resolvedPathWrapper.getRawLeafEntity(); + + return entityManager + .getMetaStoreManager() + .grantPrivilegeOnSecurableToRole( + getCurrentPolarisContext(), + catalogRoleEntity, + PolarisEntity.toCoreList(catalogPath), + tableLikeEntity, + privilege) + .isSuccess(); + } + + /** + * Removes a table-level or view-level grant on {@code identifier} from {@code catalogRoleName}.
*/ + private boolean revokePrivilegeOnTableLikeFromRole( + String catalogName, + String catalogRoleName, + TableIdentifier identifier, + PolarisEntitySubType subType, + PolarisPrivilege privilege) { + PolarisEntity catalogEntity = + findCatalogByName(catalogName) + .orElseThrow(() -> new NotFoundException("Parent catalog %s not found", catalogName)); + PolarisEntity catalogRoleEntity = + findCatalogRoleByName(catalogName, catalogRoleName) + .orElseThrow(() -> new NotFoundException("CatalogRole %s not found", catalogRoleName)); + + PolarisResolvedPathWrapper resolvedPathWrapper = + resolutionManifest.getResolvedPath(identifier, subType); + if (resolvedPathWrapper == null) { + if (subType == PolarisEntitySubType.VIEW) { + throw new NotFoundException("View %s not found", identifier); + } else { + throw new NotFoundException("Table %s not found", identifier); + } + } + List<PolarisEntity> catalogPath = resolvedPathWrapper.getRawParentPath(); + PolarisEntity tableLikeEntity = resolvedPathWrapper.getRawLeafEntity(); + + return entityManager + .getMetaStoreManager() + .revokePrivilegeOnSecurableFromRole( + getCurrentPolarisContext(), + catalogRoleEntity, + PolarisEntity.toCoreList(catalogPath), + tableLikeEntity, + privilege) + .isSuccess(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/admin/PolarisServiceImpl.java b/polaris-service/src/main/java/io/polaris/service/admin/PolarisServiceImpl.java new file mode 100644 index 0000000000..f033e036a0 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/admin/PolarisServiceImpl.java @@ -0,0 +1,620 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.admin; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.admin.model.AddGrantRequest; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogGrant; +import io.polaris.core.admin.model.CatalogRole; +import io.polaris.core.admin.model.CatalogRoles; +import io.polaris.core.admin.model.Catalogs; +import io.polaris.core.admin.model.CreateCatalogRequest; +import io.polaris.core.admin.model.CreateCatalogRoleRequest; +import io.polaris.core.admin.model.CreatePrincipalRequest; +import io.polaris.core.admin.model.CreatePrincipalRoleRequest; +import io.polaris.core.admin.model.GrantCatalogRoleRequest; +import io.polaris.core.admin.model.GrantPrincipalRoleRequest; +import io.polaris.core.admin.model.GrantResource; +import io.polaris.core.admin.model.GrantResources; +import io.polaris.core.admin.model.NamespaceGrant; +import io.polaris.core.admin.model.Principal; +import io.polaris.core.admin.model.PrincipalRole; +import io.polaris.core.admin.model.PrincipalRoles; +import io.polaris.core.admin.model.PrincipalWithCredentials; +import io.polaris.core.admin.model.Principals; +import io.polaris.core.admin.model.RevokeGrantRequest; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.admin.model.TableGrant; +import io.polaris.core.admin.model.UpdateCatalogRequest; +import io.polaris.core.admin.model.UpdateCatalogRoleRequest; +import io.polaris.core.admin.model.UpdatePrincipalRequest; +import io.polaris.core.admin.model.UpdatePrincipalRoleRequest; +import 
io.polaris.core.admin.model.ViewGrant; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.CatalogRoleEntity; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.entity.PrincipalRoleEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.service.admin.api.PolarisCatalogsApiService; +import io.polaris.service.admin.api.PolarisPrincipalRolesApiService; +import io.polaris.service.admin.api.PolarisPrincipalsApiService; +import io.polaris.service.config.RealmEntityManagerFactory; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.core.SecurityContext; +import java.util.List; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.NotAuthorizedException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** Concrete implementation of the Polaris API services */ +public class PolarisServiceImpl + implements PolarisCatalogsApiService, + PolarisPrincipalsApiService, + PolarisPrincipalRolesApiService { + private static final Logger LOG = LoggerFactory.getLogger(PolarisServiceImpl.class); + private final RealmEntityManagerFactory entityManagerFactory; + private final PolarisAuthorizer polarisAuthorizer; + + public PolarisServiceImpl( + RealmEntityManagerFactory entityManagerFactory, PolarisAuthorizer polarisAuthorizer) { + this.entityManagerFactory = entityManagerFactory; + this.polarisAuthorizer = polarisAuthorizer; + } + + private PolarisAdminService newAdminService(SecurityContext securityContext) { + CallContext callContext = CallContext.getCurrentContext(); + AuthenticatedPolarisPrincipal authenticatedPrincipal = + (AuthenticatedPolarisPrincipal) securityContext.getUserPrincipal(); + if 
(authenticatedPrincipal == null) { + throw new NotAuthorizedException("Failed to find authenticatedPrincipal in SecurityContext"); + } + + PolarisEntityManager entityManager = + entityManagerFactory.getOrCreateEntityManager(callContext.getRealmContext()); + return new PolarisAdminService( + callContext, entityManager, authenticatedPrincipal, polarisAuthorizer); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response createCatalog(CreateCatalogRequest request, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + Catalog catalog = request.getCatalog(); + validateStorageConfig(catalog.getStorageConfigInfo()); + Catalog newCatalog = + new CatalogEntity(adminService.createCatalog(CatalogEntity.fromCatalog(catalog))) + .asCatalog(); + LOG.info("Created new catalog {}", newCatalog); + return Response.status(Response.Status.CREATED).build(); + } + + private void validateStorageConfig(StorageConfigInfo storageConfigInfo) { + CallContext callContext = CallContext.getCurrentContext(); + PolarisCallContext polarisCallContext = callContext.getPolarisCallContext(); + List<String> allowedStorageTypes = + polarisCallContext + .getConfigurationStore() + .getConfiguration( + polarisCallContext, + "SUPPORTED_CATALOG_STORAGE_TYPES", + List.of( + StorageConfigInfo.StorageTypeEnum.S3.name(), + StorageConfigInfo.StorageTypeEnum.AZURE.name(), + StorageConfigInfo.StorageTypeEnum.GCS.name(), + StorageConfigInfo.StorageTypeEnum.FILE.name())); + if (!allowedStorageTypes.contains(storageConfigInfo.getStorageType().name())) { + LOG.atWarn() + .addKeyValue("storageConfig", storageConfigInfo) + .log("Disallowed storage type in catalog"); + throw new IllegalArgumentException( + "Unsupported storage type: " + storageConfigInfo.getStorageType()); + } + } + + /** From PolarisCatalogsApiService */ + @Override + public Response deleteCatalog(String catalogName, SecurityContext securityContext) { + PolarisAdminService adminService = 
newAdminService(securityContext); + adminService.deleteCatalog(catalogName); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response getCatalog(String catalogName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok(adminService.getCatalog(catalogName).asCatalog()).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response updateCatalog( + String catalogName, UpdateCatalogRequest updateRequest, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + if (updateRequest.getStorageConfigInfo() != null) { + validateStorageConfig(updateRequest.getStorageConfigInfo()); + } + return Response.ok(adminService.updateCatalog(catalogName, updateRequest).asCatalog()).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response listCatalogs(SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<Catalog> catalogList = + adminService.listCatalogs().stream() + .map(CatalogEntity::new) + .map(CatalogEntity::asCatalog) + .toList(); + Catalogs catalogs = new Catalogs(catalogList); + LOG.debug("listCatalogs returning: {}", catalogs); + return Response.ok(catalogs).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response createPrincipal(CreatePrincipalRequest request, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + PrincipalEntity principal = PrincipalEntity.fromPrincipal(request.getPrincipal()); + if (Boolean.TRUE.equals(request.getCredentialRotationRequired())) { + principal = + new PrincipalEntity.Builder(principal).setCredentialRotationRequiredState().build(); + } + PrincipalWithCredentials createdPrincipal = adminService.createPrincipal(principal); + LOG.info("Created new principal {}", 
createdPrincipal); + return Response.status(Response.Status.CREATED).entity(createdPrincipal).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response deletePrincipal(String principalName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + adminService.deletePrincipal(principalName); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response getPrincipal(String principalName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok(adminService.getPrincipal(principalName).asPrincipal()).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response updatePrincipal( + String principalName, UpdatePrincipalRequest updateRequest, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok(adminService.updatePrincipal(principalName, updateRequest).asPrincipal()) + .build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response rotateCredentials(String principalName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok(adminService.rotateCredentials(principalName)).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response listPrincipals(SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<Principal> principalList = + adminService.listPrincipals().stream() + .map(PrincipalEntity::new) + .map(PrincipalEntity::asPrincipal) + .toList(); + Principals principals = new Principals(principalList); + LOG.debug("listPrincipals returning: {}", principals); + return Response.ok(principals).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response 
createPrincipalRole( + CreatePrincipalRoleRequest request, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + PrincipalRole newPrincipalRole = + new PrincipalRoleEntity( + adminService.createPrincipalRole( + PrincipalRoleEntity.fromPrincipalRole(request.getPrincipalRole()))) + .asPrincipalRole(); + LOG.info("Created new principalRole {}", newPrincipalRole); + return Response.status(Response.Status.CREATED).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response deletePrincipalRole(String principalRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + adminService.deletePrincipalRole(principalRoleName); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response getPrincipalRole(String principalRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok(adminService.getPrincipalRole(principalRoleName).asPrincipalRole()).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response updatePrincipalRole( + String principalRoleName, + UpdatePrincipalRoleRequest updateRequest, + SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok( + adminService.updatePrincipalRole(principalRoleName, updateRequest).asPrincipalRole()) + .build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response listPrincipalRoles(SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<PrincipalRole> principalRoleList = + adminService.listPrincipalRoles().stream() + .map(PrincipalRoleEntity::new) + .map(PrincipalRoleEntity::asPrincipalRole) + .toList(); + PrincipalRoles principalRoles = new 
PrincipalRoles(principalRoleList); + LOG.debug("listPrincipalRoles returning: {}", principalRoles); + return Response.ok(principalRoles).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response createCatalogRole( + String catalogName, CreateCatalogRoleRequest request, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + CatalogRole newCatalogRole = + new CatalogRoleEntity( + adminService.createCatalogRole( + catalogName, CatalogRoleEntity.fromCatalogRole(request.getCatalogRole()))) + .asCatalogRole(); + LOG.info("Created new catalogRole {}", newCatalogRole); + return Response.status(Response.Status.CREATED).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response deleteCatalogRole( + String catalogName, String catalogRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + adminService.deleteCatalogRole(catalogName, catalogRoleName); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response getCatalogRole( + String catalogName, String catalogRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok(adminService.getCatalogRole(catalogName, catalogRoleName).asCatalogRole()) + .build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response updateCatalogRole( + String catalogName, + String catalogRoleName, + UpdateCatalogRoleRequest updateRequest, + SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + return Response.ok( + adminService + .updateCatalogRole(catalogName, catalogRoleName, updateRequest) + .asCatalogRole()) + .build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response listCatalogRoles(String catalogName, SecurityContext 
securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<CatalogRole> catalogRoleList = + adminService.listCatalogRoles(catalogName).stream() + .map(CatalogRoleEntity::new) + .map(CatalogRoleEntity::asCatalogRole) + .toList(); + CatalogRoles catalogRoles = new CatalogRoles(catalogRoleList); + LOG.debug("listCatalogRoles returning: {}", catalogRoles); + return Response.ok(catalogRoles).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response assignPrincipalRole( + String principalName, GrantPrincipalRoleRequest request, SecurityContext securityContext) { + LOG.info( + "Assigning principalRole {} to principal {}", + request.getPrincipalRole().getName(), + principalName); + PolarisAdminService adminService = newAdminService(securityContext); + adminService.assignPrincipalRole(principalName, request.getPrincipalRole().getName()); + return Response.status(Response.Status.CREATED).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response revokePrincipalRole( + String principalName, String principalRoleName, SecurityContext securityContext) { + LOG.info("Revoking principalRole {} from principal {}", principalRoleName, principalName); + PolarisAdminService adminService = newAdminService(securityContext); + adminService.revokePrincipalRole(principalName, principalRoleName); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From PolarisPrincipalsApiService */ + @Override + public Response listPrincipalRolesAssigned( + String principalName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<PrincipalRole> principalRoleList = + adminService.listPrincipalRolesAssigned(principalName).stream() + .map(PrincipalRoleEntity::new) + .map(PrincipalRoleEntity::asPrincipalRole) + .toList(); + PrincipalRoles principalRoles = new PrincipalRoles(principalRoleList); + LOG.debug("listPrincipalRolesAssigned returning: {}", principalRoles); 
+ return Response.ok(principalRoles).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response assignCatalogRoleToPrincipalRole( + String principalRoleName, + String catalogName, + GrantCatalogRoleRequest request, + SecurityContext securityContext) { + LOG.info( + "Assigning catalogRole {} in catalog {} to principalRole {}", + request.getCatalogRole().getName(), + catalogName, + principalRoleName); + PolarisAdminService adminService = newAdminService(securityContext); + adminService.assignCatalogRoleToPrincipalRole( + principalRoleName, catalogName, request.getCatalogRole().getName()); + return Response.status(Response.Status.CREATED).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response revokeCatalogRoleFromPrincipalRole( + String principalRoleName, + String catalogName, + String catalogRoleName, + SecurityContext securityContext) { + LOG.info( + "Revoking catalogRole {} in catalog {} from principalRole {}", + catalogRoleName, + catalogName, + principalRoleName); + PolarisAdminService adminService = newAdminService(securityContext); + adminService.revokeCatalogRoleFromPrincipalRole( + principalRoleName, catalogName, catalogRoleName); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response listAssigneePrincipalsForPrincipalRole( + String principalRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<Principal> principalList = + adminService.listAssigneePrincipalsForPrincipalRole(principalRoleName).stream() + .map(PrincipalEntity::new) + .map(PrincipalEntity::asPrincipal) + .toList(); + Principals principals = new Principals(principalList); + LOG.debug("listAssigneePrincipalsForPrincipalRole returning: {}", principals); + return Response.ok(principals).build(); + } + + /** From PolarisPrincipalRolesApiService */ + @Override + public Response 
listCatalogRolesForPrincipalRole( + String principalRoleName, String catalogName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<CatalogRole> catalogRoleList = + adminService.listCatalogRolesForPrincipalRole(principalRoleName, catalogName).stream() + .map(CatalogRoleEntity::new) + .map(CatalogRoleEntity::asCatalogRole) + .toList(); + CatalogRoles catalogRoles = new CatalogRoles(catalogRoleList); + LOG.debug("listCatalogRolesForPrincipalRole returning: {}", catalogRoles); + return Response.ok(catalogRoles).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response addGrantToCatalogRole( + String catalogName, + String catalogRoleName, + AddGrantRequest grantRequest, + SecurityContext securityContext) { + LOG.info( + "Adding grant {} to catalogRole {} in catalog {}", + grantRequest, + catalogRoleName, + catalogName); + PolarisAdminService adminService = newAdminService(securityContext); + switch (grantRequest.getGrant()) { + // The per-securable-type Privilege enums must be exact String match for a subset of all + // PolarisPrivilege values. 
+ case ViewGrant viewGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(viewGrant.getPrivilege().toString()); + String viewName = viewGrant.getViewName(); + String[] namespaceParts = viewGrant.getNamespace().toArray(new String[0]); + adminService.grantPrivilegeOnViewToRole( + catalogName, + catalogRoleName, + TableIdentifier.of(Namespace.of(namespaceParts), viewName), + privilege); + break; + } + case TableGrant tableGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(tableGrant.getPrivilege().toString()); + String tableName = tableGrant.getTableName(); + String[] namespaceParts = tableGrant.getNamespace().toArray(new String[0]); + adminService.grantPrivilegeOnTableToRole( + catalogName, + catalogRoleName, + TableIdentifier.of(Namespace.of(namespaceParts), tableName), + privilege); + break; + } + case NamespaceGrant namespaceGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(namespaceGrant.getPrivilege().toString()); + String[] namespaceParts = namespaceGrant.getNamespace().toArray(new String[0]); + adminService.grantPrivilegeOnNamespaceToRole( + catalogName, catalogRoleName, Namespace.of(namespaceParts), privilege); + break; + } + case CatalogGrant catalogGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(catalogGrant.getPrivilege().toString()); + adminService.grantPrivilegeOnCatalogToRole(catalogName, catalogRoleName, privilege); + break; + } + default: + LOG.atWarn() + .addKeyValue("catalog", catalogName) + .addKeyValue("role", catalogRoleName) + .log("Don't know how to handle privilege grant: {}", grantRequest); + return Response.status(Response.Status.BAD_REQUEST).build(); + } + return Response.status(Response.Status.CREATED).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response revokeGrantFromCatalogRole( + String catalogName, + String catalogRoleName, + Boolean cascade, + RevokeGrantRequest grantRequest, + SecurityContext securityContext) { + LOG.info( + 
"Revoking grant {} from catalogRole {} in catalog {}", + grantRequest, + catalogRoleName, + catalogName); + if (cascade != null && cascade.booleanValue()) { + LOG.warn("Tried to use unimplemented 'cascade' feature when revoking grants."); + return Response.status(501).build(); // not implemented + } + + PolarisAdminService adminService = newAdminService(securityContext); + switch (grantRequest.getGrant()) { + // The per-securable-type Privilege enums must be exact String match for a subset of all + // PolarisPrivilege values. + case ViewGrant viewGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(viewGrant.getPrivilege().toString()); + String viewName = viewGrant.getViewName(); + String[] namespaceParts = viewGrant.getNamespace().toArray(new String[0]); + adminService.revokePrivilegeOnViewFromRole( + catalogName, + catalogRoleName, + TableIdentifier.of(Namespace.of(namespaceParts), viewName), + privilege); + break; + } + case TableGrant tableGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(tableGrant.getPrivilege().toString()); + String tableName = tableGrant.getTableName(); + String[] namespaceParts = tableGrant.getNamespace().toArray(new String[0]); + adminService.revokePrivilegeOnTableFromRole( + catalogName, + catalogRoleName, + TableIdentifier.of(Namespace.of(namespaceParts), tableName), + privilege); + break; + } + case NamespaceGrant namespaceGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(namespaceGrant.getPrivilege().toString()); + String[] namespaceParts = namespaceGrant.getNamespace().toArray(new String[0]); + adminService.revokePrivilegeOnNamespaceFromRole( + catalogName, catalogRoleName, Namespace.of(namespaceParts), privilege); + break; + } + case CatalogGrant catalogGrant: + { + PolarisPrivilege privilege = + PolarisPrivilege.valueOf(catalogGrant.getPrivilege().toString()); + adminService.revokePrivilegeOnCatalogFromRole(catalogName, catalogRoleName, privilege); + break; + } + default: + 
LOG.atWarn() + .addKeyValue("catalog", catalogName) + .addKeyValue("role", catalogRoleName) + .log("Don't know how to handle privilege revocation: {}", grantRequest); + return Response.status(Response.Status.BAD_REQUEST).build(); + } + return Response.status(Response.Status.CREATED).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response listAssigneePrincipalRolesForCatalogRole( + String catalogName, String catalogRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<PrincipalRole> principalRoleList = + adminService.listAssigneePrincipalRolesForCatalogRole(catalogName, catalogRoleName).stream() + .map(PrincipalRoleEntity::new) + .map(PrincipalRoleEntity::asPrincipalRole) + .toList(); + PrincipalRoles principalRoles = new PrincipalRoles(principalRoleList); + LOG.debug("listAssigneePrincipalRolesForCatalogRole returning: {}", principalRoles); + return Response.ok(principalRoles).build(); + } + + /** From PolarisCatalogsApiService */ + @Override + public Response listGrantsForCatalogRole( + String catalogName, String catalogRoleName, SecurityContext securityContext) { + PolarisAdminService adminService = newAdminService(securityContext); + List<GrantResource> grantList = + adminService.listGrantsForCatalogRole(catalogName, catalogRoleName); + GrantResources grantResources = new GrantResources(grantList); + return Response.ok(grantResources).build(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/BasePolarisAuthenticator.java b/polaris-service/src/main/java/io/polaris/service/auth/BasePolarisAuthenticator.java new file mode 100644 index 0000000000..2a088c3f5c --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/BasePolarisAuthenticator.java @@ -0,0 +1,117 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.service.config.RealmEntityManagerFactory; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Optional; +import java.util.Set; +import org.apache.commons.lang3.exception.ExceptionUtils; +import org.apache.iceberg.exceptions.NotAuthorizedException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Base implementation of {@link DiscoverableAuthenticator} constructs a {@link + * AuthenticatedPolarisPrincipal} from the token parsed by subclasses. The {@link + * AuthenticatedPolarisPrincipal} is read from the {@link PolarisMetaStoreManager} for the current + * {@link RealmContext}. If the token defines a non-empty set of scopes, only the principal roles + * specified in the scopes will be active for the current principal. Only the grants assigned to + * these roles will be active in the current request. 
+ */ +public abstract class BasePolarisAuthenticator + implements DiscoverableAuthenticator<String, AuthenticatedPolarisPrincipal> { + public static final String PRINCIPAL_ROLE_ALL = "PRINCIPAL_ROLE:ALL"; + public static final String PRINCIPAL_ROLE_PREFIX = "PRINCIPAL_ROLE:"; + private static final Logger LOGGER = LoggerFactory.getLogger(BasePolarisAuthenticator.class); + + protected RealmEntityManagerFactory entityManagerFactory; + + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + this.entityManagerFactory = entityManagerFactory; + } + + public PolarisCallContext getCurrentPolarisContext() { + return CallContext.getCurrentContext().getPolarisCallContext(); + } + + protected Optional<AuthenticatedPolarisPrincipal> getPrincipal(DecodedToken tokenInfo) { + LOGGER.debug("Resolving principal for tokenInfo client_id={}", tokenInfo.getClientId()); + RealmContext realmContext = CallContext.getCurrentContext().getRealmContext(); + PolarisMetaStoreManager metaStoreManager = + entityManagerFactory.getOrCreateEntityManager(realmContext).getMetaStoreManager(); + PolarisEntity principal; + try { + principal = + tokenInfo.getPrincipalId() > 0 + ? 
PolarisEntity.of( + metaStoreManager.loadEntity( + getCurrentPolarisContext(), 0L, tokenInfo.getPrincipalId())) + : PolarisEntity.of( + metaStoreManager.readEntityByName( + getCurrentPolarisContext(), + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + tokenInfo.getSub())); + } catch (Exception e) { + LoggerFactory.getLogger(BasePolarisAuthenticator.class) + .atError() + .addKeyValue("errMsg", e.getMessage()) + .addKeyValue("stackTrace", ExceptionUtils.getStackTrace(e)) + .log("Unable to authenticate user with token"); + throw new NotAuthorizedException("Unable to authenticate"); + } + if (principal == null) { + LOGGER.warn( + "Failed to resolve principal from tokenInfo client_id={}", tokenInfo.getClientId()); + throw new NotAuthorizedException("Unable to authenticate"); + } + + Set<String> activatedPrincipalRoles = new HashSet<>(); + // TODO: Consolidate the divergent "scopes" logic between test-bearer-token and token-exchange. + if (tokenInfo.getScope() != null && !tokenInfo.getScope().equals(PRINCIPAL_ROLE_ALL)) { + activatedPrincipalRoles.addAll( + Arrays.stream(tokenInfo.getScope().split(" ")) + .map( + s -> // strip the principal_role prefix, if present + s.startsWith(PRINCIPAL_ROLE_PREFIX) + ? 
s.substring(PRINCIPAL_ROLE_PREFIX.length()) + : s) + .toList()); + } + + LOGGER.debug("Resolved principal: {}", principal); + + AuthenticatedPolarisPrincipal authenticatedPrincipal = + new AuthenticatedPolarisPrincipal(new PrincipalEntity(principal), activatedPrincipalRoles); + LOGGER.debug("Populating authenticatedPrincipal into CallContext: {}", authenticatedPrincipal); + CallContext.getCurrentContext() + .contextVariables() + .put(CallContext.AUTHENTICATED_PRINCIPAL, authenticatedPrincipal); + return Optional.of(authenticatedPrincipal); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/DecodedToken.java b/polaris-service/src/main/java/io/polaris/service/auth/DecodedToken.java new file mode 100644 index 0000000000..20fde6eeb3 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/DecodedToken.java @@ -0,0 +1,26 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.auth; + +public interface DecodedToken { + Long getPrincipalId(); + + String getClientId(); + + String getSub(); + + String getScope(); +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/DefaultOAuth2ApiService.java b/polaris-service/src/main/java/io/polaris/service/auth/DefaultOAuth2ApiService.java new file mode 100644 index 0000000000..4b1e6d1fc7 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/DefaultOAuth2ApiService.java @@ -0,0 +1,131 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeName; +import io.polaris.core.context.CallContext; +import io.polaris.service.config.HasEntityManagerFactory; +import io.polaris.service.config.OAuth2ApiService; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.types.TokenType; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.core.SecurityContext; +import java.net.URLDecoder; +import java.nio.charset.Charset; +import org.apache.commons.codec.binary.Base64; +import org.apache.hadoop.hdfs.web.oauth2.OAuth2Constants; +import org.apache.iceberg.rest.auth.OAuth2Properties; +import org.apache.iceberg.rest.responses.OAuthTokenResponse; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Default implementation of the {@link OAuth2ApiService} that generates a JWT token for the client + * if the client secret matches. + */ +@JsonTypeName("default") +public class DefaultOAuth2ApiService implements OAuth2ApiService, HasEntityManagerFactory { + public static final Logger LOGGER = LoggerFactory.getLogger(DefaultOAuth2ApiService.class); + private TokenBrokerFactory tokenBrokerFactory; + + public DefaultOAuth2ApiService() {} + + @Override + public Response getToken( + String authHeader, + String grantType, + String scope, + String clientId, + String clientSecret, + TokenType requestedTokenType, + String subjectToken, + TokenType subjectTokenType, + String actorToken, + TokenType actorTokenType, + SecurityContext securityContext) { + + TokenBroker tokenBroker = + tokenBrokerFactory.apply(CallContext.getCurrentContext().getRealmContext()); + if (!tokenBroker.supportsGrantType(grantType)) { + return OAuthUtils.getResponseFromError(OAuthTokenErrorResponse.Error.unsupported_grant_type); + } + if (!tokenBroker.supportsRequestedTokenType(requestedTokenType)) { + return OAuthUtils.getResponseFromError(OAuthTokenErrorResponse.Error.invalid_request); + } + if (authHeader == null && 
clientId == null) { + return OAuthUtils.getResponseFromError(OAuthTokenErrorResponse.Error.invalid_client); + } + if (authHeader != null && clientId == null && authHeader.startsWith("Basic ")) { + String credentials = new String(Base64.decodeBase64(authHeader.substring(6))); + if (!credentials.contains(":")) { + return OAuthUtils.getResponseFromError(OAuthTokenErrorResponse.Error.invalid_client); + } + LOGGER.debug("Found credentials in auth header - treating as client_credentials"); + String[] parts = credentials.split(":", 2); + clientId = URLDecoder.decode(parts[0], Charset.defaultCharset()); + clientSecret = URLDecoder.decode(parts[1], Charset.defaultCharset()); + } + TokenResponse tokenResponse = + switch (subjectTokenType) { + case TokenType.ID_TOKEN, + TokenType.REFRESH_TOKEN, + TokenType.JWT, + TokenType.SAML1, + TokenType.SAML2 -> + new TokenResponse(OAuthTokenErrorResponse.Error.invalid_request); + case TokenType.ACCESS_TOKEN -> { + // token exchange with client id and client secret means the client has previously + // attempted to refresh + // an access token, but refreshing was not supported by the token broker. 
Accept the + // client id and + // secret and treat it as a new token request + if (clientId != null && clientSecret != null) { + yield tokenBroker.generateFromClientSecrets( + clientId, clientSecret, OAuth2Constants.CLIENT_CREDENTIALS, scope); + } else { + yield tokenBroker.generateFromToken(subjectTokenType, subjectToken, grantType, scope); + } + } + case null -> + tokenBroker.generateFromClientSecrets(clientId, clientSecret, grantType, scope); + }; + if (tokenResponse == null) { + return OAuthUtils.getResponseFromError(OAuthTokenErrorResponse.Error.unsupported_grant_type); + } + if (!tokenResponse.isValid()) { + return OAuthUtils.getResponseFromError(tokenResponse.getError()); + } + return Response.ok( + OAuthTokenResponse.builder() + .withToken(tokenResponse.getAccessToken()) + .withTokenType(OAuth2Constants.BEARER) + .withIssuedTokenType(OAuth2Properties.ACCESS_TOKEN_TYPE) + .setExpirationInSeconds(tokenResponse.getExpiresIn()) + .build()) + .build(); + } + + @Override + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + if (tokenBrokerFactory instanceof HasEntityManagerFactory hemf) { + hemf.setEntityManagerFactory(entityManagerFactory); + } + } + + public void setTokenBroker(TokenBrokerFactory tokenBrokerFactory) { + this.tokenBrokerFactory = tokenBrokerFactory; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/DefaultPolarisAuthenticator.java b/polaris-service/src/main/java/io/polaris/service/auth/DefaultPolarisAuthenticator.java new file mode 100644 index 0000000000..2d30522383 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/DefaultPolarisAuthenticator.java @@ -0,0 +1,48 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonProperty; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.service.config.HasEntityManagerFactory; +import io.polaris.service.config.RealmEntityManagerFactory; +import java.util.Optional; + +public class DefaultPolarisAuthenticator extends BasePolarisAuthenticator { + private TokenBrokerFactory tokenBrokerFactory; + + @Override + public Optional authenticate(String credentials) { + TokenBroker handler = + tokenBrokerFactory.apply(CallContext.getCurrentContext().getRealmContext()); + DecodedToken decodedToken = handler.verify(credentials); + return getPrincipal(decodedToken); + } + + @Override + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + super.setEntityManagerFactory(entityManagerFactory); + if (tokenBrokerFactory instanceof HasEntityManagerFactory) { + ((HasEntityManagerFactory) tokenBrokerFactory).setEntityManagerFactory(entityManagerFactory); + } + } + + @JsonProperty("tokenBroker") + public void setTokenBroker(TokenBrokerFactory tokenBrokerFactory) { + this.tokenBrokerFactory = tokenBrokerFactory; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/DiscoverableAuthenticator.java b/polaris-service/src/main/java/io/polaris/service/auth/DiscoverableAuthenticator.java new file mode 100644 index 0000000000..d5a731336f --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/DiscoverableAuthenticator.java @@ 
-0,0 +1,35 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import io.dropwizard.auth.Authenticator; +import io.dropwizard.jackson.Discoverable; +import io.polaris.service.config.HasEntityManagerFactory; +import java.security.Principal; + +/** + * Extension of the {@link Authenticator} interface that extends {@link Discoverable} so + * implementations can be discovered using the mechanisms described in + * https://www.dropwizard.io/en/stable/manual/configuration.html#polymorphic-configuration . The + * default implementation is {@link TestInlineBearerTokenPolarisAuthenticator}. + * + * @param <C> the type of credentials the authenticator accepts + * @param <P> the {@link Principal} subtype produced on successful authentication

+ */ +@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, property = "class") +public interface DiscoverableAuthenticator + extends Authenticator, Discoverable, HasEntityManagerFactory {} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/JWTBroker.java b/polaris-service/src/main/java/io/polaris/service/auth/JWTBroker.java new file mode 100644 index 0000000000..1fa708f9db --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/JWTBroker.java @@ -0,0 +1,164 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import com.auth0.jwt.JWT; +import com.auth0.jwt.algorithms.Algorithm; +import com.auth0.jwt.interfaces.DecodedJWT; +import com.auth0.jwt.interfaces.JWTVerifier; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.service.types.TokenType; +import java.time.Instant; +import java.time.temporal.ChronoUnit; +import java.util.Optional; +import java.util.UUID; +import org.apache.commons.lang3.StringUtils; +import org.apache.iceberg.exceptions.NotAuthorizedException; + +/** Generates a JWT Token. 
*/ +abstract class JWTBroker implements TokenBroker { + + private static final String ISSUER_KEY = "polaris"; + private static final String CLAIM_KEY_ACTIVE = "active"; + private static final String CLAIM_KEY_CLIENT_ID = "client_id"; + private static final String CLAIM_KEY_PRINCIPAL_ID = "principalId"; + private static final String CLAIM_KEY_SCOPE = "scope"; + + private final PolarisEntityManager entityManager; + private final int maxTokenGenerationInSeconds; + + JWTBroker(PolarisEntityManager entityManager, int maxTokenGenerationInSeconds) { + this.entityManager = entityManager; + this.maxTokenGenerationInSeconds = maxTokenGenerationInSeconds; + } + + abstract Algorithm getAlgorithm(); + + public DecodedToken verify(String token) { + JWTVerifier verifier = JWT.require(getAlgorithm()).build(); + DecodedJWT decodedJWT = verifier.verify(token); + Boolean isActive = decodedJWT.getClaim(CLAIM_KEY_ACTIVE).asBoolean(); + if (isActive == null || !isActive) { + throw new NotAuthorizedException("Token is not active"); + } + if (decodedJWT.getExpiresAtAsInstant().isBefore(Instant.now())) { + throw new NotAuthorizedException("Token has expired"); + } + return new DecodedToken() { + @Override + public Long getPrincipalId() { + return decodedJWT.getClaim(CLAIM_KEY_PRINCIPAL_ID).asLong(); + } + + @Override + public String getClientId() { + return decodedJWT.getClaim(CLAIM_KEY_CLIENT_ID).asString(); + } + + @Override + public String getSub() { + return decodedJWT.getSubject(); + } + + @Override + public String getScope() { + return decodedJWT.getClaim(CLAIM_KEY_SCOPE).asString(); + } + }; + } + + @Override + public TokenResponse generateFromToken( + TokenType tokenType, String subjectToken, String grantType, String scope) { + if (!TokenType.ACCESS_TOKEN.equals(tokenType)) { + return new TokenResponse(OAuthTokenErrorResponse.Error.invalid_request); + } + if (StringUtils.isBlank(subjectToken)) { + return new TokenResponse(OAuthTokenErrorResponse.Error.invalid_request); + } + DecodedToken decodedToken =
verify(subjectToken); + PolarisMetaStoreManager.EntityResult principalLookup = + entityManager + .getMetaStoreManager() + .loadEntity( + CallContext.getCurrentContext().getPolarisCallContext(), + 0L, + decodedToken.getPrincipalId()); + if (!principalLookup.isSuccess() + || principalLookup.getEntity().getType() != PolarisEntityType.PRINCIPAL) { + return new TokenResponse(OAuthTokenErrorResponse.Error.unauthorized_client); + } + String tokenString = + generateTokenString( + decodedToken.getClientId(), decodedToken.getScope(), decodedToken.getPrincipalId()); + return new TokenResponse( + tokenString, TokenType.ACCESS_TOKEN.getValue(), maxTokenGenerationInSeconds); + } + + @Override + public TokenResponse generateFromClientSecrets( + String clientId, String clientSecret, String grantType, String scope) { + // Initial sanity checks + TokenRequestValidator validator = new TokenRequestValidator(); + Optional initialValidationResponse = + validator.validateForClientCredentialsFlow(clientId, clientSecret, grantType, scope); + if (initialValidationResponse.isPresent()) { + return new TokenResponse(initialValidationResponse.get()); + } + + Optional principal = + TokenBroker.findPrincipalEntity(entityManager, clientId, clientSecret); + if (principal.isEmpty()) { + return new TokenResponse(OAuthTokenErrorResponse.Error.unauthorized_client); + } + String tokenString = generateTokenString(clientId, scope, principal.get().getId()); + return new TokenResponse( + tokenString, TokenType.ACCESS_TOKEN.getValue(), maxTokenGenerationInSeconds); + } + + private String generateTokenString(String clientId, String scope, Long principalId) { + Instant now = Instant.now(); + return JWT.create() + .withIssuer(ISSUER_KEY) + .withSubject(String.valueOf(principalId)) + .withIssuedAt(now) + .withExpiresAt(now.plus(maxTokenGenerationInSeconds, ChronoUnit.SECONDS)) + .withJWTId(UUID.randomUUID().toString()) + .withClaim(CLAIM_KEY_ACTIVE, true) + .withClaim(CLAIM_KEY_CLIENT_ID, clientId) + 
.withClaim(CLAIM_KEY_PRINCIPAL_ID, principalId) + .withClaim(CLAIM_KEY_SCOPE, scopes(scope)) + .sign(getAlgorithm()); + } + + @Override + public boolean supportsGrantType(String grantType) { + return TokenRequestValidator.ALLOWED_GRANT_TYPES.contains(grantType); + } + + @Override + public boolean supportsRequestedTokenType(TokenType tokenType) { + return tokenType == null || TokenType.ACCESS_TOKEN.equals(tokenType); + } + + private String scopes(String scope) { + return StringUtils.isNotBlank(scope) ? scope : BasePolarisAuthenticator.PRINCIPAL_ROLE_ALL; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/JWTRSAKeyPair.java b/polaris-service/src/main/java/io/polaris/service/auth/JWTRSAKeyPair.java new file mode 100644 index 0000000000..76da383c55 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/JWTRSAKeyPair.java @@ -0,0 +1,40 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
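To make the shape of the token concrete, here is a JDK-only sketch of the three-part layout that `generateTokenString` produces through java-jwt: `base64url(header).base64url(claims).HMAC-SHA256(signature)`. The claim names mirror the broker's `CLAIM_KEY_*` constants; the class name, secret, ids, and scope are made-up example values, and the production code delegates all of this to `JWT.create(...).sign(getAlgorithm())`.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;

// Illustrative sketch only: hand-rolled HS256 JWT matching the broker's claim set.
public class JwtLayoutSketch {
  private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

  static String b64(byte[] bytes) {
    return B64.encodeToString(bytes);
  }

  // What Algorithm.HMAC256(secret) does to the signing input.
  static String hs256(String signingInput, String secret) {
    try {
      Mac mac = Mac.getInstance("HmacSHA256");
      mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
      return b64(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }

  static String token(String clientId, long principalId, String scope, String secret) {
    long now = Instant.now().getEpochSecond();
    String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
    // Claim names mirror the broker's CLAIM_KEY_* constants.
    String claims = String.format(
        "{\"iss\":\"polaris\",\"sub\":\"%d\",\"iat\":%d,\"exp\":%d,"
            + "\"active\":true,\"client_id\":\"%s\",\"principalId\":%d,\"scope\":\"%s\"}",
        principalId, now, now + 3600, clientId, principalId, scope);
    String signingInput = b64(header.getBytes(StandardCharsets.UTF_8))
        + "." + b64(claims.getBytes(StandardCharsets.UTF_8));
    return signingInput + "." + hs256(signingInput, secret);
  }

  public static void main(String[] args) {
    System.out.println(token("my-client", 42L, "PRINCIPAL_ROLE:ALL", "not-a-real-secret"));
  }
}
```

A verifier rejects the token if the recomputed MAC over `header.claims` does not match the third segment, which is exactly the check `JWT.require(getAlgorithm()).build().verify(token)` performs before the broker's `active`/expiry checks.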
+ */ +package io.polaris.service.auth; + +import com.auth0.jwt.algorithms.Algorithm; +import io.polaris.core.persistence.PolarisEntityManager; +import java.security.interfaces.RSAPrivateKey; +import java.security.interfaces.RSAPublicKey; + +/** Generates a JWT using a Public/Private RSA Key */ +public class JWTRSAKeyPair extends JWTBroker { + + JWTRSAKeyPair(PolarisEntityManager entityManager, int maxTokenGenerationInSeconds) { + super(entityManager, maxTokenGenerationInSeconds); + } + + KeyProvider getKeyProvider() { + return new LocalRSAKeyProvider(); + } + + @Override + Algorithm getAlgorithm() { + KeyProvider keyProvider = getKeyProvider(); + return Algorithm.RSA256( + (RSAPublicKey) keyProvider.getPublicKey(), (RSAPrivateKey) keyProvider.getPrivateKey()); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/JWTRSAKeyPairFactory.java b/polaris-service/src/main/java/io/polaris/service/auth/JWTRSAKeyPairFactory.java new file mode 100644 index 0000000000..876d2c96c2 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/JWTRSAKeyPairFactory.java @@ -0,0 +1,43 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
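For the RSA variant, `Algorithm.RSA256(publicKey, privateKey)` boils down to SHA256withRSA over the JWT signing input: sign with the private key, verify with the public one. A small JDK-only sketch (key size and input string are example values, class name illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Illustrative sketch of the RS256 sign/verify primitive used by JWTRSAKeyPair.
public class Rsa256Sketch {
  static KeyPair newKeyPair() {
    try {
      KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
      gen.initialize(2048);
      return gen.generateKeyPair();
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }

  static byte[] sign(String signingInput, KeyPair kp) {
    try {
      Signature s = Signature.getInstance("SHA256withRSA");
      s.initSign(kp.getPrivate());
      s.update(signingInput.getBytes(StandardCharsets.UTF_8));
      return s.sign();
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }

  static boolean verify(String signingInput, byte[] sig, KeyPair kp) {
    try {
      Signature s = Signature.getInstance("SHA256withRSA");
      s.initVerify(kp.getPublic());
      s.update(signingInput.getBytes(StandardCharsets.UTF_8));
      return s.verify(sig);
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    KeyPair kp = newKeyPair();
    byte[] sig = sign("header.payload", kp);
    System.out.println(verify("header.payload", sig, kp)); // true
  }
}
```

Because only the public key is needed to verify, a deployment can distribute the public PEM to every node while keeping the private PEM on the token-issuing service.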
+ */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeName; +import io.polaris.core.context.RealmContext; +import io.polaris.service.config.HasEntityManagerFactory; +import io.polaris.service.config.RealmEntityManagerFactory; + +@JsonTypeName("rsa-key-pair") +public class JWTRSAKeyPairFactory implements TokenBrokerFactory, HasEntityManagerFactory { + private int maxTokenGenerationInSeconds = 3600; + private RealmEntityManagerFactory realmEntityManagerFactory; + + public void setMaxTokenGenerationInSeconds(int maxTokenGenerationInSeconds) { + this.maxTokenGenerationInSeconds = maxTokenGenerationInSeconds; + } + + @Override + public TokenBroker apply(RealmContext realmContext) { + return new JWTRSAKeyPair( + realmEntityManagerFactory.getOrCreateEntityManager(realmContext), + maxTokenGenerationInSeconds); + } + + @Override + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + this.realmEntityManagerFactory = entityManagerFactory; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/JWTSymmetricKeyBroker.java b/polaris-service/src/main/java/io/polaris/service/auth/JWTSymmetricKeyBroker.java new file mode 100644 index 0000000000..02cdfb9af7 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/JWTSymmetricKeyBroker.java @@ -0,0 +1,38 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.auth; + +import com.auth0.jwt.algorithms.Algorithm; +import io.polaris.core.persistence.PolarisEntityManager; +import java.util.function.Supplier; + +/** Generates a JWT using a Symmetric Key. */ +public class JWTSymmetricKeyBroker extends JWTBroker { + private final Supplier secretSupplier; + + JWTSymmetricKeyBroker( + PolarisEntityManager entityManager, + int maxTokenGenerationInSeconds, + Supplier secretSupplier) { + super(entityManager, maxTokenGenerationInSeconds); + this.secretSupplier = secretSupplier; + } + + @Override + Algorithm getAlgorithm() { + return Algorithm.HMAC256(secretSupplier.get()); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/JWTSymmetricKeyFactory.java b/polaris-service/src/main/java/io/polaris/service/auth/JWTSymmetricKeyFactory.java new file mode 100644 index 0000000000..0714662485 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/JWTSymmetricKeyFactory.java @@ -0,0 +1,72 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
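`JWTSymmetricKeyBroker` defers to a `Supplier<String>` for its HMAC secret, so key material is only read (inline from config, or lazily from disk) when a token is actually signed. A sketch of that lazy-read pattern; the class and method names here are illustrative, not part of the PR:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Supplier;

// Illustrative sketch: choose between an inline secret and one read lazily from a file.
public class SecretSupplierSketch {
  static Supplier<String> secretSupplier(String inlineSecret, Path file) {
    if (inlineSecret != null) {
      return () -> inlineSecret; // constant secret, captured once
    }
    return () -> {
      try {
        // Note: a trailing newline in the file becomes part of the key material.
        return Files.readString(file);
      } catch (IOException e) {
        throw new UncheckedIOException("Failed to read secret from " + file, e);
      }
    };
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("jwt-secret", ".txt");
    Files.writeString(tmp, "s3cr3t");
    System.out.println(secretSupplier(null, tmp).get()); // s3cr3t
    System.out.println(secretSupplier("inline", null).get()); // inline
  }
}
```

Deferring the read also means a missing or unreadable secret file fails at first signing rather than at factory construction, which is worth keeping in mind when wiring configuration.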
+ */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeName; +import io.polaris.core.context.RealmContext; +import io.polaris.service.config.HasEntityManagerFactory; +import io.polaris.service.config.RealmEntityManagerFactory; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Paths; +import java.util.function.Supplier; + +@JsonTypeName("symmetric-key") +public class JWTSymmetricKeyFactory implements TokenBrokerFactory, HasEntityManagerFactory { + private RealmEntityManagerFactory realmEntityManagerFactory; + private int maxTokenGenerationInSeconds = 3600; + private String file; + private String secret; + + @Override + public TokenBroker apply(RealmContext realmContext) { + if (file == null && secret == null) { + throw new IllegalStateException("Either file or secret must be set"); + } + Supplier secretSupplier = secret != null ? () -> secret : readSecretFromDisk(); + return new JWTSymmetricKeyBroker( + realmEntityManagerFactory.getOrCreateEntityManager(realmContext), + maxTokenGenerationInSeconds, + secretSupplier); + } + + private Supplier readSecretFromDisk() { + return () -> { + try { + return Files.readString(Paths.get(file)); + } catch (IOException e) { + throw new RuntimeException("Failed to read secret from file: " + file, e); + } + }; + } + + public void setMaxTokenGenerationInSeconds(int maxTokenGenerationInSeconds) { + this.maxTokenGenerationInSeconds = maxTokenGenerationInSeconds; + } + + public void setFile(String file) { + this.file = file; + } + + public void setSecret(String secret) { + this.secret = secret; + } + + @Override + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + this.realmEntityManagerFactory = entityManagerFactory; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/KeyProvider.java b/polaris-service/src/main/java/io/polaris/service/auth/KeyProvider.java new file mode 100644 index 0000000000..e4b6dc64d8 --- 
/dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/KeyProvider.java @@ -0,0 +1,28 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import io.dropwizard.jackson.Discoverable; +import java.security.PrivateKey; +import java.security.PublicKey; + +@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type") +public interface KeyProvider extends Discoverable { + PublicKey getPublicKey(); + + PrivateKey getPrivateKey(); +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/LocalRSAKeyProvider.java b/polaris-service/src/main/java/io/polaris/service/auth/LocalRSAKeyProvider.java new file mode 100644 index 0000000000..317f56f593 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/LocalRSAKeyProvider.java @@ -0,0 +1,79 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import java.io.IOException; +import java.security.PrivateKey; +import java.security.PublicKey; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Class that can load public / private keys stored on localhost. Meant to be a simple + * implementation for now where a PEM file is loaded off disk. + */ +public class LocalRSAKeyProvider implements KeyProvider { + + private static final String LOCAL_PRIVATE_KEY_LOCATION_KEY = "LOCAL_PRIVATE_KEY_LOCATION_KEY"; + private static final String LOCAL_PUBLIC_KEY_LOCATION_KEY = "LOCAL_PUBLIC_LOCATION_KEY"; + + private static final Logger LOGGER = LoggerFactory.getLogger(LocalRSAKeyProvider.class); + + private String getLocation(String configKey) { + CallContext callContext = CallContext.getCurrentContext(); + PolarisCallContext pCtx = callContext.getPolarisCallContext(); + String fileLocation = pCtx.getConfigurationStore().getConfiguration(pCtx, configKey); + if (fileLocation == null) { + throw new RuntimeException("Cannot find location for key " + configKey); + } + return fileLocation; + } + + /** + * Getter for the Public Key instance + * + * @return the Public Key instance + */ + @Override + public PublicKey getPublicKey() { + final String publicKeyFileLocation = getLocation(LOCAL_PUBLIC_KEY_LOCATION_KEY); + try { + return PemUtils.readPublicKeyFromFile(publicKeyFileLocation, "RSA"); + } catch (IOException e) { + LOGGER.error("Unable to read public key from file {}", publicKeyFileLocation, e); + throw new RuntimeException("Unable to read public key from file " + publicKeyFileLocation, e); + } + } + + /** + * Getter for the Private Key instance. Used to sign the content on the JWT signing stage. 
+ * + * @return the Private Key instance + */ + @Override + public PrivateKey getPrivateKey() { + final String privateKeyFileLocation = getLocation(LOCAL_PRIVATE_KEY_LOCATION_KEY); + try { + return PemUtils.readPrivateKeyFromFile(privateKeyFileLocation, "RSA"); + } catch (IOException e) { + LOGGER.error("Unable to read private key from file {}", privateKeyFileLocation, e); + throw new RuntimeException( + "Unable to read private key from file " + privateKeyFileLocation, e); + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/OAuthTokenErrorResponse.java b/polaris-service/src/main/java/io/polaris/service/auth/OAuthTokenErrorResponse.java new file mode 100644 index 0000000000..942ef67127 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/OAuthTokenErrorResponse.java @@ -0,0 +1,72 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonProperty; + +/** An OAuth Error Token Response as defined by the Iceberg REST API OpenAPI Spec. 
*/ +public class OAuthTokenErrorResponse { + + public enum Error { + invalid_request("The request is invalid"), + invalid_client("The client is invalid"), + invalid_grant("The grant is invalid"), + unauthorized_client("The client is not authorized"), + unsupported_grant_type("The grant type is not supported"), + invalid_scope("The scope is invalid"), + ; + + String errorDescription; + + Error(String errorDescription) { + this.errorDescription = errorDescription; + } + + public String getErrorDescription() { + return errorDescription; + } + } + + private final String error; + private final String errorDescription; + private String errorUri; + + /** + * Initializes a response from one of the supported errors. + * + * @param error the error to expose in the response + */ + public OAuthTokenErrorResponse(Error error) { + this.error = error.name(); + this.errorDescription = error.getErrorDescription(); + this.errorUri = null; // Not yet used + } + + @JsonProperty("error") + public String getError() { + return error; + } + + @JsonProperty("error_description") + public String getErrorDescription() { + return errorDescription; + } + + @JsonProperty("error_uri") + public String getErrorUri() { + return errorUri; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/OAuthUtils.java b/polaris-service/src/main/java/io/polaris/service/auth/OAuthUtils.java new file mode 100644 index 0000000000..f42a891877 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/OAuthUtils.java @@ -0,0 +1,74 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import jakarta.ws.rs.core.Response; +import java.nio.charset.StandardCharsets; +import org.apache.commons.codec.binary.Base64; + +/** Simple utility class to assist with OAuth operations. */ +public class OAuthUtils { + + public static final String AUTHORIZATION_HEADER = "Authorization"; + + public static final String SF_HEADER_ACCOUNT_NAME = "Snowflake-Account"; + + public static final String POLARIS_ROLE_PREFIX = "PRINCIPAL_ROLE:"; + + public static final String SF_ACCOUNT_NAME_HEADER = "sf-account"; + public static final String SF_ACCOUNT_URL_HEADER = "sf-account-url"; + + /** + * @param clientId the OAuth client id + * @param clientSecret the OAuth client secret + * @return a basic Authorization header value of the form `base64_encode(client_id:client_secret)` + */ + public static String getBasicAuthHeader(String clientId, String clientSecret) { + return Base64.encodeBase64String( + (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8)); + } + + public static Response getResponseFromError(OAuthTokenErrorResponse.Error error) { + return switch (error) { + case unauthorized_client -> + Response.status(Response.Status.UNAUTHORIZED) + .entity( + new OAuthTokenErrorResponse(OAuthTokenErrorResponse.Error.unauthorized_client)) + .build(); + case invalid_client -> + Response.status(Response.Status.BAD_REQUEST) + .entity(new OAuthTokenErrorResponse(OAuthTokenErrorResponse.Error.invalid_client)) + .build(); + case invalid_grant -> + Response.status(Response.Status.BAD_REQUEST) + .entity(new
OAuthTokenErrorResponse(OAuthTokenErrorResponse.Error.invalid_grant)) + .build(); + case unsupported_grant_type -> + Response.status(Response.Status.BAD_REQUEST) + .entity( + new OAuthTokenErrorResponse(OAuthTokenErrorResponse.Error.unsupported_grant_type)) + .build(); + case invalid_scope -> + Response.status(Response.Status.BAD_REQUEST) + .entity(new OAuthTokenErrorResponse(OAuthTokenErrorResponse.Error.invalid_scope)) + .build(); + default -> + Response.status(Response.Status.BAD_REQUEST) + .entity(new OAuthTokenErrorResponse(OAuthTokenErrorResponse.Error.invalid_request)) + .build(); + }; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/PemUtils.java b/polaris-service/src/main/java/io/polaris/service/auth/PemUtils.java new file mode 100644 index 0000000000..df9f052846 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/PemUtils.java @@ -0,0 +1,90 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
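The Basic credentials handling can be sketched end to end: encoding as in `OAuthUtils.getBasicAuthHeader`, decoding as in the token endpoint's `"Basic "` branch. The URL-encoding of each half is an assumption made here so the pair survives secrets containing `:` or `%` (the endpoint URL-decodes both halves); the class name is illustrative.

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative round trip of the Basic auth header used by the token endpoint.
public class BasicAuthSketch {
  static String encode(String clientId, String clientSecret) {
    String raw = URLEncoder.encode(clientId, StandardCharsets.UTF_8)
        + ":" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
    return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
  }

  static String[] decode(String authHeader) {
    String credentials = new String(
        Base64.getDecoder().decode(authHeader.substring("Basic ".length())),
        StandardCharsets.UTF_8);
    if (!credentials.contains(":")) {
      throw new IllegalArgumentException("invalid_client"); // mirrors the endpoint's error
    }
    String[] parts = credentials.split(":", 2); // limit 2: only the first ':' separates id and secret
    return new String[] {
      URLDecoder.decode(parts[0], StandardCharsets.UTF_8),
      URLDecoder.decode(parts[1], StandardCharsets.UTF_8)
    };
  }

  public static void main(String[] args) {
    String[] pair = decode(encode("my-client", "se:cr%t"));
    System.out.println(pair[0] + " / " + pair[1]); // my-client / se:cr%t
  }
}
```

The `split(":", 2)` limit matters: without it, a secret containing `:` would be truncated at its first colon, which is exactly why the endpoint's parser also passes a limit of 2.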
+ */ +package io.polaris.service.auth; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileReader; +import java.io.IOException; +import java.security.KeyFactory; +import java.security.NoSuchAlgorithmException; +import java.security.PrivateKey; +import java.security.PublicKey; +import java.security.spec.InvalidKeySpecException; +import java.security.spec.PKCS8EncodedKeySpec; +import java.security.spec.X509EncodedKeySpec; +import org.bouncycastle.util.io.pem.PemObject; +import org.bouncycastle.util.io.pem.PemReader; + +public class PemUtils { + + private static byte[] parsePEMFile(File pemFile) throws IOException { + if (!pemFile.isFile() || !pemFile.exists()) { + throw new FileNotFoundException( + String.format("The file '%s' doesn't exist.", pemFile.getAbsolutePath())); + } + try (PemReader reader = new PemReader(new FileReader(pemFile))) { + PemObject pemObject = reader.readPemObject(); + return pemObject.getContent(); + } + } + + private static PublicKey getPublicKey(byte[] keyBytes, String algorithm) { + try { + KeyFactory kf = KeyFactory.getInstance(algorithm); + return kf.generatePublic(new X509EncodedKeySpec(keyBytes)); + } catch (NoSuchAlgorithmException | InvalidKeySpecException e) { + throw new IllegalArgumentException("Could not reconstruct the public key", e); + } + } + + private static PrivateKey getPrivateKey(byte[] keyBytes, String algorithm) { + try { + KeyFactory kf = KeyFactory.getInstance(algorithm); + return kf.generatePrivate(new PKCS8EncodedKeySpec(keyBytes)); + } catch (NoSuchAlgorithmException | InvalidKeySpecException e) { + throw new IllegalArgumentException("Could not reconstruct the private key", e); + } + } + + public static PublicKey readPublicKeyFromFile(String filepath, String algorithm) + throws IOException { + byte[] bytes = PemUtils.parsePEMFile(new File(filepath)); + return PemUtils.getPublicKey(bytes, algorithm); + } + + public static PrivateKey readPrivateKeyFromFile(String filepath, String algorithm) + throws IOException { + byte[] bytes = PemUtils.parsePEMFile(new File(filepath)); + return PemUtils.getPrivateKey(bytes, algorithm); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TestInlineBearerTokenPolarisAuthenticator.java b/polaris-service/src/main/java/io/polaris/service/auth/TestInlineBearerTokenPolarisAuthenticator.java new file mode 100644 index 0000000000..e4b1a7d984 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/TestInlineBearerTokenPolarisAuthenticator.java @@ -0,0 +1,94 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.auth; + +import com.google.common.base.Splitter; +import io.dropwizard.auth.AuthenticationException; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; +import java.util.Optional; +import java.util.stream.Collectors; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * {@link io.dropwizard.auth.Authenticator} that parses a token as a sequence of key/value pairs. + * Specifically, we expect to find + * + *

+ * <ul>
+ *   <li>principal - the clientId of the principal
+ *   <li>realm - the current realm
+ * </ul>
+ * + * This class does not expect a client to be either present or correct. Lookup is delegated to the + * {@link PolarisMetaStoreManager} for the current realm. + */ +public class TestInlineBearerTokenPolarisAuthenticator extends BasePolarisAuthenticator { + private static final Logger LOGGER = + LoggerFactory.getLogger(TestInlineBearerTokenPolarisAuthenticator.class); + + @Override + public Optional authenticate(String credentials) + throws AuthenticationException { + Map properties = extractPrincipal(credentials); + PolarisMetaStoreManager metaStoreManager = + entityManagerFactory + .getOrCreateEntityManager(CallContext.getCurrentContext().getRealmContext()) + .getMetaStoreManager(); + PolarisCallContext callContext = CallContext.getCurrentContext().getPolarisCallContext(); + String principal = properties.get("principal"); + + LOGGER.info("Checking for existence of principal {} in map {}", principal, properties); + + TokenInfoExchangeResponse tokenInfo = new TokenInfoExchangeResponse(); + tokenInfo.setSub(principal); + if (properties.get("role") != null) { + tokenInfo.setScope( + Arrays.stream(properties.get("role").split(" ")) + .map(r -> PRINCIPAL_ROLE_PREFIX + r) + .collect(Collectors.joining(" "))); + } + + PolarisPrincipalSecrets secrets = + metaStoreManager.loadPrincipalSecrets(callContext, principal).getPrincipalSecrets(); + if (secrets == null) { + // For test scenarios, if we're allowing short-circuiting into the bearer flow, there may + // not be a clientId/clientSecret, and instead we'll let the BasePolarisAuthenticator + // resolve the principal by name from the persistence store. 
+ LOGGER.warn("Failed to load secrets for principal {}", principal); + } else { + tokenInfo.setIntegrationId(secrets.getPrincipalId()); + } + + return getPrincipal(tokenInfo); + } + + private static Map extractPrincipal(String credentials) { + if (credentials.contains(";") || credentials.contains(":")) { + Map parsedProperties = new HashMap<>(); + parsedProperties.putAll( + Splitter.on(';').trimResults().withKeyValueSeparator(':').split(credentials)); + return parsedProperties; + } + return Map.of(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TestOAuth2ApiService.java b/polaris-service/src/main/java/io/polaris/service/auth/TestOAuth2ApiService.java new file mode 100644 index 0000000000..aab6526c07 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/TestOAuth2ApiService.java @@ -0,0 +1,119 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
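Editor's note: the test authenticator above expects the bearer token to be literal `key:value` pairs separated by `;` (for example `principal:root;realm:default-realm;role:service_admin`), parsed with Guava's `Splitter`. A JDK-only sketch of the same parse; the class name is ours.

```java
import java.util.HashMap;
import java.util.Map;

public class InlineTokenParser {

  // JDK-only equivalent of Splitter.on(';').trimResults().withKeyValueSeparator(':')
  // as used by TestInlineBearerTokenPolarisAuthenticator.extractPrincipal.
  static Map<String, String> parse(String credentials) {
    Map<String, String> props = new HashMap<>();
    if (!credentials.contains(";") && !credentials.contains(":")) {
      return props; // not in key/value form; the authenticator returns an empty map
    }
    for (String pair : credentials.split(";")) {
      // Limit 2 so a value may itself contain ':' without being split further.
      String[] kv = pair.trim().split(":", 2);
      props.put(kv[0], kv[1]);
    }
    return props;
  }

  public static void main(String[] args) {
    Map<String, String> p = parse("principal:root;realm:default-realm;role:service_admin");
    System.out.println(p.get("principal")); // root
  }
}
```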
+ */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeName; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.service.config.HasEntityManagerFactory; +import io.polaris.service.config.OAuth2ApiService; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.types.TokenType; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.core.SecurityContext; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; +import org.apache.iceberg.exceptions.NotAuthorizedException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +@JsonTypeName("test") +public class TestOAuth2ApiService implements OAuth2ApiService, HasEntityManagerFactory { + private static final Logger LOGGER = LoggerFactory.getLogger(TestOAuth2ApiService.class); + + private RealmEntityManagerFactory entityManagerFactory; + + @Override + public Response getToken( + String authHeader, + String grantType, + String scope, + String clientId, + String clientSecret, + TokenType requestedTokenType, + String subjectToken, + TokenType subjectTokenType, + String actorToken, + TokenType actorTokenType, + SecurityContext securityContext) { + Map response = new HashMap<>(); + String principalName = getPrincipalName(clientId); + response.put( + "access_token", + "principal:" + + principalName + + ";password:" + + clientSecret + + ";realm:" + + CallContext.getCurrentContext().getRealmContext().getRealmIdentifier() + + ";role:" + + scope.replaceAll(BasePolarisAuthenticator.PRINCIPAL_ROLE_PREFIX, "")); + response.put("token_type", "bearer"); + response.put("expires_in", 3600); + response.put("scope", Objects.requireNonNullElse(scope, "catalog")); + 
return Response.ok(response).build(); + } + + private String getPrincipalName(String clientId) { + PolarisEntityManager entityManager = + entityManagerFactory.getOrCreateEntityManager( + CallContext.getCurrentContext().getRealmContext()); + PolarisCallContext polarisCallContext = CallContext.getCurrentContext().getPolarisCallContext(); + PolarisMetaStoreManager.PrincipalSecretsResult secretsResult = + entityManager.getMetaStoreManager().loadPrincipalSecrets(polarisCallContext, clientId); + if (secretsResult.isSuccess()) { + LOGGER.debug("Found principal secrets for client id {}", clientId); + PolarisMetaStoreManager.EntityResult principalResult = + entityManager + .getMetaStoreManager() + .loadEntity( + polarisCallContext, 0L, secretsResult.getPrincipalSecrets().getPrincipalId()); + if (!principalResult.isSuccess()) { + throw new NotAuthorizedException("Failed to load principal entity"); + } + return principalResult.getEntity().getName(); + } else { + LOGGER.debug( + "Unable to find principal secrets for client id {} - trying as principal name", clientId); + PolarisMetaStoreManager.EntityResult principalResult = + entityManager + .getMetaStoreManager() + .readEntityByName( + polarisCallContext, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + clientId); + if (!principalResult.isSuccess()) { + throw new NotAuthorizedException("Failed to read principal entity"); + } + return principalResult.getEntity().getName(); + } + } + + @Override + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + this.entityManagerFactory = entityManagerFactory; + } + + @Override + public void setTokenBroker(TokenBrokerFactory tokenBrokerFactory) {} +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TokenBroker.java b/polaris-service/src/main/java/io/polaris/service/auth/TokenBroker.java new file mode 100644 index 0000000000..980335a285 --- /dev/null +++ 
b/polaris-service/src/main/java/io/polaris/service/auth/TokenBroker.java @@ -0,0 +1,65 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.service.types.TokenType; +import java.util.Optional; +import org.jetbrains.annotations.NotNull; + +/** Generic token class intended to be extended by different token types */ +public interface TokenBroker { + + boolean supportsGrantType(String grantType); + + boolean supportsRequestedTokenType(TokenType tokenType); + + TokenResponse generateFromClientSecrets( + final String clientId, final String clientSecret, final String grantType, final String scope); + + TokenResponse generateFromToken( + TokenType tokenType, String subjectToken, final String grantType, final String scope); + + DecodedToken verify(String token); + + static @NotNull Optional findPrincipalEntity( + PolarisEntityManager entityManager, String clientId, String clientSecret) { + // Validate the principal is present and secrets match + PolarisMetaStoreManager metaStoreManager = entityManager.getMetaStoreManager(); + PolarisCallContext 
polarisCallContext = CallContext.getCurrentContext().getPolarisCallContext(); + PolarisMetaStoreManager.PrincipalSecretsResult principalSecrets = + metaStoreManager.loadPrincipalSecrets(polarisCallContext, clientId); + if (!principalSecrets.isSuccess()) { + return Optional.empty(); + } + if (!principalSecrets.getPrincipalSecrets().getMainSecret().equals(clientSecret) + && !principalSecrets.getPrincipalSecrets().getSecondarySecret().equals(clientSecret)) { + return Optional.empty(); + } + PolarisMetaStoreManager.EntityResult result = + metaStoreManager.loadEntity( + polarisCallContext, 0L, principalSecrets.getPrincipalSecrets().getPrincipalId()); + if (!result.isSuccess() || result.getEntity().getType() != PolarisEntityType.PRINCIPAL) { + return Optional.empty(); + } + return Optional.of(PrincipalEntity.of(result.getEntity())); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TokenBrokerFactory.java b/polaris-service/src/main/java/io/polaris/service/auth/TokenBrokerFactory.java new file mode 100644 index 0000000000..90def62c72 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/TokenBrokerFactory.java @@ -0,0 +1,28 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
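Editor's note: `findPrincipalEntity` accepts a client secret matching either the main or the secondary secret, which keeps credentials valid across rotation. A sketch of that check follows; note the patch compares with `String.equals`, while `MessageDigest.isEqual`, used below, is the usual constant-time alternative (an editorial suggestion, not what the patch does).

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SecretCheck {

  // Accept either the main or the secondary secret, mirroring the rotation-friendly
  // check in TokenBroker.findPrincipalEntity.
  static boolean matches(String candidate, String mainSecret, String secondarySecret) {
    byte[] c = candidate.getBytes(StandardCharsets.UTF_8);
    // Constant-time comparison avoids leaking how many leading bytes matched.
    return MessageDigest.isEqual(c, mainSecret.getBytes(StandardCharsets.UTF_8))
        || MessageDigest.isEqual(c, secondarySecret.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    // The secondary secret remains valid during rotation.
    System.out.println(matches("s2", "s1", "s2")); // true
  }
}
```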
+ */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import io.dropwizard.jackson.Discoverable; +import io.polaris.core.context.RealmContext; +import java.util.function.Function; + +/** + * Factory that creates a {@link TokenBroker} for generating and parsing. The {@link TokenBroker} is + * created based on the realm context. + */ +@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type") +public interface TokenBrokerFactory extends Function, Discoverable {} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TokenInfoExchangeResponse.java b/polaris-service/src/main/java/io/polaris/service/auth/TokenInfoExchangeResponse.java new file mode 100644 index 0000000000..a3e6f016cb --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/TokenInfoExchangeResponse.java @@ -0,0 +1,147 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
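Editor's note: since `TokenBrokerFactory` is just `Function<RealmContext, TokenBroker>`, a natural implementation memoizes one broker per realm. A generic sketch of that shape; this class is illustrative, not part of the patch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CachingFactory<K, V> implements Function<K, V> {

  // One cached value per key (per realm, in the TokenBrokerFactory case).
  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final Function<K, V> delegate;

  public CachingFactory(Function<K, V> delegate) {
    this.delegate = delegate;
  }

  @Override
  public V apply(K key) {
    // computeIfAbsent runs the delegate at most once per key.
    return cache.computeIfAbsent(key, delegate);
  }

  public static void main(String[] args) {
    int[] calls = {0};
    CachingFactory<String, String> f =
        new CachingFactory<>(realm -> {
          calls[0]++;
          return "broker-for-" + realm;
        });
    f.apply("realm1");
    f.apply("realm1");
    System.out.println(calls[0]); // 1: the delegate ran once for the realm
  }
}
```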
+ */ +package io.polaris.service.auth; + +import com.fasterxml.jackson.annotation.JsonProperty; + +public class TokenInfoExchangeResponse implements DecodedToken { + + private boolean active; + + @JsonProperty("active") + public boolean isActive() { + return active; + } + + @JsonProperty("active") + public void setActive(boolean active) { + this.active = active; + } + + private String scope; + + @JsonProperty("scope") + public String getScope() { + return scope; + } + + @JsonProperty("scope") + public void setScope(String scope) { + this.scope = scope; + } + + private String clientId; + + @JsonProperty("client_id") + public String getClientId() { + return clientId; + } + + @JsonProperty("client_id") + public void setClientId(String clientId) { + this.clientId = clientId; + } + + private String tokenType; + + @JsonProperty("token_type") + public String getTokenType() { + return tokenType; + } + + @JsonProperty("token_type") + public void setTokenType(String tokenType) { + this.tokenType = tokenType; + } + + private Long exp; + + @JsonProperty("exp") + public Long getExp() { + return exp; + } + + @JsonProperty("exp") + public void setExp(Long exp) { + this.exp = exp; + } + + private String sub; + + @JsonProperty("sub") + public String getSub() { + return sub; + } + + @JsonProperty("sub") + public void setSub(String sub) { + this.sub = sub; + } + + private String aud; + + @JsonProperty("aud") + public String getAud() { + return aud; + } + + @JsonProperty("aud") + public void setAud(String aud) { + this.aud = aud; + } + + @JsonProperty("iss") + private String iss; + + @JsonProperty("iss") + public String getIss() { + return iss; + } + + @JsonProperty("iss") + public void setIss(String iss) { + this.iss = iss; + } + + private String token; + + @JsonProperty("token") + public String getToken() { + return token; + } + + @JsonProperty("token") + public void setToken(String token) { + this.token = token; + } + + private long integrationId; + + public long getIntegrationId() 
{ + return integrationId; + } + + @JsonProperty("integration_id") + public void setIntegrationId(long integrationId) { + this.integrationId = integrationId; + } + + /* integration ID is effectively principal ID */ + @Override + public Long getPrincipalId() { + return integrationId; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TokenRequestValidator.java b/polaris-service/src/main/java/io/polaris/service/auth/TokenRequestValidator.java new file mode 100644 index 0000000000..d13a6c430c --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/TokenRequestValidator.java @@ -0,0 +1,79 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import java.util.Optional; +import java.util.Set; +import java.util.logging.Logger; + +public class TokenRequestValidator { + + static final Logger LOGGER = Logger.getLogger(TokenRequestValidator.class.getName()); + + public static final String TOKEN_EXCHANGE = "urn:ietf:params:oauth:grant-type:token-exchange"; + public static final String CLIENT_CREDENTIALS = "client_credentials"; + public static final Set ALLOWED_GRANT_TYPES = Set.of(CLIENT_CREDENTIALS, TOKEN_EXCHANGE); + + /** Default constructor */ + public TokenRequestValidator() {} + + /** + * Validates the incoming Client Credentials flow. + * + *
+ * <ul>
+ *   <li>Non-null scope: while optional in the spec we make it required and expect it to conform
+ *       to the format
+ * </ul>
+ * + * @param clientId + * @param clientSecret + * @param grantType + * @param scope while optional in the Iceberg REST API Spec we make it required and expect it to + * conform to the format "PRINCIPAL_ROLE:NAME PRINCIPAL_ROLE:NAME2 ..." + * @return + */ + public Optional validateForClientCredentialsFlow( + final String clientId, + final String clientSecret, + final String grantType, + final String scope) { + if (clientId == null || clientId.isEmpty() || clientSecret == null || clientSecret.isEmpty()) { + // TODO: Figure out how to get the authorization header from `securityContext` + LOGGER.info("Missing Client ID or Client Secret in Request Body"); + return Optional.of(OAuthTokenErrorResponse.Error.invalid_client); + } + if (grantType == null || grantType.isEmpty() || !ALLOWED_GRANT_TYPES.contains(grantType)) { + LOGGER.info("Invalid grant type: " + grantType); + return Optional.of(OAuthTokenErrorResponse.Error.invalid_grant); + } + if (scope == null || scope.isEmpty()) { + LOGGER.info("Missing scope in Request Body"); + return Optional.of(OAuthTokenErrorResponse.Error.invalid_scope); + } + String[] scopes = scope.split(" "); + for (String s : scopes) { + if (!s.startsWith(OAuthUtils.POLARIS_ROLE_PREFIX)) { + LOGGER.info("Invalid scope provided. scopes=" + s + "scopes=" + scope); + return Optional.of(OAuthTokenErrorResponse.Error.invalid_scope); + } + if (s.replaceFirst(OAuthUtils.POLARIS_ROLE_PREFIX, "").isEmpty()) { + LOGGER.info("Invalid scope provided. 
scopes=" + s + "scopes=" + scope); + return Optional.of(OAuthTokenErrorResponse.Error.invalid_scope); + } + } + return Optional.empty(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/auth/TokenResponse.java b/polaris-service/src/main/java/io/polaris/service/auth/TokenResponse.java new file mode 100644 index 0000000000..c7ca2ee8b6 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/auth/TokenResponse.java @@ -0,0 +1,56 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
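Editor's note: the scope loop above requires every space-separated entry to start with the role prefix and to carry a non-empty role name. A standalone sketch of that per-scope check; we assume here that `OAuthUtils.POLARIS_ROLE_PREFIX` is `"PRINCIPAL_ROLE:"`, matching the format documented in the Javadoc, but the real value lives in `OAuthUtils`.

```java
import java.util.Optional;

public class ScopeCheck {

  // Assumed value of OAuthUtils.POLARIS_ROLE_PREFIX (see lead-in note).
  static final String PREFIX = "PRINCIPAL_ROLE:";

  // Mirrors TokenRequestValidator's loop: returns the first offending scope entry,
  // or empty if every entry is well-formed.
  static Optional<String> firstInvalid(String scope) {
    for (String s : scope.split(" ")) {
      if (!s.startsWith(PREFIX) || s.substring(PREFIX.length()).isEmpty()) {
        return Optional.of(s);
      }
    }
    return Optional.empty();
  }

  public static void main(String[] args) {
    System.out.println(firstInvalid("PRINCIPAL_ROLE:admin PRINCIPAL_ROLE:reader")); // Optional.empty
    System.out.println(firstInvalid("PRINCIPAL_ROLE:admin data_admin").get()); // data_admin
  }
}
```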
+ */ +package io.polaris.service.auth; + +import java.util.Optional; + +public class TokenResponse { + private final Optional error; + private String accessToken; + private String tokenType; + private Integer expiresIn; + + public TokenResponse(OAuthTokenErrorResponse.Error error) { + this.error = Optional.of(error); + } + + public TokenResponse(String accessToken, String tokenType, int expiresIn) { + this.accessToken = accessToken; + this.expiresIn = expiresIn; + this.tokenType = tokenType; + this.error = Optional.empty(); + } + + public boolean isValid() { + return error.isEmpty(); + } + + public OAuthTokenErrorResponse.Error getError() { + return error.get(); + } + + public String getAccessToken() { + return accessToken; + } + + public int getExpiresIn() { + return expiresIn; + } + + public String getTokenType() { + return tokenType; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/catalog/BasePolarisCatalog.java b/polaris-service/src/main/java/io/polaris/service/catalog/BasePolarisCatalog.java new file mode 100644 index 0000000000..593fb87b2b --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/catalog/BasePolarisCatalog.java @@ -0,0 +1,1958 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
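Editor's note: `TokenResponse` above is a two-state result: exactly one of the error/success constructors runs, and callers must branch on `isValid()` before reading the other fields (`getError()` on a success throws, `getAccessToken()` on an error is null). A stripped-down sketch of that contract; the names here are ours.

```java
import java.util.Optional;

public class TokenResult {

  // Either an error code or an access token, never both (mirrors TokenResponse).
  private final Optional<String> error;
  private final String accessToken;

  private TokenResult(Optional<String> error, String accessToken) {
    this.error = error;
    this.accessToken = accessToken;
  }

  static TokenResult ok(String accessToken) {
    return new TokenResult(Optional.empty(), accessToken);
  }

  static TokenResult fail(String error) {
    return new TokenResult(Optional.of(error), null);
  }

  boolean isValid() {
    return error.isEmpty();
  }

  String getAccessToken() {
    return accessToken;
  }

  String getError() {
    return error.get(); // throws NoSuchElementException on a success result
  }

  public static void main(String[] args) {
    TokenResult r = TokenResult.fail("invalid_client");
    // Branch on isValid() first, as callers of TokenResponse must.
    System.out.println(r.isValid() ? r.getAccessToken() : r.getError()); // invalid_client
  }
}
```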
+ */ +package io.polaris.service.catalog; + +import static io.polaris.core.storage.StorageUtil.concatFilePrefixes; + +import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Joiner; +import com.google.common.base.Objects; +import com.google.common.base.Preconditions; +import com.google.common.collect.ImmutableMap; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.catalog.PolarisCatalogHelpers; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.NamespaceEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisTaskConstants; +import io.polaris.core.entity.TableLikeEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.core.persistence.resolver.PolarisResolutionManifestCatalogView; +import io.polaris.core.persistence.resolver.ResolverPath; +import io.polaris.core.persistence.resolver.ResolverStatus; +import io.polaris.core.storage.InMemoryStorageIntegration; +import io.polaris.core.storage.PolarisStorageActions; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import io.polaris.core.storage.aws.PolarisS3FileIOClientFactory; +import io.polaris.service.task.TaskExecutor; +import io.polaris.service.types.NotificationRequest; +import io.polaris.service.types.NotificationType; +import jakarta.ws.rs.BadRequestException; +import java.io.Closeable; +import 
java.io.IOException; +import java.net.URI; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Predicate; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import org.apache.commons.lang3.exception.ExceptionUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.iceberg.BaseMetastoreTableOperations; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.CatalogUtil; +import org.apache.iceberg.Schema; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.TableMetadataParser; +import org.apache.iceberg.TableOperations; +import org.apache.iceberg.aws.s3.S3FileIOProperties; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.SupportsNamespaces; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.AlreadyExistsException; +import org.apache.iceberg.exceptions.CommitFailedException; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.exceptions.NamespaceNotEmptyException; +import org.apache.iceberg.exceptions.NoSuchNamespaceException; +import org.apache.iceberg.exceptions.NoSuchTableException; +import org.apache.iceberg.exceptions.NoSuchViewException; +import org.apache.iceberg.exceptions.NotFoundException; +import org.apache.iceberg.exceptions.UnprocessableEntityException; +import org.apache.iceberg.exceptions.ValidationException; +import org.apache.iceberg.io.CloseableGroup; +import org.apache.iceberg.io.FileIO; +import org.apache.iceberg.view.BaseMetastoreViewCatalog; +import org.apache.iceberg.view.BaseViewOperations; +import org.apache.iceberg.view.ViewBuilder; +import org.apache.iceberg.view.ViewMetadata; +import org.apache.iceberg.view.ViewMetadataParser; +import 
org.apache.iceberg.view.ViewUtil; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.TestOnly; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.core.exception.SdkException; + +/** Defines the relationship between PolarisEntities and Iceberg's business logic. */ +public class BasePolarisCatalog extends BaseMetastoreViewCatalog + implements SupportsNamespaces, SupportsNotifications, Closeable, SupportsCredentialDelegation { + private static final Logger LOG = LoggerFactory.getLogger(BasePolarisCatalog.class); + + private static final Joiner SLASH = Joiner.on("/"); + private static final Joiner DOT = Joiner.on("."); + + // Config key for whether to allow setting the FILE_IO_IMPL using catalog properties. Should + // only be allowed in dev/test environments. + static final String ALLOW_SPECIFYING_FILE_IO_IMPL = "ALLOW_SPECIFYING_FILE_IO_IMPL"; + private static final int MAX_RETRIES = 12; + + static final Predicate SHOULD_RETRY_REFRESH_PREDICATE = + new Predicate() { + @Override + public boolean test(Exception ex) { + // Default arguments from BaseMetastoreTableOperation only stop retries on + // NotFoundException. We should more carefully identify the set of retriable + // and non-retriable exceptions here. 
+ return !(ex instanceof NotFoundException) + && !(ex instanceof IllegalArgumentException) + && !(ex instanceof AlreadyExistsException) + && !(ex instanceof ForbiddenException) + && !(ex instanceof UnprocessableEntityException) + && isStorageProviderRetryableException(ex); + } + }; + public static final String CLEANUP_ON_NAMESPACE_DROP = "CLEANUP_ON_NAMESPACE_DROP"; + + private final PolarisEntityManager entityManager; + private final CallContext callContext; + private final PolarisResolutionManifestCatalogView resolvedEntityView; + private final CatalogEntity catalogEntity; + private final TaskExecutor taskExecutor; + private final AuthenticatedPolarisPrincipal authenticatedPrincipal; + private String ioImplClassName; + private FileIO catalogFileIO; + private String catalogName; + private long catalogId = -1; + private String defaultBaseLocation; + private CloseableGroup closeableGroup; + private Map catalogProperties; + + /** + * @param entityManager provides handle to underlying PolarisMetaStoreManager with which to + * perform mutations on entities. + * @param callContext the current CallContext + * @param resolvedEntityView accessor to resolved entity paths that have been pre-vetted to ensure + * this catalog instance only interacts with authorized resolved paths. 
+ * @param taskExecutor Executor we use to register cleanup task handlers + */ + public BasePolarisCatalog( + PolarisEntityManager entityManager, + CallContext callContext, + PolarisResolutionManifestCatalogView resolvedEntityView, + AuthenticatedPolarisPrincipal authenticatedPrincipal, + TaskExecutor taskExecutor) { + this.entityManager = entityManager; + this.callContext = callContext; + this.resolvedEntityView = resolvedEntityView; + this.catalogEntity = + CatalogEntity.of(resolvedEntityView.getResolvedReferenceCatalogEntity().getRawLeafEntity()); + this.authenticatedPrincipal = authenticatedPrincipal; + this.taskExecutor = taskExecutor; + this.catalogId = catalogEntity.getId(); + this.catalogName = catalogEntity.getName(); + } + + @Override + public String name() { + return catalogName; + } + + @TestOnly + FileIO getIo() { + return catalogFileIO; + } + + @Override + public void initialize(String name, Map<String, String> properties) { + Preconditions.checkState( + this.catalogName.equals(name), + "Tried to initialize catalog as name %s but already constructed with name %s", + name, + this.catalogName); + + // Base location from catalogEntity is primary source of truth, otherwise fall through + // to the same key from the properties map, and finally fall through to WAREHOUSE_LOCATION.
+ String baseLocation = + Optional.ofNullable(catalogEntity.getDefaultBaseLocation()) + .orElse( + properties.getOrDefault( + CatalogEntity.DEFAULT_BASE_LOCATION_KEY, + properties.getOrDefault(CatalogProperties.WAREHOUSE_LOCATION, ""))); + this.defaultBaseLocation = baseLocation.replaceAll("/*$", ""); + + Boolean allowSpecifyingFileIoImpl = + callContext + .getPolarisCallContext() + .getConfigurationStore() + .getConfiguration( + callContext.getPolarisCallContext(), ALLOW_SPECIFYING_FILE_IO_IMPL, false); + + PolarisStorageConfigurationInfo storageConfigurationInfo = + catalogEntity.getStorageConfigurationInfo(); + if (properties.containsKey(CatalogProperties.FILE_IO_IMPL)) { + ioImplClassName = properties.get(CatalogProperties.FILE_IO_IMPL); + + if (!Boolean.TRUE.equals(allowSpecifyingFileIoImpl)) { + throw new ValidationException( + "Cannot set property '%s' to '%s' for this catalog.", + CatalogProperties.FILE_IO_IMPL, ioImplClassName); + } + LOG.debug( + "Allowing overriding ioImplClassName to {} for storageConfiguration {}", + ioImplClassName, + storageConfigurationInfo); + } else { + ioImplClassName = storageConfigurationInfo.getFileIoImplClassName(); + LOG.debug( + "Resolved ioImplClassName {} for storageConfiguration {}", + ioImplClassName, + storageConfigurationInfo); + } + this.catalogFileIO = loadFileIO(ioImplClassName, properties); + + this.closeableGroup = CallContext.getCurrentContext().closeables(); + closeableGroup.addCloseable(metricsReporter()); + // TODO: FileIO initialization should happen later depending on the operation so + // we'd also add it to the closeableGroup later. + closeableGroup.addCloseable(this.catalogFileIO); + closeableGroup.setSuppressCloseFailure(true); + catalogProperties = properties; + } + + @Override + protected Map<String, String> properties() { + return catalogProperties == null ?
ImmutableMap.of() : catalogProperties; + } + + @Override + public TableBuilder buildTable(TableIdentifier identifier, Schema schema) { + return new BasePolarisCatalogTableBuilder(identifier, schema); + } + + @Override + public ViewBuilder buildView(TableIdentifier identifier) { + return new BasePolarisCatalogViewBuilder(identifier); + } + + @Override + protected TableOperations newTableOps(TableIdentifier tableIdentifier) { + return new BasePolarisTableOperations(catalogFileIO, tableIdentifier); + } + + @Override + protected String defaultWarehouseLocation(TableIdentifier tableIdentifier) { + if (tableIdentifier.namespace().isEmpty()) { + return SLASH.join( + defaultNamespaceLocation(tableIdentifier.namespace()), tableIdentifier.name()); + } else { + PolarisResolvedPathWrapper resolvedNamespace = + resolvedEntityView.getResolvedPath(tableIdentifier.namespace()); + if (resolvedNamespace == null) { + throw new NoSuchNamespaceException( + "Namespace does not exist: %s", tableIdentifier.namespace()); + } + List namespacePath = resolvedNamespace.getRawFullPath(); + String namespaceLocation = resolveLocationForPath(namespacePath); + return SLASH.join(namespaceLocation, tableIdentifier.name()); + } + } + + private String defaultNamespaceLocation(Namespace namespace) { + if (namespace.isEmpty()) { + return defaultBaseLocation; + } else { + return SLASH.join(defaultBaseLocation, SLASH.join(namespace.levels())); + } + } + + private Set getLocationsAllowedToBeAccessed(TableMetadata tableMetadata) { + String basicLocation = tableMetadata.location(); + Set locations = new HashSet<>(); + locations.add(concatFilePrefixes(basicLocation, "data/", "/")); + locations.add(concatFilePrefixes(basicLocation, "metadata/", "/")); + return locations; + } + + private Set getLocationsAllowedToBeAccessed(ViewMetadata viewMetadata) { + String basicLocation = viewMetadata.location(); + Set locations = new HashSet<>(); + // a view won't have a "data" location, so only allowed to access "metadata" 
+ locations.add(concatFilePrefixes(basicLocation, "metadata/", "/")); + return locations; + } + + @Override + public boolean dropTable(TableIdentifier tableIdentifier, boolean purge) { + TableOperations ops = newTableOps(tableIdentifier); + TableMetadata lastMetadata; + if (purge && ops.current() != null) { + lastMetadata = ops.current(); + } else { + lastMetadata = null; + } + + Optional storageInfoEntity = findStorageInfo(tableIdentifier); + if (purge && lastMetadata != null) { + Map credentialsMap = + storageInfoEntity + .map( + entity -> + refreshCredentials( + tableIdentifier, + Set.of(PolarisStorageActions.READ, PolarisStorageActions.WRITE), + getLocationsAllowedToBeAccessed(lastMetadata), + entity)) + .orElse(Map.of()); + Map tableProperties = new HashMap<>(lastMetadata.properties()); + tableProperties.putAll(credentialsMap); + if (!tableProperties.isEmpty()) { + catalogFileIO = loadFileIO(ioImplClassName, tableProperties); + // ensure the new fileIO is closed when the catalog is closed + closeableGroup.addCloseable(catalogFileIO); + } + } + Map storageProperties = + storageInfoEntity + .map(PolarisEntity::getInternalPropertiesAsMap) + .map( + properties -> { + if (lastMetadata == null) { + return Map.of(); + } + Map clone = new HashMap<>(properties); + clone.put(CatalogProperties.FILE_IO_IMPL, ioImplClassName); + try { + clone.putAll(catalogFileIO.properties()); + } catch (UnsupportedOperationException e) { + LOG.warn("FileIO doesn't implement properties()"); + } + clone.put(PolarisTaskConstants.STORAGE_LOCATION, lastMetadata.location()); + return clone; + }) + .orElse(Map.of()); + PolarisMetaStoreManager.DropEntityResult dropEntityResult = + dropTableLike( + catalogId, PolarisEntitySubType.TABLE, tableIdentifier, storageProperties, purge); + if (!dropEntityResult.isSuccess()) { + return false; + } + + if (purge && lastMetadata != null && dropEntityResult.getCleanupTaskId() != null) { + LOG.info( + "Scheduled cleanup task {} for table {}", + 
dropEntityResult.getCleanupTaskId(), + tableIdentifier); + taskExecutor.addTaskHandlerContext( + dropEntityResult.getCleanupTaskId(), CallContext.getCurrentContext()); + } + + return true; + } + + @Override + public List listTables(Namespace namespace) { + if (!namespaceExists(namespace) && !namespace.isEmpty()) { + throw new NoSuchNamespaceException( + "Cannot list tables for namespace. Namespace does not exist: %s", namespace); + } + + return listTableLike(catalogId, PolarisEntitySubType.TABLE, namespace); + } + + @Override + public void renameTable(TableIdentifier from, TableIdentifier to) { + if (from.equals(to)) { + return; + } + + renameTableLike(catalogId, PolarisEntitySubType.TABLE, from, to); + } + + @Override + public void createNamespace(Namespace namespace) { + createNamespace(namespace, Collections.emptyMap()); + } + + @Override + public void createNamespace(Namespace namespace, Map metadata) { + LOG.debug("Creating namespace {} with metadata {}", namespace, metadata); + if (namespace.isEmpty()) { + throw new AlreadyExistsException( + "Cannot create root namespace, as it already exists implicitly."); + } + + // TODO: These should really be helpers in core Iceberg Namespace. + Namespace parentNamespace = PolarisCatalogHelpers.getParentNamespace(namespace); + + PolarisResolvedPathWrapper resolvedParent = resolvedEntityView.getResolvedPath(parentNamespace); + if (resolvedParent == null) { + throw new NoSuchNamespaceException( + "Cannot create namespace %s. 
Parent namespace does not exist.", namespace); + } + createNamespaceInternal(namespace, metadata, resolvedParent); + } + + private void createNamespaceInternal( + Namespace namespace, + Map metadata, + PolarisResolvedPathWrapper resolvedParent) { + String baseLocation = resolveNamespaceLocation(namespace, metadata); + NamespaceEntity entity = + new NamespaceEntity.Builder(namespace) + .setCatalogId(getCatalogId()) + .setId( + entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId()) + .setParentId(resolvedParent.getRawLeafEntity().getId()) + .setProperties(metadata) + .setCreateTimestamp(System.currentTimeMillis()) + .setBaseLocation(baseLocation) + .build(); + if (!callContext + .getPolarisCallContext() + .getConfigurationStore() + .getConfiguration( + callContext.getPolarisCallContext(), + PolarisConfiguration.ALLOW_NAMESPACE_LOCATION_OVERLAP, + PolarisConfiguration.DEFAULT_ALLOW_NAMESPACE_LOCATION_OVERLAP)) { + LOG.debug("Validating no overlap for {} with sibling tables or namespaces", namespace); + validateNoLocationOverlap( + entity.getBaseLocation(), resolvedParent.getRawFullPath(), entity.getName()); + } else { + LOG.debug("Skipping location overlap validation for namespace '{}'", namespace); + } + PolarisEntity returnedEntity = + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .createEntityIfNotExists( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(resolvedParent.getRawFullPath()), + entity)); + if (returnedEntity == null) { + throw new AlreadyExistsException( + "Cannot create namespace %s. Namespace already exists", namespace); + } + } + + private String resolveNamespaceLocation(Namespace namespace, Map properties) { + if (properties.containsKey(PolarisEntityConstants.ENTITY_BASE_LOCATION)) { + return properties.get(PolarisEntityConstants.ENTITY_BASE_LOCATION); + } else { + List parentPath = + namespace.length() > 1 + ? 
getResolvedParentNamespace(namespace).getRawFullPath()
+              : List.of(resolvedEntityView.getResolvedReferenceCatalogEntity().getRawLeafEntity());
+
+      String parentLocation = resolveLocationForPath(parentPath);
+
+      return parentLocation + "/" + namespace.level(namespace.length() - 1);
+    }
+  }
+
+  private static @NotNull String resolveLocationForPath(List<PolarisEntity> parentPath) {
+    // always take the first object. If it has the base-location, stop there
+    AtomicBoolean foundBaseLocation = new AtomicBoolean(false);
+    return parentPath.reversed().stream()
+        .takeWhile(
+            entity ->
+                !foundBaseLocation.getAndSet(
+                    entity
+                        .getPropertiesAsMap()
+                        .containsKey(PolarisEntityConstants.ENTITY_BASE_LOCATION)))
+        .toList()
+        .reversed()
+        .stream()
+        .map(
+            entity -> {
+              if (entity.getType().equals(PolarisEntityType.CATALOG)) {
+                return CatalogEntity.of(entity).getDefaultBaseLocation();
+              } else {
+                String baseLocation =
+                    entity.getPropertiesAsMap().get(PolarisEntityConstants.ENTITY_BASE_LOCATION);
+                if (baseLocation != null) {
+                  return baseLocation;
+                } else {
+                  return entity.getName();
+                }
+              }
+            })
+        .map(BasePolarisCatalog::stripLeadingTrailingSlash)
+        .collect(Collectors.joining("/"));
+  }
+
+  private static String stripLeadingTrailingSlash(String location) {
+    if (location.startsWith("/")) {
+      return stripLeadingTrailingSlash(location.substring(1));
+    }
+    if (location.endsWith("/")) {
+      return location.substring(0, location.length() - 1);
+    } else {
+      return location;
+    }
+  }
+
+  private PolarisResolvedPathWrapper getResolvedParentNamespace(Namespace namespace) {
+    Namespace parentNamespace =
+        Namespace.of(Arrays.copyOf(namespace.levels(), namespace.length() - 1));
+    PolarisResolvedPathWrapper resolvedParent = resolvedEntityView.getResolvedPath(parentNamespace);
+    if (resolvedParent == null) {
+      return resolvedEntityView.getPassthroughResolvedPath(parentNamespace);
+    }
+    return resolvedParent;
+  }
+
+  @Override
+  public boolean namespaceExists(Namespace namespace) {
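As a standalone illustration (not part of this patch), the slash normalization that `stripLeadingTrailingSlash` performs before `resolveLocationForPath` joins location segments can be sketched as follows; the class name is hypothetical:

```java
// Hypothetical standalone sketch of the slash normalization used when joining
// entity base-location segments into a single storage path. Mirrors
// stripLeadingTrailingSlash: all leading slashes are removed, and at most
// one trailing slash is removed.
public class LocationJoinSketch {
    static String strip(String location) {
        while (location.startsWith("/")) {
            location = location.substring(1);
        }
        if (location.endsWith("/")) {
            location = location.substring(0, location.length() - 1);
        }
        return location;
    }

    public static void main(String[] args) {
        // Joining a catalog base location with namespace and table segments.
        String joined =
            String.join("/", strip("s3://bucket/base/"), strip("/ns1/"), strip("tbl"));
        System.out.println(joined); // s3://bucket/base/ns1/tbl
    }
}
```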
+    PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace);
+    return resolvedEntities != null;
+  }
+
+  @Override
+  public boolean dropNamespace(Namespace namespace) throws NamespaceNotEmptyException {
+    PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace);
+    if (resolvedEntities == null) {
+      return false;
+    }
+
+    List<PolarisEntity> catalogPath = resolvedEntities.getRawParentPath();
+    PolarisEntity leafEntity = resolvedEntities.getRawLeafEntity();
+
+    // drop if exists and is empty
+    PolarisCallContext polarisCallContext = callContext.getPolarisCallContext();
+    PolarisMetaStoreManager.DropEntityResult dropEntityResult =
+        entityManager
+            .getMetaStoreManager()
+            .dropEntityIfExists(
+                getCurrentPolarisContext(),
+                PolarisEntity.toCoreList(catalogPath),
+                leafEntity,
+                Map.of(),
+                polarisCallContext
+                    .getConfigurationStore()
+                    .getConfiguration(polarisCallContext, CLEANUP_ON_NAMESPACE_DROP, false));
+
+    if (!dropEntityResult.isSuccess() && dropEntityResult.failedBecauseNotEmpty()) {
+      throw new NamespaceNotEmptyException("Namespace %s is not empty", namespace);
+    }
+
+    // return status of drop operation
+    return dropEntityResult.isSuccess();
+  }
+
+  @Override
+  public boolean setProperties(Namespace namespace, Map<String, String> properties)
+      throws NoSuchNamespaceException {
+    PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace);
+    if (resolvedEntities == null) {
+      throw new NoSuchNamespaceException("Namespace does not exist: %s", namespace);
+    }
+    PolarisEntity entity = resolvedEntities.getRawLeafEntity();
+    Map<String, String> newProperties = new HashMap<>(entity.getPropertiesAsMap());
+
+    // Merge new properties into existing map.
+ newProperties.putAll(properties); + PolarisEntity updatedEntity = + new PolarisEntity.Builder(entity).setProperties(newProperties).build(); + + if (!callContext + .getPolarisCallContext() + .getConfigurationStore() + .getConfiguration( + callContext.getPolarisCallContext(), + PolarisConfiguration.ALLOW_NAMESPACE_LOCATION_OVERLAP, + PolarisConfiguration.DEFAULT_ALLOW_NAMESPACE_LOCATION_OVERLAP)) { + LOG.debug("Validating no overlap with sibling tables or namespaces"); + validateNoLocationOverlap( + NamespaceEntity.of(updatedEntity).getBaseLocation(), + resolvedEntities.getRawParentPath(), + updatedEntity.getName()); + } else { + LOG.debug("Skipping location overlap validation for namespace '{}'", namespace); + } + + List parentPath = resolvedEntities.getRawFullPath(); + PolarisEntity returnedEntity = + Optional.ofNullable( + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(parentPath), + updatedEntity) + .getEntity()) + .map(PolarisEntity::new) + .orElse(null); + if (returnedEntity == null) { + throw new RuntimeException("Concurrent modification of namespace: " + namespace); + } + return true; + } + + @Override + public boolean removeProperties(Namespace namespace, Set properties) + throws NoSuchNamespaceException { + PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace); + if (resolvedEntities == null) { + throw new NoSuchNamespaceException("Namespace does not exist: %s", namespace); + } + PolarisEntity entity = resolvedEntities.getRawLeafEntity(); + + Map updatedProperties = new HashMap<>(entity.getPropertiesAsMap()); + properties.forEach(updatedProperties::remove); + + PolarisEntity updatedEntity = + new PolarisEntity.Builder(entity).setProperties(updatedProperties).build(); + + List parentPath = resolvedEntities.getRawFullPath(); + PolarisEntity returnedEntity = + Optional.ofNullable( + entityManager + .getMetaStoreManager() + 
.updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(parentPath), + updatedEntity) + .getEntity()) + .map(PolarisEntity::new) + .orElse(null); + if (returnedEntity == null) { + throw new RuntimeException("Concurrent modification of namespace: " + namespace); + } + return true; + } + + @Override + public Map loadNamespaceMetadata(Namespace namespace) + throws NoSuchNamespaceException { + PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace); + if (resolvedEntities == null) { + throw new NoSuchNamespaceException("Namespace does not exist: %s", namespace); + } + NamespaceEntity entity = NamespaceEntity.of(resolvedEntities.getRawLeafEntity()); + Preconditions.checkState( + entity.getParentNamespace().equals(PolarisCatalogHelpers.getParentNamespace(namespace)), + "Mismatched stored parentNamespace '%s' vs looked up parentNamespace '%s", + entity.getParentNamespace(), + PolarisCatalogHelpers.getParentNamespace(namespace)); + + return entity.getPropertiesAsMap(); + } + + @Override + public List listNamespaces() { + return listNamespaces(Namespace.empty()); + } + + @Override + public List listNamespaces(Namespace namespace) throws NoSuchNamespaceException { + PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace); + if (resolvedEntities == null) { + throw new NoSuchNamespaceException("Namespace does not exist: %s", namespace); + } + + List catalogPath = resolvedEntities.getRawFullPath(); + List entities = + PolarisEntity.toNameAndIdList( + entityManager + .getMetaStoreManager() + .listEntities( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(catalogPath), + PolarisEntityType.NAMESPACE, + PolarisEntitySubType.NULL_SUBTYPE) + .getEntities()); + return PolarisCatalogHelpers.nameAndIdToNamespaces(catalogPath, entities); + } + + @Override + public void close() throws IOException {} + + @Override + public List listViews(Namespace namespace) { + if 
(!namespaceExists(namespace) && !namespace.isEmpty()) { + throw new NoSuchNamespaceException( + "Cannot list views for namespace. Namespace does not exist: %s", namespace); + } + + return listTableLike(catalogId, PolarisEntitySubType.VIEW, namespace); + } + + @Override + protected BasePolarisViewOperations newViewOps(TableIdentifier identifier) { + return new BasePolarisViewOperations(catalogFileIO, identifier); + } + + @Override + public boolean dropView(TableIdentifier identifier) { + return dropTableLike(catalogId, PolarisEntitySubType.VIEW, identifier, Map.of(), true) + .isSuccess(); + } + + @Override + public void renameView(TableIdentifier from, TableIdentifier to) { + if (from.equals(to)) { + return; + } + + renameTableLike(catalogId, PolarisEntitySubType.VIEW, from, to); + } + + @Override + public boolean sendNotification( + TableIdentifier identifier, NotificationRequest notificationRequest) { + return sendNotificationForTableLike( + catalogId, PolarisEntitySubType.TABLE, identifier, notificationRequest); + } + + @Override + public Map getCredentialConfig( + TableIdentifier tableIdentifier, + TableMetadata tableMetadata, + Set storageActions) { + Optional storageInfo = findStorageInfo(tableIdentifier); + if (storageInfo.isEmpty()) { + LOG.atWarn() + .addKeyValue("tableIdentifier", tableIdentifier) + .log("Table entity has no storage configuration in its hierarchy"); + return Map.of(); + } + return refreshCredentials( + tableIdentifier, + storageActions, + getLocationsAllowedToBeAccessed(tableMetadata), + storageInfo.get()); + } + + /** + * Based on configuration settings, for callsites that need to handle potentially setting a new + * base location for a TableLike entity, produces the transformed location if applicable, or else + * the unaltered specified location. 
+   */
+  public String transformTableLikeLocation(String specifiedTableLikeLocation) {
+    String replaceNewLocationPrefix = catalogEntity.getReplaceNewLocationPrefixWithCatalogDefault();
+    if (specifiedTableLikeLocation != null
+        && replaceNewLocationPrefix != null
+        && specifiedTableLikeLocation.startsWith(replaceNewLocationPrefix)) {
+      String modifiedLocation =
+          defaultBaseLocation
+              + specifiedTableLikeLocation.substring(replaceNewLocationPrefix.length());
+      LOG.atDebug()
+          .addKeyValue("specifiedTableLikeLocation", specifiedTableLikeLocation)
+          .addKeyValue("modifiedLocation", modifiedLocation)
+          .log("Translating specifiedTableLikeLocation based on config");
+      return modifiedLocation;
+    }
+    return specifiedTableLikeLocation;
+  }
+
+  private @NotNull Optional<PolarisEntity> findStorageInfo(TableIdentifier tableIdentifier) {
+    PolarisResolvedPathWrapper resolvedTableEntities =
+        resolvedEntityView.getResolvedPath(tableIdentifier, PolarisEntitySubType.TABLE);
+
+    PolarisResolvedPathWrapper resolvedStorageEntity =
+        resolvedTableEntities == null
+            ? resolvedEntityView.getResolvedPath(tableIdentifier.namespace())
+            : resolvedTableEntities;
+
+    return findStorageInfoFromHierarchy(resolvedStorageEntity);
+  }
+
+  private Map<String, String> refreshCredentials(
+      TableIdentifier tableIdentifier,
+      Set<PolarisStorageActions> storageActions,
+      String tableLocation,
+      PolarisEntity entity) {
+    return refreshCredentials(tableIdentifier, storageActions, Set.of(tableLocation), entity);
+  }
+
+  private Map<String, String> refreshCredentials(
+      TableIdentifier tableIdentifier,
+      Set<PolarisStorageActions> storageActions,
+      Set<String> tableLocations,
+      PolarisEntity entity) {
+    // Important: Any locations added to the set of requested locations need to be validated
+    // prior to requesting subscoped credentials.
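The prefix substitution performed by `transformTableLikeLocation` above can be illustrated in isolation. This is a minimal sketch, not part of the patch; the class name and sample values are hypothetical:

```java
// Minimal sketch of the prefix-replacement rule: when a requested table/view
// location starts with the configured replacement prefix, that prefix is
// swapped for the catalog's default base location; otherwise the location
// passes through unchanged (including null).
public class LocationTransformSketch {
    static String transform(String specified, String replacePrefix, String defaultBase) {
        if (specified != null
                && replacePrefix != null
                && specified.startsWith(replacePrefix)) {
            return defaultBase + specified.substring(replacePrefix.length());
        }
        return specified;
    }

    public static void main(String[] args) {
        String out = transform("file:///tmp/ns/t", "file:///tmp", "s3://bucket/base");
        System.out.println(out); // s3://bucket/base/ns/t
    }
}
```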
+ tableLocations.forEach(tl -> validateLocationForTableLike(tableIdentifier, tl)); + + boolean allowList = + storageActions.contains(PolarisStorageActions.LIST) + || storageActions.contains(PolarisStorageActions.ALL); + Set writeLocations = + storageActions.contains(PolarisStorageActions.WRITE) + || storageActions.contains(PolarisStorageActions.DELETE) + || storageActions.contains(PolarisStorageActions.ALL) + ? tableLocations + : Set.of(); + Map credentialsMap = + entityManager + .getCredentialCache() + .getOrGenerateSubScopeCreds( + entityManager.getMetaStoreManager(), + callContext.getPolarisCallContext(), + entity, + allowList, + tableLocations, + writeLocations); + LOG.atDebug() + .addKeyValue("tableIdentifier", tableIdentifier) + .addKeyValue("credentialKeys", credentialsMap.keySet()) + .log("Loaded scoped credentials for table"); + if (credentialsMap.isEmpty()) { + LOG.debug("No credentials found for table"); + } + return credentialsMap; + } + + /** + * Validates that the specified {@code location} is valid for whatever storage config is found for + * this TableLike's parent hierarchy. + */ + private void validateLocationForTableLike(TableIdentifier identifier, String location) { + PolarisResolvedPathWrapper resolvedStorageEntity = + resolvedEntityView.getResolvedPath(identifier, PolarisEntitySubType.ANY_SUBTYPE); + if (resolvedStorageEntity == null) { + resolvedStorageEntity = resolvedEntityView.getResolvedPath(identifier.namespace()); + } + if (resolvedStorageEntity == null) { + resolvedStorageEntity = resolvedEntityView.getPassthroughResolvedPath(identifier.namespace()); + } + + validateLocationForTableLike(identifier, location, resolvedStorageEntity); + } + + /** + * Validates that the specified {@code location} is valid for whatever storage config is found for + * this TableLike's parent hierarchy. 
+   */
+  private void validateLocationForTableLike(
+      TableIdentifier identifier,
+      String location,
+      PolarisResolvedPathWrapper resolvedStorageEntity) {
+    Optional<PolarisStorageConfigurationInfo> optStorageConfiguration =
+        PolarisStorageConfigurationInfo.forEntityPath(
+            callContext.getPolarisCallContext().getDiagServices(),
+            resolvedStorageEntity.getRawFullPath());
+
+    optStorageConfiguration.ifPresentOrElse(
+        storageConfigInfo -> {
+          Map<String, Map<PolarisStorageActions, PolarisStorageIntegration.ValidationResult>>
+              validationResults =
+                  InMemoryStorageIntegration.validateSubpathsOfAllowedLocations(
+                      storageConfigInfo, Set.of(PolarisStorageActions.ALL), Set.of(location));
+          validationResults
+              .values()
+              .forEach(
+                  actionResult ->
+                      actionResult
+                          .values()
+                          .forEach(
+                              result -> {
+                                if (!result.isSuccess()) {
+                                  throw new ForbiddenException(
+                                      "Invalid location '%s' for identifier '%s': %s",
+                                      location, identifier, result.getMessage());
+                                } else {
+                                  LOG.debug(
+                                      "Validated location '{}' for identifier '{}'",
+                                      location,
+                                      identifier);
+                                }
+                              }));
+
+          // TODO: Consider exposing a property to control whether to use the explicit default
+          // in-memory PolarisStorageIntegration implementation to perform validation or
+          // whether to delegate to PolarisMetaStoreManager::validateAccessToLocations.
+ // Usually the validation is better to perform with local business logic, but if + // there are additional rules to be evaluated by a custom PolarisMetaStoreManager + // implementation, then the validation should go through that API instead as follows: + // + // PolarisMetaStoreManager.ValidateAccessResult validateResult = + // entityManager.getMetaStoreManager().validateAccessToLocations( + // getCurrentPolarisContext(), + // storageInfoHolderEntity.getCatalogId(), + // storageInfoHolderEntity.getId(), + // Set.of(PolarisStorageActions.ALL), + // Set.of(location)); + // if (!validateResult.isSuccess()) { + // throw new ForbiddenException("Invalid location '%s' for identifier '%s': %s", + // location, identifier, validateResult.getExtraInformation()); + // } + }, + () -> { + if (location.startsWith("file:") || location.startsWith("http")) { + throw new ForbiddenException( + "Invalid location '%s' for identifier '%s': File locations are not allowed", + location, identifier); + } + }); + } + + /** + * Validates the table location has no overlap with other entities after checking the + * configuration of the service + * + * @param identifier + * @param resolvedNamespace + * @param location + */ + private void validateNoLocationOverlap( + TableIdentifier identifier, List resolvedNamespace, String location) { + if (callContext + .getPolarisCallContext() + .getConfigurationStore() + .getConfiguration( + callContext.getPolarisCallContext(), + PolarisConfiguration.ALLOW_TABLE_LOCATION_OVERLAP, + PolarisConfiguration.DEFAULT_ALLOW_TABLE_LOCATION_OVERLAP)) { + LOG.debug("Skipping location overlap validation for identifier '{}'", identifier); + } else { // if (entity.getSubType().equals(PolarisEntitySubType.TABLE)) { + // TODO - is this necessary for views? 
overlapping views do not expose subdirectories via the + // credential vending + // so this feels like an unnecessary restriction + LOG.debug("Validating no overlap with sibling tables or namespaces"); + validateNoLocationOverlap(location, resolvedNamespace, identifier.name()); + } + } + + /** + * Validate no location overlap exists between the entity path and its sibling entities. This + * resolves all siblings at the same level as the target entity (namespaces if the target entity + * is a namespace whose parent is the catalog, namespaces and tables otherwise) and checks the + * base-location property of each. The target entity's base location may not be a prefix or a + * suffix of any sibling entity's base location. + * + * @param location + * @param parentPath + */ + private void validateNoLocationOverlap( + String location, List parentPath, String name) { + PolarisMetaStoreManager.ListEntitiesResult siblingNamespacesResult = + entityManager + .getMetaStoreManager() + .listEntities( + callContext.getPolarisCallContext(), + parentPath.stream().map(PolarisEntity::toCore).collect(Collectors.toList()), + PolarisEntityType.NAMESPACE, + PolarisEntitySubType.ANY_SUBTYPE); + if (!siblingNamespacesResult.isSuccess()) { + throw new IllegalStateException( + "Unable to resolve siblings entities to validate location - could not list namespaces"); + } + + // if the entity path has more than just the catalog, check for tables as well as other + // namespaces + Optional parentNamespace = + parentPath.size() > 1 + ? 
Optional.of(NamespaceEntity.of(parentPath.get(parentPath.size() - 1)))
+            : Optional.empty();
+
+    List<TableIdentifier> siblingTables =
+        parentNamespace
+            .map(
+                ns -> {
+                  PolarisMetaStoreManager.ListEntitiesResult siblingTablesResult =
+                      entityManager
+                          .getMetaStoreManager()
+                          .listEntities(
+                              callContext.getPolarisCallContext(),
+                              parentPath.stream()
+                                  .map(PolarisEntity::toCore)
+                                  .collect(Collectors.toList()),
+                              PolarisEntityType.TABLE_LIKE,
+                              PolarisEntitySubType.ANY_SUBTYPE);
+                  if (!siblingTablesResult.isSuccess()) {
+                    throw new IllegalStateException(
+                        "Unable to resolve siblings entities to validate location - could not list tables");
+                  }
+                  return siblingTablesResult.getEntities().stream()
+                      .map(tbl -> TableIdentifier.of(ns.asNamespace(), tbl.getName()))
+                      .collect(Collectors.toList());
+                })
+            .orElse(List.of());
+
+    List<Namespace> siblingNamespaces =
+        siblingNamespacesResult.getEntities().stream()
+            .map(
+                ns -> {
+                  String[] nsLevels =
+                      parentNamespace
+                          .map(parent -> parent.asNamespace().levels())
+                          .orElse(new String[0]);
+                  String[] newLevels = Arrays.copyOf(nsLevels, nsLevels.length + 1);
+                  newLevels[nsLevels.length] = ns.getName();
+                  return Namespace.of(newLevels);
+                })
+            .collect(Collectors.toList());
+    LOG.debug(
+        "Resolving {} sibling entities to validate location",
+        siblingTables.size() + siblingNamespaces.size());
+    PolarisResolutionManifest resolutionManifest =
+        new PolarisResolutionManifest(
+            callContext, entityManager, authenticatedPrincipal, parentPath.get(0).getName());
+    siblingTables.forEach(
+        tbl ->
+            resolutionManifest.addPath(
+                new ResolverPath(
+                    PolarisCatalogHelpers.tableIdentifierToList(tbl), PolarisEntityType.TABLE_LIKE),
+                tbl));
+    siblingNamespaces.forEach(
+        ns ->
+            resolutionManifest.addPath(
+                new ResolverPath(Arrays.asList(ns.levels()), PolarisEntityType.NAMESPACE), ns));
+    ResolverStatus status = resolutionManifest.resolveAll();
+    if (!status.getStatus().equals(ResolverStatus.StatusEnum.SUCCESS)) {
+      throw new IllegalStateException(
"Unable to resolve sibling entities to validate location - could not resolve" + + status.getFailedToResolvedEntityName()); + } + Stream.concat( + siblingTables.stream() + .filter(tbl -> !tbl.name().equals(name)) + .map( + tbl -> { + PolarisResolvedPathWrapper resolveTablePath = + resolutionManifest.getResolvedPath(tbl); + return TableLikeEntity.of(resolveTablePath.getRawLeafEntity()) + .getBaseLocation(); + }), + siblingNamespaces.stream() + .filter(ns -> !ns.level(ns.length() - 1).equals(name)) + .map( + ns -> { + PolarisResolvedPathWrapper resolveNamespacePath = + resolutionManifest.getResolvedPath(ns); + return NamespaceEntity.of(resolveNamespacePath.getRawLeafEntity()) + .getBaseLocation(); + })) + .filter(java.util.Objects::nonNull) + .forEach( + siblingLocation -> { + URI target = URI.create(location); + URI existing = URI.create(siblingLocation); + if (isUnderParentLocation(target, existing) + || isUnderParentLocation(existing, target)) { + throw new org.apache.iceberg.exceptions.BadRequestException( + "Unable to create table at location '%s' because it conflicts with existing table or namespace at location '%s'", + target, existing); + } + }); + } + + private class BasePolarisCatalogTableBuilder + extends BaseMetastoreViewCatalog.BaseMetastoreViewCatalogTableBuilder { + private final TableIdentifier identifier; + + public BasePolarisCatalogTableBuilder(TableIdentifier identifier, Schema schema) { + super(identifier, schema); + this.identifier = identifier; + } + + @Override + public TableBuilder withLocation(String newLocation) { + return super.withLocation(transformTableLikeLocation(newLocation)); + } + } + + private class BasePolarisCatalogViewBuilder extends BaseMetastoreViewCatalog.BaseViewBuilder { + private final TableIdentifier identifier; + + public BasePolarisCatalogViewBuilder(TableIdentifier identifier) { + super(identifier); + this.identifier = identifier; + } + + @Override + public ViewBuilder withLocation(String newLocation) { + return 
super.withLocation(transformTableLikeLocation(newLocation)); + } + } + + private class BasePolarisTableOperations extends BaseMetastoreTableOperations { + private final TableIdentifier tableIdentifier; + private final String fullTableName; + private FileIO tableFileIO; + + BasePolarisTableOperations(FileIO defaultFileIO, TableIdentifier tableIdentifier) { + LOG.debug("new BasePolarisTableOperations for {}", tableIdentifier); + this.tableIdentifier = tableIdentifier; + this.fullTableName = fullTableName(catalogName, tableIdentifier); + this.tableFileIO = defaultFileIO; + } + + @Override + public void doRefresh() { + LOG.debug("doRefresh for tableIdentifier {}", tableIdentifier); + // While doing refresh/commit protocols, we must fetch the fresh "passthrough" resolved + // table entity instead of the statically-resolved authz resolution set. + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getPassthroughResolvedPath( + tableIdentifier, PolarisEntitySubType.TABLE); + TableLikeEntity entity = null; + + if (resolvedEntities != null) { + entity = TableLikeEntity.of(resolvedEntities.getRawLeafEntity()); + if (!tableIdentifier.equals(entity.getTableIdentifier())) { + LOG.atError() + .addKeyValue("entity.getTableIdentifier()", entity.getTableIdentifier()) + .addKeyValue("tableIdentifier", tableIdentifier) + .log("Stored entity identifier mismatches requested identifier"); + } + } + + String latestLocation = entity != null ? entity.getMetadataLocation() : null; + LOG.debug("Refreshing latestLocation: {}", latestLocation); + if (latestLocation == null) { + disableRefresh(); + } else { + refreshFromMetadataLocation( + latestLocation, + SHOULD_RETRY_REFRESH_PREDICATE, + MAX_RETRIES, + metadataLocation -> { + FileIO fileIO = this.tableFileIO; + boolean closeFileIO = false; + PolarisResolvedPathWrapper resolvedStorageEntity = + resolvedEntities == null + ? 
resolvedEntityView.getResolvedPath(tableIdentifier.namespace()) + : resolvedEntities; + String latestLocationDir = + latestLocation.substring(0, latestLocation.lastIndexOf('/')); + fileIO = + refreshIOWithCredentials( + tableIdentifier, + Set.of(latestLocationDir), + resolvedStorageEntity, + new HashMap<>(), + fileIO); + return TableMetadataParser.read(fileIO, metadataLocation); + }); + } + } + + @Override + public void doCommit(TableMetadata base, TableMetadata metadata) { + LOG.debug("doCommit for {} with base {}, metadata {}", tableIdentifier, base, metadata); + // TODO: Maybe avoid writing metadata if there's definitely a transaction conflict + if (null == base && !namespaceExists(tableIdentifier.namespace())) { + throw new NoSuchNamespaceException( + "Cannot create table %s. Namespace does not exist: %s", + tableIdentifier, tableIdentifier.namespace()); + } + + PolarisResolvedPathWrapper resolvedTableEntities = + resolvedEntityView.getPassthroughResolvedPath( + tableIdentifier, PolarisEntitySubType.TABLE); + + // Fetch credentials for the resolved entity. The entity could be the table itself (if it has + // already been stored and credentials have been configured directly) or it could be the + // table's namespace or catalog. + PolarisResolvedPathWrapper resolvedStorageEntity = + resolvedTableEntities == null + ? resolvedEntityView.getResolvedPath(tableIdentifier.namespace()) + : resolvedTableEntities; + + // refresh credentials because we need to read the metadata file to validate its location + tableFileIO = + refreshIOWithCredentials( + tableIdentifier, + getLocationsAllowedToBeAccessed(metadata), + resolvedStorageEntity, + new HashMap<>(metadata.properties()), + tableFileIO); + + List resolvedNamespace = + resolvedTableEntities == null + ? 
resolvedEntityView.getResolvedPath(tableIdentifier.namespace()).getRawFullPath() + : resolvedTableEntities.getRawParentPath(); + CatalogEntity catalog = CatalogEntity.of(resolvedNamespace.get(0)); + + if (base == null || !metadata.location().equals(base.location())) { + // If location is changing then we must validate that the requested location is valid + // for the storage configuration inherited under this entity's path. + validateLocationForTableLike(tableIdentifier, metadata.location(), resolvedStorageEntity); + // also validate that the view location doesn't overlap an existing table + validateNoLocationOverlap(tableIdentifier, resolvedNamespace, metadata.location()); + // and that the metadata file points to a location within the table's directory structure + if (metadata.metadataFileLocation() != null) { + validateMetadataFileInTableDir(tableIdentifier, metadata, catalog); + } + } + + String newLocation = writeNewMetadataIfRequired(base == null, metadata); + String oldLocation = base == null ? null : base.metadataFileLocation(); + + PolarisResolvedPathWrapper resolvedView = + resolvedEntityView.getPassthroughResolvedPath(tableIdentifier, PolarisEntitySubType.VIEW); + if (resolvedView != null) { + throw new AlreadyExistsException("View with same name already exists: %s", tableIdentifier); + } + + // TODO: Consider using the entity from doRefresh() directly to do the conflict detection + // instead of a two-layer CAS (checking metadataLocation to detect concurrent modification + // between doRefresh() and doCommit(), and then updateEntityPropertiesIfNotChanged to detect + // concurrent + // modification between our checking of unchanged metadataLocation here and actual + // persistence-layer commit). + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getPassthroughResolvedPath( + tableIdentifier, PolarisEntitySubType.TABLE); + TableLikeEntity entity = + TableLikeEntity.of(resolvedEntities == null ? 
null : resolvedEntities.getRawLeafEntity()); + String existingLocation; + if (null == entity) { + existingLocation = null; + entity = + new TableLikeEntity.Builder(tableIdentifier, newLocation) + .setCatalogId(getCatalogId()) + .setSubType(PolarisEntitySubType.TABLE) + .setBaseLocation(metadata.location()) + .setId( + entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId()) + .build(); + } else { + existingLocation = entity.getMetadataLocation(); + entity = + new TableLikeEntity.Builder(entity) + .setBaseLocation(metadata.location()) + .setMetadataLocation(newLocation) + .build(); + } + if (!Objects.equal(existingLocation, oldLocation)) { + if (null == base) { + throw new AlreadyExistsException("Table already exists: %s", tableName()); + } + + if (null == existingLocation) { + throw new NoSuchTableException("Table does not exist: %s", tableName()); + } + + throw new CommitFailedException( + "Cannot commit to table %s metadata location from %s to %s " + + "because it has been concurrently modified to %s", + tableIdentifier, oldLocation, newLocation, existingLocation); + } + if (null == existingLocation) { + createTableLike(catalogId, tableIdentifier, entity); + } else { + updateTableLike(catalogId, tableIdentifier, entity); + } + } + + @Override + public FileIO io() { + return tableFileIO; + } + + @Override + protected String tableName() { + return fullTableName; + } + } + + private void validateMetadataFileInTableDir( + TableIdentifier identifier, TableMetadata metadata, CatalogEntity catalog) { + PolarisCallContext polarisCallContext = callContext.getPolarisCallContext(); + String allowEscape = + catalog + .getPropertiesAsMap() + .get(PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION); + if (!Boolean.parseBoolean(allowEscape) + && !polarisCallContext + .getConfigurationStore() + .getConfiguration( + polarisCallContext, + PolarisConfiguration.ALLOW_EXTERNAL_METADATA_FILE_LOCATION, + 
PolarisConfiguration.DEFAULT_ALLOW_EXTERNAL_METADATA_FILE_LOCATION)) { + LOG.debug( + "Validating base location {} for table {} in metadata file {}", + metadata.location(), + identifier, + metadata.metadataFileLocation()); + if (!isUnderParentLocation( + URI.create(metadata.metadataFileLocation()), + URI.create(metadata.location() + "/metadata").normalize())) { + throw new org.apache.iceberg.exceptions.BadRequestException( + "Metadata location %s is not allowed outside of table location %s", + metadata.metadataFileLocation(), metadata.location()); + } + } + } + + private static @NotNull Optional findStorageInfoFromHierarchy( + PolarisResolvedPathWrapper resolvedStorageEntity) { + Optional storageInfoEntity = + resolvedStorageEntity.getRawFullPath().reversed().stream() + .filter( + e -> + e.getInternalPropertiesAsMap() + .containsKey(PolarisEntityConstants.getStorageConfigInfoPropertyName())) + .findFirst(); + return storageInfoEntity; + } + + private class BasePolarisViewOperations extends BaseViewOperations { + private final TableIdentifier identifier; + private final String fullViewName; + private FileIO viewFileIO; + + BasePolarisViewOperations(FileIO io, TableIdentifier identifier) { + this.viewFileIO = io; + this.identifier = identifier; + this.fullViewName = ViewUtil.fullViewName(catalogName, identifier); + } + + @Override + public void doRefresh() { + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getPassthroughResolvedPath(identifier, PolarisEntitySubType.VIEW); + TableLikeEntity entity = null; + + if (resolvedEntities != null) { + entity = TableLikeEntity.of(resolvedEntities.getRawLeafEntity()); + if (!identifier.equals(entity.getTableIdentifier())) { + LOG.atError() + .addKeyValue("entity.getTableIdentifier()", entity.getTableIdentifier()) + .addKeyValue("identifier", identifier) + .log("Stored entity identifier mismatches requested identifier"); + } + } + + String latestLocation = entity != null ? 
entity.getMetadataLocation() : null; + LOG.debug("Refreshing view latestLocation: {}", latestLocation); + if (latestLocation == null) { + disableRefresh(); + } else { + refreshFromMetadataLocation( + latestLocation, + SHOULD_RETRY_REFRESH_PREDICATE, + MAX_RETRIES, + metadataLocation -> { + FileIO fileIO = this.viewFileIO; + boolean closeFileIO = false; + PolarisResolvedPathWrapper resolvedStorageEntity = + resolvedEntities == null + ? resolvedEntityView.getResolvedPath(identifier.namespace()) + : resolvedEntities; + String latestLocationDir = + latestLocation.substring(0, latestLocation.lastIndexOf('/')); + Optional storageInfoEntity = + findStorageInfoFromHierarchy(resolvedStorageEntity); + Map credentialsMap = + storageInfoEntity + .map( + storageInfo -> + refreshCredentials( + identifier, + Set.of(PolarisStorageActions.READ), + latestLocationDir, + storageInfo)) + .orElse(Map.of()); + if (!credentialsMap.isEmpty()) { + String ioImpl = fileIO.getClass().getName(); + fileIO = loadFileIO(ioImpl, credentialsMap); + closeFileIO = true; + } + try { + return ViewMetadataParser.read(fileIO.newInputFile(metadataLocation)); + } finally { + if (closeFileIO) { + fileIO.close(); + } + } + }); + } + } + + @Override + public void doCommit(ViewMetadata base, ViewMetadata metadata) { + // TODO: Maybe avoid writing metadata if there's definitely a transaction conflict + LOG.debug("doCommit for {} with base {}, metadata {}", identifier, base, metadata); + if (null == base && !namespaceExists(identifier.namespace())) { + throw new NoSuchNamespaceException( + "Cannot create view %s. 
Namespace does not exist: %s", + identifier, identifier.namespace()); + } + + PolarisResolvedPathWrapper resolvedTable = + resolvedEntityView.getPassthroughResolvedPath(identifier, PolarisEntitySubType.TABLE); + if (resolvedTable != null) { + throw new AlreadyExistsException("Table with same name already exists: %s", identifier); + } + + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getPassthroughResolvedPath(identifier, PolarisEntitySubType.VIEW); + + // Fetch credentials for the resolved entity. The entity could be the view itself (if it has + // already been stored and credentials have been configured directly) or it could be the + // view's namespace or catalog. + PolarisResolvedPathWrapper resolvedStorageEntity = + resolvedEntities == null + ? resolvedEntityView.getResolvedPath(identifier.namespace()) + : resolvedEntities; + + List resolvedNamespace = + resolvedEntities == null + ? resolvedEntityView.getResolvedPath(identifier.namespace()).getRawFullPath() + : resolvedEntities.getRawParentPath(); + if (base == null || !metadata.location().equals(base.location())) { + // If location is changing then we must validate that the requested location is valid + // for the storage configuration inherited under this entity's path. + validateLocationForTableLike(identifier, metadata.location(), resolvedStorageEntity); + validateNoLocationOverlap(identifier, resolvedNamespace, metadata.location()); + } + + Map tableProperties = new HashMap<>(metadata.properties()); + + viewFileIO = + refreshIOWithCredentials( + identifier, + getLocationsAllowedToBeAccessed(metadata), + resolvedStorageEntity, + tableProperties, + viewFileIO); + + String newLocation = writeNewMetadataIfRequired(metadata); + String oldLocation = base == null ? null : currentMetadataLocation(); + + TableLikeEntity entity = + TableLikeEntity.of(resolvedEntities == null ? null : resolvedEntities.getRawLeafEntity()); + String existingLocation; + if (null == entity) { + existingLocation = null; + entity = + new TableLikeEntity.Builder(identifier, newLocation) + .setCatalogId(getCatalogId()) + .setSubType(PolarisEntitySubType.VIEW) + .setId( + entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId()) + .build(); + } else { + existingLocation = entity.getMetadataLocation(); + entity = new TableLikeEntity.Builder(entity).setMetadataLocation(newLocation).build(); + } + if (!Objects.equal(existingLocation, oldLocation)) { + if (null == base) { + throw new AlreadyExistsException("View already exists: %s", identifier); + } + + if (null == existingLocation) { + throw new NoSuchViewException("View does not exist: %s", identifier); + } + + throw new CommitFailedException( + "Cannot commit to view %s metadata location from %s to %s " + + "because it has been concurrently modified to %s", + identifier, oldLocation, newLocation, existingLocation); + } + if (null == existingLocation) { + createTableLike(catalogId, identifier, entity); + } else { + updateTableLike(catalogId, identifier, entity); + } + } + + @Override + public FileIO io() { + return viewFileIO; + } + + @Override + protected String viewName() { + return fullViewName; + } + } + + private FileIO refreshIOWithCredentials( + TableIdentifier identifier, + Set readLocations, + PolarisResolvedPathWrapper resolvedStorageEntity, + Map tableProperties, + FileIO fileIO) { + Optional storageInfoEntity = findStorageInfoFromHierarchy(resolvedStorageEntity); + Map credentialsMap = + storageInfoEntity + .map( + storageInfo -> + refreshCredentials( + identifier, + Set.of(PolarisStorageActions.READ, PolarisStorageActions.WRITE), + readLocations, + storageInfo)) + .orElse(Map.of()); + + // Update the FileIO 
before we write the new metadata file + // update with table properties in case there are table-level overrides + // the credentials should always override table-level properties, since + // storage configuration will be found at whatever entity defines it + tableProperties.putAll(credentialsMap); + if (!tableProperties.isEmpty()) { + fileIO = loadFileIO(ioImplClassName, tableProperties); + // ensure the new fileIO is closed when the catalog is closed + closeableGroup.addCloseable(fileIO); + } + return fileIO; + } + + private PolarisCallContext getCurrentPolarisContext() { + return callContext.getPolarisCallContext(); + } + + @VisibleForTesting + long getCatalogId() { + // TODO: Properly handle initialization + if (catalogId <= 0) { + throw new RuntimeException( + "Failed to initialize catalogId before using catalog with name: " + catalogName); + } + return catalogId; + } + + private void renameTableLike( + long catalogId, PolarisEntitySubType subType, TableIdentifier from, TableIdentifier to) { + LOG.debug("Renaming tableLike from {} to {}", from, to); + PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(from, subType); + if (resolvedEntities == null) { + if (subType == PolarisEntitySubType.VIEW) { + throw new NoSuchViewException("Cannot rename %s to %s. View does not exist", from, to); + } else { + throw new NoSuchTableException("Cannot rename %s to %s. Table does not exist", from, to); + } + } + List catalogPath = resolvedEntities.getRawParentPath(); + PolarisEntity leafEntity = resolvedEntities.getRawLeafEntity(); + final TableLikeEntity toEntity; + List newCatalogPath = null; + if (!from.namespace().equals(to.namespace())) { + PolarisResolvedPathWrapper resolvedNewParentEntities = + resolvedEntityView.getResolvedPath(to.namespace()); + if (resolvedNewParentEntities == null) { + throw new NoSuchNamespaceException( + "Cannot rename %s to %s. 
Namespace does not exist: %s", from, to, to.namespace()); + } + newCatalogPath = resolvedNewParentEntities.getRawFullPath(); + + // the "to" table has a new parent and a new name / namespace path + toEntity = + new TableLikeEntity.Builder(TableLikeEntity.of(leafEntity)) + .setTableIdentifier(to) + .setParentId(resolvedNewParentEntities.getResolvedLeafEntity().getEntity().getId()) + .build(); + } else { + // only the name of the entity is changed + toEntity = + new TableLikeEntity.Builder(TableLikeEntity.of(leafEntity)) + .setTableIdentifier(to) + .build(); + } + + // rename the entity now + PolarisMetaStoreManager.EntityResult returnedEntityResult = + entityManager + .getMetaStoreManager() + .renameEntity( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(catalogPath), + leafEntity, + PolarisEntity.toCoreList(newCatalogPath), + toEntity); + + // handle error + if (!returnedEntityResult.isSuccess()) { + LOG.debug( + "Rename error {} trying to rename {} to {}. Checking existing object.", + returnedEntityResult.getReturnStatus(), + from, + to); + switch (returnedEntityResult.getReturnStatus()) { + case PolarisMetaStoreManager.ReturnStatus.ENTITY_ALREADY_EXISTS: + { + PolarisEntitySubType existingEntitySubType = + returnedEntityResult.getAlreadyExistsEntitySubType(); + if (existingEntitySubType == null) { + // this code path is unexpected + throw new AlreadyExistsException( + "Cannot rename %s to %s. Object %s already exists", from, to, to); + } else if (existingEntitySubType == PolarisEntitySubType.TABLE) { + throw new AlreadyExistsException( + "Cannot rename %s to %s. Table already exists", from, to); + } else if (existingEntitySubType == PolarisEntitySubType.VIEW) { + throw new AlreadyExistsException( + "Cannot rename %s to %s. View already exists", from, to); + } + } + + case PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND: + throw new NotFoundException("Cannot rename %s to %s. %s does not exist", from, to, from); + + // this is temporary. 
Should throw a special error that will be caught and retried + case PolarisMetaStoreManager.ReturnStatus.TARGET_ENTITY_CONCURRENTLY_MODIFIED: + case PolarisMetaStoreManager.ReturnStatus.ENTITY_CANNOT_BE_RESOLVED: + throw new RuntimeException("concurrent update detected, please retry"); + + // some entities cannot be renamed + case PolarisMetaStoreManager.ReturnStatus.ENTITY_CANNOT_BE_RENAMED: + throw new BadRequestException("Cannot rename built-in object " + leafEntity.getName()); + + // anything else is unexpected + default: + throw new IllegalStateException( + "Unknown error status " + returnedEntityResult.getReturnStatus()); + } + } else { + TableLikeEntity returnedEntity = TableLikeEntity.of(returnedEntityResult.getEntity()); + if (!toEntity.getTableIdentifier().equals(returnedEntity.getTableIdentifier())) { + // As long as there are older deployments which don't support the atomic update of the + // internalProperties during rename, we can log and then patch it up explicitly + // in a best-effort way. + LOG.atError() + .addKeyValue("toEntity.getTableIdentifier()", toEntity.getTableIdentifier()) + .addKeyValue("returnedEntity.getTableIdentifier()", returnedEntity.getTableIdentifier()) + .log("Returned entity identifier doesn't match toEntity identifier"); + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(newCatalogPath), + new TableLikeEntity.Builder(returnedEntity).setTableIdentifier(to).build()); + } + } + } + + /** + * Caller must fill in all entity fields except parentId, since the caller may not want to + * duplicate the logic to try to resolve parentIds before constructing the proposed entity. This + * method will fill in the parentId if needed upon resolution. 
+ */ + private void createTableLike(long catalogId, TableIdentifier identifier, PolarisEntity entity) { + PolarisResolvedPathWrapper resolvedParent = + resolvedEntityView.getResolvedPath(identifier.namespace()); + if (resolvedParent == null) { + // Illegal state because the namespace should've already been in the static resolution set. + throw new IllegalStateException( + String.format("Failed to fetch resolved parent for TableIdentifier '%s'", identifier)); + } + + createTableLike(catalogId, identifier, entity, resolvedParent); + } + + private void createTableLike( + long catalogId, + TableIdentifier identifier, + PolarisEntity entity, + PolarisResolvedPathWrapper resolvedParent) { + // Make sure the metadata file is valid for our allowed locations. + String metadataLocation = TableLikeEntity.of(entity).getMetadataLocation(); + validateLocationForTableLike(identifier, metadataLocation, resolvedParent); + + List catalogPath = resolvedParent.getRawFullPath(); + + if (entity.getParentId() <= 0) { + // TODO: Validate catalogPath size is at least 1 for catalog entity? + entity = + new PolarisEntity.Builder(entity) + .setParentId(resolvedParent.getRawLeafEntity().getId()) + .build(); + } + entity = + new PolarisEntity.Builder(entity).setCreateTimestamp(System.currentTimeMillis()).build(); + + PolarisEntity returnedEntity = + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .createEntityIfNotExists( + getCurrentPolarisContext(), PolarisEntity.toCoreList(catalogPath), entity)); + LOG.debug("Created TableLike entity {} with TableIdentifier {}", entity, identifier); + if (returnedEntity == null) { + // TODO: Error or retry? 
+ } + } + + private static boolean isUnderParentLocation(URI childLocation, URI expectedParentLocation) { + return !expectedParentLocation.relativize(childLocation).equals(childLocation); + } + + private void updateTableLike(long catalogId, TableIdentifier identifier, PolarisEntity entity) { + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getResolvedPath(identifier, entity.getSubType()); + if (resolvedEntities == null) { + // Illegal state because the identifier should've already been in the static resolution set. + throw new IllegalStateException( + String.format("Failed to fetch resolved TableIdentifier '%s'", identifier)); + } + + // Make sure the metadata file is valid for our allowed locations. + String metadataLocation = TableLikeEntity.of(entity).getMetadataLocation(); + validateLocationForTableLike(identifier, metadataLocation, resolvedEntities); + + List catalogPath = resolvedEntities.getRawParentPath(); + PolarisEntity returnedEntity = + Optional.ofNullable( + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + getCurrentPolarisContext(), PolarisEntity.toCoreList(catalogPath), entity) + .getEntity()) + .map(PolarisEntity::new) + .orElse(null); + if (returnedEntity == null) { + // TODO: Error or retry? + } + } + + private @NotNull PolarisMetaStoreManager.DropEntityResult dropTableLike( + long catalogId, + PolarisEntitySubType subType, + TableIdentifier identifier, + Map storageProperties, + boolean purge) { + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getResolvedPath(identifier, subType); + if (resolvedEntities == null) { + // TODO: Error? 
+ return new PolarisMetaStoreManager.DropEntityResult( + PolarisMetaStoreManager.ReturnStatus.ENTITY_NOT_FOUND, null); + } + + List catalogPath = resolvedEntities.getRawParentPath(); + PolarisEntity leafEntity = resolvedEntities.getRawLeafEntity(); + return entityManager + .getMetaStoreManager() + .dropEntityIfExists( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(catalogPath), + leafEntity, + storageProperties, + purge); + } + + private boolean sendNotificationForTableLike( + long catalogId, + PolarisEntitySubType subType, + TableIdentifier tableIdentifier, + NotificationRequest request) { + LOG.debug("Handling notification request {} for tableIdentifier {}", request, tableIdentifier); + PolarisResolvedPathWrapper resolvedEntities = + resolvedEntityView.getPassthroughResolvedPath(tableIdentifier, subType); + + NotificationType notificationType = request.getNotificationType(); + + Preconditions.checkNotNull(notificationType, "Expected a valid notification type."); + + if (notificationType == NotificationType.DROP) { + return dropTableLike( + catalogId, PolarisEntitySubType.TABLE, tableIdentifier, Map.of(), false /* purge */) + .isSuccess(); + } else if (notificationType == NotificationType.CREATE + || notificationType == NotificationType.UPDATE) { + + Namespace ns = tableIdentifier.namespace(); + createNonExistingNamespaces(ns); + + PolarisResolvedPathWrapper resolvedParent = resolvedEntityView.getPassthroughResolvedPath(ns); + + TableLikeEntity entity = + TableLikeEntity.of(resolvedEntities == null ? 
null : resolvedEntities.getRawLeafEntity()); + + String existingLocation; + String newLocation = transformTableLikeLocation(request.getPayload().getMetadataLocation()); + if (null == entity) { + existingLocation = null; + entity = + new TableLikeEntity.Builder(tableIdentifier, newLocation) + .setCatalogId(getCatalogId()) + .setSubType(PolarisEntitySubType.TABLE) + .setId( + entityManager + .getMetaStoreManager() + .generateNewEntityId(getCurrentPolarisContext()) + .getId()) + .build(); + } else { + existingLocation = entity.getMetadataLocation(); + entity = new TableLikeEntity.Builder(entity).setMetadataLocation(newLocation).build(); + } + // first validate we can read the metadata file + validateLocationForTableLike(tableIdentifier, newLocation); + + TableOperations tableOperations = newTableOps(tableIdentifier); + String locationDir = newLocation.substring(0, newLocation.lastIndexOf("/")); + FileIO fileIO = + refreshIOWithCredentials( + tableIdentifier, + Set.of(locationDir), + resolvedParent, + new HashMap<>(), + tableOperations.io()); + TableMetadata tableMetadata = TableMetadataParser.read(fileIO, newLocation); + + // then validate that it points to a valid location for this table + validateLocationForTableLike(tableIdentifier, tableMetadata.location()); + + // finally, validate that the metadata file is within the table directory + validateMetadataFileInTableDir( + tableIdentifier, tableMetadata, CatalogEntity.of(resolvedParent.getRawFullPath().get(0))); + + // TODO: These might fail due to concurrent update; we need to do a retry in those cases. 
+ if (null == existingLocation) { + LOG.debug( + "Creating table {} for notification with metadataLocation {}", + tableIdentifier, + newLocation); + createTableLike(catalogId, tableIdentifier, entity, resolvedParent); + } else { + LOG.debug( + "Updating table {} for notification with metadataLocation {}", + tableIdentifier, + newLocation); + + updateTableLike(catalogId, tableIdentifier, entity); + } + } + return true; + } + + private void createNonExistingNamespaces(Namespace namespace) { + // Pre-create namespaces if they don't exist + for (int i = 1; i <= namespace.length(); i++) { + Namespace nsLevel = + Namespace.of( + Arrays.stream(namespace.levels()) + .limit(i) + .collect(Collectors.toList()) + .toArray(String[]::new)); + if (resolvedEntityView.getPassthroughResolvedPath(nsLevel) == null) { + Namespace parentNamespace = PolarisCatalogHelpers.getParentNamespace(nsLevel); + PolarisResolvedPathWrapper resolvedParent = + resolvedEntityView.getPassthroughResolvedPath(parentNamespace); + createNamespaceInternal(nsLevel, Collections.emptyMap(), resolvedParent); + } + } + } + + private List listTableLike( + long catalogId, PolarisEntitySubType subType, Namespace namespace) { + PolarisResolvedPathWrapper resolvedEntities = resolvedEntityView.getResolvedPath(namespace); + if (resolvedEntities == null) { + // Illegal state because the namespace should've already been in the static resolution set. 
+ throw new IllegalStateException( + String.format("Failed to fetch resolved namespace '%s'", namespace)); + } + + List catalogPath = resolvedEntities.getRawFullPath(); + List entities = + PolarisEntity.toNameAndIdList( + entityManager + .getMetaStoreManager() + .listEntities( + getCurrentPolarisContext(), + PolarisEntity.toCoreList(catalogPath), + PolarisEntityType.TABLE_LIKE, + subType) + .getEntities()); + return PolarisCatalogHelpers.nameAndIdToTableIdentifiers(catalogPath, entities); + } + + /** + * Load FileIO with provided impl and properties + * + * @param ioImpl full class name of a custom FileIO implementation + * @param properties used to initialize the FileIO implementation + * @return FileIO object + */ + private FileIO loadFileIO(String ioImpl, Map properties) { + blockedUserSpecifiedWriteLocation(properties); + Map propertiesWithS3CustomizedClientFactory = new HashMap<>(properties); + propertiesWithS3CustomizedClientFactory.put( + S3FileIOProperties.CLIENT_FACTORY, PolarisS3FileIOClientFactory.class.getName()); + return CatalogUtil.loadFileIO( + ioImpl, propertiesWithS3CustomizedClientFactory, new Configuration()); + } + + private void blockedUserSpecifiedWriteLocation(Map properties) { + if (properties != null + && (properties.containsKey(TableLikeEntity.USER_SPECIFIED_WRITE_DATA_LOCATION_KEY) + || properties.containsKey( + TableLikeEntity.USER_SPECIFIED_WRITE_METADATA_LOCATION_KEY))) { + throw new ForbiddenException( + "Delegate access to table with user-specified write location is temporarily not supported."); + } + } + + /** + * Check if the exception is retryable for the storage provider + * + * @param ex exception + * @return true if the exception is retryable + */ + private static boolean isStorageProviderRetryableException(Exception ex) { + // For S3/Azure, the exception is not wrapped, while for GCP the exception is wrapped as a + // RuntimeException + Throwable rootCause = ExceptionUtils.getRootCause(ex); + if (rootCause == null) { + // no 
root cause, let it retry + return true; + } + // only S3 SdkException has this retryable property + if (rootCause instanceof SdkException && ((SdkException) rootCause).retryable()) { + return true; + } + // add more cases here if needed + // AccessDenied is not retryable + return !isAccessDenied(rootCause.getMessage()); + } + + private static boolean isAccessDenied(String errorMsg) { + // corresponding error messages for storage providers Aws/Azure/Gcp + boolean isAccessDenied = + errorMsg != null + && (errorMsg.contains("Access Denied") + || errorMsg.contains("This request is not authorized to perform this operation") + || errorMsg.contains("Forbidden")); + if (isAccessDenied) { + LOG.debug("Access Denied or Forbidden error: {}", errorMsg); + return true; + } + return false; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/catalog/IcebergCatalogAdapter.java b/polaris-service/src/main/java/io/polaris/service/catalog/IcebergCatalogAdapter.java new file mode 100644 index 0000000000..87e8bf195e --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/catalog/IcebergCatalogAdapter.java @@ -0,0 +1,478 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.catalog; + +import com.google.common.base.Preconditions; +import com.google.common.base.Strings; +import com.google.common.collect.ImmutableMap; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.cache.EntityCacheEntry; +import io.polaris.core.persistence.resolver.Resolver; +import io.polaris.core.persistence.resolver.ResolverStatus; +import io.polaris.service.catalog.api.IcebergRestCatalogApiService; +import io.polaris.service.catalog.api.IcebergRestConfigurationApiService; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.context.CallContextCatalogFactory; +import io.polaris.service.types.CommitTableRequest; +import io.polaris.service.types.CommitViewRequest; +import io.polaris.service.types.NotificationRequest; +import jakarta.ws.rs.core.Response; +import jakarta.ws.rs.core.SecurityContext; +import java.net.URLEncoder; +import java.nio.charset.Charset; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.stream.Collectors; +import org.apache.iceberg.UpdateRequirement; +import org.apache.iceberg.catalog.Catalog; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.BadRequestException; +import org.apache.iceberg.exceptions.NotAuthorizedException; +import org.apache.iceberg.exceptions.NotFoundException; +import org.apache.iceberg.rest.RESTUtil; +import org.apache.iceberg.rest.requests.CommitTransactionRequest; +import org.apache.iceberg.rest.requests.CreateNamespaceRequest; +import org.apache.iceberg.rest.requests.CreateTableRequest; +import org.apache.iceberg.rest.requests.CreateViewRequest; 
+import org.apache.iceberg.rest.requests.RegisterTableRequest; +import org.apache.iceberg.rest.requests.RenameTableRequest; +import org.apache.iceberg.rest.requests.ReportMetricsRequest; +import org.apache.iceberg.rest.requests.UpdateNamespacePropertiesRequest; +import org.apache.iceberg.rest.requests.UpdateTableRequest; +import org.apache.iceberg.rest.responses.ConfigResponse; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * {@link IcebergRestCatalogApiService} implementation that delegates operations to {@link + * org.apache.iceberg.rest.CatalogHandlers} after finding the appropriate {@link Catalog} for the + * current {@link RealmContext}. + */ +public class IcebergCatalogAdapter + implements IcebergRestCatalogApiService, IcebergRestConfigurationApiService { + private static final Logger LOG = LoggerFactory.getLogger(IcebergCatalogAdapter.class); + + private final CallContextCatalogFactory catalogFactory; + private final RealmEntityManagerFactory entityManagerFactory; + private PolarisAuthorizer polarisAuthorizer; + + public IcebergCatalogAdapter( + CallContextCatalogFactory catalogFactory, + RealmEntityManagerFactory entityManagerFactory, + PolarisAuthorizer polarisAuthorizer) { + this.catalogFactory = catalogFactory; + this.entityManagerFactory = entityManagerFactory; + this.polarisAuthorizer = polarisAuthorizer; + } + + private PolarisCatalogHandlerWrapper newHandlerWrapper( + SecurityContext securityContext, String catalogName) { + CallContext callContext = CallContext.getCurrentContext(); + AuthenticatedPolarisPrincipal authenticatedPrincipal = + (AuthenticatedPolarisPrincipal) securityContext.getUserPrincipal(); + if (authenticatedPrincipal == null) { + throw new NotAuthorizedException("Failed to find authenticatedPrincipal in SecurityContext"); + } + + PolarisEntityManager entityManager = + entityManagerFactory.getOrCreateEntityManager(callContext.getRealmContext()); + + return new PolarisCatalogHandlerWrapper( + callContext, + 
entityManager, + authenticatedPrincipal, + catalogFactory, + catalogName, + polarisAuthorizer); + } + + @Override + public Response createNamespace( + String prefix, + CreateNamespaceRequest createNamespaceRequest, + SecurityContext securityContext) { + return Response.ok( + newHandlerWrapper(securityContext, prefix).createNamespace(createNamespaceRequest)) + .build(); + } + + @Override + public Response listNamespaces( + String prefix, + String pageToken, + Integer pageSize, + String parent, + SecurityContext securityContext) { + Optional namespaceOptional = + Optional.ofNullable(parent).map(IcebergCatalogAdapter::decodeNamespace); + return Response.ok( + newHandlerWrapper(securityContext, prefix) + .listNamespaces(namespaceOptional.orElse(Namespace.of()))) + .build(); + } + + @Override + public Response loadNamespaceMetadata( + String prefix, String namespace, SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + return Response.ok(newHandlerWrapper(securityContext, prefix).loadNamespaceMetadata(ns)) + .build(); + } + + private static Namespace decodeNamespace(String namespace) { + return RESTUtil.decodeNamespace(URLEncoder.encode(namespace, Charset.defaultCharset())); + } + + @Override + public Response namespaceExists( + String prefix, String namespace, SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + newHandlerWrapper(securityContext, prefix).namespaceExists(ns); + return Response.ok().build(); + } + + @Override + public Response dropNamespace(String prefix, String namespace, SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + newHandlerWrapper(securityContext, prefix).dropNamespace(ns); + return Response.ok(Response.Status.NO_CONTENT).build(); + } + + @Override + public Response updateProperties( + String prefix, + String namespace, + UpdateNamespacePropertiesRequest updateNamespacePropertiesRequest, + SecurityContext securityContext) { + Namespace ns = 
decodeNamespace(namespace); + return Response.ok( + newHandlerWrapper(securityContext, prefix) + .updateNamespaceProperties(ns, updateNamespacePropertiesRequest)) + .build(); + } + + @Override + public Response createTable( + String prefix, + String namespace, + CreateTableRequest createTableRequest, + String xIcebergAccessDelegation, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + if (createTableRequest.stageCreate()) { + if (Strings.isNullOrEmpty(xIcebergAccessDelegation)) { + return Response.ok( + newHandlerWrapper(securityContext, prefix) + .createTableStaged(ns, createTableRequest)) + .build(); + } else { + return Response.ok( + newHandlerWrapper(securityContext, prefix) + .createTableStagedWithWriteDelegation( + ns, createTableRequest, xIcebergAccessDelegation)) + .build(); + } + } else if (Strings.isNullOrEmpty(xIcebergAccessDelegation)) { + return Response.ok( + newHandlerWrapper(securityContext, prefix).createTableDirect(ns, createTableRequest)) + .build(); + } else { + return Response.ok( + newHandlerWrapper(securityContext, prefix) + .createTableDirectWithWriteDelegation(ns, createTableRequest)) + .build(); + } + } + + @Override + public Response listTables( + String prefix, + String namespace, + String pageToken, + Integer pageSize, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + return Response.ok(newHandlerWrapper(securityContext, prefix).listTables(ns)).build(); + } + + @Override + public Response loadTable( + String prefix, + String namespace, + String table, + String xIcebergAccessDelegation, + String snapshots, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(table)); + if (Strings.isNullOrEmpty(xIcebergAccessDelegation)) { + return Response.ok( + newHandlerWrapper(securityContext, prefix).loadTable(tableIdentifier, snapshots)) + .build(); + } else { + return 
Response.ok( + newHandlerWrapper(securityContext, prefix) + .loadTableWithAccessDelegation( + tableIdentifier, xIcebergAccessDelegation, snapshots)) + .build(); + } + } + + @Override + public Response tableExists( + String prefix, String namespace, String table, SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(table)); + newHandlerWrapper(securityContext, prefix).tableExists(tableIdentifier); + return Response.ok().build(); + } + + @Override + public Response dropTable( + String prefix, + String namespace, + String table, + Boolean purgeRequested, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(table)); + + if (purgeRequested != null && purgeRequested.booleanValue()) { + newHandlerWrapper(securityContext, prefix).dropTableWithPurge(tableIdentifier); + } else { + newHandlerWrapper(securityContext, prefix).dropTableWithoutPurge(tableIdentifier); + } + return Response.ok(Response.Status.NO_CONTENT).build(); + } + + @Override + public Response registerTable( + String prefix, + String namespace, + RegisterTableRequest registerTableRequest, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + return Response.ok( + newHandlerWrapper(securityContext, prefix).registerTable(ns, registerTableRequest)) + .build(); + } + + @Override + public Response renameTable( + String prefix, RenameTableRequest renameTableRequest, SecurityContext securityContext) { + newHandlerWrapper(securityContext, prefix).renameTable(renameTableRequest); + return Response.ok(javax.ws.rs.core.Response.Status.NO_CONTENT).build(); + } + + @Override + public Response updateTable( + String prefix, + String namespace, + String table, + CommitTableRequest commitTableRequest, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + 
+    TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(table));
+
+    if (isCreate(commitTableRequest)) {
+      return Response.ok(
+              newHandlerWrapper(securityContext, prefix)
+                  .updateTableForStagedCreate(tableIdentifier, commitTableRequest))
+          .build();
+    } else {
+      return Response.ok(
+              newHandlerWrapper(securityContext, prefix)
+                  .updateTable(tableIdentifier, commitTableRequest))
+          .build();
+    }
+  }
+
+  /**
+   * TODO: Make the helper in org.apache.iceberg.rest.CatalogHandlers public instead of needing to
+   * copy/paste here.
+   */
+  private static boolean isCreate(UpdateTableRequest request) {
+    boolean isCreate =
+        request.requirements().stream()
+            .anyMatch(UpdateRequirement.AssertTableDoesNotExist.class::isInstance);
+
+    if (isCreate) {
+      List<UpdateRequirement> invalidRequirements =
+          request.requirements().stream()
+              .filter(req -> !(req instanceof UpdateRequirement.AssertTableDoesNotExist))
+              .collect(Collectors.toList());
+      Preconditions.checkArgument(
+          invalidRequirements.isEmpty(), "Invalid create requirements: %s", invalidRequirements);
+    }
+
+    return isCreate;
+  }
+
+  @Override
+  public Response createView(
+      String prefix,
+      String namespace,
+      CreateViewRequest createViewRequest,
+      SecurityContext securityContext) {
+    Namespace ns = decodeNamespace(namespace);
+    return Response.ok(newHandlerWrapper(securityContext, prefix).createView(ns, createViewRequest))
+        .build();
+  }
+
+  @Override
+  public Response listViews(
+      String prefix,
+      String namespace,
+      String pageToken,
+      Integer pageSize,
+      SecurityContext securityContext) {
+    Namespace ns = decodeNamespace(namespace);
+    return Response.ok(newHandlerWrapper(securityContext, prefix).listViews(ns)).build();
+  }
+
+  @Override
+  public Response loadView(
+      String prefix, String namespace, String view, SecurityContext securityContext) {
+    Namespace ns = decodeNamespace(namespace);
+    TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(view));
+    return
Response.ok(newHandlerWrapper(securityContext, prefix).loadView(tableIdentifier)) + .build(); + } + + @Override + public Response viewExists( + String prefix, String namespace, String view, SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(view)); + newHandlerWrapper(securityContext, prefix).viewExists(tableIdentifier); + return Response.ok().build(); + } + + @Override + public Response dropView( + String prefix, String namespace, String view, SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(view)); + newHandlerWrapper(securityContext, prefix).dropView(tableIdentifier); + return Response.ok(Response.Status.NO_CONTENT).build(); + } + + @Override + public Response renameView( + String prefix, RenameTableRequest renameTableRequest, SecurityContext securityContext) { + newHandlerWrapper(securityContext, prefix).renameView(renameTableRequest); + return Response.ok(Response.Status.NO_CONTENT).build(); + } + + @Override + public Response replaceView( + String prefix, + String namespace, + String view, + CommitViewRequest commitViewRequest, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(view)); + return Response.ok( + newHandlerWrapper(securityContext, prefix) + .replaceView(tableIdentifier, commitViewRequest)) + .build(); + } + + @Override + public Response commitTransaction( + String prefix, + CommitTransactionRequest commitTransactionRequest, + SecurityContext securityContext) { + newHandlerWrapper(securityContext, prefix).commitTransaction(commitTransactionRequest); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + @Override + public Response reportMetrics( + String prefix, + String namespace, + String table, + 
ReportMetricsRequest reportMetricsRequest, + SecurityContext securityContext) { + return Response.status(Response.Status.NO_CONTENT).build(); + } + + @Override + public Response sendNotification( + String prefix, + String namespace, + String table, + NotificationRequest notificationRequest, + SecurityContext securityContext) { + Namespace ns = decodeNamespace(namespace); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, RESTUtil.decodeString(table)); + newHandlerWrapper(securityContext, prefix) + .sendNotification(tableIdentifier, notificationRequest); + return Response.status(Response.Status.NO_CONTENT).build(); + } + + /** From IcebergRestConfigurationApiService. */ + @Override + public Response getConfig(String warehouse, SecurityContext securityContext) { + // 'warehouse' as an input here is catalogName. + // 'warehouse' as an output will be treated by the client as a default catalog + // storage + // base location. + // 'prefix' as an output is the REST subpath that routes to the catalog + // resource, + // which may be URL-escaped catalogName or potentially a different unique + // identifier for + // the catalog being accessed. + // TODO: Push this down into PolarisCatalogHandlerWrapper for authorizing "any" catalog + // role in this catalog. 
+    PolarisEntityManager entityManager =
+        entityManagerFactory.getOrCreateEntityManager(
+            CallContext.getCurrentContext().getRealmContext());
+    AuthenticatedPolarisPrincipal authenticatedPrincipal =
+        (AuthenticatedPolarisPrincipal) securityContext.getUserPrincipal();
+    if (authenticatedPrincipal == null) {
+      throw new NotAuthorizedException("Failed to find authenticatedPrincipal in SecurityContext");
+    }
+    if (warehouse == null) {
+      throw new BadRequestException("Please specify a warehouse");
+    }
+    Resolver resolver =
+        entityManager.prepareResolver(
+            CallContext.getCurrentContext(), authenticatedPrincipal, warehouse);
+    ResolverStatus resolverStatus = resolver.resolveAll();
+    if (!resolverStatus.getStatus().equals(ResolverStatus.StatusEnum.SUCCESS)) {
+      throw new NotFoundException("Unable to find warehouse " + warehouse);
+    }
+    EntityCacheEntry resolvedReferenceCatalog = resolver.getResolvedReferenceCatalog();
+    Map<String, String> properties =
+        PolarisEntity.of(resolvedReferenceCatalog.getEntity()).getPropertiesAsMap();
+
+    return Response.ok(
+            ConfigResponse.builder()
+                .withDefaults(properties) // catalog properties are defaults
+                .withOverrides(ImmutableMap.of("prefix", warehouse))
+                .build())
+        .build();
+  }
+}
diff --git a/polaris-service/src/main/java/io/polaris/service/catalog/PolarisCatalogHandlerWrapper.java b/polaris-service/src/main/java/io/polaris/service/catalog/PolarisCatalogHandlerWrapper.java
new file mode 100644
index 0000000000..8197e5826a
--- /dev/null
+++ b/polaris-service/src/main/java/io/polaris/service/catalog/PolarisCatalogHandlerWrapper.java
@@ -0,0 +1,1074 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.catalog; + +import com.google.common.collect.Maps; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizableOperation; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.catalog.PolarisCatalogHelpers; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.core.persistence.resolver.ResolverPath; +import io.polaris.core.persistence.resolver.ResolverStatus; +import io.polaris.core.storage.PolarisStorageActions; +import io.polaris.service.context.CallContextCatalogFactory; +import io.polaris.service.types.NotificationRequest; +import java.io.Closeable; +import java.io.IOException; +import java.time.OffsetDateTime; +import java.time.ZoneOffset; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.function.Supplier; +import java.util.stream.Collectors; +import org.apache.iceberg.BaseMetadataTable; +import org.apache.iceberg.BaseTable; +import org.apache.iceberg.BaseTransaction; +import org.apache.iceberg.MetadataUpdate; +import org.apache.iceberg.PartitionSpec; 
+import org.apache.iceberg.SortOrder; +import org.apache.iceberg.Table; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.Transaction; +import org.apache.iceberg.Transactions; +import org.apache.iceberg.UpdateRequirement; +import org.apache.iceberg.catalog.Catalog; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.SupportsNamespaces; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.catalog.ViewCatalog; +import org.apache.iceberg.exceptions.AlreadyExistsException; +import org.apache.iceberg.exceptions.BadRequestException; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.exceptions.NoSuchNamespaceException; +import org.apache.iceberg.exceptions.NoSuchTableException; +import org.apache.iceberg.exceptions.NoSuchViewException; +import org.apache.iceberg.rest.CatalogHandlers; +import org.apache.iceberg.rest.requests.CommitTransactionRequest; +import org.apache.iceberg.rest.requests.CreateNamespaceRequest; +import org.apache.iceberg.rest.requests.CreateTableRequest; +import org.apache.iceberg.rest.requests.CreateViewRequest; +import org.apache.iceberg.rest.requests.RegisterTableRequest; +import org.apache.iceberg.rest.requests.RenameTableRequest; +import org.apache.iceberg.rest.requests.UpdateNamespacePropertiesRequest; +import org.apache.iceberg.rest.requests.UpdateTableRequest; +import org.apache.iceberg.rest.responses.CreateNamespaceResponse; +import org.apache.iceberg.rest.responses.GetNamespaceResponse; +import org.apache.iceberg.rest.responses.ListNamespacesResponse; +import org.apache.iceberg.rest.responses.ListTablesResponse; +import org.apache.iceberg.rest.responses.LoadTableResponse; +import org.apache.iceberg.rest.responses.LoadViewResponse; +import org.apache.iceberg.rest.responses.UpdateNamespacePropertiesResponse; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Authorization-aware adapter between REST stubs and shared 
Iceberg SDK CatalogHandlers. + * + *
<p>
We must make authorization decisions based on entity resolution at this layer instead of the + * underlying BasePolarisCatalog layer, because this REST-adjacent layer captures intent of + * different REST calls that share underlying catalog calls (e.g. updateTable will call loadTable + * under the hood), and some features of the REST API aren't expressed at all in the underlying + * Catalog interfaces (e.g. credential-vending in createTable/loadTable). + * + *
<p>
We also want this layer to be independent of API-endpoint-specific idioms, such as dealing + * with jakarta.ws.rs.core.Response objects, and other implementations that expose different HTTP + * stubs or even tunnel the protocol over something like gRPC can still normalize on the Iceberg + * model objects used in this layer to still benefit from the shared implementation of + * authorization-aware catalog protocols. + */ +public class PolarisCatalogHandlerWrapper { + private static final Logger LOG = LoggerFactory.getLogger(PolarisCatalogHandlerWrapper.class); + + private final CallContext callContext; + private final PolarisEntityManager entityManager; + private final String catalogName; + private final AuthenticatedPolarisPrincipal authenticatedPrincipal; + private final PolarisAuthorizer authorizer; + private final CallContextCatalogFactory catalogFactory; + + // Initialized in the authorize methods. + private PolarisResolutionManifest resolutionManifest = null; + + // Catalog instance will be initialized after authorizing resolver successfully resolves + // the catalog entity. + private Catalog baseCatalog = null; + private SupportsNamespaces namespaceCatalog = null; + private ViewCatalog viewCatalog = null; + + public PolarisCatalogHandlerWrapper( + CallContext callContext, + PolarisEntityManager entityManager, + AuthenticatedPolarisPrincipal authenticatedPrincipal, + CallContextCatalogFactory catalogFactory, + String catalogName, + PolarisAuthorizer authorizer) { + this.callContext = callContext; + this.entityManager = entityManager; + this.catalogName = catalogName; + this.authenticatedPrincipal = authenticatedPrincipal; + this.authorizer = authorizer; + this.catalogFactory = catalogFactory; + } + + private void initializeCatalog() { + this.baseCatalog = + catalogFactory.createCallContextCatalog( + callContext, authenticatedPrincipal, resolutionManifest); + this.namespaceCatalog = + (baseCatalog instanceof SupportsNamespaces) ? 
+            (SupportsNamespaces) baseCatalog : null;
+    this.viewCatalog = (baseCatalog instanceof ViewCatalog) ? (ViewCatalog) baseCatalog : null;
+  }
+
+  private void authorizeBasicNamespaceOperationOrThrow(
+      PolarisAuthorizableOperation op, Namespace namespace) {
+    authorizeBasicNamespaceOperationOrThrow(op, namespace, null, null);
+  }
+
+  private void authorizeBasicNamespaceOperationOrThrow(
+      PolarisAuthorizableOperation op,
+      Namespace namespace,
+      List<Namespace> extraPassthroughNamespaces,
+      List<TableIdentifier> extraPassthroughTableLikes) {
+    resolutionManifest =
+        entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName);
+    resolutionManifest.addPath(
+        new ResolverPath(Arrays.asList(namespace.levels()), PolarisEntityType.NAMESPACE),
+        namespace);
+
+    if (extraPassthroughNamespaces != null) {
+      for (Namespace ns : extraPassthroughNamespaces) {
+        resolutionManifest.addPassthroughPath(
+            new ResolverPath(
+                Arrays.asList(ns.levels()), PolarisEntityType.NAMESPACE, true /* optional */),
+            ns);
+      }
+    }
+    if (extraPassthroughTableLikes != null) {
+      for (TableIdentifier id : extraPassthroughTableLikes) {
+        resolutionManifest.addPassthroughPath(
+            new ResolverPath(
+                PolarisCatalogHelpers.tableIdentifierToList(id),
+                PolarisEntityType.TABLE_LIKE,
+                true /* optional */),
+            id);
+      }
+    }
+    resolutionManifest.resolveAll();
+    PolarisResolvedPathWrapper target = resolutionManifest.getResolvedPath(namespace, true);
+    if (target == null) {
+      throw new NoSuchNamespaceException("Namespace does not exist: %s", namespace);
+    }
+    authorizer.authorizeOrThrow(
+        authenticatedPrincipal,
+        resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(),
+        op,
+        target,
+        null /* secondary */);
+
+    initializeCatalog();
+  }
+
+  private void authorizeCreateNamespaceUnderNamespaceOperationOrThrow(
+      PolarisAuthorizableOperation op, Namespace namespace) {
+    resolutionManifest =
+        entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName);
+
+    Namespace
parentNamespace = PolarisCatalogHelpers.getParentNamespace(namespace); + resolutionManifest.addPath( + new ResolverPath(Arrays.asList(parentNamespace.levels()), PolarisEntityType.NAMESPACE), + parentNamespace); + + // When creating an entity under a namespace, the authz target is the parentNamespace, but we + // must also add the actual path that will be created as an "optional" passthrough resolution + // path to indicate that the underlying catalog is "allowed" to check the creation path for + // a conflicting entity. + resolutionManifest.addPassthroughPath( + new ResolverPath( + Arrays.asList(namespace.levels()), PolarisEntityType.NAMESPACE, true /* optional */), + namespace); + resolutionManifest.resolveAll(); + PolarisResolvedPathWrapper target = resolutionManifest.getResolvedPath(parentNamespace, true); + if (target == null) { + throw new NoSuchNamespaceException("Namespace does not exist: %s", parentNamespace); + } + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + target, + null /* secondary */); + + initializeCatalog(); + } + + private void authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + PolarisAuthorizableOperation op, TableIdentifier identifier) { + Namespace namespace = identifier.namespace(); + + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + resolutionManifest.addPath( + new ResolverPath(Arrays.asList(namespace.levels()), PolarisEntityType.NAMESPACE), + namespace); + + // When creating an entity under a namespace, the authz target is the namespace, but we must + // also + // add the actual path that will be created as an "optional" passthrough resolution path to + // indicate that the underlying catalog is "allowed" to check the creation path for a + // conflicting + // entity. 
+    resolutionManifest.addPassthroughPath(
+        new ResolverPath(
+            PolarisCatalogHelpers.tableIdentifierToList(identifier),
+            PolarisEntityType.TABLE_LIKE,
+            true /* optional */),
+        identifier);
+    resolutionManifest.resolveAll();
+    PolarisResolvedPathWrapper target = resolutionManifest.getResolvedPath(namespace, true);
+    if (target == null) {
+      throw new NoSuchNamespaceException("Namespace does not exist: %s", namespace);
+    }
+    authorizer.authorizeOrThrow(
+        authenticatedPrincipal,
+        resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(),
+        op,
+        target,
+        null /* secondary */);
+
+    initializeCatalog();
+  }
+
+  private void authorizeBasicTableLikeOperationOrThrow(
+      PolarisAuthorizableOperation op, PolarisEntitySubType subType, TableIdentifier identifier) {
+    resolutionManifest =
+        entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName);
+
+    // The underlying Catalog is also allowed to fetch "fresh" versions of the target entity.
+    resolutionManifest.addPassthroughPath(
+        new ResolverPath(
+            PolarisCatalogHelpers.tableIdentifierToList(identifier),
+            PolarisEntityType.TABLE_LIKE,
+            true /* optional */),
+        identifier);
+    resolutionManifest.resolveAll();
+    PolarisResolvedPathWrapper target =
+        resolutionManifest.getResolvedPath(identifier, subType, true);
+    if (target == null) {
+      if (subType == PolarisEntitySubType.TABLE) {
+        throw new NoSuchTableException("Table does not exist: %s", identifier);
+      } else {
+        throw new NoSuchViewException("View does not exist: %s", identifier);
+      }
+    }
+    authorizer.authorizeOrThrow(
+        authenticatedPrincipal,
+        resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(),
+        op,
+        target,
+        null /* secondary */);
+
+    initializeCatalog();
+  }
+
+  private void authorizeCollectionOfTableLikeOperationOrThrow(
+      PolarisAuthorizableOperation op,
+      final PolarisEntitySubType subType,
+      List<TableIdentifier> ids) {
+    resolutionManifest =
+        entityManager.prepareResolutionManifest(callContext,
+            authenticatedPrincipal, catalogName);
+    ids.stream()
+        .forEach(
+            identifier ->
+                resolutionManifest.addPassthroughPath(
+                    new ResolverPath(
+                        PolarisCatalogHelpers.tableIdentifierToList(identifier),
+                        PolarisEntityType.TABLE_LIKE),
+                    identifier));
+
+    ResolverStatus status = resolutionManifest.resolveAll();
+
+    // If one of the paths failed to resolve, throw exception based on the one that
+    // we first failed to resolve.
+    if (status.getStatus() == ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED) {
+      TableIdentifier identifier =
+          PolarisCatalogHelpers.listToTableIdentifier(
+              status.getFailedToResolvePath().getEntityNames());
+      if (subType == PolarisEntitySubType.TABLE) {
+        throw new NoSuchTableException("Table does not exist: %s", identifier);
+      } else {
+        throw new NoSuchViewException("View does not exist: %s", identifier);
+      }
+    }
+
+    List<PolarisResolvedPathWrapper> targets =
+        ids.stream()
+            .map(
+                identifier ->
+                    Optional.ofNullable(
+                            resolutionManifest.getResolvedPath(identifier, subType, true))
+                        .orElseThrow(
+                            () ->
+                                subType == PolarisEntitySubType.TABLE
+                                    ?
new NoSuchTableException( + "Table does not exist: %s", identifier) + : new NoSuchViewException( + "View does not exist: %s", identifier))) + .toList(); + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + targets, + null /* secondaries */); + + initializeCatalog(); + } + + private void authorizeRenameTableLikeOperationOrThrow( + PolarisAuthorizableOperation op, + PolarisEntitySubType subType, + TableIdentifier src, + TableIdentifier dst) { + resolutionManifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + // Add src, dstParent, and dst(optional) + resolutionManifest.addPath( + new ResolverPath( + PolarisCatalogHelpers.tableIdentifierToList(src), PolarisEntityType.TABLE_LIKE), + src); + resolutionManifest.addPath( + new ResolverPath(Arrays.asList(dst.namespace().levels()), PolarisEntityType.NAMESPACE), + dst.namespace()); + resolutionManifest.addPath( + new ResolverPath( + PolarisCatalogHelpers.tableIdentifierToList(dst), + PolarisEntityType.TABLE_LIKE, + true /* optional */), + dst); + ResolverStatus status = resolutionManifest.resolveAll(); + if (status.getStatus() == ResolverStatus.StatusEnum.PATH_COULD_NOT_BE_FULLY_RESOLVED + && status.getFailedToResolvePath().getLastEntityType() == PolarisEntityType.NAMESPACE) { + throw new NoSuchNamespaceException("Namespace does not exist: %s", dst.namespace()); + } else if (resolutionManifest.getResolvedPath(src, subType) == null) { + if (subType == PolarisEntitySubType.TABLE) { + throw new NoSuchTableException("Table does not exist: %s", src); + } else { + throw new NoSuchViewException("View does not exist: %s", src); + } + } + + // Normally, since we added the dst as an optional path, we'd expect it to only get resolved + // up to its parent namespace, and for there to be no TABLE_LIKE already in the dst in which + // case the leafSubType will be NULL_SUBTYPE. 
+ // If there is a conflicting TABLE or VIEW, this leafSubType will indicate that conflicting + // type. + // TODO: Possibly modify the exception thrown depending on whether the caller has privileges + // on the parent namespace. + PolarisEntitySubType dstLeafSubType = resolutionManifest.getLeafSubType(dst); + if (dstLeafSubType == PolarisEntitySubType.TABLE) { + throw new AlreadyExistsException("Cannot rename %s to %s. Table already exists", src, dst); + } else if (dstLeafSubType == PolarisEntitySubType.VIEW) { + throw new AlreadyExistsException("Cannot rename %s to %s. View already exists", src, dst); + } + + PolarisResolvedPathWrapper target = resolutionManifest.getResolvedPath(src, subType, true); + PolarisResolvedPathWrapper secondary = + resolutionManifest.getResolvedPath(dst.namespace(), true); + authorizer.authorizeOrThrow( + authenticatedPrincipal, + resolutionManifest.getAllActivatedCatalogRoleAndPrincipalRoleIds(), + op, + target, + secondary); + + initializeCatalog(); + } + + public ListNamespacesResponse listNamespaces(Namespace parent) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_NAMESPACES; + authorizeBasicNamespaceOperationOrThrow(op, parent); + + return doCatalogOperation(() -> CatalogHandlers.listNamespaces(namespaceCatalog, parent)); + } + + public CreateNamespaceResponse createNamespace(CreateNamespaceRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_NAMESPACE; + + Namespace namespace = request.namespace(); + if (namespace.length() == 0) { + throw new AlreadyExistsException( + "Cannot create root namespace, as it already exists implicitly."); + } + authorizeCreateNamespaceUnderNamespaceOperationOrThrow(op, namespace); + + if (namespaceCatalog instanceof BasePolarisCatalog) { + // Note: The CatalogHandlers' default implementation will non-atomically create the + // namespace and then fetch its properties using loadNamespaceMetadata for the response. 
+          // However, the latest namespace metadata technically isn't the same authorized instance,
+          // so we don't want all calls to loadNamespaceMetadata to automatically use the manifest
+          // in "passthrough" mode.
+          //
+          // For CreateNamespace, we consider this a special case in that the creator is able to
+          // retrieve the latest namespace metadata for the duration of the CreateNamespace
+          // operation, even if the entityVersion and/or grantsVersion update in the interim.
+      return doCatalogOperation(
+          () -> {
+            namespaceCatalog.createNamespace(namespace, request.properties());
+            return CreateNamespaceResponse.builder()
+                .withNamespace(namespace)
+                .setProperties(
+                    resolutionManifest
+                        .getPassthroughResolvedPath(namespace)
+                        .getRawLeafEntity()
+                        .getPropertiesAsMap())
+                .build();
+          });
+    } else {
+      return doCatalogOperation(() -> CatalogHandlers.createNamespace(namespaceCatalog, request));
+    }
+  }
+
+  private static boolean isExternal(CatalogEntity catalog) {
+    return io.polaris.core.admin.model.Catalog.TypeEnum.EXTERNAL.equals(catalog.getCatalogType());
+  }
+
+  private void doCatalogOperation(Runnable handler) {
+    doCatalogOperation(
+        () -> {
+          handler.run();
+          return null;
+        });
+  }
+
+  /**
+   * Execute a catalog function and ensure we close the BaseCatalog afterward.
This will typically
+   * ensure the underlying FileIO is closed.
+   *
+   * @param handler the catalog operation to execute
+   * @return the result of the operation
+   * @param <T> the return type of the operation
+   */
+  private <T> T doCatalogOperation(Supplier<T> handler) {
+    try {
+      return handler.get();
+    } finally {
+      if (baseCatalog instanceof Closeable closeable) {
+        try {
+          closeable.close();
+        } catch (IOException e) {
+          LOG.error("Error while closing BaseCatalog", e);
+        }
+      }
+    }
+  }
+
+  public GetNamespaceResponse loadNamespaceMetadata(Namespace namespace) {
+    PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LOAD_NAMESPACE_METADATA;
+    authorizeBasicNamespaceOperationOrThrow(op, namespace);
+
+    return doCatalogOperation(() -> CatalogHandlers.loadNamespace(namespaceCatalog, namespace));
+  }
+
+  public void namespaceExists(Namespace namespace) {
+    PolarisAuthorizableOperation op = PolarisAuthorizableOperation.NAMESPACE_EXISTS;
+
+    // TODO: This authz check doesn't accomplish true authz in terms of blocking the ability
+    // for a caller to ascertain whether the namespace exists or not, but instead just behaves
+    // according to convention -- if existence is going to be privileged, we must instead
+    // add a base layer that throws NotFound exceptions instead of NotAuthorizedException
+    // for *all* operations in which we determine that the basic privilege for determining
+    // existence is also missing.
+ authorizeBasicNamespaceOperationOrThrow(op, namespace); + + // TODO: Just skip CatalogHandlers for this one maybe + doCatalogOperation(() -> CatalogHandlers.loadNamespace(namespaceCatalog, namespace)); + } + + public void dropNamespace(Namespace namespace) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DROP_NAMESPACE; + authorizeBasicNamespaceOperationOrThrow(op, namespace); + + doCatalogOperation(() -> CatalogHandlers.dropNamespace(namespaceCatalog, namespace)); + } + + public UpdateNamespacePropertiesResponse updateNamespaceProperties( + Namespace namespace, UpdateNamespacePropertiesRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_NAMESPACE_PROPERTIES; + authorizeBasicNamespaceOperationOrThrow(op, namespace); + + return doCatalogOperation( + () -> CatalogHandlers.updateNamespaceProperties(namespaceCatalog, namespace, request)); + } + + public ListTablesResponse listTables(Namespace namespace) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_TABLES; + authorizeBasicNamespaceOperationOrThrow(op, namespace); + + return doCatalogOperation(() -> CatalogHandlers.listTables(baseCatalog, namespace)); + } + + public LoadTableResponse createTableDirect(Namespace namespace, CreateTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_TABLE_DIRECT; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + op, TableIdentifier.of(namespace, request.name())); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot create table on external catalogs."); + } + return doCatalogOperation(() -> CatalogHandlers.createTable(baseCatalog, namespace, request)); + } + + public LoadTableResponse createTableDirectWithWriteDelegation( + Namespace namespace, CreateTableRequest request) { + 
PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.CREATE_TABLE_DIRECT_WITH_WRITE_DELEGATION; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + op, TableIdentifier.of(namespace, request.name())); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot create table on external catalogs."); + } + return doCatalogOperation( + () -> { + request.validate(); + + TableIdentifier tableIdentifier = TableIdentifier.of(namespace, request.name()); + if (baseCatalog.tableExists(tableIdentifier)) { + throw new AlreadyExistsException("Table already exists: %s", tableIdentifier); + } + + Map<String, String> properties = Maps.newHashMap(); + properties.put("created-at", OffsetDateTime.now(ZoneOffset.UTC).toString()); + properties.putAll(request.properties()); + + Table table = + baseCatalog + .buildTable(tableIdentifier, request.schema()) + .withLocation(request.location()) + .withPartitionSpec(request.spec()) + .withSortOrder(request.writeOrder()) + .withProperties(properties) + .create(); + + if (table instanceof BaseTable baseTable) { + TableMetadata tableMetadata = baseTable.operations().current(); + LoadTableResponse.Builder responseBuilder = + LoadTableResponse.builder().withTableMetadata(tableMetadata); + if (baseCatalog instanceof SupportsCredentialDelegation credentialDelegation) { + try { + Set<PolarisStorageActions> actionsRequested = + getValidTableActionsOrThrow(tableIdentifier); + + LOG.atDebug() + .addKeyValue("tableIdentifier", tableIdentifier) + .addKeyValue("tableLocation", tableMetadata.location()) + .log("Fetching client credentials for table"); + responseBuilder.addAllConfig( + credentialDelegation.getCredentialConfig( + tableIdentifier, tableMetadata, actionsRequested)); + } catch (ForbiddenException | NoSuchTableException e) { + // No privileges available + } + } + return responseBuilder.build(); + } else if (table instanceof
BaseMetadataTable) { + // metadata tables are loaded on the client side, return NoSuchTableException for now + throw new NoSuchTableException("Table does not exist: %s", tableIdentifier.toString()); + } + + throw new IllegalStateException("Cannot wrap catalog that does not produce BaseTable"); + }); + } + + private TableMetadata stageTableCreateHelper(Namespace namespace, CreateTableRequest request) { + request.validate(); + + TableIdentifier ident = TableIdentifier.of(namespace, request.name()); + if (baseCatalog.tableExists(ident)) { + throw new AlreadyExistsException("Table already exists: %s", ident); + } + + Map<String, String> properties = Maps.newHashMap(); + properties.put("created-at", OffsetDateTime.now(ZoneOffset.UTC).toString()); + properties.putAll(request.properties()); + + String location; + if (request.location() != null) { + // Even if the request provides a location, run it through the catalog's TableBuilder + // to inherit any override behaviors if applicable. + if (baseCatalog instanceof BasePolarisCatalog) { + location = + ((BasePolarisCatalog) baseCatalog).transformTableLikeLocation(request.location()); + } else { + location = request.location(); + } + } else { + location = + baseCatalog + .buildTable(ident, request.schema()) + .withPartitionSpec(request.spec()) + .withSortOrder(request.writeOrder()) + .withProperties(properties) + .createTransaction() + .table() + .location(); + } + + TableMetadata metadata = + TableMetadata.newTableMetadata( + request.schema(), + request.spec() != null ? request.spec() : PartitionSpec.unpartitioned(), + request.writeOrder() != null ?
request.writeOrder() : SortOrder.unsorted(), + location, + properties); + return metadata; + } + + public LoadTableResponse createTableStaged(Namespace namespace, CreateTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_TABLE_STAGED; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + op, TableIdentifier.of(namespace, request.name())); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot create table on external catalogs."); + } + return doCatalogOperation( + () -> { + TableMetadata metadata = stageTableCreateHelper(namespace, request); + return LoadTableResponse.builder().withTableMetadata(metadata).build(); + }); + } + + public LoadTableResponse createTableStagedWithWriteDelegation( + Namespace namespace, CreateTableRequest request, String xIcebergAccessDelegation) { + PolarisAuthorizableOperation op = + PolarisAuthorizableOperation.CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + op, TableIdentifier.of(namespace, request.name())); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot create table on external catalogs."); + } + return doCatalogOperation( + () -> { + TableIdentifier ident = TableIdentifier.of(namespace, request.name()); + TableMetadata metadata = stageTableCreateHelper(namespace, request); + + LoadTableResponse.Builder responseBuilder = + LoadTableResponse.builder().withTableMetadata(metadata); + + if (baseCatalog instanceof SupportsCredentialDelegation credentialDelegation) { + try { + Set actionsRequested = getValidTableActionsOrThrow(ident); + + LOG.atDebug() + .addKeyValue("tableIdentifier", ident) + 
.addKeyValue("tableLocation", metadata.location()) + .log("Fetching client credentials for table"); + responseBuilder.addAllConfig( + credentialDelegation.getCredentialConfig(ident, metadata, actionsRequested)); + } catch (ForbiddenException | NoSuchTableException e) { + // No privileges available + } + } + return responseBuilder.build(); + }); + } + + public LoadTableResponse registerTable(Namespace namespace, RegisterTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.REGISTER_TABLE; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + op, TableIdentifier.of(namespace, request.name())); + + return doCatalogOperation(() -> CatalogHandlers.registerTable(baseCatalog, namespace, request)); + } + + public boolean sendNotification(TableIdentifier identifier, NotificationRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.SEND_NOTIFICATIONS; + + // For now, just require the full set of privileges on the base Catalog entity, which we can + // also express just as the "root" Namespace for purposes of the BasePolarisCatalog being + // able to fetch Namespace.empty() as path key. 
+ List<TableIdentifier> extraPassthroughTableLikes = List.of(identifier); + List<Namespace> extraPassthroughNamespaces = new ArrayList<>(); + extraPassthroughNamespaces.add(Namespace.empty()); + for (int i = 1; i <= identifier.namespace().length(); i++) { + Namespace nsLevel = + Namespace.of( + Arrays.stream(identifier.namespace().levels()) + .limit(i) + .collect(Collectors.toList()) + .toArray(String[]::new)); + extraPassthroughNamespaces.add(nsLevel); + } + authorizeBasicNamespaceOperationOrThrow( + op, Namespace.empty(), extraPassthroughNamespaces, extraPassthroughTableLikes); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (catalog.getCatalogType().equals(io.polaris.core.admin.model.Catalog.TypeEnum.INTERNAL)) { + LOG.atWarn() + .addKeyValue("catalog", catalog) + .addKeyValue("notification", request) + .log("Attempted notification on internal catalog"); + throw new BadRequestException("Cannot update internal catalog via notifications"); + } + if (!(baseCatalog instanceof SupportsNotifications)) { + return false; + } + SupportsNotifications notificationCatalog = (SupportsNotifications) baseCatalog; + return notificationCatalog.sendNotification(identifier, request); + } + + public LoadTableResponse loadTable(TableIdentifier tableIdentifier, String snapshots) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LOAD_TABLE; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.TABLE, tableIdentifier); + + return doCatalogOperation(() -> CatalogHandlers.loadTable(baseCatalog, tableIdentifier)); + } + + private Set<PolarisStorageActions> getValidTableActionsOrThrow(TableIdentifier tableIdentifier) { + PolarisAuthorizableOperation read = + PolarisAuthorizableOperation.LOAD_TABLE_WITH_READ_DELEGATION; + PolarisAuthorizableOperation write = + PolarisAuthorizableOperation.LOAD_TABLE_WITH_WRITE_DELEGATION; + Set<PolarisStorageActions> actionsRequested = + new HashSet<>(Set.of(PolarisStorageActions.READ,
PolarisStorageActions.LIST)); + try { + // TODO: Refactor to have a boolean-return version of the helpers so we can fallthrough + // easily. + authorizeBasicTableLikeOperationOrThrow(write, PolarisEntitySubType.TABLE, tableIdentifier); + actionsRequested.add(PolarisStorageActions.WRITE); + } catch (ForbiddenException | NoSuchTableException e) { + LOG.atDebug() + .addKeyValue("tableIdentifier", tableIdentifier) + .log("Authz failed for LOAD_TABLE_WITH_WRITE_DELEGATION so attempting READ only"); + authorizeBasicTableLikeOperationOrThrow(read, PolarisEntitySubType.TABLE, tableIdentifier); + } + return actionsRequested; + } + + public LoadTableResponse loadTableWithAccessDelegation( + TableIdentifier tableIdentifier, String xIcebergAccessDelegation, String snapshots) { + // Here we have a single method that falls through multiple candidate + // PolarisAuthorizableOperations because instead of identifying the desired operation up-front + // and + // failing the authz check if grants aren't found, we find the first most-privileged authz match + // and respond according to that. + + // TODO: Find a way for the configuration or caller to better express whether to fail or omit + // when data-access is specified but access delegation grants are not found. 
+ Set actionsRequested = getValidTableActionsOrThrow(tableIdentifier); + + return doCatalogOperation( + () -> { + Table table = baseCatalog.loadTable(tableIdentifier); + + if (table instanceof BaseTable baseTable) { + TableMetadata tableMetadata = baseTable.operations().current(); + LoadTableResponse.Builder responseBuilder = + LoadTableResponse.builder().withTableMetadata(tableMetadata); + if (baseCatalog instanceof SupportsCredentialDelegation credentialDelegation) { + LOG.atDebug() + .addKeyValue("tableIdentifier", tableIdentifier) + .addKeyValue("tableLocation", tableMetadata.location()) + .log("Fetching client credentials for table"); + responseBuilder.addAllConfig( + credentialDelegation.getCredentialConfig( + tableIdentifier, tableMetadata, actionsRequested)); + } + return responseBuilder.build(); + } else if (table instanceof BaseMetadataTable) { + // metadata tables are loaded on the client side, return NoSuchTableException for now + throw new NoSuchTableException("Table does not exist: %s", tableIdentifier.toString()); + } + + throw new IllegalStateException("Cannot wrap catalog that does not produce BaseTable"); + }); + } + + private UpdateTableRequest applyUpdateFilters(UpdateTableRequest request) { + // Certain MetadataUpdates need to be explicitly transformed to achieve the same behavior + // as using a local Catalog client via TableBuilder. 
+ TableIdentifier identifier = request.identifier(); + List<UpdateRequirement> requirements = request.requirements(); + List<MetadataUpdate> updates = + request.updates().stream() + .map( + update -> { + if (baseCatalog instanceof BasePolarisCatalog + && update instanceof MetadataUpdate.SetLocation) { + String requestedLocation = ((MetadataUpdate.SetLocation) update).location(); + String filteredLocation = + ((BasePolarisCatalog) baseCatalog) + .transformTableLikeLocation(requestedLocation); + return new MetadataUpdate.SetLocation(filteredLocation); + } else { + return update; + } + }) + .toList(); + return UpdateTableRequest.create(identifier, requirements, updates); + } + + public LoadTableResponse updateTable( + TableIdentifier tableIdentifier, UpdateTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_TABLE; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.TABLE, tableIdentifier); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot update table on external catalogs."); + } + return doCatalogOperation( + () -> + CatalogHandlers.updateTable(baseCatalog, tableIdentifier, applyUpdateFilters(request))); + } + + public LoadTableResponse updateTableForStagedCreate( + TableIdentifier tableIdentifier, UpdateTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.UPDATE_TABLE_FOR_STAGED_CREATE; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow(op, tableIdentifier); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot update table on external catalogs."); + } + return doCatalogOperation( + () -> + CatalogHandlers.updateTable(baseCatalog, tableIdentifier,
applyUpdateFilters(request))); + } + + public void dropTableWithoutPurge(TableIdentifier tableIdentifier) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DROP_TABLE_WITHOUT_PURGE; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.TABLE, tableIdentifier); + + doCatalogOperation(() -> CatalogHandlers.dropTable(baseCatalog, tableIdentifier)); + } + + public void dropTableWithPurge(TableIdentifier tableIdentifier) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DROP_TABLE_WITH_PURGE; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.TABLE, tableIdentifier); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot drop table on external catalogs."); + } + doCatalogOperation(() -> CatalogHandlers.purgeTable(baseCatalog, tableIdentifier)); + } + + public void tableExists(TableIdentifier tableIdentifier) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.TABLE_EXISTS; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.TABLE, tableIdentifier); + + // TODO: Just skip CatalogHandlers for this one maybe + doCatalogOperation(() -> CatalogHandlers.loadTable(baseCatalog, tableIdentifier)); + } + + public void renameTable(RenameTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.RENAME_TABLE; + authorizeRenameTableLikeOperationOrThrow( + op, PolarisEntitySubType.TABLE, request.source(), request.destination()); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot rename table on external catalogs."); + } + doCatalogOperation(() -> CatalogHandlers.renameTable(baseCatalog, request)); + } + + public void 
commitTransaction(CommitTransactionRequest commitTransactionRequest) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.COMMIT_TRANSACTION; + // TODO: The authz actually needs to detect hidden updateForStagedCreate UpdateTableRequests + // and have some kind of per-item conditional privilege requirement if we want to make it + // so that only the stageCreate updates need TABLE_CREATE whereas everything else only + // needs TABLE_WRITE_PROPERTIES. + authorizeCollectionOfTableLikeOperationOrThrow( + op, + PolarisEntitySubType.TABLE, + commitTransactionRequest.tableChanges().stream().map(t -> t.identifier()).toList()); + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot update table on external catalogs."); + } + + // TODO: Implement this properly + List transactions = + commitTransactionRequest.tableChanges().stream() + .map( + change -> { + Table table = baseCatalog.loadTable(change.identifier()); + if (!(table instanceof BaseTable)) { + throw new IllegalStateException( + "Cannot wrap catalog that does not produce BaseTable"); + } + Transaction transaction = + Transactions.newTransaction( + change.identifier().toString(), ((BaseTable) table).operations()); + BaseTransaction.TransactionTable txTable = + (BaseTransaction.TransactionTable) transaction.table(); + CatalogHandlers.updateTable(baseCatalog, change.identifier(), change); + return transaction; + }) + .toList(); + + transactions.forEach(Transaction::commitTransaction); + } + + public ListTablesResponse listViews(Namespace namespace) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LIST_VIEWS; + authorizeBasicNamespaceOperationOrThrow(op, namespace); + + return doCatalogOperation(() -> CatalogHandlers.listViews(viewCatalog, namespace)); + } + + public LoadViewResponse createView(Namespace namespace, 
CreateViewRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.CREATE_VIEW; + authorizeCreateTableLikeUnderNamespaceOperationOrThrow( + op, TableIdentifier.of(namespace, request.name())); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot create view on external catalogs."); + } + return doCatalogOperation(() -> CatalogHandlers.createView(viewCatalog, namespace, request)); + } + + public LoadViewResponse loadView(TableIdentifier viewIdentifier) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.LOAD_VIEW; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.VIEW, viewIdentifier); + + return doCatalogOperation(() -> CatalogHandlers.loadView(viewCatalog, viewIdentifier)); + } + + public LoadViewResponse replaceView(TableIdentifier viewIdentifier, UpdateTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.REPLACE_VIEW; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.VIEW, viewIdentifier); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot replace view on external catalogs."); + } + return doCatalogOperation( + () -> CatalogHandlers.updateView(viewCatalog, viewIdentifier, applyUpdateFilters(request))); + } + + public void dropView(TableIdentifier viewIdentifier) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.DROP_VIEW; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.VIEW, viewIdentifier); + + doCatalogOperation(() -> CatalogHandlers.dropView(viewCatalog, viewIdentifier)); + } + + public void viewExists(TableIdentifier viewIdentifier) { + PolarisAuthorizableOperation op = 
PolarisAuthorizableOperation.VIEW_EXISTS; + authorizeBasicTableLikeOperationOrThrow(op, PolarisEntitySubType.VIEW, viewIdentifier); + + // TODO: Just skip CatalogHandlers for this one maybe + doCatalogOperation(() -> CatalogHandlers.loadView(viewCatalog, viewIdentifier)); + } + + public void renameView(RenameTableRequest request) { + PolarisAuthorizableOperation op = PolarisAuthorizableOperation.RENAME_VIEW; + authorizeRenameTableLikeOperationOrThrow( + op, PolarisEntitySubType.VIEW, request.source(), request.destination()); + + CatalogEntity catalog = + CatalogEntity.of( + resolutionManifest + .getResolvedReferenceCatalogEntity() + .getResolvedLeafEntity() + .getEntity()); + if (isExternal(catalog)) { + throw new BadRequestException("Cannot rename view on external catalogs."); + } + doCatalogOperation(() -> CatalogHandlers.renameView(viewCatalog, request)); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/catalog/SupportsCredentialDelegation.java b/polaris-service/src/main/java/io/polaris/service/catalog/SupportsCredentialDelegation.java new file mode 100644 index 0000000000..0e4aa1bc07 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/catalog/SupportsCredentialDelegation.java @@ -0,0 +1,36 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.catalog; + +import io.polaris.core.storage.PolarisStorageActions; +import java.util.Map; +import java.util.Set; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.catalog.TableIdentifier; + +/** + * Adds support for credential vending for (typically) {@link org.apache.iceberg.TableOperations} to + * fetch access credentials that are inserted into the {@link + * org.apache.iceberg.rest.responses.LoadTableResponse#config} property. See the + * rest-catalog-open-api.yaml spec for details on the expected format of vended credential + * configuration. + */ +public interface SupportsCredentialDelegation { + Map<String, String> getCredentialConfig( + TableIdentifier tableIdentifier, + TableMetadata tableMetadata, + Set<PolarisStorageActions> storageActions); +} diff --git a/polaris-service/src/main/java/io/polaris/service/catalog/SupportsNotifications.java b/polaris-service/src/main/java/io/polaris/service/catalog/SupportsNotifications.java new file mode 100644 index 0000000000..fd22ee9a74 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/catalog/SupportsNotifications.java @@ -0,0 +1,24 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.catalog; + +import io.polaris.service.types.NotificationRequest; +import org.apache.iceberg.catalog.TableIdentifier; + +public interface SupportsNotifications { + + public boolean sendNotification(TableIdentifier table, NotificationRequest notificationRequest); +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/ConfigurationStoreAware.java b/polaris-service/src/main/java/io/polaris/service/config/ConfigurationStoreAware.java new file mode 100644 index 0000000000..5dd737c7d6 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/ConfigurationStoreAware.java @@ -0,0 +1,24 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.config; + +import io.polaris.core.PolarisConfigurationStore; + +/** Interface allows injection of a {@link PolarisConfigurationStore} */ +public interface ConfigurationStoreAware { + + void setConfigurationStore(PolarisConfigurationStore configurationStore); +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/CorsConfiguration.java b/polaris-service/src/main/java/io/polaris/service/config/CorsConfiguration.java new file mode 100644 index 0000000000..a85c3b5d25 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/CorsConfiguration.java @@ -0,0 +1,92 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.config; + +import com.fasterxml.jackson.annotation.JsonProperty; +import java.util.List; + +public class CorsConfiguration { + private List allowedOrigins = List.of("*"); + private List allowedTimingOrigins = List.of("*"); + private List allowedMethods = List.of("*"); + private List allowedHeaders = List.of("*"); + private List exposedHeaders = List.of("*"); + private Integer preflightMaxAge = 600; + private String allowCredentials = "true"; + + public List getAllowedOrigins() { + return allowedOrigins; + } + + @JsonProperty("allowed-origins") + public void setAllowedOrigins(List allowedOrigins) { + this.allowedOrigins = allowedOrigins; + } + + public void setAllowedTimingOrigins(List allowedTimingOrigins) { + this.allowedTimingOrigins = allowedTimingOrigins; + } + + @JsonProperty("allowed-timing-origins") + public List getAllowedTimingOrigins() { + return allowedTimingOrigins; + } + + public List getAllowedMethods() { + return allowedMethods; + } + + @JsonProperty("allowed-methods") + public void setAllowedMethods(List allowedMethods) { + this.allowedMethods = allowedMethods; + } + + public List getAllowedHeaders() { + return allowedHeaders; + } + + @JsonProperty("allowed-headers") + public void setAllowedHeaders(List allowedHeaders) { + this.allowedHeaders = allowedHeaders; + } + + public List getExposedHeaders() { + return exposedHeaders; + } + + 
@JsonProperty("exposed-headers") + public void setExposedHeaders(List exposedHeaders) { + this.exposedHeaders = exposedHeaders; + } + + public Integer getPreflightMaxAge() { + return preflightMaxAge; + } + + @JsonProperty("preflight-max-age") + public void setPreflightMaxAge(Integer preflightMaxAge) { + this.preflightMaxAge = preflightMaxAge; + } + + public String getAllowCredentials() { + return allowCredentials; + } + + @JsonProperty("allowed-credentials") + public void setAllowCredentials(String allowCredentials) { + this.allowCredentials = allowCredentials; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/DefaultConfigurationStore.java b/polaris-service/src/main/java/io/polaris/service/config/DefaultConfigurationStore.java new file mode 100644 index 0000000000..bf5a3f91f9 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/DefaultConfigurationStore.java @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.config; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfigurationStore; +import java.util.Map; +import org.jetbrains.annotations.Nullable; + +public class DefaultConfigurationStore implements PolarisConfigurationStore { + private final Map<String, Object> properties; + + public DefaultConfigurationStore(Map<String, Object> properties) { + this.properties = properties; + } + + @SuppressWarnings("unchecked") + @Override + public <T> @Nullable T getConfiguration(PolarisCallContext ctx, String configName) { + return (T) properties.get(configName); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/HasEntityManagerFactory.java b/polaris-service/src/main/java/io/polaris/service/config/HasEntityManagerFactory.java new file mode 100644 index 0000000000..ab00e6b403 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/HasEntityManagerFactory.java @@ -0,0 +1,20 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.config; + +public interface HasEntityManagerFactory { + void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory); +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/OAuth2ApiService.java b/polaris-service/src/main/java/io/polaris/service/config/OAuth2ApiService.java new file mode 100644 index 0000000000..f7ce88da8f --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/OAuth2ApiService.java @@ -0,0 +1,26 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.config; + +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import io.dropwizard.jackson.Discoverable; +import io.polaris.service.auth.TokenBrokerFactory; +import io.polaris.service.catalog.api.IcebergRestOAuth2ApiService; + +@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type") +public interface OAuth2ApiService extends Discoverable, IcebergRestOAuth2ApiService { + void setTokenBroker(TokenBrokerFactory tokenBrokerFactory); +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/PolarisApplicationConfig.java b/polaris-service/src/main/java/io/polaris/service/config/PolarisApplicationConfig.java new file mode 100644 index 0000000000..af5cca4867 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/PolarisApplicationConfig.java @@ -0,0 +1,184 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.config; + +import com.fasterxml.jackson.annotation.JsonProperty; +import io.dropwizard.core.Configuration; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.service.auth.DiscoverableAuthenticator; +import io.polaris.service.context.CallContextResolver; +import io.polaris.service.context.RealmContextResolver; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import org.apache.commons.lang3.StringUtils; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.auth.credentials.AwsBasicCredentials; +import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider; +import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider; + +/** + * Configuration specific to a Polaris REST Service. Place these entries in a YML file for them to + * be picked up, e.g. `iceberg-rest-server.yml` + */ +public class PolarisApplicationConfig extends Configuration { + private Map<String, String> sqlLiteCatalogDirs = new HashMap<>(); + + private String baseCatalogType; + private MetaStoreManagerFactory metaStoreManagerFactory; + private String defaultRealm = "default-realm"; + private RealmContextResolver realmContextResolver; + private CallContextResolver callContextResolver; + private DiscoverableAuthenticator<AuthenticatedPolarisPrincipal> polarisAuthenticator; + private CorsConfiguration corsConfiguration = new CorsConfiguration(); + private TaskHandlerConfiguration taskHandler = new TaskHandlerConfiguration(); + private PolarisConfigurationStore configurationStore = + new DefaultConfigurationStore(new HashMap<>()); + private List<String> defaultRealms; + private String awsAccessKey; + private String awsSecretKey; + + public Map<String, String> getSqlLiteCatalogDirs() { + return sqlLiteCatalogDirs; + } + + public void setSqlLiteCatalogDirs(Map<String, String> sqlLiteCatalogDirs) { + this.sqlLiteCatalogDirs = sqlLiteCatalogDirs; + } + +
@JsonProperty("baseCatalogType") + public void setBaseCatalogType(String baseCatalogType) { + this.baseCatalogType = baseCatalogType; + } + + @JsonProperty("baseCatalogType") + public String getBaseCatalogType() { + return baseCatalogType; + } + + @JsonProperty("metaStoreManager") + public void setMetaStoreManagerFactory(MetaStoreManagerFactory metaStoreManagerFactory) { + this.metaStoreManagerFactory = metaStoreManagerFactory; + } + + @JsonProperty("metaStoreManager") + public MetaStoreManagerFactory getMetaStoreManagerFactory() { + return metaStoreManagerFactory; + } + + @JsonProperty("authenticator") + public void setPolarisAuthenticator( + DiscoverableAuthenticator polarisAuthenticator) { + this.polarisAuthenticator = polarisAuthenticator; + } + + public DiscoverableAuthenticator + getPolarisAuthenticator() { + return polarisAuthenticator; + } + + public RealmContextResolver getRealmContextResolver() { + return realmContextResolver; + } + + public void setRealmContextResolver(RealmContextResolver realmContextResolver) { + this.realmContextResolver = realmContextResolver; + } + + public CallContextResolver getCallContextResolver() { + return callContextResolver; + } + + @JsonProperty("callContextResolver") + public void setCallContextResolver(CallContextResolver callContextResolver) { + this.callContextResolver = callContextResolver; + } + + private OAuth2ApiService oauth2Service; + + @JsonProperty("oauth2") + public void setOauth2Service(OAuth2ApiService oauth2Service) { + this.oauth2Service = oauth2Service; + } + + public OAuth2ApiService getOauth2Service() { + return oauth2Service; + } + + public String getDefaultRealm() { + return defaultRealm; + } + + @JsonProperty("defaultRealm") + public void setDefaultRealm(String defaultRealm) { + this.defaultRealm = defaultRealm; + } + + @JsonProperty("cors") + public CorsConfiguration getCorsConfiguration() { + return corsConfiguration; + } + + @JsonProperty("cors") + public void setCorsConfiguration(CorsConfiguration 
corsConfiguration) { + this.corsConfiguration = corsConfiguration; + } + + public void setTaskHandler(TaskHandlerConfiguration taskHandler) { + this.taskHandler = taskHandler; + } + + public TaskHandlerConfiguration getTaskHandler() { + return taskHandler; + } + + @JsonProperty("featureConfiguration") + public void setFeatureConfiguration(Map<String, Object> featureConfiguration) { + this.configurationStore = new DefaultConfigurationStore(featureConfiguration); + } + + public PolarisConfigurationStore getConfigurationStore() { + return configurationStore; + } + + public List<String> getDefaultRealms() { + return defaultRealms; + } + + public AwsCredentialsProvider credentialsProvider() { + if (StringUtils.isNotBlank(awsAccessKey) && StringUtils.isNotBlank(awsSecretKey)) { + LoggerFactory.getLogger(PolarisApplicationConfig.class) + .warn("Using hard-coded AWS credentials - this is not recommended for production"); + return StaticCredentialsProvider.create( + AwsBasicCredentials.create(awsAccessKey, awsSecretKey)); + } + return null; + } + + public void setAwsAccessKey(String awsAccessKey) { + this.awsAccessKey = awsAccessKey; + } + + public void setAwsSecretKey(String awsSecretKey) { + this.awsSecretKey = awsSecretKey; + } + + public void setDefaultRealms(List<String> defaultRealms) { + this.defaultRealms = defaultRealms; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/RealmEntityManagerFactory.java b/polaris-service/src/main/java/io/polaris/service/config/RealmEntityManagerFactory.java new file mode 100644 index 0000000000..6828f685c0 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/RealmEntityManagerFactory.java @@ -0,0 +1,61 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.config; + +import io.polaris.core.context.RealmContext; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisEntityManager; +import java.util.HashMap; +import java.util.Map; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** Gets or creates PolarisEntityManager instances based on config values and RealmContext. */ +public class RealmEntityManagerFactory { + private static final Logger LOG = LoggerFactory.getLogger(RealmEntityManagerFactory.class); + private final MetaStoreManagerFactory metaStoreManagerFactory; + + // Key: realmIdentifier + private Map<String, PolarisEntityManager> cachedEntityManagers = new HashMap<>(); + + // Subclasses for test injection.
+ protected RealmEntityManagerFactory() { + this.metaStoreManagerFactory = null; + } + + public RealmEntityManagerFactory(MetaStoreManagerFactory metaStoreManagerFactory) { + this.metaStoreManagerFactory = metaStoreManagerFactory; + } + + public PolarisEntityManager getOrCreateEntityManager(RealmContext context) { + String realm = context.getRealmIdentifier(); + + LOG.debug("Looking up PolarisEntityManager for realm {}", realm); + PolarisEntityManager entityManagerInstance = cachedEntityManagers.get(realm); + if (entityManagerInstance == null) { + LOG.info("Initializing new PolarisEntityManager for realm {}", realm); + + entityManagerInstance = + new PolarisEntityManager( + metaStoreManagerFactory.getOrCreateMetaStoreManager(context), + metaStoreManagerFactory.getOrCreateSessionSupplier(context), + metaStoreManagerFactory.getOrCreateStorageCredentialCache(context)); + + cachedEntityManagers.put(realm, entityManagerInstance); + } + return entityManagerInstance; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/Serializers.java b/polaris-service/src/main/java/io/polaris/service/config/Serializers.java new file mode 100644 index 0000000000..a7050ee705 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/Serializers.java @@ -0,0 +1,243 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.config; + +import com.fasterxml.jackson.core.JacksonException; +import com.fasterxml.jackson.core.JsonParser; +import com.fasterxml.jackson.core.TreeNode; +import com.fasterxml.jackson.databind.DeserializationContext; +import com.fasterxml.jackson.databind.JsonDeserializer; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.module.SimpleModule; +import com.fasterxml.jackson.databind.node.ObjectNode; +import io.polaris.core.admin.model.AddGrantRequest; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogRole; +import io.polaris.core.admin.model.CreateCatalogRequest; +import io.polaris.core.admin.model.CreateCatalogRoleRequest; +import io.polaris.core.admin.model.CreatePrincipalRequest; +import io.polaris.core.admin.model.CreatePrincipalRoleRequest; +import io.polaris.core.admin.model.GrantCatalogRoleRequest; +import io.polaris.core.admin.model.GrantPrincipalRoleRequest; +import io.polaris.core.admin.model.GrantResource; +import io.polaris.core.admin.model.Principal; +import io.polaris.core.admin.model.PrincipalRole; +import io.polaris.core.admin.model.RevokeGrantRequest; +import java.io.IOException; + +public final class Serializers { + private Serializers() {} + + public static void registerSerializers(ObjectMapper mapper) { + SimpleModule module = new SimpleModule(); + module.addDeserializer(CreateCatalogRequest.class, new CreateCatalogRequestDeserializer()); + module.addDeserializer(CreatePrincipalRequest.class, new CreatePrincipalRequestDeserializer()); + module.addDeserializer( + CreatePrincipalRoleRequest.class, new CreatePrincipalRoleRequestDeserializer()); + module.addDeserializer( + GrantPrincipalRoleRequest.class, new GrantPrincipalRoleRequestDeserializer()); + module.addDeserializer( + CreateCatalogRoleRequest.class, new CreateCatalogRoleRequestDeserializer()); + module.addDeserializer( + 
GrantCatalogRoleRequest.class, new GrantCatalogRoleRequestDeserializer()); + module.addDeserializer(AddGrantRequest.class, new AddGrantRequestDeserializer()); + module.addDeserializer(RevokeGrantRequest.class, new RevokeGrantRequestDeserializer()); + mapper.registerModule(module); + } + + /** + * Deserializer for {@link CreateCatalogRequest}. Backward compatible with the previous version of + * the api + */ + public static final class CreateCatalogRequestDeserializer + extends JsonDeserializer<CreateCatalogRequest> { + @Override + public CreateCatalogRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("catalog")) { + return CreateCatalogRequest.builder() + .setCatalog(ctxt.readTreeAsValue((JsonNode) treeNode.get("catalog"), Catalog.class)) + .build(); + } else { + return CreateCatalogRequest.builder() + .setCatalog(ctxt.readTreeAsValue((JsonNode) treeNode, Catalog.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link CreatePrincipalRequest}.
Backward compatible with the previous version + * of the api + */ + public static final class CreatePrincipalRequestDeserializer + extends JsonDeserializer<CreatePrincipalRequest> { + @Override + public CreatePrincipalRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("principal")) { + return CreatePrincipalRequest.builder() + .setPrincipal( + ctxt.readTreeAsValue((JsonNode) treeNode.get("principal"), Principal.class)) + .setCredentialRotationRequired( + ctxt.readTreeAsValue( + (JsonNode) treeNode.get("credentialRotationRequired"), Boolean.class)) + .build(); + } else { + return CreatePrincipalRequest.builder() + .setPrincipal(ctxt.readTreeAsValue((JsonNode) treeNode, Principal.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link CreatePrincipalRoleRequest}. Backward compatible with the previous + * version of the api + */ + public static final class CreatePrincipalRoleRequestDeserializer + extends JsonDeserializer<CreatePrincipalRoleRequest> { + @Override + public CreatePrincipalRoleRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("principalRole")) { + return CreatePrincipalRoleRequest.builder() + .setPrincipalRole( + ctxt.readTreeAsValue((JsonNode) treeNode.get("principalRole"), PrincipalRole.class)) + .build(); + } else { + return CreatePrincipalRoleRequest.builder() + .setPrincipalRole(ctxt.readTreeAsValue((JsonNode) treeNode, PrincipalRole.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link GrantPrincipalRoleRequest}.
Backward compatible with the previous + * version of the api + */ + public static final class GrantPrincipalRoleRequestDeserializer + extends JsonDeserializer<GrantPrincipalRoleRequest> { + @Override + public GrantPrincipalRoleRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("principalRole")) { + return GrantPrincipalRoleRequest.builder() + .setPrincipalRole( + ctxt.readTreeAsValue((JsonNode) treeNode.get("principalRole"), PrincipalRole.class)) + .build(); + } else { + return GrantPrincipalRoleRequest.builder() + .setPrincipalRole(ctxt.readTreeAsValue((JsonNode) treeNode, PrincipalRole.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link CreateCatalogRoleRequest} Backward compatible with the previous version + * of the api + */ + public static final class CreateCatalogRoleRequestDeserializer + extends JsonDeserializer<CreateCatalogRoleRequest> { + @Override + public CreateCatalogRoleRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("catalogRole")) { + return CreateCatalogRoleRequest.builder() + .setCatalogRole( + ctxt.readTreeAsValue((JsonNode) treeNode.get("catalogRole"), CatalogRole.class)) + .build(); + } else { + return CreateCatalogRoleRequest.builder() + ..setCatalogRole(ctxt.readTreeAsValue((JsonNode) treeNode, CatalogRole.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link GrantCatalogRoleRequest} Backward compatible with the previous version + * of the api + */ + public static final class GrantCatalogRoleRequestDeserializer + extends JsonDeserializer<GrantCatalogRoleRequest> { + @Override + public GrantCatalogRoleRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode)
treeNode).has("catalogRole")) { + return GrantCatalogRoleRequest.builder() + .setCatalogRole( + ctxt.readTreeAsValue((JsonNode) treeNode.get("catalogRole"), CatalogRole.class)) + .build(); + } else { + return GrantCatalogRoleRequest.builder() + .setCatalogRole(ctxt.readTreeAsValue((JsonNode) treeNode, CatalogRole.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link AddGrantRequest} Backward compatible with previous version of the api + */ + public static final class AddGrantRequestDeserializer extends JsonDeserializer { + @Override + public AddGrantRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("grant")) { + return AddGrantRequest.builder() + .setGrant(ctxt.readTreeAsValue((JsonNode) treeNode.get("grant"), GrantResource.class)) + .build(); + } else { + return AddGrantRequest.builder() + .setGrant(ctxt.readTreeAsValue((JsonNode) treeNode, GrantResource.class)) + .build(); + } + } + } + + /** + * Deserializer for {@link RevokeGrantRequest} Backward compatible with previous version of the + * api + */ + public static final class RevokeGrantRequestDeserializer + extends JsonDeserializer { + @Override + public RevokeGrantRequest deserialize(JsonParser p, DeserializationContext ctxt) + throws IOException, JacksonException { + TreeNode treeNode = p.readValueAsTree(); + if (treeNode.isObject() && ((ObjectNode) treeNode).has("grant")) { + return RevokeGrantRequest.builder() + .setGrant(ctxt.readTreeAsValue((JsonNode) treeNode.get("grant"), GrantResource.class)) + .build(); + } else { + return RevokeGrantRequest.builder() + .setGrant(ctxt.readTreeAsValue((JsonNode) treeNode, GrantResource.class)) + .build(); + } + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/config/TaskHandlerConfiguration.java b/polaris-service/src/main/java/io/polaris/service/config/TaskHandlerConfiguration.java 
new file mode 100644 index 0000000000..9c8ed527f3 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/config/TaskHandlerConfiguration.java @@ -0,0 +1,49 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.config; + +import com.google.common.util.concurrent.ThreadFactoryBuilder; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.ThreadFactory; + +public class TaskHandlerConfiguration { + private int poolSize = 10; + private boolean fixedSize = true; + private String threadNamePattern = "taskHandler-%d"; + + public void setPoolSize(int poolSize) { + this.poolSize = poolSize; + } + + public void setFixedSize(boolean fixedSize) { + this.fixedSize = fixedSize; + } + + public void setThreadNamePattern(String threadNamePattern) { + this.threadNamePattern = threadNamePattern; + } + + public ExecutorService executorService() { + return fixedSize + ? 
Executors.newFixedThreadPool(poolSize, threadFactory()) + : Executors.newCachedThreadPool(threadFactory()); + } + + private ThreadFactory threadFactory() { + return new ThreadFactoryBuilder().setNameFormat(threadNamePattern).setDaemon(true).build(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/context/CallContextCatalogFactory.java b/polaris-service/src/main/java/io/polaris/service/context/CallContextCatalogFactory.java new file mode 100644 index 0000000000..f900081191 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/context/CallContextCatalogFactory.java @@ -0,0 +1,28 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.context; + +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import org.apache.iceberg.catalog.Catalog; + +public interface CallContextCatalogFactory { + Catalog createCallContextCatalog( + CallContext context, + AuthenticatedPolarisPrincipal authenticatedPrincipal, + PolarisResolutionManifest resolvedManifest); +} diff --git a/polaris-service/src/main/java/io/polaris/service/context/CallContextResolver.java b/polaris-service/src/main/java/io/polaris/service/context/CallContextResolver.java new file mode 100644 index 0000000000..850b2c2ca2 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/context/CallContextResolver.java @@ -0,0 +1,34 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.context; + +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import io.dropwizard.jackson.Discoverable; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.service.config.HasEntityManagerFactory; +import java.util.Map; + +/** Uses the resolved RealmContext to further resolve elements of the CallContext. 
*/ +@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type") +public interface CallContextResolver extends HasEntityManagerFactory, Discoverable { + CallContext resolveCallContext( + RealmContext realmContext, + String method, + String path, + Map<String, String> queryParams, + Map<String, String> headers); +} diff --git a/polaris-service/src/main/java/io/polaris/service/context/DefaultContextResolver.java b/polaris-service/src/main/java/io/polaris/service/context/DefaultContextResolver.java new file mode 100644 index 0000000000..1a76e2027d --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/context/DefaultContextResolver.java @@ -0,0 +1,168 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.context; + +import com.fasterxml.jackson.annotation.JsonTypeName; +import com.google.common.base.Splitter; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreSession; +import io.polaris.service.config.ConfigurationStoreAware; +import io.polaris.service.config.RealmEntityManagerFactory; +import java.time.Clock; +import java.time.ZoneId; +import java.util.HashMap; +import java.util.Map; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * For local/dev testing, this resolver simply expects a custom bearer-token format that is a + * semicolon-separated list of colon-separated key/value pairs that constitute the realm properties. + * + *

<p>Example: principal:data-engineer;password:test;realm:acct123 + */ +@JsonTypeName("default") +public class DefaultContextResolver + implements RealmContextResolver, CallContextResolver, ConfigurationStoreAware { + private static final Logger LOG = LoggerFactory.getLogger(DefaultContextResolver.class); + + public static final String REALM_PROPERTY_KEY = "realm"; + public static final String REALM_PROPERTY_DEFAULT_VALUE = "default-realm"; + + public static final String PRINCIPAL_PROPERTY_KEY = "principal"; + public static final String PRINCIPAL_PROPERTY_DEFAULT_VALUE = "default-principal"; + + private RealmEntityManagerFactory entityManagerFactory; + private PolarisConfigurationStore configurationStore; + + /** + * During CallContext resolution that might depend on RealmContext, the {@code + * entityManagerFactory} will be used to resolve elements of the CallContext which require + * additional information from an underlying entity store. + */ + @Override + public void setEntityManagerFactory(RealmEntityManagerFactory entityManagerFactory) { + this.entityManagerFactory = entityManagerFactory; + } + + @Override + public RealmContext resolveRealmContext( + String requestURL, + String method, + String path, + Map<String, String> queryParams, + Map<String, String> headers) { + // Since this default resolver is strictly for use in test/dev environments, we'll consider + // it safe to log all contents. Any "real" resolver used in a prod environment should make + // sure to only log non-sensitive contents.
LOG.debug( + "Resolving RealmContext for method: {}, path: {}, queryParams: {}, headers: {}", + method, + path, + queryParams, + headers); + final Map<String, String> parsedProperties = parseBearerTokenAsKvPairs(headers); + + if (!parsedProperties.containsKey(REALM_PROPERTY_KEY) + && headers.containsKey(REALM_PROPERTY_KEY)) { + parsedProperties.put(REALM_PROPERTY_KEY, headers.get(REALM_PROPERTY_KEY)); + } + + if (!parsedProperties.containsKey(REALM_PROPERTY_KEY)) { + LOG.warn( + "Failed to parse {} from headers; using {}", + REALM_PROPERTY_KEY, + REALM_PROPERTY_DEFAULT_VALUE); + parsedProperties.put(REALM_PROPERTY_KEY, REALM_PROPERTY_DEFAULT_VALUE); + } + return new RealmContext() { + @Override + public String getRealmIdentifier() { + return parsedProperties.get(REALM_PROPERTY_KEY); + } + }; + } + + @Override + public CallContext resolveCallContext( + final RealmContext realmContext, + String method, + String path, + Map<String, String> queryParams, + Map<String, String> headers) { + LOG.atDebug() + .addKeyValue("realmContext", realmContext.getRealmIdentifier()) + .addKeyValue("method", method) + .addKeyValue("path", path) + .addKeyValue("queryParams", queryParams) + .addKeyValue("headers", headers) + .log("Resolving CallContext"); + final Map<String, String> parsedProperties = parseBearerTokenAsKvPairs(headers); + + if (!parsedProperties.containsKey(PRINCIPAL_PROPERTY_KEY)) { + LOG.warn( + "Failed to parse {} from headers ({}); using {}", + PRINCIPAL_PROPERTY_KEY, + headers, + PRINCIPAL_PROPERTY_DEFAULT_VALUE); + parsedProperties.put(PRINCIPAL_PROPERTY_KEY, PRINCIPAL_PROPERTY_DEFAULT_VALUE); + } + + PolarisEntityManager entityManager = + entityManagerFactory.getOrCreateEntityManager(realmContext); + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + PolarisMetaStoreSession metaStoreSession = entityManager.newMetaStoreSession(); + PolarisCallContext polarisContext = + new PolarisCallContext( + metaStoreSession, + diagServices, + configurationStore, + Clock.system(ZoneId.systemDefault())); + return
CallContext.of(realmContext, polarisContext); + } + + /** + * Returns kv pairs parsed from the "Authorization: Bearer k1:v1;k2:v2;k3:v3" header if it exists; + * if missing, returns an empty map. + */ + private static Map<String, String> parseBearerTokenAsKvPairs(Map<String, String> headers) { + Map<String, String> parsedProperties = new HashMap<>(); + if (headers != null) { + String authHeader = headers.get("Authorization"); + if (authHeader != null) { + String[] parts = authHeader.split(" "); + if (parts.length == 2 && "Bearer".equalsIgnoreCase(parts[0])) { + if (parts[1].matches("[\\w\\d=_+-]+:[\\w\\d=+_-]+(?:;[\\w\\d=+_-]+:[\\w\\d=+_-]+)*")) { + parsedProperties.putAll( + Splitter.on(';').trimResults().withKeyValueSeparator(':').split(parts[1])); + } + } + } + } + return parsedProperties; + } + + @Override + public void setConfigurationStore(PolarisConfigurationStore configurationStore) { + this.configurationStore = configurationStore; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/context/PolarisCallContextCatalogFactory.java b/polaris-service/src/main/java/io/polaris/service/context/PolarisCallContextCatalogFactory.java new file mode 100644 index 0000000000..f6c1684388 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/context/PolarisCallContextCatalogFactory.java @@ -0,0 +1,90 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.context; + +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.service.catalog.BasePolarisCatalog; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.task.TaskExecutor; +import java.nio.file.Paths; +import java.util.HashMap; +import java.util.Map; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.catalog.Catalog; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class PolarisCallContextCatalogFactory implements CallContextCatalogFactory { + private static final Logger LOG = LoggerFactory.getLogger(PolarisCallContextCatalogFactory.class); + + private static final String WAREHOUSE_LOCATION_BASEDIR = + "/tmp/iceberg_rest_server_warehouse_data/"; + + private final RealmEntityManagerFactory entityManagerFactory; + private final TaskExecutor taskExecutor; + + public PolarisCallContextCatalogFactory( + RealmEntityManagerFactory entityManagerFactory, TaskExecutor taskExecutor) { + this.entityManagerFactory = entityManagerFactory; + this.taskExecutor = taskExecutor; + } + + @Override + public Catalog createCallContextCatalog( + CallContext context, + AuthenticatedPolarisPrincipal authenticatedPrincipal, + final PolarisResolutionManifest resolvedManifest) { + PolarisBaseEntity baseCatalogEntity = + resolvedManifest.getResolvedReferenceCatalogEntity().getRawLeafEntity(); + String catalogName = baseCatalogEntity.getName(); + + String realm = context.getRealmContext().getRealmIdentifier(); + String catalogKey = realm + "/" + catalogName; + LOG.info("Initializing new BasePolarisCatalog for key: {}", catalogKey); + + PolarisEntityManager entityManager = + 
entityManagerFactory.getOrCreateEntityManager(context.getRealmContext()); + + BasePolarisCatalog catalogInstance = + new BasePolarisCatalog( + entityManager, context, resolvedManifest, authenticatedPrincipal, taskExecutor); + + context.contextVariables().put(CallContext.REQUEST_PATH_CATALOG_INSTANCE_KEY, catalogInstance); + + CatalogEntity catalog = CatalogEntity.of(baseCatalogEntity); + Map catalogProperties = new HashMap<>(catalog.getPropertiesAsMap()); + String defaultBaseLocation = catalog.getDefaultBaseLocation(); + LOG.info("Looked up defaultBaseLocation {} for catalog {}", defaultBaseLocation, catalogKey); + if (defaultBaseLocation != null) { + catalogProperties.put(CatalogProperties.WAREHOUSE_LOCATION, defaultBaseLocation); + } else { + catalogProperties.put( + CatalogProperties.WAREHOUSE_LOCATION, + Paths.get(WAREHOUSE_LOCATION_BASEDIR, catalogKey).toString()); + } + + // TODO: The initialize properties might need to take more from CallContext and the + // CatalogEntity. + catalogInstance.initialize(catalogName, catalogProperties); + + return catalogInstance; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/context/RealmContextResolver.java b/polaris-service/src/main/java/io/polaris/service/context/RealmContextResolver.java new file mode 100644 index 0000000000..0a716c73ac --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/context/RealmContextResolver.java @@ -0,0 +1,32 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.context; + +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import io.dropwizard.jackson.Discoverable; +import io.polaris.core.context.RealmContext; +import io.polaris.service.config.HasEntityManagerFactory; +import java.util.Map; + +@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type") +public interface RealmContextResolver extends Discoverable, HasEntityManagerFactory { + RealmContext resolveRealmContext( + String requestURL, + String method, + String path, + Map queryParams, + Map headers); +} diff --git a/polaris-service/src/main/java/io/polaris/service/context/SqlliteCallContextCatalogFactory.java b/polaris-service/src/main/java/io/polaris/service/context/SqlliteCallContextCatalogFactory.java new file mode 100644 index 0000000000..a4685d1d1c --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/context/SqlliteCallContextCatalogFactory.java @@ -0,0 +1,107 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.context; + +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import java.io.IOException; +import java.nio.file.FileSystems; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.HashMap; +import java.util.Map; +import org.apache.hadoop.conf.Configuration; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.CatalogUtil; +import org.apache.iceberg.catalog.Catalog; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * For local/dev testing, this RealmCallContextFactory uses Sqllite as the backing store for Catalog + * metadata using local-filesystem-based files as the persistence layer. + * + *

Realms will reside in different subdirectories under a shared base directory on the local + * filesystem. Each Catalog in the realm will be a different sqllite file. + */ +public class SqlliteCallContextCatalogFactory implements CallContextCatalogFactory { + private static final String DEFAULT_METASTORE_STATE_BASEDIR = + "/tmp/iceberg_rest_server_sqlitestate_basedir/"; + private static final String WAREHOUSE_LOCATION_BASEDIR = + "/tmp/iceberg_rest_server_warehouse_data/"; + + private static final Logger LOG = LoggerFactory.getLogger(SqlliteCallContextCatalogFactory.class); + + private Map cachedCatalogs = new HashMap<>(); + private final Map catalogBaseDirs; + + public SqlliteCallContextCatalogFactory(Map catalogBaseDirs) { + this.catalogBaseDirs = catalogBaseDirs; + } + + @Override + public Catalog createCallContextCatalog( + CallContext context, + AuthenticatedPolarisPrincipal polarisPrincipal, + PolarisResolutionManifest resolvedManifest) { + String catalogName = + resolvedManifest.getResolvedReferenceCatalogEntity().getRawLeafEntity().getName(); + if (catalogName == null) { + catalogName = "default"; + } + + String realm = context.getRealmContext().getRealmIdentifier(); + String catalogKey = realm + "/" + catalogName; + LOG.debug("Looking up catalogKey: {}", catalogKey); + + Catalog catalogInstance = cachedCatalogs.get(catalogKey); + if (catalogInstance == null) { + Map catalogProperties = new HashMap<>(); + catalogProperties.put(CatalogProperties.CATALOG_IMPL, "org.apache.iceberg.jdbc.JdbcCatalog"); + catalogProperties.put("jdbc.schema-version", "V1"); + + // TODO: Do sanitization in case this ever runs in an exposed environment to avoid + // injection attacks. + String baseDir = catalogBaseDirs.getOrDefault(realm, DEFAULT_METASTORE_STATE_BASEDIR); + + String realmDir = Paths.get(baseDir, realm).toString(); + String catalogFile = Paths.get(realmDir, catalogName).toString(); + + // Ensure parent directories of metastore-state base directory exists. 
+ LOG.info("Creating metastore state directory: " + realmDir); + try { + Path result = Files.createDirectories(FileSystems.getDefault().getPath(realmDir)); + } catch (IOException ioe) { + throw new RuntimeException(ioe); + } + + catalogProperties.put(CatalogProperties.URI, "jdbc:sqlite:file:" + catalogFile); + + // TODO: Derive warehouse location from realm configs. + catalogProperties.put( + CatalogProperties.WAREHOUSE_LOCATION, + Paths.get(WAREHOUSE_LOCATION_BASEDIR, catalogKey).toString()); + + catalogInstance = + CatalogUtil.buildIcebergCatalog( + "catalog_" + catalogKey, catalogProperties, new Configuration()); + cachedCatalogs.put(catalogKey, catalogInstance); + } + return catalogInstance; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/logging/PolarisJsonLayoutFactory.java b/polaris-service/src/main/java/io/polaris/service/logging/PolarisJsonLayoutFactory.java new file mode 100644 index 0000000000..68000e5cae --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/logging/PolarisJsonLayoutFactory.java @@ -0,0 +1,239 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
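Stripped of the Iceberg types, the SQLite-backed factory does three things per (realm, catalog) pair: resolve a per-realm directory, create it, and point a JDBC catalog at a per-catalog SQLite file. Below is a minimal sketch of that wiring, assuming the same directory layout; property keys are written as plain strings where the real code uses `CatalogProperties` constants, and no Iceberg catalog is actually instantiated.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Builds the catalog properties the SQLite-backed factory assembles.
public class SqliteCatalogConfigDemo {
  static Map<String, String> buildProperties(String baseDir, String realm, String catalogName) {
    String realmDir = Paths.get(baseDir, realm).toString();
    try {
      // Each realm gets its own subdirectory; each catalog is one SQLite file in it.
      Files.createDirectories(Paths.get(realmDir));
    } catch (IOException ioe) {
      throw new RuntimeException(ioe);
    }
    String catalogFile = Paths.get(realmDir, catalogName).toString();

    Map<String, String> props = new HashMap<>();
    props.put("catalog-impl", "org.apache.iceberg.jdbc.JdbcCatalog");
    props.put("jdbc.schema-version", "V1");
    props.put("uri", "jdbc:sqlite:file:" + catalogFile);
    return props;
  }

  public static void main(String[] args) {
    Map<String, String> props =
        buildProperties(System.getProperty("java.io.tmpdir"), "realm1", "default");
    System.out.println(props.get("uri"));
  }
}
```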
+ */ +package io.polaris.service.logging; + +import ch.qos.logback.classic.LoggerContext; +import ch.qos.logback.classic.pattern.ExtendedThrowableProxyConverter; +import ch.qos.logback.classic.pattern.RootCauseFirstThrowableProxyConverter; +import ch.qos.logback.classic.pattern.ThrowableHandlingConverter; +import ch.qos.logback.classic.spi.ILoggingEvent; +import ch.qos.logback.core.LayoutBase; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.fasterxml.jackson.annotation.JsonTypeName; +import com.google.common.collect.ImmutableMap; +import io.dropwizard.logging.json.AbstractJsonLayoutBaseFactory; +import io.dropwizard.logging.json.EventAttribute; +import io.dropwizard.logging.json.layout.EventJsonLayout; +import io.dropwizard.logging.json.layout.ExceptionFormat; +import io.dropwizard.logging.json.layout.JsonFormatter; +import io.dropwizard.logging.json.layout.TimestampFormatter; +import java.util.ArrayList; +import java.util.Collections; +import java.util.EnumSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.TimeZone; +import java.util.stream.Collectors; +import org.checkerframework.checker.nullness.qual.Nullable; + +/** + * Basically a direct copy of {@link io.dropwizard.logging.json.EventJsonLayoutBaseFactory} that + * adds support for {@link ILoggingEvent#getKeyValuePairs()} in the output. By default, additional + * key/value pairs are included as the `params` field of the json output, but they can optionally be + * flattened into the log event output. + * + *

To use this appender, change the appender type to `polaris` + * loggers: + * org.apache.iceberg.rest: DEBUG + * org.apache.iceberg.polaris: DEBUG + * appenders: + * - type: console + * threshold: ALL + * layout: + * type: polaris + * flattenKeyValues: false + * includeKeyValues: true + * + */ +@JsonTypeName("polaris") +public class PolarisJsonLayoutFactory extends AbstractJsonLayoutBaseFactory { + private EnumSet includes = + EnumSet.of( + EventAttribute.LEVEL, + EventAttribute.THREAD_NAME, + EventAttribute.MDC, + EventAttribute.MARKER, + EventAttribute.LOGGER_NAME, + EventAttribute.MESSAGE, + EventAttribute.EXCEPTION, + EventAttribute.TIMESTAMP); + + private Set includesMdcKeys = Collections.emptySet(); + private boolean flattenMdc = false; + private boolean includeKeyValues = true; + private boolean flattenKeyValues = false; + + @Nullable private ExceptionFormat exceptionFormat; + + @JsonProperty + public EnumSet getIncludes() { + return includes; + } + + @JsonProperty + public void setIncludes(EnumSet includes) { + this.includes = includes; + } + + @JsonProperty + public Set getIncludesMdcKeys() { + return includesMdcKeys; + } + + @JsonProperty + public void setIncludesMdcKeys(Set includesMdcKeys) { + this.includesMdcKeys = includesMdcKeys; + } + + @JsonProperty + public boolean isFlattenMdc() { + return flattenMdc; + } + + @JsonProperty + public void setFlattenMdc(boolean flattenMdc) { + this.flattenMdc = flattenMdc; + } + + @JsonProperty + public boolean isIncludeKeyValues() { + return includeKeyValues; + } + + @JsonProperty + public void setIncludeKeyValues(boolean includeKeyValues) { + this.includeKeyValues = includeKeyValues; + } + + @JsonProperty + public boolean isFlattenKeyValues() { + return flattenKeyValues; + } + + @JsonProperty + public void setFlattenKeyValues(boolean flattenKeyValues) { + this.flattenKeyValues = flattenKeyValues; + } + + /** + * @since 2.0 + */ + @JsonProperty("exception") + public void setExceptionFormat(ExceptionFormat 
exceptionFormat) { + this.exceptionFormat = exceptionFormat; + } + + /** + * @since 2.0 + */ + @JsonProperty("exception") + @Nullable + public ExceptionFormat getExceptionFormat() { + return exceptionFormat; + } + + @Override + public LayoutBase build(LoggerContext context, TimeZone timeZone) { + final PolarisJsonLayout jsonLayout = + new PolarisJsonLayout( + createDropwizardJsonFormatter(), + createTimestampFormatter(timeZone), + createThrowableProxyConverter(context), + includes, + getCustomFieldNames(), + getAdditionalFields(), + includesMdcKeys, + flattenMdc, + includeKeyValues, + flattenKeyValues); + jsonLayout.setContext(context); + return jsonLayout; + } + + public static class PolarisJsonLayout extends EventJsonLayout { + private final boolean includeKeyValues; + private final boolean flattenKeyValues; + + public PolarisJsonLayout( + JsonFormatter jsonFormatter, + TimestampFormatter timestampFormatter, + ThrowableHandlingConverter throwableProxyConverter, + Set includes, + Map customFieldNames, + Map additionalFields, + Set includesMdcKeys, + boolean flattenMdc, + boolean includeKeyValues, + boolean flattenKeyValues) { + super( + jsonFormatter, + timestampFormatter, + throwableProxyConverter, + includes, + customFieldNames, + additionalFields, + includesMdcKeys, + flattenMdc); + this.includeKeyValues = includeKeyValues; + this.flattenKeyValues = flattenKeyValues; + } + + @Override + protected Map toJsonMap(ILoggingEvent event) { + Map jsonMap = super.toJsonMap(event); + if (!includeKeyValues) { + return jsonMap; + } + Map keyValueMap = + event.getKeyValuePairs() == null + ? 
Map.of() + : event.getKeyValuePairs().stream() + .collect(Collectors.toMap(kv -> kv.key, kv -> kv.value)); + ImmutableMap.Builder builder = + ImmutableMap.builder().putAll(jsonMap); + if (flattenKeyValues) { + builder.putAll(keyValueMap); + } else { + builder.put("params", keyValueMap); + } + return builder.build(); + } + } + + protected ThrowableHandlingConverter createThrowableProxyConverter(LoggerContext context) { + if (exceptionFormat == null) { + return new RootCauseFirstThrowableProxyConverter(); + } + + ThrowableHandlingConverter throwableHandlingConverter; + if (exceptionFormat.isRootFirst()) { + throwableHandlingConverter = new RootCauseFirstThrowableProxyConverter(); + } else { + throwableHandlingConverter = new ExtendedThrowableProxyConverter(); + } + + List options = new ArrayList<>(); + // depth must be added first + options.add(exceptionFormat.getDepth()); + options.addAll(exceptionFormat.getEvaluators()); + + throwableHandlingConverter.setOptionList(options); + throwableHandlingConverter.setContext(context); + + return throwableHandlingConverter; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/persistence/InMemoryPolarisMetaStoreManagerFactory.java b/polaris-service/src/main/java/io/polaris/service/persistence/InMemoryPolarisMetaStoreManagerFactory.java new file mode 100644 index 0000000000..d9f93dd196 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/persistence/InMemoryPolarisMetaStoreManagerFactory.java @@ -0,0 +1,85 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
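The core of `PolarisJsonLayout.toJsonMap` is a merge policy: structured key/value pairs from the log event either become a single nested `params` field or are flattened into top-level fields. That policy can be shown with plain maps (the class and method names below are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates the two output shapes controlled by flattenKeyValues.
public class KvMergeDemo {
  static Map<String, Object> merge(
      Map<String, Object> jsonMap, Map<String, Object> keyValues, boolean flatten) {
    Map<String, Object> out = new LinkedHashMap<>(jsonMap);
    if (flatten) {
      out.putAll(keyValues); // each pair becomes its own top-level field
    } else {
      out.put("params", keyValues); // pairs grouped under a single field
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, Object> event = Map.of("message", "catalog created");
    Map<String, Object> kvs = Map.of("realm", "dev");
    System.out.println(merge(event, kvs, false));
    System.out.println(merge(event, kvs, true));
  }
}
```

Flattening is convenient for log search, but risks key collisions with standard fields such as `message`, which is presumably why nesting under `params` is the default.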
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.persistence; + +import com.fasterxml.jackson.annotation.JsonTypeName; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.context.RealmContext; +import io.polaris.core.persistence.LocalPolarisMetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisMetaStoreSession; +import io.polaris.core.persistence.PolarisTreeMapMetaStoreSessionImpl; +import io.polaris.core.persistence.PolarisTreeMapStore; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; +import java.util.function.Supplier; +import org.jetbrains.annotations.NotNull; + +@JsonTypeName("in-memory") +public class InMemoryPolarisMetaStoreManagerFactory + extends LocalPolarisMetaStoreManagerFactory< + PolarisTreeMapStore, PolarisTreeMapMetaStoreSessionImpl> { + Set bootstrappedRealms = new HashSet<>(); + + @Override + protected PolarisTreeMapStore createBackingStore(@NotNull PolarisDiagnostics diagnostics) { + return new PolarisTreeMapStore(diagnostics); + } + + @Override + protected PolarisMetaStoreSession createMetaStoreSession( + @NotNull PolarisTreeMapStore store, @NotNull RealmContext realmContext) { + return new PolarisTreeMapMetaStoreSessionImpl(store, storageIntegration); + } + + @Override + public synchronized PolarisMetaStoreManager getOrCreateMetaStoreManager( + RealmContext realmContext) { + String realmId = realmContext.getRealmIdentifier(); + if (!bootstrappedRealms.contains(realmId)) { + 
bootstrapRealmAndPrintCredentials(realmId); + } + return super.getOrCreateMetaStoreManager(realmContext); + } + + @Override + public synchronized Supplier getOrCreateSessionSupplier( + RealmContext realmContext) { + String realmId = realmContext.getRealmIdentifier(); + if (!bootstrappedRealms.contains(realmId)) { + bootstrapRealmAndPrintCredentials(realmId); + } + return super.getOrCreateSessionSupplier(realmContext); + } + + private void bootstrapRealmAndPrintCredentials(String realmId) { + Map results = + this.bootstrapRealms(Arrays.asList(realmId)); + bootstrappedRealms.add(realmId); + + PolarisMetaStoreManager.PrincipalSecretsResult principalSecrets = results.get(realmId); + + String msg = + String.format( + "realm: %1s root principal credentials: %2s:%3s", + realmId, + principalSecrets.getPrincipalSecrets().getPrincipalClientId(), + principalSecrets.getPrincipalSecrets().getMainSecret()); + System.out.println(msg); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/resource/TimedApi.java b/polaris-service/src/main/java/io/polaris/service/resource/TimedApi.java new file mode 100644 index 0000000000..6372210a39 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/resource/TimedApi.java @@ -0,0 +1,37 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
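The in-memory factory guards bootstrap with a "seen realms" set so the first request for a realm pays the setup cost and later requests skip it. The shape of that guard, with a counter standing in for the real bootstrap-and-print-credentials work:

```java
import java.util.HashSet;
import java.util.Set;

// Bootstrap-once guard: the expensive setup runs only on the first call per realm.
public class BootstrapOnceDemo {
  private final Set<String> bootstrappedRealms = new HashSet<>();
  int bootstrapCount = 0; // visible for demonstration; stands in for real side effects

  synchronized void getOrCreate(String realmId) {
    if (!bootstrappedRealms.contains(realmId)) {
      bootstrapCount++; // one-time setup (creating root principal credentials, etc.)
      bootstrappedRealms.add(realmId);
    }
  }

  public static void main(String[] args) {
    BootstrapOnceDemo factory = new BootstrapOnceDemo();
    factory.getOrCreate("realm1");
    factory.getOrCreate("realm1");
    System.out.println(factory.bootstrapCount); // prints 1
  }
}
```

As in the source, `synchronized` makes the check-then-act atomic; without it, two concurrent first requests for the same realm could both run bootstrap.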
+ */ +package io.polaris.service.resource; + +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; + +/** + * Annotation to mark methods for timing API calls and counting errors. The backing logic is + * controlled by {@link io.polaris.service.TimedApplicationEventListener}, therefore this annotation + * is only effective for Jersey resource methods. + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.METHOD) +public @interface TimedApi { + /** + * The name of the metric to be recorded. + * + * @return the metric name + */ + String value(); +} diff --git a/polaris-service/src/main/java/io/polaris/service/storage/PolarisStorageIntegrationProviderImpl.java b/polaris-service/src/main/java/io/polaris/service/storage/PolarisStorageIntegrationProviderImpl.java new file mode 100644 index 0000000000..b21575a968 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/storage/PolarisStorageIntegrationProviderImpl.java @@ -0,0 +1,116 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
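Because `@TimedApi` has `RUNTIME` retention, a listener can read it off the matched resource method reflectively and use its `value()` as the metric name. A self-contained sketch of that lookup (the annotation is redeclared locally; the resource class and helper are hypothetical):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class TimedApiDemo {
  // Local stand-in for io.polaris.service.resource.TimedApi.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface TimedApi {
    String value();
  }

  static class Resource {
    @TimedApi("polaris.api.createTable")
    void createTable() {}
  }

  // Conceptually what TimedApplicationEventListener does: find the annotation
  // on the resource method and take its value as the metric name.
  static String metricName(String methodName) {
    try {
      Method m = Resource.class.getDeclaredMethod(methodName);
      TimedApi timed = m.getAnnotation(TimedApi.class);
      return timed == null ? null : timed.value();
    } catch (NoSuchMethodException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(metricName("createTable")); // prints polaris.api.createTable
  }
}
```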
+ */ +package io.polaris.service.storage; + +import com.google.api.client.http.javanet.NetHttpTransport; +import com.google.auth.http.HttpTransportFactory; +import com.google.auth.oauth2.GoogleCredentials; +import com.google.cloud.ServiceOptions; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageActions; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import io.polaris.core.storage.PolarisStorageIntegrationProvider; +import io.polaris.core.storage.aws.AwsCredentialsStorageIntegration; +import io.polaris.core.storage.azure.AzureCredentialsStorageIntegration; +import io.polaris.core.storage.gcp.GcpCredentialsStorageIntegration; +import java.io.IOException; +import java.util.EnumMap; +import java.util.Map; +import java.util.Set; +import java.util.function.Supplier; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import software.amazon.awssdk.services.sts.StsClient; + +public class PolarisStorageIntegrationProviderImpl implements PolarisStorageIntegrationProvider { + + private final Supplier stsClientSupplier; + + public PolarisStorageIntegrationProviderImpl(Supplier stsClientSupplier) { + this.stsClientSupplier = stsClientSupplier; + } + + @Override + @SuppressWarnings("unchecked") + public @Nullable + PolarisStorageIntegration getStorageIntegrationForConfig( + PolarisStorageConfigurationInfo polarisStorageConfigurationInfo) { + if (polarisStorageConfigurationInfo == null) { + return null; + } + PolarisStorageIntegration storageIntegration; + switch (polarisStorageConfigurationInfo.getStorageType()) { + case S3: + storageIntegration = + (PolarisStorageIntegration) + new AwsCredentialsStorageIntegration(stsClientSupplier.get()); + break; + case GCS: + try { + storageIntegration = + (PolarisStorageIntegration) + new GcpCredentialsStorageIntegration( + 
GoogleCredentials.getApplicationDefault(), + ServiceOptions.getFromServiceLoader( + HttpTransportFactory.class, NetHttpTransport::new)); + } catch (IOException e) { + throw new RuntimeException( + "Error initializing default google credentials" + e.getMessage()); + } + break; + case AZURE: + storageIntegration = + (PolarisStorageIntegration) new AzureCredentialsStorageIntegration(); + break; + case FILE: + storageIntegration = + new PolarisStorageIntegration("file") { + @Override + public EnumMap getSubscopedCreds( + @NotNull PolarisDiagnostics diagnostics, + @NotNull T storageConfig, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations) { + return new EnumMap<>(PolarisCredentialProperty.class); + } + + @Override + public EnumMap + descPolarisStorageConfiguration( + @NotNull PolarisStorageConfigurationInfo storageConfigInfo) { + return new EnumMap<>(PolarisStorageConfigurationInfo.DescribeProperty.class); + } + + @Override + public @NotNull Map> + validateAccessToLocations( + @NotNull T storageConfig, + @NotNull Set actions, + @NotNull Set locations) { + return Map.of(); + } + }; + break; + default: + throw new IllegalArgumentException( + "Unknown storage type " + polarisStorageConfigurationInfo.getStorageType()); + } + return storageIntegration; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/ManifestFileCleanupTaskHandler.java b/polaris-service/src/main/java/io/polaris/service/task/ManifestFileCleanupTaskHandler.java new file mode 100644 index 0000000000..8947869c50 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/task/ManifestFileCleanupTaskHandler.java @@ -0,0 +1,221 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
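The storage provider above is a straightforward type-to-integration dispatch, with `FILE` getting an inline no-op integration that vends no credentials. The same shape with simplified stand-in types: the enum constants match the source, but the vendor interface and `provider` values are purely illustrative.

```java
import java.util.Map;

public class StorageDispatchDemo {
  enum StorageType { S3, GCS, AZURE, FILE }

  interface CredentialVendor {
    Map<String, String> getSubscopedCreds();
  }

  static CredentialVendor vendorFor(StorageType type) {
    switch (type) {
      case S3:
        return () -> Map.of("provider", "sts"); // stands in for the STS-backed integration
      case GCS:
        return () -> Map.of("provider", "gcp-adc"); // application-default credentials
      case AZURE:
        return () -> Map.of("provider", "azure");
      case FILE:
        return Map::of; // local filesystem access needs no vended credentials
      default:
        throw new IllegalArgumentException("Unknown storage type " + type);
    }
  }

  public static void main(String[] args) {
    System.out.println(vendorFor(StorageType.FILE).getSubscopedCreds()); // prints {}
  }
}
```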
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.task; + +import io.polaris.core.entity.AsyncTaskType; +import io.polaris.core.entity.TaskEntity; +import java.io.IOException; +import java.util.List; +import java.util.Objects; +import java.util.Spliterator; +import java.util.Spliterators; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import java.util.stream.StreamSupport; +import org.apache.commons.codec.binary.Base64; +import org.apache.iceberg.DataFile; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.ManifestFiles; +import org.apache.iceberg.ManifestReader; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.io.FileIO; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * {@link TaskHandler} responsible for deleting all of the files in a manifest and the manifest + * itself. Since data files may be present in multiple manifests across different snapshots, we + * assume a data file that doesn't exist is missing because it was already deleted by another task. 
+ */ +public class ManifestFileCleanupTaskHandler implements TaskHandler { + public static final int MAX_ATTEMPTS = 3; + public static final int FILE_DELETION_RETRY_MILLIS = 100; + private final Logger LOGGER = LoggerFactory.getLogger(ManifestFileCleanupTaskHandler.class); + private final Function fileIOSupplier; + private final ExecutorService executorService; + + public ManifestFileCleanupTaskHandler( + Function fileIOSupplier, ExecutorService executorService) { + this.fileIOSupplier = fileIOSupplier; + this.executorService = executorService; + } + + @Override + public boolean canHandleTask(TaskEntity task) { + return task.getTaskType() == AsyncTaskType.FILE_CLEANUP; + } + + @Override + public boolean handleTask(TaskEntity task) { + ManifestCleanupTask cleanupTask = task.readData(ManifestCleanupTask.class); + ManifestFile manifestFile = decodeManifestData(cleanupTask.getManifestFileData()); + TableIdentifier tableId = cleanupTask.getTableId(); + try (FileIO authorizedFileIO = fileIOSupplier.apply(task)) { + + // if the file doesn't exist, we assume that another task execution was successful, but failed + // to drop the task entity. 
Log a warning and return success + if (!TaskUtils.exists(manifestFile.path(), authorizedFileIO)) { + LOGGER + .atWarn() + .addKeyValue("manifestFile", manifestFile.path()) + .addKeyValue("tableId", tableId) + .log("Manifest cleanup task scheduled, but manifest file doesn't exist"); + return true; + } + + ManifestReader dataFiles = ManifestFiles.read(manifestFile, authorizedFileIO); + List> dataFileDeletes = + StreamSupport.stream( + Spliterators.spliteratorUnknownSize(dataFiles.iterator(), Spliterator.IMMUTABLE), + false) + .map( + file -> + tryDelete( + tableId, authorizedFileIO, manifestFile, file.path().toString(), null, 1)) + .toList(); + LOGGER.debug( + "Scheduled {} data files to be deleted from manifest {}", + dataFileDeletes.size(), + manifestFile.path()); + try { + // wait for all data files to be deleted, then wait for the manifest itself to be deleted + CompletableFuture.allOf(dataFileDeletes.toArray(CompletableFuture[]::new)) + .thenCompose( + (v) -> { + LOGGER + .atInfo() + .addKeyValue("manifestFile", manifestFile.path()) + .log("All data files in manifest deleted - deleting manifest"); + return tryDelete( + tableId, authorizedFileIO, manifestFile, manifestFile.path(), null, 1); + }) + .get(); + return true; + } catch (InterruptedException e) { + LOGGER.error( + "Interrupted exception deleting data files from manifest {}", manifestFile.path(), e); + throw new RuntimeException(e); + } catch (ExecutionException e) { + LOGGER.error("Unable to delete data files from manifest {}", manifestFile.path(), e); + return false; + } + } + } + + private static ManifestFile decodeManifestData(String manifestFileData) { + try { + return ManifestFiles.decode(Base64.decodeBase64(manifestFileData)); + } catch (IOException e) { + throw new RuntimeException("Unable to decode base64 encoded manifest", e); + } + } + + private CompletableFuture tryDelete( + TableIdentifier tableId, + FileIO fileIO, + ManifestFile manifestFile, + String dataFile, + Throwable e, + int attempt) 
{ + if (e != null && attempt <= MAX_ATTEMPTS) { + LOGGER + .atWarn() + .addKeyValue("dataFile", dataFile) + .addKeyValue("attempt", attempt) + .addKeyValue("error", e.getMessage()) + .log("Error encountered attempting to delete data file"); + } + if (attempt > MAX_ATTEMPTS && e != null) { + return CompletableFuture.failedFuture(e); + } + return CompletableFuture.runAsync( + () -> { + // totally normal for a file to already be missing, as a data file + // may be in multiple manifests. There's a possibility we check the + // file's existence, but then it is deleted before we have a chance to + // send the delete request. In such a case, we should retry + // and find + if (TaskUtils.exists(dataFile.toString(), fileIO)) { + fileIO.deleteFile(dataFile.toString()); + } else { + LOGGER + .atInfo() + .addKeyValue("dataFile", dataFile) + .addKeyValue("manifestFile", manifestFile.path()) + .addKeyValue("tableId", tableId) + .log("Manifest cleanup task scheduled, but data file doesn't exist"); + } + }, + executorService) + .exceptionallyComposeAsync( + newEx -> { + LOGGER + .atWarn() + .addKeyValue("dataFile", dataFile) + .addKeyValue("tableIdentifer", tableId) + .addKeyValue("manifestFile", manifestFile.path()) + .log("Exception caught deleting data file from manifest", newEx); + return tryDelete(tableId, fileIO, manifestFile, dataFile, newEx, attempt + 1); + }, + CompletableFuture.delayedExecutor( + FILE_DELETION_RETRY_MILLIS, TimeUnit.MILLISECONDS, executorService)); + } + + /** Serialized Task data sent from the {@link TableCleanupTaskHandler} */ + public static final class ManifestCleanupTask { + private TableIdentifier tableId; + private String manifestFileData; + + public ManifestCleanupTask(TableIdentifier tableId, String manifestFileData) { + this.tableId = tableId; + this.manifestFileData = manifestFileData; + } + + public ManifestCleanupTask() {} + + public TableIdentifier getTableId() { + return tableId; + } + + public void setTableId(TableIdentifier tableId) { + 
this.tableId = tableId; + } + + public String getManifestFileData() { + return manifestFileData; + } + + public void setManifestFileData(String manifestFileData) { + this.manifestFileData = manifestFileData; + } + + @Override + public boolean equals(Object object) { + if (this == object) return true; + if (!(object instanceof ManifestCleanupTask that)) return false; + return Objects.equals(tableId, that.tableId) + && Objects.equals(manifestFileData, that.manifestFileData); + } + + @Override + public int hashCode() { + return Objects.hash(tableId, manifestFileData); + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/TableCleanupTaskHandler.java b/polaris-service/src/main/java/io/polaris/service/task/TableCleanupTaskHandler.java new file mode 100644 index 0000000000..d281ff13b3 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/task/TableCleanupTaskHandler.java @@ -0,0 +1,168 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
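`tryDelete` above implements bounded async retry by re-entering itself from `exceptionallyComposeAsync` on a delayed executor. The same shape in isolation, with a generic `Runnable` standing in for the FileIO delete and the common pool in place of the handler's executor service:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Bounded async retry: run the action; on failure, schedule the next attempt
// after a short delay; once MAX_ATTEMPTS is exceeded, surface the last failure.
public class RetryDemo {
  static final int MAX_ATTEMPTS = 3;

  static CompletableFuture<Void> tryRun(Runnable action, Throwable prior, int attempt) {
    if (attempt > MAX_ATTEMPTS && prior != null) {
      return CompletableFuture.failedFuture(prior); // retries exhausted
    }
    return CompletableFuture.runAsync(action)
        .exceptionallyComposeAsync(
            e -> tryRun(action, e, attempt + 1),
            // brief backoff before the next attempt
            CompletableFuture.delayedExecutor(10, TimeUnit.MILLISECONDS));
  }

  public static void main(String[] args) {
    AtomicInteger calls = new AtomicInteger();
    Runnable flaky =
        () -> {
          if (calls.incrementAndGet() < 3) {
            throw new RuntimeException("transient failure");
          }
        };
    tryRun(flaky, null, 1).join(); // succeeds on the third attempt
    System.out.println("attempts: " + calls.get()); // prints attempts: 3
  }
}
```

As in the handler, the attempt counter travels as a parameter, so each scheduled retry is a fresh future and no shared mutable retry state is needed.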
+ */ +package io.polaris.service.task; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.AsyncTaskType; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.TableLikeEntity; +import io.polaris.core.entity.TaskEntity; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.List; +import java.util.UUID; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.function.Function; +import java.util.stream.Collectors; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.TableMetadataParser; +import org.apache.iceberg.io.FileIO; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Table cleanup handler resolves the latest {@link TableMetadata} file for a dropped table and + * schedules a deletion task for each Snapshot found in the {@link TableMetadata}. Manifest + * cleanup tasks are scheduled in a batch so tasks should be stored atomically. 
+ */ +public class TableCleanupTaskHandler implements TaskHandler { + private static final Logger LOGGER = LoggerFactory.getLogger(TableCleanupTaskHandler.class); + private final TaskExecutor taskExecutor; + private final MetaStoreManagerFactory metaStoreManagerFactory; + private final Function<TaskEntity, FileIO> fileIOSupplier; + private final ExecutorService executorService = Executors.newVirtualThreadPerTaskExecutor(); + + public TableCleanupTaskHandler( + TaskExecutor taskExecutor, + MetaStoreManagerFactory metaStoreManagerFactory, + Function<TaskEntity, FileIO> fileIOSupplier) { + this.taskExecutor = taskExecutor; + this.metaStoreManagerFactory = metaStoreManagerFactory; + this.fileIOSupplier = fileIOSupplier; + } + + @Override + public boolean canHandleTask(TaskEntity task) { + return task.getTaskType() == AsyncTaskType.ENTITY_CLEANUP_SCHEDULER && taskEntityIsTable(task); + } + + private boolean taskEntityIsTable(TaskEntity task) { + PolarisEntity entity = PolarisEntity.of((task.readData(PolarisBaseEntity.class))); + return entity.getType().equals(PolarisEntityType.TABLE_LIKE); + } + + @Override + public boolean handleTask(TaskEntity cleanupTask) { + PolarisBaseEntity entity = cleanupTask.readData(PolarisBaseEntity.class); + PolarisMetaStoreManager metaStoreManager = + metaStoreManagerFactory.getOrCreateMetaStoreManager( + CallContext.getCurrentContext().getRealmContext()); + TableLikeEntity tableEntity = TableLikeEntity.of(entity); + PolarisCallContext polarisCallContext = CallContext.getCurrentContext().getPolarisCallContext(); + LOGGER + .atInfo() + .addKeyValue("tableIdentifier", tableEntity.getTableIdentifier()) + .addKeyValue("metadataLocation", tableEntity.getMetadataLocation()) + .log("Handling table metadata cleanup task"); + + // It's likely the cleanupTask has already been completed, but wasn't dropped successfully.
+ // Log a warning and move on + try (FileIO fileIO = fileIOSupplier.apply(cleanupTask)) { + if (!TaskUtils.exists(tableEntity.getMetadataLocation(), fileIO)) { + LOGGER + .atWarn() + .addKeyValue("tableIdentifier", tableEntity.getTableIdentifier()) + .addKeyValue("metadataLocation", tableEntity.getMetadataLocation()) + .log("Table metadata cleanup scheduled, but metadata file does not exist"); + return true; + } + + TableMetadata tableMetadata = + TableMetadataParser.read(fileIO, tableEntity.getMetadataLocation()); + + // read the manifest list for each snapshot. dedupe the manifest files and schedule a + // cleanupTask for each manifest file and its data files to be deleted + List<TaskEntity> taskEntities = + tableMetadata.snapshots().stream() + .flatMap(sn -> sn.allManifests(fileIO).stream()) + // distinct by manifest path, since multiple snapshots will contain the same + // manifest + .collect(Collectors.toMap(ManifestFile::path, Function.identity(), (mf1, mf2) -> mf1)) + .values() + .stream() + .filter(mf -> TaskUtils.exists(mf.path(), fileIO)) + .map( + mf -> { + // append a random uuid to the task name to avoid any potential conflict when + // storing the task entity. It's better to have duplicate tasks than to risk + // not storing the rest of the task entities. If a duplicate deletion task is + // queued, it will check for the manifest file's existence and simply exit if + // the task has already been handled.
+ String taskName = + cleanupTask.getName() + "_" + mf.path() + "_" + UUID.randomUUID(); + LOGGER + .atDebug() + .addKeyValue("taskName", taskName) + .addKeyValue("tableIdentifier", tableEntity.getTableIdentifier()) + .addKeyValue("metadataLocation", tableEntity.getMetadataLocation()) + .addKeyValue("manifestFile", mf.path()) + .log("Queueing task to delete manifest file"); + return new TaskEntity.Builder() + .setName(taskName) + .setId(metaStoreManager.generateNewEntityId(polarisCallContext).getId()) + .setCreateTimestamp(polarisCallContext.getClock().millis()) + .withTaskType(AsyncTaskType.FILE_CLEANUP) + .withData( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableEntity.getTableIdentifier(), TaskUtils.encodeManifestFile(mf))) + // copy the internal properties, which will have storage info + .setInternalProperties(cleanupTask.getInternalPropertiesAsMap()) + .build(); + }) + .toList(); + List<PolarisBaseEntity> createdTasks = + metaStoreManager + .createEntitiesIfNotExist(polarisCallContext, null, taskEntities) + .getEntities(); + if (createdTasks != null) { + LOGGER + .atInfo() + .addKeyValue("tableIdentifier", tableEntity.getTableIdentifier()) + .addKeyValue("metadataLocation", tableEntity.getMetadataLocation()) + .addKeyValue("taskCount", taskEntities.size()) + .log("Successfully queued tasks to delete manifests - deleting table metadata file"); + for (PolarisBaseEntity createdTask : createdTasks) { + taskExecutor.addTaskHandlerContext(createdTask.getId(), CallContext.getCurrentContext()); + } + fileIO.deleteFile(tableEntity.getMetadataLocation()); + + return true; + } + } + return false; + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/TaskExecutor.java b/polaris-service/src/main/java/io/polaris/service/task/TaskExecutor.java new file mode 100644 index 0000000000..d93f03a22a --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/task/TaskExecutor.java
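The handler above dedupes manifest files by path before queueing cleanup tasks, because every snapshot's manifest list can reference the same manifest. The dedupe step can be sketched in isolation with `Collectors.toMap` and a keep-first merge function; plain strings stand in for Iceberg's `ManifestFile`, and the class and method names here are illustrative, not part of the PR:

```java
import java.util.Collection;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ManifestDedupe {
  // Flatten all snapshots' manifest paths and keep the first entry seen per path,
  // mirroring the toMap(ManifestFile::path, identity(), (mf1, mf2) -> mf1) call.
  static Collection<String> dedupeByPath(List<List<String>> snapshots) {
    return snapshots.stream()
        .flatMap(List::stream)
        .collect(Collectors.toMap(Function.identity(), Function.identity(), (a, b) -> a))
        .values();
  }

  public static void main(String[] args) {
    var snapshots =
        List.of(
            List.of("m1.avro", "m2.avro"),
            List.of("m2.avro", "m3.avro")); // m2.avro appears in both snapshots
    System.out.println(dedupeByPath(snapshots).size()); // 3 distinct manifests
  }
}
```

Without the merge function, `Collectors.toMap` would throw `IllegalStateException` on the duplicate key, which is why the keep-first lambda matters here.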
@@ -0,0 +1,26 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.task; + +import io.polaris.core.context.CallContext; + +/** + * Execute a task asynchronously with a provided context. The context must be cloned so that callers + * can close their own context and closeables + */ +public interface TaskExecutor { + void addTaskHandlerContext(long taskEntityId, CallContext callContext); +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/TaskExecutorImpl.java b/polaris-service/src/main/java/io/polaris/service/task/TaskExecutorImpl.java new file mode 100644 index 0000000000..98734fab59 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/task/TaskExecutorImpl.java @@ -0,0 +1,138 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.polaris.service.task; + +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.TaskEntity; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.TimeUnit; +import org.jetbrains.annotations.NotNull; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Given a list of registered {@link TaskHandler}s, execute tasks asynchronously with the provided + * {@link CallContext}. + */ +public class TaskExecutorImpl implements TaskExecutor { + private static final Logger LOGGER = LoggerFactory.getLogger(TaskExecutorImpl.class); + public static final long TASK_RETRY_DELAY = 1000; + private final ExecutorService executorService; + private final MetaStoreManagerFactory metaStoreManagerFactory; + private final List<TaskHandler> taskHandlers = new ArrayList<>(); + + public TaskExecutorImpl( + ExecutorService executorService, MetaStoreManagerFactory metaStoreManagerFactory) { + this.executorService = executorService; + this.metaStoreManagerFactory = metaStoreManagerFactory; + } + + /** + * Add a {@link TaskHandler}. {@link TaskEntity}s will be tested against the {@link + * TaskHandler#canHandleTask(TaskEntity)} method and will be handled by the first handler that + * responds true. + * + * @param taskHandler + */ + public void addTaskHandler(TaskHandler taskHandler) { + taskHandlers.add(taskHandler); + } + + /** + * Register a {@link CallContext} for a specific task id. That task will be loaded and executed + * asynchronously with a clone of the provided {@link CallContext}.
+ * + * @param taskEntityId + * @param callContext + */ + @Override + public void addTaskHandlerContext(long taskEntityId, CallContext callContext) { + CallContext clone = CallContext.copyOf(callContext); + tryHandleTask(taskEntityId, clone, null, 1); + } + + private @NotNull CompletableFuture<Void> tryHandleTask( + long taskEntityId, CallContext clone, Throwable e, int attempt) { + if (attempt > 3) { + return CompletableFuture.failedFuture(e); + } + return CompletableFuture.runAsync( + () -> { + // set the call context INSIDE the async task + try (CallContext ctx = CallContext.setCurrentContext(CallContext.copyOf(clone))) { + PolarisMetaStoreManager metaStoreManager = + metaStoreManagerFactory.getOrCreateMetaStoreManager(ctx.getRealmContext()); + PolarisBaseEntity taskEntity = + metaStoreManager + .loadEntity(ctx.getPolarisCallContext(), 0L, taskEntityId) + .getEntity(); + if (!PolarisEntityType.TASK.equals(taskEntity.getType())) { + throw new IllegalArgumentException("Provided taskId must be a task entity type"); + } + TaskEntity task = TaskEntity.of(taskEntity); + Optional<TaskHandler> handlerOpt = + taskHandlers.stream().filter(th -> th.canHandleTask(task)).findFirst(); + if (handlerOpt.isEmpty()) { + LOGGER + .atWarn() + .addKeyValue("taskEntityId", taskEntityId) + .addKeyValue("taskType", task.getTaskType()) + .log("Unable to find handler for task type"); + return; + } + TaskHandler handler = handlerOpt.get(); + boolean success = handler.handleTask(task); + if (success) { + LOGGER + .atInfo() + .addKeyValue("taskEntityId", taskEntityId) + .addKeyValue("handlerClass", handler.getClass()) + .log("Task successfully handled"); + metaStoreManager.dropEntityIfExists( + ctx.getPolarisCallContext(), + null, + PolarisEntity.toCore(taskEntity), + Map.of(), + false); + } else { + LOGGER + .atWarn() + .addKeyValue("taskEntityId", taskEntityId) + .addKeyValue("taskEntityName", taskEntity.getName()) + .log("Unable to execute async task"); + } + } + }, + executorService) +
.exceptionallyComposeAsync( + (t) -> { + LOGGER.warn("Failed to handle task entity id {}", taskEntityId, t); + return tryHandleTask(taskEntityId, clone, t, attempt + 1); + }, + CompletableFuture.delayedExecutor( + TASK_RETRY_DELAY * (long) attempt, TimeUnit.MILLISECONDS, executorService)); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/TaskFileIOSupplier.java b/polaris-service/src/main/java/io/polaris/service/task/TaskFileIOSupplier.java new file mode 100644 index 0000000000..b8a1007298 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/task/TaskFileIOSupplier.java @@ -0,0 +1,62 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
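TaskExecutorImpl's retry loop above composes a delayed re-submission with `CompletableFuture.delayedExecutor` and `exceptionallyComposeAsync`. A minimal, self-contained version of that pattern is sketched below; the class name, the delay values, and the 3-attempt cap are illustrative stand-ins, not the PR's actual constants:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryDemo {
  // Daemon threads so the JVM can exit without an explicit shutdown.
  static final ExecutorService POOL =
      Executors.newFixedThreadPool(
          2,
          r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
          });

  // Re-run the task after a linearly growing delay, giving up after maxAttempts.
  static CompletableFuture<Void> tryRun(
      Runnable task, Throwable previous, int attempt, int maxAttempts) {
    if (attempt > maxAttempts) {
      return CompletableFuture.failedFuture(previous);
    }
    return CompletableFuture.runAsync(task, POOL)
        .exceptionallyComposeAsync(
            t -> tryRun(task, t, attempt + 1, maxAttempts),
            CompletableFuture.delayedExecutor(10L * attempt, TimeUnit.MILLISECONDS, POOL));
  }

  public static void main(String[] args) {
    AtomicInteger calls = new AtomicInteger();
    // Fails twice, then succeeds on the third attempt.
    Runnable flaky =
        () -> {
          if (calls.incrementAndGet() < 3) throw new RuntimeException("transient");
        };
    tryRun(flaky, null, 1, 3).join();
    System.out.println("attempts: " + calls.get()); // attempts: 3
  }
}
```

The key design choice, in both the sketch and the real executor, is that the retry is scheduled on a delayed executor rather than sleeping a pool thread, so no worker is blocked while waiting for the next attempt.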
+ */ +package io.polaris.service.task; + +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisTaskConstants; +import io.polaris.core.entity.TaskEntity; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import java.util.HashMap; +import java.util.Map; +import java.util.Set; +import java.util.function.Function; +import org.apache.hadoop.conf.Configuration; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.CatalogUtil; +import org.apache.iceberg.io.FileIO; + +public class TaskFileIOSupplier implements Function<TaskEntity, FileIO> { + private final MetaStoreManagerFactory metaStoreManagerFactory; + + public TaskFileIOSupplier(MetaStoreManagerFactory metaStoreManagerFactory) { + this.metaStoreManagerFactory = metaStoreManagerFactory; + } + + @Override + public FileIO apply(TaskEntity task) { + Map<String, String> internalProperties = task.getInternalPropertiesAsMap(); + String location = internalProperties.get(PolarisTaskConstants.STORAGE_LOCATION); + PolarisMetaStoreManager metaStoreManager = + metaStoreManagerFactory.getOrCreateMetaStoreManager( + CallContext.getCurrentContext().getRealmContext()); + Map<String, String> properties = new HashMap<>(internalProperties); + properties.putAll( + metaStoreManagerFactory + .getOrCreateStorageCredentialCache(CallContext.getCurrentContext().getRealmContext()) + .getOrGenerateSubScopeCreds( + metaStoreManager, + CallContext.getCurrentContext().getPolarisCallContext(), + task, + true, + Set.of(location), + Set.of(location))); + String ioImpl = + properties.getOrDefault( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.io.ResolvingFileIO"); + return CatalogUtil.loadFileIO(ioImpl, properties, new Configuration()); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/TaskHandler.java b/polaris-service/src/main/java/io/polaris/service/task/TaskHandler.java new file mode 100644 index 0000000000..5be15212e3 --- /dev/null +++
b/polaris-service/src/main/java/io/polaris/service/task/TaskHandler.java @@ -0,0 +1,24 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.task; + +import io.polaris.core.entity.TaskEntity; + +public interface TaskHandler { + boolean canHandleTask(TaskEntity task); + + boolean handleTask(TaskEntity task); +} diff --git a/polaris-service/src/main/java/io/polaris/service/task/TaskUtils.java b/polaris-service/src/main/java/io/polaris/service/task/TaskUtils.java new file mode 100644 index 0000000000..c2a7d25ca9 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/task/TaskUtils.java @@ -0,0 +1,53 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.task; + +import java.io.IOException; +import org.apache.commons.codec.binary.Base64; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.ManifestFiles; +import org.apache.iceberg.exceptions.NotFoundException; +import org.apache.iceberg.io.FileIO; + +public class TaskUtils { + static boolean exists(String path, FileIO fileIO) { + try { + return fileIO.newInputFile(path).exists(); + } catch (NotFoundException e) { + // in-memory FileIO throws this exception + return false; + } catch (Exception e) { + // typically, clients will catch a 404 and simply return false, so any other exception + // means something probably went wrong + throw new RuntimeException(e); + } + } + + /** + * base64 encode the serialized manifest file entry so we can deserialize it and read the manifest + * in the {@link ManifestFileCleanupTaskHandler} + * + * @param mf + * @return + */ + static String encodeManifestFile(ManifestFile mf) { + try { + return Base64.encodeBase64String(ManifestFiles.encode(mf)); + } catch (IOException e) { + throw new RuntimeException("Unable to encode binary data in memory", e); + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/tracing/HeadersMapAccessor.java b/polaris-service/src/main/java/io/polaris/service/tracing/HeadersMapAccessor.java new file mode 100644 index 0000000000..4f1677d9b0 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/tracing/HeadersMapAccessor.java @@ -0,0 +1,54 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
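`TaskUtils.encodeManifestFile` above base64-encodes the Avro-serialized manifest so the binary data can ride inside the JSON task payload. The round trip can be sketched with the JDK's `java.util.Base64` standing in for the commons-codec call, and plain bytes standing in for `ManifestFiles.encode`/`decode`; the class and method names are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PayloadCodec {
  // Encode opaque binary (e.g. a serialized manifest) into a JSON-safe string.
  static String encode(byte[] raw) {
    return Base64.getEncoder().encodeToString(raw);
  }

  // Decode the wire string back into the original bytes.
  static byte[] decode(String payload) {
    return Base64.getDecoder().decode(payload);
  }

  public static void main(String[] args) {
    byte[] raw = "avro-manifest-bytes".getBytes(StandardCharsets.UTF_8);
    String wire = encode(raw);
    // Round trip restores the original bytes exactly.
    System.out.println(new String(decode(wire), StandardCharsets.UTF_8)); // avro-manifest-bytes
  }
}
```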
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.tracing; + +import io.opentelemetry.context.propagation.TextMapGetter; +import io.opentelemetry.context.propagation.TextMapSetter; +import jakarta.servlet.http.HttpServletRequest; +import java.net.http.HttpRequest; +import java.util.Spliterator; +import java.util.Spliterators; +import java.util.stream.StreamSupport; +import org.jetbrains.annotations.Nullable; + +/** + * Implementation of {@link TextMapSetter} and {@link TextMapGetter} that extracts headers from an + * {@link HttpServletRequest} and sets headers on an {@link HttpRequest.Builder}. + */ +public class HeadersMapAccessor + implements TextMapSetter<HttpRequest.Builder>, TextMapGetter<HttpServletRequest> { + @Override + public Iterable<String> keys(HttpServletRequest carrier) { + return StreamSupport.stream( + Spliterators.spliteratorUnknownSize( + carrier.getHeaderNames().asIterator(), Spliterator.IMMUTABLE), + false) + .toList(); + } + + @Nullable + @Override + public String get(@Nullable HttpServletRequest carrier, String key) { + return carrier == null ?
null : carrier.getHeader(key); + } + + @Override + public void set(@Nullable HttpRequest.Builder carrier, String key, String value) { + if (carrier != null) { + carrier.setHeader(key, value); + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/tracing/OpenTelemetryAware.java b/polaris-service/src/main/java/io/polaris/service/tracing/OpenTelemetryAware.java new file mode 100644 index 0000000000..ffdfacdd46 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/tracing/OpenTelemetryAware.java @@ -0,0 +1,23 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.tracing; + +import io.opentelemetry.api.OpenTelemetry; + +/** Allows setting a configured instance of {@link OpenTelemetry} */ +public interface OpenTelemetryAware { + void setOpenTelemetry(OpenTelemetry openTelemetry); +} diff --git a/polaris-service/src/main/java/io/polaris/service/tracing/TracingFilter.java b/polaris-service/src/main/java/io/polaris/service/tracing/TracingFilter.java new file mode 100644 index 0000000000..4c9f06c464 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/tracing/TracingFilter.java @@ -0,0 +1,97 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
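`HeadersMapAccessor#keys` above bridges the servlet API's legacy `Enumeration` of header names into a `List` via a spliterator. That conversion works for any legacy iterator and can be exercised standalone; the class name below is illustrative:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.StreamSupport;

public class EnumerationToList {
  // Same conversion HeadersMapAccessor#keys uses: wrap a legacy iterator in a stream.
  static List<String> toList(Iterator<String> names) {
    return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(names, Spliterator.IMMUTABLE), false)
        .toList();
  }

  public static void main(String[] args) {
    // Enumeration.asIterator() (Java 9+) adapts the servlet-era type to Iterator.
    var names = Collections.enumeration(List.of("Accept", "Authorization")).asIterator();
    System.out.println(toList(names)); // [Accept, Authorization]
  }
}
```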
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.tracing; + +import io.opentelemetry.api.OpenTelemetry; +import io.opentelemetry.api.trace.Span; +import io.opentelemetry.api.trace.SpanKind; +import io.opentelemetry.api.trace.Tracer; +import io.opentelemetry.context.Context; +import io.opentelemetry.context.Scope; +import io.opentelemetry.semconv.HttpAttributes; +import io.opentelemetry.semconv.ServerAttributes; +import io.opentelemetry.semconv.UrlAttributes; +import io.polaris.core.context.CallContext; +import jakarta.annotation.Priority; +import jakarta.servlet.Filter; +import jakarta.servlet.FilterChain; +import jakarta.servlet.ServletException; +import jakarta.servlet.ServletRequest; +import jakarta.servlet.ServletResponse; +import jakarta.servlet.http.HttpServletRequest; +import jakarta.ws.rs.Priorities; +import java.io.IOException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.slf4j.MDC; + +/** + * Servlet {@link Filter} that starts an OpenTelemetry {@link Span}, propagating the calling context + * from HTTP headers, if present. "spanId" and "traceId" are added to the logging MDC so that all + * logs recorded in the request will contain the current span and trace id. Downstream HTTP calls + * should use the OpenTelemetry {@link io.opentelemetry.context.propagation.ContextPropagators} to + * include the current trace id in the request headers.
+ */ +@Priority(Priorities.AUTHENTICATION - 1) +public class TracingFilter implements Filter { + private final Logger LOGGER = LoggerFactory.getLogger(TracingFilter.class); + private final OpenTelemetry openTelemetry; + + public TracingFilter(OpenTelemetry openTelemetry) { + this.openTelemetry = openTelemetry; + } + + @Override + public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) + throws IOException, ServletException { + HttpServletRequest httpRequest = (HttpServletRequest) request; + Context extractedContext = + openTelemetry + .getPropagators() + .getTextMapPropagator() + .extract(Context.current(), httpRequest, new HeadersMapAccessor()); + try (Scope scope = extractedContext.makeCurrent()) { + Tracer tracer = openTelemetry.getTracer(httpRequest.getPathInfo()); + Span span = + tracer + .spanBuilder(httpRequest.getMethod() + " " + httpRequest.getPathInfo()) + .setSpanKind(SpanKind.SERVER) + .setAttribute( + "realm", CallContext.getCurrentContext().getRealmContext().getRealmIdentifier()) + .startSpan(); + + try (Scope ignored = span.makeCurrent(); + MDC.MDCCloseable spanId = MDC.putCloseable("spanId", span.getSpanContext().getSpanId()); + MDC.MDCCloseable traceId = + MDC.putCloseable("traceId", span.getSpanContext().getTraceId()); ) { + LOGGER + .atInfo() + .addKeyValue("spanId", span.getSpanContext().getSpanId()) + .addKeyValue("traceId", span.getSpanContext().getTraceId()) + .addKeyValue("parentContext", extractedContext) + .log("Started span with parent"); + span.setAttribute(HttpAttributes.HTTP_REQUEST_METHOD, httpRequest.getMethod()); + span.setAttribute(ServerAttributes.SERVER_ADDRESS, httpRequest.getServerName()); + span.setAttribute(UrlAttributes.URL_SCHEME, httpRequest.getScheme()); + span.setAttribute(UrlAttributes.URL_PATH, httpRequest.getPathInfo()); + + chain.doFilter(request, response); + } finally { + span.end(); + } + } + } +} diff --git 
a/polaris-service/src/main/java/io/polaris/service/types/CommitTableRequest.java b/polaris-service/src/main/java/io/polaris/service/types/CommitTableRequest.java new file mode 100644 index 0000000000..55428e7b1c --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/types/CommitTableRequest.java @@ -0,0 +1,20 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.types; + +import org.apache.iceberg.rest.requests.UpdateTableRequest; + +public class CommitTableRequest extends UpdateTableRequest {} diff --git a/polaris-service/src/main/java/io/polaris/service/types/CommitViewRequest.java b/polaris-service/src/main/java/io/polaris/service/types/CommitViewRequest.java new file mode 100644 index 0000000000..fa4ca39531 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/types/CommitViewRequest.java @@ -0,0 +1,20 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.types; + +import org.apache.iceberg.rest.requests.UpdateTableRequest; + +public class CommitViewRequest extends UpdateTableRequest {} diff --git a/polaris-service/src/main/java/io/polaris/service/types/NotificationRequest.java b/polaris-service/src/main/java/io/polaris/service/types/NotificationRequest.java new file mode 100644 index 0000000000..c13c8993c3 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/types/NotificationRequest.java @@ -0,0 +1,91 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.types; + +import com.fasterxml.jackson.annotation.JsonProperty; +import io.swagger.annotations.ApiModelProperty; +import java.util.Objects; + +@jakarta.annotation.Generated( + value = "org.openapitools.codegen.languages.JavaResteasyServerCodegen", + date = "2024-05-25T00:53:53.298853423Z[UTC]", + comments = "Generator version: 7.5.0") +public class NotificationRequest { + + private NotificationType notificationType; + private TableUpdateNotification payload; + + /** */ + @ApiModelProperty(required = true, value = "") + @JsonProperty("notification-type") + public NotificationType getNotificationType() { + return notificationType; + } + + public void setNotificationType(NotificationType notificationType) { + this.notificationType = notificationType; + } + + /** */ + @ApiModelProperty(value = "") + @JsonProperty("payload") + public TableUpdateNotification getPayload() { + return payload; + } + + public void setPayload(TableUpdateNotification payload) { + this.payload = payload; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + NotificationRequest notificationRequest = (NotificationRequest) o; + return Objects.equals(this.notificationType, notificationRequest.notificationType) + && Objects.equals(this.payload, notificationRequest.payload); + } + + @Override + public int hashCode() { + return Objects.hash(notificationType, payload); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("class NotificationRequest {\n"); + + sb.append(" notificationType: ").append(toIndentedString(notificationType)).append("\n"); + sb.append(" payload: ").append(toIndentedString(payload)).append("\n"); + sb.append("}"); + return sb.toString(); + } + + /** + * Convert the given object to string with each line indented by 4 spaces (except the first line). 
+ */ + private String toIndentedString(Object o) { + if (o == null) { + return "null"; + } + return o.toString().replace("\n", "\n "); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/types/NotificationType.java b/polaris-service/src/main/java/io/polaris/service/types/NotificationType.java new file mode 100644 index 0000000000..245e675295 --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/types/NotificationType.java @@ -0,0 +1,89 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.types; + +import java.util.Arrays; +import java.util.Map; +import java.util.Optional; +import java.util.stream.Collectors; + +public enum NotificationType { + + /** Supported notification types for the update table notification. */ + UNKNOWN(0, "UNKNOWN"), + CREATE(1, "CREATE"), + UPDATE(2, "UPDATE"), + DROP(3, "DROP"); + + NotificationType(int id, String displayName) { + this.id = id; + this.displayName = displayName; + } + + /** Internal id of the notification type. */ + private final int id; + + /** Display name of the notification type */ + private final String displayName; + + /** Internal ids and their corresponding sources of notification types. 
*/ + private static final Map<Integer, NotificationType> idToNotificationTypeMap = + Arrays.stream(NotificationType.values()) + .collect(Collectors.toMap(NotificationType::getId, tf -> tf)); + + /** + * Lookup a notification type using its internal id representation + * + * @param id internal id of the notification type + * @return The notification type, if it exists, or empty + */ + public static Optional<NotificationType> lookupById(int id) { + return Optional.ofNullable(idToNotificationTypeMap.get(id)); + } + + /** + * Return the internal id of the notification type + * + * @return id + */ + public int getId() { + return id; + } + + /** Return the display name of the notification type */ + public String getDisplayName() { + return displayName; + } + + /** + * Find the notification type by name, or return an empty optional + * + * @param name name of the notification type + * @return The notification type, if it exists, or empty + */ + public static Optional<NotificationType> lookupByName(String name) { + if (name == null) { + return Optional.empty(); + } + + for (NotificationType notificationType : NotificationType.values()) { + if (notificationType.name().equalsIgnoreCase(name)) { + return Optional.of(notificationType); + } + } + return Optional.empty(); + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/types/TableUpdateNotification.java b/polaris-service/src/main/java/io/polaris/service/types/TableUpdateNotification.java new file mode 100644 index 0000000000..0966c8b60a --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/types/TableUpdateNotification.java @@ -0,0 +1,196 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.types; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.google.common.base.Preconditions; +import io.swagger.annotations.ApiModelProperty; +import java.util.Objects; +import org.apache.iceberg.TableMetadata; + +public class TableUpdateNotification { + + private String tableName; + private Long timestamp; + private String tableUuid; + private String metadataLocation; + private TableMetadata metadata; + + /** */ + @ApiModelProperty(required = true, value = "") + @JsonProperty("table-name") + public String getTableName() { + return tableName; + } + + public void setTableName(String tableName) { + this.tableName = tableName; + } + + /** */ + @ApiModelProperty(required = true, value = "") + @JsonProperty("timestamp") + public Long getTimestamp() { + return timestamp; + } + + public void setTimestamp(Long timestamp) { + this.timestamp = timestamp; + } + + /** */ + @ApiModelProperty(required = true, value = "") + @JsonProperty("table-uuid") + public String getTableUuid() { + return tableUuid; + } + + public void setTableUuid(String tableUuid) { + this.tableUuid = tableUuid; + } + + /** */ + @ApiModelProperty(required = true, value = "") + @JsonProperty("metadata-location") + public String getMetadataLocation() { + return metadataLocation; + } + + public void setMetadataLocation(String metadataLocation) { + this.metadataLocation = metadataLocation; + } + + /** */ + @ApiModelProperty(required = true, value = "") + @JsonProperty("metadata") + public TableMetadata getMetadata() { + return metadata; + } + + 
public void setMetadata(TableMetadata metadata) { + this.metadata = metadata; + } + + public TableUpdateNotification() {} + + public TableUpdateNotification( + String tableName, + Long timestamp, + String tableUuid, + String metadataLocation, + TableMetadata metadata) { + this.tableName = tableName; + this.timestamp = timestamp; + this.tableUuid = tableUuid; + this.metadataLocation = metadataLocation; + this.metadata = metadata; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + TableUpdateNotification tableUpdateNotification = (TableUpdateNotification) o; + return Objects.equals(this.tableName, tableUpdateNotification.tableName) + && Objects.equals(this.timestamp, tableUpdateNotification.timestamp) + && Objects.equals(this.tableUuid, tableUpdateNotification.tableUuid) + && Objects.equals(this.metadataLocation, tableUpdateNotification.metadataLocation) + && Objects.equals(this.metadata, tableUpdateNotification.metadata); + } + + @Override + public int hashCode() { + return Objects.hash(tableName, timestamp, tableUuid, metadataLocation, metadata); + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("class TableUpdateNotification {\n"); + + sb.append(" tableName: ").append(toIndentedString(tableName)).append("\n"); + sb.append(" timestamp: ").append(toIndentedString(timestamp)).append("\n"); + sb.append(" tableUuid: ").append(toIndentedString(tableUuid)).append("\n"); + sb.append(" metadataLocation: ").append(toIndentedString(metadataLocation)).append("\n"); + sb.append(" metadata: ").append(toIndentedString(metadata)).append("\n"); + sb.append("}"); + return sb.toString(); + } + + /** + * Convert the given object to string with each line indented by 4 spaces (except the first line). 
+ */ + private String toIndentedString(Object o) { + if (o == null) { + return "null"; + } + return o.toString().replace("\n", "\n    "); + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + + private String tableName; + private Long timestamp; + private String tableUuid; + private String metadataLocation; + private TableMetadata metadata; + + private Builder() {} + + public final Builder tableName(String tableName) { + Preconditions.checkArgument(tableName != null, "Null table name supplied"); + this.tableName = tableName; + return this; + } + + public final Builder timestamp(Long timestamp) { + Preconditions.checkArgument(timestamp != null, "timestamp can't be null"); + this.timestamp = timestamp; + return this; + } + + public final Builder metadataLocation(String metadataLocation) { + Preconditions.checkArgument(metadataLocation != null, "metadataLocation can't be null"); + this.metadataLocation = metadataLocation; + return this; + } + + public final Builder metadata(TableMetadata metadata) { + this.metadata = metadata; + return this; + } + + public final Builder tableUuid(String tableUuid) { + Preconditions.checkArgument(tableUuid != null, "tableUuid can't be null"); + this.tableUuid = tableUuid; + return this; + } + + public TableUpdateNotification build() { + return new TableUpdateNotification( + tableName, timestamp, tableUuid, metadataLocation, metadata); + } + } +} diff --git a/polaris-service/src/main/java/io/polaris/service/types/TokenType.java b/polaris-service/src/main/java/io/polaris/service/types/TokenType.java new file mode 100644 index 0000000000..9709f1448d --- /dev/null +++ b/polaris-service/src/main/java/io/polaris/service/types/TokenType.java @@ -0,0 +1,63 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.types; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonValue; + +/** + * Token type identifier, from RFC 8693 Section 3 See + * https://datatracker.ietf.org/doc/html/rfc8693#section-3 + */ +public enum TokenType { + ACCESS_TOKEN("urn:ietf:params:oauth:token-type:access_token"), + + REFRESH_TOKEN("urn:ietf:params:oauth:token-type:refresh_token"), + + ID_TOKEN("urn:ietf:params:oauth:token-type:id_token"), + + SAML1("urn:ietf:params:oauth:token-type:saml1"), + + SAML2("urn:ietf:params:oauth:token-type:saml2"), + + JWT("urn:ietf:params:oauth:token-type:jwt"); + + private String value; + + TokenType(String value) { + this.value = value; + } + + @JsonValue + public String getValue() { + return value; + } + + @Override + public String toString() { + return String.valueOf(value); + } + + @JsonCreator + public static TokenType fromValue(String value) { + for (TokenType b : TokenType.values()) { + if (b.value.equals(value)) { + return b; + } + } + throw new IllegalArgumentException("Unexpected value '" + value + "'"); + } +} diff --git a/polaris-service/src/main/resources/META-INF/persistence.xml b/polaris-service/src/main/resources/META-INF/persistence.xml new file mode 100644 index 0000000000..db8c1c4bd3 --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/persistence.xml @@ -0,0 +1,44 @@ + + + + + + org.eclipse.persistence.jpa.PersistenceProvider + io.polaris.core.persistence.models.ModelEntity + io.polaris.core.persistence.models.ModelEntityActive + 
io.polaris.core.persistence.models.ModelEntityChangeTracking + io.polaris.core.persistence.models.ModelEntityDropped + io.polaris.core.persistence.models.ModelGrantRecord + io.polaris.core.persistence.models.ModelPrincipalSecrets + io.polaris.core.persistence.models.ModelSequenceId + NONE + + + + + + + + + + + \ No newline at end of file diff --git a/polaris-service/src/main/resources/META-INF/services/io.dropwizard.jackson.Discoverable b/polaris-service/src/main/resources/META-INF/services/io.dropwizard.jackson.Discoverable new file mode 100644 index 0000000000..8354b3f36d --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.dropwizard.jackson.Discoverable @@ -0,0 +1,22 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +io.polaris.service.auth.DiscoverableAuthenticator +io.polaris.core.persistence.MetaStoreManagerFactory +io.polaris.service.config.OAuth2ApiService +io.polaris.service.context.RealmContextResolver +io.polaris.service.context.CallContextResolver +io.polaris.service.auth.TokenBrokerFactory \ No newline at end of file diff --git a/polaris-service/src/main/resources/META-INF/services/io.dropwizard.logging.common.layout.DiscoverableLayoutFactory b/polaris-service/src/main/resources/META-INF/services/io.dropwizard.logging.common.layout.DiscoverableLayoutFactory new file mode 100644 index 0000000000..7035f1198d --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.dropwizard.logging.common.layout.DiscoverableLayoutFactory @@ -0,0 +1,17 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +io.polaris.service.logging.PolarisJsonLayoutFactory \ No newline at end of file diff --git a/polaris-service/src/main/resources/META-INF/services/io.polaris.core.persistence.MetaStoreManagerFactory b/polaris-service/src/main/resources/META-INF/services/io.polaris.core.persistence.MetaStoreManagerFactory new file mode 100644 index 0000000000..795cbfb8c2 --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.polaris.core.persistence.MetaStoreManagerFactory @@ -0,0 +1,20 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +io.polaris.extension.persistence.impl.hibernate.HibernatePolarisMetaStoreManagerFactory +io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory +com.snowflake.polaris.persistence.impl.remote.RemotePolarisMetaStoreManagerFactory +io.polaris.extension.persistence.impl.eclipselink.EclipseLinkPolarisMetaStoreManagerFactory diff --git a/polaris-service/src/main/resources/META-INF/services/io.polaris.service.auth.TokenBrokerFactory b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.auth.TokenBrokerFactory new file mode 100644 index 0000000000..5ecc8a1fb8 --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.auth.TokenBrokerFactory @@ -0,0 +1,18 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +io.polaris.service.auth.JWTRSAKeyPairFactory +io.polaris.service.auth.JWTSymmetricKeyFactory \ No newline at end of file diff --git a/polaris-service/src/main/resources/META-INF/services/io.polaris.service.config.OAuth2ApiService b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.config.OAuth2ApiService new file mode 100644 index 0000000000..a629e73b32 --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.config.OAuth2ApiService @@ -0,0 +1,18 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +io.polaris.service.auth.TestOAuth2ApiService +io.polaris.service.auth.DefaultOAuth2ApiService \ No newline at end of file diff --git a/polaris-service/src/main/resources/META-INF/services/io.polaris.service.context.CallContextResolver b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.context.CallContextResolver new file mode 100644 index 0000000000..83fe0f1bf1 --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.context.CallContextResolver @@ -0,0 +1,17 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +io.polaris.service.context.DefaultContextResolver \ No newline at end of file diff --git a/polaris-service/src/main/resources/META-INF/services/io.polaris.service.context.RealmContextResolver b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.context.RealmContextResolver new file mode 100644 index 0000000000..83fe0f1bf1 --- /dev/null +++ b/polaris-service/src/main/resources/META-INF/services/io.polaris.service.context.RealmContextResolver @@ -0,0 +1,17 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +io.polaris.service.context.DefaultContextResolver \ No newline at end of file diff --git a/polaris-service/src/main/resources/log4j.properties b/polaris-service/src/main/resources/log4j.properties new file mode 100644 index 0000000000..663f16b4e6 --- /dev/null +++ b/polaris-service/src/main/resources/log4j.properties @@ -0,0 +1,21 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +log4j.rootLogger=INFO, stdout +log4j.appender.stdout=org.apache.log4j.ConsoleAppender +log4j.appender.stdout.Target=System.out +log4j.appender.stdout.layout=org.apache.log4j.PatternLayout +log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd'T'HH:mm:ss.SSS} %-5p [%c] - %m%n diff --git a/polaris-service/src/test/java/io/polaris/service/PolarisApplicationIntegrationTest.java b/polaris-service/src/test/java/io/polaris/service/PolarisApplicationIntegrationTest.java new file mode 100644 index 0000000000..56523032d3 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/PolarisApplicationIntegrationTest.java @@ -0,0 +1,680 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.fail; + +import io.dropwizard.testing.ConfigOverride; +import io.dropwizard.testing.ResourceHelpers; +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.dropwizard.testing.junit5.DropwizardExtensionsSupport; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogProperties; +import io.polaris.core.admin.model.CatalogRole; +import io.polaris.core.admin.model.ExternalCatalog; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.PrincipalRole; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.service.auth.BasePolarisAuthenticator; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.test.PolarisConnectionExtension; +import io.polaris.service.test.SnowmanCredentialsExtension; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.core.Response; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import org.apache.hadoop.conf.Configuration; +import org.apache.iceberg.BaseTable; +import org.apache.iceberg.PartitionData; +import org.apache.iceberg.PartitionSpec; +import org.apache.iceberg.Schema; +import org.apache.iceberg.SortOrder; +import org.apache.iceberg.Table; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.TableMetadataParser; +import org.apache.iceberg.TestHelpers; +import org.apache.iceberg.catalog.Namespace; +import 
org.apache.iceberg.catalog.SessionCatalog; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.BadRequestException; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.exceptions.NoSuchNamespaceException; +import org.apache.iceberg.exceptions.NoSuchTableException; +import org.apache.iceberg.exceptions.RESTException; +import org.apache.iceberg.exceptions.ServiceFailureException; +import org.apache.iceberg.hadoop.HadoopFileIO; +import org.apache.iceberg.io.ResolvingFileIO; +import org.apache.iceberg.rest.RESTSessionCatalog; +import org.apache.iceberg.rest.auth.OAuth2Properties; +import org.apache.iceberg.types.Types; +import org.apache.iceberg.util.EnvironmentUtil; +import org.assertj.core.api.Assertions; +import org.assertj.core.api.InstanceOfAssertFactories; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInfo; +import org.junit.jupiter.api.extension.ExtendWith; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +@ExtendWith({ + DropwizardExtensionsSupport.class, + PolarisConnectionExtension.class, + SnowmanCredentialsExtension.class +}) +public class PolarisApplicationIntegrationTest { + public static final String PRINCIPAL_ROLE_NAME = "admin"; + public static final Logger LOGGER = + LoggerFactory.getLogger(PolarisApplicationIntegrationTest.class); + private static DropwizardAppExtension EXT = + new DropwizardAppExtension<>( + PolarisApplication.class, + ResourceHelpers.resourceFilePath("polaris-server-integrationtest.yml"), + ConfigOverride.config( + "server.applicationConnectors[0].port", + "0"), // Bind to random port to support parallelism + ConfigOverride.config( + "server.adminConnectors[0].port", "0")); // Bind to random port to support parallelism + + private static String userToken; + private static 
SnowmanCredentialsExtension.SnowmanCredentials snowmanCredentials; + private static Path testDir; + private static String realm; + + @BeforeAll + public static void setup( + PolarisConnectionExtension.PolarisToken userToken, + SnowmanCredentialsExtension.SnowmanCredentials snowmanCredentials) + throws IOException { + realm = PolarisConnectionExtension.getTestRealm(PolarisApplicationIntegrationTest.class); + + testDir = Path.of("build/test_data/iceberg/" + realm); + if (Files.exists(testDir)) { + if (Files.isDirectory(testDir)) { + // Close the walk stream to avoid leaking the open directory handles + try (java.util.stream.Stream<Path> paths = Files.walk(testDir)) { + paths + .sorted(Comparator.reverseOrder()) + .forEach( + path -> { + try { + Files.delete(path); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + } + } else { + Files.delete(testDir); + } + } + Files.createDirectories(testDir); + PolarisApplicationIntegrationTest.userToken = userToken.token(); + PolarisApplicationIntegrationTest.snowmanCredentials = snowmanCredentials; + + PrincipalRole principalRole = new PrincipalRole(PRINCIPAL_ROLE_NAME); + try (Response createPrResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles", EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + userToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(principalRole))) { + assertThat(createPrResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + try (Response assignPrResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principals/snowman/principal-roles", + EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + PolarisApplicationIntegrationTest.userToken) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(principalRole))) { + assertThat(assignPrResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + } + + @AfterAll + public static void deletePrincipalRole() { + try
(Response deletePrResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles/%s", + EXT.getLocalPort(), PRINCIPAL_ROLE_NAME)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .delete()) {} + } + + /** + * Create a new catalog for each test case. Assign the snowman catalog-admin principal role the + * admin role of the new catalog. + * + * @param testInfo + */ + @BeforeEach + public void before(TestInfo testInfo) { + testInfo + .getTestMethod() + .ifPresent( + method -> { + String catalogName = method.getName(); + Catalog.TypeEnum catalogType = Catalog.TypeEnum.INTERNAL; + createCatalog(catalogName, catalogType, PRINCIPAL_ROLE_NAME); + }); + } + + private static void createCatalog( + String catalogName, Catalog.TypeEnum catalogType, String principalRoleName) { + createCatalog( + catalogName, + catalogType, + principalRoleName, + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(), + "s3://my-bucket/path/to/data"); + } + + private static void createCatalog( + String catalogName, + Catalog.TypeEnum catalogType, + String principalRoleName, + StorageConfigInfo storageConfig, + String defaultBaseLocation) { + CatalogProperties props = + CatalogProperties.builder(defaultBaseLocation) + .addProperty( + CatalogEntity.REPLACE_NEW_LOCATION_PREFIX_WITH_CATALOG_DEFAULT_KEY, "file:/") + .build(); + Catalog catalog = + catalogType.equals(Catalog.TypeEnum.INTERNAL) + ? 
PolarisCatalog.builder() + .setName(catalogName) + .setType(catalogType) + .setProperties(props) + .setStorageConfigInfo(storageConfig) + .build() + : ExternalCatalog.builder() + .setRemoteUrl("http://faraway.com") + .setName(catalogName) + .setType(catalogType) + .setProperties(props) + .setStorageConfigInfo(storageConfig) + .build(); + try (Response response = + EXT.client() + .target( + String.format("http://localhost:%d/api/management/v1/catalogs", EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(catalog))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/%s", + EXT.getLocalPort(), + catalogName, + PolarisEntityConstants.getNameOfCatalogAdminRole())) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + CatalogRole catalogRole = response.readEntity(CatalogRole.class); + + try (Response assignResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles/%s/catalog-roles/%s", + EXT.getLocalPort(), principalRoleName, catalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(catalogRole))) { + assertThat(assignResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + } + } + + private static RESTSessionCatalog newSessionCatalog(String catalog) { + RESTSessionCatalog sessionCatalog = new RESTSessionCatalog(); + sessionCatalog.initialize( + "snowflake", + Map.of( + "uri", + "http://localhost:" + EXT.getLocalPort() + "/api/catalog", + 
OAuth2Properties.CREDENTIAL, + snowmanCredentials.clientId() + ":" + snowmanCredentials.clientSecret(), + OAuth2Properties.SCOPE, + BasePolarisAuthenticator.PRINCIPAL_ROLE_ALL, + "warehouse", + catalog, + "header." + REALM_PROPERTY_KEY, + realm)); + return sessionCatalog; + } + + @Test + public void testIcebergListNamespaces() throws IOException { + try (RESTSessionCatalog sessionCatalog = newSessionCatalog("testIcebergListNamespaces")) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + List namespaces = sessionCatalog.listNamespaces(sessionContext); + assertThat(namespaces).isNotNull().isEmpty(); + } + } + + @Test + public void testConfigureCatalogCaseSensitive() throws IOException { + try { + RESTSessionCatalog sessionCatalog = newSessionCatalog("TESTCONFIGURECATALOGCASESENSITIVE"); + fail("Expected exception connecting to catalog"); + } catch (ServiceFailureException e) { + fail("Unexpected service failure exception", e); + } catch (RESTException e) { + LoggerFactory.getLogger(getClass()).info("Caught expected rest exception", e); + } + } + + @Test + public void testIcebergListNamespacesNotFound() throws IOException { + try (RESTSessionCatalog sessionCatalog = + newSessionCatalog("testIcebergListNamespacesNotFound")) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + try { + sessionCatalog.listNamespaces(sessionContext, Namespace.of("whoops")); + fail("Expected exception to be thrown"); + } catch (NoSuchNamespaceException e) { + // we expect this! 
+ Assertions.assertThat(e).isNotNull(); + } catch (Exception e) { + fail("Unexpected exception", e); + } + } + } + + @Test + public void testIcebergListNamespacesNestedNotFound() throws IOException { + try (RESTSessionCatalog sessionCatalog = + newSessionCatalog("testIcebergListNamespacesNestedNotFound")) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace topLevelNamespace = Namespace.of("top_level"); + sessionCatalog.createNamespace(sessionContext, topLevelNamespace); + sessionCatalog.loadNamespaceMetadata(sessionContext, Namespace.of("top_level")); + try { + sessionCatalog.listNamespaces(sessionContext, Namespace.of("top_level", "whoops")); + fail("Expected exception to be thrown"); + } catch (NoSuchNamespaceException e) { + // we expect this! + Assertions.assertThat(e).isNotNull(); + } catch (Exception e) { + fail("Unexpected exception", e); + } + } + } + + @Test + public void testIcebergListTablesNamespaceNotFound() throws IOException { + try (RESTSessionCatalog sessionCatalog = + newSessionCatalog("testIcebergListTablesNamespaceNotFound")) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + try { + sessionCatalog.listTables(sessionContext, Namespace.of("whoops")); + fail("Expected exception to be thrown"); + } catch (NoSuchNamespaceException e) { + // we expect this! 
+ Assertions.assertThat(e).isNotNull(); + } catch (Exception e) { + fail("Unexpected exception", e); + } + } + } + + @Test + public void testIcebergCreateNamespace() throws IOException { + try (RESTSessionCatalog sessionCatalog = newSessionCatalog("testIcebergCreateNamespace")) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace topLevelNamespace = Namespace.of("top_level"); + sessionCatalog.createNamespace(sessionContext, topLevelNamespace); + List<Namespace> namespaces = sessionCatalog.listNamespaces(sessionContext); + assertThat(namespaces).isNotNull().hasSize(1).containsExactly(topLevelNamespace); + Namespace nestedNamespace = Namespace.of("top_level", "second_level"); + sessionCatalog.createNamespace(sessionContext, nestedNamespace); + namespaces = sessionCatalog.listNamespaces(sessionContext, topLevelNamespace); + assertThat(namespaces).isNotNull().hasSize(1).containsExactly(nestedNamespace); + } + } + + @Test + public void testIcebergCreateNamespaceInExternalCatalog(TestInfo testInfo) throws IOException { + String catalogName = testInfo.getTestMethod().get().getName() + "External"; + createCatalog(catalogName, Catalog.TypeEnum.EXTERNAL, PRINCIPAL_ROLE_NAME); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName)) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + List<Namespace> namespaces = sessionCatalog.listNamespaces(sessionContext); + assertThat(namespaces).isNotNull().hasSize(1).containsExactly(ns); + Map<String, String> metadata = sessionCatalog.loadNamespaceMetadata(sessionContext, ns); + assertThat(metadata) + .isNotNull() + .isNotEmpty() + .containsEntry( + PolarisEntityConstants.ENTITY_BASE_LOCATION, "s3://my-bucket/path/to/data/db1"); + } + } + + @Test + public void testIcebergDropNamespaceInExternalCatalog(TestInfo testInfo) throws IOException { + String catalogName =
testInfo.getTestMethod().get().getName() + "External"; + createCatalog(catalogName, Catalog.TypeEnum.EXTERNAL, PRINCIPAL_ROLE_NAME); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName)) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + List<Namespace> namespaces = sessionCatalog.listNamespaces(sessionContext); + assertThat(namespaces).isNotNull().hasSize(1).containsExactly(ns); + sessionCatalog.dropNamespace(sessionContext, ns); + try { + sessionCatalog.loadNamespaceMetadata(sessionContext, ns); + Assertions.fail("Expected exception when loading namespace after drop"); + } catch (NoSuchNamespaceException e) { + LOGGER.info("Received expected exception " + e.getMessage()); + } + } + } + + @Test + public void testIcebergCreateTablesInExternalCatalog(TestInfo testInfo) throws IOException { + String catalogName = testInfo.getTestMethod().get().getName() + "External"; + createCatalog(catalogName, Catalog.TypeEnum.EXTERNAL, PRINCIPAL_ROLE_NAME); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName)) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + try { + sessionCatalog + .buildTable( + sessionContext, + TableIdentifier.of(ns, "the_table"), + new Schema( + List.of(Types.NestedField.of(1, false, "theField", Types.StringType.get())))) + .withLocation("file:///tmp/tables") + .withSortOrder(SortOrder.unsorted()) + .withPartitionSpec(PartitionSpec.unpartitioned()) + .create(); + Assertions.fail("Expected failure calling create table in external catalog"); + } catch (BadRequestException e) { + LOGGER.info("Received expected exception " + e.getMessage()); + } + } + } + + @Test + public void testIcebergCreateTablesWithWritePathBlocked(TestInfo testInfo) throws IOException
{ + String catalogName = testInfo.getTestMethod().get().getName() + "Internal"; + createCatalog(catalogName, Catalog.TypeEnum.INTERNAL, PRINCIPAL_ROLE_NAME); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName)) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + try { + Assertions.assertThatThrownBy( + () -> + sessionCatalog + .buildTable( + sessionContext, + TableIdentifier.of(ns, "the_table"), + new Schema( + List.of( + Types.NestedField.of( + 1, false, "theField", Types.StringType.get())))) + .withSortOrder(SortOrder.unsorted()) + .withPartitionSpec(PartitionSpec.unpartitioned()) + .withProperties(Map.of("write.data.path", "s3://my-bucket/path/to/data")) + .create()) + .isInstanceOf(ForbiddenException.class) + .hasMessage( + "Forbidden: Delegate access to table with user-specified write location is temporarily not supported."); + + Assertions.assertThatThrownBy( + () -> + sessionCatalog + .buildTable( + sessionContext, + TableIdentifier.of(ns, "the_table"), + new Schema( + List.of( + Types.NestedField.of( + 1, false, "theField", Types.StringType.get())))) + .withSortOrder(SortOrder.unsorted()) + .withPartitionSpec(PartitionSpec.unpartitioned()) + .withProperties( + Map.of("write.metadata.path", "s3://my-bucket/path/to/data")) + .create()) + .isInstanceOf(ForbiddenException.class) + .hasMessage( + "Forbidden: Delegate access to table with user-specified write location is temporarily not supported."); + } catch (BadRequestException e) { + LOGGER.info("Received expected exception " + e.getMessage()); + } + } + } + + @Test + public void testIcebergRegisterTableInExternalCatalog(TestInfo testInfo) throws IOException { + String catalogName = testInfo.getTestMethod().get().getName() + "External"; + createCatalog( + catalogName, + Catalog.TypeEnum.EXTERNAL, + PRINCIPAL_ROLE_NAME, + 
FileStorageConfigInfo.builder(StorageConfigInfo.StorageTypeEnum.FILE) + .setAllowedLocations(List.of("file://" + testDir.toFile().getAbsolutePath())) + .build(), + "file://" + testDir.toFile().getAbsolutePath()); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName); + HadoopFileIO fileIo = new HadoopFileIO(new Configuration()); ) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, "the_table"); + String location = + "file://" + + testDir.toFile().getAbsolutePath() + + "/" + + testInfo.getTestMethod().get().getName(); + String metadataLocation = location + "/metadata/000001-494949494949494949.metadata.json"; + + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .setLocation(location) + .assignUUID() + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .addSchema( + new Schema(Types.NestedField.of(1, false, "col1", Types.StringType.get())), 1) + .build(); + TableMetadataParser.write(tableMetadata, fileIo.newOutputFile(metadataLocation)); + + sessionCatalog.registerTable(sessionContext, tableIdentifier, metadataLocation); + Table table = sessionCatalog.loadTable(sessionContext, tableIdentifier); + assertThat(table) + .isNotNull() + .isInstanceOf(BaseTable.class) + .asInstanceOf(InstanceOfAssertFactories.type(BaseTable.class)) + .returns(tableMetadata.location(), BaseTable::location) + .returns(tableMetadata.uuid(), bt -> bt.uuid().toString()) + .returns(tableMetadata.schema().columns(), bt -> bt.schema().columns()); + } + } + + @Test + public void testIcebergUpdateTableInExternalCatalog(TestInfo testInfo) throws IOException { + String catalogName = testInfo.getTestMethod().get().getName() + "External"; + createCatalog( + catalogName, + Catalog.TypeEnum.EXTERNAL, + PRINCIPAL_ROLE_NAME, + 
FileStorageConfigInfo.builder(StorageConfigInfo.StorageTypeEnum.FILE) + .setAllowedLocations(List.of("file://" + testDir.toFile().getAbsolutePath())) + .build(), + "file://" + testDir.toFile().getAbsolutePath()); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName); + HadoopFileIO fileIo = new HadoopFileIO(new Configuration()); ) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, "the_table"); + String location = + "file://" + + testDir.toFile().getAbsolutePath() + + "/" + + testInfo.getTestMethod().get().getName(); + String metadataLocation = location + "/metadata/000001-494949494949494949.metadata.json"; + + Types.NestedField col1 = Types.NestedField.of(1, false, "col1", Types.StringType.get()); + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .setLocation(location) + .assignUUID() + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .addSchema(new Schema(col1), 1) + .build(); + TableMetadataParser.write(tableMetadata, fileIo.newOutputFile(metadataLocation)); + + sessionCatalog.registerTable(sessionContext, tableIdentifier, metadataLocation); + Table table = sessionCatalog.loadTable(sessionContext, tableIdentifier); + ((ResolvingFileIO) table.io()).setConf(new Configuration()); + try { + table + .newAppend() + .appendFile( + new TestHelpers.TestDataFile( + location + "/path/to/file.parquet", + new PartitionData(PartitionSpec.unpartitioned().partitionType()), + 10L)) + .commit(); + Assertions.fail("Should fail when committing an update to external catalog"); + } catch (BadRequestException e) { + LOGGER.info("Received expected exception " + e.getMessage()); + } + } + } + + @Test + public void testIcebergDropTableInExternalCatalog(TestInfo testInfo) throws IOException { + String catalogName = 
testInfo.getTestMethod().get().getName() + "External"; + createCatalog( + catalogName, + Catalog.TypeEnum.EXTERNAL, + PRINCIPAL_ROLE_NAME, + FileStorageConfigInfo.builder(StorageConfigInfo.StorageTypeEnum.FILE) + .setAllowedLocations(List.of("file://" + testDir.toFile().getAbsolutePath())) + .build(), + "file://" + testDir.toFile().getAbsolutePath()); + try (RESTSessionCatalog sessionCatalog = newSessionCatalog(catalogName); + HadoopFileIO fileIo = new HadoopFileIO(new Configuration()); ) { + SessionCatalog.SessionContext sessionContext = SessionCatalog.SessionContext.createEmpty(); + Namespace ns = Namespace.of("db1"); + sessionCatalog.createNamespace(sessionContext, ns); + TableIdentifier tableIdentifier = TableIdentifier.of(ns, "the_table"); + String location = + "file://" + + testDir.toFile().getAbsolutePath() + + "/" + + testInfo.getTestMethod().get().getName(); + String metadataLocation = location + "/metadata/000001-494949494949494949.metadata.json"; + + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .setLocation(location) + .assignUUID() + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .addSchema( + new Schema(Types.NestedField.of(1, false, "col1", Types.StringType.get())), 1) + .build(); + TableMetadataParser.write(tableMetadata, fileIo.newOutputFile(metadataLocation)); + + sessionCatalog.registerTable(sessionContext, tableIdentifier, metadataLocation); + Table table = sessionCatalog.loadTable(sessionContext, tableIdentifier); + assertThat(table).isNotNull(); + sessionCatalog.dropTable(sessionContext, tableIdentifier); + try { + sessionCatalog.loadTable(sessionContext, tableIdentifier); + Assertions.fail("Expected failure loading table after drop"); + } catch (NoSuchTableException e) { + LOGGER.info("Received expected exception " + e.getMessage()); + } + } + } + + @Test + public void testWarehouseNotSpecified() throws IOException { + try (RESTSessionCatalog sessionCatalog = new 
RESTSessionCatalog()) { + String emptyEnvironmentVariable = "env:__NULL_ENV_VARIABLE__"; + assertThat(EnvironmentUtil.resolveAll(Map.of("", emptyEnvironmentVariable)).get("")).isNull(); + sessionCatalog.initialize( + "snowflake", + Map.of( + "uri", + "http://localhost:" + EXT.getLocalPort() + "/api/catalog", + OAuth2Properties.CREDENTIAL, + snowmanCredentials.clientId() + ":" + snowmanCredentials.clientSecret(), + OAuth2Properties.SCOPE, + BasePolarisAuthenticator.PRINCIPAL_ROLE_ALL, + "warehouse", + emptyEnvironmentVariable, + "header." + REALM_PROPERTY_KEY, + realm)); + fail("Expected exception due to null warehouse"); + } catch (ServiceFailureException e) { + fail("Unexpected service failure exception", e); + } catch (RESTException e) { + LoggerFactory.getLogger(getClass()).info("Caught expected rest exception", e); + assertThat(e).isInstanceOf(BadRequestException.class); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/admin/PolarisAdminServiceAuthzTest.java b/polaris-service/src/test/java/io/polaris/service/admin/PolarisAdminServiceAuthzTest.java new file mode 100644 index 0000000000..ed3c786726 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/admin/PolarisAdminServiceAuthzTest.java @@ -0,0 +1,1042 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.admin; + +import io.polaris.core.admin.model.UpdateCatalogRequest; +import io.polaris.core.admin.model.UpdateCatalogRoleRequest; +import io.polaris.core.admin.model.UpdatePrincipalRequest; +import io.polaris.core.admin.model.UpdatePrincipalRoleRequest; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.CatalogRoleEntity; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.entity.PrincipalRoleEntity; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.Function; +import org.assertj.core.api.Assertions; +import org.junit.jupiter.api.Test; + +public class PolarisAdminServiceAuthzTest extends PolarisAuthzTestBase { + private PolarisAdminService newTestAdminService() { + return newTestAdminService(Set.of()); + } + + private PolarisAdminService newTestAdminService(Set<String> activatedPrincipalRoles) { + final AuthenticatedPolarisPrincipal authenticatedPrincipal = + new AuthenticatedPolarisPrincipal(principalEntity, activatedPrincipalRoles); + return new PolarisAdminService( + callContext, entityManager, authenticatedPrincipal, polarisAuthorizer); + } + + private void doTestSufficientPrivileges( + List<PolarisPrivilege> sufficientPrivileges, + Runnable action, + Runnable cleanupAction, + Function<PolarisPrivilege, Boolean> grantAction, + Function<PolarisPrivilege, Boolean> revokeAction) { + doTestSufficientPrivilegeSets( + sufficientPrivileges.stream().map(priv -> Set.of(priv)).toList(), + action, + cleanupAction, + PRINCIPAL_NAME, + grantAction, + revokeAction); + } + + private void doTestInsufficientPrivileges( + List<PolarisPrivilege> insufficientPrivileges, + Runnable action, + Function<PolarisPrivilege, Boolean> grantAction, + Function<PolarisPrivilege, Boolean> revokeAction) { + doTestInsufficientPrivileges( + insufficientPrivileges, PRINCIPAL_NAME, action, grantAction, revokeAction); + } + + @Test + public void testListCatalogsSufficientPrivileges() { + doTestSufficientPrivileges(
List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_LIST, + PolarisPrivilege.CATALOG_READ_PROPERTIES, + PolarisPrivilege.CATALOG_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_CREATE, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newTestAdminService().listCatalogs(), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testListCatalogsInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_ACCESS), + () -> newTestAdminService().listCatalogs(), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testCreateCatalogSufficientPrivileges() { + // Cleanup with PRINCIPAL_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnRootContainerToPrincipalRole( + PRINCIPAL_ROLE2, PolarisPrivilege.CATALOG_DROP)) + .isTrue(); + final CatalogEntity newCatalog = new CatalogEntity.Builder().setName("new_catalog").build(); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_CREATE, + PolarisPrivilege.CATALOG_FULL_METADATA), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).createCatalog(newCatalog), + () ->
newTestAdminService(Set.of(PRINCIPAL_ROLE2)).deleteCatalog(newCatalog.getName()), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testCreateCatalogInsufficientPrivileges() { + final CatalogEntity newCatalog = new CatalogEntity.Builder().setName("new_catalog").build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_LIST, + PolarisPrivilege.CATALOG_DROP, + PolarisPrivilege.CATALOG_READ_PROPERTIES, + PolarisPrivilege.CATALOG_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_MANAGE_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT, + PolarisPrivilege.CATALOG_MANAGE_ACCESS), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).createCatalog(newCatalog), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testGetCatalogSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_READ_PROPERTIES, + PolarisPrivilege.CATALOG_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newTestAdminService().getCatalog(CATALOG_NAME), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + 
PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testGetCatalogInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_LIST, + PolarisPrivilege.CATALOG_CREATE, + PolarisPrivilege.CATALOG_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_ACCESS), + () -> newTestAdminService().getCatalog(CATALOG_NAME), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testUpdateCatalogSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + // Use the test-permission admin service instead of the root adminService to also + // perform the initial GET to illustrate that the actual user workflow for update + // *must* also encompass GET privileges to be able to set entityVersion properly. 
+ UpdateCatalogRequest updateRequest = + UpdateCatalogRequest.builder() + .setCurrentEntityVersion( + newTestAdminService().getCatalog(CATALOG_NAME).getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updateCatalog(CATALOG_NAME, updateRequest); + }, + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testUpdateCatalogInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_READ_PROPERTIES, + PolarisPrivilege.CATALOG_LIST, + PolarisPrivilege.CATALOG_CREATE, + PolarisPrivilege.CATALOG_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_ACCESS), + () -> { + UpdateCatalogRequest updateRequest = + UpdateCatalogRequest.builder() + .setCurrentEntityVersion( + newTestAdminService().getCatalog(CATALOG_NAME).getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updateCatalog(CATALOG_NAME, updateRequest); + }, + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testDeleteCatalogSufficientPrivileges() { + // Cleanup with PRINCIPAL_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnRootContainerToPrincipalRole( + PRINCIPAL_ROLE2, PolarisPrivilege.CATALOG_CREATE)) + .isTrue(); + final 
CatalogEntity newCatalog = new CatalogEntity.Builder().setName("new_catalog").build(); + adminService.createCatalog(newCatalog); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_DROP, + PolarisPrivilege.CATALOG_FULL_METADATA), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).deleteCatalog(newCatalog.getName()), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE2)).createCatalog(newCatalog), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testDeleteCatalogInsufficientPrivileges() { + final CatalogEntity newCatalog = new CatalogEntity.Builder().setName("new_catalog").build(); + adminService.createCatalog(newCatalog); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_CREATE, + PolarisPrivilege.CATALOG_LIST, + PolarisPrivilege.CATALOG_READ_PROPERTIES, + PolarisPrivilege.CATALOG_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_MANAGE_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT, + PolarisPrivilege.CATALOG_MANAGE_ACCESS), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).deleteCatalog(newCatalog.getName()), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testListPrincipalsSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_LIST, + 
PolarisPrivilege.PRINCIPAL_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES, + PolarisPrivilege.PRINCIPAL_CREATE, + PolarisPrivilege.PRINCIPAL_FULL_METADATA), + () -> newTestAdminService().listPrincipals(), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testListPrincipalsInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> newTestAdminService().listPrincipals(), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testCreatePrincipalSufficientPrivileges() { + // Cleanup with PRINCIPAL_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnRootContainerToPrincipalRole( + PRINCIPAL_ROLE2, PolarisPrivilege.PRINCIPAL_DROP)) + .isTrue(); + final PrincipalEntity newPrincipal = + new PrincipalEntity.Builder().setName("new_principal").build(); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_CREATE, + PolarisPrivilege.PRINCIPAL_FULL_METADATA), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).createPrincipal(newPrincipal), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE2)).deletePrincipal(newPrincipal.getName()), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + 
(privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testCreatePrincipalInsufficientPrivileges() { + final PrincipalEntity newPrincipal = + new PrincipalEntity.Builder().setName("new_principal").build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_LIST, + PolarisPrivilege.PRINCIPAL_DROP, + PolarisPrivilege.PRINCIPAL_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).createPrincipal(newPrincipal), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testGetPrincipalSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES, + PolarisPrivilege.PRINCIPAL_FULL_METADATA), + () -> newTestAdminService().getPrincipal(PRINCIPAL_NAME), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testGetPrincipalInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + 
PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_LIST, + PolarisPrivilege.PRINCIPAL_CREATE, + PolarisPrivilege.PRINCIPAL_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> newTestAdminService().getPrincipal(PRINCIPAL_NAME), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testUpdatePrincipalSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES, + PolarisPrivilege.PRINCIPAL_FULL_METADATA), + () -> { + // Use the test-permission admin service instead of the root adminService to also + // perform the initial GET to illustrate that the actual user workflow for update + // *must* also encompass GET privileges to be able to set entityVersion properly. + UpdatePrincipalRequest updateRequest = + UpdatePrincipalRequest.builder() + .setCurrentEntityVersion( + newTestAdminService().getPrincipal(PRINCIPAL_NAME).getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updatePrincipal(PRINCIPAL_NAME, updateRequest); + }, + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testUpdatePrincipalInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_READ_PROPERTIES, + 
PolarisPrivilege.PRINCIPAL_LIST, + PolarisPrivilege.PRINCIPAL_CREATE, + PolarisPrivilege.PRINCIPAL_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> { + UpdatePrincipalRequest updateRequest = + UpdatePrincipalRequest.builder() + .setCurrentEntityVersion( + newTestAdminService().getPrincipal(PRINCIPAL_NAME).getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updatePrincipal(PRINCIPAL_NAME, updateRequest); + }, + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testDeletePrincipalSufficientPrivileges() { + // Cleanup with PRINCIPAL_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnRootContainerToPrincipalRole( + PRINCIPAL_ROLE2, PolarisPrivilege.PRINCIPAL_CREATE)) + .isTrue(); + final PrincipalEntity newPrincipal = + new PrincipalEntity.Builder().setName("new_principal").build(); + adminService.createPrincipal(newPrincipal); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_DROP, + PolarisPrivilege.PRINCIPAL_FULL_METADATA), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).deletePrincipal(newPrincipal.getName()), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE2)).createPrincipal(newPrincipal), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testDeletePrincipalInsufficientPrivileges() { + final PrincipalEntity newPrincipal = + new PrincipalEntity.Builder().setName("new_principal").build(); + adminService.createPrincipal(newPrincipal); + + doTestInsufficientPrivileges( + List.of( + 
PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_CREATE, + PolarisPrivilege.PRINCIPAL_LIST, + PolarisPrivilege.PRINCIPAL_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_WRITE_PROPERTIES), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).deletePrincipal(newPrincipal.getName()), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testListPrincipalRolesSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_ROLE_LIST, + PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_CREATE, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> newTestAdminService().listPrincipalRoles(), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testListPrincipalRolesInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> newTestAdminService().listPrincipalRoles(), + (privilege) -> +
adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testCreatePrincipalRoleSufficientPrivileges() { + // Cleanup with PRINCIPAL_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnRootContainerToPrincipalRole( + PRINCIPAL_ROLE2, PolarisPrivilege.PRINCIPAL_ROLE_DROP)) + .isTrue(); + final PrincipalRoleEntity newPrincipalRole = + new PrincipalRoleEntity.Builder().setName("new_principal_role").build(); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_ROLE_CREATE, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).createPrincipalRole(newPrincipalRole), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE2)) + .deletePrincipalRole(newPrincipalRole.getName()), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testCreatePrincipalRoleInsufficientPrivileges() { + final PrincipalRoleEntity newPrincipalRole = + new PrincipalRoleEntity.Builder().setName("new_principal_role").build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_LIST, + PolarisPrivilege.PRINCIPAL_ROLE_DROP, + PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE1)).createPrincipalRole(newPrincipalRole), + (privilege) -> + 
adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testGetPrincipalRoleSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> newTestAdminService().getPrincipalRole(PRINCIPAL_ROLE2), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testGetPrincipalRoleInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_LIST, + PolarisPrivilege.PRINCIPAL_ROLE_CREATE, + PolarisPrivilege.PRINCIPAL_ROLE_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> newTestAdminService().getPrincipalRole(PRINCIPAL_ROLE2), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testUpdatePrincipalRoleSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> { + // Use the test-permission admin service instead of the root adminService to also + //
perform the initial GET to illustrate that the actual user workflow for update + // *must* also encompass GET privileges to be able to set entityVersion properly. + UpdatePrincipalRoleRequest updateRequest = + UpdatePrincipalRoleRequest.builder() + .setCurrentEntityVersion( + newTestAdminService().getPrincipalRole(PRINCIPAL_ROLE2).getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updatePrincipalRole(PRINCIPAL_ROLE2, updateRequest); + }, + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testUpdatePrincipalRoleInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_LIST, + PolarisPrivilege.PRINCIPAL_ROLE_CREATE, + PolarisPrivilege.PRINCIPAL_ROLE_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> { + UpdatePrincipalRoleRequest updateRequest = + UpdatePrincipalRoleRequest.builder() + .setCurrentEntityVersion( + newTestAdminService().getPrincipalRole(PRINCIPAL_ROLE2).getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updatePrincipalRole(PRINCIPAL_ROLE2, updateRequest); + }, + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void
testDeletePrincipalRoleSufficientPrivileges() { + // Cleanup with PRINCIPAL_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnRootContainerToPrincipalRole( + PRINCIPAL_ROLE2, PolarisPrivilege.PRINCIPAL_ROLE_CREATE)) + .isTrue(); + final PrincipalRoleEntity newPrincipalRole = + new PrincipalRoleEntity.Builder().setName("new_principal_role").build(); + adminService.createPrincipalRole(newPrincipalRole); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.PRINCIPAL_ROLE_DROP, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE1)) + .deletePrincipalRole(newPrincipalRole.getName()), + () -> newTestAdminService(Set.of(PRINCIPAL_ROLE2)).createPrincipalRole(newPrincipalRole), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testDeletePrincipalRoleInsufficientPrivileges() { + final PrincipalRoleEntity newPrincipalRole = + new PrincipalRoleEntity.Builder().setName("new_principal_role").build(); + adminService.createPrincipalRole(newPrincipalRole); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_CREATE, + PolarisPrivilege.PRINCIPAL_ROLE_LIST, + PolarisPrivilege.PRINCIPAL_ROLE_READ_PROPERTIES, + PolarisPrivilege.PRINCIPAL_ROLE_WRITE_PROPERTIES), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE1)) + .deletePrincipalRole(newPrincipalRole.getName()), + (privilege) -> + adminService.grantPrivilegeOnRootContainerToPrincipalRole(PRINCIPAL_ROLE1, privilege), + (privilege) -> + 
adminService.revokePrivilegeOnRootContainerFromPrincipalRole( + PRINCIPAL_ROLE1, privilege)); + } + + @Test + public void testListCatalogRolesSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.CATALOG_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_ROLE_LIST, + PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_CREATE, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> newTestAdminService().listCatalogRoles(CATALOG_NAME), + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testListCatalogRolesInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_DROP, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> newTestAdminService().listCatalogRoles(CATALOG_NAME), + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testCreateCatalogRoleSufficientPrivileges() { + // Cleanup with CATALOG_ROLE2 + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.CATALOG_ROLE_DROP)) + .isTrue(); + final CatalogRoleEntity newCatalogRole = + new CatalogRoleEntity.Builder().setName("new_catalog_role").build(); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.CATALOG_MANAGE_ACCESS, +
PolarisPrivilege.CATALOG_ROLE_CREATE, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE1)) + .createCatalogRole(CATALOG_NAME, newCatalogRole), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE2)) + .deleteCatalogRole(CATALOG_NAME, newCatalogRole.getName()), + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testCreateCatalogRoleInsufficientPrivileges() { + final CatalogRoleEntity newCatalogRole = + new CatalogRoleEntity.Builder().setName("new_catalog_role").build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_LIST, + PolarisPrivilege.CATALOG_ROLE_DROP, + PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE1)) + .createCatalogRole(CATALOG_NAME, newCatalogRole), + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testGetCatalogRoleSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.CATALOG_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> newTestAdminService().getCatalogRole(CATALOG_NAME, CATALOG_ROLE2), + null, // cleanupAction + (privilege) -> + 
adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testGetCatalogRoleInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_LIST, + PolarisPrivilege.CATALOG_ROLE_CREATE, + PolarisPrivilege.CATALOG_ROLE_DROP, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> newTestAdminService().getCatalogRole(CATALOG_NAME, CATALOG_ROLE2), + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testUpdateCatalogRoleSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.CATALOG_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> { + // Use the test-permission admin service instead of the root adminService to also + // perform the initial GET to illustrate that the actual user workflow for update + // *must* also encompass GET privileges to be able to set entityVersion properly.
+ UpdateCatalogRoleRequest updateRequest = + UpdateCatalogRoleRequest.builder() + .setCurrentEntityVersion( + newTestAdminService() + .getCatalogRole(CATALOG_NAME, CATALOG_ROLE2) + .getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updateCatalogRole(CATALOG_NAME, CATALOG_ROLE2, updateRequest); + }, + null, // cleanupAction + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testUpdateCatalogRoleInsufficientPrivileges() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_LIST, + PolarisPrivilege.CATALOG_ROLE_CREATE, + PolarisPrivilege.CATALOG_ROLE_DROP, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA), + () -> { + UpdateCatalogRoleRequest updateRequest = + UpdateCatalogRoleRequest.builder() + .setCurrentEntityVersion( + newTestAdminService() + .getCatalogRole(CATALOG_NAME, CATALOG_ROLE2) + .getEntityVersion()) + .setProperties(Map.of("foo", Long.toString(System.currentTimeMillis()))) + .build(); + newTestAdminService().updateCatalogRole(CATALOG_NAME, CATALOG_ROLE2, updateRequest); + }, + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testDeleteCatalogRoleSufficientPrivileges() { + // Cleanup with CATALOG_ROLE2 + Assertions.assertThat( +
adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.CATALOG_ROLE_CREATE)) + .isTrue(); + final CatalogRoleEntity newCatalogRole = + new CatalogRoleEntity.Builder().setName("new_catalog_role").build(); + adminService.createCatalogRole(CATALOG_NAME, newCatalogRole); + + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.CATALOG_MANAGE_ACCESS, + PolarisPrivilege.CATALOG_ROLE_DROP, + PolarisPrivilege.CATALOG_ROLE_FULL_METADATA), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE1)) + .deleteCatalogRole(CATALOG_NAME, newCatalogRole.getName()), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE2)) + .createCatalogRole(CATALOG_NAME, newCatalogRole), + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testDeleteCatalogRoleInsufficientPrivileges() { + final CatalogRoleEntity newCatalogRole = + new CatalogRoleEntity.Builder().setName("new_catalog_role").build(); + adminService.createCatalogRole(CATALOG_NAME, newCatalogRole); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.SERVICE_MANAGE_ACCESS, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_FULL_METADATA, + PolarisPrivilege.PRINCIPAL_ROLE_FULL_METADATA, + PolarisPrivilege.CATALOG_ROLE_CREATE, + PolarisPrivilege.CATALOG_ROLE_LIST, + PolarisPrivilege.CATALOG_ROLE_READ_PROPERTIES, + PolarisPrivilege.CATALOG_ROLE_WRITE_PROPERTIES), + () -> + newTestAdminService(Set.of(PRINCIPAL_ROLE1)) + .deleteCatalogRole(CATALOG_NAME, newCatalogRole.getName()), + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, 
CATALOG_ROLE1, privilege)); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/admin/PolarisAuthzTestBase.java b/polaris-service/src/test/java/io/polaris/service/admin/PolarisAuthzTestBase.java new file mode 100644 index 0000000000..5f1420cb39 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/admin/PolarisAuthzTestBase.java @@ -0,0 +1,516 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.admin; + +import static org.apache.iceberg.types.Types.NestedField.required; + +import com.google.common.collect.ImmutableMap; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.PrincipalWithCredentials; +import io.polaris.core.admin.model.PrincipalWithCredentialsCredentials; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.CatalogRoleEntity; +import io.polaris.core.entity.PolarisBaseEntity; 
+import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.entity.PrincipalRoleEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisTreeMapStore; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.core.storage.cache.StorageCredentialCache; +import io.polaris.service.catalog.BasePolarisCatalog; +import io.polaris.service.catalog.PolarisPassthroughResolutionView; +import io.polaris.service.config.DefaultConfigurationStore; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.context.PolarisCallContextCatalogFactory; +import io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory; +import io.polaris.service.storage.PolarisStorageIntegrationProviderImpl; +import java.io.IOException; +import java.time.Clock; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.Function; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.Schema; +import org.apache.iceberg.catalog.Catalog; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.types.Types; +import org.assertj.core.api.Assertions; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.mockito.Mockito; + +/** Base class for shared test setup logic used by various Polaris authz-related tests. 
*/ +public abstract class PolarisAuthzTestBase { + protected static final String CATALOG_NAME = "polaris-catalog"; + protected static final String PRINCIPAL_NAME = "snowman"; + + // catalog_role1 will be assigned only to principal_role1 and + // catalog_role2 will be assigned only to principal_role2 + protected static final String PRINCIPAL_ROLE1 = "principal_role1"; + protected static final String PRINCIPAL_ROLE2 = "principal_role2"; + protected static final String CATALOG_ROLE1 = "catalog_role1"; + protected static final String CATALOG_ROLE2 = "catalog_role2"; + protected static final String CATALOG_ROLE_SHARED = "catalog_role_shared"; + + protected static final Namespace NS1 = Namespace.of("ns1"); + protected static final Namespace NS2 = Namespace.of("ns2"); + protected static final Namespace NS1A = Namespace.of("ns1", "ns1a"); + protected static final Namespace NS1AA = Namespace.of("ns1", "ns1a", "ns1aa"); + protected static final Namespace NS1B = Namespace.of("ns1", "ns1b"); + + // One table directly under ns1 + protected static final TableIdentifier TABLE_NS1_1 = TableIdentifier.of(NS1, "layer1_table"); + + // Two tables under ns1a + protected static final TableIdentifier TABLE_NS1A_1 = TableIdentifier.of(NS1A, "table1"); + protected static final TableIdentifier TABLE_NS1A_2 = TableIdentifier.of(NS1A, "table2"); + + // One table under ns1b with same name as one under ns1a + protected static final TableIdentifier TABLE_NS1B_1 = TableIdentifier.of(NS1B, "table1"); + + // One table directly under ns2 + protected static final TableIdentifier TABLE_NS2_1 = TableIdentifier.of(NS2, "table1"); + + // One view directly under ns1 + protected static final TableIdentifier VIEW_NS1_1 = TableIdentifier.of(NS1, "layer1_view"); + + // Two views under ns1a + protected static final TableIdentifier VIEW_NS1A_1 = TableIdentifier.of(NS1A, "view1"); + protected static final TableIdentifier VIEW_NS1A_2 = TableIdentifier.of(NS1A, "view2"); + + // One view under ns1b with same name 
as one under ns1a + protected static final TableIdentifier VIEW_NS1B_1 = TableIdentifier.of(NS1B, "view1"); + + // One view directly under ns2 + protected static final TableIdentifier VIEW_NS2_1 = TableIdentifier.of(NS2, "view1"); + + protected static final String VIEW_QUERY = "select * from ns1.layer1_table"; + + protected static final Schema SCHEMA = + new Schema( + required(3, "id", Types.IntegerType.get(), "unique ID 🤪"), + required(4, "data", Types.StringType.get())); + protected final PolarisAuthorizer polarisAuthorizer = + new PolarisAuthorizer( + new DefaultConfigurationStore( + Map.of( + PolarisConfiguration.ENFORCE_PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_CHECKING, + true))); + + protected BasePolarisCatalog baseCatalog; + protected PolarisAdminService adminService; + protected PolarisEntityManager entityManager; + protected PolarisBaseEntity catalogEntity; + protected PrincipalEntity principalEntity; + protected CallContext callContext; + protected AuthenticatedPolarisPrincipal authenticatedRoot; + + @BeforeEach + @SuppressWarnings("unchecked") + public void before() { + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + PolarisTreeMapStore backingStore = new PolarisTreeMapStore(diagServices); + InMemoryPolarisMetaStoreManagerFactory managerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + managerFactory.setStorageIntegrationProvider( + new PolarisStorageIntegrationProviderImpl(Mockito::mock)); + RealmContext realmContext = () -> "realm"; + PolarisMetaStoreManager metaStoreManager = + managerFactory.getOrCreateMetaStoreManager(realmContext); + + Map<String, Object> configMap = + Map.of( + "ALLOW_SPECIFYING_FILE_IO_IMPL", true, "ALLOW_EXTERNAL_METADATA_FILE_LOCATION", true); + PolarisCallContext polarisContext = + new PolarisCallContext( + managerFactory.getOrCreateSessionSupplier(realmContext).get(), + diagServices, + new PolarisConfigurationStore() { + @Override + public <T> @Nullable T getConfiguration(PolarisCallContext ctx, String
configName) { + return (T) configMap.get(configName); + } + }, + Clock.systemDefaultZone()); + this.entityManager = + new PolarisEntityManager( + metaStoreManager, polarisContext::getMetaStore, new StorageCredentialCache()); + + callContext = + CallContext.of( + new RealmContext() { + @Override + public String getRealmIdentifier() { + return "test-realm"; + } + }, + polarisContext); + CallContext.setCurrentContext(callContext); + + PrincipalEntity rootEntity = + new PrincipalEntity( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .readEntityByName( + polarisContext, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + "root") + .getEntity())); + + this.authenticatedRoot = new AuthenticatedPolarisPrincipal(rootEntity, Set.of()); + + this.adminService = + new PolarisAdminService(callContext, entityManager, authenticatedRoot, polarisAuthorizer); + + String storageLocation = "file:///tmp/authz"; + FileStorageConfigInfo storageConfigModel = + FileStorageConfigInfo.builder() + .setStorageType(StorageConfigInfo.StorageTypeEnum.FILE) + .setAllowedLocations(List.of(storageLocation, "file:///tmp/authz")) + .build(); + catalogEntity = + adminService.createCatalog( + new CatalogEntity.Builder() + .setName(CATALOG_NAME) + .setDefaultBaseLocation(storageLocation) + .setStorageConfigurationInfo(storageConfigModel, storageLocation) + .build()); + + initBaseCatalog(); + + PrincipalWithCredentials principal = + adminService.createPrincipal(new PrincipalEntity.Builder().setName(PRINCIPAL_NAME).build()); + principalEntity = + rotateAndRefreshPrincipal( + metaStoreManager, PRINCIPAL_NAME, principal.getCredentials(), polarisContext); + + // Pre-create the principal roles and catalog roles without any grants on securables, but + // assign both principal roles to the principal, then CATALOG_ROLE1 to PRINCIPAL_ROLE1, + // CATALOG_ROLE2 to PRINCIPAL_ROLE2, and CATALOG_ROLE_SHARED to both. 
+ adminService.createPrincipalRole( + new PrincipalRoleEntity.Builder().setName(PRINCIPAL_ROLE1).build()); + adminService.createPrincipalRole( + new PrincipalRoleEntity.Builder().setName(PRINCIPAL_ROLE2).build()); + adminService.createCatalogRole( + CATALOG_NAME, new CatalogRoleEntity.Builder().setName(CATALOG_ROLE1).build()); + adminService.createCatalogRole( + CATALOG_NAME, new CatalogRoleEntity.Builder().setName(CATALOG_ROLE2).build()); + adminService.createCatalogRole( + CATALOG_NAME, new CatalogRoleEntity.Builder().setName(CATALOG_ROLE_SHARED).build()); + + adminService.assignPrincipalRole(PRINCIPAL_NAME, PRINCIPAL_ROLE1); + adminService.assignPrincipalRole(PRINCIPAL_NAME, PRINCIPAL_ROLE2); + + adminService.assignCatalogRoleToPrincipalRole(PRINCIPAL_ROLE1, CATALOG_NAME, CATALOG_ROLE1); + adminService.assignCatalogRoleToPrincipalRole(PRINCIPAL_ROLE2, CATALOG_NAME, CATALOG_ROLE2); + adminService.assignCatalogRoleToPrincipalRole( + PRINCIPAL_ROLE1, CATALOG_NAME, CATALOG_ROLE_SHARED); + adminService.assignCatalogRoleToPrincipalRole( + PRINCIPAL_ROLE2, CATALOG_NAME, CATALOG_ROLE_SHARED); + + // Do some shared setup with non-authz-aware baseCatalog. 
+ baseCatalog.createNamespace(NS1); + baseCatalog.createNamespace(NS2); + baseCatalog.createNamespace(NS1A); + baseCatalog.createNamespace(NS1AA); + baseCatalog.createNamespace(NS1B); + + baseCatalog.buildTable(TABLE_NS1_1, SCHEMA).create(); + baseCatalog.buildTable(TABLE_NS1A_1, SCHEMA).create(); + baseCatalog.buildTable(TABLE_NS1A_2, SCHEMA).create(); + baseCatalog.buildTable(TABLE_NS1B_1, SCHEMA).create(); + baseCatalog.buildTable(TABLE_NS2_1, SCHEMA).create(); + + baseCatalog + .buildView(VIEW_NS1_1) + .withSchema(SCHEMA) + .withDefaultNamespace(NS1) + .withQuery("spark", VIEW_QUERY) + .create(); + baseCatalog + .buildView(VIEW_NS1A_1) + .withSchema(SCHEMA) + .withDefaultNamespace(NS1) + .withQuery("spark", VIEW_QUERY) + .create(); + baseCatalog + .buildView(VIEW_NS1A_2) + .withSchema(SCHEMA) + .withDefaultNamespace(NS1) + .withQuery("spark", VIEW_QUERY) + .create(); + baseCatalog + .buildView(VIEW_NS1B_1) + .withSchema(SCHEMA) + .withDefaultNamespace(NS1) + .withQuery("spark", VIEW_QUERY) + .create(); + baseCatalog + .buildView(VIEW_NS2_1) + .withSchema(SCHEMA) + .withDefaultNamespace(NS1) + .withQuery("spark", VIEW_QUERY) + .create(); + } + + @AfterEach + public void after() { + if (this.baseCatalog != null) { + try { + this.baseCatalog.close(); + this.baseCatalog = null; + } catch (IOException e) { + throw new RuntimeException(e); + } + } + } + + protected @NotNull PrincipalEntity rotateAndRefreshPrincipal( + PolarisMetaStoreManager metaStoreManager, + String principalName, + PrincipalWithCredentialsCredentials credentials, + PolarisCallContext polarisContext) { + PolarisMetaStoreManager.EntityResult lookupEntity = + metaStoreManager.readEntityByName( + callContext.getPolarisCallContext(), + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + principalName); + metaStoreManager.rotatePrincipalSecrets( + callContext.getPolarisCallContext(), + credentials.getClientId(), + lookupEntity.getEntity().getId(), + credentials.getClientSecret(), 
+ false); + + return new PrincipalEntity( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .readEntityByName( + polarisContext, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + principalName) + .getEntity())); + } + + /** + * This baseCatalog is used for setup rather than being the test target under a wrapper instance; + * we set up this baseCatalog with a PolarisPassthroughResolutionView to allow it to circumvent + * the "authorized" resolution set of entities used by wrapper instances, allowing it to resolve + * all entities in the underlying metaStoreManager at once. + */ + private void initBaseCatalog() { + if (this.baseCatalog != null) { + try { + this.baseCatalog.close(); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + PolarisPassthroughResolutionView passthroughView = + new PolarisPassthroughResolutionView( + callContext, entityManager, authenticatedRoot, CATALOG_NAME); + this.baseCatalog = + new BasePolarisCatalog( + entityManager, callContext, passthroughView, authenticatedRoot, Mockito.mock()); + this.baseCatalog.initialize( + CATALOG_NAME, + ImmutableMap.of( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO")); + } + + public class TestPolarisCallContextCatalogFactory extends PolarisCallContextCatalogFactory { + public TestPolarisCallContextCatalogFactory() { + super( + new RealmEntityManagerFactory() { + @Override + public PolarisEntityManager getOrCreateEntityManager(RealmContext realmContext) { + return entityManager; + } + }, + Mockito.mock()); + } + + @Override + public Catalog createCallContextCatalog( + CallContext context, + AuthenticatedPolarisPrincipal authenticatedPolarisPrincipal, + final PolarisResolutionManifest resolvedManifest) { + // This depends on the BasePolarisCatalog allowing calling initialize multiple times + // to override the previous config. 
+ Catalog catalog =
+ super.createCallContextCatalog(context, authenticatedPolarisPrincipal, resolvedManifest);
+ catalog.initialize(
+ CATALOG_NAME,
+ ImmutableMap.of(
+ CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO"));
+ return catalog;
+ }
+ }
+
+ /**
+ * Tests each "sufficient" privilege individually by invoking {@code grantAction} for each set of
+ * privileges, running the action being tested, revoking after each test set, and also ensuring
+ * that the request fails after each revocation.
+ *
+ * @param sufficientPrivileges each set of concurrent privileges expected to be sufficient
+ * together.
+ * @param action The operation being tested; could also be multiple operations that should all
+ * succeed with the sufficient privilege
+ * @param cleanupAction If non-null, additional action to run to "undo" a previous success action
+ * in case the action has side effects. Called before revoking the sufficient privilege;
+ * either the cleanup privileges must be latent, or the cleanup action could be run with
+ * PRINCIPAL_ROLE2 while running {@code action} with PRINCIPAL_ROLE1.
+ * @param principalName the name expected to appear in forbidden errors
+ * @param grantAction the grantPrivilege action to use for each test privilege that will apply the
+ * privilege to whatever context is used in the {@code action}
+ * @param revokeAction the revokePrivilege action to clean up after each granted test privilege
+ */
+ protected void doTestSufficientPrivilegeSets(
+ List<Set<PolarisPrivilege>> sufficientPrivileges,
+ Runnable action,
+ Runnable cleanupAction,
+ String principalName,
+ Function<PolarisPrivilege, Boolean> grantAction,
+ Function<PolarisPrivilege, Boolean> revokeAction) {
+ for (Set<PolarisPrivilege> privilegeSet : sufficientPrivileges) {
+ for (PolarisPrivilege privilege : privilegeSet) {
+ // Grant the single privilege at a catalog level to cascade to all objects.
+ Assertions.assertThat(grantAction.apply(privilege)).isTrue();
+ }
+
+ // Should run without issues.
+ try { + action.run(); + } catch (Throwable t) { + Assertions.fail( + String.format( + "Expected success with sufficientPrivileges '%s', got throwable instead.", + privilegeSet), + t); + } + if (cleanupAction != null) { + try { + cleanupAction.run(); + } catch (Throwable t) { + Assertions.fail( + String.format( + "Running cleanupAction with sufficientPrivileges '%s', got throwable.", + privilegeSet), + t); + } + } + + if (privilegeSet.size() > 1) { + // Knockout testing - Revoke single privileges and the same action should throw + // NotAuthorizedException. + for (PolarisPrivilege privilege : privilegeSet) { + Assertions.assertThat(revokeAction.apply(privilege)).isTrue(); + + try { + Assertions.assertThatThrownBy(() -> action.run()) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining(principalName) + .hasMessageContaining("is not authorized"); + } catch (Throwable t) { + Assertions.fail( + String.format( + "Expected failure after revoking sufficientPrivilege '%s' from set '%s'", + privilege, privilegeSet), + t); + } + + // Grant the single privilege at a catalog level to cascade to all objects. + Assertions.assertThat(grantAction.apply(privilege)).isTrue(); + } + } + + // Now remove all the privileges + for (PolarisPrivilege privilege : privilegeSet) { + Assertions.assertThat(revokeAction.apply(privilege)).isTrue(); + } + try { + Assertions.assertThatThrownBy(() -> action.run()) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining(principalName) + .hasMessageContaining("is not authorized"); + } catch (Throwable t) { + Assertions.fail( + String.format( + "Expected failure after revoking all sufficientPrivileges '%s'", privilegeSet), + t); + } + } + } + + /** + * Tests each "insufficient" privilege individually using CATALOG_ROLE1 by granting at the + * CATALOG_NAME level, ensuring the action fails, then revoking after each test case. 
+ */
+ protected void doTestInsufficientPrivileges(
+ List<PolarisPrivilege> insufficientPrivileges,
+ String principalName,
+ Runnable action,
+ Function<PolarisPrivilege, Boolean> grantAction,
+ Function<PolarisPrivilege, Boolean> revokeAction) {
+ for (PolarisPrivilege privilege : insufficientPrivileges) {
+ // Grant the single privilege at a catalog level to cascade to all objects.
+ Assertions.assertThat(grantAction.apply(privilege)).isTrue();
+
+ // Should be insufficient
+ try {
+ Assertions.assertThatThrownBy(() -> action.run())
+ .isInstanceOf(ForbiddenException.class)
+ .hasMessageContaining(principalName)
+ .hasMessageContaining("is not authorized");
+ } catch (Throwable t) {
+ Assertions.fail(
+ String.format("Expected failure with insufficientPrivilege '%s'", privilege), t);
+ }
+
+ // Revoking only matters in case there are some multi-privilege actions being tested with
+ // only granting individual privileges in isolation.
+ Assertions.assertThat(revokeAction.apply(privilege)).isTrue();
+ }
+ }
+}
diff --git a/polaris-service/src/test/java/io/polaris/service/admin/PolarisOverlappingCatalogTest.java b/polaris-service/src/test/java/io/polaris/service/admin/PolarisOverlappingCatalogTest.java
new file mode 100644
index 0000000000..f4cba62197
--- /dev/null
+++ b/polaris-service/src/test/java/io/polaris/service/admin/PolarisOverlappingCatalogTest.java
@@ -0,0 +1,207 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.polaris.service.admin;
+
+import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import io.dropwizard.testing.ConfigOverride;
+import io.dropwizard.testing.ResourceHelpers;
+import io.dropwizard.testing.junit5.DropwizardAppExtension;
+import io.dropwizard.testing.junit5.DropwizardExtensionsSupport;
+import io.polaris.core.admin.model.AwsStorageConfigInfo;
+import io.polaris.core.admin.model.Catalog;
+import io.polaris.core.admin.model.CatalogProperties;
+import io.polaris.core.admin.model.CreateCatalogRequest;
+import io.polaris.core.admin.model.StorageConfigInfo;
+import io.polaris.service.PolarisApplication;
+import io.polaris.service.config.PolarisApplicationConfig;
+import io.polaris.service.test.PolarisConnectionExtension;
+import jakarta.ws.rs.client.Entity;
+import jakarta.ws.rs.client.Invocation;
+import jakarta.ws.rs.core.Response;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.UUID;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+@ExtendWith({DropwizardExtensionsSupport.class, PolarisConnectionExtension.class})
+public class PolarisOverlappingCatalogTest {
+ private static final DropwizardAppExtension<PolarisApplicationConfig> EXT =
+ new DropwizardAppExtension<>(
+ PolarisApplication.class,
+ ResourceHelpers.resourceFilePath("polaris-server-integrationtest.yml"),
+ // Bind to random port to support parallelism
+ ConfigOverride.config("server.applicationConnectors[0].port", "0"),
+ ConfigOverride.config("server.adminConnectors[0].port", "0"),
+ // Block overlapping catalog paths:
+ ConfigOverride.config("featureConfiguration.ALLOW_OVERLAPPING_CATALOG_URLS", "false"));
+ private static String userToken;
+ private static String realm;
+
+ @BeforeAll
+ public static void setup(PolarisConnectionExtension.PolarisToken adminToken) {
+ userToken =
adminToken.token();
+ realm = PolarisConnectionExtension.getTestRealm(PolarisServiceImplIntegrationTest.class);
+ }
+
+ private Response createCatalog(String prefix, String defaultBaseLocation, boolean isExternal) {
+ return createCatalog(prefix, defaultBaseLocation, isExternal, new ArrayList<String>());
+ }
+
+ private static Invocation.Builder request() {
+ return EXT.client()
+ .target(String.format("http://localhost:%d/api/management/v1/catalogs", EXT.getLocalPort()))
+ .request("application/json")
+ .header("Authorization", "Bearer " + userToken)
+ .header(REALM_PROPERTY_KEY, realm);
+ }
+
+ private Response createCatalog(
+ String prefix,
+ String defaultBaseLocation,
+ boolean isExternal,
+ List<String> allowedLocations) {
+ String uuid = UUID.randomUUID().toString();
+ StorageConfigInfo config =
+ AwsStorageConfigInfo.builder()
+ .setRoleArn("arn:aws:iam::123456789012:role/my-role")
+ .setExternalId("externalId")
+ .setUserArn("userArn")
+ .setStorageType(StorageConfigInfo.StorageTypeEnum.S3)
+ .setAllowedLocations(
+ allowedLocations.stream()
+ .map(
+ l -> {
+ return String.format("s3://bucket/%s/%s", prefix, l);
+ })
+ .toList())
+ .build();
+ Catalog catalog =
+ new Catalog(
+ isExternal ?
Catalog.TypeEnum.EXTERNAL : Catalog.TypeEnum.INTERNAL, + String.format("overlap_catalog_%s", uuid), + new CatalogProperties(String.format("s3://bucket/%s/%s", prefix, defaultBaseLocation)), + System.currentTimeMillis(), + System.currentTimeMillis(), + 1, + config); + try (Response response = request().post(Entity.json(new CreateCatalogRequest(catalog)))) { + return response; + } + } + + @Test + public void testBasicOverlappingCatalogs() { + Arrays.asList(false, true) + .forEach( + initiallyExternal -> { + Arrays.asList(false, true) + .forEach( + laterExternal -> { + String prefix = UUID.randomUUID().toString(); + + assertThat(createCatalog(prefix, "root", initiallyExternal)) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + + // OK, non-overlapping + assertThat(createCatalog(prefix, "boot", laterExternal)) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + + // OK, non-overlapping due to no `/` + assertThat(createCatalog(prefix, "roo", laterExternal)) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + + // Also OK due to no `/` + assertThat(createCatalog(prefix, "root.child", laterExternal)) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + + // inside `root` + assertThat(createCatalog(prefix, "root/child", laterExternal)) + .returns( + Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + + // `root` is inside this + assertThat(createCatalog(prefix, "", laterExternal)) + .returns( + Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + }); + }); + } + + @Test + public void testAllowedLocationOverlappingCatalogs() { + Arrays.asList(false, true) + .forEach( + initiallyExternal -> { + Arrays.asList(false, true) + .forEach( + laterExternal -> { + String prefix = UUID.randomUUID().toString(); + + assertThat( + createCatalog( + prefix, + "animals", + initiallyExternal, + Arrays.asList("dogs", "cats"))) + 
.returns(Response.Status.CREATED.getStatusCode(), Response::getStatus);
+
+ // OK, non-overlapping
+ assertThat(
+ createCatalog(
+ prefix,
+ "danimals",
+ laterExternal,
+ Arrays.asList("dan", "daniel")))
+ .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus);
+
+ // This default base location overlaps with an initial allowed location
+ assertThat(
+ createCatalog(
+ prefix,
+ "dogs",
+ initiallyExternal,
+ Arrays.asList("huskies", "labs")))
+ .returns(
+ Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus);
+
+ // This allowed location overlaps with the initial default base location
+ assertThat(
+ createCatalog(
+ prefix,
+ "kingdoms",
+ initiallyExternal,
+ Arrays.asList("plants", "animals")))
+ .returns(
+ Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus);
+
+ // This allowed location overlaps with an initial allowed location
+ assertThat(
+ createCatalog(
+ prefix,
+ "plays",
+ initiallyExternal,
+ Arrays.asList("rent", "cats")))
+ .returns(
+ Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus);
+ });
+ });
+ }
+}
diff --git a/polaris-service/src/test/java/io/polaris/service/admin/PolarisServiceImplIntegrationTest.java b/polaris-service/src/test/java/io/polaris/service/admin/PolarisServiceImplIntegrationTest.java
new file mode 100644
index 0000000000..8d73724362
--- /dev/null
+++ b/polaris-service/src/test/java/io/polaris/service/admin/PolarisServiceImplIntegrationTest.java
@@ -0,0 +1,1785 @@
+/*
+ * Copyright (c) 2024 Snowflake Computing Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.admin; + +import static io.dropwizard.jackson.Jackson.newObjectMapper; +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.assertj.core.api.Assertions.assertThat; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ObjectNode; +import io.dropwizard.testing.ConfigOverride; +import io.dropwizard.testing.ResourceHelpers; +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.dropwizard.testing.junit5.DropwizardExtensionsSupport; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.AzureStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogProperties; +import io.polaris.core.admin.model.CatalogRole; +import io.polaris.core.admin.model.CatalogRoles; +import io.polaris.core.admin.model.Catalogs; +import io.polaris.core.admin.model.CreateCatalogRequest; +import io.polaris.core.admin.model.CreateCatalogRoleRequest; +import io.polaris.core.admin.model.CreatePrincipalRequest; +import io.polaris.core.admin.model.CreatePrincipalRoleRequest; +import io.polaris.core.admin.model.ExternalCatalog; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.GrantCatalogRoleRequest; +import io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.Principal; +import io.polaris.core.admin.model.PrincipalRole; +import io.polaris.core.admin.model.PrincipalRoles; +import io.polaris.core.admin.model.PrincipalWithCredentials; +import io.polaris.core.admin.model.PrincipalWithCredentialsCredentials; +import io.polaris.core.admin.model.Principals; +import 
io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.admin.model.UpdateCatalogRequest; +import io.polaris.core.admin.model.UpdateCatalogRoleRequest; +import io.polaris.core.admin.model.UpdatePrincipalRequest; +import io.polaris.core.admin.model.UpdatePrincipalRoleRequest; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.service.PolarisApplication; +import io.polaris.service.auth.TokenUtils; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.test.PolarisConnectionExtension; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.client.Invocation; +import jakarta.ws.rs.core.Response; +import java.io.IOException; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.iceberg.rest.responses.ErrorResponse; +import org.assertj.core.api.InstanceOfAssertFactories; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; + +@ExtendWith({DropwizardExtensionsSupport.class, PolarisConnectionExtension.class}) +public class PolarisServiceImplIntegrationTest { + private static final int MAX_IDENTIFIER_LENGTH = 256; + + // TODO: Add a test-only hook that fully clobbers all persistence state so we can have a fresh + // slate on every test case; otherwise, leftover state from one test from failures will interfere + // with other test cases. 
+ private static final DropwizardAppExtension<PolarisApplicationConfig> EXT =
+ new DropwizardAppExtension<>(
+ PolarisApplication.class,
+ ResourceHelpers.resourceFilePath("polaris-server-integrationtest.yml"),
+ ConfigOverride.config(
+ "server.applicationConnectors[0].port",
+ "0"), // Bind to random port to support parallelism
+ ConfigOverride.config("server.adminConnectors[0].port", "0"),
+
+ // disallow FILE urls for the sake of tests below
+ ConfigOverride.config(
+ "featureConfiguration.SUPPORTED_CATALOG_STORAGE_TYPES", "S3,GCS,AZURE"));
+ private static String userToken;
+ private static String realm;
+
+ @BeforeAll
+ public static void setup(PolarisConnectionExtension.PolarisToken adminToken) {
+ userToken = adminToken.token();
+ realm = PolarisConnectionExtension.getTestRealm(PolarisServiceImplIntegrationTest.class);
+ }
+
+ @AfterEach
+ public void tearDown() {
+ try (Response response = newRequest("http://localhost:%d/api/management/v1/catalogs").get()) {
+ response.readEntity(Catalogs.class).getCatalogs().stream()
+ .forEach(
+ catalog -> {
+ try (Response innerResponse =
+ newRequest(
+ "http://localhost:%d/api/management/v1/catalogs/" + catalog.getName())
+ .delete()) {}
+ });
+ }
+ try (Response response = newRequest("http://localhost:%d/api/management/v1/principals").get()) {
+ response.readEntity(Principals.class).getPrincipals().stream()
+ .filter(
+ principal ->
+ !principal.getName().equals(PolarisEntityConstants.getRootPrincipalName()))
+ .forEach(
+ principal -> {
+ try (Response innerResponse =
+ newRequest(
+ "http://localhost:%d/api/management/v1/principals/"
+ + principal.getName())
+ .delete()) {}
+ });
+ }
+ try (Response response =
+ newRequest("http://localhost:%d/api/management/v1/principal-roles").get()) {
+ response.readEntity(PrincipalRoles.class).getRoles().stream()
+ .filter(
+ principalRole ->
+ !principalRole
+ .getName()
+ .equals(PolarisEntityConstants.getNameOfPrincipalServiceAdminRole()))
+ .forEach(
+ principalRole -> {
+ try (Response
innerResponse = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/" + + principalRole.getName()) + .delete()) {} + }); + } + } + + @Test + public void testCatalogSerializing() throws IOException { + CatalogProperties props = new CatalogProperties("s3://my-old-bucket/path/to/data"); + props.put("prop1", "propval"); + PolarisCatalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("my_catalog") + .setProperties(props) + .setStorageConfigInfo( + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build()) + .build(); + + ObjectMapper mapper = new ObjectMapper(); + String json = mapper.writeValueAsString(catalog); + System.out.println(json); + Catalog deserialized = mapper.readValue(json, Catalog.class); + assertThat(deserialized).isInstanceOf(PolarisCatalog.class); + } + + @Test + public void testListCatalogs() { + try (Response response = newRequest("http://localhost:%d/api/management/v1/catalogs").get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Catalogs.class)) + .returns( + List.of(), + l -> + l.getCatalogs().stream() + .filter(c -> !c.getName().equalsIgnoreCase("ROOT")) + .toList()); + } + } + + @Test + public void testListCatalogsUnauthorized() { + Principal principal = new Principal("a_new_user"); + String newToken = null; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(principal))) { + assertThat(response).returns(201, Response::getStatus); + PrincipalWithCredentials creds = response.readEntity(PrincipalWithCredentials.class); + newToken = + TokenUtils.getTokenFromSecrets( + EXT.client(), + EXT.getLocalPort(), + creds.getCredentials().getClientId(), + 
creds.getCredentials().getClientSecret(), + realm); + } + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs", "BEARER " + newToken).get()) { + assertThat(response).returns(403, Response::getStatus); + } + } + + @Test + public void testCreateCatalog() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("my-catalog") + .setProperties(new CatalogProperties("s3://my-bucket/path/to/data")) + .setStorageConfigInfo(awsConfigModel) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post( + Entity.json( + "{\"catalog\":{\"type\":\"INTERNAL\",\"name\":\"my-catalog\",\"properties\":{\"default-base-location\":\"s3://my-bucket/path/to/data\"},\"storageConfigInfo\":{\"storageType\":\"S3\",\"roleArn\":\"arn:aws:iam::123456789012:role/my-role\",\"externalId\":\"externalId\",\"userArn\":\"userArn\",\"allowedLocations\":[\"s3://my-old-bucket/path/to/data\"]}}}"))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // 204 Successful delete + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/my-catalog").delete()) { + assertThat(response).returns(204, Response::getStatus); + } + } + + @Test + public void testCreateCatalogWithInvalidName() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + 
.build(); + + String goodName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH, true, true); + + ObjectMapper mapper = newObjectMapper(); + + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(goodName) + .setProperties(new CatalogProperties("s3://my-bucket/path/to/data")) + .setStorageConfigInfo(awsConfigModel) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(mapper.writeValueAsString(catalog)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } catch (JsonProcessingException e) { + throw new RuntimeException(e); + } + + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List invalidCatalogNames = + Arrays.asList( + longInvalidName, + "", + "system$catalog1", + "SYSTEM$TestCatalog", + "System$test_catalog", + " SysTeM$ test catalog"); + + for (String invalidCatalogName : invalidCatalogNames) { + catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(invalidCatalogName) + .setProperties(new CatalogProperties("s3://my-bucket/path/to/data")) + .setStorageConfigInfo(awsConfigModel) + .build(); + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(mapper.writeValueAsString(catalog)))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } catch (JsonProcessingException e) { + throw new RuntimeException(e); + } + } + } + + @Test + public void testCreateCatalogWithNullBaseLocation() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + 
.setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode storageConfig = mapper.valueToTree(awsConfigModel); + ObjectNode catalogNode = mapper.createObjectNode(); + catalogNode.set("storageConfigInfo", storageConfig); + catalogNode.put("name", "my-catalog"); + catalogNode.put("type", "INTERNAL"); + catalogNode.set("properties", mapper.createObjectNode()); + ObjectNode requestNode = mapper.createObjectNode(); + requestNode.set("catalog", catalogNode); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(requestNode))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + } + } + + @Test + public void testCreateCatalogWithoutProperties() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode storageConfig = mapper.valueToTree(awsConfigModel); + ObjectNode catalogNode = mapper.createObjectNode(); + catalogNode.set("storageConfigInfo", storageConfig); + catalogNode.put("name", "my-catalog"); + catalogNode.put("type", "INTERNAL"); + ObjectNode requestNode = mapper.createObjectNode(); + requestNode.set("catalog", catalogNode); + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs", "Bearer " + userToken) + .post(Entity.json(requestNode))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .returns( + "Invalid 
value: createCatalog.arg0.catalog.properties: must not be null", + ErrorResponse::message); + } + } + + @Test + public void testCreateCatalogWithoutStorageConfig() throws JsonProcessingException { + String catalogString = + "{\"catalog\": {\"type\":\"INTERNAL\",\"name\":\"my-catalog\",\"properties\":{\"default-base-location\":\"s3://my-bucket/path/to/data\"}}}"; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs", "Bearer " + userToken) + .post(Entity.json(catalogString))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .returns( + "Invalid value: createCatalog.arg0.catalog.storageConfigInfo: must not be null", + ErrorResponse::message); + } + } + + @Test + public void testCreateCatalogWithUnparsableJson() throws JsonProcessingException { + String catalogString = "{\"catalog\": {{\"bad data}"; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs", "Bearer " + userToken) + .post(Entity.json(catalogString))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .extracting(ErrorResponse::message) + .asString() + .startsWith("Invalid JSON: Unexpected character"); + } + } + + @Test + public void testCreateCatalogWithDisallowedStorageConfig() throws JsonProcessingException { + FileStorageConfigInfo fileStorage = + FileStorageConfigInfo.builder(StorageConfigInfo.StorageTypeEnum.FILE) + .setAllowedLocations(List.of("file://")) + .build(); + String catalogName = "my-external-catalog"; + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(catalogName) + .setProperties(new CatalogProperties("file:///tmp/path/to/data")) + .setStorageConfigInfo(fileStorage) + 
.build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs", "Bearer " + userToken) + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .returns("Unsupported storage type: FILE", ErrorResponse::message); + } + } + + @Test + public void testUpdateCatalogWithDisallowedStorageConfig() throws JsonProcessingException { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + String catalogName = "mycatalog"; + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(catalogName) + .setProperties(new CatalogProperties("s3://bucket/path/to/data")) + .setStorageConfigInfo(awsConfigModel) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs", "Bearer " + userToken) + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // 200 successful GET after creation + Catalog fetchedCatalog = null; + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/" + catalogName, + "Bearer " + userToken) + .get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalog = response.readEntity(Catalog.class); + + assertThat(fetchedCatalog.getName()).isEqualTo(catalogName); + assertThat(fetchedCatalog.getProperties().toMap()) + .isEqualTo(Map.of("default-base-location", "s3://bucket/path/to/data")); + 
assertThat(fetchedCatalog.getEntityVersion()).isGreaterThan(0); + } + + FileStorageConfigInfo fileStorage = + FileStorageConfigInfo.builder(StorageConfigInfo.StorageTypeEnum.FILE) + .setAllowedLocations(List.of("file://")) + .build(); + UpdateCatalogRequest updateRequest = + new UpdateCatalogRequest( + fetchedCatalog.getEntityVersion(), + Map.of("default-base-location", "file:///tmp/path/to/data/"), + fileStorage); + + // failure to update + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/" + catalogName, + "Bearer " + userToken) + .put(Entity.json(updateRequest))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + + assertThat(error).returns("Unsupported storage type: FILE", ErrorResponse::message); + } + } + + @Test + public void testCreateExternalCatalog() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + String catalogName = "my-external-catalog"; + String remoteUrl = "http://localhost:8080"; + Catalog catalog = + ExternalCatalog.builder() + .setType(Catalog.TypeEnum.EXTERNAL) + .setName(catalogName) + .setRemoteUrl(remoteUrl) + .setProperties(new CatalogProperties("s3://my-bucket/path/to/data")) + .setStorageConfigInfo(awsConfigModel) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/" + catalogName).get()) { + 
assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + Catalog fetchedCatalog = response.readEntity(Catalog.class); + assertThat(fetchedCatalog) + .isNotNull() + .isInstanceOf(ExternalCatalog.class) + .asInstanceOf(InstanceOfAssertFactories.type(ExternalCatalog.class)) + .returns(remoteUrl, ExternalCatalog::getRemoteUrl) + .extracting(ExternalCatalog::getStorageConfigInfo) + .isNotNull() + .isInstanceOf(AwsStorageConfigInfo.class) + .asInstanceOf(InstanceOfAssertFactories.type(AwsStorageConfigInfo.class)) + .returns("arn:aws:iam::123456789012:role/my-role", AwsStorageConfigInfo::getRoleArn); + } + + // 204 Successful delete + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/" + catalogName).delete()) { + assertThat(response).returns(204, Response::getStatus); + } + } + + @Test + public void testCreateCatalogWithoutDefaultLocation() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + ObjectMapper mapper = new ObjectMapper(); + JsonNode storageConfig = mapper.valueToTree(awsConfigModel); + ObjectNode catalogNode = mapper.createObjectNode(); + catalogNode.set("storageConfigInfo", storageConfig); + catalogNode.put("name", "my-catalog"); + catalogNode.put("type", "INTERNAL"); + ObjectNode properties = mapper.createObjectNode(); + properties.set("default-base-location", mapper.nullNode()); + catalogNode.set("properties", properties); + ObjectNode requestNode = mapper.createObjectNode(); + requestNode.set("catalog", catalogNode); + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(requestNode))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), 
Response::getStatus); + } + } + + @Test + public void serialization() throws JsonProcessingException { + CatalogProperties properties = new CatalogProperties("s3://my-bucket/path/to/data"); + ObjectMapper mapper = new ObjectMapper(); + CatalogProperties translated = mapper.convertValue(properties, CatalogProperties.class); + assertThat(translated.toMap()) + .containsEntry("default-base-location", "s3://my-bucket/path/to/data"); + } + + @Test + public void testCreateAndUpdateAzureCatalog() { + StorageConfigInfo storageConfig = + new AzureStorageConfigInfo("azure:tenantid:12345", StorageConfigInfo.StorageTypeEnum.AZURE); + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("myazurecatalog") + .setStorageConfigInfo(storageConfig) + .setProperties(new CatalogProperties("abfss://container1@acct1.dfs.core.windows.net/")) + .build(); + + // 201 Successful create + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // 200 successful GET after creation + Catalog fetchedCatalog = null; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/myazurecatalog").get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalog = response.readEntity(Catalog.class); + + assertThat(fetchedCatalog.getName()).isEqualTo("myazurecatalog"); + assertThat(fetchedCatalog.getProperties().toMap()) + .isEqualTo( + Map.of("default-base-location", "abfss://container1@acct1.dfs.core.windows.net/")); + assertThat(fetchedCatalog.getEntityVersion()).isGreaterThan(0); + } + + StorageConfigInfo modifiedStorageConfig = + new AzureStorageConfigInfo("azure:tenantid:22222", StorageConfigInfo.StorageTypeEnum.AZURE); + UpdateCatalogRequest badUpdateRequest = + new UpdateCatalogRequest( + fetchedCatalog.getEntityVersion(),
+ Map.of("default-base-location", "abfss://newcontainer@acct1.dfs.core.windows.net/"), + modifiedStorageConfig); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/myazurecatalog") + .put(Entity.json(badUpdateRequest))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .extracting(ErrorResponse::message) + .asString() + .startsWith("Cannot modify"); + } + + UpdateCatalogRequest updateRequest = + new UpdateCatalogRequest( + fetchedCatalog.getEntityVersion(), + Map.of("default-base-location", "abfss://newcontainer@acct1.dfs.core.windows.net/"), + storageConfig); + + // 200 successful update + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/myazurecatalog") + .put(Entity.json(updateRequest))) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalog = response.readEntity(Catalog.class); + + assertThat(fetchedCatalog.getProperties().toMap()) + .isEqualTo( + Map.of("default-base-location", "abfss://newcontainer@acct1.dfs.core.windows.net/")); + } + + // 204 Successful delete + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/myazurecatalog").delete()) { + assertThat(response).returns(204, Response::getStatus); + } + } + + @Test + public void testCreateListUpdateAndDeleteCatalog() { + StorageConfigInfo storageConfig = + new AwsStorageConfigInfo( + "arn:aws:iam::123456789011:role/role1", StorageConfigInfo.StorageTypeEnum.S3); + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("mycatalog") + .setStorageConfigInfo(storageConfig) + .setProperties(new CatalogProperties("s3://bucket1/")) + .build(); + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + 
assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Second attempt to create the same entity should fail with CONFLICT. + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + assertThat(response).returns(Response.Status.CONFLICT.getStatusCode(), Response::getStatus); + } + + // 200 successful GET after creation + Catalog fetchedCatalog = null; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog").get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalog = response.readEntity(Catalog.class); + + assertThat(fetchedCatalog.getName()).isEqualTo("mycatalog"); + assertThat(fetchedCatalog.getProperties().toMap()) + .isEqualTo(Map.of("default-base-location", "s3://bucket1/")); + assertThat(fetchedCatalog.getEntityVersion()).isGreaterThan(0); + } + + // Should list the catalog. + try (Response response = newRequest("http://localhost:%d/api/management/v1/catalogs").get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Catalogs.class)) + .extracting(Catalogs::getCatalogs) + .asInstanceOf(InstanceOfAssertFactories.list(Catalog.class)) + .filteredOn(cat -> !cat.getName().equalsIgnoreCase("ROOT")) + .satisfiesExactly(cat -> assertThat(cat).returns("mycatalog", Catalog::getName)); + } + + // Reject update of fields that can't be currently updated + StorageConfigInfo modifiedStorageConfig = + new AwsStorageConfigInfo( + "arn:aws:iam::123456789011:role/newrole", StorageConfigInfo.StorageTypeEnum.S3); + UpdateCatalogRequest badUpdateRequest = + new UpdateCatalogRequest( + fetchedCatalog.getEntityVersion(), + Map.of("default-base-location", "s3://newbucket/"), + modifiedStorageConfig); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog") + .put(Entity.json(badUpdateRequest))) 
{ + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .extracting(ErrorResponse::message) + .asString() + .startsWith("Cannot modify"); + } + + UpdateCatalogRequest updateRequest = + new UpdateCatalogRequest( + fetchedCatalog.getEntityVersion(), + Map.of("default-base-location", "s3://newbucket/"), + storageConfig); + + // 200 successful update + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog") + .put(Entity.json(updateRequest))) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalog = response.readEntity(Catalog.class); + + assertThat(fetchedCatalog.getProperties().toMap()) + .isEqualTo(Map.of("default-base-location", "s3://newbucket/")); + } + + // 200 GET after update should show new properties + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog").get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalog = response.readEntity(Catalog.class); + + assertThat(fetchedCatalog.getProperties().toMap()) + .isEqualTo(Map.of("default-base-location", "s3://newbucket/")); + } + + // 204 Successful delete + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog").delete()) { + assertThat(response).returns(204, Response::getStatus); + } + + // NOT_FOUND after deletion + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog").get()) { + assertThat(response).returns(404, Response::getStatus); + } + + // Empty list + try (Response response = newRequest("http://localhost:%d/api/management/v1/catalogs").get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Catalogs.class)) + .returns( + List.of(), + c -> + c.getCatalogs().stream() + .filter(cat -> 
!cat.getName().equalsIgnoreCase("ROOT")) + .toList()); + } + } + + private static Invocation.Builder newRequest(String url, String token) { + return EXT.client() + .target(String.format(url, EXT.getLocalPort())) + .request("application/json") + .header("Authorization", token) + .header(REALM_PROPERTY_KEY, realm); + } + + private static Invocation.Builder newRequest(String url) { + return newRequest(url, "Bearer " + userToken); + } + + @Test + public void testGetCatalogNotFound() { + // there's no catalog yet. Expect 404 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog").get()) { + assertThat(response).returns(404, Response::getStatus); + } + } + + @Test + public void testGetCatalogInvalidName() { + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List<String> invalidCatalogNames = + Arrays.asList( + longInvalidName, + "system$catalog1", + "SYSTEM$TestCatalog", + "System$test_catalog", + " SysTeM$ test catalog"); + + for (String invalidCatalogName : invalidCatalogNames) { + // invalid catalog names should be rejected with 400 BAD_REQUEST
+ try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/" + invalidCatalogName) + .get()) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } + } + } + + @Test + public void testCatalogRoleInvalidName() { + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("mycatalog1") + .setProperties(new CatalogProperties("s3://required/base/location")) + .setStorageConfigInfo( + new AwsStorageConfigInfo( + "arn:aws:iam::012345678901:role/jdoe", StorageConfigInfo.StorageTypeEnum.S3)) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List<String> invalidCatalogRoleNames = + Arrays.asList( + longInvalidName, + "system$catalog1", + "SYSTEM$TestCatalog", + "System$test_catalog", + " SysTeM$ test catalog"); + + for (String invalidCatalogRoleName : invalidCatalogRoleNames) { + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles/" + + invalidCatalogRoleName) + .get()) { + + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } + } + } + + @Test + public void testListPrincipalsUnauthorized() { + Principal principal = new Principal("new_admin"); + String newToken = null; + try
(Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(principal))) { + assertThat(response).returns(201, Response::getStatus); + PrincipalWithCredentials creds = response.readEntity(PrincipalWithCredentials.class); + newToken = + TokenUtils.getTokenFromSecrets( + EXT.client(), + EXT.getLocalPort(), + creds.getCredentials().getClientId(), + creds.getCredentials().getClientSecret(), + realm); + } + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals", "Bearer " + newToken) + .get()) { + assertThat(response).returns(403, Response::getStatus); + } + } + + @Test + public void testCreatePrincipalAndRotateCredentials() { + Principal principal = + Principal.builder() + .setName("myprincipal") + .setProperties(Map.of("custom-tag", "foo")) + .build(); + + PrincipalWithCredentialsCredentials creds = null; + Principal returnedPrincipal = null; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal, true)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + PrincipalWithCredentials parsed = response.readEntity(PrincipalWithCredentials.class); + creds = parsed.getCredentials(); + returnedPrincipal = parsed.getPrincipal(); + } + assertThat(creds.getClientId()).isEqualTo(returnedPrincipal.getClientId()); + + String oldClientId = creds.getClientId(); + String oldSecret = creds.getClientSecret(); + + // Now rotate the credentials. First, if we try to just use the adminToken to rotate the + // newly created principal's credentials, we should fail; rotateCredentials is only + // a "self" privilege that even admins can't inherit. 
+ try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal/rotate") + .post(Entity.json(""))) { + assertThat(response).returns(Response.Status.FORBIDDEN.getStatusCode(), Response::getStatus); + } + + // Get a fresh token associated with the principal itself. + String newPrincipalToken = + TokenUtils.getTokenFromSecrets( + EXT.client(), EXT.getLocalPort(), oldClientId, oldSecret, realm); + + // Any call should initially fail with an error indicating that rotation is needed. + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principals/myprincipal", + "Bearer " + newPrincipalToken) + .get()) { + assertThat(response).returns(Response.Status.FORBIDDEN.getStatusCode(), Response::getStatus); + ErrorResponse error = response.readEntity(ErrorResponse.class); + assertThat(error) + .isNotNull() + .extracting(ErrorResponse::message) + .asString() + .contains("PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_STATE"); + } + + // Now try to rotate using the principal's token. + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principals/myprincipal/rotate", + "Bearer " + newPrincipalToken) + .post(Entity.json(""))) { + assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + PrincipalWithCredentials parsed = response.readEntity(PrincipalWithCredentials.class); + creds = parsed.getCredentials(); + returnedPrincipal = parsed.getPrincipal(); + } + assertThat(creds.getClientId()).isEqualTo(returnedPrincipal.getClientId()); + + // ClientId shouldn't change + assertThat(creds.getClientId()).isEqualTo(oldClientId); + assertThat(creds.getClientSecret()).isNotEqualTo(oldSecret); + + // TODO: Test the validity of the old secret for getting tokens, here and then after a second + // rotation that makes the old secret fall off retention.
+ } + + @Test + public void testCreateListUpdateAndDeletePrincipal() { + Principal principal = + Principal.builder() + .setName("myprincipal") + .setProperties(Map.of("custom-tag", "foo")) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal, null)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Second attempt to create the same entity should fail with CONFLICT. + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal, false)))) { + assertThat(response).returns(Response.Status.CONFLICT.getStatusCode(), Response::getStatus); + } + + // 200 successful GET after creation + Principal fetchedPrincipal = null; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal").get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedPrincipal = response.readEntity(Principal.class); + + assertThat(fetchedPrincipal.getName()).isEqualTo("myprincipal"); + assertThat(fetchedPrincipal.getProperties()).isEqualTo(Map.of("custom-tag", "foo")); + assertThat(fetchedPrincipal.getEntityVersion()).isGreaterThan(0); + } + + // Should list the principal. 
+ try (Response response = newRequest("http://localhost:%d/api/management/v1/principals").get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Principals.class)) + .extracting(Principals::getPrincipals) + .asInstanceOf(InstanceOfAssertFactories.list(Principal.class)) + .anySatisfy(pr -> assertThat(pr).returns("myprincipal", Principal::getName)); + } + + UpdatePrincipalRequest updateRequest = + new UpdatePrincipalRequest( + fetchedPrincipal.getEntityVersion(), Map.of("custom-tag", "updated")); + + // 200 successful update + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal") + .put(Entity.json(updateRequest))) { + assertThat(response).returns(200, Response::getStatus); + fetchedPrincipal = response.readEntity(Principal.class); + + assertThat(fetchedPrincipal.getProperties()).isEqualTo(Map.of("custom-tag", "updated")); + } + + // 200 GET after update should show new properties + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal").get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedPrincipal = response.readEntity(Principal.class); + + assertThat(fetchedPrincipal.getProperties()).isEqualTo(Map.of("custom-tag", "updated")); + } + + // 204 Successful delete + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal").delete()) { + assertThat(response).returns(204, Response::getStatus); + } + + // NOT_FOUND after deletion + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal").get()) { + assertThat(response).returns(404, Response::getStatus); + } + + // Empty list + try (Response response = newRequest("http://localhost:%d/api/management/v1/principals").get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Principals.class)) + .extracting(Principals::getPrincipals) + 
.asInstanceOf(InstanceOfAssertFactories.list(Principal.class)) + .noneSatisfy(pr -> assertThat(pr).returns("myprincipal", Principal::getName)); + } + } + + @Test + public void testCreatePrincipalWithInvalidName() { + String goodName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH, true, true); + Principal principal = + Principal.builder() + .setName(goodName) + .setProperties(Map.of("custom-tag", "good_principal")) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal, null)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List<String> invalidPrincipalNames = + Arrays.asList( + longInvalidName, + "", + "system$principal1", + "SYSTEM$TestPrincipal", + "System$test_principal", + " SysTeM$ principal"); + + for (String invalidPrincipalName : invalidPrincipalNames) { + principal = + Principal.builder() + .setName(invalidPrincipalName) + .setProperties(Map.of("custom-tag", "bad_principal")) + .build(); + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal, false)))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } + } + } + + @Test + public void testGetPrincipalWithInvalidName() { + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List<String> invalidPrincipalNames = + Arrays.asList( + longInvalidName, + "system$principal1", + "SYSTEM$TestPrincipal", + "System$test_principal", + " SysTeM$ principal"); + + for (String invalidPrincipalName :
invalidPrincipalNames) { + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/" + invalidPrincipalName) + .get()) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } + } + } + + @Test + public void testCreateListUpdateAndDeletePrincipalRole() { + PrincipalRole principalRole = + new PrincipalRole("myprincipalrole", Map.of("custom-tag", "foo"), 0L, 0L, 1); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + .post(Entity.json(new CreatePrincipalRoleRequest(principalRole)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Second attempt to create the same entity should fail with CONFLICT. + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + .post(Entity.json(new CreatePrincipalRoleRequest(principalRole)))) { + + assertThat(response).returns(Response.Status.CONFLICT.getStatusCode(), Response::getStatus); + } + + // 200 successful GET after creation + PrincipalRole fetchedPrincipalRole = null; + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/myprincipalrole").get()) { + + assertThat(response).returns(200, Response::getStatus); + fetchedPrincipalRole = response.readEntity(PrincipalRole.class); + + assertThat(fetchedPrincipalRole.getName()).isEqualTo("myprincipalrole"); + assertThat(fetchedPrincipalRole.getProperties()).isEqualTo(Map.of("custom-tag", "foo")); + assertThat(fetchedPrincipalRole.getEntityVersion()).isGreaterThan(0); + } + + // Should list the principalRole. 
+ try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles").get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .extracting(PrincipalRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(PrincipalRole.class)) + .anySatisfy(pr -> assertThat(pr).returns("myprincipalrole", PrincipalRole::getName)); + } + + UpdatePrincipalRoleRequest updateRequest = + new UpdatePrincipalRoleRequest( + fetchedPrincipalRole.getEntityVersion(), Map.of("custom-tag", "updated")); + + // 200 successful update + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/myprincipalrole") + .put(Entity.json(updateRequest))) { + assertThat(response).returns(200, Response::getStatus); + fetchedPrincipalRole = response.readEntity(PrincipalRole.class); + + assertThat(fetchedPrincipalRole.getProperties()).isEqualTo(Map.of("custom-tag", "updated")); + } + + // 200 GET after update should show new properties + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/myprincipalrole").get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedPrincipalRole = response.readEntity(PrincipalRole.class); + + assertThat(fetchedPrincipalRole.getProperties()).isEqualTo(Map.of("custom-tag", "updated")); + } + + // 204 Successful delete + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/myprincipalrole") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // NOT_FOUND after deletion + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/myprincipalrole").get()) { + + assertThat(response).returns(404, Response::getStatus); + } + + // Empty list + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles").get()) { + + assertThat(response) + .returns(200, 
Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .extracting(PrincipalRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(PrincipalRole.class)) + .noneSatisfy(pr -> assertThat(pr).returns("myprincipalrole", PrincipalRole::getName)); + } + } + + @Test + public void testCreatePrincipalRoleInvalidName() { + String goodName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH, true, true); + PrincipalRole principalRole = + new PrincipalRole(goodName, Map.of("custom-tag", "good_principal_role"), 0L, 0L, 1); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + .post(Entity.json(new CreatePrincipalRoleRequest(principalRole)))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List<String> invalidPrincipalRoleNames = + Arrays.asList( + longInvalidName, + "", + "system$principalrole1", + "SYSTEM$TestPrincipalRole", + "System$test_principal_role", + " SysTeM$ principal role"); + + for (String invalidPrincipalRoleName : invalidPrincipalRoleNames) { + principalRole = + new PrincipalRole( + invalidPrincipalRoleName, Map.of("custom-tag", "bad_principal_role"), 0L, 0L, 1); + + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + .post(Entity.json(new CreatePrincipalRoleRequest(principalRole)))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } + } + } + + @Test + public void testGetPrincipalRoleInvalidName() { + String longInvalidName = RandomStringUtils.random(MAX_IDENTIFIER_LENGTH + 1, true, true); + List<String> invalidPrincipalRoleNames = + Arrays.asList( + longInvalidName, +
"system$principalrole1", + "SYSTEM$TestPrincipalRole", + "System$test_principal_role", + " SysTeM$ principal role"); + + for (String invalidPrincipalRoleName : invalidPrincipalRoleNames) { + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/" + + invalidPrincipalRoleName) + .get()) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus); + assertThat(response.hasEntity()).isTrue(); + ErrorResponse errorResponse = response.readEntity(ErrorResponse.class); + assertThat(errorResponse.message()).contains("Invalid value:"); + } + } + } + + @Test + public void testCreateListUpdateAndDeleteCatalogRole() { + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("mycatalog1") + .setProperties(new CatalogProperties("s3://required/base/location")) + .setStorageConfigInfo( + new AwsStorageConfigInfo( + "arn:aws:iam::012345678901:role/jdoe", StorageConfigInfo.StorageTypeEnum.S3)) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + Catalog catalog2 = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("mycatalog2") + .setStorageConfigInfo( + new AwsStorageConfigInfo( + "arn:aws:iam::012345678901:role/jdoe", StorageConfigInfo.StorageTypeEnum.S3)) + .setProperties(new CatalogProperties("s3://required/base/other_location")) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog2)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + CatalogRole catalogRole = + new CatalogRole("mycatalogrole", Map.of("custom-tag", "foo"), 0L, 0L, 1); + try (Response response = + 
newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles") + .post(Entity.json(new CreateCatalogRoleRequest(catalogRole)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Second attempt to create the same entity should fail with CONFLICT. + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles") + .post(Entity.json(new CreateCatalogRoleRequest(catalogRole)))) { + + assertThat(response).returns(Response.Status.CONFLICT.getStatusCode(), Response::getStatus); + } + + // 200 successful GET after creation + CatalogRole fetchedCatalogRole = null; + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles/mycatalogrole") + .get()) { + + assertThat(response).returns(200, Response::getStatus); + fetchedCatalogRole = response.readEntity(CatalogRole.class); + + assertThat(fetchedCatalogRole.getName()).isEqualTo("mycatalogrole"); + assertThat(fetchedCatalogRole.getProperties()).isEqualTo(Map.of("custom-tag", "foo")); + assertThat(fetchedCatalogRole.getEntityVersion()).isGreaterThan(0); + } + + // Should list the catalogRole. 
+ try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(CatalogRoles.class)) + .extracting(CatalogRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(CatalogRole.class)) + .anySatisfy(cr -> assertThat(cr).returns("mycatalogrole", CatalogRole::getName)); + } + + // Listing in catalog2 should return only the auto-created catalog admin role + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog2/catalog-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(CatalogRoles.class)) + .extracting(CatalogRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(CatalogRole.class)) + .satisfiesExactly( + cr -> + assertThat(cr) + .returns( + PolarisEntityConstants.getNameOfCatalogAdminRole(), + CatalogRole::getName)); + } + + UpdateCatalogRoleRequest updateRequest = + new UpdateCatalogRoleRequest( + fetchedCatalogRole.getEntityVersion(), Map.of("custom-tag", "updated")); + + // 200 successful update + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles/mycatalogrole") + .put(Entity.json(updateRequest))) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalogRole = response.readEntity(CatalogRole.class); + + assertThat(fetchedCatalogRole.getProperties()).isEqualTo(Map.of("custom-tag", "updated")); + } + + // 200 GET after update should show new properties + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles/mycatalogrole") + .get()) { + assertThat(response).returns(200, Response::getStatus); + fetchedCatalogRole = response.readEntity(CatalogRole.class); + + assertThat(fetchedCatalogRole.getProperties()).isEqualTo(Map.of("custom-tag", "updated")); + } + + // 204 Successful delete + try (Response 
response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles/mycatalogrole") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // NOT_FOUND after deletion + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles/mycatalogrole") + .get()) { + + assertThat(response).returns(404, Response::getStatus); + } + + // Empty list + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog1/catalog-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(CatalogRoles.class)) + .extracting(CatalogRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(CatalogRole.class)) + .noneSatisfy(cr -> assertThat(cr).returns("mycatalogrole", CatalogRole::getName)); + } + + // 204 Successful delete mycatalog + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog1").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete mycatalog2 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog2").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + } + + @Test + public void testAssignListAndRevokePrincipalRoles() { + // Create two Principals + Principal principal1 = new Principal("myprincipal1"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal1, false)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + Principal principal2 = new Principal("myprincipal2"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals") + .post(Entity.json(new CreatePrincipalRequest(principal2, false)))) { + + 
assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // One PrincipalRole + PrincipalRole principalRole = new PrincipalRole("myprincipalrole"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + .post(Entity.json(new CreatePrincipalRoleRequest(principalRole)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Assign the role to myprincipal1 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal1/principal-roles") + .put(Entity.json(principalRole))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Should list myprincipalrole + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal1/principal-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .extracting(PrincipalRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(PrincipalRole.class)) + .hasSize(1) + .satisfiesExactly( + pr -> assertThat(pr).returns("myprincipalrole", PrincipalRole::getName)); + } + + // Should list myprincipal1 if listing assignees of myprincipalrole + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/myprincipalrole/principals") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Principals.class)) + .extracting(Principals::getPrincipals) + .asInstanceOf(InstanceOfAssertFactories.list(Principal.class)) + .hasSize(1) + .satisfiesExactly(pr -> assertThat(pr).returns("myprincipal1", Principal::getName)); + } + + // Empty list if listing in principal2 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal2/principal-roles") + .get()) { + + 
assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .returns(List.of(), PrincipalRoles::getRoles); + } + + // 204 Successful revoke + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principals/myprincipal1/principal-roles/myprincipalrole") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // Empty list + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal1/principal-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .returns(List.of(), PrincipalRoles::getRoles); + } + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/myprincipalrole/principals") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(Principals.class)) + .returns(List.of(), Principals::getPrincipals); + } + + // 204 Successful delete myprincipal1 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal1").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete myprincipal2 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principals/myprincipal2").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete myprincipalrole + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/myprincipalrole") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + } + + @Test + public void testAssignListAndRevokeCatalogRoles() { + // Create two PrincipalRoles + PrincipalRole principalRole1 = new PrincipalRole("mypr1"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + 
.post(Entity.json(new CreatePrincipalRoleRequest(principalRole1)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + PrincipalRole principalRole2 = new PrincipalRole("mypr2"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles") + .post(Entity.json(new CreatePrincipalRoleRequest(principalRole2)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // One CatalogRole + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("mycatalog") + .setStorageConfigInfo( + new AwsStorageConfigInfo( + "arn:aws:iam::012345678901:role/jdoe", StorageConfigInfo.StorageTypeEnum.S3)) + .setProperties(new CatalogProperties("s3://bucket1/")) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(catalog)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + CatalogRole catalogRole = new CatalogRole("mycr"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog/catalog-roles") + .post(Entity.json(new CreateCatalogRoleRequest(catalogRole)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Create another one in a different catalog. 
+ Catalog otherCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("othercatalog") + .setProperties(new CatalogProperties("s3://path/to/data")) + .setStorageConfigInfo( + new AwsStorageConfigInfo( + "arn:aws:iam::012345678901:role/jdoe", StorageConfigInfo.StorageTypeEnum.S3)) + .build(); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs") + .post(Entity.json(new CreateCatalogRequest(otherCatalog)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + CatalogRole otherCatalogRole = new CatalogRole("myothercr"); + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/othercatalog/catalog-roles") + .post(Entity.json(new CreateCatalogRoleRequest(otherCatalogRole)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Assign both the roles to mypr1 + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/mypr1/catalog-roles/mycatalog") + .put(Entity.json(new GrantCatalogRoleRequest(catalogRole)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/mypr1/catalog-roles/othercatalog") + .put(Entity.json(new GrantCatalogRoleRequest(otherCatalogRole)))) { + + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Should list only mycr + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/mypr1/catalog-roles/mycatalog") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(CatalogRoles.class)) + .extracting(CatalogRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(CatalogRole.class)) + .hasSize(1) + .satisfiesExactly(cr -> 
assertThat(cr).returns("mycr", CatalogRole::getName)); + } + + // Should list mypr1 if listing assignees of mycr + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog/catalog-roles/mycr/principal-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .extracting(PrincipalRoles::getRoles) + .asInstanceOf(InstanceOfAssertFactories.list(PrincipalRole.class)) + .hasSize(1) + .satisfiesExactly(pr -> assertThat(pr).returns("mypr1", PrincipalRole::getName)); + } + + // Empty list if listing in principalRole2 + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/mypr2/catalog-roles/mycatalog") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(CatalogRoles.class)) + .returns(List.of(), CatalogRoles::getRoles); + } + + // 204 Successful revoke + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/mypr1/catalog-roles/mycatalog/mycr") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // Empty list + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/principal-roles/mypr1/catalog-roles/mycatalog") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(CatalogRoles.class)) + .returns(List.of(), CatalogRoles::getRoles); + } + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/mycatalog/catalog-roles/mycr/principal-roles") + .get()) { + + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(PrincipalRoles.class)) + .returns(List.of(), PrincipalRoles::getRoles); + } + + // 204 Successful delete mypr1 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/mypr1").delete()) { + + 
assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete mypr2 + try (Response response = + newRequest("http://localhost:%d/api/management/v1/principal-roles/mypr2").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete mycr + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog/catalog-roles/mycr") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete mycatalog + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/mycatalog").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete myothercr + try (Response response = + newRequest( + "http://localhost:%d/api/management/v1/catalogs/othercatalog/catalog-roles/myothercr") + .delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + + // 204 Successful delete othercatalog + try (Response response = + newRequest("http://localhost:%d/api/management/v1/catalogs/othercatalog").delete()) { + + assertThat(response).returns(204, Response::getStatus); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/auth/JWTRSAKeyPairTest.java b/polaris-service/src/test/java/io/polaris/service/auth/JWTRSAKeyPairTest.java new file mode 100644 index 0000000000..b045998d1c --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/auth/JWTRSAKeyPairTest.java @@ -0,0 +1,159 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import static org.assertj.core.api.Fail.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; + +import com.auth0.jwt.JWT; +import com.auth0.jwt.JWTVerifier; +import com.auth0.jwt.algorithms.Algorithm; +import com.auth0.jwt.interfaces.DecodedJWT; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.storage.cache.StorageCredentialCache; +import io.polaris.service.config.DefaultConfigurationStore; +import java.io.BufferedWriter; +import java.io.File; +import java.io.FileWriter; +import java.security.KeyPair; +import java.security.KeyPairGenerator; +import java.security.interfaces.RSAPrivateKey; +import java.security.interfaces.RSAPublicKey; +import java.util.Base64; +import java.util.HashMap; +import java.util.Map; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +public class JWTRSAKeyPairTest { + + private void writePemToTmpFile(String privateFileLocation, String publicFileLocation) + throws Exception { + new File(privateFileLocation).delete(); + new File(publicFileLocation).delete(); + 
KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA"); + kpg.initialize(2048); + KeyPair kp = kpg.generateKeyPair(); + try (BufferedWriter writer = new BufferedWriter(new FileWriter(privateFileLocation, true))) { + writer.write("-----BEGIN PRIVATE KEY-----"); // pragma: allowlist secret + writer.newLine(); + writer.write(Base64.getMimeEncoder().encodeToString(kp.getPrivate().getEncoded())); + writer.newLine(); + writer.write("-----END PRIVATE KEY-----"); + writer.newLine(); + } + try (BufferedWriter writer = new BufferedWriter(new FileWriter(publicFileLocation, true))) { + writer.write("-----BEGIN PUBLIC KEY-----"); + writer.newLine(); + writer.write(Base64.getMimeEncoder().encodeToString(kp.getPublic().getEncoded())); + writer.newLine(); + writer.write("-----END PUBLIC KEY-----"); + writer.newLine(); + } + } + + public CallContext getTestCallContext(PolarisCallContext polarisCallContext) { + return CallContext.setCurrentContext( + new CallContext() { + @Override + public RealmContext getRealmContext() { + return null; + } + + @Override + public PolarisCallContext getPolarisCallContext() { + return polarisCallContext; + } + + @Override + public Map<String, Object> contextVariables() { + return Map.of("token", "me"); + } + }); + } + + @Test + public void testSuccessfulTokenGeneration() throws Exception { + String privateFileLocation = "/tmp/test-private.pem"; + String publicFileLocation = "/tmp/test-public.pem"; + writePemToTmpFile(privateFileLocation, publicFileLocation); + + final String clientId = "test-client-id"; + final String scope = "PRINCIPAL_ROLE:TEST"; + + Map<String, Object> config = new HashMap<>(); + + config.put("LOCAL_PRIVATE_KEY_LOCATION_KEY", privateFileLocation); + config.put("LOCAL_PUBLIC_LOCATION_KEY", publicFileLocation); + + DefaultConfigurationStore store = new DefaultConfigurationStore(config); + PolarisCallContext polarisCallContext = new PolarisCallContext(null, null, store, null); + CallContext.setCurrentContext(getTestCallContext(polarisCallContext)); + 
PolarisMetaStoreManager metastoreManager = Mockito.mock(PolarisMetaStoreManager.class); + String mainSecret = "client-secret"; + PolarisPrincipalSecrets principalSecrets = + new PolarisPrincipalSecrets(1L, clientId, mainSecret, "otherSecret"); + PolarisEntityManager entityManager = + new PolarisEntityManager(metastoreManager, Mockito::mock, new StorageCredentialCache()); + Mockito.when(metastoreManager.loadPrincipalSecrets(polarisCallContext, clientId)) + .thenReturn(new PolarisMetaStoreManager.PrincipalSecretsResult(principalSecrets)); + PolarisBaseEntity principal = + new PolarisBaseEntity( + 0L, + 1L, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + 0L, + "principal"); + Mockito.when(metastoreManager.loadEntity(polarisCallContext, 0L, 1L)) + .thenReturn(new PolarisMetaStoreManager.EntityResult(principal)); + TokenBroker tokenBroker = new JWTRSAKeyPair(entityManager, 420); + TokenResponse token = null; + try { + token = + tokenBroker.generateFromClientSecrets( + clientId, mainSecret, TokenRequestValidator.CLIENT_CREDENTIALS, scope); + } catch (Exception e) { + fail("Unexpected exception: " + e); + } + assertNotNull(token); + assertEquals(420, token.getExpiresIn()); + + LocalRSAKeyProvider provider = new LocalRSAKeyProvider(); + assertNotNull(provider.getPrivateKey()); + assertNotNull(provider.getPublicKey()); + JWTVerifier verifier = + JWT.require( + Algorithm.RSA256( + (RSAPublicKey) provider.getPublicKey(), + (RSAPrivateKey) provider.getPrivateKey())) + .withIssuer("polaris") + .build(); + DecodedJWT decodedJWT = verifier.verify(token.getAccessToken()); + assertNotNull(decodedJWT); + assertEquals("PRINCIPAL_ROLE:TEST", decodedJWT.getClaim("scope").asString()); + assertEquals("test-client-id", decodedJWT.getClaim("client_id").asString()); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/auth/JWTSymmetricKeyGeneratorTest.java b/polaris-service/src/test/java/io/polaris/service/auth/JWTSymmetricKeyGeneratorTest.java new file 
mode 100644 index 0000000000..e80a86c025 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/auth/JWTSymmetricKeyGeneratorTest.java @@ -0,0 +1,94 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; + +import com.auth0.jwt.JWT; +import com.auth0.jwt.JWTVerifier; +import com.auth0.jwt.algorithms.Algorithm; +import com.auth0.jwt.interfaces.DecodedJWT; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.storage.cache.StorageCredentialCache; +import java.util.Map; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +public class JWTSymmetricKeyGeneratorTest { + + /** Sanity test to verify that we can generate a token */ + @Test + public void testJWTSymmetricKeyGenerator() { + PolarisCallContext polarisCallContext = new PolarisCallContext(null, null, null, null); + 
CallContext.setCurrentContext( + new CallContext() { + @Override + public RealmContext getRealmContext() { + return () -> "realm"; + } + + @Override + public PolarisCallContext getPolarisCallContext() { + return polarisCallContext; + } + + @Override + public Map<String, Object> contextVariables() { + return Map.of(); + } + }); + PolarisMetaStoreManager metastoreManager = Mockito.mock(PolarisMetaStoreManager.class); + String mainSecret = "test_secret"; + String clientId = "test_client_id"; + PolarisPrincipalSecrets principalSecrets = + new PolarisPrincipalSecrets(1L, clientId, mainSecret, "otherSecret"); + PolarisEntityManager entityManager = + new PolarisEntityManager(metastoreManager, Mockito::mock, new StorageCredentialCache()); + Mockito.when(metastoreManager.loadPrincipalSecrets(polarisCallContext, clientId)) + .thenReturn(new PolarisMetaStoreManager.PrincipalSecretsResult(principalSecrets)); + PolarisBaseEntity principal = + new PolarisBaseEntity( + 0L, + 1L, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + 0L, + "principal"); + Mockito.when(metastoreManager.loadEntity(polarisCallContext, 0L, 1L)) + .thenReturn(new PolarisMetaStoreManager.EntityResult(principal)); + TokenBroker generator = new JWTSymmetricKeyBroker(entityManager, 666, () -> "polaris"); + TokenResponse token = + generator.generateFromClientSecrets( + clientId, mainSecret, TokenRequestValidator.CLIENT_CREDENTIALS, "PRINCIPAL_ROLE:TEST"); + assertNotNull(token); + + JWTVerifier verifier = JWT.require(Algorithm.HMAC256("polaris")).withIssuer("polaris").build(); + DecodedJWT decodedJWT = verifier.verify(token.getAccessToken()); + assertNotNull(decodedJWT); + assertEquals(666, token.getExpiresIn()); + assertEquals("PRINCIPAL_ROLE:TEST", decodedJWT.getClaim("scope").asString()); + assertEquals(clientId, decodedJWT.getClaim("client_id").asString()); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/auth/TokenRequestValidatorTest.java 
b/polaris-service/src/test/java/io/polaris/service/auth/TokenRequestValidatorTest.java new file mode 100644 index 0000000000..a44ed5f041 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/auth/TokenRequestValidatorTest.java @@ -0,0 +1,88 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import java.util.Arrays; +import java.util.Optional; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; + +public class TokenRequestValidatorTest { + @Test + public void testValidateForClientCredentialsFlowNullClientId() { + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_client, + new TokenRequestValidator() + .validateForClientCredentialsFlow(null, "notnull", "notnull", "notnull") + .get()); + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_client, + new TokenRequestValidator() + .validateForClientCredentialsFlow("", "notnull", "notnull", "notnull") + .get()); + } + + @Test + public void testValidateForClientCredentialsFlowNullClientSecret() { + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_client, + new TokenRequestValidator() + .validateForClientCredentialsFlow("client-id", null, "notnull", "notnull") + .get()); + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_client, + new TokenRequestValidator() + .validateForClientCredentialsFlow("client-id", "", "notnull", 
"notnull") + .get()); + } + + @Test + public void testValidateForClientCredentialsFlowInvalidGrantType() { + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_grant, + new TokenRequestValidator() + .validateForClientCredentialsFlow( + "client-id", "client-secret", "not-client-credentials", "notnull") + .get()); + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_grant, + new TokenRequestValidator() + .validateForClientCredentialsFlow("client-id", "client-secret", "grant", "notnull") + .get()); + } + + @Test + public void testValidateForClientCredentialsFlowInvalidScope() { + for (String scope : + Arrays.asList("null", "", ",", "ALL", "PRINCIPAL_ROLE:", "PRINCIPAL_ROLE")) { + Assertions.assertEquals( + OAuthTokenErrorResponse.Error.invalid_scope, + new TokenRequestValidator() + .validateForClientCredentialsFlow( + "client-id", "client-secret", "client_credentials", scope) + .get()); + } + } + + @Test + public void testValidateForClientCredentialsFlowAllValid() { + Assertions.assertEquals( + Optional.empty(), + new TokenRequestValidator() + .validateForClientCredentialsFlow( + "client-id", "client-secret", "client_credentials", "PRINCIPAL_ROLE:ALL")); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/auth/TokenUtils.java b/polaris-service/src/test/java/io/polaris/service/auth/TokenUtils.java new file mode 100644 index 0000000000..32c5e51226 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/auth/TokenUtils.java @@ -0,0 +1,68 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.auth; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.assertj.core.api.Assertions.assertThat; + +import jakarta.ws.rs.client.Client; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.client.Invocation; +import jakarta.ws.rs.core.MultivaluedHashMap; +import jakarta.ws.rs.core.Response; +import java.util.Map; +import org.apache.iceberg.rest.responses.OAuthTokenResponse; + +public class TokenUtils { + + /** Get token against default realm */ + public static String getTokenFromSecrets( + Client client, int port, String clientId, String clientSecret) { + return getTokenFromSecrets(client, port, clientId, clientSecret, null); + } + + /** Get token against specified realm */ + public static String getTokenFromSecrets( + Client client, int port, String clientId, String clientSecret, String realm) { + String token; + + Invocation.Builder builder = + client + .target(String.format("http://localhost:%d/api/catalog/v1/oauth/tokens", port)) + .request("application/json"); + if (realm != null) { + builder = builder.header(REALM_PROPERTY_KEY, realm); + } + + try (Response response = + builder.post( + Entity.form( + new MultivaluedHashMap<>( + Map.of( + "grant_type", + "client_credentials", + "scope", + "PRINCIPAL_ROLE:ALL", + "client_id", + clientId, + "client_secret", + clientSecret))))) { + assertThat(response).returns(200, Response::getStatus); + token = response.readEntity(OAuthTokenResponse.class).token(); + } + return token; + } +} diff --git 
a/polaris-service/src/test/java/io/polaris/service/catalog/BasePolarisCatalogTest.java b/polaris-service/src/test/java/io/polaris/service/catalog/BasePolarisCatalogTest.java new file mode 100644 index 0000000000..6ca1957230 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/BasePolarisCatalogTest.java @@ -0,0 +1,1159 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.catalog; + +import static org.apache.iceberg.types.Types.NestedField.required; +import static org.mockito.ArgumentMatchers.isA; +import static org.mockito.Mockito.when; + +import com.google.common.collect.ImmutableMap; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import 
io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.entity.TaskEntity; +import io.polaris.core.monitor.PolarisMetricRegistry; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.PolarisMetaStoreSession; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageIntegration; +import io.polaris.core.storage.PolarisStorageIntegrationProvider; +import io.polaris.core.storage.aws.AwsCredentialsStorageIntegration; +import io.polaris.core.storage.aws.AwsStorageConfigurationInfo; +import io.polaris.core.storage.cache.StorageCredentialCache; +import io.polaris.service.admin.PolarisAdminService; +import io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory; +import io.polaris.service.task.TaskExecutor; +import io.polaris.service.task.TaskFileIOSupplier; +import io.polaris.service.types.NotificationRequest; +import io.polaris.service.types.NotificationType; +import io.polaris.service.types.TableUpdateNotification; +import java.io.IOException; +import java.time.Clock; +import java.util.Arrays; +import java.util.EnumMap; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; +import java.util.function.Supplier; +import java.util.stream.Collectors; +import org.apache.commons.lang3.NotImplementedException; +import org.apache.iceberg.BaseTable; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.PartitionSpec; +import org.apache.iceberg.Schema; +import org.apache.iceberg.SortOrder; +import org.apache.iceberg.Table; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.TableMetadataParser; +import org.apache.iceberg.catalog.CatalogTests; +import org.apache.iceberg.catalog.Namespace; +import 
org.apache.iceberg.catalog.SupportsNamespaces; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.BadRequestException; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.exceptions.NoSuchNamespaceException; +import org.apache.iceberg.inmemory.InMemoryFileIO; +import org.apache.iceberg.io.FileIO; +import org.apache.iceberg.types.Types; +import org.assertj.core.api.AbstractBooleanAssert; +import org.assertj.core.api.Assertions; +import org.jetbrains.annotations.Nullable; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assumptions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; +import software.amazon.awssdk.services.sts.StsClient; +import software.amazon.awssdk.services.sts.model.AssumeRoleRequest; +import software.amazon.awssdk.services.sts.model.AssumeRoleResponse; +import software.amazon.awssdk.services.sts.model.Credentials; + +public class BasePolarisCatalogTest extends CatalogTests<BasePolarisCatalog> { + protected static final Namespace NS = Namespace.of("newdb"); + protected static final TableIdentifier TABLE = TableIdentifier.of(NS, "table"); + protected static final Schema SCHEMA = + new Schema( + required(3, "id", Types.IntegerType.get(), "unique ID 🤪"), + required(4, "data", Types.StringType.get())); + public static final String CATALOG_NAME = "polaris-catalog"; + public static final String TEST_ACCESS_KEY = "test_access_key"; + public static final String SECRET_ACCESS_KEY = "secret_access_key"; + public static final String SESSION_TOKEN = "session_token"; + + private BasePolarisCatalog catalog; + private AwsStorageConfigInfo storageConfigModel; + private StsClient stsClient; + private PolarisMetaStoreManager metaStoreManager; + private PolarisCallContext polarisContext; + private PolarisAdminService adminService; + private PolarisEntityManager entityManager; + private AuthenticatedPolarisPrincipal authenticatedRoot; +
private PolarisEntity catalogEntity; + + @BeforeEach + @SuppressWarnings("unchecked") + public void before() { + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + RealmContext realmContext = () -> "realm"; + PolarisStorageIntegrationProvider storageIntegrationProvider = Mockito.mock(); + InMemoryPolarisMetaStoreManagerFactory managerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + managerFactory.setStorageIntegrationProvider(storageIntegrationProvider); + metaStoreManager = managerFactory.getOrCreateMetaStoreManager(realmContext); + Map<String, Object> configMap = new HashMap<>(); + configMap.put("ALLOW_SPECIFYING_FILE_IO_IMPL", true); + polarisContext = + new PolarisCallContext( + managerFactory.getOrCreateSessionSupplier(realmContext).get(), + diagServices, + new PolarisConfigurationStore() { + @Override + public <T> @Nullable T getConfiguration(PolarisCallContext ctx, String configName) { + return (T) configMap.get(configName); + } + }, + Clock.systemDefaultZone()); + entityManager = + new PolarisEntityManager( + metaStoreManager, polarisContext::getMetaStore, new StorageCredentialCache()); + + CallContext callContext = CallContext.of(realmContext, polarisContext); + CallContext.setCurrentContext(callContext); + + PrincipalEntity rootEntity = + new PrincipalEntity( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .readEntityByName( + polarisContext, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + "root") + .getEntity())); + + authenticatedRoot = new AuthenticatedPolarisPrincipal(rootEntity, Set.of()); + + adminService = + new PolarisAdminService( + callContext, + entityManager, + authenticatedRoot, + new PolarisAuthorizer(new PolarisConfigurationStore() {})); + String storageLocation = "s3://my-bucket/path/to/data"; + storageConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::012345678901:role/jdoe") + .setExternalId("externalId") + .setUserArn("aws::a:user:arn") +
.setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of(storageLocation, "s3://externally-owned-bucket")) + .build(); + catalogEntity = + adminService.createCatalog( + new CatalogEntity.Builder() + .setName(CATALOG_NAME) + .setDefaultBaseLocation(storageLocation) + .setReplaceNewLocationPrefixWithCatalogDefault("file:") + .addProperty(PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "true") + .addProperty(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .setStorageConfigurationInfo(storageConfigModel, storageLocation) + .build()); + + PolarisPassthroughResolutionView passthroughView = + new PolarisPassthroughResolutionView( + callContext, entityManager, authenticatedRoot, CATALOG_NAME); + TaskExecutor taskExecutor = Mockito.mock(); + this.catalog = + new BasePolarisCatalog( + entityManager, callContext, passthroughView, authenticatedRoot, taskExecutor); + this.catalog.initialize( + CATALOG_NAME, + ImmutableMap.of( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO")); + stsClient = Mockito.mock(StsClient.class); + when(stsClient.assumeRole(isA(AssumeRoleRequest.class))) + .thenReturn( + AssumeRoleResponse.builder() + .credentials( + Credentials.builder() + .accessKeyId(TEST_ACCESS_KEY) + .secretAccessKey(SECRET_ACCESS_KEY) + .sessionToken(SESSION_TOKEN) + .build()) + .build()); + PolarisStorageIntegration storageIntegration = + new AwsCredentialsStorageIntegration(stsClient); + when(storageIntegrationProvider.getStorageIntegrationForConfig( + isA(AwsStorageConfigurationInfo.class))) + .thenReturn((PolarisStorageIntegration) storageIntegration); + } + + @AfterEach + public void after() throws IOException { + catalog().close(); + } + + @Override + protected BasePolarisCatalog catalog() { + return catalog; + } + + @Override + protected boolean requiresNamespaceCreate() { + return true; + } + + @Override + protected boolean supportsNestedNamespaces() { + return true; + } + + 
@Override + protected boolean overridesRequestedLocation() { + return true; + } + + protected boolean supportsNotifications() { + return true; + } + + @Test + public void testRenameTableMissingDestinationNamespace() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + + BasePolarisCatalog catalog = catalog(); + catalog.createNamespace(NS); + + Assertions.assertThat(catalog.tableExists(TABLE)) + .as("Source table should not exist before create") + .isFalse(); + + catalog.buildTable(TABLE, SCHEMA).create(); + Assertions.assertThat(catalog.tableExists(TABLE)) + .as("Table should exist after create") + .isTrue(); + + Namespace newNamespace = Namespace.of("nonexistent_namespace"); + TableIdentifier renamedTable = TableIdentifier.of(newNamespace, "table_renamed"); + + Assertions.assertThat(catalog.namespaceExists(newNamespace)) + .as("Destination namespace should not exist before rename") + .isFalse(); + + Assertions.assertThat(catalog.tableExists(renamedTable)) + .as("Destination table should not exist before rename") + .isFalse(); + + Assertions.assertThatThrownBy(() -> catalog.renameTable(TABLE, renamedTable)) + .isInstanceOf(NoSuchNamespaceException.class) + .hasMessageContaining("Namespace does not exist"); + + Assertions.assertThat(catalog.namespaceExists(newNamespace)) + .as("Destination namespace should not exist after failed rename") + .isFalse(); + + Assertions.assertThat(catalog.tableExists(renamedTable)) + .as("Table should not exist after failed rename") + .isFalse(); + } + + @Test + public void testCreateNestedNamespaceUnderMissingParent() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + + BasePolarisCatalog catalog = catalog(); + + Namespace child1 = Namespace.of("parent", "child1");
+ + Assertions.assertThatThrownBy(() -> catalog.createNamespace(child1)) + .isInstanceOf(NoSuchNamespaceException.class) + .hasMessageContaining("Parent"); + } + + @Test + public void testUpdateNotificationWhenTableAndNamespacesDontExist() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThat(catalog.sendNotification(table, request)) + .as("Notification should be sent successfully") + .isTrue(); + Assertions.assertThat(catalog.namespaceExists(namespace)) + .as("Intermediate namespaces should be created") + .isTrue(); + Assertions.assertThat(catalog.tableExists(table)) + .as("Table should be created on receiving notification") + .isTrue(); + } + + @Test + public void testUpdateNotificationCreateTableInDisallowedLocation() { + Assumptions.assumeTrue( + 
requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + // The location of the metadata JSON file specified in the create will be forbidden. + final String tableLocation = "s3://forbidden-table-location/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("Invalid location"); + } + + @Test + public void testCreateNotificationCreateTableInExternalLocation() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + // The location of the metadata JSON file specified is outside of the table's base 
location + // according to the + // metadata. We assume this is fraudulent and disallowed + final String tableLocation = "s3://my-bucket/path/to/data/my_table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + final String anotherTableLocation = "s3://my-bucket/path/to/data/another_table/"; + + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + polarisContext, + List.of(PolarisEntity.toCore(catalogEntity)), + new CatalogEntity.Builder(CatalogEntity.of(catalogEntity)) + .addProperty(PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "false") + .addProperty(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .build()); + BasePolarisCatalog catalog = catalog(); + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .assignUUID() + .setLocation(anotherTableLocation) + .addSchema(SCHEMA, 4) + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .build(); + TableMetadataParser.write(tableMetadata, catalog.getIo().newOutputFile(tableMetadataLocation)); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "my_table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.CREATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(BadRequestException.class) + .hasMessageContaining("is not allowed outside of table location"); + } + + @Test + public void testCreateNotificationCreateTableOutsideOfMetadataLocation() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces 
must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + // The location of the metadata JSON file specified is outside of the table's metadata directory + // according to the + // metadata. We assume this is fraudulent and disallowed + final String tableLocation = "s3://my-bucket/path/to/data/my_table/"; + final String tableMetadataLocation = tableLocation + "metadata/v3.metadata.json"; + + // this passes the first validation, since it's within the namespace subdirectory, but + // the location is in another table's subdirectory + final String anotherTableLocation = "s3://my-bucket/path/to/data/another_table"; + + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + polarisContext, + List.of(PolarisEntity.toCore(catalogEntity)), + new CatalogEntity.Builder(CatalogEntity.of(catalogEntity)) + .addProperty(PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "false") + .addProperty(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .build()); + BasePolarisCatalog catalog = catalog(); + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .assignUUID() + .setLocation(anotherTableLocation) + .addSchema(SCHEMA, 4) + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .build(); + TableMetadataParser.write(tableMetadata, catalog.getIo().newOutputFile(tableMetadataLocation)); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "my_table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.CREATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); 
+ update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(BadRequestException.class) + .hasMessageContaining("is not allowed outside of table location"); + } + + @Test + public void testUpdateNotificationCreateTableInExternalLocation() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + // The location of the metadata JSON file specified is outside of the table's base location + // according to the + // metadata. We assume this is fraudulent and disallowed + final String tableLocation = "s3://my-bucket/path/to/data/my_table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + final String anotherTableLocation = "s3://my-bucket/path/to/data/another_table/"; + + entityManager + .getMetaStoreManager() + .updateEntityPropertiesIfNotChanged( + polarisContext, + List.of(PolarisEntity.toCore(catalogEntity)), + new CatalogEntity.Builder(CatalogEntity.of(catalogEntity)) + .addProperty(PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "false") + .addProperty(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .build()); + BasePolarisCatalog catalog = catalog(); + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "my_table"); + + NotificationRequest createRequest = new NotificationRequest(); + 
createRequest.setNotificationType(NotificationType.CREATE); + TableUpdateNotification create = new TableUpdateNotification(); + create.setMetadataLocation(tableMetadataLocation); + create.setTableName(table.name()); + create.setTableUuid(UUID.randomUUID().toString()); + create.setTimestamp(230950845L); + createRequest.setPayload(create); + + // the create should succeed + catalog.sendNotification(table, createRequest); + + // now craft the malicious metadata file + final String maliciousMetadataFile = tableLocation + "metadata/v2.metadata.json"; + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .assignUUID() + .setLocation(anotherTableLocation) + .addSchema(SCHEMA, 4) + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .build(); + TableMetadataParser.write(tableMetadata, catalog.getIo().newOutputFile(maliciousMetadataFile)); + + NotificationRequest updateRequest = new NotificationRequest(); + updateRequest.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(maliciousMetadataFile); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + updateRequest.setPayload(update); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, updateRequest)) + .isInstanceOf(BadRequestException.class) + .hasMessageContaining("is not allowed outside of table location"); + } + + @Test + public void testUpdateNotificationCreateTableWithLocalFilePrefix() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + // The location of the metadata JSON file specified in the 
create will be forbidden. + final String metadataLocation = "file:///etc/metadata.json/../passwd"; + String catalogWithoutStorage = "catalogWithoutStorage"; + PolarisEntity catalogEntity = + adminService.createCatalog( + new CatalogEntity.Builder() + .setDefaultBaseLocation("file://") + .setName(catalogWithoutStorage) + .build()); + + CallContext callContext = CallContext.getCurrentContext(); + PolarisPassthroughResolutionView passthroughView = + new PolarisPassthroughResolutionView( + callContext, entityManager, authenticatedRoot, catalogWithoutStorage); + TaskExecutor taskExecutor = Mockito.mock(); + BasePolarisCatalog catalog = + new BasePolarisCatalog( + entityManager, callContext, passthroughView, authenticatedRoot, taskExecutor); + catalog.initialize( + catalogWithoutStorage, + ImmutableMap.of( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO")); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(metadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + metadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(metadataLocation)).getBytes()); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("Invalid location"); + } + + @Test + public void testUpdateNotificationCreateTableWithHttpPrefix() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( 
+ supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + String catalogName = "catalogForMaliciousDomain"; + PolarisEntity catalogEntity = + adminService.createCatalog( + new CatalogEntity.Builder() + .setDefaultBaseLocation("http://maliciousdomain.com") + .setName(catalogName) + .build()); + + CallContext callContext = CallContext.getCurrentContext(); + PolarisPassthroughResolutionView passthroughView = + new PolarisPassthroughResolutionView( + callContext, entityManager, authenticatedRoot, catalogName); + TaskExecutor taskExecutor = Mockito.mock(); + BasePolarisCatalog catalog = + new BasePolarisCatalog( + entityManager, callContext, passthroughView, authenticatedRoot, taskExecutor); + catalog.initialize( + catalogName, + ImmutableMap.of( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO")); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + // The location of the metadata JSON file specified in the create will be forbidden. 
+ final String metadataLocation = "http://maliciousdomain.com/metadata.json"; + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(metadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + fileIO.addFile( + metadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(metadataLocation)).getBytes()); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("Invalid location"); + + // It also fails if we try to use https + final String httpsMetadataLocation = "https://maliciousdomain.com/metadata.json"; + NotificationRequest newRequest = new NotificationRequest(); + newRequest.setNotificationType(NotificationType.UPDATE); + newRequest.setPayload( + new TableUpdateNotification( + table.name(), 230950845L, UUID.randomUUID().toString(), httpsMetadataLocation, null)); + + fileIO.addFile( + httpsMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(metadataLocation)).getBytes()); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, newRequest)) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("Invalid location"); + } + + @Test + public void testUpdateNotificationWhenNamespacesExist() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation 
+ "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + + createNonExistingNamespaces(namespace); + + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThat(catalog.sendNotification(table, request)) + .as("Notification should be sent successfully") + .isTrue(); + Assertions.assertThat(catalog.namespaceExists(namespace)) + .as("Intermediate namespaces should be created") + .isTrue(); + Assertions.assertThat(catalog.tableExists(table)) + .as("Table should be created on receiving notification") + .isTrue(); + } + + @Test + public void testUpdateNotificationWhenTableExists() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + + createNonExistingNamespaces(namespace); + + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + 
catalog.createTable( + table, + new Schema( + Types.NestedField.required(1, "intType", Types.IntegerType.get()), + Types.NestedField.required(2, "stringType", Types.StringType.get()))); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThat(catalog.sendNotification(table, request)) + .as("Notification should be sent successfully") + .isTrue(); + Assertions.assertThat(catalog.namespaceExists(namespace)) + .as("Intermediate namespaces should be created") + .isTrue(); + Assertions.assertThat(catalog.tableExists(table)) + .as("Table should be created on receiving notification") + .isTrue(); + } + + @Test + public void testUpdateNotificationWhenTableExistsInDisallowedLocation() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + // The location of the metadata JSON file specified in the update will be forbidden. 
+ final String tableLocation = "s3://forbidden-table-location/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + + createNonExistingNamespaces(namespace); + + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + catalog.createTable( + table, + new Schema( + Types.NestedField.required(1, "intType", Types.IntegerType.get()), + Types.NestedField.required(2, "stringType", Types.StringType.get()))); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("Invalid location"); + } + + @Test + public void testUpdateNotificationWhenTableExistsFileSpecifiesDisallowedLocation() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = 
Namespace.of("parent", "child1"); + + createNonExistingNamespaces(namespace); + + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + catalog.createTable( + table, + new Schema( + Types.NestedField.required(1, "intType", Types.IntegerType.get()), + Types.NestedField.required(2, "stringType", Types.StringType.get()))); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + // Though the metadata JSON file itself is in an allowed location, make it internally specify + // a forbidden table location. + TableMetadata forbiddenMetadata = + createSampleTableMetadata("s3://forbidden-table-location/table/"); + fileIO.addFile(tableMetadataLocation, TableMetadataParser.toJson(forbiddenMetadata).getBytes()); + + Assertions.assertThatThrownBy(() -> catalog.sendNotification(table, request)) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("Invalid location"); + } + + @Test + public void testDropNotificationWhenTableAndNamespacesDontExist() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + TableIdentifier 
table = TableIdentifier.of(namespace, "table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.DROP); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + Assertions.assertThat(catalog.sendNotification(table, request)) + .as("Notification should fail since the target table doesn't exist") + .isFalse(); + Assertions.assertThat(catalog.namespaceExists(namespace)) + .as("Intermediate namespaces should not be created") + .isFalse(); + Assertions.assertThat(catalog.tableExists(table)).as("Table should not exist").isFalse(); + } + + @Test + public void testDropNotificationWhenNamespacesExist() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + + createNonExistingNamespaces(namespace); + + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.DROP); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + 
InMemoryFileIO fileIO = (InMemoryFileIO) catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThat(catalog.sendNotification(table, request)) + .as("Notification should fail since table doesn't exist") + .isFalse(); + Assertions.assertThat(catalog.namespaceExists(namespace)) + .as("Intermediate namespaces should exist") + .isTrue(); + Assertions.assertThat(catalog.tableExists(table)) + .as("Table should not be created on receiving notification") + .isFalse(); + } + + @Test + public void testDropNotificationWhenTableExists() { + Assumptions.assumeTrue( + requiresNamespaceCreate(), + "Only applicable if namespaces must be created before adding children"); + Assumptions.assumeTrue( + supportsNestedNamespaces(), "Only applicable if nested namespaces are supported"); + Assumptions.assumeTrue( + supportsNotifications(), "Only applicable if notifications are supported"); + + final String tableLocation = "s3://externally-owned-bucket/table/"; + final String tableMetadataLocation = tableLocation + "metadata/v1.metadata.json"; + BasePolarisCatalog catalog = catalog(); + + Namespace namespace = Namespace.of("parent", "child1"); + + createNonExistingNamespaces(namespace); + + TableIdentifier table = TableIdentifier.of(namespace, "table"); + + catalog.createTable( + table, + new Schema( + Types.NestedField.required(1, "intType", Types.IntegerType.get()), + Types.NestedField.required(2, "stringType", Types.StringType.get()))); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.DROP); + TableUpdateNotification update = new TableUpdateNotification(); + update.setMetadataLocation(tableMetadataLocation); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + InMemoryFileIO fileIO = (InMemoryFileIO) 
catalog.getIo(); + + fileIO.addFile( + tableMetadataLocation, + TableMetadataParser.toJson(createSampleTableMetadata(tableLocation)).getBytes()); + + Assertions.assertThat(catalog.sendNotification(table, request)) + .as("Notification should be sent successfully") + .isTrue(); + Assertions.assertThat(catalog.namespaceExists(namespace)) + .as("Intermediate namespaces should already exist") + .isTrue(); + Assertions.assertThat(catalog.tableExists(table)) + .as("Table should be dropped on receiving notification") + .isFalse(); + } + + @Test + public void testDropTableWithPurge() { + if (this.requiresNamespaceCreate()) { + ((SupportsNamespaces) catalog).createNamespace(NS); + } + + Assertions.assertThatPredicate(catalog::tableExists) + .as("Table should not exist before create") + .rejects(TABLE); + + Table table = catalog.buildTable(TABLE, SCHEMA).create(); + Assertions.assertThatPredicate(catalog::tableExists) + .as("Table should exist after create") + .accepts(TABLE); + Assertions.assertThat(table).isInstanceOf(BaseTable.class); + TableMetadata tableMetadata = ((BaseTable) table).operations().current(); + + boolean dropped = catalog.dropTable(TABLE, true); + Assertions.assertThat(dropped).as("Should drop a table that does exist").isTrue(); + Assertions.assertThatPredicate(catalog::tableExists) + .as("Table should not exist after drop") + .rejects(TABLE); + List<PolarisBaseEntity> tasks = + metaStoreManager.loadTasks(polarisContext, "testExecutor", 1).getEntities(); + Assertions.assertThat(tasks).hasSize(1); + TaskEntity taskEntity = TaskEntity.of(tasks.get(0)); + EnumMap<PolarisCredentialProperty, String> credentials = + metaStoreManager + .getSubscopedCredsForEntity( + polarisContext, + 0, + taskEntity.getId(), + true, + Set.of(tableMetadata.location()), + Set.of(tableMetadata.location())) + .getCredentials(); + Assertions.assertThat(credentials) + .isNotNull() + .isNotEmpty() + .containsEntry(PolarisCredentialProperty.AWS_KEY_ID, TEST_ACCESS_KEY) + 
.containsEntry(PolarisCredentialProperty.AWS_SECRET_KEY, SECRET_ACCESS_KEY) + .containsEntry(PolarisCredentialProperty.AWS_TOKEN, SESSION_TOKEN); + FileIO fileIO = + new TaskFileIOSupplier( + new MetaStoreManagerFactory() { + @Override + public PolarisMetaStoreManager getOrCreateMetaStoreManager( + RealmContext realmContext) { + return metaStoreManager; + } + + @Override + public Supplier<PolarisMetaStoreSession> getOrCreateSessionSupplier( + RealmContext realmContext) { + return () -> polarisContext.getMetaStore(); + } + + @Override + public StorageCredentialCache getOrCreateStorageCredentialCache( + RealmContext realmContext) { + return new StorageCredentialCache(); + } + + @Override + public void setMetricRegistry(PolarisMetricRegistry metricRegistry) {} + + @Override + public Map<String, PolarisMetaStoreManager.PrincipalSecretsResult> + bootstrapRealms(List<String> realms) { + throw new NotImplementedException("Bootstrapping realms is not supported"); + } + + @Override + public void setStorageIntegrationProvider( + PolarisStorageIntegrationProvider storageIntegrationProvider) {} + }) + .apply(taskEntity); + Assertions.assertThat(fileIO).isNotNull().isInstanceOf(InMemoryFileIO.class); + } + + private TableMetadata createSampleTableMetadata(String tableLocation) { + Schema schema = + new Schema( + Types.NestedField.required(1, "intType", Types.IntegerType.get()), + Types.NestedField.required(2, "stringType", Types.StringType.get())); + PartitionSpec partitionSpec = + PartitionSpec.builderFor(schema).identity("intType").withSpecId(1000).build(); + + return TableMetadata.newTableMetadata( + schema, partitionSpec, tableLocation, ImmutableMap.of()); + } + + private void createNonExistingNamespaces(Namespace namespace) { + // Pre-create namespaces if they don't exist + for (int i = 1; i <= namespace.length(); i++) { + Namespace nsLevel = + Namespace.of( + Arrays.stream(namespace.levels()) + .limit(i) + .collect(Collectors.toList()) + .toArray(String[]::new)); + if (!catalog.namespaceExists(nsLevel)) { + catalog.createNamespace(nsLevel); + } + } + } + + 
@Test + public void testRetriableException() { + RuntimeException s3Exception = new RuntimeException("Access Denied"); + RuntimeException azureBlobStorageException = + new RuntimeException( + "This request is not authorized to perform this operation using this permission"); + RuntimeException gcsException = new RuntimeException("Forbidden"); + RuntimeException otherException = new RuntimeException(new IOException("Connection reset")); + Assertions.assertThat(BasePolarisCatalog.SHOULD_RETRY_REFRESH_PREDICATE.test(s3Exception)) + .isFalse(); + Assertions.assertThat( + BasePolarisCatalog.SHOULD_RETRY_REFRESH_PREDICATE.test(azureBlobStorageException)) + .isFalse(); + Assertions.assertThat(BasePolarisCatalog.SHOULD_RETRY_REFRESH_PREDICATE.test(gcsException)) + .isFalse(); + Assertions.assertThat(BasePolarisCatalog.SHOULD_RETRY_REFRESH_PREDICATE.test(otherException)) + .isTrue(); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/catalog/BasePolarisCatalogViewTest.java b/polaris-service/src/test/java/io/polaris/service/catalog/BasePolarisCatalogViewTest.java new file mode 100644 index 0000000000..4d7ba998e8 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/BasePolarisCatalogViewTest.java @@ -0,0 +1,150 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.catalog; + +import com.google.common.collect.ImmutableMap; +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.PolarisConfigurationStore; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.auth.PolarisAuthorizer; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.PolarisEntity; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.storage.cache.StorageCredentialCache; +import io.polaris.service.admin.PolarisAdminService; +import io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory; +import io.polaris.service.storage.PolarisStorageIntegrationProviderImpl; +import java.time.Clock; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.catalog.Catalog; +import org.apache.iceberg.view.ViewCatalogTests; +import org.jetbrains.annotations.Nullable; +import org.junit.jupiter.api.BeforeEach; +import org.mockito.Mockito; + +public class BasePolarisCatalogViewTest extends ViewCatalogTests<BasePolarisCatalog> { + public static final String CATALOG_NAME = "polaris-catalog"; + private BasePolarisCatalog catalog; + + @BeforeEach + @SuppressWarnings("unchecked") + public void before() { + PolarisDiagnostics diagServices = new PolarisDefaultDiagServiceImpl(); + RealmContext realmContext 
= () -> "realm"; + InMemoryPolarisMetaStoreManagerFactory managerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + managerFactory.setStorageIntegrationProvider( + new PolarisStorageIntegrationProviderImpl(Mockito::mock)); + PolarisMetaStoreManager metaStoreManager = + managerFactory.getOrCreateMetaStoreManager(realmContext); + Map<String, Object> configMap = new HashMap<>(); + configMap.put("ALLOW_WILDCARD_LOCATION", true); + configMap.put("ALLOW_SPECIFYING_FILE_IO_IMPL", true); + PolarisCallContext polarisContext = + new PolarisCallContext( + managerFactory.getOrCreateSessionSupplier(realmContext).get(), + diagServices, + new PolarisConfigurationStore() { + @Override + public <T> @Nullable T getConfiguration(PolarisCallContext ctx, String configName) { + return (T) configMap.get(configName); + } + }, + Clock.systemDefaultZone()); + + PolarisEntityManager entityManager = + new PolarisEntityManager( + metaStoreManager, polarisContext::getMetaStore, new StorageCredentialCache()); + + CallContext callContext = CallContext.of(null, polarisContext); + CallContext.setCurrentContext(callContext); + + PrincipalEntity rootEntity = + new PrincipalEntity( + PolarisEntity.of( + entityManager + .getMetaStoreManager() + .readEntityByName( + polarisContext, + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + "root") + .getEntity())); + AuthenticatedPolarisPrincipal authenticatedRoot = + new AuthenticatedPolarisPrincipal(rootEntity, Set.of()); + + PolarisAdminService adminService = + new PolarisAdminService( + callContext, + entityManager, + authenticatedRoot, + new PolarisAuthorizer(new PolarisConfigurationStore() {})); + PolarisEntity catalogEntity = + adminService.createCatalog( + new CatalogEntity.Builder() + .setName(CATALOG_NAME) + .addProperty(PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "true") + .addProperty(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .setDefaultBaseLocation("file://tmp") + 
.setStorageConfigurationInfo( + new FileStorageConfigInfo( + StorageConfigInfo.StorageTypeEnum.FILE, List.of("file://", "/", "*")), + "file://tmp") + .build()); + + PolarisPassthroughResolutionView passthroughView = + new PolarisPassthroughResolutionView( + callContext, entityManager, authenticatedRoot, CATALOG_NAME); + this.catalog = + new BasePolarisCatalog( + entityManager, callContext, passthroughView, authenticatedRoot, Mockito.mock()); + this.catalog.initialize( + CATALOG_NAME, + ImmutableMap.of( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO")); + } + + @Override + protected BasePolarisCatalog catalog() { + return catalog; + } + + @Override + protected Catalog tableCatalog() { + return catalog; + } + + @Override + protected boolean requiresNamespaceCreate() { + return true; + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/catalog/PolarisCatalogHandlerWrapperAuthzTest.java b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisCatalogHandlerWrapperAuthzTest.java new file mode 100644 index 0000000000..b4a3c370ed --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisCatalogHandlerWrapperAuthzTest.java @@ -0,0 +1,1713 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.catalog; + +import com.google.common.collect.ImmutableMap; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.PrincipalWithCredentialsCredentials; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.CatalogRoleEntity; +import io.polaris.core.entity.PolarisPrivilege; +import io.polaris.core.entity.PrincipalEntity; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.service.admin.PolarisAuthzTestBase; +import io.polaris.service.config.RealmEntityManagerFactory; +import io.polaris.service.context.PolarisCallContextCatalogFactory; +import io.polaris.service.types.NotificationRequest; +import io.polaris.service.types.NotificationType; +import io.polaris.service.types.TableUpdateNotification; +import java.time.Instant; +import java.util.List; +import java.util.Set; +import java.util.UUID; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.PartitionSpec; +import org.apache.iceberg.SortOrder; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.TableMetadataParser; +import org.apache.iceberg.catalog.Catalog; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.io.FileIO; +import org.apache.iceberg.rest.requests.CommitTransactionRequest; +import org.apache.iceberg.rest.requests.CreateNamespaceRequest; +import org.apache.iceberg.rest.requests.CreateTableRequest; +import org.apache.iceberg.rest.requests.CreateViewRequest; +import 
org.apache.iceberg.rest.requests.ImmutableCreateViewRequest; +import org.apache.iceberg.rest.requests.RegisterTableRequest; +import org.apache.iceberg.rest.requests.RenameTableRequest; +import org.apache.iceberg.rest.requests.UpdateNamespacePropertiesRequest; +import org.apache.iceberg.rest.requests.UpdateTableRequest; +import org.apache.iceberg.view.ImmutableSQLViewRepresentation; +import org.apache.iceberg.view.ImmutableViewVersion; +import org.assertj.core.api.Assertions; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + +public class PolarisCatalogHandlerWrapperAuthzTest extends PolarisAuthzTestBase { + private PolarisCatalogHandlerWrapper newWrapper() { + return newWrapper(Set.of()); + } + + private PolarisCatalogHandlerWrapper newWrapper(Set<String> activatedPrincipalRoles) { + return newWrapper( + activatedPrincipalRoles, CATALOG_NAME, new TestPolarisCallContextCatalogFactory()); + } + + private PolarisCatalogHandlerWrapper newWrapper( + Set<String> activatedPrincipalRoles, + String catalogName, + PolarisCallContextCatalogFactory factory) { + final AuthenticatedPolarisPrincipal authenticatedPrincipal = + new AuthenticatedPolarisPrincipal(principalEntity, activatedPrincipalRoles); + return new PolarisCatalogHandlerWrapper( + callContext, + entityManager, + authenticatedPrincipal, + factory, + catalogName, + polarisAuthorizer); + } + + /** + * Tests each "sufficient" privilege individually using CATALOG_ROLE1 by granting at the + * CATALOG_NAME level, revoking after each test, and also ensuring that the request fails after + * revocation. + * + * @param sufficientPrivileges List of privileges that should be sufficient each in isolation for + * {@code action} to succeed. + * @param action The operation being tested; could also be multiple operations that should all + * succeed with the sufficient privilege + * @param cleanupAction If non-null, additional action to run to "undo" a previous success action + * in case the action has side effects. 
Called before revoking the sufficient privilege; + * either the cleanup privileges must be latent, or the cleanup action could be run with + * PRINCIPAL_ROLE2 while running {@code action} with PRINCIPAL_ROLE1. + */ + private void doTestSufficientPrivileges( + List<PolarisPrivilege> sufficientPrivileges, Runnable action, Runnable cleanupAction) { + doTestSufficientPrivilegeSets( + sufficientPrivileges.stream().map(priv -> Set.of(priv)).toList(), + action, + cleanupAction, + PRINCIPAL_NAME); + } + + /** + * @param sufficientPrivileges each set of concurrent privileges expected to be sufficient + * together. + * @param action + * @param cleanupAction + * @param principalName + */ + private void doTestSufficientPrivilegeSets( + List<Set<PolarisPrivilege>> sufficientPrivileges, + Runnable action, + Runnable cleanupAction, + String principalName) { + doTestSufficientPrivilegeSets( + sufficientPrivileges, action, cleanupAction, principalName, CATALOG_NAME); + } + + /** + * @param sufficientPrivileges each set of concurrent privileges expected to be sufficient + * together. + * @param action + * @param cleanupAction + * @param principalName + * @param catalogName + */ + private void doTestSufficientPrivilegeSets( + List<Set<PolarisPrivilege>> sufficientPrivileges, + Runnable action, + Runnable cleanupAction, + String principalName, + String catalogName) { + doTestSufficientPrivilegeSets( + sufficientPrivileges, + action, + cleanupAction, + principalName, + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(catalogName, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(catalogName, CATALOG_ROLE1, privilege)); + } + + private void doTestInsufficientPrivileges( + List<PolarisPrivilege> insufficientPrivileges, Runnable action) { + doTestInsufficientPrivileges(insufficientPrivileges, PRINCIPAL_NAME, action); + } + + /** + * Tests each "insufficient" privilege individually using CATALOG_ROLE1 by granting at the + * CATALOG_NAME level, ensuring the action fails, then revoking after each test case. 
+ */ + private void doTestInsufficientPrivileges( + List<PolarisPrivilege> insufficientPrivileges, String principalName, Runnable action) { + doTestInsufficientPrivileges( + insufficientPrivileges, + principalName, + action, + (privilege) -> + adminService.grantPrivilegeOnCatalogToRole(CATALOG_NAME, CATALOG_ROLE1, privilege), + (privilege) -> + adminService.revokePrivilegeOnCatalogFromRole(CATALOG_NAME, CATALOG_ROLE1, privilege)); + } + + @Test + public void testListNamespacesAllSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_LIST, + PolarisPrivilege.NAMESPACE_READ_PROPERTIES, + PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().listNamespaces(Namespace.of()), + null /* cleanupAction */); + } + + @Test + public void testListNamespacesInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.NAMESPACE_DROP), + () -> newWrapper().listNamespaces(Namespace.of())); + } + + @Test + public void testInsufficientPermissionsPriorToSecretRotation() { + String principalName = "all_the_powers"; + PolarisMetaStoreManager.CreatePrincipalResult newPrincipal = + entityManager + .getMetaStoreManager() + .createPrincipal( + callContext.getPolarisCallContext(), + new PrincipalEntity.Builder() + .setName(principalName) + .setCreateTimestamp(Instant.now().toEpochMilli()) + .setCredentialRotationRequiredState() + .build()); + adminService.assignPrincipalRole(principalName, PRINCIPAL_ROLE1); + adminService.assignPrincipalRole(principalName, PRINCIPAL_ROLE2); + + final AuthenticatedPolarisPrincipal authenticatedPrincipal = + new AuthenticatedPolarisPrincipal( + PrincipalEntity.of(newPrincipal.getPrincipal()), Set.of()); + PolarisCatalogHandlerWrapper wrapper = + new PolarisCatalogHandlerWrapper( + callContext, + 
entityManager, + authenticatedPrincipal, + new TestPolarisCallContextCatalogFactory(), + CATALOG_NAME, + polarisAuthorizer); + + // a variety of actions are all disallowed because the principal's credentials must be rotated + doTestInsufficientPrivileges( + List.of(PolarisPrivilege.values()), + principalName, + () -> wrapper.listNamespaces(Namespace.of())); + Namespace ns3 = Namespace.of("ns3"); + doTestInsufficientPrivileges( + List.of(PolarisPrivilege.values()), + principalName, + () -> wrapper.createNamespace(CreateNamespaceRequest.builder().withNamespace(ns3).build())); + doTestInsufficientPrivileges( + List.of(PolarisPrivilege.values()), principalName, () -> wrapper.listTables(NS1)); + PrincipalWithCredentialsCredentials credentials = + new PrincipalWithCredentialsCredentials( + newPrincipal.getPrincipalSecrets().getPrincipalClientId(), + newPrincipal.getPrincipalSecrets().getMainSecret()); + PrincipalEntity refreshPrincipal = + rotateAndRefreshPrincipal( + entityManager.getMetaStoreManager(), + principalName, + credentials, + callContext.getPolarisCallContext()); + final AuthenticatedPolarisPrincipal authenticatedPrincipal1 = + new AuthenticatedPolarisPrincipal(PrincipalEntity.of(refreshPrincipal), Set.of()); + PolarisCatalogHandlerWrapper refreshedWrapper = + new PolarisCatalogHandlerWrapper( + callContext, + entityManager, + authenticatedPrincipal1, + new TestPolarisCallContextCatalogFactory(), + CATALOG_NAME, + polarisAuthorizer); + + doTestSufficientPrivilegeSets( + List.of(Set.of(PolarisPrivilege.NAMESPACE_LIST)), + () -> refreshedWrapper.listNamespaces(Namespace.of()), + null, + principalName); + doTestSufficientPrivilegeSets( + List.of(Set.of(PolarisPrivilege.NAMESPACE_CREATE)), + () -> + refreshedWrapper.createNamespace( + CreateNamespaceRequest.builder().withNamespace(ns3).build()), + null, + principalName); + doTestSufficientPrivilegeSets( + List.of(Set.of(PolarisPrivilege.TABLE_LIST)), + () -> refreshedWrapper.listTables(ns3), + null, + 
principalName); + } + + @Test + public void testListNamespacesCatalogLevelWithPrincipalRoleActivation() { + // Grant catalog-level privilege to CATALOG_ROLE1 + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE1, PolarisPrivilege.NAMESPACE_LIST)) + .isTrue(); + Assertions.assertThat(newWrapper().listNamespaces(Namespace.of()).namespaces()) + .containsAll(List.of(NS1, NS2)); + + // Just activating PRINCIPAL_ROLE1 should also work. + Assertions.assertThat( + newWrapper(Set.of(PRINCIPAL_ROLE1)).listNamespaces(Namespace.of()).namespaces()) + .containsAll(List.of(NS1, NS2)); + + // If we only activate PRINCIPAL_ROLE2 it won't have the privilege. + Assertions.assertThatThrownBy( + () -> newWrapper(Set.of(PRINCIPAL_ROLE2)).listNamespaces(Namespace.of())) + .isInstanceOf(ForbiddenException.class) + .hasMessageContaining("is not authorized"); + + // If we revoke, then it should fail again even with all principal roles activated. + Assertions.assertThat( + adminService.revokePrivilegeOnCatalogFromRole( + CATALOG_NAME, CATALOG_ROLE1, PolarisPrivilege.NAMESPACE_LIST)) + .isTrue(); + Assertions.assertThatThrownBy(() -> newWrapper().listNamespaces(Namespace.of())) + .isInstanceOf(ForbiddenException.class); + } + + @Test + public void testListNamespacesChildOnly() { + // Grant only NS1-level privilege to CATALOG_ROLE1 + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS1, PolarisPrivilege.NAMESPACE_LIST)) + .isTrue(); + + // Listing directly on NS1 succeeds + Assertions.assertThat(newWrapper().listNamespaces(NS1).namespaces()) + .containsAll(List.of(NS1A, NS1B)); + + // Root listing fails + Assertions.assertThatThrownBy(() -> newWrapper().listNamespaces(Namespace.of())) + .isInstanceOf(ForbiddenException.class); + + // NS2 listing fails + Assertions.assertThatThrownBy(() -> newWrapper().listNamespaces(NS2)) + .isInstanceOf(ForbiddenException.class); + + // Listing on a 
child of NS1 succeeds + Assertions.assertThat(newWrapper().listNamespaces(NS1A).namespaces()) + .containsAll(List.of(NS1AA)); + } + + @Test + public void testCreateNamespaceAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.NAMESPACE_DROP)) + .isTrue(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)) + .createNamespace( + CreateNamespaceRequest.builder().withNamespace(Namespace.of("newns")).build()); + newWrapper(Set.of(PRINCIPAL_ROLE1)) + .createNamespace( + CreateNamespaceRequest.builder() + .withNamespace(Namespace.of("ns1", "ns1a", "newns")) + .build()); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)).dropNamespace(Namespace.of("newns")); + newWrapper(Set.of(PRINCIPAL_ROLE2)).dropNamespace(Namespace.of("ns1", "ns1a", "newns")); + }); + } + + @Test + public void testCreateNamespacesInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.NAMESPACE_DROP, + PolarisPrivilege.NAMESPACE_READ_PROPERTIES, + PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_LIST), + () -> + newWrapper() + .createNamespace( + CreateNamespaceRequest.builder().withNamespace(Namespace.of("newns")).build())); + } + + @Test + public void testLoadNamespaceMetadataSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_READ_PROPERTIES, + PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().loadNamespaceMetadata(NS1A), + null /* cleanupAction */); + } + + @Test + public void 
testLoadNamespaceMetadataInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_LIST, + PolarisPrivilege.NAMESPACE_DROP), + () -> newWrapper().loadNamespaceMetadata(NS1A)); + } + + @Test + public void testNamespaceExistsAllSufficientPrivileges() { + // TODO: If we change the behavior of existence-check to return 404 on unauthorized, + // the overall test structure will need to change (other tests catching ForbiddenException + // need to still have catalog-level "REFERENCE" equivalent privileges, and the exists() + // tests need to expect 404 instead). + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_LIST, + PolarisPrivilege.NAMESPACE_READ_PROPERTIES, + PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().namespaceExists(NS1A), + null /* cleanupAction */); + } + + @Test + public void testNamespaceExistsInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.NAMESPACE_DROP), + () -> newWrapper().namespaceExists(NS1A)); + } + + @Test + public void testDropNamespaceSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.NAMESPACE_CREATE)) + .isTrue(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_DROP, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropNamespace(NS1AA); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)) + .createNamespace(CreateNamespaceRequest.builder().withNamespace(NS1AA).build()); + }); + } + + @Test + public void testDropNamespaceInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_LIST, + PolarisPrivilege.NAMESPACE_READ_PROPERTIES, + PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES), + () -> newWrapper().dropNamespace(NS1AA)); + } + + @Test + public void testUpdateNamespacePropertiesAllSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper() + .updateNamespaceProperties( + NS1A, UpdateNamespacePropertiesRequest.builder().update("foo", "bar").build()); + newWrapper() + .updateNamespaceProperties( + NS1A, UpdateNamespacePropertiesRequest.builder().remove("foo").build()); + }, + null /* cleanupAction */); + } + + @Test + public void testUpdateNamespacePropertiesInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.NAMESPACE_LIST, + PolarisPrivilege.NAMESPACE_READ_PROPERTIES, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_DROP), + () -> + newWrapper() + .updateNamespaceProperties( + NS1A, UpdateNamespacePropertiesRequest.builder().update("foo", "bar").build())); + } + + @Test + public void testListTablesAllSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_LIST, + 
PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().listTables(NS1A), + null /* cleanupAction */); + } + + @Test + public void testListTablesInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP), + () -> newWrapper().listTables(NS1A)); + } + + @Test + public void testCreateTableDirectAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_WRITE_DATA)) + .isTrue(); + + final TableIdentifier newtable = TableIdentifier.of(NS2, "newtable"); + final CreateTableRequest createRequest = + CreateTableRequest.builder().withName("newtable").withSchema(SCHEMA).build(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).createTableDirect(NS2, createRequest); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)).dropTableWithPurge(newtable); + }); + } + + @Test + public void testCreateTableDirectInsufficientPermissions() { + final TableIdentifier newtable = TableIdentifier.of(NS2, "newtable"); + final CreateTableRequest createRequest = + CreateTableRequest.builder().withName("newtable").withSchema(SCHEMA).build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).createTableDirect(NS2, createRequest); + }); + } + + @Test + public void testCreateTableStagedAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + + final CreateTableRequest createStagedRequest = + CreateTableRequest.builder() + .withName("stagetable") + .withSchema(SCHEMA) + .stageCreate() + .build(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).createTableStaged(NS2, createStagedRequest); + }, + // createTableStaged doesn't actually commit any metadata + null); + } + + @Test + public void testCreateTableStagedInsufficientPermissions() { + final CreateTableRequest createStagedRequest = + CreateTableRequest.builder() + .withName("stagetable") + .withSchema(SCHEMA) + .stageCreate() + .build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).createTableStaged(NS2, createStagedRequest); + }); + } + + @Test + public void testCreateTableStagedWithWriteDelegationAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + + final CreateTableRequest createStagedWithWriteDelegationRequest = + CreateTableRequest.builder() + .withName("stagetable") + .withSchema(SCHEMA) + .stageCreate() + .build(); + + doTestSufficientPrivilegeSets( + List.of( + Set.of(PolarisPrivilege.TABLE_CREATE, PolarisPrivilege.TABLE_WRITE_DATA), + Set.of(PolarisPrivilege.CATALOG_MANAGE_CONTENT)), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)) + .createTableStagedWithWriteDelegation( + NS2, createStagedWithWriteDelegationRequest, "vended-credentials"); + }, + // createTableStagedWithWriteDelegation doesn't actually commit any metadata + null, + PRINCIPAL_NAME); + } + + @Test + public void testCreateTableStagedWithWriteDelegationInsufficientPermissions() { + final CreateTableRequest 
createStagedWithWriteDelegationRequest = + CreateTableRequest.builder() + .withName("stagetable") + .withSchema(SCHEMA) + .stageCreate() + .build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_CREATE, // TABLE_CREATE itself is insufficient for delegation + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)) + .createTableStagedWithWriteDelegation( + NS2, createStagedWithWriteDelegationRequest, "vended-credentials"); + }); + } + + @Test + public void testRegisterTableAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_READ_PROPERTIES)) + .isTrue(); + + // To get a handy metadata file we can use one from another table. + // to avoid overlapping directories, drop the original table and recreate it via registerTable + final String metadataLocation = newWrapper().loadTable(TABLE_NS1_1, "all").metadataLocation(); + newWrapper(Set.of(PRINCIPAL_ROLE2)).dropTableWithoutPurge(TABLE_NS1_1); + + final RegisterTableRequest registerRequest = + new RegisterTableRequest() { + @Override + public String name() { + return TABLE_NS1_1.name(); + } + + @Override + public String metadataLocation() { + return metadataLocation; + } + }; + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).registerTable(NS1, registerRequest); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)).dropTableWithoutPurge(TABLE_NS1_1); + }); + } + + @Test + public void testRegisterTableInsufficientPermissions() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_READ_PROPERTIES)) + .isTrue(); + + // To get a handy metadata file we can use one from another table. + final String metadataLocation = newWrapper().loadTable(TABLE_NS1_1, "all").metadataLocation(); + + final TableIdentifier newtable = TableIdentifier.of(NS2, "newtable"); + final RegisterTableRequest registerRequest = + new RegisterTableRequest() { + @Override + public String name() { + return "newtable"; + } + + @Override + public String metadataLocation() { + return metadataLocation; + } + }; + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).registerTable(NS2, registerRequest); + }); + } + + @Test + public void testLoadTableSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().loadTable(TABLE_NS1A_2, "all"), + null /* cleanupAction */); + } + + @Test + public void testLoadTableInsufficientPermissions() { + 
doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_LIST, + PolarisPrivilege.TABLE_DROP), + () -> newWrapper().loadTable(TABLE_NS1A_2, "all")); + } + + @Test + public void testLoadTableWithReadAccessDelegationSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().loadTableWithAccessDelegation(TABLE_NS1A_2, "vended-credentials", "all"), + null /* cleanupAction */); + } + + @Test + public void testLoadTableWithReadAccessDelegationInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_LIST, + PolarisPrivilege.TABLE_DROP), + () -> + newWrapper().loadTableWithAccessDelegation(TABLE_NS1A_2, "vended-credentials", "all")); + } + + @Test + public void testLoadTableWithWriteAccessDelegationSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + // TODO: Once we give different creds for read/write privilege, move this + // TABLE_READ_DATA into a special-case test; with only TABLE_READ_DATA we'd expect + // to receive a read-only credential. 
+ PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().loadTableWithAccessDelegation(TABLE_NS1A_2, "vended-credentials", "all"), + null /* cleanupAction */); + } + + @Test + public void testLoadTableWithWriteAccessDelegationInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_LIST, + PolarisPrivilege.TABLE_DROP), + () -> + newWrapper().loadTableWithAccessDelegation(TABLE_NS1A_2, "vended-credentials", "all")); + } + + @Test + public void testUpdateTableSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().updateTable(TABLE_NS1A_2, new UpdateTableRequest()), + null /* cleanupAction */); + } + + @Test + public void testUpdateTableInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_LIST, + PolarisPrivilege.TABLE_DROP), + () -> newWrapper().updateTable(TABLE_NS1A_2, new UpdateTableRequest())); + } + + @Test + public void testUpdateTableForStagedCreateSufficientPrivileges() { + // Note: This is kind of cheating by only leaning on the PolarisCatalogHandlerWrapper level + // of differentiation between updateForStageCreate vs regular update so that we don't need + // to actually set up the staged create but still test the privileges. 
If the underlying + // behavior diverges, we need to change this test to actually start with a stageCreate. + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().updateTableForStagedCreate(TABLE_NS1A_2, new UpdateTableRequest()), + null /* cleanupAction */); + } + + @Test + public void testUpdateTableForStagedCreateInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> newWrapper().updateTableForStagedCreate(TABLE_NS1A_2, new UpdateTableRequest())); + } + + @Test + public void testDropTableWithoutPurgeAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_CREATE)) + .isTrue(); + + final CreateTableRequest createRequest = + CreateTableRequest.builder().withName(TABLE_NS1_1.name()).withSchema(SCHEMA).build(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropTableWithoutPurge(TABLE_NS1_1); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)) + .createTableDirect(TABLE_NS1_1.namespace(), createRequest); + }); + } + + @Test + public void testDropTableWithoutPurgeInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropTableWithoutPurge(TABLE_NS1_1); + }); + } + + @Test + public void testDropTableWithPurgeAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.TABLE_CREATE)) + .isTrue(); + + final CreateTableRequest createRequest = + CreateTableRequest.builder().withName(TABLE_NS1_1.name()).withSchema(SCHEMA).build(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivilegeSets( + List.of( + Set.of(PolarisPrivilege.TABLE_WRITE_DATA, PolarisPrivilege.TABLE_FULL_METADATA), + Set.of(PolarisPrivilege.TABLE_WRITE_DATA, PolarisPrivilege.TABLE_DROP), + Set.of(PolarisPrivilege.CATALOG_MANAGE_CONTENT)), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropTableWithPurge(TABLE_NS1_1); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)) + .createTableDirect(TABLE_NS1_1.namespace(), createRequest); + }, + PRINCIPAL_NAME); + } + + @Test + public void testDropTableWithPurgeInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, // TABLE_DROP itself is insufficient for purge + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropTableWithPurge(TABLE_NS1_1); + }); + } + + @Test + public void testTableExistsSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_LIST, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().tableExists(TABLE_NS1A_2), + null /* cleanupAction */); + } + + @Test + public void testTableExistsInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP), + () -> newWrapper().tableExists(TABLE_NS1A_2)); + } + + @Test + public void testRenameTableAllSufficientPrivileges() { + final TableIdentifier srcTable = TABLE_NS1_1; + 
final TableIdentifier dstTable = TableIdentifier.of(NS1AA, "newtable"); + final RenameTableRequest rename1 = + RenameTableRequest.builder().withSource(srcTable).withDestination(dstTable).build(); + final RenameTableRequest rename2 = + RenameTableRequest.builder().withSource(dstTable).withDestination(srcTable).build(); + + doTestSufficientPrivilegeSets( + List.of( + Set.of(PolarisPrivilege.TABLE_FULL_METADATA), + Set.of(PolarisPrivilege.TABLE_CREATE, PolarisPrivilege.TABLE_DROP), + Set.of(PolarisPrivilege.CATALOG_MANAGE_CONTENT)), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).renameTable(rename1); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).renameTable(rename2); + }, + PRINCIPAL_NAME); + } + + @Test + public void testRenameTableInsufficientPermissions() { + final TableIdentifier srcTable = TABLE_NS1_1; + final TableIdentifier dstTable = TableIdentifier.of(NS1AA, "newtable"); + final RenameTableRequest rename1 = + RenameTableRequest.builder().withSource(srcTable).withDestination(dstTable).build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).renameTable(rename1); + }); + } + + @Test + public void testRenameTablePrivilegesOnWrongSourceOrDestination() { + final TableIdentifier srcTable = TABLE_NS2_1; + final TableIdentifier dstTable = TableIdentifier.of(NS1AA, "newtable"); + final RenameTableRequest rename1 = + RenameTableRequest.builder().withSource(srcTable).withDestination(dstTable).build(); + final RenameTableRequest rename2 = + RenameTableRequest.builder().withSource(dstTable).withDestination(srcTable).build(); + + // Minimum privileges should succeed -- drop on src, create 
on dst parent. + Assertions.assertThat( + adminService.grantPrivilegeOnTableToRole( + CATALOG_NAME, CATALOG_ROLE1, srcTable, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, dstTable.namespace(), PolarisPrivilege.TABLE_CREATE)) + .isTrue(); + + // Initial rename should succeed + newWrapper().renameTable(rename1); + + // Inverse operation should fail + Assertions.assertThatThrownBy(() -> newWrapper().renameTable(rename2)) + .isInstanceOf(ForbiddenException.class); + + // Now grant TABLE_DROP on dst + Assertions.assertThat( + adminService.grantPrivilegeOnTableToRole( + CATALOG_NAME, CATALOG_ROLE1, dstTable, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + + // Still not enough without TABLE_CREATE at source + Assertions.assertThatThrownBy(() -> newWrapper().renameTable(rename2)) + .isInstanceOf(ForbiddenException.class); + + // Even grant CATALOG_MANAGE_CONTENT under all of NS1 + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS1, PolarisPrivilege.CATALOG_MANAGE_CONTENT)) + .isTrue(); + + // Still not enough to rename back to src since src was NS2. + Assertions.assertThatThrownBy(() -> newWrapper().renameTable(rename2)) + .isInstanceOf(ForbiddenException.class); + + // Finally, grant TABLE_CREATE on NS2 and it should succeed to rename back to src. 
+ Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS2, PolarisPrivilege.TABLE_CREATE)) + .isTrue(); + newWrapper().renameTable(rename2); + } + + @Test + public void testCommitTransactionSufficientPrivileges() { + CommitTransactionRequest req = + new CommitTransactionRequest( + List.of( + UpdateTableRequest.create(TABLE_NS1_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS1A_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS1B_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS2_1, List.of(), List.of()))); + + doTestSufficientPrivilegeSets( + List.of( + Set.of(PolarisPrivilege.CATALOG_MANAGE_CONTENT), + Set.of(PolarisPrivilege.TABLE_FULL_METADATA), + Set.of(PolarisPrivilege.TABLE_CREATE, PolarisPrivilege.TABLE_WRITE_DATA), + Set.of(PolarisPrivilege.TABLE_CREATE, PolarisPrivilege.TABLE_WRITE_PROPERTIES)), + () -> newWrapper().commitTransaction(req), + null /* cleanupAction */, + PRINCIPAL_NAME); + } + + @Test + public void testCommitTransactionInsufficientPermissions() { + CommitTransactionRequest req = + new CommitTransactionRequest( + List.of( + UpdateTableRequest.create(TABLE_NS1_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS1A_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS1B_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS2_1, List.of(), List.of()))); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.TABLE_READ_PROPERTIES, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.TABLE_READ_DATA, + PolarisPrivilege.TABLE_WRITE_DATA, + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_LIST, + PolarisPrivilege.TABLE_DROP), + () -> newWrapper().commitTransaction(req)); + } + + @Test + public void testCommitTransactionMixedPermissions() { + CommitTransactionRequest req = + new CommitTransactionRequest( + List.of( + 
UpdateTableRequest.create(TABLE_NS1_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS1A_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS1B_1, List.of(), List.of()), + UpdateTableRequest.create(TABLE_NS2_1, List.of(), List.of()))); + + // Grant TABLE_CREATE for all of NS1 + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS1, PolarisPrivilege.TABLE_CREATE)) + .isTrue(); + Assertions.assertThatThrownBy(() -> newWrapper().commitTransaction(req)) + .isInstanceOf(ForbiddenException.class); + + // Grant TABLE_FULL_METADATA directly on TABLE_NS1_1 + Assertions.assertThat( + adminService.grantPrivilegeOnTableToRole( + CATALOG_NAME, CATALOG_ROLE1, TABLE_NS1_1, PolarisPrivilege.TABLE_FULL_METADATA)) + .isTrue(); + Assertions.assertThatThrownBy(() -> newWrapper().commitTransaction(req)) + .isInstanceOf(ForbiddenException.class); + + // Grant TABLE_WRITE_PROPERTIES on NS1A namespace + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS1A, PolarisPrivilege.TABLE_WRITE_PROPERTIES)) + .isTrue(); + Assertions.assertThatThrownBy(() -> newWrapper().commitTransaction(req)) + .isInstanceOf(ForbiddenException.class); + + // Grant TABLE_WRITE_DATA directly on TABLE_NS1B_1 + Assertions.assertThat( + adminService.grantPrivilegeOnTableToRole( + CATALOG_NAME, CATALOG_ROLE1, TABLE_NS1B_1, PolarisPrivilege.TABLE_WRITE_DATA)) + .isTrue(); + Assertions.assertThatThrownBy(() -> newWrapper().commitTransaction(req)) + .isInstanceOf(ForbiddenException.class); + + // Grant TABLE_WRITE_PROPERTIES directly on TABLE_NS2_1 + Assertions.assertThat( + adminService.grantPrivilegeOnTableToRole( + CATALOG_NAME, CATALOG_ROLE1, TABLE_NS2_1, PolarisPrivilege.TABLE_WRITE_PROPERTIES)) + .isTrue(); + Assertions.assertThatThrownBy(() -> newWrapper().commitTransaction(req)) + .isInstanceOf(ForbiddenException.class); + + // Also grant TABLE_CREATE directly on TABLE_NS2_1 + // TODO: If 
we end up having fine-grained differentiation between updateForStagedCreate + // and update, then this one should only be TABLE_CREATE on the *parent* of this last table + // and the table shouldn't exist. + Assertions.assertThat( + adminService.grantPrivilegeOnTableToRole( + CATALOG_NAME, CATALOG_ROLE1, TABLE_NS2_1, PolarisPrivilege.TABLE_CREATE)) + .isTrue(); + newWrapper().commitTransaction(req); + } + + @Test + public void testListViewsAllSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.VIEW_LIST, + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().listViews(NS1A), + null /* cleanupAction */); + } + + @Test + public void testListViewsInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.VIEW_DROP), + () -> newWrapper().listViews(NS1A)); + } + + @Test + public void testCreateViewAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.VIEW_DROP)) + .isTrue(); + + final TableIdentifier newview = TableIdentifier.of(NS2, "newview"); + final CreateViewRequest createRequest = + ImmutableCreateViewRequest.builder() + .name("newview") + .schema(SCHEMA) + .viewVersion( + ImmutableViewVersion.builder() + .versionId(1) + .timestampMillis(System.currentTimeMillis()) + .schemaId(1) + .defaultNamespace(NS1) + .addRepresentations( + ImmutableSQLViewRepresentation.builder() + .sql(VIEW_QUERY) + .dialect("spark") + .build()) + .build()) + .build(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).createView(NS2, createRequest); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)).dropView(newview); + }); + } + + @Test + public void testCreateViewInsufficientPermissions() { + final TableIdentifier newview = TableIdentifier.of(NS2, "newview"); + + final CreateViewRequest createRequest = + ImmutableCreateViewRequest.builder() + .name("newview") + .schema(SCHEMA) + .viewVersion( + ImmutableViewVersion.builder() + .versionId(1) + .timestampMillis(System.currentTimeMillis()) + .schemaId(1) + .defaultNamespace(NS1) + .addRepresentations( + ImmutableSQLViewRepresentation.builder() + .sql(VIEW_QUERY) + .dialect("spark") + .build()) + .build()) + .build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_DROP, + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).createView(NS2, createRequest); + }); + } + + @Test + public void testLoadViewSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().loadView(VIEW_NS1A_2), + null /* cleanupAction */); + } + + @Test + public void testLoadViewInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_LIST, + PolarisPrivilege.VIEW_DROP), + () -> newWrapper().loadView(VIEW_NS1A_2)); + } + + @Test + public void testUpdateViewSufficientPrivileges() { + 
doTestSufficientPrivileges( + List.of( + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().replaceView(VIEW_NS1A_2, new UpdateTableRequest()), + null /* cleanupAction */); + } + + @Test + public void testUpdateViewInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_LIST, + PolarisPrivilege.VIEW_DROP), + () -> newWrapper().replaceView(VIEW_NS1A_2, new UpdateTableRequest())); + } + + @Test + public void testDropViewAllSufficientPrivileges() { + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + CATALOG_NAME, CATALOG_ROLE2, PolarisPrivilege.VIEW_CREATE)) + .isTrue(); + + final CreateViewRequest createRequest = + ImmutableCreateViewRequest.builder() + .name(VIEW_NS1_1.name()) + .schema(SCHEMA) + .viewVersion( + ImmutableViewVersion.builder() + .versionId(1) + .timestampMillis(System.currentTimeMillis()) + .schemaId(1) + .defaultNamespace(NS1) + .addRepresentations( + ImmutableSQLViewRepresentation.builder() + .sql(VIEW_QUERY) + .dialect("spark") + .build()) + .build()) + .build(); + + // Use PRINCIPAL_ROLE1 for privilege-testing, PRINCIPAL_ROLE2 for cleanup. 
+ doTestSufficientPrivileges( + List.of( + PolarisPrivilege.VIEW_DROP, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropView(VIEW_NS1_1); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2)).createView(VIEW_NS1_1.namespace(), createRequest); + }); + } + + @Test + public void testDropViewInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).dropView(VIEW_NS1_1); + }); + } + + @Test + public void testViewExistsSufficientPrivileges() { + doTestSufficientPrivileges( + List.of( + PolarisPrivilege.VIEW_LIST, + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_FULL_METADATA, + PolarisPrivilege.CATALOG_MANAGE_CONTENT), + () -> newWrapper().viewExists(VIEW_NS1A_2), + null /* cleanupAction */); + } + + @Test + public void testViewExistsInsufficientPermissions() { + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_DROP), + () -> newWrapper().viewExists(VIEW_NS1A_2)); + } + + @Test + public void testRenameViewAllSufficientPrivileges() { + final TableIdentifier srcView = VIEW_NS1_1; + final TableIdentifier dstView = TableIdentifier.of(NS1AA, "newview"); + final RenameTableRequest rename1 = + RenameTableRequest.builder().withSource(srcView).withDestination(dstView).build(); + final RenameTableRequest rename2 = + RenameTableRequest.builder().withSource(dstView).withDestination(srcView).build(); + + doTestSufficientPrivilegeSets( + List.of( + Set.of(PolarisPrivilege.VIEW_FULL_METADATA), + 
Set.of(PolarisPrivilege.CATALOG_MANAGE_CONTENT), + Set.of(PolarisPrivilege.VIEW_DROP, PolarisPrivilege.VIEW_CREATE)), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).renameView(rename1); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).renameView(rename2); + }, + PRINCIPAL_NAME); + } + + @Test + public void testRenameViewInsufficientPermissions() { + final TableIdentifier srcView = VIEW_NS1_1; + final TableIdentifier dstView = TableIdentifier.of(NS1AA, "newview"); + final RenameTableRequest rename1 = + RenameTableRequest.builder().withSource(srcView).withDestination(dstView).build(); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_DROP, + PolarisPrivilege.VIEW_CREATE, + PolarisPrivilege.VIEW_READ_PROPERTIES, + PolarisPrivilege.VIEW_WRITE_PROPERTIES, + PolarisPrivilege.VIEW_LIST), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).renameView(rename1); + }); + } + + @Test + public void testRenameViewPrivilegesOnWrongSourceOrDestination() { + final TableIdentifier srcView = VIEW_NS2_1; + final TableIdentifier dstView = TableIdentifier.of(NS1AA, "newview"); + final RenameTableRequest rename1 = + RenameTableRequest.builder().withSource(srcView).withDestination(dstView).build(); + final RenameTableRequest rename2 = + RenameTableRequest.builder().withSource(dstView).withDestination(srcView).build(); + + // Minimum privileges should succeed -- drop on src, create on dst parent. 
+ Assertions.assertThat( + adminService.grantPrivilegeOnViewToRole( + CATALOG_NAME, CATALOG_ROLE1, srcView, PolarisPrivilege.VIEW_DROP)) + .isTrue(); + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, dstView.namespace(), PolarisPrivilege.VIEW_CREATE)) + .isTrue(); + + // Initial rename should succeed + newWrapper().renameView(rename1); + + // Inverse operation should fail + Assertions.assertThatThrownBy(() -> newWrapper().renameView(rename2)) + .isInstanceOf(ForbiddenException.class); + + // Now grant VIEW_DROP on dst + Assertions.assertThat( + adminService.grantPrivilegeOnViewToRole( + CATALOG_NAME, CATALOG_ROLE1, dstView, PolarisPrivilege.VIEW_DROP)) + .isTrue(); + + // Still not enough without VIEW_CREATE at source + Assertions.assertThatThrownBy(() -> newWrapper().renameView(rename2)) + .isInstanceOf(ForbiddenException.class); + + // Even granting CATALOG_MANAGE_CONTENT under all of NS1 is not enough + Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS1, PolarisPrivilege.CATALOG_MANAGE_CONTENT)) + .isTrue(); + + // Still not enough to rename back to src since src was NS2. + Assertions.assertThatThrownBy(() -> newWrapper().renameView(rename2)) + .isInstanceOf(ForbiddenException.class); + + // Finally, grant VIEW_CREATE on NS2; the rename back to src should now succeed.
+ Assertions.assertThat( + adminService.grantPrivilegeOnNamespaceToRole( + CATALOG_NAME, CATALOG_ROLE1, NS2, PolarisPrivilege.VIEW_CREATE)) + .isTrue(); + newWrapper().renameView(rename2); + } + + @Test + public void testSendNotificationSufficientPrivileges() { + String externalCatalog = "externalCatalog"; + String storageLocation = + "file:///tmp/send_notification_sufficient_privileges_" + System.currentTimeMillis(); + + FileStorageConfigInfo storageConfigModel = + FileStorageConfigInfo.builder() + .setStorageType(StorageConfigInfo.StorageTypeEnum.FILE) + .build(); + adminService.createCatalog( + new CatalogEntity.Builder() + .setName(externalCatalog) + .setDefaultBaseLocation(storageLocation) + .setStorageConfigurationInfo(storageConfigModel, storageLocation) + .setCatalogType("EXTERNAL") + .build()); + adminService.createCatalogRole( + externalCatalog, new CatalogRoleEntity.Builder().setName(CATALOG_ROLE1).build()); + adminService.createCatalogRole( + externalCatalog, new CatalogRoleEntity.Builder().setName(CATALOG_ROLE2).build()); + + adminService.assignPrincipalRole(PRINCIPAL_NAME, PRINCIPAL_ROLE1); + adminService.assignCatalogRoleToPrincipalRole(PRINCIPAL_ROLE1, externalCatalog, CATALOG_ROLE1); + adminService.assignCatalogRoleToPrincipalRole(PRINCIPAL_ROLE2, externalCatalog, CATALOG_ROLE2); + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + externalCatalog, CATALOG_ROLE2, PolarisPrivilege.TABLE_DROP)) + .isTrue(); + Assertions.assertThat( + adminService.grantPrivilegeOnCatalogToRole( + externalCatalog, CATALOG_ROLE2, PolarisPrivilege.NAMESPACE_DROP)) + .isTrue(); + + Namespace namespace = Namespace.of("extns1", "extns2"); + TableIdentifier table = TableIdentifier.of(namespace, "tbl1"); + + String tableUuid = UUID.randomUUID().toString(); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.CREATE); + TableUpdateNotification update = new TableUpdateNotification(); + 
update.setMetadataLocation( + String.format("%s/bucket/table/metadata/v1.metadata.json", storageLocation)); + update.setTableName(table.name()); + update.setTableUuid(tableUuid); + update.setTimestamp(230950845L); + request.setPayload(update); + + NotificationRequest request2 = new NotificationRequest(); + request2.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update2 = new TableUpdateNotification(); + update2.setMetadataLocation( + String.format("%s/bucket/table/metadata/v2.metadata.json", storageLocation)); + update2.setTableName(table.name()); + update2.setTableUuid(tableUuid); + update2.setTimestamp(330950845L); + request2.setPayload(update2); + + NotificationRequest request3 = new NotificationRequest(); + request3.setNotificationType(NotificationType.DROP); + TableUpdateNotification update3 = new TableUpdateNotification(); + update3.setTableName(table.name()); + update3.setTableUuid(tableUuid); + update3.setTimestamp(430950845L); + request3.setPayload(update3); + + PolarisCallContextCatalogFactory factory = + new PolarisCallContextCatalogFactory( + new RealmEntityManagerFactory() { + @Override + public PolarisEntityManager getOrCreateEntityManager(RealmContext realmContext) { + return entityManager; + } + }, + Mockito.mock()) { + @Override + public Catalog createCallContextCatalog( + CallContext context, + AuthenticatedPolarisPrincipal authenticatedPolarisPrincipal, + PolarisResolutionManifest resolvedManifest) { + Catalog catalog = + super.createCallContextCatalog( + context, authenticatedPolarisPrincipal, resolvedManifest); + catalog.initialize( + externalCatalog, + ImmutableMap.of( + CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.inmemory.InMemoryFileIO")); + + FileIO fileIO = ((BasePolarisCatalog) catalog).newTableOps(table).io(); + TableMetadata tableMetadata = + TableMetadata.buildFromEmpty() + .addSchema(SCHEMA, SCHEMA.highestFieldId()) + .setLocation( + String.format("%s/bucket/table/metadata/v1.metadata.json", 
storageLocation)) + .addPartitionSpec(PartitionSpec.unpartitioned()) + .addSortOrder(SortOrder.unsorted()) + .assignUUID() + .build(); + TableMetadataParser.overwrite( + tableMetadata, fileIO.newOutputFile(update.getMetadataLocation())); + TableMetadataParser.overwrite( + tableMetadata, fileIO.newOutputFile(update2.getMetadataLocation())); + return catalog; + } + }; + doTestSufficientPrivilegeSets( + List.of( + Set.of(PolarisPrivilege.CATALOG_MANAGE_CONTENT), + Set.of(PolarisPrivilege.TABLE_FULL_METADATA, PolarisPrivilege.NAMESPACE_FULL_METADATA), + Set.of( + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_DROP), + Set.of( + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_FULL_METADATA), + Set.of( + PolarisPrivilege.TABLE_CREATE, + PolarisPrivilege.TABLE_DROP, + PolarisPrivilege.TABLE_WRITE_PROPERTIES, + PolarisPrivilege.NAMESPACE_CREATE, + PolarisPrivilege.NAMESPACE_DROP)), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1), externalCatalog, factory) + .sendNotification(table, request); + newWrapper(Set.of(PRINCIPAL_ROLE1), externalCatalog, factory) + .sendNotification(table, request2); + newWrapper(Set.of(PRINCIPAL_ROLE1), externalCatalog, factory) + .sendNotification(table, request3); + }, + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE2), externalCatalog, factory) + .dropNamespace(Namespace.of("extns1", "extns2")); + newWrapper(Set.of(PRINCIPAL_ROLE2), externalCatalog, factory) + .dropNamespace(Namespace.of("extns1")); + }, + PRINCIPAL_NAME, + externalCatalog); + } + + @Test + public void testSendNotificationInsufficientPermissions() { + Namespace namespace = Namespace.of("ns1", "ns2"); + TableIdentifier table = TableIdentifier.of(namespace, "tbl1"); + + NotificationRequest request = new NotificationRequest(); + request.setNotificationType(NotificationType.UPDATE); + TableUpdateNotification update = new 
TableUpdateNotification(); + update.setMetadataLocation("file:///tmp/bucket/table/metadata/v1.metadata.json"); + update.setTableName(table.name()); + update.setTableUuid(UUID.randomUUID().toString()); + update.setTimestamp(230950845L); + request.setPayload(update); + + doTestInsufficientPrivileges( + List.of( + PolarisPrivilege.NAMESPACE_FULL_METADATA, + PolarisPrivilege.TABLE_FULL_METADATA, + PolarisPrivilege.VIEW_FULL_METADATA), + () -> { + newWrapper(Set.of(PRINCIPAL_ROLE1)).sendNotification(table, request); + }); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/catalog/PolarisPassthroughResolutionView.java b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisPassthroughResolutionView.java new file mode 100644 index 0000000000..15b77f1614 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisPassthroughResolutionView.java @@ -0,0 +1,144 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.catalog; + +import io.polaris.core.auth.AuthenticatedPolarisPrincipal; +import io.polaris.core.catalog.PolarisCatalogHelpers; +import io.polaris.core.context.CallContext; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.persistence.PolarisEntityManager; +import io.polaris.core.persistence.PolarisResolvedPathWrapper; +import io.polaris.core.persistence.resolver.PolarisResolutionManifest; +import io.polaris.core.persistence.resolver.PolarisResolutionManifestCatalogView; +import io.polaris.core.persistence.resolver.ResolverPath; +import java.util.Arrays; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; + +/** + * A PolarisResolutionManifestCatalogView for tests and for elevated-privilege scenarios in which + * entity resolution is allowed to access a PolarisEntityManager/PolarisMetaStoreManager directly, + * without being part of an authorization-gated PolarisResolutionManifest. Each resolution request + * is delegated to a fresh single-use PolarisResolutionManifest, so no fixed set of resolved + * entities needs to be declared up front for checking against authorizable operations.
+ */ +public class PolarisPassthroughResolutionView implements PolarisResolutionManifestCatalogView { + private final PolarisEntityManager entityManager; + private final CallContext callContext; + private final AuthenticatedPolarisPrincipal authenticatedPrincipal; + private final String catalogName; + + public PolarisPassthroughResolutionView( + CallContext callContext, + PolarisEntityManager entityManager, + AuthenticatedPolarisPrincipal authenticatedPrincipal, + String catalogName) { + this.entityManager = entityManager; + this.callContext = callContext; + this.authenticatedPrincipal = authenticatedPrincipal; + this.catalogName = catalogName; + } + + @Override + public PolarisResolvedPathWrapper getResolvedReferenceCatalogEntity() { + PolarisResolutionManifest manifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + manifest.resolveAll(); + return manifest.getResolvedReferenceCatalogEntity(); + } + + @Override + public PolarisResolvedPathWrapper getResolvedPath(Object key) { + PolarisResolutionManifest manifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + + if (key instanceof Namespace) { + Namespace namespace = (Namespace) key; + manifest.addPath( + new ResolverPath(Arrays.asList(namespace.levels()), PolarisEntityType.NAMESPACE), + namespace); + manifest.resolveAll(); + return manifest.getResolvedPath(namespace); + } else { + throw new IllegalStateException( + String.format( + "Trying to getResolvedPath(key) for %s with class %s", key, key.getClass())); + } + } + + @Override + public PolarisResolvedPathWrapper getResolvedPath(Object key, PolarisEntitySubType subType) { + PolarisResolutionManifest manifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + + if (key instanceof TableIdentifier) { + TableIdentifier identifier = (TableIdentifier) key; + manifest.addPath( + new ResolverPath( + 
PolarisCatalogHelpers.tableIdentifierToList(identifier), + PolarisEntityType.TABLE_LIKE), + identifier); + manifest.resolveAll(); + return manifest.getResolvedPath(identifier, subType); + } else { + throw new IllegalStateException( + String.format( + "Trying to getResolvedPath(key, subType) for %s with class %s and subType %s", + key, key.getClass(), subType)); + } + } + + @Override + public PolarisResolvedPathWrapper getPassthroughResolvedPath(Object key) { + PolarisResolutionManifest manifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + + if (key instanceof Namespace) { + Namespace namespace = (Namespace) key; + manifest.addPassthroughPath( + new ResolverPath(Arrays.asList(namespace.levels()), PolarisEntityType.NAMESPACE), + namespace); + return manifest.getPassthroughResolvedPath(namespace); + } else { + throw new IllegalStateException( + String.format( + "Trying to getPassthroughResolvedPath(key) for %s with class %s", key, key.getClass())); + } + } + + @Override + public PolarisResolvedPathWrapper getPassthroughResolvedPath( + Object key, PolarisEntitySubType subType) { + PolarisResolutionManifest manifest = + entityManager.prepareResolutionManifest(callContext, authenticatedPrincipal, catalogName); + + if (key instanceof TableIdentifier) { + TableIdentifier identifier = (TableIdentifier) key; + manifest.addPassthroughPath( + new ResolverPath( + PolarisCatalogHelpers.tableIdentifierToList(identifier), + PolarisEntityType.TABLE_LIKE), + identifier); + return manifest.getPassthroughResolvedPath(identifier, subType); + } else { + throw new IllegalStateException( + String.format( + "Trying to getPassthroughResolvedPath(key, subType) for %s with class %s and subType %s", + key, key.getClass(), subType)); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/catalog/PolarisRestCatalogIntegrationTest.java b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisRestCatalogIntegrationTest.java new file mode
100644 index 0000000000..a60d9f404b --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisRestCatalogIntegrationTest.java @@ -0,0 +1,782 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.catalog; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.apache.iceberg.types.Types.NestedField.required; +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatThrownBy; + +import com.google.common.collect.ImmutableMap; +import io.dropwizard.testing.ConfigOverride; +import io.dropwizard.testing.ResourceHelpers; +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.dropwizard.testing.junit5.DropwizardExtensionsSupport; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogGrant; +import io.polaris.core.admin.model.CatalogPrivilege; +import io.polaris.core.admin.model.CatalogRole; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.GrantResource; +import io.polaris.core.admin.model.GrantResources; +import io.polaris.core.admin.model.NamespaceGrant; +import io.polaris.core.admin.model.NamespacePrivilege; +import 
io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.admin.model.TableGrant; +import io.polaris.core.admin.model.TablePrivilege; +import io.polaris.core.admin.model.UpdateCatalogRequest; +import io.polaris.core.admin.model.ViewGrant; +import io.polaris.core.admin.model.ViewPrivilege; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.service.PolarisApplication; +import io.polaris.service.auth.BasePolarisAuthenticator; +import io.polaris.service.auth.TokenUtils; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.test.PolarisConnectionExtension; +import io.polaris.service.test.PolarisConnectionExtension.PolarisToken; +import io.polaris.service.test.SnowmanCredentialsExtension; +import io.polaris.service.test.SnowmanCredentialsExtension.SnowmanCredentials; +import io.polaris.service.types.NotificationRequest; +import io.polaris.service.types.NotificationType; +import io.polaris.service.types.TableUpdateNotification; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.core.Response; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.Comparator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.UUID; +import org.apache.iceberg.BaseTable; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.Schema; +import org.apache.iceberg.Table; +import org.apache.iceberg.catalog.CatalogTests; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.SessionCatalog; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.exceptions.BadRequestException; +import org.apache.iceberg.exceptions.ForbiddenException; +import org.apache.iceberg.rest.HTTPClient; +import org.apache.iceberg.rest.RESTCatalog; +import 
org.apache.iceberg.rest.auth.OAuth2Properties; +import org.apache.iceberg.rest.responses.ErrorResponse; +import org.apache.iceberg.types.Types; +import org.assertj.core.api.Assertions; +import org.assertj.core.api.InstanceOfAssertFactories; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInfo; +import org.junit.jupiter.api.extension.ExtendWith; + +/** + * Import the full core Iceberg catalog tests by hitting the REST service via the RESTCatalog + * client. + */ +@ExtendWith({ + DropwizardExtensionsSupport.class, + PolarisConnectionExtension.class, + SnowmanCredentialsExtension.class +}) +public class PolarisRestCatalogIntegrationTest extends CatalogTests<RESTCatalog> { + private static final String TEST_ROLE_ARN = + Optional.ofNullable(System.getenv("INTEGRATION_TEST_ROLE_ARN")) + .orElse("arn:aws:iam::123456789012:role/my-role"); + private static final String S3_BUCKET_BASE = + Optional.ofNullable(System.getenv("INTEGRATION_TEST_S3_PATH")) + .orElse("file:///tmp/buckets/my-bucket"); + private static DropwizardAppExtension<PolarisApplicationConfig> EXT = + new DropwizardAppExtension<>( + PolarisApplication.class, + ResourceHelpers.resourceFilePath("polaris-server-integrationtest.yml"), + ConfigOverride.config( + "server.applicationConnectors[0].port", + "0"), // Bind to random port to support parallelism + ConfigOverride.config( + "server.adminConnectors[0].port", "0")); // Bind to random port to support parallelism + + protected static final Schema SCHEMA = new Schema(required(4, "data", Types.StringType.get())); + protected static final String VIEW_QUERY = "select * from ns1.layer1_table"; + + private RESTCatalog restCatalog; + private String currentCatalogName; + private String userToken; + private static String realm; + + private final String catalogBaseLocation = + S3_BUCKET_BASE + "/" + System.getenv("USER") + "/path/to/data"; + + @BeforeAll + public static void setup() throws IOException { +
realm = PolarisConnectionExtension.getTestRealm(PolarisRestCatalogIntegrationTest.class); + + Path testDir = Path.of("build/test_data/iceberg/" + realm); + if (Files.exists(testDir)) { + if (Files.isDirectory(testDir)) { + Files.walk(testDir) + .sorted(Comparator.reverseOrder()) + .forEach( + path -> { + try { + Files.delete(path); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + + } else { + Files.delete(testDir); + } + } + Files.createDirectories(testDir); + } + + @BeforeEach + public void before( + TestInfo testInfo, PolarisToken adminToken, SnowmanCredentials snowmanCredentials) { + userToken = + TokenUtils.getTokenFromSecrets( + EXT.client(), + EXT.getLocalPort(), + snowmanCredentials.clientId(), + snowmanCredentials.clientSecret(), + realm); + testInfo + .getTestMethod() + .ifPresent( + method -> { + currentCatalogName = method.getName(); + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn(TEST_ROLE_ARN) + .setExternalId("externalId") + .setUserArn("a:user:arn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + io.polaris.core.admin.model.CatalogProperties.Builder catalogPropsBuilder = + io.polaris.core.admin.model.CatalogProperties.builder(catalogBaseLocation) + .addProperty( + PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .addProperty( + PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "true"); + if (!S3_BUCKET_BASE.startsWith("file:/")) { + catalogPropsBuilder.addProperty( + CatalogEntity.REPLACE_NEW_LOCATION_PREFIX_WITH_CATALOG_DEFAULT_KEY, "file:"); + } + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(currentCatalogName) + .setProperties(catalogPropsBuilder.build()) + .setStorageConfigInfo( + S3_BUCKET_BASE.startsWith("file:/") + ? 
new FileStorageConfigInfo( + StorageConfigInfo.StorageTypeEnum.FILE, List.of("file://")) + : awsConfigModel) + .build(); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs", EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(catalog))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Create a new CatalogRole that has CATALOG_MANAGE_CONTENT and CATALOG_MANAGE_ACCESS + CatalogRole newRole = new CatalogRole("custom-admin"); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(newRole))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + CatalogGrant grantResource = + new CatalogGrant( + CatalogPrivilege.CATALOG_MANAGE_CONTENT, GrantResource.TypeEnum.CATALOG); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/custom-admin/grants", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(grantResource))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + CatalogGrant grantAccessResource = + new CatalogGrant( + CatalogPrivilege.CATALOG_MANAGE_ACCESS, GrantResource.TypeEnum.CATALOG); + try (Response response = + EXT.client() + .target( + String.format( + 
"http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/custom-admin/grants", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(grantAccessResource))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + // Assign this new CatalogRole to the service_admin PrincipalRole + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/custom-admin", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response) + .returns(Response.Status.OK.getStatusCode(), Response::getStatus); + CatalogRole catalogRole = response.readEntity(CatalogRole.class); + try (Response assignResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles/catalog-admin/catalog-roles/%s", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(catalogRole))) { + assertThat(assignResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + } + + SessionCatalog.SessionContext context = SessionCatalog.SessionContext.createEmpty(); + this.restCatalog = + new RESTCatalog( + context, + (config) -> + HTTPClient.builder(config) + .uri(config.get(CatalogProperties.URI)) + .build()); + this.restCatalog.initialize( + "polaris", + ImmutableMap.of( + CatalogProperties.URI, + "http://localhost:" + EXT.getLocalPort() + "/api/catalog", + OAuth2Properties.CREDENTIAL, + snowmanCredentials.clientId() + ":" + snowmanCredentials.clientSecret(), + OAuth2Properties.SCOPE, + 
BasePolarisAuthenticator.PRINCIPAL_ROLE_ALL, + CatalogProperties.FILE_IO_IMPL, + "org.apache.iceberg.inmemory.InMemoryFileIO", + "warehouse", + currentCatalogName, + "header." + REALM_PROPERTY_KEY, + realm)); + }); + } + + @Override + protected RESTCatalog catalog() { + return restCatalog; + } + + @Override + protected boolean requiresNamespaceCreate() { + return true; + } + + @Override + protected boolean supportsNestedNamespaces() { + return true; + } + + @Override + protected boolean supportsServerSideRetry() { + return true; + } + + @Override + protected boolean overridesRequestedLocation() { + return true; + } + + private void createCatalogRole(String catalogRoleName) { + CatalogRole catalogRole = new CatalogRole(catalogRoleName); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(catalogRole))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + } + + private void addGrant(String catalogRoleName, GrantResource grant) { + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/%s/grants", + EXT.getLocalPort(), currentCatalogName, catalogRoleName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(grant))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + } + + @Test + public void testListGrantsOnCatalogObjectsToCatalogRoles() { + restCatalog.createNamespace(Namespace.of("ns1")); + restCatalog.createNamespace(Namespace.of("ns1", "ns1a")); + restCatalog.createNamespace(Namespace.of("ns2")); + + 
restCatalog.buildTable(TableIdentifier.of(Namespace.of("ns1"), "tbl1"), SCHEMA).create(); + restCatalog + .buildTable(TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl1"), SCHEMA) + .create(); + restCatalog.buildTable(TableIdentifier.of(Namespace.of("ns2"), "tbl2"), SCHEMA).create(); + + restCatalog + .buildView(TableIdentifier.of(Namespace.of("ns1"), "view1")) + .withSchema(SCHEMA) + .withDefaultNamespace(Namespace.of("ns1")) + .withQuery("spark", VIEW_QUERY) + .create(); + restCatalog + .buildView(TableIdentifier.of(Namespace.of("ns1", "ns1a"), "view1")) + .withSchema(SCHEMA) + .withDefaultNamespace(Namespace.of("ns1")) + .withQuery("spark", VIEW_QUERY) + .create(); + restCatalog + .buildView(TableIdentifier.of(Namespace.of("ns2"), "view2")) + .withSchema(SCHEMA) + .withDefaultNamespace(Namespace.of("ns1")) + .withQuery("spark", VIEW_QUERY) + .create(); + + CatalogGrant catalogGrant1 = + new CatalogGrant(CatalogPrivilege.CATALOG_MANAGE_CONTENT, GrantResource.TypeEnum.CATALOG); + + CatalogGrant catalogGrant2 = + new CatalogGrant(CatalogPrivilege.NAMESPACE_FULL_METADATA, GrantResource.TypeEnum.CATALOG); + + CatalogGrant catalogGrant3 = + new CatalogGrant(CatalogPrivilege.VIEW_FULL_METADATA, GrantResource.TypeEnum.CATALOG); + + NamespaceGrant namespaceGrant1 = + new NamespaceGrant( + List.of("ns1"), + NamespacePrivilege.NAMESPACE_FULL_METADATA, + GrantResource.TypeEnum.NAMESPACE); + + NamespaceGrant namespaceGrant2 = + new NamespaceGrant( + List.of("ns1", "ns1a"), + NamespacePrivilege.TABLE_CREATE, + GrantResource.TypeEnum.NAMESPACE); + + NamespaceGrant namespaceGrant3 = + new NamespaceGrant( + List.of("ns2"), + NamespacePrivilege.VIEW_READ_PROPERTIES, + GrantResource.TypeEnum.NAMESPACE); + + TableGrant tableGrant1 = + new TableGrant( + List.of("ns1"), + "tbl1", + TablePrivilege.TABLE_FULL_METADATA, + GrantResource.TypeEnum.TABLE); + + TableGrant tableGrant2 = + new TableGrant( + List.of("ns1", "ns1a"), + "tbl1", + TablePrivilege.TABLE_READ_DATA, + 
GrantResource.TypeEnum.TABLE); + + TableGrant tableGrant3 = + new TableGrant( + List.of("ns2"), "tbl2", TablePrivilege.TABLE_WRITE_DATA, GrantResource.TypeEnum.TABLE); + + ViewGrant viewGrant1 = + new ViewGrant( + List.of("ns1"), "view1", ViewPrivilege.VIEW_FULL_METADATA, GrantResource.TypeEnum.VIEW); + + ViewGrant viewGrant2 = + new ViewGrant( + List.of("ns1", "ns1a"), + "view1", + ViewPrivilege.VIEW_READ_PROPERTIES, + GrantResource.TypeEnum.VIEW); + + ViewGrant viewGrant3 = + new ViewGrant( + List.of("ns2"), + "view2", + ViewPrivilege.VIEW_WRITE_PROPERTIES, + GrantResource.TypeEnum.VIEW); + + createCatalogRole("catalogrole1"); + createCatalogRole("catalogrole2"); + + List<GrantResource> role1Grants = + List.of( + catalogGrant1, + catalogGrant2, + namespaceGrant1, + namespaceGrant2, + tableGrant1, + tableGrant2, + viewGrant1, + viewGrant2); + role1Grants.stream().forEach(grant -> addGrant("catalogrole1", grant)); + List<GrantResource> role2Grants = + List.of( + catalogGrant1, + catalogGrant3, + namespaceGrant1, + namespaceGrant3, + tableGrant1, + tableGrant3, + viewGrant1, + viewGrant3); + role2Grants.stream().forEach(grant -> addGrant("catalogrole2", grant)); + + // List grants for catalogrole1 + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/%s/grants", + EXT.getLocalPort(), currentCatalogName, "catalogrole1")) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(GrantResources.class)) + .extracting(GrantResources::getGrants) + .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class)) + .containsExactlyInAnyOrder(role1Grants.toArray(new GrantResource[0])); + } + + // List grants for catalogrole2 + try (Response response = + EXT.client() + .target( + String.format( +
"http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/%s/grants", + EXT.getLocalPort(), currentCatalogName, "catalogrole2")) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(GrantResources.class)) + .extracting(GrantResources::getGrants) + .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class)) + .containsExactlyInAnyOrder(role2Grants.toArray(new GrantResource[0])); + } + } + + @Test + public void testListGrantsAfterRename() { + restCatalog.createNamespace(Namespace.of("ns1")); + restCatalog.createNamespace(Namespace.of("ns1", "ns1a")); + restCatalog.createNamespace(Namespace.of("ns2")); + + restCatalog + .buildTable(TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl1"), SCHEMA) + .create(); + + TableGrant tableGrant1 = + new TableGrant( + List.of("ns1", "ns1a"), + "tbl1", + TablePrivilege.TABLE_FULL_METADATA, + GrantResource.TypeEnum.TABLE); + + createCatalogRole("catalogrole1"); + addGrant("catalogrole1", tableGrant1); + + // Grants will follow the table through the rename + restCatalog.renameTable( + TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl1"), + TableIdentifier.of(Namespace.of("ns2"), "newtable")); + + TableGrant expectedGrant = + new TableGrant( + List.of("ns2"), + "newtable", + TablePrivilege.TABLE_FULL_METADATA, + GrantResource.TypeEnum.TABLE); + + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/%s/grants", + EXT.getLocalPort(), currentCatalogName, "catalogrole1")) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response) + .returns(200, Response::getStatus) + .extracting(r -> r.readEntity(GrantResources.class)) + .extracting(GrantResources::getGrants) + 
.asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class)) + .containsExactly(expectedGrant); + } + } + + @Test + public void testCreateTableWithOverriddenBaseLocation(PolarisToken adminToken) { + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + Catalog catalog = response.readEntity(Catalog.class); + Map<String, String> catalogProps = new HashMap<>(catalog.getProperties().toMap()); + catalogProps.put(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "false"); + try (Response updateResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), catalog.getName())) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .put( + Entity.json( + new UpdateCatalogRequest( + catalog.getEntityVersion(), + catalogProps, + catalog.getStorageConfigInfo())))) { + assertThat(updateResponse).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + } + } + + restCatalog.createNamespace(Namespace.of("ns1")); + restCatalog.createNamespace( + Namespace.of("ns1", "ns1a"), + ImmutableMap.of( + PolarisEntityConstants.ENTITY_BASE_LOCATION, + catalogBaseLocation + "/ns1/ns1a-override")); + + TableIdentifier tableIdentifier = TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl1"); + restCatalog + .buildTable(tableIdentifier, SCHEMA) + .withLocation(catalogBaseLocation + "/ns1/ns1a-override/tbl1-override") + .create(); + Table table = restCatalog.loadTable(tableIdentifier); + assertThat(table) + .isNotNull() + .isInstanceOf(BaseTable.class) + 
.asInstanceOf(InstanceOfAssertFactories.type(BaseTable.class)) + .returns(catalogBaseLocation + "/ns1/ns1a-override/tbl1-override", BaseTable::location); + } + + @Test + public void testCreateTableWithOverriddenBaseLocationCannotOverlapSibling( + PolarisToken adminToken) { + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + Catalog catalog = response.readEntity(Catalog.class); + Map<String, String> catalogProps = new HashMap<>(catalog.getProperties().toMap()); + catalogProps.put(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "false"); + try (Response updateResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), catalog.getName())) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .put( + Entity.json( + new UpdateCatalogRequest( + catalog.getEntityVersion(), + catalogProps, + catalog.getStorageConfigInfo())))) { + assertThat(updateResponse).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + } + } + + restCatalog.createNamespace(Namespace.of("ns1")); + restCatalog.createNamespace( + Namespace.of("ns1", "ns1a"), + ImmutableMap.of( + PolarisEntityConstants.ENTITY_BASE_LOCATION, + catalogBaseLocation + "/ns1/ns1a-override")); + + TableIdentifier tableIdentifier = TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl1"); + restCatalog + .buildTable(tableIdentifier, SCHEMA) + .withLocation(catalogBaseLocation + "/ns1/ns1a-override/tbl1-override") + .create(); + Table table = restCatalog.loadTable(tableIdentifier); + assertThat(table) + .isNotNull() + 
.isInstanceOf(BaseTable.class) + .asInstanceOf(InstanceOfAssertFactories.type(BaseTable.class)) + .returns(catalogBaseLocation + "/ns1/ns1a-override/tbl1-override", BaseTable::location); + + Assertions.assertThatThrownBy( + () -> + restCatalog + .buildTable(TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl2"), SCHEMA) + .withLocation(catalogBaseLocation + "/ns1/ns1a-override/tbl1-override") + .create()) + .isInstanceOf(BadRequestException.class) + .hasMessageContaining("because it conflicts with existing table or namespace"); + } + + @Test + public void testCreateTableWithOverriddenBaseLocationMustResideInNsDirectory( + PolarisToken adminToken) { + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + Catalog catalog = response.readEntity(Catalog.class); + Map<String, String> catalogProps = new HashMap<>(catalog.getProperties().toMap()); + catalogProps.put(PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "false"); + try (Response updateResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), catalog.getName())) + .request("application/json") + .header("Authorization", "Bearer " + adminToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .put( + Entity.json( + new UpdateCatalogRequest( + catalog.getEntityVersion(), + catalogProps, + catalog.getStorageConfigInfo())))) { + assertThat(updateResponse).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + } + } + + restCatalog.createNamespace(Namespace.of("ns1")); + restCatalog.createNamespace( + Namespace.of("ns1", "ns1a"), + ImmutableMap.of( + PolarisEntityConstants.ENTITY_BASE_LOCATION, + 
catalogBaseLocation + "/ns1/ns1a-override")); + + TableIdentifier tableIdentifier = TableIdentifier.of(Namespace.of("ns1", "ns1a"), "tbl1"); + assertThatThrownBy( + () -> + restCatalog + .buildTable(tableIdentifier, SCHEMA) + .withLocation(catalogBaseLocation + "/ns1/ns1a/tbl1-override") + .create()) + .isInstanceOf(ForbiddenException.class); + } + + @Test + public void testSendNotificationInternalCatalog() { + NotificationRequest notification = new NotificationRequest(); + notification.setNotificationType(NotificationType.CREATE); + notification.setPayload( + new TableUpdateNotification( + "tbl1", + System.currentTimeMillis(), + UUID.randomUUID().toString(), + "s3://my-bucket/path/to/metadata.json", + null)); + restCatalog.createNamespace(Namespace.of("ns1")); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/catalog/v1/%s/namespaces/ns1/tables/tbl1/notifications", + EXT.getLocalPort(), currentCatalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(notification))) { + assertThat(response) + .returns(Response.Status.BAD_REQUEST.getStatusCode(), Response::getStatus) + .extracting(r -> r.readEntity(ErrorResponse.class)) + .returns("Cannot update internal catalog via notifications", ErrorResponse::message); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/catalog/PolarisRestCatalogViewIntegrationTest.java b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisRestCatalogViewIntegrationTest.java new file mode 100644 index 0000000000..f99cbb32b1 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisRestCatalogViewIntegrationTest.java @@ -0,0 +1,295 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.catalog; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.assertj.core.api.Assertions.assertThat; + +import com.google.common.collect.ImmutableMap; +import io.dropwizard.testing.ConfigOverride; +import io.dropwizard.testing.ResourceHelpers; +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.dropwizard.testing.junit5.DropwizardExtensionsSupport; +import io.polaris.core.PolarisConfiguration; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogGrant; +import io.polaris.core.admin.model.CatalogPrivilege; +import io.polaris.core.admin.model.CatalogRole; +import io.polaris.core.admin.model.FileStorageConfigInfo; +import io.polaris.core.admin.model.GrantResource; +import io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.entity.CatalogEntity; +import io.polaris.service.PolarisApplication; +import io.polaris.service.auth.BasePolarisAuthenticator; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.test.PolarisConnectionExtension; +import io.polaris.service.test.PolarisConnectionExtension.PolarisToken; +import io.polaris.service.test.SnowmanCredentialsExtension; +import io.polaris.service.test.SnowmanCredentialsExtension.SnowmanCredentials; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.core.Response; +import java.io.IOException; +import 
java.nio.file.Files; +import java.nio.file.Path; +import java.util.Comparator; +import java.util.List; +import java.util.Optional; +import org.apache.iceberg.CatalogProperties; +import org.apache.iceberg.catalog.SessionCatalog; +import org.apache.iceberg.rest.HTTPClient; +import org.apache.iceberg.rest.RESTCatalog; +import org.apache.iceberg.rest.auth.OAuth2Properties; +import org.apache.iceberg.view.ViewCatalogTests; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.TestInfo; +import org.junit.jupiter.api.extension.ExtendWith; + +/** + * Import the full core Iceberg catalog tests by hitting the REST service via the RESTCatalog + * client. + */ +@ExtendWith({ + DropwizardExtensionsSupport.class, + PolarisConnectionExtension.class, + SnowmanCredentialsExtension.class +}) +public class PolarisRestCatalogViewIntegrationTest extends ViewCatalogTests<RESTCatalog> { + public static final String TEST_ROLE_ARN = + Optional.ofNullable(System.getenv("INTEGRATION_TEST_ROLE_ARN")) + .orElse("arn:aws:iam::123456789012:role/my-role"); + public static final String S3_BUCKET_BASE = + Optional.ofNullable(System.getenv("INTEGRATION_TEST_S3_PATH")) + .orElse("file:///tmp/buckets/my-bucket"); + private static DropwizardAppExtension<PolarisApplicationConfig> EXT = + new DropwizardAppExtension<>( + PolarisApplication.class, + ResourceHelpers.resourceFilePath("polaris-server-integrationtest.yml"), + ConfigOverride.config( + "server.applicationConnectors[0].port", + "0"), // Bind to random port to support parallelism + ConfigOverride.config( + "server.adminConnectors[0].port", "0")); // Bind to random port to support parallelism + + private RESTCatalog restCatalog; + private static String realm; + + @BeforeAll + public static void setup() throws IOException { + realm = PolarisConnectionExtension.getTestRealm(PolarisRestCatalogViewIntegrationTest.class); + + Path testDir = Path.of("build/test_data/iceberg/" + realm); + if (Files.exists(testDir)) { + if 
(Files.isDirectory(testDir)) { + Files.walk(testDir) + .sorted(Comparator.reverseOrder()) + .forEach( + path -> { + try { + Files.delete(path); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + + } else { + Files.delete(testDir); + } + } + Files.createDirectories(testDir); + } + + @BeforeEach + public void before( + TestInfo testInfo, PolarisToken adminToken, SnowmanCredentials snowmanCredentials) { + String userToken = adminToken.token(); + testInfo + .getTestMethod() + .ifPresent( + method -> { + String catalogName = method.getName(); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s", + EXT.getLocalPort(), catalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + if (response.getStatus() == Response.Status.OK.getStatusCode()) { + // Already exists! Must be in a parameterized test. + // Quick hack to get a unique catalogName. + // TODO: Have a while-loop instead with consecutive incrementing suffixes. 
+ catalogName = catalogName + System.currentTimeMillis(); + } + } + + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn(TEST_ROLE_ARN) + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + io.polaris.core.admin.model.CatalogProperties props = + io.polaris.core.admin.model.CatalogProperties.builder( + S3_BUCKET_BASE + "/" + System.getenv("USER") + "/path/to/data") + .addProperty( + CatalogEntity.REPLACE_NEW_LOCATION_PREFIX_WITH_CATALOG_DEFAULT_KEY, + "file:") + .addProperty( + PolarisConfiguration.CATALOG_ALLOW_EXTERNAL_TABLE_LOCATION, "true") + .addProperty( + PolarisConfiguration.CATALOG_ALLOW_UNSTRUCTURED_TABLE_LOCATION, "true") + .build(); + Catalog catalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName(catalogName) + .setProperties(props) + .setStorageConfigInfo( + S3_BUCKET_BASE.startsWith("file:") + ? 
new FileStorageConfigInfo( + StorageConfigInfo.StorageTypeEnum.FILE, List.of("file://")) + : awsConfigModel) + .build(); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs", EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(catalog))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + CatalogRole newRole = new CatalogRole("admin"); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles", + EXT.getLocalPort(), catalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(newRole))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + CatalogGrant grantResource = + new CatalogGrant( + CatalogPrivilege.CATALOG_MANAGE_CONTENT, GrantResource.TypeEnum.CATALOG); + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/admin/grants", + EXT.getLocalPort(), catalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(grantResource))) { + assertThat(response) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/%s/catalog-roles/admin", + EXT.getLocalPort(), catalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response) + .returns(Response.Status.OK.getStatusCode(), Response::getStatus); + CatalogRole catalogRole = 
response.readEntity(CatalogRole.class); + try (Response assignResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles/catalog-admin/catalog-roles/%s", + EXT.getLocalPort(), catalogName)) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(catalogRole))) { + assertThat(assignResponse) + .returns(Response.Status.OK.getStatusCode(), Response::getStatus); + } + } + + SessionCatalog.SessionContext context = SessionCatalog.SessionContext.createEmpty(); + this.restCatalog = + new RESTCatalog( + context, + (config) -> + HTTPClient.builder(config) + .uri(config.get(CatalogProperties.URI)) + .build()); + this.restCatalog.initialize( + "polaris", + ImmutableMap.of( + CatalogProperties.URI, + "http://localhost:" + EXT.getLocalPort() + "/api/catalog", + OAuth2Properties.CREDENTIAL, + snowmanCredentials.clientId() + ":" + snowmanCredentials.clientSecret(), + OAuth2Properties.SCOPE, + BasePolarisAuthenticator.PRINCIPAL_ROLE_ALL, + CatalogProperties.FILE_IO_IMPL, + "org.apache.iceberg.inmemory.InMemoryFileIO", + "warehouse", + catalogName, + "header." 
+ REALM_PROPERTY_KEY, + realm)); + }); + } + + @Override + protected RESTCatalog catalog() { + return restCatalog; + } + + @Override + protected org.apache.iceberg.catalog.Catalog tableCatalog() { + return restCatalog; + } + + @Override + protected boolean requiresNamespaceCreate() { + return true; + } + + @Override + protected boolean supportsServerSideRetry() { + return true; + } + + @Override + protected boolean overridesRequestedLocation() { + return true; + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/catalog/PolarisSparkIntegrationTest.java b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisSparkIntegrationTest.java new file mode 100644 index 0000000000..4dc4162a9f --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/catalog/PolarisSparkIntegrationTest.java @@ -0,0 +1,356 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.catalog; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.assertj.core.api.Assertions.assertThat; + +import com.adobe.testing.s3mock.testcontainers.S3MockContainer; +import io.dropwizard.testing.ConfigOverride; +import io.dropwizard.testing.ResourceHelpers; +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.dropwizard.testing.junit5.DropwizardExtensionsSupport; +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogProperties; +import io.polaris.core.admin.model.ExternalCatalog; +import io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.service.PolarisApplication; +import io.polaris.service.config.PolarisApplicationConfig; +import io.polaris.service.test.PolarisConnectionExtension; +import io.polaris.service.types.NotificationRequest; +import io.polaris.service.types.NotificationType; +import io.polaris.service.types.TableUpdateNotification; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.core.Response; +import java.time.Instant; +import java.util.List; +import java.util.Map; +import org.apache.iceberg.rest.requests.ImmutableRegisterTableRequest; +import org.apache.iceberg.rest.responses.LoadTableResponse; +import org.apache.spark.sql.Row; +import org.apache.spark.sql.SparkSession; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.slf4j.LoggerFactory; + +@ExtendWith({DropwizardExtensionsSupport.class, PolarisConnectionExtension.class}) +public class PolarisSparkIntegrationTest { + private static final DropwizardAppExtension<PolarisApplicationConfig> EXT = + new 
DropwizardAppExtension<>( + PolarisApplication.class, + ResourceHelpers.resourceFilePath("polaris-server-integrationtest.yml"), + ConfigOverride.config( + "server.applicationConnectors[0].port", + "0"), // Bind to random port to support parallelism + ConfigOverride.config( + "server.adminConnectors[0].port", "0")); // Bind to random port to support parallelism + + public static final String CATALOG_NAME = "mycatalog"; + public static final String EXTERNAL_CATALOG_NAME = "external_catalog"; + private static S3MockContainer s3Container = + new S3MockContainer("3.9.1").withInitialBuckets("my-bucket,my-old-bucket"); + private static PolarisConnectionExtension.PolarisToken polarisToken; + private static SparkSession spark; + private static String realm; + + @BeforeAll + public static void setup(PolarisConnectionExtension.PolarisToken polarisToken) { + s3Container.start(); + PolarisSparkIntegrationTest.polarisToken = polarisToken; + realm = PolarisConnectionExtension.getTestRealm(PolarisSparkIntegrationTest.class); + } + + @AfterAll + public static void cleanup() { + s3Container.stop(); + } + + @BeforeEach + public void before() { + AwsStorageConfigInfo awsConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::123456789012:role/my-role") + .setExternalId("externalId") + .setUserArn("userArn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of("s3://my-old-bucket/path/to/data")) + .build(); + CatalogProperties props = new CatalogProperties("s3://my-bucket/path/to/data"); + props.putAll( + Map.of( + "table-default.s3.endpoint", + s3Container.getHttpEndpoint(), + "table-default.s3.path-style-access", + "true", + "table-default.s3.access-key-id", + "foo", + "table-default.s3.secret-access-key", + "bar", + "s3.endpoint", + s3Container.getHttpEndpoint(), + "s3.path-style-access", + "true", + "s3.access-key-id", + "foo", + "s3.secret-access-key", + "bar")); + Catalog catalog = + PolarisCatalog.builder() + 
.setType(Catalog.TypeEnum.INTERNAL) + .setName(CATALOG_NAME) + .setProperties(props) + .setStorageConfigInfo(awsConfigModel) + .build(); + + try (Response response = + EXT.client() + .target( + String.format("http://localhost:%d/api/management/v1/catalogs", EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "BEARER " + polarisToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(catalog))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + CatalogProperties externalProps = new CatalogProperties("s3://my-bucket/path/to/data"); + externalProps.putAll( + Map.of( + "s3.endpoint", + s3Container.getHttpEndpoint(), + "s3.path-style-access", + "true", + "s3.access-key-id", + "foo", + "s3.secret-access-key", + "bar")); + Catalog externalCatalog = + ExternalCatalog.builder() + .setType(Catalog.TypeEnum.EXTERNAL) + .setName(EXTERNAL_CATALOG_NAME) + .setProperties(externalProps) + .setStorageConfigInfo(awsConfigModel) + .setRemoteUrl("http://dummy_url") + .build(); + try (Response response = + EXT.client() + .target( + String.format("http://localhost:%d/api/management/v1/catalogs", EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "BEARER " + polarisToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(externalCatalog))) { + assertThat(response).returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + SparkSession.Builder sessionBuilder = + SparkSession.builder() + .master("local[1]") + .config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") + .config( + "spark.hadoop.fs.s3.aws.credentials.provider", + "org.apache.hadoop.fs.s3.TemporaryAWSCredentialsProvider") + .config("spark.hadoop.fs.s3.access.key", "foo") + .config("spark.hadoop.fs.s3.secret.key", "bar") + .config( + "spark.sql.extensions", + "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") + 
.config("spark.ui.showConsoleProgress", false) + .config("spark.ui.enabled", "false"); + spark = + withCatalog(withCatalog(sessionBuilder, CATALOG_NAME), EXTERNAL_CATALOG_NAME).getOrCreate(); + + spark.sql("USE " + CATALOG_NAME); + } + + private SparkSession.Builder withCatalog(SparkSession.Builder builder, String catalogName) { + return builder + .config( + String.format("spark.sql.catalog.%s", catalogName), + "org.apache.iceberg.spark.SparkCatalog") + .config(String.format("spark.sql.catalog.%s.type", catalogName), "rest") + .config( + String.format("spark.sql.catalog.%s.uri", catalogName), + "http://localhost:" + EXT.getLocalPort() + "/api/catalog") + .config(String.format("spark.sql.catalog.%s.warehouse", catalogName), catalogName) + .config(String.format("spark.sql.catalog.%s.scope", catalogName), "PRINCIPAL_ROLE:ALL") + .config(String.format("spark.sql.catalog.%s.header.realm", catalogName), realm) + .config(String.format("spark.sql.catalog.%s.token", catalogName), polarisToken.token()) + .config(String.format("spark.sql.catalog.%s.s3.access-key-id", catalogName), "fakekey") + .config( + String.format("spark.sql.catalog.%s.s3.secret-access-key", catalogName), "fakesecret") + .config(String.format("spark.sql.catalog.%s.s3.region", catalogName), "us-west-2"); + } + + @AfterEach + public void after() { + cleanupCatalog(CATALOG_NAME); + cleanupCatalog(EXTERNAL_CATALOG_NAME); + try { + SparkSession.clearDefaultSession(); + SparkSession.clearActiveSession(); + spark.close(); + } catch (Exception e) { + LoggerFactory.getLogger(getClass()).error("Unable to close spark session", e); + } + } + + private void cleanupCatalog(String catalogName) { + spark.sql("USE " + catalogName); + List<Row> namespaces = spark.sql("SHOW NAMESPACES").collectAsList(); + for (Row namespace : namespaces) { + List<Row> tables = spark.sql("SHOW TABLES IN " + namespace.getString(0)).collectAsList(); + for (Row table : tables) { + spark.sql("DROP TABLE " + namespace.getString(0) + "." 
+ table.getString(1)); + } + spark.sql("DROP NAMESPACE " + namespace.getString(0)); + } + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/management/v1/catalogs/" + catalogName, + EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "BEARER " + polarisToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .delete()) { + assertThat(response).returns(Response.Status.NO_CONTENT.getStatusCode(), Response::getStatus); + } + } + + @Test + public void testCreateTable() { + long namespaceCount = spark.sql("SHOW NAMESPACES").count(); + assertThat(namespaceCount).isEqualTo(0L); + + spark.sql("CREATE NAMESPACE ns1"); + spark.sql("USE ns1"); + spark.sql("CREATE TABLE tb1 (col1 integer, col2 string)"); + spark.sql("INSERT INTO tb1 VALUES (1, 'a'), (2, 'b'), (3, 'c')"); + long recordCount = spark.sql("SELECT * FROM tb1").count(); + assertThat(recordCount).isEqualTo(3); + } + + @Test + public void testCreateAndUpdateExternalTable() { + long namespaceCount = spark.sql("SHOW NAMESPACES").count(); + assertThat(namespaceCount).isEqualTo(0L); + + spark.sql("CREATE NAMESPACE ns1"); + spark.sql("USE ns1"); + spark.sql("CREATE TABLE tb1 (col1 integer, col2 string)"); + spark.sql("INSERT INTO tb1 VALUES (1, 'a'), (2, 'b'), (3, 'c')"); + long recordCount = spark.sql("SELECT * FROM tb1").count(); + assertThat(recordCount).isEqualTo(3); + + spark.sql("USE " + EXTERNAL_CATALOG_NAME); + List<Row> existingNamespaces = spark.sql("SHOW NAMESPACES").collectAsList(); + assertThat(existingNamespaces).isEmpty(); + + spark.sql("CREATE NAMESPACE externalns1"); + spark.sql("USE externalns1"); + List<Row> existingTables = spark.sql("SHOW TABLES").collectAsList(); + assertThat(existingTables).isEmpty(); + + LoadTableResponse tableResponse = loadTable(CATALOG_NAME, "ns1", "tb1"); + try (Response registerResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/catalog/v1/" + + EXTERNAL_CATALOG_NAME + + 
"/namespaces/externalns1/register", + EXT.getLocalPort())) + .request("application/json") + .header("Authorization", "BEARER " + polarisToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .post( + Entity.json( + ImmutableRegisterTableRequest.builder() + .name("mytb1") + .metadataLocation(tableResponse.metadataLocation()) + .build()))) { + assertThat(registerResponse).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + } + + long tableCount = spark.sql("SHOW TABLES").count(); + assertThat(tableCount).isEqualTo(1); + List<Row> tables = spark.sql("SHOW TABLES").collectAsList(); + assertThat(tables).hasSize(1).extracting(row -> row.getString(1)).containsExactly("mytb1"); + long rowCount = spark.sql("SELECT * FROM mytb1").count(); + assertThat(rowCount).isEqualTo(3); + try { + spark.sql("INSERT INTO mytb1 VALUES (20, 'new_text')"); + Assertions.fail("Expected exception when inserting into external table"); + } catch (Exception e) { + LoggerFactory.getLogger(getClass()).info("Expected exception", e); + // expected exception + } + + spark.sql("INSERT INTO " + CATALOG_NAME + ".ns1.tb1 VALUES (20, 'new_text')"); + tableResponse = loadTable(CATALOG_NAME, "ns1", "tb1"); + TableUpdateNotification updateNotification = + new TableUpdateNotification( + "mytb1", + Instant.now().toEpochMilli(), + tableResponse.tableMetadata().uuid(), + tableResponse.metadataLocation(), + tableResponse.tableMetadata()); + NotificationRequest notificationRequest = new NotificationRequest(); + notificationRequest.setPayload(updateNotification); + notificationRequest.setNotificationType(NotificationType.UPDATE); + try (Response notifyResponse = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/catalog/v1/%s/namespaces/externalns1/tables/mytb1/notifications", + EXT.getLocalPort(), EXTERNAL_CATALOG_NAME)) + .request("application/json") + .header("Authorization", "BEARER " + polarisToken.token()) + .header(REALM_PROPERTY_KEY, realm) + 
.post(Entity.json(notificationRequest))) { + assertThat(notifyResponse) + .returns(Response.Status.NO_CONTENT.getStatusCode(), Response::getStatus); + } + // refresh the table so it queries for the latest metadata.json + spark.sql("REFRESH TABLE mytb1"); + rowCount = spark.sql("SELECT * FROM mytb1").count(); + assertThat(rowCount).isEqualTo(4); + } + + private LoadTableResponse loadTable(String catalog, String namespace, String table) { + try (Response response = + EXT.client() + .target( + String.format( + "http://localhost:%d/api/catalog/v1/%s/namespaces/%s/tables/%s", + EXT.getLocalPort(), catalog, namespace, table)) + .request("application/json") + .header("Authorization", "BEARER " + polarisToken.token()) + .header(REALM_PROPERTY_KEY, realm) + .get()) { + assertThat(response).returns(Response.Status.OK.getStatusCode(), Response::getStatus); + return response.readEntity(LoadTableResponse.class); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/entity/CatalogEntityTest.java b/polaris-service/src/test/java/io/polaris/service/entity/CatalogEntityTest.java new file mode 100644 index 0000000000..89e7f73192 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/entity/CatalogEntityTest.java @@ -0,0 +1,232 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.entity; + +import io.polaris.core.admin.model.AwsStorageConfigInfo; +import io.polaris.core.admin.model.AzureStorageConfigInfo; +import io.polaris.core.admin.model.Catalog; +import io.polaris.core.admin.model.CatalogProperties; +import io.polaris.core.admin.model.GcpStorageConfigInfo; +import io.polaris.core.admin.model.PolarisCatalog; +import io.polaris.core.admin.model.StorageConfigInfo; +import io.polaris.core.entity.CatalogEntity; +import java.util.List; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.ValueSource; + +public class CatalogEntityTest { + + @Test + public void testInvalidAllowedLocationPrefix() { + String storageLocation = "unsupportPrefix://mybucket/path"; + AwsStorageConfigInfo awsStorageConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::012345678901:role/jdoe") + .setExternalId("externalId") + .setUserArn("aws::a:user:arn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of(storageLocation, "s3://externally-owned-bucket")) + .build(); + CatalogProperties prop = new CatalogProperties(storageLocation); + Catalog awsCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(prop) + .setStorageConfigInfo(awsStorageConfigModel) + .build(); + Exception ex = + Assertions.assertThrows( + IllegalArgumentException.class, () -> CatalogEntity.fromCatalog(awsCatalog)); + Assertions.assertTrue( + ex.getMessage() + .contains( + "Location prefix not allowed: 'unsupportPrefix://mybucket/path', expected prefix: 's3://'")); + + // Invalid Azure prefix + AzureStorageConfigInfo azureStorageConfigModel = + AzureStorageConfigInfo.builder() + .setAllowedLocations( + List.of(storageLocation, "abfs://container@storageaccount.blob.windows.net/path")) +
.setStorageType(StorageConfigInfo.StorageTypeEnum.AZURE) + .setTenantId("tenantId") + .build(); + Catalog azureCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties( + new CatalogProperties("abfs://container@storageaccount.blob.windows.net/path")) + .setStorageConfigInfo(azureStorageConfigModel) + .build(); + Exception ex2 = + Assertions.assertThrows( + IllegalArgumentException.class, () -> CatalogEntity.fromCatalog(azureCatalog)); + Assertions.assertTrue( + ex2.getMessage() + .contains("Invalid azure adls location uri unsupportPrefix://mybucket/path")); + + // invalid gcp prefix + GcpStorageConfigInfo gcpStorageConfigModel = + GcpStorageConfigInfo.builder() + .setStorageType(StorageConfigInfo.StorageTypeEnum.GCS) + .setAllowedLocations(List.of(storageLocation, "gs://externally-owned-bucket")) + .build(); + Catalog gcpCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(new CatalogProperties("gs://externally-owned-bucket")) + .setStorageConfigInfo(gcpStorageConfigModel) + .build(); + Exception ex3 = + Assertions.assertThrows( + IllegalArgumentException.class, () -> CatalogEntity.fromCatalog(gcpCatalog)); + Assertions.assertTrue( + ex3.getMessage() + .contains( + "Location prefix not allowed: 'unsupportPrefix://mybucket/path', expected prefix: 'gs://'")); + } + + @Test + public void testExceedMaxAllowedLocations() { + String storageLocation = "s3://mybucket/path/"; + AwsStorageConfigInfo awsStorageConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::012345678901:role/jdoe") + .setExternalId("externalId") + .setUserArn("aws::a:user:arn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations( + List.of( + storageLocation + "1/", + storageLocation + "2/", + storageLocation + "3/", + storageLocation + "4/", + storageLocation + "5/", + storageLocation + "6/")) + .build(); + CatalogProperties prop = new 
CatalogProperties(storageLocation); + Catalog awsCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(prop) + .setStorageConfigInfo(awsStorageConfigModel) + .build(); + Exception ex = + Assertions.assertThrows( + IllegalArgumentException.class, () -> CatalogEntity.fromCatalog(awsCatalog)); + Assertions.assertTrue(ex.getMessage().contains("Number of allowed locations exceeds 5")); + } + + @Test + public void testValidAllowedLocationPrefix() { + String basedLocation = "s3://externally-owned-bucket"; + AwsStorageConfigInfo awsStorageConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn("arn:aws:iam::012345678901:role/jdoe") + .setExternalId("externalId") + .setUserArn("aws::a:user:arn") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of(basedLocation)) + .build(); + + CatalogProperties prop = new CatalogProperties(basedLocation); + Catalog awsCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(prop) + .setStorageConfigInfo(awsStorageConfigModel) + .build(); + Assertions.assertDoesNotThrow(() -> CatalogEntity.fromCatalog(awsCatalog)); + + basedLocation = "abfs://container@storageaccount.blob.windows.net/path"; + prop.put(CatalogEntity.DEFAULT_BASE_LOCATION_KEY, basedLocation); + AzureStorageConfigInfo azureStorageConfigModel = + AzureStorageConfigInfo.builder() + .setAllowedLocations(List.of(basedLocation)) + .setStorageType(StorageConfigInfo.StorageTypeEnum.AZURE) + .setTenantId("tenantId") + .build(); + Catalog azureCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(new CatalogProperties(basedLocation)) + .setStorageConfigInfo(azureStorageConfigModel) + .build(); + Assertions.assertDoesNotThrow(() -> CatalogEntity.fromCatalog(azureCatalog)); + + basedLocation = "gs://externally-owned-bucket"; + prop.put(CatalogEntity.DEFAULT_BASE_LOCATION_KEY, 
basedLocation); + GcpStorageConfigInfo gcpStorageConfigModel = + GcpStorageConfigInfo.builder() + .setStorageType(StorageConfigInfo.StorageTypeEnum.GCS) + .setAllowedLocations(List.of(basedLocation)) + .build(); + Catalog gcpCatalog = + PolarisCatalog.builder() + ..setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(new CatalogProperties(basedLocation)) + .setStorageConfigInfo(gcpStorageConfigModel) + .build(); + Assertions.assertDoesNotThrow(() -> CatalogEntity.fromCatalog(gcpCatalog)); + } + + @ParameterizedTest + @ValueSource(strings = {"", "arn:aws:iam::0123456:role/jdoe", "aws-cn", "aws-us-gov"}) + public void testInvalidArn(String roleArn) { + String basedLocation = "s3://externally-owned-bucket"; + AwsStorageConfigInfo awsStorageConfigModel = + AwsStorageConfigInfo.builder() + .setRoleArn(roleArn) + .setExternalId("externalId") + .setStorageType(StorageConfigInfo.StorageTypeEnum.S3) + .setAllowedLocations(List.of(basedLocation)) + .build(); + + CatalogProperties prop = new CatalogProperties(basedLocation); + Catalog awsCatalog = + PolarisCatalog.builder() + .setType(Catalog.TypeEnum.INTERNAL) + .setName("name") + .setProperties(prop) + .setStorageConfigInfo(awsStorageConfigModel) + .build(); + Exception ex = + Assertions.assertThrows( + IllegalArgumentException.class, () -> CatalogEntity.fromCatalog(awsCatalog)); + String expectedMessage = ""; + switch (roleArn) { + case "": + expectedMessage = "ARN cannot be null or empty"; + break; + case "aws-cn": + case "aws-us-gov": + expectedMessage = "AWS China or Gov Cloud are temporarily not supported"; + break; + default: + expectedMessage = "Invalid role ARN format"; + } + Assertions.assertEquals(expectedMessage, ex.getMessage()); + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/task/ManifestFileCleanupTaskHandlerTest.java b/polaris-service/src/test/java/io/polaris/service/task/ManifestFileCleanupTaskHandlerTest.java new file mode 100644 index 0000000000..77a5dfe7a0 ---
/dev/null +++ b/polaris-service/src/test/java/io/polaris/service/task/ManifestFileCleanupTaskHandlerTest.java @@ -0,0 +1,228 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.task; + +import static org.assertj.core.api.Assertions.assertThatPredicate; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.AsyncTaskType; +import io.polaris.core.entity.TaskEntity; +import io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory; +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.UUID; +import java.util.concurrent.Executors; +import java.util.concurrent.atomic.AtomicInteger; +import org.apache.commons.codec.binary.Base64; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.ManifestFiles; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.inmemory.InMemoryFileIO; +import org.apache.iceberg.io.FileIO; +import org.apache.iceberg.io.OutputFile; +import org.apache.iceberg.io.PositionOutputStream; +import org.junit.jupiter.api.Test; + +class ManifestFileCleanupTaskHandlerTest { + + @Test + public void testCleanupFileNotExists() throws IOException { + 
InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = new InMemoryFileIO(); + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + ManifestFileCleanupTaskHandler handler = + new ManifestFileCleanupTaskHandler((task) -> fileIO, Executors.newSingleThreadExecutor()); + ManifestFile manifestFile = + TaskTestUtils.manifestFile( + fileIO, "manifest1.avro", 1L, "dataFile1.parquet", "dataFile2.parquet"); + fileIO.deleteFile(manifestFile.path()); + TaskEntity task = + new TaskEntity.Builder() + .withTaskType(AsyncTaskType.FILE_CLEANUP) + .withData( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile)))) + .setName(UUID.randomUUID().toString()) + .build(); + assertThatPredicate(handler::canHandleTask).accepts(task); + assertThatPredicate(handler::handleTask).accepts(task); + } + } + + @Test + public void testCleanupFileManifestExistsDataFilesDontExist() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = new InMemoryFileIO(); + TableIdentifier tableIdentifier = + 
TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + ManifestFileCleanupTaskHandler handler = + new ManifestFileCleanupTaskHandler((task) -> fileIO, Executors.newSingleThreadExecutor()); + ManifestFile manifestFile = + TaskTestUtils.manifestFile( + fileIO, "manifest1.avro", 100L, "dataFile1.parquet", "dataFile2.parquet"); + TaskEntity task = + new TaskEntity.Builder() + .withTaskType(AsyncTaskType.FILE_CLEANUP) + .withData( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile)))) + .setName(UUID.randomUUID().toString()) + .build(); + assertThatPredicate(handler::canHandleTask).accepts(task); + assertThatPredicate(handler::handleTask).accepts(task); + } + } + + @Test + public void testCleanupFiles() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = + new InMemoryFileIO() { + @Override + public void close() { + // no-op + } + }; + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + ManifestFileCleanupTaskHandler handler = + new ManifestFileCleanupTaskHandler((task) -> fileIO, Executors.newSingleThreadExecutor()); + String dataFile1Path = "dataFile1.parquet"; + OutputFile dataFile1 = fileIO.newOutputFile(dataFile1Path); + PositionOutputStream out1 = dataFile1.createOrOverwrite(); + out1.write("the data".getBytes()); + out1.close(); + String dataFile2Path = "dataFile2.parquet"; + OutputFile dataFile2 = fileIO.newOutputFile(dataFile2Path); + PositionOutputStream out2 = 
dataFile2.createOrOverwrite(); + out2.write("the data".getBytes()); + out2.close(); + ManifestFile manifestFile = + TaskTestUtils.manifestFile(fileIO, "manifest1.avro", 100L, dataFile1Path, dataFile2Path); + TaskEntity task = + new TaskEntity.Builder() + .withTaskType(AsyncTaskType.FILE_CLEANUP) + .withData( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile)))) + .setName(UUID.randomUUID().toString()) + .build(); + assertThatPredicate(handler::canHandleTask).accepts(task); + assertThatPredicate(handler::handleTask).accepts(task); + assertThatPredicate((String f) -> TaskUtils.exists(f, fileIO)).rejects(dataFile1Path); + assertThatPredicate((String f) -> TaskUtils.exists(f, fileIO)).rejects(dataFile2Path); + } + } + + @Test + public void testCleanupFilesWithRetries() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + Map<String, AtomicInteger> retryCounter = new HashMap<>(); + FileIO fileIO = + new InMemoryFileIO() { + @Override + public void close() { + // no-op + } + + @Override + public void deleteFile(String location) { + int attempts = + retryCounter + .computeIfAbsent(location, k -> new AtomicInteger(0)) + .incrementAndGet(); + if (attempts < 3) { + throw new RuntimeException("I'm failing to test retries"); + } else { + // succeed on the third attempt + super.deleteFile(location); + } + } + }; + + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + ManifestFileCleanupTaskHandler handler = + new
ManifestFileCleanupTaskHandler((task) -> fileIO, Executors.newSingleThreadExecutor()); + String dataFile1Path = "dataFile1.parquet"; + OutputFile dataFile1 = fileIO.newOutputFile(dataFile1Path); + PositionOutputStream out1 = dataFile1.createOrOverwrite(); + out1.write("the data".getBytes()); + out1.close(); + String dataFile2Path = "dataFile2.parquet"; + OutputFile dataFile2 = fileIO.newOutputFile(dataFile2Path); + PositionOutputStream out2 = dataFile2.createOrOverwrite(); + out2.write("the data".getBytes()); + out2.close(); + ManifestFile manifestFile = + TaskTestUtils.manifestFile(fileIO, "manifest1.avro", 100L, dataFile1Path, dataFile2Path); + TaskEntity task = + new TaskEntity.Builder() + .withTaskType(AsyncTaskType.FILE_CLEANUP) + .withData( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile)))) + .setName(UUID.randomUUID().toString()) + .build(); + assertThatPredicate(handler::canHandleTask).accepts(task); + assertThatPredicate(handler::handleTask).accepts(task); + assertThatPredicate((String f) -> TaskUtils.exists(f, fileIO)).rejects(dataFile1Path); + assertThatPredicate((String f) -> TaskUtils.exists(f, fileIO)).rejects(dataFile2Path); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/task/TableCleanupTaskHandlerTest.java b/polaris-service/src/test/java/io/polaris/service/task/TableCleanupTaskHandlerTest.java new file mode 100644 index 0000000000..19ccdd0410 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/task/TableCleanupTaskHandlerTest.java @@ -0,0 +1,367 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.task; + +import static org.assertj.core.api.Assertions.assertThat; + +import io.polaris.core.PolarisCallContext; +import io.polaris.core.PolarisDefaultDiagServiceImpl; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.AsyncTaskType; +import io.polaris.core.entity.PolarisBaseEntity; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.TableLikeEntity; +import io.polaris.core.entity.TaskEntity; +import io.polaris.service.persistence.InMemoryPolarisMetaStoreManagerFactory; +import java.io.IOException; +import java.util.List; +import org.apache.commons.codec.binary.Base64; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.ManifestFiles; +import org.apache.iceberg.Snapshot; +import org.apache.iceberg.catalog.Namespace; +import org.apache.iceberg.catalog.TableIdentifier; +import org.apache.iceberg.inmemory.InMemoryFileIO; +import org.apache.iceberg.io.FileIO; +import org.assertj.core.api.Assertions; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; +import org.slf4j.LoggerFactory; + +class TableCleanupTaskHandlerTest { + + @Test + public void testTableCleanup() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new 
PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = new InMemoryFileIO(); + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + TableCleanupTaskHandler handler = + new TableCleanupTaskHandler(Mockito.mock(), metaStoreManagerFactory, (task) -> fileIO); + long snapshotId = 100L; + ManifestFile manifestFile = + TaskTestUtils.manifestFile( + fileIO, "manifest1.avro", snapshotId, "dataFile1.parquet", "dataFile2.parquet"); + TestSnapshot snapshot = + TaskTestUtils.newSnapshot(fileIO, "manifestList.avro", 1, snapshotId, 99L, manifestFile); + String metadataFile = "v1-49494949.metadata.json"; + TaskTestUtils.writeTableMetadata(fileIO, metadataFile, snapshot); + + TaskEntity task = + new TaskEntity.Builder() + .setName("cleanup_" + tableIdentifier.toString()) + .withTaskType(AsyncTaskType.ENTITY_CLEANUP_SCHEDULER) + .withData( + new TableLikeEntity.Builder(tableIdentifier, metadataFile) + .setName("table1") + .setCatalogId(1) + .setCreateTimestamp(100) + .build()) + .build(); + Assertions.assertThatPredicate(handler::canHandleTask).accepts(task); + + CallContext.setCurrentContext(CallContext.of(realmContext, polarisCallContext)); + handler.handleTask(task); + + assertThat( + metaStoreManagerFactory + .getOrCreateMetaStoreManager(realmContext) + .loadTasks(polarisCallContext, "test", 1) + .getEntities()) + .hasSize(1) + .satisfiesExactly( + taskEntity -> + assertThat(taskEntity) + .returns(PolarisEntityType.TASK.getCode(), PolarisBaseEntity::getTypeCode) + .extracting(entity -> TaskEntity.of(entity)) + .returns(AsyncTaskType.FILE_CLEANUP, TaskEntity::getTaskType) + .returns( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile))), + entity -> + entity.readData( + ManifestFileCleanupTaskHandler.ManifestCleanupTask.class))); 
+ } + } + + @Test + public void testTableCleanupHandlesAlreadyDeletedMetadata() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = + new InMemoryFileIO() { + @Override + public void close() { + // no-op + } + }; + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + TableCleanupTaskHandler handler = + new TableCleanupTaskHandler(Mockito.mock(), metaStoreManagerFactory, (task) -> fileIO); + long snapshotId = 100L; + ManifestFile manifestFile = + TaskTestUtils.manifestFile( + fileIO, "manifest1.avro", snapshotId, "dataFile1.parquet", "dataFile2.parquet"); + TestSnapshot snapshot = + TaskTestUtils.newSnapshot(fileIO, "manifestList.avro", 1, snapshotId, 99L, manifestFile); + String metadataFile = "v1-49494949.metadata.json"; + TaskTestUtils.writeTableMetadata(fileIO, metadataFile, snapshot); + + TableLikeEntity tableLikeEntity = + new TableLikeEntity.Builder(tableIdentifier, metadataFile) + .setName("table1") + .setCatalogId(1) + .setCreateTimestamp(100) + .build(); + TaskEntity task = + new TaskEntity.Builder() + .setName("cleanup_" + tableIdentifier.toString()) + .withTaskType(AsyncTaskType.ENTITY_CLEANUP_SCHEDULER) + .withData(tableLikeEntity) + .build(); + Assertions.assertThatPredicate(handler::canHandleTask).accepts(task); + + CallContext.setCurrentContext(CallContext.of(realmContext, polarisCallContext)); + + // handle the same task twice + // the first one should successfully delete the metadata + List<Boolean> results = List.of(handler.handleTask(task),
handler.handleTask(task)); + assertThat(results).containsExactly(true, true); + + // both tasks successfully executed, but only one should queue subtasks + assertThat( + metaStoreManagerFactory + .getOrCreateMetaStoreManager(realmContext) + .loadTasks(polarisCallContext, "test", 5) + .getEntities()) + .hasSize(1); + } + } + + @Test + public void testTableCleanupDuplicatesTasksIfFileStillExists() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = + new InMemoryFileIO() { + @Override + public void deleteFile(String location) { + LoggerFactory.getLogger(TableCleanupTaskHandler.class) + .info( + "Not deleting file at location {} to simulate concurrent tasks runs", + location); + // don't do anything + } + + @Override + public void close() { + // no-op + } + }; + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + TableCleanupTaskHandler handler = + new TableCleanupTaskHandler(Mockito.mock(), metaStoreManagerFactory, (task) -> fileIO); + long snapshotId = 100L; + ManifestFile manifestFile = + TaskTestUtils.manifestFile( + fileIO, "manifest1.avro", snapshotId, "dataFile1.parquet", "dataFile2.parquet"); + TestSnapshot snapshot = + TaskTestUtils.newSnapshot(fileIO, "manifestList.avro", 1, snapshotId, 99L, manifestFile); + String metadataFile = "v1-49494949.metadata.json"; + TaskTestUtils.writeTableMetadata(fileIO, metadataFile, snapshot); + + TaskEntity task = + new TaskEntity.Builder() + .setName("cleanup_" + tableIdentifier.toString()) + 
.withTaskType(AsyncTaskType.ENTITY_CLEANUP_SCHEDULER) + .withData( + new TableLikeEntity.Builder(tableIdentifier, metadataFile) + .setName("table1") + .setCatalogId(1) + .setCreateTimestamp(100) + .build()) + .build(); + Assertions.assertThatPredicate(handler::canHandleTask).accepts(task); + + CallContext.setCurrentContext(CallContext.of(realmContext, polarisCallContext)); + + // handle the same task twice; the metadata file is never actually deleted, + // so both runs see it as still present + List<Boolean> results = List.of(handler.handleTask(task), handler.handleTask(task)); + assertThat(results).containsExactly(true, true); + + // both tasks successfully executed, and both queue subtasks since the metadata file still exists + assertThat( + metaStoreManagerFactory + .getOrCreateMetaStoreManager(realmContext) + .loadTasks(polarisCallContext, "test", 5) + .getEntities()) + .hasSize(2) + .satisfiesExactly( + taskEntity -> + assertThat(taskEntity) + .returns(PolarisEntityType.TASK.getCode(), PolarisBaseEntity::getTypeCode) + .extracting(entity -> TaskEntity.of(entity)) + .returns(AsyncTaskType.FILE_CLEANUP, TaskEntity::getTaskType) + .returns( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile))), + entity -> + entity.readData( + ManifestFileCleanupTaskHandler.ManifestCleanupTask.class)), + taskEntity -> + assertThat(taskEntity) + .returns(PolarisEntityType.TASK.getCode(), PolarisBaseEntity::getTypeCode) + .extracting(entity -> TaskEntity.of(entity)) + .returns(AsyncTaskType.FILE_CLEANUP, TaskEntity::getTaskType) + .returns( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile))), + entity -> + entity.readData( + ManifestFileCleanupTaskHandler.ManifestCleanupTask.class))); + } + } + + @Test + public void testTableCleanupMultipleSnapshots() throws IOException { + InMemoryPolarisMetaStoreManagerFactory metaStoreManagerFactory = + new
InMemoryPolarisMetaStoreManagerFactory(); + RealmContext realmContext = () -> "realmName"; + PolarisCallContext polarisCallContext = + new PolarisCallContext( + metaStoreManagerFactory.getOrCreateSessionSupplier(realmContext).get(), + new PolarisDefaultDiagServiceImpl()); + try (CallContext callCtx = CallContext.of(realmContext, polarisCallContext)) { + CallContext.setCurrentContext(callCtx); + FileIO fileIO = new InMemoryFileIO(); + TableIdentifier tableIdentifier = + TableIdentifier.of(Namespace.of("db1", "schema1"), "table1"); + TableCleanupTaskHandler handler = + new TableCleanupTaskHandler(Mockito.mock(), metaStoreManagerFactory, (task) -> fileIO); + long snapshotId1 = 100L; + ManifestFile manifestFile1 = + TaskTestUtils.manifestFile( + fileIO, "manifest1.avro", snapshotId1, "dataFile1.parquet", "dataFile2.parquet"); + ManifestFile manifestFile2 = + TaskTestUtils.manifestFile( + fileIO, "manifest2.avro", snapshotId1, "dataFile3.parquet", "dataFile4.parquet"); + Snapshot snapshot = + TaskTestUtils.newSnapshot( + fileIO, "manifestList.avro", 1, snapshotId1, 99L, manifestFile1, manifestFile2); + ManifestFile manifestFile3 = + TaskTestUtils.manifestFile( + fileIO, "manifest3.avro", snapshot.snapshotId() + 1, "dataFile5.parquet"); + Snapshot snapshot2 = + TaskTestUtils.newSnapshot( + fileIO, + "manifestList2.avro", + snapshot.sequenceNumber() + 1, + snapshot.snapshotId() + 1, + snapshot.snapshotId(), + manifestFile1, + manifestFile3); // exclude manifest2 from the new snapshot + String metadataFile = "v1-295495059.metadata.json"; + TaskTestUtils.writeTableMetadata(fileIO, metadataFile, snapshot, snapshot2); + + TaskEntity task = + new TaskEntity.Builder() + .withTaskType(AsyncTaskType.ENTITY_CLEANUP_SCHEDULER) + .withData( + new TableLikeEntity.Builder(tableIdentifier, metadataFile) + .setName("table1") + .setCatalogId(1) + .setCreateTimestamp(100) + .build()) + .build(); + Assertions.assertThatPredicate(handler::canHandleTask).accepts(task); + + 
CallContext.setCurrentContext(CallContext.of(realmContext, polarisCallContext)); + handler.handleTask(task); + + assertThat( + metaStoreManagerFactory + .getOrCreateMetaStoreManager(realmContext) + .loadTasks(polarisCallContext, "test", 5) + .getEntities()) + // all three manifests should be present, even though one is excluded from the latest + // snapshot + .hasSize(3) + .satisfiesExactlyInAnyOrder( + taskEntity -> + assertThat(taskEntity) + .returns(PolarisEntityType.TASK.getCode(), PolarisBaseEntity::getTypeCode) + .extracting(entity -> TaskEntity.of(entity)) + .returns( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile1))), + entity -> + entity.readData( + ManifestFileCleanupTaskHandler.ManifestCleanupTask.class)), + taskEntity -> + assertThat(taskEntity) + .returns(PolarisEntityType.TASK.getCode(), PolarisBaseEntity::getTypeCode) + .extracting(entity -> TaskEntity.of(entity)) + .returns( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile2))), + entity -> + entity.readData( + ManifestFileCleanupTaskHandler.ManifestCleanupTask.class)), + taskEntity -> + assertThat(taskEntity) + .returns(PolarisEntityType.TASK.getCode(), PolarisBaseEntity::getTypeCode) + .extracting(entity -> TaskEntity.of(entity)) + .returns( + new ManifestFileCleanupTaskHandler.ManifestCleanupTask( + tableIdentifier, + Base64.encodeBase64String(ManifestFiles.encode(manifestFile3))), + entity -> + entity.readData( + ManifestFileCleanupTaskHandler.ManifestCleanupTask.class))); + } + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/task/TaskTestUtils.java b/polaris-service/src/test/java/io/polaris/service/task/TaskTestUtils.java new file mode 100644 index 0000000000..709ad056b3 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/task/TaskTestUtils.java @@ -0,0 +1,103 @@ +/* + * Copyright 
(c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.polaris.service.task; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.List; +import java.util.UUID; +import org.apache.iceberg.DataFile; +import org.apache.iceberg.DataFiles; +import org.apache.iceberg.FileFormat; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.ManifestFiles; +import org.apache.iceberg.ManifestWriter; +import org.apache.iceberg.PartitionSpec; +import org.apache.iceberg.Schema; +import org.apache.iceberg.Snapshot; +import org.apache.iceberg.SortOrder; +import org.apache.iceberg.TableMetadata; +import org.apache.iceberg.TableMetadataParser; +import org.apache.iceberg.avro.Avro; +import org.apache.iceberg.io.FileAppender; +import org.apache.iceberg.io.FileIO; +import org.apache.iceberg.io.PositionOutputStream; +import org.apache.iceberg.types.Types; +import org.jetbrains.annotations.NotNull; + +public class TaskTestUtils { + static ManifestFile manifestFile( + FileIO fileIO, String manifestFilePath, long snapshotId, String... 
dataFiles) + throws IOException { + ManifestWriter<DataFile> writer = + ManifestFiles.write( + 2, PartitionSpec.unpartitioned(), fileIO.newOutputFile(manifestFilePath), snapshotId); + for (String dataFile : dataFiles) { + writer.add( + new DataFiles.Builder(PartitionSpec.unpartitioned()) + .withFileSizeInBytes(100L) + .withFormat(FileFormat.PARQUET) + .withPath(dataFile) + .withRecordCount(10) + .build()); + } + writer.close(); + return writer.toManifestFile(); + } + + static void writeTableMetadata(FileIO fileIO, String metadataFile, Snapshot... snapshots) + throws IOException { + TableMetadata.Builder tmBuilder = + TableMetadata.buildFromEmpty() + .setLocation("path/to/table") + .addSchema( + new Schema( + List.of(Types.NestedField.of(1, false, "field1", Types.StringType.get()))), + 1) + .addSortOrder(SortOrder.unsorted()) + .assignUUID(UUID.randomUUID().toString()) + .addPartitionSpec(PartitionSpec.unpartitioned()); + for (Snapshot snapshot : snapshots) { + tmBuilder.addSnapshot(snapshot); + } + TableMetadata tableMetadata = tmBuilder.build(); + PositionOutputStream out = fileIO.newOutputFile(metadataFile).createOrOverwrite(); + out.write(TableMetadataParser.toJson(tableMetadata).getBytes(StandardCharsets.UTF_8)); + out.close(); + } + + static @NotNull TestSnapshot newSnapshot( + FileIO fileIO, + String manifestListLocation, + long sequenceNumber, + long snapshotId, + long parentSnapshot, + ManifestFile...
manifestFiles) + throws IOException { + FileAppender manifestListWriter = + Avro.write(fileIO.newOutputFile(manifestListLocation)) + .schema(ManifestFile.schema()) + .named("manifest_file") + .overwrite() + .build(); + manifestListWriter.addAll(Arrays.asList(manifestFiles)); + manifestListWriter.close(); + TestSnapshot snapshot = + new TestSnapshot(sequenceNumber, snapshotId, parentSnapshot, 1L, manifestListLocation); + return snapshot; + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/task/TestSnapshot.java b/polaris-service/src/test/java/io/polaris/service/task/TestSnapshot.java new file mode 100644 index 0000000000..9ac7c1d122 --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/task/TestSnapshot.java @@ -0,0 +1,130 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.task; + +import com.google.common.collect.Lists; +import java.io.IOException; +import java.util.List; +import java.util.Map; +import org.apache.iceberg.DataFile; +import org.apache.iceberg.GenericManifestFile; +import org.apache.iceberg.GenericPartitionFieldSummary; +import org.apache.iceberg.ManifestContent; +import org.apache.iceberg.ManifestFile; +import org.apache.iceberg.Snapshot; +import org.apache.iceberg.avro.Avro; +import org.apache.iceberg.exceptions.RuntimeIOException; +import org.apache.iceberg.io.CloseableIterable; +import org.apache.iceberg.io.FileIO; + +final class TestSnapshot implements Snapshot { + private long sequenceNumber; + private long snapshotId; + private long parentSnapshot; + private long timestampMillis; + private String manifestListLocation; + + public TestSnapshot( + long sequenceNumber, + long snapshotId, + long parentSnapshot, + long timestampMillis, + String manifestListLocation) { + this.sequenceNumber = sequenceNumber; + this.snapshotId = snapshotId; + this.parentSnapshot = parentSnapshot; + this.timestampMillis = timestampMillis; + this.manifestListLocation = manifestListLocation; + } + + @Override + public long sequenceNumber() { + return sequenceNumber; + } + + @Override + public long snapshotId() { + return snapshotId; + } + + @Override + public Long parentId() { + return parentSnapshot; + } + + @Override + public long timestampMillis() { + return timestampMillis; + } + + @Override + public List allManifests(FileIO io) { + try (CloseableIterable files = + Avro.read(io.newInputFile(manifestListLocation)) + .rename("manifest_file", GenericManifestFile.class.getName()) + .rename("partitions", GenericPartitionFieldSummary.class.getName()) + .rename("r508", GenericPartitionFieldSummary.class.getName()) + .classLoader(GenericManifestFile.class.getClassLoader()) + .project(ManifestFile.schema()) + .reuseContainers(false) + .build()) { + + return Lists.newLinkedList(files); + + } catch (IOException e) 
{ + throw new RuntimeIOException(e, "Cannot read manifest list file: %s", manifestListLocation); + } + } + + @Override + public List dataManifests(FileIO io) { + return allManifests(io).stream() + .filter(mf -> mf.content().equals(ManifestContent.DATA)) + .toList(); + } + + @Override + public List deleteManifests(FileIO io) { + return allManifests(io).stream() + .filter(mf -> mf.content().equals(ManifestContent.DELETES)) + .toList(); + } + + @Override + public String operation() { + return "op"; + } + + @Override + public Map summary() { + return Map.of(); + } + + @Override + public Iterable addedDataFiles(FileIO io) { + return null; + } + + @Override + public Iterable removedDataFiles(FileIO io) { + return null; + } + + @Override + public String manifestListLocation() { + return manifestListLocation; + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/test/PolarisConnectionExtension.java b/polaris-service/src/test/java/io/polaris/service/test/PolarisConnectionExtension.java new file mode 100644 index 0000000000..78b2388bea --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/test/PolarisConnectionExtension.java @@ -0,0 +1,248 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.test; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.polaris.core.PolarisDiagnostics; +import io.polaris.core.context.CallContext; +import io.polaris.core.context.RealmContext; +import io.polaris.core.entity.PolarisEntityConstants; +import io.polaris.core.entity.PolarisEntitySubType; +import io.polaris.core.entity.PolarisEntityType; +import io.polaris.core.entity.PolarisGrantRecord; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.core.persistence.LocalPolarisMetaStoreManagerFactory; +import io.polaris.core.persistence.MetaStoreManagerFactory; +import io.polaris.core.persistence.PolarisMetaStoreManager; +import io.polaris.core.storage.PolarisCredentialProperty; +import io.polaris.core.storage.PolarisStorageActions; +import io.polaris.core.storage.PolarisStorageConfigurationInfo; +import io.polaris.core.storage.PolarisStorageIntegration; +import io.polaris.core.storage.PolarisStorageIntegrationProvider; +import io.polaris.service.auth.TokenUtils; +import io.polaris.service.config.PolarisApplicationConfig; +import java.lang.reflect.Field; +import java.lang.reflect.Modifier; +import java.util.Arrays; +import java.util.EnumMap; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.junit.jupiter.api.extension.BeforeAllCallback; +import org.junit.jupiter.api.extension.ExtensionContext; +import org.junit.jupiter.api.extension.ExtensionContext.Namespace; +import org.junit.jupiter.api.extension.ParameterContext; +import org.junit.jupiter.api.extension.ParameterResolutionException; +import 
org.junit.jupiter.api.extension.ParameterResolver; +import org.junit.platform.commons.util.ReflectionUtils; +import org.mockito.Mockito; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.services.sts.StsClient; +import software.amazon.awssdk.services.sts.model.AssumeRoleRequest; +import software.amazon.awssdk.services.sts.model.AssumeRoleResponse; +import software.amazon.awssdk.services.sts.model.Credentials; + +public class PolarisConnectionExtension implements BeforeAllCallback, ParameterResolver { + + public static final ObjectMapper OBJECT_MAPPER = new ObjectMapper(); + private MetaStoreManagerFactory metaStoreManagerFactory; + private DropwizardAppExtension dropwizardAppExtension; + + public record PolarisToken(String token) {} + + private static PolarisPrincipalSecrets adminSecrets; + private static String realm; + + @Override + public void beforeAll(ExtensionContext extensionContext) throws Exception { + dropwizardAppExtension = findDropwizardExtension(extensionContext); + if (dropwizardAppExtension == null) { + return; + } + + // Generate unique realm using test name for each test since the tests can run in parallel + realm = getTestRealm(extensionContext.getRequiredTestClass()); + extensionContext + .getStore(Namespace.create(extensionContext.getRequiredTestClass())) + .put(REALM_PROPERTY_KEY, realm); + + try { + PolarisApplicationConfig config = + (PolarisApplicationConfig) dropwizardAppExtension.getConfiguration(); + metaStoreManagerFactory = config.getMetaStoreManagerFactory(); + + if (metaStoreManagerFactory instanceof LocalPolarisMetaStoreManagerFactory msmf) { + StsClient mockSts = Mockito.mock(StsClient.class); + Mockito.when(mockSts.assumeRole(Mockito.isA(AssumeRoleRequest.class))) + .thenReturn( + AssumeRoleResponse.builder() + .credentials( + Credentials.builder() + .accessKeyId("theaccesskey") + .secretAccessKey("thesecretkey") + .sessionToken("thesessiontoken") + .build()) + .build()); + msmf.setStorageIntegrationProvider( + new 
PolarisStorageIntegrationProvider() { + @Override + public @Nullable + PolarisStorageIntegration getStorageIntegrationForConfig( + PolarisStorageConfigurationInfo polarisStorageConfigurationInfo) { + return new PolarisStorageIntegration("testIntegration") { + @Override + public EnumMap getSubscopedCreds( + @NotNull PolarisDiagnostics diagnostics, + @NotNull T storageConfig, + boolean allowListOperation, + @NotNull Set allowedReadLocations, + @NotNull Set allowedWriteLocations) { + return new EnumMap<>(PolarisCredentialProperty.class); + } + + @Override + public EnumMap + descPolarisStorageConfiguration( + @NotNull PolarisStorageConfigurationInfo storageConfigInfo) { + return new EnumMap<>(PolarisStorageConfigurationInfo.DescribeProperty.class); + } + + @Override + public @NotNull Map> + validateAccessToLocations( + @NotNull T storageConfig, + @NotNull Set actions, + @NotNull Set locations) { + return Map.of(); + } + }; + } + }); + } + + RealmContext realmContext = + config + .getRealmContextResolver() + .resolveRealmContext( + "http://localhost", "GET", "/", Map.of(), Map.of(REALM_PROPERTY_KEY, realm)); + CallContext ctx = + config + .getCallContextResolver() + .resolveCallContext(realmContext, "GET", "/", Map.of(), Map.of()); + CallContext.setCurrentContext(ctx); + PolarisMetaStoreManager metaStoreManager = + metaStoreManagerFactory.getOrCreateMetaStoreManager(ctx.getRealmContext()); + PolarisMetaStoreManager.EntityResult principal = + metaStoreManager.readEntityByName( + ctx.getPolarisCallContext(), + null, + PolarisEntityType.PRINCIPAL, + PolarisEntitySubType.NULL_SUBTYPE, + PolarisEntityConstants.getRootPrincipalName()); + + Map propertiesMap = readInternalProperties(principal); + adminSecrets = + metaStoreManager + .loadPrincipalSecrets(ctx.getPolarisCallContext(), propertiesMap.get("client_id")) + .getPrincipalSecrets(); + } finally { + CallContext.unsetCurrentContext(); + } + } + + public static String getTestRealm(Class testClassName) { + return 
testClassName.getName().replace('.', '_'); + } + + static PolarisPrincipalSecrets getAdminSecrets() { + return adminSecrets; + } + + public static @Nullable DropwizardAppExtension findDropwizardExtension( + ExtensionContext extensionContext) throws IllegalAccessException { + Field dropwizardExtensionField = + findAnnotatedFields(extensionContext.getRequiredTestClass(), true); + if (dropwizardExtensionField == null) { + LoggerFactory.getLogger(PolarisGrantRecord.class) + .warn( + "Unable to find dropwizard extension field in test class " + + extensionContext.getRequiredTestClass()); + return null; + } + DropwizardAppExtension appExtension = + (DropwizardAppExtension) ReflectionUtils.makeAccessible(dropwizardExtensionField).get(null); + return appExtension; + } + + @Override + public boolean supportsParameter( + ParameterContext parameterContext, ExtensionContext extensionContext) + throws ParameterResolutionException { + return parameterContext.getParameter().getType().equals(PolarisToken.class) + || parameterContext.getParameter().getType().equals(PolarisPrincipalSecrets.class); + } + + @Override + public Object resolveParameter( + ParameterContext parameterContext, ExtensionContext extensionContext) + throws ParameterResolutionException { + if (parameterContext.getParameter().getType().equals(PolarisToken.class)) { + String token = + TokenUtils.getTokenFromSecrets( + dropwizardAppExtension.client(), + dropwizardAppExtension.getLocalPort(), + adminSecrets.getPrincipalClientId(), + adminSecrets.getMainSecret(), + realm); + return new PolarisToken(token); + } else { + return metaStoreManagerFactory; + } + } + + private static Map readInternalProperties( + PolarisMetaStoreManager.EntityResult principal) { + try { + return OBJECT_MAPPER.readValue( + principal.getEntity().getInternalProperties(), + new TypeReference>() {}); + } catch (JsonProcessingException e) { + throw new RuntimeException(e); + } + } + + private static Field findAnnotatedFields(Class testClass, 
boolean isStaticMember) { + final Optional set = + Arrays.stream(testClass.getDeclaredFields()) + .filter(m -> isStaticMember == Modifier.isStatic(m.getModifiers())) + .filter(m -> DropwizardAppExtension.class.isAssignableFrom(m.getType())) + .findFirst(); + if (set.isPresent()) { + return set.get(); + } + if (!testClass.getSuperclass().equals(Object.class)) { + return findAnnotatedFields(testClass.getSuperclass(), isStaticMember); + } + return null; + } +} diff --git a/polaris-service/src/test/java/io/polaris/service/test/SnowmanCredentialsExtension.java b/polaris-service/src/test/java/io/polaris/service/test/SnowmanCredentialsExtension.java new file mode 100644 index 0000000000..ec3728984f --- /dev/null +++ b/polaris-service/src/test/java/io/polaris/service/test/SnowmanCredentialsExtension.java @@ -0,0 +1,224 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.polaris.service.test; + +import static io.polaris.service.context.DefaultContextResolver.REALM_PROPERTY_KEY; +import static org.assertj.core.api.Assertions.assertThat; + +import io.dropwizard.testing.junit5.DropwizardAppExtension; +import io.polaris.core.admin.model.GrantPrincipalRoleRequest; +import io.polaris.core.admin.model.Principal; +import io.polaris.core.admin.model.PrincipalRole; +import io.polaris.core.admin.model.PrincipalWithCredentials; +import io.polaris.core.entity.PolarisPrincipalSecrets; +import io.polaris.service.auth.TokenUtils; +import jakarta.ws.rs.client.Entity; +import jakarta.ws.rs.core.MediaType; +import jakarta.ws.rs.core.Response; +import org.junit.jupiter.api.extension.AfterAllCallback; +import org.junit.jupiter.api.extension.BeforeAllCallback; +import org.junit.jupiter.api.extension.ExtensionContext; +import org.junit.jupiter.api.extension.ExtensionContext.Namespace; +import org.junit.jupiter.api.extension.ParameterContext; +import org.junit.jupiter.api.extension.ParameterResolutionException; +import org.junit.jupiter.api.extension.ParameterResolver; +import org.slf4j.LoggerFactory; + +public class SnowmanCredentialsExtension + implements BeforeAllCallback, AfterAllCallback, ParameterResolver { + + private SnowmanCredentials snowmanCredentials; + + public record SnowmanCredentials(String clientId, String clientSecret) {} + + @Override + public void beforeAll(ExtensionContext extensionContext) throws Exception { + PolarisPrincipalSecrets adminSecrets = PolarisConnectionExtension.getAdminSecrets(); + String realm = + extensionContext + .getStore(Namespace.create(extensionContext.getRequiredTestClass())) + .get(REALM_PROPERTY_KEY, String.class); + + if (adminSecrets == null) { + LoggerFactory.getLogger(SnowmanCredentialsExtension.class) + .atError() + .log( + "No admin secrets configured - you must also configure your test with PolarisConnectionExtension"); + return; + } + DropwizardAppExtension dropwizard = + 
PolarisConnectionExtension.findDropwizardExtension(extensionContext); + if (dropwizard == null) { + return; + } + String userToken = + TokenUtils.getTokenFromSecrets( + dropwizard.client(), + dropwizard.getLocalPort(), + adminSecrets.getPrincipalClientId(), + adminSecrets.getMainSecret(), + realm); + + PrincipalRole principalRole = new PrincipalRole("catalog-admin"); + try (Response createPrResponse = + dropwizard + .client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles", + dropwizard.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(principalRole))) { + assertThat(createPrResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + + Principal principal = new Principal("snowman"); + + try (Response createPResponse = + dropwizard + .client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principals", dropwizard.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + userToken) // how is token getting used? 
+ .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(principal))) { + assertThat(createPResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + PrincipalWithCredentials snowmanWithCredentials = + createPResponse.readEntity(PrincipalWithCredentials.class); + try (Response rotateResp = + dropwizard + .client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principals/%s/rotate", + dropwizard.getLocalPort(), "snowman")) + .request(MediaType.APPLICATION_JSON) + .header( + "Authorization", + "Bearer " + + TokenUtils.getTokenFromSecrets( + dropwizard.client(), + dropwizard.getLocalPort(), + snowmanWithCredentials.getCredentials().getClientId(), + snowmanWithCredentials.getCredentials().getClientSecret(), + realm)) + .header(REALM_PROPERTY_KEY, realm) + .post(Entity.json(snowmanWithCredentials))) { + + assertThat(rotateResp).returns(200, Response::getStatus); + + // Use the rotated credentials. + snowmanWithCredentials = rotateResp.readEntity(PrincipalWithCredentials.class); + } + snowmanCredentials = + new SnowmanCredentials( + snowmanWithCredentials.getCredentials().getClientId(), + snowmanWithCredentials.getCredentials().getClientSecret()); + } + try (Response assignPrResponse = + dropwizard + .client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principals/snowman/principal-roles", + dropwizard.getLocalPort())) + .request("application/json") + .header("Authorization", "Bearer " + userToken) // how is token getting used? 
+ .header(REALM_PROPERTY_KEY, realm) + .put(Entity.json(new GrantPrincipalRoleRequest(principalRole)))) { + assertThat(assignPrResponse) + .returns(Response.Status.CREATED.getStatusCode(), Response::getStatus); + } + } + + @Override + public void afterAll(ExtensionContext extensionContext) throws Exception { + PolarisPrincipalSecrets adminSecrets = PolarisConnectionExtension.getAdminSecrets(); + String realm = + extensionContext + .getStore(Namespace.create(extensionContext.getRequiredTestClass())) + .get(REALM_PROPERTY_KEY, String.class); + + if (adminSecrets == null) { + LoggerFactory.getLogger(SnowmanCredentialsExtension.class) + .atError() + .log( + "No admin secrets configured - you must also configure your test with PolarisConnectionExtension"); + return; + } + DropwizardAppExtension dropwizard = + PolarisConnectionExtension.findDropwizardExtension(extensionContext); + if (dropwizard == null) { + return; + } + String userToken = + TokenUtils.getTokenFromSecrets( + dropwizard.client(), + dropwizard.getLocalPort(), + adminSecrets.getPrincipalClientId(), + adminSecrets.getMainSecret(), + realm); + + try (Response deletePrResponse = + dropwizard + .client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principal-roles/%s", + dropwizard.getLocalPort(), "catalog-admin")) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .delete()) {} + + try (Response deleteResponse = + dropwizard + .client() + .target( + String.format( + "http://localhost:%d/api/management/v1/principals/%s", + dropwizard.getLocalPort(), "snowman")) + .request("application/json") + .header("Authorization", "Bearer " + userToken) + .header(REALM_PROPERTY_KEY, realm) + .delete()) {} + } + + // FIXME - this would be better done with a Credentials-specific annotation processor so + // tests could declare which credentials they want (e.g., @TestCredentials("root") ) + // For now, snowman comes from here and root 
comes from PolarisConnectionExtension + + @Override + public boolean supportsParameter( + ParameterContext parameterContext, ExtensionContext extensionContext) + throws ParameterResolutionException { + + return parameterContext.getParameter().getType() == SnowmanCredentials.class; + } + + @Override + public Object resolveParameter( + ParameterContext parameterContext, ExtensionContext extensionContext) + throws ParameterResolutionException { + return snowmanCredentials; + } +} diff --git a/polaris-service/src/test/resources/META-INF/persistence.xml b/polaris-service/src/test/resources/META-INF/persistence.xml new file mode 100644 index 0000000000..11828b2848 --- /dev/null +++ b/polaris-service/src/test/resources/META-INF/persistence.xml @@ -0,0 +1,44 @@ + + + + + + org.eclipse.persistence.jpa.PersistenceProvider + io.polaris.core.persistence.models.ModelEntity + io.polaris.core.persistence.models.ModelEntityActive + io.polaris.core.persistence.models.ModelEntityChangeTracking + io.polaris.core.persistence.models.ModelEntityDropped + io.polaris.core.persistence.models.ModelGrantRecord + io.polaris.core.persistence.models.ModelPrincipalSecrets + io.polaris.core.persistence.models.ModelSequenceId + NONE + + + + + + + + + + + \ No newline at end of file diff --git a/polaris-service/src/test/resources/META-INF/services/io.polaris.service.auth.DiscoverableAuthenticator b/polaris-service/src/test/resources/META-INF/services/io.polaris.service.auth.DiscoverableAuthenticator new file mode 100644 index 0000000000..32c21d7dd3 --- /dev/null +++ b/polaris-service/src/test/resources/META-INF/services/io.polaris.service.auth.DiscoverableAuthenticator @@ -0,0 +1,17 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +io.polaris.service.auth.TestInlineBearerTokenPolarisAuthenticator \ No newline at end of file diff --git a/polaris-service/src/test/resources/polaris-server-integrationtest.yml b/polaris-service/src/test/resources/polaris-server-integrationtest.yml new file mode 100644 index 0000000000..78ff8190de --- /dev/null +++ b/polaris-service/src/test/resources/polaris-server-integrationtest.yml @@ -0,0 +1,167 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +server: + # Maximum number of threads. + maxThreads: 200 + + # Minimum number of threads to keep alive. + minThreads: 10 + applicationConnectors: + # HTTP-specific options. + - type: http + + # The port on which the HTTP server listens for service requests. + port: 8181 + + adminConnectors: + - type: http + port: 8182 + + # The hostname of the interface to which the HTTP server socket will be bound. If omitted, the + # socket will listen on all interfaces.
+ #bindHost: localhost + + # ssl: + # keyStore: ./example.keystore + # keyStorePassword: example + # + # keyStoreType: JKS # (optional, JKS is default) + + # HTTP request log settings + requestLog: + appenders: + # Settings for logging to stdout. + - type: console + + # Settings for logging to a file. + - type: file + + # The file to which statements will be logged. + currentLogFilename: ./logs/request.log + + # When the log file rolls over, the file will be archived to requests-2012-03-15.log.gz, + # requests.log will be truncated, and new statements written to it. + archivedLogFilenamePattern: ./logs/requests-%d.log.gz + + # The maximum number of log files to archive. + archivedFileCount: 14 + + # Enable archiving if the request log entries go to their own file + archive: true + +# Either 'jdbc' or 'polaris'; specifies the underlying delegate catalog +baseCatalogType: "polaris" + +featureConfiguration: + ENFORCE_PRINCIPAL_CREDENTIAL_ROTATION_REQUIRED_CHECKING: true + DISABLE_TOKEN_GENERATION_FOR_USER_PRINCIPALS: true + ALLOW_WILDCARD_LOCATION: true + ALLOW_SPECIFYING_FILE_IO_IMPL: true + ALLOW_OVERLAPPING_CATALOG_URLS: true + SUPPORTED_CATALOG_STORAGE_TYPES: + - FILE + - S3 + - GCS + - AZURE + +sqlLiteCatalogDirs: + default-realm: ./build/test_data/iceberg + +metaStoreManager: + type: in-memory +# type: remote +# url: http://sdp-devvm-mcollado:8080 + +oauth2: + type: default + tokenBroker: + type: symmetric-key + secret: polaris +# type: snowflake +# clientId: ${GS_POLARIS_SERVICE_CLIENT_ID} +# clientSecret: ${GS_POLARIS_SERVICE_CLIENT_SECRET} +# clientSecret2: ${GS_POLARIS_SERVICE_CLIENT_SECRET2} + +authenticator: + class: io.polaris.service.auth.DefaultPolarisAuthenticator + tokenBroker: + type: symmetric-key + secret: polaris + + +callContextResolver: + type: default +# type: snowflake +# account: ${SNOWFLAKE_ACCOUNT:-SNOWFLAKE} +# scheme: ${GS_SCHEME:-http} +# host: ${GS_HOST:-localhost} +# port: ${GS_PORT:-8080} + +realmContextResolver: + type: default
+# type: snowflake +# account: ${SNOWFLAKE_ACCOUNT:-SNOWFLAKE} +# scheme: ${GS_SCHEME:-http} +# host: ${GS_HOST:-localhost} +# port: ${GS_PORT:-8080} + +defaultRealm: SNOWFLAKE + +cors: + allowed-origins: + - snowflake.com + + # Logging settings. +logging: + + # The default level of all loggers. Can be OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL. + level: INFO + + # Logger-specific levels. + loggers: + io.polaris: DEBUG + + appenders: + + - type: console + # If true, write log statements to stdout. + # enabled: true + # Do not display log statements below this threshold to stdout. + threshold: ALL + # Custom Logback PatternLayout with threadname. + logFormat: "%-5p [%d{ISO8601} - %-6r] [%t] [%X{aid}%X{sid}%X{tid}%X{wid}%X{oid}%X{srv}%X{job}%X{rid}] %c{30}: %m %kvp%n%ex" + + # Settings for logging to a file. + - type: file + # If true, write log statements to a file. + # enabled: true + # Do not write log statements below this threshold to the file. + threshold: ALL + # Custom Logback PatternLayout with threadname. + logFormat: "%-5p [%d{ISO8601} - %-6r] [%t] [%X{aid}%X{sid}%X{tid}%X{wid}%X{oid}%X{srv}%X{job}%X{rid}] %c: %m %kvp%n%ex" + + # when using json logging, you must use a format like this, else the + # mdc section of the json log will be incorrect + # logFormat: "%-5p [%d{ISO8601} - %-6r] [%t] [%X] %c: %m%n%ex" + + # The file to which statements will be logged. + currentLogFilename: ./logs/iceberg-rest.log + # When the log file rolls over, the file will be archived to snowflake-2012-03-15.log.gz, + # snowflake.log will be truncated, and new statements written to it. + archivedLogFilenamePattern: ./logs/iceberg-rest-%d.log.gz + # The maximum number of log files to archive. 
+ archivedFileCount: 14 diff --git a/regtests/.dockerignore b/regtests/.dockerignore new file mode 100644 index 0000000000..fbc79dce91 --- /dev/null +++ b/regtests/.dockerignore @@ -0,0 +1,5 @@ +.python-version +.pytest_cache +t_pyspark/src/spark-warehouse +t_pyspark/src/.pytest_cache +polaris/polaris.management.egg-info \ No newline at end of file diff --git a/regtests/.gitignore b/regtests/.gitignore new file mode 100644 index 0000000000..46a767f646 --- /dev/null +++ b/regtests/.gitignore @@ -0,0 +1,4 @@ +.venv +t_pyspark/src/__pycache__ +.pytest_cache +.python-version diff --git a/regtests/Dockerfile b/regtests/Dockerfile new file mode 100644 index 0000000000..99d762af51 --- /dev/null +++ b/regtests/Dockerfile @@ -0,0 +1,42 @@ +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +FROM apache/spark:3.5.1-python3 +ARG POLARIS_HOST=polaris +ENV POLARIS_HOST=$POLARIS_HOST +ENV SPARK_HOME=/opt/spark + +USER root +RUN apt-get update +RUN apt-get install -y diffutils wget curl python3.8-venv +RUN mkdir -p /home/spark && \ + chown -R spark /home/spark && \ + mkdir -p /tmp/polaris-regtests && \ + chown -R spark /tmp/polaris-regtests +RUN mkdir /opt/spark/conf && chmod -R 777 /opt/spark/conf + +USER spark +ENV PYTHONPATH="${SPARK_HOME}/python/:${SPARK_HOME}/python/lib/py4j-0.10.9.7-src.zip:$PYTHONPATH" + +# Copy and run setup.sh separately so that test sources can change, but the setup script run is still cached +WORKDIR /home/spark/regtests +COPY ./setup.sh /home/spark/regtests/setup.sh +COPY ./pyspark-setup.sh /home/spark/regtests/pyspark-setup.sh +COPY ./client/python /home/spark/regtests/client/python + +RUN ./setup.sh + +COPY --chown=spark . /home/spark/regtests + +CMD ["./run.sh"] \ No newline at end of file diff --git a/regtests/README.md b/regtests/README.md new file mode 100644 index 0000000000..590201ffac --- /dev/null +++ b/regtests/README.md @@ -0,0 +1,152 @@ + + +# End-to-end regression tests + +## Run Tests With Docker Compose + +Tests can be run with Docker Compose by executing + +```bash +docker compose up --build --exit-code-from regtest +``` + +This is the flow used in CI and should be done locally before pushing to GitHub to ensure that no environmental +factors contribute to the outcome of the tests. + +## Run all tests + +The Polaris REST server must be running on localhost:8181 before running tests. + +Running the test harness will automatically run the idempotent setup script. + +``` +./run.sh +``` + +## Run in VERBOSE mode with test stdout printing to console + +``` +VERBOSE=1 ./run.sh t_spark_sql/src/spark_sql_basic.sh +``` + +## Run with Cloud resources +Several tests require access to cloud resources, such as S3 or GCS. To run these tests, you must export the appropriate +environment variables prior to running the tests.
Each cloud can be enabled independently. +Create a .env file that contains the following variables: + +``` +# AWS variables +AWS_TEST_ENABLED=true +AWS_ACCESS_KEY_ID= +AWS_SECRET_ACCESS_KEY= +AWS_STORAGE_BUCKET= +AWS_ROLE_ARN= +AWS_TEST_BASE=s3:/// + +# GCP variables +GCS_TEST_ENABLED=true +GCS_TEST_BASE=gs:// +GOOGLE_APPLICATION_CREDENTIALS=/tmp/credentials/ + +# Azure variables +AZURE_TEST_ENABLED=true +AZURE_TENANT_ID= +AZURE_DFS_TEST_BASE=abfss://@.dfs.core.windows.net/ +AZURE_BLOB_TEST_BASE=abfss://@.blob.core.windows.net/ +``` +`GOOGLE_APPLICATION_CREDENTIALS` must be mounted to the container volumes. Copy your credentials file +into the `credentials` folder. Then specify the name of the file in your .env file - do not change the +path, as `/tmp/credentials` is the folder on the container where the credentials file will be mounted. + +## Setup without running tests + +Setup is idempotent. + +``` +./setup.sh +``` + +## Experiment with failed test + +``` +rm t_hello_world/ref/hello_world.sh.ref +./run.sh +``` + +``` +Tue Apr 23 06:32:23 UTC 2024: Running all tests +Tue Apr 23 06:32:23 UTC 2024: Starting test t_hello_world:hello_world.sh +Tue Apr 23 06:32:23 UTC 2024: Test run concluded for t_hello_world:hello_world.sh +Tue Apr 23 06:32:23 UTC 2024: Test FAILED: t_hello_world:hello_world.sh +Tue Apr 23 06:32:23 UTC 2024: To compare and fix diffs: /tmp/polaris-regtests/t_hello_world/hello_world.sh.fixdiffs.sh +Tue Apr 23 06:32:23 UTC 2024: Starting test t_spark_sql:spark_sql_basic.sh +Tue Apr 23 06:32:32 UTC 2024: Test run concluded for t_spark_sql:spark_sql_basic.sh +Tue Apr 23 06:32:32 UTC 2024: Test SUCCEEDED: t_spark_sql:spark_sql_basic.sh +``` + +Simply run the specified fixdiffs file to run `meld` and fix the ref file. 
+ +``` +/tmp/polaris-regtests/t_hello_world/hello_world.sh.fixdiffs.sh +``` + +## Run a spark-sql interactive shell + +With an in-memory standalone Polaris server running: + +``` +./run_spark_sql.sh +``` + +## Python Tests + +Python tests are based on `pytest`. They rely on a Python Polaris client, which is generated from the OpenAPI spec. +The client can be generated using two commands: + +```bash +# generate the management api client +$ docker run --rm \ + -v ${PWD}:/local openapitools/openapi-generator-cli generate \ + -i /local/spec/polaris-management-service.yml \ + -g python \ + -o /local/regtests/client/python --additional-properties=packageName=polaris.management --additional-properties=apiNamePrefix=polaris + +# generate the iceberg rest client +$ docker run --rm \ + -v ${PWD}:/local openapitools/openapi-generator-cli generate \ + -i /local/spec/rest-catalog-open-api.yaml \ + -g python \ + -o /local/regtests/client/python --additional-properties=packageName=polaris.catalog --additional-properties=apiNameSuffix="" --additional-properties=apiNamePrefix=Iceberg +``` + +Tests rely on Python 3.8 or higher. `pyenv` can be used to install a current version and map it to the local directory +by using + +```bash +pyenv install 3.8 +pyenv local 3.8 +``` + +Once you've done that, you can run `setup.sh` to generate a Python virtual environment (installed at `~/polaris-venv`) +and download all of the test dependencies into it. From here, `run.sh` will be able to execute any pytest present. + +To debug, set up IntelliJ to point at your virtual environment to find your test dependencies +(see https://www.jetbrains.com/help/idea/configuring-python-sdk.html). Then run the test in your IDE. + +The above is handled automatically when running reg tests from the Docker image.
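The ref-file workflow described above (each test's stdout is compared against a checked-in `.ref` file, and the generated `fixdiffs` script promotes intentional changes) can be sketched as follows. This is an illustrative sketch only: the real harness is implemented in shell inside `run.sh`, and `check_against_ref` is a hypothetical name, not part of the repository.

```python
import difflib
from pathlib import Path

def check_against_ref(actual_output, ref_file):
    """Compare a test's captured stdout against its .ref file.

    Returns (passed, unified_diff). A missing ref file is treated as
    empty expected output, so the diff shows everything the test
    printed -- which is what you would review and promote to a new ref.
    """
    expected = ref_file.read_text() if ref_file.exists() else ""
    diff = "".join(
        difflib.unified_diff(
            expected.splitlines(keepends=True),
            actual_output.splitlines(keepends=True),
            fromfile=str(ref_file),
            tofile="actual",
        )
    )
    return diff == "", diff
```

When output changes intentionally, the harness's fixdiffs script opens `meld` on the two files so the ref can be updated to match the new output.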
\ No newline at end of file diff --git a/regtests/client/python/.github/workflows/python.yml b/regtests/client/python/.github/workflows/python.yml new file mode 100644 index 0000000000..559ec9f909 --- /dev/null +++ b/regtests/client/python/.github/workflows/python.yml @@ -0,0 +1,54 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# NOTE: This file is auto generated by OpenAPI Generator. +# URL: https://openapi-generator.tech +# +# ref: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python + +name: polaris.management Python package + +on: [push, pull_request] + +jobs: + build: + + runs-on: ubuntu-latest + strategy: + matrix: + python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"] + + steps: + - uses: actions/checkout@v3 + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v4 + with: + python-version: ${{ matrix.python-version }} + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install flake8 pytest + if [ -f requirements.txt ]; then pip install -r requirements.txt; fi + if [ -f test-requirements.txt ]; then pip install -r test-requirements.txt; fi + - name: Lint with flake8 + run: | + # stop the build if there are Python syntax errors or undefined names + flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics + # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide + flake8 . 
--count --exit-zero --max-complexity=10 --max-line-length=127 --statistics + - name: Test with pytest + run: | + pytest diff --git a/regtests/client/python/.gitignore b/regtests/client/python/.gitignore new file mode 100644 index 0000000000..43995bd42f --- /dev/null +++ b/regtests/client/python/.gitignore @@ -0,0 +1,66 @@ +# Byte-compiled / optimized / DLL files +__pycache__/ +*.py[cod] +*$py.class + +# C extensions +*.so + +# Distribution / packaging +.Python +env/ +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +*.egg-info/ +.installed.cfg +*.egg + +# PyInstaller +# Usually these files are written by a python script from a template +# before PyInstaller builds the exe, so as to inject date/other infos into it. +*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*,cover +.hypothesis/ +venv/ +.venv/ +.python-version +.pytest_cache + +# Translations +*.mo +*.pot + +# Django stuff: +*.log + +# Sphinx documentation +docs/_build/ + +# PyBuilder +target/ + +#Ipython Notebook +.ipynb_checkpoints diff --git a/regtests/client/python/.gitlab-ci.yml b/regtests/client/python/.gitlab-ci.yml new file mode 100644 index 0000000000..a034567142 --- /dev/null +++ b/regtests/client/python/.gitlab-ci.yml @@ -0,0 +1,46 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# NOTE: This file is auto generated by OpenAPI Generator. +# URL: https://openapi-generator.tech +# +# ref: https://docs.gitlab.com/ee/ci/README.html +# ref: https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Python.gitlab-ci.yml + +stages: + - test + +.pytest: + stage: test + script: + - pip install -r requirements.txt + - pip install -r test-requirements.txt + - pytest --cov=polaris.catalog + +pytest-3.7: + extends: .pytest + image: python:3.7-alpine +pytest-3.8: + extends: .pytest + image: python:3.8-alpine +pytest-3.9: + extends: .pytest + image: python:3.9-alpine +pytest-3.10: + extends: .pytest + image: python:3.10-alpine +pytest-3.11: + extends: .pytest + image: python:3.11-alpine diff --git a/regtests/client/python/.openapi-generator-ignore b/regtests/client/python/.openapi-generator-ignore new file mode 100644 index 0000000000..7484ee590a --- /dev/null +++ b/regtests/client/python/.openapi-generator-ignore @@ -0,0 +1,23 @@ +# OpenAPI Generator Ignore +# Generated by openapi-generator https://github.com/openapitools/openapi-generator + +# Use this file to prevent files from being overwritten by the generator. +# The patterns follow closely to .gitignore or .dockerignore. + +# As an example, the C# client generator defines ApiClient.cs. +# You can make changes and tell OpenAPI Generator to ignore just this file by uncommenting the following line: +#ApiClient.cs + +# You can match any string of characters against a directory, file or extension with a single asterisk (*): +#foo/*/qux +# The above matches foo/bar/qux and foo/baz/qux, but not foo/bar/baz/qux + +# You can recursively match patterns against a directory, file or extension with a double asterisk (**): +#foo/**/qux +# This matches foo/bar/qux, foo/baz/qux, and foo/bar/baz/qux + +# You can also negate patterns with an exclamation (!). 
+# For example, you can ignore all files in a docs folder with the file extension .md: +#docs/*.md +# Then explicitly reverse the ignore rule for a single file: +#!docs/README.md diff --git a/regtests/client/python/.openapi-generator/FILES b/regtests/client/python/.openapi-generator/FILES new file mode 100644 index 0000000000..4bf280b4c6 --- /dev/null +++ b/regtests/client/python/.openapi-generator/FILES @@ -0,0 +1,249 @@ +.github/workflows/python.yml +.gitignore +.gitlab-ci.yml +.travis.yml +README.md +docs/AddPartitionSpecUpdate.md +docs/AddSchemaUpdate.md +docs/AddSnapshotUpdate.md +docs/AddSortOrderUpdate.md +docs/AddViewVersionUpdate.md +docs/AndOrExpression.md +docs/AssertCreate.md +docs/AssertCurrentSchemaId.md +docs/AssertDefaultSortOrderId.md +docs/AssertDefaultSpecId.md +docs/AssertLastAssignedFieldId.md +docs/AssertLastAssignedPartitionId.md +docs/AssertRefSnapshotId.md +docs/AssertTableUUID.md +docs/AssertViewUUID.md +docs/AssignUUIDUpdate.md +docs/BaseUpdate.md +docs/BlobMetadata.md +docs/CatalogConfig.md +docs/CommitReport.md +docs/CommitTableRequest.md +docs/CommitTableResponse.md +docs/CommitTransactionRequest.md +docs/CommitViewRequest.md +docs/ContentFile.md +docs/CountMap.md +docs/CounterResult.md +docs/CreateNamespaceRequest.md +docs/CreateNamespaceResponse.md +docs/CreateTableRequest.md +docs/CreateViewRequest.md +docs/DataFile.md +docs/EqualityDeleteFile.md +docs/ErrorModel.md +docs/Expression.md +docs/FileFormat.md +docs/GetNamespaceResponse.md +docs/IcebergCatalogAPI.md +docs/IcebergConfigurationAPI.md +docs/IcebergErrorResponse.md +docs/IcebergOAuth2API.md +docs/ListNamespacesResponse.md +docs/ListTablesResponse.md +docs/ListType.md +docs/LiteralExpression.md +docs/LoadTableResult.md +docs/LoadViewResult.md +docs/MapType.md +docs/MetadataLogInner.md +docs/MetricResult.md +docs/ModelSchema.md +docs/NotExpression.md +docs/NotificationRequest.md +docs/NotificationType.md +docs/NullOrder.md +docs/OAuthError.md +docs/OAuthTokenResponse.md 
+docs/PartitionField.md +docs/PartitionSpec.md +docs/PartitionStatisticsFile.md +docs/PositionDeleteFile.md +docs/PrimitiveTypeValue.md +docs/RegisterTableRequest.md +docs/RemovePartitionStatisticsUpdate.md +docs/RemovePropertiesUpdate.md +docs/RemoveSnapshotRefUpdate.md +docs/RemoveSnapshotsUpdate.md +docs/RemoveStatisticsUpdate.md +docs/RenameTableRequest.md +docs/ReportMetricsRequest.md +docs/SQLViewRepresentation.md +docs/ScanReport.md +docs/SetCurrentSchemaUpdate.md +docs/SetCurrentViewVersionUpdate.md +docs/SetDefaultSortOrderUpdate.md +docs/SetDefaultSpecUpdate.md +docs/SetExpression.md +docs/SetLocationUpdate.md +docs/SetPartitionStatisticsUpdate.md +docs/SetPropertiesUpdate.md +docs/SetSnapshotRefUpdate.md +docs/SetStatisticsUpdate.md +docs/Snapshot.md +docs/SnapshotLogInner.md +docs/SnapshotReference.md +docs/SnapshotSummary.md +docs/SortDirection.md +docs/SortField.md +docs/SortOrder.md +docs/StatisticsFile.md +docs/StructField.md +docs/StructType.md +docs/TableIdentifier.md +docs/TableMetadata.md +docs/TableRequirement.md +docs/TableUpdate.md +docs/TableUpdateNotification.md +docs/Term.md +docs/TimerResult.md +docs/TokenType.md +docs/TransformTerm.md +docs/Type.md +docs/UnaryExpression.md +docs/UpdateNamespacePropertiesRequest.md +docs/UpdateNamespacePropertiesResponse.md +docs/UpgradeFormatVersionUpdate.md +docs/ValueMap.md +docs/ViewHistoryEntry.md +docs/ViewMetadata.md +docs/ViewRepresentation.md +docs/ViewRequirement.md +docs/ViewUpdate.md +docs/ViewVersion.md +git_push.sh +polaris/__init__.py +polaris/catalog/__init__.py +polaris/catalog/api/__init__.py +polaris/catalog/api/iceberg_catalog_api.py +polaris/catalog/api/iceberg_configuration_api.py +polaris/catalog/api/iceberg_o_auth2_api.py +polaris/catalog/api_client.py +polaris/catalog/api_response.py +polaris/catalog/configuration.py +polaris/catalog/exceptions.py +polaris/catalog/models/__init__.py +polaris/catalog/models/add_partition_spec_update.py +polaris/catalog/models/add_schema_update.py 
+polaris/catalog/models/add_snapshot_update.py +polaris/catalog/models/add_sort_order_update.py +polaris/catalog/models/add_view_version_update.py +polaris/catalog/models/and_or_expression.py +polaris/catalog/models/assert_create.py +polaris/catalog/models/assert_current_schema_id.py +polaris/catalog/models/assert_default_sort_order_id.py +polaris/catalog/models/assert_default_spec_id.py +polaris/catalog/models/assert_last_assigned_field_id.py +polaris/catalog/models/assert_last_assigned_partition_id.py +polaris/catalog/models/assert_ref_snapshot_id.py +polaris/catalog/models/assert_table_uuid.py +polaris/catalog/models/assert_view_uuid.py +polaris/catalog/models/assign_uuid_update.py +polaris/catalog/models/base_update.py +polaris/catalog/models/blob_metadata.py +polaris/catalog/models/catalog_config.py +polaris/catalog/models/commit_report.py +polaris/catalog/models/commit_table_request.py +polaris/catalog/models/commit_table_response.py +polaris/catalog/models/commit_transaction_request.py +polaris/catalog/models/commit_view_request.py +polaris/catalog/models/content_file.py +polaris/catalog/models/count_map.py +polaris/catalog/models/counter_result.py +polaris/catalog/models/create_namespace_request.py +polaris/catalog/models/create_namespace_response.py +polaris/catalog/models/create_table_request.py +polaris/catalog/models/create_view_request.py +polaris/catalog/models/data_file.py +polaris/catalog/models/equality_delete_file.py +polaris/catalog/models/error_model.py +polaris/catalog/models/expression.py +polaris/catalog/models/file_format.py +polaris/catalog/models/get_namespace_response.py +polaris/catalog/models/iceberg_error_response.py +polaris/catalog/models/list_namespaces_response.py +polaris/catalog/models/list_tables_response.py +polaris/catalog/models/list_type.py +polaris/catalog/models/literal_expression.py +polaris/catalog/models/load_table_result.py +polaris/catalog/models/load_view_result.py +polaris/catalog/models/map_type.py 
+polaris/catalog/models/metadata_log_inner.py +polaris/catalog/models/metric_result.py +polaris/catalog/models/model_schema.py +polaris/catalog/models/not_expression.py +polaris/catalog/models/notification_request.py +polaris/catalog/models/notification_type.py +polaris/catalog/models/null_order.py +polaris/catalog/models/o_auth_error.py +polaris/catalog/models/o_auth_token_response.py +polaris/catalog/models/partition_field.py +polaris/catalog/models/partition_spec.py +polaris/catalog/models/partition_statistics_file.py +polaris/catalog/models/position_delete_file.py +polaris/catalog/models/primitive_type_value.py +polaris/catalog/models/register_table_request.py +polaris/catalog/models/remove_partition_statistics_update.py +polaris/catalog/models/remove_properties_update.py +polaris/catalog/models/remove_snapshot_ref_update.py +polaris/catalog/models/remove_snapshots_update.py +polaris/catalog/models/remove_statistics_update.py +polaris/catalog/models/rename_table_request.py +polaris/catalog/models/report_metrics_request.py +polaris/catalog/models/scan_report.py +polaris/catalog/models/set_current_schema_update.py +polaris/catalog/models/set_current_view_version_update.py +polaris/catalog/models/set_default_sort_order_update.py +polaris/catalog/models/set_default_spec_update.py +polaris/catalog/models/set_expression.py +polaris/catalog/models/set_location_update.py +polaris/catalog/models/set_partition_statistics_update.py +polaris/catalog/models/set_properties_update.py +polaris/catalog/models/set_snapshot_ref_update.py +polaris/catalog/models/set_statistics_update.py +polaris/catalog/models/snapshot.py +polaris/catalog/models/snapshot_log_inner.py +polaris/catalog/models/snapshot_reference.py +polaris/catalog/models/snapshot_summary.py +polaris/catalog/models/sort_direction.py +polaris/catalog/models/sort_field.py +polaris/catalog/models/sort_order.py +polaris/catalog/models/sql_view_representation.py +polaris/catalog/models/statistics_file.py 
+polaris/catalog/models/struct_field.py +polaris/catalog/models/struct_type.py +polaris/catalog/models/table_identifier.py +polaris/catalog/models/table_metadata.py +polaris/catalog/models/table_requirement.py +polaris/catalog/models/table_update.py +polaris/catalog/models/table_update_notification.py +polaris/catalog/models/term.py +polaris/catalog/models/timer_result.py +polaris/catalog/models/token_type.py +polaris/catalog/models/transform_term.py +polaris/catalog/models/type.py +polaris/catalog/models/unary_expression.py +polaris/catalog/models/update_namespace_properties_request.py +polaris/catalog/models/update_namespace_properties_response.py +polaris/catalog/models/upgrade_format_version_update.py +polaris/catalog/models/value_map.py +polaris/catalog/models/view_history_entry.py +polaris/catalog/models/view_metadata.py +polaris/catalog/models/view_representation.py +polaris/catalog/models/view_requirement.py +polaris/catalog/models/view_update.py +polaris/catalog/models/view_version.py +polaris/catalog/py.typed +polaris/catalog/rest.py +pyproject.toml +requirements.txt +setup.cfg +setup.py +test-requirements.txt +test/__init__.py +tox.ini diff --git a/regtests/client/python/.openapi-generator/VERSION b/regtests/client/python/.openapi-generator/VERSION new file mode 100644 index 0000000000..6116b14d2c --- /dev/null +++ b/regtests/client/python/.openapi-generator/VERSION @@ -0,0 +1 @@ +7.8.0-SNAPSHOT diff --git a/regtests/client/python/.travis.yml b/regtests/client/python/.travis.yml new file mode 100644 index 0000000000..a4f41c2226 --- /dev/null +++ b/regtests/client/python/.travis.yml @@ -0,0 +1,32 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# ref: https://docs.travis-ci.com/user/languages/python +language: python +python: + - "3.7" + - "3.8" + - "3.9" + - "3.10" + - "3.11" + # uncomment the following if needed + #- "3.11-dev" # 3.11 development branch + #- "nightly" # nightly build +# command to install dependencies +install: + - "pip install -r requirements.txt" + - "pip install -r test-requirements.txt" +# command to run tests +script: pytest --cov=polaris.catalog diff --git a/regtests/client/python/README.md b/regtests/client/python/README.md new file mode 100644 index 0000000000..ecd80b31d3 --- /dev/null +++ b/regtests/client/python/README.md @@ -0,0 +1,281 @@ + +# polaris.catalog +Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + +This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project: + +- API version: 0.0.1 +- Package version: 1.0.0 +- Generator version: 7.8.0-SNAPSHOT +- Build package: org.openapitools.codegen.languages.PythonClientCodegen + +## Requirements. 
+ +Python 3.7+ + +## Installation & Usage +### pip install + +If the python package is hosted on a repository, you can install directly using: + +```sh +pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git +``` +(you may need to run `pip` with root permission: `sudo pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git`) + +Then import the package: +```python +import polaris.catalog +``` + +### Setuptools + +Install via [Setuptools](http://pypi.python.org/pypi/setuptools). + +```sh +python setup.py install --user +``` +(or `sudo python setup.py install` to install the package for all users) + +Then import the package: +```python +import polaris.catalog +``` + +### Tests + +Execute `pytest` to run the tests. + +## Getting Started + +Please follow the [installation procedure](#installation--usage) and then run the following: + +```python + +import polaris.catalog +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + commit_transaction_request = polaris.catalog.CommitTransactionRequest() # CommitTransactionRequest | Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id.
+ + try: + # Commit updates to multiple tables in an atomic operation + api_instance.commit_transaction(prefix, commit_transaction_request) + except ApiException as e: + print("Exception when calling IcebergCatalogAPI->commit_transaction: %s\n" % e) + +``` + +## Documentation for API Endpoints + +All URIs are relative to *https://localhost* + +Class | Method | HTTP request | Description +------------ | ------------- | ------------- | ------------- +*IcebergCatalogAPI* | [**commit_transaction**](docs/IcebergCatalogAPI.md#commit_transaction) | **POST** /v1/{prefix}/transactions/commit | Commit updates to multiple tables in an atomic operation +*IcebergCatalogAPI* | [**create_namespace**](docs/IcebergCatalogAPI.md#create_namespace) | **POST** /v1/{prefix}/namespaces | Create a namespace +*IcebergCatalogAPI* | [**create_table**](docs/IcebergCatalogAPI.md#create_table) | **POST** /v1/{prefix}/namespaces/{namespace}/tables | Create a table in the given namespace +*IcebergCatalogAPI* | [**create_view**](docs/IcebergCatalogAPI.md#create_view) | **POST** /v1/{prefix}/namespaces/{namespace}/views | Create a view in the given namespace +*IcebergCatalogAPI* | [**drop_namespace**](docs/IcebergCatalogAPI.md#drop_namespace) | **DELETE** /v1/{prefix}/namespaces/{namespace} | Drop a namespace from the catalog. Namespace must be empty. 
+*IcebergCatalogAPI* | [**drop_table**](docs/IcebergCatalogAPI.md#drop_table) | **DELETE** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Drop a table from the catalog +*IcebergCatalogAPI* | [**drop_view**](docs/IcebergCatalogAPI.md#drop_view) | **DELETE** /v1/{prefix}/namespaces/{namespace}/views/{view} | Drop a view from the catalog +*IcebergCatalogAPI* | [**list_namespaces**](docs/IcebergCatalogAPI.md#list_namespaces) | **GET** /v1/{prefix}/namespaces | List namespaces, optionally providing a parent namespace to list underneath +*IcebergCatalogAPI* | [**list_tables**](docs/IcebergCatalogAPI.md#list_tables) | **GET** /v1/{prefix}/namespaces/{namespace}/tables | List all table identifiers underneath a given namespace +*IcebergCatalogAPI* | [**list_views**](docs/IcebergCatalogAPI.md#list_views) | **GET** /v1/{prefix}/namespaces/{namespace}/views | List all view identifiers underneath a given namespace +*IcebergCatalogAPI* | [**load_namespace_metadata**](docs/IcebergCatalogAPI.md#load_namespace_metadata) | **GET** /v1/{prefix}/namespaces/{namespace} | Load the metadata properties for a namespace +*IcebergCatalogAPI* | [**load_table**](docs/IcebergCatalogAPI.md#load_table) | **GET** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Load a table from the catalog +*IcebergCatalogAPI* | [**load_view**](docs/IcebergCatalogAPI.md#load_view) | **GET** /v1/{prefix}/namespaces/{namespace}/views/{view} | Load a view from the catalog +*IcebergCatalogAPI* | [**namespace_exists**](docs/IcebergCatalogAPI.md#namespace_exists) | **HEAD** /v1/{prefix}/namespaces/{namespace} | Check if a namespace exists +*IcebergCatalogAPI* | [**register_table**](docs/IcebergCatalogAPI.md#register_table) | **POST** /v1/{prefix}/namespaces/{namespace}/register | Register a table in the given namespace using given metadata file location +*IcebergCatalogAPI* | [**rename_table**](docs/IcebergCatalogAPI.md#rename_table) | **POST** /v1/{prefix}/tables/rename | Rename a table from its current name 
to a new name +*IcebergCatalogAPI* | [**rename_view**](docs/IcebergCatalogAPI.md#rename_view) | **POST** /v1/{prefix}/views/rename | Rename a view from its current name to a new name +*IcebergCatalogAPI* | [**replace_view**](docs/IcebergCatalogAPI.md#replace_view) | **POST** /v1/{prefix}/namespaces/{namespace}/views/{view} | Replace a view +*IcebergCatalogAPI* | [**report_metrics**](docs/IcebergCatalogAPI.md#report_metrics) | **POST** /v1/{prefix}/namespaces/{namespace}/tables/{table}/metrics | Send a metrics report to this endpoint to be processed by the backend +*IcebergCatalogAPI* | [**send_notification**](docs/IcebergCatalogAPI.md#send_notification) | **POST** /v1/{prefix}/namespaces/{namespace}/tables/{table}/notifications | Sends a notification to the table +*IcebergCatalogAPI* | [**table_exists**](docs/IcebergCatalogAPI.md#table_exists) | **HEAD** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Check if a table exists +*IcebergCatalogAPI* | [**update_properties**](docs/IcebergCatalogAPI.md#update_properties) | **POST** /v1/{prefix}/namespaces/{namespace}/properties | Set or remove properties on a namespace +*IcebergCatalogAPI* | [**update_table**](docs/IcebergCatalogAPI.md#update_table) | **POST** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Commit updates to a table +*IcebergCatalogAPI* | [**view_exists**](docs/IcebergCatalogAPI.md#view_exists) | **HEAD** /v1/{prefix}/namespaces/{namespace}/views/{view} | Check if a view exists +*IcebergConfigurationAPI* | [**get_config**](docs/IcebergConfigurationAPI.md#get_config) | **GET** /v1/config | List all catalog configuration settings +*IcebergOAuth2API* | [**get_token**](docs/IcebergOAuth2API.md#get_token) | **POST** /v1/oauth/tokens | Get a token using an OAuth2 flow + + +## Documentation For Models + + - [AddPartitionSpecUpdate](docs/AddPartitionSpecUpdate.md) + - [AddSchemaUpdate](docs/AddSchemaUpdate.md) + - [AddSnapshotUpdate](docs/AddSnapshotUpdate.md) + - 
[AddSortOrderUpdate](docs/AddSortOrderUpdate.md) + - [AddViewVersionUpdate](docs/AddViewVersionUpdate.md) + - [AndOrExpression](docs/AndOrExpression.md) + - [AssertCreate](docs/AssertCreate.md) + - [AssertCurrentSchemaId](docs/AssertCurrentSchemaId.md) + - [AssertDefaultSortOrderId](docs/AssertDefaultSortOrderId.md) + - [AssertDefaultSpecId](docs/AssertDefaultSpecId.md) + - [AssertLastAssignedFieldId](docs/AssertLastAssignedFieldId.md) + - [AssertLastAssignedPartitionId](docs/AssertLastAssignedPartitionId.md) + - [AssertRefSnapshotId](docs/AssertRefSnapshotId.md) + - [AssertTableUUID](docs/AssertTableUUID.md) + - [AssertViewUUID](docs/AssertViewUUID.md) + - [AssignUUIDUpdate](docs/AssignUUIDUpdate.md) + - [BaseUpdate](docs/BaseUpdate.md) + - [BlobMetadata](docs/BlobMetadata.md) + - [CatalogConfig](docs/CatalogConfig.md) + - [CommitReport](docs/CommitReport.md) + - [CommitTableRequest](docs/CommitTableRequest.md) + - [CommitTableResponse](docs/CommitTableResponse.md) + - [CommitTransactionRequest](docs/CommitTransactionRequest.md) + - [CommitViewRequest](docs/CommitViewRequest.md) + - [ContentFile](docs/ContentFile.md) + - [CountMap](docs/CountMap.md) + - [CounterResult](docs/CounterResult.md) + - [CreateNamespaceRequest](docs/CreateNamespaceRequest.md) + - [CreateNamespaceResponse](docs/CreateNamespaceResponse.md) + - [CreateTableRequest](docs/CreateTableRequest.md) + - [CreateViewRequest](docs/CreateViewRequest.md) + - [DataFile](docs/DataFile.md) + - [EqualityDeleteFile](docs/EqualityDeleteFile.md) + - [ErrorModel](docs/ErrorModel.md) + - [Expression](docs/Expression.md) + - [FileFormat](docs/FileFormat.md) + - [GetNamespaceResponse](docs/GetNamespaceResponse.md) + - [IcebergErrorResponse](docs/IcebergErrorResponse.md) + - [ListNamespacesResponse](docs/ListNamespacesResponse.md) + - [ListTablesResponse](docs/ListTablesResponse.md) + - [ListType](docs/ListType.md) + - [LiteralExpression](docs/LiteralExpression.md) + - [LoadTableResult](docs/LoadTableResult.md) + - 
[LoadViewResult](docs/LoadViewResult.md) + - [MapType](docs/MapType.md) + - [MetadataLogInner](docs/MetadataLogInner.md) + - [MetricResult](docs/MetricResult.md) + - [ModelSchema](docs/ModelSchema.md) + - [NotExpression](docs/NotExpression.md) + - [NotificationRequest](docs/NotificationRequest.md) + - [NotificationType](docs/NotificationType.md) + - [NullOrder](docs/NullOrder.md) + - [OAuthError](docs/OAuthError.md) + - [OAuthTokenResponse](docs/OAuthTokenResponse.md) + - [PartitionField](docs/PartitionField.md) + - [PartitionSpec](docs/PartitionSpec.md) + - [PartitionStatisticsFile](docs/PartitionStatisticsFile.md) + - [PositionDeleteFile](docs/PositionDeleteFile.md) + - [PrimitiveTypeValue](docs/PrimitiveTypeValue.md) + - [RegisterTableRequest](docs/RegisterTableRequest.md) + - [RemovePartitionStatisticsUpdate](docs/RemovePartitionStatisticsUpdate.md) + - [RemovePropertiesUpdate](docs/RemovePropertiesUpdate.md) + - [RemoveSnapshotRefUpdate](docs/RemoveSnapshotRefUpdate.md) + - [RemoveSnapshotsUpdate](docs/RemoveSnapshotsUpdate.md) + - [RemoveStatisticsUpdate](docs/RemoveStatisticsUpdate.md) + - [RenameTableRequest](docs/RenameTableRequest.md) + - [ReportMetricsRequest](docs/ReportMetricsRequest.md) + - [SQLViewRepresentation](docs/SQLViewRepresentation.md) + - [ScanReport](docs/ScanReport.md) + - [SetCurrentSchemaUpdate](docs/SetCurrentSchemaUpdate.md) + - [SetCurrentViewVersionUpdate](docs/SetCurrentViewVersionUpdate.md) + - [SetDefaultSortOrderUpdate](docs/SetDefaultSortOrderUpdate.md) + - [SetDefaultSpecUpdate](docs/SetDefaultSpecUpdate.md) + - [SetExpression](docs/SetExpression.md) + - [SetLocationUpdate](docs/SetLocationUpdate.md) + - [SetPartitionStatisticsUpdate](docs/SetPartitionStatisticsUpdate.md) + - [SetPropertiesUpdate](docs/SetPropertiesUpdate.md) + - [SetSnapshotRefUpdate](docs/SetSnapshotRefUpdate.md) + - [SetStatisticsUpdate](docs/SetStatisticsUpdate.md) + - [Snapshot](docs/Snapshot.md) + - [SnapshotLogInner](docs/SnapshotLogInner.md) + - 
[SnapshotReference](docs/SnapshotReference.md) + - [SnapshotSummary](docs/SnapshotSummary.md) + - [SortDirection](docs/SortDirection.md) + - [SortField](docs/SortField.md) + - [SortOrder](docs/SortOrder.md) + - [StatisticsFile](docs/StatisticsFile.md) + - [StructField](docs/StructField.md) + - [StructType](docs/StructType.md) + - [TableIdentifier](docs/TableIdentifier.md) + - [TableMetadata](docs/TableMetadata.md) + - [TableRequirement](docs/TableRequirement.md) + - [TableUpdate](docs/TableUpdate.md) + - [TableUpdateNotification](docs/TableUpdateNotification.md) + - [Term](docs/Term.md) + - [TimerResult](docs/TimerResult.md) + - [TokenType](docs/TokenType.md) + - [TransformTerm](docs/TransformTerm.md) + - [Type](docs/Type.md) + - [UnaryExpression](docs/UnaryExpression.md) + - [UpdateNamespacePropertiesRequest](docs/UpdateNamespacePropertiesRequest.md) + - [UpdateNamespacePropertiesResponse](docs/UpdateNamespacePropertiesResponse.md) + - [UpgradeFormatVersionUpdate](docs/UpgradeFormatVersionUpdate.md) + - [ValueMap](docs/ValueMap.md) + - [ViewHistoryEntry](docs/ViewHistoryEntry.md) + - [ViewMetadata](docs/ViewMetadata.md) + - [ViewRepresentation](docs/ViewRepresentation.md) + - [ViewRequirement](docs/ViewRequirement.md) + - [ViewUpdate](docs/ViewUpdate.md) + - [ViewVersion](docs/ViewVersion.md) + + + +## Documentation For Authorization + + +Authentication schemes defined for the API: + +### OAuth2 + +- **Type**: OAuth +- **Flow**: application +- **Authorization URL**: +- **Scopes**: + - **catalog**: Allows interacting with the Config and Catalog APIs + + +### BearerAuth + +- **Type**: Bearer authentication + + +## Author + + + + diff --git a/regtests/client/python/cli/__init__.py b/regtests/client/python/cli/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/regtests/client/python/cli/command/__init__.py b/regtests/client/python/cli/command/__init__.py new file mode 100644 index 0000000000..099e3bf0d7 --- /dev/null +++ 
b/regtests/client/python/cli/command/__init__.py @@ -0,0 +1,122 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import argparse +from abc import ABC + +from cli.constants import Commands, Arguments +from cli.options.parser import Parser +from polaris.management import PolarisDefaultApi + + +class Command(ABC): + """ + An abstract base class for commands. Implementations are expected to override the methods `validate` and + `execute`. The static method `Command.from_options` can be used to parse an argparse Namespace into the appropriate + Command implementation if one exists. 
+ """ + + @staticmethod + def from_options(options: argparse.Namespace) -> 'Command': + + def options_get(key, f=lambda x: x): + return f(getattr(options, key)) if hasattr(options, key) else None + + properties = Parser.parse_properties(options_get(Arguments.PROPERTY)) + + command = None + if options.command == Commands.CATALOGS: + from cli.command.catalogs import CatalogsCommand + command = CatalogsCommand( + options_get(f'{Commands.CATALOGS}_subcommand'), + catalog_type=options_get(Arguments.TYPE), + remote_url=options_get(Arguments.REMOTE_URL), + default_base_location=options_get(Arguments.DEFAULT_BASE_LOCATION), + storage_type=options_get(Arguments.STORAGE_TYPE), + allowed_locations=options_get(Arguments.ALLOWED_LOCATION), + role_arn=options_get(Arguments.ROLE_ARN), + external_id=options_get(Arguments.EXTERNAL_ID), + user_arn=options_get(Arguments.USER_ARN), + tenant_id=options_get(Arguments.TENANT_ID), + multi_tenant_app_name=options_get(Arguments.MULTI_TENANT_APP_NAME), + consent_url=options_get(Arguments.CONSENT_URL), + service_account=options_get(Arguments.SERVICE_ACCOUNT), + catalog_name=options_get(Arguments.CATALOG), + properties={} if properties is None else properties + ) + elif options.command == Commands.PRINCIPALS: + from cli.command.principals import PrincipalsCommand + command = PrincipalsCommand( + options_get(f'{Commands.PRINCIPALS}_subcommand'), + type=options_get(Arguments.TYPE), + principal_name=options_get(Arguments.PRINCIPAL), + client_id=options_get(Arguments.CLIENT_ID), + principal_role=options_get(Arguments.PRINCIPAL_ROLE), + properties=properties + ) + elif options.command == Commands.PRINCIPAL_ROLES: + from cli.command.principal_roles import PrincipalRolesCommand + command = PrincipalRolesCommand( + options_get(f'{Commands.PRINCIPAL_ROLES}_subcommand'), + principal_role_name=options_get(Arguments.PRINCIPAL_ROLE), + principal_name=options_get(Arguments.PRINCIPAL), + catalog_name=options_get(Arguments.CATALOG), + 
catalog_role_name=options_get(Arguments.CATALOG_ROLE), + properties=properties + ) + elif options.command == Commands.CATALOG_ROLES: + from cli.command.catalog_roles import CatalogRolesCommand + command = CatalogRolesCommand( + options_get(f'{Commands.CATALOG_ROLES}_subcommand'), + catalog_name=options_get(Arguments.CATALOG), + catalog_role_name=options_get(Arguments.CATALOG_ROLE), + principal_role_name=options_get(Arguments.PRINCIPAL_ROLE), + properties=properties + ) + elif options.command == Commands.PRIVILEGES: + from cli.command.privileges import PrivilegesCommand + subcommand = options_get(f'{Commands.PRIVILEGES}_subcommand') + command = PrivilegesCommand( + subcommand, + action=options_get(f'{subcommand}_subcommand'), + catalog_name=options_get(Arguments.CATALOG), + catalog_role_name=options_get(Arguments.CATALOG_ROLE), + namespace=options_get(Arguments.NAMESPACE, lambda s: s.split('.')), + view=options_get(Arguments.VIEW), + table=options_get(Arguments.TABLE), + privilege=options_get(Arguments.PRIVILEGE), + cascade=options_get(Arguments.CASCADE) + ) + + if command is not None: + command.validate() + return command + else: + raise Exception("Please specify a command or run ./polaris --help to view the available commands") + + def execute(self, api: PolarisDefaultApi) -> None: + """ + Execute a given command and, where applicable, print the response as JSON. + """ + raise Exception("`execute` called on abstract `Command`") + + def validate(self) -> None: + """ + Used to validate a command. Should always be called before `execute`. The arg parser will catch many issues + with options, but this is used to apply additional constraints that the arg parser can't currently handle. + One example is that a catalog cannot be created with the `s3` storage type without a `--role-arn` option, but + one can be created without this flag if it's using the `gcs` storage type. 
+ """ + raise Exception("`validate` called on abstract `Command`") diff --git a/regtests/client/python/cli/command/catalog_roles.py b/regtests/client/python/cli/command/catalog_roles.py new file mode 100644 index 0000000000..56b64dd96d --- /dev/null +++ b/regtests/client/python/cli/command/catalog_roles.py @@ -0,0 +1,93 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from dataclasses import dataclass +from typing import Dict, Optional + +from pydantic import StrictStr + +from cli.command import Command +from cli.constants import Subcommands +from polaris.management import PolarisDefaultApi, CreateCatalogRoleRequest, CatalogRole, UpdateCatalogRoleRequest, \ + GrantCatalogRoleRequest + + +@dataclass +class CatalogRolesCommand(Command): + """ + A Command implementation to represent `polaris catalog-roles`. The instance attributes correspond to parameters + that can be provided to various subcommands, except `catalog_roles_subcommand` which represents the subcommand + itself. 
+ + Example commands: + * ./polaris catalog-roles create --catalog bronze_catalog cat_role + * ./polaris catalog-roles list --catalog bronze_catalog --principal-role data-analyst + * ./polaris catalog-roles grant --catalog bronze_catalog --principal-role data-engineer etl_role + """ + + catalog_roles_subcommand: str + catalog_name: str + catalog_role_name: str + principal_role_name: str + properties: Optional[Dict[str, StrictStr]] + + def validate(self): + if not self.catalog_name: + raise Exception("Missing required argument: --catalog") + if self.catalog_roles_subcommand in {Subcommands.GRANT, Subcommands.REVOKE}: + if not self.principal_role_name: + raise Exception("Missing required argument: --principal-role") + + def execute(self, api: PolarisDefaultApi) -> None: + if self.catalog_roles_subcommand == Subcommands.CREATE: + request = CreateCatalogRoleRequest( + catalog_role=CatalogRole( + name=self.catalog_role_name, + properties=self.properties + ) + ) + api.create_catalog_role(self.catalog_name, request) + elif self.catalog_roles_subcommand == Subcommands.DELETE: + api.delete_catalog_role(self.catalog_name, self.catalog_role_name) + elif self.catalog_roles_subcommand == Subcommands.GET: + print(api.get_catalog_role(self.catalog_name, self.catalog_role_name).to_json()) + elif self.catalog_roles_subcommand == Subcommands.LIST: + if self.principal_role_name: + for catalog_role in api.list_catalog_roles_for_principal_role( + self.principal_role_name, self.catalog_name).roles: + print(catalog_role.to_json()) + else: + for catalog_role in api.list_catalog_roles(self.catalog_name).roles: + print(catalog_role.to_json()) + elif self.catalog_roles_subcommand == Subcommands.UPDATE: + catalog_role = api.get_catalog_role(self.catalog_name, self.catalog_role_name) + request = UpdateCatalogRoleRequest( + current_entity_version=catalog_role.entity_version, + properties=self.properties + ) + api.update_catalog_role(self.catalog_name, self.catalog_role_name, request) + elif 
self.catalog_roles_subcommand == Subcommands.GRANT: + request = GrantCatalogRoleRequest( + catalog_role=CatalogRole( + name=self.catalog_role_name + ), + properties=self.properties + ) + api.assign_catalog_role_to_principal_role(self.principal_role_name, self.catalog_name, request) + elif self.catalog_roles_subcommand == Subcommands.REVOKE: + api.revoke_catalog_role_from_principal_role( + self.principal_role_name, self.catalog_name, self.catalog_role_name) + else: + raise Exception(f"{self.catalog_roles_subcommand} is not supported in the CLI") diff --git a/regtests/client/python/cli/command/catalogs.py b/regtests/client/python/cli/command/catalogs.py new file mode 100644 index 0000000000..e4b3410ef4 --- /dev/null +++ b/regtests/client/python/cli/command/catalogs.py @@ -0,0 +1,184 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from dataclasses import dataclass, field +from typing import Dict, Optional, List + +from pydantic import StrictStr + +from cli.command import Command +from cli.constants import StorageType, CatalogType, Subcommands +from polaris.management import PolarisDefaultApi, Catalog, CreateCatalogRequest, UpdateCatalogRequest, \ + StorageConfigInfo, ExternalCatalog, AwsStorageConfigInfo, AzureStorageConfigInfo, GcpStorageConfigInfo, \ + PolarisCatalog, CatalogProperties + + +@dataclass +class CatalogsCommand(Command): + """ + A Command implementation to represent `polaris catalogs`. 
The instance attributes correspond to parameters + that can be provided to various subcommands, except `catalogs_subcommand` which represents the subcommand + itself. + + Example commands: + * ./polaris catalogs create cat_name --storage-type s3 --default-base-location s3://bucket/path --role-arn ... + * ./polaris catalogs update cat_name --default-base-location s3://new-bucket/new-location + * ./polaris catalogs list + """ + + catalogs_subcommand: str + catalog_type: str + remote_url: str + default_base_location: str + storage_type: str + allowed_locations: List[str] + role_arn: str + external_id: str + user_arn: str + tenant_id: str + multi_tenant_app_name: str + consent_url: str + service_account: str + catalog_name: str + properties: Dict[str, StrictStr] + + def validate(self): + if self.catalogs_subcommand == Subcommands.CREATE: + if not self.storage_type: + raise Exception(f"Missing required argument:" + f" --storage-type") + if not self.default_base_location: + raise Exception(f"Missing required argument:" + f" --default-base-location") + if self.catalog_type == CatalogType.EXTERNAL.value: + if not self.remote_url: + raise Exception(f"Missing required argument for {CatalogType.EXTERNAL.value} catalog:" + f" --remote-url") + if self.catalogs_subcommand == Subcommands.UPDATE: + if self.allowed_locations: + if not self.storage_type: + raise Exception(f"Missing required argument when updating allowed locations for a catalog:" + f" --storage-type") + + if self.storage_type == StorageType.S3.value: + if not self.role_arn: + raise Exception("Missing required argument for storage type 's3': --role-arn") + if self._has_azure_storage_info() or self._has_gcs_storage_info(): + raise Exception("Storage type 's3' supports the storage configurations --role-arn, " + "--external-id, and --user-arn") + elif self.storage_type == StorageType.AZURE.value: + if not self.tenant_id: + raise Exception("Missing required argument for storage type 'azure': --tenant-id") + if 
self._has_aws_storage_info() or self._has_gcs_storage_info(): + raise Exception("Storage type 'azure' supports the storage configurations --tenant-id, " + "--multi-tenant-app-name, and --consent-url") + elif self.storage_type == StorageType.GCS.value: + if self._has_aws_storage_info() or self._has_azure_storage_info(): + raise Exception("Storage type 'gcs' supports the storage configuration: --service-account") + + def _has_aws_storage_info(self): + return self.role_arn or self.external_id or self.user_arn + + def _has_azure_storage_info(self): + return self.tenant_id or self.multi_tenant_app_name or self.consent_url + + def _has_gcs_storage_info(self): + return self.service_account + + def _build_storage_config_info(self): + config = None + if self.storage_type == StorageType.S3.value: + config = AwsStorageConfigInfo( + storage_type=self.storage_type.upper(), + allowed_locations=self.allowed_locations, + role_arn=self.role_arn, + external_id=self.external_id, + user_arn=self.user_arn + ) + elif self.storage_type == StorageType.AZURE.value: + config = AzureStorageConfigInfo( + storage_type=self.storage_type.upper(), + allowed_locations=self.allowed_locations, + tenant_id=self.tenant_id, + multi_tenant_app_name=self.multi_tenant_app_name, + consent_url=self.consent_url, + ) + elif self.storage_type == StorageType.GCS.value: + config = GcpStorageConfigInfo( + storage_type=self.storage_type.upper(), + allowed_locations=self.allowed_locations + ) + return config + + def execute(self, api: PolarisDefaultApi) -> None: + if self.catalogs_subcommand == Subcommands.CREATE: + config = self._build_storage_config_info() + if self.catalog_type == CatalogType.EXTERNAL.value: + request = CreateCatalogRequest( + catalog=ExternalCatalog( + type=self.catalog_type.upper(), + name=self.catalog_name, + storage_config_info=config, + remote_url=self.remote_url, + properties=CatalogProperties( + default_base_location=self.default_base_location, + 
additional_properties=self.properties + ) + ) + ) + else: + request = CreateCatalogRequest( + catalog=PolarisCatalog( + type=self.catalog_type.upper(), + name=self.catalog_name, + storage_config_info=config, + properties=CatalogProperties( + default_base_location=self.default_base_location, + additional_properties=self.properties + ) + ) + ) + api.create_catalog(request) + elif self.catalogs_subcommand == Subcommands.DELETE: + api.delete_catalog(self.catalog_name) + elif self.catalogs_subcommand == Subcommands.GET: + print(api.get_catalog(self.catalog_name).to_json()) + elif self.catalogs_subcommand == Subcommands.LIST: + for catalog in api.list_catalogs().catalogs: + print(catalog.to_json()) + elif self.catalogs_subcommand == Subcommands.UPDATE: + catalog = api.get_catalog(self.catalog_name) + default_base_location_properties = {} + if self.default_base_location: + default_base_location_properties = {'default-base-location': self.default_base_location} + catalog.properties = {**default_base_location_properties, **self.properties} + + request = UpdateCatalogRequest( + current_entity_version=catalog.entity_version, + catalog=catalog + ) + if (self.allowed_locations or self._has_aws_storage_info() or self._has_azure_storage_info() or + self._has_gcs_storage_info()): + request = UpdateCatalogRequest( + current_entity_version=catalog.entity_version, + catalog=catalog, + storage_config_info=self._build_storage_config_info() + ) + + api.update_catalog(self.catalog_name, request) + else: + raise Exception(f"{self.catalogs_subcommand} is not supported in the CLI") + diff --git a/regtests/client/python/cli/command/principal_roles.py b/regtests/client/python/cli/command/principal_roles.py new file mode 100644 index 0000000000..cfbb440714 --- /dev/null +++ b/regtests/client/python/cli/command/principal_roles.py @@ -0,0 +1,94 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from dataclasses import dataclass +from typing import Dict, Optional + +from pydantic import StrictStr + +from cli.command import Command +from cli.constants import Subcommands +from polaris.management import PolarisDefaultApi, CreatePrincipalRoleRequest, PrincipalRole, UpdatePrincipalRoleRequest, \ + GrantCatalogRoleRequest, CatalogRole, GrantPrincipalRoleRequest + + +@dataclass +class PrincipalRolesCommand(Command): + """ + A Command implementation to represent `polaris principal-roles`. The instance attributes correspond to parameters + that can be provided to various subcommands, except `principal_roles_subcommand` which represents the subcommand + itself. 
+ + Example commands: + * ./polaris principal-roles create user_role + * ./polaris principal-roles list --principal user + """ + + principal_roles_subcommand: str + principal_role_name: str + principal_name: str + catalog_name: str + catalog_role_name: str + properties: Optional[Dict[str, StrictStr]] + + def validate(self): + if self.principal_roles_subcommand == Subcommands.LIST: + if self.principal_name and self.catalog_role_name: + raise Exception('You may provide either --principal or --catalog-role, but not both') + if self.principal_roles_subcommand in {Subcommands.GRANT, Subcommands.REVOKE}: + if not self.principal_name: + raise Exception(f"Missing required argument for {self.principal_roles_subcommand}: --principal") + + def execute(self, api: PolarisDefaultApi) -> None: + if self.principal_roles_subcommand == Subcommands.CREATE: + request = CreatePrincipalRoleRequest( + principal_role=PrincipalRole( + name=self.principal_role_name, + properties=self.properties + ) + ) + api.create_principal_role(request) + elif self.principal_roles_subcommand == Subcommands.DELETE: + api.delete_principal_role(self.principal_role_name) + elif self.principal_roles_subcommand == Subcommands.GET: + print(api.get_principal_role(self.principal_role_name).to_json()) + elif self.principal_roles_subcommand == Subcommands.LIST: + if self.catalog_role_name: + for principal_role in api.list_principal_roles(self.catalog_role_name).roles: + print(principal_role.to_json()) + elif self.principal_name: + for principal_role in api.list_principal_roles_assigned(self.principal_name).roles: + print(principal_role.to_json()) + else: + for principal_role in api.list_principal_roles().roles: + print(principal_role.to_json()) + elif self.principal_roles_subcommand == Subcommands.UPDATE: + principal_role = api.get_principal_role(self.principal_role_name) + request = UpdatePrincipalRoleRequest( + current_entity_version=principal_role.entity_version, + properties=self.properties + ) + 
api.update_principal_role(self.principal_role_name, request) + elif self.principal_roles_subcommand == Subcommands.GRANT: + request = GrantPrincipalRoleRequest( + principal_role=PrincipalRole( + name=self.principal_role_name + ), + ) + api.assign_principal_role(self.principal_name, request) + elif self.principal_roles_subcommand == Subcommands.REVOKE: + api.revoke_principal_role(self.principal_name, self.principal_role_name) + else: + raise Exception(f"{self.principal_roles_subcommand} is not supported in the CLI") diff --git a/regtests/client/python/cli/command/principals.py b/regtests/client/python/cli/command/principals.py new file mode 100644 index 0000000000..f8174a38a2 --- /dev/null +++ b/regtests/client/python/cli/command/principals.py @@ -0,0 +1,82 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from dataclasses import dataclass +from typing import Dict, Optional + +from pydantic import StrictStr + +from cli.command import Command +from cli.constants import Subcommands +from polaris.management import PolarisDefaultApi, CreatePrincipalRequest, Principal, UpdatePrincipalRequest, \ + GrantPrincipalRoleRequest, PrincipalRole + + +@dataclass +class PrincipalsCommand(Command): + """ + A Command implementation to represent `polaris principals`. The instance attributes correspond to parameters + that can be provided to various subcommands, except `principals_subcommand` which represents the subcommand + itself. 
+ + Example commands: + * ./polaris principals create user + * ./polaris principals list + * ./polaris principals list --principal-role filter-to-this-role + """ + + principals_subcommand: str + type: str + principal_name: str + client_id: str + principal_role: str + properties: Optional[Dict[str, StrictStr]] + + def validate(self): + pass + + def execute(self, api: PolarisDefaultApi) -> None: + if self.principals_subcommand == Subcommands.CREATE: + request = CreatePrincipalRequest( + principal=Principal( + type=self.type.upper(), + name=self.principal_name, + client_id=self.client_id, + properties=self.properties + ) + ) + print(api.create_principal(request).credentials.to_json()) + elif self.principals_subcommand == Subcommands.DELETE: + api.delete_principal(self.principal_name) + elif self.principals_subcommand == Subcommands.GET: + print(api.get_principal(self.principal_name).to_json()) + elif self.principals_subcommand == Subcommands.LIST: + if self.principal_role: + for principal in api.list_assignee_principals_for_principal_role(self.principal_role).principals: + print(principal.to_json()) + else: + for principal in api.list_principals().principals: + print(principal.to_json()) + elif self.principals_subcommand == Subcommands.ROTATE_CREDENTIALS: + print(api.rotate_credentials(self.principal_name).to_json()) + elif self.principals_subcommand == Subcommands.UPDATE: + principal = api.get_principal(self.principal_name) + request = UpdatePrincipalRequest( + current_entity_version=principal.entity_version, + properties=self.properties + ) + api.update_principal(self.principal_name, request) + else: + raise Exception(f"{self.principals_subcommand} is not supported in the CLI") diff --git a/regtests/client/python/cli/command/privileges.py b/regtests/client/python/cli/command/privileges.py new file mode 100644 index 0000000000..92b876eef5 --- /dev/null +++ b/regtests/client/python/cli/command/privileges.py @@ -0,0 +1,122 @@ +# +# Copyright (c) 2024 Snowflake 
Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from dataclasses import dataclass +from typing import List + +from pydantic import StrictStr + +from cli.command import Command +from cli.constants import Subcommands, Actions +from polaris.management import PolarisDefaultApi, AddGrantRequest, NamespaceGrant, \ + RevokeGrantRequest, CatalogGrant, TableGrant, ViewGrant, CatalogPrivilege, NamespacePrivilege, TablePrivilege, \ + ViewPrivilege + + +@dataclass +class PrivilegesCommand(Command): + """ + A Command implementation to represent `polaris privileges`. Unlike other commands, `privileges` itself takes two + parameters -- catalog_name and catalog_role_name. The other instance attributes, besides `privileges_subcommand` and + `action`, represent parameters provided to either the `grant` or `revoke` action. 
+ + Example commands: + * ./polaris privileges --catalog c --catalog-role cr table grant --namespace n --table t PRIVILEGE_NAME + * ./polaris privileges --catalog c --catalog-role cr namespace revoke --namespace n PRIVILEGE_NAME + * ./polaris privileges --catalog c --catalog-role cr list + """ + + privileges_subcommand: str + action: str + catalog_name: str + catalog_role_name: str + namespace: List[StrictStr] + view: str + table: str + privilege: str + cascade: bool + + def validate(self): + if not self.catalog_name: + raise Exception('Missing required argument: --catalog') + if not self.catalog_role_name: + raise Exception('Missing required argument: --catalog-role') + + if (self.privileges_subcommand in {Subcommands.NAMESPACE, Subcommands.TABLE, Subcommands.VIEW} + and not self.namespace): + raise Exception('Missing required argument: --namespace') + + if self.action == Actions.GRANT and self.cascade: + raise Exception('Unrecognized argument for GRANT: --cascade') + + if self.privileges_subcommand == Subcommands.CATALOG: + if self.privilege not in {i.value for i in CatalogPrivilege}: + raise Exception(f'Invalid catalog privilege: {self.privilege}') + if self.privileges_subcommand == Subcommands.NAMESPACE: + if self.privilege not in {i.value for i in NamespacePrivilege}: + raise Exception(f'Invalid namespace privilege: {self.privilege}') + if self.privileges_subcommand == Subcommands.TABLE: + if self.privilege not in {i.value for i in TablePrivilege}: + raise Exception(f'Invalid table privilege: {self.privilege}') + if self.privileges_subcommand == Subcommands.VIEW: + if self.privilege not in {i.value for i in ViewPrivilege}: + raise Exception(f'Invalid view privilege: {self.privilege}') + + def execute(self, api: PolarisDefaultApi) -> None: + if self.privileges_subcommand == Subcommands.LIST: + for grant in api.list_grants_for_catalog_role(self.catalog_name, self.catalog_role_name).grants: + print(grant.to_json()) + else: + grant = None + if 
self.privileges_subcommand == Subcommands.CATALOG: + grant = CatalogGrant( + type=Subcommands.CATALOG, + privilege=CatalogPrivilege(self.privilege) + ) + elif self.privileges_subcommand == Subcommands.NAMESPACE: + grant = NamespaceGrant( + type=Subcommands.NAMESPACE, + namespace=self.namespace, + privilege=NamespacePrivilege(self.privilege) + ) + elif self.privileges_subcommand == Subcommands.TABLE: + grant = TableGrant( + type=Subcommands.TABLE, + namespace=self.namespace, + table_name=self.table, + privilege=TablePrivilege(self.privilege) + ) + elif self.privileges_subcommand == Subcommands.VIEW: + grant = ViewGrant( + type=Subcommands.VIEW, + namespace=self.namespace, + view_name=self.view, + privilege=ViewPrivilege(self.privilege) + ) + + if not grant: + raise Exception(f'{self.privileges_subcommand} is not supported in the CLI') + elif self.action == Actions.GRANT: + request = AddGrantRequest( + grant=grant + ) + api.add_grant_to_catalog_role(self.catalog_name, self.catalog_role_name, request) + elif self.action == Actions.REVOKE: + request = RevokeGrantRequest( + grant=grant + ) + api.revoke_grant_from_catalog_role(self.catalog_name, self.catalog_role_name, self.cascade, request) + else: + raise Exception(f'{self.action} is not supported in the CLI') diff --git a/regtests/client/python/cli/constants.py b/regtests/client/python/cli/constants.py new file mode 100644 index 0000000000..210debaacd --- /dev/null +++ b/regtests/client/python/cli/constants.py @@ -0,0 +1,200 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +from enum import Enum + + +class StorageType(Enum): + """ + Represents a Storage Type within the Polaris API -- `s3`, `azure`, or `gcs`. + """ + + S3 = 's3' + AZURE = 'azure' + GCS = 'gcs' + + +class CatalogType(Enum): + """ + Represents a Catalog Type within the Polaris API -- `internal` or `external` + """ + + INTERNAL = 'internal' + EXTERNAL = 'external' + + +class PrincipalType(Enum): + """ + Represents a Principal Type within the Polaris API -- currently only `service` + """ + + SERVICE = 'service' + + +class Commands: + """ + Represents the various commands available in the CLI + """ + + CATALOGS = 'catalogs' + PRINCIPALS = 'principals' + PRINCIPAL_ROLES = 'principal-roles' + CATALOG_ROLES = 'catalog-roles' + PRIVILEGES = 'privileges' + + +class Subcommands: + """ + Represents the various subcommands available in the CLI. This is a flattened view, and no one command supports + all these subcommands. + """ + + CREATE = 'create' + DELETE = 'delete' + GET = 'get' + LIST = 'list' + UPDATE = 'update' + ROTATE_CREDENTIALS = 'rotate-credentials' + CATALOG = 'catalog' + NAMESPACE = 'namespace' + TABLE = 'table' + VIEW = 'view' + GRANT = 'grant' + REVOKE = 'revoke' + + +class Actions: + """ + Represents actions available to different subcommands available in the CLI. Currently, only some subcommands of the + `privileges` command support actions. + """ + + GRANT = 'grant' + REVOKE = 'revoke' + + +class Arguments: + """ + Constants to represent different arguments used by various commands. This is a flattened view, and no one + subcommand supports all these arguments. These argument names map directly to the parameters that the CLI expects + and to the attribute names within the argparse Namespace generated by parsing. 
+ + These values should be snake_case, but they will get mapped to kebab-case in `Parser.parse` + """ + + TYPE = 'type' + REMOTE_URL = 'remote_url' + DEFAULT_BASE_LOCATION = 'default_base_location' + STORAGE_TYPE = 'storage_type' + ALLOWED_LOCATION = 'allowed_location' + ROLE_ARN = 'role_arn' + EXTERNAL_ID = 'external_id' + USER_ARN = 'user_arn' + TENANT_ID = 'tenant_id' + MULTI_TENANT_APP_NAME = 'multi_tenant_app_name' + CONSENT_URL = 'consent_url' + SERVICE_ACCOUNT = 'service_account' + CATALOG_ROLE = 'catalog_role' + CATALOG = 'catalog' + PRINCIPAL = 'principal' + CLIENT_ID = 'client_id' + PRINCIPAL_ROLE = 'principal_role' + PROPERTY = 'property' + PRIVILEGE = 'privilege' + NAMESPACE = 'namespace' + TABLE = 'table' + VIEW = 'view' + CASCADE = 'cascade' + + +class Hints: + """ + Constants used as hints by the various --help outputs. These are arranged within subclasses for readability, but + there is no strict mapping between these subclasses and commands. For example, the hint for the `--catalog` + parameter used by `catalog-roles create` and `catalog-roles delete` may be the same. + """ + + PROPERTY = ('A key/value pair such as: tag=value. Multiple can be provided by specifying this option' + ' more than once') + + class Catalogs: + GRANT = 'Grant a catalog role to a catalog' + REVOKE = 'Revoke a catalog role from a catalog' + + class Create: + TYPE = 'The type of catalog to create in [INTERNAL, EXTERNAL]. INTERNAL by default.' + REMOTE_URL = '(Only for external catalogs) The remote URL to use' + DEFAULT_BASE_LOCATION = '(Required for internal catalogs) Default base location of the catalog' + STORAGE_TYPE = '(Required for internal catalogs) The type of storage to use for the catalog' + ALLOWED_LOCATION = ('(For internal catalogs) An allowed location for files tracked by the catalog. 
' + 'Multiple locations can be provided by specifying this option more than once.') + + ROLE_ARN = '(Required for AWS) A role ARN to use when connecting to S3' + EXTERNAL_ID = '(Only for AWS) The external Id to use when connecting to S3' + USER_ARN = '(Only for AWS) A user ARN to use when connecting to S3' + + TENANT_ID = '(Required for Azure) A tenant ID to use when connecting to Azure Storage' + MULTI_TENANT_APP_NAME = '(Only for Azure) The app name to use when connecting to Azure Storage' + CONSENT_URL = '(Only for Azure) A consent URL granting permissions for the Azure Storage location' + + SERVICE_ACCOUNT = '(Only for GCP) The service account to use when connecting to GCS' + + class Principals: + class Create: + NAME = 'The principal name' + CLIENT_ID = 'The output-only OAuth clientId associated with this principal if applicable' + + class Revoke: + PRINCIPAL_ROLE = 'A principal role to revoke from this principal' + + class PrincipalRoles: + PRINCIPAL_ROLE = 'The name of a principal role' + LIST = 'List principal roles, optionally limited to those held by a given principal' + + GRANT = 'Grant a principal role to a principal' + REVOKE = 'Revoke a principal role from a principal' + + class Grant: + PRINCIPAL = 'A principal to grant this principal role to' + + class Revoke: + PRINCIPAL = 'A principal to revoke this principal role from' + + class List: + CATALOG_ROLE = ('The name of a catalog role. If provided, show only principal roles assigned to this' + ' catalog role.') + PRINCIPAL_NAME = ('The name of a principal. If provided, show only principal roles assigned to this' + ' principal.') + + class CatalogRoles: + CATALOG_NAME = 'The name of a catalog' + CATALOG_ROLE = 'The name of a catalog role' + LIST = 'List catalog roles within a catalog. Optionally, specify a principal role.' 
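
As the `Arguments` docstring in this file notes, the snake_case constants above are mapped to kebab-case flags when the parser is built (the rule lives in `Argument.get_flag_name` later in the diff). A minimal standalone sketch of that mapping; the helper name `to_flag_name` is illustrative, not part of the CLI:

```python
def to_flag_name(argument_name: str) -> str:
    """Map a snake_case Arguments constant to its kebab-case CLI flag.

    Mirrors Argument.get_flag_name: prepend '--' and replace underscores
    with hyphens. Illustrative helper only.
    """
    return '--' + argument_name.replace('_', '-')


print(to_flag_name('default_base_location'))  # --default-base-location
print(to_flag_name('multi_tenant_app_name'))  # --multi-tenant-app-name
```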
+ REVOKE_CATALOG_ROLE = 'Revoke a catalog role from a principal role' + GRANT_CATALOG_ROLE = 'Grant a catalog role to a principal role' + + class Create: + CATALOG_NAME = 'The name of an existing catalog' + + class Grant: + CATALOG_NAME = 'The name of a catalog' + CATALOG_ROLE = 'The name of a catalog role' + PRIVILEGE = 'The privilege to grant or revoke' + NAMESPACE = 'A period-delimited namespace' + TABLE = 'The name of a table' + VIEW = 'The name of a view' + ADD = 'Add a grant. Either this or --revoke must be specified except when the subcommand is `list`' + REVOKE = 'Revoke a grant. Either this or --add must be specified except when the subcommand is `list`' + CASCADE = 'When revoking privileges, additionally revoke privileges that depend on the specified privilege' diff --git a/regtests/client/python/cli/options/__init__.py b/regtests/client/python/cli/options/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/regtests/client/python/cli/options/option_tree.py b/regtests/client/python/cli/options/option_tree.py new file mode 100644 index 0000000000..5c3a1f3b97 --- /dev/null +++ b/regtests/client/python/cli/options/option_tree.py @@ -0,0 +1,204 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from dataclasses import dataclass, field +from typing import List + +from cli.constants import StorageType, CatalogType, PrincipalType, Hints, Commands, Arguments, Subcommands, Actions + + +@dataclass +class Argument: + """ + A data class for representing a single argument within the CLI, such as `--host`. + """ + + name: str + type: type + hint: str + choices: List[str] = None + lower: bool = False + allow_repeats: bool = False + default: object = None + flag_name = None + + def __post_init__(self): + if self.name.startswith('--'): + raise Exception(f'Argument name {self.name} starts with `--`: should this be a flag_name?') + + def get_flag_name(self): + return self.flag_name or ('--' + self.name.replace('_', '-')) + + +@dataclass +class Option: + """ + A data class that represents a subcommand within the CLI, such as `catalogs`. Each Option can have child Options, + a collection of Arguments, or both. + """ + + name: str + hint: str = None + input_name: str = None + args: List[Argument] = field(default_factory=list) + children: List['Option'] = field(default_factory=list) + + +class OptionTree: + """ + `OptionTree.get_tree()` returns the full set of Options supported by the CLI. This structure is used to simplify + configuration of the CLI and to generate a custom `--help` message including nested commands. 
+ """ + + _STORAGE_CONFIG_INFO = [ + Argument(Arguments.STORAGE_TYPE, str, Hints.Catalogs.Create.STORAGE_TYPE, lower=True, + choices=[st.value for st in StorageType]), + Argument(Arguments.ALLOWED_LOCATION, str, Hints.Catalogs.Create.ALLOWED_LOCATION, allow_repeats=True), + Argument(Arguments.ROLE_ARN, str, Hints.Catalogs.Create.ROLE_ARN), + Argument(Arguments.EXTERNAL_ID, str, Hints.Catalogs.Create.EXTERNAL_ID), + Argument(Arguments.USER_ARN, str, Hints.Catalogs.Create.USER_ARN), + Argument(Arguments.TENANT_ID, str, Hints.Catalogs.Create.TENANT_ID), + Argument(Arguments.MULTI_TENANT_APP_NAME, str, Hints.Catalogs.Create.MULTI_TENANT_APP_NAME), + Argument(Arguments.CONSENT_URL, str, Hints.Catalogs.Create.CONSENT_URL), + Argument(Arguments.SERVICE_ACCOUNT, str, Hints.Catalogs.Create.SERVICE_ACCOUNT), + ] + + @staticmethod + def get_tree() -> List[Option]: + return [ + Option(Commands.CATALOGS, 'manage catalogs', children=[ + Option(Subcommands.CREATE, args=[ + Argument(Arguments.TYPE, str, Hints.Catalogs.Create.TYPE, lower=True, + choices=[ct.value for ct in CatalogType], default=CatalogType.INTERNAL.value), + Argument(Arguments.REMOTE_URL, str, Hints.Catalogs.Create.REMOTE_URL), + Argument(Arguments.DEFAULT_BASE_LOCATION, str, Hints.Catalogs.Create.DEFAULT_BASE_LOCATION), + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True), + ] + OptionTree._STORAGE_CONFIG_INFO, input_name=Arguments.CATALOG), + Option(Subcommands.DELETE, input_name=Arguments.CATALOG), + Option(Subcommands.GET, input_name=Arguments.CATALOG), + Option(Subcommands.LIST, args=[ + Argument(Arguments.PRINCIPAL_ROLE, str, Hints.PrincipalRoles.PRINCIPAL_ROLE) + ]), + Option(Subcommands.UPDATE, args=[ + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True), + Argument(Arguments.DEFAULT_BASE_LOCATION, str, Hints.Catalogs.Create.DEFAULT_BASE_LOCATION), + ] + OptionTree._STORAGE_CONFIG_INFO, input_name=Arguments.CATALOG) + ]), + Option(Commands.PRINCIPALS, 'manage 
principals', children=[ + Option(Subcommands.CREATE, args=[ + Argument(Arguments.TYPE, str, Hints.Catalogs.Create.TYPE, lower=True, + choices=[pt.value for pt in PrincipalType], default=PrincipalType.SERVICE.value), + Argument(Arguments.CLIENT_ID, str, Hints.Principals.Create.CLIENT_ID), + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True) + ], input_name=Arguments.PRINCIPAL), + Option(Subcommands.DELETE, input_name=Arguments.PRINCIPAL), + Option(Subcommands.GET, input_name=Arguments.PRINCIPAL), + Option(Subcommands.LIST), + Option(Subcommands.ROTATE_CREDENTIALS, input_name=Arguments.PRINCIPAL), + Option(Subcommands.UPDATE, args=[ + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True) + ], input_name=Arguments.PRINCIPAL) + ]), + Option(Commands.PRINCIPAL_ROLES, 'manage principal roles', children=[ + Option(Subcommands.CREATE, args=[ + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True) + ], input_name=Arguments.PRINCIPAL_ROLE), + Option(Subcommands.DELETE, input_name=Arguments.PRINCIPAL_ROLE), + Option(Subcommands.GET, input_name=Arguments.PRINCIPAL_ROLE), + Option(Subcommands.LIST, hint=Hints.PrincipalRoles.LIST, args=[ + Argument(Arguments.CATALOG_ROLE, str, Hints.PrincipalRoles.List.CATALOG_ROLE), + Argument(Arguments.PRINCIPAL, str, Hints.PrincipalRoles.List.PRINCIPAL_NAME) + ]), + Option(Subcommands.UPDATE, args=[ + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True) + ], input_name=Arguments.PRINCIPAL_ROLE), + Option(Subcommands.GRANT, hint=Hints.PrincipalRoles.GRANT, args=[ + Argument(Arguments.PRINCIPAL, str, Hints.PrincipalRoles.Grant.PRINCIPAL) + ], input_name=Arguments.PRINCIPAL_ROLE), + Option(Subcommands.REVOKE, hint=Hints.PrincipalRoles.REVOKE, args=[ + Argument(Arguments.PRINCIPAL, str, Hints.PrincipalRoles.Revoke.PRINCIPAL) + ], input_name=Arguments.PRINCIPAL_ROLE) + ]), + Option(Commands.CATALOG_ROLES, 'manage catalog roles', children=[ + Option(Subcommands.CREATE, args=[ + 
Argument(Arguments.CATALOG, str, Hints.CatalogRoles.Create.CATALOG_NAME), + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True) + ], input_name=Arguments.CATALOG_ROLE), + Option(Subcommands.DELETE, args=[ + Argument(Arguments.CATALOG, str, Hints.CatalogRoles.Create.CATALOG_NAME), + ], input_name=Arguments.CATALOG_ROLE), + Option(Subcommands.GET, args=[ + Argument(Arguments.CATALOG, str, Hints.CatalogRoles.Create.CATALOG_NAME), + ], input_name=Arguments.CATALOG_ROLE), + Option(Subcommands.LIST, hint=Hints.CatalogRoles.LIST, args=[ + Argument(Arguments.PRINCIPAL_ROLE, str, Hints.PrincipalRoles.PRINCIPAL_ROLE) + ], input_name=Arguments.CATALOG), + Option(Subcommands.UPDATE, args=[ + Argument(Arguments.CATALOG, str, Hints.CatalogRoles.Create.CATALOG_NAME), + Argument(Arguments.PROPERTY, str, Hints.PROPERTY, allow_repeats=True) + ], input_name=Arguments.CATALOG_ROLE), + Option(Subcommands.GRANT, hint=Hints.CatalogRoles.GRANT_CATALOG_ROLE, args=[ + Argument(Arguments.CATALOG, str, Hints.CatalogRoles.CATALOG_NAME), + Argument(Arguments.PRINCIPAL_ROLE, str, Hints.PrincipalRoles.PRINCIPAL_ROLE) + ], input_name=Arguments.CATALOG_ROLE), + Option(Subcommands.REVOKE, hint=Hints.CatalogRoles.REVOKE_CATALOG_ROLE, args=[ + Argument(Arguments.CATALOG, str, Hints.CatalogRoles.CATALOG_NAME), + Argument(Arguments.PRINCIPAL_ROLE, str, Hints.PrincipalRoles.PRINCIPAL_ROLE) + ], input_name=Arguments.CATALOG_ROLE) + ]), + Option(Commands.PRIVILEGES, 'manage privileges for a catalog role', args=[ + Argument(Arguments.CATALOG, str, Hints.CatalogRoles.Create.CATALOG_NAME), + Argument(Arguments.CATALOG_ROLE, str, Hints.CatalogRoles.CATALOG_ROLE) + ], children=[ + Option(Subcommands.LIST), + Option(Subcommands.CATALOG, children=[ + Option(Actions.GRANT, input_name=Arguments.PRIVILEGE), + Option(Actions.REVOKE, args=[ + Argument(Arguments.CASCADE, bool, Hints.Grant.CASCADE) + ], input_name=Arguments.PRIVILEGE), + ]), + Option(Subcommands.NAMESPACE, children=[ + Option(Actions.GRANT, 
args=[ + Argument(Arguments.NAMESPACE, str, Hints.Grant.NAMESPACE) + ], input_name=Arguments.PRIVILEGE), + Option(Actions.REVOKE, args=[ + Argument(Arguments.NAMESPACE, str, Hints.Grant.NAMESPACE), + Argument(Arguments.CASCADE, bool, Hints.Grant.CASCADE) + ], input_name=Arguments.PRIVILEGE), + ]), + Option(Subcommands.TABLE, children=[ + Option(Actions.GRANT, args=[ + Argument(Arguments.NAMESPACE, str, Hints.Grant.NAMESPACE), + Argument(Arguments.TABLE, str, Hints.Grant.TABLE) + ], input_name=Arguments.PRIVILEGE), + Option(Actions.REVOKE, args=[ + Argument(Arguments.NAMESPACE, str, Hints.Grant.NAMESPACE), + Argument(Arguments.TABLE, str, Hints.Grant.TABLE), + Argument(Arguments.CASCADE, bool, Hints.Grant.CASCADE) + ], input_name=Arguments.PRIVILEGE), + ]), + Option(Subcommands.VIEW, children=[ + Option(Actions.GRANT, args=[ + Argument(Arguments.NAMESPACE, str, Hints.Grant.NAMESPACE), + Argument(Arguments.VIEW, str, Hints.Grant.VIEW) + ], input_name=Arguments.PRIVILEGE), + Option(Actions.REVOKE, args=[ + Argument(Arguments.NAMESPACE, str, Hints.Grant.NAMESPACE), + Argument(Arguments.VIEW, str, Hints.Grant.VIEW), + Argument(Arguments.CASCADE, bool, Hints.Grant.CASCADE) + ], input_name=Arguments.PRIVILEGE), + ]) + ]) + ] diff --git a/regtests/client/python/cli/options/parser.py b/regtests/client/python/cli/options/parser.py new file mode 100644 index 0000000000..18a281bc93 --- /dev/null +++ b/regtests/client/python/cli/options/parser.py @@ -0,0 +1,193 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +import argparse +import sys +from typing import List, Optional, Dict + +from cli.options.option_tree import OptionTree, Option, Argument + + +class Parser(object): + """ + `Parser.parse()` is used to parse CLI input into an argparse.Namespace. The arguments expected by the parser are + defined by `OptionTree.getTree()` and by the arguments in `Parser._ROOT_ARGUMENTS`. This class is responsible for + translating the option tree into an ArgumentParser, for applying that ArgumentParser to the user input, and for + generating a custom help message based on the option tree. + """ + + """ + Generates an argparse parser based on the option tree. + """ + + _ROOT_ARGUMENTS = [ + Argument('host', str, hint='hostname', default='localhost'), + Argument('port', int, hint='port', default=8181), + Argument('client-id', str, hint='client ID for token-based authentication'), + Argument('client-secret', str, hint='client secret for token-based authentication'), + Argument('access-token', str, hint='access token for token-based authentication'), + ] + + @staticmethod + def _build_parser() -> argparse.ArgumentParser: + parser = TreeHelpParser(description='Polaris CLI') + + for arg in Parser._ROOT_ARGUMENTS: + if arg.default is not None: + parser.add_argument(arg.get_flag_name(), type=arg.type, help=arg.hint, default=arg.default) + else: + parser.add_argument(arg.get_flag_name(), type=arg.type, help=arg.hint) + + # Add everything from the option tree to the parser: + def add_arguments(parser, args: List[Argument]): + for arg in args: + kwargs = {'help': arg.hint, 'type': arg.type} + if arg.choices: + kwargs['choices'] = arg.choices + if arg.lower: + kwargs['type'] = kwargs['type'].lower + if arg.default: + kwargs['default'] = arg.default + + if arg.type == bool: + del kwargs['type'] + parser.add_argument(arg.get_flag_name(), **kwargs, action='store_true') + elif 
arg.allow_repeats: + parser.add_argument(arg.get_flag_name(), **kwargs, action='append') + else: + parser.add_argument(arg.get_flag_name(), **kwargs) + + def recurse_options(subparser, options: List[Option]): + for option in options: + option_parser = subparser.add_parser(option.name, help=option.hint or option.name) + add_arguments(option_parser, option.args) + if option.input_name: + option_parser.add_argument(option.input_name, type=str, + help=option.input_name.replace('_', ' '), default=None) + if option.children: + children_subparser = option_parser.add_subparsers(dest=f'{option.name}_subcommand', required=False) + recurse_options(children_subparser, option.children) + + subparser = parser.add_subparsers(dest='command', required=False) + recurse_options(subparser, OptionTree.get_tree()) + return parser + + @staticmethod + def parse(input: Optional[List[str]] = None) -> argparse.Namespace: + parser = Parser._build_parser() + return parser.parse_args(input) + + @staticmethod + def parse_properties(properties: List[str]) -> Optional[Dict[str, str]]: + if not properties: + return None + results = dict() + for property in properties: + if '=' not in property: + raise Exception(f'Could not parse property `{property}`') + key, value = property.split('=', 1) + if '=' in value or not value: + raise Exception(f'Could not parse property `{property}`') + if key in results: + raise Exception(f'Duplicate property key `{key}`') + results[key] = value + return results + + +class TreeHelpParser(argparse.ArgumentParser): + """ + Replaces the default help behavior with a more readable message. 
+ """ + + INDENT = ' ' * 2 + + def parse_args(self, args=None, namespace=None): + if args is None: + args = sys.argv[1:] + help_index = min([float('inf')] + [args.index(x) for x in ['-h', '--help'] if x in args]) + if help_index < float('inf'): + tree_str = self._get_tree_str(args[:help_index]) + if tree_str: + print(f'input: polaris {" ".join(args)}') + print(f'options:') + print(tree_str) + print('\n') + self.print_usage() + super().exit() + else: + return super().parse_args(args, namespace) + else: + return super().parse_args(args, namespace) + + def _get_tree_str(self, args: List[str]) -> Optional[str]: + command_path = self._get_command_path(args, OptionTree.get_tree()) + if len(command_path) == 0: + result = TreeHelpParser.INDENT + 'polaris' + for arg in Parser._ROOT_ARGUMENTS: + result += '\n' + (TreeHelpParser.INDENT * 2) + f"{arg.get_flag_name()} {arg.hint}" + for option in OptionTree.get_tree(): + result += '\n' + self._get_tree_for_option(option, indent=2) + return result + else: + option_node = self._get_option_node(command_path, OptionTree.get_tree()) + if option_node is None: + return None + else: + return self._get_tree_for_option(option_node) + + def _get_tree_for_option(self, option: Option, indent=1) -> str: + result = "" + result += (TreeHelpParser.INDENT * indent) + option.name + + for arg in option.args: + result += '\n' + (TreeHelpParser.INDENT * (indent + 1)) + f"{arg.get_flag_name()} {arg.hint}" + + if len(option.args) > 0 and len(option.children) > 0: + result += '\n' + + for child in option.children: + result += '\n' + self._get_tree_for_option(child, indent + 1) + + return result + + def _get_command_path(self, args: List[str], options: List[Option]) -> List[str]: + command_path = [] + parser = self + + while args: + arg = args.pop(0) + if arg in {o.name for o in options}: + command_path.append(arg) + try: + parser = parser._subparsers._group_actions[0].choices.get(arg) + if not parser: + break + except Exception as e: + break + options = 
list(filter(lambda o: o.name == arg, options))[0].children + if options is None: + break + return command_path + + def _get_option_node(self, command_path: List[str], nodes: List[Option]) -> Optional[Option]: + if len(command_path) > 0: + for node in nodes: + if node.name == command_path[0]: + if len(command_path) == 1: + return node + else: + return self._get_option_node(command_path[1:], node.children) + return None + diff --git a/regtests/client/python/cli/polaris_cli.py b/regtests/client/python/cli/polaris_cli.py new file mode 100644 index 0000000000..2b0d1ff1e0 --- /dev/null +++ b/regtests/client/python/cli/polaris_cli.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from cli.options.parser import Parser +from polaris.management import ApiClient, Configuration, ApiException +from polaris.management import PolarisDefaultApi + + +class PolarisCli: + """ + Implements a basic Command-Line Interface (CLI) for interacting with a Polaris service. The CLI can be used to + manage entities like catalogs, principals, and grants within Polaris and can perform most operations that are + available in the Python client API. 
+ + Example usage: + * ./polaris --client-id ${id} --client-secret ${secret} --host ${hostname} principals create example_user + * ./polaris --client-id ${id} --client-secret ${secret} --host ${hostname} principal-roles create example_role + * ./polaris --client-id ${id} --client-secret ${secret} --host ${hostname} catalog-roles list + """ + + @staticmethod + def execute(): + options = Parser.parse() + client_builder = PolarisCli._get_client_builder(options) + with client_builder() as api_client: + try: + from cli.command import Command + admin_api = PolarisDefaultApi(api_client) + command = Command.from_options(options) + command.execute(admin_api) + except ApiException as e: + import json + error = json.loads(e.body)['error'] + print(f'Exception when communicating with the Polaris server. {error["type"]}: {error["message"]}') + + @staticmethod + def _get_client_builder(options): + + # Validate + has_access_token = options.access_token is not None + has_client_secret = options.client_id is not None and options.client_secret is not None + if has_access_token and has_client_secret: + raise Exception("Please provide credentials via either --client-id / --client-secret or " + "--access-token, but not both") + + # Authenticate accordingly + polaris_catalog_url = f'http://{options.host}:{options.port}/api/management/v1' + if has_access_token: + return lambda: ApiClient( + Configuration(host=polaris_catalog_url, access_token=options.access_token), + ) + elif has_client_secret: + return lambda: ApiClient( + Configuration(host=polaris_catalog_url, username=options.client_id, password=options.client_secret), + ) + else: + raise Exception("Please provide credentials via --client-id & --client-secret or via --access-token") + + +if __name__ == '__main__': + PolarisCli.execute() diff --git a/regtests/client/python/docs/AddGrantRequest.md b/regtests/client/python/docs/AddGrantRequest.md new file mode 100644 index 0000000000..7a05da1eac --- /dev/null +++ 
b/regtests/client/python/docs/AddGrantRequest.md @@ -0,0 +1,45 @@ + +# AddGrantRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**grant** | [**GrantResource**](GrantResource.md) | | [optional] + +## Example + +```python +from polaris.management.models.add_grant_request import AddGrantRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of AddGrantRequest from a JSON string +add_grant_request_instance = AddGrantRequest.from_json(json) +# print the JSON string representation of the object +print(AddGrantRequest.to_json()) + +# convert the object into a dict +add_grant_request_dict = add_grant_request_instance.to_dict() +# create an instance of AddGrantRequest from a dict +add_grant_request_from_dict = AddGrantRequest.from_dict(add_grant_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AddPartitionSpecUpdate.md b/regtests/client/python/docs/AddPartitionSpecUpdate.md new file mode 100644 index 0000000000..1156779bc9 --- /dev/null +++ b/regtests/client/python/docs/AddPartitionSpecUpdate.md @@ -0,0 +1,46 @@ + +# AddPartitionSpecUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**spec** | [**PartitionSpec**](PartitionSpec.md) | | + +## Example + +```python +from polaris.catalog.models.add_partition_spec_update import AddPartitionSpecUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of AddPartitionSpecUpdate from a JSON string +add_partition_spec_update_instance = AddPartitionSpecUpdate.from_json(json) +# print the JSON string representation of the object +print(AddPartitionSpecUpdate.to_json()) + +# convert the object into a dict +add_partition_spec_update_dict = 
add_partition_spec_update_instance.to_dict() +# create an instance of AddPartitionSpecUpdate from a dict +add_partition_spec_update_from_dict = AddPartitionSpecUpdate.from_dict(add_partition_spec_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AddSchemaUpdate.md b/regtests/client/python/docs/AddSchemaUpdate.md new file mode 100644 index 0000000000..1c236a67f9 --- /dev/null +++ b/regtests/client/python/docs/AddSchemaUpdate.md @@ -0,0 +1,47 @@ + +# AddSchemaUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**var_schema** | [**ModelSchema**](ModelSchema.md) | | +**last_column_id** | **int** | The highest assigned column ID for the table. This is used to ensure columns are always assigned an unused ID when evolving schemas. When omitted, it will be computed on the server side. 
| [optional] + +## Example + +```python +from polaris.catalog.models.add_schema_update import AddSchemaUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of AddSchemaUpdate from a JSON string +add_schema_update_instance = AddSchemaUpdate.from_json(json) +# print the JSON string representation of the object +print(AddSchemaUpdate.to_json()) + +# convert the object into a dict +add_schema_update_dict = add_schema_update_instance.to_dict() +# create an instance of AddSchemaUpdate from a dict +add_schema_update_from_dict = AddSchemaUpdate.from_dict(add_schema_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AddSnapshotUpdate.md b/regtests/client/python/docs/AddSnapshotUpdate.md new file mode 100644 index 0000000000..dc23e76b58 --- /dev/null +++ b/regtests/client/python/docs/AddSnapshotUpdate.md @@ -0,0 +1,46 @@ + +# AddSnapshotUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**snapshot** | [**Snapshot**](Snapshot.md) | | + +## Example + +```python +from polaris.catalog.models.add_snapshot_update import AddSnapshotUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of AddSnapshotUpdate from a JSON string +add_snapshot_update_instance = AddSnapshotUpdate.from_json(json) +# print the JSON string representation of the object +print(AddSnapshotUpdate.to_json()) + +# convert the object into a dict +add_snapshot_update_dict = add_snapshot_update_instance.to_dict() +# create an instance of AddSnapshotUpdate from a dict +add_snapshot_update_from_dict = AddSnapshotUpdate.from_dict(add_snapshot_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to 
README]](../README.md) + + diff --git a/regtests/client/python/docs/AddSortOrderUpdate.md b/regtests/client/python/docs/AddSortOrderUpdate.md new file mode 100644 index 0000000000..39b2609420 --- /dev/null +++ b/regtests/client/python/docs/AddSortOrderUpdate.md @@ -0,0 +1,46 @@ + +# AddSortOrderUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**sort_order** | [**SortOrder**](SortOrder.md) | | + +## Example + +```python +from polaris.catalog.models.add_sort_order_update import AddSortOrderUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of AddSortOrderUpdate from a JSON string +add_sort_order_update_instance = AddSortOrderUpdate.from_json(json) +# print the JSON string representation of the object +print(AddSortOrderUpdate.to_json()) + +# convert the object into a dict +add_sort_order_update_dict = add_sort_order_update_instance.to_dict() +# create an instance of AddSortOrderUpdate from a dict +add_sort_order_update_from_dict = AddSortOrderUpdate.from_dict(add_sort_order_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AddViewVersionUpdate.md b/regtests/client/python/docs/AddViewVersionUpdate.md new file mode 100644 index 0000000000..f1219c3970 --- /dev/null +++ b/regtests/client/python/docs/AddViewVersionUpdate.md @@ -0,0 +1,46 @@ + +# AddViewVersionUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**view_version** | [**ViewVersion**](ViewVersion.md) | | + +## Example + +```python +from polaris.catalog.models.add_view_version_update import AddViewVersionUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of AddViewVersionUpdate from a 
JSON string +add_view_version_update_instance = AddViewVersionUpdate.from_json(json) +# print the JSON string representation of the object +print(AddViewVersionUpdate.to_json()) + +# convert the object into a dict +add_view_version_update_dict = add_view_version_update_instance.to_dict() +# create an instance of AddViewVersionUpdate from a dict +add_view_version_update_from_dict = AddViewVersionUpdate.from_dict(add_view_version_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AndOrExpression.md b/regtests/client/python/docs/AndOrExpression.md new file mode 100644 index 0000000000..4a5dc57fa2 --- /dev/null +++ b/regtests/client/python/docs/AndOrExpression.md @@ -0,0 +1,47 @@ + +# AndOrExpression + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**left** | [**Expression**](Expression.md) | | +**right** | [**Expression**](Expression.md) | | + +## Example + +```python +from polaris.catalog.models.and_or_expression import AndOrExpression + +# TODO update the JSON string below +json = "{}" +# create an instance of AndOrExpression from a JSON string +and_or_expression_instance = AndOrExpression.from_json(json) +# print the JSON string representation of the object +print(AndOrExpression.to_json()) + +# convert the object into a dict +and_or_expression_dict = and_or_expression_instance.to_dict() +# create an instance of AndOrExpression from a dict +and_or_expression_from_dict = AndOrExpression.from_dict(and_or_expression_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertCreate.md b/regtests/client/python/docs/AssertCreate.md new file mode 
100644 index 0000000000..c30e8d0f83 --- /dev/null +++ b/regtests/client/python/docs/AssertCreate.md @@ -0,0 +1,47 @@ + +# AssertCreate + +The table must not already exist; used for create transactions + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | + +## Example + +```python +from polaris.catalog.models.assert_create import AssertCreate + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertCreate from a JSON string +assert_create_instance = AssertCreate.from_json(json) +# print the JSON string representation of the object +print(AssertCreate.to_json()) + +# convert the object into a dict +assert_create_dict = assert_create_instance.to_dict() +# create an instance of AssertCreate from a dict +assert_create_from_dict = AssertCreate.from_dict(assert_create_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertCurrentSchemaId.md b/regtests/client/python/docs/AssertCurrentSchemaId.md new file mode 100644 index 0000000000..f99598b4f4 --- /dev/null +++ b/regtests/client/python/docs/AssertCurrentSchemaId.md @@ -0,0 +1,48 @@ + +# AssertCurrentSchemaId + +The table's current schema id must match the requirement's `current-schema-id` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**current_schema_id** | **int** | | + +## Example + +```python +from polaris.catalog.models.assert_current_schema_id import AssertCurrentSchemaId + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertCurrentSchemaId from a JSON string +assert_current_schema_id_instance = AssertCurrentSchemaId.from_json(json) +# print the JSON string representation of the object 
+print(AssertCurrentSchemaId.to_json()) + +# convert the object into a dict +assert_current_schema_id_dict = assert_current_schema_id_instance.to_dict() +# create an instance of AssertCurrentSchemaId from a dict +assert_current_schema_id_from_dict = AssertCurrentSchemaId.from_dict(assert_current_schema_id_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertDefaultSortOrderId.md b/regtests/client/python/docs/AssertDefaultSortOrderId.md new file mode 100644 index 0000000000..060a481592 --- /dev/null +++ b/regtests/client/python/docs/AssertDefaultSortOrderId.md @@ -0,0 +1,48 @@ + +# AssertDefaultSortOrderId + +The table's default sort order id must match the requirement's `default-sort-order-id` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**default_sort_order_id** | **int** | | + +## Example + +```python +from polaris.catalog.models.assert_default_sort_order_id import AssertDefaultSortOrderId + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertDefaultSortOrderId from a JSON string +assert_default_sort_order_id_instance = AssertDefaultSortOrderId.from_json(json) +# print the JSON string representation of the object +print(AssertDefaultSortOrderId.to_json()) + +# convert the object into a dict +assert_default_sort_order_id_dict = assert_default_sort_order_id_instance.to_dict() +# create an instance of AssertDefaultSortOrderId from a dict +assert_default_sort_order_id_from_dict = AssertDefaultSortOrderId.from_dict(assert_default_sort_order_id_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git 
a/regtests/client/python/docs/AssertDefaultSpecId.md b/regtests/client/python/docs/AssertDefaultSpecId.md new file mode 100644 index 0000000000..dca5b16730 --- /dev/null +++ b/regtests/client/python/docs/AssertDefaultSpecId.md @@ -0,0 +1,48 @@ + +# AssertDefaultSpecId + +The table's default spec id must match the requirement's `default-spec-id` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**default_spec_id** | **int** | | + +## Example + +```python +from polaris.catalog.models.assert_default_spec_id import AssertDefaultSpecId + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertDefaultSpecId from a JSON string +assert_default_spec_id_instance = AssertDefaultSpecId.from_json(json) +# print the JSON string representation of the object +print(AssertDefaultSpecId.to_json()) + +# convert the object into a dict +assert_default_spec_id_dict = assert_default_spec_id_instance.to_dict() +# create an instance of AssertDefaultSpecId from a dict +assert_default_spec_id_from_dict = AssertDefaultSpecId.from_dict(assert_default_spec_id_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertLastAssignedFieldId.md b/regtests/client/python/docs/AssertLastAssignedFieldId.md new file mode 100644 index 0000000000..7ebb658cd1 --- /dev/null +++ b/regtests/client/python/docs/AssertLastAssignedFieldId.md @@ -0,0 +1,48 @@ + +# AssertLastAssignedFieldId + +The table's last assigned column id must match the requirement's `last-assigned-field-id` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**last_assigned_field_id** | **int** | | + +## Example + +```python +from 
polaris.catalog.models.assert_last_assigned_field_id import AssertLastAssignedFieldId + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertLastAssignedFieldId from a JSON string +assert_last_assigned_field_id_instance = AssertLastAssignedFieldId.from_json(json) +# print the JSON string representation of the object +print(AssertLastAssignedFieldId.to_json()) + +# convert the object into a dict +assert_last_assigned_field_id_dict = assert_last_assigned_field_id_instance.to_dict() +# create an instance of AssertLastAssignedFieldId from a dict +assert_last_assigned_field_id_from_dict = AssertLastAssignedFieldId.from_dict(assert_last_assigned_field_id_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertLastAssignedPartitionId.md b/regtests/client/python/docs/AssertLastAssignedPartitionId.md new file mode 100644 index 0000000000..f9d2e32bc5 --- /dev/null +++ b/regtests/client/python/docs/AssertLastAssignedPartitionId.md @@ -0,0 +1,48 @@ + +# AssertLastAssignedPartitionId + +The table's last assigned partition id must match the requirement's `last-assigned-partition-id` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**last_assigned_partition_id** | **int** | | + +## Example + +```python +from polaris.catalog.models.assert_last_assigned_partition_id import AssertLastAssignedPartitionId + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertLastAssignedPartitionId from a JSON string +assert_last_assigned_partition_id_instance = AssertLastAssignedPartitionId.from_json(json) +# print the JSON string representation of the object +print(AssertLastAssignedPartitionId.to_json()) + +# convert the object into a dict +assert_last_assigned_partition_id_dict = 
assert_last_assigned_partition_id_instance.to_dict() +# create an instance of AssertLastAssignedPartitionId from a dict +assert_last_assigned_partition_id_from_dict = AssertLastAssignedPartitionId.from_dict(assert_last_assigned_partition_id_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertRefSnapshotId.md b/regtests/client/python/docs/AssertRefSnapshotId.md new file mode 100644 index 0000000000..80a141df55 --- /dev/null +++ b/regtests/client/python/docs/AssertRefSnapshotId.md @@ -0,0 +1,49 @@ + +# AssertRefSnapshotId + +The table branch or tag identified by the requirement's `ref` must reference the requirement's `snapshot-id`; if `snapshot-id` is `null` or missing, the ref must not already exist + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**ref** | **str** | | +**snapshot_id** | **int** | | + +## Example + +```python +from polaris.catalog.models.assert_ref_snapshot_id import AssertRefSnapshotId + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertRefSnapshotId from a JSON string +assert_ref_snapshot_id_instance = AssertRefSnapshotId.from_json(json) +# print the JSON string representation of the object +print(AssertRefSnapshotId.to_json()) + +# convert the object into a dict +assert_ref_snapshot_id_dict = assert_ref_snapshot_id_instance.to_dict() +# create an instance of AssertRefSnapshotId from a dict +assert_ref_snapshot_id_from_dict = AssertRefSnapshotId.from_dict(assert_ref_snapshot_id_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertTableUUID.md 
b/regtests/client/python/docs/AssertTableUUID.md new file mode 100644 index 0000000000..0aa31a4a89 --- /dev/null +++ b/regtests/client/python/docs/AssertTableUUID.md @@ -0,0 +1,48 @@ + +# AssertTableUUID + +The table UUID must match the requirement's `uuid` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**uuid** | **str** | | + +## Example + +```python +from polaris.catalog.models.assert_table_uuid import AssertTableUUID + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertTableUUID from a JSON string +assert_table_uuid_instance = AssertTableUUID.from_json(json) +# print the JSON string representation of the object +print(AssertTableUUID.to_json()) + +# convert the object into a dict +assert_table_uuid_dict = assert_table_uuid_instance.to_dict() +# create an instance of AssertTableUUID from a dict +assert_table_uuid_from_dict = AssertTableUUID.from_dict(assert_table_uuid_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssertViewUUID.md b/regtests/client/python/docs/AssertViewUUID.md new file mode 100644 index 0000000000..fc2858d83c --- /dev/null +++ b/regtests/client/python/docs/AssertViewUUID.md @@ -0,0 +1,48 @@ + +# AssertViewUUID + +The view UUID must match the requirement's `uuid` + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**uuid** | **str** | | + +## Example + +```python +from polaris.catalog.models.assert_view_uuid import AssertViewUUID + +# TODO update the JSON string below +json = "{}" +# create an instance of AssertViewUUID from a JSON string +assert_view_uuid_instance = AssertViewUUID.from_json(json) +# print the JSON string representation of the object 
+print(AssertViewUUID.to_json()) + +# convert the object into a dict +assert_view_uuid_dict = assert_view_uuid_instance.to_dict() +# create an instance of AssertViewUUID from a dict +assert_view_uuid_from_dict = AssertViewUUID.from_dict(assert_view_uuid_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AssignUUIDUpdate.md b/regtests/client/python/docs/AssignUUIDUpdate.md new file mode 100644 index 0000000000..00d847539b --- /dev/null +++ b/regtests/client/python/docs/AssignUUIDUpdate.md @@ -0,0 +1,48 @@ + +# AssignUUIDUpdate + +Assigning a UUID to a table/view should only be done when creating the table/view. It is not safe to re-assign the UUID if a table/view already has a UUID assigned + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**uuid** | **str** | | + +## Example + +```python +from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of AssignUUIDUpdate from a JSON string +assign_uuid_update_instance = AssignUUIDUpdate.from_json(json) +# print the JSON string representation of the object +print(AssignUUIDUpdate.to_json()) + +# convert the object into a dict +assign_uuid_update_dict = assign_uuid_update_instance.to_dict() +# create an instance of AssignUUIDUpdate from a dict +assign_uuid_update_from_dict = AssignUUIDUpdate.from_dict(assign_uuid_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AwsStorageConfigInfo.md b/regtests/client/python/docs/AwsStorageConfigInfo.md new file mode 100644 index 0000000000..5aab847180 --- /dev/null +++ 
b/regtests/client/python/docs/AwsStorageConfigInfo.md @@ -0,0 +1,49 @@ + +# AwsStorageConfigInfo + +aws storage configuration info + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**role_arn** | **str** | the aws role arn that grants privileges on the S3 buckets | +**external_id** | **str** | an optional external id used to establish a trust relationship with AWS in the trust policy | [optional] +**user_arn** | **str** | the aws user arn used to assume the aws role | [optional] + +## Example + +```python +from polaris.management.models.aws_storage_config_info import AwsStorageConfigInfo + +# TODO update the JSON string below +json = "{}" +# create an instance of AwsStorageConfigInfo from a JSON string +aws_storage_config_info_instance = AwsStorageConfigInfo.from_json(json) +# print the JSON string representation of the object +print(AwsStorageConfigInfo.to_json()) + +# convert the object into a dict +aws_storage_config_info_dict = aws_storage_config_info_instance.to_dict() +# create an instance of AwsStorageConfigInfo from a dict +aws_storage_config_info_from_dict = AwsStorageConfigInfo.from_dict(aws_storage_config_info_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/AzureStorageConfigInfo.md b/regtests/client/python/docs/AzureStorageConfigInfo.md new file mode 100644 index 0000000000..cdbd2cff10 --- /dev/null +++ b/regtests/client/python/docs/AzureStorageConfigInfo.md @@ -0,0 +1,49 @@ + +# AzureStorageConfigInfo + +azure storage configuration info + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**tenant_id** | **str** | the tenant id that the storage accounts belong to | +**multi_tenant_app_name** | **str** | the name of the azure client application | [optional] 
+**consent_url** | **str** | URL to the Azure permissions request page | [optional] + +## Example + +```python +from polaris.management.models.azure_storage_config_info import AzureStorageConfigInfo + +# TODO update the JSON string below +json = "{}" +# create an instance of AzureStorageConfigInfo from a JSON string +azure_storage_config_info_instance = AzureStorageConfigInfo.from_json(json) +# print the JSON string representation of the object +print(AzureStorageConfigInfo.to_json()) + +# convert the object into a dict +azure_storage_config_info_dict = azure_storage_config_info_instance.to_dict() +# create an instance of AzureStorageConfigInfo from a dict +azure_storage_config_info_from_dict = AzureStorageConfigInfo.from_dict(azure_storage_config_info_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/BaseUpdate.md b/regtests/client/python/docs/BaseUpdate.md new file mode 100644 index 0000000000..ff2cde9a30 --- /dev/null +++ b/regtests/client/python/docs/BaseUpdate.md @@ -0,0 +1,45 @@ + +# BaseUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | + +## Example + +```python +from polaris.catalog.models.base_update import BaseUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of BaseUpdate from a JSON string +base_update_instance = BaseUpdate.from_json(json) +# print the JSON string representation of the object +print(BaseUpdate.to_json()) + +# convert the object into a dict +base_update_dict = base_update_instance.to_dict() +# create an instance of BaseUpdate from a dict +base_update_from_dict = BaseUpdate.from_dict(base_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to 
README]](../README.md) + + diff --git a/regtests/client/python/docs/BlobMetadata.md b/regtests/client/python/docs/BlobMetadata.md new file mode 100644 index 0000000000..a8c4cbd12c --- /dev/null +++ b/regtests/client/python/docs/BlobMetadata.md @@ -0,0 +1,49 @@ + +# BlobMetadata + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**snapshot_id** | **int** | | +**sequence_number** | **int** | | +**fields** | **List[int]** | | +**properties** | **object** | | [optional] + +## Example + +```python +from polaris.catalog.models.blob_metadata import BlobMetadata + +# TODO update the JSON string below +json = "{}" +# create an instance of BlobMetadata from a JSON string +blob_metadata_instance = BlobMetadata.from_json(json) +# print the JSON string representation of the object +print(BlobMetadata.to_json()) + +# convert the object into a dict +blob_metadata_dict = blob_metadata_instance.to_dict() +# create an instance of BlobMetadata from a dict +blob_metadata_from_dict = BlobMetadata.from_dict(blob_metadata_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Catalog.md b/regtests/client/python/docs/Catalog.md new file mode 100644 index 0000000000..3f583ca427 --- /dev/null +++ b/regtests/client/python/docs/Catalog.md @@ -0,0 +1,53 @@ + +# Catalog + +A catalog object. A catalog may be internal or external. External catalogs are managed entirely by an external catalog interface. 
Third party catalogs may be other Iceberg REST implementations or other services with their own proprietary APIs + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | the type of catalog - internal or external | [default to 'INTERNAL'] +**name** | **str** | The name of the catalog | +**properties** | [**CatalogProperties**](CatalogProperties.md) | | +**create_timestamp** | **int** | The creation time represented as unix epoch timestamp in milliseconds | [optional] +**last_update_timestamp** | **int** | The last update time represented as unix epoch timestamp in milliseconds | [optional] +**entity_version** | **int** | The version of the catalog object used to determine if the catalog metadata has changed | [optional] +**storage_config_info** | [**StorageConfigInfo**](StorageConfigInfo.md) | | + +## Example + +```python +from polaris.management.models.catalog import Catalog + +# TODO update the JSON string below +json = "{}" +# create an instance of Catalog from a JSON string +catalog_instance = Catalog.from_json(json) +# print the JSON string representation of the object +print(Catalog.to_json()) + +# convert the object into a dict +catalog_dict = catalog_instance.to_dict() +# create an instance of Catalog from a dict +catalog_from_dict = Catalog.from_dict(catalog_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CatalogConfig.md b/regtests/client/python/docs/CatalogConfig.md new file mode 100644 index 0000000000..e9c99992cd --- /dev/null +++ b/regtests/client/python/docs/CatalogConfig.md @@ -0,0 +1,48 @@ + +# CatalogConfig + +Server-provided configuration for the catalog. 
+ +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**overrides** | **Dict[str, str]** | Properties that should be used to override client configuration; applied after defaults and client configuration. | +**defaults** | **Dict[str, str]** | Properties that should be used as default configuration; applied before client configuration. | + +## Example + +```python +from polaris.catalog.models.catalog_config import CatalogConfig + +# TODO update the JSON string below +json = "{}" +# create an instance of CatalogConfig from a JSON string +catalog_config_instance = CatalogConfig.from_json(json) +# print the JSON string representation of the object +print(CatalogConfig.to_json()) + +# convert the object into a dict +catalog_config_dict = catalog_config_instance.to_dict() +# create an instance of CatalogConfig from a dict +catalog_config_from_dict = CatalogConfig.from_dict(catalog_config_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CatalogGrant.md b/regtests/client/python/docs/CatalogGrant.md new file mode 100644 index 0000000000..1cdc65865f --- /dev/null +++ b/regtests/client/python/docs/CatalogGrant.md @@ -0,0 +1,45 @@ + +# CatalogGrant + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**privilege** | [**CatalogPrivilege**](CatalogPrivilege.md) | | + +## Example + +```python +from polaris.management.models.catalog_grant import CatalogGrant + +# TODO update the JSON string below +json = "{}" +# create an instance of CatalogGrant from a JSON string +catalog_grant_instance = CatalogGrant.from_json(json) +# print the JSON string representation of the object +print(CatalogGrant.to_json()) + +# convert the object into a dict +catalog_grant_dict = 
catalog_grant_instance.to_dict() +# create an instance of CatalogGrant from a dict +catalog_grant_from_dict = CatalogGrant.from_dict(catalog_grant_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CatalogPrivilege.md b/regtests/client/python/docs/CatalogPrivilege.md new file mode 100644 index 0000000000..e6088cc2b9 --- /dev/null +++ b/regtests/client/python/docs/CatalogPrivilege.md @@ -0,0 +1,74 @@ + +# CatalogPrivilege + +## Enum + +* `CATALOG_MANAGE_ACCESS` (value: `'CATALOG_MANAGE_ACCESS'`) + +* `CATALOG_MANAGE_CONTENT` (value: `'CATALOG_MANAGE_CONTENT'`) + +* `CATALOG_MANAGE_METADATA` (value: `'CATALOG_MANAGE_METADATA'`) + +* `CATALOG_READ_PROPERTIES` (value: `'CATALOG_READ_PROPERTIES'`) + +* `CATALOG_WRITE_PROPERTIES` (value: `'CATALOG_WRITE_PROPERTIES'`) + +* `NAMESPACE_CREATE` (value: `'NAMESPACE_CREATE'`) + +* `TABLE_CREATE` (value: `'TABLE_CREATE'`) + +* `VIEW_CREATE` (value: `'VIEW_CREATE'`) + +* `NAMESPACE_DROP` (value: `'NAMESPACE_DROP'`) + +* `TABLE_DROP` (value: `'TABLE_DROP'`) + +* `VIEW_DROP` (value: `'VIEW_DROP'`) + +* `NAMESPACE_LIST` (value: `'NAMESPACE_LIST'`) + +* `TABLE_LIST` (value: `'TABLE_LIST'`) + +* `VIEW_LIST` (value: `'VIEW_LIST'`) + +* `NAMESPACE_READ_PROPERTIES` (value: `'NAMESPACE_READ_PROPERTIES'`) + +* `TABLE_READ_PROPERTIES` (value: `'TABLE_READ_PROPERTIES'`) + +* `VIEW_READ_PROPERTIES` (value: `'VIEW_READ_PROPERTIES'`) + +* `NAMESPACE_WRITE_PROPERTIES` (value: `'NAMESPACE_WRITE_PROPERTIES'`) + +* `TABLE_WRITE_PROPERTIES` (value: `'TABLE_WRITE_PROPERTIES'`) + +* `VIEW_WRITE_PROPERTIES` (value: `'VIEW_WRITE_PROPERTIES'`) + +* `TABLE_READ_DATA` (value: `'TABLE_READ_DATA'`) + +* `TABLE_WRITE_DATA` (value: `'TABLE_WRITE_DATA'`) + +* `NAMESPACE_FULL_METADATA` (value: `'NAMESPACE_FULL_METADATA'`) + +* `TABLE_FULL_METADATA` (value: `'TABLE_FULL_METADATA'`) + +* 
`VIEW_FULL_METADATA` (value: `'VIEW_FULL_METADATA'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CatalogProperties.md b/regtests/client/python/docs/CatalogProperties.md new file mode 100644 index 0000000000..5a4fe0b3d4 --- /dev/null +++ b/regtests/client/python/docs/CatalogProperties.md @@ -0,0 +1,45 @@ + +# CatalogProperties + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**default_base_location** | **str** | | + +## Example + +```python +from polaris.management.models.catalog_properties import CatalogProperties + +# TODO update the JSON string below +json = "{}" +# create an instance of CatalogProperties from a JSON string +catalog_properties_instance = CatalogProperties.from_json(json) +# print the JSON string representation of the object +print(CatalogProperties.to_json()) + +# convert the object into a dict +catalog_properties_dict = catalog_properties_instance.to_dict() +# create an instance of CatalogProperties from a dict +catalog_properties_from_dict = CatalogProperties.from_dict(catalog_properties_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CatalogRole.md b/regtests/client/python/docs/CatalogRole.md new file mode 100644 index 0000000000..d29a63418a --- /dev/null +++ b/regtests/client/python/docs/CatalogRole.md @@ -0,0 +1,49 @@ + +# CatalogRole + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**name** | **str** | The name of the role | +**properties** | **Dict[str, str]** | | [optional] +**create_timestamp** | **int** | | [optional] +**last_update_timestamp** | **int** | | [optional] 
+**entity_version** | **int** | The version of the catalog role object used to determine if the catalog role metadata has changed | [optional] + +## Example + +```python +from polaris.management.models.catalog_role import CatalogRole + +# TODO update the JSON string below +json = "{}" +# create an instance of CatalogRole from a JSON string +catalog_role_instance = CatalogRole.from_json(json) +# print the JSON string representation of the object +print(CatalogRole.to_json()) + +# convert the object into a dict +catalog_role_dict = catalog_role_instance.to_dict() +# create an instance of CatalogRole from a dict +catalog_role_from_dict = CatalogRole.from_dict(catalog_role_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CatalogRoles.md b/regtests/client/python/docs/CatalogRoles.md new file mode 100644 index 0000000000..b4cb392191 --- /dev/null +++ b/regtests/client/python/docs/CatalogRoles.md @@ -0,0 +1,45 @@ + +# CatalogRoles + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**roles** | [**List[CatalogRole]**](CatalogRole.md) | The list of catalog roles | + +## Example + +```python +from polaris.management.models.catalog_roles import CatalogRoles + +# TODO update the JSON string below +json = "{}" +# create an instance of CatalogRoles from a JSON string +catalog_roles_instance = CatalogRoles.from_json(json) +# print the JSON string representation of the object +print(CatalogRoles.to_json()) + +# convert the object into a dict +catalog_roles_dict = catalog_roles_instance.to_dict() +# create an instance of CatalogRoles from a dict +catalog_roles_from_dict = CatalogRoles.from_dict(catalog_roles_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) 
[[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Catalogs.md b/regtests/client/python/docs/Catalogs.md new file mode 100644 index 0000000000..07cb83db1e --- /dev/null +++ b/regtests/client/python/docs/Catalogs.md @@ -0,0 +1,47 @@ + +# Catalogs + +A list of Catalog objects + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**catalogs** | [**List[Catalog]**](Catalog.md) | | + +## Example + +```python +from polaris.management.models.catalogs import Catalogs + +# TODO update the JSON string below +json = "{}" +# create an instance of Catalogs from a JSON string +catalogs_instance = Catalogs.from_json(json) +# print the JSON string representation of the object +print(Catalogs.to_json()) + +# convert the object into a dict +catalogs_dict = catalogs_instance.to_dict() +# create an instance of Catalogs from a dict +catalogs_from_dict = Catalogs.from_dict(catalogs_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CommitReport.md b/regtests/client/python/docs/CommitReport.md new file mode 100644 index 0000000000..c683f42121 --- /dev/null +++ b/regtests/client/python/docs/CommitReport.md @@ -0,0 +1,50 @@ + +# CommitReport + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**table_name** | **str** | | +**snapshot_id** | **int** | | +**sequence_number** | **int** | | +**operation** | **str** | | +**metrics** | [**Dict[str, MetricResult]**](MetricResult.md) | | +**metadata** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.commit_report import CommitReport + +# TODO update the JSON string below +json = "{}" +# create an instance of CommitReport from a JSON string +commit_report_instance = 
CommitReport.from_json(json) +# print the JSON string representation of the object +print(CommitReport.to_json()) + +# convert the object into a dict +commit_report_dict = commit_report_instance.to_dict() +# create an instance of CommitReport from a dict +commit_report_from_dict = CommitReport.from_dict(commit_report_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CommitTableRequest.md b/regtests/client/python/docs/CommitTableRequest.md new file mode 100644 index 0000000000..a00a388106 --- /dev/null +++ b/regtests/client/python/docs/CommitTableRequest.md @@ -0,0 +1,47 @@ + +# CommitTableRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**identifier** | [**TableIdentifier**](TableIdentifier.md) | | [optional] +**requirements** | [**List[TableRequirement]**](TableRequirement.md) | | +**updates** | [**List[TableUpdate]**](TableUpdate.md) | | + +## Example + +```python +from polaris.catalog.models.commit_table_request import CommitTableRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CommitTableRequest from a JSON string +commit_table_request_instance = CommitTableRequest.from_json(json) +# print the JSON string representation of the object +print(CommitTableRequest.to_json()) + +# convert the object into a dict +commit_table_request_dict = commit_table_request_instance.to_dict() +# create an instance of CommitTableRequest from a dict +commit_table_request_from_dict = CommitTableRequest.from_dict(commit_table_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CommitTableResponse.md 
b/regtests/client/python/docs/CommitTableResponse.md new file mode 100644 index 0000000000..6dd62fce11 --- /dev/null +++ b/regtests/client/python/docs/CommitTableResponse.md @@ -0,0 +1,46 @@ + +# CommitTableResponse + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**metadata_location** | **str** | | +**metadata** | [**TableMetadata**](TableMetadata.md) | | + +## Example + +```python +from polaris.catalog.models.commit_table_response import CommitTableResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of CommitTableResponse from a JSON string +commit_table_response_instance = CommitTableResponse.from_json(json) +# print the JSON string representation of the object +print(CommitTableResponse.to_json()) + +# convert the object into a dict +commit_table_response_dict = commit_table_response_instance.to_dict() +# create an instance of CommitTableResponse from a dict +commit_table_response_from_dict = CommitTableResponse.from_dict(commit_table_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CommitTransactionRequest.md b/regtests/client/python/docs/CommitTransactionRequest.md new file mode 100644 index 0000000000..5d3634322f --- /dev/null +++ b/regtests/client/python/docs/CommitTransactionRequest.md @@ -0,0 +1,45 @@ + +# CommitTransactionRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**table_changes** | [**List[CommitTableRequest]**](CommitTableRequest.md) | | + +## Example + +```python +from polaris.catalog.models.commit_transaction_request import CommitTransactionRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CommitTransactionRequest from a JSON string 
+commit_transaction_request_instance = CommitTransactionRequest.from_json(json) +# print the JSON string representation of the object +print(CommitTransactionRequest.to_json()) + +# convert the object into a dict +commit_transaction_request_dict = commit_transaction_request_instance.to_dict() +# create an instance of CommitTransactionRequest from a dict +commit_transaction_request_from_dict = CommitTransactionRequest.from_dict(commit_transaction_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CommitViewRequest.md b/regtests/client/python/docs/CommitViewRequest.md new file mode 100644 index 0000000000..4d40043d9a --- /dev/null +++ b/regtests/client/python/docs/CommitViewRequest.md @@ -0,0 +1,47 @@ + +# CommitViewRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**identifier** | [**TableIdentifier**](TableIdentifier.md) | | [optional] +**requirements** | [**List[ViewRequirement]**](ViewRequirement.md) | | [optional] +**updates** | [**List[ViewUpdate]**](ViewUpdate.md) | | + +## Example + +```python +from polaris.catalog.models.commit_view_request import CommitViewRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CommitViewRequest from a JSON string +commit_view_request_instance = CommitViewRequest.from_json(json) +# print the JSON string representation of the object +print(CommitViewRequest.to_json()) + +# convert the object into a dict +commit_view_request_dict = commit_view_request_instance.to_dict() +# create an instance of CommitViewRequest from a dict +commit_view_request_from_dict = CommitViewRequest.from_dict(commit_view_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to 
README]](../README.md) + + diff --git a/regtests/client/python/docs/ContentFile.md b/regtests/client/python/docs/ContentFile.md new file mode 100644 index 0000000000..c14af53b27 --- /dev/null +++ b/regtests/client/python/docs/ContentFile.md @@ -0,0 +1,54 @@ + +# ContentFile + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**content** | **str** | | +**file_path** | **str** | | +**file_format** | [**FileFormat**](FileFormat.md) | | +**spec_id** | **int** | | +**partition** | [**List[PrimitiveTypeValue]**](PrimitiveTypeValue.md) | A list of partition field values ordered based on the fields of the partition spec specified by the `spec-id` | [optional] +**file_size_in_bytes** | **int** | Total file size in bytes | +**record_count** | **int** | Number of records in the file | +**key_metadata** | **str** | Encryption key metadata blob | [optional] +**split_offsets** | **List[int]** | List of splittable offsets | [optional] +**sort_order_id** | **int** | | [optional] + +## Example + +```python +from polaris.catalog.models.content_file import ContentFile + +# TODO update the JSON string below +json = "{}" +# create an instance of ContentFile from a JSON string +content_file_instance = ContentFile.from_json(json) +# print the JSON string representation of the object +print(ContentFile.to_json()) + +# convert the object into a dict +content_file_dict = content_file_instance.to_dict() +# create an instance of ContentFile from a dict +content_file_from_dict = ContentFile.from_dict(content_file_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CountMap.md b/regtests/client/python/docs/CountMap.md new file mode 100644 index 0000000000..644095f4bd --- /dev/null +++ b/regtests/client/python/docs/CountMap.md @@ -0,0 +1,46 @@ + +# CountMap + +## 
Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**keys** | **List[int]** | List of integer column ids for each corresponding value | [optional] +**values** | **List[int]** | List of Long values, matched to 'keys' by index | [optional] + +## Example + +```python +from polaris.catalog.models.count_map import CountMap + +# TODO update the JSON string below +json = "{}" +# create an instance of CountMap from a JSON string +count_map_instance = CountMap.from_json(json) +# print the JSON string representation of the object +print(CountMap.to_json()) + +# convert the object into a dict +count_map_dict = count_map_instance.to_dict() +# create an instance of CountMap from a dict +count_map_from_dict = CountMap.from_dict(count_map_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CounterResult.md b/regtests/client/python/docs/CounterResult.md new file mode 100644 index 0000000000..cc9c364c71 --- /dev/null +++ b/regtests/client/python/docs/CounterResult.md @@ -0,0 +1,46 @@ + +# CounterResult + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**unit** | **str** | | +**value** | **int** | | + +## Example + +```python +from polaris.catalog.models.counter_result import CounterResult + +# TODO update the JSON string below +json = "{}" +# create an instance of CounterResult from a JSON string +counter_result_instance = CounterResult.from_json(json) +# print the JSON string representation of the object +print(CounterResult.to_json()) + +# convert the object into a dict +counter_result_dict = counter_result_instance.to_dict() +# create an instance of CounterResult from a dict +counter_result_from_dict = CounterResult.from_dict(counter_result_dict) +``` +[[Back to Model 
list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreateCatalogRequest.md b/regtests/client/python/docs/CreateCatalogRequest.md new file mode 100644 index 0000000000..aeb5bacaae --- /dev/null +++ b/regtests/client/python/docs/CreateCatalogRequest.md @@ -0,0 +1,47 @@ + +# CreateCatalogRequest + +Request to create a new catalog + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**catalog** | [**Catalog**](Catalog.md) | | + +## Example + +```python +from polaris.management.models.create_catalog_request import CreateCatalogRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreateCatalogRequest from a JSON string +create_catalog_request_instance = CreateCatalogRequest.from_json(json) +# print the JSON string representation of the object +print(CreateCatalogRequest.to_json()) + +# convert the object into a dict +create_catalog_request_dict = create_catalog_request_instance.to_dict() +# create an instance of CreateCatalogRequest from a dict +create_catalog_request_from_dict = CreateCatalogRequest.from_dict(create_catalog_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreateCatalogRoleRequest.md b/regtests/client/python/docs/CreateCatalogRoleRequest.md new file mode 100644 index 0000000000..36a77eaabd --- /dev/null +++ b/regtests/client/python/docs/CreateCatalogRoleRequest.md @@ -0,0 +1,45 @@ + +# CreateCatalogRoleRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**catalog_role** | [**CatalogRole**](CatalogRole.md) | | [optional] + +## Example + +```python +from 
polaris.management.models.create_catalog_role_request import CreateCatalogRoleRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreateCatalogRoleRequest from a JSON string +create_catalog_role_request_instance = CreateCatalogRoleRequest.from_json(json) +# print the JSON string representation of the object +print(CreateCatalogRoleRequest.to_json()) + +# convert the object into a dict +create_catalog_role_request_dict = create_catalog_role_request_instance.to_dict() +# create an instance of CreateCatalogRoleRequest from a dict +create_catalog_role_request_from_dict = CreateCatalogRoleRequest.from_dict(create_catalog_role_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreateNamespaceRequest.md b/regtests/client/python/docs/CreateNamespaceRequest.md new file mode 100644 index 0000000000..5bb68cd93e --- /dev/null +++ b/regtests/client/python/docs/CreateNamespaceRequest.md @@ -0,0 +1,46 @@ + +# CreateNamespaceRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | Reference to one or more levels of a namespace | +**properties** | **Dict[str, str]** | Configured string to string map of properties for the namespace | [optional] + +## Example + +```python +from polaris.catalog.models.create_namespace_request import CreateNamespaceRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreateNamespaceRequest from a JSON string +create_namespace_request_instance = CreateNamespaceRequest.from_json(json) +# print the JSON string representation of the object +print(CreateNamespaceRequest.to_json()) + +# convert the object into a dict +create_namespace_request_dict = create_namespace_request_instance.to_dict() +# create an instance of 
CreateNamespaceRequest from a dict +create_namespace_request_from_dict = CreateNamespaceRequest.from_dict(create_namespace_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreateNamespaceResponse.md b/regtests/client/python/docs/CreateNamespaceResponse.md new file mode 100644 index 0000000000..23de5ec427 --- /dev/null +++ b/regtests/client/python/docs/CreateNamespaceResponse.md @@ -0,0 +1,46 @@ + +# CreateNamespaceResponse + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | Reference to one or more levels of a namespace | +**properties** | **Dict[str, str]** | Properties stored on the namespace, if supported by the server. | [optional] + +## Example + +```python +from polaris.catalog.models.create_namespace_response import CreateNamespaceResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of CreateNamespaceResponse from a JSON string +create_namespace_response_instance = CreateNamespaceResponse.from_json(json) +# print the JSON string representation of the object +print(CreateNamespaceResponse.to_json()) + +# convert the object into a dict +create_namespace_response_dict = create_namespace_response_instance.to_dict() +# create an instance of CreateNamespaceResponse from a dict +create_namespace_response_from_dict = CreateNamespaceResponse.from_dict(create_namespace_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreatePrincipalRequest.md b/regtests/client/python/docs/CreatePrincipalRequest.md new file mode 100644 index 0000000000..5d4cb414d6 --- /dev/null +++ 
b/regtests/client/python/docs/CreatePrincipalRequest.md @@ -0,0 +1,46 @@ + +# CreatePrincipalRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**principal** | [**Principal**](Principal.md) | | [optional] +**credential_rotation_required** | **bool** | If true, the initial credentials can only be used to call rotateCredentials | [optional] + +## Example + +```python +from polaris.management.models.create_principal_request import CreatePrincipalRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreatePrincipalRequest from a JSON string +create_principal_request_instance = CreatePrincipalRequest.from_json(json) +# print the JSON string representation of the object +print(CreatePrincipalRequest.to_json()) + +# convert the object into a dict +create_principal_request_dict = create_principal_request_instance.to_dict() +# create an instance of CreatePrincipalRequest from a dict +create_principal_request_from_dict = CreatePrincipalRequest.from_dict(create_principal_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreatePrincipalRoleRequest.md b/regtests/client/python/docs/CreatePrincipalRoleRequest.md new file mode 100644 index 0000000000..5c7c574545 --- /dev/null +++ b/regtests/client/python/docs/CreatePrincipalRoleRequest.md @@ -0,0 +1,45 @@ + +# CreatePrincipalRoleRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**principal_role** | [**PrincipalRole**](PrincipalRole.md) | | [optional] + +## Example + +```python +from polaris.management.models.create_principal_role_request import CreatePrincipalRoleRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreatePrincipalRoleRequest from a 
JSON string +create_principal_role_request_instance = CreatePrincipalRoleRequest.from_json(json) +# print the JSON string representation of the object +print(CreatePrincipalRoleRequest.to_json()) + +# convert the object into a dict +create_principal_role_request_dict = create_principal_role_request_instance.to_dict() +# create an instance of CreatePrincipalRoleRequest from a dict +create_principal_role_request_from_dict = CreatePrincipalRoleRequest.from_dict(create_principal_role_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreateTableRequest.md b/regtests/client/python/docs/CreateTableRequest.md new file mode 100644 index 0000000000..afbef3db4a --- /dev/null +++ b/regtests/client/python/docs/CreateTableRequest.md @@ -0,0 +1,51 @@ + +# CreateTableRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**name** | **str** | | +**location** | **str** | | [optional] +**var_schema** | [**ModelSchema**](ModelSchema.md) | | +**partition_spec** | [**PartitionSpec**](PartitionSpec.md) | | [optional] +**write_order** | [**SortOrder**](SortOrder.md) | | [optional] +**stage_create** | **bool** | | [optional] +**properties** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.create_table_request import CreateTableRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreateTableRequest from a JSON string +create_table_request_instance = CreateTableRequest.from_json(json) +# print the JSON string representation of the object +print(CreateTableRequest.to_json()) + +# convert the object into a dict +create_table_request_dict = create_table_request_instance.to_dict() +# create an instance of CreateTableRequest from a dict +create_table_request_from_dict = 
CreateTableRequest.from_dict(create_table_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/CreateViewRequest.md b/regtests/client/python/docs/CreateViewRequest.md new file mode 100644 index 0000000000..58c617aa43 --- /dev/null +++ b/regtests/client/python/docs/CreateViewRequest.md @@ -0,0 +1,49 @@ + +# CreateViewRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**name** | **str** | | +**location** | **str** | | [optional] +**var_schema** | [**ModelSchema**](ModelSchema.md) | | +**view_version** | [**ViewVersion**](ViewVersion.md) | | +**properties** | **Dict[str, str]** | | + +## Example + +```python +from polaris.catalog.models.create_view_request import CreateViewRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of CreateViewRequest from a JSON string +create_view_request_instance = CreateViewRequest.from_json(json) +# print the JSON string representation of the object +print(CreateViewRequest.to_json()) + +# convert the object into a dict +create_view_request_dict = create_view_request_instance.to_dict() +# create an instance of CreateViewRequest from a dict +create_view_request_from_dict = CreateViewRequest.from_dict(create_view_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/DataFile.md b/regtests/client/python/docs/DataFile.md new file mode 100644 index 0000000000..9714f7648c --- /dev/null +++ b/regtests/client/python/docs/DataFile.md @@ -0,0 +1,51 @@ + +# DataFile + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**content** | **str** | 
| +**column_sizes** | [**CountMap**](CountMap.md) | Map of column id to total size on disk | [optional] +**value_counts** | [**CountMap**](CountMap.md) | Map of column id to total count, including null and NaN | [optional] +**null_value_counts** | [**CountMap**](CountMap.md) | Map of column id to null value count | [optional] +**nan_value_counts** | [**CountMap**](CountMap.md) | Map of column id to number of NaN values in the column | [optional] +**lower_bounds** | [**ValueMap**](ValueMap.md) | Map of column id to lower bound primitive type values | [optional] +**upper_bounds** | [**ValueMap**](ValueMap.md) | Map of column id to upper bound primitive type values | [optional] + +## Example + +```python +from polaris.catalog.models.data_file import DataFile + +# TODO update the JSON string below +json = "{}" +# create an instance of DataFile from a JSON string +data_file_instance = DataFile.from_json(json) +# print the JSON string representation of the object +print(DataFile.to_json()) + +# convert the object into a dict +data_file_dict = data_file_instance.to_dict() +# create an instance of DataFile from a dict +data_file_from_dict = DataFile.from_dict(data_file_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/EqualityDeleteFile.md b/regtests/client/python/docs/EqualityDeleteFile.md new file mode 100644 index 0000000000..84d01a3f4a --- /dev/null +++ b/regtests/client/python/docs/EqualityDeleteFile.md @@ -0,0 +1,46 @@ + +# EqualityDeleteFile + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**content** | **str** | | +**equality_ids** | **List[int]** | List of equality field IDs | [optional] + +## Example + +```python +from polaris.catalog.models.equality_delete_file import EqualityDeleteFile + +# TODO update the JSON string below +json 
= "{}" +# create an instance of EqualityDeleteFile from a JSON string +equality_delete_file_instance = EqualityDeleteFile.from_json(json) +# print the JSON string representation of the object +print(EqualityDeleteFile.to_json()) + +# convert the object into a dict +equality_delete_file_dict = equality_delete_file_instance.to_dict() +# create an instance of EqualityDeleteFile from a dict +equality_delete_file_from_dict = EqualityDeleteFile.from_dict(equality_delete_file_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ErrorModel.md b/regtests/client/python/docs/ErrorModel.md new file mode 100644 index 0000000000..cb55548281 --- /dev/null +++ b/regtests/client/python/docs/ErrorModel.md @@ -0,0 +1,50 @@ + +# ErrorModel + +JSON error payload returned in a response with further details on the error + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**message** | **str** | Human-readable error message | +**type** | **str** | Internal type definition of the error | +**code** | **int** | HTTP response code | +**stack** | **List[str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.error_model import ErrorModel + +# TODO update the JSON string below +json = "{}" +# create an instance of ErrorModel from a JSON string +error_model_instance = ErrorModel.from_json(json) +# print the JSON string representation of the object +print(ErrorModel.to_json()) + +# convert the object into a dict +error_model_dict = error_model_instance.to_dict() +# create an instance of ErrorModel from a dict +error_model_from_dict = ErrorModel.from_dict(error_model_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff 
--git a/regtests/client/python/docs/Expression.md b/regtests/client/python/docs/Expression.md new file mode 100644 index 0000000000..21d643a234 --- /dev/null +++ b/regtests/client/python/docs/Expression.md @@ -0,0 +1,51 @@ + +# Expression + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**left** | [**Expression**](Expression.md) | | +**right** | [**Expression**](Expression.md) | | +**child** | [**Expression**](Expression.md) | | +**term** | [**Term**](Term.md) | | +**values** | **List[object]** | | +**value** | **object** | | + +## Example + +```python +from polaris.catalog.models.expression import Expression + +# TODO update the JSON string below +json = "{}" +# create an instance of Expression from a JSON string +expression_instance = Expression.from_json(json) +# print the JSON string representation of the object +print(Expression.to_json()) + +# convert the object into a dict +expression_dict = expression_instance.to_dict() +# create an instance of Expression from a dict +expression_from_dict = Expression.from_dict(expression_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ExternalCatalog.md b/regtests/client/python/docs/ExternalCatalog.md new file mode 100644 index 0000000000..4475418394 --- /dev/null +++ b/regtests/client/python/docs/ExternalCatalog.md @@ -0,0 +1,47 @@ + +# ExternalCatalog + +An externally managed catalog + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**remote_url** | **str** | URL to the remote catalog API | [optional] + +## Example + +```python +from polaris.management.models.external_catalog import ExternalCatalog + +# TODO update the JSON string below +json = "{}" +# create an instance of ExternalCatalog from a 
JSON string +external_catalog_instance = ExternalCatalog.from_json(json) +# print the JSON string representation of the object +print(ExternalCatalog.to_json()) + +# convert the object into a dict +external_catalog_dict = external_catalog_instance.to_dict() +# create an instance of ExternalCatalog from a dict +external_catalog_from_dict = ExternalCatalog.from_dict(external_catalog_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/FileFormat.md b/regtests/client/python/docs/FileFormat.md new file mode 100644 index 0000000000..ede219a42f --- /dev/null +++ b/regtests/client/python/docs/FileFormat.md @@ -0,0 +1,30 @@ + +# FileFormat + +## Enum + +* `AVRO` (value: `'avro'`) + +* `ORC` (value: `'orc'`) + +* `PARQUET` (value: `'parquet'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/FileStorageConfigInfo.md b/regtests/client/python/docs/FileStorageConfigInfo.md new file mode 100644 index 0000000000..b14db26422 --- /dev/null +++ b/regtests/client/python/docs/FileStorageConfigInfo.md @@ -0,0 +1,46 @@ + +# FileStorageConfigInfo + +file storage configuration info + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- + +## Example + +```python +from polaris.management.models.file_storage_config_info import FileStorageConfigInfo + +# TODO update the JSON string below +json = "{}" +# create an instance of FileStorageConfigInfo from a JSON string +file_storage_config_info_instance = FileStorageConfigInfo.from_json(json) +# print the JSON string representation of the object +print(FileStorageConfigInfo.to_json()) + +# convert the object into a dict +file_storage_config_info_dict = 
file_storage_config_info_instance.to_dict() +# create an instance of FileStorageConfigInfo from a dict +file_storage_config_info_from_dict = FileStorageConfigInfo.from_dict(file_storage_config_info_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/GcpStorageConfigInfo.md b/regtests/client/python/docs/GcpStorageConfigInfo.md new file mode 100644 index 0000000000..f76f3843aa --- /dev/null +++ b/regtests/client/python/docs/GcpStorageConfigInfo.md @@ -0,0 +1,47 @@ + +# GcpStorageConfigInfo + +gcp storage configuration info + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**gcs_service_account** | **str** | a Google cloud storage service account | [optional] + +## Example + +```python +from polaris.management.models.gcp_storage_config_info import GcpStorageConfigInfo + +# TODO update the JSON string below +json = "{}" +# create an instance of GcpStorageConfigInfo from a JSON string +gcp_storage_config_info_instance = GcpStorageConfigInfo.from_json(json) +# print the JSON string representation of the object +print(GcpStorageConfigInfo.to_json()) + +# convert the object into a dict +gcp_storage_config_info_dict = gcp_storage_config_info_instance.to_dict() +# create an instance of GcpStorageConfigInfo from a dict +gcp_storage_config_info_from_dict = GcpStorageConfigInfo.from_dict(gcp_storage_config_info_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/GetNamespaceResponse.md b/regtests/client/python/docs/GetNamespaceResponse.md new file mode 100644 index 0000000000..494acd210d --- /dev/null +++ b/regtests/client/python/docs/GetNamespaceResponse.md @@ -0,0 +1,46 @@ + +# 
GetNamespaceResponse + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | Reference to one or more levels of a namespace | +**properties** | **Dict[str, str]** | Properties stored on the namespace, if supported by the server. If the server does not support namespace properties, it should return null for this field. If namespace properties are supported, but none are set, it should return an empty object. | [optional] + +## Example + +```python +from polaris.catalog.models.get_namespace_response import GetNamespaceResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of GetNamespaceResponse from a JSON string +get_namespace_response_instance = GetNamespaceResponse.from_json(json) +# print the JSON string representation of the object +print(GetNamespaceResponse.to_json()) + +# convert the object into a dict +get_namespace_response_dict = get_namespace_response_instance.to_dict() +# create an instance of GetNamespaceResponse from a dict +get_namespace_response_from_dict = GetNamespaceResponse.from_dict(get_namespace_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/GrantCatalogRoleRequest.md b/regtests/client/python/docs/GrantCatalogRoleRequest.md new file mode 100644 index 0000000000..3b872921ec --- /dev/null +++ b/regtests/client/python/docs/GrantCatalogRoleRequest.md @@ -0,0 +1,45 @@ + +# GrantCatalogRoleRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**catalog_role** | [**CatalogRole**](CatalogRole.md) | | [optional] + +## Example + +```python +from polaris.management.models.grant_catalog_role_request import GrantCatalogRoleRequest + +# TODO update the JSON string below +json = "{}" +# 
create an instance of GrantCatalogRoleRequest from a JSON string +grant_catalog_role_request_instance = GrantCatalogRoleRequest.from_json(json) +# print the JSON string representation of the object +print(grant_catalog_role_request_instance.to_json()) + +# convert the object into a dict +grant_catalog_role_request_dict = grant_catalog_role_request_instance.to_dict() +# create an instance of GrantCatalogRoleRequest from a dict +grant_catalog_role_request_from_dict = GrantCatalogRoleRequest.from_dict(grant_catalog_role_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/GrantPrincipalRoleRequest.md b/regtests/client/python/docs/GrantPrincipalRoleRequest.md new file mode 100644 index 0000000000..31661d5517 --- /dev/null +++ b/regtests/client/python/docs/GrantPrincipalRoleRequest.md @@ -0,0 +1,45 @@ + +# GrantPrincipalRoleRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**principal_role** | [**PrincipalRole**](PrincipalRole.md) | | [optional] + +## Example + +```python +from polaris.management.models.grant_principal_role_request import GrantPrincipalRoleRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of GrantPrincipalRoleRequest from a JSON string +grant_principal_role_request_instance = GrantPrincipalRoleRequest.from_json(json) +# print the JSON string representation of the object +print(grant_principal_role_request_instance.to_json()) + +# convert the object into a dict +grant_principal_role_request_dict = grant_principal_role_request_instance.to_dict() +# create an instance of GrantPrincipalRoleRequest from a dict +grant_principal_role_request_from_dict = GrantPrincipalRoleRequest.from_dict(grant_principal_role_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API 
list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/GrantResource.md b/regtests/client/python/docs/GrantResource.md new file mode 100644 index 0000000000..5f86dfb157 --- /dev/null +++ b/regtests/client/python/docs/GrantResource.md @@ -0,0 +1,45 @@ + +# GrantResource + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | + +## Example + +```python +from polaris.management.models.grant_resource import GrantResource + +# TODO update the JSON string below +json = "{}" +# create an instance of GrantResource from a JSON string +grant_resource_instance = GrantResource.from_json(json) +# print the JSON string representation of the object +print(grant_resource_instance.to_json()) + +# convert the object into a dict +grant_resource_dict = grant_resource_instance.to_dict() +# create an instance of GrantResource from a dict +grant_resource_from_dict = GrantResource.from_dict(grant_resource_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/GrantResources.md b/regtests/client/python/docs/GrantResources.md new file mode 100644 index 0000000000..5fe5397328 --- /dev/null +++ b/regtests/client/python/docs/GrantResources.md @@ -0,0 +1,45 @@ + +# GrantResources + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**grants** | [**List[GrantResource]**](GrantResource.md) | | + +## Example + +```python +from polaris.management.models.grant_resources import GrantResources + +# TODO update the JSON string below +json = "{}" +# create an instance of GrantResources from a JSON string +grant_resources_instance = GrantResources.from_json(json) +# print the JSON string representation of the object 
+print(grant_resources_instance.to_json()) + +# convert the object into a dict +grant_resources_dict = grant_resources_instance.to_dict() +# create an instance of GrantResources from a dict +grant_resources_from_dict = GrantResources.from_dict(grant_resources_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/IcebergCatalogAPI.md b/regtests/client/python/docs/IcebergCatalogAPI.md new file mode 100644 index 0000000000..6c50da3bd3 --- /dev/null +++ b/regtests/client/python/docs/IcebergCatalogAPI.md @@ -0,0 +1,2257 @@ + +# polaris.catalog.IcebergCatalogAPI + +All URIs are relative to *https://localhost* + +Method | HTTP request | Description +------------- | ------------- | ------------- +[**commit_transaction**](IcebergCatalogAPI.md#commit_transaction) | **POST** /v1/{prefix}/transactions/commit | Commit updates to multiple tables in an atomic operation +[**create_namespace**](IcebergCatalogAPI.md#create_namespace) | **POST** /v1/{prefix}/namespaces | Create a namespace +[**create_table**](IcebergCatalogAPI.md#create_table) | **POST** /v1/{prefix}/namespaces/{namespace}/tables | Create a table in the given namespace +[**create_view**](IcebergCatalogAPI.md#create_view) | **POST** /v1/{prefix}/namespaces/{namespace}/views | Create a view in the given namespace +[**drop_namespace**](IcebergCatalogAPI.md#drop_namespace) | **DELETE** /v1/{prefix}/namespaces/{namespace} | Drop a namespace from the catalog. Namespace must be empty. 
+[**drop_table**](IcebergCatalogAPI.md#drop_table) | **DELETE** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Drop a table from the catalog +[**drop_view**](IcebergCatalogAPI.md#drop_view) | **DELETE** /v1/{prefix}/namespaces/{namespace}/views/{view} | Drop a view from the catalog +[**list_namespaces**](IcebergCatalogAPI.md#list_namespaces) | **GET** /v1/{prefix}/namespaces | List namespaces, optionally providing a parent namespace to list underneath +[**list_tables**](IcebergCatalogAPI.md#list_tables) | **GET** /v1/{prefix}/namespaces/{namespace}/tables | List all table identifiers underneath a given namespace +[**list_views**](IcebergCatalogAPI.md#list_views) | **GET** /v1/{prefix}/namespaces/{namespace}/views | List all view identifiers underneath a given namespace +[**load_namespace_metadata**](IcebergCatalogAPI.md#load_namespace_metadata) | **GET** /v1/{prefix}/namespaces/{namespace} | Load the metadata properties for a namespace +[**load_table**](IcebergCatalogAPI.md#load_table) | **GET** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Load a table from the catalog +[**load_view**](IcebergCatalogAPI.md#load_view) | **GET** /v1/{prefix}/namespaces/{namespace}/views/{view} | Load a view from the catalog +[**namespace_exists**](IcebergCatalogAPI.md#namespace_exists) | **HEAD** /v1/{prefix}/namespaces/{namespace} | Check if a namespace exists +[**register_table**](IcebergCatalogAPI.md#register_table) | **POST** /v1/{prefix}/namespaces/{namespace}/register | Register a table in the given namespace using given metadata file location +[**rename_table**](IcebergCatalogAPI.md#rename_table) | **POST** /v1/{prefix}/tables/rename | Rename a table from its current name to a new name +[**rename_view**](IcebergCatalogAPI.md#rename_view) | **POST** /v1/{prefix}/views/rename | Rename a view from its current name to a new name +[**replace_view**](IcebergCatalogAPI.md#replace_view) | **POST** /v1/{prefix}/namespaces/{namespace}/views/{view} | Replace a view 
+[**report_metrics**](IcebergCatalogAPI.md#report_metrics) | **POST** /v1/{prefix}/namespaces/{namespace}/tables/{table}/metrics | Send a metrics report to this endpoint to be processed by the backend +[**send_notification**](IcebergCatalogAPI.md#send_notification) | **POST** /v1/{prefix}/namespaces/{namespace}/tables/{table}/notifications | Sends a notification to the table +[**table_exists**](IcebergCatalogAPI.md#table_exists) | **HEAD** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Check if a table exists +[**update_properties**](IcebergCatalogAPI.md#update_properties) | **POST** /v1/{prefix}/namespaces/{namespace}/properties | Set or remove properties on a namespace +[**update_table**](IcebergCatalogAPI.md#update_table) | **POST** /v1/{prefix}/namespaces/{namespace}/tables/{table} | Commit updates to a table +[**view_exists**](IcebergCatalogAPI.md#view_exists) | **HEAD** /v1/{prefix}/namespaces/{namespace}/views/{view} | Check if a view exists + + +# **commit_transaction** +> commit_transaction(prefix, commit_transaction_request) + +Commit updates to multiple tables in an atomic operation + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.models.commit_transaction_request import CommitTransactionRequest +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + commit_transaction_request = polaris.catalog.CommitTransactionRequest() # CommitTransactionRequest | Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. + + try: + # Commit updates to multiple tables in an atomic operation + api_instance.commit_transaction(prefix, commit_transaction_request) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->commit_transaction: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **commit_transaction_request** | [**CommitTransactionRequest**](CommitTransactionRequest.md)| Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. 
For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - NoSuchTableException, table to load does not exist | - | +**409** | Conflict - CommitFailedException, one or more requirements failed. The client may retry. | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**500** | An unknown server-side problem occurred; the commit state is unknown. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**502** | A gateway or proxy received an invalid response from the upstream server; the commit state is unknown. | - | +**504** | A server-side gateway timeout occurred; the commit state is unknown. 
| - | +**5XX** | A server-side problem that might not be addressable on the client. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_namespace** +> CreateNamespaceResponse create_namespace(prefix, create_namespace_request) + +Create a namespace + +Create a namespace, with an optional set of properties. The server might also add properties, such as `last_modified_time` etc. + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.models.create_namespace_request import CreateNamespaceRequest +from polaris.catalog.models.create_namespace_response import CreateNamespaceResponse +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + create_namespace_request = polaris.catalog.CreateNamespaceRequest() # CreateNamespaceRequest | + + try: + # Create a namespace + api_response = api_instance.create_namespace(prefix, create_namespace_request) + print("The response of IcebergCatalogAPI->create_namespace:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->create_namespace: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **create_namespace_request** | [**CreateNamespaceRequest**](CreateNamespaceRequest.md)| | + +### Return type + +[**CreateNamespaceResponse**](CreateNamespaceResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | Represents a successful call to create a namespace. Returns the namespace created, as well as any properties that were stored for the namespace, including those the server might have added. Implementations are not required to support namespace properties. | - | +**400** | Indicates a bad request error. 
It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**406** | Not Acceptable / Unsupported Operation. The server does not support this operation. | - | +**409** | Conflict - The namespace already exists | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_table** +> LoadTableResult create_table(prefix, namespace, create_table_request, x_iceberg_access_delegation=x_iceberg_access_delegation) + +Create a table in the given namespace + +Create a table or start a create transaction, like atomic CTAS. If `stage-create` is false, the table is created immediately. If `stage-create` is true, the table is not created, but table metadata is initialized and returned. The service should prepare as needed for a commit to the table commit endpoint to complete the create transaction. The client uses the returned metadata to begin a transaction. To commit the transaction, the client sends all create and subsequent changes to the table commit route. 
Changes from the table create operation include changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the initial table state. + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.models.create_table_request import CreateTableRequest +from polaris.catalog.models.load_table_result import LoadTableResult +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + create_table_request = polaris.catalog.CreateTableRequest() # CreateTableRequest | + x_iceberg_access_delegation = 'vended-credentials,remote-signing' # str | Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. 
Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. (optional) + + try: + # Create a table in the given namespace + api_response = api_instance.create_table(prefix, namespace, create_table_request, x_iceberg_access_delegation=x_iceberg_access_delegation) + print("The response of IcebergCatalogAPI->create_table:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->create_table: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **create_table_request** | [**CreateTableRequest**](CreateTableRequest.md)| | + **x_iceberg_access_delegation** | **str**| Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. 
| [optional] + +### Return type + +[**LoadTableResult**](LoadTableResult.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | Table metadata result after creating a table | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - The namespace specified does not exist | - | +**409** | Conflict - The table already exists | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_view** +> LoadViewResult create_view(prefix, namespace, create_view_request) + +Create a view in the given namespace + +Create a view in the given namespace. 
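The `namespace` path parameter used by this and the surrounding endpoints encodes a multipart namespace as a single string whose levels are joined by the `0x1F` unit separator byte, which must be percent-encoded as `%1F` in a request URL. A minimal sketch of that encoding in plain Python (the helper names here are illustrative, not part of the generated client):

```python
from urllib.parse import quote, unquote

UNIT_SEP = "\x1f"  # 0x1F unit separator used between namespace levels

def encode_namespace(parts):
    # Join multipart namespace levels with 0x1F, then percent-encode
    # so the separator appears as %1F in the request path.
    return quote(UNIT_SEP.join(parts), safe="")

def decode_namespace(encoded):
    # Reverse: percent-decode the path segment, then split on 0x1F.
    return unquote(encoded).split(UNIT_SEP)

print(encode_namespace(["accounting", "tax"]))  # accounting%1Ftax
```

A single-level namespace such as `["accounting"]` encodes to just `accounting`, which is why the examples below can pass `'accounting'` directly.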
+ +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.models.create_view_request import CreateViewRequest +from polaris.catalog.models.load_view_result import LoadViewResult +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
+ create_view_request = polaris.catalog.CreateViewRequest() # CreateViewRequest | + + try: + # Create a view in the given namespace + api_response = api_instance.create_view(prefix, namespace, create_view_request) + print("The response of IcebergCatalogAPI->create_view:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->create_view: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **create_view_request** | [**CreateViewRequest**](CreateViewRequest.md)| | + +### Return type + +[**LoadViewResult**](LoadViewResult.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | View metadata result when loading a view | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - The namespace specified does not exist | - | +**409** | Conflict - The view already exists | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. 
| - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **drop_namespace** +> drop_namespace(prefix, namespace) + +Drop a namespace from the catalog. Namespace must be empty. + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
+ + try: + # Drop a namespace from the catalog. Namespace must be empty. + api_instance.drop_namespace(prefix, namespace) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->drop_namespace: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - Namespace to delete does not exist. | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. 
| - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **drop_table** +> drop_table(prefix, namespace, table, purge_requested=purge_requested) + +Drop a table from the catalog + +Remove a table from the catalog + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
+ table = 'sales' # str | A table name + purge_requested = False # bool | Whether the user requested to purge the underlying table's data and metadata (optional) (default to False) + + try: + # Drop a table from the catalog + api_instance.drop_table(prefix, namespace, table, purge_requested=purge_requested) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->drop_table: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **table** | **str**| A table name | + **purge_requested** | **bool**| Whether the user requested to purge the underlying table's data and metadata | [optional] [default to False] + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - NoSuchTableException, Table to drop does not exist | - | +**419** | Credentials have timed out. 
If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **drop_view**
+> drop_view(prefix, namespace, view)
+
+Drop a view from the catalog
+
+Remove a view from the catalog
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string.
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + view = 'sales' # str | A view name + + try: + # Drop a view from the catalog + api_instance.drop_view(prefix, namespace, view) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->drop_view: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **view** | **str**| A view name | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - NoSuchViewException, view to drop does not exist | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. 
| - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **list_namespaces**
+> ListNamespacesResponse list_namespaces(prefix, page_token=page_token, page_size=page_size, parent=parent)
+
+List namespaces, optionally providing a parent namespace to list underneath
+
+List all namespaces at a certain level, optionally starting from a given parent namespace. If table accounting.tax.paid.info exists, using 'SELECT NAMESPACE IN accounting' would translate into `GET /namespaces?parent=accounting` and must return a namespace, ["accounting", "tax"] only. Using 'SELECT NAMESPACE IN accounting.tax' would translate into `GET /namespaces?parent=accounting%1Ftax` and must return a namespace, ["accounting", "tax", "paid"]. If `parent` is not provided, all top-level namespaces should be listed.
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.list_namespaces_response import ListNamespacesResponse
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + page_token = 'page_token_example' # str | (optional) + page_size = 56 # int | For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. (optional) + parent = 'accounting%1Ftax' # str | An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte. (optional) + + try: + # List namespaces, optionally providing a parent namespace to list underneath + api_response = api_instance.list_namespaces(prefix, page_token=page_token, page_size=page_size, parent=parent) + print("The response of IcebergCatalogAPI->list_namespaces:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->list_namespaces: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **page_token** | **str**| | [optional] + **page_size** | **int**| For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. 
| [optional] + **parent** | **str**| An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte. | [optional] + +### Return type + +[**ListNamespacesResponse**](ListNamespacesResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | A list of namespaces | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - Namespace provided in the `parent` query parameter is not found. | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. 
| - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **list_tables**
+> ListTablesResponse list_tables(prefix, namespace, page_token=page_token, page_size=page_size)
+
+List all table identifiers underneath a given namespace
+
+Return all table identifiers under this namespace
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.list_tables_response import ListTablesResponse
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
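+    # Pagination sketch (assumes the generated ListTablesResponse exposes the
+    # spec's `next-page-token` as `next_page_token`): pass each response's
+    # token back as page_token until the server stops returning one, e.g.:
+    #   token = None
+    #   while True:
+    #       resp = api_instance.list_tables(prefix, namespace, page_token=token)
+    #       token = resp.next_page_token
+    #       if not token:
+    #           break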
+ page_token = 'page_token_example' # str | (optional) + page_size = 56 # int | For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. (optional) + + try: + # List all table identifiers underneath a given namespace + api_response = api_instance.list_tables(prefix, namespace, page_token=page_token, page_size=page_size) + print("The response of IcebergCatalogAPI->list_tables:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->list_tables: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **page_token** | **str**| | [optional] + **page_size** | **int**| For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. | [optional] + +### Return type + +[**ListTablesResponse**](ListTablesResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | A list of table identifiers | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. 
Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - The namespace specified does not exist | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **list_views**
+> ListTablesResponse list_views(prefix, namespace, page_token=page_token, page_size=page_size)
+
+List all view identifiers underneath a given namespace
+
+Return all view identifiers under this namespace
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.list_tables_response import ListTablesResponse
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + page_token = 'page_token_example' # str | (optional) + page_size = 56 # int | For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. (optional) + + try: + # List all view identifiers underneath a given namespace + api_response = api_instance.list_views(prefix, namespace, page_token=page_token, page_size=page_size) + print("The response of IcebergCatalogAPI->list_views:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->list_views: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **page_token** | **str**| | [optional] + **page_size** | **int**| For servers that support pagination, this signals an upper bound of the number of results that a client will receive. 
For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. | [optional] + +### Return type + +[**ListTablesResponse**](ListTablesResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | A list of table identifiers | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - The namespace specified does not exist | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. 
| - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **load_namespace_metadata**
+> GetNamespaceResponse load_namespace_metadata(prefix, namespace)
+
+Load the metadata properties for a namespace
+
+Return all stored metadata properties for a given namespace
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.get_namespace_response import GetNamespaceResponse
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+ + try: + # Load the metadata properties for a namespace + api_response = api_instance.load_namespace_metadata(prefix, namespace) + print("The response of IcebergCatalogAPI->load_namespace_metadata:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->load_namespace_metadata: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + +### Return type + +[**GetNamespaceResponse**](GetNamespaceResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | Returns a namespace, as well as any properties stored on the namespace if namespace properties are supported by the server. | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - Namespace not found | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. 
The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **load_table**
+> LoadTableResult load_table(prefix, namespace, table, x_iceberg_access_delegation=x_iceberg_access_delegation, snapshots=snapshots)
+
+Load a table from the catalog
+
+Load a table from the catalog. The response contains both configuration and table metadata. The configuration, if non-empty, is used as additional configuration for the table that overrides catalog configuration. For example, this configuration may change the FileIO implementation to be used for the table. The response also contains the table's full metadata, matching the table metadata JSON file. The catalog configuration may contain credentials that should be used for subsequent requests for the table. The configuration key "token" is used to pass an access token to be used as a bearer token for table requests. Otherwise, a token may be passed using an RFC 8693 token type as a configuration key. For example, "urn:ietf:params:oauth:token-type:jwt=".
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.load_table_result import LoadTableResult
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + table = 'sales' # str | A table name + x_iceberg_access_delegation = 'vended-credentials,remote-signing' # str | Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. (optional) + snapshots = 'snapshots_example' # str | The snapshots to return in the body of the metadata. Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`. 
(optional) + + try: + # Load a table from the catalog + api_response = api_instance.load_table(prefix, namespace, table, x_iceberg_access_delegation=x_iceberg_access_delegation, snapshots=snapshots) + print("The response of IcebergCatalogAPI->load_table:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->load_table: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **table** | **str**| A table name | + **x_iceberg_access_delegation** | **str**| Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. | [optional] + **snapshots** | **str**| The snapshots to return in the body of the metadata. Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`. 
| [optional] + +### Return type + +[**LoadTableResult**](LoadTableResult.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | Table metadata result when loading a table | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - NoSuchTableException, table to load does not exist | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **load_view** +> LoadViewResult load_view(prefix, namespace, view) + +Load a view from the catalog + +Load a view from the catalog. The response contains both configuration and view metadata. 
The configuration, if non-empty, is used as additional configuration for the view that overrides catalog configuration. The response also contains the view's full metadata, matching the view metadata JSON file. The catalog configuration may contain credentials that should be used for subsequent requests for the view. The configuration key "token" is used to pass an access token to be used as a bearer token for view requests. Otherwise, a token may be passed using an RFC 8693 token type as a configuration key. For example, "urn:ietf:params:oauth:token-type:jwt=".
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.load_view_result import LoadViewResult
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+    view = 'sales' # str | A view name
+
+    try:
+        # Load a view from the catalog
+        api_response = api_instance.load_view(prefix, namespace, view)
+        print("The response of IcebergCatalogAPI->load_view:\n")
+        pprint(api_response)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->load_view: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. |
+ **view** | **str**| A view name |
+
+### Return type
+
+[**LoadViewResult**](LoadViewResult.md)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | View metadata result when loading a view | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - NoSuchViewException, view to load does not exist | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **namespace_exists**
+> namespace_exists(prefix, namespace)
+
+Check if a namespace exists
+
+Check if a namespace exists. The response does not contain a body.
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+
+    try:
+        # Check if a namespace exists
+        api_instance.namespace_exists(prefix, namespace)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->namespace_exists: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. |
+
+### Return type
+
+void (empty response body)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**204** | Success, no content | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - Namespace not found | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **register_table**
+> LoadTableResult register_table(prefix, namespace, register_table_request)
+
+Register a table in the given namespace using given metadata file location
+
+Register a table using a given metadata file location.
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.load_table_result import LoadTableResult
+from polaris.catalog.models.register_table_request import RegisterTableRequest
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+    register_table_request = polaris.catalog.RegisterTableRequest() # RegisterTableRequest |
+
+    try:
+        # Register a table in the given namespace using given metadata file location
+        api_response = api_instance.register_table(prefix, namespace, register_table_request)
+        print("The response of IcebergCatalogAPI->register_table:\n")
+        pprint(api_response)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->register_table: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. |
+ **register_table_request** | [**RegisterTableRequest**](RegisterTableRequest.md)| |
+
+### Return type
+
+[**LoadTableResult**](LoadTableResult.md)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Table metadata result when loading a table | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - The namespace specified does not exist | - |
+**409** | Conflict - The table already exists | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **rename_table**
+> rename_table(prefix, rename_table_request)
+
+Rename a table from its current name to a new name
+
+Rename a table from one identifier to another. It's valid to move a table across namespaces, but the server implementation is not required to support it.
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.rename_table_request import RenameTableRequest
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    rename_table_request = polaris.catalog.RenameTableRequest() # RenameTableRequest | Current table identifier to rename and new table identifier to rename to
+
+    try:
+        # Rename a table from its current name to a new name
+        api_instance.rename_table(prefix, rename_table_request)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->rename_table: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **rename_table_request** | [**RenameTableRequest**](RenameTableRequest.md)| Current table identifier to rename and new table identifier to rename to |
+
+### Return type
+
+void (empty response body)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**204** | Success, no content | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - NoSuchTableException, Table to rename does not exist - NoSuchNamespaceException, The target namespace of the new table identifier does not exist | - |
+**406** | Not Acceptable / Unsupported Operation. The server does not support this operation. | - |
+**409** | Conflict - The target identifier to rename to already exists as a table or view | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **rename_view**
+> rename_view(prefix, rename_table_request)
+
+Rename a view from its current name to a new name
+
+Rename a view from one identifier to another. It's valid to move a view across namespaces, but the server implementation is not required to support it.
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.rename_table_request import RenameTableRequest
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    rename_table_request = polaris.catalog.RenameTableRequest() # RenameTableRequest | Current view identifier to rename and new view identifier to rename to
+
+    try:
+        # Rename a view from its current name to a new name
+        api_instance.rename_view(prefix, rename_table_request)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->rename_view: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **rename_table_request** | [**RenameTableRequest**](RenameTableRequest.md)| Current view identifier to rename and new view identifier to rename to |
+
+### Return type
+
+void (empty response body)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**204** | Success, no content | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - NoSuchViewException, view to rename does not exist - NoSuchNamespaceException, The target namespace of the new identifier does not exist | - |
+**406** | Not Acceptable / Unsupported Operation. The server does not support this operation. | - |
+**409** | Conflict - The target identifier to rename to already exists as a table or view | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **replace_view**
+> LoadViewResult replace_view(prefix, namespace, view, commit_view_request)
+
+Replace a view
+
+Commit updates to a view.
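The `namespace` path parameter used throughout these endpoints carries multipart namespaces as one string whose parts are joined by the unit separator byte (`0x1F`); in a URL path that byte appears percent-encoded as `%1F`. A minimal round-trip sketch in plain Python (the helper names are illustrative, not part of the generated client):

```python
# Multipart namespace parts are joined with the unit separator byte (0x1F)
# before being sent as a single path parameter.
UNIT_SEPARATOR = "\x1f"

def encode_namespace(parts):
    # Join namespace parts into the single-string form the API expects.
    return UNIT_SEPARATOR.join(parts)

def decode_namespace(namespace):
    # Split the single-string form back into its parts.
    return namespace.split(UNIT_SEPARATOR)

multipart = encode_namespace(["accounting", "tax"])
assert decode_namespace(multipart) == ["accounting", "tax"]
```

A single-part namespace like `'accounting'` needs no separator, which is why the generated examples pass it directly.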
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.commit_view_request import CommitViewRequest
+from polaris.catalog.models.load_view_result import LoadViewResult
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+    view = 'sales' # str | A view name
+    commit_view_request = polaris.catalog.CommitViewRequest() # CommitViewRequest |
+
+    try:
+        # Replace a view
+        api_response = api_instance.replace_view(prefix, namespace, view, commit_view_request)
+        print("The response of IcebergCatalogAPI->replace_view:\n")
+        pprint(api_response)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->replace_view: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. |
+ **view** | **str**| A view name |
+ **commit_view_request** | [**CommitViewRequest**](CommitViewRequest.md)| |
+
+### Return type
+
+[**LoadViewResult**](LoadViewResult.md)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | View metadata result when loading a view | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - NoSuchViewException, view to load does not exist | - |
+**409** | Conflict - CommitFailedException. The client may retry. | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**500** | An unknown server-side problem occurred; the commit state is unknown. | - |
+**502** | A gateway or proxy received an invalid response from the upstream server; the commit state is unknown. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**504** | A server-side gateway timeout occurred; the commit state is unknown. | - |
+**5XX** | A server-side problem that might not be addressable on the client. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **report_metrics**
+> report_metrics(prefix, namespace, table, report_metrics_request)
+
+Send a metrics report to this endpoint to be processed by the backend
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.report_metrics_request import ReportMetricsRequest
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+    table = 'sales' # str | A table name
+    report_metrics_request = polaris.catalog.ReportMetricsRequest() # ReportMetricsRequest | The request containing the metrics report to be sent
+
+    try:
+        # Send a metrics report to this endpoint to be processed by the backend
+        api_instance.report_metrics(prefix, namespace, table, report_metrics_request)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->report_metrics: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. |
+ **table** | **str**| A table name |
+ **report_metrics_request** | [**ReportMetricsRequest**](ReportMetricsRequest.md)| The request containing the metrics report to be sent |
+
+### Return type
+
+void (empty response body)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**204** | Success, no content | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - NoSuchTableException, table to load does not exist | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **send_notification**
+> send_notification(prefix, namespace, table, notification_request)
+
+Sends a notification to the table
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.models.notification_request import NotificationRequest
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure OAuth2 access token for authorization: OAuth2
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: BearerAuth
+configuration = polaris.catalog.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with polaris.catalog.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = polaris.catalog.IcebergCatalogAPI(api_client)
+    prefix = 'prefix_example' # str | An optional prefix in the path
+    namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
+    table = 'sales' # str | A table name
+    notification_request = polaris.catalog.NotificationRequest() # NotificationRequest | The request containing the notification to be sent
+
+    try:
+        # Sends a notification to the table
+        api_instance.send_notification(prefix, namespace, table, notification_request)
+    except Exception as e:
+        print("Exception when calling IcebergCatalogAPI->send_notification: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **prefix** | **str**| An optional prefix in the path |
+ **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. |
+ **table** | **str**| A table name |
+ **notification_request** | [**NotificationRequest**](NotificationRequest.md)| The request containing the notification to be sent |
+
+### Return type
+
+void (empty response body)
+
+### Authorization
+
+[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**204** | Success, no content | - |
+**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - |
+**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - |
+**403** | Forbidden. Authenticated user does not have the necessary permissions. | - |
+**404** | Not Found - NoSuchTableException, table to load does not exist | - |
+**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - |
+**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - |
+**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **table_exists**
+> table_exists(prefix, namespace, table)
+
+Check if a table exists
+
+Check if a table exists within a given namespace. The response does not contain a body.
+
+### Example
+
+* OAuth Authentication (OAuth2):
+* Bearer Authentication (BearerAuth):
+
+```python
+import os
+
+import polaris.catalog
+from polaris.catalog.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.catalog.Configuration(
+    host = "https://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + table = 'sales' # str | A table name + + try: + # Check if a table exists + api_instance.table_exists(prefix, namespace, table) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->table_exists: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **table** | **str**| A table name | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. 
| - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - NoSuchTableException, Table not found | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **update_properties** +> UpdateNamespacePropertiesResponse update_properties(prefix, namespace, update_namespace_properties_request) + +Set or remove properties on a namespace + +Set and/or remove properties on a namespace. The request body specifies a list of properties to remove and a map of key value pairs to update. Properties that are not in the request are not modified or removed by this call. Server implementations are not required to support namespace properties. + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.models.update_namespace_properties_request import UpdateNamespacePropertiesRequest +from polaris.catalog.models.update_namespace_properties_response import UpdateNamespacePropertiesResponse +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. 
+import os
+configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + update_namespace_properties_request = polaris.catalog.UpdateNamespacePropertiesRequest() # UpdateNamespacePropertiesRequest | + + try: + # Set or remove properties on a namespace + api_response = api_instance.update_properties(prefix, namespace, update_namespace_properties_request) + print("The response of IcebergCatalogAPI->update_properties:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->update_properties: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
| + **update_namespace_properties_request** | [**UpdateNamespacePropertiesRequest**](UpdateNamespacePropertiesRequest.md)| | + +### Return type + +[**UpdateNamespacePropertiesResponse**](UpdateNamespacePropertiesResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | JSON data response for a synchronous update properties request. | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - Namespace not found | - | +**406** | Not Acceptable / Unsupported Operation. The server does not support this operation. | - | +**422** | Unprocessable Entity - A property key was included in both `removals` and `updates` | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. 
| - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **update_table** +> CommitTableResponse update_table(prefix, namespace, table, commit_table_request) + +Commit updates to a table + +Commit updates to a table. Commits have two parts, requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. Create table transactions that are started by createTable with `stage-create` set to true are committed using this route. Transactions should include all changes to the table, including table initialization, like AddSchemaUpdate and SetCurrentSchemaUpdate. The `assert-create` requirement is used to ensure that the table was not created concurrently. + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import polaris.catalog +from polaris.catalog.models.commit_table_request import CommitTableRequest +from polaris.catalog.models.commit_table_response import CommitTableResponse +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. + table = 'sales' # str | A table name + commit_table_request = polaris.catalog.CommitTableRequest() # CommitTableRequest | + + try: + # Commit updates to a table + api_response = api_instance.update_table(prefix, namespace, table, commit_table_request) + print("The response of IcebergCatalogAPI->update_table:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->update_table: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
| + **table** | **str**| A table name | + **commit_table_request** | [**CommitTableRequest**](CommitTableRequest.md)| | + +### Return type + +[**CommitTableResponse**](CommitTableResponse.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | Response used when a table is successfully updated. The table metadata JSON is returned in the metadata field. The corresponding file location of table metadata must be returned in the metadata-location field. Clients can check whether metadata has changed by comparing metadata locations. | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**404** | Not Found - NoSuchTableException, table to load does not exist | - | +**409** | Conflict - CommitFailedException, one or more requirements failed. The client may retry. | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**500** | An unknown server-side problem occurred; the commit state is unknown. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**502** | A gateway or proxy received an invalid response from the upstream server; the commit state is unknown. 
| - | +**504** | A server-side gateway timeout occurred; the commit state is unknown. | - | +**5XX** | A server-side problem that might not be addressable on the client. | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **view_exists** +> view_exists(prefix, namespace, view) + +Check if a view exists + +Check if a view exists within a given namespace. This request does not return a response body. + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import os +import polaris.catalog +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergCatalogAPI(api_client) + prefix = 'prefix_example' # str | An optional prefix in the path + namespace = 'accounting' # str | A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.
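+    # Note (illustrative, not part of the generated docs): the examples use a
+    # single-level namespace. For a multi-level namespace, join the parts with
+    # the 0x1F unit separator byte, e.g. a hypothetical namespace
+    # ('accounting', 'tax') would be passed as: namespace = 'accounting\x1ftax'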
+ view = 'sales' # str | A view name + + try: + # Check if a view exists + api_instance.view_exists(prefix, namespace, view) + except Exception as e: + print("Exception when calling IcebergCatalogAPI->view_exists: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **prefix** | **str**| An optional prefix in the path | + **namespace** | **str**| A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. | + **view** | **str**| A view name | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**400** | Bad Request | - | +**401** | Unauthorized | - | +**404** | Not Found | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. 
| - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + diff --git a/regtests/client/python/docs/IcebergConfigurationAPI.md b/regtests/client/python/docs/IcebergConfigurationAPI.md new file mode 100644 index 0000000000..0ccc62e9a8 --- /dev/null +++ b/regtests/client/python/docs/IcebergConfigurationAPI.md @@ -0,0 +1,113 @@ + +# polaris.catalog.IcebergConfigurationAPI + +All URIs are relative to *https://localhost* + +Method | HTTP request | Description +------------- | ------------- | ------------- +[**get_config**](IcebergConfigurationAPI.md#get_config) | **GET** /v1/config | List all catalog configuration settings + + +# **get_config** +> CatalogConfig get_config(warehouse=warehouse) + +List all catalog configuration settings + + All REST clients should first call this route to get catalog configuration properties from the server to configure the catalog and its HTTP client. Configuration from the server consists of two sets of key/value pairs. - defaults - properties that should be used as default configuration; applied before client configuration - overrides - properties that should be used to override client configuration; applied after defaults and client configuration Catalog configuration is constructed by setting the defaults, then client-provided configuration, and finally overrides. The final property set is then used to configure the catalog. For example, a default configuration property might set the size of the client pool, which can be replaced with a client-specific setting. An override might be used to set the warehouse location, which is stored on the server rather than in client configuration.
Common catalog configuration settings are documented at https://iceberg.apache.org/docs/latest/configuration/#catalog-properties + +### Example + +* OAuth Authentication (OAuth2): +* Bearer Authentication (BearerAuth): + +```python +import os +import polaris.catalog +from polaris.catalog.models.catalog_config import CatalogConfig +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergConfigurationAPI(api_client) + warehouse = 'warehouse_example' # str | Warehouse location or identifier to request from the service (optional) + + try: + # List all catalog configuration settings + api_response = api_instance.get_config(warehouse=warehouse) + print("The response of IcebergConfigurationAPI->get_config:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergConfigurationAPI->get_config: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **warehouse** | **str**| Warehouse location or identifier to request from the service | [optional] + +### Return type + 
+[**CatalogConfig**](CatalogConfig.md) + +### Authorization + +[OAuth2](../README.md#OAuth2), [BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | Server specified configuration values. | - | +**400** | Indicates a bad request error. It could be caused by an unexpected request body format or other forms of request validation failure, such as invalid json. Usually serves application/json content, although in some cases simple text/plain content might be returned by the server's middleware. | - | +**401** | Unauthorized. Authentication is required and has failed or has not yet been provided. | - | +**403** | Forbidden. Authenticated user does not have the necessary permissions. | - | +**419** | Credentials have timed out. If possible, the client should refresh credentials and retry. | - | +**503** | The service is not ready to handle the request. The client should wait and retry. The service may additionally send a Retry-After header to indicate when to retry. | - | +**5XX** | A server-side problem that might not be addressable from the client side. Used for server 5xx errors without more specific documentation in individual routes. 
| - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + diff --git a/regtests/client/python/docs/IcebergErrorResponse.md b/regtests/client/python/docs/IcebergErrorResponse.md new file mode 100644 index 0000000000..f4f9b5c69d --- /dev/null +++ b/regtests/client/python/docs/IcebergErrorResponse.md @@ -0,0 +1,47 @@ + +# IcebergErrorResponse + +JSON wrapper for all error responses (non-2xx) + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**error** | [**ErrorModel**](ErrorModel.md) | | + +## Example + +```python +from polaris.catalog.models.iceberg_error_response import IcebergErrorResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of IcebergErrorResponse from a JSON string +iceberg_error_response_instance = IcebergErrorResponse.from_json(json) +# print the JSON string representation of the object +print(iceberg_error_response_instance.to_json()) + +# convert the object into a dict +iceberg_error_response_dict = iceberg_error_response_instance.to_dict() +# create an instance of IcebergErrorResponse from a dict +iceberg_error_response_from_dict = IcebergErrorResponse.from_dict(iceberg_error_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/IcebergOAuth2API.md b/regtests/client/python/docs/IcebergOAuth2API.md new file mode 100644 index 0000000000..b5d2ca0364 --- /dev/null +++ b/regtests/client/python/docs/IcebergOAuth2API.md @@ -0,0 +1,124 @@ + +# polaris.catalog.IcebergOAuth2API + +All URIs are relative to *https://localhost* + +Method | HTTP request | Description +------------- | ------------- | ------------- +[**get_token**](IcebergOAuth2API.md#get_token) | **POST**
/v1/oauth/tokens | Get a token using an OAuth2 flow + + +# **get_token** +> OAuthTokenResponse get_token(grant_type=grant_type, scope=scope, client_id=client_id, client_secret=client_secret, requested_token_type=requested_token_type, subject_token=subject_token, subject_token_type=subject_token_type, actor_token=actor_token, actor_token_type=actor_token_type) + +Get a token using an OAuth2 flow + +Exchange credentials for a token using the OAuth2 client credentials flow or token exchange. This endpoint is used for three purposes: 1. To exchange client credentials (client ID and secret) for an access token. This uses the client credentials flow. 2. To exchange a client token and an identity token for a more specific access token. This uses the token exchange flow. 3. To exchange an access token for one with the same claims and a refreshed expiration period. This uses the token exchange flow. For example, a catalog client may be configured with client credentials from the OAuth2 Authorization flow. This client would exchange its client ID and secret for an access token using the client credentials request with this endpoint (1). Subsequent requests would then use that access token. Some clients may also handle sessions that have additional user context. These clients would use the token exchange flow to exchange a user token (the \"subject\" token) from the session for a more specific access token for that user, using the catalog's access token as the \"actor\" token (2). The user ID token is the \"subject\" token and can be any token type allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. This request should use the catalog's bearer token in the \"Authorization\" header. Clients may also use the token exchange flow to refresh a token that is about to expire by sending a token exchange request (3). The request's \"subject\" token should be the expiring token.
This request should use the subject token in the \"Authorization\" header. + +### Example + +* Bearer Authentication (BearerAuth): + +```python +import os +import polaris.catalog +from polaris.catalog.models.o_auth_token_response import OAuthTokenResponse +from polaris.catalog.models.token_type import TokenType +from polaris.catalog.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.catalog.Configuration( + host = "https://localhost" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +# Configure Bearer authorization: BearerAuth +configuration = polaris.catalog.Configuration( + access_token = os.environ["BEARER_TOKEN"] +) + +# Enter a context with an instance of the API client +with polaris.catalog.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.catalog.IcebergOAuth2API(api_client) + grant_type = 'grant_type_example' # str | (optional) + scope = 'scope_example' # str | (optional) + client_id = 'client_id_example' # str | Client ID. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. (optional) + client_secret = 'client_secret_example' # str | Client secret. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.
(optional) + requested_token_type = polaris.catalog.TokenType() # TokenType | (optional) + subject_token = 'subject_token_example' # str | Subject token for token exchange request (optional) + subject_token_type = polaris.catalog.TokenType() # TokenType | (optional) + actor_token = 'actor_token_example' # str | Actor token for token exchange request (optional) + actor_token_type = polaris.catalog.TokenType() # TokenType | (optional) + + try: + # Get a token using an OAuth2 flow + api_response = api_instance.get_token(grant_type=grant_type, scope=scope, client_id=client_id, client_secret=client_secret, requested_token_type=requested_token_type, subject_token=subject_token, subject_token_type=subject_token_type, actor_token=actor_token, actor_token_type=actor_token_type) + print("The response of IcebergOAuth2API->get_token:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling IcebergOAuth2API->get_token: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **grant_type** | **str**| | [optional] + **scope** | **str**| | [optional] + **client_id** | **str**| Client ID. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. | [optional] + **client_secret** | **str**| Client secret. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.
| [optional] + **requested_token_type** | [**TokenType**](TokenType.md)| | [optional] + **subject_token** | **str**| Subject token for token exchange request | [optional] + **subject_token_type** | [**TokenType**](TokenType.md)| | [optional] + **actor_token** | **str**| Actor token for token exchange request | [optional] + **actor_token_type** | [**TokenType**](TokenType.md)| | [optional] + +### Return type + +[**OAuthTokenResponse**](OAuthTokenResponse.md) + +### Authorization + +[BearerAuth](../README.md#BearerAuth) + +### HTTP request headers + + - **Content-Type**: application/x-www-form-urlencoded + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | OAuth2 token response for client credentials or token exchange | - | +**400** | OAuth2 error response | - | +**401** | OAuth2 error response | - | +**5XX** | OAuth2 error response | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + diff --git a/regtests/client/python/docs/ListNamespacesResponse.md b/regtests/client/python/docs/ListNamespacesResponse.md new file mode 100644 index 0000000000..87e868837f --- /dev/null +++ b/regtests/client/python/docs/ListNamespacesResponse.md @@ -0,0 +1,46 @@ + +# ListNamespacesResponse + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**next_page_token** | **str** | An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter `pageToken` to the server. Servers that support pagination should identify the `pageToken` parameter and return a `next-page-token` in the response if there are more results available. 
After the initial request, the value of `next-page-token` from each response must be used as the `pageToken` parameter value for the next request. The server must return `null` value for the `next-page-token` in the last response. Servers that support pagination must return all results in a single response with the value of `next-page-token` set to `null` if the query parameter `pageToken` is not set in the request. Servers that do not support pagination should ignore the `pageToken` parameter and return all results in a single response. The `next-page-token` must be omitted from the response. Clients must interpret either `null` or missing response value of `next-page-token` as the end of the listing results. | [optional] +**namespaces** | **List[List[str]]** | | [optional] + +## Example + +```python +from polaris.catalog.models.list_namespaces_response import ListNamespacesResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of ListNamespacesResponse from a JSON string +list_namespaces_response_instance = ListNamespacesResponse.from_json(json) +# print the JSON string representation of the object +print(list_namespaces_response_instance.to_json()) + +# convert the object into a dict +list_namespaces_response_dict = list_namespaces_response_instance.to_dict() +# create an instance of ListNamespacesResponse from a dict +list_namespaces_response_from_dict = ListNamespacesResponse.from_dict(list_namespaces_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ListTablesResponse.md b/regtests/client/python/docs/ListTablesResponse.md new file mode 100644 index 0000000000..3b785b6a0f --- /dev/null +++ b/regtests/client/python/docs/ListTablesResponse.md @@ -0,0 +1,46 @@ + +# ListTablesResponse + +## Properties + +Name | Type | Description | Notes +------------ | -------------
------------- | ------------- +**next_page_token** | **str** | An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter `pageToken` to the server. Servers that support pagination should identify the `pageToken` parameter and return a `next-page-token` in the response if there are more results available. After the initial request, the value of `next-page-token` from each response must be used as the `pageToken` parameter value for the next request. The server must return `null` value for the `next-page-token` in the last response. Servers that support pagination must return all results in a single response with the value of `next-page-token` set to `null` if the query parameter `pageToken` is not set in the request. Servers that do not support pagination should ignore the `pageToken` parameter and return all results in a single response. The `next-page-token` must be omitted from the response. Clients must interpret either `null` or missing response value of `next-page-token` as the end of the listing results. 
| [optional] +**identifiers** | [**List[TableIdentifier]**](TableIdentifier.md) | | [optional] + +## Example + +```python +from polaris.catalog.models.list_tables_response import ListTablesResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of ListTablesResponse from a JSON string +list_tables_response_instance = ListTablesResponse.from_json(json) +# print the JSON string representation of the object +print(list_tables_response_instance.to_json()) + +# convert the object into a dict +list_tables_response_dict = list_tables_response_instance.to_dict() +# create an instance of ListTablesResponse from a dict +list_tables_response_from_dict = ListTablesResponse.from_dict(list_tables_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ListType.md b/regtests/client/python/docs/ListType.md new file mode 100644 index 0000000000..d8e608923b --- /dev/null +++ b/regtests/client/python/docs/ListType.md @@ -0,0 +1,48 @@ + +# ListType + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**element_id** | **int** | | +**element** | [**Type**](Type.md) | | +**element_required** | **bool** | | + +## Example + +```python +from polaris.catalog.models.list_type import ListType + +# TODO update the JSON string below +json = "{}" +# create an instance of ListType from a JSON string +list_type_instance = ListType.from_json(json) +# print the JSON string representation of the object +print(list_type_instance.to_json()) + +# convert the object into a dict +list_type_dict = list_type_instance.to_dict() +# create an instance of ListType from a dict +list_type_from_dict = ListType.from_dict(list_type_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/LiteralExpression.md b/regtests/client/python/docs/LiteralExpression.md new file mode 100644 index 0000000000..ade9cb63b9 --- /dev/null +++ b/regtests/client/python/docs/LiteralExpression.md @@ -0,0 +1,47 @@ + +# LiteralExpression + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**term** | [**Term**](Term.md) | | +**value** | **object** | | + +## Example + +```python +from polaris.catalog.models.literal_expression import LiteralExpression + +# TODO update the JSON string below +json = "{}" +# create an instance of LiteralExpression from a JSON string +literal_expression_instance = LiteralExpression.from_json(json) +# print the JSON string representation of the object +print(literal_expression_instance.to_json()) + +# convert the object into a dict +literal_expression_dict = literal_expression_instance.to_dict() +# create an instance of LiteralExpression from a dict +literal_expression_from_dict = LiteralExpression.from_dict(literal_expression_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/LoadTableResult.md b/regtests/client/python/docs/LoadTableResult.md new file mode 100644 index 0000000000..01ad84349d --- /dev/null +++ b/regtests/client/python/docs/LoadTableResult.md @@ -0,0 +1,49 @@ + +# LoadTableResult + +Result used when a table is successfully loaded. The table metadata JSON is returned in the `metadata` field. The corresponding file location of table metadata should be returned in the `metadata-location` field, unless the metadata is not yet committed. For example, a create transaction may return metadata that is staged but not committed. Clients can check whether metadata has changed by comparing metadata locations after the table has been created. The `config` map returns table-specific configuration for the table's resources, including its HTTP client and FileIO. For example, config may contain a specific FileIO implementation class for the table depending on its underlying storage. The following configurations should be respected by clients: ## General Configurations - `token`: Authorization bearer token to use for table requests if OAuth2 security is enabled ## AWS Configurations The following configurations should be respected when working with tables stored in AWS S3 - `client.region`: region to configure client for making requests to AWS - `s3.access-key-id`: id for credentials that provide access to the data in S3 - `s3.secret-access-key`: secret for credentials that provide access to data in S3 - `s3.session-token`: if present, this value should be used as the session token - `s3.remote-signing-enabled`: if `true`, remote signing should be performed as described in the `s3-signer-open-api.yaml` specification + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**metadata_location** | **str** | May be null if the table is staged as part of a transaction | [optional] +**metadata** | [**TableMetadata**](TableMetadata.md) | | +**config** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.load_table_result import LoadTableResult + +# TODO update the JSON string below +json = "{}" +# create an instance of LoadTableResult from a JSON string +load_table_result_instance = LoadTableResult.from_json(json) +# print the JSON string representation of the object +print(load_table_result_instance.to_json()) + +# convert the object into a dict +load_table_result_dict = load_table_result_instance.to_dict() +# create an instance of LoadTableResult from a dict +load_table_result_from_dict = LoadTableResult.from_dict(load_table_result_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/LoadViewResult.md b/regtests/client/python/docs/LoadViewResult.md new file mode 100644 index 0000000000..8cd50f0d41 --- /dev/null +++ b/regtests/client/python/docs/LoadViewResult.md @@ -0,0 +1,49 @@ + +# LoadViewResult + +Result used when a view is successfully loaded. The view metadata JSON is returned in the `metadata` field. The corresponding file location of view metadata is returned in the `metadata-location` field. Clients can check whether metadata has changed by comparing metadata locations after the view has been created. The `config` map returns view-specific configuration for the view's resources. The following configurations should be respected by clients: ## General Configurations - `token`: Authorization bearer token to use for view requests if OAuth2 security is enabled + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**metadata_location** | **str** | | +**metadata** | [**ViewMetadata**](ViewMetadata.md) | | +**config** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.load_view_result import LoadViewResult + +# TODO update the JSON string below +json = "{}" +# create an instance of LoadViewResult from a JSON string +load_view_result_instance = LoadViewResult.from_json(json) +# print the JSON string representation of the object +print(load_view_result_instance.to_json()) + +# convert the object into a dict +load_view_result_dict = load_view_result_instance.to_dict() +# create an instance of LoadViewResult from a dict +load_view_result_from_dict = LoadViewResult.from_dict(load_view_result_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/MapType.md b/regtests/client/python/docs/MapType.md new file mode 100644 index 0000000000..71b50893a3 --- /dev/null +++ b/regtests/client/python/docs/MapType.md @@ -0,0 +1,50 @@ + +# MapType + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**key_id** | **int** | | +**key** | [**Type**](Type.md) | | +**value_id** | **int** | | +**value** | [**Type**](Type.md) | | +**value_required** | **bool** | | + +## Example + +```python +from polaris.catalog.models.map_type import MapType + +# TODO update the JSON string below +json = "{}" +# create an instance of MapType from a JSON string +map_type_instance = MapType.from_json(json) +# print the JSON string representation of the object +print(map_type_instance.to_json()) + +# convert the object into a dict +map_type_dict = map_type_instance.to_dict() +# create an instance of MapType from a dict +map_type_from_dict = MapType.from_dict(map_type_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/MetadataLogInner.md b/regtests/client/python/docs/MetadataLogInner.md new file mode 100644 index 0000000000..1c3b46be55 --- /dev/null +++ b/regtests/client/python/docs/MetadataLogInner.md @@ -0,0 +1,46 @@ + +# MetadataLogInner + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**metadata_file** | **str** | | +**timestamp_ms** | **int** | | + +## Example + +```python +from polaris.catalog.models.metadata_log_inner import MetadataLogInner + +# TODO update the JSON string below +json = "{}" +# create an instance of MetadataLogInner from a JSON string +metadata_log_inner_instance = MetadataLogInner.from_json(json) +# print the JSON string representation of the object +print(metadata_log_inner_instance.to_json()) + +# convert the object into a dict +metadata_log_inner_dict = metadata_log_inner_instance.to_dict() +# create an instance of MetadataLogInner from a dict +metadata_log_inner_from_dict = MetadataLogInner.from_dict(metadata_log_inner_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/MetricResult.md b/regtests/client/python/docs/MetricResult.md new file mode 100644 index 0000000000..22f5e006e9 --- /dev/null +++ b/regtests/client/python/docs/MetricResult.md @@ -0,0 +1,49 @@ + +# MetricResult + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**unit** | **str** | | +**value** | **int** | | +**time_unit** | **str** | | +**count** | **int** | | +**total_duration** | **int** | | + +## Example + +```python +from polaris.catalog.models.metric_result import MetricResult + +# TODO update the JSON string below +json = "{}" +# create an instance of MetricResult from a JSON string +metric_result_instance = MetricResult.from_json(json) +# print the JSON string representation of the object +print(metric_result_instance.to_json()) + +# convert the object into a dict +metric_result_dict = metric_result_instance.to_dict() +# create an instance of MetricResult from a dict +metric_result_from_dict = MetricResult.from_dict(metric_result_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ModelSchema.md b/regtests/client/python/docs/ModelSchema.md new file mode 100644 index 0000000000..831f8311ac --- /dev/null +++ b/regtests/client/python/docs/ModelSchema.md @@ -0,0 +1,48 @@ + +# ModelSchema + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**fields** | [**List[StructField]**](StructField.md) | | +**schema_id** | **int** | | [optional] [readonly] +**identifier_field_ids** | **List[int]** | | [optional] + +## Example + +```python +from polaris.catalog.models.model_schema import ModelSchema + +# TODO update the JSON string below +json = "{}" +# create an instance of ModelSchema from a JSON string +model_schema_instance = ModelSchema.from_json(json) +# print the JSON string representation of the object +print(model_schema_instance.to_json()) + +# convert the object into a dict +model_schema_dict = model_schema_instance.to_dict() +# create an instance of ModelSchema from a dict +model_schema_from_dict = ModelSchema.from_dict(model_schema_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/NamespaceGrant.md b/regtests/client/python/docs/NamespaceGrant.md new file mode 100644 index 0000000000..5a0fb3f62e --- /dev/null +++ b/regtests/client/python/docs/NamespaceGrant.md @@ -0,0 +1,46 @@ + +# NamespaceGrant + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | | +**privilege** | [**NamespacePrivilege**](NamespacePrivilege.md) | | + +## Example + +```python +from polaris.management.models.namespace_grant import NamespaceGrant + +# TODO update the JSON string below +json = "{}" +# create an instance of NamespaceGrant from a JSON string +namespace_grant_instance = NamespaceGrant.from_json(json) +# print the JSON string representation of the object +print(namespace_grant_instance.to_json()) + +# convert the object into a dict +namespace_grant_dict = namespace_grant_instance.to_dict() +# create an instance of NamespaceGrant
from a dict +namespace_grant_from_dict = NamespaceGrant.from_dict(namespace_grant_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/NamespacePrivilege.md b/regtests/client/python/docs/NamespacePrivilege.md new file mode 100644 index 0000000000..f9ed3b2921 --- /dev/null +++ b/regtests/client/python/docs/NamespacePrivilege.md @@ -0,0 +1,70 @@ + +# NamespacePrivilege + +## Enum + +* `CATALOG_MANAGE_ACCESS` (value: `'CATALOG_MANAGE_ACCESS'`) + +* `CATALOG_MANAGE_CONTENT` (value: `'CATALOG_MANAGE_CONTENT'`) + +* `CATALOG_MANAGE_METADATA` (value: `'CATALOG_MANAGE_METADATA'`) + +* `NAMESPACE_CREATE` (value: `'NAMESPACE_CREATE'`) + +* `TABLE_CREATE` (value: `'TABLE_CREATE'`) + +* `VIEW_CREATE` (value: `'VIEW_CREATE'`) + +* `NAMESPACE_DROP` (value: `'NAMESPACE_DROP'`) + +* `TABLE_DROP` (value: `'TABLE_DROP'`) + +* `VIEW_DROP` (value: `'VIEW_DROP'`) + +* `NAMESPACE_LIST` (value: `'NAMESPACE_LIST'`) + +* `TABLE_LIST` (value: `'TABLE_LIST'`) + +* `VIEW_LIST` (value: `'VIEW_LIST'`) + +* `NAMESPACE_READ_PROPERTIES` (value: `'NAMESPACE_READ_PROPERTIES'`) + +* `TABLE_READ_PROPERTIES` (value: `'TABLE_READ_PROPERTIES'`) + +* `VIEW_READ_PROPERTIES` (value: `'VIEW_READ_PROPERTIES'`) + +* `NAMESPACE_WRITE_PROPERTIES` (value: `'NAMESPACE_WRITE_PROPERTIES'`) + +* `TABLE_WRITE_PROPERTIES` (value: `'TABLE_WRITE_PROPERTIES'`) + +* `VIEW_WRITE_PROPERTIES` (value: `'VIEW_WRITE_PROPERTIES'`) + +* `TABLE_READ_DATA` (value: `'TABLE_READ_DATA'`) + +* `TABLE_WRITE_DATA` (value: `'TABLE_WRITE_DATA'`) + +* `NAMESPACE_FULL_METADATA` (value: `'NAMESPACE_FULL_METADATA'`) + +* `TABLE_FULL_METADATA` (value: `'TABLE_FULL_METADATA'`) + +* `VIEW_FULL_METADATA` (value: `'VIEW_FULL_METADATA'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to 
README]](../README.md) + + diff --git a/regtests/client/python/docs/NotExpression.md b/regtests/client/python/docs/NotExpression.md new file mode 100644 index 0000000000..283a3fb804 --- /dev/null +++ b/regtests/client/python/docs/NotExpression.md @@ -0,0 +1,46 @@ + +# NotExpression + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**child** | [**Expression**](Expression.md) | | + +## Example + +```python +from polaris.catalog.models.not_expression import NotExpression + +# TODO update the JSON string below +json = "{}" +# create an instance of NotExpression from a JSON string +not_expression_instance = NotExpression.from_json(json) +# print the JSON string representation of the object +print(not_expression_instance.to_json()) + +# convert the object into a dict +not_expression_dict = not_expression_instance.to_dict() +# create an instance of NotExpression from a dict +not_expression_from_dict = NotExpression.from_dict(not_expression_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/NotificationRequest.md b/regtests/client/python/docs/NotificationRequest.md new file mode 100644 index 0000000000..ecd0a51b1d --- /dev/null +++ b/regtests/client/python/docs/NotificationRequest.md @@ -0,0 +1,46 @@ + +# NotificationRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**notification_type** | [**NotificationType**](NotificationType.md) | | +**payload** | [**TableUpdateNotification**](TableUpdateNotification.md) | | [optional] + +## Example + +```python +from polaris.catalog.models.notification_request import NotificationRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of NotificationRequest from a JSON string +notification_request_instance = NotificationRequest.from_json(json) +# print the JSON string representation of the object +print(notification_request_instance.to_json()) + +# convert the object into a dict +notification_request_dict = notification_request_instance.to_dict() +# create an instance of NotificationRequest from a dict +notification_request_from_dict = NotificationRequest.from_dict(notification_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/NotificationType.md b/regtests/client/python/docs/NotificationType.md new file mode 100644 index 0000000000..7b332037bd --- /dev/null +++ b/regtests/client/python/docs/NotificationType.md @@ -0,0 +1,32 @@ + +# NotificationType + +## Enum + +* `UNKNOWN` (value: `'UNKNOWN'`) + +* `CREATE` (value: `'CREATE'`) + +* `UPDATE` (value: `'UPDATE'`) + +* `DROP` (value: `'DROP'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/NullOrder.md b/regtests/client/python/docs/NullOrder.md new file mode 100644 index 0000000000..d5a236b1de --- /dev/null +++ b/regtests/client/python/docs/NullOrder.md @@ -0,0 +1,28 @@ + +# NullOrder + +## Enum + +* `NULLS_MINUS_FIRST` (value: `'nulls-first'`) + +* `NULLS_MINUS_LAST` (value: `'nulls-last'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/OAuthError.md b/regtests/client/python/docs/OAuthError.md new file mode 100644 index 0000000000..26416cebd1 --- /dev/null +++ b/regtests/client/python/docs/OAuthError.md @@ -0,0 +1,47 @@ + +# OAuthError + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**error** | **str** | | +**error_description** | **str** | | [optional] +**error_uri** | **str** | | [optional] + +## Example + +```python +from polaris.catalog.models.o_auth_error import OAuthError + +# TODO update the JSON string below +json = "{}" +# create an instance of OAuthError from a JSON string +o_auth_error_instance = OAuthError.from_json(json) +# print the JSON string representation of the object +print(o_auth_error_instance.to_json()) + +# convert the object into a dict +o_auth_error_dict = o_auth_error_instance.to_dict() +# create an instance of OAuthError from a dict +o_auth_error_from_dict = OAuthError.from_dict(o_auth_error_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/OAuthTokenResponse.md b/regtests/client/python/docs/OAuthTokenResponse.md new file mode 100644 index 0000000000..9c1a7802c7 --- /dev/null +++ b/regtests/client/python/docs/OAuthTokenResponse.md @@ -0,0 +1,50 @@ + +# OAuthTokenResponse + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**access_token** | **str** | The access token, for client credentials or token exchange | +**token_type** | **str** | Access token type for client credentials or token exchange. See https://datatracker.ietf.org/doc/html/rfc6749#section-7.1 | +**expires_in** | **int** | Lifetime of the access token in seconds for client credentials or token exchange | [optional] +**issued_token_type** | [**TokenType**](TokenType.md) | | [optional] +**refresh_token** | **str** | Refresh token for client credentials or token exchange | [optional] +**scope** | **str** | Authorization scope for client credentials or token exchange | [optional] + +## Example + +```python +from polaris.catalog.models.o_auth_token_response import OAuthTokenResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of OAuthTokenResponse from a JSON string +o_auth_token_response_instance = OAuthTokenResponse.from_json(json) +# print the JSON string representation of the object +print(o_auth_token_response_instance.to_json()) + +# convert the object into a dict +o_auth_token_response_dict = o_auth_token_response_instance.to_dict() +# create an instance of OAuthTokenResponse from a dict +o_auth_token_response_from_dict = OAuthTokenResponse.from_dict(o_auth_token_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PartitionField.md b/regtests/client/python/docs/PartitionField.md new file mode 100644 index 0000000000..0fcd147d24 --- /dev/null +++ b/regtests/client/python/docs/PartitionField.md @@ -0,0 +1,48 @@ + +# PartitionField + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**field_id** | **int** | | [optional] +**source_id** | **int** | | +**name** | **str** | | +**transform** | **str** | | + +## Example + +```python +from polaris.catalog.models.partition_field import PartitionField + +# TODO update the JSON string below +json = "{}" +# create an instance of PartitionField from a JSON string +partition_field_instance = PartitionField.from_json(json) +# print the JSON string representation of the object +print(partition_field_instance.to_json()) + +# convert the object into a dict +partition_field_dict = partition_field_instance.to_dict() +# create an instance of PartitionField from a dict +partition_field_from_dict = PartitionField.from_dict(partition_field_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PartitionSpec.md b/regtests/client/python/docs/PartitionSpec.md new file mode 100644 index 0000000000..468493a974 --- /dev/null +++ b/regtests/client/python/docs/PartitionSpec.md @@ -0,0 +1,46 @@ + +# PartitionSpec + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**spec_id** | **int** | | [optional] [readonly] +**fields** | [**List[PartitionField]**](PartitionField.md) | | + +## Example + +```python +from polaris.catalog.models.partition_spec import PartitionSpec + +# TODO update the JSON string below +json = "{}" +# create an instance of PartitionSpec from a JSON string +partition_spec_instance = PartitionSpec.from_json(json) +# print the JSON string representation of the object +print(partition_spec_instance.to_json()) + +# convert the object into a dict +partition_spec_dict = partition_spec_instance.to_dict() +# create an instance of PartitionSpec from a dict +partition_spec_from_dict = PartitionSpec.from_dict(partition_spec_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PartitionStatisticsFile.md b/regtests/client/python/docs/PartitionStatisticsFile.md new file mode 100644 index 0000000000..dc8b8c2ded --- /dev/null +++ b/regtests/client/python/docs/PartitionStatisticsFile.md @@ -0,0 +1,47 @@ + +# PartitionStatisticsFile + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**snapshot_id** | **int** | | +**statistics_path** | **str** | | +**file_size_in_bytes** | **int** | | + +## Example + +```python +from polaris.catalog.models.partition_statistics_file import PartitionStatisticsFile + +# TODO update the JSON string below +json = "{}" +# create an instance of PartitionStatisticsFile from a JSON string +partition_statistics_file_instance = PartitionStatisticsFile.from_json(json) +# print the JSON string representation of the object +print(partition_statistics_file_instance.to_json()) + +# convert the object into a dict +partition_statistics_file_dict = partition_statistics_file_instance.to_dict() +# create an instance of PartitionStatisticsFile from a dict +partition_statistics_file_from_dict = PartitionStatisticsFile.from_dict(partition_statistics_file_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PolarisCatalog.md b/regtests/client/python/docs/PolarisCatalog.md new file mode 100644 index 0000000000..91050694a1 --- /dev/null +++ b/regtests/client/python/docs/PolarisCatalog.md @@ -0,0 +1,46 @@ + +# PolarisCatalog + +The base catalog type - this contains all the fields necessary to construct an INTERNAL catalog + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- + +## Example + +```python +from polaris.management.models.polaris_catalog import PolarisCatalog + +# TODO update the JSON string below +json = "{}" +# create an instance of PolarisCatalog from a JSON string +polaris_catalog_instance = PolarisCatalog.from_json(json) +# print the JSON string representation of the object +print(polaris_catalog_instance.to_json()) + +# convert the object into a dict +polaris_catalog_dict = polaris_catalog_instance.to_dict() +# create an instance of PolarisCatalog from a dict +polaris_catalog_from_dict = PolarisCatalog.from_dict(polaris_catalog_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PolarisDefaultApi.md b/regtests/client/python/docs/PolarisDefaultApi.md new file mode 100644 index 0000000000..42b3fe74de --- /dev/null +++ b/regtests/client/python/docs/PolarisDefaultApi.md @@ 
-0,0 +1,2491 @@ + +# polaris.management.PolarisDefaultApi + +All URIs are relative to *https://localhost/api/management/v1* + +Method | HTTP request | Description +------------- | ------------- | ------------- +[**add_grant_to_catalog_role**](PolarisDefaultApi.md#add_grant_to_catalog_role) | **PUT** /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants | +[**assign_catalog_role_to_principal_role**](PolarisDefaultApi.md#assign_catalog_role_to_principal_role) | **PUT** /principal-roles/{principalRoleName}/catalog-roles/{catalogName} | +[**assign_principal_role**](PolarisDefaultApi.md#assign_principal_role) | **PUT** /principals/{principalName}/principal-roles | +[**create_catalog**](PolarisDefaultApi.md#create_catalog) | **POST** /catalogs | +[**create_catalog_role**](PolarisDefaultApi.md#create_catalog_role) | **POST** /catalogs/{catalogName}/catalog-roles | +[**create_principal**](PolarisDefaultApi.md#create_principal) | **POST** /principals | +[**create_principal_role**](PolarisDefaultApi.md#create_principal_role) | **POST** /principal-roles | +[**delete_catalog**](PolarisDefaultApi.md#delete_catalog) | **DELETE** /catalogs/{catalogName} | +[**delete_catalog_role**](PolarisDefaultApi.md#delete_catalog_role) | **DELETE** /catalogs/{catalogName}/catalog-roles/{catalogRoleName} | +[**delete_principal**](PolarisDefaultApi.md#delete_principal) | **DELETE** /principals/{principalName} | +[**delete_principal_role**](PolarisDefaultApi.md#delete_principal_role) | **DELETE** /principal-roles/{principalRoleName} | +[**get_catalog**](PolarisDefaultApi.md#get_catalog) | **GET** /catalogs/{catalogName} | +[**get_catalog_role**](PolarisDefaultApi.md#get_catalog_role) | **GET** /catalogs/{catalogName}/catalog-roles/{catalogRoleName} | +[**get_principal**](PolarisDefaultApi.md#get_principal) | **GET** /principals/{principalName} | +[**get_principal_role**](PolarisDefaultApi.md#get_principal_role) | **GET** /principal-roles/{principalRoleName} | 
+[**list_assignee_principal_roles_for_catalog_role**](PolarisDefaultApi.md#list_assignee_principal_roles_for_catalog_role) | **GET** /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/principal-roles | +[**list_assignee_principals_for_principal_role**](PolarisDefaultApi.md#list_assignee_principals_for_principal_role) | **GET** /principal-roles/{principalRoleName}/principals | +[**list_catalog_roles**](PolarisDefaultApi.md#list_catalog_roles) | **GET** /catalogs/{catalogName}/catalog-roles | +[**list_catalog_roles_for_principal_role**](PolarisDefaultApi.md#list_catalog_roles_for_principal_role) | **GET** /principal-roles/{principalRoleName}/catalog-roles/{catalogName} | +[**list_catalogs**](PolarisDefaultApi.md#list_catalogs) | **GET** /catalogs | +[**list_grants_for_catalog_role**](PolarisDefaultApi.md#list_grants_for_catalog_role) | **GET** /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants | +[**list_principal_roles**](PolarisDefaultApi.md#list_principal_roles) | **GET** /principal-roles | +[**list_principal_roles_assigned**](PolarisDefaultApi.md#list_principal_roles_assigned) | **GET** /principals/{principalName}/principal-roles | +[**list_principals**](PolarisDefaultApi.md#list_principals) | **GET** /principals | +[**revoke_catalog_role_from_principal_role**](PolarisDefaultApi.md#revoke_catalog_role_from_principal_role) | **DELETE** /principal-roles/{principalRoleName}/catalog-roles/{catalogName}/{catalogRoleName} | +[**revoke_grant_from_catalog_role**](PolarisDefaultApi.md#revoke_grant_from_catalog_role) | **POST** /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants | +[**revoke_principal_role**](PolarisDefaultApi.md#revoke_principal_role) | **DELETE** /principals/{principalName}/principal-roles/{principalRoleName} | +[**rotate_credentials**](PolarisDefaultApi.md#rotate_credentials) | **POST** /principals/{principalName}/rotate | +[**update_catalog**](PolarisDefaultApi.md#update_catalog) | **PUT** /catalogs/{catalogName} | 
+[**update_catalog_role**](PolarisDefaultApi.md#update_catalog_role) | **PUT** /catalogs/{catalogName}/catalog-roles/{catalogRoleName} | +[**update_principal**](PolarisDefaultApi.md#update_principal) | **PUT** /principals/{principalName} | +[**update_principal_role**](PolarisDefaultApi.md#update_principal_role) | **PUT** /principal-roles/{principalRoleName} | + + +# **add_grant_to_catalog_role** +> add_grant_to_catalog_role(catalog_name, catalog_role_name, add_grant_request=add_grant_request) + + + +Add a new grant to the catalog role + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.add_grant_request import AddGrantRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog where the role will receive the grant + catalog_role_name = 'catalog_role_name_example' # str | The name of the role receiving the grant (must exist) + add_grant_request = polaris.management.AddGrantRequest() # AddGrantRequest | (optional) + + try: + api_instance.add_grant_to_catalog_role(catalog_name, catalog_role_name, add_grant_request=add_grant_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->add_grant_to_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog where the role will receive the grant | + **catalog_role_name** | **str**| The name of the role receiving the grant (must exist) | + **add_grant_request** | [**AddGrantRequest**](AddGrantRequest.md)| | [optional] + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The principal is not authorized to create grants | - | +**404** | The catalog or the role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **assign_catalog_role_to_principal_role** +> 
assign_catalog_role_to_principal_role(principal_role_name, catalog_name, grant_catalog_role_request) + + + +Assign a catalog role to a principal role + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.grant_catalog_role_request import GrantCatalogRoleRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + catalog_name = 'catalog_name_example' # str | The name of the catalog where the catalogRoles reside + grant_catalog_role_request = polaris.management.GrantCatalogRoleRequest() # GrantCatalogRoleRequest | The principal to create + + try: + api_instance.assign_catalog_role_to_principal_role(principal_role_name, catalog_name, grant_catalog_role_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->assign_catalog_role_to_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + **catalog_name** | **str**| The name 
of the catalog where the catalogRoles reside | + **grant_catalog_role_request** | [**GrantCatalogRoleRequest**](GrantCatalogRoleRequest.md)| The principal to create | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The caller does not have permission to assign a catalog role | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **assign_principal_role** +> assign_principal_role(principal_name, grant_principal_role_request) + + + +Add a role to the principal + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.grant_principal_role_request import GrantPrincipalRoleRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The name of the target principal + grant_principal_role_request = polaris.management.GrantPrincipalRoleRequest() # GrantPrincipalRoleRequest | The principal role to assign + + try: + api_instance.assign_principal_role(principal_name, grant_principal_role_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->assign_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The name of the target principal | + **grant_principal_role_request** | [**GrantPrincipalRoleRequest**](GrantPrincipalRoleRequest.md)| The principal role to assign | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The caller does not have permission to assign a role to the principal | - | +**404** | The catalog, the principal, or the role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_catalog** +> create_catalog(create_catalog_request) + + + +Add a new Catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.create_catalog_request import 
CreateCatalogRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + create_catalog_request = polaris.management.CreateCatalogRequest() # CreateCatalogRequest | The Catalog to create + + try: + api_instance.create_catalog(create_catalog_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->create_catalog: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **create_catalog_request** | [**CreateCatalogRequest**](CreateCatalogRequest.md)| The Catalog to create | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The caller does not have permission to create a catalog | - | +**404** | The catalog does not exist | - | +**409** | A catalog with the specified name already exists | - | + +[[Back to top]](#) [[Back to API 
list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_catalog_role** +> create_catalog_role(catalog_name, create_catalog_role_request=create_catalog_role_request) + + + +Create a new role in the catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.create_catalog_role_request import CreateCatalogRoleRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The catalog for which we are reading/updating roles + create_catalog_role_request = polaris.management.CreateCatalogRoleRequest() # CreateCatalogRoleRequest | (optional) + + try: + api_instance.create_catalog_role(catalog_name, create_catalog_role_request=create_catalog_role_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->create_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The catalog for which we are reading/updating roles | + **create_catalog_role_request** | [**CreateCatalogRoleRequest**](CreateCatalogRoleRequest.md)| | [optional] + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The principal is not authorized to create roles | - | +**404** | The catalog does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_principal** +> PrincipalWithCredentials create_principal(create_principal_request) + + + +Create a principal + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.create_principal_request import 
CreatePrincipalRequest +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + create_principal_request = polaris.management.CreatePrincipalRequest() # CreatePrincipalRequest | The principal to create + + try: + api_response = api_instance.create_principal(create_principal_request) + print("The response of PolarisDefaultApi->create_principal:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->create_principal: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **create_principal_request** | [**CreatePrincipalRequest**](CreatePrincipalRequest.md)| The principal to create | + +### Return type + +[**PrincipalWithCredentials**](PrincipalWithCredentials.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| 
+**201** | Successful response | - | +**403** | The caller does not have permission to add a principal | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **create_principal_role** +> create_principal_role(create_principal_role_request) + + + +Create a principal role + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.create_principal_role_request import CreatePrincipalRoleRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + create_principal_role_request = polaris.management.CreatePrincipalRoleRequest() # CreatePrincipalRoleRequest | The principal to create + + try: + api_instance.create_principal_role(create_principal_role_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->create_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **create_principal_role_request** | [**CreatePrincipalRoleRequest**](CreatePrincipalRoleRequest.md)| The principal to create | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The caller does not have permission to add a principal role | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **delete_catalog** +> delete_catalog(catalog_name) + + + +Delete an existing catalog. This is a cascading operation that deletes all metadata, including principals, roles and grants. If the catalog is an internal catalog, all tables and namespaces are dropped without purge. 
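The generated examples in this document catch a bare `Exception`; in practice the documented HTTP status codes can be distinguished through the client's `ApiException`, which exposes a `status` attribute. Below is a minimal sketch of that dispatch pattern for the cascading catalog delete. It uses a local `ApiException` stand-in so the snippet runs without a live Polaris server, and `delete_catalog_or_report` / `fake_delete` are illustrative names, not part of the generated client:

```python
# Sketch of mapping the documented delete_catalog responses (204/403/404)
# to readable outcomes. The real client's ApiException also carries `.status`;
# a stand-in is defined here so the example is self-contained.
class ApiException(Exception):
    def __init__(self, status, reason=""):
        super().__init__(f"({status}) {reason}")
        self.status = status
        self.reason = reason


def delete_catalog_or_report(delete_fn, catalog_name):
    """Call delete_fn (e.g. api_instance.delete_catalog) and map documented statuses."""
    try:
        delete_fn(catalog_name)  # 204: success, no content
        return "deleted"
    except ApiException as e:
        if e.status == 404:
            return "catalog does not exist"
        if e.status == 403:
            return "not authorized to delete catalogs"
        raise  # unexpected status: surface it


def fake_delete(name):
    # Simulate the server responding 404 for an unknown catalog
    raise ApiException(404, "Not Found")


print(delete_catalog_or_report(fake_delete, "example_catalog"))  # catalog does not exist
```

With the real client, `delete_fn` would be `api_instance.delete_catalog`, and the same mapping applies to the status codes listed in the response table for this endpoint.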
+ +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog + + try: + api_instance.delete_catalog(catalog_name) + except Exception as e: + print("Exception when calling PolarisDefaultApi->delete_catalog: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**403** | The caller does not have permission to delete a catalog | - | +**404** | The catalog does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model 
list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **delete_catalog_role** +> delete_catalog_role(catalog_name, catalog_role_name) + + + +Delete an existing role from the catalog. All associated grants will also be deleted + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The catalog for which we are retrieving roles + catalog_role_name = 'catalog_role_name_example' # str | The name of the role + + try: + api_instance.delete_catalog_role(catalog_name, catalog_role_name) + except Exception as e: + print("Exception when calling PolarisDefaultApi->delete_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The catalog for which we are retrieving roles | + **catalog_role_name** | **str**| The name of the role | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not 
defined + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**403** | The principal is not authorized to delete roles | - | +**404** | The catalog or the role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **delete_principal** +> delete_principal(principal_name) + + + +Remove a principal from polaris + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The principal name + + try: + api_instance.delete_principal(principal_name) + except Exception as e: + print("Exception when calling PolarisDefaultApi->delete_principal: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The principal name | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**403** | The caller does not have permission to delete a principal | - | +**404** | The principal does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **delete_principal_role** +> delete_principal_role(principal_role_name) + + + +Remove a principal role from polaris + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. 
+configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + + try: + api_instance.delete_principal_role(principal_role_name) + except Exception as e: + print("Exception when calling PolarisDefaultApi->delete_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**403** | The caller does not have permission to delete a principal role | - | +**404** | The principal role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **get_catalog** +> Catalog get_catalog(catalog_name) + + + +Get the details of a catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.catalog import Catalog 
+from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog + + try: + api_response = api_instance.get_catalog(catalog_name) + print("The response of PolarisDefaultApi->get_catalog:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->get_catalog: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog | + +### Return type + +[**Catalog**](Catalog.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The catalog details | - | +**403** | The caller does not have permission to read catalog details | - | +**404** | The catalog does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model 
list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **get_catalog_role** +> CatalogRole get_catalog_role(catalog_name, catalog_role_name) + + + +Get the details of an existing role + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.catalog_role import CatalogRole +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The catalog for which we are retrieving roles + catalog_role_name = 'catalog_role_name_example' # str | The name of the role + + try: + api_response = api_instance.get_catalog_role(catalog_name, catalog_role_name) + print("The response of PolarisDefaultApi->get_catalog_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->get_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The catalog for which we are retrieving roles | + **catalog_role_name** | **str**| The name of the role | + +### Return type + 
+[**CatalogRole**](CatalogRole.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The specified role details | - | +**403** | The principal is not authorized to read role data | - | +**404** | The catalog or the role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **get_principal** +> Principal get_principal(principal_name) + + + +Get the principal details + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.principal import Principal +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+import os
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The principal name + + try: + api_response = api_instance.get_principal(principal_name) + print("The response of PolarisDefaultApi->get_principal:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->get_principal: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The principal name | + +### Return type + +[**Principal**](Principal.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The requested principal | - | +**403** | The caller does not have permission to get principal details | - | +**404** | The catalog or principal does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **get_principal_role** +> PrincipalRole get_principal_role(principal_role_name) + + + +Get the principal role details + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.principal_role import PrincipalRole +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported
configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + + try: + api_response = api_instance.get_principal_role(principal_role_name) + print("The response of PolarisDefaultApi->get_principal_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->get_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + +### Return type + +[**PrincipalRole**](PrincipalRole.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The requested principal role | - | +**403** | The caller does not have permission to get principal role details | - | +**404** | The principal role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_assignee_principal_roles_for_catalog_role** +> PrincipalRoles 
list_assignee_principal_roles_for_catalog_role(catalog_name, catalog_role_name) + + + +List the PrincipalRoles to whom the target catalog role has been assigned + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.principal_roles import PrincipalRoles +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog where the catalog role resides + catalog_role_name = 'catalog_role_name_example' # str | The name of the catalog role + + try: + api_response = api_instance.list_assignee_principal_roles_for_catalog_role(catalog_name, catalog_role_name) + print("The response of PolarisDefaultApi->list_assignee_principal_roles_for_catalog_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_assignee_principal_roles_for_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog where the catalog role resides | + **catalog_role_name** |
**str**| The name of the catalog role | + +### Return type + +[**PrincipalRoles**](PrincipalRoles.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List the PrincipalRoles to whom the target catalog role has been assigned | - | +**403** | The caller does not have permission to list principal roles | - | +**404** | The catalog or catalog role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_assignee_principals_for_principal_role** +> Principals list_assignee_principals_for_principal_role(principal_role_name) + + + +List the Principals to whom the target principal role has been assigned + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.principals import Principals +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case.
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + + try: + api_response = api_instance.list_assignee_principals_for_principal_role(principal_role_name) + print("The response of PolarisDefaultApi->list_assignee_principals_for_principal_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_assignee_principals_for_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + +### Return type + +[**Principals**](Principals.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List the Principals to whom the target principal role has been assigned | - | +**403** | The caller does not have permission to list principals | - | +**404** | The principal role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_catalog_roles** +> CatalogRoles list_catalog_roles(catalog_name) + + + +List existing roles in the catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.catalog_roles import CatalogRoles +from polaris.management.rest import ApiException +from pprint import
pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The catalog for which we are reading/updating roles + + try: + api_response = api_instance.list_catalog_roles(catalog_name) + print("The response of PolarisDefaultApi->list_catalog_roles:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_catalog_roles: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The catalog for which we are reading/updating roles | + +### Return type + +[**CatalogRoles**](CatalogRoles.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The list of roles that exist in this catalog | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# 
**list_catalog_roles_for_principal_role** +> CatalogRoles list_catalog_roles_for_principal_role(principal_role_name, catalog_name) + + + +Get the catalog roles mapped to the principal role + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.catalog_roles import CatalogRoles +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + catalog_name = 'catalog_name_example' # str | The name of the catalog where the catalogRoles reside + + try: + api_response = api_instance.list_catalog_roles_for_principal_role(principal_role_name, catalog_name) + print("The response of PolarisDefaultApi->list_catalog_roles_for_principal_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_catalog_roles_for_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + **catalog_name** | **str**| The name of the catalog
where the catalogRoles reside | + +### Return type + +[**CatalogRoles**](CatalogRoles.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The list of catalog roles mapped to the principal role | - | +**403** | The caller does not have permission to list catalog roles | - | +**404** | The principal role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_catalogs** +> Catalogs list_catalogs() + + + +List all catalogs in this Polaris service + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.catalogs import Catalogs +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case.
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + + try: + api_response = api_instance.list_catalogs() + print("The response of PolarisDefaultApi->list_catalogs:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_catalogs: %s\n" % e) +``` + + + +### Parameters + +This endpoint does not need any parameter. + +### Return type + +[**Catalogs**](Catalogs.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List of catalogs in the Polaris service | - | +**403** | The caller does not have permission to list catalog details | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_grants_for_catalog_role** +> GrantResources list_grants_for_catalog_role(catalog_name, catalog_role_name) + + + +List the grants the catalog role holds + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.grant_resources import GrantResources +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters.
+configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog where the role will receive the grant + catalog_role_name = 'catalog_role_name_example' # str | The name of the role receiving the grant (must exist) + + try: + api_response = api_instance.list_grants_for_catalog_role(catalog_name, catalog_role_name) + print("The response of PolarisDefaultApi->list_grants_for_catalog_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_grants_for_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog where the role will receive the grant | + **catalog_role_name** | **str**| The name of the role receiving the grant (must exist) | + +### Return type + +[**GrantResources**](GrantResources.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List of all grants given to the role in this catalog | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to 
Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_principal_roles** +> PrincipalRoles list_principal_roles() + + + +List the principal roles + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.principal_roles import PrincipalRoles +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + + try: + api_response = api_instance.list_principal_roles() + print("The response of PolarisDefaultApi->list_principal_roles:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_principal_roles: %s\n" % e) +``` + + + +### Parameters + +This endpoint does not need any parameter.
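Since `list_principal_roles` returns a `PrincipalRoles` wrapper rather than a plain list, callers usually unwrap it before further processing. A minimal sketch, assuming the generated model exposes a `roles` attribute holding `PrincipalRole` objects with a `name` field (stand-in objects are used below so the snippet runs without a live server):

```python
from types import SimpleNamespace

def role_names(principal_roles):
    # Extract and sort the role names from a PrincipalRoles-style response.
    return sorted(role.name for role in principal_roles.roles)

# Stand-in for the PrincipalRoles response returned by list_principal_roles().
response = SimpleNamespace(roles=[
    SimpleNamespace(name="service_admin"),
    SimpleNamespace(name="data_engineer"),
])
print(role_names(response))  # → ['data_engineer', 'service_admin']
```

In real usage, `api_instance.list_principal_roles()` would take the place of the stand-in `response` object.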
+ +### Return type + +[**PrincipalRoles**](PrincipalRoles.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List of principal roles | - | +**403** | The caller does not have permission to list principal roles | - | +**404** | The catalog does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_principal_roles_assigned** +> PrincipalRoles list_principal_roles_assigned(principal_name) + + + +List the roles assigned to the principal + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.principal_roles import PrincipalRoles +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case.
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The name of the target principal + + try: + api_response = api_instance.list_principal_roles_assigned(principal_name) + print("The response of PolarisDefaultApi->list_principal_roles_assigned:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_principal_roles_assigned: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The name of the target principal | + +### Return type + +[**PrincipalRoles**](PrincipalRoles.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List of roles assigned to this principal | - | +**403** | The caller does not have permission to list roles | - | +**404** | The principal or catalog does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **list_principals** +> Principals list_principals() + + + +List the principals for the current catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.principals import Principals +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to
https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + + try: + api_response = api_instance.list_principals() + print("The response of PolarisDefaultApi->list_principals:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->list_principals: %s\n" % e) +``` + + + +### Parameters + +This endpoint does not need any parameter. 
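The 403/404 responses documented for this endpoint can be surfaced as friendlier messages by inspecting the HTTP status of the raised exception. A hedged sketch (generated `ApiException` objects typically carry a `status` attribute; a stand-in class is defined here so the snippet is self-contained and runnable without a server):

```python
class ApiException(Exception):
    # Stand-in for polaris.management.rest.ApiException, which typically
    # exposes the HTTP status code of the failed request.
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def explain_list_principals_error(exc):
    # Map the status codes documented for list_principals to their meanings.
    messages = {
        403: "The caller does not have permission to list catalog admins",
        404: "The catalog does not exist",
    }
    return messages.get(exc.status, "Unexpected error")

print(explain_list_principals_error(ApiException(404)))  # → The catalog does not exist
```

In real usage the `except` clause of the example above would catch the client's own `ApiException` and pass it to such a helper.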
+ +### Return type + +[**Principals**](Principals.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | List of principals for this catalog | - | +**403** | The caller does not have permission to list catalog admins | - | +**404** | The catalog does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **revoke_catalog_role_from_principal_role** +> revoke_catalog_role_from_principal_role(principal_role_name, catalog_name, catalog_role_name) + + + +Remove a catalog role from a principal role + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case.
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + catalog_name = 'catalog_name_example' # str | The name of the catalog that contains the role to revoke + catalog_role_name = 'catalog_role_name_example' # str | The name of the catalog role that should be revoked + + try: + api_instance.revoke_catalog_role_from_principal_role(principal_role_name, catalog_name, catalog_role_name) + except Exception as e: + print("Exception when calling PolarisDefaultApi->revoke_catalog_role_from_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + **catalog_name** | **str**| The name of the catalog that contains the role to revoke | + **catalog_role_name** | **str**| The name of the catalog role that should be revoked | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**403** | The caller does not have permission to revoke a catalog role | - | +**404** | The principal role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **revoke_grant_from_catalog_role** +> revoke_grant_from_catalog_role(catalog_name, catalog_role_name, 
cascade=cascade, revoke_grant_request=revoke_grant_request) + + + +Delete a specific grant from the role. This may be a subset or a superset of the grants the role has. In case of a subset, the role will retain the grants not specified. If the `cascade` parameter is true, grant revocation will have a cascading effect - that is, if a principal has specific grants on a subresource, and grants are revoked on a parent resource, the grants present on the subresource will be revoked as well. By default, this behavior is disabled and grant revocation only affects the specified resource. + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.models.revoke_grant_request import RevokeGrantRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog where the role will receive the grant + catalog_role_name = 'catalog_role_name_example' # str | The name of the role receiving the grant (must exist) + cascade = False # bool | If true, the grant revocation cascades to all subresources.
(optional) (default to False) + revoke_grant_request = polaris.management.RevokeGrantRequest() # RevokeGrantRequest | (optional) + + try: + api_instance.revoke_grant_from_catalog_role(catalog_name, catalog_role_name, cascade=cascade, revoke_grant_request=revoke_grant_request) + except Exception as e: + print("Exception when calling PolarisDefaultApi->revoke_grant_from_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog where the role will receive the grant | + **catalog_role_name** | **str**| The name of the role receiving the grant (must exist) | + **cascade** | **bool**| If true, the grant revocation cascades to all subresources. | [optional] [default to False] + **revoke_grant_request** | [**RevokeGrantRequest**](RevokeGrantRequest.md)| | [optional] + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**201** | Successful response | - | +**403** | The principal is not authorized to create grants | - | +**404** | The catalog or the role does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **revoke_principal_role** +> revoke_principal_role(principal_name, principal_role_name) + + + +Remove a role from a catalog principal + +### Example + +* OAuth Authentication (OAuth2): + +```python +import os +import polaris.management +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See
configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The name of the target principal + principal_role_name = 'principal_role_name_example' # str | The name of the role + + try: + api_instance.revoke_principal_role(principal_name, principal_role_name) + except Exception as e: + print("Exception when calling PolarisDefaultApi->revoke_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The name of the target principal | + **principal_role_name** | **str**| The name of the role | + +### Return type + +void (empty response body) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: Not defined + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**204** | Success, no content | - | +**403** | The caller does not have permission to remove a role from the principal | - | +**404** | The catalog or principal does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to 
README]](../README.md) + +# **rotate_credentials** +> PrincipalWithCredentials rotate_credentials(principal_name) + + + +Rotate a principal's credentials. The new credentials will be returned in the response. This is the only API, aside from createPrincipal, that returns the user's credentials. This API is *not* idempotent. + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The user name + + try: + api_response = api_instance.rotate_credentials(principal_name) + print("The response of PolarisDefaultApi->rotate_credentials:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->rotate_credentials: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The user name | + +### Return type + +[**PrincipalWithCredentials**](PrincipalWithCredentials.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: Not defined + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The principal details along with the newly rotated credentials | - | +**403** | The caller does not have permission to rotate credentials | - | +**404** | The principal does not exist | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **update_catalog** +> Catalog update_catalog(catalog_name, update_catalog_request) + + + +Update an existing catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.catalog import Catalog +from polaris.management.models.update_catalog_request import UpdateCatalogRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the 
host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The name of the catalog + update_catalog_request = polaris.management.UpdateCatalogRequest() # UpdateCatalogRequest | The catalog details to use in the update + + try: + api_response = api_instance.update_catalog(catalog_name, update_catalog_request) + print("The response of PolarisDefaultApi->update_catalog:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->update_catalog: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The name of the catalog | + **update_catalog_request** | [**UpdateCatalogRequest**](UpdateCatalogRequest.md)| The catalog details to use in the update | + +### Return type + +[**Catalog**](Catalog.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The catalog details | - | +**403** | The caller does not have permission to 
update catalog details | - | +**404** | The catalog does not exist | - | +**409** | The entity version doesn't match the currentEntityVersion; retry after fetching latest version | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **update_catalog_role** +> CatalogRole update_catalog_role(catalog_name, catalog_role_name, update_catalog_role_request=update_catalog_role_request) + + + +Update an existing role in the catalog + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.catalog_role import CatalogRole +from polaris.management.models.update_catalog_role_request import UpdateCatalogRoleRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + catalog_name = 'catalog_name_example' # str | The catalog for which we are retrieving roles + catalog_role_name = 'catalog_role_name_example' # str | The name of the role + update_catalog_role_request = polaris.management.UpdateCatalogRoleRequest() # UpdateCatalogRoleRequest | (optional) + + try: + api_response = api_instance.update_catalog_role(catalog_name, catalog_role_name, update_catalog_role_request=update_catalog_role_request) + print("The response of PolarisDefaultApi->update_catalog_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->update_catalog_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **catalog_name** | **str**| The catalog for which we are retrieving roles | + **catalog_role_name** | **str**| The name of the role | + **update_catalog_role_request** | [**UpdateCatalogRoleRequest**](UpdateCatalogRoleRequest.md)| | [optional] + +### Return type + +[**CatalogRole**](CatalogRole.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The specified role details | - | +**403** | The principal is not authorized to update roles | - | +**404** | The catalog or the role does not exist | - | +**409** | The entity version doesn't match the currentEntityVersion; retry after fetching latest version | - | + +[[Back to top]](#) [[Back to API 
list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **update_principal** +> Principal update_principal(principal_name, update_principal_request) + + + +Update an existing principal + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.principal import Principal +from polaris.management.models.update_principal_request import UpdatePrincipalRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. 
+ +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_name = 'principal_name_example' # str | The principal name + update_principal_request = polaris.management.UpdatePrincipalRequest() # UpdatePrincipalRequest | The principal details to use in the update + + try: + api_response = api_instance.update_principal(principal_name, update_principal_request) + print("The response of PolarisDefaultApi->update_principal:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->update_principal: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_name** | **str**| The principal name | + **update_principal_request** | [**UpdatePrincipalRequest**](UpdatePrincipalRequest.md)| The principal details to use in the update | + +### Return type + +[**Principal**](Principal.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The updated principal | - | +**403** | The caller does not have permission to update principal details | - | +**404** | The principal does not exist | - | +**409** | The entity version doesn't match the currentEntityVersion; retry after fetching latest version | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + +# **update_principal_role** +> PrincipalRole 
update_principal_role(principal_role_name, update_principal_role_request) + + + +Update an existing principalRole + +### Example + +* OAuth Authentication (OAuth2): + +```python +import polaris.management +from polaris.management.models.principal_role import PrincipalRole +from polaris.management.models.update_principal_role_request import UpdatePrincipalRoleRequest +from polaris.management.rest import ApiException +from pprint import pprint + +# Defining the host is optional and defaults to https://localhost/api/management/v1 +# See configuration.py for a list of all supported configuration parameters. +configuration = polaris.management.Configuration( + host = "https://localhost/api/management/v1" +) + +# The client must configure the authentication and authorization parameters +# in accordance with the API server security policy. +# Examples for each auth method are provided below, use the example that +# satisfies your auth use case. + +configuration.access_token = os.environ["ACCESS_TOKEN"] + +# Enter a context with an instance of the API client +with polaris.management.ApiClient(configuration) as api_client: + # Create an instance of the API class + api_instance = polaris.management.PolarisDefaultApi(api_client) + principal_role_name = 'principal_role_name_example' # str | The principal role name + update_principal_role_request = polaris.management.UpdatePrincipalRoleRequest() # UpdatePrincipalRoleRequest | The principalRole details to use in the update + + try: + api_response = api_instance.update_principal_role(principal_role_name, update_principal_role_request) + print("The response of PolarisDefaultApi->update_principal_role:\n") + pprint(api_response) + except Exception as e: + print("Exception when calling PolarisDefaultApi->update_principal_role: %s\n" % e) +``` + + + +### Parameters + + +Name | Type | Description | Notes +------------- | ------------- | ------------- | ------------- + **principal_role_name** | **str**| The principal role name | + 
**update_principal_role_request** | [**UpdatePrincipalRoleRequest**](UpdatePrincipalRoleRequest.md)| The principalRole details to use in the update | + +### Return type + +[**PrincipalRole**](PrincipalRole.md) + +### Authorization + +[OAuth2](../README.md#OAuth2) + +### HTTP request headers + + - **Content-Type**: application/json + - **Accept**: application/json + +### HTTP response details + +| Status code | Description | Response headers | +|-------------|-------------|------------------| +**200** | The updated principal role | - | +**403** | The caller does not have permission to update principal role details | - | +**404** | The principal role does not exist | - | +**409** | The entity version doesn't match the currentEntityVersion; retry after fetching latest version | - | + +[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) + diff --git a/regtests/client/python/docs/PositionDeleteFile.md b/regtests/client/python/docs/PositionDeleteFile.md new file mode 100644 index 0000000000..dc39d2b5b3 --- /dev/null +++ b/regtests/client/python/docs/PositionDeleteFile.md @@ -0,0 +1,45 @@ + +# PositionDeleteFile + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**content** | **str** | | + +## Example + +```python +from polaris.catalog.models.position_delete_file import PositionDeleteFile + +# TODO update the JSON string below +json = "{}" +# create an instance of PositionDeleteFile from a JSON string +position_delete_file_instance = PositionDeleteFile.from_json(json) +# print the JSON string representation of the object +print(PositionDeleteFile.to_json()) + +# convert the object into a dict +position_delete_file_dict = position_delete_file_instance.to_dict() +# create an instance of PositionDeleteFile from a dict +position_delete_file_from_dict = 
PositionDeleteFile.from_dict(position_delete_file_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PrimitiveTypeValue.md b/regtests/client/python/docs/PrimitiveTypeValue.md new file mode 100644 index 0000000000..585370edc7 --- /dev/null +++ b/regtests/client/python/docs/PrimitiveTypeValue.md @@ -0,0 +1,44 @@ + +# PrimitiveTypeValue + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- + +## Example + +```python +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue + +# TODO update the JSON string below +json = "{}" +# create an instance of PrimitiveTypeValue from a JSON string +primitive_type_value_instance = PrimitiveTypeValue.from_json(json) +# print the JSON string representation of the object +print(PrimitiveTypeValue.to_json()) + +# convert the object into a dict +primitive_type_value_dict = primitive_type_value_instance.to_dict() +# create an instance of PrimitiveTypeValue from a dict +primitive_type_value_from_dict = PrimitiveTypeValue.from_dict(primitive_type_value_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Principal.md b/regtests/client/python/docs/Principal.md new file mode 100644 index 0000000000..a47d5cf390 --- /dev/null +++ b/regtests/client/python/docs/Principal.md @@ -0,0 +1,52 @@ + +# Principal + +A Polaris principal. 
+ +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**name** | **str** | | +**client_id** | **str** | The output-only OAuth clientId associated with this principal if applicable | [optional] +**properties** | **Dict[str, str]** | | [optional] +**create_timestamp** | **int** | | [optional] +**last_update_timestamp** | **int** | | [optional] +**entity_version** | **int** | The version of the principal object used to determine if the principal metadata has changed | [optional] + +## Example + +```python +from polaris.management.models.principal import Principal + +# TODO update the JSON string below +json = "{}" +# create an instance of Principal from a JSON string +principal_instance = Principal.from_json(json) +# print the JSON string representation of the object +print(Principal.to_json()) + +# convert the object into a dict +principal_dict = principal_instance.to_dict() +# create an instance of Principal from a dict +principal_from_dict = Principal.from_dict(principal_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PrincipalRole.md b/regtests/client/python/docs/PrincipalRole.md new file mode 100644 index 0000000000..6fb31653cd --- /dev/null +++ b/regtests/client/python/docs/PrincipalRole.md @@ -0,0 +1,49 @@ + +# PrincipalRole + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**name** | **str** | The name of the role | +**properties** | **Dict[str, str]** | | [optional] +**create_timestamp** | **int** | | [optional] +**last_update_timestamp** | **int** | | [optional] +**entity_version** | **int** | The version of the principal role object used to determine if the principal role metadata has changed | [optional] + +## Example + +```python +from 
polaris.management.models.principal_role import PrincipalRole + +# TODO update the JSON string below +json = "{}" +# create an instance of PrincipalRole from a JSON string +principal_role_instance = PrincipalRole.from_json(json) +# print the JSON string representation of the object +print(PrincipalRole.to_json()) + +# convert the object into a dict +principal_role_dict = principal_role_instance.to_dict() +# create an instance of PrincipalRole from a dict +principal_role_from_dict = PrincipalRole.from_dict(principal_role_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PrincipalRoles.md b/regtests/client/python/docs/PrincipalRoles.md new file mode 100644 index 0000000000..a16037dd2b --- /dev/null +++ b/regtests/client/python/docs/PrincipalRoles.md @@ -0,0 +1,45 @@ + +# PrincipalRoles + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**roles** | [**List[PrincipalRole]**](PrincipalRole.md) | | + +## Example + +```python +from polaris.management.models.principal_roles import PrincipalRoles + +# TODO update the JSON string below +json = "{}" +# create an instance of PrincipalRoles from a JSON string +principal_roles_instance = PrincipalRoles.from_json(json) +# print the JSON string representation of the object +print(PrincipalRoles.to_json()) + +# convert the object into a dict +principal_roles_dict = principal_roles_instance.to_dict() +# create an instance of PrincipalRoles from a dict +principal_roles_from_dict = PrincipalRoles.from_dict(principal_roles_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PrincipalWithCredentials.md 
b/regtests/client/python/docs/PrincipalWithCredentials.md new file mode 100644 index 0000000000..fe1006172a --- /dev/null +++ b/regtests/client/python/docs/PrincipalWithCredentials.md @@ -0,0 +1,48 @@ + +# PrincipalWithCredentials + +A user with its client id and secret. This type is returned when a new principal is created or when its credentials are rotated + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**principal** | [**Principal**](Principal.md) | | +**credentials** | [**PrincipalWithCredentialsCredentials**](PrincipalWithCredentialsCredentials.md) | | + +## Example + +```python +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials + +# TODO update the JSON string below +json = "{}" +# create an instance of PrincipalWithCredentials from a JSON string +principal_with_credentials_instance = PrincipalWithCredentials.from_json(json) +# print the JSON string representation of the object +print(PrincipalWithCredentials.to_json()) + +# convert the object into a dict +principal_with_credentials_dict = principal_with_credentials_instance.to_dict() +# create an instance of PrincipalWithCredentials from a dict +principal_with_credentials_from_dict = PrincipalWithCredentials.from_dict(principal_with_credentials_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/PrincipalWithCredentialsCredentials.md b/regtests/client/python/docs/PrincipalWithCredentialsCredentials.md new file mode 100644 index 0000000000..f3cdb77d8d --- /dev/null +++ b/regtests/client/python/docs/PrincipalWithCredentialsCredentials.md @@ -0,0 +1,46 @@ + +# PrincipalWithCredentialsCredentials + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**client_id** | **str** | | 
[optional] +**client_secret** | **str** | | [optional] + +## Example + +```python +from polaris.management.models.principal_with_credentials_credentials import PrincipalWithCredentialsCredentials + +# TODO update the JSON string below +json = "{}" +# create an instance of PrincipalWithCredentialsCredentials from a JSON string +principal_with_credentials_credentials_instance = PrincipalWithCredentialsCredentials.from_json(json) +# print the JSON string representation of the object +print(PrincipalWithCredentialsCredentials.to_json()) + +# convert the object into a dict +principal_with_credentials_credentials_dict = principal_with_credentials_credentials_instance.to_dict() +# create an instance of PrincipalWithCredentialsCredentials from a dict +principal_with_credentials_credentials_from_dict = PrincipalWithCredentialsCredentials.from_dict(principal_with_credentials_credentials_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Principals.md b/regtests/client/python/docs/Principals.md new file mode 100644 index 0000000000..dc45c78635 --- /dev/null +++ b/regtests/client/python/docs/Principals.md @@ -0,0 +1,47 @@ + +# Principals + +A list of Principals + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**principals** | [**List[Principal]**](Principal.md) | | + +## Example + +```python +from polaris.management.models.principals import Principals + +# TODO update the JSON string below +json = "{}" +# create an instance of Principals from a JSON string +principals_instance = Principals.from_json(json) +# print the JSON string representation of the object +print(Principals.to_json()) + +# convert the object into a dict +principals_dict = principals_instance.to_dict() +# create an instance of Principals from a dict +principals_from_dict = 
Principals.from_dict(principals_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RegisterTableRequest.md b/regtests/client/python/docs/RegisterTableRequest.md new file mode 100644 index 0000000000..9aab7a364e --- /dev/null +++ b/regtests/client/python/docs/RegisterTableRequest.md @@ -0,0 +1,46 @@ + +# RegisterTableRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**name** | **str** | | +**metadata_location** | **str** | | + +## Example + +```python +from polaris.catalog.models.register_table_request import RegisterTableRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of RegisterTableRequest from a JSON string +register_table_request_instance = RegisterTableRequest.from_json(json) +# print the JSON string representation of the object +print(RegisterTableRequest.to_json()) + +# convert the object into a dict +register_table_request_dict = register_table_request_instance.to_dict() +# create an instance of RegisterTableRequest from a dict +register_table_request_from_dict = RegisterTableRequest.from_dict(register_table_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RemovePartitionStatisticsUpdate.md b/regtests/client/python/docs/RemovePartitionStatisticsUpdate.md new file mode 100644 index 0000000000..6d98532cf9 --- /dev/null +++ b/regtests/client/python/docs/RemovePartitionStatisticsUpdate.md @@ -0,0 +1,46 @@ + +# RemovePartitionStatisticsUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**snapshot_id** | **int** | | + 
+## Example + +```python +from polaris.catalog.models.remove_partition_statistics_update import RemovePartitionStatisticsUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of RemovePartitionStatisticsUpdate from a JSON string +remove_partition_statistics_update_instance = RemovePartitionStatisticsUpdate.from_json(json) +# print the JSON string representation of the object +print(RemovePartitionStatisticsUpdate.to_json()) + +# convert the object into a dict +remove_partition_statistics_update_dict = remove_partition_statistics_update_instance.to_dict() +# create an instance of RemovePartitionStatisticsUpdate from a dict +remove_partition_statistics_update_from_dict = RemovePartitionStatisticsUpdate.from_dict(remove_partition_statistics_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RemovePropertiesUpdate.md b/regtests/client/python/docs/RemovePropertiesUpdate.md new file mode 100644 index 0000000000..f7619f9117 --- /dev/null +++ b/regtests/client/python/docs/RemovePropertiesUpdate.md @@ -0,0 +1,46 @@ + +# RemovePropertiesUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**removals** | **List[str]** | | + +## Example + +```python +from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of RemovePropertiesUpdate from a JSON string +remove_properties_update_instance = RemovePropertiesUpdate.from_json(json) +# print the JSON string representation of the object +print(RemovePropertiesUpdate.to_json()) + +# convert the object into a dict +remove_properties_update_dict = remove_properties_update_instance.to_dict() +# create an instance of RemovePropertiesUpdate from a 
dict +remove_properties_update_from_dict = RemovePropertiesUpdate.from_dict(remove_properties_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RemoveSnapshotRefUpdate.md b/regtests/client/python/docs/RemoveSnapshotRefUpdate.md new file mode 100644 index 0000000000..0640643fad --- /dev/null +++ b/regtests/client/python/docs/RemoveSnapshotRefUpdate.md @@ -0,0 +1,46 @@ + +# RemoveSnapshotRefUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**ref_name** | **str** | | + +## Example + +```python +from polaris.catalog.models.remove_snapshot_ref_update import RemoveSnapshotRefUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of RemoveSnapshotRefUpdate from a JSON string +remove_snapshot_ref_update_instance = RemoveSnapshotRefUpdate.from_json(json) +# print the JSON string representation of the object +print(RemoveSnapshotRefUpdate.to_json()) + +# convert the object into a dict +remove_snapshot_ref_update_dict = remove_snapshot_ref_update_instance.to_dict() +# create an instance of RemoveSnapshotRefUpdate from a dict +remove_snapshot_ref_update_from_dict = RemoveSnapshotRefUpdate.from_dict(remove_snapshot_ref_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RemoveSnapshotsUpdate.md b/regtests/client/python/docs/RemoveSnapshotsUpdate.md new file mode 100644 index 0000000000..cf0292caeb --- /dev/null +++ b/regtests/client/python/docs/RemoveSnapshotsUpdate.md @@ -0,0 +1,46 @@ + +# RemoveSnapshotsUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- 
| ------------- +**action** | **str** | | +**snapshot_ids** | **List[int]** | | + +## Example + +```python +from polaris.catalog.models.remove_snapshots_update import RemoveSnapshotsUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of RemoveSnapshotsUpdate from a JSON string +remove_snapshots_update_instance = RemoveSnapshotsUpdate.from_json(json) +# print the JSON string representation of the object +print(remove_snapshots_update_instance.to_json()) + +# convert the object into a dict +remove_snapshots_update_dict = remove_snapshots_update_instance.to_dict() +# create an instance of RemoveSnapshotsUpdate from a dict +remove_snapshots_update_from_dict = RemoveSnapshotsUpdate.from_dict(remove_snapshots_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RemoveStatisticsUpdate.md b/regtests/client/python/docs/RemoveStatisticsUpdate.md new file mode 100644 index 0000000000..a6c8e89aff --- /dev/null +++ b/regtests/client/python/docs/RemoveStatisticsUpdate.md @@ -0,0 +1,46 @@ + +# RemoveStatisticsUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**snapshot_id** | **int** | | + +## Example + +```python +from polaris.catalog.models.remove_statistics_update import RemoveStatisticsUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of RemoveStatisticsUpdate from a JSON string +remove_statistics_update_instance = RemoveStatisticsUpdate.from_json(json) +# print the JSON string representation of the object +print(remove_statistics_update_instance.to_json()) + +# convert the object into a dict +remove_statistics_update_dict = remove_statistics_update_instance.to_dict() +# create an instance of RemoveStatisticsUpdate from a dict +remove_statistics_update_from_dict =
RemoveStatisticsUpdate.from_dict(remove_statistics_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RenameTableRequest.md b/regtests/client/python/docs/RenameTableRequest.md new file mode 100644 index 0000000000..ff757923a1 --- /dev/null +++ b/regtests/client/python/docs/RenameTableRequest.md @@ -0,0 +1,46 @@ + +# RenameTableRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**source** | [**TableIdentifier**](TableIdentifier.md) | | +**destination** | [**TableIdentifier**](TableIdentifier.md) | | + +## Example + +```python +from polaris.catalog.models.rename_table_request import RenameTableRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of RenameTableRequest from a JSON string +rename_table_request_instance = RenameTableRequest.from_json(json) +# print the JSON string representation of the object +print(rename_table_request_instance.to_json()) + +# convert the object into a dict +rename_table_request_dict = rename_table_request_instance.to_dict() +# create an instance of RenameTableRequest from a dict +rename_table_request_from_dict = RenameTableRequest.from_dict(rename_table_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ReportMetricsRequest.md b/regtests/client/python/docs/ReportMetricsRequest.md new file mode 100644 index 0000000000..9b8f351de0 --- /dev/null +++ b/regtests/client/python/docs/ReportMetricsRequest.md @@ -0,0 +1,55 @@ + +# ReportMetricsRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**report_type** | **str** | | +**table_name**
| **str** | | +**snapshot_id** | **int** | | +**filter** | [**Expression**](Expression.md) | | +**schema_id** | **int** | | +**projected_field_ids** | **List[int]** | | +**projected_field_names** | **List[str]** | | +**metrics** | [**Dict[str, MetricResult]**](MetricResult.md) | | +**metadata** | **Dict[str, str]** | | [optional] +**sequence_number** | **int** | | +**operation** | **str** | | + +## Example + +```python +from polaris.catalog.models.report_metrics_request import ReportMetricsRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of ReportMetricsRequest from a JSON string +report_metrics_request_instance = ReportMetricsRequest.from_json(json) +# print the JSON string representation of the object +print(report_metrics_request_instance.to_json()) + +# convert the object into a dict +report_metrics_request_dict = report_metrics_request_instance.to_dict() +# create an instance of ReportMetricsRequest from a dict +report_metrics_request_from_dict = ReportMetricsRequest.from_dict(report_metrics_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/RevokeGrantRequest.md b/regtests/client/python/docs/RevokeGrantRequest.md new file mode 100644 index 0000000000..7271a48417 --- /dev/null +++ b/regtests/client/python/docs/RevokeGrantRequest.md @@ -0,0 +1,45 @@ + +# RevokeGrantRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**grant** | [**GrantResource**](GrantResource.md) | | [optional] + +## Example + +```python +from polaris.management.models.revoke_grant_request import RevokeGrantRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of RevokeGrantRequest from a JSON string +revoke_grant_request_instance = RevokeGrantRequest.from_json(json) +# print the JSON string
representation of the object +print(revoke_grant_request_instance.to_json()) + +# convert the object into a dict +revoke_grant_request_dict = revoke_grant_request_instance.to_dict() +# create an instance of RevokeGrantRequest from a dict +revoke_grant_request_from_dict = RevokeGrantRequest.from_dict(revoke_grant_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SQLViewRepresentation.md b/regtests/client/python/docs/SQLViewRepresentation.md new file mode 100644 index 0000000000..00adb1c6c1 --- /dev/null +++ b/regtests/client/python/docs/SQLViewRepresentation.md @@ -0,0 +1,47 @@ + +# SQLViewRepresentation + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**sql** | **str** | | +**dialect** | **str** | | + +## Example + +```python +from polaris.catalog.models.sql_view_representation import SQLViewRepresentation + +# TODO update the JSON string below +json = "{}" +# create an instance of SQLViewRepresentation from a JSON string +sql_view_representation_instance = SQLViewRepresentation.from_json(json) +# print the JSON string representation of the object +print(sql_view_representation_instance.to_json()) + +# convert the object into a dict +sql_view_representation_dict = sql_view_representation_instance.to_dict() +# create an instance of SQLViewRepresentation from a dict +sql_view_representation_from_dict = SQLViewRepresentation.from_dict(sql_view_representation_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ScanReport.md b/regtests/client/python/docs/ScanReport.md new file mode 100644 index 0000000000..9b44317f5a --- /dev/null +++
b/regtests/client/python/docs/ScanReport.md @@ -0,0 +1,52 @@ + +# ScanReport + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**table_name** | **str** | | +**snapshot_id** | **int** | | +**filter** | [**Expression**](Expression.md) | | +**schema_id** | **int** | | +**projected_field_ids** | **List[int]** | | +**projected_field_names** | **List[str]** | | +**metrics** | [**Dict[str, MetricResult]**](MetricResult.md) | | +**metadata** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.scan_report import ScanReport + +# TODO update the JSON string below +json = "{}" +# create an instance of ScanReport from a JSON string +scan_report_instance = ScanReport.from_json(json) +# print the JSON string representation of the object +print(scan_report_instance.to_json()) + +# convert the object into a dict +scan_report_dict = scan_report_instance.to_dict() +# create an instance of ScanReport from a dict +scan_report_from_dict = ScanReport.from_dict(scan_report_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetCurrentSchemaUpdate.md b/regtests/client/python/docs/SetCurrentSchemaUpdate.md new file mode 100644 index 0000000000..3edf744ffc --- /dev/null +++ b/regtests/client/python/docs/SetCurrentSchemaUpdate.md @@ -0,0 +1,46 @@ + +# SetCurrentSchemaUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**schema_id** | **int** | Schema ID to set as current, or -1 to set last added schema | + +## Example + +```python +from polaris.catalog.models.set_current_schema_update import SetCurrentSchemaUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetCurrentSchemaUpdate from a JSON string
+set_current_schema_update_instance = SetCurrentSchemaUpdate.from_json(json) +# print the JSON string representation of the object +print(set_current_schema_update_instance.to_json()) + +# convert the object into a dict +set_current_schema_update_dict = set_current_schema_update_instance.to_dict() +# create an instance of SetCurrentSchemaUpdate from a dict +set_current_schema_update_from_dict = SetCurrentSchemaUpdate.from_dict(set_current_schema_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetCurrentViewVersionUpdate.md b/regtests/client/python/docs/SetCurrentViewVersionUpdate.md new file mode 100644 index 0000000000..c5a461cd05 --- /dev/null +++ b/regtests/client/python/docs/SetCurrentViewVersionUpdate.md @@ -0,0 +1,46 @@ + +# SetCurrentViewVersionUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**view_version_id** | **int** | The view version id to set as current, or -1 to set last added view version id | + +## Example + +```python +from polaris.catalog.models.set_current_view_version_update import SetCurrentViewVersionUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetCurrentViewVersionUpdate from a JSON string +set_current_view_version_update_instance = SetCurrentViewVersionUpdate.from_json(json) +# print the JSON string representation of the object +print(set_current_view_version_update_instance.to_json()) + +# convert the object into a dict +set_current_view_version_update_dict = set_current_view_version_update_instance.to_dict() +# create an instance of SetCurrentViewVersionUpdate from a dict +set_current_view_version_update_from_dict = SetCurrentViewVersionUpdate.from_dict(set_current_view_version_update_dict) +``` +[[Back to Model
list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetDefaultSortOrderUpdate.md b/regtests/client/python/docs/SetDefaultSortOrderUpdate.md new file mode 100644 index 0000000000..4829080578 --- /dev/null +++ b/regtests/client/python/docs/SetDefaultSortOrderUpdate.md @@ -0,0 +1,46 @@ + +# SetDefaultSortOrderUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**sort_order_id** | **int** | Sort order ID to set as the default, or -1 to set last added sort order | + +## Example + +```python +from polaris.catalog.models.set_default_sort_order_update import SetDefaultSortOrderUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetDefaultSortOrderUpdate from a JSON string +set_default_sort_order_update_instance = SetDefaultSortOrderUpdate.from_json(json) +# print the JSON string representation of the object +print(set_default_sort_order_update_instance.to_json()) + +# convert the object into a dict +set_default_sort_order_update_dict = set_default_sort_order_update_instance.to_dict() +# create an instance of SetDefaultSortOrderUpdate from a dict +set_default_sort_order_update_from_dict = SetDefaultSortOrderUpdate.from_dict(set_default_sort_order_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetDefaultSpecUpdate.md b/regtests/client/python/docs/SetDefaultSpecUpdate.md new file mode 100644 index 0000000000..b21a42bd8a --- /dev/null +++ b/regtests/client/python/docs/SetDefaultSpecUpdate.md @@ -0,0 +1,46 @@ + +# SetDefaultSpecUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | -------------
+**action** | **str** | | +**spec_id** | **int** | Partition spec ID to set as the default, or -1 to set last added spec | + +## Example + +```python +from polaris.catalog.models.set_default_spec_update import SetDefaultSpecUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetDefaultSpecUpdate from a JSON string +set_default_spec_update_instance = SetDefaultSpecUpdate.from_json(json) +# print the JSON string representation of the object +print(set_default_spec_update_instance.to_json()) + +# convert the object into a dict +set_default_spec_update_dict = set_default_spec_update_instance.to_dict() +# create an instance of SetDefaultSpecUpdate from a dict +set_default_spec_update_from_dict = SetDefaultSpecUpdate.from_dict(set_default_spec_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetExpression.md b/regtests/client/python/docs/SetExpression.md new file mode 100644 index 0000000000..6034024761 --- /dev/null +++ b/regtests/client/python/docs/SetExpression.md @@ -0,0 +1,47 @@ + +# SetExpression + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**term** | [**Term**](Term.md) | | +**values** | **List[object]** | | + +## Example + +```python +from polaris.catalog.models.set_expression import SetExpression + +# TODO update the JSON string below +json = "{}" +# create an instance of SetExpression from a JSON string +set_expression_instance = SetExpression.from_json(json) +# print the JSON string representation of the object +print(set_expression_instance.to_json()) + +# convert the object into a dict +set_expression_dict = set_expression_instance.to_dict() +# create an instance of SetExpression from a dict +set_expression_from_dict = SetExpression.from_dict(set_expression_dict) +``` +[[Back to
Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetLocationUpdate.md b/regtests/client/python/docs/SetLocationUpdate.md new file mode 100644 index 0000000000..63c660f7f1 --- /dev/null +++ b/regtests/client/python/docs/SetLocationUpdate.md @@ -0,0 +1,46 @@ + +# SetLocationUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**location** | **str** | | + +## Example + +```python +from polaris.catalog.models.set_location_update import SetLocationUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetLocationUpdate from a JSON string +set_location_update_instance = SetLocationUpdate.from_json(json) +# print the JSON string representation of the object +print(set_location_update_instance.to_json()) + +# convert the object into a dict +set_location_update_dict = set_location_update_instance.to_dict() +# create an instance of SetLocationUpdate from a dict +set_location_update_from_dict = SetLocationUpdate.from_dict(set_location_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetPartitionStatisticsUpdate.md b/regtests/client/python/docs/SetPartitionStatisticsUpdate.md new file mode 100644 index 0000000000..0f8d785bfe --- /dev/null +++ b/regtests/client/python/docs/SetPartitionStatisticsUpdate.md @@ -0,0 +1,46 @@ + +# SetPartitionStatisticsUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**partition_statistics** | [**PartitionStatisticsFile**](PartitionStatisticsFile.md) | | + +## Example + +```python +from
polaris.catalog.models.set_partition_statistics_update import SetPartitionStatisticsUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetPartitionStatisticsUpdate from a JSON string +set_partition_statistics_update_instance = SetPartitionStatisticsUpdate.from_json(json) +# print the JSON string representation of the object +print(set_partition_statistics_update_instance.to_json()) + +# convert the object into a dict +set_partition_statistics_update_dict = set_partition_statistics_update_instance.to_dict() +# create an instance of SetPartitionStatisticsUpdate from a dict +set_partition_statistics_update_from_dict = SetPartitionStatisticsUpdate.from_dict(set_partition_statistics_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetPropertiesUpdate.md b/regtests/client/python/docs/SetPropertiesUpdate.md new file mode 100644 index 0000000000..23e4f560ae --- /dev/null +++ b/regtests/client/python/docs/SetPropertiesUpdate.md @@ -0,0 +1,46 @@ + +# SetPropertiesUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**updates** | **Dict[str, str]** | | + +## Example + +```python +from polaris.catalog.models.set_properties_update import SetPropertiesUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetPropertiesUpdate from a JSON string +set_properties_update_instance = SetPropertiesUpdate.from_json(json) +# print the JSON string representation of the object +print(set_properties_update_instance.to_json()) + +# convert the object into a dict +set_properties_update_dict = set_properties_update_instance.to_dict() +# create an instance of SetPropertiesUpdate from a dict +set_properties_update_from_dict = SetPropertiesUpdate.from_dict(set_properties_update_dict) +```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetSnapshotRefUpdate.md b/regtests/client/python/docs/SetSnapshotRefUpdate.md new file mode 100644 index 0000000000..f892420512 --- /dev/null +++ b/regtests/client/python/docs/SetSnapshotRefUpdate.md @@ -0,0 +1,51 @@ + +# SetSnapshotRefUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**ref_name** | **str** | | +**type** | **str** | | +**snapshot_id** | **int** | | +**max_ref_age_ms** | **int** | | [optional] +**max_snapshot_age_ms** | **int** | | [optional] +**min_snapshots_to_keep** | **int** | | [optional] + +## Example + +```python +from polaris.catalog.models.set_snapshot_ref_update import SetSnapshotRefUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetSnapshotRefUpdate from a JSON string +set_snapshot_ref_update_instance = SetSnapshotRefUpdate.from_json(json) +# print the JSON string representation of the object +print(set_snapshot_ref_update_instance.to_json()) + +# convert the object into a dict +set_snapshot_ref_update_dict = set_snapshot_ref_update_instance.to_dict() +# create an instance of SetSnapshotRefUpdate from a dict +set_snapshot_ref_update_from_dict = SetSnapshotRefUpdate.from_dict(set_snapshot_ref_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SetStatisticsUpdate.md b/regtests/client/python/docs/SetStatisticsUpdate.md new file mode 100644 index 0000000000..2efff01888 --- /dev/null +++ b/regtests/client/python/docs/SetStatisticsUpdate.md @@ -0,0 +1,47 @@ + +# SetStatisticsUpdate + +## Properties + +Name | Type | Description | Notes
+------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**snapshot_id** | **int** | | +**statistics** | [**StatisticsFile**](StatisticsFile.md) | | + +## Example + +```python +from polaris.catalog.models.set_statistics_update import SetStatisticsUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of SetStatisticsUpdate from a JSON string +set_statistics_update_instance = SetStatisticsUpdate.from_json(json) +# print the JSON string representation of the object +print(set_statistics_update_instance.to_json()) + +# convert the object into a dict +set_statistics_update_dict = set_statistics_update_instance.to_dict() +# create an instance of SetStatisticsUpdate from a dict +set_statistics_update_from_dict = SetStatisticsUpdate.from_dict(set_statistics_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Snapshot.md b/regtests/client/python/docs/Snapshot.md new file mode 100644 index 0000000000..a64649e8a4 --- /dev/null +++ b/regtests/client/python/docs/Snapshot.md @@ -0,0 +1,51 @@ + +# Snapshot + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**snapshot_id** | **int** | | +**parent_snapshot_id** | **int** | | [optional] +**sequence_number** | **int** | | [optional] +**timestamp_ms** | **int** | | +**manifest_list** | **str** | Location of the snapshot's manifest list file | +**summary** | [**SnapshotSummary**](SnapshotSummary.md) | | +**schema_id** | **int** | | [optional] + +## Example + +```python +from polaris.catalog.models.snapshot import Snapshot + +# TODO update the JSON string below +json = "{}" +# create an instance of Snapshot from a JSON string +snapshot_instance = Snapshot.from_json(json) +# print the JSON string representation of the object +print(snapshot_instance.to_json()) +
+# convert the object into a dict +snapshot_dict = snapshot_instance.to_dict() +# create an instance of Snapshot from a dict +snapshot_from_dict = Snapshot.from_dict(snapshot_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SnapshotLogInner.md b/regtests/client/python/docs/SnapshotLogInner.md new file mode 100644 index 0000000000..ec0256e700 --- /dev/null +++ b/regtests/client/python/docs/SnapshotLogInner.md @@ -0,0 +1,46 @@ + +# SnapshotLogInner + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**snapshot_id** | **int** | | +**timestamp_ms** | **int** | | + +## Example + +```python +from polaris.catalog.models.snapshot_log_inner import SnapshotLogInner + +# TODO update the JSON string below +json = "{}" +# create an instance of SnapshotLogInner from a JSON string +snapshot_log_inner_instance = SnapshotLogInner.from_json(json) +# print the JSON string representation of the object +print(snapshot_log_inner_instance.to_json()) + +# convert the object into a dict +snapshot_log_inner_dict = snapshot_log_inner_instance.to_dict() +# create an instance of SnapshotLogInner from a dict +snapshot_log_inner_from_dict = SnapshotLogInner.from_dict(snapshot_log_inner_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SnapshotReference.md b/regtests/client/python/docs/SnapshotReference.md new file mode 100644 index 0000000000..facff3816a --- /dev/null +++ b/regtests/client/python/docs/SnapshotReference.md @@ -0,0 +1,49 @@ + +# SnapshotReference + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | |
+**snapshot_id** | **int** | | +**max_ref_age_ms** | **int** | | [optional] +**max_snapshot_age_ms** | **int** | | [optional] +**min_snapshots_to_keep** | **int** | | [optional] + +## Example + +```python +from polaris.catalog.models.snapshot_reference import SnapshotReference + +# TODO update the JSON string below +json = "{}" +# create an instance of SnapshotReference from a JSON string +snapshot_reference_instance = SnapshotReference.from_json(json) +# print the JSON string representation of the object +print(snapshot_reference_instance.to_json()) + +# convert the object into a dict +snapshot_reference_dict = snapshot_reference_instance.to_dict() +# create an instance of SnapshotReference from a dict +snapshot_reference_from_dict = SnapshotReference.from_dict(snapshot_reference_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SnapshotSummary.md b/regtests/client/python/docs/SnapshotSummary.md new file mode 100644 index 0000000000..5a60c043a2 --- /dev/null +++ b/regtests/client/python/docs/SnapshotSummary.md @@ -0,0 +1,45 @@ + +# SnapshotSummary + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**operation** | **str** | | + +## Example + +```python +from polaris.catalog.models.snapshot_summary import SnapshotSummary + +# TODO update the JSON string below +json = "{}" +# create an instance of SnapshotSummary from a JSON string +snapshot_summary_instance = SnapshotSummary.from_json(json) +# print the JSON string representation of the object +print(snapshot_summary_instance.to_json()) + +# convert the object into a dict +snapshot_summary_dict = snapshot_summary_instance.to_dict() +# create an instance of SnapshotSummary from a dict +snapshot_summary_from_dict = SnapshotSummary.from_dict(snapshot_summary_dict) +``` +[[Back to Model
list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SortDirection.md b/regtests/client/python/docs/SortDirection.md new file mode 100644 index 0000000000..7a2f6834b4 --- /dev/null +++ b/regtests/client/python/docs/SortDirection.md @@ -0,0 +1,28 @@ + +# SortDirection + +## Enum + +* `ASC` (value: `'asc'`) + +* `DESC` (value: `'desc'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SortField.md b/regtests/client/python/docs/SortField.md new file mode 100644 index 0000000000..6f05e7699a --- /dev/null +++ b/regtests/client/python/docs/SortField.md @@ -0,0 +1,48 @@ + +# SortField + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**source_id** | **int** | | +**transform** | **str** | | +**direction** | [**SortDirection**](SortDirection.md) | | +**null_order** | [**NullOrder**](NullOrder.md) | | + +## Example + +```python +from polaris.catalog.models.sort_field import SortField + +# TODO update the JSON string below +json = "{}" +# create an instance of SortField from a JSON string +sort_field_instance = SortField.from_json(json) +# print the JSON string representation of the object +print(sort_field_instance.to_json()) + +# convert the object into a dict +sort_field_dict = sort_field_instance.to_dict() +# create an instance of SortField from a dict +sort_field_from_dict = SortField.from_dict(sort_field_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/SortOrder.md b/regtests/client/python/docs/SortOrder.md new file mode 100644 index
0000000000..04751ed593 --- /dev/null +++ b/regtests/client/python/docs/SortOrder.md @@ -0,0 +1,46 @@ + +# SortOrder + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**order_id** | **int** | | [readonly] +**fields** | [**List[SortField]**](SortField.md) | | + +## Example + +```python +from polaris.catalog.models.sort_order import SortOrder + +# TODO update the JSON string below +json = "{}" +# create an instance of SortOrder from a JSON string +sort_order_instance = SortOrder.from_json(json) +# print the JSON string representation of the object +print(sort_order_instance.to_json()) + +# convert the object into a dict +sort_order_dict = sort_order_instance.to_dict() +# create an instance of SortOrder from a dict +sort_order_from_dict = SortOrder.from_dict(sort_order_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/StatisticsFile.md b/regtests/client/python/docs/StatisticsFile.md new file mode 100644 index 0000000000..f429240883 --- /dev/null +++ b/regtests/client/python/docs/StatisticsFile.md @@ -0,0 +1,49 @@ + +# StatisticsFile + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**snapshot_id** | **int** | | +**statistics_path** | **str** | | +**file_size_in_bytes** | **int** | | +**file_footer_size_in_bytes** | **int** | | +**blob_metadata** | [**List[BlobMetadata]**](BlobMetadata.md) | | + +## Example + +```python +from polaris.catalog.models.statistics_file import StatisticsFile + +# TODO update the JSON string below +json = "{}" +# create an instance of StatisticsFile from a JSON string +statistics_file_instance = StatisticsFile.from_json(json) +# print the JSON string representation of the object +print(statistics_file_instance.to_json()) + +# convert the object into a dict
+statistics_file_dict = statistics_file_instance.to_dict() +# create an instance of StatisticsFile from a dict +statistics_file_from_dict = StatisticsFile.from_dict(statistics_file_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/StorageConfigInfo.md b/regtests/client/python/docs/StorageConfigInfo.md new file mode 100644 index 0000000000..704520290f --- /dev/null +++ b/regtests/client/python/docs/StorageConfigInfo.md @@ -0,0 +1,48 @@ + +# StorageConfigInfo + +A storage configuration used by catalogs + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**storage_type** | **str** | The cloud provider type this storage is built on. FILE is supported for testing purposes only | +**allowed_locations** | **List[str]** | | [optional] + +## Example + +```python +from polaris.management.models.storage_config_info import StorageConfigInfo + +# TODO update the JSON string below +json = "{}" +# create an instance of StorageConfigInfo from a JSON string +storage_config_info_instance = StorageConfigInfo.from_json(json) +# print the JSON string representation of the object +print(storage_config_info_instance.to_json()) + +# convert the object into a dict +storage_config_info_dict = storage_config_info_instance.to_dict() +# create an instance of StorageConfigInfo from a dict +storage_config_info_from_dict = StorageConfigInfo.from_dict(storage_config_info_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/StructField.md b/regtests/client/python/docs/StructField.md new file mode 100644 index 0000000000..9125b17ff5 --- /dev/null +++ b/regtests/client/python/docs/StructField.md @@ -0,0 +1,49 @@ + +#
StructField + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**id** | **int** | | +**name** | **str** | | +**type** | [**Type**](Type.md) | | +**required** | **bool** | | +**doc** | **str** | | [optional] + +## Example + +```python +from polaris.catalog.models.struct_field import StructField + +# TODO update the JSON string below +json = "{}" +# create an instance of StructField from a JSON string +struct_field_instance = StructField.from_json(json) +# print the JSON string representation of the object +print(struct_field_instance.to_json()) + +# convert the object into a dict +struct_field_dict = struct_field_instance.to_dict() +# create an instance of StructField from a dict +struct_field_from_dict = StructField.from_dict(struct_field_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/StructType.md b/regtests/client/python/docs/StructType.md new file mode 100644 index 0000000000..25bfef569c --- /dev/null +++ b/regtests/client/python/docs/StructType.md @@ -0,0 +1,46 @@ + +# StructType + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**fields** | [**List[StructField]**](StructField.md) | | + +## Example + +```python +from polaris.catalog.models.struct_type import StructType + +# TODO update the JSON string below +json = "{}" +# create an instance of StructType from a JSON string +struct_type_instance = StructType.from_json(json) +# print the JSON string representation of the object +print(struct_type_instance.to_json()) + +# convert the object into a dict +struct_type_dict = struct_type_instance.to_dict() +# create an instance of StructType from a dict +struct_type_from_dict = StructType.from_dict(struct_type_dict) +``` +[[Back to Model 
list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TableGrant.md b/regtests/client/python/docs/TableGrant.md new file mode 100644 index 0000000000..0bc1e53bc1 --- /dev/null +++ b/regtests/client/python/docs/TableGrant.md @@ -0,0 +1,47 @@ + +# TableGrant + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | | +**table_name** | **str** | | +**privilege** | [**TablePrivilege**](TablePrivilege.md) | | + +## Example + +```python +from polaris.management.models.table_grant import TableGrant + +# TODO update the JSON string below +json = "{}" +# create an instance of TableGrant from a JSON string +table_grant_instance = TableGrant.from_json(json) +# print the JSON string representation of the object +print(table_grant_instance.to_json()) + +# convert the object into a dict +table_grant_dict = table_grant_instance.to_dict() +# create an instance of TableGrant from a dict +table_grant_from_dict = TableGrant.from_dict(table_grant_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TableIdentifier.md b/regtests/client/python/docs/TableIdentifier.md new file mode 100644 index 0000000000..f64a178f49 --- /dev/null +++ b/regtests/client/python/docs/TableIdentifier.md @@ -0,0 +1,46 @@ + +# TableIdentifier + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | Reference to one or more levels of a namespace | +**name** | **str** | | + +## Example + +```python +from polaris.catalog.models.table_identifier import TableIdentifier + +# TODO update the JSON string below +json = "{}" +# create an instance of 
TableIdentifier from a JSON string +table_identifier_instance = TableIdentifier.from_json(json) +# print the JSON string representation of the object +print(table_identifier_instance.to_json()) + +# convert the object into a dict +table_identifier_dict = table_identifier_instance.to_dict() +# create an instance of TableIdentifier from a dict +table_identifier_from_dict = TableIdentifier.from_dict(table_identifier_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TableMetadata.md b/regtests/client/python/docs/TableMetadata.md new file mode 100644 index 0000000000..e7532867a0 --- /dev/null +++ b/regtests/client/python/docs/TableMetadata.md @@ -0,0 +1,65 @@ + +# TableMetadata + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**format_version** | **int** | | +**table_uuid** | **str** | | +**location** | **str** | | [optional] +**last_updated_ms** | **int** | | [optional] +**properties** | **Dict[str, str]** | | [optional] +**schemas** | [**List[ModelSchema]**](ModelSchema.md) | | [optional] +**current_schema_id** | **int** | | [optional] +**last_column_id** | **int** | | [optional] +**partition_specs** | [**List[PartitionSpec]**](PartitionSpec.md) | | [optional] +**default_spec_id** | **int** | | [optional] +**last_partition_id** | **int** | | [optional] +**sort_orders** | [**List[SortOrder]**](SortOrder.md) | | [optional] +**default_sort_order_id** | **int** | | [optional] +**snapshots** | [**List[Snapshot]**](Snapshot.md) | | [optional] +**refs** | [**Dict[str, SnapshotReference]**](SnapshotReference.md) | | [optional] +**current_snapshot_id** | **int** | | [optional] +**last_sequence_number** | **int** | | [optional] +**snapshot_log** | [**List[SnapshotLogInner]**](SnapshotLogInner.md) | | [optional] +**metadata_log** | 
[**List[MetadataLogInner]**](MetadataLogInner.md) | | [optional] +**statistics_files** | [**List[StatisticsFile]**](StatisticsFile.md) | | [optional] +**partition_statistics_files** | [**List[PartitionStatisticsFile]**](PartitionStatisticsFile.md) | | [optional] + +## Example + +```python +from polaris.catalog.models.table_metadata import TableMetadata + +# TODO update the JSON string below +json = "{}" +# create an instance of TableMetadata from a JSON string +table_metadata_instance = TableMetadata.from_json(json) +# print the JSON string representation of the object +print(table_metadata_instance.to_json()) + +# convert the object into a dict +table_metadata_dict = table_metadata_instance.to_dict() +# create an instance of TableMetadata from a dict +table_metadata_from_dict = TableMetadata.from_dict(table_metadata_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TablePrivilege.md b/regtests/client/python/docs/TablePrivilege.md new file mode 100644 index 0000000000..23ae745888 --- /dev/null +++ b/regtests/client/python/docs/TablePrivilege.md @@ -0,0 +1,42 @@ + +# TablePrivilege + +## Enum + +* `CATALOG_MANAGE_ACCESS` (value: `'CATALOG_MANAGE_ACCESS'`) + +* `TABLE_DROP` (value: `'TABLE_DROP'`) + +* `TABLE_LIST` (value: `'TABLE_LIST'`) + +* `TABLE_READ_PROPERTIES` (value: `'TABLE_READ_PROPERTIES'`) + +* `VIEW_READ_PROPERTIES` (value: `'VIEW_READ_PROPERTIES'`) + +* `TABLE_WRITE_PROPERTIES` (value: `'TABLE_WRITE_PROPERTIES'`) + +* `TABLE_READ_DATA` (value: `'TABLE_READ_DATA'`) + +* `TABLE_WRITE_DATA` (value: `'TABLE_WRITE_DATA'`) + +* `TABLE_FULL_METADATA` (value: `'TABLE_FULL_METADATA'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git 
a/regtests/client/python/docs/TableRequirement.md b/regtests/client/python/docs/TableRequirement.md new file mode 100644 index 0000000000..9e7ed0d29b --- /dev/null +++ b/regtests/client/python/docs/TableRequirement.md @@ -0,0 +1,45 @@ + +# TableRequirement + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | + +## Example + +```python +from polaris.catalog.models.table_requirement import TableRequirement + +# TODO update the JSON string below +json = "{}" +# create an instance of TableRequirement from a JSON string +table_requirement_instance = TableRequirement.from_json(json) +# print the JSON string representation of the object +print(table_requirement_instance.to_json()) + +# convert the object into a dict +table_requirement_dict = table_requirement_instance.to_dict() +# create an instance of TableRequirement from a dict +table_requirement_from_dict = TableRequirement.from_dict(table_requirement_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TableUpdate.md b/regtests/client/python/docs/TableUpdate.md new file mode 100644 index 0000000000..d7bda62d74 --- /dev/null +++ b/regtests/client/python/docs/TableUpdate.md @@ -0,0 +1,65 @@ + +# TableUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**format_version** | **int** | | +**var_schema** | [**ModelSchema**](ModelSchema.md) | | +**last_column_id** | **int** | The highest assigned column ID for the table. This is used to ensure columns are always assigned an unused ID when evolving schemas. When omitted, it will be computed on the server side. 
| [optional] +**schema_id** | **int** | Schema ID to set as current, or -1 to set last added schema | +**spec** | [**PartitionSpec**](PartitionSpec.md) | | +**spec_id** | **int** | Partition spec ID to set as the default, or -1 to set last added spec | +**sort_order** | [**SortOrder**](SortOrder.md) | | +**sort_order_id** | **int** | Sort order ID to set as the default, or -1 to set last added sort order | +**snapshot** | [**Snapshot**](Snapshot.md) | | +**ref_name** | **str** | | +**type** | **str** | | +**snapshot_id** | **int** | | +**max_ref_age_ms** | **int** | | [optional] +**max_snapshot_age_ms** | **int** | | [optional] +**min_snapshots_to_keep** | **int** | | [optional] +**snapshot_ids** | **List[int]** | | +**location** | **str** | | +**updates** | **Dict[str, str]** | | +**removals** | **List[str]** | | +**statistics** | [**StatisticsFile**](StatisticsFile.md) | | + +## Example + +```python +from polaris.catalog.models.table_update import TableUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of TableUpdate from a JSON string +table_update_instance = TableUpdate.from_json(json) +# print the JSON string representation of the object +print(table_update_instance.to_json()) + +# convert the object into a dict +table_update_dict = table_update_instance.to_dict() +# create an instance of TableUpdate from a dict +table_update_from_dict = TableUpdate.from_dict(table_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TableUpdateNotification.md b/regtests/client/python/docs/TableUpdateNotification.md new file mode 100644 index 0000000000..935381243c --- /dev/null +++ b/regtests/client/python/docs/TableUpdateNotification.md @@ -0,0 +1,49 @@ + +# TableUpdateNotification + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | 
------------- +**table_name** | **str** | | +**timestamp** | **int** | | +**table_uuid** | **str** | | +**metadata_location** | **str** | | +**metadata** | [**TableMetadata**](TableMetadata.md) | | [optional] + +## Example + +```python +from polaris.catalog.models.table_update_notification import TableUpdateNotification + +# TODO update the JSON string below +json = "{}" +# create an instance of TableUpdateNotification from a JSON string +table_update_notification_instance = TableUpdateNotification.from_json(json) +# print the JSON string representation of the object +print(table_update_notification_instance.to_json()) + +# convert the object into a dict +table_update_notification_dict = table_update_notification_instance.to_dict() +# create an instance of TableUpdateNotification from a dict +table_update_notification_from_dict = TableUpdateNotification.from_dict(table_update_notification_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Term.md b/regtests/client/python/docs/Term.md new file mode 100644 index 0000000000..c627b9a22d --- /dev/null +++ b/regtests/client/python/docs/Term.md @@ -0,0 +1,47 @@ + +# Term + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**transform** | **str** | | +**term** | **str** | | + +## Example + +```python +from polaris.catalog.models.term import Term + +# TODO update the JSON string below +json = "{}" +# create an instance of Term from a JSON string +term_instance = Term.from_json(json) +# print the JSON string representation of the object +print(term_instance.to_json()) + +# convert the object into a dict +term_dict = term_instance.to_dict() +# create an instance of Term from a dict +term_from_dict = Term.from_dict(term_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) 
[[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TimerResult.md b/regtests/client/python/docs/TimerResult.md new file mode 100644 index 0000000000..abdccb2296 --- /dev/null +++ b/regtests/client/python/docs/TimerResult.md @@ -0,0 +1,47 @@ + +# TimerResult + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**time_unit** | **str** | | +**count** | **int** | | +**total_duration** | **int** | | + +## Example + +```python +from polaris.catalog.models.timer_result import TimerResult + +# TODO update the JSON string below +json = "{}" +# create an instance of TimerResult from a JSON string +timer_result_instance = TimerResult.from_json(json) +# print the JSON string representation of the object +print(timer_result_instance.to_json()) + +# convert the object into a dict +timer_result_dict = timer_result_instance.to_dict() +# create an instance of TimerResult from a dict +timer_result_from_dict = TimerResult.from_dict(timer_result_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TokenType.md b/regtests/client/python/docs/TokenType.md new file mode 100644 index 0000000000..bf3e5a0cbe --- /dev/null +++ b/regtests/client/python/docs/TokenType.md @@ -0,0 +1,38 @@ + +# TokenType + +Token type identifier, from RFC 8693 Section 3 See https://datatracker.ietf.org/doc/html/rfc8693#section-3 + +## Enum + +* `URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_ACCESS_TOKEN` (value: `'urn:ietf:params:oauth:token-type:access_token'`) + +* `URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_REFRESH_TOKEN` (value: `'urn:ietf:params:oauth:token-type:refresh_token'`) + +* 
`URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_ID_TOKEN` (value: `'urn:ietf:params:oauth:token-type:id_token'`) + +* `URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_SAML1` (value: `'urn:ietf:params:oauth:token-type:saml1'`) + +* `URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_SAML2` (value: `'urn:ietf:params:oauth:token-type:saml2'`) + +* `URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_JWT` (value: `'urn:ietf:params:oauth:token-type:jwt'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/TransformTerm.md b/regtests/client/python/docs/TransformTerm.md new file mode 100644 index 0000000000..9aa207be5e --- /dev/null +++ b/regtests/client/python/docs/TransformTerm.md @@ -0,0 +1,47 @@ + +# TransformTerm + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**transform** | **str** | | +**term** | **str** | | + +## Example + +```python +from polaris.catalog.models.transform_term import TransformTerm + +# TODO update the JSON string below +json = "{}" +# create an instance of TransformTerm from a JSON string +transform_term_instance = TransformTerm.from_json(json) +# print the JSON string representation of the object +print(transform_term_instance.to_json()) + +# convert the object into a dict +transform_term_dict = transform_term_instance.to_dict() +# create an instance of TransformTerm from a dict +transform_term_from_dict = TransformTerm.from_dict(transform_term_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/Type.md b/regtests/client/python/docs/Type.md new file mode 100644 index 
0000000000..7ce5b32297 --- /dev/null +++ b/regtests/client/python/docs/Type.md @@ -0,0 +1,54 @@ + +# Type + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**fields** | [**List[StructField]**](StructField.md) | | +**element_id** | **int** | | +**element** | [**Type**](Type.md) | | +**element_required** | **bool** | | +**key_id** | **int** | | +**key** | [**Type**](Type.md) | | +**value_id** | **int** | | +**value** | [**Type**](Type.md) | | +**value_required** | **bool** | | + +## Example + +```python +from polaris.catalog.models.type import Type + +# TODO update the JSON string below +json = "{}" +# create an instance of Type from a JSON string +type_instance = Type.from_json(json) +# print the JSON string representation of the object +print(type_instance.to_json()) + +# convert the object into a dict +type_dict = type_instance.to_dict() +# create an instance of Type from a dict +type_from_dict = Type.from_dict(type_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UnaryExpression.md b/regtests/client/python/docs/UnaryExpression.md new file mode 100644 index 0000000000..3721decd2a --- /dev/null +++ b/regtests/client/python/docs/UnaryExpression.md @@ -0,0 +1,47 @@ + +# UnaryExpression + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**term** | [**Term**](Term.md) | | +**value** | **object** | | + +## Example + +```python +from polaris.catalog.models.unary_expression import UnaryExpression + +# TODO update the JSON string below +json = "{}" +# create an instance of UnaryExpression from a JSON string +unary_expression_instance = UnaryExpression.from_json(json) +# print the JSON string representation of the object 
+print(unary_expression_instance.to_json()) + +# convert the object into a dict +unary_expression_dict = unary_expression_instance.to_dict() +# create an instance of UnaryExpression from a dict +unary_expression_from_dict = UnaryExpression.from_dict(unary_expression_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpdateCatalogRequest.md b/regtests/client/python/docs/UpdateCatalogRequest.md new file mode 100644 index 0000000000..68a5c97ff7 --- /dev/null +++ b/regtests/client/python/docs/UpdateCatalogRequest.md @@ -0,0 +1,49 @@ + +# UpdateCatalogRequest + +Updates to apply to a Catalog + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**current_entity_version** | **int** | The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. 
| [optional] +**properties** | **Dict[str, str]** | | [optional] +**storage_config_info** | [**StorageConfigInfo**](StorageConfigInfo.md) | | [optional] + +## Example + +```python +from polaris.management.models.update_catalog_request import UpdateCatalogRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of UpdateCatalogRequest from a JSON string +update_catalog_request_instance = UpdateCatalogRequest.from_json(json) +# print the JSON string representation of the object +print(update_catalog_request_instance.to_json()) + +# convert the object into a dict +update_catalog_request_dict = update_catalog_request_instance.to_dict() +# create an instance of UpdateCatalogRequest from a dict +update_catalog_request_from_dict = UpdateCatalogRequest.from_dict(update_catalog_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpdateCatalogRoleRequest.md b/regtests/client/python/docs/UpdateCatalogRoleRequest.md new file mode 100644 index 0000000000..0fbe5a8a88 --- /dev/null +++ b/regtests/client/python/docs/UpdateCatalogRoleRequest.md @@ -0,0 +1,48 @@ + +# UpdateCatalogRoleRequest + +Updates to apply to a Catalog Role + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**current_entity_version** | **int** | The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. 
| +**properties** | **Dict[str, str]** | | + +## Example + +```python +from polaris.management.models.update_catalog_role_request import UpdateCatalogRoleRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of UpdateCatalogRoleRequest from a JSON string +update_catalog_role_request_instance = UpdateCatalogRoleRequest.from_json(json) +# print the JSON string representation of the object +print(update_catalog_role_request_instance.to_json()) + +# convert the object into a dict +update_catalog_role_request_dict = update_catalog_role_request_instance.to_dict() +# create an instance of UpdateCatalogRoleRequest from a dict +update_catalog_role_request_from_dict = UpdateCatalogRoleRequest.from_dict(update_catalog_role_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpdateNamespacePropertiesRequest.md b/regtests/client/python/docs/UpdateNamespacePropertiesRequest.md new file mode 100644 index 0000000000..eeb294f5b0 --- /dev/null +++ b/regtests/client/python/docs/UpdateNamespacePropertiesRequest.md @@ -0,0 +1,46 @@ + +# UpdateNamespacePropertiesRequest + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**removals** | **List[str]** | | [optional] +**updates** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.update_namespace_properties_request import UpdateNamespacePropertiesRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of UpdateNamespacePropertiesRequest from a JSON string +update_namespace_properties_request_instance = UpdateNamespacePropertiesRequest.from_json(json) +# print the JSON string representation of the object +print(update_namespace_properties_request_instance.to_json()) + +# convert the object into a dict 
+update_namespace_properties_request_dict = update_namespace_properties_request_instance.to_dict() +# create an instance of UpdateNamespacePropertiesRequest from a dict +update_namespace_properties_request_from_dict = UpdateNamespacePropertiesRequest.from_dict(update_namespace_properties_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpdateNamespacePropertiesResponse.md b/regtests/client/python/docs/UpdateNamespacePropertiesResponse.md new file mode 100644 index 0000000000..ed73879da7 --- /dev/null +++ b/regtests/client/python/docs/UpdateNamespacePropertiesResponse.md @@ -0,0 +1,47 @@ + +# UpdateNamespacePropertiesResponse + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**updated** | **List[str]** | List of property keys that were added or updated | +**removed** | **List[str]** | List of properties that were removed | +**missing** | **List[str]** | List of properties requested for removal that were not found in the namespace's properties. Represents a partial success response. Servers do not need to implement this. 
| [optional] + +## Example + +```python +from polaris.catalog.models.update_namespace_properties_response import UpdateNamespacePropertiesResponse + +# TODO update the JSON string below +json = "{}" +# create an instance of UpdateNamespacePropertiesResponse from a JSON string +update_namespace_properties_response_instance = UpdateNamespacePropertiesResponse.from_json(json) +# print the JSON string representation of the object +print(update_namespace_properties_response_instance.to_json()) + +# convert the object into a dict +update_namespace_properties_response_dict = update_namespace_properties_response_instance.to_dict() +# create an instance of UpdateNamespacePropertiesResponse from a dict +update_namespace_properties_response_from_dict = UpdateNamespacePropertiesResponse.from_dict(update_namespace_properties_response_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpdatePrincipalRequest.md b/regtests/client/python/docs/UpdatePrincipalRequest.md new file mode 100644 index 0000000000..bcb52c2aa3 --- /dev/null +++ b/regtests/client/python/docs/UpdatePrincipalRequest.md @@ -0,0 +1,48 @@ + +# UpdatePrincipalRequest + +Updates to apply to a Principal + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**current_entity_version** | **int** | The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. 
| +**properties** | **Dict[str, str]** | | + +## Example + +```python +from polaris.management.models.update_principal_request import UpdatePrincipalRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of UpdatePrincipalRequest from a JSON string +update_principal_request_instance = UpdatePrincipalRequest.from_json(json) +# print the JSON string representation of the object +print(update_principal_request_instance.to_json()) + +# convert the object into a dict +update_principal_request_dict = update_principal_request_instance.to_dict() +# create an instance of UpdatePrincipalRequest from a dict +update_principal_request_from_dict = UpdatePrincipalRequest.from_dict(update_principal_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpdatePrincipalRoleRequest.md b/regtests/client/python/docs/UpdatePrincipalRoleRequest.md new file mode 100644 index 0000000000..a918f706c6 --- /dev/null +++ b/regtests/client/python/docs/UpdatePrincipalRoleRequest.md @@ -0,0 +1,48 @@ + +# UpdatePrincipalRoleRequest + +Updates to apply to a Principal Role + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**current_entity_version** | **int** | The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. 
| +**properties** | **Dict[str, str]** | | + +## Example + +```python +from polaris.management.models.update_principal_role_request import UpdatePrincipalRoleRequest + +# TODO update the JSON string below +json = "{}" +# create an instance of UpdatePrincipalRoleRequest from a JSON string +update_principal_role_request_instance = UpdatePrincipalRoleRequest.from_json(json) +# print the JSON string representation of the object +print(update_principal_role_request_instance.to_json()) + +# convert the object into a dict +update_principal_role_request_dict = update_principal_role_request_instance.to_dict() +# create an instance of UpdatePrincipalRoleRequest from a dict +update_principal_role_request_from_dict = UpdatePrincipalRoleRequest.from_dict(update_principal_role_request_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/UpgradeFormatVersionUpdate.md b/regtests/client/python/docs/UpgradeFormatVersionUpdate.md new file mode 100644 index 0000000000..a1a02ea47c --- /dev/null +++ b/regtests/client/python/docs/UpgradeFormatVersionUpdate.md @@ -0,0 +1,46 @@ + +# UpgradeFormatVersionUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**format_version** | **int** | | + +## Example + +```python +from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of UpgradeFormatVersionUpdate from a JSON string +upgrade_format_version_update_instance = UpgradeFormatVersionUpdate.from_json(json) +# print the JSON string representation of the object +print(upgrade_format_version_update_instance.to_json()) + +# convert the object into a dict +upgrade_format_version_update_dict = upgrade_format_version_update_instance.to_dict() +# create an 
instance of UpgradeFormatVersionUpdate from a dict +upgrade_format_version_update_from_dict = UpgradeFormatVersionUpdate.from_dict(upgrade_format_version_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ValueMap.md b/regtests/client/python/docs/ValueMap.md new file mode 100644 index 0000000000..87a581adfd --- /dev/null +++ b/regtests/client/python/docs/ValueMap.md @@ -0,0 +1,46 @@ + +# ValueMap + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**keys** | **List[int]** | List of integer column ids for each corresponding value | [optional] +**values** | [**List[PrimitiveTypeValue]**](PrimitiveTypeValue.md) | List of primitive type values, matched to 'keys' by index | [optional] + +## Example + +```python +from polaris.catalog.models.value_map import ValueMap + +# TODO update the JSON string below +json = "{}" +# create an instance of ValueMap from a JSON string +value_map_instance = ValueMap.from_json(json) +# print the JSON string representation of the object +print(value_map_instance.to_json()) + +# convert the object into a dict +value_map_dict = value_map_instance.to_dict() +# create an instance of ValueMap from a dict +value_map_from_dict = ValueMap.from_dict(value_map_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewGrant.md b/regtests/client/python/docs/ViewGrant.md new file mode 100644 index 0000000000..59a7da9907 --- /dev/null +++ b/regtests/client/python/docs/ViewGrant.md @@ -0,0 +1,47 @@ + +# ViewGrant + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**namespace** | **List[str]** | | 
+**view_name** | **str** | | +**privilege** | [**ViewPrivilege**](ViewPrivilege.md) | | + +## Example + +```python +from polaris.management.models.view_grant import ViewGrant + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewGrant from a JSON string +view_grant_instance = ViewGrant.from_json(json) +# print the JSON string representation of the object +print(view_grant_instance.to_json()) + +# convert the object into a dict +view_grant_dict = view_grant_instance.to_dict() +# create an instance of ViewGrant from a dict +view_grant_from_dict = ViewGrant.from_dict(view_grant_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewHistoryEntry.md b/regtests/client/python/docs/ViewHistoryEntry.md new file mode 100644 index 0000000000..2942b6e2bc --- /dev/null +++ b/regtests/client/python/docs/ViewHistoryEntry.md @@ -0,0 +1,46 @@ + +# ViewHistoryEntry + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**version_id** | **int** | | +**timestamp_ms** | **int** | | + +## Example + +```python +from polaris.catalog.models.view_history_entry import ViewHistoryEntry + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewHistoryEntry from a JSON string +view_history_entry_instance = ViewHistoryEntry.from_json(json) +# print the JSON string representation of the object +print(view_history_entry_instance.to_json()) + +# convert the object into a dict +view_history_entry_dict = view_history_entry_instance.to_dict() +# create an instance of ViewHistoryEntry from a dict +view_history_entry_from_dict = ViewHistoryEntry.from_dict(view_history_entry_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + +
diff --git a/regtests/client/python/docs/ViewMetadata.md b/regtests/client/python/docs/ViewMetadata.md new file mode 100644 index 0000000000..397011799a --- /dev/null +++ b/regtests/client/python/docs/ViewMetadata.md @@ -0,0 +1,52 @@ + +# ViewMetadata + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**view_uuid** | **str** | | +**format_version** | **int** | | +**location** | **str** | | +**current_version_id** | **int** | | +**versions** | [**List[ViewVersion]**](ViewVersion.md) | | +**version_log** | [**List[ViewHistoryEntry]**](ViewHistoryEntry.md) | | +**schemas** | [**List[ModelSchema]**](ModelSchema.md) | | +**properties** | **Dict[str, str]** | | [optional] + +## Example + +```python +from polaris.catalog.models.view_metadata import ViewMetadata + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewMetadata from a JSON string +view_metadata_instance = ViewMetadata.from_json(json) +# print the JSON string representation of the object +print(view_metadata_instance.to_json()) + +# convert the object into a dict +view_metadata_dict = view_metadata_instance.to_dict() +# create an instance of ViewMetadata from a dict +view_metadata_from_dict = ViewMetadata.from_dict(view_metadata_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewPrivilege.md b/regtests/client/python/docs/ViewPrivilege.md new file mode 100644 index 0000000000..0bffea4187 --- /dev/null +++ b/regtests/client/python/docs/ViewPrivilege.md @@ -0,0 +1,38 @@ + +# ViewPrivilege + +## Enum + +* `CATALOG_MANAGE_ACCESS` (value: `'CATALOG_MANAGE_ACCESS'`) + +* `VIEW_CREATE` (value: `'VIEW_CREATE'`) + +* `VIEW_DROP` (value: `'VIEW_DROP'`) + +* `VIEW_LIST` (value: `'VIEW_LIST'`) + +* `VIEW_READ_PROPERTIES` (value: `'VIEW_READ_PROPERTIES'`) + +*
`VIEW_WRITE_PROPERTIES` (value: `'VIEW_WRITE_PROPERTIES'`) + +* `VIEW_FULL_METADATA` (value: `'VIEW_FULL_METADATA'`) + +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewRepresentation.md b/regtests/client/python/docs/ViewRepresentation.md new file mode 100644 index 0000000000..a1753a8731 --- /dev/null +++ b/regtests/client/python/docs/ViewRepresentation.md @@ -0,0 +1,47 @@ + +# ViewRepresentation + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | +**sql** | **str** | | +**dialect** | **str** | | + +## Example + +```python +from polaris.catalog.models.view_representation import ViewRepresentation + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewRepresentation from a JSON string +view_representation_instance = ViewRepresentation.from_json(json) +# print the JSON string representation of the object +print(view_representation_instance.to_json()) + +# convert the object into a dict +view_representation_dict = view_representation_instance.to_dict() +# create an instance of ViewRepresentation from a dict +view_representation_from_dict = ViewRepresentation.from_dict(view_representation_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewRequirement.md b/regtests/client/python/docs/ViewRequirement.md new file mode 100644 index 0000000000..9a6713860e --- /dev/null +++ b/regtests/client/python/docs/ViewRequirement.md @@ -0,0 +1,45 @@ + +# ViewRequirement + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**type** | **str** | | + +## Example + +```python +from
polaris.catalog.models.view_requirement import ViewRequirement + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewRequirement from a JSON string +view_requirement_instance = ViewRequirement.from_json(json) +# print the JSON string representation of the object +print(view_requirement_instance.to_json()) + +# convert the object into a dict +view_requirement_dict = view_requirement_instance.to_dict() +# create an instance of ViewRequirement from a dict +view_requirement_from_dict = ViewRequirement.from_dict(view_requirement_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewUpdate.md b/regtests/client/python/docs/ViewUpdate.md new file mode 100644 index 0000000000..6965ae05de --- /dev/null +++ b/regtests/client/python/docs/ViewUpdate.md @@ -0,0 +1,53 @@ + +# ViewUpdate + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**action** | **str** | | +**format_version** | **int** | | +**var_schema** | [**ModelSchema**](ModelSchema.md) | | +**last_column_id** | **int** | The highest assigned column ID for the table. This is used to ensure columns are always assigned an unused ID when evolving schemas. When omitted, it will be computed on the server side.
| [optional] +**location** | **str** | | +**updates** | **Dict[str, str]** | | +**removals** | **List[str]** | | +**view_version** | [**ViewVersion**](ViewVersion.md) | | +**view_version_id** | **int** | The view version id to set as current, or -1 to set last added view version id | + +## Example + +```python +from polaris.catalog.models.view_update import ViewUpdate + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewUpdate from a JSON string +view_update_instance = ViewUpdate.from_json(json) +# print the JSON string representation of the object +print(view_update_instance.to_json()) + +# convert the object into a dict +view_update_dict = view_update_instance.to_dict() +# create an instance of ViewUpdate from a dict +view_update_from_dict = ViewUpdate.from_dict(view_update_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/docs/ViewVersion.md b/regtests/client/python/docs/ViewVersion.md new file mode 100644 index 0000000000..8b6c713c69 --- /dev/null +++ b/regtests/client/python/docs/ViewVersion.md @@ -0,0 +1,51 @@ + +# ViewVersion + +## Properties + +Name | Type | Description | Notes +------------ | ------------- | ------------- | ------------- +**version_id** | **int** | | +**timestamp_ms** | **int** | | +**schema_id** | **int** | Schema ID to set as current, or -1 to set last added schema | +**summary** | **Dict[str, str]** | | +**representations** | [**List[ViewRepresentation]**](ViewRepresentation.md) | | +**default_catalog** | **str** | | [optional] +**default_namespace** | **List[str]** | Reference to one or more levels of a namespace | + +## Example + +```python +from polaris.catalog.models.view_version import ViewVersion + +# TODO update the JSON string below +json = "{}" +# create an instance of ViewVersion from a JSON string +view_version_instance =
ViewVersion.from_json(json) +# print the JSON string representation of the object +print(view_version_instance.to_json()) + +# convert the object into a dict +view_version_dict = view_version_instance.to_dict() +# create an instance of ViewVersion from a dict +view_version_from_dict = ViewVersion.from_dict(view_version_dict) +``` +[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md) + + diff --git a/regtests/client/python/git_push.sh b/regtests/client/python/git_push.sh new file mode 100644 index 0000000000..7d770085b8 --- /dev/null +++ b/regtests/client/python/git_push.sh @@ -0,0 +1,72 @@ +#!/bin/sh +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# ref: https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/ +# +# Usage example: /bin/sh ./git_push.sh wing328 openapi-petstore-perl "minor update" "gitlab.com" + +git_user_id=$1 +git_repo_id=$2 +release_note=$3 +git_host=$4 + +if [ "$git_host" = "" ]; then + git_host="github.com" + echo "[INFO] No command line input provided. Set \$git_host to $git_host" +fi + +if [ "$git_user_id" = "" ]; then + git_user_id="GIT_USER_ID" + echo "[INFO] No command line input provided. Set \$git_user_id to $git_user_id" +fi + +if [ "$git_repo_id" = "" ]; then + git_repo_id="GIT_REPO_ID" + echo "[INFO] No command line input provided.
Set \$git_repo_id to $git_repo_id" +fi + +if [ "$release_note" = "" ]; then + release_note="Minor update" + echo "[INFO] No command line input provided. Set \$release_note to $release_note" +fi + +# Initialize the local directory as a Git repository +git init + +# Adds the files in the local repository and stages them for commit. +git add . + +# Commits the tracked changes and prepares them to be pushed to a remote repository. +git commit -m "$release_note" + +# Sets the new remote +git_remote=$(git remote) +if [ "$git_remote" = "" ]; then # git remote not defined + + if [ "$GIT_TOKEN" = "" ]; then + echo "[INFO] \$GIT_TOKEN (environment variable) is not set. Using the git credential in your environment." + git remote add origin https://${git_host}/${git_user_id}/${git_repo_id}.git + else + git remote add origin https://${git_user_id}:"${GIT_TOKEN}"@${git_host}/${git_user_id}/${git_repo_id}.git + fi + +fi + +git pull origin master + +# Pushes (Forces) the changes in the local repository up to the remote repository +echo "Git pushing to https://${git_host}/${git_user_id}/${git_repo_id}.git" +git push origin master 2>&1 | grep -v 'To https' diff --git a/regtests/client/python/poetry.lock b/regtests/client/python/poetry.lock new file mode 100644 index 0000000000..25367ce075 --- /dev/null +++ b/regtests/client/python/poetry.lock @@ -0,0 +1,594 @@ +# This file is automatically @generated by Poetry 1.5.0 and should not be changed by hand. 
+ +[[package]] +name = "annotated-types" +version = "0.7.0" +description = "Reusable constraint types to use with typing.Annotated" +optional = false +python-versions = ">=3.8" +files = [ + {file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"}, + {file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"}, +] + +[package.dependencies] +typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.9\""} + +[[package]] +name = "boto3" +version = "1.34.120" +description = "The AWS SDK for Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "boto3-1.34.120-py3-none-any.whl", hash = "sha256:3c42bc309246a761413f6e152f307f009e80e7c9fd03dd9e6c0dc8ab8b3a8fc1"}, + {file = "boto3-1.34.120.tar.gz", hash = "sha256:38893db8269d25b72cc6fbab97633bfc863eefde5456847169d06149a16aa6e0"}, +] + +[package.dependencies] +botocore = ">=1.34.120,<1.35.0" +jmespath = ">=0.7.1,<2.0.0" +s3transfer = ">=0.10.0,<0.11.0" + +[package.extras] +crt = ["botocore[crt] (>=1.21.0,<2.0a0)"] + +[[package]] +name = "botocore" +version = "1.34.132" +description = "Low-level, data-driven core of boto 3." 
+optional = false +python-versions = ">=3.8" +files = [ + {file = "botocore-1.34.132-py3-none-any.whl", hash = "sha256:06ef8b4bd3b3cb5a9b9a4273a543b257be3304030978ba51516b576a65156c39"}, + {file = "botocore-1.34.132.tar.gz", hash = "sha256:372a6cfce29e5de9bcf8c95af901d0bc3e27d8aa2295fadee295424f95f43f16"}, +] + +[package.dependencies] +jmespath = ">=0.7.1,<2.0.0" +python-dateutil = ">=2.1,<3.0.0" +urllib3 = [ + {version = ">=1.25.4,<1.27", markers = "python_version < \"3.10\""}, + {version = ">=1.25.4,<2.2.0 || >2.2.0,<3", markers = "python_version >= \"3.10\""}, +] + +[package.extras] +crt = ["awscrt (==0.20.11)"] + +[[package]] +name = "cachetools" +version = "5.3.3" +description = "Extensible memoizing collections and decorators" +optional = false +python-versions = ">=3.7" +files = [ + {file = "cachetools-5.3.3-py3-none-any.whl", hash = "sha256:0abad1021d3f8325b2fc1d2e9c8b9c9d57b04c3932657a72465447332c24d945"}, + {file = "cachetools-5.3.3.tar.gz", hash = "sha256:ba29e2dfa0b8b556606f097407ed1aa62080ee108ab0dc5ec9d6a723a007d105"}, +] + +[[package]] +name = "chardet" +version = "5.2.0" +description = "Universal encoding detector for Python 3" +optional = false +python-versions = ">=3.7" +files = [ + {file = "chardet-5.2.0-py3-none-any.whl", hash = "sha256:e1cf59446890a00105fe7b7912492ea04b6e6f06d4b742b2c788469e34c82970"}, + {file = "chardet-5.2.0.tar.gz", hash = "sha256:1b3b6ff479a8c414bc3fa2c0852995695c4a026dcd6d0633b2dd092ca39c1cf7"}, +] + +[[package]] +name = "colorama" +version = "0.4.6" +description = "Cross-platform colored terminal text." 
+optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" +files = [ + {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, + {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, +] + +[[package]] +name = "distlib" +version = "0.3.8" +description = "Distribution utilities" +optional = false +python-versions = "*" +files = [ + {file = "distlib-0.3.8-py2.py3-none-any.whl", hash = "sha256:034db59a0b96f8ca18035f36290806a9a6e6bd9d1ff91e45a7f172eb17e51784"}, + {file = "distlib-0.3.8.tar.gz", hash = "sha256:1530ea13e350031b6312d8580ddb6b27a104275a31106523b8f123787f494f64"}, +] + +[[package]] +name = "exceptiongroup" +version = "1.2.1" +description = "Backport of PEP 654 (exception groups)" +optional = false +python-versions = ">=3.7" +files = [ + {file = "exceptiongroup-1.2.1-py3-none-any.whl", hash = "sha256:5258b9ed329c5bbdd31a309f53cbfb0b155341807f6ff7606a1e801a891b29ad"}, + {file = "exceptiongroup-1.2.1.tar.gz", hash = "sha256:a4785e48b045528f5bfe627b6ad554ff32def154f42372786903b7abcfe1aa16"}, +] + +[package.extras] +test = ["pytest (>=6)"] + +[[package]] +name = "filelock" +version = "3.15.4" +description = "A platform independent file lock." 
+optional = false +python-versions = ">=3.8" +files = [ + {file = "filelock-3.15.4-py3-none-any.whl", hash = "sha256:6ca1fffae96225dab4c6eaf1c4f4f28cd2568d3ec2a44e15a08520504de468e7"}, + {file = "filelock-3.15.4.tar.gz", hash = "sha256:2207938cbc1844345cb01a5a95524dae30f0ce089eba5b00378295a17e3e90cb"}, +] + +[package.extras] +docs = ["furo (>=2023.9.10)", "sphinx (>=7.2.6)", "sphinx-autodoc-typehints (>=1.25.2)"] +testing = ["covdefaults (>=2.3)", "coverage (>=7.3.2)", "diff-cover (>=8.0.1)", "pytest (>=7.4.3)", "pytest-asyncio (>=0.21)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)", "pytest-timeout (>=2.2)", "virtualenv (>=20.26.2)"] +typing = ["typing-extensions (>=4.8)"] + +[[package]] +name = "flake8" +version = "5.0.4" +description = "the modular source code checker: pep8 pyflakes and co" +optional = false +python-versions = ">=3.6.1" +files = [ + {file = "flake8-5.0.4-py2.py3-none-any.whl", hash = "sha256:7a1cf6b73744f5806ab95e526f6f0d8c01c66d7bbe349562d22dfca20610b248"}, + {file = "flake8-5.0.4.tar.gz", hash = "sha256:6fbe320aad8d6b95cec8b8e47bc933004678dc63095be98528b7bdd2a9f510db"}, +] + +[package.dependencies] +mccabe = ">=0.7.0,<0.8.0" +pycodestyle = ">=2.9.0,<2.10.0" +pyflakes = ">=2.5.0,<2.6.0" + +[[package]] +name = "iniconfig" +version = "2.0.0" +description = "brain-dead simple config-ini parsing" +optional = false +python-versions = ">=3.7" +files = [ + {file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"}, + {file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"}, +] + +[[package]] +name = "jmespath" +version = "1.0.1" +description = "JSON Matching Expressions" +optional = false +python-versions = ">=3.7" +files = [ + {file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"}, + {file = "jmespath-1.0.1.tar.gz", hash = 
"sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"}, +] + +[[package]] +name = "mccabe" +version = "0.7.0" +description = "McCabe checker, plugin for flake8" +optional = false +python-versions = ">=3.6" +files = [ + {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"}, + {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"}, +] + +[[package]] +name = "mypy" +version = "1.4.1" +description = "Optional static typing for Python" +optional = false +python-versions = ">=3.7" +files = [ + {file = "mypy-1.4.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:566e72b0cd6598503e48ea610e0052d1b8168e60a46e0bfd34b3acf2d57f96a8"}, + {file = "mypy-1.4.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ca637024ca67ab24a7fd6f65d280572c3794665eaf5edcc7e90a866544076878"}, + {file = "mypy-1.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0dde1d180cd84f0624c5dcaaa89c89775550a675aff96b5848de78fb11adabcd"}, + {file = "mypy-1.4.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c4d8e89aa7de683e2056a581ce63c46a0c41e31bd2b6d34144e2c80f5ea53dc"}, + {file = "mypy-1.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:bfdca17c36ae01a21274a3c387a63aa1aafe72bff976522886869ef131b937f1"}, + {file = "mypy-1.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7549fbf655e5825d787bbc9ecf6028731973f78088fbca3a1f4145c39ef09462"}, + {file = "mypy-1.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:98324ec3ecf12296e6422939e54763faedbfcc502ea4a4c38502082711867258"}, + {file = "mypy-1.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:141dedfdbfe8a04142881ff30ce6e6653c9685b354876b12e4fe6c78598b45e2"}, + {file = "mypy-1.4.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8207b7105829eca6f3d774f64a904190bb2231de91b8b186d21ffd98005f14a7"}, + {file = 
"mypy-1.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:16f0db5b641ba159eff72cff08edc3875f2b62b2fa2bc24f68c1e7a4e8232d01"}, + {file = "mypy-1.4.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:470c969bb3f9a9efcedbadcd19a74ffb34a25f8e6b0e02dae7c0e71f8372f97b"}, + {file = "mypy-1.4.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e5952d2d18b79f7dc25e62e014fe5a23eb1a3d2bc66318df8988a01b1a037c5b"}, + {file = "mypy-1.4.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:190b6bab0302cec4e9e6767d3eb66085aef2a1cc98fe04936d8a42ed2ba77bb7"}, + {file = "mypy-1.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9d40652cc4fe33871ad3338581dca3297ff5f2213d0df345bcfbde5162abf0c9"}, + {file = "mypy-1.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:01fd2e9f85622d981fd9063bfaef1aed6e336eaacca00892cd2d82801ab7c042"}, + {file = "mypy-1.4.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2460a58faeea905aeb1b9b36f5065f2dc9a9c6e4c992a6499a2360c6c74ceca3"}, + {file = "mypy-1.4.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2746d69a8196698146a3dbe29104f9eb6a2a4d8a27878d92169a6c0b74435b6"}, + {file = "mypy-1.4.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:ae704dcfaa180ff7c4cfbad23e74321a2b774f92ca77fd94ce1049175a21c97f"}, + {file = "mypy-1.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:43d24f6437925ce50139a310a64b2ab048cb2d3694c84c71c3f2a1626d8101dc"}, + {file = "mypy-1.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c482e1246726616088532b5e964e39765b6d1520791348e6c9dc3af25b233828"}, + {file = "mypy-1.4.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:43b592511672017f5b1a483527fd2684347fdffc041c9ef53428c8dc530f79a3"}, + {file = "mypy-1.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:34a9239d5b3502c17f07fd7c0b2ae6b7dd7d7f6af35fbb5072c6208e76295816"}, + {file = "mypy-1.4.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = 
"sha256:5703097c4936bbb9e9bce41478c8d08edd2865e177dc4c52be759f81ee4dd26c"}, + {file = "mypy-1.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:e02d700ec8d9b1859790c0475df4e4092c7bf3272a4fd2c9f33d87fac4427b8f"}, + {file = "mypy-1.4.1-py3-none-any.whl", hash = "sha256:45d32cec14e7b97af848bddd97d85ea4f0db4d5a149ed9676caa4eb2f7402bb4"}, + {file = "mypy-1.4.1.tar.gz", hash = "sha256:9bbcd9ab8ea1f2e1c8031c21445b511442cc45c89951e49bbf852cbb70755b1b"}, +] + +[package.dependencies] +mypy-extensions = ">=1.0.0" +tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""} +typing-extensions = ">=4.1.0" + +[package.extras] +dmypy = ["psutil (>=4.0)"] +install-types = ["pip"] +python2 = ["typed-ast (>=1.4.0,<2)"] +reports = ["lxml"] + +[[package]] +name = "mypy-extensions" +version = "1.0.0" +description = "Type system extensions for programs checked with the mypy type checker." +optional = false +python-versions = ">=3.5" +files = [ + {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"}, + {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"}, +] + +[[package]] +name = "packaging" +version = "24.1" +description = "Core utilities for Python packages" +optional = false +python-versions = ">=3.8" +files = [ + {file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"}, + {file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"}, +] + +[[package]] +name = "platformdirs" +version = "4.2.2" +description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`." 
+optional = false +python-versions = ">=3.8" +files = [ + {file = "platformdirs-4.2.2-py3-none-any.whl", hash = "sha256:2d7a1657e36a80ea911db832a8a6ece5ee53d8de21edd5cc5879af6530b1bfee"}, + {file = "platformdirs-4.2.2.tar.gz", hash = "sha256:38b7b51f512eed9e84a22788b4bce1de17c0adb134d6becb09836e37d8654cd3"}, +] + +[package.extras] +docs = ["furo (>=2023.9.10)", "proselint (>=0.13)", "sphinx (>=7.2.6)", "sphinx-autodoc-typehints (>=1.25.2)"] +test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.4.3)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)"] +type = ["mypy (>=1.8)"] + +[[package]] +name = "pluggy" +version = "1.5.0" +description = "plugin and hook calling mechanisms for python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"}, + {file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"}, +] + +[package.extras] +dev = ["pre-commit", "tox"] +testing = ["pytest", "pytest-benchmark"] + +[[package]] +name = "pycodestyle" +version = "2.9.1" +description = "Python style guide checker" +optional = false +python-versions = ">=3.6" +files = [ + {file = "pycodestyle-2.9.1-py2.py3-none-any.whl", hash = "sha256:d1735fc58b418fd7c5f658d28d943854f8a849b01a5d0a1e6f3f3fdd0166804b"}, + {file = "pycodestyle-2.9.1.tar.gz", hash = "sha256:2c9607871d58c76354b697b42f5d57e1ada7d261c261efac224b664affdc5785"}, +] + +[[package]] +name = "pydantic" +version = "2.7.4" +description = "Data validation using Python type hints" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pydantic-2.7.4-py3-none-any.whl", hash = "sha256:ee8538d41ccb9c0a9ad3e0e5f07bf15ed8015b481ced539a1759d8cc89ae90d0"}, + {file = "pydantic-2.7.4.tar.gz", hash = "sha256:0c84efd9548d545f63ac0060c1e4d39bb9b14db8b3c0652338aecc07b5adec52"}, +] + +[package.dependencies] +annotated-types = ">=0.4.0" +pydantic-core = 
"2.18.4" +typing-extensions = ">=4.6.1" + +[package.extras] +email = ["email-validator (>=2.0.0)"] + +[[package]] +name = "pydantic-core" +version = "2.18.4" +description = "Core functionality for Pydantic validation and serialization" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pydantic_core-2.18.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:f76d0ad001edd426b92233d45c746fd08f467d56100fd8f30e9ace4b005266e4"}, + {file = "pydantic_core-2.18.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:59ff3e89f4eaf14050c8022011862df275b552caef8082e37b542b066ce1ff26"}, + {file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a55b5b16c839df1070bc113c1f7f94a0af4433fcfa1b41799ce7606e5c79ce0a"}, + {file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4d0dcc59664fcb8974b356fe0a18a672d6d7cf9f54746c05f43275fc48636851"}, + {file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8951eee36c57cd128f779e641e21eb40bc5073eb28b2d23f33eb0ef14ffb3f5d"}, + {file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4701b19f7e3a06ea655513f7938de6f108123bf7c86bbebb1196eb9bd35cf724"}, + {file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e00a3f196329e08e43d99b79b286d60ce46bed10f2280d25a1718399457e06be"}, + {file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:97736815b9cc893b2b7f663628e63f436018b75f44854c8027040e05230eeddb"}, + {file = "pydantic_core-2.18.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:6891a2ae0e8692679c07728819b6e2b822fb30ca7445f67bbf6509b25a96332c"}, + {file = "pydantic_core-2.18.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:bc4ff9805858bd54d1a20efff925ccd89c9d2e7cf4986144b30802bf78091c3e"}, + {file = 
"pydantic_core-2.18.4-cp310-none-win32.whl", hash = "sha256:1b4de2e51bbcb61fdebd0ab86ef28062704f62c82bbf4addc4e37fa4b00b7cbc"}, + {file = "pydantic_core-2.18.4-cp310-none-win_amd64.whl", hash = "sha256:6a750aec7bf431517a9fd78cb93c97b9b0c496090fee84a47a0d23668976b4b0"}, + {file = "pydantic_core-2.18.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:942ba11e7dfb66dc70f9ae66b33452f51ac7bb90676da39a7345e99ffb55402d"}, + {file = "pydantic_core-2.18.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b2ebef0e0b4454320274f5e83a41844c63438fdc874ea40a8b5b4ecb7693f1c4"}, + {file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a642295cd0c8df1b86fc3dced1d067874c353a188dc8e0f744626d49e9aa51c4"}, + {file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f09baa656c904807e832cf9cce799c6460c450c4ad80803517032da0cd062e2"}, + {file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:98906207f29bc2c459ff64fa007afd10a8c8ac080f7e4d5beff4c97086a3dabd"}, + {file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:19894b95aacfa98e7cb093cd7881a0c76f55731efad31073db4521e2b6ff5b7d"}, + {file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0fbbdc827fe5e42e4d196c746b890b3d72876bdbf160b0eafe9f0334525119c8"}, + {file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f85d05aa0918283cf29a30b547b4df2fbb56b45b135f9e35b6807cb28bc47951"}, + {file = "pydantic_core-2.18.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e85637bc8fe81ddb73fda9e56bab24560bdddfa98aa64f87aaa4e4b6730c23d2"}, + {file = "pydantic_core-2.18.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2f5966897e5461f818e136b8451d0551a2e77259eb0f73a837027b47dc95dab9"}, + {file = "pydantic_core-2.18.4-cp311-none-win32.whl", hash = 
"sha256:44c7486a4228413c317952e9d89598bcdfb06399735e49e0f8df643e1ccd0558"}, + {file = "pydantic_core-2.18.4-cp311-none-win_amd64.whl", hash = "sha256:8a7164fe2005d03c64fd3b85649891cd4953a8de53107940bf272500ba8a788b"}, + {file = "pydantic_core-2.18.4-cp311-none-win_arm64.whl", hash = "sha256:4e99bc050fe65c450344421017f98298a97cefc18c53bb2f7b3531eb39bc7805"}, + {file = "pydantic_core-2.18.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:6f5c4d41b2771c730ea1c34e458e781b18cc668d194958e0112455fff4e402b2"}, + {file = "pydantic_core-2.18.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2fdf2156aa3d017fddf8aea5adfba9f777db1d6022d392b682d2a8329e087cef"}, + {file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4748321b5078216070b151d5271ef3e7cc905ab170bbfd27d5c83ee3ec436695"}, + {file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:847a35c4d58721c5dc3dba599878ebbdfd96784f3fb8bb2c356e123bdcd73f34"}, + {file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3c40d4eaad41f78e3bbda31b89edc46a3f3dc6e171bf0ecf097ff7a0ffff7cb1"}, + {file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:21a5e440dbe315ab9825fcd459b8814bb92b27c974cbc23c3e8baa2b76890077"}, + {file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:01dd777215e2aa86dfd664daed5957704b769e726626393438f9c87690ce78c3"}, + {file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4b06beb3b3f1479d32befd1f3079cc47b34fa2da62457cdf6c963393340b56e9"}, + {file = "pydantic_core-2.18.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:564d7922e4b13a16b98772441879fcdcbe82ff50daa622d681dd682175ea918c"}, + {file = "pydantic_core-2.18.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = 
"sha256:0eb2a4f660fcd8e2b1c90ad566db2b98d7f3f4717c64fe0a83e0adb39766d5b8"}, + {file = "pydantic_core-2.18.4-cp312-none-win32.whl", hash = "sha256:8b8bab4c97248095ae0c4455b5a1cd1cdd96e4e4769306ab19dda135ea4cdb07"}, + {file = "pydantic_core-2.18.4-cp312-none-win_amd64.whl", hash = "sha256:14601cdb733d741b8958224030e2bfe21a4a881fb3dd6fbb21f071cabd48fa0a"}, + {file = "pydantic_core-2.18.4-cp312-none-win_arm64.whl", hash = "sha256:c1322d7dd74713dcc157a2b7898a564ab091ca6c58302d5c7b4c07296e3fd00f"}, + {file = "pydantic_core-2.18.4-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:823be1deb01793da05ecb0484d6c9e20baebb39bd42b5d72636ae9cf8350dbd2"}, + {file = "pydantic_core-2.18.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:ebef0dd9bf9b812bf75bda96743f2a6c5734a02092ae7f721c048d156d5fabae"}, + {file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae1d6df168efb88d7d522664693607b80b4080be6750c913eefb77e34c12c71a"}, + {file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f9899c94762343f2cc2fc64c13e7cae4c3cc65cdfc87dd810a31654c9b7358cc"}, + {file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:99457f184ad90235cfe8461c4d70ab7dd2680e28821c29eca00252ba90308c78"}, + {file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:18f469a3d2a2fdafe99296a87e8a4c37748b5080a26b806a707f25a902c040a8"}, + {file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7cdf28938ac6b8b49ae5e92f2735056a7ba99c9b110a474473fd71185c1af5d"}, + {file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:938cb21650855054dc54dfd9120a851c974f95450f00683399006aa6e8abb057"}, + {file = "pydantic_core-2.18.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:44cd83ab6a51da80fb5adbd9560e26018e2ac7826f9626bc06ca3dc074cd198b"}, + {file = 
"pydantic_core-2.18.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:972658f4a72d02b8abfa2581d92d59f59897d2e9f7e708fdabe922f9087773af"}, + {file = "pydantic_core-2.18.4-cp38-none-win32.whl", hash = "sha256:1d886dc848e60cb7666f771e406acae54ab279b9f1e4143babc9c2258213daa2"}, + {file = "pydantic_core-2.18.4-cp38-none-win_amd64.whl", hash = "sha256:bb4462bd43c2460774914b8525f79b00f8f407c945d50881568f294c1d9b4443"}, + {file = "pydantic_core-2.18.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:44a688331d4a4e2129140a8118479443bd6f1905231138971372fcde37e43528"}, + {file = "pydantic_core-2.18.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a2fdd81edd64342c85ac7cf2753ccae0b79bf2dfa063785503cb85a7d3593223"}, + {file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:86110d7e1907ab36691f80b33eb2da87d780f4739ae773e5fc83fb272f88825f"}, + {file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:46387e38bd641b3ee5ce247563b60c5ca098da9c56c75c157a05eaa0933ed154"}, + {file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:123c3cec203e3f5ac7b000bd82235f1a3eced8665b63d18be751f115588fea30"}, + {file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dc1803ac5c32ec324c5261c7209e8f8ce88e83254c4e1aebdc8b0a39f9ddb443"}, + {file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53db086f9f6ab2b4061958d9c276d1dbe3690e8dd727d6abf2321d6cce37fa94"}, + {file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:abc267fa9837245cc28ea6929f19fa335f3dc330a35d2e45509b6566dc18be23"}, + {file = "pydantic_core-2.18.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a0d829524aaefdebccb869eed855e2d04c21d2d7479b6cada7ace5448416597b"}, + {file = "pydantic_core-2.18.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = 
"sha256:509daade3b8649f80d4e5ff21aa5673e4ebe58590b25fe42fac5f0f52c6f034a"}, + {file = "pydantic_core-2.18.4-cp39-none-win32.whl", hash = "sha256:ca26a1e73c48cfc54c4a76ff78df3727b9d9f4ccc8dbee4ae3f73306a591676d"}, + {file = "pydantic_core-2.18.4-cp39-none-win_amd64.whl", hash = "sha256:c67598100338d5d985db1b3d21f3619ef392e185e71b8d52bceacc4a7771ea7e"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:574d92eac874f7f4db0ca653514d823a0d22e2354359d0759e3f6a406db5d55d"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1f4d26ceb5eb9eed4af91bebeae4b06c3fb28966ca3a8fb765208cf6b51102ab"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77450e6d20016ec41f43ca4a6c63e9fdde03f0ae3fe90e7c27bdbeaece8b1ed4"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d323a01da91851a4f17bf592faf46149c9169d68430b3146dcba2bb5e5719abc"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:43d447dd2ae072a0065389092a231283f62d960030ecd27565672bd40746c507"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:578e24f761f3b425834f297b9935e1ce2e30f51400964ce4801002435a1b41ef"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:81b5efb2f126454586d0f40c4d834010979cb80785173d1586df845a632e4e6d"}, + {file = "pydantic_core-2.18.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ab86ce7c8f9bea87b9d12c7f0af71102acbf5ecbc66c17796cff45dae54ef9a5"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:90afc12421df2b1b4dcc975f814e21bc1754640d502a2fbcc6d41e77af5ec312"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = 
"sha256:51991a89639a912c17bef4b45c87bd83593aee0437d8102556af4885811d59f5"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:293afe532740370aba8c060882f7d26cfd00c94cae32fd2e212a3a6e3b7bc15e"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b48ece5bde2e768197a2d0f6e925f9d7e3e826f0ad2271120f8144a9db18d5c8"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:eae237477a873ab46e8dd748e515c72c0c804fb380fbe6c85533c7de51f23a8f"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:834b5230b5dfc0c1ec37b2fda433b271cbbc0e507560b5d1588e2cc1148cf1ce"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e858ac0a25074ba4bce653f9b5d0a85b7456eaddadc0ce82d3878c22489fa4ee"}, + {file = "pydantic_core-2.18.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2fd41f6eff4c20778d717af1cc50eca52f5afe7805ee530a4fbd0bae284f16e9"}, + {file = "pydantic_core-2.18.4.tar.gz", hash = "sha256:ec3beeada09ff865c344ff3bc2f427f5e6c26401cc6113d77e372c3fdac73864"}, +] + +[package.dependencies] +typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0" + +[[package]] +name = "pyflakes" +version = "2.5.0" +description = "passive checker of Python programs" +optional = false +python-versions = ">=3.6" +files = [ + {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"}, + {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"}, +] + +[[package]] +name = "pyproject-api" +version = "1.7.1" +description = "API to interact with the python pyproject.toml based projects" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pyproject_api-1.7.1-py3-none-any.whl", hash = 
"sha256:2dc1654062c2b27733d8fd4cdda672b22fe8741ef1dde8e3a998a9547b071eeb"}, + {file = "pyproject_api-1.7.1.tar.gz", hash = "sha256:7ebc6cd10710f89f4cf2a2731710a98abce37ebff19427116ff2174c9236a827"}, +] + +[package.dependencies] +packaging = ">=24.1" +tomli = {version = ">=2.0.1", markers = "python_version < \"3.11\""} + +[package.extras] +docs = ["furo (>=2024.5.6)", "sphinx-autodoc-typehints (>=2.2.1)"] +testing = ["covdefaults (>=2.3)", "pytest (>=8.2.2)", "pytest-cov (>=5)", "pytest-mock (>=3.14)", "setuptools (>=70.1)"] + +[[package]] +name = "pytest" +version = "8.2.2" +description = "pytest: simple powerful testing with Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pytest-8.2.2-py3-none-any.whl", hash = "sha256:c434598117762e2bd304e526244f67bf66bbd7b5d6cf22138be51ff661980343"}, + {file = "pytest-8.2.2.tar.gz", hash = "sha256:de4bb8104e201939ccdc688b27a89a7be2079b22e2bd2b07f806b6ba71117977"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "sys_platform == \"win32\""} +exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=1.5,<2.0" +tomli = {version = ">=1", markers = "python_version < \"3.11\""} + +[package.extras] +dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] + +[[package]] +name = "python-dateutil" +version = "2.9.0.post0" +description = "Extensions to the standard Python datetime module" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" +files = [ + {file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"}, + {file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"}, +] + +[package.dependencies] +six = ">=1.5" + +[[package]] +name = "s3transfer" +version = "0.10.2" +description = 
"An Amazon S3 Transfer Manager" +optional = false +python-versions = ">=3.8" +files = [ + {file = "s3transfer-0.10.2-py3-none-any.whl", hash = "sha256:eca1c20de70a39daee580aef4986996620f365c4e0fda6a86100231d62f1bf69"}, + {file = "s3transfer-0.10.2.tar.gz", hash = "sha256:0711534e9356d3cc692fdde846b4a1e4b0cb6519971860796e6bc4c7aea00ef6"}, +] + +[package.dependencies] +botocore = ">=1.33.2,<2.0a.0" + +[package.extras] +crt = ["botocore[crt] (>=1.33.2,<2.0a.0)"] + +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] + +[[package]] +name = "tomli" +version = "2.0.1" +description = "A lil' TOML parser" +optional = false +python-versions = ">=3.7" +files = [ + {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"}, + {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"}, +] + +[[package]] +name = "tox" +version = "4.15.1" +description = "tox is a generic virtualenv management and test command line tool" +optional = false +python-versions = ">=3.8" +files = [ + {file = "tox-4.15.1-py3-none-any.whl", hash = "sha256:f00a5dc4222b358e69694e47e3da0227ac41253509bca9f45aa8f012053e8d9d"}, + {file = "tox-4.15.1.tar.gz", hash = "sha256:53a092527d65e873e39213ebd4bd027a64623320b6b0326136384213f95b7076"}, +] + +[package.dependencies] +cachetools = ">=5.3.2" +chardet = ">=5.2" +colorama = ">=0.4.6" +filelock = ">=3.13.1" +packaging = ">=23.2" +platformdirs = ">=4.1" +pluggy = ">=1.3" +pyproject-api = ">=1.6.1" +tomli = {version = ">=2.0.1", markers = "python_version < \"3.11\""} +virtualenv = 
">=20.25" + +[package.extras] +docs = ["furo (>=2023.9.10)", "sphinx (>=7.2.6)", "sphinx-argparse-cli (>=1.11.1)", "sphinx-autodoc-typehints (>=1.25.2)", "sphinx-copybutton (>=0.5.2)", "sphinx-inline-tabs (>=2023.4.21)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.11)"] +testing = ["build[virtualenv] (>=1.0.3)", "covdefaults (>=2.3)", "detect-test-pollution (>=1.2)", "devpi-process (>=1)", "diff-cover (>=8.0.2)", "distlib (>=0.3.8)", "flaky (>=3.7)", "hatch-vcs (>=0.4)", "hatchling (>=1.21)", "psutil (>=5.9.7)", "pytest (>=7.4.4)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)", "pytest-xdist (>=3.5)", "re-assert (>=1.1)", "time-machine (>=2.13)", "wheel (>=0.42)"] + +[[package]] +name = "types-python-dateutil" +version = "2.9.0.20240316" +description = "Typing stubs for python-dateutil" +optional = false +python-versions = ">=3.8" +files = [ + {file = "types-python-dateutil-2.9.0.20240316.tar.gz", hash = "sha256:5d2f2e240b86905e40944dd787db6da9263f0deabef1076ddaed797351ec0202"}, + {file = "types_python_dateutil-2.9.0.20240316-py3-none-any.whl", hash = "sha256:6b8cb66d960771ce5ff974e9dd45e38facb81718cc1e208b10b1baccbfdbee3b"}, +] + +[[package]] +name = "typing-extensions" +version = "4.12.2" +description = "Backported and Experimental Type Hints for Python 3.8+" +optional = false +python-versions = ">=3.8" +files = [ + {file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"}, + {file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"}, +] + +[[package]] +name = "urllib3" +version = "1.26.19" +description = "HTTP library with thread-safe connection pooling, file post, and more." 
+optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" +files = [ + {file = "urllib3-1.26.19-py2.py3-none-any.whl", hash = "sha256:37a0344459b199fce0e80b0d3569837ec6b6937435c5244e7fd73fa6006830f3"}, + {file = "urllib3-1.26.19.tar.gz", hash = "sha256:3e3d753a8618b86d7de333b4223005f68720bcd6a7d2bcb9fbd2229ec7c1e429"}, +] + +[package.extras] +brotli = ["brotli (==1.0.9)", "brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"] +secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"] +socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] + +[[package]] +name = "virtualenv" +version = "20.26.3" +description = "Virtual Python Environment builder" +optional = false +python-versions = ">=3.7" +files = [ + {file = "virtualenv-20.26.3-py3-none-any.whl", hash = "sha256:8cc4a31139e796e9a7de2cd5cf2489de1217193116a8fd42328f1bd65f434589"}, + {file = "virtualenv-20.26.3.tar.gz", hash = "sha256:4c43a2a236279d9ea36a0d76f98d84bd6ca94ac4e0f4a3b9d46d05e10fea542a"}, +] + +[package.dependencies] +distlib = ">=0.3.7,<1" +filelock = ">=3.12.2,<4" +platformdirs = ">=3.9.1,<5" + +[package.extras] +docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2,!=7.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"] +test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8)", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10)"] + +[metadata] +lock-version = "2.0" +python-versions = "^3.8" +content-hash = "c4842ff5ce0e433e93b0d1270e77ba79cf84362da1ce0b63b1940c50cf70da00" diff --git a/regtests/client/python/polaris/__init__.py b/regtests/client/python/polaris/__init__.py new file mode 100644 index 0000000000..8d220260f1 --- /dev/null +++ 
b/regtests/client/python/polaris/__init__.py @@ -0,0 +1,15 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# \ No newline at end of file diff --git a/regtests/client/python/polaris/catalog/__init__.py b/regtests/client/python/polaris/catalog/__init__.py new file mode 100644 index 0000000000..25a4c14b29 --- /dev/null +++ b/regtests/client/python/polaris/catalog/__init__.py @@ -0,0 +1,160 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +# flake8: noqa + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +__version__ = "1.0.0" + +# import apis into sdk package +from polaris.catalog.api.iceberg_catalog_api import IcebergCatalogAPI +from polaris.catalog.api.iceberg_configuration_api import IcebergConfigurationAPI +from polaris.catalog.api.iceberg_o_auth2_api import IcebergOAuth2API + +# import ApiClient +from polaris.catalog.api_response import ApiResponse +from polaris.catalog.api_client import ApiClient +from polaris.catalog.configuration import Configuration +from polaris.catalog.exceptions import OpenApiException +from polaris.catalog.exceptions import ApiTypeError +from polaris.catalog.exceptions import ApiValueError +from polaris.catalog.exceptions import ApiKeyError +from polaris.catalog.exceptions import ApiAttributeError +from polaris.catalog.exceptions import ApiException + +# import models into sdk package +from polaris.catalog.models.add_partition_spec_update import AddPartitionSpecUpdate +from polaris.catalog.models.add_schema_update import AddSchemaUpdate +from polaris.catalog.models.add_snapshot_update import AddSnapshotUpdate +from polaris.catalog.models.add_sort_order_update import AddSortOrderUpdate +from polaris.catalog.models.add_view_version_update import AddViewVersionUpdate +from polaris.catalog.models.and_or_expression import AndOrExpression +from polaris.catalog.models.assert_create import AssertCreate +from polaris.catalog.models.assert_current_schema_id import AssertCurrentSchemaId +from polaris.catalog.models.assert_default_sort_order_id import AssertDefaultSortOrderId +from polaris.catalog.models.assert_default_spec_id import AssertDefaultSpecId +from polaris.catalog.models.assert_last_assigned_field_id import AssertLastAssignedFieldId +from polaris.catalog.models.assert_last_assigned_partition_id import AssertLastAssignedPartitionId +from polaris.catalog.models.assert_ref_snapshot_id import AssertRefSnapshotId +from polaris.catalog.models.assert_table_uuid import AssertTableUUID +from 
polaris.catalog.models.assert_view_uuid import AssertViewUUID +from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.blob_metadata import BlobMetadata +from polaris.catalog.models.catalog_config import CatalogConfig +from polaris.catalog.models.commit_report import CommitReport +from polaris.catalog.models.commit_table_request import CommitTableRequest +from polaris.catalog.models.commit_table_response import CommitTableResponse +from polaris.catalog.models.commit_transaction_request import CommitTransactionRequest +from polaris.catalog.models.commit_view_request import CommitViewRequest +from polaris.catalog.models.content_file import ContentFile +from polaris.catalog.models.count_map import CountMap +from polaris.catalog.models.counter_result import CounterResult +from polaris.catalog.models.create_namespace_request import CreateNamespaceRequest +from polaris.catalog.models.create_namespace_response import CreateNamespaceResponse +from polaris.catalog.models.create_table_request import CreateTableRequest +from polaris.catalog.models.create_view_request import CreateViewRequest +from polaris.catalog.models.data_file import DataFile +from polaris.catalog.models.equality_delete_file import EqualityDeleteFile +from polaris.catalog.models.error_model import ErrorModel +from polaris.catalog.models.expression import Expression +from polaris.catalog.models.file_format import FileFormat +from polaris.catalog.models.get_namespace_response import GetNamespaceResponse +from polaris.catalog.models.iceberg_error_response import IcebergErrorResponse +from polaris.catalog.models.list_namespaces_response import ListNamespacesResponse +from polaris.catalog.models.list_tables_response import ListTablesResponse +from polaris.catalog.models.list_type import ListType +from polaris.catalog.models.literal_expression import LiteralExpression +from 
polaris.catalog.models.load_table_result import LoadTableResult +from polaris.catalog.models.load_view_result import LoadViewResult +from polaris.catalog.models.map_type import MapType +from polaris.catalog.models.metadata_log_inner import MetadataLogInner +from polaris.catalog.models.metric_result import MetricResult +from polaris.catalog.models.model_schema import ModelSchema +from polaris.catalog.models.not_expression import NotExpression +from polaris.catalog.models.notification_request import NotificationRequest +from polaris.catalog.models.notification_type import NotificationType +from polaris.catalog.models.null_order import NullOrder +from polaris.catalog.models.o_auth_error import OAuthError +from polaris.catalog.models.o_auth_token_response import OAuthTokenResponse +from polaris.catalog.models.partition_field import PartitionField +from polaris.catalog.models.partition_spec import PartitionSpec +from polaris.catalog.models.partition_statistics_file import PartitionStatisticsFile +from polaris.catalog.models.position_delete_file import PositionDeleteFile +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from polaris.catalog.models.register_table_request import RegisterTableRequest +from polaris.catalog.models.remove_partition_statistics_update import RemovePartitionStatisticsUpdate +from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate +from polaris.catalog.models.remove_snapshot_ref_update import RemoveSnapshotRefUpdate +from polaris.catalog.models.remove_snapshots_update import RemoveSnapshotsUpdate +from polaris.catalog.models.remove_statistics_update import RemoveStatisticsUpdate +from polaris.catalog.models.rename_table_request import RenameTableRequest +from polaris.catalog.models.report_metrics_request import ReportMetricsRequest +from polaris.catalog.models.sql_view_representation import SQLViewRepresentation +from polaris.catalog.models.scan_report import ScanReport +from 
polaris.catalog.models.set_current_schema_update import SetCurrentSchemaUpdate +from polaris.catalog.models.set_current_view_version_update import SetCurrentViewVersionUpdate +from polaris.catalog.models.set_default_sort_order_update import SetDefaultSortOrderUpdate +from polaris.catalog.models.set_default_spec_update import SetDefaultSpecUpdate +from polaris.catalog.models.set_expression import SetExpression +from polaris.catalog.models.set_location_update import SetLocationUpdate +from polaris.catalog.models.set_partition_statistics_update import SetPartitionStatisticsUpdate +from polaris.catalog.models.set_properties_update import SetPropertiesUpdate +from polaris.catalog.models.set_snapshot_ref_update import SetSnapshotRefUpdate +from polaris.catalog.models.set_statistics_update import SetStatisticsUpdate +from polaris.catalog.models.snapshot import Snapshot +from polaris.catalog.models.snapshot_log_inner import SnapshotLogInner +from polaris.catalog.models.snapshot_reference import SnapshotReference +from polaris.catalog.models.snapshot_summary import SnapshotSummary +from polaris.catalog.models.sort_direction import SortDirection +from polaris.catalog.models.sort_field import SortField +from polaris.catalog.models.sort_order import SortOrder +from polaris.catalog.models.statistics_file import StatisticsFile +from polaris.catalog.models.struct_field import StructField +from polaris.catalog.models.struct_type import StructType +from polaris.catalog.models.table_identifier import TableIdentifier +from polaris.catalog.models.table_metadata import TableMetadata +from polaris.catalog.models.table_requirement import TableRequirement +from polaris.catalog.models.table_update import TableUpdate +from polaris.catalog.models.table_update_notification import TableUpdateNotification +from polaris.catalog.models.term import Term +from polaris.catalog.models.timer_result import TimerResult +from polaris.catalog.models.token_type import TokenType +from 
polaris.catalog.models.transform_term import TransformTerm +from polaris.catalog.models.type import Type +from polaris.catalog.models.unary_expression import UnaryExpression +from polaris.catalog.models.update_namespace_properties_request import UpdateNamespacePropertiesRequest +from polaris.catalog.models.update_namespace_properties_response import UpdateNamespacePropertiesResponse +from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate +from polaris.catalog.models.value_map import ValueMap +from polaris.catalog.models.view_history_entry import ViewHistoryEntry +from polaris.catalog.models.view_metadata import ViewMetadata +from polaris.catalog.models.view_representation import ViewRepresentation +from polaris.catalog.models.view_requirement import ViewRequirement +from polaris.catalog.models.view_update import ViewUpdate +from polaris.catalog.models.view_version import ViewVersion diff --git a/regtests/client/python/polaris/catalog/api/__init__.py b/regtests/client/python/polaris/catalog/api/__init__.py new file mode 100644 index 0000000000..fd8a960a30 --- /dev/null +++ b/regtests/client/python/polaris/catalog/api/__init__.py @@ -0,0 +1,22 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# flake8: noqa + +# import apis into api package +from polaris.catalog.api.iceberg_catalog_api import IcebergCatalogAPI +from polaris.catalog.api.iceberg_configuration_api import IcebergConfigurationAPI +from polaris.catalog.api.iceberg_o_auth2_api import IcebergOAuth2API + diff --git a/regtests/client/python/polaris/catalog/api/iceberg_catalog_api.py b/regtests/client/python/polaris/catalog/api/iceberg_catalog_api.py new file mode 100644 index 0000000000..87fc6e5eb0 --- /dev/null +++ b/regtests/client/python/polaris/catalog/api/iceberg_catalog_api.py @@ -0,0 +1,7821 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + +import warnings +from pydantic import validate_call, Field, StrictFloat, StrictStr, StrictInt +from typing import Any, Dict, List, Optional, Tuple, Union +from typing_extensions import Annotated + +from pydantic import Field, StrictBool, StrictStr, field_validator +from typing import Optional +from typing_extensions import Annotated +from polaris.catalog.models.commit_table_request import CommitTableRequest +from polaris.catalog.models.commit_table_response import CommitTableResponse +from polaris.catalog.models.commit_transaction_request import CommitTransactionRequest +from polaris.catalog.models.commit_view_request import CommitViewRequest +from polaris.catalog.models.create_namespace_request import CreateNamespaceRequest +from polaris.catalog.models.create_namespace_response import CreateNamespaceResponse +from polaris.catalog.models.create_table_request import CreateTableRequest +from polaris.catalog.models.create_view_request import CreateViewRequest +from polaris.catalog.models.get_namespace_response import GetNamespaceResponse +from polaris.catalog.models.list_namespaces_response import ListNamespacesResponse +from polaris.catalog.models.list_tables_response import ListTablesResponse +from polaris.catalog.models.load_table_result import LoadTableResult +from polaris.catalog.models.load_view_result import LoadViewResult +from polaris.catalog.models.notification_request import NotificationRequest +from polaris.catalog.models.register_table_request import RegisterTableRequest +from polaris.catalog.models.rename_table_request import RenameTableRequest +from polaris.catalog.models.report_metrics_request import ReportMetricsRequest +from polaris.catalog.models.update_namespace_properties_request import UpdateNamespacePropertiesRequest +from polaris.catalog.models.update_namespace_properties_response import UpdateNamespacePropertiesResponse + +from polaris.catalog.api_client import ApiClient, RequestSerialized +from 
polaris.catalog.api_response import ApiResponse +from polaris.catalog.rest import RESTResponseType + + +class IcebergCatalogAPI: + """NOTE: This class is auto generated by OpenAPI Generator + Ref: https://openapi-generator.tech + + Do not edit the class manually. + """ + + def __init__(self, api_client=None) -> None: + if api_client is None: + api_client = ApiClient.get_default() + self.api_client = api_client + + + @validate_call + def commit_transaction( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + commit_transaction_request: Annotated[CommitTransactionRequest, Field(description="Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Commit updates to multiple tables in an atomic operation + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param commit_transaction_request: Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. 
Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. (required) + :type commit_transaction_request: CommitTransactionRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._commit_transaction_serialize( + prefix=prefix, + commit_transaction_request=commit_transaction_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '500': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '502': "IcebergErrorResponse", + '504': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def commit_transaction_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + commit_transaction_request: Annotated[CommitTransactionRequest, Field(description="Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. 
For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Commit updates to multiple tables in an atomic operation + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param commit_transaction_request: Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. (required) + :type commit_transaction_request: CommitTransactionRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._commit_transaction_serialize( + prefix=prefix, + commit_transaction_request=commit_transaction_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '500': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '502': "IcebergErrorResponse", + '504': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def commit_transaction_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + commit_transaction_request: Annotated[CommitTransactionRequest, Field(description="Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. 
Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Commit updates to multiple tables in an atomic operation + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param commit_transaction_request: Commit updates to multiple tables in an atomic operation A commit for a single table consists of a table identifier with requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. (required) + :type commit_transaction_request: CommitTransactionRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._commit_transaction_serialize( + prefix=prefix, + commit_transaction_request=commit_transaction_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '500': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '502': "IcebergErrorResponse", + '504': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _commit_transaction_serialize( + self, + prefix, + commit_transaction_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if 
commit_transaction_request is not None: + _body_params = commit_transaction_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/transactions/commit', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_namespace( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + create_namespace_request: CreateNamespaceRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> CreateNamespaceResponse: + """Create a namespace + + Create a namespace, with an optional set of properties. The server might also add properties, such as `last_modified_time` etc. 
+ + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param create_namespace_request: (required) + :type create_namespace_request: CreateNamespaceRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_namespace_serialize( + prefix=prefix, + create_namespace_request=create_namespace_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CreateNamespaceResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '406': "ErrorModel", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_namespace_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + create_namespace_request: CreateNamespaceRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[CreateNamespaceResponse]: + """Create a namespace + + Create a namespace, with an optional set of properties. The server might also add properties, such as `last_modified_time` etc. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param create_namespace_request: (required) + :type create_namespace_request: CreateNamespaceRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. 
It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_namespace_serialize( + prefix=prefix, + create_namespace_request=create_namespace_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CreateNamespaceResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '406': "ErrorModel", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_namespace_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + create_namespace_request: CreateNamespaceRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], 
+ Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Create a namespace + + Create a namespace, with an optional set of properties. The server might also add properties, such as `last_modified_time` etc. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param create_namespace_request: (required) + :type create_namespace_request: CreateNamespaceRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_namespace_serialize( + prefix=prefix, + create_namespace_request=create_namespace_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CreateNamespaceResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '406': "ErrorModel", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_namespace_serialize( + self, + prefix, + create_namespace_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if create_namespace_request is not None: + _body_params = create_namespace_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + 
_header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_table( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + create_table_request: CreateTableRequest, + x_iceberg_access_delegation: Annotated[Optional[StrictStr], Field(description="Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. ")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> LoadTableResult: + """Create a table in the given namespace + + Create a table or start a create transaction, like atomic CTAS. 
If `stage-create` is false, the table is created immediately. If `stage-create` is true, the table is not created, but table metadata is initialized and returned. The service should prepare as needed for a commit to the table commit endpoint to complete the create transaction. The client uses the returned metadata to begin a transaction. To commit the transaction, the client sends all create and subsequent changes to the table commit route. Changes from the table create operation include changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the initial table state. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param create_table_request: (required) + :type create_table_request: CreateTableRequest + :param x_iceberg_access_delegation: Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. + :type x_iceberg_access_delegation: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_table_serialize( + prefix=prefix, + namespace=namespace, + create_table_request=create_table_request, + x_iceberg_access_delegation=x_iceberg_access_delegation, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_table_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + create_table_request: CreateTableRequest, + x_iceberg_access_delegation: Annotated[Optional[StrictStr], Field(description="Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. 
The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. ")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[LoadTableResult]: + """Create a table in the given namespace + + Create a table or start a create transaction, like atomic CTAS. If `stage-create` is false, the table is created immediately. If `stage-create` is true, the table is not created, but table metadata is initialized and returned. The service should prepare as needed for a commit to the table commit endpoint to complete the create transaction. The client uses the returned metadata to begin a transaction. To commit the transaction, the client sends all create and subsequent changes to the table commit route. Changes from the table create operation include changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the initial table state. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param create_table_request: (required) + :type create_table_request: CreateTableRequest + :param x_iceberg_access_delegation: Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. 
The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. + :type x_iceberg_access_delegation: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_table_serialize( + prefix=prefix, + namespace=namespace, + create_table_request=create_table_request, + x_iceberg_access_delegation=x_iceberg_access_delegation, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_table_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + create_table_request: CreateTableRequest, + x_iceberg_access_delegation: Annotated[Optional[StrictStr], Field(description="Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. 
")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Create a table in the given namespace + + Create a table or start a create transaction, like atomic CTAS. If `stage-create` is false, the table is created immediately. If `stage-create` is true, the table is not created, but table metadata is initialized and returned. The service should prepare as needed for a commit to the table commit endpoint to complete the create transaction. The client uses the returned metadata to begin a transaction. To commit the transaction, the client sends all create and subsequent changes to the table commit route. Changes from the table create operation include changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the initial table state. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param create_table_request: (required) + :type create_table_request: CreateTableRequest + :param x_iceberg_access_delegation: Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. 
+ :type x_iceberg_access_delegation: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_table_serialize( + prefix=prefix, + namespace=namespace, + create_table_request=create_table_request, + x_iceberg_access_delegation=x_iceberg_access_delegation, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_table_serialize( + self, + prefix, + namespace, + create_table_request, + x_iceberg_access_delegation, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + 
_host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header parameters + if x_iceberg_access_delegation is not None: + _header_params['X-Iceberg-Access-Delegation'] = x_iceberg_access_delegation + # process the form parameters + # process the body parameter + if create_table_request is not None: + _body_params = create_table_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_view( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, 
Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + create_view_request: CreateViewRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> LoadViewResult: + """Create a view in the given namespace + + Create a view in the given namespace. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param create_view_request: (required) + :type create_view_request: CreateViewRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._create_view_serialize( + prefix=prefix, + namespace=namespace, + create_view_request=create_view_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_view_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + create_view_request: CreateViewRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[LoadViewResult]: + """Create a view in the given namespace + + Create a view in the given namespace. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param create_view_request: (required) + :type create_view_request: CreateViewRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._create_view_serialize( + prefix=prefix, + namespace=namespace, + create_view_request=create_view_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_view_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + create_view_request: CreateViewRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Create a view in the given namespace + + Create a view in the given namespace. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param create_view_request: (required) + :type create_view_request: CreateViewRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._create_view_serialize( + prefix=prefix, + namespace=namespace, + create_view_request=create_view_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_view_serialize( + self, + prefix, + namespace, + create_view_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if create_view_request is not None: + _body_params = create_view_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if 
_default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/views', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def drop_namespace( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Drop a namespace from the catalog. Namespace must be empty. + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._drop_namespace_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def drop_namespace_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Drop a namespace from the catalog. Namespace must be empty. + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._drop_namespace_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def drop_namespace_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Drop a namespace from the catalog. Namespace must be empty. + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. 
It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._drop_namespace_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _drop_namespace_serialize( + self, + prefix, + namespace, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if
prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/v1/{prefix}/namespaces/{namespace}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def drop_table( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + purge_requested: Annotated[Optional[StrictBool], Field(description="Whether the user requested to purge the underlying table's data and metadata")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Drop a table from the catalog + + Remove a table from the catalog + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param purge_requested: Whether the user requested to purge the underlying table's data and metadata + :type purge_requested: bool + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._drop_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + purge_requested=purge_requested, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def drop_table_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + purge_requested: Annotated[Optional[StrictBool], Field(description="Whether the user requested to purge the underlying table's data and metadata")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Drop a table from the catalog + + Remove a table from the catalog + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param purge_requested: Whether the user requested to purge the underlying table's data and metadata + :type purge_requested: bool + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._drop_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + purge_requested=purge_requested, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def drop_table_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + purge_requested: Annotated[Optional[StrictBool], Field(description="Whether the user requested to purge the underlying table's data and metadata")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Drop a table from the catalog + + Remove a table from the catalog + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param purge_requested: Whether the user requested to purge the underlying table's data and metadata + :type purge_requested: bool + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._drop_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + purge_requested=purge_requested, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _drop_table_serialize( + self, + prefix, + namespace, + table, + purge_requested, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if table is not None: + _path_params['table'] = table + # process the query parameters + if purge_requested is not None: + + _query_params.append(('purgeRequested', purge_requested)) + + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + 
_header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables/{table}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def drop_view( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Drop a view from the catalog + + Remove a view from the catalog + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._drop_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def drop_view_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string.
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Drop a view from the catalog + + Remove a view from the catalog + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._drop_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def drop_view_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Drop a view from the catalog + + Remove a view from the catalog + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._drop_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _drop_view_serialize( + self, + prefix, + namespace, + view, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if view is not None: + _path_params['view'] = view + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/v1/{prefix}/namespaces/{namespace}/views/{view}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + 
post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_namespaces( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + parent: Annotated[Optional[StrictStr], Field(description="An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ListNamespacesResponse: + """List namespaces, optionally providing a parent namespace to list underneath + + List all namespaces at a certain level, optionally starting from a given parent namespace. If table accounting.tax.paid.info exists, using 'SELECT NAMESPACE IN accounting' would translate into `GET /namespaces?parent=accounting` and must return a namespace, [\"accounting\", \"tax\"] only. Using 'SELECT NAMESPACE IN accounting.tax' would translate into `GET /namespaces?parent=accounting%1Ftax` and must return a namespace, [\"accounting\", \"tax\", \"paid\"]. 
If `parent` is not provided, all top-level namespaces should be listed. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param parent: An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte. + :type parent: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._list_namespaces_serialize( + prefix=prefix, + page_token=page_token, + page_size=page_size, + parent=parent, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListNamespacesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_namespaces_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + parent: Annotated[Optional[StrictStr], Field(description="An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. 
If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[ListNamespacesResponse]: + """List namespaces, optionally providing a parent namespace to list underneath + + List all namespaces at a certain level, optionally starting from a given parent namespace. If table accounting.tax.paid.info exists, using 'SELECT NAMESPACE IN accounting' would translate into `GET /namespaces?parent=accounting` and must return a namespace, [\"accounting\", \"tax\"] only. Using 'SELECT NAMESPACE IN accounting.tax' would translate into `GET /namespaces?parent=accounting%1Ftax` and must return a namespace, [\"accounting\", \"tax\", \"paid\"]. If `parent` is not provided, all top-level namespaces should be listed. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param parent: An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte. + :type parent: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. 
It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_namespaces_serialize( + prefix=prefix, + page_token=page_token, + page_size=page_size, + parent=parent, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListNamespacesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_namespaces_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an
upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + parent: Annotated[Optional[StrictStr], Field(description="An optional namespace, underneath which to list namespaces. If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """List namespaces, optionally providing a parent namespace to list underneath + + List all namespaces at a certain level, optionally starting from a given parent namespace. If table accounting.tax.paid.info exists, using 'SELECT NAMESPACE IN accounting' would translate into `GET /namespaces?parent=accounting` and must return a namespace, [\"accounting\", \"tax\"] only. Using 'SELECT NAMESPACE IN accounting.tax' would translate into `GET /namespaces?parent=accounting%1Ftax` and must return a namespace, [\"accounting\", \"tax\", \"paid\"]. If `parent` is not provided, all top-level namespaces should be listed. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param parent: An optional namespace, underneath which to list namespaces. 
If not provided or empty, all top-level namespaces should be listed. If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte. + :type parent: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._list_namespaces_serialize( + prefix=prefix, + page_token=page_token, + page_size=page_size, + parent=parent, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListNamespacesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_namespaces_serialize( + self, + prefix, + page_token, + page_size, + parent, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + # process the query parameters + if page_token is not None: + + _query_params.append(('pageToken', page_token)) + + if page_size is not None: + + _query_params.append(('pageSize', page_size)) + + if parent is not None: + + _query_params.append(('parent', parent)) + + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + 
method='GET', + resource_path='/v1/{prefix}/namespaces', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_tables( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ListTablesResponse: + """List all table identifiers underneath a given namespace + + Return all table identifiers under this namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._list_tables_serialize( + prefix=prefix, + namespace=namespace, + page_token=page_token, + page_size=page_size, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListTablesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_tables_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. 
For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[ListTablesResponse]: + """List all table identifiers underneath a given namespace + + Return all table identifiers under this namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_tables_serialize( + prefix=prefix, + namespace=namespace, + page_token=page_token, + page_size=page_size, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListTablesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_tables_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. 
For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """List all table identifiers underneath a given namespace + + Return all table identifiers under this namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_tables_serialize( + prefix=prefix, + namespace=namespace, + page_token=page_token, + page_size=page_size, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListTablesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_tables_serialize( + self, + prefix, + namespace, + page_token, + page_size, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + if page_token is not None: + + _query_params.append(('pageToken', page_token)) + + if page_size is not None: + + _query_params.append(('pageSize', page_size)) + + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in 
_header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_views( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ListTablesResponse: + """List all view identifiers underneath a given namespace + + Return all view identifiers under this namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._list_views_serialize( + prefix=prefix, + namespace=namespace, + page_token=page_token, + page_size=page_size, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListTablesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_views_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. 
For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[ListTablesResponse]: + """List all view identifiers underneath a given namespace + + Return all view identifiers under this namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_views_serialize( + prefix=prefix, + namespace=namespace, + page_token=page_token, + page_size=page_size, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListTablesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_views_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + page_token: Optional[StrictStr] = None, + page_size: Annotated[Optional[Annotated[int, Field(strict=True, ge=1)]], Field(description="For servers that support pagination, this signals an upper bound of the number of results that a client will receive. 
For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """List all view identifiers underneath a given namespace + + Return all view identifiers under this namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param page_token: + :type page_token: str + :param page_size: For servers that support pagination, this signals an upper bound of the number of results that a client will receive. For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. + :type page_size: int + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_views_serialize( + prefix=prefix, + namespace=namespace, + page_token=page_token, + page_size=page_size, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "ListTablesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_views_serialize( + self, + prefix, + namespace, + page_token, + page_size, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + if page_token is not None: + + _query_params.append(('pageToken', page_token)) + + if page_size is not None: + + _query_params.append(('pageSize', page_size)) + + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + 
_header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/v1/{prefix}/namespaces/{namespace}/views', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def load_namespace_metadata( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> GetNamespaceResponse: + """Load the metadata properties for a namespace + + Return all stored metadata properties for a given namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._load_namespace_metadata_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "GetNamespaceResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def load_namespace_metadata_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string.
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[GetNamespaceResponse]: + """Load the metadata properties for a namespace + + Return all stored metadata properties for a given namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_namespace_metadata_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "GetNamespaceResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def load_namespace_metadata_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Load the metadata properties for a namespace + + Return all stored metadata properties for a given namespace + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. 
If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._load_namespace_metadata_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "GetNamespaceResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _load_namespace_metadata_serialize( + self, + prefix, + namespace, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str,
Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/v1/{prefix}/namespaces/{namespace}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def load_table( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + x_iceberg_access_delegation: Annotated[Optional[StrictStr], Field(description="Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. 
")] = None, + snapshots: Annotated[Optional[StrictStr], Field(description="The snapshots to return in the body of the metadata. Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> LoadTableResult: + """Load a table from the catalog + + Load a table from the catalog. The response contains both configuration and table metadata. The configuration, if non-empty is used as additional configuration for the table that overrides catalog configuration. For example, this configuration may change the FileIO implementation to be used for the table. The response also contains the table's full metadata, matching the table metadata JSON file. The catalog configuration may contain credentials that should be used for subsequent requests for the table. The configuration key \"token\" is used to pass an access token to be used as a bearer token for table requests. Otherwise, a token may be passed using a RFC 8693 token type as a configuration key. For example, \"urn:ietf:params:oauth:token-type:jwt=\". + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param x_iceberg_access_delegation: Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. + :type x_iceberg_access_delegation: str + :param snapshots: The snapshots to return in the body of the metadata. Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`. + :type snapshots: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + x_iceberg_access_delegation=x_iceberg_access_delegation, + snapshots=snapshots, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def load_table_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + x_iceberg_access_delegation: Annotated[Optional[StrictStr], Field(description="Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. ")] = None, + snapshots: Annotated[Optional[StrictStr], Field(description="The snapshots to return in the body of the metadata. 
Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[LoadTableResult]: + """Load a table from the catalog + + Load a table from the catalog. The response contains both configuration and table metadata. The configuration, if non-empty is used as additional configuration for the table that overrides catalog configuration. For example, this configuration may change the FileIO implementation to be used for the table. The response also contains the table's full metadata, matching the table metadata JSON file. The catalog configuration may contain credentials that should be used for subsequent requests for the table. The configuration key \"token\" is used to pass an access token to be used as a bearer token for table requests. Otherwise, a token may be passed using a RFC 8693 token type as a configuration key. For example, \"urn:ietf:params:oauth:token-type:jwt=\". + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param x_iceberg_access_delegation: Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. 
The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. + :type x_iceberg_access_delegation: str + :param snapshots: The snapshots to return in the body of the metadata. Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`. + :type snapshots: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + x_iceberg_access_delegation=x_iceberg_access_delegation, + snapshots=snapshots, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def load_table_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + x_iceberg_access_delegation: Annotated[Optional[StrictStr], Field(description="Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. ")] = None, + snapshots: Annotated[Optional[StrictStr], Field(description="The snapshots to return in the body of the metadata. 
Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`.")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Load a table from the catalog + + Load a table from the catalog. The response contains both configuration and table metadata. The configuration, if non-empty is used as additional configuration for the table that overrides catalog configuration. For example, this configuration may change the FileIO implementation to be used for the table. The response also contains the table's full metadata, matching the table metadata JSON file. The catalog configuration may contain credentials that should be used for subsequent requests for the table. The configuration key \"token\" is used to pass an access token to be used as a bearer token for table requests. Otherwise, a token may be passed using a RFC 8693 token type as a configuration key. For example, \"urn:ietf:params:oauth:token-type:jwt=\". + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param x_iceberg_access_delegation: Optional signal to the server that the client supports delegated access via a comma-separated list of access mechanisms. 
The server may choose to supply access via any or none of the requested mechanisms. Specific properties and handling for `vended-credentials` is documented in the `LoadTableResult` schema section of this spec document. The protocol and specification for `remote-signing` is documented in the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. + :type x_iceberg_access_delegation: str + :param snapshots: The snapshots to return in the body of the metadata. Setting the value to `all` would return the full set of snapshots currently valid for the table. Setting the value to `refs` would load all snapshots referenced by branches or tags. Default if no param is provided is `all`. + :type snapshots: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + x_iceberg_access_delegation=x_iceberg_access_delegation, + snapshots=snapshots, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _load_table_serialize( + self, + prefix, + namespace, + table, + x_iceberg_access_delegation, + snapshots, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if table is not None: + _path_params['table'] = table + # process the query parameters + if snapshots is not None: + + _query_params.append(('snapshots', snapshots)) + + # process the header parameters + if x_iceberg_access_delegation is not None: + _header_params['X-Iceberg-Access-Delegation'] = x_iceberg_access_delegation + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 
'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables/{table}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def load_view( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> LoadViewResult: + """Load a view from the catalog + + Load a view from the catalog. The response contains both configuration and view metadata. The configuration, if non-empty is used as additional configuration for the view that overrides catalog configuration. The response also contains the view's full metadata, matching the view metadata JSON file. The catalog configuration may contain credentials that should be used for subsequent requests for the view. The configuration key \"token\" is used to pass an access token to be used as a bearer token for view requests. Otherwise, a token may be passed using a RFC 8693 token type as a configuration key. 
For example, \"urn:ietf:params:oauth:token-type:jwt=\". + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def load_view_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[LoadViewResult]: + """Load a view from the catalog + + Load a view from the catalog. The response contains both configuration and view metadata. The configuration, if non-empty is used as additional configuration for the view that overrides catalog configuration. The response also contains the view's full metadata, matching the view metadata JSON file. 
The catalog configuration may contain credentials that should be used for subsequent requests for the view. The configuration key \"token\" is used to pass an access token to be used as a bearer token for view requests. Otherwise, a token may be passed using a RFC 8693 token type as a configuration key. For example, \"urn:ietf:params:oauth:token-type:jwt=\". + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def load_view_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Load a view from the catalog + + Load a view from the catalog. The response contains both configuration and view metadata. The configuration, if non-empty is used as additional configuration for the view that overrides catalog configuration. The response also contains the view's full metadata, matching the view metadata JSON file. 
The catalog configuration may contain credentials that should be used for subsequent requests for the view. The configuration key \"token\" is used to pass an access token to be used as a bearer token for view requests. Otherwise, a token may be passed using a RFC 8693 token type as a configuration key. For example, \"urn:ietf:params:oauth:token-type:jwt=\". + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._load_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _load_view_serialize( + self, + prefix, + namespace, + view, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if view is not None: + _path_params['view'] = view + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/v1/{prefix}/namespaces/{namespace}/views/{view}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, 
+ post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def namespace_exists( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Check if a namespace exists + + Check if a namespace exists. The response does not contain a body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._namespace_exists_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def namespace_exists_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Check if a namespace exists + + Check if a namespace exists. The response does not contain a body. 
+ + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._namespace_exists_serialize( + prefix=prefix, + namespace=namespace, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def namespace_exists_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Check if a namespace exists + + Check if a namespace exists. The response does not contain a body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param _request_timeout: timeout setting for this request. 
If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._namespace_exists_serialize(
+ prefix=prefix,
+ namespace=namespace,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '400': "IcebergErrorResponse",
+ '401': "IcebergErrorResponse",
+ '403': "IcebergErrorResponse",
+ '404': "IcebergErrorResponse",
+ '419': "IcebergErrorResponse",
+ '503': "IcebergErrorResponse",
+ '5XX': "IcebergErrorResponse",
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ return response_data.response
+
+
+ def _namespace_exists_serialize(
+ self,
+ prefix,
+ namespace,
+ _request_auth,
+ _content_type,
+ _headers,
+ _host_index,
+ ) -> RequestSerialized:
+
+ _host = None
+
+ _collection_formats: Dict[str, str] = {
+ }
+
+ _path_params: Dict[str, str] = {}
+ _query_params: List[Tuple[str, str]] = []
+ _header_params: Dict[str, Optional[str]] = _headers or {}
+ _form_params: List[Tuple[str, str]] = []
+ _files: Dict[str, Union[str, bytes]] = {}
+
_body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='HEAD', + resource_path='/v1/{prefix}/namespaces/{namespace}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def register_table( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + register_table_request: RegisterTableRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> LoadTableResult: + """Register a table in the given namespace using given metadata file location + + Register a table using given metadata file location. 
+
+ :param prefix: An optional prefix in the path (required)
+ :type prefix: str
+ :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required)
+ :type namespace: str
+ :param register_table_request: (required)
+ :type register_table_request: RegisterTableRequest
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._register_table_serialize( + prefix=prefix, + namespace=namespace, + register_table_request=register_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def register_table_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + register_table_request: RegisterTableRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[LoadTableResult]: + """Register a table in the given namespace using given metadata file location + + Register a table using given metadata file location. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required)
+ :type namespace: str
+ :param register_table_request: (required)
+ :type register_table_request: RegisterTableRequest
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._register_table_serialize( + prefix=prefix, + namespace=namespace, + register_table_request=register_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def register_table_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + register_table_request: RegisterTableRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Register a table in the given namespace using given metadata file location + + Register a table using given metadata file location. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required)
+ :type namespace: str
+ :param register_table_request: (required)
+ :type register_table_request: RegisterTableRequest
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._register_table_serialize( + prefix=prefix, + namespace=namespace, + register_table_request=register_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadTableResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _register_table_serialize( + self, + prefix, + namespace, + register_table_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if register_table_request is not None: + _body_params = register_table_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ 
+ 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/register', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def rename_table( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + rename_table_request: Annotated[RenameTableRequest, Field(description="Current table identifier to rename and new table identifier to rename to")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Rename a table from its current name to a new name + + Rename a table from one identifier to another. It's valid to move a table across namespaces, but the server implementation is not required to support it. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param rename_table_request: Current table identifier to rename and new table identifier to rename to (required) + :type rename_table_request: RenameTableRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._rename_table_serialize(
+ prefix=prefix,
+ rename_table_request=rename_table_request,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '400': "IcebergErrorResponse",
+ '401': "IcebergErrorResponse",
+ '403': "IcebergErrorResponse",
+ '404': "IcebergErrorResponse",
+ '406': "ErrorModel",
+ '409': "IcebergErrorResponse",
+ '419': "IcebergErrorResponse",
+ '503': "IcebergErrorResponse",
+ '5XX': "IcebergErrorResponse",
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ ).data
+
+
+ @validate_call
+ def rename_table_with_http_info(
+ self,
+ prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")],
+ rename_table_request: Annotated[RenameTableRequest, Field(description="Current table identifier to rename and new table identifier to rename to")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+
Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> ApiResponse[None]:
+ """Rename a table from its current name to a new name
+
+ Rename a table from one identifier to another. It's valid to move a table across namespaces, but the server implementation is not required to support it.
+
+ :param prefix: An optional prefix in the path (required)
+ :type prefix: str
+ :param rename_table_request: Current table identifier to rename and new table identifier to rename to (required)
+ :type rename_table_request: RenameTableRequest
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._rename_table_serialize( + prefix=prefix, + rename_table_request=rename_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '406': "ErrorModel", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def rename_table_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + rename_table_request: Annotated[RenameTableRequest, Field(description="Current table identifier to rename and new table identifier to rename to")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Rename a table from its current name to a new name + + Rename a table from one identifier to another. It's valid to move a table across namespaces, but the server implementation is not required to support it. 
+
+ :param prefix: An optional prefix in the path (required)
+ :type prefix: str
+ :param rename_table_request: Current table identifier to rename and new table identifier to rename to (required)
+ :type rename_table_request: RenameTableRequest
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._rename_table_serialize( + prefix=prefix, + rename_table_request=rename_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '406': "ErrorModel", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _rename_table_serialize( + self, + prefix, + rename_table_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if rename_table_request is not None: + _body_params = rename_table_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = 
_default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/tables/rename', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def rename_view( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + rename_table_request: Annotated[RenameTableRequest, Field(description="Current view identifier to rename and new view identifier to rename to")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Rename a view from its current name to a new name + + Rename a view from one identifier to another. It's valid to move a view across namespaces, but the server implementation is not required to support it. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param rename_table_request: Current view identifier to rename and new view identifier to rename to (required) + :type rename_table_request: RenameTableRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._rename_view_serialize(
+ prefix=prefix,
+ rename_table_request=rename_table_request,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '400': "IcebergErrorResponse",
+ '401': "IcebergErrorResponse",
+ '403': "IcebergErrorResponse",
+ '404': "ErrorModel",
+ '406': "ErrorModel",
+ '409': "ErrorModel",
+ '419': "IcebergErrorResponse",
+ '503': "IcebergErrorResponse",
+ '5XX': "IcebergErrorResponse",
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ ).data
+
+
+ @validate_call
+ def rename_view_with_http_info(
+ self,
+ prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")],
+ rename_table_request: Annotated[RenameTableRequest, Field(description="Current view identifier to rename and new view identifier to rename to")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat,
Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> ApiResponse[None]:
+ """Rename a view from its current name to a new name
+
+ Rename a view from one identifier to another. It's valid to move a view across namespaces, but the server implementation is not required to support it.
+
+ :param prefix: An optional prefix in the path (required)
+ :type prefix: str
+ :param rename_table_request: Current view identifier to rename and new view identifier to rename to (required)
+ :type rename_table_request: RenameTableRequest
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._rename_view_serialize( + prefix=prefix, + rename_table_request=rename_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '406': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def rename_view_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + rename_table_request: Annotated[RenameTableRequest, Field(description="Current view identifier to rename and new view identifier to rename to")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Rename a view from its current name to a new name + + Rename a view from one identifier to another. It's valid to move a view across namespaces, but the server implementation is not required to support it. 
+ + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param rename_table_request: Current view identifier to rename and new view identifier to rename to (required) + :type rename_table_request: RenameTableRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._rename_view_serialize( + prefix=prefix, + rename_table_request=rename_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '406': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _rename_view_serialize( + self, + prefix, + rename_table_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if rename_table_request is not None: + _body_params = rename_table_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type 
+ + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/views/rename', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def replace_view( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + commit_view_request: CommitViewRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> LoadViewResult: + """Replace a view + + Commit updates to a view. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param commit_view_request: (required) + :type commit_view_request: CommitViewRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._replace_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + commit_view_request=commit_view_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '500': "ErrorModel", + '503': "IcebergErrorResponse", + '502': "ErrorModel", + '504': "ErrorModel", + '5XX': "ErrorModel", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def replace_view_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + commit_view_request: CommitViewRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[LoadViewResult]: + """Replace a view + + Commit updates to a view. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param commit_view_request: (required) + :type commit_view_request: CommitViewRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. 
+ :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._replace_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + commit_view_request=commit_view_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '500': "ErrorModel", + '503': "IcebergErrorResponse", + '502': "ErrorModel", + '504': "ErrorModel", + '5XX': "ErrorModel", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def replace_view_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + commit_view_request: CommitViewRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Replace a view + + Commit updates to a view. 
+ + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param commit_view_request: (required) + :type commit_view_request: CommitViewRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._replace_view_serialize( + prefix=prefix, + namespace=namespace, + view=view, + commit_view_request=commit_view_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "LoadViewResult", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "ErrorModel", + '409': "ErrorModel", + '419': "IcebergErrorResponse", + '500': "ErrorModel", + '503': "IcebergErrorResponse", + '502': "ErrorModel", + '504': "ErrorModel", + '5XX': "ErrorModel", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _replace_view_serialize( + self, + prefix, + namespace, + view, + commit_view_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if view is not None: + _path_params['view'] = view + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if commit_view_request is not None: + _body_params = commit_view_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = 
_content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/views/{view}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def report_metrics( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + report_metrics_request: Annotated[ReportMetricsRequest, Field(description="The request containing the metrics report to be sent")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Send a metrics report to this endpoint to be processed by the backend + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param report_metrics_request: The request containing the metrics report to be sent (required) + :type report_metrics_request: ReportMetricsRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._report_metrics_serialize( + prefix=prefix, + namespace=namespace, + table=table, + report_metrics_request=report_metrics_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def report_metrics_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + report_metrics_request: Annotated[ReportMetricsRequest, Field(description="The request containing the metrics report to be sent")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Send a metrics report to this endpoint to be processed by the backend + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param report_metrics_request: The request containing the metrics report to be sent (required) + :type report_metrics_request: ReportMetricsRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._report_metrics_serialize( + prefix=prefix, + namespace=namespace, + table=table, + report_metrics_request=report_metrics_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def report_metrics_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + report_metrics_request: Annotated[ReportMetricsRequest, Field(description="The request containing the metrics report to be sent")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Send a metrics report to this endpoint to be processed by the backend + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param report_metrics_request: The request containing the metrics report to be sent (required) + :type report_metrics_request: ReportMetricsRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._report_metrics_serialize( + prefix=prefix, + namespace=namespace, + table=table, + report_metrics_request=report_metrics_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _report_metrics_serialize( + self, + prefix, + namespace, + table, + report_metrics_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if table is not None: + _path_params['table'] = table + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if report_metrics_request is not None: + _body_params = report_metrics_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + 
_header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables/{table}/metrics', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def send_notification( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + notification_request: Annotated[NotificationRequest, Field(description="The request containing the notification to be sent")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Sends a notification to the table + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param notification_request: The request containing the notification to be sent (required) + :type notification_request: NotificationRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._send_notification_serialize( + prefix=prefix, + namespace=namespace, + table=table, + notification_request=notification_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def send_notification_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + notification_request: Annotated[NotificationRequest, Field(description="The request containing the notification to be sent")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Sends a notification to the table + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param notification_request: The request containing the notification to be sent (required) + :type notification_request: NotificationRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._send_notification_serialize( + prefix=prefix, + namespace=namespace, + table=table, + notification_request=notification_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def send_notification_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + notification_request: Annotated[NotificationRequest, Field(description="The request containing the notification to be sent")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Sends a notification to the table + + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param notification_request: The request containing the notification to be sent (required) + :type notification_request: NotificationRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._send_notification_serialize( + prefix=prefix, + namespace=namespace, + table=table, + notification_request=notification_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _send_notification_serialize( + self, + prefix, + namespace, + table, + notification_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if table is not None: + _path_params['table'] = table + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if notification_request is not None: + _body_params = notification_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + 
_header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables/{table}/notifications', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def table_exists( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Check if a table exists + + Check if a table exists within a given namespace. The response does not contain a body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._table_exists_serialize( + prefix=prefix, + namespace=namespace, + table=table, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def table_exists_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Check if a table exists + + Check if a table exists within a given namespace. The response does not contain a body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._table_exists_serialize( + prefix=prefix, + namespace=namespace, + table=table, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def table_exists_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Check if a table exists + + Check if a table exists within a given namespace. The response does not contain a body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._table_exists_serialize( + prefix=prefix, + namespace=namespace, + table=table, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _table_exists_serialize( + self, + prefix, + namespace, + table, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if table is not None: + _path_params['table'] = table + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='HEAD', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables/{table}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + 
body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def update_properties( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + update_namespace_properties_request: UpdateNamespacePropertiesRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> UpdateNamespacePropertiesResponse: + """Set or remove properties on a namespace + + Set and/or remove properties on a namespace. The request body specifies a list of properties to remove and a map of key value pairs to update. Properties that are not in the request are not modified or removed by this call. Server implementations are not required to support namespace properties. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param update_namespace_properties_request: (required) + :type update_namespace_properties_request: UpdateNamespacePropertiesRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_properties_serialize( + prefix=prefix, + namespace=namespace, + update_namespace_properties_request=update_namespace_properties_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "UpdateNamespacePropertiesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '406': "ErrorModel", + '422': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def update_properties_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string.
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + update_namespace_properties_request: UpdateNamespacePropertiesRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[UpdateNamespacePropertiesResponse]: + """Set or remove properties on a namespace + + Set and/or remove properties on a namespace. The request body specifies a list of properties to remove and a map of key value pairs to update. Properties that are not in the request are not modified or removed by this call. Server implementations are not required to support namespace properties. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param update_namespace_properties_request: (required) + :type update_namespace_properties_request: UpdateNamespacePropertiesRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request.
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_properties_serialize( + prefix=prefix, + namespace=namespace, + update_namespace_properties_request=update_namespace_properties_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "UpdateNamespacePropertiesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '406': "ErrorModel", + '422': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def update_properties_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string.
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + update_namespace_properties_request: UpdateNamespacePropertiesRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Set or remove properties on a namespace + + Set and/or remove properties on a namespace. The request body specifies a list of properties to remove and a map of key value pairs to update. Properties that are not in the request are not modified or removed by this call. Server implementations are not required to support namespace properties. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param update_namespace_properties_request: (required) + :type update_namespace_properties_request: UpdateNamespacePropertiesRequest + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request.
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_properties_serialize( + prefix=prefix, + namespace=namespace, + update_namespace_properties_request=update_namespace_properties_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "UpdateNamespacePropertiesResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '406': "ErrorModel", + '422': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _update_properties_serialize( + self, + prefix, + namespace, + update_namespace_properties_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + # process the query parameters + # process the header
parameters + # process the form parameters + # process the body parameter + if update_namespace_properties_request is not None: + _body_params = update_namespace_properties_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/properties', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def update_table( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. 
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + commit_table_request: CommitTableRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> CommitTableResponse: + """Commit updates to a table + + Commit updates to a table. Commits have two parts, requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. Create table transactions that are started by createTable with `stage-create` set to true are committed using this route. Transactions should include all changes to the table, including table initialization, like AddSchemaUpdate and SetCurrentSchemaUpdate. The `assert-create` requirement is used to ensure that the table was not created concurrently. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param commit_table_request: (required) + :type commit_table_request: CommitTableRequest + :param _request_timeout: timeout setting for this request. 
If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + commit_table_request=commit_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CommitTableResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '500': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '502': "IcebergErrorResponse", + '504': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def update_table_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in
the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + commit_table_request: CommitTableRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[CommitTableResponse]: + """Commit updates to a table + + Commit updates to a table. Commits have two parts, requirements and updates. Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. Create table transactions that are started by createTable with `stage-create` set to true are committed using this route. Transactions should include all changes to the table, including table initialization, like AddSchemaUpdate and SetCurrentSchemaUpdate. The `assert-create` requirement is used to ensure that the table was not created concurrently. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param commit_table_request: (required) + :type commit_table_request: CommitTableRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._update_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + commit_table_request=commit_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CommitTableResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '500': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '502': "IcebergErrorResponse", + '504': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def update_table_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + table: Annotated[StrictStr, Field(description="A table name")], + commit_table_request: CommitTableRequest, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Commit updates to a table + + Commit updates to a table. Commits have two parts, requirements and updates. 
Requirements are assertions that will be validated before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. Updates are changes to make to table metadata. For example, after asserting that the current main ref is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new snapshot id. Create table transactions that are started by createTable with `stage-create` set to true are committed using this route. Transactions should include all changes to the table, including table initialization, like AddSchemaUpdate and SetCurrentSchemaUpdate. The `assert-create` requirement is used to ensure that the table was not created concurrently. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param table: A table name (required) + :type table: str + :param commit_table_request: (required) + :type commit_table_request: CommitTableRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_table_serialize( + prefix=prefix, + namespace=namespace, + table=table, + commit_table_request=commit_table_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CommitTableResponse", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '404': "IcebergErrorResponse", + '409': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '500': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '502': "IcebergErrorResponse", + '504': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _update_table_serialize( + self, + prefix, + namespace, + table, + commit_table_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if table is not None: + _path_params['table'] = table + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if 
commit_table_request is not None: + _body_params = commit_table_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/{prefix}/namespaces/{namespace}/tables/{table}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def view_exists( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """Check if a view exists + + Check if a view exists within a given namespace. This request does not return a response body. 
+ + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._view_exists_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': None, + '401': None, + '404': None, + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def view_exists_with_http_info( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """Check if a view exists + + Check if a view exists within a given namespace. This request does not return a response body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
(required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._view_exists_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': None, + '401': None, + '404': None, + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def view_exists_without_preload_content( + self, + prefix: Annotated[StrictStr, Field(description="An optional prefix in the path")], + namespace: Annotated[StrictStr, Field(description="A namespace identifier as a single string.
Multipart namespace parts should be separated by the unit separator (`0x1F`) byte.")], + view: Annotated[StrictStr, Field(description="A view name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Check if a view exists + + Check if a view exists within a given namespace. This request does not return a response body. + + :param prefix: An optional prefix in the path (required) + :type prefix: str + :param namespace: A namespace identifier as a single string. Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. (required) + :type namespace: str + :param view: A view name (required) + :type view: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._view_exists_serialize( + prefix=prefix, + namespace=namespace, + view=view, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '400': None, + '401': None, + '404': None, + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _view_exists_serialize( + self, + prefix, + namespace, + view, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if prefix is not None: + _path_params['prefix'] = prefix + if namespace is not None: + _path_params['namespace'] = namespace + if view is not None: + _path_params['view'] = view + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='HEAD', + resource_path='/v1/{prefix}/namespaces/{namespace}/views/{view}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + 
collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + diff --git a/regtests/client/python/polaris/catalog/api/iceberg_configuration_api.py b/regtests/client/python/polaris/catalog/api/iceberg_configuration_api.py new file mode 100644 index 0000000000..b504d6af5d --- /dev/null +++ b/regtests/client/python/polaris/catalog/api/iceberg_configuration_api.py @@ -0,0 +1,334 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + +import warnings +from pydantic import validate_call, Field, StrictFloat, StrictStr, StrictInt +from typing import Any, Dict, List, Optional, Tuple, Union +from typing_extensions import Annotated + +from pydantic import Field, StrictStr +from typing import Optional +from typing_extensions import Annotated +from polaris.catalog.models.catalog_config import CatalogConfig + +from polaris.catalog.api_client import ApiClient, RequestSerialized +from polaris.catalog.api_response import ApiResponse +from polaris.catalog.rest import RESTResponseType + + +class IcebergConfigurationAPI: + """NOTE: This class is auto generated by OpenAPI Generator + Ref: https://openapi-generator.tech + + Do not edit the class manually. + """ + + def __init__(self, api_client=None) -> None: + if api_client is None: + api_client = ApiClient.get_default() + self.api_client = api_client + + + @validate_call + def get_config( + self, + warehouse: Annotated[Optional[StrictStr], Field(description="Warehouse location or identifier to request from the service")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> CatalogConfig: + """List all catalog configuration settings + + All REST clients should first call this route to get catalog configuration properties from the server to configure the catalog and its HTTP client. Configuration from the server consists of two sets of key/value pairs. 
- defaults - properties that should be used as default configuration; applied before client configuration - overrides - properties that should be used to override client configuration; applied after defaults and client configuration Catalog configuration is constructed by setting the defaults, then client-provided configuration, and finally overrides. The final property set is then used to configure the catalog. For example, a default configuration property might set the size of the client pool, which can be replaced with a client-specific setting. An override might be used to set the warehouse location, which is stored on the server rather than in client configuration. Common catalog configuration settings are documented at https://iceberg.apache.org/docs/latest/configuration/#catalog-properties + + :param warehouse: Warehouse location or identifier to request from the service + :type warehouse: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._get_config_serialize( + warehouse=warehouse, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogConfig", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def get_config_with_http_info( + self, + warehouse: Annotated[Optional[StrictStr], Field(description="Warehouse location or identifier to request from the service")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[CatalogConfig]: + """List all catalog configuration settings + + All REST clients should first call this route to get catalog configuration properties from the server to configure the catalog and its HTTP client. Configuration from the server consists of two sets of key/value pairs. - defaults - properties that should be used as default configuration; applied before client configuration - overrides - properties that should be used to override client configuration; applied after defaults and client configuration Catalog configuration is constructed by setting the defaults, then client- provided configuration, and finally overrides. 
The final property set is then used to configure the catalog. For example, a default configuration property might set the size of the client pool, which can be replaced with a client-specific setting. An override might be used to set the warehouse location, which is stored on the server rather than in client configuration. Common catalog configuration settings are documented at https://iceberg.apache.org/docs/latest/configuration/#catalog-properties + + :param warehouse: Warehouse location or identifier to request from the service + :type warehouse: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._get_config_serialize( + warehouse=warehouse, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogConfig", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def get_config_without_preload_content( + self, + warehouse: Annotated[Optional[StrictStr], Field(description="Warehouse location or identifier to request from the service")] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """List all catalog configuration settings + + All REST clients should first call this route to get catalog configuration properties from the server to configure the catalog and its HTTP client. Configuration from the server consists of two sets of key/value pairs. - defaults - properties that should be used as default configuration; applied before client configuration - overrides - properties that should be used to override client configuration; applied after defaults and client configuration Catalog configuration is constructed by setting the defaults, then client- provided configuration, and finally overrides. 
The final property set is then used to configure the catalog. For example, a default configuration property might set the size of the client pool, which can be replaced with a client-specific setting. An override might be used to set the warehouse location, which is stored on the server rather than in client configuration. Common catalog configuration settings are documented at https://iceberg.apache.org/docs/latest/configuration/#catalog-properties + + :param warehouse: Warehouse location or identifier to request from the service + :type warehouse: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._get_config_serialize( + warehouse=warehouse, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogConfig", + '400': "IcebergErrorResponse", + '401': "IcebergErrorResponse", + '403': "IcebergErrorResponse", + '419': "IcebergErrorResponse", + '503': "IcebergErrorResponse", + '5XX': "IcebergErrorResponse", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _get_config_serialize( + self, + warehouse, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + if warehouse is not None: + + _query_params.append(('warehouse', warehouse)) + + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2', + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/v1/config', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + diff --git 
a/regtests/client/python/polaris/catalog/api/iceberg_o_auth2_api.py b/regtests/client/python/polaris/catalog/api/iceberg_o_auth2_api.py new file mode 100644 index 0000000000..2d5adaadeb --- /dev/null +++ b/regtests/client/python/polaris/catalog/api/iceberg_o_auth2_api.py @@ -0,0 +1,456 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + +import warnings +from pydantic import validate_call, Field, StrictFloat, StrictStr, StrictInt +from typing import Any, Dict, List, Optional, Tuple, Union +from typing_extensions import Annotated + +from pydantic import Field, StrictStr, field_validator +from typing import Optional +from typing_extensions import Annotated +from polaris.catalog.models.o_auth_token_response import OAuthTokenResponse +from polaris.catalog.models.token_type import TokenType + +from polaris.catalog.api_client import ApiClient, RequestSerialized +from polaris.catalog.api_response import ApiResponse +from polaris.catalog.rest import RESTResponseType + + +class IcebergOAuth2API: + """NOTE: This class is auto generated by OpenAPI Generator + Ref: https://openapi-generator.tech + + Do not edit the class manually. + """ + + def __init__(self, api_client=None) -> None: + if api_client is None: + api_client = ApiClient.get_default() + self.api_client = api_client + + + @validate_call + def get_token( + self, + grant_type: Optional[StrictStr] = None, + scope: Optional[StrictStr] = None, + client_id: Annotated[Optional[StrictStr], Field(description="Client ID This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.")] = None, + client_secret: Annotated[Optional[StrictStr], Field(description="Client secret This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.")] = None, + requested_token_type: Optional[TokenType] = None, + subject_token: Annotated[Optional[StrictStr], Field(description="Subject token for token exchange request")] = None, + subject_token_type: Optional[TokenType] = None, + actor_token: Annotated[Optional[StrictStr], Field(description="Actor token for token exchange request")] = None, + actor_token_type: Optional[TokenType] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + 
Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> OAuthTokenResponse: + """Get a token using an OAuth2 flow + + Exchange credentials for a token using the OAuth2 client credentials flow or token exchange. This endpoint is used for three purposes - 1. To exchange client credentials (client ID and secret) for an access token. This uses the client credentials flow. 2. To exchange a client token and an identity token for a more specific access token. This uses the token exchange flow. 3. To exchange an access token for one with the same claims and a refreshed expiration period. This uses the token exchange flow. For example, a catalog client may be configured with client credentials from the OAuth2 Authorization flow. This client would exchange its client ID and secret for an access token using the client credentials request with this endpoint (1). Subsequent requests would then use that access token. Some clients may also handle sessions that have additional user context. These clients would use the token exchange flow to exchange a user token (the \"subject\" token) from the session for a more specific access token for that user, using the catalog's access token as the \"actor\" token (2). The user ID token is the \"subject\" token and can be any token type allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. This request should use the catalog's bearer token in the \"Authorization\" header. Clients may also use the token exchange flow to refresh a token that is about to expire by sending a token exchange request (3). The request's \"subject\" token should be the expiring token. This request should use the subject token in the \"Authorization\" header. 
+ + :param grant_type: + :type grant_type: str + :param scope: + :type scope: str + :param client_id: Client ID. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. + :type client_id: str + :param client_secret: Client secret. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. + :type client_secret: str + :param requested_token_type: + :type requested_token_type: TokenType + :param subject_token: Subject token for token exchange request + :type subject_token: str + :param subject_token_type: + :type subject_token_type: TokenType + :param actor_token: Actor token for token exchange request + :type actor_token: str + :param actor_token_type: + :type actor_token_type: TokenType + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._get_token_serialize( + grant_type=grant_type, + scope=scope, + client_id=client_id, + client_secret=client_secret, + requested_token_type=requested_token_type, + subject_token=subject_token, + subject_token_type=subject_token_type, + actor_token=actor_token, + actor_token_type=actor_token_type, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "OAuthTokenResponse", + '400': "OAuthError", + '401': "OAuthError", + '5XX': "OAuthError", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def get_token_with_http_info( + self, + grant_type: Optional[StrictStr] = None, + scope: Optional[StrictStr] = None, + client_id: Annotated[Optional[StrictStr], Field(description="Client ID This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.")] = None, + client_secret: Annotated[Optional[StrictStr], Field(description="Client secret This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.")] = None, + requested_token_type: Optional[TokenType] = None, + subject_token: Annotated[Optional[StrictStr], Field(description="Subject token for token exchange request")] = None, + subject_token_type: Optional[TokenType] = None, + actor_token: Annotated[Optional[StrictStr], Field(description="Actor token for token exchange request")] = None, + actor_token_type: Optional[TokenType] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + 
_content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[OAuthTokenResponse]: + """Get a token using an OAuth2 flow + + Exchange credentials for a token using the OAuth2 client credentials flow or token exchange. This endpoint is used for three purposes - 1. To exchange client credentials (client ID and secret) for an access token. This uses the client credentials flow. 2. To exchange a client token and an identity token for a more specific access token. This uses the token exchange flow. 3. To exchange an access token for one with the same claims and a refreshed expiration period. This uses the token exchange flow. For example, a catalog client may be configured with client credentials from the OAuth2 Authorization flow. This client would exchange its client ID and secret for an access token using the client credentials request with this endpoint (1). Subsequent requests would then use that access token. Some clients may also handle sessions that have additional user context. These clients would use the token exchange flow to exchange a user token (the \"subject\" token) from the session for a more specific access token for that user, using the catalog's access token as the \"actor\" token (2). The user ID token is the \"subject\" token and can be any token type allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. This request should use the catalog's bearer token in the \"Authorization\" header. Clients may also use the token exchange flow to refresh a token that is about to expire by sending a token exchange request (3). The request's \"subject\" token should be the expiring token. This request should use the subject token in the \"Authorization\" header. 
+ + :param grant_type: + :type grant_type: str + :param scope: + :type scope: str + :param client_id: Client ID. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. + :type client_id: str + :param client_secret: Client secret. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. + :type client_secret: str + :param requested_token_type: + :type requested_token_type: TokenType + :param subject_token: Subject token for token exchange request + :type subject_token: str + :param subject_token_type: + :type subject_token_type: TokenType + :param actor_token: Actor token for token exchange request + :type actor_token: str + :param actor_token_type: + :type actor_token_type: TokenType + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._get_token_serialize( + grant_type=grant_type, + scope=scope, + client_id=client_id, + client_secret=client_secret, + requested_token_type=requested_token_type, + subject_token=subject_token, + subject_token_type=subject_token_type, + actor_token=actor_token, + actor_token_type=actor_token_type, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "OAuthTokenResponse", + '400': "OAuthError", + '401': "OAuthError", + '5XX': "OAuthError", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def get_token_without_preload_content( + self, + grant_type: Optional[StrictStr] = None, + scope: Optional[StrictStr] = None, + client_id: Annotated[Optional[StrictStr], Field(description="Client ID This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.")] = None, + client_secret: Annotated[Optional[StrictStr], Field(description="Client secret This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header.")] = None, + requested_token_type: Optional[TokenType] = None, + subject_token: Annotated[Optional[StrictStr], Field(description="Subject token for token exchange request")] = None, + subject_token_type: Optional[TokenType] = None, + actor_token: Annotated[Optional[StrictStr], Field(description="Actor token for token exchange request")] = None, + actor_token_type: Optional[TokenType] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + 
_content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """Get a token using an OAuth2 flow + + Exchange credentials for a token using the OAuth2 client credentials flow or token exchange. This endpoint is used for three purposes - 1. To exchange client credentials (client ID and secret) for an access token. This uses the client credentials flow. 2. To exchange a client token and an identity token for a more specific access token. This uses the token exchange flow. 3. To exchange an access token for one with the same claims and a refreshed expiration period. This uses the token exchange flow. For example, a catalog client may be configured with client credentials from the OAuth2 Authorization flow. This client would exchange its client ID and secret for an access token using the client credentials request with this endpoint (1). Subsequent requests would then use that access token. Some clients may also handle sessions that have additional user context. These clients would use the token exchange flow to exchange a user token (the \"subject\" token) from the session for a more specific access token for that user, using the catalog's access token as the \"actor\" token (2). The user ID token is the \"subject\" token and can be any token type allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. This request should use the catalog's bearer token in the \"Authorization\" header. Clients may also use the token exchange flow to refresh a token that is about to expire by sending a token exchange request (3). The request's \"subject\" token should be the expiring token. This request should use the subject token in the \"Authorization\" header. 
+ + :param grant_type: + :type grant_type: str + :param scope: + :type scope: str + :param client_id: Client ID. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. + :type client_id: str + :param client_secret: Client secret. This can be sent in the request body, but OAuth2 recommends sending it in a Basic Authorization header. + :type client_secret: str + :param requested_token_type: + :type requested_token_type: TokenType + :param subject_token: Subject token for token exchange request + :type subject_token: str + :param subject_token_type: + :type subject_token_type: TokenType + :param actor_token: Actor token for token exchange request + :type actor_token: str + :param actor_token_type: + :type actor_token_type: TokenType + :param _request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._get_token_serialize( + grant_type=grant_type, + scope=scope, + client_id=client_id, + client_secret=client_secret, + requested_token_type=requested_token_type, + subject_token=subject_token, + subject_token_type=subject_token_type, + actor_token=actor_token, + actor_token_type=actor_token_type, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "OAuthTokenResponse", + '400': "OAuthError", + '401': "OAuthError", + '5XX': "OAuthError", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _get_token_serialize( + self, + grant_type, + scope, + client_id, + client_secret, + requested_token_type, + subject_token, + subject_token_type, + actor_token, + actor_token_type, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + # process the header parameters + # process the form parameters + if grant_type is not None: + _form_params.append(('grant_type', grant_type)) + if scope is not None: + _form_params.append(('scope', scope)) + if client_id is not None: + _form_params.append(('client_id', client_id)) + if client_secret is not None: + _form_params.append(('client_secret', client_secret)) + if requested_token_type is not None: + _form_params.append(('requested_token_type', requested_token_type)) + if subject_token is not None: + _form_params.append(('subject_token', subject_token)) + if 
subject_token_type is not None: + _form_params.append(('subject_token_type', subject_token_type)) + if actor_token is not None: + _form_params.append(('actor_token', actor_token)) + if actor_token_type is not None: + _form_params.append(('actor_token_type', actor_token_type)) + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/x-www-form-urlencoded' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'BearerAuth' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/v1/oauth/tokens', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + diff --git a/regtests/client/python/polaris/catalog/api_client.py b/regtests/client/python/polaris/catalog/api_client.py new file mode 100644 index 0000000000..a07a05d249 --- /dev/null +++ b/regtests/client/python/polaris/catalog/api_client.py @@ -0,0 +1,803 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import datetime +from dateutil.parser import parse +from enum import Enum +import decimal +import json +import mimetypes +import os +import re +import tempfile + +from urllib.parse import quote +from typing import Tuple, Optional, List, Dict, Union +from pydantic import SecretStr + +from polaris.catalog.configuration import Configuration +from polaris.catalog.api_response import ApiResponse, T as ApiResponseT +import polaris.catalog.models +from polaris.catalog import rest +from polaris.catalog.exceptions import ( + ApiValueError, + ApiException, + BadRequestException, + UnauthorizedException, + ForbiddenException, + NotFoundException, + ServiceException +) + +RequestSerialized = Tuple[str, str, Dict[str, str], Optional[str], List[str]] + +class ApiClient: + """Generic API client for OpenAPI client library builds. + + OpenAPI generic API client. This client handles the client- + server communication, and is invariant across implementations. Specifics of + the methods and models for each application are generated from the OpenAPI + templates. 
+ + :param configuration: .Configuration object for this client + :param header_name: a header to pass when making calls to the API. + :param header_value: a header value to pass when making calls to + the API. + :param cookie: a cookie to include in the header when making calls + to the API + """ + + PRIMITIVE_TYPES = (float, bool, bytes, str, int) + NATIVE_TYPES_MAPPING = { + 'int': int, + 'long': int, # TODO remove as only py3 is supported? + 'float': float, + 'str': str, + 'bool': bool, + 'date': datetime.date, + 'datetime': datetime.datetime, + 'decimal': decimal.Decimal, + 'object': object, + } + _pool = None + + def __init__( + self, + configuration=None, + header_name=None, + header_value=None, + cookie=None + ) -> None: + # use default configuration if none is provided + if configuration is None: + configuration = Configuration.get_default() + self.configuration = configuration + + self.rest_client = rest.RESTClientObject(configuration) + self.default_headers = {} + if header_name is not None: + self.default_headers[header_name] = header_value + self.cookie = cookie + # Set default User-Agent. + self.user_agent = 'OpenAPI-Generator/1.0.0/python' + self.client_side_validation = configuration.client_side_validation + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, traceback): + pass + + @property + def user_agent(self): + """User agent for this API client""" + return self.default_headers['User-Agent'] + + @user_agent.setter + def user_agent(self, value): + self.default_headers['User-Agent'] = value + + def set_default_header(self, header_name, header_value): + self.default_headers[header_name] = header_value + + + _default = None + + @classmethod + def get_default(cls): + """Return new instance of ApiClient. + + This method returns newly created, based on default constructor, + object of ApiClient class or returns a copy of default + ApiClient. + + :return: The ApiClient object. 
+ """ + if cls._default is None: + cls._default = ApiClient() + return cls._default + + @classmethod + def set_default(cls, default): + """Set default instance of ApiClient. + + It stores default ApiClient. + + :param default: object of ApiClient. + """ + cls._default = default + + def param_serialize( + self, + method, + resource_path, + path_params=None, + query_params=None, + header_params=None, + body=None, + post_params=None, + files=None, auth_settings=None, + collection_formats=None, + _host=None, + _request_auth=None + ) -> RequestSerialized: + + """Builds the HTTP request params needed by the request. + :param method: Method to call. + :param resource_path: Path to method endpoint. + :param path_params: Path parameters in the url. + :param query_params: Query parameters in the url. + :param header_params: Header parameters to be + placed in the request header. + :param body: Request body. + :param post_params dict: Request post form parameters, + for `application/x-www-form-urlencoded`, `multipart/form-data`. + :param auth_settings list: Auth Settings names for the request. + :param files dict: key -> filename, value -> filepath, + for `multipart/form-data`. + :param collection_formats: dict of collection formats for path, query, + header, and post parameters. + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the authentication + in the spec for a single request. 
+ :return: tuple of form (path, http_method, query_params, header_params, + body, post_params, files) + """ + + config = self.configuration + + # header parameters + header_params = header_params or {} + header_params.update(self.default_headers) + if self.cookie: + header_params['Cookie'] = self.cookie + if header_params: + header_params = self.sanitize_for_serialization(header_params) + header_params = dict( + self.parameters_to_tuples(header_params,collection_formats) + ) + + # path parameters + if path_params: + path_params = self.sanitize_for_serialization(path_params) + path_params = self.parameters_to_tuples( + path_params, + collection_formats + ) + for k, v in path_params: + # specified safe chars, encode everything + resource_path = resource_path.replace( + '{%s}' % k, + quote(str(v), safe=config.safe_chars_for_path_param) + ) + + # post parameters + if post_params or files: + post_params = post_params if post_params else [] + post_params = self.sanitize_for_serialization(post_params) + post_params = self.parameters_to_tuples( + post_params, + collection_formats + ) + if files: + post_params.extend(self.files_parameters(files)) + + # auth setting + self.update_params_for_auth( + header_params, + query_params, + auth_settings, + resource_path, + method, + body, + request_auth=_request_auth + ) + + # body + if body: + body = self.sanitize_for_serialization(body) + + # request url + if _host is None or self.configuration.ignore_operation_servers: + url = self.configuration.host + resource_path + else: + # use server/host defined in path or operation instead + url = _host + resource_path + + # query parameters + if query_params: + query_params = self.sanitize_for_serialization(query_params) + url_query = self.parameters_to_url_query( + query_params, + collection_formats + ) + url += "?" 
+ url_query + + return method, url, header_params, body, post_params + + + def call_api( + self, + method, + url, + header_params=None, + body=None, + post_params=None, + _request_timeout=None + ) -> rest.RESTResponse: + """Makes the HTTP request (synchronous) + :param method: Method to call. + :param url: Path to method endpoint. + :param header_params: Header parameters to be + placed in the request header. + :param body: Request body. + :param post_params dict: Request post form parameters, + for `application/x-www-form-urlencoded`, `multipart/form-data`. + :param _request_timeout: timeout setting for this request. + :return: RESTResponse + """ + + try: + # perform request and return response + response_data = self.rest_client.request( + method, url, + headers=header_params, + body=body, post_params=post_params, + _request_timeout=_request_timeout + ) + + except ApiException as e: + raise e + + return response_data + + def response_deserialize( + self, + response_data: rest.RESTResponse, + response_types_map: Optional[Dict[str, ApiResponseT]]=None + ) -> ApiResponse[ApiResponseT]: + """Deserializes response into an object. + :param response_data: RESTResponse object to be deserialized. + :param response_types_map: dict of response types. + :return: ApiResponse + """ + + msg = "RESTResponse.read() must be called before passing it to response_deserialize()" + assert response_data.data is not None, msg + + response_type = response_types_map.get(str(response_data.status), None) + if not response_type and isinstance(response_data.status, int) and 100 <= response_data.status <= 599: + # if not found, look for '1XX', '2XX', etc. 
+ response_type = response_types_map.get(str(response_data.status)[0] + "XX", None) + + # deserialize response data + response_text = None + return_data = None + try: + if response_type == "bytearray": + return_data = response_data.data + elif response_type == "file": + return_data = self.__deserialize_file(response_data) + elif response_type is not None: + match = None + content_type = response_data.getheader('content-type') + if content_type is not None: + match = re.search(r"charset=([a-zA-Z\-\d]+)[\s;]?", content_type) + encoding = match.group(1) if match else "utf-8" + response_text = response_data.data.decode(encoding) + return_data = self.deserialize(response_text, response_type, content_type) + finally: + if not 200 <= response_data.status <= 299: + raise ApiException.from_response( + http_resp=response_data, + body=response_text, + data=return_data, + ) + + return ApiResponse( + status_code = response_data.status, + data = return_data, + headers = response_data.getheaders(), + raw_data = response_data.data + ) + + def sanitize_for_serialization(self, obj): + """Builds a JSON POST object. + + If obj is None, return None. + If obj is SecretStr, return obj.get_secret_value() + If obj is str, int, long, float, bool, return directly. + If obj is datetime.datetime, datetime.date + convert to string in iso8601 format. + If obj is decimal.Decimal return string representation. + If obj is list, sanitize each element in the list. + If obj is dict, return the dict. + If obj is OpenAPI model, return the properties dict. + + :param obj: The data to serialize. + :return: The serialized form of data. 
+ """ + if obj is None: + return None + elif isinstance(obj, Enum): + return obj.value + elif isinstance(obj, SecretStr): + return obj.get_secret_value() + elif isinstance(obj, self.PRIMITIVE_TYPES): + return obj + elif isinstance(obj, list): + return [ + self.sanitize_for_serialization(sub_obj) for sub_obj in obj + ] + elif isinstance(obj, tuple): + return tuple( + self.sanitize_for_serialization(sub_obj) for sub_obj in obj + ) + elif isinstance(obj, (datetime.datetime, datetime.date)): + return obj.isoformat() + elif isinstance(obj, decimal.Decimal): + return str(obj) + + elif isinstance(obj, dict): + obj_dict = obj + else: + # Convert model obj to dict except + # attributes `openapi_types`, `attribute_map` + # and attributes which value is not None. + # Convert attribute name to json key in + # model definition for request. + if hasattr(obj, 'to_dict') and callable(getattr(obj, 'to_dict')): + obj_dict = obj.to_dict() + else: + obj_dict = obj.__dict__ + + return { + key: self.sanitize_for_serialization(val) + for key, val in obj_dict.items() + } + + def deserialize(self, response_text: str, response_type: str, content_type: Optional[str]): + """Deserializes response into an object. + + :param response: RESTResponse object to be deserialized. + :param response_type: class literal for + deserialized object, or string of class name. + :param content_type: content type of response. + + :return: deserialized object. 
+ """ + + # fetch data from response object + if content_type is None: + try: + data = json.loads(response_text) + except ValueError: + data = response_text + elif content_type.startswith("application/json"): + if response_text == "": + data = "" + else: + data = json.loads(response_text) + elif content_type.startswith("text/plain"): + data = response_text + else: + raise ApiException( + status=0, + reason="Unsupported content type: {0}".format(content_type) + ) + + return self.__deserialize(data, response_type) + + def __deserialize(self, data, klass): + """Deserializes dict, list, str into an object. + + :param data: dict, list or str. + :param klass: class literal, or string of class name. + + :return: object. + """ + if data is None: + return None + + if isinstance(klass, str): + if klass.startswith('List['): + m = re.match(r'List\[(.*)]', klass) + assert m is not None, "Malformed List type definition" + sub_kls = m.group(1) + return [self.__deserialize(sub_data, sub_kls) + for sub_data in data] + + if klass.startswith('Dict['): + m = re.match(r'Dict\[([^,]*), (.*)]', klass) + assert m is not None, "Malformed Dict type definition" + sub_kls = m.group(2) + return {k: self.__deserialize(v, sub_kls) + for k, v in data.items()} + + # convert str to class + if klass in self.NATIVE_TYPES_MAPPING: + klass = self.NATIVE_TYPES_MAPPING[klass] + else: + klass = getattr(polaris.catalog.models, klass) + + if klass in self.PRIMITIVE_TYPES: + return self.__deserialize_primitive(data, klass) + elif klass == object: + return self.__deserialize_object(data) + elif klass == datetime.date: + return self.__deserialize_date(data) + elif klass == datetime.datetime: + return self.__deserialize_datetime(data) + elif klass == decimal.Decimal: + return decimal.Decimal(data) + elif issubclass(klass, Enum): + return self.__deserialize_enum(data, klass) + else: + return self.__deserialize_model(data, klass) + + def parameters_to_tuples(self, params, collection_formats): + """Get parameters 
as list of tuples, formatting collections. + + :param params: Parameters as dict or list of two-tuples + :param dict collection_formats: Parameter collection formats + :return: Parameters as list of tuples, collections formatted + """ + new_params: List[Tuple[str, str]] = [] + if collection_formats is None: + collection_formats = {} + for k, v in params.items() if isinstance(params, dict) else params: + if k in collection_formats: + collection_format = collection_formats[k] + if collection_format == 'multi': + new_params.extend((k, value) for value in v) + else: + if collection_format == 'ssv': + delimiter = ' ' + elif collection_format == 'tsv': + delimiter = '\t' + elif collection_format == 'pipes': + delimiter = '|' + else: # csv is the default + delimiter = ',' + new_params.append( + (k, delimiter.join(str(value) for value in v))) + else: + new_params.append((k, v)) + return new_params + + def parameters_to_url_query(self, params, collection_formats): + """Get parameters as list of tuples, formatting collections. + + :param params: Parameters as dict or list of two-tuples + :param dict collection_formats: Parameter collection formats + :return: URL query string (e.g. 
a=Hello%20World&b=123) + """ + new_params: List[Tuple[str, str]] = [] + if collection_formats is None: + collection_formats = {} + for k, v in params.items() if isinstance(params, dict) else params: + if isinstance(v, bool): + v = str(v).lower() + if isinstance(v, (int, float)): + v = str(v) + if isinstance(v, dict): + v = json.dumps(v) + + if k in collection_formats: + collection_format = collection_formats[k] + if collection_format == 'multi': + new_params.extend((k, str(value)) for value in v) + else: + if collection_format == 'ssv': + delimiter = ' ' + elif collection_format == 'tsv': + delimiter = '\t' + elif collection_format == 'pipes': + delimiter = '|' + else: # csv is the default + delimiter = ',' + new_params.append( + (k, delimiter.join(quote(str(value)) for value in v)) + ) + else: + new_params.append((k, quote(str(v)))) + + return "&".join(["=".join(map(str, item)) for item in new_params]) + + def files_parameters(self, files: Dict[str, Union[str, bytes]]): + """Builds form parameters. + + :param files: File parameters. + :return: Form parameters with files. + """ + params = [] + for k, v in files.items(): + if isinstance(v, str): + with open(v, 'rb') as f: + filename = os.path.basename(f.name) + filedata = f.read() + elif isinstance(v, bytes): + filename = k + filedata = v + else: + raise ValueError("Unsupported file value") + mimetype = ( + mimetypes.guess_type(filename)[0] + or 'application/octet-stream' + ) + params.append( + tuple([k, tuple([filename, filedata, mimetype])]) + ) + return params + + def select_header_accept(self, accepts: List[str]) -> Optional[str]: + """Returns `Accept` based on an array of accepts provided. + + :param accepts: List of headers. + :return: Accept (e.g. application/json). 
+ """ + if not accepts: + return None + + for accept in accepts: + if re.search('json', accept, re.IGNORECASE): + return accept + + return accepts[0] + + def select_header_content_type(self, content_types): + """Returns `Content-Type` based on an array of content_types provided. + + :param content_types: List of content-types. + :return: Content-Type (e.g. application/json). + """ + if not content_types: + return None + + for content_type in content_types: + if re.search('json', content_type, re.IGNORECASE): + return content_type + + return content_types[0] + + def update_params_for_auth( + self, + headers, + queries, + auth_settings, + resource_path, + method, + body, + request_auth=None + ) -> None: + """Updates header and query params based on authentication setting. + + :param headers: Header parameters dict to be updated. + :param queries: Query parameters tuple list to be updated. + :param auth_settings: Authentication setting identifiers list. + :param resource_path: A string representation of the HTTP request resource path. + :param method: A string representation of the HTTP request method. + :param body: An object representing the body of the HTTP request. + The object type is the return value of sanitize_for_serialization(). + :param request_auth: if set, the provided settings will + override the token in the configuration. + """ + if not auth_settings: + return + + if request_auth: + self._apply_auth_params( + headers, + queries, + resource_path, + method, + body, + request_auth + ) + else: + for auth in auth_settings: + auth_setting = self.configuration.auth_settings().get(auth) + if auth_setting: + self._apply_auth_params( + headers, + queries, + resource_path, + method, + body, + auth_setting + ) + + def _apply_auth_params( + self, + headers, + queries, + resource_path, + method, + body, + auth_setting + ) -> None: + """Updates the request parameters based on a single auth_setting + + :param headers: Header parameters dict to be updated.
+ :param queries: Query parameters tuple list to be updated. + :param resource_path: A string representation of the HTTP request resource path. + :param method: A string representation of the HTTP request method. + :param body: An object representing the body of the HTTP request. + The object type is the return value of sanitize_for_serialization(). + :param auth_setting: auth settings for the endpoint + """ + if auth_setting['in'] == 'cookie': + headers['Cookie'] = auth_setting['value'] + elif auth_setting['in'] == 'header': + if auth_setting['type'] != 'http-signature': + headers[auth_setting['key']] = auth_setting['value'] + elif auth_setting['in'] == 'query': + queries.append((auth_setting['key'], auth_setting['value'])) + else: + raise ApiValueError( + 'Authentication token must be in `cookie`, `query` or `header`' + ) + + def __deserialize_file(self, response): + """Deserializes body to file + + Saves response body into a file in a temporary folder, + using the filename from the `Content-Disposition` header if provided. + + :param response: RESTResponse. + :return: file path. + """ + fd, path = tempfile.mkstemp(dir=self.configuration.temp_folder_path) + os.close(fd) + os.remove(path) + + content_disposition = response.getheader("Content-Disposition") + if content_disposition: + m = re.search( + r'filename=[\'"]?([^\'"\s]+)[\'"]?', + content_disposition + ) + assert m is not None, "Unexpected 'content-disposition' header value" + filename = m.group(1) + path = os.path.join(os.path.dirname(path), filename) + + with open(path, "wb") as f: + f.write(response.data) + + return path + + def __deserialize_primitive(self, data, klass): + """Deserializes string to primitive type. + + :param data: str. + :param klass: class literal. + + :return: int, float, str, bool.
+ """ + try: + return klass(data) + except UnicodeEncodeError: + return str(data) + except TypeError: + return data + + def __deserialize_object(self, value): + """Return an original value. + + :return: object. + """ + return value + + def __deserialize_date(self, string): + """Deserializes string to date. + + :param string: str. + :return: date. + """ + try: + return parse(string).date() + except ImportError: + return string + except ValueError: + raise rest.ApiException( + status=0, + reason="Failed to parse `{0}` as date object".format(string) + ) + + def __deserialize_datetime(self, string): + """Deserializes string to datetime. + + The string should be in iso8601 datetime format. + + :param string: str. + :return: datetime. + """ + try: + return parse(string) + except ImportError: + return string + except ValueError: + raise rest.ApiException( + status=0, + reason=( + "Failed to parse `{0}` as datetime object" + .format(string) + ) + ) + + def __deserialize_enum(self, data, klass): + """Deserializes primitive type to enum. + + :param data: primitive type. + :param klass: class literal. + :return: enum value. + """ + try: + return klass(data) + except ValueError: + raise rest.ApiException( + status=0, + reason=( + "Failed to parse `{0}` as `{1}`" + .format(data, klass) + ) + ) + + def __deserialize_model(self, data, klass): + """Deserializes list or dict to model. + + :param data: dict, list. + :param klass: class literal. + :return: model object. + """ + + return klass.from_dict(data) diff --git a/regtests/client/python/polaris/catalog/api_response.py b/regtests/client/python/polaris/catalog/api_response.py new file mode 100644 index 0000000000..e3a3bc42e0 --- /dev/null +++ b/regtests/client/python/polaris/catalog/api_response.py @@ -0,0 +1,37 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +"""API response object.""" + +from __future__ import annotations +from typing import Optional, Generic, Mapping, TypeVar +from pydantic import Field, StrictInt, StrictBytes, BaseModel + +T = TypeVar("T") + +class ApiResponse(BaseModel, Generic[T]): + """ + API response object + """ + + status_code: StrictInt = Field(description="HTTP status code") + headers: Optional[Mapping[str, str]] = Field(None, description="HTTP headers") + data: T = Field(description="Deserialized data given the data type") + raw_data: StrictBytes = Field(description="Raw data (HTTP response body)") + + model_config = { + "arbitrary_types_allowed": True + } diff --git a/regtests/client/python/polaris/catalog/configuration.py b/regtests/client/python/polaris/catalog/configuration.py new file mode 100644 index 0000000000..aecac866a7 --- /dev/null +++ b/regtests/client/python/polaris/catalog/configuration.py @@ -0,0 +1,516 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import copy +import logging +from logging import FileHandler +import multiprocessing +import sys +from typing import Optional +import urllib3 + +import http.client as httplib + +JSON_SCHEMA_VALIDATION_KEYWORDS = { + 'multipleOf', 'maximum', 'exclusiveMaximum', + 'minimum', 'exclusiveMinimum', 'maxLength', + 'minLength', 'pattern', 'maxItems', 'minItems' +} + +class Configuration: + """This class contains various settings of the API client. + + :param host: Base url. + :param ignore_operation_servers + Boolean to ignore operation servers for the API client. + Config will use `host` as the base url regardless of the operation servers. + :param api_key: Dict to store API key(s). + Each entry in the dict specifies an API key. + The dict key is the name of the security scheme in the OAS specification. + The dict value is the API key secret. + :param api_key_prefix: Dict to store API prefix (e.g. Bearer). + The dict key is the name of the security scheme in the OAS specification. + The dict value is an API key prefix when generating the auth data. + :param username: Username for HTTP basic authentication. + :param password: Password for HTTP basic authentication. + :param access_token: Access token. + :param server_index: Index to servers configuration. + :param server_variables: Mapping with string values to replace variables in + templated server configuration. The validation of enums is performed for + variables with defined enum values before. + :param server_operation_index: Mapping from operation ID to an index to server + configuration. 
+ :param server_operation_variables: Mapping from operation ID to a mapping with + string values to replace variables in templated server configuration. + The validation of enums is performed for variables with defined enum + values before. + :param ssl_ca_cert: str - the path to a file of concatenated CA certificates + in PEM format. + :param retries: Number of retries for API requests. + + :Example: + """ + + _default = None + + def __init__(self, host=None, + api_key=None, api_key_prefix=None, + username=None, password=None, + access_token=None, + server_index=None, server_variables=None, + server_operation_index=None, server_operation_variables=None, + ignore_operation_servers=False, + ssl_ca_cert=None, + retries=None, + *, + debug: Optional[bool] = None + ) -> None: + """Constructor + """ + self._base_path = "https://localhost" if host is None else host + """Default Base url + """ + self.server_index = 0 if server_index is None and host is None else server_index + self.server_operation_index = server_operation_index or {} + """Default server index + """ + self.server_variables = server_variables or {} + self.server_operation_variables = server_operation_variables or {} + """Default server variables + """ + self.ignore_operation_servers = ignore_operation_servers + """Ignore operation servers + """ + self.temp_folder_path = None + """Temp file folder for downloading files + """ + # Authentication Settings + self.api_key = {} + if api_key: + self.api_key = api_key + """dict to store API key(s) + """ + self.api_key_prefix = {} + if api_key_prefix: + self.api_key_prefix = api_key_prefix + """dict to store API prefix (e.g. 
Bearer) + """ + self.refresh_api_key_hook = None + """function hook to refresh API key if expired + """ + self.username = username + """Username for HTTP basic authentication + """ + self.password = password + """Password for HTTP basic authentication + """ + self.access_token = access_token + """Access token + """ + self.logger = {} + """Logging Settings + """ + self.logger["package_logger"] = logging.getLogger("polaris.catalog") + self.logger["urllib3_logger"] = logging.getLogger("urllib3") + self.logger_format = '%(asctime)s %(levelname)s %(message)s' + """Log format + """ + self.logger_stream_handler = None + """Log stream handler + """ + self.logger_file_handler: Optional[FileHandler] = None + """Log file handler + """ + self.logger_file = None + """Debug file location + """ + if debug is not None: + self.debug = debug + else: + self.__debug = False + """Debug switch + """ + + self.verify_ssl = True + """SSL/TLS verification + Set this to false to skip verifying SSL certificate when calling API + from https server. + """ + self.ssl_ca_cert = ssl_ca_cert + """Set this to customize the certificate file to verify the peer. + """ + self.cert_file = None + """client certificate file + """ + self.key_file = None + """client key file + """ + self.assert_hostname = None + """Set this to True/False to enable/disable SSL hostname verification. + """ + self.tls_server_name = None + """SSL/TLS Server Name Indication (SNI) + Set this to the SNI value expected by the server. + """ + + self.connection_pool_maxsize = multiprocessing.cpu_count() * 5 + """urllib3 connection pool's maximum number of connections saved + per pool. urllib3 uses 1 connection as default value, but this is + not the best value when you are making a lot of possibly parallel + requests to the same host, which is often the case here. + cpu_count * 5 is used as default value to increase performance. 
+ """ + + self.proxy: Optional[str] = None + """Proxy URL + """ + self.proxy_headers = None + """Proxy headers + """ + self.safe_chars_for_path_param = '' + """Safe chars for path_param + """ + self.retries = retries + """Adding retries to override urllib3 default value 3 + """ + # Enable client side validation + self.client_side_validation = True + + self.socket_options = None + """Options to pass down to the underlying urllib3 socket + """ + + self.datetime_format = "%Y-%m-%dT%H:%M:%S.%f%z" + """datetime format + """ + + self.date_format = "%Y-%m-%d" + """date format + """ + + def __deepcopy__(self, memo): + cls = self.__class__ + result = cls.__new__(cls) + memo[id(self)] = result + for k, v in self.__dict__.items(): + if k not in ('logger', 'logger_file_handler'): + setattr(result, k, copy.deepcopy(v, memo)) + # shallow copy of loggers + result.logger = copy.copy(self.logger) + # use setters to configure loggers + result.logger_file = self.logger_file + result.debug = self.debug + return result + + def __setattr__(self, name, value): + object.__setattr__(self, name, value) + + @classmethod + def set_default(cls, default): + """Set default instance of configuration. + + It stores default configuration, which can be + returned by get_default_copy method. + + :param default: object of Configuration + """ + cls._default = default + + @classmethod + def get_default_copy(cls): + """Deprecated. Please use `get_default` instead. + + Deprecated. Please use `get_default` instead. + + :return: The configuration object. + """ + return cls.get_default() + + @classmethod + def get_default(cls): + """Return the default configuration. + + This method returns newly created, based on default constructor, + object of Configuration class or returns a copy of default + configuration. + + :return: The configuration object. + """ + if cls._default is None: + cls._default = Configuration() + return cls._default + + @property + def logger_file(self): + """The logger file. 
+ + If the logger_file is None, then add stream handler and remove file + handler. Otherwise, add file handler and remove stream handler. + + :param value: The logger_file path. + :type: str + """ + return self.__logger_file + + @logger_file.setter + def logger_file(self, value): + """The logger file. + + If the logger_file is None, then add stream handler and remove file + handler. Otherwise, add file handler and remove stream handler. + + :param value: The logger_file path. + :type: str + """ + self.__logger_file = value + if self.__logger_file: + # If set logging file, + # then add file handler and remove stream handler. + self.logger_file_handler = logging.FileHandler(self.__logger_file) + self.logger_file_handler.setFormatter(self.logger_formatter) + for _, logger in self.logger.items(): + logger.addHandler(self.logger_file_handler) + + @property + def debug(self): + """Debug status + + :param value: The debug status, True or False. + :type: bool + """ + return self.__debug + + @debug.setter + def debug(self, value): + """Debug status + + :param value: The debug status, True or False. + :type: bool + """ + self.__debug = value + if self.__debug: + # if debug status is True, turn on debug logging + for _, logger in self.logger.items(): + logger.setLevel(logging.DEBUG) + # turn on httplib debug + httplib.HTTPConnection.debuglevel = 1 + else: + # if debug status is False, turn off debug logging, + # setting log level to default `logging.WARNING` + for _, logger in self.logger.items(): + logger.setLevel(logging.WARNING) + # turn off httplib debug + httplib.HTTPConnection.debuglevel = 0 + + @property + def logger_format(self): + """The logger format. + + The logger_formatter will be updated when sets logger_format. + + :param value: The format string. + :type: str + """ + return self.__logger_format + + @logger_format.setter + def logger_format(self, value): + """The logger format. + + The logger_formatter will be updated when sets logger_format. 
+ + :param value: The format string. + :type: str + """ + self.__logger_format = value + self.logger_formatter = logging.Formatter(self.__logger_format) + + def get_api_key_with_prefix(self, identifier, alias=None): + """Gets API key (with prefix if set). + + :param identifier: The identifier of apiKey. + :param alias: The alternative identifier of apiKey. + :return: The token for api key authentication. + """ + if self.refresh_api_key_hook is not None: + self.refresh_api_key_hook(self) + key = self.api_key.get(identifier, self.api_key.get(alias) if alias is not None else None) + if key: + prefix = self.api_key_prefix.get(identifier) + if prefix: + return "%s %s" % (prefix, key) + else: + return key + + def get_basic_auth_token(self): + """Gets HTTP basic authentication header (string). + + :return: The token for basic HTTP authentication. + """ + username = "" + if self.username is not None: + username = self.username + password = "" + if self.password is not None: + password = self.password + return urllib3.util.make_headers( + basic_auth=username + ':' + password + ).get('authorization') + + def auth_settings(self): + """Gets Auth Settings dict for api client. + + :return: The Auth Settings information dict. + """ + auth = {} + if self.access_token is not None: + auth['OAuth2'] = { + 'type': 'oauth2', + 'in': 'header', + 'key': 'Authorization', + 'value': 'Bearer ' + self.access_token + } + if self.access_token is not None: + auth['BearerAuth'] = { + 'type': 'bearer', + 'in': 'header', + 'key': 'Authorization', + 'value': 'Bearer ' + self.access_token + } + return auth + + def to_debug_report(self): + """Gets the essential information for debugging. + + :return: The report for debugging. 
+ """ + return "Python SDK Debug Report:\n"\ + "OS: {env}\n"\ + "Python Version: {pyversion}\n"\ + "Version of the API: 0.0.1\n"\ + "SDK Package Version: 1.0.0".\ + format(env=sys.platform, pyversion=sys.version) + + def get_host_settings(self): + """Gets an array of host settings + + :return: An array of host settings + """ + return [ + { + 'url': "{scheme}://{host}/{basePath}", + 'description': "Server URL when the port can be inferred from the scheme", + 'variables': { + 'scheme': { + 'description': "The scheme of the URI, either http or https.", + 'default_value': "https", + }, + 'host': { + 'description': "The host address for the specified server", + 'default_value': "localhost", + }, + 'basePath': { + 'description': "Optional prefix to be appended to all routes", + 'default_value': "", + } + } + }, + { + 'url': "{scheme}://{host}:{port}/{basePath}", + 'description': "Generic base server URL, with all parts configurable", + 'variables': { + 'scheme': { + 'description': "The scheme of the URI, either http or https.", + 'default_value': "https", + }, + 'host': { + 'description': "The host address for the specified server", + 'default_value': "localhost", + }, + 'port': { + 'description': "The port used when addressing the host", + 'default_value': "443", + }, + 'basePath': { + 'description': "Optional prefix to be appended to all routes", + 'default_value': "", + } + } + } + ] + + def get_host_from_settings(self, index, variables=None, servers=None): + """Gets host URL based on the index and variables + :param index: array index of the host settings + :param variables: hash of variable and the corresponding value + :param servers: an array of host settings or None + :return: URL based on host settings + """ + if index is None: + return self._base_path + + variables = {} if variables is None else variables + servers = self.get_host_settings() if servers is None else servers + + try: + server = servers[index] + except IndexError: + raise ValueError( + "Invalid 
index {0} when selecting the host settings. " + "Must be less than {1}".format(index, len(servers))) + + url = server['url'] + + # go through variables and replace placeholders + for variable_name, variable in server.get('variables', {}).items(): + used_value = variables.get( + variable_name, variable['default_value']) + + if 'enum_values' in variable \ + and used_value not in variable['enum_values']: + raise ValueError( + "The variable `{0}` in the host URL has invalid value " + "{1}. Must be {2}.".format( + variable_name, used_value, + variable['enum_values'])) + + url = url.replace("{" + variable_name + "}", used_value) + + return url + + @property + def host(self): + """Return generated host.""" + return self.get_host_from_settings(self.server_index, variables=self.server_variables) + + @host.setter + def host(self, value): + """Fix base path.""" + self._base_path = value + self.server_index = None diff --git a/regtests/client/python/polaris/catalog/exceptions.py b/regtests/client/python/polaris/catalog/exceptions.py new file mode 100644 index 0000000000..9da03d2d2b --- /dev/null +++ b/regtests/client/python/polaris/catalog/exceptions.py @@ -0,0 +1,214 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API.
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + +from typing import Any, Optional +from typing_extensions import Self + +class OpenApiException(Exception): + """The base exception class for all OpenAPI exceptions""" + + +class ApiTypeError(OpenApiException, TypeError): + def __init__(self, msg, path_to_item=None, valid_classes=None, + key_type=None) -> None: + """ Raises an exception for TypeErrors + + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (list): a list of keys and indices to get to the + current_item + None if unset + valid_classes (tuple): the primitive classes that current item + should be an instance of + None if unset + key_type (bool): True if the offending item is a key in a dict, + False if it is a value in a dict or an item in a list, + None if unset + """ + self.path_to_item = path_to_item + self.valid_classes = valid_classes + self.key_type = key_type + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiTypeError, self).__init__(full_msg) + + +class ApiValueError(OpenApiException, ValueError): + def __init__(self, msg, path_to_item=None) -> None: + """ + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (list): the path to the exception in the + received_data dict. None if unset + """ + + self.path_to_item = path_to_item + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiValueError, self).__init__(full_msg) + + +class ApiAttributeError(OpenApiException, AttributeError): + def __init__(self, msg, path_to_item=None) -> None: + """ + Raised when an attribute reference or assignment fails.
+ + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (None/list) the path to the exception in the + received_data dict + """ + self.path_to_item = path_to_item + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiAttributeError, self).__init__(full_msg) + + +class ApiKeyError(OpenApiException, KeyError): + def __init__(self, msg, path_to_item=None) -> None: + """ + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (None/list) the path to the exception in the + received_data dict + """ + self.path_to_item = path_to_item + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiKeyError, self).__init__(full_msg) + + +class ApiException(OpenApiException): + + def __init__( + self, + status=None, + reason=None, + http_resp=None, + *, + body: Optional[str] = None, + data: Optional[Any] = None, + ) -> None: + self.status = status + self.reason = reason + self.body = body + self.data = data + self.headers = None + + if http_resp: + if self.status is None: + self.status = http_resp.status + if self.reason is None: + self.reason = http_resp.reason + if self.body is None: + try: + self.body = http_resp.data.decode('utf-8') + except Exception: + pass + self.headers = http_resp.getheaders() + + @classmethod + def from_response( + cls, + *, + http_resp, + body: Optional[str], + data: Optional[Any], + ) -> Self: + if http_resp.status == 400: + raise BadRequestException(http_resp=http_resp, body=body, data=data) + + if http_resp.status == 401: + raise UnauthorizedException(http_resp=http_resp, body=body, data=data) + + if http_resp.status == 403: + raise ForbiddenException(http_resp=http_resp, body=body, data=data) + + if http_resp.status == 404: + raise NotFoundException(http_resp=http_resp, body=body, data=data) + + if 500 <= http_resp.status <= 599: + raise ServiceException(http_resp=http_resp, body=body, data=data) + 
raise ApiException(http_resp=http_resp, body=body, data=data) + + def __str__(self): + """Custom error messages for exception""" + error_message = "({0})\n"\ + "Reason: {1}\n".format(self.status, self.reason) + if self.headers: + error_message += "HTTP response headers: {0}\n".format( + self.headers) + + if self.data or self.body: + error_message += "HTTP response body: {0}\n".format(self.data or self.body) + + return error_message + + +class BadRequestException(ApiException): + pass + + +class NotFoundException(ApiException): + pass + + +class UnauthorizedException(ApiException): + pass + + +class ForbiddenException(ApiException): + pass + + +class ServiceException(ApiException): + pass + + +def render_path(path_to_item): + """Returns a string representation of a path""" + result = "" + for pth in path_to_item: + if isinstance(pth, int): + result += "[{0}]".format(pth) + else: + result += "['{0}']".format(pth) + return result diff --git a/regtests/client/python/polaris/catalog/models/__init__.py b/regtests/client/python/polaris/catalog/models/__init__.py new file mode 100644 index 0000000000..c2da2dc1f6 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/__init__.py @@ -0,0 +1,141 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +# flake8: noqa +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +# import models into model package +from polaris.catalog.models.add_partition_spec_update import AddPartitionSpecUpdate +from polaris.catalog.models.add_schema_update import AddSchemaUpdate +from polaris.catalog.models.add_snapshot_update import AddSnapshotUpdate +from polaris.catalog.models.add_sort_order_update import AddSortOrderUpdate +from polaris.catalog.models.add_view_version_update import AddViewVersionUpdate +from polaris.catalog.models.and_or_expression import AndOrExpression +from polaris.catalog.models.assert_create import AssertCreate +from polaris.catalog.models.assert_current_schema_id import AssertCurrentSchemaId +from polaris.catalog.models.assert_default_sort_order_id import AssertDefaultSortOrderId +from polaris.catalog.models.assert_default_spec_id import AssertDefaultSpecId +from polaris.catalog.models.assert_last_assigned_field_id import AssertLastAssignedFieldId +from polaris.catalog.models.assert_last_assigned_partition_id import AssertLastAssignedPartitionId +from polaris.catalog.models.assert_ref_snapshot_id import AssertRefSnapshotId +from polaris.catalog.models.assert_table_uuid import AssertTableUUID +from polaris.catalog.models.assert_view_uuid import AssertViewUUID +from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.blob_metadata import BlobMetadata +from polaris.catalog.models.catalog_config import CatalogConfig +from polaris.catalog.models.commit_report import CommitReport +from polaris.catalog.models.commit_table_request import CommitTableRequest +from polaris.catalog.models.commit_table_response import CommitTableResponse +from 
polaris.catalog.models.commit_transaction_request import CommitTransactionRequest +from polaris.catalog.models.commit_view_request import CommitViewRequest +from polaris.catalog.models.content_file import ContentFile +from polaris.catalog.models.count_map import CountMap +from polaris.catalog.models.counter_result import CounterResult +from polaris.catalog.models.create_namespace_request import CreateNamespaceRequest +from polaris.catalog.models.create_namespace_response import CreateNamespaceResponse +from polaris.catalog.models.create_table_request import CreateTableRequest +from polaris.catalog.models.create_view_request import CreateViewRequest +from polaris.catalog.models.data_file import DataFile +from polaris.catalog.models.equality_delete_file import EqualityDeleteFile +from polaris.catalog.models.error_model import ErrorModel +from polaris.catalog.models.expression import Expression +from polaris.catalog.models.file_format import FileFormat +from polaris.catalog.models.get_namespace_response import GetNamespaceResponse +from polaris.catalog.models.iceberg_error_response import IcebergErrorResponse +from polaris.catalog.models.list_namespaces_response import ListNamespacesResponse +from polaris.catalog.models.list_tables_response import ListTablesResponse +from polaris.catalog.models.list_type import ListType +from polaris.catalog.models.literal_expression import LiteralExpression +from polaris.catalog.models.load_table_result import LoadTableResult +from polaris.catalog.models.load_view_result import LoadViewResult +from polaris.catalog.models.map_type import MapType +from polaris.catalog.models.metadata_log_inner import MetadataLogInner +from polaris.catalog.models.metric_result import MetricResult +from polaris.catalog.models.model_schema import ModelSchema +from polaris.catalog.models.not_expression import NotExpression +from polaris.catalog.models.notification_request import NotificationRequest +from polaris.catalog.models.notification_type import 
NotificationType +from polaris.catalog.models.null_order import NullOrder +from polaris.catalog.models.o_auth_error import OAuthError +from polaris.catalog.models.o_auth_token_response import OAuthTokenResponse +from polaris.catalog.models.partition_field import PartitionField +from polaris.catalog.models.partition_spec import PartitionSpec +from polaris.catalog.models.partition_statistics_file import PartitionStatisticsFile +from polaris.catalog.models.position_delete_file import PositionDeleteFile +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from polaris.catalog.models.register_table_request import RegisterTableRequest +from polaris.catalog.models.remove_partition_statistics_update import RemovePartitionStatisticsUpdate +from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate +from polaris.catalog.models.remove_snapshot_ref_update import RemoveSnapshotRefUpdate +from polaris.catalog.models.remove_snapshots_update import RemoveSnapshotsUpdate +from polaris.catalog.models.remove_statistics_update import RemoveStatisticsUpdate +from polaris.catalog.models.rename_table_request import RenameTableRequest +from polaris.catalog.models.report_metrics_request import ReportMetricsRequest +from polaris.catalog.models.sql_view_representation import SQLViewRepresentation +from polaris.catalog.models.scan_report import ScanReport +from polaris.catalog.models.set_current_schema_update import SetCurrentSchemaUpdate +from polaris.catalog.models.set_current_view_version_update import SetCurrentViewVersionUpdate +from polaris.catalog.models.set_default_sort_order_update import SetDefaultSortOrderUpdate +from polaris.catalog.models.set_default_spec_update import SetDefaultSpecUpdate +from polaris.catalog.models.set_expression import SetExpression +from polaris.catalog.models.set_location_update import SetLocationUpdate +from polaris.catalog.models.set_partition_statistics_update import SetPartitionStatisticsUpdate +from 
polaris.catalog.models.set_properties_update import SetPropertiesUpdate +from polaris.catalog.models.set_snapshot_ref_update import SetSnapshotRefUpdate +from polaris.catalog.models.set_statistics_update import SetStatisticsUpdate +from polaris.catalog.models.snapshot import Snapshot +from polaris.catalog.models.snapshot_log_inner import SnapshotLogInner +from polaris.catalog.models.snapshot_reference import SnapshotReference +from polaris.catalog.models.snapshot_summary import SnapshotSummary +from polaris.catalog.models.sort_direction import SortDirection +from polaris.catalog.models.sort_field import SortField +from polaris.catalog.models.sort_order import SortOrder +from polaris.catalog.models.statistics_file import StatisticsFile +from polaris.catalog.models.struct_field import StructField +from polaris.catalog.models.struct_type import StructType +from polaris.catalog.models.table_identifier import TableIdentifier +from polaris.catalog.models.table_metadata import TableMetadata +from polaris.catalog.models.table_requirement import TableRequirement +from polaris.catalog.models.table_update import TableUpdate +from polaris.catalog.models.table_update_notification import TableUpdateNotification +from polaris.catalog.models.term import Term +from polaris.catalog.models.timer_result import TimerResult +from polaris.catalog.models.token_type import TokenType +from polaris.catalog.models.transform_term import TransformTerm +from polaris.catalog.models.type import Type +from polaris.catalog.models.unary_expression import UnaryExpression +from polaris.catalog.models.update_namespace_properties_request import UpdateNamespacePropertiesRequest +from polaris.catalog.models.update_namespace_properties_response import UpdateNamespacePropertiesResponse +from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate +from polaris.catalog.models.value_map import ValueMap +from polaris.catalog.models.view_history_entry import ViewHistoryEntry +from 
polaris.catalog.models.view_metadata import ViewMetadata +from polaris.catalog.models.view_representation import ViewRepresentation +from polaris.catalog.models.view_requirement import ViewRequirement +from polaris.catalog.models.view_update import ViewUpdate +from polaris.catalog.models.view_version import ViewVersion diff --git a/regtests/client/python/polaris/catalog/models/add_partition_spec_update.py b/regtests/client/python/polaris/catalog/models/add_partition_spec_update.py new file mode 100644 index 0000000000..080ccc3cea --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/add_partition_spec_update.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.partition_spec import PartitionSpec +from typing import Optional, Set +from typing_extensions import Self + +class AddPartitionSpecUpdate(BaseUpdate): + """ + AddPartitionSpecUpdate + """ # noqa: E501 + action: StrictStr + spec: PartitionSpec + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['add-spec']): + raise ValueError("must be one of enum values ('add-spec')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AddPartitionSpecUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AddPartitionSpecUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/add_schema_update.py b/regtests/client/python/polaris/catalog/models/add_schema_update.py new file mode 100644 index 0000000000..f5f70a560c --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/add_schema_update.py @@ -0,0 +1,113 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
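Every generated model shares the same serialization contract: `to_json` goes through `to_dict` (which drops `None`-valued optionals via `exclude_none` and emits hyphenated wire aliases), and `from_json` goes through `from_dict`. A stdlib-only sketch of that contract (no pydantic; `UpdateSketch` and its fields are illustrative stand-ins, not the real generated class):

```python
import json


class UpdateSketch:
    """Illustrative stand-in for a generated update model."""

    def __init__(self, action, last_column_id=None):
        self.action = action
        self.last_column_id = last_column_id

    def to_dict(self):
        # Like the generated to_dict: use the hyphenated wire alias and
        # drop fields whose value is None (exclude_none=True).
        d = {"action": self.action, "last-column-id": self.last_column_id}
        return {k: v for k, v in d.items() if v is not None}

    def to_json(self):
        return json.dumps(self.to_dict())

    @classmethod
    def from_json(cls, json_str):
        return cls.from_dict(json.loads(json_str))

    @classmethod
    def from_dict(cls, obj):
        # Like the generated from_dict: None passes through, aliases map
        # back to python attribute names.
        if obj is None:
            return None
        return cls(action=obj.get("action"),
                   last_column_id=obj.get("last-column-id"))
```

The round-trip property is what the regression tests rely on: serializing a model and parsing it back yields an equivalent model, and unset optionals never appear on the wire.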
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.model_schema import ModelSchema +from typing import Optional, Set +from typing_extensions import Self + +class AddSchemaUpdate(BaseUpdate): + """ + AddSchemaUpdate + """ # noqa: E501 + action: StrictStr + var_schema: ModelSchema = Field(alias="schema") + last_column_id: Optional[StrictInt] = Field(default=None, description="The highest assigned column ID for the table. This is used to ensure columns are always assigned an unused ID when evolving schemas. When omitted, it will be computed on the server side.", alias="last-column-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['add-schema']): + raise ValueError("must be one of enum values ('add-schema')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AddSchemaUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AddSchemaUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/add_snapshot_update.py b/regtests/client/python/polaris/catalog/models/add_snapshot_update.py new file mode 100644 index 0000000000..4da8a2ded1 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/add_snapshot_update.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501


+from __future__ import annotations
+import pprint
+import re # noqa: F401
+import json

+from pydantic import ConfigDict, StrictStr, field_validator
+from typing import Any, ClassVar, Dict, List
+from polaris.catalog.models.base_update import BaseUpdate
+from polaris.catalog.models.snapshot import Snapshot
+from typing import Optional, Set
+from typing_extensions import Self
+
+class AddSnapshotUpdate(BaseUpdate):
+    """
+    AddSnapshotUpdate
+    """ # noqa: E501
+    action: StrictStr
+    snapshot: Snapshot
+    __properties: ClassVar[List[str]] = ["action", "snapshot"]
+
+    @field_validator('action')
+    def action_validate_enum(cls, value):
+        """Validates the enum"""
+        if value not in set(['add-snapshot']):
+            raise ValueError("must be one of enum values ('add-snapshot')")
+        return value
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of AddSnapshotUpdate from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        # override the default output from pydantic by calling `to_dict()` of snapshot
+        if self.snapshot:
+            _dict['snapshot'] = self.snapshot.to_dict()
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of AddSnapshotUpdate from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        _obj = cls.model_validate({
+            "action": obj.get("action"),
+            "snapshot": Snapshot.from_dict(obj["snapshot"]) if obj.get("snapshot") is not None else None
+        })
+        return _obj
+
+
diff --git a/regtests/client/python/polaris/catalog/models/add_sort_order_update.py b/regtests/client/python/polaris/catalog/models/add_sort_order_update.py
new file mode 100644
index 0000000000..e2990a5e57
--- /dev/null
+++ b/regtests/client/python/polaris/catalog/models/add_sort_order_update.py
@@ -0,0 +1,112 @@
+#
+# Copyright (c) 2024 Snowflake Computing Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# coding: utf-8

+"""
+    Apache Iceberg REST Catalog API
+
+    Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2.
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.sort_order import SortOrder +from typing import Optional, Set +from typing_extensions import Self + +class AddSortOrderUpdate(BaseUpdate): + """ + AddSortOrderUpdate + """ # noqa: E501 + action: StrictStr + sort_order: SortOrder = Field(alias="sort-order") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['add-sort-order']): + raise ValueError("must be one of enum values ('add-sort-order')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AddSortOrderUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AddSortOrderUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/add_view_version_update.py b/regtests/client/python/polaris/catalog/models/add_view_version_update.py new file mode 100644 index 0000000000..883ad3edff --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/add_view_version_update.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.view_version import ViewVersion +from typing import Optional, Set +from typing_extensions import Self + +class AddViewVersionUpdate(BaseUpdate): + """ + AddViewVersionUpdate + """ # noqa: E501 + action: StrictStr + view_version: ViewVersion = Field(alias="view-version") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['add-view-version']): + raise ValueError("must be one of enum values ('add-view-version')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AddViewVersionUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AddViewVersionUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/and_or_expression.py b/regtests/client/python/polaris/catalog/models/and_or_expression.py new file mode 100644 index 0000000000..2648c43cc4 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/and_or_expression.py @@ -0,0 +1,115 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class AndOrExpression(BaseModel): + """ + AndOrExpression + """ # noqa: E501 + type: StrictStr + left: Expression + right: Expression + __properties: ClassVar[List[str]] = ["type", "left", "right"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AndOrExpression from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of left + if self.left: + _dict['left'] = self.left.to_dict() + # override the default output from pydantic by calling `to_dict()` of right + if self.right: + _dict['right'] = self.right.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AndOrExpression from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "left": Expression.from_dict(obj["left"]) if obj.get("left") is not None else None, + "right": Expression.from_dict(obj["right"]) if obj.get("right") is not None else None + }) + return _obj + +from polaris.catalog.models.expression import Expression +# TODO: Rewrite to not use raise_errors +AndOrExpression.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/assert_create.py b/regtests/client/python/polaris/catalog/models/assert_create.py new file mode 100644 index 0000000000..c5aeaa3621 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_create.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
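`AndOrExpression` above is self-referential: `left` and `right` are themselves `Expression`s, which is why the import sits at the bottom of the file and `model_rebuild` resolves the forward reference after the class body. Its `to_dict` recurses by calling `to_dict()` on each child. A stdlib sketch of that recursive serialization (illustrative node class, not the pydantic model):

```python
class ExprSketch:
    """Illustrative expression node; the real client uses pydantic models."""

    def __init__(self, type, left=None, right=None, term=None):
        self.type, self.left, self.right, self.term = type, left, right, term

    def to_dict(self):
        d = {"type": self.type}
        # Match the generated pattern: nested models serialize themselves
        # via their own to_dict(); leaves carry plain values.
        if self.left is not None:
            d["left"] = self.left.to_dict()
        if self.right is not None:
            d["right"] = self.right.to_dict()
        if self.term is not None:
            d["term"] = self.term
        return d


# An "and" node over two unary predicates, serialized depth-first.
e = ExprSketch("and",
               left=ExprSketch("is-null", term="a"),
               right=ExprSketch("not-null", term="b"))
```

The same shape is what the REST spec expects for scan filter expressions: a discriminating `type` plus recursively nested operands.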
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertCreate(TableRequirement): + """ + The table must not already exist; used for create transactions + """ # noqa: E501 + type: StrictStr + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-create']): + raise ValueError("must be one of enum values ('assert-create')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertCreate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertCreate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_current_schema_id.py b/regtests/client/python/polaris/catalog/models/assert_current_schema_id.py new file mode 100644 index 0000000000..1cd1604852 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_current_schema_id.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertCurrentSchemaId(TableRequirement): + """ + The table's current schema id must match the requirement's `current-schema-id` + """ # noqa: E501 + type: StrictStr + current_schema_id: StrictInt = Field(alias="current-schema-id") + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-current-schema-id']): + raise ValueError("must be one of enum values ('assert-current-schema-id')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertCurrentSchemaId from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertCurrentSchemaId from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_default_sort_order_id.py b/regtests/client/python/polaris/catalog/models/assert_default_sort_order_id.py new file mode 100644 index 0000000000..694b96b7e9 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_default_sort_order_id.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertDefaultSortOrderId(TableRequirement): + """ + The table's default sort order id must match the requirement's `default-sort-order-id` + """ # noqa: E501 + type: StrictStr + default_sort_order_id: StrictInt = Field(alias="default-sort-order-id") + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-default-sort-order-id']): + raise ValueError("must be one of enum values ('assert-default-sort-order-id')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertDefaultSortOrderId from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertDefaultSortOrderId from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_default_spec_id.py b/regtests/client/python/polaris/catalog/models/assert_default_spec_id.py new file mode 100644 index 0000000000..69445be2a9 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_default_spec_id.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertDefaultSpecId(TableRequirement): + """ + The table's default spec id must match the requirement's `default-spec-id` + """ # noqa: E501 + type: StrictStr + default_spec_id: StrictInt = Field(alias="default-spec-id") + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-default-spec-id']): + raise ValueError("must be one of enum values ('assert-default-spec-id')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertDefaultSpecId from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertDefaultSpecId from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_last_assigned_field_id.py b/regtests/client/python/polaris/catalog/models/assert_last_assigned_field_id.py new file mode 100644 index 0000000000..f6143aadbf --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_last_assigned_field_id.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertLastAssignedFieldId(TableRequirement): + """ + The table's last assigned column id must match the requirement's `last-assigned-field-id` + """ # noqa: E501 + type: StrictStr + last_assigned_field_id: StrictInt = Field(alias="last-assigned-field-id") + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-last-assigned-field-id']): + raise ValueError("must be one of enum values ('assert-last-assigned-field-id')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertLastAssignedFieldId from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertLastAssignedFieldId from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_last_assigned_partition_id.py b/regtests/client/python/polaris/catalog/models/assert_last_assigned_partition_id.py new file mode 100644 index 0000000000..2329c6de1d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_last_assigned_partition_id.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertLastAssignedPartitionId(TableRequirement): + """ + The table's last assigned partition id must match the requirement's `last-assigned-partition-id` + """ # noqa: E501 + type: StrictStr + last_assigned_partition_id: StrictInt = Field(alias="last-assigned-partition-id") + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-last-assigned-partition-id']): + raise ValueError("must be one of enum values ('assert-last-assigned-partition-id')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertLastAssignedPartitionId from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertLastAssignedPartitionId from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_ref_snapshot_id.py b/regtests/client/python/polaris/catalog/models/assert_ref_snapshot_id.py new file mode 100644 index 0000000000..d59d1cbc0e --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_ref_snapshot_id.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertRefSnapshotId(TableRequirement): + """ + The table branch or tag identified by the requirement's `ref` must reference the requirement's `snapshot-id`; if `snapshot-id` is `null` or missing, the ref must not already exist + """ # noqa: E501 + type: StrictStr + ref: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-ref-snapshot-id']): + raise ValueError("must be one of enum values ('assert-ref-snapshot-id')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertRefSnapshotId from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertRefSnapshotId from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_table_uuid.py b/regtests/client/python/polaris/catalog/models/assert_table_uuid.py new file mode 100644 index 0000000000..befdaf9415 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_table_uuid.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_requirement import TableRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertTableUUID(TableRequirement): + """ + The table UUID must match the requirement's `uuid` + """ # noqa: E501 + type: StrictStr + uuid: StrictStr + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-table-uuid']): + raise ValueError("must be one of enum values ('assert-table-uuid')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertTableUUID from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertTableUUID from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assert_view_uuid.py b/regtests/client/python/polaris/catalog/models/assert_view_uuid.py new file mode 100644 index 0000000000..a0fb33be82 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assert_view_uuid.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.view_requirement import ViewRequirement +from typing import Optional, Set +from typing_extensions import Self + +class AssertViewUUID(ViewRequirement): + """ + The view UUID must match the requirement's `uuid` + """ # noqa: E501 + type: StrictStr + uuid: StrictStr + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['assert-view-uuid']): + raise ValueError("must be one of enum values ('assert-view-uuid')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AssertViewUUID from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AssertViewUUID from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/assign_uuid_update.py b/regtests/client/python/polaris/catalog/models/assign_uuid_update.py new file mode 100644 index 0000000000..d58d8b8e65 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/assign_uuid_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re # noqa: F401
+import json
+
+from pydantic import ConfigDict, StrictStr, field_validator
+from typing import Any, ClassVar, Dict, List
+from polaris.catalog.models.base_update import BaseUpdate
+from typing import Optional, Set
+from typing_extensions import Self
+
+class AssignUUIDUpdate(BaseUpdate):
+    """
+    Assigning a UUID to a table/view should only be done when creating the table/view. It is not safe to re-assign the UUID if a table/view already has a UUID assigned
+    """ # noqa: E501
+    action: StrictStr
+    uuid: StrictStr
+    __properties: ClassVar[List[str]] = ["action", "uuid"]
+
+    @field_validator('action')
+    def action_validate_enum(cls, value):
+        """Validates the enum"""
+        if value not in set(['assign-uuid']):
+            raise ValueError("must be one of enum values ('assign-uuid')")
+        return value
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of AssignUUIDUpdate from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of AssignUUIDUpdate from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        # Map both required properties; omitting `uuid` here would make
+        # model_validate fail on every valid payload.
+        _obj = cls.model_validate({
+            "action": obj.get("action"),
+            "uuid": obj.get("uuid")
+        })
+        return _obj
+
+
diff --git a/regtests/client/python/polaris/catalog/models/base_update.py b/regtests/client/python/polaris/catalog/models/base_update.py
new file mode 100644
index 0000000000..0f3044d0be
--- /dev/null
+++ b/regtests/client/python/polaris/catalog/models/base_update.py
@@ -0,0 +1,182 @@
+#
+# Copyright (c) 2024 Snowflake Computing Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# coding: utf-8
+
+"""
+    Apache Iceberg REST Catalog API
+
+    Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2.
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from importlib import import_module +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List, Union +from typing import Optional, Set +from typing_extensions import Self + +from typing import TYPE_CHECKING +if TYPE_CHECKING: + from polaris.catalog.models.add_schema_update import AddSchemaUpdate + from polaris.catalog.models.add_snapshot_update import AddSnapshotUpdate + from polaris.catalog.models.add_sort_order_update import AddSortOrderUpdate + from polaris.catalog.models.add_partition_spec_update import AddPartitionSpecUpdate + from polaris.catalog.models.add_view_version_update import AddViewVersionUpdate + from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate + from polaris.catalog.models.remove_partition_statistics_update import RemovePartitionStatisticsUpdate + from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate + from polaris.catalog.models.remove_snapshot_ref_update import RemoveSnapshotRefUpdate + from polaris.catalog.models.remove_snapshots_update import RemoveSnapshotsUpdate + from polaris.catalog.models.remove_statistics_update import RemoveStatisticsUpdate + from polaris.catalog.models.set_current_schema_update import SetCurrentSchemaUpdate + from polaris.catalog.models.set_current_view_version_update import SetCurrentViewVersionUpdate + from polaris.catalog.models.set_default_sort_order_update import SetDefaultSortOrderUpdate + from polaris.catalog.models.set_default_spec_update import SetDefaultSpecUpdate + from polaris.catalog.models.set_location_update import SetLocationUpdate + from polaris.catalog.models.set_partition_statistics_update import SetPartitionStatisticsUpdate + from polaris.catalog.models.set_properties_update import SetPropertiesUpdate + from polaris.catalog.models.set_snapshot_ref_update import SetSnapshotRefUpdate + from 
polaris.catalog.models.set_statistics_update import SetStatisticsUpdate + from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate + +class BaseUpdate(BaseModel): + """ + BaseUpdate + """ # noqa: E501 + action: StrictStr + __properties: ClassVar[List[str]] = ["action"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'action' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'add-schema': 'AddSchemaUpdate','add-snapshot': 'AddSnapshotUpdate','add-sort-order': 'AddSortOrderUpdate','add-spec': 'AddPartitionSpecUpdate','add-view-version': 'AddViewVersionUpdate','assign-uuid': 'AssignUUIDUpdate','remove-partition-statistics': 'RemovePartitionStatisticsUpdate','remove-properties': 'RemovePropertiesUpdate','remove-snapshot-ref': 'RemoveSnapshotRefUpdate','remove-snapshots': 'RemoveSnapshotsUpdate','remove-statistics': 'RemoveStatisticsUpdate','set-current-schema': 'SetCurrentSchemaUpdate','set-current-view-version': 'SetCurrentViewVersionUpdate','set-default-sort-order': 'SetDefaultSortOrderUpdate','set-default-spec': 'SetDefaultSpecUpdate','set-location': 'SetLocationUpdate','set-partition-statistics': 'SetPartitionStatisticsUpdate','set-properties': 'SetPropertiesUpdate','set-snapshot-ref': 'SetSnapshotRefUpdate','set-statistics': 'SetStatisticsUpdate','upgrade-format-version': 'UpgradeFormatVersionUpdate' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using 
alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Union[AddSchemaUpdate, AddSnapshotUpdate, AddSortOrderUpdate, AddPartitionSpecUpdate, AddViewVersionUpdate, AssignUUIDUpdate, RemovePartitionStatisticsUpdate, RemovePropertiesUpdate, RemoveSnapshotRefUpdate, RemoveSnapshotsUpdate, RemoveStatisticsUpdate, SetCurrentSchemaUpdate, SetCurrentViewVersionUpdate, SetDefaultSortOrderUpdate, SetDefaultSpecUpdate, SetLocationUpdate, SetPartitionStatisticsUpdate, SetPropertiesUpdate, SetSnapshotRefUpdate, SetStatisticsUpdate, UpgradeFormatVersionUpdate]]: + """Create an instance of BaseUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[AddSchemaUpdate, AddSnapshotUpdate, AddSortOrderUpdate, AddPartitionSpecUpdate, AddViewVersionUpdate, AssignUUIDUpdate, RemovePartitionStatisticsUpdate, RemovePropertiesUpdate, RemoveSnapshotRefUpdate, RemoveSnapshotsUpdate, RemoveStatisticsUpdate, SetCurrentSchemaUpdate, SetCurrentViewVersionUpdate, SetDefaultSortOrderUpdate, SetDefaultSpecUpdate, SetLocationUpdate, SetPartitionStatisticsUpdate, SetPropertiesUpdate, SetSnapshotRefUpdate, SetStatisticsUpdate, UpgradeFormatVersionUpdate]]: + """Create an instance of BaseUpdate from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'AddSchemaUpdate': + return import_module("polaris.catalog.models.add_schema_update").AddSchemaUpdate.from_dict(obj) + if object_type == 'AddSnapshotUpdate': + return import_module("polaris.catalog.models.add_snapshot_update").AddSnapshotUpdate.from_dict(obj) + if object_type == 'AddSortOrderUpdate': + return import_module("polaris.catalog.models.add_sort_order_update").AddSortOrderUpdate.from_dict(obj) + if object_type == 'AddPartitionSpecUpdate': + return import_module("polaris.catalog.models.add_partition_spec_update").AddPartitionSpecUpdate.from_dict(obj) + if object_type == 'AddViewVersionUpdate': + return import_module("polaris.catalog.models.add_view_version_update").AddViewVersionUpdate.from_dict(obj) + if object_type == 'AssignUUIDUpdate': + return import_module("polaris.catalog.models.assign_uuid_update").AssignUUIDUpdate.from_dict(obj) + if object_type == 'RemovePartitionStatisticsUpdate': + return import_module("polaris.catalog.models.remove_partition_statistics_update").RemovePartitionStatisticsUpdate.from_dict(obj) + if object_type == 
'RemovePropertiesUpdate': + return import_module("polaris.catalog.models.remove_properties_update").RemovePropertiesUpdate.from_dict(obj) + if object_type == 'RemoveSnapshotRefUpdate': + return import_module("polaris.catalog.models.remove_snapshot_ref_update").RemoveSnapshotRefUpdate.from_dict(obj) + if object_type == 'RemoveSnapshotsUpdate': + return import_module("polaris.catalog.models.remove_snapshots_update").RemoveSnapshotsUpdate.from_dict(obj) + if object_type == 'RemoveStatisticsUpdate': + return import_module("polaris.catalog.models.remove_statistics_update").RemoveStatisticsUpdate.from_dict(obj) + if object_type == 'SetCurrentSchemaUpdate': + return import_module("polaris.catalog.models.set_current_schema_update").SetCurrentSchemaUpdate.from_dict(obj) + if object_type == 'SetCurrentViewVersionUpdate': + return import_module("polaris.catalog.models.set_current_view_version_update").SetCurrentViewVersionUpdate.from_dict(obj) + if object_type == 'SetDefaultSortOrderUpdate': + return import_module("polaris.catalog.models.set_default_sort_order_update").SetDefaultSortOrderUpdate.from_dict(obj) + if object_type == 'SetDefaultSpecUpdate': + return import_module("polaris.catalog.models.set_default_spec_update").SetDefaultSpecUpdate.from_dict(obj) + if object_type == 'SetLocationUpdate': + return import_module("polaris.catalog.models.set_location_update").SetLocationUpdate.from_dict(obj) + if object_type == 'SetPartitionStatisticsUpdate': + return import_module("polaris.catalog.models.set_partition_statistics_update").SetPartitionStatisticsUpdate.from_dict(obj) + if object_type == 'SetPropertiesUpdate': + return import_module("polaris.catalog.models.set_properties_update").SetPropertiesUpdate.from_dict(obj) + if object_type == 'SetSnapshotRefUpdate': + return import_module("polaris.catalog.models.set_snapshot_ref_update").SetSnapshotRefUpdate.from_dict(obj) + if object_type == 'SetStatisticsUpdate': + return 
import_module("polaris.catalog.models.set_statistics_update").SetStatisticsUpdate.from_dict(obj) + if object_type == 'UpgradeFormatVersionUpdate': + return import_module("polaris.catalog.models.upgrade_format_version_update").UpgradeFormatVersionUpdate.from_dict(obj) + + raise ValueError("BaseUpdate failed to lookup discriminator value from " + + json.dumps(obj) + ". Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/catalog/models/blob_metadata.py b/regtests/client/python/polaris/catalog/models/blob_metadata.py new file mode 100644 index 0000000000..878668220e --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/blob_metadata.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class BlobMetadata(BaseModel): + """ + BlobMetadata + """ # noqa: E501 + type: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + sequence_number: StrictInt = Field(alias="sequence-number") + fields: List[StrictInt] + properties: Optional[Dict[str, Any]] = None + __properties: ClassVar[List[str]] = ["type", "snapshot-id", "sequence-number", "fields", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of BlobMetadata from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of BlobMetadata from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "snapshot-id": obj.get("snapshot-id"), + "sequence-number": obj.get("sequence-number"), + "fields": obj.get("fields"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/catalog_config.py b/regtests/client/python/polaris/catalog/models/catalog_config.py new file mode 100644 index 0000000000..02cff71399 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/catalog_config.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class CatalogConfig(BaseModel): + """ + Server-provided configuration for the catalog. + """ # noqa: E501 + overrides: Dict[str, StrictStr] = Field(description="Properties that should be used to override client configuration; applied after defaults and client configuration.") + defaults: Dict[str, StrictStr] = Field(description="Properties that should be used as default configuration; applied before client configuration.") + __properties: ClassVar[List[str]] = ["overrides", "defaults"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CatalogConfig from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CatalogConfig from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "overrides": obj.get("overrides"), + "defaults": obj.get("defaults") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/commit_report.py b/regtests/client/python/polaris/catalog/models/commit_report.py new file mode 100644 index 0000000000..44315d8b65 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/commit_report.py @@ -0,0 +1,125 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.metric_result import MetricResult +from typing import Optional, Set +from typing_extensions import Self + +class CommitReport(BaseModel): + """ + CommitReport + """ # noqa: E501 + table_name: StrictStr = Field(alias="table-name") + snapshot_id: StrictInt = Field(alias="snapshot-id") + sequence_number: StrictInt = Field(alias="sequence-number") + operation: StrictStr + metrics: Dict[str, MetricResult] + metadata: Optional[Dict[str, StrictStr]] = None + __properties: ClassVar[List[str]] = ["table-name", "snapshot-id", "sequence-number", "operation", "metrics", "metadata"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CommitReport from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each value in metrics (dict) + _field_dict = {} + if self.metrics: + for _key in self.metrics: + if self.metrics[_key]: + _field_dict[_key] = self.metrics[_key].to_dict() + _dict['metrics'] = _field_dict + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CommitReport from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "table-name": obj.get("table-name"), + "snapshot-id": obj.get("snapshot-id"), + "sequence-number": obj.get("sequence-number"), + "operation": obj.get("operation"), + "metrics": dict( + (_k, MetricResult.from_dict(_v)) + for _k, _v in obj["metrics"].items() + ) + if obj.get("metrics") is not None + else None, + "metadata": obj.get("metadata") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/commit_table_request.py b/regtests/client/python/polaris/catalog/models/commit_table_request.py new file mode 100644 index 0000000000..8261edc5f7 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/commit_table_request.py @@ -0,0 +1,126 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.table_identifier import TableIdentifier +from polaris.catalog.models.table_requirement import TableRequirement +from polaris.catalog.models.table_update import TableUpdate +from typing import Optional, Set +from typing_extensions import Self + +class CommitTableRequest(BaseModel): + """ + CommitTableRequest + """ # noqa: E501 + identifier: Optional[TableIdentifier] = None + requirements: List[TableRequirement] + updates: List[TableUpdate] + __properties: ClassVar[List[str]] = ["identifier", "requirements", "updates"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CommitTableRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of identifier + if self.identifier: + _dict['identifier'] = self.identifier.to_dict() + # override the default output from pydantic by calling `to_dict()` of each item in requirements (list) + _items = [] + if self.requirements: + for _item in self.requirements: + if _item: + _items.append(_item.to_dict()) + _dict['requirements'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in updates (list) + _items = [] + if self.updates: + for _item in self.updates: + if _item: + _items.append(_item.to_dict()) + _dict['updates'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CommitTableRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "identifier": TableIdentifier.from_dict(obj["identifier"]) if obj.get("identifier") is not None else None, + "requirements": [TableRequirement.from_dict(_item) for _item in obj["requirements"]] if obj.get("requirements") is not None else None, + "updates": [TableUpdate.from_dict(_item) for _item in obj["updates"]] if obj.get("updates") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/commit_table_response.py b/regtests/client/python/polaris/catalog/models/commit_table_response.py new file mode 100644 index 0000000000..a4133f9f6f --- /dev/null +++ 
b/regtests/client/python/polaris/catalog/models/commit_table_response.py @@ -0,0 +1,108 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_metadata import TableMetadata +from typing import Optional, Set +from typing_extensions import Self + +class CommitTableResponse(BaseModel): + """ + CommitTableResponse + """ # noqa: E501 + metadata_location: StrictStr = Field(alias="metadata-location") + metadata: TableMetadata + __properties: ClassVar[List[str]] = ["metadata-location", "metadata"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CommitTableResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of metadata + if self.metadata: + _dict['metadata'] = self.metadata.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CommitTableResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "metadata-location": obj.get("metadata-location"), + "metadata": TableMetadata.from_dict(obj["metadata"]) if obj.get("metadata") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/commit_transaction_request.py b/regtests/client/python/polaris/catalog/models/commit_transaction_request.py new file mode 100644 index 0000000000..f26a7c7c2a --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/commit_transaction_request.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.commit_table_request import CommitTableRequest +from typing import Optional, Set +from typing_extensions import Self + +class CommitTransactionRequest(BaseModel): + """ + CommitTransactionRequest + """ # noqa: E501 + table_changes: List[CommitTableRequest] = Field(alias="table-changes") + __properties: ClassVar[List[str]] = ["table-changes"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CommitTransactionRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in table_changes (list) + _items = [] + if self.table_changes: + for _item in self.table_changes: + if _item: + _items.append(_item.to_dict()) + _dict['table-changes'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CommitTransactionRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "table-changes": [CommitTableRequest.from_dict(_item) for _item in obj["table-changes"]] if obj.get("table-changes") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/commit_view_request.py b/regtests/client/python/polaris/catalog/models/commit_view_request.py new file mode 100644 index 0000000000..aa31000342 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/commit_view_request.py @@ -0,0 +1,126 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.table_identifier import TableIdentifier +from polaris.catalog.models.view_requirement import ViewRequirement +from polaris.catalog.models.view_update import ViewUpdate +from typing import Optional, Set +from typing_extensions import Self + +class CommitViewRequest(BaseModel): + """ + CommitViewRequest + """ # noqa: E501 + identifier: Optional[TableIdentifier] = None + requirements: Optional[List[ViewRequirement]] = None + updates: List[ViewUpdate] + __properties: ClassVar[List[str]] = ["identifier", "requirements", "updates"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CommitViewRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of identifier + if self.identifier: + _dict['identifier'] = self.identifier.to_dict() + # override the default output from pydantic by calling `to_dict()` of each item in requirements (list) + _items = [] + if self.requirements: + for _item in self.requirements: + if _item: + _items.append(_item.to_dict()) + _dict['requirements'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in updates (list) + _items = [] + if self.updates: + for _item in self.updates: + if _item: + _items.append(_item.to_dict()) + _dict['updates'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CommitViewRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "identifier": TableIdentifier.from_dict(obj["identifier"]) if obj.get("identifier") is not None else None, + "requirements": [ViewRequirement.from_dict(_item) for _item in obj["requirements"]] if obj.get("requirements") is not None else None, + "updates": [ViewUpdate.from_dict(_item) for _item in obj["updates"]] if obj.get("updates") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/content_file.py b/regtests/client/python/polaris/catalog/models/content_file.py new file mode 100644 index 0000000000..9ef02722ea --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/content_file.py @@ -0,0 
+1,146 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from importlib import import_module +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional, Union +from polaris.catalog.models.file_format import FileFormat +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from typing import Optional, Set +from typing_extensions import Self + +from typing import TYPE_CHECKING +if TYPE_CHECKING: + from polaris.catalog.models.data_file import DataFile + from polaris.catalog.models.equality_delete_file import EqualityDeleteFile + from polaris.catalog.models.position_delete_file import PositionDeleteFile + +class ContentFile(BaseModel): + """ + ContentFile + """ # noqa: E501 + content: StrictStr + file_path: StrictStr = Field(alias="file-path") + file_format: FileFormat = Field(alias="file-format") + spec_id: StrictInt = Field(alias="spec-id") + partition: Optional[List[PrimitiveTypeValue]] = Field(default=None, description="A list of partition field values ordered based on the fields of the partition spec specified by the `spec-id`") + file_size_in_bytes: StrictInt = Field(description="Total file size in bytes", alias="file-size-in-bytes") + record_count: StrictInt = Field(description="Number of records in the file", alias="record-count") + key_metadata: Optional[StrictStr] = Field(default=None, description="Encryption key metadata blob", alias="key-metadata") + split_offsets: Optional[List[StrictInt]] = Field(default=None, description="List of splittable offsets", alias="split-offsets") + sort_order_id: Optional[StrictInt] = Field(default=None, alias="sort-order-id") + __properties: ClassVar[List[str]] = ["content", "file-path", "file-format", "spec-id", "partition", "file-size-in-bytes", "record-count", "key-metadata", "split-offsets", "sort-order-id"] + + model_config = ConfigDict( + populate_by_name=True, + 
validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'content' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'data': 'DataFile','equality-deletes': 'EqualityDeleteFile','position-deletes': 'PositionDeleteFile' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Union[DataFile, EqualityDeleteFile, PositionDeleteFile]]: + """Create an instance of ContentFile from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in partition (list) + _items = [] + if self.partition: + for _item in self.partition: + if _item: + _items.append(_item.to_dict()) + _dict['partition'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[DataFile, EqualityDeleteFile, PositionDeleteFile]]: + """Create an instance of ContentFile from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'DataFile': + return import_module("polaris.catalog.models.data_file").DataFile.from_dict(obj) + if object_type == 'EqualityDeleteFile': + return import_module("polaris.catalog.models.equality_delete_file").EqualityDeleteFile.from_dict(obj) + if object_type == 'PositionDeleteFile': + return import_module("polaris.catalog.models.position_delete_file").PositionDeleteFile.from_dict(obj) + + raise ValueError("ContentFile failed to lookup discriminator value from " + + json.dumps(obj) + ". Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/catalog/models/count_map.py b/regtests/client/python/polaris/catalog/models/count_map.py new file mode 100644 index 0000000000..448cdfcf59 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/count_map.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class CountMap(BaseModel): + """ + CountMap + """ # noqa: E501 + keys: Optional[List[StrictInt]] = Field(default=None, description="List of integer column ids for each corresponding value") + values: Optional[List[StrictInt]] = Field(default=None, description="List of Long values, matched to 'keys' by index") + __properties: ClassVar[List[str]] = ["keys", "values"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def 
from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CountMap from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CountMap from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "keys": obj.get("keys"), + "values": obj.get("values") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/counter_result.py b/regtests/client/python/polaris/catalog/models/counter_result.py new file mode 100644 index 0000000000..dde3687ae7 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/counter_result.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class CounterResult(BaseModel): + """ + CounterResult + """ # noqa: E501 + unit: StrictStr + value: StrictInt + __properties: ClassVar[List[str]] = ["unit", "value"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CounterResult from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CounterResult from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "unit": obj.get("unit"), + "value": obj.get("value") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/create_namespace_request.py b/regtests/client/python/polaris/catalog/models/create_namespace_request.py new file mode 100644 index 0000000000..1232839088 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/create_namespace_request.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class CreateNamespaceRequest(BaseModel): + """ + CreateNamespaceRequest + """ # noqa: E501 + namespace: List[StrictStr] = Field(description="Reference to one or more levels of a namespace") + properties: Optional[Dict[str, StrictStr]] = Field(default=None, description="Configured string to string map of properties for the namespace") + __properties: ClassVar[List[str]] = ["namespace", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreateNamespaceRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreateNamespaceRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "namespace": obj.get("namespace"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/create_namespace_response.py b/regtests/client/python/polaris/catalog/models/create_namespace_response.py new file mode 100644 index 0000000000..f046409584 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/create_namespace_response.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class CreateNamespaceResponse(BaseModel): + """ + CreateNamespaceResponse + """ # noqa: E501 + namespace: List[StrictStr] = Field(description="Reference to one or more levels of a namespace") + properties: Optional[Dict[str, StrictStr]] = Field(default=None, description="Properties stored on the namespace, if supported by the server.") + __properties: ClassVar[List[str]] = ["namespace", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreateNamespaceResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreateNamespaceResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "namespace": obj.get("namespace"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/create_table_request.py b/regtests/client/python/polaris/catalog/models/create_table_request.py new file mode 100644 index 0000000000..611f2b77aa --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/create_table_request.py @@ -0,0 +1,126 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.model_schema import ModelSchema +from polaris.catalog.models.partition_spec import PartitionSpec +from polaris.catalog.models.sort_order import SortOrder +from typing import Optional, Set +from typing_extensions import Self + +class CreateTableRequest(BaseModel): + """ + CreateTableRequest + """ # noqa: E501 + name: StrictStr + location: Optional[StrictStr] = None + var_schema: ModelSchema = Field(alias="schema") + partition_spec: Optional[PartitionSpec] = Field(default=None, alias="partition-spec") + write_order: Optional[SortOrder] = Field(default=None, alias="write-order") + stage_create: Optional[StrictBool] = Field(default=None, alias="stage-create") + properties: Optional[Dict[str, StrictStr]] = None + __properties: ClassVar[List[str]] = ["name", "location", "schema", "partition-spec", "write-order", "stage-create", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreateTableRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of var_schema + if self.var_schema: + _dict['schema'] = self.var_schema.to_dict() + # override the default output from pydantic by calling `to_dict()` of partition_spec + if self.partition_spec: + _dict['partition-spec'] = self.partition_spec.to_dict() + # override the default output from pydantic by calling `to_dict()` of write_order + if self.write_order: + _dict['write-order'] = self.write_order.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreateTableRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "location": obj.get("location"), + "schema": ModelSchema.from_dict(obj["schema"]) if obj.get("schema") is not None else None, + "partition-spec": PartitionSpec.from_dict(obj["partition-spec"]) if obj.get("partition-spec") is not None else None, + "write-order": SortOrder.from_dict(obj["write-order"]) if obj.get("write-order") is not None else None, + "stage-create": obj.get("stage-create"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/create_view_request.py b/regtests/client/python/polaris/catalog/models/create_view_request.py new file mode 100644 index 0000000000..6f61479672 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/create_view_request.py @@ -0,0 +1,118 @@ +# +# Copyright (c) 2024 
Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.model_schema import ModelSchema +from polaris.catalog.models.view_version import ViewVersion +from typing import Optional, Set +from typing_extensions import Self + +class CreateViewRequest(BaseModel): + """ + CreateViewRequest + """ # noqa: E501 + name: StrictStr + location: Optional[StrictStr] = None + var_schema: ModelSchema = Field(alias="schema") + view_version: ViewVersion = Field(alias="view-version") + properties: Dict[str, StrictStr] + __properties: ClassVar[List[str]] = ["name", "location", "schema", "view-version", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return 
pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreateViewRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of var_schema + if self.var_schema: + _dict['schema'] = self.var_schema.to_dict() + # override the default output from pydantic by calling `to_dict()` of view_version + if self.view_version: + _dict['view-version'] = self.view_version.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreateViewRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "location": obj.get("location"), + "schema": ModelSchema.from_dict(obj["schema"]) if obj.get("schema") is not None else None, + "view-version": ViewVersion.from_dict(obj["view-version"]) if obj.get("view-version") is not None else None, + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/data_file.py 
b/regtests/client/python/polaris/catalog/models/data_file.py new file mode 100644 index 0000000000..2445bc1c2c --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/data_file.py @@ -0,0 +1,136 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.content_file import ContentFile +from polaris.catalog.models.count_map import CountMap +from polaris.catalog.models.file_format import FileFormat +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from polaris.catalog.models.value_map import ValueMap +from typing import Optional, Set +from typing_extensions import Self + +class DataFile(ContentFile): + """ + DataFile + """ # noqa: E501 + content: StrictStr + column_sizes: Optional[CountMap] = Field(default=None, description="Map of column id to total count, including null and NaN", alias="column-sizes") + value_counts: Optional[CountMap] = Field(default=None, description="Map of column id to null value count", alias="value-counts") + null_value_counts: Optional[CountMap] = Field(default=None, description="Map of column id to null value count", alias="null-value-counts") + nan_value_counts: Optional[CountMap] = Field(default=None, description="Map of column id to number of NaN values in the column", alias="nan-value-counts") + lower_bounds: Optional[ValueMap] = Field(default=None, description="Map of column id to lower bound primitive type values", alias="lower-bounds") + upper_bounds: Optional[ValueMap] = Field(default=None, description="Map of column id to upper bound primitive type values", alias="upper-bounds") + __properties: ClassVar[List[str]] = ["content", "file-path", "file-format", "spec-id", "partition", "file-size-in-bytes", "record-count", "key-metadata", "split-offsets", "sort-order-id"] + + @field_validator('content') + def content_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['data']): + raise ValueError("must be one of enum values ('data')") + return value + + model_config = 
ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of DataFile from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in partition (list) + _items = [] + if self.partition: + for _item in self.partition: + if _item: + _items.append(_item.to_dict()) + _dict['partition'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of DataFile from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "content": obj.get("content"), + "file-path": obj.get("file-path"), + "file-format": obj.get("file-format"), + "spec-id": obj.get("spec-id"), + "partition": [PrimitiveTypeValue.from_dict(_item) for _item in obj["partition"]] if obj.get("partition") is not None else None, + "file-size-in-bytes": obj.get("file-size-in-bytes"), + "record-count": obj.get("record-count"), + "key-metadata": obj.get("key-metadata"), + "split-offsets": obj.get("split-offsets"), + "sort-order-id": obj.get("sort-order-id") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/equality_delete_file.py b/regtests/client/python/polaris/catalog/models/equality_delete_file.py new file mode 100644 index 0000000000..3b83eb60be --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/equality_delete_file.py @@ -0,0 +1,129 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.content_file import ContentFile +from polaris.catalog.models.file_format import FileFormat +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from typing import Optional, Set +from typing_extensions import Self + +class EqualityDeleteFile(ContentFile): + """ + EqualityDeleteFile + """ # noqa: E501 + content: StrictStr + equality_ids: Optional[List[StrictInt]] = Field(default=None, description="List of equality field IDs", alias="equality-ids") + __properties: ClassVar[List[str]] = ["content", "file-path", "file-format", "spec-id", "partition", "file-size-in-bytes", "record-count", "key-metadata", "split-offsets", "sort-order-id"] + + @field_validator('content') + def content_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['equality-deletes']): + raise ValueError("must be one of enum values ('equality-deletes')") + return value + + model_config = 
ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of EqualityDeleteFile from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in partition (list) + _items = [] + if self.partition: + for _item in self.partition: + if _item: + _items.append(_item.to_dict()) + _dict['partition'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of EqualityDeleteFile from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "content": obj.get("content"), + "file-path": obj.get("file-path"), + "file-format": obj.get("file-format"), + "spec-id": obj.get("spec-id"), + "partition": [PrimitiveTypeValue.from_dict(_item) for _item in obj["partition"]] if obj.get("partition") is not None else None, + "file-size-in-bytes": obj.get("file-size-in-bytes"), + "record-count": obj.get("record-count"), + "key-metadata": obj.get("key-metadata"), + "split-offsets": obj.get("split-offsets"), + "sort-order-id": obj.get("sort-order-id") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/error_model.py b/regtests/client/python/polaris/catalog/models/error_model.py new file mode 100644 index 0000000000..780ba7d47f --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/error_model.py @@ -0,0 +1,109 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing_extensions import Annotated +from typing import Optional, Set +from typing_extensions import Self + +class ErrorModel(BaseModel): + """ + JSON error payload returned in a response with further details on the error + """ # noqa: E501 + message: StrictStr = Field(description="Human-readable error message") + type: StrictStr = Field(description="Internal type definition of the error") + code: Annotated[int, Field(le=600, strict=True, ge=400)] = Field(description="HTTP response code") + stack: Optional[List[StrictStr]] = None + __properties: ClassVar[List[str]] = ["message", "type", "code", "stack"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ErrorModel from a JSON string""" + return 
cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ErrorModel from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "message": obj.get("message"), + "type": obj.get("type"), + "code": obj.get("code"), + "stack": obj.get("stack") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/expression.py b/regtests/client/python/polaris/catalog/models/expression.py new file mode 100644 index 0000000000..00b37f99a8 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/expression.py @@ -0,0 +1,196 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +import pprint +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Any, List, Optional +from polaris.catalog.models.literal_expression import LiteralExpression +from polaris.catalog.models.set_expression import SetExpression +from polaris.catalog.models.unary_expression import UnaryExpression +from pydantic import StrictStr, Field +from typing import Union, List, Set, Optional, Dict +from typing_extensions import Literal, Self + +EXPRESSION_ONE_OF_SCHEMAS = ["AndOrExpression", "LiteralExpression", "NotExpression", "SetExpression", "UnaryExpression"] + +class Expression(BaseModel): + """ + Expression + """ + # data type: AndOrExpression + oneof_schema_1_validator: Optional[AndOrExpression] = None + # data type: NotExpression + oneof_schema_2_validator: Optional[NotExpression] = None + # data type: SetExpression + oneof_schema_3_validator: Optional[SetExpression] = None + # data type: LiteralExpression + oneof_schema_4_validator: Optional[LiteralExpression] = None + # data type: UnaryExpression + oneof_schema_5_validator: Optional[UnaryExpression] = None + actual_instance: Optional[Union[AndOrExpression, LiteralExpression, NotExpression, SetExpression, UnaryExpression]] = None + one_of_schemas: Set[str] = { "AndOrExpression", "LiteralExpression", "NotExpression", "SetExpression", "UnaryExpression" } + + model_config = ConfigDict( + validate_assignment=True, + protected_namespaces=(), + ) + + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a position argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise 
ValueError("If a position argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_oneof(cls, v): + instance = Expression.model_construct() + error_messages = [] + match = 0 + # validate data type: AndOrExpression + if not isinstance(v, AndOrExpression): + error_messages.append(f"Error! Input type `{type(v)}` is not `AndOrExpression`") + else: + match += 1 + # validate data type: NotExpression + if not isinstance(v, NotExpression): + error_messages.append(f"Error! Input type `{type(v)}` is not `NotExpression`") + else: + match += 1 + # validate data type: SetExpression + if not isinstance(v, SetExpression): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetExpression`") + else: + match += 1 + # validate data type: LiteralExpression + if not isinstance(v, LiteralExpression): + error_messages.append(f"Error! Input type `{type(v)}` is not `LiteralExpression`") + else: + match += 1 + # validate data type: UnaryExpression + if not isinstance(v, UnaryExpression): + error_messages.append(f"Error! Input type `{type(v)}` is not `UnaryExpression`") + else: + match += 1 + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when setting `actual_instance` in Expression with oneOf schemas: AndOrExpression, LiteralExpression, NotExpression, SetExpression, UnaryExpression. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when setting `actual_instance` in Expression with oneOf schemas: AndOrExpression, LiteralExpression, NotExpression, SetExpression, UnaryExpression. 
Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Union[str, Dict[str, Any]]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + match = 0 + + # deserialize data into AndOrExpression + try: + instance.actual_instance = AndOrExpression.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into NotExpression + try: + instance.actual_instance = NotExpression.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into SetExpression + try: + instance.actual_instance = SetExpression.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into LiteralExpression + try: + instance.actual_instance = LiteralExpression.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into UnaryExpression + try: + instance.actual_instance = UnaryExpression.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when deserializing the JSON string into Expression with oneOf schemas: AndOrExpression, LiteralExpression, NotExpression, SetExpression, UnaryExpression. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when deserializing the JSON string into Expression with oneOf schemas: AndOrExpression, LiteralExpression, NotExpression, SetExpression, UnaryExpression. 
Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], AndOrExpression, LiteralExpression, NotExpression, SetExpression, UnaryExpression]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + # primitive type + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + +from polaris.catalog.models.and_or_expression import AndOrExpression +from polaris.catalog.models.not_expression import NotExpression +# TODO: Rewrite to not use raise_errors +Expression.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/file_format.py b/regtests/client/python/polaris/catalog/models/file_format.py new file mode 100644 index 0000000000..f82d4cccee --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/file_format.py @@ -0,0 +1,53 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class FileFormat(str, Enum): + """ + FileFormat + """ + + """ + allowed enum values + """ + AVRO = 'avro' + ORC = 'orc' + PARQUET = 'parquet' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of FileFormat from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/catalog/models/get_namespace_response.py b/regtests/client/python/polaris/catalog/models/get_namespace_response.py new file mode 100644 index 0000000000..3510306ffb --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/get_namespace_response.py @@ -0,0 +1,109 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class GetNamespaceResponse(BaseModel): + """ + GetNamespaceResponse + """ # noqa: E501 + namespace: List[StrictStr] = Field(description="Reference to one or more levels of a namespace") + properties: Optional[Dict[str, StrictStr]] = Field(default=None, description="Properties stored on the namespace, if supported by the server. If the server does not support namespace properties, it should return null for this field. If namespace properties are supported, but none are set, it should return an empty object.") + __properties: ClassVar[List[str]] = ["namespace", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of GetNamespaceResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # set to None if properties (nullable) is None + # and model_fields_set contains the field + if self.properties is None and "properties" in self.model_fields_set: + _dict['properties'] = None + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of GetNamespaceResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "namespace": obj.get("namespace"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/iceberg_error_response.py b/regtests/client/python/polaris/catalog/models/iceberg_error_response.py new file mode 100644 index 0000000000..e51e83315d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/iceberg_error_response.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.error_model import ErrorModel +from typing import Optional, Set +from typing_extensions import Self + +class IcebergErrorResponse(BaseModel): + """ + JSON wrapper for all error responses (non-2xx) + """ # noqa: E501 + error: ErrorModel + __properties: ClassVar[List[str]] = ["error"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of IcebergErrorResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of error + if self.error: + _dict['error'] = self.error.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of IcebergErrorResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "error": ErrorModel.from_dict(obj["error"]) if obj.get("error") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/list_namespaces_response.py b/regtests/client/python/polaris/catalog/models/list_namespaces_response.py new file mode 100644 index 0000000000..772fb37eab --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/list_namespaces_response.py @@ -0,0 +1,109 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class ListNamespacesResponse(BaseModel): + """ + ListNamespacesResponse + """ # noqa: E501 + next_page_token: Optional[StrictStr] = Field(default=None, description="An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter `pageToken` to the server. Servers that support pagination should identify the `pageToken` parameter and return a `next-page-token` in the response if there are more results available. After the initial request, the value of `next-page-token` from each response must be used as the `pageToken` parameter value for the next request. The server must return `null` value for the `next-page-token` in the last response. Servers that support pagination must return all results in a single response with the value of `next-page-token` set to `null` if the query parameter `pageToken` is not set in the request. Servers that do not support pagination should ignore the `pageToken` parameter and return all results in a single response. The `next-page-token` must be omitted from the response. 
Clients must interpret either `null` or missing response value of `next-page-token` as the end of the listing results.", alias="next-page-token") + namespaces: Optional[List[List[StrictStr]]] = None + __properties: ClassVar[List[str]] = ["next-page-token", "namespaces"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ListNamespacesResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # set to None if next_page_token (nullable) is None + # and model_fields_set contains the field + if self.next_page_token is None and "next_page_token" in self.model_fields_set: + _dict['next-page-token'] = None + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ListNamespacesResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "next-page-token": obj.get("next-page-token"), + "namespaces": obj.get("namespaces") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/list_tables_response.py b/regtests/client/python/polaris/catalog/models/list_tables_response.py new file mode 100644 index 0000000000..38ee5dedbd --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/list_tables_response.py @@ -0,0 +1,117 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.table_identifier import TableIdentifier +from typing import Optional, Set +from typing_extensions import Self + +class ListTablesResponse(BaseModel): + """ + ListTablesResponse + """ # noqa: E501 + next_page_token: Optional[StrictStr] = Field(default=None, description="An opaque token that allows clients to make use of pagination for list APIs (e.g. ListTables). Clients may initiate the first paginated request by sending an empty query parameter `pageToken` to the server. Servers that support pagination should identify the `pageToken` parameter and return a `next-page-token` in the response if there are more results available. After the initial request, the value of `next-page-token` from each response must be used as the `pageToken` parameter value for the next request. The server must return `null` value for the `next-page-token` in the last response. Servers that support pagination must return all results in a single response with the value of `next-page-token` set to `null` if the query parameter `pageToken` is not set in the request. Servers that do not support pagination should ignore the `pageToken` parameter and return all results in a single response. The `next-page-token` must be omitted from the response. 
Clients must interpret either `null` or missing response value of `next-page-token` as the end of the listing results.", alias="next-page-token") + identifiers: Optional[List[TableIdentifier]] = None + __properties: ClassVar[List[str]] = ["next-page-token", "identifiers"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ListTablesResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in identifiers (list) + _items = [] + if self.identifiers: + for _item in self.identifiers: + if _item: + _items.append(_item.to_dict()) + _dict['identifiers'] = _items + # set to None if next_page_token (nullable) is None + # and model_fields_set contains the field + if self.next_page_token is None and "next_page_token" in self.model_fields_set: + _dict['next-page-token'] = None + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ListTablesResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "next-page-token": obj.get("next-page-token"), + "identifiers": [TableIdentifier.from_dict(_item) for _item in obj["identifiers"]] if obj.get("identifiers") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/list_type.py b/regtests/client/python/polaris/catalog/models/list_type.py new file mode 100644 index 0000000000..c54c322e1a --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/list_type.py @@ -0,0 +1,121 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class ListType(BaseModel): + """ + ListType + """ # noqa: E501 + type: StrictStr + element_id: StrictInt = Field(alias="element-id") + element: Type + element_required: StrictBool = Field(alias="element-required") + __properties: ClassVar[List[str]] = ["type", "element-id", "element", "element-required"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['list']): + raise ValueError("must be one of enum values ('list')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ListType from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of element + if self.element: + _dict['element'] = self.element.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ListType from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "element-id": obj.get("element-id"), + "element": Type.from_dict(obj["element"]) if obj.get("element") is not None else None, + "element-required": obj.get("element-required") + }) + return _obj + +from polaris.catalog.models.type import Type +# TODO: Rewrite to not use raise_errors +ListType.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/literal_expression.py b/regtests/client/python/polaris/catalog/models/literal_expression.py new file mode 100644 index 0000000000..10d006c090 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/literal_expression.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.term import Term +from typing import Optional, Set +from typing_extensions import Self + +class LiteralExpression(BaseModel): + """ + LiteralExpression + """ # noqa: E501 + type: StrictStr + term: Term + value: Dict[str, Any] + __properties: ClassVar[List[str]] = ["type", "term", "value"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of LiteralExpression from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of term + if self.term: + _dict['term'] = self.term.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of LiteralExpression from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "term": Term.from_dict(obj["term"]) if obj.get("term") is not None else None, + "value": obj.get("value") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/load_table_result.py b/regtests/client/python/polaris/catalog/models/load_table_result.py new file mode 100644 index 0000000000..e9e7a715f1 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/load_table_result.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.table_metadata import TableMetadata +from typing import Optional, Set +from typing_extensions import Self + +class LoadTableResult(BaseModel): + """ + Result used when a table is successfully loaded. The table metadata JSON is returned in the `metadata` field. The corresponding file location of table metadata should be returned in the `metadata-location` field, unless the metadata is not yet committed. For example, a create transaction may return metadata that is staged but not committed. Clients can check whether metadata has changed by comparing metadata locations after the table has been created. The `config` map returns table-specific configuration for the table's resources, including its HTTP client and FileIO. For example, config may contain a specific FileIO implementation class for the table depending on its underlying storage. 
The following configurations should be respected by clients: ## General Configurations - `token`: Authorization bearer token to use for table requests if OAuth2 security is enabled ## AWS Configurations The following configurations should be respected when working with tables stored in AWS S3 - `client.region`: region to configure client for making requests to AWS - `s3.access-key-id`: id for credentials that provide access to the data in S3 - `s3.secret-access-key`: secret for credentials that provide access to data in S3 - `s3.session-token`: if present, this value should be used as the session token - `s3.remote-signing-enabled`: if `true` remote signing should be performed as described in the `s3-signer-open-api.yaml` specification
+    """ # noqa: E501
+    metadata_location: Optional[StrictStr] = Field(default=None, description="May be null if the table is staged as part of a transaction", alias="metadata-location")
+    metadata: TableMetadata
+    config: Optional[Dict[str, StrictStr]] = None
+    __properties: ClassVar[List[str]] = ["metadata-location", "metadata", "config"]
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of LoadTableResult from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of metadata + if self.metadata: + _dict['metadata'] = self.metadata.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of LoadTableResult from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "metadata-location": obj.get("metadata-location"), + "metadata": TableMetadata.from_dict(obj["metadata"]) if obj.get("metadata") is not None else None, + "config": obj.get("config") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/load_view_result.py b/regtests/client/python/polaris/catalog/models/load_view_result.py new file mode 100644 index 0000000000..fdf1a6ab56 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/load_view_result.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.view_metadata import ViewMetadata +from typing import Optional, Set +from typing_extensions import Self + +class LoadViewResult(BaseModel): + """ + Result used when a view is successfully loaded. The view metadata JSON is returned in the `metadata` field. The corresponding file location of view metadata is returned in the `metadata-location` field. Clients can check whether metadata has changed by comparing metadata locations after the view has been created. The `config` map returns view-specific configuration for the view's resources. 
The following configurations should be respected by clients: ## General Configurations - `token`: Authorization bearer token to use for view requests if OAuth2 security is enabled + """ # noqa: E501 + metadata_location: StrictStr = Field(alias="metadata-location") + metadata: ViewMetadata + config: Optional[Dict[str, StrictStr]] = None + __properties: ClassVar[List[str]] = ["metadata-location", "metadata", "config"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of LoadViewResult from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of metadata + if self.metadata: + _dict['metadata'] = self.metadata.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of LoadViewResult from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "metadata-location": obj.get("metadata-location"), + "metadata": ViewMetadata.from_dict(obj["metadata"]) if obj.get("metadata") is not None else None, + "config": obj.get("config") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/map_type.py b/regtests/client/python/polaris/catalog/models/map_type.py new file mode 100644 index 0000000000..f59f558163 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/map_type.py @@ -0,0 +1,128 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class MapType(BaseModel): + """ + MapType + """ # noqa: E501 + type: StrictStr + key_id: StrictInt = Field(alias="key-id") + key: Type + value_id: StrictInt = Field(alias="value-id") + value: Type + value_required: StrictBool = Field(alias="value-required") + __properties: ClassVar[List[str]] = ["type", "key-id", "key", "value-id", "value", "value-required"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['map']): + raise ValueError("must be one of enum values ('map')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of MapType from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of key + if self.key: + _dict['key'] = self.key.to_dict() + # override the default output from pydantic by calling `to_dict()` of value + if self.value: + _dict['value'] = self.value.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of MapType from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "key-id": obj.get("key-id"), + "key": Type.from_dict(obj["key"]) if obj.get("key") is not None else None, + "value-id": obj.get("value-id"), + "value": Type.from_dict(obj["value"]) if obj.get("value") is not None else None, + "value-required": obj.get("value-required") + }) + return _obj + +from polaris.catalog.models.type import Type +# TODO: Rewrite to not use raise_errors +MapType.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/metadata_log_inner.py b/regtests/client/python/polaris/catalog/models/metadata_log_inner.py new file mode 100644 index 0000000000..acbb2a9d0d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/metadata_log_inner.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class MetadataLogInner(BaseModel): + """ + MetadataLogInner + """ # noqa: E501 + metadata_file: StrictStr = Field(alias="metadata-file") + timestamp_ms: StrictInt = Field(alias="timestamp-ms") + __properties: ClassVar[List[str]] = ["metadata-file", "timestamp-ms"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of MetadataLogInner from a JSON string""" 
+ return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of MetadataLogInner from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "metadata-file": obj.get("metadata-file"), + "timestamp-ms": obj.get("timestamp-ms") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/metric_result.py b/regtests/client/python/polaris/catalog/models/metric_result.py new file mode 100644 index 0000000000..94b8fd86a1 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/metric_result.py @@ -0,0 +1,149 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +from inspect import getfullargspec +import json +import pprint +import re # noqa: F401 +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Optional +from polaris.catalog.models.counter_result import CounterResult +from polaris.catalog.models.timer_result import TimerResult +from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict +from typing_extensions import Literal, Self +from pydantic import Field + +METRICRESULT_ANY_OF_SCHEMAS = ["CounterResult", "TimerResult"] + +class MetricResult(BaseModel): + """ + MetricResult + """ + + # data type: CounterResult + anyof_schema_1_validator: Optional[CounterResult] = None + # data type: TimerResult + anyof_schema_2_validator: Optional[TimerResult] = None + if TYPE_CHECKING: + actual_instance: Optional[Union[CounterResult, TimerResult]] = None + else: + actual_instance: Any = None + any_of_schemas: Set[str] = { "CounterResult", "TimerResult" } + + model_config = { + "validate_assignment": True, + "protected_namespaces": (), + } + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a positional argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_anyof(cls, v): + instance = MetricResult.model_construct() + error_messages = [] + # validate data type: CounterResult + if not isinstance(v, CounterResult): + error_messages.append(f"Error! 
Input type `{type(v)}` is not `CounterResult`") + else: + return v + + # validate data type: TimerResult + if not isinstance(v, TimerResult): + error_messages.append(f"Error! Input type `{type(v)}` is not `TimerResult`") + else: + return v + + if error_messages: + # no match + raise ValueError("No match found when setting the actual_instance in MetricResult with anyOf schemas: CounterResult, TimerResult. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + # anyof_schema_1_validator: Optional[CounterResult] = None + try: + instance.actual_instance = CounterResult.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_2_validator: Optional[TimerResult] = None + try: + instance.actual_instance = TimerResult.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if error_messages: + # no match + raise ValueError("No match found when deserializing the JSON string into MetricResult with anyOf schemas: CounterResult, TimerResult. 
Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], CounterResult, TimerResult]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + + diff --git a/regtests/client/python/polaris/catalog/models/model_schema.py b/regtests/client/python/polaris/catalog/models/model_schema.py new file mode 100644 index 0000000000..bc2c82e1af --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/model_schema.py @@ -0,0 +1,125 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.struct_field import StructField +from typing import Optional, Set +from typing_extensions import Self + +class ModelSchema(BaseModel): + """ + ModelSchema + """ # noqa: E501 + type: StrictStr + fields: List[StructField] + schema_id: Optional[StrictInt] = Field(default=None, alias="schema-id") + identifier_field_ids: Optional[List[StrictInt]] = Field(default=None, alias="identifier-field-ids") + __properties: ClassVar[List[str]] = ["type", "fields", "schema-id", "identifier-field-ids"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['struct']): + raise ValueError("must be one of enum values ('struct')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ModelSchema from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using 
alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + * OpenAPI `readOnly` fields are excluded. + """ + excluded_fields: Set[str] = set([ + "schema_id", + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in fields (list) + _items = [] + if self.fields: + for _item in self.fields: + if _item: + _items.append(_item.to_dict()) + _dict['fields'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ModelSchema from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "fields": [StructField.from_dict(_item) for _item in obj["fields"]] if obj.get("fields") is not None else None, + "schema-id": obj.get("schema-id"), + "identifier-field-ids": obj.get("identifier-field-ids") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/not_expression.py b/regtests/client/python/polaris/catalog/models/not_expression.py new file mode 100644 index 0000000000..8bc6e8cc1a --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/not_expression.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class NotExpression(BaseModel): + """ + NotExpression + """ # noqa: E501 + type: StrictStr + child: Expression + __properties: ClassVar[List[str]] = ["type", "child"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of NotExpression from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary 
representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of child + if self.child: + _dict['child'] = self.child.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of NotExpression from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "child": Expression.from_dict(obj["child"]) if obj.get("child") is not None else None + }) + return _obj + +from polaris.catalog.models.expression import Expression +# TODO: Rewrite to not use raise_errors +NotExpression.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/notification_request.py b/regtests/client/python/polaris/catalog/models/notification_request.py new file mode 100644 index 0000000000..b408c32b99 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/notification_request.py @@ -0,0 +1,109 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.notification_type import NotificationType +from polaris.catalog.models.table_update_notification import TableUpdateNotification +from typing import Optional, Set +from typing_extensions import Self + +class NotificationRequest(BaseModel): + """ + NotificationRequest + """ # noqa: E501 + notification_type: NotificationType = Field(alias="notification-type") + payload: Optional[TableUpdateNotification] = None + __properties: ClassVar[List[str]] = ["notification-type", "payload"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of NotificationRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using 
alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of payload + if self.payload: + _dict['payload'] = self.payload.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of NotificationRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "notification-type": obj.get("notification-type"), + "payload": TableUpdateNotification.from_dict(obj["payload"]) if obj.get("payload") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/notification_type.py b/regtests/client/python/polaris/catalog/models/notification_type.py new file mode 100644 index 0000000000..c5b8491782 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/notification_type.py @@ -0,0 +1,54 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class NotificationType(str, Enum): + """ + NotificationType + """ + + """ + allowed enum values + """ + UNKNOWN = 'UNKNOWN' + CREATE = 'CREATE' + UPDATE = 'UPDATE' + DROP = 'DROP' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of NotificationType from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/catalog/models/null_order.py b/regtests/client/python/polaris/catalog/models/null_order.py new file mode 100644 index 0000000000..f05c35ae9c --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/null_order.py @@ -0,0 +1,52 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class NullOrder(str, Enum): + """ + NullOrder + """ + + """ + allowed enum values + """ + NULLS_MINUS_FIRST = 'nulls-first' + NULLS_MINUS_LAST = 'nulls-last' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of NullOrder from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/catalog/models/o_auth_error.py b/regtests/client/python/polaris/catalog/models/o_auth_error.py new file mode 100644 index 0000000000..df82791b09 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/o_auth_error.py @@ -0,0 +1,113 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class OAuthError(BaseModel): + """ + OAuthError + """ # noqa: E501 + error: StrictStr + error_description: Optional[StrictStr] = None + error_uri: Optional[StrictStr] = None + __properties: ClassVar[List[str]] = ["error", "error_description", "error_uri"] + + @field_validator('error') + def error_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['invalid_request', 'invalid_client', 'invalid_grant', 'unauthorized_client', 'unsupported_grant_type', 'invalid_scope']): + raise ValueError("must be one of enum values ('invalid_request', 'invalid_client', 'invalid_grant', 'unauthorized_client', 'unsupported_grant_type', 'invalid_scope')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of OAuthError from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. 
Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of OAuthError from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "error": obj.get("error"), + "error_description": obj.get("error_description"), + "error_uri": obj.get("error_uri") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/o_auth_token_response.py b/regtests/client/python/polaris/catalog/models/o_auth_token_response.py new file mode 100644 index 0000000000..369ce4a9b6 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/o_auth_token_response.py @@ -0,0 +1,120 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.token_type import TokenType +from typing import Optional, Set +from typing_extensions import Self + +class OAuthTokenResponse(BaseModel): + """ + OAuthTokenResponse + """ # noqa: E501 + access_token: StrictStr = Field(description="The access token, for client credentials or token exchange") + token_type: StrictStr = Field(description="Access token type for client credentials or token exchange See https://datatracker.ietf.org/doc/html/rfc6749#section-7.1") + expires_in: Optional[StrictInt] = Field(default=None, description="Lifetime of the access token in seconds for client credentials or token exchange") + issued_token_type: Optional[TokenType] = None + refresh_token: Optional[StrictStr] = Field(default=None, description="Refresh token for client credentials or token exchange") + scope: Optional[StrictStr] = Field(default=None, description="Authorization scope for client credentials or token exchange") + __properties: ClassVar[List[str]] = ["access_token", "token_type", "expires_in", "issued_token_type", "refresh_token", "scope"] + + @field_validator('token_type') + def token_type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['bearer', 'mac', 'N_A']): + raise ValueError("must be one of enum values ('bearer', 'mac', 'N_A')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, 
exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of OAuthTokenResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of OAuthTokenResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "access_token": obj.get("access_token"), + "token_type": obj.get("token_type"), + "expires_in": obj.get("expires_in"), + "issued_token_type": obj.get("issued_token_type"), + "refresh_token": obj.get("refresh_token"), + "scope": obj.get("scope") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/partition_field.py b/regtests/client/python/polaris/catalog/models/partition_field.py new file mode 100644 index 0000000000..7d06fdaec3 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/partition_field.py @@ -0,0 +1,108 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class PartitionField(BaseModel): + """ + PartitionField + """ # noqa: E501 + field_id: Optional[StrictInt] = Field(default=None, alias="field-id") + source_id: StrictInt = Field(alias="source-id") + name: StrictStr + transform: StrictStr + __properties: ClassVar[List[str]] = ["field-id", "source-id", "name", "transform"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> 
Optional[Self]: + """Create an instance of PartitionField from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PartitionField from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "field-id": obj.get("field-id"), + "source-id": obj.get("source-id"), + "name": obj.get("name"), + "transform": obj.get("transform") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/partition_spec.py b/regtests/client/python/polaris/catalog/models/partition_spec.py new file mode 100644 index 0000000000..3f22124d32 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/partition_spec.py @@ -0,0 +1,114 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.partition_field import PartitionField +from typing import Optional, Set +from typing_extensions import Self + +class PartitionSpec(BaseModel): + """ + PartitionSpec + """ # noqa: E501 + spec_id: Optional[StrictInt] = Field(default=None, alias="spec-id") + fields: List[PartitionField] + __properties: ClassVar[List[str]] = ["spec-id", "fields"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PartitionSpec from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. 
Other fields with value `None` + are ignored. + * OpenAPI `readOnly` fields are excluded. + """ + excluded_fields: Set[str] = set([ + "spec_id", + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in fields (list) + _items = [] + if self.fields: + for _item in self.fields: + if _item: + _items.append(_item.to_dict()) + _dict['fields'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PartitionSpec from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "spec-id": obj.get("spec-id"), + "fields": [PartitionField.from_dict(_item) for _item in obj["fields"]] if obj.get("fields") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/partition_statistics_file.py b/regtests/client/python/polaris/catalog/models/partition_statistics_file.py new file mode 100644 index 0000000000..265d1f5be6 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/partition_statistics_file.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class PartitionStatisticsFile(BaseModel): + """ + PartitionStatisticsFile + """ # noqa: E501 + snapshot_id: StrictInt = Field(alias="snapshot-id") + statistics_path: StrictStr = Field(alias="statistics-path") + file_size_in_bytes: StrictInt = Field(alias="file-size-in-bytes") + __properties: ClassVar[List[str]] = ["snapshot-id", "statistics-path", "file-size-in-bytes"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PartitionStatisticsFile from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PartitionStatisticsFile from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "snapshot-id": obj.get("snapshot-id"), + "statistics-path": obj.get("statistics-path"), + "file-size-in-bytes": obj.get("file-size-in-bytes") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/position_delete_file.py b/regtests/client/python/polaris/catalog/models/position_delete_file.py new file mode 100644 index 0000000000..548166d078 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/position_delete_file.py @@ -0,0 +1,128 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.content_file import ContentFile +from polaris.catalog.models.file_format import FileFormat +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from typing import Optional, Set +from typing_extensions import Self + +class PositionDeleteFile(ContentFile): + """ + PositionDeleteFile + """ # noqa: E501 + content: StrictStr + __properties: ClassVar[List[str]] = ["content", "file-path", "file-format", "spec-id", "partition", "file-size-in-bytes", "record-count", "key-metadata", "split-offsets", "sort-order-id"] + + @field_validator('content') + def content_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['position-deletes']): + raise ValueError("must be one of enum values ('position-deletes')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PositionDeleteFile from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the 
dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in partition (list) + _items = [] + if self.partition: + for _item in self.partition: + if _item: + _items.append(_item.to_dict()) + _dict['partition'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PositionDeleteFile from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "content": obj.get("content"), + "file-path": obj.get("file-path"), + "file-format": obj.get("file-format"), + "spec-id": obj.get("spec-id"), + "partition": [PrimitiveTypeValue.from_dict(_item) for _item in obj["partition"]] if obj.get("partition") is not None else None, + "file-size-in-bytes": obj.get("file-size-in-bytes"), + "record-count": obj.get("record-count"), + "key-metadata": obj.get("key-metadata"), + "split-offsets": obj.get("split-offsets"), + "sort-order-id": obj.get("sort-order-id") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/primitive_type_value.py b/regtests/client/python/polaris/catalog/models/primitive_type_value.py new file mode 100644 index 0000000000..0632ea1dab --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/primitive_type_value.py @@ -0,0 +1,398 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +import pprint +from datetime import date +from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictFloat, StrictInt, StrictStr, ValidationError, field_validator +from typing import Any, List, Optional, Union +from typing_extensions import Annotated +from pydantic import StrictStr, Field +from typing import Union, List, Set, Optional, Dict +from typing_extensions import Literal, Self + +PRIMITIVETYPEVALUE_ONE_OF_SCHEMAS = ["bool", "date", "float", "int", "str"] + +class PrimitiveTypeValue(BaseModel): + """ + PrimitiveTypeValue + """ + # data type: bool + oneof_schema_1_validator: Optional[StrictBool] = None + # data type: int + oneof_schema_2_validator: Optional[StrictInt] = None + # data type: int + oneof_schema_3_validator: Optional[StrictInt] = None + # data type: float + oneof_schema_4_validator: Optional[Union[StrictFloat, StrictInt]] = None + # data type: float + oneof_schema_5_validator: Optional[Union[StrictFloat, StrictInt]] = None + # data type: str 
+ oneof_schema_6_validator: Optional[StrictStr] = Field(default=None, description="Decimal type values are serialized as strings. Decimals with a positive scale serialize as numeric plain text, while decimals with a negative scale use scientific notation and the exponent will be equal to the negated scale. For instance, a decimal with a positive scale is '123.4500', with zero scale is '2', and with a negative scale is '2E+20'") + # data type: str + oneof_schema_7_validator: Optional[StrictStr] = None + # data type: str + oneof_schema_8_validator: Optional[Annotated[str, Field(min_length=36, strict=True, max_length=36)]] = Field(default=None, description="UUID type values are serialized as a 36-character lowercase string in standard UUID format as specified by RFC-4122") + # data type: date + oneof_schema_9_validator: Optional[date] = Field(default=None, description="Date type values follow the 'YYYY-MM-DD' ISO-8601 standard date format") + # data type: str + oneof_schema_10_validator: Optional[StrictStr] = Field(default=None, description="Time type values follow the 'HH:MM:SS.ssssss' ISO-8601 format with microsecond precision") + # data type: str + oneof_schema_11_validator: Optional[StrictStr] = Field(default=None, description="Timestamp type values follow the 'YYYY-MM-DDTHH:MM:SS.ssssss' ISO-8601 format with microsecond precision") + # data type: str + oneof_schema_12_validator: Optional[StrictStr] = Field(default=None, description="TimestampTz type values follow the 'YYYY-MM-DDTHH:MM:SS.ssssss+00:00' ISO-8601 format with microsecond precision, and a timezone offset (+00:00 for UTC)") + # data type: str + oneof_schema_13_validator: Optional[StrictStr] = Field(default=None, description="Timestamp_ns type values follow the 'YYYY-MM-DDTHH:MM:SS.sssssssss' ISO-8601 format with nanosecond precision") + # data type: str + oneof_schema_14_validator: Optional[StrictStr] = Field(default=None, description="Timestamp_ns type values follow the 
'YYYY-MM-DDTHH:MM:SS.sssssssss+00:00' ISO-8601 format with nanosecond precision, and a timezone offset (+00:00 for UTC)") + # data type: str + oneof_schema_15_validator: Optional[StrictStr] = Field(default=None, description="Fixed length type values are stored and serialized as an uppercase hexadecimal string preserving the fixed length") + # data type: str + oneof_schema_16_validator: Optional[StrictStr] = Field(default=None, description="Binary type values are stored and serialized as an uppercase hexadecimal string") + actual_instance: Optional[Union[bool, date, float, int, str]] = None + one_of_schemas: Set[str] = { "bool", "date", "float", "int", "str" } + + model_config = ConfigDict( + validate_assignment=True, + protected_namespaces=(), + ) + + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a positional argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_oneof(cls, v): + instance = PrimitiveTypeValue.model_construct() + error_messages = [] + match = 0 + # validate data type: bool + try: + instance.oneof_schema_1_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: int + try: + instance.oneof_schema_2_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: int + try: + instance.oneof_schema_3_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: float + try: + instance.oneof_schema_4_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: float + try: +
instance.oneof_schema_5_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_6_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_7_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_8_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: date + try: + instance.oneof_schema_9_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_10_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_11_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_12_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_13_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_14_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_15_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: str + try: + instance.oneof_schema_16_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + if match > 1: + # more than 1 match + raise 
ValueError("Multiple matches found when setting `actual_instance` in PrimitiveTypeValue with oneOf schemas: bool, date, float, int, str. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when setting `actual_instance` in PrimitiveTypeValue with oneOf schemas: bool, date, float, int, str. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Union[str, Dict[str, Any]]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + match = 0 + + # deserialize data into bool + try: + # validation + instance.oneof_schema_1_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_1_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into int + try: + # validation + instance.oneof_schema_2_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_2_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into int + try: + # validation + instance.oneof_schema_3_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_3_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into float + try: + # validation + instance.oneof_schema_4_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_4_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into float + try: + # validation + 
instance.oneof_schema_5_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_5_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_6_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_6_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_7_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_7_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_8_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_8_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into date + try: + # validation + instance.oneof_schema_9_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_9_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_10_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_10_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_11_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_11_validator + match += 1 + except 
(ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_12_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_12_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_13_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_13_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_14_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_14_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_15_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_15_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into str + try: + # validation + instance.oneof_schema_16_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_16_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when deserializing the JSON string into PrimitiveTypeValue with oneOf schemas: bool, date, float, int, str. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when deserializing the JSON string into PrimitiveTypeValue with oneOf schemas: bool, date, float, int, str. 
Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], bool, date, float, int, str]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + # primitive type + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + + diff --git a/regtests/client/python/polaris/catalog/models/register_table_request.py b/regtests/client/python/polaris/catalog/models/register_table_request.py new file mode 100644 index 0000000000..332f8cb7d6 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/register_table_request.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class RegisterTableRequest(BaseModel): + """ + RegisterTableRequest + """ # noqa: E501 + name: StrictStr + metadata_location: StrictStr = Field(alias="metadata-location") + __properties: ClassVar[List[str]] = ["name", "metadata-location"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RegisterTableRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RegisterTableRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "metadata-location": obj.get("metadata-location") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/remove_partition_statistics_update.py b/regtests/client/python/polaris/catalog/models/remove_partition_statistics_update.py new file mode 100644 index 0000000000..d23fcfe50f --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/remove_partition_statistics_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class RemovePartitionStatisticsUpdate(BaseUpdate): + """ + RemovePartitionStatisticsUpdate + """ # noqa: E501 + action: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['remove-partition-statistics']): + raise ValueError("must be one of enum values ('remove-partition-statistics')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RemovePartitionStatisticsUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RemovePartitionStatisticsUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/remove_properties_update.py b/regtests/client/python/polaris/catalog/models/remove_properties_update.py new file mode 100644 index 0000000000..ac88e9087a --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/remove_properties_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class RemovePropertiesUpdate(BaseUpdate): + """ + RemovePropertiesUpdate + """ # noqa: E501 + action: StrictStr + removals: List[StrictStr] + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['remove-properties']): + raise ValueError("must be one of enum values ('remove-properties')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RemovePropertiesUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RemovePropertiesUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/remove_snapshot_ref_update.py b/regtests/client/python/polaris/catalog/models/remove_snapshot_ref_update.py new file mode 100644 index 0000000000..c4d2e7b749 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/remove_snapshot_ref_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class RemoveSnapshotRefUpdate(BaseUpdate): + """ + RemoveSnapshotRefUpdate + """ # noqa: E501 + action: StrictStr + ref_name: StrictStr = Field(alias="ref-name") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['remove-snapshot-ref']): + raise ValueError("must be one of enum values ('remove-snapshot-ref')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RemoveSnapshotRefUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RemoveSnapshotRefUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/remove_snapshots_update.py b/regtests/client/python/polaris/catalog/models/remove_snapshots_update.py new file mode 100644 index 0000000000..f4727cb6af --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/remove_snapshots_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class RemoveSnapshotsUpdate(BaseUpdate): + """ + RemoveSnapshotsUpdate + """ # noqa: E501 + action: StrictStr + snapshot_ids: List[StrictInt] = Field(alias="snapshot-ids") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['remove-snapshots']): + raise ValueError("must be one of enum values ('remove-snapshots')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RemoveSnapshotsUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RemoveSnapshotsUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/remove_statistics_update.py b/regtests/client/python/polaris/catalog/models/remove_statistics_update.py new file mode 100644 index 0000000000..ba684f4561 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/remove_statistics_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class RemoveStatisticsUpdate(BaseUpdate): + """ + RemoveStatisticsUpdate + """ # noqa: E501 + action: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['remove-statistics']): + raise ValueError("must be one of enum values ('remove-statistics')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RemoveStatisticsUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RemoveStatisticsUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/rename_table_request.py b/regtests/client/python/polaris/catalog/models/rename_table_request.py new file mode 100644 index 0000000000..f1e3c68525 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/rename_table_request.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.table_identifier import TableIdentifier +from typing import Optional, Set +from typing_extensions import Self + +class RenameTableRequest(BaseModel): + """ + RenameTableRequest + """ # noqa: E501 + source: TableIdentifier + destination: TableIdentifier + __properties: ClassVar[List[str]] = ["source", "destination"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RenameTableRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of source + if self.source: + _dict['source'] = self.source.to_dict() + # override the default output from pydantic by calling `to_dict()` of destination + if self.destination: + _dict['destination'] = self.destination.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RenameTableRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "source": TableIdentifier.from_dict(obj["source"]) if obj.get("source") is not None else None, + "destination": TableIdentifier.from_dict(obj["destination"]) if obj.get("destination") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/report_metrics_request.py b/regtests/client/python/polaris/catalog/models/report_metrics_request.py new file mode 100644 index 0000000000..481613e48d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/report_metrics_request.py @@ -0,0 +1,149 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +from inspect import getfullargspec +import json +import pprint +import re # noqa: F401 +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Optional +from polaris.catalog.models.commit_report import CommitReport +from polaris.catalog.models.scan_report import ScanReport +from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict +from typing_extensions import Literal, Self +from pydantic import Field + +REPORTMETRICSREQUEST_ANY_OF_SCHEMAS = ["CommitReport", "ScanReport"] + +class ReportMetricsRequest(BaseModel): + """ + ReportMetricsRequest + """ + + # data type: ScanReport + anyof_schema_1_validator: Optional[ScanReport] = None + # data type: CommitReport + anyof_schema_2_validator: Optional[CommitReport] = None + if TYPE_CHECKING: + actual_instance: Optional[Union[CommitReport, ScanReport]] = None + else: + actual_instance: Any = None + any_of_schemas: Set[str] = { "CommitReport", "ScanReport" } + + model_config = { + "validate_assignment": True, + "protected_namespaces": (), + } + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a position argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a position argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_anyof(cls, v): + instance = 
ReportMetricsRequest.model_construct() + error_messages = [] + # validate data type: ScanReport + if not isinstance(v, ScanReport): + error_messages.append(f"Error! Input type `{type(v)}` is not `ScanReport`") + else: + return v + + # validate data type: CommitReport + if not isinstance(v, CommitReport): + error_messages.append(f"Error! Input type `{type(v)}` is not `CommitReport`") + else: + return v + + if error_messages: + # no match + raise ValueError("No match found when setting the actual_instance in ReportMetricsRequest with anyOf schemas: CommitReport, ScanReport. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + # anyof_schema_1_validator: Optional[ScanReport] = None + try: + instance.actual_instance = ScanReport.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_2_validator: Optional[CommitReport] = None + try: + instance.actual_instance = CommitReport.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if error_messages: + # no match + raise ValueError("No match found when deserializing the JSON string into ReportMetricsRequest with anyOf schemas: CommitReport, ScanReport. Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], CommitReport, ScanReport]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + +
diff --git a/regtests/client/python/polaris/catalog/models/scan_report.py b/regtests/client/python/polaris/catalog/models/scan_report.py new file mode 100644 index 0000000000..41c3b96f5d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/scan_report.py @@ -0,0 +1,133 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.expression import Expression +from polaris.catalog.models.metric_result import MetricResult +from typing import Optional, Set +from typing_extensions import Self + +class ScanReport(BaseModel): + """ + ScanReport + """ # noqa: E501 + table_name: StrictStr = Field(alias="table-name") + snapshot_id: StrictInt = Field(alias="snapshot-id") + filter: Expression + schema_id: StrictInt = Field(alias="schema-id") + projected_field_ids: List[StrictInt] = Field(alias="projected-field-ids") + projected_field_names: List[StrictStr] = Field(alias="projected-field-names") + metrics: Dict[str, MetricResult] + metadata: Optional[Dict[str, StrictStr]] = None + __properties: ClassVar[List[str]] = ["table-name", "snapshot-id", "filter", "schema-id", "projected-field-ids", "projected-field-names", "metrics", "metadata"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ScanReport from a JSON string""" + return cls.from_dict(json.loads(json_str)) +
+ def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of filter + if self.filter: + _dict['filter'] = self.filter.to_dict() + # override the default output from pydantic by calling `to_dict()` of each value in metrics (dict) + _field_dict = {} + if self.metrics: + for _key in self.metrics: + if self.metrics[_key]: + _field_dict[_key] = self.metrics[_key].to_dict() + _dict['metrics'] = _field_dict + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ScanReport from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "table-name": obj.get("table-name"), + "snapshot-id": obj.get("snapshot-id"), + "filter": Expression.from_dict(obj["filter"]) if obj.get("filter") is not None else None, + "schema-id": obj.get("schema-id"), + "projected-field-ids": obj.get("projected-field-ids"), + "projected-field-names": obj.get("projected-field-names"), + "metrics": dict( + (_k, MetricResult.from_dict(_v)) + for _k, _v in obj["metrics"].items() + ) + if obj.get("metrics") is not None + else None, + "metadata": obj.get("metadata") + }) + return _obj + +
diff --git a/regtests/client/python/polaris/catalog/models/set_current_schema_update.py b/regtests/client/python/polaris/catalog/models/set_current_schema_update.py new file mode 100644 index 0000000000..5aeb3ddc88 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_current_schema_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually.
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetCurrentSchemaUpdate(BaseUpdate): + """ + SetCurrentSchemaUpdate + """ # noqa: E501 + action: StrictStr + schema_id: StrictInt = Field(description="Schema ID to set as current, or -1 to set last added schema", alias="schema-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-current-schema']): + raise ValueError("must be one of enum values ('set-current-schema')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetCurrentSchemaUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetCurrentSchemaUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_current_view_version_update.py b/regtests/client/python/polaris/catalog/models/set_current_view_version_update.py new file mode 100644 index 0000000000..7767f13bc6 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_current_view_version_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetCurrentViewVersionUpdate(BaseUpdate): + """ + SetCurrentViewVersionUpdate + """ # noqa: E501 + action: StrictStr + view_version_id: StrictInt = Field(description="The view version id to set as current, or -1 to set last added view version id", alias="view-version-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-current-view-version']): + raise ValueError("must be one of enum values ('set-current-view-version')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetCurrentViewVersionUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetCurrentViewVersionUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_default_sort_order_update.py b/regtests/client/python/polaris/catalog/models/set_default_sort_order_update.py new file mode 100644 index 0000000000..25edad166d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_default_sort_order_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetDefaultSortOrderUpdate(BaseUpdate): + """ + SetDefaultSortOrderUpdate + """ # noqa: E501 + action: StrictStr + sort_order_id: StrictInt = Field(description="Sort order ID to set as the default, or -1 to set last added sort order", alias="sort-order-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-default-sort-order']): + raise ValueError("must be one of enum values ('set-default-sort-order')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetDefaultSortOrderUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetDefaultSortOrderUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_default_spec_update.py b/regtests/client/python/polaris/catalog/models/set_default_spec_update.py new file mode 100644 index 0000000000..e4daeca6de --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_default_spec_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetDefaultSpecUpdate(BaseUpdate): + """ + SetDefaultSpecUpdate + """ # noqa: E501 + action: StrictStr + spec_id: StrictInt = Field(description="Partition spec ID to set as the default, or -1 to set last added spec", alias="spec-id") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-default-spec']): + raise ValueError("must be one of enum values ('set-default-spec')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetDefaultSpecUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetDefaultSpecUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_expression.py b/regtests/client/python/polaris/catalog/models/set_expression.py new file mode 100644 index 0000000000..ee3718c16a --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_expression.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.term import Term +from typing import Optional, Set +from typing_extensions import Self + +class SetExpression(BaseModel): + """ + SetExpression + """ # noqa: E501 + type: StrictStr + term: Term + values: List[Dict[str, Any]] + __properties: ClassVar[List[str]] = ["type", "term", "values"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetExpression from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of term + if self.term: + _dict['term'] = self.term.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetExpression from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "term": Term.from_dict(obj["term"]) if obj.get("term") is not None else None, + "values": obj.get("values") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_location_update.py b/regtests/client/python/polaris/catalog/models/set_location_update.py new file mode 100644 index 0000000000..cefc437bec --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_location_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetLocationUpdate(BaseUpdate): + """ + SetLocationUpdate + """ # noqa: E501 + action: StrictStr + location: StrictStr + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-location']): + raise ValueError("must be one of enum values ('set-location')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetLocationUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetLocationUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_partition_statistics_update.py b/regtests/client/python/polaris/catalog/models/set_partition_statistics_update.py new file mode 100644 index 0000000000..37073914a4 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_partition_statistics_update.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.partition_statistics_file import PartitionStatisticsFile +from typing import Optional, Set +from typing_extensions import Self + +class SetPartitionStatisticsUpdate(BaseUpdate): + """ + SetPartitionStatisticsUpdate + """ # noqa: E501 + action: StrictStr + partition_statistics: PartitionStatisticsFile = Field(alias="partition-statistics") + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-partition-statistics']): + raise ValueError("must be one of enum values ('set-partition-statistics')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetPartitionStatisticsUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetPartitionStatisticsUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_properties_update.py b/regtests/client/python/polaris/catalog/models/set_properties_update.py new file mode 100644 index 0000000000..abebb40cf7 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_properties_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetPropertiesUpdate(BaseUpdate): + """ + SetPropertiesUpdate + """ # noqa: E501 + action: StrictStr + updates: Dict[str, StrictStr] + __properties: ClassVar[List[str]] = ["action"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-properties']): + raise ValueError("must be one of enum values ('set-properties')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetPropertiesUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetPropertiesUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action"), + "updates": obj.get("updates") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_snapshot_ref_update.py b/regtests/client/python/polaris/catalog/models/set_snapshot_ref_update.py new file mode 100644 index 0000000000..5957ace507 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_snapshot_ref_update.py @@ -0,0 +1,128 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually.
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class SetSnapshotRefUpdate(BaseUpdate): + """ + SetSnapshotRefUpdate + """ # noqa: E501 + action: StrictStr + ref_name: StrictStr = Field(alias="ref-name") + type: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + max_ref_age_ms: Optional[StrictInt] = Field(default=None, alias="max-ref-age-ms") + max_snapshot_age_ms: Optional[StrictInt] = Field(default=None, alias="max-snapshot-age-ms") + min_snapshots_to_keep: Optional[StrictInt] = Field(default=None, alias="min-snapshots-to-keep") + __properties: ClassVar[List[str]] = ["action", "ref-name", "type", "snapshot-id", "max-ref-age-ms", "max-snapshot-age-ms", "min-snapshots-to-keep"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-snapshot-ref']): + raise ValueError("must be one of enum values ('set-snapshot-ref')") + return value + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['tag', 'branch']): + raise ValueError("must be one of enum values ('tag', 'branch')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls,
json_str: str) -> Optional[Self]: + """Create an instance of SetSnapshotRefUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetSnapshotRefUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action"), + "ref-name": obj.get("ref-name"), + "type": obj.get("type"), + "snapshot-id": obj.get("snapshot-id"), + "max-ref-age-ms": obj.get("max-ref-age-ms"), + "max-snapshot-age-ms": obj.get("max-snapshot-age-ms"), + "min-snapshots-to-keep": obj.get("min-snapshots-to-keep") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/set_statistics_update.py b/regtests/client/python/polaris/catalog/models/set_statistics_update.py new file mode 100644 index 0000000000..55ad85a2a4 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/set_statistics_update.py @@ -0,0 +1,113 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from polaris.catalog.models.statistics_file import StatisticsFile +from typing import Optional, Set +from typing_extensions import Self + +class SetStatisticsUpdate(BaseUpdate): + """ + SetStatisticsUpdate + """ # noqa: E501 + action: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + statistics: StatisticsFile + __properties: ClassVar[List[str]] = ["action", "snapshot-id", "statistics"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['set-statistics']): + raise ValueError("must be one of enum values ('set-statistics')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) ->
str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SetStatisticsUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of statistics + if self.statistics: + _dict['statistics'] = self.statistics.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SetStatisticsUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action"), + "snapshot-id": obj.get("snapshot-id"), + "statistics": StatisticsFile.from_dict(obj["statistics"]) if obj.get("statistics") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/snapshot.py b/regtests/client/python/polaris/catalog/models/snapshot.py new file mode 100644 index 0000000000..406b7e7cd7 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/snapshot.py @@ -0,0 +1,118 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.snapshot_summary import SnapshotSummary +from typing import Optional, Set +from typing_extensions import Self + +class Snapshot(BaseModel): + """ + Snapshot + """ # noqa: E501 + snapshot_id: StrictInt = Field(alias="snapshot-id") + parent_snapshot_id: Optional[StrictInt] = Field(default=None, alias="parent-snapshot-id") + sequence_number: Optional[StrictInt] = Field(default=None, alias="sequence-number") + timestamp_ms: StrictInt = Field(alias="timestamp-ms") + manifest_list: StrictStr = Field(description="Location of the snapshot's manifest list file", alias="manifest-list") + summary: SnapshotSummary + schema_id: Optional[StrictInt] = Field(default=None, alias="schema-id") + __properties: ClassVar[List[str]] = ["snapshot-id", "parent-snapshot-id", "sequence-number", "timestamp-ms", "manifest-list", "summary", "schema-id"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + 
protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of Snapshot from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of summary + if self.summary: + _dict['summary'] = self.summary.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of Snapshot from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "snapshot-id": obj.get("snapshot-id"), + "parent-snapshot-id": obj.get("parent-snapshot-id"), + "sequence-number": obj.get("sequence-number"), + "timestamp-ms": obj.get("timestamp-ms"), + "manifest-list": obj.get("manifest-list"), + "summary": SnapshotSummary.from_dict(obj["summary"]) if obj.get("summary") is not None else None, + "schema-id": obj.get("schema-id") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/snapshot_log_inner.py 
b/regtests/client/python/polaris/catalog/models/snapshot_log_inner.py new file mode 100644 index 0000000000..0ac7547901 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/snapshot_log_inner.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class SnapshotLogInner(BaseModel): + """ + SnapshotLogInner + """ # noqa: E501 + snapshot_id: StrictInt = Field(alias="snapshot-id") + timestamp_ms: StrictInt = Field(alias="timestamp-ms") + __properties: ClassVar[List[str]] = ["snapshot-id", "timestamp-ms"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SnapshotLogInner from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SnapshotLogInner from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "snapshot-id": obj.get("snapshot-id"), + "timestamp-ms": obj.get("timestamp-ms") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/snapshot_reference.py b/regtests/client/python/polaris/catalog/models/snapshot_reference.py new file mode 100644 index 0000000000..7a3ceb9806 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/snapshot_reference.py @@ -0,0 +1,117 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class SnapshotReference(BaseModel): + """ + SnapshotReference + """ # noqa: E501 + type: StrictStr + snapshot_id: StrictInt = Field(alias="snapshot-id") + max_ref_age_ms: Optional[StrictInt] = Field(default=None, alias="max-ref-age-ms") + max_snapshot_age_ms: Optional[StrictInt] = Field(default=None, alias="max-snapshot-age-ms") + min_snapshots_to_keep: Optional[StrictInt] = Field(default=None, alias="min-snapshots-to-keep") + __properties: ClassVar[List[str]] = ["type", "snapshot-id", "max-ref-age-ms", "max-snapshot-age-ms", "min-snapshots-to-keep"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['tag', 'branch']): + raise ValueError("must be one of enum values ('tag', 'branch')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SnapshotReference from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SnapshotReference from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "snapshot-id": obj.get("snapshot-id"), + "max-ref-age-ms": obj.get("max-ref-age-ms"), + "max-snapshot-age-ms": obj.get("max-snapshot-age-ms"), + "min-snapshots-to-keep": obj.get("min-snapshots-to-keep") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/snapshot_summary.py b/regtests/client/python/polaris/catalog/models/snapshot_summary.py new file mode 100644 index 0000000000..c26e86609d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/snapshot_summary.py @@ -0,0 +1,122 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class SnapshotSummary(BaseModel): + """ + SnapshotSummary + """ # noqa: E501 + operation: StrictStr + additional_properties: Dict[str, Any] = {} + __properties: ClassVar[List[str]] = ["operation"] + + @field_validator('operation') + def operation_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['append', 'replace', 'overwrite', 'delete']): + raise ValueError("must be one of enum values ('append', 'replace', 'overwrite', 'delete')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SnapshotSummary from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + * Fields in `self.additional_properties` are added to the output dict. + """ + excluded_fields: Set[str] = set([ + "additional_properties", + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # puts key-value pairs in additional_properties in the top level + if self.additional_properties is not None: + for _key, _value in self.additional_properties.items(): + _dict[_key] = _value + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SnapshotSummary from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "operation": obj.get("operation") + }) + # store additional fields in additional_properties + for _key in obj.keys(): + if _key not in cls.__properties: + _obj.additional_properties[_key] = obj.get(_key) + + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/sort_direction.py b/regtests/client/python/polaris/catalog/models/sort_direction.py new file mode 100644 index 0000000000..f1306e9227 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/sort_direction.py @@ -0,0 +1,52 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class SortDirection(str, Enum): + """ + SortDirection + """ + + """ + allowed enum values + """ + ASC = 'asc' + DESC = 'desc' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of SortDirection from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/catalog/models/sort_field.py b/regtests/client/python/polaris/catalog/models/sort_field.py new file mode 100644 index 0000000000..d162b22167 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/sort_field.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.null_order import NullOrder +from polaris.catalog.models.sort_direction import SortDirection +from typing import Optional, Set +from typing_extensions import Self + +class SortField(BaseModel): + """ + SortField + """ # noqa: E501 + source_id: StrictInt = Field(alias="source-id") + transform: StrictStr + direction: SortDirection + null_order: NullOrder = Field(alias="null-order") + __properties: ClassVar[List[str]] = ["source-id", "transform", "direction", "null-order"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return 
json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SortField from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SortField from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "source-id": obj.get("source-id"), + "transform": obj.get("transform"), + "direction": obj.get("direction"), + "null-order": obj.get("null-order") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/sort_order.py b/regtests/client/python/polaris/catalog/models/sort_order.py new file mode 100644 index 0000000000..8612e694ac --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/sort_order.py @@ -0,0 +1,114 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.sort_field import SortField +from typing import Optional, Set +from typing_extensions import Self + +class SortOrder(BaseModel): + """ + SortOrder + """ # noqa: E501 + order_id: StrictInt = Field(alias="order-id") + fields: List[SortField] + __properties: ClassVar[List[str]] = ["order-id", "fields"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SortOrder from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + * OpenAPI `readOnly` fields are excluded. + """ + excluded_fields: Set[str] = set([ + "order_id", + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in fields (list) + _items = [] + if self.fields: + for _item in self.fields: + if _item: + _items.append(_item.to_dict()) + _dict['fields'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SortOrder from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "order-id": obj.get("order-id"), + "fields": [SortField.from_dict(_item) for _item in obj["fields"]] if obj.get("fields") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/sql_view_representation.py b/regtests/client/python/polaris/catalog/models/sql_view_representation.py new file mode 100644 index 0000000000..fd8422b1be --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/sql_view_representation.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class SQLViewRepresentation(BaseModel): + """ + SQLViewRepresentation + """ # noqa: E501 + type: StrictStr + sql: StrictStr + dialect: StrictStr + __properties: ClassVar[List[str]] = ["type", "sql", "dialect"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of SQLViewRepresentation from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. 
Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of SQLViewRepresentation from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "sql": obj.get("sql"), + "dialect": obj.get("dialect") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/statistics_file.py b/regtests/client/python/polaris/catalog/models/statistics_file.py new file mode 100644 index 0000000000..ba500a90e0 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/statistics_file.py @@ -0,0 +1,118 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.blob_metadata import BlobMetadata +from typing import Optional, Set +from typing_extensions import Self + +class StatisticsFile(BaseModel): + """ + StatisticsFile + """ # noqa: E501 + snapshot_id: StrictInt = Field(alias="snapshot-id") + statistics_path: StrictStr = Field(alias="statistics-path") + file_size_in_bytes: StrictInt = Field(alias="file-size-in-bytes") + file_footer_size_in_bytes: StrictInt = Field(alias="file-footer-size-in-bytes") + blob_metadata: List[BlobMetadata] = Field(alias="blob-metadata") + __properties: ClassVar[List[str]] = ["snapshot-id", "statistics-path", "file-size-in-bytes", "file-footer-size-in-bytes", "blob-metadata"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of StatisticsFile from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in blob_metadata (list) + _items = [] + if self.blob_metadata: + for _item in self.blob_metadata: + if _item: + _items.append(_item.to_dict()) + _dict['blob-metadata'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of StatisticsFile from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "snapshot-id": obj.get("snapshot-id"), + "statistics-path": obj.get("statistics-path"), + "file-size-in-bytes": obj.get("file-size-in-bytes"), + "file-footer-size-in-bytes": obj.get("file-footer-size-in-bytes"), + "blob-metadata": [BlobMetadata.from_dict(_item) for _item in obj["blob-metadata"]] if obj.get("blob-metadata") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/struct_field.py b/regtests/client/python/polaris/catalog/models/struct_field.py new file mode 100644 index 0000000000..411962b82b --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/struct_field.py @@ -0,0 +1,116 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictBool, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class StructField(BaseModel): + """ + StructField + """ # noqa: E501 + id: StrictInt + name: StrictStr + type: Type + required: StrictBool + doc: Optional[StrictStr] = None + __properties: ClassVar[List[str]] = ["id", "name", "type", "required", "doc"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of StructField from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of type + if self.type: + _dict['type'] = self.type.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of StructField from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "id": obj.get("id"), + "name": obj.get("name"), + "type": Type.from_dict(obj["type"]) if obj.get("type") is not None else None, + "required": obj.get("required"), + "doc": obj.get("doc") + }) + return _obj + +from polaris.catalog.models.type import Type +# TODO: Rewrite to not use raise_errors +StructField.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/struct_type.py b/regtests/client/python/polaris/catalog/models/struct_type.py new file mode 100644 index 0000000000..5107cce271 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/struct_type.py @@ -0,0 +1,121 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class StructType(BaseModel): + """ + StructType + """ # noqa: E501 + type: StrictStr + fields: List[StructField] + __properties: ClassVar[List[str]] = ["type", "fields"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['struct']): + raise ValueError("must be one of enum values ('struct')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of StructType from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in fields (list) + _items = [] + if self.fields: + for _item in self.fields: + if _item: + _items.append(_item.to_dict()) + _dict['fields'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of StructType from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "fields": [StructField.from_dict(_item) for _item in obj["fields"]] if obj.get("fields") is not None else None + }) + return _obj + +from polaris.catalog.models.struct_field import StructField +# TODO: Rewrite to not use raise_errors +StructType.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/table_identifier.py b/regtests/client/python/polaris/catalog/models/table_identifier.py new file mode 100644 index 0000000000..6c25d2e92d --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/table_identifier.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class TableIdentifier(BaseModel): + """ + TableIdentifier + """ # noqa: E501 + namespace: List[StrictStr] = Field(description="Reference to one or more levels of a namespace") + name: StrictStr + __properties: ClassVar[List[str]] = ["namespace", "name"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of TableIdentifier from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of TableIdentifier from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "namespace": obj.get("namespace"), + "name": obj.get("name") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/table_metadata.py b/regtests/client/python/polaris/catalog/models/table_metadata.py new file mode 100644 index 0000000000..c55a0b96f4 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/table_metadata.py @@ -0,0 +1,220 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing_extensions import Annotated +from polaris.catalog.models.metadata_log_inner import MetadataLogInner +from polaris.catalog.models.model_schema import ModelSchema +from polaris.catalog.models.partition_spec import PartitionSpec +from polaris.catalog.models.partition_statistics_file import PartitionStatisticsFile +from polaris.catalog.models.snapshot import Snapshot +from polaris.catalog.models.snapshot_log_inner import SnapshotLogInner +from polaris.catalog.models.snapshot_reference import SnapshotReference +from polaris.catalog.models.sort_order import SortOrder +from polaris.catalog.models.statistics_file import StatisticsFile +from typing import Optional, Set +from typing_extensions import Self + +class TableMetadata(BaseModel): + """ + TableMetadata + """ # noqa: E501 + format_version: Annotated[int, Field(le=2, strict=True, ge=1)] = Field(alias="format-version") + table_uuid: StrictStr = Field(alias="table-uuid") + location: Optional[StrictStr] = None + last_updated_ms: Optional[StrictInt] = Field(default=None, alias="last-updated-ms") + properties: Optional[Dict[str, StrictStr]] = None + schemas: Optional[List[ModelSchema]] = None + current_schema_id: Optional[StrictInt] = Field(default=None, alias="current-schema-id") + last_column_id: Optional[StrictInt] = Field(default=None, alias="last-column-id") + partition_specs: Optional[List[PartitionSpec]] = Field(default=None, alias="partition-specs") + default_spec_id: Optional[StrictInt] = Field(default=None, alias="default-spec-id") + last_partition_id: Optional[StrictInt] = Field(default=None, alias="last-partition-id") + sort_orders: Optional[List[SortOrder]] = Field(default=None, alias="sort-orders") + default_sort_order_id: Optional[StrictInt] = 
Field(default=None, alias="default-sort-order-id") + snapshots: Optional[List[Snapshot]] = None + refs: Optional[Dict[str, SnapshotReference]] = None + current_snapshot_id: Optional[StrictInt] = Field(default=None, alias="current-snapshot-id") + last_sequence_number: Optional[StrictInt] = Field(default=None, alias="last-sequence-number") + snapshot_log: Optional[List[SnapshotLogInner]] = Field(default=None, alias="snapshot-log") + metadata_log: Optional[List[MetadataLogInner]] = Field(default=None, alias="metadata-log") + statistics_files: Optional[List[StatisticsFile]] = Field(default=None, alias="statistics-files") + partition_statistics_files: Optional[List[PartitionStatisticsFile]] = Field(default=None, alias="partition-statistics-files") + __properties: ClassVar[List[str]] = ["format-version", "table-uuid", "location", "last-updated-ms", "properties", "schemas", "current-schema-id", "last-column-id", "partition-specs", "default-spec-id", "last-partition-id", "sort-orders", "default-sort-order-id", "snapshots", "refs", "current-snapshot-id", "last-sequence-number", "snapshot-log", "metadata-log", "statistics-files", "partition-statistics-files"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of TableMetadata from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in schemas (list) + _items = [] + if self.schemas: + for _item in self.schemas: + if _item: + _items.append(_item.to_dict()) + _dict['schemas'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in partition_specs (list) + _items = [] + if self.partition_specs: + for _item in self.partition_specs: + if _item: + _items.append(_item.to_dict()) + _dict['partition-specs'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in sort_orders (list) + _items = [] + if self.sort_orders: + for _item in self.sort_orders: + if _item: + _items.append(_item.to_dict()) + _dict['sort-orders'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in snapshots (list) + _items = [] + if self.snapshots: + for _item in self.snapshots: + if _item: + _items.append(_item.to_dict()) + _dict['snapshots'] = _items + # override the default output from pydantic by calling `to_dict()` of each value in refs (dict) + _field_dict = {} + if self.refs: + for _key in self.refs: + if self.refs[_key]: + _field_dict[_key] = self.refs[_key].to_dict() + _dict['refs'] = _field_dict + # override the default output from pydantic by calling `to_dict()` of each item in snapshot_log (list) + _items = [] + if self.snapshot_log: + for _item in self.snapshot_log: + if _item: + _items.append(_item.to_dict()) + _dict['snapshot-log'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in 
metadata_log (list) + _items = [] + if self.metadata_log: + for _item in self.metadata_log: + if _item: + _items.append(_item.to_dict()) + _dict['metadata-log'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in statistics_files (list) + _items = [] + if self.statistics_files: + for _item in self.statistics_files: + if _item: + _items.append(_item.to_dict()) + _dict['statistics-files'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in partition_statistics_files (list) + _items = [] + if self.partition_statistics_files: + for _item in self.partition_statistics_files: + if _item: + _items.append(_item.to_dict()) + _dict['partition-statistics-files'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of TableMetadata from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "format-version": obj.get("format-version"), + "table-uuid": obj.get("table-uuid"), + "location": obj.get("location"), + "last-updated-ms": obj.get("last-updated-ms"), + "properties": obj.get("properties"), + "schemas": [ModelSchema.from_dict(_item) for _item in obj["schemas"]] if obj.get("schemas") is not None else None, + "current-schema-id": obj.get("current-schema-id"), + "last-column-id": obj.get("last-column-id"), + "partition-specs": [PartitionSpec.from_dict(_item) for _item in obj["partition-specs"]] if obj.get("partition-specs") is not None else None, + "default-spec-id": obj.get("default-spec-id"), + "last-partition-id": obj.get("last-partition-id"), + "sort-orders": [SortOrder.from_dict(_item) for _item in obj["sort-orders"]] if obj.get("sort-orders") is not None else None, + "default-sort-order-id": obj.get("default-sort-order-id"), + "snapshots": [Snapshot.from_dict(_item) for _item in obj["snapshots"]] if obj.get("snapshots") 
is not None else None, + "refs": dict( + (_k, SnapshotReference.from_dict(_v)) + for _k, _v in obj["refs"].items() + ) + if obj.get("refs") is not None + else None, + "current-snapshot-id": obj.get("current-snapshot-id"), + "last-sequence-number": obj.get("last-sequence-number"), + "snapshot-log": [SnapshotLogInner.from_dict(_item) for _item in obj["snapshot-log"]] if obj.get("snapshot-log") is not None else None, + "metadata-log": [MetadataLogInner.from_dict(_item) for _item in obj["metadata-log"]] if obj.get("metadata-log") is not None else None, + "statistics-files": [StatisticsFile.from_dict(_item) for _item in obj["statistics-files"]] if obj.get("statistics-files") is not None else None, + "partition-statistics-files": [PartitionStatisticsFile.from_dict(_item) for _item in obj["partition-statistics-files"]] if obj.get("partition-statistics-files") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/table_requirement.py b/regtests/client/python/polaris/catalog/models/table_requirement.py new file mode 100644 index 0000000000..087aa5d9dc --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/table_requirement.py @@ -0,0 +1,143 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from importlib import import_module +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List, Union +from typing import Optional, Set +from typing_extensions import Self + +from typing import TYPE_CHECKING +if TYPE_CHECKING: + from polaris.catalog.models.assert_create import AssertCreate + from polaris.catalog.models.assert_current_schema_id import AssertCurrentSchemaId + from polaris.catalog.models.assert_default_sort_order_id import AssertDefaultSortOrderId + from polaris.catalog.models.assert_default_spec_id import AssertDefaultSpecId + from polaris.catalog.models.assert_last_assigned_field_id import AssertLastAssignedFieldId + from polaris.catalog.models.assert_last_assigned_partition_id import AssertLastAssignedPartitionId + from polaris.catalog.models.assert_ref_snapshot_id import AssertRefSnapshotId + from polaris.catalog.models.assert_table_uuid import AssertTableUUID + +class TableRequirement(BaseModel): + """ + TableRequirement + """ # noqa: E501 + type: StrictStr + __properties: ClassVar[List[str]] = ["type"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'type' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'assert-create': 'AssertCreate','assert-current-schema-id': 'AssertCurrentSchemaId','assert-default-sort-order-id': 'AssertDefaultSortOrderId','assert-default-spec-id': 'AssertDefaultSpecId','assert-last-assigned-field-id': 
'AssertLastAssignedFieldId','assert-last-assigned-partition-id': 'AssertLastAssignedPartitionId','assert-ref-snapshot-id': 'AssertRefSnapshotId','assert-table-uuid': 'AssertTableUUID' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Union[AssertCreate, AssertCurrentSchemaId, AssertDefaultSortOrderId, AssertDefaultSpecId, AssertLastAssignedFieldId, AssertLastAssignedPartitionId, AssertRefSnapshotId, AssertTableUUID]]: + """Create an instance of TableRequirement from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[AssertCreate, AssertCurrentSchemaId, AssertDefaultSortOrderId, AssertDefaultSpecId, AssertLastAssignedFieldId, AssertLastAssignedPartitionId, AssertRefSnapshotId, AssertTableUUID]]: + """Create an instance of TableRequirement from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'AssertCreate': + return import_module("polaris.catalog.models.assert_create").AssertCreate.from_dict(obj) + if object_type == 'AssertCurrentSchemaId': + return import_module("polaris.catalog.models.assert_current_schema_id").AssertCurrentSchemaId.from_dict(obj) + if object_type == 'AssertDefaultSortOrderId': + return import_module("polaris.catalog.models.assert_default_sort_order_id").AssertDefaultSortOrderId.from_dict(obj) + if object_type == 'AssertDefaultSpecId': + return import_module("polaris.catalog.models.assert_default_spec_id").AssertDefaultSpecId.from_dict(obj) + if object_type == 'AssertLastAssignedFieldId': + return import_module("polaris.catalog.models.assert_last_assigned_field_id").AssertLastAssignedFieldId.from_dict(obj) + if object_type == 'AssertLastAssignedPartitionId': + return import_module("polaris.catalog.models.assert_last_assigned_partition_id").AssertLastAssignedPartitionId.from_dict(obj) + if object_type == 'AssertRefSnapshotId': + return import_module("polaris.catalog.models.assert_ref_snapshot_id").AssertRefSnapshotId.from_dict(obj) + if object_type == 'AssertTableUUID': + return import_module("polaris.catalog.models.assert_table_uuid").AssertTableUUID.from_dict(obj) + + raise ValueError("TableRequirement failed to lookup discriminator value from " + + json.dumps(obj) + ". 
Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/catalog/models/table_update.py b/regtests/client/python/polaris/catalog/models/table_update.py new file mode 100644 index 0000000000..9ca03eafdc --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/table_update.py @@ -0,0 +1,377 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +from inspect import getfullargspec +import json +import pprint +import re # noqa: F401 +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Optional +from polaris.catalog.models.add_partition_spec_update import AddPartitionSpecUpdate +from polaris.catalog.models.add_schema_update import AddSchemaUpdate +from polaris.catalog.models.add_snapshot_update import AddSnapshotUpdate +from polaris.catalog.models.add_sort_order_update import AddSortOrderUpdate +from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate +from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate +from polaris.catalog.models.remove_snapshot_ref_update import RemoveSnapshotRefUpdate +from polaris.catalog.models.remove_snapshots_update import RemoveSnapshotsUpdate +from polaris.catalog.models.remove_statistics_update import RemoveStatisticsUpdate +from polaris.catalog.models.set_current_schema_update import SetCurrentSchemaUpdate +from polaris.catalog.models.set_default_sort_order_update import SetDefaultSortOrderUpdate +from polaris.catalog.models.set_default_spec_update import SetDefaultSpecUpdate +from polaris.catalog.models.set_location_update import SetLocationUpdate +from polaris.catalog.models.set_properties_update import SetPropertiesUpdate +from polaris.catalog.models.set_snapshot_ref_update import SetSnapshotRefUpdate +from polaris.catalog.models.set_statistics_update import SetStatisticsUpdate +from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate +from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict +from typing_extensions import Literal, Self +from pydantic import Field + +TABLEUPDATE_ANY_OF_SCHEMAS = ["AddPartitionSpecUpdate", "AddSchemaUpdate", "AddSnapshotUpdate", "AddSortOrderUpdate", "AssignUUIDUpdate", "RemovePropertiesUpdate", "RemoveSnapshotRefUpdate", 
"RemoveSnapshotsUpdate", "RemoveStatisticsUpdate", "SetCurrentSchemaUpdate", "SetDefaultSortOrderUpdate", "SetDefaultSpecUpdate", "SetLocationUpdate", "SetPropertiesUpdate", "SetSnapshotRefUpdate", "SetStatisticsUpdate", "UpgradeFormatVersionUpdate"] + +class TableUpdate(BaseModel): + """ + TableUpdate + """ + + # data type: AssignUUIDUpdate + anyof_schema_1_validator: Optional[AssignUUIDUpdate] = None + # data type: UpgradeFormatVersionUpdate + anyof_schema_2_validator: Optional[UpgradeFormatVersionUpdate] = None + # data type: AddSchemaUpdate + anyof_schema_3_validator: Optional[AddSchemaUpdate] = None + # data type: SetCurrentSchemaUpdate + anyof_schema_4_validator: Optional[SetCurrentSchemaUpdate] = None + # data type: AddPartitionSpecUpdate + anyof_schema_5_validator: Optional[AddPartitionSpecUpdate] = None + # data type: SetDefaultSpecUpdate + anyof_schema_6_validator: Optional[SetDefaultSpecUpdate] = None + # data type: AddSortOrderUpdate + anyof_schema_7_validator: Optional[AddSortOrderUpdate] = None + # data type: SetDefaultSortOrderUpdate + anyof_schema_8_validator: Optional[SetDefaultSortOrderUpdate] = None + # data type: AddSnapshotUpdate + anyof_schema_9_validator: Optional[AddSnapshotUpdate] = None + # data type: SetSnapshotRefUpdate + anyof_schema_10_validator: Optional[SetSnapshotRefUpdate] = None + # data type: RemoveSnapshotsUpdate + anyof_schema_11_validator: Optional[RemoveSnapshotsUpdate] = None + # data type: RemoveSnapshotRefUpdate + anyof_schema_12_validator: Optional[RemoveSnapshotRefUpdate] = None + # data type: SetLocationUpdate + anyof_schema_13_validator: Optional[SetLocationUpdate] = None + # data type: SetPropertiesUpdate + anyof_schema_14_validator: Optional[SetPropertiesUpdate] = None + # data type: RemovePropertiesUpdate + anyof_schema_15_validator: Optional[RemovePropertiesUpdate] = None + # data type: SetStatisticsUpdate + anyof_schema_16_validator: Optional[SetStatisticsUpdate] = None + # data type: RemoveStatisticsUpdate + 
anyof_schema_17_validator: Optional[RemoveStatisticsUpdate] = None + if TYPE_CHECKING: + actual_instance: Optional[Union[AddPartitionSpecUpdate, AddSchemaUpdate, AddSnapshotUpdate, AddSortOrderUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, RemoveSnapshotRefUpdate, RemoveSnapshotsUpdate, RemoveStatisticsUpdate, SetCurrentSchemaUpdate, SetDefaultSortOrderUpdate, SetDefaultSpecUpdate, SetLocationUpdate, SetPropertiesUpdate, SetSnapshotRefUpdate, SetStatisticsUpdate, UpgradeFormatVersionUpdate]] = None + else: + actual_instance: Any = None + any_of_schemas: Set[str] = { "AddPartitionSpecUpdate", "AddSchemaUpdate", "AddSnapshotUpdate", "AddSortOrderUpdate", "AssignUUIDUpdate", "RemovePropertiesUpdate", "RemoveSnapshotRefUpdate", "RemoveSnapshotsUpdate", "RemoveStatisticsUpdate", "SetCurrentSchemaUpdate", "SetDefaultSortOrderUpdate", "SetDefaultSpecUpdate", "SetLocationUpdate", "SetPropertiesUpdate", "SetSnapshotRefUpdate", "SetStatisticsUpdate", "UpgradeFormatVersionUpdate" } + + model_config = { + "validate_assignment": True, + "protected_namespaces": (), + } + + discriminator_value_class_map: Dict[str, str] = { + } + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a positional argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_anyof(cls, v): + instance = TableUpdate.model_construct() + error_messages = [] + # validate data type: AssignUUIDUpdate + if not isinstance(v, AssignUUIDUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AssignUUIDUpdate`") + else: + return v + + # validate data type: UpgradeFormatVersionUpdate + if not isinstance(v, UpgradeFormatVersionUpdate): + error_messages.append(f"Error!
Input type `{type(v)}` is not `UpgradeFormatVersionUpdate`") + else: + return v + + # validate data type: AddSchemaUpdate + if not isinstance(v, AddSchemaUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AddSchemaUpdate`") + else: + return v + + # validate data type: SetCurrentSchemaUpdate + if not isinstance(v, SetCurrentSchemaUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetCurrentSchemaUpdate`") + else: + return v + + # validate data type: AddPartitionSpecUpdate + if not isinstance(v, AddPartitionSpecUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AddPartitionSpecUpdate`") + else: + return v + + # validate data type: SetDefaultSpecUpdate + if not isinstance(v, SetDefaultSpecUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetDefaultSpecUpdate`") + else: + return v + + # validate data type: AddSortOrderUpdate + if not isinstance(v, AddSortOrderUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AddSortOrderUpdate`") + else: + return v + + # validate data type: SetDefaultSortOrderUpdate + if not isinstance(v, SetDefaultSortOrderUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetDefaultSortOrderUpdate`") + else: + return v + + # validate data type: AddSnapshotUpdate + if not isinstance(v, AddSnapshotUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AddSnapshotUpdate`") + else: + return v + + # validate data type: SetSnapshotRefUpdate + if not isinstance(v, SetSnapshotRefUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetSnapshotRefUpdate`") + else: + return v + + # validate data type: RemoveSnapshotsUpdate + if not isinstance(v, RemoveSnapshotsUpdate): + error_messages.append(f"Error! 
Input type `{type(v)}` is not `RemoveSnapshotsUpdate`") + else: + return v + + # validate data type: RemoveSnapshotRefUpdate + if not isinstance(v, RemoveSnapshotRefUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `RemoveSnapshotRefUpdate`") + else: + return v + + # validate data type: SetLocationUpdate + if not isinstance(v, SetLocationUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetLocationUpdate`") + else: + return v + + # validate data type: SetPropertiesUpdate + if not isinstance(v, SetPropertiesUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetPropertiesUpdate`") + else: + return v + + # validate data type: RemovePropertiesUpdate + if not isinstance(v, RemovePropertiesUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `RemovePropertiesUpdate`") + else: + return v + + # validate data type: SetStatisticsUpdate + if not isinstance(v, SetStatisticsUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetStatisticsUpdate`") + else: + return v + + # validate data type: RemoveStatisticsUpdate + if not isinstance(v, RemoveStatisticsUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `RemoveStatisticsUpdate`") + else: + return v + + if error_messages: + # no match + raise ValueError("No match found when setting the actual_instance in TableUpdate with anyOf schemas: AddPartitionSpecUpdate, AddSchemaUpdate, AddSnapshotUpdate, AddSortOrderUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, RemoveSnapshotRefUpdate, RemoveSnapshotsUpdate, RemoveStatisticsUpdate, SetCurrentSchemaUpdate, SetDefaultSortOrderUpdate, SetDefaultSpecUpdate, SetLocationUpdate, SetPropertiesUpdate, SetSnapshotRefUpdate, SetStatisticsUpdate, UpgradeFormatVersionUpdate. 
Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + # anyof_schema_1_validator: Optional[AssignUUIDUpdate] = None + try: + instance.actual_instance = AssignUUIDUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_2_validator: Optional[UpgradeFormatVersionUpdate] = None + try: + instance.actual_instance = UpgradeFormatVersionUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_3_validator: Optional[AddSchemaUpdate] = None + try: + instance.actual_instance = AddSchemaUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_4_validator: Optional[SetCurrentSchemaUpdate] = None + try: + instance.actual_instance = SetCurrentSchemaUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_5_validator: Optional[AddPartitionSpecUpdate] = None + try: + instance.actual_instance = AddPartitionSpecUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_6_validator: Optional[SetDefaultSpecUpdate] = None + try: + instance.actual_instance = SetDefaultSpecUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_7_validator: Optional[AddSortOrderUpdate] = None + try: + instance.actual_instance = AddSortOrderUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + 
error_messages.append(str(e)) + # anyof_schema_8_validator: Optional[SetDefaultSortOrderUpdate] = None + try: + instance.actual_instance = SetDefaultSortOrderUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_9_validator: Optional[AddSnapshotUpdate] = None + try: + instance.actual_instance = AddSnapshotUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_10_validator: Optional[SetSnapshotRefUpdate] = None + try: + instance.actual_instance = SetSnapshotRefUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_11_validator: Optional[RemoveSnapshotsUpdate] = None + try: + instance.actual_instance = RemoveSnapshotsUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_12_validator: Optional[RemoveSnapshotRefUpdate] = None + try: + instance.actual_instance = RemoveSnapshotRefUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_13_validator: Optional[SetLocationUpdate] = None + try: + instance.actual_instance = SetLocationUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_14_validator: Optional[SetPropertiesUpdate] = None + try: + instance.actual_instance = SetPropertiesUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_15_validator: Optional[RemovePropertiesUpdate] = None + try: + instance.actual_instance = RemovePropertiesUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_16_validator: 
Optional[SetStatisticsUpdate] = None + try: + instance.actual_instance = SetStatisticsUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_17_validator: Optional[RemoveStatisticsUpdate] = None + try: + instance.actual_instance = RemoveStatisticsUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if error_messages: + # no match + raise ValueError("No match found when deserializing the JSON string into TableUpdate with anyOf schemas: AddPartitionSpecUpdate, AddSchemaUpdate, AddSnapshotUpdate, AddSortOrderUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, RemoveSnapshotRefUpdate, RemoveSnapshotsUpdate, RemoveStatisticsUpdate, SetCurrentSchemaUpdate, SetDefaultSortOrderUpdate, SetDefaultSpecUpdate, SetLocationUpdate, SetPropertiesUpdate, SetSnapshotRefUpdate, SetStatisticsUpdate, UpgradeFormatVersionUpdate. Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], AddPartitionSpecUpdate, AddSchemaUpdate, AddSnapshotUpdate, AddSortOrderUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, RemoveSnapshotRefUpdate, RemoveSnapshotsUpdate, RemoveStatisticsUpdate, SetCurrentSchemaUpdate, SetDefaultSortOrderUpdate, SetDefaultSpecUpdate, SetLocationUpdate, SetPropertiesUpdate, SetSnapshotRefUpdate, SetStatisticsUpdate, UpgradeFormatVersionUpdate]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and 
callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + + diff --git a/regtests/client/python/polaris/catalog/models/table_update_notification.py b/regtests/client/python/polaris/catalog/models/table_update_notification.py new file mode 100644 index 0000000000..61053b0bd6 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/table_update_notification.py @@ -0,0 +1,114 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.table_metadata import TableMetadata +from typing import Optional, Set +from typing_extensions import Self + +class TableUpdateNotification(BaseModel): + """ + TableUpdateNotification + """ # noqa: E501 + table_name: StrictStr = Field(alias="table-name") + timestamp: StrictInt + table_uuid: StrictStr = Field(alias="table-uuid") + metadata_location: StrictStr = Field(alias="metadata-location") + metadata: Optional[TableMetadata] = None + __properties: ClassVar[List[str]] = ["table-name", "timestamp", "table-uuid", "metadata-location", "metadata"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of TableUpdateNotification from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of metadata + if self.metadata: + _dict['metadata'] = self.metadata.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of TableUpdateNotification from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "table-name": obj.get("table-name"), + "timestamp": obj.get("timestamp"), + "table-uuid": obj.get("table-uuid"), + "metadata-location": obj.get("metadata-location"), + "metadata": TableMetadata.from_dict(obj["metadata"]) if obj.get("metadata") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/term.py b/regtests/client/python/polaris/catalog/models/term.py new file mode 100644 index 0000000000..68135ef6f5 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/term.py @@ -0,0 +1,155 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +import pprint +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Any, List, Optional +from polaris.catalog.models.transform_term import TransformTerm +from pydantic import StrictStr, Field +from typing import Union, List, Set, Optional, Dict +from typing_extensions import Literal, Self + +TERM_ONE_OF_SCHEMAS = ["TransformTerm", "str"] + +class Term(BaseModel): + """ + Term + """ + # data type: str + oneof_schema_1_validator: Optional[StrictStr] = None + # data type: TransformTerm + oneof_schema_2_validator: Optional[TransformTerm] = None + actual_instance: Optional[Union[TransformTerm, str]] = None + one_of_schemas: Set[str] = { "TransformTerm", "str" } + + model_config = ConfigDict( + validate_assignment=True, + protected_namespaces=(), + ) + + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a positional argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_oneof(cls, v): + instance = Term.model_construct() + error_messages = [] + match = 0 + # validate data type: str + try: + instance.oneof_schema_1_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: TransformTerm + if not isinstance(v, TransformTerm): + error_messages.append(f"Error!
Input type `{type(v)}` is not `TransformTerm`") + else: + match += 1 + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when setting `actual_instance` in Term with oneOf schemas: TransformTerm, str. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when setting `actual_instance` in Term with oneOf schemas: TransformTerm, str. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Union[str, Dict[str, Any]]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + match = 0 + + # deserialize data into str + try: + # validation + instance.oneof_schema_1_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_1_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into TransformTerm + try: + instance.actual_instance = TransformTerm.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when deserializing the JSON string into Term with oneOf schemas: TransformTerm, str. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when deserializing the JSON string into Term with oneOf schemas: TransformTerm, str. 
Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], TransformTerm, str]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + # primitive type + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + + diff --git a/regtests/client/python/polaris/catalog/models/timer_result.py b/regtests/client/python/polaris/catalog/models/timer_result.py new file mode 100644 index 0000000000..39f82396c9 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/timer_result.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class TimerResult(BaseModel): + """ + TimerResult + """ # noqa: E501 + time_unit: StrictStr = Field(alias="time-unit") + count: StrictInt + total_duration: StrictInt = Field(alias="total-duration") + __properties: ClassVar[List[str]] = ["time-unit", "count", "total-duration"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of TimerResult from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of TimerResult from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "time-unit": obj.get("time-unit"), + "count": obj.get("count"), + "total-duration": obj.get("total-duration") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/token_type.py b/regtests/client/python/polaris/catalog/models/token_type.py new file mode 100644 index 0000000000..734ea6c677 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/token_type.py @@ -0,0 +1,56 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class TokenType(str, Enum): + """ + Token type identifier, from RFC 8693 Section 3 See https://datatracker.ietf.org/doc/html/rfc8693#section-3 + """ + + """ + allowed enum values + """ + URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_ACCESS_TOKEN = 'urn:ietf:params:oauth:token-type:access_token' + URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_REFRESH_TOKEN = 'urn:ietf:params:oauth:token-type:refresh_token' + URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_ID_TOKEN = 'urn:ietf:params:oauth:token-type:id_token' + URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_SAML1 = 'urn:ietf:params:oauth:token-type:saml1' + URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_SAML2 = 'urn:ietf:params:oauth:token-type:saml2' + URN_COLON_IETF_COLON_PARAMS_COLON_OAUTH_COLON_TOKEN_MINUS_TYPE_COLON_JWT = 'urn:ietf:params:oauth:token-type:jwt' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of TokenType from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/catalog/models/transform_term.py b/regtests/client/python/polaris/catalog/models/transform_term.py new file mode 100644 index 0000000000..591538becd --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/transform_term.py @@ -0,0 +1,113 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class TransformTerm(BaseModel): + """ + TransformTerm + """ # noqa: E501 + type: StrictStr + transform: StrictStr + term: StrictStr + __properties: ClassVar[List[str]] = ["type", "transform", "term"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['transform']): + raise ValueError("must be one of enum values ('transform')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return 
json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of TransformTerm from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of TransformTerm from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "transform": obj.get("transform"), + "term": obj.get("term") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/type.py b/regtests/client/python/polaris/catalog/models/type.py new file mode 100644 index 0000000000..8e0a716a6c --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/type.py @@ -0,0 +1,185 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +import pprint +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Any, List, Optional +from pydantic import StrictStr, Field +from typing import Union, List, Set, Optional, Dict +from typing_extensions import Literal, Self + +TYPE_ONE_OF_SCHEMAS = ["ListType", "MapType", "StructType", "str"] + +class Type(BaseModel): + """ + Type + """ + # data type: str + oneof_schema_1_validator: Optional[StrictStr] = None + # data type: StructType + oneof_schema_2_validator: Optional[StructType] = None + # data type: ListType + oneof_schema_3_validator: Optional[ListType] = None + # data type: MapType + oneof_schema_4_validator: Optional[MapType] = None + actual_instance: Optional[Union[ListType, MapType, StructType, str]] = None + one_of_schemas: Set[str] = { "ListType", "MapType", "StructType", "str" } + + model_config = ConfigDict( + validate_assignment=True, + protected_namespaces=(), + ) + + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a positional argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_oneof(cls, v): + instance = Type.model_construct() + error_messages = [] + match = 0 + # validate data type: str + try: + 
instance.oneof_schema_1_validator = v + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # validate data type: StructType + if not isinstance(v, StructType): + error_messages.append(f"Error! Input type `{type(v)}` is not `StructType`") + else: + match += 1 + # validate data type: ListType + if not isinstance(v, ListType): + error_messages.append(f"Error! Input type `{type(v)}` is not `ListType`") + else: + match += 1 + # validate data type: MapType + if not isinstance(v, MapType): + error_messages.append(f"Error! Input type `{type(v)}` is not `MapType`") + else: + match += 1 + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when setting `actual_instance` in Type with oneOf schemas: ListType, MapType, StructType, str. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when setting `actual_instance` in Type with oneOf schemas: ListType, MapType, StructType, str. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Union[str, Dict[str, Any]]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + match = 0 + + # deserialize data into str + try: + # validation + instance.oneof_schema_1_validator = json.loads(json_str) + # assign value to actual_instance + instance.actual_instance = instance.oneof_schema_1_validator + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into StructType + try: + instance.actual_instance = StructType.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into ListType + try: + instance.actual_instance = ListType.from_json(json_str) + match += 1 + except 
(ValidationError, ValueError) as e: + error_messages.append(str(e)) + # deserialize data into MapType + try: + instance.actual_instance = MapType.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when deserializing the JSON string into Type with oneOf schemas: ListType, MapType, StructType, str. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when deserializing the JSON string into Type with oneOf schemas: ListType, MapType, StructType, str. Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], ListType, MapType, StructType, str]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + # primitive type + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + +from polaris.catalog.models.list_type import ListType +from polaris.catalog.models.map_type import MapType +from polaris.catalog.models.struct_type import StructType +# TODO: Rewrite to not use raise_errors +Type.model_rebuild(raise_errors=False) + diff --git a/regtests/client/python/polaris/catalog/models/unary_expression.py b/regtests/client/python/polaris/catalog/models/unary_expression.py new file mode 100644 index 
0000000000..c400a8f753 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/unary_expression.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
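The match-counting loop used by the generated `Type` oneOf wrapper above (try each candidate schema, then fail on zero or multiple matches) can be sketched without pydantic. `parse_one_of`, `parse_str`, and `parse_struct` are illustrative names under that assumption, not client API.

```python
import json

def parse_one_of(json_str, parsers):
    """Try every candidate parser; exactly one schema must accept the payload.

    Mirrors the generated oneOf dispatch: collect errors from rejecting
    parsers, then raise if no schema (or more than one) matched.
    """
    matches, errors = [], []
    for name, parse in parsers.items():
        try:
            matches.append((name, parse(json_str)))
        except ValueError as exc:
            errors.append(f"{name}: {exc}")
    if len(matches) > 1:
        raise ValueError("Multiple matches: " + ", ".join(name for name, _ in matches))
    if not matches:
        raise ValueError("No match. Details: " + "; ".join(errors))
    return matches[0][1]

def parse_str(json_str):
    value = json.loads(json_str)  # JSONDecodeError is a ValueError subclass
    if not isinstance(value, str):
        raise ValueError("not a primitive type string")
    return value

def parse_struct(json_str):
    value = json.loads(json_str)
    if not (isinstance(value, dict) and value.get("type") == "struct"):
        raise ValueError("not a struct")
    return value

# A primitive type name matches only the str branch.
assert parse_one_of('"long"', {"str": parse_str, "StructType": parse_struct}) == "long"
```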
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.term import Term +from typing import Optional, Set +from typing_extensions import Self + +class UnaryExpression(BaseModel): + """ + UnaryExpression + """ # noqa: E501 + type: StrictStr + term: Term + value: Dict[str, Any] + __properties: ClassVar[List[str]] = ["type", "term", "value"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UnaryExpression from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of term + if self.term: + _dict['term'] = self.term.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UnaryExpression from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "term": Term.from_dict(obj["term"]) if obj.get("term") is not None else None, + "value": obj.get("value") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/update_namespace_properties_request.py b/regtests/client/python/polaris/catalog/models/update_namespace_properties_request.py new file mode 100644 index 0000000000..27ccfe7f30 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/update_namespace_properties_request.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class UpdateNamespacePropertiesRequest(BaseModel): + """ + UpdateNamespacePropertiesRequest + """ # noqa: E501 + removals: Optional[List[StrictStr]] = None + updates: Optional[Dict[str, StrictStr]] = None + __properties: ClassVar[List[str]] = ["removals", "updates"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UpdateNamespacePropertiesRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpdateNamespacePropertiesRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "removals": obj.get("removals"), + "updates": obj.get("updates") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/update_namespace_properties_response.py b/regtests/client/python/polaris/catalog/models/update_namespace_properties_response.py new file mode 100644 index 0000000000..2c17214b9c --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/update_namespace_properties_response.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class UpdateNamespacePropertiesResponse(BaseModel): + """ + UpdateNamespacePropertiesResponse + """ # noqa: E501 + updated: List[StrictStr] = Field(description="List of property keys that were added or updated") + removed: List[StrictStr] = Field(description="List of properties that were removed") + missing: Optional[List[StrictStr]] = Field(default=None, description="List of properties requested for removal that were not found in the namespace's properties. Represents a partial success response. Server's do not need to implement this.") + __properties: ClassVar[List[str]] = ["updated", "removed", "missing"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UpdateNamespacePropertiesResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # set to None if missing (nullable) is None + # and model_fields_set contains the field + if self.missing is None and "missing" in self.model_fields_set: + _dict['missing'] = None + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpdateNamespacePropertiesResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "updated": obj.get("updated"), + "removed": obj.get("removed"), + "missing": obj.get("missing") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/upgrade_format_version_update.py b/regtests/client/python/polaris/catalog/models/upgrade_format_version_update.py new file mode 100644 index 0000000000..05e6789b3f --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/upgrade_format_version_update.py @@ -0,0 +1,111 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictInt, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from polaris.catalog.models.base_update import BaseUpdate +from typing import Optional, Set +from typing_extensions import Self + +class UpgradeFormatVersionUpdate(BaseUpdate): + """ + UpgradeFormatVersionUpdate + """ # noqa: E501 + action: StrictStr + format_version: StrictInt = Field(alias="format-version") + __properties: ClassVar[List[str]] = ["action", "format-version"] + + @field_validator('action') + def action_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['upgrade-format-version']): + raise ValueError("must be one of enum values ('upgrade-format-version')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UpgradeFormatVersionUpdate from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. 
Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpgradeFormatVersionUpdate from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "action": obj.get("action"), + "format-version": obj.get("format-version") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/value_map.py b/regtests/client/python/polaris/catalog/models/value_map.py new file mode 100644 index 0000000000..4e27b8face --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/value_map.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue +from typing import Optional, Set +from typing_extensions import Self + +class ValueMap(BaseModel): + """ + ValueMap + """ # noqa: E501 + keys: Optional[List[StrictInt]] = Field(default=None, description="List of integer column ids for each corresponding value") + values: Optional[List[PrimitiveTypeValue]] = Field(default=None, description="List of primitive type values, matched to 'keys' by index") + __properties: ClassVar[List[str]] = ["keys", "values"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ValueMap from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in values (list) + _items = [] + if self.values: + for _item in self.values: + if _item: + _items.append(_item.to_dict()) + _dict['values'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ValueMap from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "keys": obj.get("keys"), + "values": [PrimitiveTypeValue.from_dict(_item) for _item in obj["values"]] if obj.get("values") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/view_history_entry.py b/regtests/client/python/polaris/catalog/models/view_history_entry.py new file mode 100644 index 0000000000..432e9dc5bd --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/view_history_entry.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class ViewHistoryEntry(BaseModel): + """ + ViewHistoryEntry + """ # noqa: E501 + version_id: StrictInt = Field(alias="version-id") + timestamp_ms: StrictInt = Field(alias="timestamp-ms") + __properties: ClassVar[List[str]] = ["version-id", "timestamp-ms"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ViewHistoryEntry from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ViewHistoryEntry from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "version-id": obj.get("version-id"), + "timestamp-ms": obj.get("timestamp-ms") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/view_metadata.py b/regtests/client/python/polaris/catalog/models/view_metadata.py new file mode 100644 index 0000000000..4ac6cd2d3f --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/view_metadata.py @@ -0,0 +1,141 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing_extensions import Annotated +from polaris.catalog.models.model_schema import ModelSchema +from polaris.catalog.models.view_history_entry import ViewHistoryEntry +from polaris.catalog.models.view_version import ViewVersion +from typing import Optional, Set +from typing_extensions import Self + +class ViewMetadata(BaseModel): + """ + ViewMetadata + """ # noqa: E501 + view_uuid: StrictStr = Field(alias="view-uuid") + format_version: Annotated[int, Field(le=1, strict=True, ge=1)] = Field(alias="format-version") + location: StrictStr + current_version_id: StrictInt = Field(alias="current-version-id") + versions: List[ViewVersion] + version_log: List[ViewHistoryEntry] = Field(alias="version-log") + schemas: List[ModelSchema] + properties: Optional[Dict[str, StrictStr]] = None + __properties: ClassVar[List[str]] = ["view-uuid", "format-version", "location", "current-version-id", "versions", "version-log", "schemas", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ViewMetadata from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in versions (list) + _items = [] + if self.versions: + for _item in self.versions: + if _item: + _items.append(_item.to_dict()) + _dict['versions'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in version_log (list) + _items = [] + if self.version_log: + for _item in self.version_log: + if _item: + _items.append(_item.to_dict()) + _dict['version-log'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in schemas (list) + _items = [] + if self.schemas: + for _item in self.schemas: + if _item: + _items.append(_item.to_dict()) + _dict['schemas'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ViewMetadata from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "view-uuid": obj.get("view-uuid"), + "format-version": obj.get("format-version"), + "location": obj.get("location"), + "current-version-id": obj.get("current-version-id"), + "versions": [ViewVersion.from_dict(_item) for _item in obj["versions"]] if obj.get("versions") is not None else None, + "version-log": [ViewHistoryEntry.from_dict(_item) for _item in obj["version-log"]] if obj.get("version-log") is not None else None, + "schemas": [ModelSchema.from_dict(_item) for _item in obj["schemas"]] if obj.get("schemas") is not None else None, + "properties": obj.get("properties") + }) 
+ return _obj + + diff --git a/regtests/client/python/polaris/catalog/models/view_representation.py b/regtests/client/python/polaris/catalog/models/view_representation.py new file mode 100644 index 0000000000..68ed5ad528 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/view_representation.py @@ -0,0 +1,138 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import json +import pprint +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Any, List, Optional +from polaris.catalog.models.sql_view_representation import SQLViewRepresentation +from pydantic import StrictStr, Field +from typing import Union, List, Set, Optional, Dict +from typing_extensions import Literal, Self + +VIEWREPRESENTATION_ONE_OF_SCHEMAS = ["SQLViewRepresentation"] + +class ViewRepresentation(BaseModel): + """ + ViewRepresentation + """ + # data type: SQLViewRepresentation + oneof_schema_1_validator: Optional[SQLViewRepresentation] = None + actual_instance: Optional[Union[SQLViewRepresentation]] = None + one_of_schemas: Set[str] = { "SQLViewRepresentation" } + + model_config = ConfigDict( + validate_assignment=True, + protected_namespaces=(), + ) + + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a position argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a position argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + @field_validator('actual_instance') + def actual_instance_must_validate_oneof(cls, v): + instance = ViewRepresentation.model_construct() + error_messages = [] + match = 0 + # validate data type: SQLViewRepresentation + if not isinstance(v, SQLViewRepresentation): + error_messages.append(f"Error! Input type `{type(v)}` is not `SQLViewRepresentation`") + else: + match += 1 + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when setting `actual_instance` in ViewRepresentation with oneOf schemas: SQLViewRepresentation. 
Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when setting `actual_instance` in ViewRepresentation with oneOf schemas: SQLViewRepresentation. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Union[str, Dict[str, Any]]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + match = 0 + + # deserialize data into SQLViewRepresentation + try: + instance.actual_instance = SQLViewRepresentation.from_json(json_str) + match += 1 + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if match > 1: + # more than 1 match + raise ValueError("Multiple matches found when deserializing the JSON string into ViewRepresentation with oneOf schemas: SQLViewRepresentation. Details: " + ", ".join(error_messages)) + elif match == 0: + # no match + raise ValueError("No match found when deserializing the JSON string into ViewRepresentation with oneOf schemas: SQLViewRepresentation. 
Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], SQLViewRepresentation]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + # primitive type + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return pprint.pformat(self.model_dump()) + + diff --git a/regtests/client/python/polaris/catalog/models/view_requirement.py b/regtests/client/python/polaris/catalog/models/view_requirement.py new file mode 100644 index 0000000000..69645ffb09 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/view_requirement.py @@ -0,0 +1,122 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from importlib import import_module +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List, Union +from typing import Optional, Set +from typing_extensions import Self + +from typing import TYPE_CHECKING +if TYPE_CHECKING: + from polaris.catalog.models.assert_view_uuid import AssertViewUUID + +class ViewRequirement(BaseModel): + """ + ViewRequirement + """ # noqa: E501 + type: StrictStr + __properties: ClassVar[List[str]] = ["type"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'type' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'assert-view-uuid': 'AssertViewUUID' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> 
Optional[Union[AssertViewUUID]]: + """Create an instance of ViewRequirement from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[AssertViewUUID]]: + """Create an instance of ViewRequirement from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'AssertViewUUID': + return import_module("polaris.catalog.models.assert_view_uuid").AssertViewUUID.from_dict(obj) + + raise ValueError("ViewRequirement failed to lookup discriminator value from " + + json.dumps(obj) + ". Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/catalog/models/view_update.py b/regtests/client/python/polaris/catalog/models/view_update.py new file mode 100644 index 0000000000..74fb2a834a --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/view_update.py @@ -0,0 +1,242 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +from inspect import getfullargspec +import json +import pprint +import re # noqa: F401 +from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator +from typing import Optional +from polaris.catalog.models.add_schema_update import AddSchemaUpdate +from polaris.catalog.models.add_view_version_update import AddViewVersionUpdate +from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate +from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate +from polaris.catalog.models.set_current_view_version_update import SetCurrentViewVersionUpdate +from polaris.catalog.models.set_location_update import SetLocationUpdate +from polaris.catalog.models.set_properties_update import SetPropertiesUpdate +from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate +from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict +from typing_extensions import Literal, Self +from pydantic import Field + +VIEWUPDATE_ANY_OF_SCHEMAS = ["AddSchemaUpdate", "AddViewVersionUpdate", "AssignUUIDUpdate", "RemovePropertiesUpdate", 
"SetCurrentViewVersionUpdate", "SetLocationUpdate", "SetPropertiesUpdate", "UpgradeFormatVersionUpdate"] + +class ViewUpdate(BaseModel): + """ + ViewUpdate + """ + + # data type: AssignUUIDUpdate + anyof_schema_1_validator: Optional[AssignUUIDUpdate] = None + # data type: UpgradeFormatVersionUpdate + anyof_schema_2_validator: Optional[UpgradeFormatVersionUpdate] = None + # data type: AddSchemaUpdate + anyof_schema_3_validator: Optional[AddSchemaUpdate] = None + # data type: SetLocationUpdate + anyof_schema_4_validator: Optional[SetLocationUpdate] = None + # data type: SetPropertiesUpdate + anyof_schema_5_validator: Optional[SetPropertiesUpdate] = None + # data type: RemovePropertiesUpdate + anyof_schema_6_validator: Optional[RemovePropertiesUpdate] = None + # data type: AddViewVersionUpdate + anyof_schema_7_validator: Optional[AddViewVersionUpdate] = None + # data type: SetCurrentViewVersionUpdate + anyof_schema_8_validator: Optional[SetCurrentViewVersionUpdate] = None + if TYPE_CHECKING: + actual_instance: Optional[Union[AddSchemaUpdate, AddViewVersionUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, SetCurrentViewVersionUpdate, SetLocationUpdate, SetPropertiesUpdate, UpgradeFormatVersionUpdate]] = None + else: + actual_instance: Any = None + any_of_schemas: Set[str] = { "AddSchemaUpdate", "AddViewVersionUpdate", "AssignUUIDUpdate", "RemovePropertiesUpdate", "SetCurrentViewVersionUpdate", "SetLocationUpdate", "SetPropertiesUpdate", "UpgradeFormatVersionUpdate" } + + model_config = { + "validate_assignment": True, + "protected_namespaces": (), + } + + discriminator_value_class_map: Dict[str, str] = { + } + + def __init__(self, *args, **kwargs) -> None: + if args: + if len(args) > 1: + raise ValueError("If a position argument is used, only 1 is allowed to set `actual_instance`") + if kwargs: + raise ValueError("If a position argument is used, keyword arguments cannot be used.") + super().__init__(actual_instance=args[0]) + else: + super().__init__(**kwargs) + + 
@field_validator('actual_instance') + def actual_instance_must_validate_anyof(cls, v): + instance = ViewUpdate.model_construct() + error_messages = [] + # validate data type: AssignUUIDUpdate + if not isinstance(v, AssignUUIDUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AssignUUIDUpdate`") + else: + return v + + # validate data type: UpgradeFormatVersionUpdate + if not isinstance(v, UpgradeFormatVersionUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `UpgradeFormatVersionUpdate`") + else: + return v + + # validate data type: AddSchemaUpdate + if not isinstance(v, AddSchemaUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AddSchemaUpdate`") + else: + return v + + # validate data type: SetLocationUpdate + if not isinstance(v, SetLocationUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetLocationUpdate`") + else: + return v + + # validate data type: SetPropertiesUpdate + if not isinstance(v, SetPropertiesUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `SetPropertiesUpdate`") + else: + return v + + # validate data type: RemovePropertiesUpdate + if not isinstance(v, RemovePropertiesUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `RemovePropertiesUpdate`") + else: + return v + + # validate data type: AddViewVersionUpdate + if not isinstance(v, AddViewVersionUpdate): + error_messages.append(f"Error! Input type `{type(v)}` is not `AddViewVersionUpdate`") + else: + return v + + # validate data type: SetCurrentViewVersionUpdate + if not isinstance(v, SetCurrentViewVersionUpdate): + error_messages.append(f"Error! 
Input type `{type(v)}` is not `SetCurrentViewVersionUpdate`") + else: + return v + + if error_messages: + # no match + raise ValueError("No match found when setting the actual_instance in ViewUpdate with anyOf schemas: AddSchemaUpdate, AddViewVersionUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, SetCurrentViewVersionUpdate, SetLocationUpdate, SetPropertiesUpdate, UpgradeFormatVersionUpdate. Details: " + ", ".join(error_messages)) + else: + return v + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Self: + return cls.from_json(json.dumps(obj)) + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Returns the object represented by the json string""" + instance = cls.model_construct() + error_messages = [] + # anyof_schema_1_validator: Optional[AssignUUIDUpdate] = None + try: + instance.actual_instance = AssignUUIDUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_2_validator: Optional[UpgradeFormatVersionUpdate] = None + try: + instance.actual_instance = UpgradeFormatVersionUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_3_validator: Optional[AddSchemaUpdate] = None + try: + instance.actual_instance = AddSchemaUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_4_validator: Optional[SetLocationUpdate] = None + try: + instance.actual_instance = SetLocationUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_5_validator: Optional[SetPropertiesUpdate] = None + try: + instance.actual_instance = SetPropertiesUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_6_validator: Optional[RemovePropertiesUpdate] = None + 
try: + instance.actual_instance = RemovePropertiesUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_7_validator: Optional[AddViewVersionUpdate] = None + try: + instance.actual_instance = AddViewVersionUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + # anyof_schema_8_validator: Optional[SetCurrentViewVersionUpdate] = None + try: + instance.actual_instance = SetCurrentViewVersionUpdate.from_json(json_str) + return instance + except (ValidationError, ValueError) as e: + error_messages.append(str(e)) + + if error_messages: + # no match + raise ValueError("No match found when deserializing the JSON string into ViewUpdate with anyOf schemas: AddSchemaUpdate, AddViewVersionUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, SetCurrentViewVersionUpdate, SetLocationUpdate, SetPropertiesUpdate, UpgradeFormatVersionUpdate. Details: " + ", ".join(error_messages)) + else: + return instance + + def to_json(self) -> str: + """Returns the JSON representation of the actual instance""" + if self.actual_instance is None: + return "null" + + if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json): + return self.actual_instance.to_json() + else: + return json.dumps(self.actual_instance) + + def to_dict(self) -> Optional[Union[Dict[str, Any], AddSchemaUpdate, AddViewVersionUpdate, AssignUUIDUpdate, RemovePropertiesUpdate, SetCurrentViewVersionUpdate, SetLocationUpdate, SetPropertiesUpdate, UpgradeFormatVersionUpdate]]: + """Returns the dict representation of the actual instance""" + if self.actual_instance is None: + return None + + if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict): + return self.actual_instance.to_dict() + else: + return self.actual_instance + + def to_str(self) -> str: + """Returns the string representation of the actual instance""" + return 
pprint.pformat(self.model_dump()) + + diff --git a/regtests/client/python/polaris/catalog/models/view_version.py b/regtests/client/python/polaris/catalog/models/view_version.py new file mode 100644 index 0000000000..4e5fa47735 --- /dev/null +++ b/regtests/client/python/polaris/catalog/models/view_version.py @@ -0,0 +1,122 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.catalog.models.view_representation import ViewRepresentation +from typing import Optional, Set +from typing_extensions import Self + +class ViewVersion(BaseModel): + """ + ViewVersion + """ # noqa: E501 + version_id: StrictInt = Field(alias="version-id") + timestamp_ms: StrictInt = Field(alias="timestamp-ms") + schema_id: StrictInt = Field(description="Schema ID to set as current, or -1 to set last added schema", alias="schema-id") + summary: Dict[str, StrictStr] + representations: List[ViewRepresentation] + default_catalog: Optional[StrictStr] = Field(default=None, alias="default-catalog") + default_namespace: List[StrictStr] = Field(description="Reference to one or more levels of a namespace", alias="default-namespace") + __properties: ClassVar[List[str]] = ["version-id", "timestamp-ms", "schema-id", "summary", "representations", "default-catalog", "default-namespace"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ViewVersion from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in representations (list) + _items = [] + if self.representations: + for _item in self.representations: + if _item: + _items.append(_item.to_dict()) + _dict['representations'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ViewVersion from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "version-id": obj.get("version-id"), + "timestamp-ms": obj.get("timestamp-ms"), + "schema-id": obj.get("schema-id"), + "summary": obj.get("summary"), + "representations": [ViewRepresentation.from_dict(_item) for _item in obj["representations"]] if obj.get("representations") is not None else None, + "default-catalog": obj.get("default-catalog"), + "default-namespace": obj.get("default-namespace") + }) + return _obj + + diff --git a/regtests/client/python/polaris/catalog/py.typed b/regtests/client/python/polaris/catalog/py.typed new file mode 100644 index 0000000000..e69de29bb2 diff --git a/regtests/client/python/polaris/catalog/rest.py b/regtests/client/python/polaris/catalog/rest.py new file mode 100644 index 0000000000..7d1b969c2c --- /dev/null +++ b/regtests/client/python/polaris/catalog/rest.py @@ -0,0 +1,272 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import io +import json +import re +import ssl + +import urllib3 + +from polaris.catalog.exceptions import ApiException, ApiValueError + +SUPPORTED_SOCKS_PROXIES = {"socks5", "socks5h", "socks4", "socks4a"} +RESTResponseType = urllib3.HTTPResponse + + +def is_socks_proxy_url(url): + if url is None: + return False + split_section = url.split("://") + if len(split_section) < 2: + return False + else: + return split_section[0].lower() in SUPPORTED_SOCKS_PROXIES + + +class RESTResponse(io.IOBase): + + def __init__(self, resp) -> None: + self.response = resp + self.status = resp.status + self.reason = resp.reason + self.data = None + + def read(self): + if self.data is None: + self.data = self.response.data + return self.data + + def getheaders(self): + """Returns a dictionary of the response headers.""" + return self.response.headers + + def getheader(self, name, default=None): + """Returns a given response header.""" + return self.response.headers.get(name, default) + + +class RESTClientObject: + + def __init__(self, configuration) -> None: + # urllib3.PoolManager will pass all kw parameters to connectionpool + # 
https://github.com/shazow/urllib3/blob/f9409436f83aeb79fbaf090181cd81b784f1b8ce/urllib3/poolmanager.py#L75 # noqa: E501 + # https://github.com/shazow/urllib3/blob/f9409436f83aeb79fbaf090181cd81b784f1b8ce/urllib3/connectionpool.py#L680 # noqa: E501 + # Custom SSL certificates and client certificates: http://urllib3.readthedocs.io/en/latest/advanced-usage.html # noqa: E501 + + # cert_reqs + if configuration.verify_ssl: + cert_reqs = ssl.CERT_REQUIRED + else: + cert_reqs = ssl.CERT_NONE + + pool_args = { + "cert_reqs": cert_reqs, + "ca_certs": configuration.ssl_ca_cert, + "cert_file": configuration.cert_file, + "key_file": configuration.key_file, + } + if configuration.assert_hostname is not None: + pool_args['assert_hostname'] = ( + configuration.assert_hostname + ) + + if configuration.retries is not None: + pool_args['retries'] = configuration.retries + + if configuration.tls_server_name: + pool_args['server_hostname'] = configuration.tls_server_name + + + if configuration.socket_options is not None: + pool_args['socket_options'] = configuration.socket_options + + if configuration.connection_pool_maxsize is not None: + pool_args['maxsize'] = configuration.connection_pool_maxsize + + # https pool manager + self.pool_manager: urllib3.PoolManager + + if configuration.proxy: + if is_socks_proxy_url(configuration.proxy): + from urllib3.contrib.socks import SOCKSProxyManager + pool_args["proxy_url"] = configuration.proxy + pool_args["headers"] = configuration.proxy_headers + self.pool_manager = SOCKSProxyManager(**pool_args) + else: + pool_args["proxy_url"] = configuration.proxy + pool_args["proxy_headers"] = configuration.proxy_headers + self.pool_manager = urllib3.ProxyManager(**pool_args) + else: + self.pool_manager = urllib3.PoolManager(**pool_args) + + def request( + self, + method, + url, + headers=None, + body=None, + post_params=None, + _request_timeout=None + ): + """Perform requests. 
+ + :param method: http request method + :param url: http request url + :param headers: http request headers + :param body: request json body, for `application/json` + :param post_params: request post parameters, + `application/x-www-form-urlencoded` + and `multipart/form-data` + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + """ + method = method.upper() + assert method in [ + 'GET', + 'HEAD', + 'DELETE', + 'POST', + 'PUT', + 'PATCH', + 'OPTIONS' + ] + + if post_params and body: + raise ApiValueError( + "body parameter cannot be used with post_params parameter." + ) + + post_params = post_params or {} + headers = headers or {} + + timeout = None + if _request_timeout: + if isinstance(_request_timeout, (int, float)): + timeout = urllib3.Timeout(total=_request_timeout) + elif ( + isinstance(_request_timeout, tuple) + and len(_request_timeout) == 2 + ): + timeout = urllib3.Timeout( + connect=_request_timeout[0], + read=_request_timeout[1] + ) + + try: + # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` + if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: + + # no content type provided or payload is json + content_type = headers.get('Content-Type') + if ( + not content_type + or re.search('json', content_type, re.IGNORECASE) + ): + request_body = None + if body is not None: + request_body = json.dumps(body) + r = self.pool_manager.request( + method, + url, + body=request_body, + timeout=timeout, + headers=headers, + preload_content=False + ) + elif content_type == 'application/x-www-form-urlencoded': + r = self.pool_manager.request( + method, + url, + fields=post_params, + encode_multipart=False, + timeout=timeout, + headers=headers, + preload_content=False + ) + elif content_type == 'multipart/form-data': + # must del headers['Content-Type'], or the correct + # Content-Type which generated by urllib3 will be + # 
overwritten. + del headers['Content-Type'] + # Ensures that dict objects are serialized + post_params = [(a, json.dumps(b)) if isinstance(b, dict) else (a,b) for a, b in post_params] + r = self.pool_manager.request( + method, + url, + fields=post_params, + encode_multipart=True, + timeout=timeout, + headers=headers, + preload_content=False + ) + # Pass a `string` parameter directly in the body to support + # other content types than JSON when `body` argument is + # provided in serialized form. + elif isinstance(body, str) or isinstance(body, bytes): + r = self.pool_manager.request( + method, + url, + body=body, + timeout=timeout, + headers=headers, + preload_content=False + ) + elif headers['Content-Type'] == 'text/plain' and isinstance(body, bool): + request_body = "true" if body else "false" + r = self.pool_manager.request( + method, + url, + body=request_body, + preload_content=False, + timeout=timeout, + headers=headers) + else: + # Cannot generate the request from given parameters + msg = """Cannot prepare a request message for provided + arguments. Please check that your arguments match + declared content type.""" + raise ApiException(status=0, reason=msg) + # For `GET`, `HEAD` + else: + r = self.pool_manager.request( + method, + url, + fields={}, + timeout=timeout, + headers=headers, + preload_content=False + ) + except urllib3.exceptions.SSLError as e: + msg = "\n".join([type(e).__name__, str(e)]) + raise ApiException(status=0, reason=msg) + + return RESTResponse(r) diff --git a/regtests/client/python/polaris/management/__init__.py b/regtests/client/python/polaris/management/__init__.py new file mode 100644 index 0000000000..aca9584611 --- /dev/null +++ b/regtests/client/python/polaris/management/__init__.py @@ -0,0 +1,88 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +# flake8: noqa + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +__version__ = "1.0.0" + +# import apis into sdk package +from polaris.management.api.polaris_default_api import PolarisDefaultApi + +# import ApiClient +from polaris.management.api_response import ApiResponse +from polaris.management.api_client import ApiClient +from polaris.management.configuration import Configuration +from polaris.management.exceptions import OpenApiException +from polaris.management.exceptions import ApiTypeError +from polaris.management.exceptions import ApiValueError +from polaris.management.exceptions import ApiKeyError +from polaris.management.exceptions import ApiAttributeError +from polaris.management.exceptions import ApiException + +# import models into sdk package +from polaris.management.models.add_grant_request import AddGrantRequest +from polaris.management.models.aws_storage_config_info import AwsStorageConfigInfo +from polaris.management.models.azure_storage_config_info import AzureStorageConfigInfo +from polaris.management.models.catalog import Catalog +from polaris.management.models.catalog_grant import CatalogGrant +from polaris.management.models.catalog_privilege import CatalogPrivilege +from polaris.management.models.catalog_properties import CatalogProperties +from 
polaris.management.models.catalog_role import CatalogRole +from polaris.management.models.catalog_roles import CatalogRoles +from polaris.management.models.catalogs import Catalogs +from polaris.management.models.create_catalog_request import CreateCatalogRequest +from polaris.management.models.create_catalog_role_request import CreateCatalogRoleRequest +from polaris.management.models.create_principal_request import CreatePrincipalRequest +from polaris.management.models.create_principal_role_request import CreatePrincipalRoleRequest +from polaris.management.models.external_catalog import ExternalCatalog +from polaris.management.models.file_storage_config_info import FileStorageConfigInfo +from polaris.management.models.gcp_storage_config_info import GcpStorageConfigInfo +from polaris.management.models.grant_catalog_role_request import GrantCatalogRoleRequest +from polaris.management.models.grant_principal_role_request import GrantPrincipalRoleRequest +from polaris.management.models.grant_resource import GrantResource +from polaris.management.models.grant_resources import GrantResources +from polaris.management.models.namespace_grant import NamespaceGrant +from polaris.management.models.namespace_privilege import NamespacePrivilege +from polaris.management.models.polaris_catalog import PolarisCatalog +from polaris.management.models.principal import Principal +from polaris.management.models.principal_role import PrincipalRole +from polaris.management.models.principal_roles import PrincipalRoles +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials +from polaris.management.models.principal_with_credentials_credentials import PrincipalWithCredentialsCredentials +from polaris.management.models.principals import Principals +from polaris.management.models.revoke_grant_request import RevokeGrantRequest +from polaris.management.models.storage_config_info import StorageConfigInfo +from polaris.management.models.table_grant import 
TableGrant +from polaris.management.models.table_privilege import TablePrivilege +from polaris.management.models.update_catalog_request import UpdateCatalogRequest +from polaris.management.models.update_catalog_role_request import UpdateCatalogRoleRequest +from polaris.management.models.update_principal_request import UpdatePrincipalRequest +from polaris.management.models.update_principal_role_request import UpdatePrincipalRoleRequest +from polaris.management.models.view_grant import ViewGrant +from polaris.management.models.view_privilege import ViewPrivilege diff --git a/regtests/client/python/polaris/management/api/__init__.py b/regtests/client/python/polaris/management/api/__init__.py new file mode 100644 index 0000000000..9856835dad --- /dev/null +++ b/regtests/client/python/polaris/management/api/__init__.py @@ -0,0 +1,20 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# flake8: noqa + +# import apis into api package +from polaris.management.api.polaris_default_api import PolarisDefaultApi + diff --git a/regtests/client/python/polaris/management/api/polaris_default_api.py b/regtests/client/python/polaris/management/api/polaris_default_api.py new file mode 100644 index 0000000000..f8b6d70fdc --- /dev/null +++ b/regtests/client/python/polaris/management/api/polaris_default_api.py @@ -0,0 +1,8898 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + +import warnings +from pydantic import validate_call, Field, StrictFloat, StrictStr, StrictInt +from typing import Any, Dict, List, Optional, Tuple, Union +from typing_extensions import Annotated + +from pydantic import Field, StrictBool, StrictStr +from typing import Optional +from typing_extensions import Annotated +from polaris.management.models.add_grant_request import AddGrantRequest +from polaris.management.models.catalog import Catalog +from polaris.management.models.catalog_role import CatalogRole +from polaris.management.models.catalog_roles import CatalogRoles +from polaris.management.models.catalogs import Catalogs +from polaris.management.models.create_catalog_request import CreateCatalogRequest +from polaris.management.models.create_catalog_role_request import CreateCatalogRoleRequest +from polaris.management.models.create_principal_request import CreatePrincipalRequest +from polaris.management.models.create_principal_role_request import CreatePrincipalRoleRequest +from polaris.management.models.grant_catalog_role_request import GrantCatalogRoleRequest +from 
polaris.management.models.grant_principal_role_request import GrantPrincipalRoleRequest +from polaris.management.models.grant_resources import GrantResources +from polaris.management.models.principal import Principal +from polaris.management.models.principal_role import PrincipalRole +from polaris.management.models.principal_roles import PrincipalRoles +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials +from polaris.management.models.principals import Principals +from polaris.management.models.revoke_grant_request import RevokeGrantRequest +from polaris.management.models.update_catalog_request import UpdateCatalogRequest +from polaris.management.models.update_catalog_role_request import UpdateCatalogRoleRequest +from polaris.management.models.update_principal_request import UpdatePrincipalRequest +from polaris.management.models.update_principal_role_request import UpdatePrincipalRoleRequest + +from polaris.management.api_client import ApiClient, RequestSerialized +from polaris.management.api_response import ApiResponse +from polaris.management.rest import RESTResponseType + + +class PolarisDefaultApi: + """NOTE: This class is auto generated by OpenAPI Generator + Ref: https://openapi-generator.tech + + Do not edit the class manually. 
+ """ + + def __init__(self, api_client=None) -> None: + if api_client is None: + api_client = ApiClient.get_default() + self.api_client = api_client + + + @validate_call + def add_grant_to_catalog_role( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + add_grant_request: Optional[AddGrantRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """add_grant_to_catalog_role + + Add a new grant to the catalog role + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param add_grant_request: + :type add_grant_request: AddGrantRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._add_grant_to_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + add_grant_request=add_grant_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def add_grant_to_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + add_grant_request: Optional[AddGrantRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """add_grant_to_catalog_role + + Add a new grant to the catalog role + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type 
catalog_role_name: str + :param add_grant_request: + :type add_grant_request: AddGrantRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._add_grant_to_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + add_grant_request=add_grant_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def add_grant_to_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + add_grant_request: Optional[AddGrantRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """add_grant_to_catalog_role + + Add a new grant to the catalog role + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param add_grant_request: + :type add_grant_request: AddGrantRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. 
It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._add_grant_to_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + add_grant_request=add_grant_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _add_grant_to_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + add_grant_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if 
catalog_role_name is not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if add_grant_request is not None: + _body_params = add_grant_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def assign_catalog_role_to_principal_role( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalogRoles reside")], + grant_catalog_role_request: Annotated[GrantCatalogRoleRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """assign_catalog_role_to_principal_role + + Assign a catalog role to a principal role + 
+ :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param catalog_name: The name of the catalog where the catalogRoles reside (required) + :type catalog_name: str + :param grant_catalog_role_request: The principal to create (required) + :type grant_catalog_role_request: GrantCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._assign_catalog_role_to_principal_role_serialize( + principal_role_name=principal_role_name, + catalog_name=catalog_name, + grant_catalog_role_request=grant_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def assign_catalog_role_to_principal_role_with_http_info( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalogRoles reside")], + grant_catalog_role_request: Annotated[GrantCatalogRoleRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """assign_catalog_role_to_principal_role + + Assign a catalog role to a principal role + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param catalog_name: The name of the catalog where the catalogRoles reside (required) + :type catalog_name: str + :param grant_catalog_role_request: The principal to create (required) + :type grant_catalog_role_request: GrantCatalogRoleRequest + :param _request_timeout: timeout setting for this request. 
If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._assign_catalog_role_to_principal_role_serialize( + principal_role_name=principal_role_name, + catalog_name=catalog_name, + grant_catalog_role_request=grant_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def assign_catalog_role_to_principal_role_without_preload_content( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalogRoles reside")], + grant_catalog_role_request: Annotated[GrantCatalogRoleRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + 
Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """assign_catalog_role_to_principal_role + + Assign a catalog role to a principal role + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param catalog_name: The name of the catalog where the catalogRoles reside (required) + :type catalog_name: str + :param grant_catalog_role_request: The principal to create (required) + :type grant_catalog_role_request: GrantCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._assign_catalog_role_to_principal_role_serialize( + principal_role_name=principal_role_name, + catalog_name=catalog_name, + grant_catalog_role_request=grant_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _assign_catalog_role_to_principal_role_serialize( + self, + principal_role_name, + catalog_name, + grant_catalog_role_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_role_name is not None: + _path_params['principalRoleName'] = principal_role_name + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if grant_catalog_role_request is not None: + _body_params = grant_catalog_role_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + 
resource_path='/principal-roles/{principalRoleName}/catalog-roles/{catalogName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def assign_principal_role( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + grant_principal_role_request: Annotated[GrantPrincipalRoleRequest, Field(description="The principal role to assign")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """assign_principal_role + + Add a role to the principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param grant_principal_role_request: The principal role to assign (required) + :type grant_principal_role_request: GrantPrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._assign_principal_role_serialize( + principal_name=principal_name, + grant_principal_role_request=grant_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def assign_principal_role_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + grant_principal_role_request: Annotated[GrantPrincipalRoleRequest, Field(description="The principal role to assign")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """assign_principal_role + + Add a role to the principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param grant_principal_role_request: The principal role to assign 
(required) + :type grant_principal_role_request: GrantPrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._assign_principal_role_serialize( + principal_name=principal_name, + grant_principal_role_request=grant_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def assign_principal_role_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + grant_principal_role_request: Annotated[GrantPrincipalRoleRequest, Field(description="The principal role to assign")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """assign_principal_role + + Add a role to the principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param grant_principal_role_request: The principal role to assign (required) + :type grant_principal_role_request: GrantPrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._assign_principal_role_serialize( + principal_name=principal_name, + grant_principal_role_request=grant_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _assign_principal_role_serialize( + self, + principal_name, + grant_principal_role_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_name is not None: + _path_params['principalName'] = principal_name + # process the query parameters + # process the header parameters + # process the form 
parameters + # process the body parameter + if grant_principal_role_request is not None: + _body_params = grant_principal_role_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + resource_path='/principals/{principalName}/principal-roles', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_catalog( + self, + create_catalog_request: Annotated[CreateCatalogRequest, Field(description="The Catalog to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """create_catalog + + Add a new Catalog + + :param create_catalog_request: The Catalog to create (required) + :type create_catalog_request: CreateCatalogRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_catalog_serialize( + create_catalog_request=create_catalog_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_catalog_with_http_info( + self, + create_catalog_request: Annotated[CreateCatalogRequest, Field(description="The Catalog to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """create_catalog + + Add a new Catalog + + :param 
create_catalog_request: The Catalog to create (required) + :type create_catalog_request: CreateCatalogRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_catalog_serialize( + create_catalog_request=create_catalog_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_catalog_without_preload_content( + self, + create_catalog_request: Annotated[CreateCatalogRequest, Field(description="The Catalog to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """create_catalog + + Add a new Catalog + + :param create_catalog_request: The Catalog to create (required) + :type create_catalog_request: CreateCatalogRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_catalog_serialize( + create_catalog_request=create_catalog_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_catalog_serialize( + self, + create_catalog_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if create_catalog_request is not None: + _body_params = create_catalog_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication 
setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/catalogs', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_catalog_role( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are reading/updating roles")], + create_catalog_role_request: Optional[CreateCatalogRoleRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """create_catalog_role + + Create a new role in the catalog + + :param catalog_name: The catalog for which we are reading/updating roles (required) + :type catalog_name: str + :param create_catalog_role_request: + :type create_catalog_role_request: CreateCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_catalog_role_serialize( + catalog_name=catalog_name, + create_catalog_role_request=create_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are reading/updating roles")], + create_catalog_role_request: Optional[CreateCatalogRoleRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """create_catalog_role + + Create a new role in the catalog + + :param catalog_name: The catalog for which we are reading/updating roles (required) + :type catalog_name: str + :param create_catalog_role_request: + :type create_catalog_role_request: 
CreateCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_catalog_role_serialize( + catalog_name=catalog_name, + create_catalog_role_request=create_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are reading/updating roles")], + create_catalog_role_request: Optional[CreateCatalogRoleRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, 
Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """create_catalog_role + + Create a new role in the catalog + + :param catalog_name: The catalog for which we are reading/updating roles (required) + :type catalog_name: str + :param create_catalog_role_request: + :type create_catalog_role_request: CreateCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_catalog_role_serialize( + catalog_name=catalog_name, + create_catalog_role_request=create_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_catalog_role_serialize( + self, + catalog_name, + create_catalog_role_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if create_catalog_role_request is not None: + _body_params = create_catalog_role_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/catalogs/{catalogName}/catalog-roles', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + 
post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_principal( + self, + create_principal_request: Annotated[CreatePrincipalRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalWithCredentials: + """create_principal + + Create a principal + + :param create_principal_request: The principal to create (required) + :type create_principal_request: CreatePrincipalRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_principal_serialize( + create_principal_request=create_principal_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': "PrincipalWithCredentials", + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_principal_with_http_info( + self, + create_principal_request: Annotated[CreatePrincipalRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalWithCredentials]: + """create_principal + + Create a principal + + :param create_principal_request: The principal to create (required) + :type create_principal_request: CreatePrincipalRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_principal_serialize( + create_principal_request=create_principal_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': "PrincipalWithCredentials", + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_principal_without_preload_content( + self, + create_principal_request: Annotated[CreatePrincipalRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """create_principal + + Create a principal + + :param create_principal_request: The principal to create (required) + :type create_principal_request: CreatePrincipalRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_principal_serialize( + create_principal_request=create_principal_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': "PrincipalWithCredentials", + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_principal_serialize( + self, + create_principal_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if create_principal_request is not None: + _body_params = create_principal_request + + + # set the HTTP 
header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/principals', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def create_principal_role( + self, + create_principal_role_request: Annotated[CreatePrincipalRoleRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """create_principal_role + + Create a principal role + + :param create_principal_role_request: The principal to create (required) + :type create_principal_role_request: CreatePrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_principal_role_serialize( + create_principal_role_request=create_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def create_principal_role_with_http_info( + self, + create_principal_role_request: Annotated[CreatePrincipalRoleRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """create_principal_role + + Create a principal 
role + + :param create_principal_role_request: The principal role to create (required) + :type create_principal_role_request: CreatePrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._create_principal_role_serialize( + create_principal_role_request=create_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def create_principal_role_without_preload_content( + self, + create_principal_role_request: Annotated[CreatePrincipalRoleRequest, Field(description="The principal to create")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """create_principal_role + + Create a principal role + + :param create_principal_role_request: The principal to create (required) + :type create_principal_role_request: CreatePrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._create_principal_role_serialize( + create_principal_role_request=create_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _create_principal_role_serialize( + self, + create_principal_role_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if create_principal_role_request is not None: + _body_params = create_principal_role_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + 
# authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/principal-roles', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def delete_catalog( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """delete_catalog + + Delete an existing catalog. This is a cascading operation that deletes all metadata, including principals, roles and grants. If the catalog is an internal catalog, all tables and namespaces are dropped without purge. + + :param catalog_name: The name of the catalog (required) + :type catalog_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._delete_catalog_serialize( + catalog_name=catalog_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def delete_catalog_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """delete_catalog + + Delete an existing catalog. This is a cascading operation that deletes all metadata, including principals, roles and grants. If the catalog is an internal catalog, all tables and namespaces are dropped without purge. + + :param catalog_name: The name of the catalog (required) + :type catalog_name: str + :param _request_timeout: timeout setting for this request. 
If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._delete_catalog_serialize( + catalog_name=catalog_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def delete_catalog_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> 
RESTResponseType: + """delete_catalog + + Delete an existing catalog. This is a cascading operation that deletes all metadata, including principals, roles and grants. If the catalog is an internal catalog, all tables and namespaces are dropped without purge. + + :param catalog_name: The name of the catalog (required) + :type catalog_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._delete_catalog_serialize( + catalog_name=catalog_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _delete_catalog_serialize( + self, + catalog_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/catalogs/{catalogName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def delete_catalog_role( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + 
Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """delete_catalog_role + + Delete an existing role from the catalog. All associated grants will also be deleted. + + :param catalog_name: The catalog for which we are retrieving roles (required) + :type catalog_name: str + :param catalog_role_name: The name of the role (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._delete_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def delete_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """delete_catalog_role + + Delete an existing role from the catalog. All associated grants will also be deleted + + :param catalog_name: The catalog for which we are retrieving roles (required) + :type catalog_name: str + :param catalog_role_name: The name of the role (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. 
+ :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._delete_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def delete_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """delete_catalog_role + + Delete an existing role from the catalog. 
All associated grants will also be deleted. + + :param catalog_name: The catalog for which we are retrieving roles (required) + :type catalog_name: str + :param catalog_role_name: The name of the role (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._delete_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _delete_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def delete_principal( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + _request_timeout: Union[ + None, + 
Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """delete_principal + + Remove a principal from polaris + + :param principal_name: The principal name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._delete_principal_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def delete_principal_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """delete_principal + + Remove a principal from polaris + + :param principal_name: The principal name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._delete_principal_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def delete_principal_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """delete_principal + + Remove a principal from polaris + + :param principal_name: The principal name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. 
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._delete_principal_serialize(
+ principal_name=principal_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ return response_data.response
+
+
+ def _delete_principal_serialize(
+ self,
+ principal_name,
+ _request_auth,
+ _content_type,
+ _headers,
+ _host_index,
+ ) -> RequestSerialized:
+
+ _host = None
+
+ _collection_formats: Dict[str, str] = {
+ }
+
+ _path_params: Dict[str, str] = {}
+ _query_params: List[Tuple[str, str]] = []
+ _header_params: Dict[str, Optional[str]] = _headers or {}
+ _form_params: List[Tuple[str, str]] = []
+ _files: Dict[str, Union[str, bytes]] = {}
+ _body_params: Optional[bytes] = None
+
+ # process the path parameters
+ if principal_name is not None:
+ _path_params['principalName'] = principal_name
+ # process the query parameters
+ # process the header parameters
+ # process the form parameters
+ # process the body parameter
+
+
+
+
+ # authentication setting
+ _auth_settings: List[str] = [
+ 'OAuth2'
+ ]
+
+ return self.api_client.param_serialize(
+ method='DELETE',
+ resource_path='/principals/{principalName}',
+ path_params=_path_params,
+ query_params=_query_params,
+ header_params=_header_params,
+ body=_body_params,
+ post_params=_form_params,
+ files=_files,
+ auth_settings=_auth_settings,
+ collection_formats=_collection_formats,
+ _host=_host,
+ _request_auth=_request_auth
+ )
+
+
+
+
+ @validate_call
+ def delete_principal_role(
+ self,
+ principal_role_name: Annotated[StrictStr, Field(description="The principal role name")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> None:
+ """delete_principal_role
+
+ Remove a principal role from Polaris
+
+ :param principal_role_name: The principal role name (required)
+ :type principal_role_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._delete_principal_role_serialize(
+ principal_role_name=principal_role_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ ).data
+
+
+ @validate_call
+ def delete_principal_role_with_http_info(
+ self,
+ principal_role_name: Annotated[StrictStr, Field(description="The principal role name")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> ApiResponse[None]:
+ """delete_principal_role
+
+ Remove a principal role from Polaris
+
+ :param principal_role_name: The principal role name (required)
+ :type principal_role_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._delete_principal_role_serialize(
+ principal_role_name=principal_role_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ )
+
+
+ @validate_call
+ def delete_principal_role_without_preload_content(
+ self,
+ principal_role_name: Annotated[StrictStr, Field(description="The principal role name")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> RESTResponseType:
+ """delete_principal_role
+
+ Remove a principal role from Polaris
+
+ :param principal_role_name: The principal role name (required)
+ :type principal_role_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._delete_principal_role_serialize(
+ principal_role_name=principal_role_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '204': None,
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ return response_data.response
+
+
+ def _delete_principal_role_serialize(
+ self,
+ principal_role_name,
+ _request_auth,
+ _content_type,
+ _headers,
+ _host_index,
+ ) -> RequestSerialized:
+
+ _host = None
+
+ _collection_formats: Dict[str, str] = {
+ }
+
+ _path_params: Dict[str, str] = {}
+ _query_params: List[Tuple[str, str]] = []
+ _header_params: Dict[str, Optional[str]] = _headers or {}
+ _form_params: List[Tuple[str, str]] = []
+ _files: Dict[str, Union[str, bytes]] = {}
+ _body_params: Optional[bytes] = None
+
+ # process the path parameters
+ if principal_role_name is not None:
+ _path_params['principalRoleName'] = principal_role_name
+ # process the query parameters
+ # process the header parameters
+ # process the form parameters
+ # process the body parameter
+
+
+
+
+ # authentication setting
+ _auth_settings: List[str] = [
+ 'OAuth2'
+ ]
+
+ return self.api_client.param_serialize(
+ method='DELETE',
+ resource_path='/principal-roles/{principalRoleName}',
+ path_params=_path_params,
+ query_params=_query_params,
+ header_params=_header_params,
+ body=_body_params,
+ post_params=_form_params,
+ files=_files,
+ auth_settings=_auth_settings,
+ collection_formats=_collection_formats,
+ _host=_host,
+ _request_auth=_request_auth
+ )
+
+
+
+
+ @validate_call
+ def get_catalog(
+ self,
+ catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> Catalog:
+ """get_catalog
+
+ Get the details of a catalog
+
+ :param catalog_name: The name of the catalog (required)
+ :type catalog_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._get_catalog_serialize(
+ catalog_name=catalog_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '200': "Catalog",
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ ).data
+
+
+ @validate_call
+ def get_catalog_with_http_info(
+ self,
+ catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> ApiResponse[Catalog]:
+ """get_catalog
+
+ Get the details of a catalog
+
+ :param catalog_name: The name of the catalog (required)
+ :type catalog_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._get_catalog_serialize(
+ catalog_name=catalog_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '200': "Catalog",
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ )
+
+
+ @validate_call
+ def get_catalog_without_preload_content(
+ self,
+ catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> RESTResponseType:
+ """get_catalog
+
+ Get the details of a catalog
+
+ :param catalog_name: The name of the catalog (required)
+ :type catalog_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._get_catalog_serialize(
+ catalog_name=catalog_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '200': "Catalog",
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ return response_data.response
+
+
+ def _get_catalog_serialize(
+ self,
+ catalog_name,
+ _request_auth,
+ _content_type,
+ _headers,
+ _host_index,
+ ) -> RequestSerialized:
+
+ _host = None
+
+ _collection_formats: Dict[str, str] = {
+ }
+
+ _path_params: Dict[str, str] = {}
+ _query_params: List[Tuple[str, str]] = []
+ _header_params: Dict[str, Optional[str]] = _headers or {}
+ _form_params: List[Tuple[str, str]] = []
+ _files: Dict[str, Union[str, bytes]] = {}
+ _body_params: Optional[bytes] = None
+
+ # process the path parameters
+ if catalog_name is not None:
+ _path_params['catalogName'] = catalog_name
+ # process the query parameters
+ # process the header parameters
+ # process the form parameters
+ # process the body parameter
+
+
+ # set the HTTP header `Accept`
+ if 'Accept' not in _header_params:
+ _header_params['Accept'] = self.api_client.select_header_accept(
+ [
+ 'application/json'
+ ]
+ )
+
+
+ # authentication setting
+ _auth_settings: List[str] = [
+ 'OAuth2'
+ ]
+
+ return self.api_client.param_serialize(
+ method='GET',
+ resource_path='/catalogs/{catalogName}',
+ path_params=_path_params,
+ query_params=_query_params,
+ header_params=_header_params,
+ body=_body_params,
+ post_params=_form_params,
+ files=_files,
+ auth_settings=_auth_settings,
+ collection_formats=_collection_formats,
+ _host=_host,
+ _request_auth=_request_auth
+ )
+
+
+
+
+ @validate_call
+ def get_catalog_role(
+ self,
+ catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")],
+ catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> CatalogRole:
+ """get_catalog_role
+
+ Get the details of an existing role
+
+ :param catalog_name: The catalog for which we are retrieving roles (required)
+ :type catalog_name: str
+ :param catalog_role_name: The name of the role (required)
+ :type catalog_role_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._get_catalog_role_serialize(
+ catalog_name=catalog_name,
+ catalog_role_name=catalog_role_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '200': "CatalogRole",
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ ).data
+
+
+ @validate_call
+ def get_catalog_role_with_http_info(
+ self,
+ catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")],
+ catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> ApiResponse[CatalogRole]:
+ """get_catalog_role
+
+ Get the details of an existing role
+
+ :param catalog_name: The catalog for which we are retrieving roles (required)
+ :type catalog_name: str
+ :param catalog_role_name: The name of the role (required)
+ :type catalog_role_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._get_catalog_role_serialize(
+ catalog_name=catalog_name,
+ catalog_role_name=catalog_role_name,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '200': "CatalogRole",
+ '403': None,
+ '404': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ response_data.read()
+ return self.api_client.response_deserialize(
+ response_data=response_data,
+ response_types_map=_response_types_map,
+ )
+
+
+ @validate_call
+ def get_catalog_role_without_preload_content(
+ self,
+ catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")],
+ catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")],
+ _request_timeout: Union[
+ None,
+ Annotated[StrictFloat, Field(gt=0)],
+ Tuple[
+ Annotated[StrictFloat, Field(gt=0)],
+ Annotated[StrictFloat, Field(gt=0)]
+ ]
+ ] = None,
+ _request_auth: Optional[Dict[StrictStr, Any]] = None,
+ _content_type: Optional[StrictStr] = None,
+ _headers: Optional[Dict[StrictStr, Any]] = None,
+ _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+ ) -> RESTResponseType:
+ """get_catalog_role
+
+ Get the details of an existing role
+
+ :param catalog_name: The catalog for which we are retrieving roles (required)
+ :type catalog_name: str
+ :param catalog_role_name: The name of the role (required)
+ :type catalog_role_name: str
+ :param _request_timeout: timeout setting for this request. If one
+ number is provided, it will be the total request
+ timeout. It can also be a pair (tuple) of
+ (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._get_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogRole", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _get_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + 
@validate_call + def get_principal( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> Principal: + """get_principal + + Get the principal details + + :param principal_name: The principal name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._get_principal_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principal", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def get_principal_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[Principal]: + """get_principal + + Get the principal details + + :param principal_name: The principal name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request.
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._get_principal_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principal", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def get_principal_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """get_principal + + Get the principal details + + :param principal_name: The principal name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request.
+ :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._get_principal_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principal", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _get_principal_serialize( + self, + principal_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_name is not None: + _path_params['principalName'] = principal_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return 
self.api_client.param_serialize( + method='GET', + resource_path='/principals/{principalName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def get_principal_role( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalRole: + """get_principal_role + + Get the principal role details + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request.
+ :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._get_principal_role_serialize( + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRole", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def get_principal_role_with_http_info( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalRole]: + """get_principal_role + + Get the principal role details + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request.
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._get_principal_role_serialize( + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRole", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def get_principal_role_without_preload_content( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """get_principal_role + + Get the principal role details + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._get_principal_role_serialize( + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRole", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _get_principal_role_serialize( + self, + principal_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_role_name is not None: + _path_params['principalRoleName'] = principal_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP
header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/principal-roles/{principalRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_assignee_principal_roles_for_catalog_role( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalog role resides")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the catalog role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalRoles: + """list_assignee_principal_roles_for_catalog_role + + List the PrincipalRoles to whom the target catalog role has been assigned + + :param catalog_name: The name of the catalog where the catalog role resides (required) + :type catalog_name: str + :param catalog_role_name: The name of the catalog role (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_assignee_principal_roles_for_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_assignee_principal_roles_for_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalog role resides")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the catalog role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers:
Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalRoles]: + """list_assignee_principal_roles_for_catalog_role + + List the PrincipalRoles to whom the target catalog role has been assigned + + :param catalog_name: The name of the catalog where the catalog role resides (required) + :type catalog_name: str + :param catalog_role_name: The name of the catalog role (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._list_assignee_principal_roles_for_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_assignee_principal_roles_for_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalog role resides")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the catalog role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """list_assignee_principal_roles_for_catalog_role + + List the PrincipalRoles to whom the target catalog role has been assigned + + :param catalog_name: The name of the catalog where the catalog role resides (required) + :type catalog_name: str + :param catalog_role_name: The name of the catalog role (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts.
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_assignee_principal_roles_for_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_assignee_principal_roles_for_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is not None: + _path_params['catalogRoleName'] =
catalog_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}/principal-roles', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_assignee_principals_for_principal_role( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> Principals: + """list_assignee_principals_for_principal_role + + List the Principals to whom the target principal role has been assigned + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_assignee_principals_for_principal_role_serialize( + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principals", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_assignee_principals_for_principal_role_with_http_info( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[Principals]: +
"""list_assignee_principals_for_principal_role + + List the Principals to whom the target principal role has been assigned + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._list_assignee_principals_for_principal_role_serialize( + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principals", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_assignee_principals_for_principal_role_without_preload_content( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """list_assignee_principals_for_principal_role + + List the Principals to whom the target principal role has been assigned + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_assignee_principals_for_principal_role_serialize(
+            principal_role_name=principal_role_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "Principals",
+            '403': None,
+            '404': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        return response_data.response
+
+
+    def _list_assignee_principals_for_principal_role_serialize(
+        self,
+        principal_role_name,
+        _request_auth,
+        _content_type,
+        _headers,
+        _host_index,
+    ) -> RequestSerialized:
+
+        _host = None
+
+        _collection_formats: Dict[str, str] = {
+        }
+
+        _path_params: Dict[str, str] = {}
+        _query_params: List[Tuple[str, str]] = []
+        _header_params: Dict[str, Optional[str]] = _headers or {}
+        _form_params: List[Tuple[str, str]] = []
+        _files: Dict[str, Union[str, bytes]] = {}
+        _body_params: Optional[bytes] = None
+
+        # process the path parameters
+        if principal_role_name is not None:
+            _path_params['principalRoleName'] = principal_role_name
+        # process the query parameters
+        # process the header parameters
+        # process the form parameters
+        # process the body parameter
+
+
+        # set the HTTP header `Accept`
+        if 'Accept' not in _header_params:
+            _header_params['Accept'] = self.api_client.select_header_accept(
+                [
+                    'application/json'
+                ]
+            )
+
+
+        # authentication setting
+        _auth_settings: List[str] = [
+            'OAuth2'
+        ]
+
+        return self.api_client.param_serialize(
+            method='GET',
+            resource_path='/principal-roles/{principalRoleName}/principals',
+            path_params=_path_params,
+            query_params=_query_params,
+            header_params=_header_params,
+            body=_body_params,
+            post_params=_form_params,
+            files=_files,
+            auth_settings=_auth_settings,
+            collection_formats=_collection_formats,
+            _host=_host,
+            _request_auth=_request_auth
+        )
+
+
+
+
+    @validate_call
+    def list_catalog_roles(
+        self,
+        catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are reading/updating roles")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> CatalogRoles:
+        """list_catalog_roles
+
+        List existing roles in the catalog
+
+        :param catalog_name: The catalog for which we are reading/updating roles (required)
+        :type catalog_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalog_roles_serialize(
+            catalog_name=catalog_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "CatalogRoles",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        ).data
+
+
+    @validate_call
+    def list_catalog_roles_with_http_info(
+        self,
+        catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are reading/updating roles")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> ApiResponse[CatalogRoles]:
+        """list_catalog_roles
+
+        List existing roles in the catalog
+
+        :param catalog_name: The catalog for which we are reading/updating roles (required)
+        :type catalog_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalog_roles_serialize(
+            catalog_name=catalog_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "CatalogRoles",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        )
+
+
+    @validate_call
+    def list_catalog_roles_without_preload_content(
+        self,
+        catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are reading/updating roles")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> RESTResponseType:
+        """list_catalog_roles
+
+        List existing roles in the catalog
+
+        :param catalog_name: The catalog for which we are reading/updating roles (required)
+        :type catalog_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalog_roles_serialize(
+            catalog_name=catalog_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "CatalogRoles",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        return response_data.response
+
+
+    def _list_catalog_roles_serialize(
+        self,
+        catalog_name,
+        _request_auth,
+        _content_type,
+        _headers,
+        _host_index,
+    ) -> RequestSerialized:
+
+        _host = None
+
+        _collection_formats: Dict[str, str] = {
+        }
+
+        _path_params: Dict[str, str] = {}
+        _query_params: List[Tuple[str, str]] = []
+        _header_params: Dict[str, Optional[str]] = _headers or {}
+        _form_params: List[Tuple[str, str]] = []
+        _files: Dict[str, Union[str, bytes]] = {}
+        _body_params: Optional[bytes] = None
+
+        # process the path parameters
+        if catalog_name is not None:
+            _path_params['catalogName'] = catalog_name
+        # process the query parameters
+        # process the header parameters
+        # process the form parameters
+        # process the body parameter
+
+
+        # set the HTTP header `Accept`
+        if 'Accept' not in _header_params:
+            _header_params['Accept'] = self.api_client.select_header_accept(
+                [
+                    'application/json'
+                ]
+            )
+
+
+        # authentication setting
+        _auth_settings: List[str] = [
+            'OAuth2'
+        ]
+
+        return self.api_client.param_serialize(
+            method='GET',
+            resource_path='/catalogs/{catalogName}/catalog-roles',
+            path_params=_path_params,
+            query_params=_query_params,
+            header_params=_header_params,
+            body=_body_params,
+            post_params=_form_params,
+            files=_files,
+            auth_settings=_auth_settings,
+            collection_formats=_collection_formats,
+            _host=_host,
+            _request_auth=_request_auth
+        )
+
+
+
+
+    @validate_call
+    def list_catalog_roles_for_principal_role(
+        self,
+        principal_role_name: Annotated[StrictStr, Field(description="The principal role name")],
+        catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalogRoles reside")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> CatalogRoles:
+        """list_catalog_roles_for_principal_role
+
+        Get the catalog roles mapped to the principal role
+
+        :param principal_role_name: The principal role name (required)
+        :type principal_role_name: str
+        :param catalog_name: The name of the catalog where the catalogRoles reside (required)
+        :type catalog_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalog_roles_for_principal_role_serialize(
+            principal_role_name=principal_role_name,
+            catalog_name=catalog_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "CatalogRoles",
+            '403': None,
+            '404': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        ).data
+
+
+    @validate_call
+    def list_catalog_roles_for_principal_role_with_http_info(
+        self,
+        principal_role_name: Annotated[StrictStr, Field(description="The principal role name")],
+        catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalogRoles reside")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> ApiResponse[CatalogRoles]:
+        """list_catalog_roles_for_principal_role
+
+        Get the catalog roles mapped to the principal role
+
+        :param principal_role_name: The principal role name (required)
+        :type principal_role_name: str
+        :param catalog_name: The name of the catalog where the catalogRoles reside (required)
+        :type catalog_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalog_roles_for_principal_role_serialize(
+            principal_role_name=principal_role_name,
+            catalog_name=catalog_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "CatalogRoles",
+            '403': None,
+            '404': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        )
+
+
+    @validate_call
+    def list_catalog_roles_for_principal_role_without_preload_content(
+        self,
+        principal_role_name: Annotated[StrictStr, Field(description="The principal role name")],
+        catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the catalogRoles reside")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> RESTResponseType:
+        """list_catalog_roles_for_principal_role
+
+        Get the catalog roles mapped to the principal role
+
+        :param principal_role_name: The principal role name (required)
+        :type principal_role_name: str
+        :param catalog_name: The name of the catalog where the catalogRoles reside (required)
+        :type catalog_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalog_roles_for_principal_role_serialize(
+            principal_role_name=principal_role_name,
+            catalog_name=catalog_name,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "CatalogRoles",
+            '403': None,
+            '404': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        return response_data.response
+
+
+    def _list_catalog_roles_for_principal_role_serialize(
+        self,
+        principal_role_name,
+        catalog_name,
+        _request_auth,
+        _content_type,
+        _headers,
+        _host_index,
+    ) -> RequestSerialized:
+
+        _host = None
+
+        _collection_formats: Dict[str, str] = {
+        }
+
+        _path_params: Dict[str, str] = {}
+        _query_params: List[Tuple[str, str]] = []
+        _header_params: Dict[str, Optional[str]] = _headers or {}
+        _form_params: List[Tuple[str, str]] = []
+        _files: Dict[str, Union[str, bytes]] = {}
+        _body_params: Optional[bytes] = None
+
+        # process the path parameters
+        if principal_role_name is not None:
+            _path_params['principalRoleName'] = principal_role_name
+        if catalog_name is not None:
+            _path_params['catalogName'] = catalog_name
+        # process the query parameters
+        # process the header parameters
+        # process the form parameters
+        # process the body parameter
+
+
+        # set the HTTP header `Accept`
+        if 'Accept' not in _header_params:
+            _header_params['Accept'] = self.api_client.select_header_accept(
+                [
+                    'application/json'
+                ]
+            )
+
+
+        # authentication setting
+        _auth_settings: List[str] = [
+            'OAuth2'
+        ]
+
+        return self.api_client.param_serialize(
+            method='GET',
+            resource_path='/principal-roles/{principalRoleName}/catalog-roles/{catalogName}',
+            path_params=_path_params,
+            query_params=_query_params,
+            header_params=_header_params,
+            body=_body_params,
+            post_params=_form_params,
+            files=_files,
+            auth_settings=_auth_settings,
+            collection_formats=_collection_formats,
+            _host=_host,
+            _request_auth=_request_auth
+        )
+
+
+
+
+    @validate_call
+    def list_catalogs(
+        self,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> Catalogs:
+        """list_catalogs
+
+        List all catalogs in this polaris service
+
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalogs_serialize(
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "Catalogs",
+            '403': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        ).data
+
+
+    @validate_call
+    def list_catalogs_with_http_info(
+        self,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> ApiResponse[Catalogs]:
+        """list_catalogs
+
+        List all catalogs in this polaris service
+
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalogs_serialize(
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "Catalogs",
+            '403': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        )
+
+
+    @validate_call
+    def list_catalogs_without_preload_content(
+        self,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> RESTResponseType:
+        """list_catalogs
+
+        List all catalogs in this polaris service
+
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._list_catalogs_serialize(
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "Catalogs",
+            '403': None,
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        return response_data.response
+
+
+    def _list_catalogs_serialize(
+        self,
+        _request_auth,
+        _content_type,
+        _headers,
+        _host_index,
+    ) -> RequestSerialized:
+
+        _host = None
+
+        _collection_formats: Dict[str, str] = {
+        }
+
+        _path_params: Dict[str, str] = {}
+        _query_params: List[Tuple[str, str]] = []
+        _header_params: Dict[str, Optional[str]] = _headers or {}
+        _form_params: List[Tuple[str, str]] = []
+        _files: Dict[str, Union[str, bytes]] = {}
+        _body_params: Optional[bytes] = None
+
+        # process the path parameters
+        # process the query parameters
+        # process the header parameters
+        # process the form parameters
+        # process the body parameter
+
+
+        # set the HTTP header `Accept`
+        if 'Accept' not in _header_params:
+            _header_params['Accept'] = self.api_client.select_header_accept(
+                [
+                    'application/json'
+                ]
+            )
+
+
+        # authentication setting
+        _auth_settings: List[str] = [
+            'OAuth2'
+        ]
+
+        return self.api_client.param_serialize(
+            method='GET',
+            resource_path='/catalogs',
+            path_params=_path_params,
+            query_params=_query_params,
+            header_params=_header_params,
+            body=_body_params,
+            post_params=_form_params,
+            files=_files,
+            auth_settings=_auth_settings,
+            collection_formats=_collection_formats,
+            _host=_host,
+            _request_auth=_request_auth
+        )
+
+
+
+
+    @validate_call
+    def list_grants_for_catalog_role(
+        self,
+        catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")],
+        catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")],
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> GrantResources:
+        """list_grants_for_catalog_role
+
+        List the grants the catalog role holds
+
+        :param catalog_name: The name of the catalog where the role will receive the grant (required)
+        :type catalog_name: str
+        :param catalog_role_name: The name of the role receiving the grant (must exist) (required)
+        :type catalog_role_name: str
+        :param _request_timeout: timeout setting for this request. If one
+                                 number is provided, it will be the total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_grants_for_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "GrantResources", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_grants_for_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[GrantResources]: + """list_grants_for_catalog_role + + List the grants the catalog role holds + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param 
catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._list_grants_for_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "GrantResources", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_grants_for_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """list_grants_for_catalog_role + + List the grants the catalog role holds + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_grants_for_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "GrantResources", + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_grants_for_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + # process the 
header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_principal_roles( + self, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalRoles: + """list_principal_roles + + List the principal roles + + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principal_roles_serialize( + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_principal_roles_with_http_info( + self, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalRoles]: + """list_principal_roles + + List the principal roles + + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principal_roles_serialize( + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_principal_roles_without_preload_content( + self, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """list_principal_roles + + List the principal roles + + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. 
+ :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principal_roles_serialize( + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_principal_roles_serialize( + self, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/principal-roles', + path_params=_path_params, + 
query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_principal_roles_assigned( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalRoles: + """list_principal_roles_assigned + + List the roles assigned to the principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._list_principal_roles_assigned_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_principal_roles_assigned_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalRoles]: + """list_principal_roles_assigned + + List the roles assigned to the principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principal_roles_assigned_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_principal_roles_assigned_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """list_principal_roles_assigned + + List the roles assigned to the principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principal_roles_assigned_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRoles", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_principal_roles_assigned_serialize( + self, + principal_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_name is not None: + _path_params['principalName'] = principal_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header 
`Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/principals/{principalName}/principal-roles', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def list_principals( + self, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> Principals: + """list_principals + + List the principals for the current catalog + + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. 
+ :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principals_serialize( + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principals", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def list_principals_with_http_info( + self, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[Principals]: + """list_principals + + List the principals for the current catalog + + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principals_serialize( + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principals", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def list_principals_without_preload_content( + self, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """list_principals + + List the principals for the current catalog + + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. 
+ :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._list_principals_serialize( + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principals", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _list_principals_serialize( + self, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='GET', + resource_path='/principals', + path_params=_path_params, + query_params=_query_params, + 
header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def revoke_catalog_role_from_principal_role( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog that contains the role to revoke")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the catalog role that should be revoked")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """revoke_catalog_role_from_principal_role + + Remove a catalog role from a principal role + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param catalog_name: The name of the catalog that contains the role to revoke (required) + :type catalog_name: str + :param catalog_role_name: The name of the catalog role that should be revoked (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._revoke_catalog_role_from_principal_role_serialize( + principal_role_name=principal_role_name, + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def revoke_catalog_role_from_principal_role_with_http_info( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog that contains the role to revoke")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the catalog role that should be revoked")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """revoke_catalog_role_from_principal_role + + Remove a catalog 
role from a principal role + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param catalog_name: The name of the catalog that contains the role to revoke (required) + :type catalog_name: str + :param catalog_role_name: The name of the catalog role that should be revoked (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._revoke_catalog_role_from_principal_role_serialize( + principal_role_name=principal_role_name, + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def revoke_catalog_role_from_principal_role_without_preload_content( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog that contains the role to revoke")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the catalog role that should be revoked")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """revoke_catalog_role_from_principal_role + + Remove a catalog role from a principal role + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param catalog_name: The name of the catalog that contains the role to revoke (required) + :type catalog_name: str + :param catalog_role_name: The name of the catalog role that should be revoked (required) + :type catalog_role_name: str + :param _request_timeout: timeout setting for this request. 
If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._revoke_catalog_role_from_principal_role_serialize( + principal_role_name=principal_role_name, + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _revoke_catalog_role_from_principal_role_serialize( + self, + principal_role_name, + catalog_name, + catalog_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the 
path parameters + if principal_role_name is not None: + _path_params['principalRoleName'] = principal_role_name + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/principal-roles/{principalRoleName}/catalog-roles/{catalogName}/{catalogRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def revoke_grant_from_catalog_role( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + cascade: Annotated[Optional[StrictBool], Field(description="If true, the grant revocation cascades to all subresources.")] = None, + revoke_grant_request: Optional[RevokeGrantRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """revoke_grant_from_catalog_role + + Delete a specific grant from the role. This may be a subset or a superset of the grants the role has. 
In case of a subset, the role will retain the grants not specified. If the `cascade` parameter is true, grant revocation will have a cascading effect - that is, if a principal has specific grants on a subresource, and grants are revoked on a parent resource, the grants present on the subresource will be revoked as well. By default, this behavior is disabled and grant revocation only affects the specified resource. + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param cascade: If true, the grant revocation cascades to all subresources. + :type cascade: bool + :param revoke_grant_request: + :type revoke_grant_request: RevokeGrantRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._revoke_grant_from_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + cascade=cascade, + revoke_grant_request=revoke_grant_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def revoke_grant_from_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + cascade: Annotated[Optional[StrictBool], Field(description="If true, the grant revocation cascades to all subresources.")] = None, + revoke_grant_request: Optional[RevokeGrantRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """revoke_grant_from_catalog_role + + Delete a specific grant from the role. This may be a subset or a superset of the grants the role has. In case of a subset, the role will retain the grants not specified. 
If the `cascade` parameter is true, grant revocation will have a cascading effect - that is, if a principal has specific grants on a subresource, and grants are revoked on a parent resource, the grants present on the subresource will be revoked as well. By default, this behavior is disabled and grant revocation only affects the specified resource. + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param cascade: If true, the grant revocation cascades to all subresources. + :type cascade: bool + :param revoke_grant_request: + :type revoke_grant_request: RevokeGrantRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._revoke_grant_from_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + cascade=cascade, + revoke_grant_request=revoke_grant_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def revoke_grant_from_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog where the role will receive the grant")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role receiving the grant (must exist)")], + cascade: Annotated[Optional[StrictBool], Field(description="If true, the grant revocation cascades to all subresources.")] = None, + revoke_grant_request: Optional[RevokeGrantRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """revoke_grant_from_catalog_role + + Delete a specific grant from the role. This may be a subset or a superset of the grants the role has. In case of a subset, the role will retain the grants not specified. 
If the `cascade` parameter is true, grant revocation will have a cascading effect - that is, if a principal has specific grants on a subresource, and grants are revoked on a parent resource, the grants present on the subresource will be revoked as well. By default, this behavior is disabled and grant revocation only affects the specified resource. + + :param catalog_name: The name of the catalog where the role will receive the grant (required) + :type catalog_name: str + :param catalog_role_name: The name of the role receiving the grant (must exist) (required) + :type catalog_role_name: str + :param cascade: If true, the grant revocation cascades to all subresources. + :type cascade: bool + :param revoke_grant_request: + :type revoke_grant_request: RevokeGrantRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._revoke_grant_from_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + cascade=cascade, + revoke_grant_request=revoke_grant_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '201': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _revoke_grant_from_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + cascade, + revoke_grant_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + if cascade is not None: + + _query_params.append(('cascade', cascade)) + + # process the header parameters + # process the form parameters + # process the body parameter + if revoke_grant_request is not None: + _body_params = revoke_grant_request + + + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + 
return self.api_client.param_serialize( + method='POST', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def revoke_principal_role( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + principal_role_name: Annotated[StrictStr, Field(description="The name of the role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> None: + """revoke_principal_role + + Remove a role from a catalog principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param principal_role_name: The name of the role (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._revoke_principal_role_serialize( + principal_name=principal_name, + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def revoke_principal_role_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + principal_role_name: Annotated[StrictStr, Field(description="The name of the role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[None]: + """revoke_principal_role + + Remove a role from a catalog principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param principal_role_name: The name of the role (required) + :type principal_role_name: str + :param 
_request_timeout: timeout setting for this request. If one + number is provided, it will be the total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._revoke_principal_role_serialize( + principal_name=principal_name, + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def revoke_principal_role_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The name of the target principal")], + principal_role_name: Annotated[StrictStr, Field(description="The name of the role")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + 
_request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """revoke_principal_role + + Remove a role from a catalog principal + + :param principal_name: The name of the target principal (required) + :type principal_name: str + :param principal_role_name: The name of the role (required) + :type principal_role_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._revoke_principal_role_serialize( + principal_name=principal_name, + principal_role_name=principal_role_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '204': None, + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _revoke_principal_role_serialize( + self, + principal_name, + principal_role_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_name is not None: + _path_params['principalName'] = principal_name + if principal_role_name is not None: + _path_params['principalRoleName'] = principal_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + + + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='DELETE', + resource_path='/principals/{principalName}/principal-roles/{principalRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def rotate_credentials( + self, + principal_name: Annotated[StrictStr, Field(description="The user name")], + 
_request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalWithCredentials: + """rotate_credentials + + Rotate a principal's credentials. The new credentials will be returned in the response. This is the only API, aside from createPrincipal, that returns the user's credentials. This API is *not* idempotent. + + :param principal_name: The user name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. 
+ """ # noqa: E501 + + _param = self._rotate_credentials_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalWithCredentials", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def rotate_credentials_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The user name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalWithCredentials]: + """rotate_credentials + + Rotate a principal's credentials. The new credentials will be returned in the response. This is the only API, aside from createPrincipal, that returns the user's credentials. This API is *not* idempotent. + + :param principal_name: The user name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. 
+ :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._rotate_credentials_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalWithCredentials", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def rotate_credentials_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The user name")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """rotate_credentials + + Rotate a principal's credentials. The new credentials will be returned in the response. This is the only API, aside from createPrincipal, that returns the user's credentials. This API is *not* idempotent. + + :param principal_name: The user name (required) + :type principal_name: str + :param _request_timeout: timeout setting for this request. 
If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._rotate_credentials_serialize( + principal_name=principal_name, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalWithCredentials", + '403': None, + '404': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _rotate_credentials_serialize( + self, + principal_name, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_name is not None: + _path_params['principalName'] = principal_name + # process the query parameters + # process 
the header parameters + # process the form parameters + # process the body parameter + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='POST', + resource_path='/principals/{principalName}/rotate', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def update_catalog( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")], + update_catalog_request: Annotated[UpdateCatalogRequest, Field(description="The catalog details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> Catalog: + """update_catalog + + Update an existing catalog + + :param catalog_name: The name of the catalog (required) + :type catalog_name: str + :param update_catalog_request: The catalog details to use in the update (required) + :type update_catalog_request: UpdateCatalogRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, Optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_catalog_serialize( + catalog_name=catalog_name, + update_catalog_request=update_catalog_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Catalog", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def update_catalog_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")], + update_catalog_request: Annotated[UpdateCatalogRequest, Field(description="The catalog details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: 
Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[Catalog]: + """update_catalog + + Update an existing catalog + + :param catalog_name: The name of the catalog (required) + :type catalog_name: str + :param update_catalog_request: The catalog details to use in the update (required) + :type update_catalog_request: UpdateCatalogRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
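The status-to-type mapping these methods build (`_response_types_map`) can be read in isolation: 200 deserializes to the named model, while 403/404/409 map to `None` (no body type). A minimal plain-dict sketch, not the generated client's actual deserialization code:

```python
# Illustrative stand-in only -- mirrors the _response_types_map literals
# in this file; the real lookup happens inside response_deserialize.
response_types_map = {'200': "Catalog", '403': None, '404': None, '409': None}

def body_type_for(status_code: int):
    """Return the declared response body type for an HTTP status, or None."""
    return response_types_map.get(str(status_code))
```

Unlisted statuses also resolve to `None`, so only a 200 yields a deserialized `Catalog`.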
+ """ # noqa: E501 + + _param = self._update_catalog_serialize( + catalog_name=catalog_name, + update_catalog_request=update_catalog_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Catalog", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def update_catalog_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The name of the catalog")], + update_catalog_request: Annotated[UpdateCatalogRequest, Field(description="The catalog details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """update_catalog + + Update an existing catalog + + :param catalog_name: The name of the catalog (required) + :type catalog_name: str + :param update_catalog_request: The catalog details to use in the update (required) + :type update_catalog_request: UpdateCatalogRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for an a single + request; this effectively ignores the + authentication in the spec for a single request. 
+ :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_catalog_serialize( + catalog_name=catalog_name, + update_catalog_request=update_catalog_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Catalog", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _update_catalog_serialize( + self, + catalog_name, + update_catalog_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if update_catalog_request is not None: + _body_params = update_catalog_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] =
self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + resource_path='/catalogs/{catalogName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def update_catalog_role( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")], + update_catalog_role_request: Optional[UpdateCatalogRoleRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> CatalogRole: + """update_catalog_role + + Update an existing role in the catalog + + :param catalog_name: The catalog for which we are retrieving roles (required) + :type catalog_name: str + :param catalog_role_name: The name of the role (required) + :type catalog_role_name: str + :param update_catalog_role_request: + :type update_catalog_role_request: UpdateCatalogRoleRequest + :param _request_timeout: timeout setting 
for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + update_catalog_role_request=update_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogRole", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def update_catalog_role_with_http_info( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")], + update_catalog_role_request: Optional[UpdateCatalogRoleRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], +
Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[CatalogRole]: + """update_catalog_role + + Update an existing role in the catalog + + :param catalog_name: The catalog for which we are retrieving roles (required) + :type catalog_name: str + :param catalog_role_name: The name of the role (required) + :type catalog_role_name: str + :param update_catalog_role_request: + :type update_catalog_role_request: UpdateCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._update_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + update_catalog_role_request=update_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogRole", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def update_catalog_role_without_preload_content( + self, + catalog_name: Annotated[StrictStr, Field(description="The catalog for which we are retrieving roles")], + catalog_role_name: Annotated[StrictStr, Field(description="The name of the role")], + update_catalog_role_request: Optional[UpdateCatalogRoleRequest] = None, + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """update_catalog_role + + Update an existing role in the catalog + + :param catalog_name: The catalog for which we are retrieving roles (required) + :type catalog_name: str + :param catalog_role_name: The name of the role (required) + :type catalog_role_name: str + :param update_catalog_role_request: + :type update_catalog_role_request: UpdateCatalogRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_catalog_role_serialize( + catalog_name=catalog_name, + catalog_role_name=catalog_role_name, + update_catalog_role_request=update_catalog_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "CatalogRole", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _update_catalog_role_serialize( + self, + catalog_name, + catalog_role_name, + update_catalog_role_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if catalog_name is not None: + _path_params['catalogName'] = catalog_name + if catalog_role_name is
not None: + _path_params['catalogRoleName'] = catalog_role_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if update_catalog_role_request is not None: + _body_params = update_catalog_role_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + resource_path='/catalogs/{catalogName}/catalog-roles/{catalogRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def update_principal( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + update_principal_request: Annotated[UpdatePrincipalRequest, Field(description="The principal details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> Principal: + """update_principal + + Update an existing 
principal + + :param principal_name: The principal name (required) + :type principal_name: str + :param update_principal_request: The principal details to use in the update (required) + :type update_principal_request: UpdatePrincipalRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
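The two accepted `_request_timeout` shapes described above (a single total timeout, or a `(connection, read)` pair) can be illustrated with a small stand-alone helper. This is a hypothetical sketch, not a function in the generated client:

```python
# Hypothetical helper, for illustration only: normalize the two
# _request_timeout shapes into an explicit (connect, read) pair.
def normalize_timeout(timeout):
    if timeout is None:
        return None  # no timeout override requested
    if isinstance(timeout, tuple):
        connect, read = timeout  # already a (connection, read) pair
        return (float(connect), float(read))
    # a single number is the total request timeout, applied to both phases
    return (float(timeout), float(timeout))
```

So `normalize_timeout(5)` and `normalize_timeout((3, 27))` both yield explicit pairs the HTTP layer can apply.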
+ """ # noqa: E501 + + _param = self._update_principal_serialize( + principal_name=principal_name, + update_principal_request=update_principal_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principal", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def update_principal_with_http_info( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + update_principal_request: Annotated[UpdatePrincipalRequest, Field(description="The principal details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[Principal]: + """update_principal + + Update an existing principal + + :param principal_name: The principal name (required) + :type principal_name: str + :param update_principal_request: The principal details to use in the update (required) + :type update_principal_request: UpdatePrincipalRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_principal_serialize( + principal_name=principal_name, + update_principal_request=update_principal_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principal", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def update_principal_without_preload_content( + self, + principal_name: Annotated[StrictStr, Field(description="The principal name")], + update_principal_request: Annotated[UpdatePrincipalRequest, Field(description="The principal details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] =
None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """update_principal + + Update an existing principal + + :param principal_name: The principal name (required) + :type principal_name: str + :param update_principal_request: The principal details to use in the update (required) + :type update_principal_request: UpdatePrincipalRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
+ """ # noqa: E501 + + _param = self._update_principal_serialize( + principal_name=principal_name, + update_principal_request=update_principal_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "Principal", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + return response_data.response + + + def _update_principal_serialize( + self, + principal_name, + update_principal_request, + _request_auth, + _content_type, + _headers, + _host_index, + ) -> RequestSerialized: + + _host = None + + _collection_formats: Dict[str, str] = { + } + + _path_params: Dict[str, str] = {} + _query_params: List[Tuple[str, str]] = [] + _header_params: Dict[str, Optional[str]] = _headers or {} + _form_params: List[Tuple[str, str]] = [] + _files: Dict[str, Union[str, bytes]] = {} + _body_params: Optional[bytes] = None + + # process the path parameters + if principal_name is not None: + _path_params['principalName'] = principal_name + # process the query parameters + # process the header parameters + # process the form parameters + # process the body parameter + if update_principal_request is not None: + _body_params = update_principal_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + 
resource_path='/principals/{principalName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + + + + @validate_call + def update_principal_role( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + update_principal_role_request: Annotated[UpdatePrincipalRoleRequest, Field(description="The principalRole details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> PrincipalRole: + """update_principal_role + + Update an existing principalRole + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param update_principal_role_request: The principalRole details to use in the update (required) + :type update_principal_role_request: UpdatePrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request.
+ :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object. + """ # noqa: E501 + + _param = self._update_principal_role_serialize( + principal_role_name=principal_role_name, + update_principal_role_request=update_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRole", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ).data + + + @validate_call + def update_principal_role_with_http_info( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + update_principal_role_request: Annotated[UpdatePrincipalRoleRequest, Field(description="The principalRole details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> ApiResponse[PrincipalRole]: + """update_principal_role + + Update an existing principalRole + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str +
:param update_principal_role_request: The principalRole details to use in the update (required) + :type update_principal_role_request: UpdatePrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. + :type _request_timeout: int, tuple(int, int), optional + :param _request_auth: set to override the auth_settings for a single + request; this effectively ignores the + authentication in the spec for a single request. + :type _request_auth: dict, optional + :param _content_type: force content-type for the request. + :type _content_type: str, optional + :param _headers: set to override the headers for a single + request; this effectively ignores the headers + in the spec for a single request. + :type _headers: dict, optional + :param _host_index: set to override the host_index for a single + request; this effectively ignores the host_index + in the spec for a single request. + :type _host_index: int, optional + :return: Returns the result object.
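How the path parameters collected above end up in the resource path (e.g. `principalName` in `/principals/{principalName}`) can be sketched with a simplified stand-in for the client's `param_serialize`, which also handles query params, headers, and auth. Illustrative only:

```python
# Simplified stand-in, for illustration only: substitute _path_params
# values into a resource_path template the way the serializer does.
def fill_path(template: str, path_params: dict) -> str:
    for name, value in path_params.items():
        template = template.replace('{' + name + '}', str(value))
    return template
```

For example, `fill_path('/principals/{principalName}', {'principalName': 'alice'})` produces the concrete request path; the real serializer additionally URL-encodes values.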
+ """ # noqa: E501 + + _param = self._update_principal_role_serialize( + principal_role_name=principal_role_name, + update_principal_role_request=update_principal_role_request, + _request_auth=_request_auth, + _content_type=_content_type, + _headers=_headers, + _host_index=_host_index + ) + + _response_types_map: Dict[str, Optional[str]] = { + '200': "PrincipalRole", + '403': None, + '404': None, + '409': None, + } + response_data = self.api_client.call_api( + *_param, + _request_timeout=_request_timeout + ) + response_data.read() + return self.api_client.response_deserialize( + response_data=response_data, + response_types_map=_response_types_map, + ) + + + @validate_call + def update_principal_role_without_preload_content( + self, + principal_role_name: Annotated[StrictStr, Field(description="The principal role name")], + update_principal_role_request: Annotated[UpdatePrincipalRoleRequest, Field(description="The principalRole details to use in the update")], + _request_timeout: Union[ + None, + Annotated[StrictFloat, Field(gt=0)], + Tuple[ + Annotated[StrictFloat, Field(gt=0)], + Annotated[StrictFloat, Field(gt=0)] + ] + ] = None, + _request_auth: Optional[Dict[StrictStr, Any]] = None, + _content_type: Optional[StrictStr] = None, + _headers: Optional[Dict[StrictStr, Any]] = None, + _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, + ) -> RESTResponseType: + """update_principal_role + + Update an existing principalRole + + :param principal_role_name: The principal role name (required) + :type principal_role_name: str + :param update_principal_role_request: The principalRole details to use in the update (required) + :type update_principal_role_request: UpdatePrincipalRoleRequest + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ :type _request_timeout: int, tuple(int, int), optional
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the
+ authentication in the spec for a single request.
+ :type _request_auth: dict, optional
+ :param _content_type: force content-type for the request.
+ :type _content_type: str, optional
+ :param _headers: set to override the headers for a single
+ request; this effectively ignores the headers
+ in the spec for a single request.
+ :type _headers: dict, optional
+ :param _host_index: set to override the host_index for a single
+ request; this effectively ignores the host_index
+ in the spec for a single request.
+ :type _host_index: int, optional
+ :return: Returns the result object.
+ """ # noqa: E501
+
+ _param = self._update_principal_role_serialize(
+ principal_role_name=principal_role_name,
+ update_principal_role_request=update_principal_role_request,
+ _request_auth=_request_auth,
+ _content_type=_content_type,
+ _headers=_headers,
+ _host_index=_host_index
+ )
+
+ _response_types_map: Dict[str, Optional[str]] = {
+ '200': "PrincipalRole",
+ '403': None,
+ '404': None,
+ '409': None,
+ }
+ response_data = self.api_client.call_api(
+ *_param,
+ _request_timeout=_request_timeout
+ )
+ return response_data.response
+
+
+ def _update_principal_role_serialize(
+ self,
+ principal_role_name,
+ update_principal_role_request,
+ _request_auth,
+ _content_type,
+ _headers,
+ _host_index,
+ ) -> RequestSerialized:
+
+ _host = None
+
+ _collection_formats: Dict[str, str] = {
+ }
+
+ _path_params: Dict[str, str] = {}
+ _query_params: List[Tuple[str, str]] = []
+ _header_params: Dict[str, Optional[str]] = _headers or {}
+ _form_params: List[Tuple[str, str]] = []
+ _files: Dict[str, Union[str, bytes]] = {}
+ _body_params: Optional[bytes] = None
+
+ # process the path parameters
+ if principal_role_name is not None:
+ _path_params['principalRoleName'] = principal_role_name
+ # process the query parameters
+
# process the header parameters + # process the form parameters + # process the body parameter + if update_principal_role_request is not None: + _body_params = update_principal_role_request + + + # set the HTTP header `Accept` + if 'Accept' not in _header_params: + _header_params['Accept'] = self.api_client.select_header_accept( + [ + 'application/json' + ] + ) + + # set the HTTP header `Content-Type` + if _content_type: + _header_params['Content-Type'] = _content_type + else: + _default_content_type = ( + self.api_client.select_header_content_type( + [ + 'application/json' + ] + ) + ) + if _default_content_type is not None: + _header_params['Content-Type'] = _default_content_type + + # authentication setting + _auth_settings: List[str] = [ + 'OAuth2' + ] + + return self.api_client.param_serialize( + method='PUT', + resource_path='/principal-roles/{principalRoleName}', + path_params=_path_params, + query_params=_query_params, + header_params=_header_params, + body=_body_params, + post_params=_form_params, + files=_files, + auth_settings=_auth_settings, + collection_formats=_collection_formats, + _host=_host, + _request_auth=_request_auth + ) + + diff --git a/regtests/client/python/polaris/management/api_client.py b/regtests/client/python/polaris/management/api_client.py new file mode 100644 index 0000000000..15e99856ed --- /dev/null +++ b/regtests/client/python/polaris/management/api_client.py @@ -0,0 +1,803 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import datetime +from dateutil.parser import parse +from enum import Enum +import decimal +import json +import mimetypes +import os +import re +import tempfile + +from urllib.parse import quote +from typing import Tuple, Optional, List, Dict, Union +from pydantic import SecretStr + +from polaris.management.configuration import Configuration +from polaris.management.api_response import ApiResponse, T as ApiResponseT +import polaris.management.models +from polaris.management import rest +from polaris.management.exceptions import ( + ApiValueError, + ApiException, + BadRequestException, + UnauthorizedException, + ForbiddenException, + NotFoundException, + ServiceException +) + +RequestSerialized = Tuple[str, str, Dict[str, str], Optional[str], List[str]] + +class ApiClient: + """Generic API client for OpenAPI client library builds. + + OpenAPI generic API client. This client handles the client- + server communication, and is invariant across implementations. Specifics of + the methods and models for each application are generated from the OpenAPI + templates. + + :param configuration: .Configuration object for this client + :param header_name: a header to pass when making calls to the API. + :param header_value: a header value to pass when making calls to + the API. + :param cookie: a cookie to include in the header when making calls + to the API + """ + + PRIMITIVE_TYPES = (float, bool, bytes, str, int) + NATIVE_TYPES_MAPPING = { + 'int': int, + 'long': int, # TODO remove as only py3 is supported? 
+ 'float': float, + 'str': str, + 'bool': bool, + 'date': datetime.date, + 'datetime': datetime.datetime, + 'decimal': decimal.Decimal, + 'object': object, + } + _pool = None + + def __init__( + self, + configuration=None, + header_name=None, + header_value=None, + cookie=None + ) -> None: + # use default configuration if none is provided + if configuration is None: + configuration = Configuration.get_default() + self.configuration = configuration + + self.rest_client = rest.RESTClientObject(configuration) + self.default_headers = {} + if header_name is not None: + self.default_headers[header_name] = header_value + self.cookie = cookie + # Set default User-Agent. + self.user_agent = 'OpenAPI-Generator/1.0.0/python' + self.client_side_validation = configuration.client_side_validation + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, traceback): + pass + + @property + def user_agent(self): + """User agent for this API client""" + return self.default_headers['User-Agent'] + + @user_agent.setter + def user_agent(self, value): + self.default_headers['User-Agent'] = value + + def set_default_header(self, header_name, header_value): + self.default_headers[header_name] = header_value + + + _default = None + + @classmethod + def get_default(cls): + """Return new instance of ApiClient. + + This method returns newly created, based on default constructor, + object of ApiClient class or returns a copy of default + ApiClient. + + :return: The ApiClient object. + """ + if cls._default is None: + cls._default = ApiClient() + return cls._default + + @classmethod + def set_default(cls, default): + """Set default instance of ApiClient. + + It stores default ApiClient. + + :param default: object of ApiClient. 
+ """
+ cls._default = default
+
+ def param_serialize(
+ self,
+ method,
+ resource_path,
+ path_params=None,
+ query_params=None,
+ header_params=None,
+ body=None,
+ post_params=None,
+ files=None, auth_settings=None,
+ collection_formats=None,
+ _host=None,
+ _request_auth=None
+ ) -> RequestSerialized:
+
+ """Builds the HTTP request params needed by the request.
+ :param method: Method to call.
+ :param resource_path: Path to method endpoint.
+ :param path_params: Path parameters in the url.
+ :param query_params: Query parameters in the url.
+ :param header_params: Header parameters to be
+ placed in the request header.
+ :param body: Request body.
+ :param post_params dict: Request post form parameters,
+ for `application/x-www-form-urlencoded`, `multipart/form-data`.
+ :param auth_settings list: Auth Settings names for the request.
+ :param files dict: key -> filename, value -> filepath,
+ for `multipart/form-data`.
+ :param collection_formats: dict of collection formats for path, query,
+ header, and post parameters.
+ :param _request_auth: set to override the auth_settings for a single
+ request; this effectively ignores the authentication
+ in the spec for a single request.
+ :return: tuple of form (path, http_method, query_params, header_params, + body, post_params, files) + """ + + config = self.configuration + + # header parameters + header_params = header_params or {} + header_params.update(self.default_headers) + if self.cookie: + header_params['Cookie'] = self.cookie + if header_params: + header_params = self.sanitize_for_serialization(header_params) + header_params = dict( + self.parameters_to_tuples(header_params,collection_formats) + ) + + # path parameters + if path_params: + path_params = self.sanitize_for_serialization(path_params) + path_params = self.parameters_to_tuples( + path_params, + collection_formats + ) + for k, v in path_params: + # specified safe chars, encode everything + resource_path = resource_path.replace( + '{%s}' % k, + quote(str(v), safe=config.safe_chars_for_path_param) + ) + + # post parameters + if post_params or files: + post_params = post_params if post_params else [] + post_params = self.sanitize_for_serialization(post_params) + post_params = self.parameters_to_tuples( + post_params, + collection_formats + ) + if files: + post_params.extend(self.files_parameters(files)) + + # auth setting + self.update_params_for_auth( + header_params, + query_params, + auth_settings, + resource_path, + method, + body, + request_auth=_request_auth + ) + + # body + if body: + body = self.sanitize_for_serialization(body) + + # request url + if _host is None or self.configuration.ignore_operation_servers: + url = self.configuration.host + resource_path + else: + # use server/host defined in path or operation instead + url = _host + resource_path + + # query parameters + if query_params: + query_params = self.sanitize_for_serialization(query_params) + url_query = self.parameters_to_url_query( + query_params, + collection_formats + ) + url += "?" 
+ url_query + + return method, url, header_params, body, post_params + + + def call_api( + self, + method, + url, + header_params=None, + body=None, + post_params=None, + _request_timeout=None + ) -> rest.RESTResponse: + """Makes the HTTP request (synchronous) + :param method: Method to call. + :param url: Path to method endpoint. + :param header_params: Header parameters to be + placed in the request header. + :param body: Request body. + :param post_params dict: Request post form parameters, + for `application/x-www-form-urlencoded`, `multipart/form-data`. + :param _request_timeout: timeout setting for this request. + :return: RESTResponse + """ + + try: + # perform request and return response + response_data = self.rest_client.request( + method, url, + headers=header_params, + body=body, post_params=post_params, + _request_timeout=_request_timeout + ) + + except ApiException as e: + raise e + + return response_data + + def response_deserialize( + self, + response_data: rest.RESTResponse, + response_types_map: Optional[Dict[str, ApiResponseT]]=None + ) -> ApiResponse[ApiResponseT]: + """Deserializes response into an object. + :param response_data: RESTResponse object to be deserialized. + :param response_types_map: dict of response types. + :return: ApiResponse + """ + + msg = "RESTResponse.read() must be called before passing it to response_deserialize()" + assert response_data.data is not None, msg + + response_type = response_types_map.get(str(response_data.status), None) + if not response_type and isinstance(response_data.status, int) and 100 <= response_data.status <= 599: + # if not found, look for '1XX', '2XX', etc. 
+ response_type = response_types_map.get(str(response_data.status)[0] + "XX", None) + + # deserialize response data + response_text = None + return_data = None + try: + if response_type == "bytearray": + return_data = response_data.data + elif response_type == "file": + return_data = self.__deserialize_file(response_data) + elif response_type is not None: + match = None + content_type = response_data.getheader('content-type') + if content_type is not None: + match = re.search(r"charset=([a-zA-Z\-\d]+)[\s;]?", content_type) + encoding = match.group(1) if match else "utf-8" + response_text = response_data.data.decode(encoding) + return_data = self.deserialize(response_text, response_type, content_type) + finally: + if not 200 <= response_data.status <= 299: + raise ApiException.from_response( + http_resp=response_data, + body=response_text, + data=return_data, + ) + + return ApiResponse( + status_code = response_data.status, + data = return_data, + headers = response_data.getheaders(), + raw_data = response_data.data + ) + + def sanitize_for_serialization(self, obj): + """Builds a JSON POST object. + + If obj is None, return None. + If obj is SecretStr, return obj.get_secret_value() + If obj is str, int, long, float, bool, return directly. + If obj is datetime.datetime, datetime.date + convert to string in iso8601 format. + If obj is decimal.Decimal return string representation. + If obj is list, sanitize each element in the list. + If obj is dict, return the dict. + If obj is OpenAPI model, return the properties dict. + + :param obj: The data to serialize. + :return: The serialized form of data. 
+ """
+ if obj is None:
+ return None
+ elif isinstance(obj, Enum):
+ return obj.value
+ elif isinstance(obj, SecretStr):
+ return obj.get_secret_value()
+ elif isinstance(obj, self.PRIMITIVE_TYPES):
+ return obj
+ elif isinstance(obj, list):
+ return [
+ self.sanitize_for_serialization(sub_obj) for sub_obj in obj
+ ]
+ elif isinstance(obj, tuple):
+ return tuple(
+ self.sanitize_for_serialization(sub_obj) for sub_obj in obj
+ )
+ elif isinstance(obj, (datetime.datetime, datetime.date)):
+ return obj.isoformat()
+ elif isinstance(obj, decimal.Decimal):
+ return str(obj)
+
+ elif isinstance(obj, dict):
+ obj_dict = obj
+ else:
+ # Convert model obj to dict except
+ # attributes `openapi_types`, `attribute_map`
+ # and attributes whose value is not None.
+ # Convert attribute name to json key in
+ # model definition for request.
+ if hasattr(obj, 'to_dict') and callable(getattr(obj, 'to_dict')):
+ obj_dict = obj.to_dict()
+ else:
+ obj_dict = obj.__dict__
+
+ return {
+ key: self.sanitize_for_serialization(val)
+ for key, val in obj_dict.items()
+ }
+
+ def deserialize(self, response_text: str, response_type: str, content_type: Optional[str]):
+ """Deserializes response into an object.
+
+ :param response_text: response text to be deserialized.
+ :param response_type: class literal for
+ deserialized object, or string of class name.
+ :param content_type: content type of response.
+
+ :return: deserialized object.
+ """ + + # fetch data from response object + if content_type is None: + try: + data = json.loads(response_text) + except ValueError: + data = response_text + elif content_type.startswith("application/json"): + if response_text == "": + data = "" + else: + data = json.loads(response_text) + elif content_type.startswith("text/plain"): + data = response_text + else: + raise ApiException( + status=0, + reason="Unsupported content type: {0}".format(content_type) + ) + + return self.__deserialize(data, response_type) + + def __deserialize(self, data, klass): + """Deserializes dict, list, str into an object. + + :param data: dict, list or str. + :param klass: class literal, or string of class name. + + :return: object. + """ + if data is None: + return None + + if isinstance(klass, str): + if klass.startswith('List['): + m = re.match(r'List\[(.*)]', klass) + assert m is not None, "Malformed List type definition" + sub_kls = m.group(1) + return [self.__deserialize(sub_data, sub_kls) + for sub_data in data] + + if klass.startswith('Dict['): + m = re.match(r'Dict\[([^,]*), (.*)]', klass) + assert m is not None, "Malformed Dict type definition" + sub_kls = m.group(2) + return {k: self.__deserialize(v, sub_kls) + for k, v in data.items()} + + # convert str to class + if klass in self.NATIVE_TYPES_MAPPING: + klass = self.NATIVE_TYPES_MAPPING[klass] + else: + klass = getattr(polaris.management.models, klass) + + if klass in self.PRIMITIVE_TYPES: + return self.__deserialize_primitive(data, klass) + elif klass == object: + return self.__deserialize_object(data) + elif klass == datetime.date: + return self.__deserialize_date(data) + elif klass == datetime.datetime: + return self.__deserialize_datetime(data) + elif klass == decimal.Decimal: + return decimal.Decimal(data) + elif issubclass(klass, Enum): + return self.__deserialize_enum(data, klass) + else: + return self.__deserialize_model(data, klass) + + def parameters_to_tuples(self, params, collection_formats): + """Get 
parameters as list of tuples, formatting collections. + + :param params: Parameters as dict or list of two-tuples + :param dict collection_formats: Parameter collection formats + :return: Parameters as list of tuples, collections formatted + """ + new_params: List[Tuple[str, str]] = [] + if collection_formats is None: + collection_formats = {} + for k, v in params.items() if isinstance(params, dict) else params: + if k in collection_formats: + collection_format = collection_formats[k] + if collection_format == 'multi': + new_params.extend((k, value) for value in v) + else: + if collection_format == 'ssv': + delimiter = ' ' + elif collection_format == 'tsv': + delimiter = '\t' + elif collection_format == 'pipes': + delimiter = '|' + else: # csv is the default + delimiter = ',' + new_params.append( + (k, delimiter.join(str(value) for value in v))) + else: + new_params.append((k, v)) + return new_params + + def parameters_to_url_query(self, params, collection_formats): + """Get parameters as list of tuples, formatting collections. + + :param params: Parameters as dict or list of two-tuples + :param dict collection_formats: Parameter collection formats + :return: URL query string (e.g. 
a=Hello%20World&b=123) + """ + new_params: List[Tuple[str, str]] = [] + if collection_formats is None: + collection_formats = {} + for k, v in params.items() if isinstance(params, dict) else params: + if isinstance(v, bool): + v = str(v).lower() + if isinstance(v, (int, float)): + v = str(v) + if isinstance(v, dict): + v = json.dumps(v) + + if k in collection_formats: + collection_format = collection_formats[k] + if collection_format == 'multi': + new_params.extend((k, str(value)) for value in v) + else: + if collection_format == 'ssv': + delimiter = ' ' + elif collection_format == 'tsv': + delimiter = '\t' + elif collection_format == 'pipes': + delimiter = '|' + else: # csv is the default + delimiter = ',' + new_params.append( + (k, delimiter.join(quote(str(value)) for value in v)) + ) + else: + new_params.append((k, quote(str(v)))) + + return "&".join(["=".join(map(str, item)) for item in new_params]) + + def files_parameters(self, files: Dict[str, Union[str, bytes]]): + """Builds form parameters. + + :param files: File parameters. + :return: Form parameters with files. + """ + params = [] + for k, v in files.items(): + if isinstance(v, str): + with open(v, 'rb') as f: + filename = os.path.basename(f.name) + filedata = f.read() + elif isinstance(v, bytes): + filename = k + filedata = v + else: + raise ValueError("Unsupported file value") + mimetype = ( + mimetypes.guess_type(filename)[0] + or 'application/octet-stream' + ) + params.append( + tuple([k, tuple([filename, filedata, mimetype])]) + ) + return params + + def select_header_accept(self, accepts: List[str]) -> Optional[str]: + """Returns `Accept` based on an array of accepts provided. + + :param accepts: List of headers. + :return: Accept (e.g. application/json). 
+ """ + if not accepts: + return None + + for accept in accepts: + if re.search('json', accept, re.IGNORECASE): + return accept + + return accepts[0] + + def select_header_content_type(self, content_types): + """Returns `Content-Type` based on an array of content_types provided. + + :param content_types: List of content-types. + :return: Content-Type (e.g. application/json). + """ + if not content_types: + return None + + for content_type in content_types: + if re.search('json', content_type, re.IGNORECASE): + return content_type + + return content_types[0] + + def update_params_for_auth( + self, + headers, + queries, + auth_settings, + resource_path, + method, + body, + request_auth=None + ) -> None: + """Updates header and query params based on authentication setting. + + :param headers: Header parameters dict to be updated. + :param queries: Query parameters tuple list to be updated. + :param auth_settings: Authentication setting identifiers list. + :resource_path: A string representation of the HTTP request resource path. + :method: A string representation of the HTTP request method. + :body: A object representing the body of the HTTP request. + The object type is the return value of sanitize_for_serialization(). + :param request_auth: if set, the provided settings will + override the token in the configuration. + """ + if not auth_settings: + return + + if request_auth: + self._apply_auth_params( + headers, + queries, + resource_path, + method, + body, + request_auth + ) + else: + for auth in auth_settings: + auth_setting = self.configuration.auth_settings().get(auth) + if auth_setting: + self._apply_auth_params( + headers, + queries, + resource_path, + method, + body, + auth_setting + ) + + def _apply_auth_params( + self, + headers, + queries, + resource_path, + method, + body, + auth_setting + ) -> None: + """Updates the request parameters based on a single auth_setting + + :param headers: Header parameters dict to be updated. 
+ :param queries: Query parameters tuple list to be updated. + :resource_path: A string representation of the HTTP request resource path. + :method: A string representation of the HTTP request method. + :body: A object representing the body of the HTTP request. + The object type is the return value of sanitize_for_serialization(). + :param auth_setting: auth settings for the endpoint + """ + if auth_setting['in'] == 'cookie': + headers['Cookie'] = auth_setting['value'] + elif auth_setting['in'] == 'header': + if auth_setting['type'] != 'http-signature': + headers[auth_setting['key']] = auth_setting['value'] + elif auth_setting['in'] == 'query': + queries.append((auth_setting['key'], auth_setting['value'])) + else: + raise ApiValueError( + 'Authentication token must be in `query` or `header`' + ) + + def __deserialize_file(self, response): + """Deserializes body to file + + Saves response body into a file in a temporary folder, + using the filename from the `Content-Disposition` header if provided. + + handle file downloading + save response body into a tmp file and return the instance + + :param response: RESTResponse. + :return: file path. + """ + fd, path = tempfile.mkstemp(dir=self.configuration.temp_folder_path) + os.close(fd) + os.remove(path) + + content_disposition = response.getheader("Content-Disposition") + if content_disposition: + m = re.search( + r'filename=[\'"]?([^\'"\s]+)[\'"]?', + content_disposition + ) + assert m is not None, "Unexpected 'content-disposition' header value" + filename = m.group(1) + path = os.path.join(os.path.dirname(path), filename) + + with open(path, "wb") as f: + f.write(response.data) + + return path + + def __deserialize_primitive(self, data, klass): + """Deserializes string to primitive type. + + :param data: str. + :param klass: class literal. + + :return: int, long, float, str, bool. 
+ """ + try: + return klass(data) + except UnicodeEncodeError: + return str(data) + except TypeError: + return data + + def __deserialize_object(self, value): + """Return an original value. + + :return: object. + """ + return value + + def __deserialize_date(self, string): + """Deserializes string to date. + + :param string: str. + :return: date. + """ + try: + return parse(string).date() + except ImportError: + return string + except ValueError: + raise rest.ApiException( + status=0, + reason="Failed to parse `{0}` as date object".format(string) + ) + + def __deserialize_datetime(self, string): + """Deserializes string to datetime. + + The string should be in iso8601 datetime format. + + :param string: str. + :return: datetime. + """ + try: + return parse(string) + except ImportError: + return string + except ValueError: + raise rest.ApiException( + status=0, + reason=( + "Failed to parse `{0}` as datetime object" + .format(string) + ) + ) + + def __deserialize_enum(self, data, klass): + """Deserializes primitive type to enum. + + :param data: primitive type. + :param klass: class literal. + :return: enum value. + """ + try: + return klass(data) + except ValueError: + raise rest.ApiException( + status=0, + reason=( + "Failed to parse `{0}` as `{1}`" + .format(data, klass) + ) + ) + + def __deserialize_model(self, data, klass): + """Deserializes list or dict to model. + + :param data: dict, list. + :param klass: class literal. + :return: model object. + """ + + return klass.from_dict(data) diff --git a/regtests/client/python/polaris/management/api_response.py b/regtests/client/python/polaris/management/api_response.py new file mode 100644 index 0000000000..e3a3bc42e0 --- /dev/null +++ b/regtests/client/python/polaris/management/api_response.py @@ -0,0 +1,37 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +"""API response object.""" + +from __future__ import annotations +from typing import Optional, Generic, Mapping, TypeVar +from pydantic import Field, StrictInt, StrictBytes, BaseModel + +T = TypeVar("T") + +class ApiResponse(BaseModel, Generic[T]): + """ + API response object + """ + + status_code: StrictInt = Field(description="HTTP status code") + headers: Optional[Mapping[str, str]] = Field(None, description="HTTP headers") + data: T = Field(description="Deserialized data given the data type") + raw_data: StrictBytes = Field(description="Raw data (HTTP response body)") + + model_config = { + "arbitrary_types_allowed": True + } diff --git a/regtests/client/python/polaris/management/configuration.py b/regtests/client/python/polaris/management/configuration.py new file mode 100644 index 0000000000..08c49869bf --- /dev/null +++ b/regtests/client/python/polaris/management/configuration.py @@ -0,0 +1,483 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import copy +import logging +from logging import FileHandler +import multiprocessing +import sys +from typing import Optional +import urllib3 + +import http.client as httplib + +JSON_SCHEMA_VALIDATION_KEYWORDS = { + 'multipleOf', 'maximum', 'exclusiveMaximum', + 'minimum', 'exclusiveMinimum', 'maxLength', + 'minLength', 'pattern', 'maxItems', 'minItems' +} + +class Configuration: + """This class contains various settings of the API client. + + :param host: Base url. + :param ignore_operation_servers + Boolean to ignore operation servers for the API client. + Config will use `host` as the base url regardless of the operation servers. + :param api_key: Dict to store API key(s). + Each entry in the dict specifies an API key. + The dict key is the name of the security scheme in the OAS specification. + The dict value is the API key secret. + :param api_key_prefix: Dict to store API prefix (e.g. Bearer). + The dict key is the name of the security scheme in the OAS specification. + The dict value is an API key prefix when generating the auth data. + :param username: Username for HTTP basic authentication. + :param password: Password for HTTP basic authentication. + :param access_token: Access token. + :param server_index: Index to servers configuration. + :param server_variables: Mapping with string values to replace variables in + templated server configuration. The validation of enums is performed for + variables with defined enum values before. + :param server_operation_index: Mapping from operation ID to an index to server + configuration. 
+ :param server_operation_variables: Mapping from operation ID to a mapping with + string values to replace variables in templated server configuration. + The validation of enums is performed for variables with defined enum + values before. + :param ssl_ca_cert: str - the path to a file of concatenated CA certificates + in PEM format. + :param retries: Number of retries for API requests. + + :Example: + """ + + _default = None + + def __init__(self, host=None, + api_key=None, api_key_prefix=None, + username=None, password=None, + access_token=None, + server_index=None, server_variables=None, + server_operation_index=None, server_operation_variables=None, + ignore_operation_servers=False, + ssl_ca_cert=None, + retries=None, + *, + debug: Optional[bool] = None + ) -> None: + """Constructor + """ + self._base_path = "https://localhost/api/management/v1" if host is None else host + """Default Base url + """ + self.server_index = 0 if server_index is None and host is None else server_index + self.server_operation_index = server_operation_index or {} + """Default server index + """ + self.server_variables = server_variables or {} + self.server_operation_variables = server_operation_variables or {} + """Default server variables + """ + self.ignore_operation_servers = ignore_operation_servers + """Ignore operation servers + """ + self.temp_folder_path = None + """Temp file folder for downloading files + """ + # Authentication Settings + self.api_key = {} + if api_key: + self.api_key = api_key + """dict to store API key(s) + """ + self.api_key_prefix = {} + if api_key_prefix: + self.api_key_prefix = api_key_prefix + """dict to store API prefix (e.g. 
Bearer) + """ + self.refresh_api_key_hook = None + """function hook to refresh API key if expired + """ + self.username = username + """Username for HTTP basic authentication + """ + self.password = password + """Password for HTTP basic authentication + """ + self.access_token = access_token + """Access token + """ + self.logger = {} + """Logging Settings + """ + self.logger["package_logger"] = logging.getLogger("polaris.management") + self.logger["urllib3_logger"] = logging.getLogger("urllib3") + self.logger_format = '%(asctime)s %(levelname)s %(message)s' + """Log format + """ + self.logger_stream_handler = None + """Log stream handler + """ + self.logger_file_handler: Optional[FileHandler] = None + """Log file handler + """ + self.logger_file = None + """Debug file location + """ + if debug is not None: + self.debug = debug + else: + self.__debug = False + """Debug switch + """ + + self.verify_ssl = True + """SSL/TLS verification + Set this to false to skip verifying SSL certificate when calling API + from https server. + """ + self.ssl_ca_cert = ssl_ca_cert + """Set this to customize the certificate file to verify the peer. + """ + self.cert_file = None + """client certificate file + """ + self.key_file = None + """client key file + """ + self.assert_hostname = None + """Set this to True/False to enable/disable SSL hostname verification. + """ + self.tls_server_name = None + """SSL/TLS Server Name Indication (SNI) + Set this to the SNI value expected by the server. + """ + + self.connection_pool_maxsize = multiprocessing.cpu_count() * 5 + """urllib3 connection pool's maximum number of connections saved + per pool. urllib3 uses 1 connection as default value, but this is + not the best value when you are making a lot of possibly parallel + requests to the same host, which is often the case here. + cpu_count * 5 is used as default value to increase performance. 
+ """ + + self.proxy: Optional[str] = None + """Proxy URL + """ + self.proxy_headers = None + """Proxy headers + """ + self.safe_chars_for_path_param = '' + """Safe chars for path_param + """ + self.retries = retries + """Adding retries to override urllib3 default value 3 + """ + # Enable client side validation + self.client_side_validation = True + + self.socket_options = None + """Options to pass down to the underlying urllib3 socket + """ + + self.datetime_format = "%Y-%m-%dT%H:%M:%S.%f%z" + """datetime format + """ + + self.date_format = "%Y-%m-%d" + """date format + """ + + def __deepcopy__(self, memo): + cls = self.__class__ + result = cls.__new__(cls) + memo[id(self)] = result + for k, v in self.__dict__.items(): + if k not in ('logger', 'logger_file_handler'): + setattr(result, k, copy.deepcopy(v, memo)) + # shallow copy of loggers + result.logger = copy.copy(self.logger) + # use setters to configure loggers + result.logger_file = self.logger_file + result.debug = self.debug + return result + + def __setattr__(self, name, value): + object.__setattr__(self, name, value) + + @classmethod + def set_default(cls, default): + """Set default instance of configuration. + + It stores default configuration, which can be + returned by get_default_copy method. + + :param default: object of Configuration + """ + cls._default = default + + @classmethod + def get_default_copy(cls): + """Deprecated. Please use `get_default` instead. + + Deprecated. Please use `get_default` instead. + + :return: The configuration object. + """ + return cls.get_default() + + @classmethod + def get_default(cls): + """Return the default configuration. + + This method returns newly created, based on default constructor, + object of Configuration class or returns a copy of default + configuration. + + :return: The configuration object. + """ + if cls._default is None: + cls._default = Configuration() + return cls._default + + @property + def logger_file(self): + """The logger file. 
+ + If the logger_file is None, then add stream handler and remove file + handler. Otherwise, add file handler and remove stream handler. + + :param value: The logger_file path. + :type: str + """ + return self.__logger_file + + @logger_file.setter + def logger_file(self, value): + """The logger file. + + If the logger_file is None, then add stream handler and remove file + handler. Otherwise, add file handler and remove stream handler. + + :param value: The logger_file path. + :type: str + """ + self.__logger_file = value + if self.__logger_file: + # If set logging file, + # then add file handler and remove stream handler. + self.logger_file_handler = logging.FileHandler(self.__logger_file) + self.logger_file_handler.setFormatter(self.logger_formatter) + for _, logger in self.logger.items(): + logger.addHandler(self.logger_file_handler) + + @property + def debug(self): + """Debug status + + :param value: The debug status, True or False. + :type: bool + """ + return self.__debug + + @debug.setter + def debug(self, value): + """Debug status + + :param value: The debug status, True or False. + :type: bool + """ + self.__debug = value + if self.__debug: + # if debug status is True, turn on debug logging + for _, logger in self.logger.items(): + logger.setLevel(logging.DEBUG) + # turn on httplib debug + httplib.HTTPConnection.debuglevel = 1 + else: + # if debug status is False, turn off debug logging, + # setting log level to default `logging.WARNING` + for _, logger in self.logger.items(): + logger.setLevel(logging.WARNING) + # turn off httplib debug + httplib.HTTPConnection.debuglevel = 0 + + @property + def logger_format(self): + """The logger format. + + The logger_formatter will be updated when sets logger_format. + + :param value: The format string. + :type: str + """ + return self.__logger_format + + @logger_format.setter + def logger_format(self, value): + """The logger format. + + The logger_formatter will be updated when sets logger_format. 
+ + :param value: The format string. + :type: str + """ + self.__logger_format = value + self.logger_formatter = logging.Formatter(self.__logger_format) + + def get_api_key_with_prefix(self, identifier, alias=None): + """Gets API key (with prefix if set). + + :param identifier: The identifier of apiKey. + :param alias: The alternative identifier of apiKey. + :return: The token for api key authentication. + """ + if self.refresh_api_key_hook is not None: + self.refresh_api_key_hook(self) + key = self.api_key.get(identifier, self.api_key.get(alias) if alias is not None else None) + if key: + prefix = self.api_key_prefix.get(identifier) + if prefix: + return "%s %s" % (prefix, key) + else: + return key + + def get_basic_auth_token(self): + """Gets HTTP basic authentication header (string). + + :return: The token for basic HTTP authentication. + """ + username = "" + if self.username is not None: + username = self.username + password = "" + if self.password is not None: + password = self.password + return urllib3.util.make_headers( + basic_auth=username + ':' + password + ).get('authorization') + + def auth_settings(self): + """Gets Auth Settings dict for api client. + + :return: The Auth Settings information dict. + """ + auth = {} + if self.access_token is not None: + auth['OAuth2'] = { + 'type': 'oauth2', + 'in': 'header', + 'key': 'Authorization', + 'value': 'Bearer ' + self.access_token + } + return auth + + def to_debug_report(self): + """Gets the essential information for debugging. + + :return: The report for debugging. 
+ """ + return "Python SDK Debug Report:\n"\ + "OS: {env}\n"\ + "Python Version: {pyversion}\n"\ + "Version of the API: 0.0.1\n"\ + "SDK Package Version: 1.0.0".\ + format(env=sys.platform, pyversion=sys.version) + + def get_host_settings(self): + """Gets an array of host settings + + :return: An array of host settings + """ + return [ + { + 'url': "{scheme}://{host}/api/management/v1", + 'description': "Server URL when the port can be inferred from the scheme", + 'variables': { + 'scheme': { + 'description': "The scheme of the URI, either http or https.", + 'default_value': "https", + }, + 'host': { + 'description': "The host address for the specified server", + 'default_value': "localhost", + } + } + } + ] + + def get_host_from_settings(self, index, variables=None, servers=None): + """Gets host URL based on the index and variables + :param index: array index of the host settings + :param variables: hash of variable and the corresponding value + :param servers: an array of host settings or None + :return: URL based on host settings + """ + if index is None: + return self._base_path + + variables = {} if variables is None else variables + servers = self.get_host_settings() if servers is None else servers + + try: + server = servers[index] + except IndexError: + raise ValueError( + "Invalid index {0} when selecting the host settings. " + "Must be less than {1}".format(index, len(servers))) + + url = server['url'] + + # go through variables and replace placeholders + for variable_name, variable in server.get('variables', {}).items(): + used_value = variables.get( + variable_name, variable['default_value']) + + if 'enum_values' in variable \ + and used_value not in variable['enum_values']: + raise ValueError( + "The variable `{0}` in the host URL has invalid value " + "{1}. 
Must be {2}.".format( + variable_name, variables[variable_name], + variable['enum_values'])) + + url = url.replace("{" + variable_name + "}", used_value) + + return url + + @property + def host(self): + """Return generated host.""" + return self.get_host_from_settings(self.server_index, variables=self.server_variables) + + @host.setter + def host(self, value): + """Fix base path.""" + self._base_path = value + self.server_index = None diff --git a/regtests/client/python/polaris/management/exceptions.py b/regtests/client/python/polaris/management/exceptions.py new file mode 100644 index 0000000000..fb71e432c1 --- /dev/null +++ b/regtests/client/python/polaris/management/exceptions.py @@ -0,0 +1,214 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + +from typing import Any, Optional +from typing_extensions import Self + +class OpenApiException(Exception): + """The base exception class for all OpenAPIExceptions""" + + +class ApiTypeError(OpenApiException, TypeError): + def __init__(self, msg, path_to_item=None, valid_classes=None, + key_type=None) -> None: + """ Raises an exception for TypeErrors + + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (list): a list of keys an indices to get to the + current_item + None if unset + valid_classes (tuple): the primitive classes that current item + should be an instance of + None if unset + key_type (bool): False if our value is a value in a dict + True if it is a key in a dict + False if our item is an item in a list + None if unset + """ + self.path_to_item = path_to_item + self.valid_classes = valid_classes + self.key_type = key_type + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiTypeError, self).__init__(full_msg) + + +class ApiValueError(OpenApiException, ValueError): + def __init__(self, msg, path_to_item=None) -> None: + """ + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (list) the path to the exception in the + received_data dict. None if unset + """ + + self.path_to_item = path_to_item + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiValueError, self).__init__(full_msg) + + +class ApiAttributeError(OpenApiException, AttributeError): + def __init__(self, msg, path_to_item=None) -> None: + """ + Raised when an attribute reference or assignment fails. 
+ + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (None/list) the path to the exception in the + received_data dict + """ + self.path_to_item = path_to_item + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiAttributeError, self).__init__(full_msg) + + +class ApiKeyError(OpenApiException, KeyError): + def __init__(self, msg, path_to_item=None) -> None: + """ + Args: + msg (str): the exception message + + Keyword Args: + path_to_item (None/list) the path to the exception in the + received_data dict + """ + self.path_to_item = path_to_item + full_msg = msg + if path_to_item: + full_msg = "{0} at {1}".format(msg, render_path(path_to_item)) + super(ApiKeyError, self).__init__(full_msg) + + +class ApiException(OpenApiException): + + def __init__( + self, + status=None, + reason=None, + http_resp=None, + *, + body: Optional[str] = None, + data: Optional[Any] = None, + ) -> None: + self.status = status + self.reason = reason + self.body = body + self.data = data + self.headers = None + + if http_resp: + if self.status is None: + self.status = http_resp.status + if self.reason is None: + self.reason = http_resp.reason + if self.body is None: + try: + self.body = http_resp.data.decode('utf-8') + except Exception: + pass + self.headers = http_resp.getheaders() + + @classmethod + def from_response( + cls, + *, + http_resp, + body: Optional[str], + data: Optional[Any], + ) -> Self: + if http_resp.status == 400: + raise BadRequestException(http_resp=http_resp, body=body, data=data) + + if http_resp.status == 401: + raise UnauthorizedException(http_resp=http_resp, body=body, data=data) + + if http_resp.status == 403: + raise ForbiddenException(http_resp=http_resp, body=body, data=data) + + if http_resp.status == 404: + raise NotFoundException(http_resp=http_resp, body=body, data=data) + + if 500 <= http_resp.status <= 599: + raise ServiceException(http_resp=http_resp, body=body, data=data) + 
raise ApiException(http_resp=http_resp, body=body, data=data) + + def __str__(self): + """Custom error messages for exception""" + error_message = "({0})\n"\ + "Reason: {1}\n".format(self.status, self.reason) + if self.headers: + error_message += "HTTP response headers: {0}\n".format( + self.headers) + + if self.data or self.body: + error_message += "HTTP response body: {0}\n".format(self.data or self.body) + + return error_message + + +class BadRequestException(ApiException): + pass + + +class NotFoundException(ApiException): + pass + + +class UnauthorizedException(ApiException): + pass + + +class ForbiddenException(ApiException): + pass + + +class ServiceException(ApiException): + pass + + +def render_path(path_to_item): + """Returns a string representation of a path""" + result = "" + for pth in path_to_item: + if isinstance(pth, int): + result += "[{0}]".format(pth) + else: + result += "['{0}']".format(pth) + return result diff --git a/regtests/client/python/polaris/management/models/__init__.py b/regtests/client/python/polaris/management/models/__init__.py new file mode 100644 index 0000000000..6c61f415df --- /dev/null +++ b/regtests/client/python/polaris/management/models/__init__.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
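The status-code dispatch in `ApiException.from_response` above maps each HTTP error class to a dedicated exception type. A minimal standalone sketch of that mapping follows; the `exception_class_for_status` helper and the bare subclass definitions are illustrative stand-ins, not part of the generated module:

```python
# Stand-ins mirroring the exception hierarchy defined in exceptions.py.
class ApiException(Exception):
    pass

class BadRequestException(ApiException):
    pass

class UnauthorizedException(ApiException):
    pass

class ForbiddenException(ApiException):
    pass

class NotFoundException(ApiException):
    pass

class ServiceException(ApiException):
    pass

def exception_class_for_status(status):
    """Mirror the branches of ApiException.from_response: specific 4xx codes
    map to dedicated subclasses, any 5xx to ServiceException, everything
    else to the base ApiException."""
    mapping = {
        400: BadRequestException,
        401: UnauthorizedException,
        403: ForbiddenException,
        404: NotFoundException,
    }
    if status in mapping:
        return mapping[status]
    if 500 <= status <= 599:
        return ServiceException
    return ApiException

assert exception_class_for_status(404) is NotFoundException
assert exception_class_for_status(503) is ServiceException
```

In the generated client, `from_response` raises the chosen subclass directly rather than returning it, so callers can catch, say, `NotFoundException` separately from `ServiceException`.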
+# +# coding: utf-8 + +# flake8: noqa +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +# import models into model package +from polaris.management.models.add_grant_request import AddGrantRequest +from polaris.management.models.aws_storage_config_info import AwsStorageConfigInfo +from polaris.management.models.azure_storage_config_info import AzureStorageConfigInfo +from polaris.management.models.catalog import Catalog +from polaris.management.models.catalog_grant import CatalogGrant +from polaris.management.models.catalog_privilege import CatalogPrivilege +from polaris.management.models.catalog_properties import CatalogProperties +from polaris.management.models.catalog_role import CatalogRole +from polaris.management.models.catalog_roles import CatalogRoles +from polaris.management.models.catalogs import Catalogs +from polaris.management.models.create_catalog_request import CreateCatalogRequest +from polaris.management.models.create_catalog_role_request import CreateCatalogRoleRequest +from polaris.management.models.create_principal_request import CreatePrincipalRequest +from polaris.management.models.create_principal_role_request import CreatePrincipalRoleRequest +from polaris.management.models.external_catalog import ExternalCatalog +from polaris.management.models.file_storage_config_info import FileStorageConfigInfo +from polaris.management.models.gcp_storage_config_info import GcpStorageConfigInfo +from polaris.management.models.grant_catalog_role_request import GrantCatalogRoleRequest +from polaris.management.models.grant_principal_role_request import GrantPrincipalRoleRequest +from polaris.management.models.grant_resource import GrantResource +from polaris.management.models.grant_resources import 
GrantResources +from polaris.management.models.namespace_grant import NamespaceGrant +from polaris.management.models.namespace_privilege import NamespacePrivilege +from polaris.management.models.polaris_catalog import PolarisCatalog +from polaris.management.models.principal import Principal +from polaris.management.models.principal_role import PrincipalRole +from polaris.management.models.principal_roles import PrincipalRoles +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials +from polaris.management.models.principal_with_credentials_credentials import PrincipalWithCredentialsCredentials +from polaris.management.models.principals import Principals +from polaris.management.models.revoke_grant_request import RevokeGrantRequest +from polaris.management.models.storage_config_info import StorageConfigInfo +from polaris.management.models.table_grant import TableGrant +from polaris.management.models.table_privilege import TablePrivilege +from polaris.management.models.update_catalog_request import UpdateCatalogRequest +from polaris.management.models.update_catalog_role_request import UpdateCatalogRoleRequest +from polaris.management.models.update_principal_request import UpdatePrincipalRequest +from polaris.management.models.update_principal_role_request import UpdatePrincipalRoleRequest +from polaris.management.models.view_grant import ViewGrant +from polaris.management.models.view_privilege import ViewPrivilege diff --git a/regtests/client/python/polaris/management/models/add_grant_request.py b/regtests/client/python/polaris/management/models/add_grant_request.py new file mode 100644 index 0000000000..b83f88754b --- /dev/null +++ b/regtests/client/python/polaris/management/models/add_grant_request.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.grant_resource import GrantResource +from typing import Optional, Set +from typing_extensions import Self + +class AddGrantRequest(BaseModel): + """ + AddGrantRequest + """ # noqa: E501 + grant: Optional[GrantResource] = None + __properties: ClassVar[List[str]] = ["grant"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of AddGrantRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation 
of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of grant + if self.grant: + _dict['grant'] = self.grant.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of AddGrantRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "grant": GrantResource.from_dict(obj["grant"]) if obj.get("grant") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/aws_storage_config_info.py b/regtests/client/python/polaris/management/models/aws_storage_config_info.py new file mode 100644 index 0000000000..b7441165da --- /dev/null +++ b/regtests/client/python/polaris/management/models/aws_storage_config_info.py @@ -0,0 +1,108 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+# coding: utf-8
+
+"""
+    Polaris Management Service
+
+    Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501
+
+from __future__ import annotations
+import pprint
+import re  # noqa: F401
+import json
+
+from pydantic import ConfigDict, Field, StrictStr
+from typing import Any, ClassVar, Dict, List, Optional
+from polaris.management.models.storage_config_info import StorageConfigInfo
+from typing import Optional, Set
+from typing_extensions import Self
+
+
+class AwsStorageConfigInfo(StorageConfigInfo):
+    """
+    aws storage configuration info
+    """ # noqa: E501
+    role_arn: StrictStr = Field(description="the aws role arn that grants privileges on the S3 buckets",
+                                alias="roleArn")
+    external_id: Optional[StrictStr] = Field(default=None,
+                                             description="an optional external id used to establish a trust relationship with AWS in the trust policy",
+                                             alias="externalId")
+    user_arn: Optional[StrictStr] = Field(default=None, description="the aws user arn used to assume the aws role",
+                                          alias="userArn")
+    __properties: ClassVar[List[str]] = ["storageType", "allowedLocations", "roleArn", "externalId", "userArn"]
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of AwsStorageConfigInfo from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of AwsStorageConfigInfo from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        _obj = cls.model_validate({
+            "storageType": obj.get("storageType"),
+            "allowedLocations": obj.get("allowedLocations"),
+            "roleArn": obj.get("roleArn"),
+            "externalId": obj.get("externalId"),
+            "userArn": obj.get("userArn")
+        })
+        return _obj
diff --git a/regtests/client/python/polaris/management/models/azure_storage_config_info.py b/regtests/client/python/polaris/management/models/azure_storage_config_info.py
new file mode 100644
index 0000000000..f03320e444
--- /dev/null
+++ b/regtests/client/python/polaris/management/models/azure_storage_config_info.py
@@ -0,0 +1,106 @@
+#
+# Copyright (c) 2024 Snowflake Computing Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# coding: utf-8
+
+"""
+    Polaris Management Service
+
+    Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re  # noqa: F401
+import json
+
+from pydantic import ConfigDict, Field, StrictStr
+from typing import Any, ClassVar, Dict, List, Optional
+from polaris.management.models.storage_config_info import StorageConfigInfo
+from typing import Optional, Set
+from typing_extensions import Self
+
+class AzureStorageConfigInfo(StorageConfigInfo):
+    """
+    azure storage configuration info
+    """ # noqa: E501
+    tenant_id: StrictStr = Field(description="the tenant id that the storage accounts belong to", alias="tenantId")
+    multi_tenant_app_name: Optional[StrictStr] = Field(default=None, description="the name of the azure client application", alias="multiTenantAppName")
+    consent_url: Optional[StrictStr] = Field(default=None, description="URL to the Azure permissions request page", alias="consentUrl")
+    __properties: ClassVar[List[str]] = ["storageType", "allowedLocations", "tenantId", "multiTenantAppName", "consentUrl"]
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of AzureStorageConfigInfo from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of AzureStorageConfigInfo from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        _obj = cls.model_validate({
+            "storageType": obj.get("storageType"),
+            "allowedLocations": obj.get("allowedLocations"),
+            "tenantId": obj.get("tenantId"),
+            "multiTenantAppName": obj.get("multiTenantAppName"),
+            "consentUrl": obj.get("consentUrl")
+        })
+        return _obj
+
+
diff --git a/regtests/client/python/polaris/management/models/catalog.py b/regtests/client/python/polaris/management/models/catalog.py
new file mode 100644
index 0000000000..66fa7242b4
--- /dev/null
+++ b/regtests/client/python/polaris/management/models/catalog.py
@@ -0,0 +1,146 @@
+#
+# Copyright (c) 2024 Snowflake Computing Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# coding: utf-8
+
+"""
+    Polaris Management Service
+
+    Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re  # noqa: F401
+import json
+
+from importlib import import_module
+from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr, field_validator
+from typing import Any, ClassVar, Dict, List, Optional, Union
+from polaris.management.models.catalog_properties import CatalogProperties
+from polaris.management.models.storage_config_info import StorageConfigInfo
+from typing import Optional, Set
+from typing_extensions import Self
+
+from typing import TYPE_CHECKING
+if TYPE_CHECKING:
+    from polaris.management.models.external_catalog import ExternalCatalog
+    from polaris.management.models.polaris_catalog import PolarisCatalog
+
+class Catalog(BaseModel):
+    """
+    A catalog object. A catalog may be internal or external. External catalogs are managed entirely by an external catalog interface.
Third party catalogs may be other Iceberg REST implementations or other services with their own proprietary APIs + """ # noqa: E501 + type: StrictStr = Field(description="the type of catalog - internal or external") + name: StrictStr = Field(description="The name of the catalog") + properties: CatalogProperties + create_timestamp: Optional[StrictInt] = Field(default=None, description="The creation time represented as unix epoch timestamp in milliseconds", alias="createTimestamp") + last_update_timestamp: Optional[StrictInt] = Field(default=None, description="The last update time represented as unix epoch timestamp in milliseconds", alias="lastUpdateTimestamp") + entity_version: Optional[StrictInt] = Field(default=None, description="The version of the catalog object used to determine if the catalog metadata has changed", alias="entityVersion") + storage_config_info: StorageConfigInfo = Field(alias="storageConfigInfo") + __properties: ClassVar[List[str]] = ["type", "name", "properties", "createTimestamp", "lastUpdateTimestamp", "entityVersion", "storageConfigInfo"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['INTERNAL', 'EXTERNAL']): + raise ValueError("must be one of enum values ('INTERNAL', 'EXTERNAL')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'type' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'EXTERNAL': 'ExternalCatalog','INTERNAL': 'PolarisCatalog' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return 
cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Union[ExternalCatalog, PolarisCatalog]]: + """Create an instance of Catalog from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of properties + if self.properties: + _dict['properties'] = self.properties.to_dict() + # override the default output from pydantic by calling `to_dict()` of storage_config_info + if self.storage_config_info: + _dict['storageConfigInfo'] = self.storage_config_info.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[ExternalCatalog, PolarisCatalog]]: + """Create an instance of Catalog from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'ExternalCatalog': + return import_module("polaris.management.models.external_catalog").ExternalCatalog.from_dict(obj) + if object_type == 'PolarisCatalog': + return import_module("polaris.management.models.polaris_catalog").PolarisCatalog.from_dict(obj) + + raise ValueError("Catalog failed to lookup discriminator value from " + + json.dumps(obj) + ". Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/management/models/catalog_grant.py b/regtests/client/python/polaris/management/models/catalog_grant.py new file mode 100644 index 0000000000..628b32b9e7 --- /dev/null +++ b/regtests/client/python/polaris/management/models/catalog_grant.py @@ -0,0 +1,105 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.catalog_privilege import CatalogPrivilege +from polaris.management.models.grant_resource import GrantResource +from typing import Optional, Set +from typing_extensions import Self + +class CatalogGrant(GrantResource): + """ + CatalogGrant + """ # noqa: E501 + privilege: CatalogPrivilege + __properties: ClassVar[List[str]] = ["type", "privilege"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CatalogGrant from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> 
Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CatalogGrant from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "privilege": obj.get("privilege") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/catalog_privilege.py b/regtests/client/python/polaris/management/models/catalog_privilege.py new file mode 100644 index 0000000000..03404a10bc --- /dev/null +++ b/regtests/client/python/polaris/management/models/catalog_privilege.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class CatalogPrivilege(str, Enum): + """ + CatalogPrivilege + """ + + """ + allowed enum values + """ + CATALOG_MANAGE_ACCESS = 'CATALOG_MANAGE_ACCESS' + CATALOG_MANAGE_CONTENT = 'CATALOG_MANAGE_CONTENT' + CATALOG_MANAGE_METADATA = 'CATALOG_MANAGE_METADATA' + CATALOG_READ_PROPERTIES = 'CATALOG_READ_PROPERTIES' + CATALOG_WRITE_PROPERTIES = 'CATALOG_WRITE_PROPERTIES' + NAMESPACE_CREATE = 'NAMESPACE_CREATE' + TABLE_CREATE = 'TABLE_CREATE' + VIEW_CREATE = 'VIEW_CREATE' + NAMESPACE_DROP = 'NAMESPACE_DROP' + TABLE_DROP = 'TABLE_DROP' + VIEW_DROP = 'VIEW_DROP' + NAMESPACE_LIST = 'NAMESPACE_LIST' + TABLE_LIST = 'TABLE_LIST' + VIEW_LIST = 'VIEW_LIST' + NAMESPACE_READ_PROPERTIES = 'NAMESPACE_READ_PROPERTIES' + TABLE_READ_PROPERTIES = 'TABLE_READ_PROPERTIES' + VIEW_READ_PROPERTIES = 'VIEW_READ_PROPERTIES' + NAMESPACE_WRITE_PROPERTIES = 'NAMESPACE_WRITE_PROPERTIES' + TABLE_WRITE_PROPERTIES = 'TABLE_WRITE_PROPERTIES' + VIEW_WRITE_PROPERTIES = 'VIEW_WRITE_PROPERTIES' + TABLE_READ_DATA = 'TABLE_READ_DATA' + TABLE_WRITE_DATA = 'TABLE_WRITE_DATA' + NAMESPACE_FULL_METADATA = 'NAMESPACE_FULL_METADATA' + TABLE_FULL_METADATA = 'TABLE_FULL_METADATA' + VIEW_FULL_METADATA = 'VIEW_FULL_METADATA' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of CatalogPrivilege from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/management/models/catalog_properties.py b/regtests/client/python/polaris/management/models/catalog_properties.py new file mode 100644 index 
0000000000..8669c57743 --- /dev/null +++ b/regtests/client/python/polaris/management/models/catalog_properties.py @@ -0,0 +1,115 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class CatalogProperties(BaseModel): + """ + CatalogProperties + """ # noqa: E501 + default_base_location: StrictStr = Field(alias="default-base-location") + additional_properties: Dict[str, Any] = {} + __properties: ClassVar[List[str]] = ["default-base-location"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CatalogProperties from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + * Fields in `self.additional_properties` are added to the output dict. 
+ """ + excluded_fields: Set[str] = set([ + "additional_properties", + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # puts key-value pairs in additional_properties in the top level + if self.additional_properties is not None: + for _key, _value in self.additional_properties.items(): + _dict[_key] = _value + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CatalogProperties from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "default-base-location": obj.get("default-base-location") + }) + # store additional fields in additional_properties + for _key in obj.keys(): + if _key not in cls.__properties: + _obj.additional_properties[_key] = obj.get(_key) + + return _obj + + diff --git a/regtests/client/python/polaris/management/models/catalog_role.py b/regtests/client/python/polaris/management/models/catalog_role.py new file mode 100644 index 0000000000..1823dfffa6 --- /dev/null +++ b/regtests/client/python/polaris/management/models/catalog_role.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class CatalogRole(BaseModel): + """ + CatalogRole + """ # noqa: E501 + name: StrictStr = Field(description="The name of the role") + properties: Optional[Dict[str, StrictStr]] = None + create_timestamp: Optional[StrictInt] = Field(default=None, alias="createTimestamp") + last_update_timestamp: Optional[StrictInt] = Field(default=None, alias="lastUpdateTimestamp") + entity_version: Optional[StrictInt] = Field(default=None, description="The version of the catalog role object used to determine if the catalog role metadata has changed", alias="entityVersion") + __properties: ClassVar[List[str]] = ["name", "properties", "createTimestamp", "lastUpdateTimestamp", "entityVersion"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CatalogRole from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> 
Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CatalogRole from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "properties": obj.get("properties"), + "createTimestamp": obj.get("createTimestamp"), + "lastUpdateTimestamp": obj.get("lastUpdateTimestamp"), + "entityVersion": obj.get("entityVersion") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/catalog_roles.py b/regtests/client/python/polaris/management/models/catalog_roles.py new file mode 100644 index 0000000000..5072c191a3 --- /dev/null +++ b/regtests/client/python/polaris/management/models/catalog_roles.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List +from polaris.management.models.catalog_role import CatalogRole +from typing import Optional, Set +from typing_extensions import Self + +class CatalogRoles(BaseModel): + """ + CatalogRoles + """ # noqa: E501 + roles: List[CatalogRole] = Field(description="The list of catalog roles") + __properties: ClassVar[List[str]] = ["roles"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CatalogRoles from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in roles (list) + _items = [] + if self.roles: + for _item in self.roles: + if _item: + _items.append(_item.to_dict()) + _dict['roles'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CatalogRoles from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "roles": [CatalogRole.from_dict(_item) for _item in obj["roles"]] if obj.get("roles") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/catalogs.py b/regtests/client/python/polaris/management/models/catalogs.py new file mode 100644 index 0000000000..66698aec16 --- /dev/null +++ b/regtests/client/python/polaris/management/models/catalogs.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.catalog import Catalog +from typing import Optional, Set +from typing_extensions import Self + +class Catalogs(BaseModel): + """ + A list of Catalog objects + """ # noqa: E501 + catalogs: List[Catalog] + __properties: ClassVar[List[str]] = ["catalogs"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of Catalogs from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in catalogs (list) + _items = [] + if self.catalogs: + for _item in self.catalogs: + if _item: + _items.append(_item.to_dict()) + _dict['catalogs'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of Catalogs from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "catalogs": [Catalog.from_dict(_item) for _item in obj["catalogs"]] if obj.get("catalogs") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/create_catalog_request.py b/regtests/client/python/polaris/management/models/create_catalog_request.py new file mode 100644 index 0000000000..4136ac7522 --- /dev/null +++ b/regtests/client/python/polaris/management/models/create_catalog_request.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.catalog import Catalog +from typing import Optional, Set +from typing_extensions import Self + +class CreateCatalogRequest(BaseModel): + """ + Request to create a new catalog + """ # noqa: E501 + catalog: Catalog + __properties: ClassVar[List[str]] = ["catalog"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreateCatalogRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of catalog + if self.catalog: + _dict['catalog'] = self.catalog.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreateCatalogRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "catalog": Catalog.from_dict(obj["catalog"]) if obj.get("catalog") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/create_catalog_role_request.py b/regtests/client/python/polaris/management/models/create_catalog_role_request.py new file mode 100644 index 0000000000..189cef6e93 --- /dev/null +++ b/regtests/client/python/polaris/management/models/create_catalog_role_request.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.catalog_role import CatalogRole +from typing import Optional, Set +from typing_extensions import Self + +class CreateCatalogRoleRequest(BaseModel): + """ + CreateCatalogRoleRequest + """ # noqa: E501 + catalog_role: Optional[CatalogRole] = Field(default=None, alias="catalogRole") + __properties: ClassVar[List[str]] = ["catalogRole"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreateCatalogRoleRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of catalog_role + if self.catalog_role: + _dict['catalogRole'] = self.catalog_role.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreateCatalogRoleRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "catalogRole": CatalogRole.from_dict(obj["catalogRole"]) if obj.get("catalogRole") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/create_principal_request.py b/regtests/client/python/polaris/management/models/create_principal_request.py new file mode 100644 index 0000000000..89031d58e5 --- /dev/null +++ b/regtests/client/python/polaris/management/models/create_principal_request.py @@ -0,0 +1,108 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictBool +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.principal import Principal +from typing import Optional, Set +from typing_extensions import Self + +class CreatePrincipalRequest(BaseModel): + """ + CreatePrincipalRequest + """ # noqa: E501 + principal: Optional[Principal] = None + credential_rotation_required: Optional[StrictBool] = Field(default=None, description="If true, the initial credentials can only be used to call rotateCredentials", alias="credentialRotationRequired") + __properties: ClassVar[List[str]] = ["principal", "credentialRotationRequired"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreatePrincipalRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of principal + if self.principal: + _dict['principal'] = self.principal.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreatePrincipalRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "principal": Principal.from_dict(obj["principal"]) if obj.get("principal") is not None else None, + "credentialRotationRequired": obj.get("credentialRotationRequired") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/create_principal_role_request.py b/regtests/client/python/polaris/management/models/create_principal_role_request.py new file mode 100644 index 0000000000..4678dfe04f --- /dev/null +++ b/regtests/client/python/polaris/management/models/create_principal_role_request.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.principal_role import PrincipalRole +from typing import Optional, Set +from typing_extensions import Self + +class CreatePrincipalRoleRequest(BaseModel): + """ + CreatePrincipalRoleRequest + """ # noqa: E501 + principal_role: Optional[PrincipalRole] = Field(default=None, alias="principalRole") + __properties: ClassVar[List[str]] = ["principalRole"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of CreatePrincipalRoleRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of principal_role + if self.principal_role: + _dict['principalRole'] = self.principal_role.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of CreatePrincipalRoleRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "principalRole": PrincipalRole.from_dict(obj["principalRole"]) if obj.get("principalRole") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/external_catalog.py b/regtests/client/python/polaris/management/models/external_catalog.py new file mode 100644 index 0000000000..bbcfa05478 --- /dev/null +++ b/regtests/client/python/polaris/management/models/external_catalog.py @@ -0,0 +1,118 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.catalog import Catalog +from polaris.management.models.catalog_properties import CatalogProperties +from polaris.management.models.storage_config_info import StorageConfigInfo +from typing import Optional, Set +from typing_extensions import Self + +class ExternalCatalog(Catalog): + """ + An externally managed catalog + """ # noqa: E501 + remote_url: Optional[StrictStr] = Field(default=None, description="URL to the remote catalog API", alias="remoteUrl") + __properties: ClassVar[List[str]] = ["type", "name", "properties", "createTimestamp", "lastUpdateTimestamp", "entityVersion", "storageConfigInfo", "remoteUrl"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ExternalCatalog from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of properties + if self.properties: + _dict['properties'] = self.properties.to_dict() + # override the default output from pydantic by calling `to_dict()` of storage_config_info + if self.storage_config_info: + _dict['storageConfigInfo'] = self.storage_config_info.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ExternalCatalog from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") if obj.get("type") is not None else 'INTERNAL', + "name": obj.get("name"), + "properties": CatalogProperties.from_dict(obj["properties"]) if obj.get("properties") is not None else None, + "createTimestamp": obj.get("createTimestamp"), + "lastUpdateTimestamp": obj.get("lastUpdateTimestamp"), + "entityVersion": obj.get("entityVersion"), + "storageConfigInfo": StorageConfigInfo.from_dict(obj["storageConfigInfo"]) if obj.get("storageConfigInfo") is not None else None, + "remoteUrl": obj.get("remoteUrl") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/file_storage_config_info.py b/regtests/client/python/polaris/management/models/file_storage_config_info.py new file mode 100644 index 0000000000..7179236bc7 --- /dev/null +++ b/regtests/client/python/polaris/management/models/file_storage_config_info.py @@ -0,0 +1,103 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# coding: utf-8
+
+"""
+    Polaris Management Service
+
+    Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re # noqa: F401
+import json
+
+from pydantic import ConfigDict
+from typing import Any, ClassVar, Dict, List
+from polaris.management.models.storage_config_info import StorageConfigInfo
+from typing import Optional, Set
+from typing_extensions import Self
+
+class FileStorageConfigInfo(StorageConfigInfo):
+    """
+    file storage configuration info
+    """ # noqa: E501
+    __properties: ClassVar[List[str]] = ["storageType", "allowedLocations"]
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of FileStorageConfigInfo from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary
representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of FileStorageConfigInfo from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "storageType": obj.get("storageType"), + "allowedLocations": obj.get("allowedLocations") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/gcp_storage_config_info.py b/regtests/client/python/polaris/management/models/gcp_storage_config_info.py new file mode 100644 index 0000000000..7b9e06ec82 --- /dev/null +++ b/regtests/client/python/polaris/management/models/gcp_storage_config_info.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+# coding: utf-8
+
+"""
+    Polaris Management Service
+
+    Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re # noqa: F401
+import json
+
+from pydantic import ConfigDict, Field, StrictStr
+from typing import Any, ClassVar, Dict, List, Optional
+from polaris.management.models.storage_config_info import StorageConfigInfo
+from typing import Optional, Set
+from typing_extensions import Self
+
+class GcpStorageConfigInfo(StorageConfigInfo):
+    """
+    gcp storage configuration info
+    """ # noqa: E501
+    gcs_service_account: Optional[StrictStr] = Field(default=None, description="a Google cloud storage service account", alias="gcsServiceAccount")
+    __properties: ClassVar[List[str]] = ["storageType", "allowedLocations", "gcsServiceAccount"]
+
+    model_config = ConfigDict(
+        populate_by_name=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
+        return json.dumps(self.to_dict())
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of GcpStorageConfigInfo from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of GcpStorageConfigInfo from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        _obj = cls.model_validate({
+            "storageType": obj.get("storageType"),
+            "allowedLocations": obj.get("allowedLocations"),
+            "gcsServiceAccount": obj.get("gcsServiceAccount")
+        })
+        return _obj
+
+
diff --git a/regtests/client/python/polaris/management/models/grant_catalog_role_request.py b/regtests/client/python/polaris/management/models/grant_catalog_role_request.py
new file mode 100644
index 0000000000..acef99731e
--- /dev/null
+++ b/regtests/client/python/polaris/management/models/grant_catalog_role_request.py
@@ -0,0 +1,106 @@
+#
+# Copyright (c) 2024 Snowflake Computing Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# coding: utf-8
+
+"""
+    Polaris Management Service
+
+    Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals
+
+    The version of the OpenAPI document: 0.0.1
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.catalog_role import CatalogRole +from typing import Optional, Set +from typing_extensions import Self + +class GrantCatalogRoleRequest(BaseModel): + """ + GrantCatalogRoleRequest + """ # noqa: E501 + catalog_role: Optional[CatalogRole] = Field(default=None, alias="catalogRole") + __properties: ClassVar[List[str]] = ["catalogRole"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of GrantCatalogRoleRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of catalog_role + if self.catalog_role: + _dict['catalogRole'] = self.catalog_role.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of GrantCatalogRoleRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "catalogRole": CatalogRole.from_dict(obj["catalogRole"]) if obj.get("catalogRole") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/grant_principal_role_request.py b/regtests/client/python/polaris/management/models/grant_principal_role_request.py new file mode 100644 index 0000000000..7033179ef0 --- /dev/null +++ b/regtests/client/python/polaris/management/models/grant_principal_role_request.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.principal_role import PrincipalRole +from typing import Optional, Set +from typing_extensions import Self + +class GrantPrincipalRoleRequest(BaseModel): + """ + GrantPrincipalRoleRequest + """ # noqa: E501 + principal_role: Optional[PrincipalRole] = Field(default=None, alias="principalRole") + __properties: ClassVar[List[str]] = ["principalRole"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of GrantPrincipalRoleRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of principal_role + if self.principal_role: + _dict['principalRole'] = self.principal_role.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of GrantPrincipalRoleRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "principalRole": PrincipalRole.from_dict(obj["principalRole"]) if obj.get("principalRole") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/grant_resource.py b/regtests/client/python/polaris/management/models/grant_resource.py new file mode 100644 index 0000000000..666fffd06f --- /dev/null +++ b/regtests/client/python/polaris/management/models/grant_resource.py @@ -0,0 +1,138 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from importlib import import_module +from pydantic import BaseModel, ConfigDict, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Union +from typing import Optional, Set +from typing_extensions import Self + +from typing import TYPE_CHECKING +if TYPE_CHECKING: + from polaris.management.models.catalog_grant import CatalogGrant + from polaris.management.models.namespace_grant import NamespaceGrant + from polaris.management.models.table_grant import TableGrant + from polaris.management.models.view_grant import ViewGrant + +class GrantResource(BaseModel): + """ + GrantResource + """ # noqa: E501 + type: StrictStr + __properties: ClassVar[List[str]] = ["type"] + + @field_validator('type') + def type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['catalog', 'namespace', 'table', 'view']): + raise ValueError("must be one of enum values ('catalog', 'namespace', 'table', 'view')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'type' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'catalog': 'CatalogGrant','namespace': 'NamespaceGrant','table': 'TableGrant','view': 'ViewGrant' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) 
-> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Union[CatalogGrant, NamespaceGrant, TableGrant, ViewGrant]]: + """Create an instance of GrantResource from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[CatalogGrant, NamespaceGrant, TableGrant, ViewGrant]]: + """Create an instance of GrantResource from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'CatalogGrant': + return import_module("polaris.management.models.catalog_grant").CatalogGrant.from_dict(obj) + if object_type == 'NamespaceGrant': + return import_module("polaris.management.models.namespace_grant").NamespaceGrant.from_dict(obj) + if object_type == 'TableGrant': + return import_module("polaris.management.models.table_grant").TableGrant.from_dict(obj) + if object_type == 'ViewGrant': + return import_module("polaris.management.models.view_grant").ViewGrant.from_dict(obj) + + raise ValueError("GrantResource failed to lookup discriminator value from " + + json.dumps(obj) + ". 
Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/management/models/grant_resources.py b/regtests/client/python/polaris/management/models/grant_resources.py new file mode 100644 index 0000000000..23d9a99442 --- /dev/null +++ b/regtests/client/python/polaris/management/models/grant_resources.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.grant_resource import GrantResource +from typing import Optional, Set +from typing_extensions import Self + +class GrantResources(BaseModel): + """ + GrantResources + """ # noqa: E501 + grants: List[GrantResource] + __properties: ClassVar[List[str]] = ["grants"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of GrantResources from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in grants (list) + _items = [] + if self.grants: + for _item in self.grants: + if _item: + _items.append(_item.to_dict()) + _dict['grants'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of GrantResources from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "grants": [GrantResource.from_dict(_item) for _item in obj["grants"]] if obj.get("grants") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/namespace_grant.py b/regtests/client/python/polaris/management/models/namespace_grant.py new file mode 100644 index 0000000000..44238a7822 --- /dev/null +++ b/regtests/client/python/polaris/management/models/namespace_grant.py @@ -0,0 +1,107 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.management.models.grant_resource import GrantResource +from polaris.management.models.namespace_privilege import NamespacePrivilege +from typing import Optional, Set +from typing_extensions import Self + +class NamespaceGrant(GrantResource): + """ + NamespaceGrant + """ # noqa: E501 + namespace: List[StrictStr] + privilege: NamespacePrivilege + __properties: ClassVar[List[str]] = ["type", "namespace", "privilege"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of NamespaceGrant from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of NamespaceGrant from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "namespace": obj.get("namespace"), + "privilege": obj.get("privilege") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/namespace_privilege.py b/regtests/client/python/polaris/management/models/namespace_privilege.py new file mode 100644 index 0000000000..79b785449a --- /dev/null +++ b/regtests/client/python/polaris/management/models/namespace_privilege.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class NamespacePrivilege(str, Enum): + """ + NamespacePrivilege + """ + + """ + allowed enum values + """ + CATALOG_MANAGE_ACCESS = 'CATALOG_MANAGE_ACCESS' + CATALOG_MANAGE_CONTENT = 'CATALOG_MANAGE_CONTENT' + CATALOG_MANAGE_METADATA = 'CATALOG_MANAGE_METADATA' + NAMESPACE_CREATE = 'NAMESPACE_CREATE' + TABLE_CREATE = 'TABLE_CREATE' + VIEW_CREATE = 'VIEW_CREATE' + NAMESPACE_DROP = 'NAMESPACE_DROP' + TABLE_DROP = 'TABLE_DROP' + VIEW_DROP = 'VIEW_DROP' + NAMESPACE_LIST = 'NAMESPACE_LIST' + TABLE_LIST = 'TABLE_LIST' + VIEW_LIST = 'VIEW_LIST' + NAMESPACE_READ_PROPERTIES = 'NAMESPACE_READ_PROPERTIES' + TABLE_READ_PROPERTIES = 'TABLE_READ_PROPERTIES' + VIEW_READ_PROPERTIES = 'VIEW_READ_PROPERTIES' + NAMESPACE_WRITE_PROPERTIES = 'NAMESPACE_WRITE_PROPERTIES' + TABLE_WRITE_PROPERTIES = 'TABLE_WRITE_PROPERTIES' + VIEW_WRITE_PROPERTIES = 'VIEW_WRITE_PROPERTIES' + TABLE_READ_DATA = 'TABLE_READ_DATA' + TABLE_WRITE_DATA = 'TABLE_WRITE_DATA' + NAMESPACE_FULL_METADATA = 'NAMESPACE_FULL_METADATA' + TABLE_FULL_METADATA = 'TABLE_FULL_METADATA' + VIEW_FULL_METADATA = 'VIEW_FULL_METADATA' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of NamespacePrivilege from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/management/models/polaris_catalog.py b/regtests/client/python/polaris/management/models/polaris_catalog.py new file mode 100644 index 0000000000..8e44d2a7da --- /dev/null +++ b/regtests/client/python/polaris/management/models/polaris_catalog.py @@ -0,0 +1,116 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.catalog import Catalog +from polaris.management.models.catalog_properties import CatalogProperties +from polaris.management.models.storage_config_info import StorageConfigInfo +from typing import Optional, Set +from typing_extensions import Self + +class PolarisCatalog(Catalog): + """ + The base catalog type - this contains all the fields necessary to construct an INTERNAL catalog + """ # noqa: E501 + __properties: ClassVar[List[str]] = ["type", "name", "properties", "createTimestamp", "lastUpdateTimestamp", "entityVersion", "storageConfigInfo"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + 
@classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PolarisCatalog from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of properties + if self.properties: + _dict['properties'] = self.properties.to_dict() + # override the default output from pydantic by calling `to_dict()` of storage_config_info + if self.storage_config_info: + _dict['storageConfigInfo'] = self.storage_config_info.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PolarisCatalog from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type") if obj.get("type") is not None else 'INTERNAL', + "name": obj.get("name"), + "properties": CatalogProperties.from_dict(obj["properties"]) if obj.get("properties") is not None else None, + "createTimestamp": obj.get("createTimestamp"), + "lastUpdateTimestamp": obj.get("lastUpdateTimestamp"), + "entityVersion": obj.get("entityVersion"), + "storageConfigInfo": StorageConfigInfo.from_dict(obj["storageConfigInfo"]) if obj.get("storageConfigInfo") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/principal.py b/regtests/client/python/polaris/management/models/principal.py new file mode 100644 
index 0000000000..e7af4e7605 --- /dev/null +++ b/regtests/client/python/polaris/management/models/principal.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class Principal(BaseModel): + """ + A Polaris principal. 
+ """ # noqa: E501 + name: StrictStr + client_id: Optional[StrictStr] = Field(default=None, description="The output-only OAuth clientId associated with this principal if applicable", alias="clientId") + properties: Optional[Dict[str, StrictStr]] = None + create_timestamp: Optional[StrictInt] = Field(default=None, alias="createTimestamp") + last_update_timestamp: Optional[StrictInt] = Field(default=None, alias="lastUpdateTimestamp") + entity_version: Optional[StrictInt] = Field(default=None, description="The version of the principal object used to determine if the principal metadata has changed", alias="entityVersion") + __properties: ClassVar[List[str]] = ["name", "clientId", "properties", "createTimestamp", "lastUpdateTimestamp", "entityVersion"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of Principal from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of Principal from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "clientId": obj.get("clientId"), + "properties": obj.get("properties"), + "createTimestamp": obj.get("createTimestamp"), + "lastUpdateTimestamp": obj.get("lastUpdateTimestamp"), + "entityVersion": obj.get("entityVersion") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/principal_role.py b/regtests/client/python/polaris/management/models/principal_role.py new file mode 100644 index 0000000000..4e23d86ecb --- /dev/null +++ b/regtests/client/python/polaris/management/models/principal_role.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class PrincipalRole(BaseModel): + """ + PrincipalRole + """ # noqa: E501 + name: StrictStr = Field(description="The name of the role") + properties: Optional[Dict[str, StrictStr]] = None + create_timestamp: Optional[StrictInt] = Field(default=None, alias="createTimestamp") + last_update_timestamp: Optional[StrictInt] = Field(default=None, alias="lastUpdateTimestamp") + entity_version: Optional[StrictInt] = Field(default=None, description="The version of the principal role object used to determine if the principal role metadata has changed", alias="entityVersion") + __properties: ClassVar[List[str]] = ["name", "properties", "createTimestamp", "lastUpdateTimestamp", "entityVersion"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PrincipalRole from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. 
Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PrincipalRole from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "properties": obj.get("properties"), + "createTimestamp": obj.get("createTimestamp"), + "lastUpdateTimestamp": obj.get("lastUpdateTimestamp"), + "entityVersion": obj.get("entityVersion") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/principal_roles.py b/regtests/client/python/polaris/management/models/principal_roles.py new file mode 100644 index 0000000000..92e52074a0 --- /dev/null +++ b/regtests/client/python/polaris/management/models/principal_roles.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.principal_role import PrincipalRole +from typing import Optional, Set +from typing_extensions import Self + +class PrincipalRoles(BaseModel): + """ + PrincipalRoles + """ # noqa: E501 + roles: List[PrincipalRole] + __properties: ClassVar[List[str]] = ["roles"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PrincipalRoles from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in roles (list) + _items = [] + if self.roles: + for _item in self.roles: + if _item: + _items.append(_item.to_dict()) + _dict['roles'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PrincipalRoles from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "roles": [PrincipalRole.from_dict(_item) for _item in obj["roles"]] if obj.get("roles") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/principal_with_credentials.py b/regtests/client/python/polaris/management/models/principal_with_credentials.py new file mode 100644 index 0000000000..69a2e2b4bf --- /dev/null +++ b/regtests/client/python/polaris/management/models/principal_with_credentials.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.principal import Principal +from polaris.management.models.principal_with_credentials_credentials import PrincipalWithCredentialsCredentials +from typing import Optional, Set +from typing_extensions import Self + +class PrincipalWithCredentials(BaseModel): + """ + A user with its client id and secret. This type is returned when a new principal is created or when its credentials are rotated + """ # noqa: E501 + principal: Principal + credentials: PrincipalWithCredentialsCredentials + __properties: ClassVar[List[str]] = ["principal", "credentials"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PrincipalWithCredentials from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of principal + if self.principal: + _dict['principal'] = self.principal.to_dict() + # override the default output from pydantic by calling `to_dict()` of credentials + if self.credentials: + _dict['credentials'] = self.credentials.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PrincipalWithCredentials from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "principal": Principal.from_dict(obj["principal"]) if obj.get("principal") is not None else None, + "credentials": PrincipalWithCredentialsCredentials.from_dict(obj["credentials"]) if obj.get("credentials") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/principal_with_credentials_credentials.py b/regtests/client/python/polaris/management/models/principal_with_credentials_credentials.py new file mode 100644 index 0000000000..be6d57d1b2 --- /dev/null +++ b/regtests/client/python/polaris/management/models/principal_with_credentials_credentials.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from typing import Optional, Set +from typing_extensions import Self + +class PrincipalWithCredentialsCredentials(BaseModel): + """ + PrincipalWithCredentialsCredentials + """ # noqa: E501 + client_id: Optional[StrictStr] = Field(default=None, alias="clientId") + client_secret: Optional[StrictStr] = Field(default=None, alias="clientSecret") + __properties: ClassVar[List[str]] = ["clientId", "clientSecret"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PrincipalWithCredentialsCredentials from a 
JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PrincipalWithCredentialsCredentials from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "clientId": obj.get("clientId"), + "clientSecret": obj.get("clientSecret") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/principals.py b/regtests/client/python/polaris/management/models/principals.py new file mode 100644 index 0000000000..746608eb85 --- /dev/null +++ b/regtests/client/python/polaris/management/models/principals.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List +from polaris.management.models.principal import Principal +from typing import Optional, Set +from typing_extensions import Self + +class Principals(BaseModel): + """ + A list of Principals + """ # noqa: E501 + principals: List[Principal] + __properties: ClassVar[List[str]] = ["principals"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of Principals from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each item in principals (list) + _items = [] + if self.principals: + for _item in self.principals: + if _item: + _items.append(_item.to_dict()) + _dict['principals'] = _items + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of Principals from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "principals": [Principal.from_dict(_item) for _item in obj["principals"]] if obj.get("principals") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/revoke_grant_request.py b/regtests/client/python/polaris/management/models/revoke_grant_request.py new file mode 100644 index 0000000000..681706990d --- /dev/null +++ b/regtests/client/python/polaris/management/models/revoke_grant_request.py @@ -0,0 +1,106 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.grant_resource import GrantResource +from typing import Optional, Set +from typing_extensions import Self + +class RevokeGrantRequest(BaseModel): + """ + RevokeGrantRequest + """ # noqa: E501 + grant: Optional[GrantResource] = None + __properties: ClassVar[List[str]] = ["grant"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of RevokeGrantRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of grant + if self.grant: + _dict['grant'] = self.grant.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of RevokeGrantRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "grant": GrantResource.from_dict(obj["grant"]) if obj.get("grant") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/storage_config_info.py b/regtests/client/python/polaris/management/models/storage_config_info.py new file mode 100644 index 0000000000..7b36a47e4b --- /dev/null +++ b/regtests/client/python/polaris/management/models/storage_config_info.py @@ -0,0 +1,139 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from importlib import import_module +from pydantic import BaseModel, ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List, Optional, Union +from typing import Optional, Set +from typing_extensions import Self + +from typing import TYPE_CHECKING +if TYPE_CHECKING: + from polaris.management.models.azure_storage_config_info import AzureStorageConfigInfo + from polaris.management.models.file_storage_config_info import FileStorageConfigInfo + from polaris.management.models.gcp_storage_config_info import GcpStorageConfigInfo + from polaris.management.models.aws_storage_config_info import AwsStorageConfigInfo + +class StorageConfigInfo(BaseModel): + """ + A storage configuration used by catalogs + """ # noqa: E501 + storage_type: StrictStr = Field(description="The cloud provider type this storage is built on. FILE is supported for testing purposes only", alias="storageType") + allowed_locations: Optional[List[StrictStr]] = Field(default=None, alias="allowedLocations") + __properties: ClassVar[List[str]] = ["storageType", "allowedLocations"] + + @field_validator('storage_type') + def storage_type_validate_enum(cls, value): + """Validates the enum""" + if value not in set(['S3', 'GCS', 'AZURE', 'FILE']): + raise ValueError("must be one of enum values ('S3', 'GCS', 'AZURE', 'FILE')") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + # JSON field name that stores the object type + __discriminator_property_name: ClassVar[str] = 'storageType' + + # discriminator mappings + __discriminator_value_class_map: ClassVar[Dict[str, str]] = { + 'AZURE': 'AzureStorageConfigInfo','FILE': 'FileStorageConfigInfo','GCS': 'GcpStorageConfigInfo','S3': 'AwsStorageConfigInfo' + } + + @classmethod + def get_discriminator_value(cls, obj: Dict[str, Any]) -> 
Optional[str]: + """Returns the discriminator value (object type) of the data""" + discriminator_value = obj[cls.__discriminator_property_name] + if discriminator_value: + return cls.__discriminator_value_class_map.get(discriminator_value) + else: + return None + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Union[AzureStorageConfigInfo, FileStorageConfigInfo, GcpStorageConfigInfo, AwsStorageConfigInfo]]: + """Create an instance of StorageConfigInfo from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Dict[str, Any]) -> Optional[Union[AzureStorageConfigInfo, FileStorageConfigInfo, GcpStorageConfigInfo, AwsStorageConfigInfo]]: + """Create an instance of StorageConfigInfo from a dict""" + # look up the object type based on discriminator mapping + object_type = cls.get_discriminator_value(obj) + if object_type == 'AzureStorageConfigInfo': + return import_module("polaris.management.models.azure_storage_config_info").AzureStorageConfigInfo.from_dict(obj) + if object_type == 'FileStorageConfigInfo': + return import_module("polaris.management.models.file_storage_config_info").FileStorageConfigInfo.from_dict(obj) + if object_type == 'GcpStorageConfigInfo': + return import_module("polaris.management.models.gcp_storage_config_info").GcpStorageConfigInfo.from_dict(obj) + if object_type == 'AwsStorageConfigInfo': + return import_module("polaris.management.models.aws_storage_config_info").AwsStorageConfigInfo.from_dict(obj) + + raise ValueError("StorageConfigInfo failed to lookup discriminator value from " + + json.dumps(obj) + ". Discriminator property name: " + cls.__discriminator_property_name + + ", mapping: " + json.dumps(cls.__discriminator_value_class_map)) + + diff --git a/regtests/client/python/polaris/management/models/table_grant.py b/regtests/client/python/polaris/management/models/table_grant.py new file mode 100644 index 0000000000..a2d9367d0a --- /dev/null +++ b/regtests/client/python/polaris/management/models/table_grant.py @@ -0,0 +1,109 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.management.models.grant_resource import GrantResource +from polaris.management.models.table_privilege import TablePrivilege +from typing import Optional, Set +from typing_extensions import Self + +class TableGrant(GrantResource): + """ + TableGrant + """ # noqa: E501 + namespace: List[StrictStr] + table_name: StrictStr = Field(alias="tableName") + privilege: TablePrivilege + __properties: ClassVar[List[str]] = ["type", "namespace", "tableName", "privilege"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance 
of TableGrant from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of TableGrant from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "namespace": obj.get("namespace"), + "tableName": obj.get("tableName"), + "privilege": obj.get("privilege") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/table_privilege.py b/regtests/client/python/polaris/management/models/table_privilege.py new file mode 100644 index 0000000000..4303296b6b --- /dev/null +++ b/regtests/client/python/polaris/management/models/table_privilege.py @@ -0,0 +1,59 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class TablePrivilege(str, Enum): + """ + TablePrivilege + """ + + """ + allowed enum values + """ + CATALOG_MANAGE_ACCESS = 'CATALOG_MANAGE_ACCESS' + TABLE_DROP = 'TABLE_DROP' + TABLE_LIST = 'TABLE_LIST' + TABLE_READ_PROPERTIES = 'TABLE_READ_PROPERTIES' + VIEW_READ_PROPERTIES = 'VIEW_READ_PROPERTIES' + TABLE_WRITE_PROPERTIES = 'TABLE_WRITE_PROPERTIES' + TABLE_READ_DATA = 'TABLE_READ_DATA' + TABLE_WRITE_DATA = 'TABLE_WRITE_DATA' + TABLE_FULL_METADATA = 'TABLE_FULL_METADATA' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of TablePrivilege from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/management/models/update_catalog_request.py b/regtests/client/python/polaris/management/models/update_catalog_request.py new file mode 100644 index 0000000000..d91fa23840 --- /dev/null +++ b/regtests/client/python/polaris/management/models/update_catalog_request.py @@ -0,0 +1,110 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from polaris.management.models.storage_config_info import StorageConfigInfo +from typing import Optional, Set +from typing_extensions import Self + +class UpdateCatalogRequest(BaseModel): + """ + Updates to apply to a Catalog + """ # noqa: E501 + current_entity_version: Optional[StrictInt] = Field(default=None, description="The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.", alias="currentEntityVersion") + properties: Optional[Dict[str, StrictStr]] = None + storage_config_info: Optional[StorageConfigInfo] = Field(default=None, alias="storageConfigInfo") + __properties: ClassVar[List[str]] = ["currentEntityVersion", "properties", "storageConfigInfo"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + 
"""Create an instance of UpdateCatalogRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of storage_config_info + if self.storage_config_info: + _dict['storageConfigInfo'] = self.storage_config_info.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpdateCatalogRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "currentEntityVersion": obj.get("currentEntityVersion"), + "properties": obj.get("properties"), + "storageConfigInfo": StorageConfigInfo.from_dict(obj["storageConfigInfo"]) if obj.get("storageConfigInfo") is not None else None + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/update_catalog_role_request.py b/regtests/client/python/polaris/management/models/update_catalog_role_request.py new file mode 100644 index 0000000000..d95e279383 --- /dev/null +++ b/regtests/client/python/polaris/management/models/update_catalog_role_request.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class UpdateCatalogRoleRequest(BaseModel): + """ + Updates to apply to a Catalog Role + """ # noqa: E501 + current_entity_version: StrictInt = Field(description="The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.", alias="currentEntityVersion") + properties: Dict[str, StrictStr] + __properties: ClassVar[List[str]] = ["currentEntityVersion", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + 
@classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UpdateCatalogRoleRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpdateCatalogRoleRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "currentEntityVersion": obj.get("currentEntityVersion"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/update_principal_request.py b/regtests/client/python/polaris/management/models/update_principal_request.py new file mode 100644 index 0000000000..60ac283d1e --- /dev/null +++ b/regtests/client/python/polaris/management/models/update_principal_request.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class UpdatePrincipalRequest(BaseModel): + """ + Updates to apply to a Principal + """ # noqa: E501 + current_entity_version: StrictInt = Field(description="The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.", alias="currentEntityVersion") + properties: Dict[str, StrictStr] + __properties: ClassVar[List[str]] = ["currentEntityVersion", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UpdatePrincipalRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpdatePrincipalRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "currentEntityVersion": obj.get("currentEntityVersion"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/update_principal_role_request.py b/regtests/client/python/polaris/management/models/update_principal_role_request.py new file mode 100644 index 0000000000..1d0abd887c --- /dev/null +++ b/regtests/client/python/polaris/management/models/update_principal_role_request.py @@ -0,0 +1,104 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictInt, StrictStr +from typing import Any, ClassVar, Dict, List +from typing import Optional, Set +from typing_extensions import Self + +class UpdatePrincipalRoleRequest(BaseModel): + """ + Updates to apply to a Principal Role + """ # noqa: E501 + current_entity_version: StrictInt = Field(description="The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version.", alias="currentEntityVersion") + properties: Dict[str, StrictStr] + __properties: ClassVar[List[str]] = ["currentEntityVersion", "properties"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UpdatePrincipalRoleRequest from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. 
+ + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UpdatePrincipalRoleRequest from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "currentEntityVersion": obj.get("currentEntityVersion"), + "properties": obj.get("properties") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/view_grant.py b/regtests/client/python/polaris/management/models/view_grant.py new file mode 100644 index 0000000000..3089ece07e --- /dev/null +++ b/regtests/client/python/polaris/management/models/view_grant.py @@ -0,0 +1,109 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import ConfigDict, Field, StrictStr +from typing import Any, ClassVar, Dict, List +from polaris.management.models.grant_resource import GrantResource +from polaris.management.models.view_privilege import ViewPrivilege +from typing import Optional, Set +from typing_extensions import Self + +class ViewGrant(GrantResource): + """ + ViewGrant + """ # noqa: E501 + namespace: List[StrictStr] + view_name: StrictStr = Field(alias="viewName") + privilege: ViewPrivilege + __properties: ClassVar[List[str]] = ["type", "namespace", "viewName", "privilege"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of ViewGrant from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. 
Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of ViewGrant from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "type": obj.get("type"), + "namespace": obj.get("namespace"), + "viewName": obj.get("viewName"), + "privilege": obj.get("privilege") + }) + return _obj + + diff --git a/regtests/client/python/polaris/management/models/view_privilege.py b/regtests/client/python/polaris/management/models/view_privilege.py new file mode 100644 index 0000000000..e2f268b204 --- /dev/null +++ b/regtests/client/python/polaris/management/models/view_privilege.py @@ -0,0 +1,57 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import json +from enum import Enum +from typing_extensions import Self + + +class ViewPrivilege(str, Enum): + """ + ViewPrivilege + """ + + """ + allowed enum values + """ + CATALOG_MANAGE_ACCESS = 'CATALOG_MANAGE_ACCESS' + VIEW_CREATE = 'VIEW_CREATE' + VIEW_DROP = 'VIEW_DROP' + VIEW_LIST = 'VIEW_LIST' + VIEW_READ_PROPERTIES = 'VIEW_READ_PROPERTIES' + VIEW_WRITE_PROPERTIES = 'VIEW_WRITE_PROPERTIES' + VIEW_FULL_METADATA = 'VIEW_FULL_METADATA' + + @classmethod + def from_json(cls, json_str: str) -> Self: + """Create an instance of ViewPrivilege from a JSON string""" + return cls(json.loads(json_str)) + + diff --git a/regtests/client/python/polaris/management/py.typed b/regtests/client/python/polaris/management/py.typed new file mode 100644 index 0000000000..e69de29bb2 diff --git a/regtests/client/python/polaris/management/rest.py b/regtests/client/python/polaris/management/rest.py new file mode 100644 index 0000000000..516566cd5e --- /dev/null +++ b/regtests/client/python/polaris/management/rest.py @@ -0,0 +1,272 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import io +import json +import re +import ssl + +import urllib3 + +from polaris.management.exceptions import ApiException, ApiValueError + +SUPPORTED_SOCKS_PROXIES = {"socks5", "socks5h", "socks4", "socks4a"} +RESTResponseType = urllib3.HTTPResponse + + +def is_socks_proxy_url(url): + if url is None: + return False + split_section = url.split("://") + if len(split_section) < 2: + return False + else: + return split_section[0].lower() in SUPPORTED_SOCKS_PROXIES + + +class RESTResponse(io.IOBase): + + def __init__(self, resp) -> None: + self.response = resp + self.status = resp.status + self.reason = resp.reason + self.data = None + + def read(self): + if self.data is None: + self.data = self.response.data + return self.data + + def getheaders(self): + """Returns a dictionary of the response headers.""" + return self.response.headers + + def getheader(self, name, default=None): + """Returns a given response header.""" + return self.response.headers.get(name, default) + + +class RESTClientObject: + + def __init__(self, configuration) -> None: + # urllib3.PoolManager will pass all kw parameters to connectionpool + # https://github.com/shazow/urllib3/blob/f9409436f83aeb79fbaf090181cd81b784f1b8ce/urllib3/poolmanager.py#L75 # noqa: E501 + # https://github.com/shazow/urllib3/blob/f9409436f83aeb79fbaf090181cd81b784f1b8ce/urllib3/connectionpool.py#L680 # noqa: E501 + # Custom SSL certificates and client certificates: http://urllib3.readthedocs.io/en/latest/advanced-usage.html # noqa: E501 + + # cert_reqs + if configuration.verify_ssl: + cert_reqs = ssl.CERT_REQUIRED + else: + cert_reqs = ssl.CERT_NONE + + pool_args = { + "cert_reqs": cert_reqs, 
+ "ca_certs": configuration.ssl_ca_cert, + "cert_file": configuration.cert_file, + "key_file": configuration.key_file, + } + if configuration.assert_hostname is not None: + pool_args['assert_hostname'] = ( + configuration.assert_hostname + ) + + if configuration.retries is not None: + pool_args['retries'] = configuration.retries + + if configuration.tls_server_name: + pool_args['server_hostname'] = configuration.tls_server_name + + + if configuration.socket_options is not None: + pool_args['socket_options'] = configuration.socket_options + + if configuration.connection_pool_maxsize is not None: + pool_args['maxsize'] = configuration.connection_pool_maxsize + + # https pool manager + self.pool_manager: urllib3.PoolManager + + if configuration.proxy: + if is_socks_proxy_url(configuration.proxy): + from urllib3.contrib.socks import SOCKSProxyManager + pool_args["proxy_url"] = configuration.proxy + pool_args["headers"] = configuration.proxy_headers + self.pool_manager = SOCKSProxyManager(**pool_args) + else: + pool_args["proxy_url"] = configuration.proxy + pool_args["proxy_headers"] = configuration.proxy_headers + self.pool_manager = urllib3.ProxyManager(**pool_args) + else: + self.pool_manager = urllib3.PoolManager(**pool_args) + + def request( + self, + method, + url, + headers=None, + body=None, + post_params=None, + _request_timeout=None + ): + """Perform requests. + + :param method: http request method + :param url: http request url + :param headers: http request headers + :param body: request json body, for `application/json` + :param post_params: request post parameters, + `application/x-www-form-urlencoded` + and `multipart/form-data` + :param _request_timeout: timeout setting for this request. If one + number provided, it will be total request + timeout. It can also be a pair (tuple) of + (connection, read) timeouts. 
+ """ + method = method.upper() + assert method in [ + 'GET', + 'HEAD', + 'DELETE', + 'POST', + 'PUT', + 'PATCH', + 'OPTIONS' + ] + + if post_params and body: + raise ApiValueError( + "body parameter cannot be used with post_params parameter." + ) + + post_params = post_params or {} + headers = headers or {} + + timeout = None + if _request_timeout: + if isinstance(_request_timeout, (int, float)): + timeout = urllib3.Timeout(total=_request_timeout) + elif ( + isinstance(_request_timeout, tuple) + and len(_request_timeout) == 2 + ): + timeout = urllib3.Timeout( + connect=_request_timeout[0], + read=_request_timeout[1] + ) + + try: + # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` + if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: + + # no content type provided or payload is json + content_type = headers.get('Content-Type') + if ( + not content_type + or re.search('json', content_type, re.IGNORECASE) + ): + request_body = None + if body is not None: + request_body = json.dumps(body) + r = self.pool_manager.request( + method, + url, + body=request_body, + timeout=timeout, + headers=headers, + preload_content=False + ) + elif content_type == 'application/x-www-form-urlencoded': + r = self.pool_manager.request( + method, + url, + fields=post_params, + encode_multipart=False, + timeout=timeout, + headers=headers, + preload_content=False + ) + elif content_type == 'multipart/form-data': + # must del headers['Content-Type'], or the correct + # Content-Type which generated by urllib3 will be + # overwritten. 
+ del headers['Content-Type'] + # Ensures that dict objects are serialized + post_params = [(a, json.dumps(b)) if isinstance(b, dict) else (a,b) for a, b in post_params] + r = self.pool_manager.request( + method, + url, + fields=post_params, + encode_multipart=True, + timeout=timeout, + headers=headers, + preload_content=False + ) + # Pass a `string` parameter directly in the body to support + # other content types than JSON when `body` argument is + # provided in serialized form. + elif isinstance(body, str) or isinstance(body, bytes): + r = self.pool_manager.request( + method, + url, + body=body, + timeout=timeout, + headers=headers, + preload_content=False + ) + elif headers['Content-Type'] == 'text/plain' and isinstance(body, bool): + request_body = "true" if body else "false" + r = self.pool_manager.request( + method, + url, + body=request_body, + preload_content=False, + timeout=timeout, + headers=headers) + else: + # Cannot generate the request from given parameters + msg = """Cannot prepare a request message for provided + arguments. Please check that your arguments match + declared content type.""" + raise ApiException(status=0, reason=msg) + # For `GET`, `HEAD` + else: + r = self.pool_manager.request( + method, + url, + fields={}, + timeout=timeout, + headers=headers, + preload_content=False + ) + except urllib3.exceptions.SSLError as e: + msg = "\n".join([type(e).__name__, str(e)]) + raise ApiException(status=0, reason=msg) + + return RESTResponse(r) diff --git a/regtests/client/python/pyproject.toml b/regtests/client/python/pyproject.toml new file mode 100644 index 0000000000..5263e89871 --- /dev/null +++ b/regtests/client/python/pyproject.toml @@ -0,0 +1,88 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +[tool.poetry] +name = "polaris" +version = "1.0.0" +description = "Polaris Management Service" +authors = ["OpenAPI Generator Community <team@openapitools.org>"] +license = "NoLicense" +readme = "README.md" +repository = "https://github.com/GIT_USER_ID/GIT_REPO_ID" +keywords = ["OpenAPI", "OpenAPI-Generator", "Polaris Management Service"] +include = ["polaris.management/py.typed"] + +[tool.poetry.dependencies] +python = "^3.8" + +urllib3 = "^1.25.3" +python-dateutil = ">=2.8.2" +pydantic = ">=2" +typing-extensions = ">=4.7.1" +boto3 = "==1.34.120" + +[tool.poetry.dev-dependencies] +pytest = ">=7.2.1" +tox = ">=3.9.0" +flake8 = ">=4.0.0" +types-python-dateutil = ">=2.8.19.14" +mypy = "1.4.1" + + +[build-system] +requires = ["setuptools"] +build-backend = "setuptools.build_meta" + +[tool.pylint.'MESSAGES CONTROL'] +extension-pkg-whitelist = "pydantic" + +[tool.mypy] +files = [ + "polaris", + #"test", # auto-generated tests + "tests", # hand-written tests +] +# TODO: enable "strict" once all these individual checks are passing +# strict = true + +# List from: https://mypy.readthedocs.io/en/stable/existing_code.html#introduce-stricter-options +warn_unused_configs = true +warn_redundant_casts = true +warn_unused_ignores = true + +## Getting these passing should be easy +strict_equality = true +strict_concatenate = true + +## Strongly recommend enabling this one as soon as you can +check_untyped_defs = true + +## These shouldn't be too much additional work, but may be tricky to +## get passing if you use a lot of untyped libraries +disallow_subclassing_any = true
+disallow_untyped_decorators = true +disallow_any_generics = true + +### These next few are various gradations of forcing use of type annotations +#disallow_untyped_calls = true +#disallow_incomplete_defs = true +#disallow_untyped_defs = true +# +### This one isn't too hard to get passing, but return on investment is lower +#no_implicit_reexport = true +# +### This one can be tricky to get passing if you use a lot of untyped libraries +#warn_return_any = true diff --git a/regtests/client/python/requirements.txt b/regtests/client/python/requirements.txt new file mode 100644 index 0000000000..cc85509ec5 --- /dev/null +++ b/regtests/client/python/requirements.txt @@ -0,0 +1,5 @@ +python_dateutil >= 2.5.3 +setuptools >= 21.0.0 +urllib3 >= 1.25.3, < 2.1.0 +pydantic >= 2 +typing-extensions >= 4.7.1 diff --git a/regtests/client/python/setup.cfg b/regtests/client/python/setup.cfg new file mode 100644 index 0000000000..11433ee875 --- /dev/null +++ b/regtests/client/python/setup.cfg @@ -0,0 +1,2 @@ +[flake8] +max-line-length=99 diff --git a/regtests/client/python/setup.py b/regtests/client/python/setup.py new file mode 100644 index 0000000000..48eab69197 --- /dev/null +++ b/regtests/client/python/setup.py @@ -0,0 +1,64 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from setuptools import setup, find_packages # noqa: H301 + +# To install the library, run the following +# +# python setup.py install +# +# prerequisite: setuptools +# http://pypi.python.org/pypi/setuptools +NAME = "polaris.management" +VERSION = "1.0.0" +PYTHON_REQUIRES = ">=3.7" +REQUIRES = [ + "urllib3 >= 1.25.3, < 2.1.0", + "python-dateutil", + "pydantic >= 2", + "typing-extensions >= 4.7.1", +] + +setup( + name=NAME, + version=VERSION, + description="Polaris Management Service", + author="OpenAPI Generator community", + author_email="team@openapitools.org", + url="", + keywords=["OpenAPI", "OpenAPI-Generator", "Polaris Management Service"], + install_requires=REQUIRES, + packages=find_packages(exclude=["test", "tests"]), + include_package_data=True, + long_description_content_type='text/markdown', + long_description="""\ + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + """, # noqa: E501 + package_data={"polaris.management": ["py.typed"]}, +) diff --git a/regtests/client/python/test-requirements.txt b/regtests/client/python/test-requirements.txt new file mode 100644 index 0000000000..8e6d8cb137 --- /dev/null +++ b/regtests/client/python/test-requirements.txt @@ -0,0 +1,5 @@ +pytest~=7.1.3 +pytest-cov>=2.8.1 +pytest-randomly>=3.12.0 +mypy>=1.4.1 +types-python-dateutil>=2.8.19 diff --git a/regtests/client/python/test/__init__.py b/regtests/client/python/test/__init__.py new file mode 100644 index 0000000000..8d220260f1 --- /dev/null +++ b/regtests/client/python/test/__init__.py @@ -0,0 +1,15 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# \ No newline at end of file diff --git a/regtests/client/python/test/test_add_grant_request.py b/regtests/client/python/test/test_add_grant_request.py new file mode 100644 index 0000000000..765fd48a39 --- /dev/null +++ b/regtests/client/python/test/test_add_grant_request.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.add_grant_request import AddGrantRequest + +class TestAddGrantRequest(unittest.TestCase): + """AddGrantRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AddGrantRequest: + """Test AddGrantRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AddGrantRequest` + """ + model = AddGrantRequest() + if include_optional: + return AddGrantRequest( + grant = polaris.management.models.grant_resource.GrantResource( + type = 'catalog', ) + ) + else: + return AddGrantRequest( + ) + """ + + def testAddGrantRequest(self): + """Test AddGrantRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_add_partition_spec_update.py b/regtests/client/python/test/test_add_partition_spec_update.py new file mode 100644 index 0000000000..bd6ac6e9e1 --- /dev/null +++ b/regtests/client/python/test/test_add_partition_spec_update.py @@ -0,0 +1,85 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.add_partition_spec_update import AddPartitionSpecUpdate + +class TestAddPartitionSpecUpdate(unittest.TestCase): + """AddPartitionSpecUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AddPartitionSpecUpdate: + """Test AddPartitionSpecUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AddPartitionSpecUpdate` + """ + model = AddPartitionSpecUpdate() + if include_optional: + return AddPartitionSpecUpdate( + action = 'add-spec', + spec = polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ) + else: + return AddPartitionSpecUpdate( + action = 'add-spec', + spec = polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ), + ) + """ + + def testAddPartitionSpecUpdate(self): + """Test AddPartitionSpecUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if 
__name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_add_schema_update.py b/regtests/client/python/test/test_add_schema_update.py new file mode 100644 index 0000000000..9799c78d65 --- /dev/null +++ b/regtests/client/python/test/test_add_schema_update.py @@ -0,0 +1,70 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.add_schema_update import AddSchemaUpdate + +class TestAddSchemaUpdate(unittest.TestCase): + """AddSchemaUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AddSchemaUpdate: + """Test AddSchemaUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AddSchemaUpdate` + """ + model = AddSchemaUpdate() + if include_optional: + return AddSchemaUpdate( + action = 'add-schema', + var_schema = None, + last_column_id = 56 + ) + else: + return AddSchemaUpdate( + action = 'add-schema', + var_schema = None, + ) + """ + + def testAddSchemaUpdate(self): + """Test AddSchemaUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_add_snapshot_update.py b/regtests/client/python/test/test_add_snapshot_update.py new file mode 100644 index 0000000000..4640737823 --- /dev/null +++ b/regtests/client/python/test/test_add_snapshot_update.py @@ -0,0 +1,87 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.add_snapshot_update import AddSnapshotUpdate + +class TestAddSnapshotUpdate(unittest.TestCase): + """AddSnapshotUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AddSnapshotUpdate: + """Test AddSnapshotUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AddSnapshotUpdate` + """ + model = AddSnapshotUpdate() + if include_optional: + return AddSnapshotUpdate( + action = 'add-snapshot', + snapshot = polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ) + else: + return AddSnapshotUpdate( + action = 'add-snapshot', + snapshot = polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ), + ) + """ + + def testAddSnapshotUpdate(self): + """Test AddSnapshotUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_add_sort_order_update.py b/regtests/client/python/test/test_add_sort_order_update.py new file mode 100644 index 
0000000000..ca436a697a --- /dev/null +++ b/regtests/client/python/test/test_add_sort_order_update.py @@ -0,0 +1,85 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.add_sort_order_update import AddSortOrderUpdate + +class TestAddSortOrderUpdate(unittest.TestCase): + """AddSortOrderUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AddSortOrderUpdate: + """Test AddSortOrderUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AddSortOrderUpdate` + """ + model = AddSortOrderUpdate() + if include_optional: + return AddSortOrderUpdate( + action = 'add-sort-order', + sort_order = polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ) + else: + return AddSortOrderUpdate( + action = 'add-sort-order', + sort_order = polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ), + ) + """ + + def testAddSortOrderUpdate(self): + """Test AddSortOrderUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_add_view_version_update.py b/regtests/client/python/test/test_add_view_version_update.py new file mode 100644 index 0000000000..be145b9e94 --- /dev/null +++ b/regtests/client/python/test/test_add_view_version_update.py @@ -0,0 +1,91 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.add_view_version_update import AddViewVersionUpdate + +class TestAddViewVersionUpdate(unittest.TestCase): + """AddViewVersionUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AddViewVersionUpdate: + """Test AddViewVersionUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AddViewVersionUpdate` + """ + model = AddViewVersionUpdate() + if include_optional: + return AddViewVersionUpdate( + action = 'add-view-version', + view_version = polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ) + ) + else: + return AddViewVersionUpdate( + action = 'add-view-version', + 
view_version = polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ), + ) + """ + + def testAddViewVersionUpdate(self): + """Test AddViewVersionUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_and_or_expression.py b/regtests/client/python/test/test_and_or_expression.py new file mode 100644 index 0000000000..c396e62514 --- /dev/null +++ b/regtests/client/python/test/test_and_or_expression.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.and_or_expression import AndOrExpression + +class TestAndOrExpression(unittest.TestCase): + """AndOrExpression unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AndOrExpression: + """Test AndOrExpression + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AndOrExpression` + """ + model = AndOrExpression() + if include_optional: + return AndOrExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + left = None, + right = None + ) + else: + return AndOrExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + left = None, + right = None, + ) + """ + + def testAndOrExpression(self): + """Test AndOrExpression""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_create.py b/regtests/client/python/test/test_assert_create.py new file mode 100644 index 0000000000..7128a928da --- /dev/null +++ b/regtests/client/python/test/test_assert_create.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_create import AssertCreate + +class TestAssertCreate(unittest.TestCase): + """AssertCreate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertCreate: + """Test AssertCreate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertCreate` + """ + model = AssertCreate() + if include_optional: + return AssertCreate( + type = 'assert-create' + ) + else: + return AssertCreate( + type = 'assert-create', + ) + """ + + def testAssertCreate(self): + """Test AssertCreate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_current_schema_id.py b/regtests/client/python/test/test_assert_current_schema_id.py new file mode 100644 index 0000000000..1dd77a0f15 --- /dev/null +++ 
b/regtests/client/python/test/test_assert_current_schema_id.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_current_schema_id import AssertCurrentSchemaId + +class TestAssertCurrentSchemaId(unittest.TestCase): + """AssertCurrentSchemaId unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertCurrentSchemaId: + """Test AssertCurrentSchemaId + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertCurrentSchemaId` + """ + model = AssertCurrentSchemaId() + if include_optional: + return AssertCurrentSchemaId( + type = 'assert-current-schema-id', + current_schema_id = 56 + ) + else: + return AssertCurrentSchemaId( + type = 'assert-current-schema-id', + current_schema_id = 56, + ) + """ + + def testAssertCurrentSchemaId(self): + """Test AssertCurrentSchemaId""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_default_sort_order_id.py b/regtests/client/python/test/test_assert_default_sort_order_id.py new file mode 100644 index 0000000000..0d52104659 --- /dev/null +++ b/regtests/client/python/test/test_assert_default_sort_order_id.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_default_sort_order_id import AssertDefaultSortOrderId + +class TestAssertDefaultSortOrderId(unittest.TestCase): + """AssertDefaultSortOrderId unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertDefaultSortOrderId: + """Test AssertDefaultSortOrderId + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertDefaultSortOrderId` + """ + model = AssertDefaultSortOrderId() + if include_optional: + return AssertDefaultSortOrderId( + type = 'assert-default-sort-order-id', + default_sort_order_id = 56 + ) + else: + return AssertDefaultSortOrderId( + type = 'assert-default-sort-order-id', + default_sort_order_id = 56, + ) + """ + + def testAssertDefaultSortOrderId(self): + """Test AssertDefaultSortOrderId""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_default_spec_id.py b/regtests/client/python/test/test_assert_default_spec_id.py new file mode 100644 index 0000000000..8fceeb5643 --- /dev/null +++ b/regtests/client/python/test/test_assert_default_spec_id.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake 
Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_default_spec_id import AssertDefaultSpecId + +class TestAssertDefaultSpecId(unittest.TestCase): + """AssertDefaultSpecId unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertDefaultSpecId: + """Test AssertDefaultSpecId + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertDefaultSpecId` + """ + model = AssertDefaultSpecId() + if include_optional: + return AssertDefaultSpecId( + type = 'assert-default-spec-id', + default_spec_id = 56 + ) + else: + return AssertDefaultSpecId( + type = 'assert-default-spec-id', + default_spec_id = 56, + ) + """ + + def testAssertDefaultSpecId(self): + """Test AssertDefaultSpecId""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = 
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_last_assigned_field_id.py b/regtests/client/python/test/test_assert_last_assigned_field_id.py new file mode 100644 index 0000000000..35d12109ba --- /dev/null +++ b/regtests/client/python/test/test_assert_last_assigned_field_id.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_last_assigned_field_id import AssertLastAssignedFieldId + +class TestAssertLastAssignedFieldId(unittest.TestCase): + """AssertLastAssignedFieldId unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertLastAssignedFieldId: + """Test AssertLastAssignedFieldId + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertLastAssignedFieldId` + """ + model = AssertLastAssignedFieldId() + if include_optional: + return AssertLastAssignedFieldId( + type = 'assert-last-assigned-field-id', + last_assigned_field_id = 56 + ) + else: + return AssertLastAssignedFieldId( + type = 'assert-last-assigned-field-id', + last_assigned_field_id = 56, + ) + """ + + def testAssertLastAssignedFieldId(self): + """Test AssertLastAssignedFieldId""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_last_assigned_partition_id.py b/regtests/client/python/test/test_assert_last_assigned_partition_id.py new file mode 100644 index 0000000000..e9cb23a9d5 --- /dev/null +++ b/regtests/client/python/test/test_assert_last_assigned_partition_id.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_last_assigned_partition_id import AssertLastAssignedPartitionId + +class TestAssertLastAssignedPartitionId(unittest.TestCase): + """AssertLastAssignedPartitionId unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertLastAssignedPartitionId: + """Test AssertLastAssignedPartitionId + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertLastAssignedPartitionId` + """ + model = AssertLastAssignedPartitionId() + if include_optional: + return AssertLastAssignedPartitionId( + type = 'assert-last-assigned-partition-id', + last_assigned_partition_id = 56 + ) + else: + return AssertLastAssignedPartitionId( + type = 'assert-last-assigned-partition-id', + last_assigned_partition_id = 56, + ) + """ + + def testAssertLastAssignedPartitionId(self): + """Test AssertLastAssignedPartitionId""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = 
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_ref_snapshot_id.py b/regtests/client/python/test/test_assert_ref_snapshot_id.py new file mode 100644 index 0000000000..98b1889230 --- /dev/null +++ b/regtests/client/python/test/test_assert_ref_snapshot_id.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_ref_snapshot_id import AssertRefSnapshotId + +class TestAssertRefSnapshotId(unittest.TestCase): + """AssertRefSnapshotId unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertRefSnapshotId: + """Test AssertRefSnapshotId + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertRefSnapshotId` + """ + model = AssertRefSnapshotId() + if include_optional: + return AssertRefSnapshotId( + type = 'assert-ref-snapshot-id', + ref = '', + snapshot_id = 56 + ) + else: + return AssertRefSnapshotId( + type = 'assert-ref-snapshot-id', + ref = '', + snapshot_id = 56, + ) + """ + + def testAssertRefSnapshotId(self): + """Test AssertRefSnapshotId""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_table_uuid.py b/regtests/client/python/test/test_assert_table_uuid.py new file mode 100644 index 0000000000..685a71f28b --- /dev/null +++ b/regtests/client/python/test/test_assert_table_uuid.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_table_uuid import AssertTableUUID + +class TestAssertTableUUID(unittest.TestCase): + """AssertTableUUID unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertTableUUID: + """Test AssertTableUUID + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertTableUUID` + """ + model = AssertTableUUID() + if include_optional: + return AssertTableUUID( + type = 'assert-table-uuid', + uuid = '' + ) + else: + return AssertTableUUID( + type = 'assert-table-uuid', + uuid = '', + ) + """ + + def testAssertTableUUID(self): + """Test AssertTableUUID""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assert_view_uuid.py b/regtests/client/python/test/test_assert_view_uuid.py new file mode 100644 index 0000000000..b746a89918 --- /dev/null +++ b/regtests/client/python/test/test_assert_view_uuid.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assert_view_uuid import AssertViewUUID + +class TestAssertViewUUID(unittest.TestCase): + """AssertViewUUID unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssertViewUUID: + """Test AssertViewUUID + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssertViewUUID` + """ + model = AssertViewUUID() + if include_optional: + return AssertViewUUID( + type = 'assert-view-uuid', + uuid = '' + ) + else: + return AssertViewUUID( + type = 'assert-view-uuid', + uuid = '', + ) + """ + + def testAssertViewUUID(self): + """Test AssertViewUUID""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_assign_uuid_update.py b/regtests/client/python/test/test_assign_uuid_update.py new file mode 100644 index 0000000000..2f80f9b997 --- /dev/null 
+++ b/regtests/client/python/test/test_assign_uuid_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.assign_uuid_update import AssignUUIDUpdate + +class TestAssignUUIDUpdate(unittest.TestCase): + """AssignUUIDUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AssignUUIDUpdate: + """Test AssignUUIDUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AssignUUIDUpdate` + """ + model = AssignUUIDUpdate() + if include_optional: + return AssignUUIDUpdate( + action = 'assign-uuid', + uuid = '' + ) + else: + return AssignUUIDUpdate( + action = 'assign-uuid', + uuid = '', + ) + """ + + def testAssignUUIDUpdate(self): + """Test AssignUUIDUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_aws_storage_config_info.py b/regtests/client/python/test/test_aws_storage_config_info.py new file mode 100644 index 0000000000..683c9dd512 --- /dev/null +++ b/regtests/client/python/test/test_aws_storage_config_info.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.aws_storage_config_info import AwsStorageConfigInfo + +class TestAwsStorageConfigInfo(unittest.TestCase): + """AwsStorageConfigInfo unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AwsStorageConfigInfo: + """Test AwsStorageConfigInfo + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AwsStorageConfigInfo` + """ + model = AwsStorageConfigInfo() + if include_optional: + return AwsStorageConfigInfo( + role_arn = 'arn:aws:iam::123456789001:principal/abc1-b-self1234', + external_id = '', + user_arn = 'arn:aws:iam::123456789001:user/abc1-b-self1234' + ) + else: + return AwsStorageConfigInfo( + role_arn = 'arn:aws:iam::123456789001:principal/abc1-b-self1234', + ) + """ + + def testAwsStorageConfigInfo(self): + """Test AwsStorageConfigInfo""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_azure_storage_config_info.py b/regtests/client/python/test/test_azure_storage_config_info.py new file mode 100644 index 0000000000..bffd6f3208 --- /dev/null +++ b/regtests/client/python/test/test_azure_storage_config_info.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.azure_storage_config_info import AzureStorageConfigInfo + +class TestAzureStorageConfigInfo(unittest.TestCase): + """AzureStorageConfigInfo unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> AzureStorageConfigInfo: + """Test AzureStorageConfigInfo + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `AzureStorageConfigInfo` + """ + model = AzureStorageConfigInfo() + if include_optional: + return AzureStorageConfigInfo( + tenant_id = '', + multi_tenant_app_name = '', + consent_url = '' + ) + else: + return AzureStorageConfigInfo( + tenant_id = '', + ) + """ + + def testAzureStorageConfigInfo(self): + """Test AzureStorageConfigInfo""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git 
a/regtests/client/python/test/test_base_update.py b/regtests/client/python/test/test_base_update.py new file mode 100644 index 0000000000..2d2fc86b05 --- /dev/null +++ b/regtests/client/python/test/test_base_update.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.base_update import BaseUpdate + +class TestBaseUpdate(unittest.TestCase): + """BaseUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> BaseUpdate: + """Test BaseUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `BaseUpdate` + """ + model = BaseUpdate() + if include_optional: + return BaseUpdate( + action = '' + ) + else: + return BaseUpdate( + action = '', + ) + """ + + def testBaseUpdate(self): + """Test BaseUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_blob_metadata.py b/regtests/client/python/test/test_blob_metadata.py new file mode 100644 index 0000000000..ef03ab7c67 --- /dev/null +++ b/regtests/client/python/test/test_blob_metadata.py @@ -0,0 +1,78 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.blob_metadata import BlobMetadata + +class TestBlobMetadata(unittest.TestCase): + """BlobMetadata unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> BlobMetadata: + """Test BlobMetadata + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `BlobMetadata` + """ + model = BlobMetadata() + if include_optional: + return BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties() + ) + else: + return BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + ) + """ + + def testBlobMetadata(self): + """Test BlobMetadata""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog.py b/regtests/client/python/test/test_catalog.py new file mode 100644 index 0000000000..0b021e96e4 --- /dev/null +++ b/regtests/client/python/test/test_catalog.py @@ -0,0 +1,79 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalog import Catalog + +class TestCatalog(unittest.TestCase): + """Catalog unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Catalog: + """Test Catalog + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Catalog` + """ + model = Catalog() + if include_optional: + return Catalog( + type = 'INTERNAL', + name = '', + read_only = True, + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, + storage_config_info = polaris.management.models.storage_config_info.StorageConfigInfo( + storage_type = 'S3', + allowed_locations = ['s3://bucketname/prefix/', 'abfss://container@storageaccount.blob.core.windows.net/prefix/', 'gs://bucketname/prefix/'], ) + ) + else: + return Catalog( + type = 'INTERNAL', + name = '', + ) + """ + + def testCatalog(self): + """Test Catalog""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog_config.py b/regtests/client/python/test/test_catalog_config.py new file mode 100644 index 0000000000..9cd70b2905 --- /dev/null +++ b/regtests/client/python/test/test_catalog_config.py @@ -0,0 +1,77 @@ +# +#
Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.catalog_config import CatalogConfig + +class TestCatalogConfig(unittest.TestCase): + """CatalogConfig unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CatalogConfig: + """Test CatalogConfig + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CatalogConfig` + """ + model = CatalogConfig() + if include_optional: + return CatalogConfig( + overrides = { + 'key' : '' + }, + defaults = { + 'key' : '' + } + ) + else: + return CatalogConfig( + overrides = { + 'key' : '' + }, + defaults = { + 'key' : '' + }, + ) + """ + + def testCatalogConfig(self): + """Test CatalogConfig""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ ==
'__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog_grant.py b/regtests/client/python/test/test_catalog_grant.py new file mode 100644 index 0000000000..a0ab0e341c --- /dev/null +++ b/regtests/client/python/test/test_catalog_grant.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalog_grant import CatalogGrant + +class TestCatalogGrant(unittest.TestCase): + """CatalogGrant unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CatalogGrant: + """Test CatalogGrant + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CatalogGrant` + """ + model = CatalogGrant() + if include_optional: + return CatalogGrant( + privilege = 'CATALOG_MANAGE_ACCESS' + ) + else: + return CatalogGrant( + privilege = 'CATALOG_MANAGE_ACCESS', + ) + """ + + def testCatalogGrant(self): + """Test CatalogGrant""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog_privilege.py b/regtests/client/python/test/test_catalog_privilege.py new file mode 100644 index 0000000000..5ba2bfe556 --- /dev/null +++ b/regtests/client/python/test/test_catalog_privilege.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalog_privilege import CatalogPrivilege + +class TestCatalogPrivilege(unittest.TestCase): + """CatalogPrivilege unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testCatalogPrivilege(self): + """Test CatalogPrivilege""" + # inst = CatalogPrivilege() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog_properties.py b/regtests/client/python/test/test_catalog_properties.py new file mode 100644 index 0000000000..e9730b172d --- /dev/null +++ b/regtests/client/python/test/test_catalog_properties.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalog_properties import CatalogProperties + +class TestCatalogProperties(unittest.TestCase): + """CatalogProperties unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CatalogProperties: + """Test CatalogProperties + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CatalogProperties` + """ + model = CatalogProperties() + if include_optional: + return CatalogProperties( + default_base_location = '' + ) + else: + return CatalogProperties( + default_base_location = '', + ) + """ + + def testCatalogProperties(self): + """Test CatalogProperties""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog_role.py b/regtests/client/python/test/test_catalog_role.py new file mode 100644 index 0000000000..f74d138a67 --- /dev/null +++ b/regtests/client/python/test/test_catalog_role.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalog_role import CatalogRole + +class TestCatalogRole(unittest.TestCase): + """CatalogRole unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CatalogRole: + """Test CatalogRole + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CatalogRole` + """ + model = CatalogRole() + if include_optional: + return CatalogRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56 + ) + else: + return CatalogRole( + name = '', + ) + """ + + def testCatalogRole(self): + """Test CatalogRole""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalog_roles.py b/regtests/client/python/test/test_catalog_roles.py new file mode 100644 index 0000000000..bcc1847053 --- /dev/null +++ b/regtests/client/python/test/test_catalog_roles.py @@ -0,0 +1,85 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalog_roles import CatalogRoles + +class TestCatalogRoles(unittest.TestCase): + """CatalogRoles unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CatalogRoles: + """Test CatalogRoles + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CatalogRoles` + """ + model = CatalogRoles() + if include_optional: + return CatalogRoles( + roles = [ + polaris.management.models.catalog_role.CatalogRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ] + ) + else: + return CatalogRoles( + roles = [ + polaris.management.models.catalog_role.CatalogRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ], + ) + """ + + def testCatalogRoles(self): + """Test CatalogRoles""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ ==
'__main__': + unittest.main() diff --git a/regtests/client/python/test/test_catalogs.py b/regtests/client/python/test/test_catalogs.py new file mode 100644 index 0000000000..27aef6a689 --- /dev/null +++ b/regtests/client/python/test/test_catalogs.py @@ -0,0 +1,95 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.catalogs import Catalogs + +class TestCatalogs(unittest.TestCase): + """Catalogs unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Catalogs: + """Test Catalogs + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Catalogs` + """ + model = Catalogs() + if include_optional: + return Catalogs( + catalogs = [ + polaris.management.models.catalog.Catalog( + type = 'INTERNAL', + name = '', + read_only = True, + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, + storage_config_info = polaris.management.models.storage_config_info.StorageConfigInfo( + storage_type = 'S3', + allowed_locations = ['s3://bucketname/prefix/', 'abfss://container@storageaccount.blob.core.windows.net/prefix/', 'gs://bucketname/prefix/'], ), ) + ] + ) + else: + return Catalogs( + catalogs = [ + polaris.management.models.catalog.Catalog( + type = 'INTERNAL', + name = '', + read_only = True, + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, + storage_config_info = polaris.management.models.storage_config_info.StorageConfigInfo( + storage_type = 'S3', + allowed_locations = ['s3://bucketname/prefix/', 'abfss://container@storageaccount.blob.core.windows.net/prefix/', 'gs://bucketname/prefix/'], ), ) + ], + ) + """ + + def testCatalogs(self): + """Test Catalogs""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_cli_parsing.py b/regtests/client/python/test/test_cli_parsing.py new
file mode 100644 index 0000000000..073c8ecd5d --- /dev/null +++ b/regtests/client/python/test/test_cli_parsing.py @@ -0,0 +1,434 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import unittest +import io +from functools import reduce +from typing import List +from unittest.mock import patch, MagicMock + +from cli.command import Command +from cli.options.parser import Parser +from polaris.catalog import ApiClient +from polaris.management import PolarisDefaultApi + +INVALID_ARGS = 2 + + +class TestCliParsing(unittest.TestCase): + + def test_invalid_commands(self): + with self.assertRaises(SystemExit) as cm: + Parser.parse(['not-real-command!', 'list']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['catalogs', 'not-real-subcommand']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['catalogs', 'create']) # missing required input + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['catalogs', 'create', 'catalog_name', '--type', 'BANANA']) # invalid catalog type + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['catalogs', 'get', 'catalog_name', '--fake-flag']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + 
Parser.parse(['principals', 'create', 'name', '--type', 'bad']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['principals', 'update', 'name', '--client-id', 'something']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['privileges', 'catalog', '--catalog', 'c', '--catalog-role', 'r', 'privilege', 'grant']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + with self.assertRaises(SystemExit) as cm: + Parser.parse(['privileges', '--catalog', 'c', '--catalog-role', 'r', 'catalog', 'grant', 'privilege', + '--namespace', 'unexpected!']) + self.assertEqual(cm.exception.code, INVALID_ARGS) + + def _check_usage_output(self, f, needle='usage:'): + with patch('sys.stdout', new_callable=io.StringIO) as mock_stdout, \ + patch('sys.stderr', new_callable=io.StringIO) as mock_stderr: + with self.assertRaises(SystemExit) as cm: + f() + self.assertEqual(cm.exception.code, 0) + help_output = str(mock_stdout.getvalue()) + self.assertIn(needle, help_output) + print(help_output) + + def test_usage(self): + self._check_usage_output(lambda: Parser.parse(['--help'])) + self._check_usage_output(lambda: Parser.parse(['catalogs', '--help'])) + self._check_usage_output(lambda: Parser.parse(['catalogs', 'create', '--help'])) + self._check_usage_output(lambda: Parser.parse(['catalogs', 'create', 'something', '--help'])) + + def test_extended_usage(self): + self._check_usage_output(lambda: Parser._build_parser().parse_args(['--help'], 'input:')) + self._check_usage_output(lambda: Parser._build_parser().parse_args(['catalogs', '--help'], 'input:')) + self._check_usage_output(lambda: Parser._build_parser().parse_args(['catalogs', 'create', '--help'], 'input:')) + self._check_usage_output(lambda: Parser._build_parser().parse_args([ + 'catalogs', 'create', 'c', '--help'], 'input:')) + self._check_usage_output(lambda: Parser._build_parser().parse_args([
'privileges', 'table', 'grant', '--help'], 'input:')) + self._check_usage_output(lambda: Parser.parse(['catalogs', 'create', 'something', '--help']), 'input:') + + def test_parsing_valid_commands(self): + Parser.parse(['catalogs', 'create', 'catalog_name']) + Parser.parse(['catalogs', 'create', 'catalog_name', '--type', 'internal']) + Parser.parse(['catalogs', 'create', 'catalog_name', '--type', 'INTERNAL']) + Parser.parse(['catalogs', 'list']) + Parser.parse(['catalogs', 'get', 'catalog_name']) + Parser.parse(['principals', 'list']) + Parser.parse(['--host', 'some-host', 'catalogs', 'list']) + Parser.parse(['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'catalog', 'grant', 'TABLE_READ_DATA']) + Parser.parse(['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'table', 'grant', + '--namespace', 'n', '--table', 't', 'TABLE_READ_DATA']) + Parser.parse(['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'table', 'revoke', + '--namespace', 'n', '--table', 't', 'TABLE_READ_DATA']) + + # These commands are valid for parsing, but may cause errors within the command itself + def test_parse_valid_commands(self): + Parser.parse(['catalogs', 'create', 'catalog_name', '--type', 'internal', '--remote-url', 'www.apache.org']) + Parser.parse(['privileges', 'table', 'grant', + '--namespace', 'n', '--table', 't', 'TABLE_READ_DATA']) + Parser.parse(['privileges', '--catalog', 'c', '--catalog-role', 'r', 'catalog', 'grant', 'fake-privilege']) + + def test_commands(self): + + def build_mock_client(): + client = MagicMock(spec=PolarisDefaultApi) + client.call_tracker = dict() + + def capture_method(method_name): + def _capture(*args, **kwargs): + client.call_tracker['_method'] = method_name + for i, arg in enumerate(args): + if arg is not None: + client.call_tracker[i] = arg + + return _capture + + for method_name in dir(client): + if callable(getattr(client, method_name)) and not method_name.startswith('__'): + setattr(client, method_name, + 
MagicMock(name=method_name, side_effect=capture_method(method_name))) + return client + mock_client = build_mock_client() + + def mock_execute(input: List[str]): + mock_client.call_tracker = dict() + + # Assuming Parser and Command are used to parse input and generate commands + options = Parser.parse(input) + command = Command.from_options(options) + + try: + command.execute(mock_client) + except AttributeError as e: + # Some commands may fail due to the mock, but the results should still match expectations + print(f'Suppressed error: {e}') + return mock_client.call_tracker + + def check_exception(f, exception_str): + throws = True + try: + f() + throws = False + except Exception as e: + self.assertIn(exception_str, str(e)) + self.assertTrue(throws, 'Exception should be raised') + + def check_arguments(result, method_name, args=dict()): + self.assertEqual(method_name, result['_method']) + + def get(obj, arg_string): + attributes = arg_string.split('.') + return reduce(getattr, attributes, obj) + + for arg, value in args.items(): + index, path = arg + if path is not None: + self.assertEqual(value, get(result[index], path)) + else: + self.assertEqual(value, result[index]) + + # Test various failing commands: + check_exception(lambda: mock_execute(['catalogs', 'create', 'my-catalog']), + '--storage-type') + check_exception(lambda: mock_execute(['catalogs', 'create', 'my-catalog', '--storage-type', 'gcs']), + '--default-base-location') + check_exception(lambda: mock_execute(['catalogs', 'create', 'my-catalog', '--type', 'external', + '--default-base-location', 'x', '--storage-type', 'gcs']), + '--remote-url') + check_exception(lambda: mock_execute(['catalog-roles', 'get', 'foo']), + '--catalog') + check_exception(lambda: mock_execute(['catalogs', 'update', 'foo', '--property', 'bad-format']), + 'bad-format') + check_exception(lambda: mock_execute(['privileges', '--catalog', 'foo', '--catalog-role', 'bar', + 'catalog', 'grant', 'TABLE_READ_MORE_BOOKS']), + 'catalog 
privilege: TABLE_READ_MORE_BOOKS') + check_exception(lambda: mock_execute(['catalogs', 'create', 'my-catalog', '--storage-type', 'gcs', + '--allowed-location', 'a', '--allowed-location', 'b', + '--role-arn', 'ra', '--default-base-location', 'x']), + 'gcs') + + # Test various correct commands: + check_arguments( + mock_execute(['catalogs', 'create', 'my-catalog', '--storage-type', 'gcs', '--default-base-location', 'x']), + 'create_catalog', { + (0, 'catalog.name'): 'my-catalog', + (0, 'catalog.storage_config_info.storage_type'): 'GCS', + (0, 'catalog.properties.default_base_location'): 'x', + }) + check_arguments( + mock_execute(['catalogs', 'create', 'my-catalog', '--type', 'external', '--remote-url', 'foo.bar', + '--storage-type', 'gcs', '--default-base-location', 'dbl']), + 'create_catalog', { + (0, 'catalog.name'): 'my-catalog', + (0, 'catalog.type'): 'EXTERNAL', + (0, 'catalog.remote_url'): 'foo.bar', + }) + check_arguments( + mock_execute([ + 'catalogs', 'create', 'my-catalog', '--storage-type', 's3', + '--allowed-location', 'a', '--allowed-location', 'b', '--role-arn', 'ra', + '--user-arn', 'ua', '--external-id', 'ei', '--default-base-location', 'x']), + 'create_catalog', { + (0, 'catalog.name'): 'my-catalog', + (0, 'catalog.storage_config_info.storage_type'): 'S3', + (0, 'catalog.properties.default_base_location'): 'x', + (0, 'catalog.storage_config_info.allowed_locations'): ['a', 'b'], + }) + check_arguments(mock_execute(['catalogs', 'list']), 'list_catalogs') + check_arguments(mock_execute(['catalogs', 'delete', 'foo']), 'delete_catalog', { + (0, None): 'foo', + }) + check_arguments(mock_execute(['catalogs', 'get', 'foo']), 'get_catalog', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['catalogs', 'update', 'foo', '--default-base-location', 'x']), + 'get_catalog', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principals', 'create', 'foo', '--client-id', 'id', '--property', 'key=value']), + 'create_principal', { + (0, 
'principal.name'): 'foo', + (0, 'principal.client_id'): 'id', + (0, 'principal.properties'): {'key': 'value'}, + }) + check_arguments( + mock_execute(['principals', 'delete', 'foo']), + 'delete_principal', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principals', 'get', 'foo']), + 'get_principal', { + (0, None): 'foo', + }) + check_arguments(mock_execute(['principals', 'list']), 'list_principals') + check_arguments( + mock_execute(['principals', 'rotate-credentials', 'foo']), + 'rotate_credentials', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principals', 'update', 'foo', '--property', 'key=value']), + 'get_principal', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'create', 'foo']), + 'create_principal_role', { + (0, 'principal_role.name'): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'delete', 'foo']), + 'delete_principal_role', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'delete', 'foo']), + 'delete_principal_role', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'get', 'foo']), + 'get_principal_role', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'get', 'foo']), + 'get_principal_role', { + (0, None): 'foo', + }) + check_arguments(mock_execute(['principal-roles', 'list']), 'list_principal_roles') + check_arguments( + mock_execute(['principal-roles', 'list', '--principal', 'foo']), + 'list_principal_roles_assigned', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'update', 'foo', '--property', 'key=value']), + 'get_principal_role', { + (0, None): 'foo' + }) + check_arguments( + mock_execute(['principal-roles', 'update', 'foo', '--property', 'key=value']), + 'get_principal_role', { + (0, None): 'foo', + }) + check_arguments( + mock_execute(['principal-roles', 'grant', 'bar', '--principal', 'foo']), + 'assign_principal_role', { + (0, 
None): 'foo', + (1, 'principal_role.name'): 'bar', + }) + check_arguments( + mock_execute(['principal-roles', 'revoke', 'bar', '--principal', 'foo']), + 'revoke_principal_role', { + (0, None): 'foo', + (1, None): 'bar', + }) + check_arguments( + mock_execute( + ['catalog-roles', 'create', 'foo', '--catalog', 'bar', '--property', 'key=value']), + 'create_catalog_role', { + (0, None): 'bar', + (1, 'catalog_role.name'): 'foo', + (1, 'catalog_role.properties'): {'key': 'value'}, + }) + check_arguments( + mock_execute( + ['catalog-roles', 'delete', 'foo', '--catalog', 'bar']), + 'delete_catalog_role', { + (0, None): 'bar', + (1, None): 'foo', + }) + check_arguments( + mock_execute( + ['catalog-roles', 'get', 'foo', '--catalog', 'bar']), + 'get_catalog_role', { + (0, None): 'bar', + (1, None): 'foo', + }) + check_arguments(mock_execute( + ['catalog-roles', 'list', 'foo']), + 'list_catalog_roles', { + (0, None): 'foo', + }) + check_arguments(mock_execute( + ['catalog-roles', 'list', 'foo', '--principal-role', 'bar']), + 'list_catalog_roles_for_principal_role', { + (0, None): 'bar', + (1, None): 'foo', + }) + check_arguments(mock_execute( + ['catalog-roles', 'update', 'foo', '--catalog', 'bar', '--property', 'key=value']), + 'get_catalog_role', { + (0, None): 'bar', + (1, None): 'foo', + }) + check_arguments( + mock_execute(['catalog-roles', 'grant', '--principal-role', 'foo', '--catalog', 'bar', 'baz']), + 'assign_catalog_role_to_principal_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, 'catalog_role.name'): 'baz', + }) + check_arguments( + mock_execute(['catalog-roles', 'revoke', '--principal-role', 'foo', '--catalog', 'bar', 'baz']), + 'revoke_catalog_role_from_principal_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, None): 'baz', + }) + check_arguments( + mock_execute( + ['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'catalog', 'grant', 'TABLE_READ_DATA']), + 'add_grant_to_catalog_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, 
'grant.privilege.value'): 'TABLE_READ_DATA', + }) + check_arguments( + mock_execute( + ['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'catalog', 'revoke', 'TABLE_READ_DATA']), + 'revoke_grant_from_catalog_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, None): False, + (3, 'grant.privilege.value'): 'TABLE_READ_DATA', + }) + check_arguments( + mock_execute( + ['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'namespace', 'grant', '--namespace', 'a.b.c', + 'TABLE_READ_DATA']), + 'add_grant_to_catalog_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, 'grant.privilege.value'): 'TABLE_READ_DATA', + (2, 'grant.namespace'): ['a', 'b', 'c'], + }) + check_arguments( + mock_execute( + ['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'table', 'grant', '--namespace', 'a.b.c', + '--table', 't', 'TABLE_READ_DATA']), + 'add_grant_to_catalog_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, 'grant.privilege.value'): 'TABLE_READ_DATA', + (2, 'grant.namespace'): ['a', 'b', 'c'], + (2, 'grant.table_name'): 't', + }) + check_arguments( + mock_execute( + ['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'table', 'revoke', '--namespace', 'a.b.c', + '--table', 't', '--cascade', 'TABLE_READ_DATA']), + 'revoke_grant_from_catalog_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, None): True, + (3, 'grant.privilege.value'): 'TABLE_READ_DATA', + (3, 'grant.namespace'): ['a', 'b', 'c'], + (3, 'grant.table_name'): 't', + }) + check_arguments( + mock_execute( + ['privileges', '--catalog', 'foo', '--catalog-role', 'bar', 'view', 'grant', '--namespace', 'a.b.c', + '--view', 'v', 'VIEW_CREATE']), + 'add_grant_to_catalog_role', { + (0, None): 'foo', + (1, None): 'bar', + (2, 'grant.privilege.value'): 'VIEW_CREATE', + (2, 'grant.namespace'): ['a', 'b', 'c'], + (2, 'grant.view_name'): 'v', + }) + + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_commit_report.py 
b/regtests/client/python/test/test_commit_report.py new file mode 100644 index 0000000000..bf049e5afe --- /dev/null +++ b/regtests/client/python/test/test_commit_report.py @@ -0,0 +1,78 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.commit_report import CommitReport + +class TestCommitReport(unittest.TestCase): + """CommitReport unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CommitReport: + """Test CommitReport + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CommitReport` + """ + model = CommitReport() + if include_optional: + return CommitReport( + table_name = '', + snapshot_id = 56, + sequence_number = 56, + operation = '', + metrics = {"metrics":{"total-planning-duration":{"count":1,"time-unit":"nanoseconds","total-duration":2644235116},"result-data-files":{"unit":"count","value":1},"result-delete-files":{"unit":"count","value":0},"total-data-manifests":{"unit":"count","value":1},"total-delete-manifests":{"unit":"count","value":0},"scanned-data-manifests":{"unit":"count","value":1},"skipped-data-manifests":{"unit":"count","value":0},"total-file-size-bytes":{"unit":"bytes","value":10},"total-delete-file-size-bytes":{"unit":"bytes","value":0}}}, + metadata = { + 'key' : '' + } + ) + else: + return CommitReport( + table_name = '', + snapshot_id = 56, + sequence_number = 56, + operation = '', + metrics = {"metrics":{"total-planning-duration":{"count":1,"time-unit":"nanoseconds","total-duration":2644235116},"result-data-files":{"unit":"count","value":1},"result-delete-files":{"unit":"count","value":0},"total-data-manifests":{"unit":"count","value":1},"total-delete-manifests":{"unit":"count","value":0},"scanned-data-manifests":{"unit":"count","value":1},"skipped-data-manifests":{"unit":"count","value":0},"total-file-size-bytes":{"unit":"bytes","value":10},"total-delete-file-size-bytes":{"unit":"bytes","value":0}}}, + ) + """ + + def testCommitReport(self): + """Test CommitReport""" + # inst_req_only = 
self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_commit_table_request.py b/regtests/client/python/test/test_commit_table_request.py new file mode 100644 index 0000000000..c94e0f3e19 --- /dev/null +++ b/regtests/client/python/test/test_commit_table_request.py @@ -0,0 +1,82 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.commit_table_request import CommitTableRequest + +class TestCommitTableRequest(unittest.TestCase): + """CommitTableRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CommitTableRequest: + """Test CommitTableRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CommitTableRequest` + """ + model = CommitTableRequest() + if include_optional: + return CommitTableRequest( + identifier = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + requirements = [ + polaris.catalog.models.table_requirement.TableRequirement( + type = '', ) + ], + updates = [ + null + ] + ) + else: + return CommitTableRequest( + requirements = [ + polaris.catalog.models.table_requirement.TableRequirement( + type = '', ) + ], + updates = [ + null + ], + ) + """ + + def testCommitTableRequest(self): + """Test CommitTableRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_commit_table_response.py b/regtests/client/python/test/test_commit_table_response.py new file mode 100644 index 0000000000..9fde1a54c8 --- /dev/null +++ b/regtests/client/python/test/test_commit_table_response.py @@ -0,0 +1,251 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.commit_table_response import CommitTableResponse + +class TestCommitTableResponse(unittest.TestCase): + """CommitTableResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CommitTableResponse: + """Test CommitTableResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CommitTableResponse` + """ + model = CommitTableResponse() + if include_optional: + return CommitTableResponse( + metadata_location = '', + metadata = polaris.catalog.models.table_metadata.TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = 
'["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], ) + ], ) + ], + partition_statistics_files = [ + polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ], ) + ) + else: + return CommitTableResponse( + metadata_location = '', + metadata = polaris.catalog.models.table_metadata.TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + 
properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], ) + ], ) + ], + partition_statistics_files = [ + 
polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ], ), + ) + """ + + def testCommitTableResponse(self): + """Test CommitTableResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_commit_transaction_request.py b/regtests/client/python/test/test_commit_transaction_request.py new file mode 100644 index 0000000000..b20c19dc57 --- /dev/null +++ b/regtests/client/python/test/test_commit_transaction_request.py @@ -0,0 +1,91 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.commit_transaction_request import CommitTransactionRequest + +class TestCommitTransactionRequest(unittest.TestCase): + """CommitTransactionRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CommitTransactionRequest: + """Test CommitTransactionRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CommitTransactionRequest` + """ + model = CommitTransactionRequest() + if include_optional: + return CommitTransactionRequest( + table_changes = [ + polaris.catalog.models.commit_table_request.CommitTableRequest( + identifier = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + requirements = [ + polaris.catalog.models.table_requirement.TableRequirement( + type = '', ) + ], + updates = [ + null + ], ) + ] + ) + else: + return CommitTransactionRequest( + table_changes = [ + polaris.catalog.models.commit_table_request.CommitTableRequest( + identifier = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + requirements = [ + polaris.catalog.models.table_requirement.TableRequirement( + type = '', ) + ], + updates = [ + null + ], ) + ], + ) + """ + + def testCommitTransactionRequest(self): + """Test CommitTransactionRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_commit_view_request.py b/regtests/client/python/test/test_commit_view_request.py new file mode 100644 index 0000000000..346c9363c0 --- /dev/null +++ b/regtests/client/python/test/test_commit_view_request.py @@ -0,0 +1,78 @@ +# 
+# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.commit_view_request import CommitViewRequest + +class TestCommitViewRequest(unittest.TestCase): + """CommitViewRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CommitViewRequest: + """Test CommitViewRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CommitViewRequest` + """ + model = CommitViewRequest() + if include_optional: + return CommitViewRequest( + identifier = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + requirements = [ + polaris.catalog.models.view_requirement.ViewRequirement( + type = '', ) + ], + updates = [ + null + ] + ) + else: + return CommitViewRequest( + updates = [ + null + ], + ) + """ + + def 
testCommitViewRequest(self): + """Test CommitViewRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_content_file.py b/regtests/client/python/test/test_content_file.py new file mode 100644 index 0000000000..72763c7866 --- /dev/null +++ b/regtests/client/python/test/test_content_file.py @@ -0,0 +1,83 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.content_file import ContentFile + +class TestContentFile(unittest.TestCase): + """ContentFile unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ContentFile: + """Test ContentFile + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ContentFile` + """ + model = ContentFile() + if include_optional: + return ContentFile( + content = '', + file_path = '', + file_format = 'avro', + spec_id = 56, + partition = [1,"bar"], + file_size_in_bytes = 56, + record_count = 56, + key_metadata = '78797A', + split_offsets = [ + 56 + ], + sort_order_id = 56 + ) + else: + return ContentFile( + content = '', + file_path = '', + file_format = 'avro', + spec_id = 56, + file_size_in_bytes = 56, + record_count = 56, + ) + """ + + def testContentFile(self): + """Test ContentFile""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_count_map.py b/regtests/client/python/test/test_count_map.py new file mode 100644 index 0000000000..db330f129b --- /dev/null +++ b/regtests/client/python/test/test_count_map.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.count_map import CountMap + +class TestCountMap(unittest.TestCase): + """CountMap unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CountMap: + """Test CountMap + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CountMap` + """ + model = CountMap() + if include_optional: + return CountMap( + keys = [ + 42 + ], + values = [ + 9223372036854775807 + ] + ) + else: + return CountMap( + ) + """ + + def testCountMap(self): + """Test CountMap""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_counter_result.py b/regtests/client/python/test/test_counter_result.py new file mode 100644 index 0000000000..1fa21fed14 --- /dev/null +++ b/regtests/client/python/test/test_counter_result.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.counter_result import CounterResult + +class TestCounterResult(unittest.TestCase): + """CounterResult unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CounterResult: + """Test CounterResult + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CounterResult` + """ + model = CounterResult() + if include_optional: + return CounterResult( + unit = '', + value = 56 + ) + else: + return CounterResult( + unit = '', + value = 56, + ) + """ + + def testCounterResult(self): + """Test CounterResult""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_catalog_request.py b/regtests/client/python/test/test_create_catalog_request.py new file mode 100644 index 0000000000..884711c98a --- /dev/null +++ 
b/regtests/client/python/test/test_create_catalog_request.py @@ -0,0 +1,91 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.create_catalog_request import CreateCatalogRequest + +class TestCreateCatalogRequest(unittest.TestCase): + """CreateCatalogRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreateCatalogRequest: + """Test CreateCatalogRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreateCatalogRequest` + """ + model = CreateCatalogRequest() + if include_optional: + return CreateCatalogRequest( + catalog = polaris.management.models.catalog.Catalog( + type = 'INTERNAL', + name = '', + read_only = True, + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, + storage_config_info = polaris.management.models.storage_config_info.StorageConfigInfo( + 
storage_type = 'S3', + allowed_locations = ['s3://bucketname/prefix/'], ), ) + ) + else: + return CreateCatalogRequest( + catalog = polaris.management.models.catalog.Catalog( + type = 'INTERNAL', + name = '', + read_only = True, + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, + storage_config_info = polaris.management.models.storage_config_info.StorageConfigInfo( + storage_type = 'S3', + allowed_locations = ['s3://bucketname/prefix/'], ), ), + ) + """ + + def testCreateCatalogRequest(self): + """Test CreateCatalogRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_catalog_role_request.py b/regtests/client/python/test/test_create_catalog_role_request.py new file mode 100644 index 0000000000..be4dc47522 --- /dev/null +++ b/regtests/client/python/test/test_create_catalog_role_request.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.create_catalog_role_request import CreateCatalogRoleRequest + +class TestCreateCatalogRoleRequest(unittest.TestCase): + """CreateCatalogRoleRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreateCatalogRoleRequest: + """Test CreateCatalogRoleRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreateCatalogRoleRequest` + """ + model = CreateCatalogRoleRequest() + if include_optional: + return CreateCatalogRoleRequest( + catalog_role = polaris.management.models.catalog_role.CatalogRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ) + else: + return CreateCatalogRoleRequest( + ) + """ + + def testCreateCatalogRoleRequest(self): + """Test CreateCatalogRoleRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_namespace_request.py b/regtests/client/python/test/test_create_namespace_request.py new file mode 100644 index 0000000000..1d7e3db006 --- /dev/null +++ b/regtests/client/python/test/test_create_namespace_request.py @@ -0,0 +1,68 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.create_namespace_request import CreateNamespaceRequest + +class TestCreateNamespaceRequest(unittest.TestCase): + """CreateNamespaceRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreateNamespaceRequest: + """Test CreateNamespaceRequest + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreateNamespaceRequest` + """ + model = CreateNamespaceRequest() + if include_optional: + return CreateNamespaceRequest( + namespace = ["accounting","tax"], + properties = {"owner":"Hank Bendickson"} + ) + else: + return CreateNamespaceRequest( + namespace = ["accounting","tax"], + ) + """ + + def testCreateNamespaceRequest(self): + """Test CreateNamespaceRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional =
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_namespace_response.py b/regtests/client/python/test/test_create_namespace_response.py new file mode 100644 index 0000000000..462da06423 --- /dev/null +++ b/regtests/client/python/test/test_create_namespace_response.py @@ -0,0 +1,68 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.create_namespace_response import CreateNamespaceResponse + +class TestCreateNamespaceResponse(unittest.TestCase): + """CreateNamespaceResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreateNamespaceResponse: + """Test CreateNamespaceResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreateNamespaceResponse` + """ + model = CreateNamespaceResponse() + if include_optional: + return CreateNamespaceResponse( + namespace = ["accounting","tax"], + properties = {"owner":"Ralph","created_at":"1452120468"} + ) + else: + return CreateNamespaceResponse( + namespace = ["accounting","tax"], + ) + """ + + def testCreateNamespaceResponse(self): + """Test CreateNamespaceResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_principal_request.py b/regtests/client/python/test/test_create_principal_request.py new file mode 100644 index 0000000000..6f06415452 --- /dev/null +++ b/regtests/client/python/test/test_create_principal_request.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.create_principal_request import CreatePrincipalRequest + +class TestCreatePrincipalRequest(unittest.TestCase): + """CreatePrincipalRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreatePrincipalRequest: + """Test CreatePrincipalRequest + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreatePrincipalRequest` + """ + model = CreatePrincipalRequest() + if include_optional: + return CreatePrincipalRequest( + principal = polaris.management.models.principal.Principal( + type = 'SERVICE', + name = '', + client_id = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ) + else: + return CreatePrincipalRequest( + ) + """ + + def testCreatePrincipalRequest(self): + """Test CreatePrincipalRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_principal_role_request.py b/regtests/client/python/test/test_create_principal_role_request.py new file mode 100644 index 0000000000..4f07d8b07c --- /dev/null +++ b/regtests/client/python/test/test_create_principal_role_request.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake
Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.create_principal_role_request import CreatePrincipalRoleRequest + +class TestCreatePrincipalRoleRequest(unittest.TestCase): + """CreatePrincipalRoleRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreatePrincipalRoleRequest: + """Test CreatePrincipalRoleRequest + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreatePrincipalRoleRequest` + """ + model = CreatePrincipalRoleRequest() + if include_optional: + return CreatePrincipalRoleRequest( + principal_role = polaris.management.models.principal_role.PrincipalRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ) + else: + return CreatePrincipalRoleRequest( + ) + """ + + def testCreatePrincipalRoleRequest(self): + """Test CreatePrincipalRoleRequest""" + # inst_req_only =
self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_table_request.py b/regtests/client/python/test/test_create_table_request.py new file mode 100644 index 0000000000..c7b2d5910d --- /dev/null +++ b/regtests/client/python/test/test_create_table_request.py @@ -0,0 +1,92 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.create_table_request import CreateTableRequest + +class TestCreateTableRequest(unittest.TestCase): + """CreateTableRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreateTableRequest: + """Test CreateTableRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreateTableRequest` + """ + model = CreateTableRequest() + if include_optional: + return CreateTableRequest( + name = '', + location = '', + var_schema = None, + partition_spec = polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ), + write_order = polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ), + stage_create = True, + properties = { + 'key' : '' + } + ) + else: + return CreateTableRequest( + name = '', + var_schema = None, + ) + """ + + def testCreateTableRequest(self): + """Test CreateTableRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_create_view_request.py b/regtests/client/python/test/test_create_view_request.py new file mode 100644 index 0000000000..aefd3d03ac --- /dev/null +++ b/regtests/client/python/test/test_create_view_request.py @@ -0,0 +1,100 @@ +# +# 
Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.create_view_request import CreateViewRequest + +class TestCreateViewRequest(unittest.TestCase): + """CreateViewRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> CreateViewRequest: + """Test CreateViewRequest + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `CreateViewRequest` + """ + model = CreateViewRequest() + if include_optional: + return CreateViewRequest( + name = '', + location = '', + var_schema = None, + view_version = polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + None + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ), + properties = { + 'key' : '' + } + ) + 
else: + return CreateViewRequest( + name = '', + var_schema = None, + view_version = polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + None + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ), + properties = { + 'key' : '' + }, + ) + """ + + def testCreateViewRequest(self): + """Test CreateViewRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_data_file.py b/regtests/client/python/test/test_data_file.py new file mode 100644 index 0000000000..3e6d633b89 --- /dev/null +++ b/regtests/client/python/test/test_data_file.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually.
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.data_file import DataFile + +class TestDataFile(unittest.TestCase): + """DataFile unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> DataFile: + """Test DataFile + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `DataFile` + """ + model = DataFile() + if include_optional: + return DataFile( + content = 'data', + column_sizes = {"keys":[1,2],"values":[100,200]}, + value_counts = {"keys":[1,2],"values":[100,200]}, + null_value_counts = {"keys":[1,2],"values":[100,200]}, + nan_value_counts = {"keys":[1,2],"values":[100,200]}, + lower_bounds = {"keys":[1,2],"values":[100,"test"]}, + upper_bounds = {"keys":[1,2],"values":[100,"test"]} + ) + else: + return DataFile( + content = 'data', + ) + """ + + def testDataFile(self): + """Test DataFile""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_equality_delete_file.py b/regtests/client/python/test/test_equality_delete_file.py new file mode 100644 index 0000000000..1f89bb9d40 --- /dev/null +++ b/regtests/client/python/test/test_equality_delete_file.py @@ -0,0 +1,70 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.equality_delete_file import EqualityDeleteFile + +class TestEqualityDeleteFile(unittest.TestCase): + """EqualityDeleteFile unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> EqualityDeleteFile: + """Test EqualityDeleteFile + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `EqualityDeleteFile` + """ + model = EqualityDeleteFile() + if include_optional: + return EqualityDeleteFile( + content = 'equality-deletes', + equality_ids = [ + 56 + ] + ) + else: + return EqualityDeleteFile( + content = 'equality-deletes', + ) + """ + + def testEqualityDeleteFile(self): + """Test EqualityDeleteFile""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_error_model.py b/regtests/client/python/test/test_error_model.py new file mode 100644 index 0000000000..0213e8dd73 --- /dev/null +++ b/regtests/client/python/test/test_error_model.py @@ -0,0 +1,74 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.error_model import ErrorModel + +class TestErrorModel(unittest.TestCase): + """ErrorModel unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ErrorModel: + """Test ErrorModel + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ErrorModel` + """ + model = ErrorModel() + if include_optional: + return ErrorModel( + message = '', + type = 'NoSuchNamespaceException', + code = 404, + stack = [ + '' + ] + ) + else: + return ErrorModel( + message = '', + type = 'NoSuchNamespaceException', + code = 404, + ) + """ + + def testErrorModel(self): + """Test ErrorModel""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_expression.py b/regtests/client/python/test/test_expression.py new file mode 100644 index 0000000000..a7a39b32cb ---
/dev/null +++ b/regtests/client/python/test/test_expression.py @@ -0,0 +1,83 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.expression import Expression + +class TestExpression(unittest.TestCase): + """Expression unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Expression: + """Test Expression + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Expression` + """ + model = Expression() + if include_optional: + return Expression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + left = None, + right = None, + child = None, + term = None, + values = [ + None + ], + value = polaris.catalog.models.value.value() + ) + else: + return Expression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + left = None, + right = None, + child = None, + term = None, + values = [ + None + ], + value = polaris.catalog.models.value.value(), + ) + """ + + def testExpression(self): + """Test Expression""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_external_catalog.py b/regtests/client/python/test/test_external_catalog.py new file mode 100644 index 0000000000..be381c7b6a --- /dev/null +++ b/regtests/client/python/test/test_external_catalog.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.external_catalog import ExternalCatalog + +class TestExternalCatalog(unittest.TestCase): + """ExternalCatalog unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ExternalCatalog: + """Test ExternalCatalog + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ExternalCatalog` + """ + model = ExternalCatalog() + if include_optional: + return ExternalCatalog( + remote_url = '' + ) + else: + return ExternalCatalog( + remote_url = '', + ) + """ + + def testExternalCatalog(self): + """Test ExternalCatalog""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_file_format.py b/regtests/client/python/test/test_file_format.py new file mode 100644 index 0000000000..1f3f906778 --- /dev/null +++ b/regtests/client/python/test/test_file_format.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc.
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.file_format import FileFormat + +class TestFileFormat(unittest.TestCase): + """FileFormat unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testFileFormat(self): + """Test FileFormat""" + # inst = FileFormat() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_file_storage_config_info.py b/regtests/client/python/test/test_file_storage_config_info.py new file mode 100644 index 0000000000..a113b5f3df --- /dev/null +++ b/regtests/client/python/test/test_file_storage_config_info.py @@ -0,0 +1,65 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.file_storage_config_info import FileStorageConfigInfo + +class TestFileStorageConfigInfo(unittest.TestCase): + """FileStorageConfigInfo unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> FileStorageConfigInfo: + """Test FileStorageConfigInfo + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `FileStorageConfigInfo` + """ + model = FileStorageConfigInfo() + if include_optional: + return FileStorageConfigInfo( + ) + else: + return FileStorageConfigInfo( + ) + """ + + def testFileStorageConfigInfo(self): + """Test FileStorageConfigInfo""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_gcp_storage_config_info.py b/regtests/client/python/test/test_gcp_storage_config_info.py new file mode 100644 index 0000000000..ec3510c788 --- /dev/null +++ 
b/regtests/client/python/test/test_gcp_storage_config_info.py @@ -0,0 +1,66 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.gcp_storage_config_info import GcpStorageConfigInfo + +class TestGcpStorageConfigInfo(unittest.TestCase): + """GcpStorageConfigInfo unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> GcpStorageConfigInfo: + """Test GcpStorageConfigInfo + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `GcpStorageConfigInfo` + """ + model = GcpStorageConfigInfo() + if include_optional: + return GcpStorageConfigInfo( + gcs_service_account = '' + ) + else: + return GcpStorageConfigInfo( + ) + """ + + def testGcpStorageConfigInfo(self): + """Test GcpStorageConfigInfo""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ ==
'__main__': + unittest.main() diff --git a/regtests/client/python/test/test_get_namespace_response.py b/regtests/client/python/test/test_get_namespace_response.py new file mode 100644 index 0000000000..4a8ca64f86 --- /dev/null +++ b/regtests/client/python/test/test_get_namespace_response.py @@ -0,0 +1,68 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.get_namespace_response import GetNamespaceResponse + +class TestGetNamespaceResponse(unittest.TestCase): + """GetNamespaceResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> GetNamespaceResponse: + """Test GetNamespaceResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `GetNamespaceResponse` + """ + model = GetNamespaceResponse() + if include_optional: + return GetNamespaceResponse( + namespace = ["accounting","tax"], + properties = {"owner":"Ralph","transient_lastDdlTime":"1452120468"} + ) + else: + return GetNamespaceResponse( + namespace = ["accounting","tax"], + ) + """ + + def testGetNamespaceResponse(self): + """Test GetNamespaceResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_grant_catalog_role_request.py b/regtests/client/python/test/test_grant_catalog_role_request.py new file mode 100644 index 0000000000..d345cba824 --- /dev/null +++ b/regtests/client/python/test/test_grant_catalog_role_request.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.grant_catalog_role_request import GrantCatalogRoleRequest + +class TestGrantCatalogRoleRequest(unittest.TestCase): + """GrantCatalogRoleRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> GrantCatalogRoleRequest: + """Test GrantCatalogRoleRequest + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `GrantCatalogRoleRequest` + """ + model = GrantCatalogRoleRequest() + if include_optional: + return GrantCatalogRoleRequest( + catalog_role = polaris.management.models.catalog_role.CatalogRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ) + else: + return GrantCatalogRoleRequest( + ) + """ + + def testGrantCatalogRoleRequest(self): + """Test GrantCatalogRoleRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_grant_principal_role_request.py b/regtests/client/python/test/test_grant_principal_role_request.py new file mode 100644 index 0000000000..1340bf3ddc --- /dev/null +++ b/regtests/client/python/test/test_grant_principal_role_request.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc.
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.grant_principal_role_request import GrantPrincipalRoleRequest + +class TestGrantPrincipalRoleRequest(unittest.TestCase): + """GrantPrincipalRoleRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> GrantPrincipalRoleRequest: + """Test GrantPrincipalRoleRequest + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `GrantPrincipalRoleRequest` + """ + model = GrantPrincipalRoleRequest() + if include_optional: + return GrantPrincipalRoleRequest( + principal_role = polaris.management.models.principal_role.PrincipalRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ) + else: + return GrantPrincipalRoleRequest( + ) + """ + + def testGrantPrincipalRoleRequest(self): + """Test GrantPrincipalRoleRequest""" + # inst_req_only =
self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_grant_resource.py b/regtests/client/python/test/test_grant_resource.py new file mode 100644 index 0000000000..c779c99f02 --- /dev/null +++ b/regtests/client/python/test/test_grant_resource.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.grant_resource import GrantResource + +class TestGrantResource(unittest.TestCase): + """GrantResource unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> GrantResource: + """Test GrantResource + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `GrantResource` + """ + model = GrantResource() + if include_optional: + return GrantResource( + type = '' + ) + else: + return GrantResource( + type = '', + ) + """ + + def testGrantResource(self): + """Test GrantResource""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_grant_resources.py b/regtests/client/python/test/test_grant_resources.py new file mode 100644 index 0000000000..9b81e27e98 --- /dev/null +++ b/regtests/client/python/test/test_grant_resources.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.grant_resources import GrantResources + +class TestGrantResources(unittest.TestCase): + """GrantResources unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> GrantResources: + """Test GrantResources + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `GrantResources` + """ + model = GrantResources() + if include_optional: + return GrantResources( + grants = [ + polaris.management.models.grant_resource.GrantResource( + type = '', ) + ] + ) + else: + return GrantResources( + grants = [ + polaris.management.models.grant_resource.GrantResource( + type = '', ) + ], + ) + """ + + def testGrantResources(self): + """Test GrantResources""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_iceberg_catalog_api.py b/regtests/client/python/test/test_iceberg_catalog_api.py new file mode 100644 index 0000000000..4e3238644c --- /dev/null +++ b/regtests/client/python/test/test_iceberg_catalog_api.py @@ -0,0 +1,214 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.api.iceberg_catalog_api import IcebergCatalogAPI + + +class TestIcebergCatalogAPI(unittest.TestCase): + """IcebergCatalogAPI unit test stubs""" + + def setUp(self) -> None: + self.api = IcebergCatalogAPI() + + def tearDown(self) -> None: + pass + + def test_commit_transaction(self) -> None: + """Test case for commit_transaction + + Commit updates to multiple tables in an atomic operation + """ + pass + + def test_create_namespace(self) -> None: + """Test case for create_namespace + + Create a namespace + """ + pass + + def test_create_table(self) -> None: + """Test case for create_table + + Create a table in the given namespace + """ + pass + + def test_create_view(self) -> None: + """Test case for create_view + + Create a view in the given namespace + """ + pass + + def test_drop_namespace(self) -> None: + """Test case for drop_namespace + + Drop a namespace from the catalog. Namespace must be empty. 
+ """ + pass + + def test_drop_table(self) -> None: + """Test case for drop_table + + Drop a table from the catalog + """ + pass + + def test_drop_view(self) -> None: + """Test case for drop_view + + Drop a view from the catalog + """ + pass + + def test_list_namespaces(self) -> None: + """Test case for list_namespaces + + List namespaces, optionally providing a parent namespace to list underneath + """ + pass + + def test_list_tables(self) -> None: + """Test case for list_tables + + List all table identifiers underneath a given namespace + """ + pass + + def test_list_views(self) -> None: + """Test case for list_views + + List all view identifiers underneath a given namespace + """ + pass + + def test_load_namespace_metadata(self) -> None: + """Test case for load_namespace_metadata + + Load the metadata properties for a namespace + """ + pass + + def test_load_table(self) -> None: + """Test case for load_table + + Load a table from the catalog + """ + pass + + def test_load_view(self) -> None: + """Test case for load_view + + Load a view from the catalog + """ + pass + + def test_namespace_exists(self) -> None: + """Test case for namespace_exists + + Check if a namespace exists + """ + pass + + def test_register_table(self) -> None: + """Test case for register_table + + Register a table in the given namespace using given metadata file location + """ + pass + + def test_rename_table(self) -> None: + """Test case for rename_table + + Rename a table from its current name to a new name + """ + pass + + def test_rename_view(self) -> None: + """Test case for rename_view + + Rename a view from its current name to a new name + """ + pass + + def test_replace_view(self) -> None: + """Test case for replace_view + + Replace a view + """ + pass + + def test_report_metrics(self) -> None: + """Test case for report_metrics + + Send a metrics report to this endpoint to be processed by the backend + """ + pass + + def test_send_notification(self) -> None: + """Test case for 
send_notification + + Sends a notification to the table + """ + pass + + def test_table_exists(self) -> None: + """Test case for table_exists + + Check if a table exists + """ + pass + + def test_update_properties(self) -> None: + """Test case for update_properties + + Set or remove properties on a namespace + """ + pass + + def test_update_table(self) -> None: + """Test case for update_table + + Commit updates to a table + """ + pass + + def test_view_exists(self) -> None: + """Test case for view_exists + + Check if a view exists + """ + pass + + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_iceberg_configuration_api.py b/regtests/client/python/test/test_iceberg_configuration_api.py new file mode 100644 index 0000000000..4ef1757b66 --- /dev/null +++ b/regtests/client/python/test/test_iceberg_configuration_api.py @@ -0,0 +1,53 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.api.iceberg_configuration_api import IcebergConfigurationAPI + + +class TestIcebergConfigurationAPI(unittest.TestCase): + """IcebergConfigurationAPI unit test stubs""" + + def setUp(self) -> None: + self.api = IcebergConfigurationAPI() + + def tearDown(self) -> None: + pass + + def test_get_config(self) -> None: + """Test case for get_config + + List all catalog configuration settings + """ + pass + + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_iceberg_error_response.py b/regtests/client/python/test/test_iceberg_error_response.py new file mode 100644 index 0000000000..a2c0f54b38 --- /dev/null +++ b/regtests/client/python/test/test_iceberg_error_response.py @@ -0,0 +1,79 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.iceberg_error_response import IcebergErrorResponse + +class TestIcebergErrorResponse(unittest.TestCase): + """IcebergErrorResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> IcebergErrorResponse: + """Test IcebergErrorResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `IcebergErrorResponse` + """ + model = IcebergErrorResponse() + if include_optional: + return IcebergErrorResponse( + error = polaris.catalog.models.error_model.ErrorModel( + message = '', + type = 'NoSuchNamespaceException', + code = 404, + stack = [ + '' + ], ) + ) + else: + return IcebergErrorResponse( + error = polaris.catalog.models.error_model.ErrorModel( + message = '', + type = 'NoSuchNamespaceException', + code = 404, + stack = [ + '' + ], ), + ) + """ + + def testIcebergErrorResponse(self): + """Test IcebergErrorResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_iceberg_o_auth2_api.py b/regtests/client/python/test/test_iceberg_o_auth2_api.py new file mode 100644 index 0000000000..d90869611e --- /dev/null +++ b/regtests/client/python/test/test_iceberg_o_auth2_api.py @@ -0,0 +1,53 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.api.iceberg_o_auth2_api import IcebergOAuth2API + + +class TestIcebergOAuth2API(unittest.TestCase): + """IcebergOAuth2API unit test stubs""" + + def setUp(self) -> None: + self.api = IcebergOAuth2API() + + def tearDown(self) -> None: + pass + + def test_get_token(self) -> None: + """Test case for get_token + + Get a token using an OAuth2 flow + """ + pass + + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_list_namespaces_response.py b/regtests/client/python/test/test_list_namespaces_response.py new file mode 100644 index 0000000000..79ce0a09f5 --- /dev/null +++ b/regtests/client/python/test/test_list_namespaces_response.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.list_namespaces_response import ListNamespacesResponse + +class TestListNamespacesResponse(unittest.TestCase): + """ListNamespacesResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ListNamespacesResponse: + """Test ListNamespacesResponse + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ListNamespacesResponse` + """ + model = ListNamespacesResponse() + if include_optional: + return ListNamespacesResponse( + next_page_token = '', + namespaces = [ + ["accounting","tax"] + ] + ) + else: + return ListNamespacesResponse( + ) + """ + + def testListNamespacesResponse(self): + """Test ListNamespacesResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_list_tables_response.py
b/regtests/client/python/test/test_list_tables_response.py new file mode 100644 index 0000000000..8a3dc64f32 --- /dev/null +++ b/regtests/client/python/test/test_list_tables_response.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.list_tables_response import ListTablesResponse + +class TestListTablesResponse(unittest.TestCase): + """ListTablesResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ListTablesResponse: + """Test ListTablesResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ListTablesResponse` + """ + model = ListTablesResponse() + if include_optional: + return ListTablesResponse( + next_page_token = '', + identifiers = [ + polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ) + ] + ) + else: + return ListTablesResponse( + ) + """ + + def testListTablesResponse(self): + """Test ListTablesResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_list_type.py b/regtests/client/python/test/test_list_type.py new file mode 100644 index 0000000000..406ac62f04 --- /dev/null +++ b/regtests/client/python/test/test_list_type.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.list_type import ListType + +class TestListType(unittest.TestCase): + """ListType unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ListType: + """Test ListType + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ListType` + """ + model = ListType() + if include_optional: + return ListType( + type = 'list', + element_id = 56, + element = None, + element_required = True + ) + else: + return ListType( + type = 'list', + element_id = 56, + element = None, + element_required = True, + ) + """ + + def testListType(self): + """Test ListType""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_literal_expression.py b/regtests/client/python/test/test_literal_expression.py new file mode 100644 index 0000000000..2a524a9406 --- /dev/null +++ b/regtests/client/python/test/test_literal_expression.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.literal_expression import LiteralExpression + +class TestLiteralExpression(unittest.TestCase): + """LiteralExpression unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> LiteralExpression: + """Test LiteralExpression + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `LiteralExpression` + """ + model = LiteralExpression() + if include_optional: + return LiteralExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + term = None, + value = polaris.catalog.models.value.value() + ) + else: + return LiteralExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + term = None, + value = polaris.catalog.models.value.value(), + ) + """ + + def testLiteralExpression(self): + """Test
LiteralExpression""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_load_table_result.py b/regtests/client/python/test/test_load_table_result.py new file mode 100644 index 0000000000..e05f20a7b2 --- /dev/null +++ b/regtests/client/python/test/test_load_table_result.py @@ -0,0 +1,253 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.load_table_result import LoadTableResult + +class TestLoadTableResult(unittest.TestCase): + """LoadTableResult unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> LoadTableResult: + """Test LoadTableResult + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `LoadTableResult` + """ + model = LoadTableResult() + if include_optional: + return LoadTableResult( + metadata_location = '', + metadata = polaris.catalog.models.table_metadata.TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, 
+ max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], ) + ], ) + ], + partition_statistics_files = [ + polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ], ), + config = { + 'key' : '' + } + ) + else: + return LoadTableResult( + metadata = polaris.catalog.models.table_metadata.TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + 
polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], ) + ], ) + ], + partition_statistics_files = [ + polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ], ), + ) + """ + + def testLoadTableResult(self): + """Test LoadTableResult""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_load_view_result.py b/regtests/client/python/test/test_load_view_result.py new file mode 100644 index 0000000000..73e446917b --- /dev/null +++ b/regtests/client/python/test/test_load_view_result.py @@ -0,0 +1,130 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.load_view_result import LoadViewResult + +class TestLoadViewResult(unittest.TestCase): + """LoadViewResult unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> LoadViewResult: + """Test LoadViewResult + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `LoadViewResult` + """ + model = LoadViewResult() + if include_optional: + return LoadViewResult( + metadata_location = '', + metadata = polaris.catalog.models.view_metadata.ViewMetadata( + view_uuid = '', + format_version = 1, + location = '', + current_version_id = 56, + versions = [ + polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ) + ], + version_log = [ + polaris.catalog.models.view_history_entry.ViewHistoryEntry( + version_id = 56, + timestamp_ms = 56, ) + 
], + schemas = [ + null + ], + properties = { + 'key' : '' + }, ), + config = { + 'key' : '' + } + ) + else: + return LoadViewResult( + metadata_location = '', + metadata = polaris.catalog.models.view_metadata.ViewMetadata( + view_uuid = '', + format_version = 1, + location = '', + current_version_id = 56, + versions = [ + polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ) + ], + version_log = [ + polaris.catalog.models.view_history_entry.ViewHistoryEntry( + version_id = 56, + timestamp_ms = 56, ) + ], + schemas = [ + null + ], + properties = { + 'key' : '' + }, ), + ) + """ + + def testLoadViewResult(self): + """Test LoadViewResult""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_map_type.py b/regtests/client/python/test/test_map_type.py new file mode 100644 index 0000000000..f9c795a150 --- /dev/null +++ b/regtests/client/python/test/test_map_type.py @@ -0,0 +1,77 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.map_type import MapType + +class TestMapType(unittest.TestCase): + """MapType unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> MapType: + """Test MapType + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `MapType` + """ + model = MapType() + if include_optional: + return MapType( + type = 'map', + key_id = 56, + key = None, + value_id = 56, + value = None, + value_required = True + ) + else: + return MapType( + type = 'map', + key_id = 56, + key = None, + value_id = 56, + value = None, + value_required = True, + ) + """ + + def testMapType(self): + """Test MapType""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_metadata_log_inner.py b/regtests/client/python/test/test_metadata_log_inner.py new file mode 100644 index 0000000000..603ec1b406 --- /dev/null +++ b/regtests/client/python/test/test_metadata_log_inner.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.metadata_log_inner import MetadataLogInner + +class TestMetadataLogInner(unittest.TestCase): + """MetadataLogInner unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> MetadataLogInner: + """Test MetadataLogInner + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `MetadataLogInner` + """ + model = MetadataLogInner() + if include_optional: + return MetadataLogInner( + metadata_file = '', + timestamp_ms = 56 + ) + else: + return MetadataLogInner( + metadata_file = '', + timestamp_ms = 56, + ) + """ + + def testMetadataLogInner(self): + """Test MetadataLogInner""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_metric_result.py b/regtests/client/python/test/test_metric_result.py new file mode 100644 index 0000000000..5e1dbe6443 
--- /dev/null +++ b/regtests/client/python/test/test_metric_result.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.metric_result import MetricResult + +class TestMetricResult(unittest.TestCase): + """MetricResult unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> MetricResult: + """Test MetricResult + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `MetricResult` + """ + model = MetricResult() + if include_optional: + return MetricResult( + unit = '', + value = 56, + time_unit = '', + count = 56, + total_duration = 56 + ) + else: + return MetricResult( + unit = '', + value = 56, + time_unit = '', + count = 56, + total_duration = 56, + ) + """ + + def testMetricResult(self): + """Test MetricResult""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_model_schema.py b/regtests/client/python/test/test_model_schema.py new file mode 100644 index 0000000000..334a902ea9 --- /dev/null +++ b/regtests/client/python/test/test_model_schema.py @@ -0,0 +1,87 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.model_schema import ModelSchema + +class TestModelSchema(unittest.TestCase): + """ModelSchema unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ModelSchema: + """Test ModelSchema + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ModelSchema` + """ + model = ModelSchema() + if include_optional: + return ModelSchema( + type = 'struct', + fields = [ + polaris.catalog.models.struct_field.StructField( + id = 56, + name = '', + type = null, + required = True, + doc = '', ) + ], + schema_id = 56, + identifier_field_ids = [ + 56 + ] + ) + else: + return ModelSchema( + type = 'struct', + fields = [ + polaris.catalog.models.struct_field.StructField( + id = 56, + name = '', + type = null, + required = True, + doc = '', ) + ], + ) + """ + + def testModelSchema(self): + """Test ModelSchema""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_namespace_grant.py b/regtests/client/python/test/test_namespace_grant.py new file mode 100644 index 0000000000..8bb3d317f9 --- /dev/null +++ b/regtests/client/python/test/test_namespace_grant.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.namespace_grant import NamespaceGrant + +class TestNamespaceGrant(unittest.TestCase): + """NamespaceGrant unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> NamespaceGrant: + """Test NamespaceGrant + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `NamespaceGrant` + """ + model = NamespaceGrant() + if include_optional: + return NamespaceGrant( + namespace = [ + '' + ], + privilege = 'CATALOG_MANAGE_ACCESS' + ) + else: + return NamespaceGrant( + namespace = [ + '' + ], + privilege = 'CATALOG_MANAGE_ACCESS', + ) + """ + + def testNamespaceGrant(self): + """Test NamespaceGrant""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_namespace_privilege.py 
b/regtests/client/python/test/test_namespace_privilege.py new file mode 100644 index 0000000000..256b1b5326 --- /dev/null +++ b/regtests/client/python/test/test_namespace_privilege.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.namespace_privilege import NamespacePrivilege + +class TestNamespacePrivilege(unittest.TestCase): + """NamespacePrivilege unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testNamespacePrivilege(self): + """Test NamespacePrivilege""" + # inst = NamespacePrivilege() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_not_expression.py b/regtests/client/python/test/test_not_expression.py new file mode 100644 index 0000000000..deef692bdb --- /dev/null +++ b/regtests/client/python/test/test_not_expression.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.not_expression import NotExpression + +class TestNotExpression(unittest.TestCase): + """NotExpression unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> NotExpression: + """Test NotExpression + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `NotExpression` + """ + model = NotExpression() + if include_optional: + return NotExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + child = None + ) + else: + return NotExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + child = None, + ) + """ + + def testNotExpression(self): + """Test NotExpression""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = 
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_notification_request.py b/regtests/client/python/test/test_notification_request.py new file mode 100644 index 0000000000..f77601b3cf --- /dev/null +++ b/regtests/client/python/test/test_notification_request.py @@ -0,0 +1,164 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.notification_request import NotificationRequest + +class TestNotificationRequest(unittest.TestCase): + """NotificationRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> NotificationRequest: + """Test NotificationRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `NotificationRequest` + """ + model = NotificationRequest() + if include_optional: + return NotificationRequest( + notification_type = '', + payload = polaris.catalog.models.table_update_notification.TableUpdateNotification( + table_name = '', + timestamp = 56, + table_uuid = '', + metadata_location = '', + metadata = polaris.catalog.models.table_metadata.TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = 
'', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], ) + ], ) + ], + partition_statistics_files = [ + polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ], ), ) + ) + else: + return NotificationRequest( + notification_type = '', + ) + """ + + def testNotificationRequest(self): + """Test NotificationRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_notification_type.py b/regtests/client/python/test/test_notification_type.py new file mode 100644 index 0000000000..a2323c26f9 --- /dev/null +++ b/regtests/client/python/test/test_notification_type.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.notification_type import NotificationType + +class TestNotificationType(unittest.TestCase): + """NotificationType unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testNotificationType(self): + """Test NotificationType""" + # inst = NotificationType() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_null_order.py b/regtests/client/python/test/test_null_order.py new file mode 100644 index 0000000000..d354a8cbd9 --- /dev/null +++ b/regtests/client/python/test/test_null_order.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.null_order import NullOrder + +class TestNullOrder(unittest.TestCase): + """NullOrder unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testNullOrder(self): + """Test NullOrder""" + # inst = NullOrder() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_o_auth_error.py b/regtests/client/python/test/test_o_auth_error.py new file mode 100644 index 0000000000..b7cadb98f9 --- /dev/null +++ b/regtests/client/python/test/test_o_auth_error.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.o_auth_error import OAuthError + +class TestOAuthError(unittest.TestCase): + """OAuthError unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> OAuthError: + """Test OAuthError + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `OAuthError` + """ + model = OAuthError() + if include_optional: + return OAuthError( + error = 'invalid_request', + error_description = '', + error_uri = '' + ) + else: + return OAuthError( + error = 'invalid_request', + ) + """ + + def testOAuthError(self): + """Test OAuthError""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_o_auth_token_response.py b/regtests/client/python/test/test_o_auth_token_response.py new file mode 100644 index 0000000000..ffb250c0e7 --- /dev/null +++ b/regtests/client/python/test/test_o_auth_token_response.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.o_auth_token_response import OAuthTokenResponse + +class TestOAuthTokenResponse(unittest.TestCase): + """OAuthTokenResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> OAuthTokenResponse: + """Test OAuthTokenResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `OAuthTokenResponse` + """ + model = OAuthTokenResponse() + if include_optional: + return OAuthTokenResponse( + access_token = '', + token_type = 'bearer', + expires_in = 56, + issued_token_type = 'urn:ietf:params:oauth:token-type:access_token', + refresh_token = '', + scope = '' + ) + else: + return OAuthTokenResponse( + access_token = '', + token_type = 'bearer', + ) + """ + + def testOAuthTokenResponse(self): + """Test OAuthTokenResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_partition_field.py b/regtests/client/python/test/test_partition_field.py new file mode 100644 index 0000000000..c4f0bf98f4 --- /dev/null +++ b/regtests/client/python/test/test_partition_field.py @@ -0,0 +1,72 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.partition_field import PartitionField + +class TestPartitionField(unittest.TestCase): + """PartitionField unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PartitionField: + """Test PartitionField + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PartitionField` + """ + model = PartitionField() + if include_optional: + return PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]' + ) + else: + return PartitionField( + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + ) + """ + + def testPartitionField(self): + """Test PartitionField""" + # inst_req_only = self.make_instance(include_optional=False) + # 
inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_partition_spec.py b/regtests/client/python/test/test_partition_spec.py new file mode 100644 index 0000000000..7d3871d58c --- /dev/null +++ b/regtests/client/python/test/test_partition_spec.py @@ -0,0 +1,80 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.partition_spec import PartitionSpec + +class TestPartitionSpec(unittest.TestCase): + """PartitionSpec unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PartitionSpec: + """Test PartitionSpec + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PartitionSpec` + """ + model = PartitionSpec() + if include_optional: + return PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ] + ) + else: + return PartitionSpec( + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], + ) + """ + + def testPartitionSpec(self): + """Test PartitionSpec""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_partition_statistics_file.py b/regtests/client/python/test/test_partition_statistics_file.py new file mode 100644 index 0000000000..1f49af4e30 --- /dev/null +++ b/regtests/client/python/test/test_partition_statistics_file.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.partition_statistics_file import PartitionStatisticsFile + +class TestPartitionStatisticsFile(unittest.TestCase): + """PartitionStatisticsFile unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PartitionStatisticsFile: + """Test PartitionStatisticsFile + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PartitionStatisticsFile` + """ + model = PartitionStatisticsFile() + if include_optional: + return PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56 + ) + else: + return PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + ) + """ + + def testPartitionStatisticsFile(self): + """Test PartitionStatisticsFile""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git 
a/regtests/client/python/test/test_polaris_catalog.py b/regtests/client/python/test/test_polaris_catalog.py new file mode 100644 index 0000000000..24eacdbf6b --- /dev/null +++ b/regtests/client/python/test/test_polaris_catalog.py @@ -0,0 +1,65 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.polaris_catalog import PolarisCatalog + +class TestPolarisCatalog(unittest.TestCase): + """PolarisCatalog unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PolarisCatalog: + """Test PolarisCatalog + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PolarisCatalog` + """ + model = PolarisCatalog() + if include_optional: + return PolarisCatalog( + ) + else: + return PolarisCatalog( + ) + """ + + def testPolarisCatalog(self): + """Test PolarisCatalog""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_polaris_default_api.py b/regtests/client/python/test/test_polaris_default_api.py new file mode 100644 index 0000000000..bf2dcd4579 --- /dev/null +++ b/regtests/client/python/test/test_polaris_default_api.py @@ -0,0 +1,238 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.api.polaris_default_api import PolarisDefaultApi + + +class TestPolarisDefaultApi(unittest.TestCase): + """PolarisDefaultApi unit test stubs""" + + def setUp(self) -> None: + self.api = PolarisDefaultApi() + + def tearDown(self) -> None: + pass + + def test_add_grant_to_catalog_role(self) -> None: + """Test case for add_grant_to_catalog_role + + """ + pass + + def test_assign_catalog_role_to_principal_role(self) -> None: + """Test case for assign_catalog_role_to_principal_role + + """ + pass + + def test_assign_principal_role(self) -> None: + """Test case for assign_principal_role + + """ + pass + + def test_create_catalog(self) -> None: + """Test case for create_catalog + + """ + pass + + def test_create_catalog_role(self) -> None: + """Test case for create_catalog_role + + """ + pass + + def test_create_principal(self) -> None: + """Test case for create_principal + + """ + pass + + def test_create_principal_role(self) -> None: + """Test case for create_principal_role + + """ + pass + + def test_delete_catalog(self) -> None: + """Test case for delete_catalog + + """ + pass + + def test_delete_catalog_role(self) -> None: + """Test case for delete_catalog_role + + """ + pass + + def test_delete_principal(self) -> None: + """Test case for delete_principal + + """ + pass + + def test_delete_principal_role(self) -> None: + """Test case for delete_principal_role + + """ + pass + + def test_get_catalog(self) -> None: + """Test case for get_catalog + + """ + pass + + def test_get_catalog_role(self) -> None: + """Test case for get_catalog_role + + """ + pass + + def test_get_principal(self) -> None: 
+ """Test case for get_principal + + """ + pass + + def test_get_principal_role(self) -> None: + """Test case for get_principal_role + + """ + pass + + def test_list_assignee_principal_roles_for_catalog_role(self) -> None: + """Test case for list_assignee_principal_roles_for_catalog_role + + """ + pass + + def test_list_assignee_principals_for_principal_role(self) -> None: + """Test case for list_assignee_principals_for_principal_role + + """ + pass + + def test_list_catalog_roles(self) -> None: + """Test case for list_catalog_roles + + """ + pass + + def test_list_catalog_roles_for_principal_role(self) -> None: + """Test case for list_catalog_roles_for_principal_role + + """ + pass + + def test_list_catalogs(self) -> None: + """Test case for list_catalogs + + """ + pass + + def test_list_grants_for_catalog_role(self) -> None: + """Test case for list_grants_for_catalog_role + + """ + pass + + def test_list_principal_roles(self) -> None: + """Test case for list_principal_roles + + """ + pass + + def test_list_principal_roles_assigned(self) -> None: + """Test case for list_principal_roles_assigned + + """ + pass + + def test_list_principals(self) -> None: + """Test case for list_principals + + """ + pass + + def test_revoke_catalog_role_from_principal_role(self) -> None: + """Test case for revoke_catalog_role_from_principal_role + + """ + pass + + def test_revoke_grant_from_catalog_role(self) -> None: + """Test case for revoke_grant_from_catalog_role + + """ + pass + + def test_revoke_principal_role(self) -> None: + """Test case for revoke_principal_role + + """ + pass + + def test_rotate_credentials(self) -> None: + """Test case for rotate_credentials + + """ + pass + + def test_update_catalog(self) -> None: + """Test case for update_catalog + + """ + pass + + def test_update_catalog_role(self) -> None: + """Test case for update_catalog_role + + """ + pass + + def test_update_principal(self) -> None: + """Test case for update_principal + + """ + pass + + def 
test_update_principal_role(self) -> None: + """Test case for update_principal_role + + """ + pass + + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_position_delete_file.py b/regtests/client/python/test/test_position_delete_file.py new file mode 100644 index 0000000000..f190a98ca6 --- /dev/null +++ b/regtests/client/python/test/test_position_delete_file.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.position_delete_file import PositionDeleteFile + +class TestPositionDeleteFile(unittest.TestCase): + """PositionDeleteFile unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PositionDeleteFile: + """Test PositionDeleteFile + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PositionDeleteFile` + """ + model = PositionDeleteFile() + if include_optional: + return PositionDeleteFile( + content = 'position-deletes' + ) + else: + return PositionDeleteFile( + content = 'position-deletes', + ) + """ + + def testPositionDeleteFile(self): + """Test PositionDeleteFile""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_primitive_type_value.py b/regtests/client/python/test/test_primitive_type_value.py new file mode 100644 index 0000000000..f3299a5f72 --- /dev/null +++ b/regtests/client/python/test/test_primitive_type_value.py @@ -0,0 +1,65 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.primitive_type_value import PrimitiveTypeValue + +class TestPrimitiveTypeValue(unittest.TestCase): + """PrimitiveTypeValue unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PrimitiveTypeValue: + """Test PrimitiveTypeValue + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PrimitiveTypeValue` + """ + model = PrimitiveTypeValue() + if include_optional: + return PrimitiveTypeValue( + ) + else: + return PrimitiveTypeValue( + ) + """ + + def testPrimitiveTypeValue(self): + """Test PrimitiveTypeValue""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_principal.py b/regtests/client/python/test/test_principal.py new file mode 100644 index 0000000000..4c26b6d133 --- /dev/null +++ b/regtests/client/python/test/test_principal.py @@ -0,0 +1,76 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.principal import Principal + +class TestPrincipal(unittest.TestCase): + """Principal unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Principal: + """Test Principal + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Principal` + """ + model = Principal() + if include_optional: + return Principal( + type = 'SERVICE', + name = '', + client_id = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56 + ) + else: + return Principal( + type = 'SERVICE', + name = '', + ) + """ + + def testPrincipal(self): + """Test Principal""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_principal_role.py b/regtests/client/python/test/test_principal_role.py new file mode 100644 index 0000000000..67a06d2945 --- /dev/null +++ 
b/regtests/client/python/test/test_principal_role.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.principal_role import PrincipalRole + +class TestPrincipalRole(unittest.TestCase): + """PrincipalRole unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PrincipalRole: + """Test PrincipalRole + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PrincipalRole` + """ + model = PrincipalRole() + if include_optional: + return PrincipalRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56 + ) + else: + return PrincipalRole( + name = '', + ) + """ + + def testPrincipalRole(self): + """Test PrincipalRole""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if 
__name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_principal_roles.py b/regtests/client/python/test/test_principal_roles.py new file mode 100644 index 0000000000..2ebf00d23c --- /dev/null +++ b/regtests/client/python/test/test_principal_roles.py @@ -0,0 +1,85 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.principal_roles import PrincipalRoles + +class TestPrincipalRoles(unittest.TestCase): + """PrincipalRoles unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PrincipalRoles: + """Test PrincipalRoles + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PrincipalRoles` + """ + model = PrincipalRoles() + if include_optional: + return PrincipalRoles( + roles = [ + polaris.management.models.principal_role.PrincipalRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ] + ) + else: + return PrincipalRoles( + roles = [ + polaris.management.models.principal_role.PrincipalRole( + name = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ], + ) + """ + + def testPrincipalRoles(self): + """Test PrincipalRoles""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_principal_with_credentials.py b/regtests/client/python/test/test_principal_with_credentials.py new file mode 100644 index 0000000000..ac4bfa40d5 --- /dev/null +++ b/regtests/client/python/test/test_principal_with_credentials.py @@ -0,0 +1,91 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.principal_with_credentials import PrincipalWithCredentials + +class TestPrincipalWithCredentials(unittest.TestCase): + """PrincipalWithCredentials unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PrincipalWithCredentials: + """Test PrincipalWithCredentials + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PrincipalWithCredentials` + """ + model = PrincipalWithCredentials() + if include_optional: + return PrincipalWithCredentials( + principal = polaris.management.models.principal.Principal( + type = 'SERVICE', + name = '', + client_id = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ), + credentials = polaris.management.models.principal_with_credentials_credentials.PrincipalWithCredentials_credentials( + client_id = '', + client_secret = '', ) + ) + else: + return PrincipalWithCredentials( + principal = polaris.management.models.principal.Principal( + type = 'SERVICE', + name = '', + client_id = 
'', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ), + credentials = polaris.management.models.principal_with_credentials_credentials.PrincipalWithCredentials_credentials( + client_id = '', + client_secret = '', ), + ) + """ + + def testPrincipalWithCredentials(self): + """Test PrincipalWithCredentials""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_principal_with_credentials_credentials.py b/regtests/client/python/test/test_principal_with_credentials_credentials.py new file mode 100644 index 0000000000..5e729f9810 --- /dev/null +++ b/regtests/client/python/test/test_principal_with_credentials_credentials.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.principal_with_credentials_credentials import PrincipalWithCredentialsCredentials + +class TestPrincipalWithCredentialsCredentials(unittest.TestCase): + """PrincipalWithCredentialsCredentials unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> PrincipalWithCredentialsCredentials: + """Test PrincipalWithCredentialsCredentials + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `PrincipalWithCredentialsCredentials` + """ + model = PrincipalWithCredentialsCredentials() + if include_optional: + return PrincipalWithCredentialsCredentials( + client_id = '', + client_secret = '' + ) + else: + return PrincipalWithCredentialsCredentials( + ) + """ + + def testPrincipalWithCredentialsCredentials(self): + """Test PrincipalWithCredentialsCredentials""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_principals.py b/regtests/client/python/test/test_principals.py new file mode 100644 index 0000000000..693d99e0fd --- /dev/null +++ b/regtests/client/python/test/test_principals.py @@ -0,0 +1,89 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.principals import Principals + +class TestPrincipals(unittest.TestCase): + """Principals unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Principals: + """Test Principals + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Principals` + """ + model = Principals() + if include_optional: + return Principals( + principals = [ + polaris.management.models.principal.Principal( + type = 'SERVICE', + name = '', + client_id = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ] + ) + else: + return Principals( + principals = [ + polaris.management.models.principal.Principal( + type = 'SERVICE', + name = '', + client_id = '', + properties = { + 'key' : '' + }, + create_timestamp = 56, + last_update_timestamp = 56, + entity_version = 56, ) + ], + ) + """ + + def testPrincipals(self): + """Test Principals""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_register_table_request.py b/regtests/client/python/test/test_register_table_request.py new file mode 100644 index 0000000000..00a96a6c51 --- /dev/null +++ 
b/regtests/client/python/test/test_register_table_request.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.register_table_request import RegisterTableRequest + +class TestRegisterTableRequest(unittest.TestCase): + """RegisterTableRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RegisterTableRequest: + """Test RegisterTableRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RegisterTableRequest` + """ + model = RegisterTableRequest() + if include_optional: + return RegisterTableRequest( + name = '', + metadata_location = '' + ) + else: + return RegisterTableRequest( + name = '', + metadata_location = '', + ) + """ + + def testRegisterTableRequest(self): + """Test RegisterTableRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_remove_partition_statistics_update.py b/regtests/client/python/test/test_remove_partition_statistics_update.py new file mode 100644 index 0000000000..40eed1c10f --- /dev/null +++ b/regtests/client/python/test/test_remove_partition_statistics_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.remove_partition_statistics_update import RemovePartitionStatisticsUpdate + +class TestRemovePartitionStatisticsUpdate(unittest.TestCase): + """RemovePartitionStatisticsUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RemovePartitionStatisticsUpdate: + """Test RemovePartitionStatisticsUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RemovePartitionStatisticsUpdate` + """ + model = RemovePartitionStatisticsUpdate() + if include_optional: + return RemovePartitionStatisticsUpdate( + action = 'remove-partition-statistics', + snapshot_id = 56 + ) + else: + return RemovePartitionStatisticsUpdate( + action = 'remove-partition-statistics', + snapshot_id = 56, + ) + """ + + def testRemovePartitionStatisticsUpdate(self): + """Test RemovePartitionStatisticsUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_remove_properties_update.py b/regtests/client/python/test/test_remove_properties_update.py new file mode 100644 index 0000000000..886c606839 --- /dev/null +++ b/regtests/client/python/test/test_remove_properties_update.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.remove_properties_update import RemovePropertiesUpdate + +class TestRemovePropertiesUpdate(unittest.TestCase): + """RemovePropertiesUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RemovePropertiesUpdate: + """Test RemovePropertiesUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RemovePropertiesUpdate` + """ + model = RemovePropertiesUpdate() + if include_optional: + return RemovePropertiesUpdate( + action = 'remove-properties', + removals = [ + '' + ] + ) + else: + return RemovePropertiesUpdate( + action = 'remove-properties', + removals = [ + '' + ], + ) + """ + + def testRemovePropertiesUpdate(self): + """Test RemovePropertiesUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = 
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_remove_snapshot_ref_update.py b/regtests/client/python/test/test_remove_snapshot_ref_update.py new file mode 100644 index 0000000000..75442c3208 --- /dev/null +++ b/regtests/client/python/test/test_remove_snapshot_ref_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.remove_snapshot_ref_update import RemoveSnapshotRefUpdate + +class TestRemoveSnapshotRefUpdate(unittest.TestCase): + """RemoveSnapshotRefUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RemoveSnapshotRefUpdate: + """Test RemoveSnapshotRefUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RemoveSnapshotRefUpdate` + """ + model = RemoveSnapshotRefUpdate() + if include_optional: + return RemoveSnapshotRefUpdate( + action = 'remove-snapshot-ref', + ref_name = '' + ) + else: + return RemoveSnapshotRefUpdate( + action = 'remove-snapshot-ref', + ref_name = '', + ) + """ + + def testRemoveSnapshotRefUpdate(self): + """Test RemoveSnapshotRefUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_remove_snapshots_update.py b/regtests/client/python/test/test_remove_snapshots_update.py new file mode 100644 index 0000000000..97460e2751 --- /dev/null +++ b/regtests/client/python/test/test_remove_snapshots_update.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.remove_snapshots_update import RemoveSnapshotsUpdate + +class TestRemoveSnapshotsUpdate(unittest.TestCase): + """RemoveSnapshotsUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RemoveSnapshotsUpdate: + """Test RemoveSnapshotsUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RemoveSnapshotsUpdate` + """ + model = RemoveSnapshotsUpdate() + if include_optional: + return RemoveSnapshotsUpdate( + action = 'remove-snapshots', + snapshot_ids = [ + 56 + ] + ) + else: + return RemoveSnapshotsUpdate( + action = 'remove-snapshots', + snapshot_ids = [ + 56 + ], + ) + """ + + def testRemoveSnapshotsUpdate(self): + """Test RemoveSnapshotsUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_remove_statistics_update.py b/regtests/client/python/test/test_remove_statistics_update.py new file mode 100644 index 0000000000..d47e09f7e9 --- /dev/null +++ b/regtests/client/python/test/test_remove_statistics_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.remove_statistics_update import RemoveStatisticsUpdate + +class TestRemoveStatisticsUpdate(unittest.TestCase): + """RemoveStatisticsUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RemoveStatisticsUpdate: + """Test RemoveStatisticsUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RemoveStatisticsUpdate` + """ + model = RemoveStatisticsUpdate() + if include_optional: + return RemoveStatisticsUpdate( + action = 'remove-statistics', + snapshot_id = 56 + ) + else: + return RemoveStatisticsUpdate( + action = 'remove-statistics', + snapshot_id = 56, + ) + """ + + def testRemoveStatisticsUpdate(self): + """Test RemoveStatisticsUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = 
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_rename_table_request.py b/regtests/client/python/test/test_rename_table_request.py new file mode 100644 index 0000000000..3059ffb8ff --- /dev/null +++ b/regtests/client/python/test/test_rename_table_request.py @@ -0,0 +1,77 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.rename_table_request import RenameTableRequest + +class TestRenameTableRequest(unittest.TestCase): + """RenameTableRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RenameTableRequest: + """Test RenameTableRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RenameTableRequest` + """ + model = RenameTableRequest() + if include_optional: + return RenameTableRequest( + source = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + destination = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ) + ) + else: + return RenameTableRequest( + source = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + destination = polaris.catalog.models.table_identifier.TableIdentifier( + namespace = ["accounting","tax"], + name = '', ), + ) + """ + + def testRenameTableRequest(self): + """Test RenameTableRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_report_metrics_request.py b/regtests/client/python/test/test_report_metrics_request.py new file mode 100644 index 0000000000..8bcf61a321 --- /dev/null +++ b/regtests/client/python/test/test_report_metrics_request.py @@ -0,0 +1,96 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.report_metrics_request import ReportMetricsRequest + +class TestReportMetricsRequest(unittest.TestCase): + """ReportMetricsRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ReportMetricsRequest: + """Test ReportMetricsRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ReportMetricsRequest` + """ + model = ReportMetricsRequest() + if include_optional: + return ReportMetricsRequest( + report_type = '', + table_name = '', + snapshot_id = 56, + filter = None, + schema_id = 56, + projected_field_ids = [ + 56 + ], + projected_field_names = [ + '' + ], + metrics = 
{"metrics":{"total-planning-duration":{"count":1,"time-unit":"nanoseconds","total-duration":2644235116},"result-data-files":{"unit":"count","value":1},"result-delete-files":{"unit":"count","value":0},"total-data-manifests":{"unit":"count","value":1},"total-delete-manifests":{"unit":"count","value":0},"scanned-data-manifests":{"unit":"count","value":1},"skipped-data-manifests":{"unit":"count","value":0},"total-file-size-bytes":{"unit":"bytes","value":10},"total-delete-file-size-bytes":{"unit":"bytes","value":0}}}, + metadata = { + 'key' : '' + }, + sequence_number = 56, + operation = '' + ) + else: + return ReportMetricsRequest( + report_type = '', + table_name = '', + snapshot_id = 56, + filter = None, + schema_id = 56, + projected_field_ids = [ + 56 + ], + projected_field_names = [ + '' + ], + metrics = {"metrics":{"total-planning-duration":{"count":1,"time-unit":"nanoseconds","total-duration":2644235116},"result-data-files":{"unit":"count","value":1},"result-delete-files":{"unit":"count","value":0},"total-data-manifests":{"unit":"count","value":1},"total-delete-manifests":{"unit":"count","value":0},"scanned-data-manifests":{"unit":"count","value":1},"skipped-data-manifests":{"unit":"count","value":0},"total-file-size-bytes":{"unit":"bytes","value":10},"total-delete-file-size-bytes":{"unit":"bytes","value":0}}}, + sequence_number = 56, + operation = '', + ) + """ + + def testReportMetricsRequest(self): + """Test ReportMetricsRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_revoke_grant_request.py b/regtests/client/python/test/test_revoke_grant_request.py new file mode 100644 index 0000000000..a54a9a61d9 --- /dev/null +++ b/regtests/client/python/test/test_revoke_grant_request.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.revoke_grant_request import RevokeGrantRequest + +class TestRevokeGrantRequest(unittest.TestCase): + """RevokeGrantRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> RevokeGrantRequest: + """Test RevokeGrantRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `RevokeGrantRequest` + """ + model = RevokeGrantRequest() + if include_optional: + return RevokeGrantRequest( + grant = polaris.management.models.grant_resource.GrantResource( + type = 'catalog', ) + ) + else: + return RevokeGrantRequest( + ) + """ + + def testRevokeGrantRequest(self): + """Test RevokeGrantRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_scan_report.py 
b/regtests/client/python/test/test_scan_report.py new file mode 100644 index 0000000000..be8858897e --- /dev/null +++ b/regtests/client/python/test/test_scan_report.py @@ -0,0 +1,90 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.scan_report import ScanReport + +class TestScanReport(unittest.TestCase): + """ScanReport unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ScanReport: + """Test ScanReport + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ScanReport` + """ + model = ScanReport() + if include_optional: + return ScanReport( + table_name = '', + snapshot_id = 56, + filter = None, + schema_id = 56, + projected_field_ids = [ + 56 + ], + projected_field_names = [ + '' + ], + metrics = {"metrics":{"total-planning-duration":{"count":1,"time-unit":"nanoseconds","total-duration":2644235116},"result-data-files":{"unit":"count","value":1},"result-delete-files":{"unit":"count","value":0},"total-data-manifests":{"unit":"count","value":1},"total-delete-manifests":{"unit":"count","value":0},"scanned-data-manifests":{"unit":"count","value":1},"skipped-data-manifests":{"unit":"count","value":0},"total-file-size-bytes":{"unit":"bytes","value":10},"total-delete-file-size-bytes":{"unit":"bytes","value":0}}}, + metadata = { + 'key' : '' + } + ) + else: + return ScanReport( + table_name = '', + snapshot_id = 56, + filter = None, + schema_id = 56, + projected_field_ids = [ + 56 + ], + projected_field_names = [ + '' + ], + metrics = 
{"metrics":{"total-planning-duration":{"count":1,"time-unit":"nanoseconds","total-duration":2644235116},"result-data-files":{"unit":"count","value":1},"result-delete-files":{"unit":"count","value":0},"total-data-manifests":{"unit":"count","value":1},"total-delete-manifests":{"unit":"count","value":0},"scanned-data-manifests":{"unit":"count","value":1},"skipped-data-manifests":{"unit":"count","value":0},"total-file-size-bytes":{"unit":"bytes","value":10},"total-delete-file-size-bytes":{"unit":"bytes","value":0}}}, + ) + """ + + def testScanReport(self): + """Test ScanReport""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_current_schema_update.py b/regtests/client/python/test/test_set_current_schema_update.py new file mode 100644 index 0000000000..b1e9138064 --- /dev/null +++ b/regtests/client/python/test/test_set_current_schema_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_current_schema_update import SetCurrentSchemaUpdate + +class TestSetCurrentSchemaUpdate(unittest.TestCase): + """SetCurrentSchemaUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetCurrentSchemaUpdate: + """Test SetCurrentSchemaUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetCurrentSchemaUpdate` + """ + model = SetCurrentSchemaUpdate() + if include_optional: + return SetCurrentSchemaUpdate( + action = 'set-current-schema', + schema_id = 56 + ) + else: + return SetCurrentSchemaUpdate( + action = 'set-current-schema', + schema_id = 56, + ) + """ + + def testSetCurrentSchemaUpdate(self): + """Test SetCurrentSchemaUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_current_view_version_update.py b/regtests/client/python/test/test_set_current_view_version_update.py new file mode 100644 index 0000000000..1ca4398273 --- /dev/null +++ b/regtests/client/python/test/test_set_current_view_version_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_current_view_version_update import SetCurrentViewVersionUpdate + +class TestSetCurrentViewVersionUpdate(unittest.TestCase): + """SetCurrentViewVersionUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetCurrentViewVersionUpdate: + """Test SetCurrentViewVersionUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetCurrentViewVersionUpdate` + """ + model = SetCurrentViewVersionUpdate() + if include_optional: + return SetCurrentViewVersionUpdate( + action = 'set-current-view-version', + view_version_id = 56 + ) + else: + return SetCurrentViewVersionUpdate( + action = 'set-current-view-version', + view_version_id = 56, + ) + """ + + def testSetCurrentViewVersionUpdate(self): + """Test SetCurrentViewVersionUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + 
unittest.main() diff --git a/regtests/client/python/test/test_set_default_sort_order_update.py b/regtests/client/python/test/test_set_default_sort_order_update.py new file mode 100644 index 0000000000..3ff82c16cc --- /dev/null +++ b/regtests/client/python/test/test_set_default_sort_order_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_default_sort_order_update import SetDefaultSortOrderUpdate + +class TestSetDefaultSortOrderUpdate(unittest.TestCase): + """SetDefaultSortOrderUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetDefaultSortOrderUpdate: + """Test SetDefaultSortOrderUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetDefaultSortOrderUpdate` + """ + model = SetDefaultSortOrderUpdate() + if include_optional: + return SetDefaultSortOrderUpdate( + action = 'set-default-sort-order', + sort_order_id = 56 + ) + else: + return SetDefaultSortOrderUpdate( + action = 'set-default-sort-order', + sort_order_id = 56, + ) + """ + + def testSetDefaultSortOrderUpdate(self): + """Test SetDefaultSortOrderUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_default_spec_update.py b/regtests/client/python/test/test_set_default_spec_update.py new file mode 100644 index 0000000000..53827fb68b --- /dev/null +++ b/regtests/client/python/test/test_set_default_spec_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_default_spec_update import SetDefaultSpecUpdate + +class TestSetDefaultSpecUpdate(unittest.TestCase): + """SetDefaultSpecUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetDefaultSpecUpdate: + """Test SetDefaultSpecUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetDefaultSpecUpdate` + """ + model = SetDefaultSpecUpdate() + if include_optional: + return SetDefaultSpecUpdate( + action = 'set-default-spec', + spec_id = 56 + ) + else: + return SetDefaultSpecUpdate( + action = 'set-default-spec', + spec_id = 56, + ) + """ + + def testSetDefaultSpecUpdate(self): + """Test SetDefaultSpecUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_expression.py b/regtests/client/python/test/test_set_expression.py new file mode 100644 index 0000000000..4adeb762f4 --- /dev/null +++ b/regtests/client/python/test/test_set_expression.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_expression import SetExpression + +class TestSetExpression(unittest.TestCase): + """SetExpression unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetExpression: + """Test SetExpression + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetExpression` + """ + model = SetExpression() + if include_optional: + return SetExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + term = None, + values = [ + None + ] + ) + else: + return SetExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + term = None, + values = [ + None + ], + ) + 
""" + + def testSetExpression(self): + """Test SetExpression""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_location_update.py b/regtests/client/python/test/test_set_location_update.py new file mode 100644 index 0000000000..47922065cd --- /dev/null +++ b/regtests/client/python/test/test_set_location_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_location_update import SetLocationUpdate + +class TestSetLocationUpdate(unittest.TestCase): + """SetLocationUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetLocationUpdate: + """Test SetLocationUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetLocationUpdate` + """ + model = SetLocationUpdate() + if include_optional: + return SetLocationUpdate( + action = 'set-location', + location = '' + ) + else: + return SetLocationUpdate( + action = 'set-location', + location = '', + ) + """ + + def testSetLocationUpdate(self): + """Test SetLocationUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_partition_statistics_update.py b/regtests/client/python/test/test_set_partition_statistics_update.py new file mode 100644 index 0000000000..36411ed591 --- /dev/null +++ b/regtests/client/python/test/test_set_partition_statistics_update.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_partition_statistics_update import SetPartitionStatisticsUpdate + +class TestSetPartitionStatisticsUpdate(unittest.TestCase): + """SetPartitionStatisticsUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetPartitionStatisticsUpdate: + """Test SetPartitionStatisticsUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetPartitionStatisticsUpdate` + """ + model = SetPartitionStatisticsUpdate() + if include_optional: + return SetPartitionStatisticsUpdate( + action = 'set-partition-statistics', + partition_statistics = polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ) + else: + return SetPartitionStatisticsUpdate( + action = 'set-partition-statistics', + partition_statistics = polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ), + ) + """ + + def testSetPartitionStatisticsUpdate(self): + """Test SetPartitionStatisticsUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_properties_update.py 
b/regtests/client/python/test/test_set_properties_update.py new file mode 100644 index 0000000000..9fb951b788 --- /dev/null +++ b/regtests/client/python/test/test_set_properties_update.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_properties_update import SetPropertiesUpdate + +class TestSetPropertiesUpdate(unittest.TestCase): + """SetPropertiesUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetPropertiesUpdate: + """Test SetPropertiesUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetPropertiesUpdate` + """ + model = SetPropertiesUpdate() + if include_optional: + return SetPropertiesUpdate( + action = 'set-properties', + updates = { + 'key' : '' + } + ) + else: + return SetPropertiesUpdate( + action = 'set-properties', + updates = { + 'key' : '' + }, + ) + """ + + def testSetPropertiesUpdate(self): + """Test SetPropertiesUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_snapshot_ref_update.py b/regtests/client/python/test/test_set_snapshot_ref_update.py new file mode 100644 index 0000000000..8cc7e29101 --- /dev/null +++ b/regtests/client/python/test/test_set_snapshot_ref_update.py @@ -0,0 +1,76 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_snapshot_ref_update import SetSnapshotRefUpdate + +class TestSetSnapshotRefUpdate(unittest.TestCase): + """SetSnapshotRefUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetSnapshotRefUpdate: + """Test SetSnapshotRefUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetSnapshotRefUpdate` + """ + model = SetSnapshotRefUpdate() + if include_optional: + return SetSnapshotRefUpdate( + action = 'set-snapshot-ref', + ref_name = '', + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56 + ) + else: + return SetSnapshotRefUpdate( + action = 'set-snapshot-ref', + ref_name = '', + type = 'tag', + snapshot_id = 56, + ) + """ + + def testSetSnapshotRefUpdate(self): + """Test SetSnapshotRefUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_set_statistics_update.py b/regtests/client/python/test/test_set_statistics_update.py new file mode 100644 index 0000000000..777caade1c --- /dev/null +++ b/regtests/client/python/test/test_set_statistics_update.py @@ -0,0 +1,99 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.set_statistics_update import SetStatisticsUpdate + +class TestSetStatisticsUpdate(unittest.TestCase): + """SetStatisticsUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SetStatisticsUpdate: + """Test SetStatisticsUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SetStatisticsUpdate` + """ + model = SetStatisticsUpdate() + if include_optional: + return SetStatisticsUpdate( + action = 'set-statistics', + snapshot_id = 56, + statistics = polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + 
properties = polaris.catalog.models.properties.properties(), ) + ], ) + ) + else: + return SetStatisticsUpdate( + action = 'set-statistics', + snapshot_id = 56, + statistics = polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties(), ) + ], ), + ) + """ + + def testSetStatisticsUpdate(self): + """Test SetStatisticsUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_snapshot.py b/regtests/client/python/test/test_snapshot.py new file mode 100644 index 0000000000..ba0bdae198 --- /dev/null +++ b/regtests/client/python/test/test_snapshot.py @@ -0,0 +1,80 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. 
+ + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.snapshot import Snapshot + +class TestSnapshot(unittest.TestCase): + """Snapshot unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Snapshot: + """Test Snapshot + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Snapshot` + """ + model = Snapshot() + if include_optional: + return Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56 + ) + else: + return Snapshot( + snapshot_id = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + ) + """ + + def testSnapshot(self): + """Test Snapshot""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_snapshot_log_inner.py b/regtests/client/python/test/test_snapshot_log_inner.py new file mode 100644 index 0000000000..31e98d7156 --- /dev/null +++ b/regtests/client/python/test/test_snapshot_log_inner.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.snapshot_log_inner import SnapshotLogInner + +class TestSnapshotLogInner(unittest.TestCase): + """SnapshotLogInner unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SnapshotLogInner: + """Test SnapshotLogInner + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SnapshotLogInner` + """ + model = SnapshotLogInner() + if include_optional: + return SnapshotLogInner( + snapshot_id = 56, + timestamp_ms = 56 + ) + else: + return SnapshotLogInner( + snapshot_id = 56, + timestamp_ms = 56, + ) + """ + + def testSnapshotLogInner(self): + """Test SnapshotLogInner""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_snapshot_reference.py b/regtests/client/python/test/test_snapshot_reference.py new file mode 100644 index 
0000000000..9d5fe1bf8a --- /dev/null +++ b/regtests/client/python/test/test_snapshot_reference.py @@ -0,0 +1,72 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.snapshot_reference import SnapshotReference + +class TestSnapshotReference(unittest.TestCase): + """SnapshotReference unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SnapshotReference: + """Test SnapshotReference + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SnapshotReference` + """ + model = SnapshotReference() + if include_optional: + return SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56 + ) + else: + return SnapshotReference( + type = 'tag', + snapshot_id = 56, + ) + """ + + def testSnapshotReference(self): + """Test SnapshotReference""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_snapshot_summary.py b/regtests/client/python/test/test_snapshot_summary.py new file mode 100644 index 0000000000..9039af1a34 --- /dev/null +++ b/regtests/client/python/test/test_snapshot_summary.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.snapshot_summary import SnapshotSummary + +class TestSnapshotSummary(unittest.TestCase): + """SnapshotSummary unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SnapshotSummary: + """Test SnapshotSummary + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SnapshotSummary` + """ + model = SnapshotSummary() + if include_optional: + return SnapshotSummary( + operation = 'append' + ) + else: + return SnapshotSummary( + operation = 'append', + ) + """ + + def testSnapshotSummary(self): + """Test SnapshotSummary""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_sort_direction.py b/regtests/client/python/test/test_sort_direction.py new file mode 100644 index 0000000000..7880429837 --- /dev/null +++ b/regtests/client/python/test/test_sort_direction.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.sort_direction import SortDirection + +class TestSortDirection(unittest.TestCase): + """SortDirection unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testSortDirection(self): + """Test SortDirection""" + # inst = SortDirection() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_sort_field.py b/regtests/client/python/test/test_sort_field.py new file mode 100644 index 0000000000..8c1b296f0a --- /dev/null +++ b/regtests/client/python/test/test_sort_field.py @@ -0,0 +1,73 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.sort_field import SortField + +class TestSortField(unittest.TestCase): + """SortField unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SortField: + """Test SortField + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SortField` + """ + model = SortField() + if include_optional: + return SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first' + ) + else: + return SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', + ) + """ + + def testSortField(self): + """Test SortField""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_sort_order.py b/regtests/client/python/test/test_sort_order.py new file mode 100644 index 0000000000..bd1fc93371 --- /dev/null +++ b/regtests/client/python/test/test_sort_order.py @@ -0,0 +1,81 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.sort_order import SortOrder + +class TestSortOrder(unittest.TestCase): + """SortOrder unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SortOrder: + """Test SortOrder + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SortOrder` + """ + model = SortOrder() + if include_optional: + return SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ] + ) + else: + return SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 
'asc', + null_order = 'nulls-first', ) + ], + ) + """ + + def testSortOrder(self): + """Test SortOrder""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_sql_view_representation.py b/regtests/client/python/test/test_sql_view_representation.py new file mode 100644 index 0000000000..5909feb856 --- /dev/null +++ b/regtests/client/python/test/test_sql_view_representation.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.sql_view_representation import SQLViewRepresentation + +class TestSQLViewRepresentation(unittest.TestCase): + """SQLViewRepresentation unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> SQLViewRepresentation: + """Test SQLViewRepresentation + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `SQLViewRepresentation` + """ + model = SQLViewRepresentation() + if include_optional: + return SQLViewRepresentation( + type = '', + sql = '', + dialect = '' + ) + else: + return SQLViewRepresentation( + type = '', + sql = '', + dialect = '', + ) + """ + + def testSQLViewRepresentation(self): + """Test SQLViewRepresentation""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_statistics_file.py b/regtests/client/python/test/test_statistics_file.py new file mode 100644 index 0000000000..9bb196d833 --- /dev/null +++ b/regtests/client/python/test/test_statistics_file.py @@ -0,0 +1,93 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.statistics_file import StatisticsFile + +class TestStatisticsFile(unittest.TestCase): + """StatisticsFile unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> StatisticsFile: + """Test StatisticsFile + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `StatisticsFile` + """ + model = StatisticsFile() + if include_optional: + return StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties(), ) + ] + ) + else: + return StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties(), ) + ], + ) + """ + + def testStatisticsFile(self): + """Test StatisticsFile""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git 
a/regtests/client/python/test/test_storage_config_info.py b/regtests/client/python/test/test_storage_config_info.py new file mode 100644 index 0000000000..0e6c2c7f88 --- /dev/null +++ b/regtests/client/python/test/test_storage_config_info.py @@ -0,0 +1,68 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.storage_config_info import StorageConfigInfo + +class TestStorageConfigInfo(unittest.TestCase): + """StorageConfigInfo unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> StorageConfigInfo: + """Test StorageConfigInfo + include_optional is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `StorageConfigInfo` + """ + model = StorageConfigInfo() + if include_optional: + return StorageConfigInfo( + storage_type = 'S3', + allowed_locations = ['s3://bucketname/prefix/']  # AWS example; Azure: 'abfss://container@storageaccount.blob.core.windows.net/prefix/', GCP: 'gs://bucketname/prefix/' + ) + else: + return StorageConfigInfo( + storage_type = 'S3', + ) + """ + + def testStorageConfigInfo(self): + """Test StorageConfigInfo""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_struct_field.py b/regtests/client/python/test/test_struct_field.py new file mode 100644 index 0000000000..e4dc64b236 --- /dev/null +++ b/regtests/client/python/test/test_struct_field.py @@ -0,0 +1,74 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.struct_field import StructField + +class TestStructField(unittest.TestCase): + """StructField unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> StructField: + """Test StructField + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `StructField` + """ + model = StructField() + if include_optional: + return StructField( + id = 56, + name = '', + type = None, + required = True, + doc = '' + ) + else: + return StructField( + id = 56, + name = '', + type = None, + required = True, + ) + """ + + def testStructField(self): + """Test StructField""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_struct_type.py b/regtests/client/python/test/test_struct_type.py new file mode 100644 index 0000000000..0d8c132aed --- /dev/null +++ b/regtests/client/python/test/test_struct_type.py @@ -0,0 +1,83 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.struct_type import StructType + +class TestStructType(unittest.TestCase): + """StructType unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> StructType: + """Test StructType + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `StructType` + """ + model = StructType() + if include_optional: + return StructType( + type = 'struct', + fields = [ + polaris.catalog.models.struct_field.StructField( + id = 56, + name = '', + type = null, + required = True, + doc = '', ) + ] + ) + else: + return StructType( + type = 'struct', + fields = [ + polaris.catalog.models.struct_field.StructField( + id = 56, + name = '', + type = null, + required = True, + doc = '', ) + ], + ) + """ + + def testStructType(self): + """Test StructType""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff 
--git a/regtests/client/python/test/test_table_grant.py b/regtests/client/python/test/test_table_grant.py new file mode 100644 index 0000000000..e6d1fb898d --- /dev/null +++ b/regtests/client/python/test/test_table_grant.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.table_grant import TableGrant + +class TestTableGrant(unittest.TestCase): + """TableGrant unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TableGrant: + """Test TableGrant + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TableGrant` + """ + model = TableGrant() + if include_optional: + return TableGrant( + namespace = [ + '' + ], + table_name = '', + privilege = 'CATALOG_MANAGE_ACCESS' + ) + else: + return TableGrant( + namespace = [ + '' + ], + table_name = '', + privilege = 'CATALOG_MANAGE_ACCESS', + ) + """ + + def testTableGrant(self): + """Test TableGrant""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_table_identifier.py b/regtests/client/python/test/test_table_identifier.py new file mode 100644 index 0000000000..fe685a038b --- /dev/null +++ b/regtests/client/python/test/test_table_identifier.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.table_identifier import TableIdentifier + +class TestTableIdentifier(unittest.TestCase): + """TableIdentifier unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TableIdentifier: + """Test TableIdentifier + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TableIdentifier` + """ + model = TableIdentifier() + if include_optional: + return TableIdentifier( + namespace = ["accounting","tax"], + name = '' + ) + else: + return TableIdentifier( + namespace = ["accounting","tax"], + name = '', + ) + """ + + def testTableIdentifier(self): + """Test TableIdentifier""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_table_metadata.py b/regtests/client/python/test/test_table_metadata.py new file mode 100644 index 0000000000..bc714a4fea --- /dev/null +++ b/regtests/client/python/test/test_table_metadata.py @@ -0,0 +1,159 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.table_metadata import TableMetadata + +class TestTableMetadata(unittest.TestCase): + """TableMetadata unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TableMetadata: + """Test TableMetadata + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TableMetadata` + """ + model = TableMetadata() + if include_optional: + return TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + 
polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties(), ) + ], ) + ], + partition_statistics_files = [ + polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ] + ) + else: + return TableMetadata( + format_version = 1, + table_uuid = '', + ) + """ + + def testTableMetadata(self): + """Test TableMetadata""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + 
unittest.main() diff --git a/regtests/client/python/test/test_table_privilege.py b/regtests/client/python/test/test_table_privilege.py new file mode 100644 index 0000000000..42ac331443 --- /dev/null +++ b/regtests/client/python/test/test_table_privilege.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.table_privilege import TablePrivilege + +class TestTablePrivilege(unittest.TestCase): + """TablePrivilege unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testTablePrivilege(self): + """Test TablePrivilege""" + # inst = TablePrivilege() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_table_requirement.py b/regtests/client/python/test/test_table_requirement.py new file mode 100644 index 0000000000..938b4d1952 --- /dev/null +++ b/regtests/client/python/test/test_table_requirement.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.table_requirement import TableRequirement + +class TestTableRequirement(unittest.TestCase): + """TableRequirement unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TableRequirement: + """Test TableRequirement + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TableRequirement` + """ + model = TableRequirement() + if include_optional: + return TableRequirement( + type = '' + ) + else: + return TableRequirement( + type = '', + ) + """ + + def testTableRequirement(self): + """Test TableRequirement""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_table_update.py 
b/regtests/client/python/test/test_table_update.py new file mode 100644 index 0000000000..3f7a126f9e --- /dev/null +++ b/regtests/client/python/test/test_table_update.py @@ -0,0 +1,193 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.table_update import TableUpdate + +class TestTableUpdate(unittest.TestCase): + """TableUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TableUpdate: + """Test TableUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TableUpdate` + """ + model = TableUpdate() + if include_optional: + return TableUpdate( + action = '', + format_version = 56, + var_schema = None, + last_column_id = 56, + schema_id = 56, + spec = polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ), + spec_id = 56, + sort_order = polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ), + sort_order_id = 56, + snapshot = polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ), + ref_name = '', + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, + snapshot_ids = [ + 56 + ], + location = '', + updates = { + 'key' : '' + }, + removals = [ + '' + ], + statistics = polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + 
polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties(), ) + ], ) + ) + else: + return TableUpdate( + action = '', + format_version = 56, + var_schema = None, + schema_id = 56, + spec = polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ), + spec_id = 56, + sort_order = polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ), + sort_order_id = 56, + snapshot = polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ), + ref_name = '', + type = 'tag', + snapshot_id = 56, + snapshot_ids = [ + 56 + ], + location = '', + updates = { + 'key' : '' + }, + removals = [ + '' + ], + statistics = polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], + properties = polaris.catalog.models.properties.properties(), ) + ], ), + ) + """ + + def testTableUpdate(self): + """Test TableUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git 
a/regtests/client/python/test/test_table_update_notification.py b/regtests/client/python/test/test_table_update_notification.py new file mode 100644 index 0000000000..a767194896 --- /dev/null +++ b/regtests/client/python/test/test_table_update_notification.py @@ -0,0 +1,165 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.table_update_notification import TableUpdateNotification + +class TestTableUpdateNotification(unittest.TestCase): + """TableUpdateNotification unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TableUpdateNotification: + """Test TableUpdateNotification + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TableUpdateNotification` + """ + model = TableUpdateNotification() + if include_optional: + return TableUpdateNotification( + table_name = '', + timestamp = 56, + table_uuid = '', + metadata_location = '', + metadata = polaris.catalog.models.table_metadata.TableMetadata( + format_version = 1, + table_uuid = '', + location = '', + last_updated_ms = 56, + properties = { + 'key' : '' + }, + schemas = [ + null + ], + current_schema_id = 56, + last_column_id = 56, + partition_specs = [ + polaris.catalog.models.partition_spec.PartitionSpec( + spec_id = 56, + fields = [ + polaris.catalog.models.partition_field.PartitionField( + field_id = 56, + source_id = 56, + name = '', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', ) + ], ) + ], + default_spec_id = 56, + last_partition_id = 56, + sort_orders = [ + polaris.catalog.models.sort_order.SortOrder( + order_id = 56, + fields = [ + polaris.catalog.models.sort_field.SortField( + source_id = 56, + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + direction = 'asc', + null_order = 'nulls-first', ) + ], ) + ], + default_sort_order_id = 56, + snapshots = [ + polaris.catalog.models.snapshot.Snapshot( + snapshot_id = 56, + parent_snapshot_id = 56, + sequence_number = 56, + timestamp_ms = 56, + manifest_list = '', + summary = { + 'key' : '' + }, + schema_id = 56, ) + ], + refs = { + 
'key' : polaris.catalog.models.snapshot_reference.SnapshotReference( + type = 'tag', + snapshot_id = 56, + max_ref_age_ms = 56, + max_snapshot_age_ms = 56, + min_snapshots_to_keep = 56, ) + }, + current_snapshot_id = 56, + last_sequence_number = 56, + snapshot_log = [ + polaris.catalog.models.snapshot_log_inner.SnapshotLog_inner( + snapshot_id = 56, + timestamp_ms = 56, ) + ], + metadata_log = [ + polaris.catalog.models.metadata_log_inner.MetadataLog_inner( + metadata_file = '', + timestamp_ms = 56, ) + ], + statistics_files = [ + polaris.catalog.models.statistics_file.StatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, + file_footer_size_in_bytes = 56, + blob_metadata = [ + polaris.catalog.models.blob_metadata.BlobMetadata( + type = '', + snapshot_id = 56, + sequence_number = 56, + fields = [ + 56 + ], ) + ], ) + ], + partition_statistics_files = [ + polaris.catalog.models.partition_statistics_file.PartitionStatisticsFile( + snapshot_id = 56, + statistics_path = '', + file_size_in_bytes = 56, ) + ], ) + ) + else: + return TableUpdateNotification( + table_name = '', + timestamp = 56, + table_uuid = '', + metadata_location = '', + ) + """ + + def testTableUpdateNotification(self): + """Test TableUpdateNotification""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_term.py b/regtests/client/python/test/test_term.py new file mode 100644 index 0000000000..7ff7479516 --- /dev/null +++ b/regtests/client/python/test/test_term.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.term import Term + +class TestTerm(unittest.TestCase): + """Term unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Term: + """Test Term + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Term` + """ + model = Term() + if include_optional: + return Term( + type = 'transform', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + term = '["column-name"]' + ) + else: + return Term( + type = 'transform', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + term = '["column-name"]', + ) + """ + + def testTerm(self): + """Test Term""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_timer_result.py b/regtests/client/python/test/test_timer_result.py new file 
mode 100644 index 0000000000..c18feb09d4 --- /dev/null +++ b/regtests/client/python/test/test_timer_result.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.timer_result import TimerResult + +class TestTimerResult(unittest.TestCase): + """TimerResult unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TimerResult: + """Test TimerResult + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TimerResult` + """ + model = TimerResult() + if include_optional: + return TimerResult( + time_unit = '', + count = 56, + total_duration = 56 + ) + else: + return TimerResult( + time_unit = '', + count = 56, + total_duration = 56, + ) + """ + + def testTimerResult(self): + """Test TimerResult""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_token_type.py b/regtests/client/python/test/test_token_type.py new file mode 100644 index 0000000000..1e9b4d1e59 --- /dev/null +++ b/regtests/client/python/test/test_token_type.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. 
Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.token_type import TokenType + +class TestTokenType(unittest.TestCase): + """TokenType unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testTokenType(self): + """Test TokenType""" + # inst = TokenType() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_transform_term.py b/regtests/client/python/test/test_transform_term.py new file mode 100644 index 0000000000..72da2f129b --- /dev/null +++ b/regtests/client/python/test/test_transform_term.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.transform_term import TransformTerm + +class TestTransformTerm(unittest.TestCase): + """TransformTerm unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> TransformTerm: + """Test TransformTerm + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `TransformTerm` + """ + model = TransformTerm() + if include_optional: + return TransformTerm( + type = 'transform', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + term = '["column-name"]' + ) + else: + return TransformTerm( + type = 'transform', + transform = '["identity","year","month","day","hour","bucket[256]","truncate[16]"]', + term = '["column-name"]', + ) + """ + + def testTransformTerm(self): + """Test TransformTerm""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_type.py b/regtests/client/python/test/test_type.py new file mode 100644 index 0000000000..ca80486463 --- /dev/null +++ b/regtests/client/python/test/test_type.py @@ -0,0 +1,99 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.type import Type + +class TestType(unittest.TestCase): + """Type unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> Type: + """Test Type + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `Type` + """ + model = Type() + if include_optional: + return Type( + type = 'struct', + fields = [ + polaris.catalog.models.struct_field.StructField( + id = 56, + name = '', + type = null, + required = True, + doc = '', ) + ], + element_id = 56, + element = None, + element_required = True, + key_id = 56, + key = None, + value_id = 56, + value = None, + value_required = True + ) + else: + return Type( + type = 'struct', + fields = [ + polaris.catalog.models.struct_field.StructField( + id = 56, + name = '', + type = null, + required = True, + doc = '', ) + ], + element_id = 56, + element = None, + element_required = True, + key_id = 56, + key = None, + value_id = 56, + value = None, + value_required = True, + ) + """ + + def testType(self): + """Test Type""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_unary_expression.py 
b/regtests/client/python/test/test_unary_expression.py new file mode 100644 index 0000000000..417b18eb2f --- /dev/null +++ b/regtests/client/python/test/test_unary_expression.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.unary_expression import UnaryExpression + +class TestUnaryExpression(unittest.TestCase): + """UnaryExpression unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UnaryExpression: + """Test UnaryExpression + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UnaryExpression` + """ + model = UnaryExpression() + if include_optional: + return UnaryExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + term = None, + value = polaris.catalog.models.value.value() + ) + else: + return UnaryExpression( + type = '["eq","and","or","not","in","not-in","lt","lt-eq","gt","gt-eq","not-eq","starts-with","not-starts-with","is-null","not-null","is-nan","not-nan"]', + term = None, + value = polaris.catalog.models.value.value(), + ) + """ + + def testUnaryExpression(self): + """Test UnaryExpression""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_update_catalog_request.py b/regtests/client/python/test/test_update_catalog_request.py new file mode 100644 index 0000000000..8a04260ab4 --- /dev/null +++ b/regtests/client/python/test/test_update_catalog_request.py @@ -0,0 +1,72 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.update_catalog_request import UpdateCatalogRequest + +class TestUpdateCatalogRequest(unittest.TestCase): + """UpdateCatalogRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpdateCatalogRequest: + """Test UpdateCatalogRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpdateCatalogRequest` + """ + model = UpdateCatalogRequest() + if include_optional: + return UpdateCatalogRequest( + current_entity_version = 56, + properties = { + 'key' : '' + }, + storage_config_info = polaris.management.models.storage_config_info.StorageConfigInfo( + storage_type = 'S3', + allowed_locations = For AWS [s3://bucketname/prefix/], for AZURE [abfss://container@storageaccount.blob.core.windows.net/prefix/], for GCP [gs://bucketname/prefix/], ) + ) + else: + return UpdateCatalogRequest( + ) + """ + + def testUpdateCatalogRequest(self): + """Test UpdateCatalogRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = 
self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_update_catalog_role_request.py b/regtests/client/python/test/test_update_catalog_role_request.py new file mode 100644 index 0000000000..5114bf8507 --- /dev/null +++ b/regtests/client/python/test/test_update_catalog_role_request.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.update_catalog_role_request import UpdateCatalogRoleRequest + +class TestUpdateCatalogRoleRequest(unittest.TestCase): + """UpdateCatalogRoleRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpdateCatalogRoleRequest: + """Test UpdateCatalogRoleRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpdateCatalogRoleRequest` + """ + model = UpdateCatalogRoleRequest() + if include_optional: + return UpdateCatalogRoleRequest( + current_entity_version = 56, + properties = { + 'key' : '' + } + ) + else: + return UpdateCatalogRoleRequest( + ) + """ + + def testUpdateCatalogRoleRequest(self): + """Test UpdateCatalogRoleRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_update_namespace_properties_request.py b/regtests/client/python/test/test_update_namespace_properties_request.py new file mode 100644 index 0000000000..3604e94368 --- /dev/null +++ b/regtests/client/python/test/test_update_namespace_properties_request.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.update_namespace_properties_request import UpdateNamespacePropertiesRequest + +class TestUpdateNamespacePropertiesRequest(unittest.TestCase): + """UpdateNamespacePropertiesRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpdateNamespacePropertiesRequest: + """Test UpdateNamespacePropertiesRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpdateNamespacePropertiesRequest` + """ + model = UpdateNamespacePropertiesRequest() + if include_optional: + return UpdateNamespacePropertiesRequest( + removals = ["department","access_group"], + updates = {"owner":"Hank Bendickson"} + ) + else: + return UpdateNamespacePropertiesRequest( + ) + """ + + def testUpdateNamespacePropertiesRequest(self): + """Test UpdateNamespacePropertiesRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_update_namespace_properties_response.py b/regtests/client/python/test/test_update_namespace_properties_response.py new file mode 100644 index 0000000000..d9249ea08d --- /dev/null +++ 
b/regtests/client/python/test/test_update_namespace_properties_response.py @@ -0,0 +1,80 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.update_namespace_properties_response import UpdateNamespacePropertiesResponse + +class TestUpdateNamespacePropertiesResponse(unittest.TestCase): + """UpdateNamespacePropertiesResponse unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpdateNamespacePropertiesResponse: + """Test UpdateNamespacePropertiesResponse + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpdateNamespacePropertiesResponse` + """ + model = UpdateNamespacePropertiesResponse() + if include_optional: + return UpdateNamespacePropertiesResponse( + updated = [ + '' + ], + removed = [ + '' + ], + missing = [ + '' + ] + ) + else: + return UpdateNamespacePropertiesResponse( + updated = [ + '' + ], + removed = [ + '' + ], + ) + """ + + def testUpdateNamespacePropertiesResponse(self): + """Test UpdateNamespacePropertiesResponse""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_update_principal_request.py b/regtests/client/python/test/test_update_principal_request.py new file mode 100644 index 0000000000..6c6174c580 --- /dev/null +++ b/regtests/client/python/test/test_update_principal_request.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.update_principal_request import UpdatePrincipalRequest + +class TestUpdatePrincipalRequest(unittest.TestCase): + """UpdatePrincipalRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpdatePrincipalRequest: + """Test UpdatePrincipalRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpdatePrincipalRequest` + """ + model = UpdatePrincipalRequest() + if include_optional: + return UpdatePrincipalRequest( + current_entity_version = 56, + properties = { + 'key' : '' + } + ) + else: + return UpdatePrincipalRequest( + ) + """ + + def testUpdatePrincipalRequest(self): + """Test UpdatePrincipalRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_update_principal_role_request.py b/regtests/client/python/test/test_update_principal_role_request.py new file mode 100644 index 
0000000000..36b0b4cb65 --- /dev/null +++ b/regtests/client/python/test/test_update_principal_role_request.py @@ -0,0 +1,69 @@ + +# coding: utf-8 +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.update_principal_role_request import UpdatePrincipalRoleRequest + +class TestUpdatePrincipalRoleRequest(unittest.TestCase): + """UpdatePrincipalRoleRequest unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpdatePrincipalRoleRequest: + """Test UpdatePrincipalRoleRequest + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpdatePrincipalRoleRequest` + """ + model = UpdatePrincipalRoleRequest() + if include_optional: + return UpdatePrincipalRoleRequest( + current_entity_version = 56, + properties = { + 'key' : '' + } + ) + else: + return UpdatePrincipalRoleRequest( + ) + """ + + def testUpdatePrincipalRoleRequest(self): + """Test UpdatePrincipalRoleRequest""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_upgrade_format_version_update.py b/regtests/client/python/test/test_upgrade_format_version_update.py new file mode 100644 index 0000000000..f3afed16b7 --- /dev/null +++ b/regtests/client/python/test/test_upgrade_format_version_update.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.upgrade_format_version_update import UpgradeFormatVersionUpdate + +class TestUpgradeFormatVersionUpdate(unittest.TestCase): + """UpgradeFormatVersionUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> UpgradeFormatVersionUpdate: + """Test UpgradeFormatVersionUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `UpgradeFormatVersionUpdate` + """ + model = UpgradeFormatVersionUpdate() + if include_optional: + return UpgradeFormatVersionUpdate( + action = 'upgrade-format-version', + format_version = 56 + ) + else: + return UpgradeFormatVersionUpdate( + action = 'upgrade-format-version', + format_version = 56, + ) + """ + + def testUpgradeFormatVersionUpdate(self): + """Test UpgradeFormatVersionUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_value_map.py b/regtests/client/python/test/test_value_map.py new file mode 100644 index 0000000000..83d17c6f12 --- /dev/null +++ b/regtests/client/python/test/test_value_map.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.value_map import ValueMap + +class TestValueMap(unittest.TestCase): + """ValueMap unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ValueMap: + """Test ValueMap + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ValueMap` + """ + model = ValueMap() + if include_optional: + return ValueMap( + keys = [ + 42 + ], + values = [ + null + ] + ) + else: + return ValueMap( + ) + """ + + def testValueMap(self): + """Test ValueMap""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_grant.py b/regtests/client/python/test/test_view_grant.py new file mode 100644 index 0000000000..995361fb0e --- 
/dev/null +++ b/regtests/client/python/test/test_view_grant.py @@ -0,0 +1,75 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.management.models.view_grant import ViewGrant + +class TestViewGrant(unittest.TestCase): + """ViewGrant unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewGrant: + """Test ViewGrant + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewGrant` + """ + model = ViewGrant() + if include_optional: + return ViewGrant( + namespace = [ + '' + ], + view_name = '', + privilege = 'CATALOG_MANAGE_ACCESS' + ) + else: + return ViewGrant( + namespace = [ + '' + ], + view_name = '', + privilege = 'CATALOG_MANAGE_ACCESS', + ) + """ + + def testViewGrant(self): + """Test ViewGrant""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == 
'__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_history_entry.py b/regtests/client/python/test/test_view_history_entry.py new file mode 100644 index 0000000000..c33fc9047e --- /dev/null +++ b/regtests/client/python/test/test_view_history_entry.py @@ -0,0 +1,69 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.view_history_entry import ViewHistoryEntry + +class TestViewHistoryEntry(unittest.TestCase): + """ViewHistoryEntry unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewHistoryEntry: + """Test ViewHistoryEntry + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewHistoryEntry` + """ + model = ViewHistoryEntry() + if include_optional: + return ViewHistoryEntry( + version_id = 56, + timestamp_ms = 56 + ) + else: + return ViewHistoryEntry( + version_id = 56, + timestamp_ms = 56, + ) + """ + + def testViewHistoryEntry(self): + """Test ViewHistoryEntry""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_metadata.py b/regtests/client/python/test/test_view_metadata.py new file mode 100644 index 0000000000..54caeea83d --- /dev/null +++ b/regtests/client/python/test/test_view_metadata.py @@ -0,0 +1,120 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.view_metadata import ViewMetadata + +class TestViewMetadata(unittest.TestCase): + """ViewMetadata unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewMetadata: + """Test ViewMetadata + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewMetadata` + """ + model = ViewMetadata() + if include_optional: + return ViewMetadata( + view_uuid = '', + format_version = 1, + location = '', + current_version_id = 56, + versions = [ + polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ) + ], + version_log = [ + polaris.catalog.models.view_history_entry.ViewHistoryEntry( + version_id = 56, + timestamp_ms = 56, ) + ], + schemas = [ + null + ], + properties = { + 'key' : '' + } + ) + else: + return ViewMetadata( + view_uuid = '', + format_version = 1, + location = '', + current_version_id = 56, + versions = [ + polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ) + ], + version_log = [ + polaris.catalog.models.view_history_entry.ViewHistoryEntry( 
+ version_id = 56, + timestamp_ms = 56, ) + ], + schemas = [ + null + ], + ) + """ + + def testViewMetadata(self): + """Test ViewMetadata""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_privilege.py b/regtests/client/python/test/test_view_privilege.py new file mode 100644 index 0000000000..8990a9f128 --- /dev/null +++ b/regtests/client/python/test/test_view_privilege.py @@ -0,0 +1,48 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Polaris Management Service + + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.management.models.view_privilege import ViewPrivilege + +class TestViewPrivilege(unittest.TestCase): + """ViewPrivilege unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def testViewPrivilege(self): + """Test ViewPrivilege""" + # inst = ViewPrivilege() + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_representation.py b/regtests/client/python/test/test_view_representation.py new file mode 100644 index 0000000000..38053032dd --- /dev/null +++ b/regtests/client/python/test/test_view_representation.py @@ -0,0 +1,71 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.view_representation import ViewRepresentation + +class TestViewRepresentation(unittest.TestCase): + """ViewRepresentation unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewRepresentation: + """Test ViewRepresentation + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewRepresentation` + """ + model = ViewRepresentation() + if include_optional: + return ViewRepresentation( + type = '', + sql = '', + dialect = '' + ) + else: + return ViewRepresentation( + type = '', + sql = '', + dialect = '', + ) + """ + + def testViewRepresentation(self): + """Test ViewRepresentation""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_requirement.py b/regtests/client/python/test/test_view_requirement.py new file mode 100644 index 0000000000..a787a2cdb0 --- /dev/null +++ b/regtests/client/python/test/test_view_requirement.py @@ -0,0 +1,67 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.view_requirement import ViewRequirement + +class TestViewRequirement(unittest.TestCase): + """ViewRequirement unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewRequirement: + """Test ViewRequirement + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewRequirement` + """ + model = ViewRequirement() + if include_optional: + return ViewRequirement( + type = '' + ) + else: + return ViewRequirement( + type = '', + ) + """ + + def testViewRequirement(self): + """Test ViewRequirement""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_update.py b/regtests/client/python/test/test_view_update.py new file mode 100644 index 0000000000..a82eb8f113 --- /dev/null +++ b/regtests/client/python/test/test_view_update.py @@ -0,0 +1,112 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.view_update import ViewUpdate + +class TestViewUpdate(unittest.TestCase): + """ViewUpdate unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewUpdate: + """Test ViewUpdate + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewUpdate` + """ + model = ViewUpdate() + if include_optional: + return ViewUpdate( + action = '', + format_version = 56, + var_schema = None, + last_column_id = 56, + location = '', + updates = { + 'key' : '' + }, + removals = [ + '' + ], + view_version = polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ), + view_version_id = 56 + ) + else: + return ViewUpdate( + action = '', + format_version = 56, + var_schema = None, + location = '', + updates = { + 'key' : '' + }, + removals = [ + '' 
+ ], + view_version = polaris.catalog.models.view_version.ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"], ), + view_version_id = 56, + ) + """ + + def testViewUpdate(self): + """Test ViewUpdate""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/test/test_view_version.py b/regtests/client/python/test/test_view_version.py new file mode 100644 index 0000000000..7c068369de --- /dev/null +++ b/regtests/client/python/test/test_view_version.py @@ -0,0 +1,86 @@ +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# coding: utf-8 + +""" + Apache Iceberg REST Catalog API + + Defines the specification for the first version of the REST Catalog API. Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. + + The version of the OpenAPI document: 0.0.1 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +import unittest + +from polaris.catalog.models.view_version import ViewVersion + +class TestViewVersion(unittest.TestCase): + """ViewVersion unit test stubs""" + + def setUp(self): + pass + + def tearDown(self): + pass + + def make_instance(self, include_optional) -> ViewVersion: + """Test ViewVersion + include_option is a boolean, when False only required + params are included, when True both required and + optional params are included """ + # uncomment below to create an instance of `ViewVersion` + """ + model = ViewVersion() + if include_optional: + return ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_catalog = '', + default_namespace = ["accounting","tax"] + ) + else: + return ViewVersion( + version_id = 56, + timestamp_ms = 56, + schema_id = 56, + summary = { + 'key' : '' + }, + representations = [ + null + ], + default_namespace = ["accounting","tax"], + ) + """ + + def testViewVersion(self): + """Test ViewVersion""" + # inst_req_only = self.make_instance(include_optional=False) + # inst_req_and_optional = self.make_instance(include_optional=True) + +if __name__ == '__main__': + unittest.main() diff --git a/regtests/client/python/tox.ini b/regtests/client/python/tox.ini new file mode 100644 index 0000000000..71bd9833c7 --- /dev/null +++ b/regtests/client/python/tox.ini @@ -0,0 +1,9 @@ +[tox] +envlist = py3 + +[testenv] +deps=-r{toxinidir}/requirements.txt + -r{toxinidir}/test-requirements.txt + +commands= + pytest --cov=polaris.catalog diff --git a/regtests/credentials/.keep b/regtests/credentials/.keep new file mode 100644 index 0000000000..e69de29bb2 diff --git a/regtests/output/.keep b/regtests/output/.keep new file mode 100644 index 0000000000..e69de29bb2 diff --git a/regtests/pyspark-setup.sh b/regtests/pyspark-setup.sh new file mode 100755 index 0000000000..7940037472 --- /dev/null +++ b/regtests/pyspark-setup.sh @@ -0,0 +1,28 @@ 
+#!/bin/bash +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +if [ ! -d ~/polaris-venv ]; then + python3 -m venv ~/polaris-venv +fi + +. ~/polaris-venv/bin/activate + +pip install poetry==1.5.0 + +cd client/python +python3 -m poetry install +deactivate \ No newline at end of file diff --git a/regtests/run.sh b/regtests/run.sh new file mode 100755 index 0000000000..eef12b622a --- /dev/null +++ b/regtests/run.sh @@ -0,0 +1,145 @@ +#!/bin/bash +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Run without args to run all tests, or single arg for single test. 
+ +if [ -z "${SPARK_HOME}" ]; then + export SPARK_HOME=$(realpath ~/spark-3.5.1-bin-hadoop3-scala2.13) +fi +export PYTHONPATH="${SPARK_HOME}/python/:${SPARK_HOME}/python/lib/py4j-0.10.9.7-src.zip:$PYTHONPATH" + +FMT_RED='\033[0;31m' +FMT_GREEN='\033[0;32m' +FMT_NC='\033[0m' + +function loginfo() { + echo "$(date): ${@}" +} +function loggreen() { + echo -e "${FMT_GREEN}$(date): ${@}${FMT_NC}" +} +function logred() { + echo -e "${FMT_RED}$(date): ${@}${FMT_NC}" +} + +REGTEST_HOME=$(dirname $(realpath $0)) +cd ${REGTEST_HOME} + +./setup.sh + +# start the python venv +. ~/polaris-venv/bin/activate + +if [ -z "${1}" ]; then + loginfo 'Running all tests' + TEST_LIST="$(find t_* -wholename '*t_*/src/*')" +else + loginfo "Running single test ${1}" + TEST_LIST=${1} +fi + +export PYTHONDONTWRITEBYTECODE=1 + +NUM_FAILURES=0 +NUM_SUCCESSES=0 + +export AWS_ACCESS_KEY_ID='' +export AWS_SECRET_ACCESS_KEY='' + +for TEST_FILE in ${TEST_LIST}; do + TEST_SUITE=$(dirname $(dirname ${TEST_FILE})) + TEST_SHORTNAME=$(basename ${TEST_FILE}) + if [[ "${TEST_SHORTNAME}" =~ .*.py ]]; then + # skip non-test python files + if [[ ! "${TEST_SHORTNAME}" =~ ^test_.*.py ]]; then + continue + fi + loginfo "Starting pytest ${TEST_SUITE}:${TEST_SHORTNAME}" + python3 -m pytest ${TEST_FILE} + CODE=$?
+ if [[ $CODE -ne 0 ]]; then + logred "Test FAILED: ${TEST_SUITE}:${TEST_SHORTNAME}" + NUM_FAILURES=$(( NUM_FAILURES + 1 )) + else + loggreen "Test SUCCEEDED: ${TEST_SUITE}:${TEST_SHORTNAME}" + NUM_SUCCESSES=$(( NUM_SUCCESSES + 1 )) + fi + continue + fi + if [[ "${TEST_SHORTNAME}" =~ .*.azure.*.sh ]]; then + if [ -z "${AZURE_CLIENT_ID}" ] || [ -z "${AZURE_CLIENT_SECRET}" ] || [ -z "${AZURE_TENANT_ID}" ] ; then + loginfo "Azure tests not enabled, skip running test ${TEST_FILE}" + continue + fi + fi + if [[ "${TEST_SHORTNAME}" =~ .*.s3_cross_region.*.sh ]]; then + if [ -z "$AWS_CROSS_REGION_TEST_ENABLED" ] || [ "$AWS_CROSS_REGION_TEST_ENABLED" != "true" ] ; then + loginfo "AWS cross region tests not enabled, skip running test ${TEST_FILE}" + continue + fi + fi + if [[ "${TEST_SHORTNAME}" =~ .*.s3.*.sh ]]; then + if [ -z "$AWS_TEST_ENABLED" ] || [ "$AWS_TEST_ENABLED" != "true" ] || [ -z "$AWS_TEST_BASE" ] ; then + loginfo "AWS tests not enabled, skip running test ${TEST_FILE}" + continue + fi + fi + if [[ "${TEST_SHORTNAME}" =~ .*.gcp.sh ]]; then + # this variable should be the location of your gcp service account key in json + # it is required when running polaris locally against gcp + # example: export GOOGLE_APPLICATION_CREDENTIALS="/home/schen/google_account/google_service_account.json" + if [ -z "$GCS_TEST_ENABLED" ] || [ "$GCS_TEST_ENABLED" != "true" ] || [ -z "${GOOGLE_APPLICATION_CREDENTIALS}" ] ; then + loginfo "GCS tests not enabled, skip running test ${TEST_FILE}" + continue + fi + fi + loginfo "Starting test ${TEST_SUITE}:${TEST_SHORTNAME}" + + TEST_TMPDIR="/tmp/polaris-regtests/${TEST_SUITE}" + TEST_STDERR="${TEST_TMPDIR}/${TEST_SHORTNAME}.stderr" + TEST_STDOUT="${TEST_TMPDIR}/${TEST_SHORTNAME}.stdout" + + mkdir -p ${TEST_TMPDIR} + if (( ${VERBOSE} )); then + ./${TEST_FILE} 2>${TEST_STDERR} | grep -v 'loading settings' | tee ${TEST_STDOUT} + else + ./${TEST_FILE} 2>${TEST_STDERR} | grep -v 'loading settings' > ${TEST_STDOUT} + fi + loginfo "Test run concluded for ${TEST_SUITE}:${TEST_SHORTNAME}"
+ + TEST_REF="$(realpath ${TEST_SUITE})/ref/${TEST_SHORTNAME}.ref" + touch ${TEST_REF} + if cmp --silent ${TEST_STDOUT} ${TEST_REF}; then + loggreen "Test SUCCEEDED: ${TEST_SUITE}:${TEST_SHORTNAME}" + NUM_SUCCESSES=$(( NUM_SUCCESSES + 1 )) + else + logred "Test FAILED: ${TEST_SUITE}:${TEST_SHORTNAME}" + echo '#!/bin/bash' > ${TEST_TMPDIR}/${TEST_SHORTNAME}.fixdiffs.sh + echo "meld ${TEST_STDOUT} ${TEST_REF}" >> ${TEST_TMPDIR}/${TEST_SHORTNAME}.fixdiffs.sh + chmod 750 ${TEST_TMPDIR}/${TEST_SHORTNAME}.fixdiffs.sh + logred "To compare and fix diffs (if 'meld' installed): ${TEST_TMPDIR}/${TEST_SHORTNAME}.fixdiffs.sh" + logred "Or manually diff: diff ${TEST_STDOUT} ${TEST_REF}" + logred "See stderr from test run for additional diagnostics: ${TEST_STDERR}" + diff ${TEST_STDOUT} ${TEST_REF} + NUM_FAILURES=$(( NUM_FAILURES + 1 )) + fi +done + +loginfo "Tests completed with ${NUM_SUCCESSES} successes and ${NUM_FAILURES} failures" +if (( ${NUM_FAILURES} > 0 )); then + exit 1 +else + exit 0 +fi diff --git a/regtests/run_spark_sql.sh b/regtests/run_spark_sql.sh new file mode 100755 index 0000000000..94ce7039c6 --- /dev/null +++ b/regtests/run_spark_sql.sh @@ -0,0 +1,75 @@ +#!/bin/bash +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# +# Run this to open an interactive spark-sql shell talking to a catalog named "manual_spark" +# +# You must run 'use polaris;' as your first query in the spark-sql shell. 
+ +REGTEST_HOME=$(dirname $(realpath $0)) +cd ${REGTEST_HOME} + +./setup.sh + +if [ -z "${SPARK_HOME}" ]; then + export SPARK_HOME=$(realpath ~/spark-3.5.1-bin-hadoop3-scala2.13) +fi + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:default-realm}" + +# Use local filesystem by default +curl -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accept: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d '{ + "catalog": { + "name": "manual_spark", + "type": "INTERNAL", + "readOnly": false, + "properties": { + "default-base-location": "file:///tmp/polaris/" + }, + "storageConfigInfo": { + "storageType": "FILE", + "allowedLocations": [ + "file:///tmp" + ] + } + } + }' + +# Use the following instead of the above to use s3 instead of the local filesystem +#curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accept: application/json' -H 'Content-Type: application/json' \ +# http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ +# -d "{\"name\": \"manual_spark\", \"id\": 100, \"type\": \"INTERNAL\", \"readOnly\": false, \"properties\": {\"default-base-location\": \"s3://${S3_BUCKET}/${USER}/polaris/\"}}" + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accept: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/manual_spark/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign.
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accept: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/manual_spark \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -X GET -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accept: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/manual_spark + +echo ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" +${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" \ + --conf spark.sql.catalog.polaris.warehouse=manual_spark \ + --conf spark.sql.defaultCatalog=polaris \ + --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions diff --git a/regtests/setup.sh b/regtests/setup.sh new file mode 100755 index 0000000000..e28a6fe052 --- /dev/null +++ b/regtests/setup.sh @@ -0,0 +1,119 @@ +#!/bin/bash +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Idempotent setup for regression tests. Run manually or let run.sh auto-run.
+# +# Warning - first time setup may download large amounts of files +# Warning - may clobber conf/spark-defaults.conf + +set -x + +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) + +if [ -z "${SPARK_HOME}" ]; then + SPARK_HOME=$(realpath ~/spark-3.5.1-bin-hadoop3-scala2.13) +fi +SPARK_CONF="${SPARK_HOME}/conf/spark-defaults.conf" +export PYTHONPATH="${SPARK_HOME}/python/:${SPARK_HOME}/python/lib/py4j-0.10.9.7-src.zip:$PYTHONPATH" + +# Ensure binaries are downloaded locally +echo 'Verifying Spark binaries...' +if ! [ -f ${SPARK_HOME}/bin/spark-sql ]; then + echo 'Setting up Spark...' + if ! [ -f ~/spark-3.5.1-bin-hadoop3-scala2.13.tgz ]; then + echo 'Downloading spark distro...' + wget -O ~/spark-3.5.1-bin-hadoop3-scala2.13.tgz 'https://dlcdn.apache.org/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3-scala2.13.tgz' + if ! [ -f ~/spark-3.5.1-bin-hadoop3-scala2.13.tgz ]; then + if [[ "${OSTYPE}" == "darwin"* ]]; then + echo "Detected OS: mac. Running 'brew install wget' to try again." + brew install wget + wget -O ~/spark-3.5.1-bin-hadoop3-scala2.13.tgz 'https://dlcdn.apache.org/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3-scala2.13.tgz' + fi + fi + else + echo 'Found existing Spark tarball' + fi + tar xzvf ~/spark-3.5.1-bin-hadoop3-scala2.13.tgz -C ~ + echo 'Done!' + SPARK_HOME=$(realpath ~/spark-3.5.1-bin-hadoop3-scala2.13) + SPARK_CONF="${SPARK_HOME}/conf/spark-defaults.conf" +else + echo 'Verified Spark distro already installed.' +fi + +# Download the iceberg cloud provider bundles needed +echo 'Verifying bundle jars...' +if ! [ -f ${SPARK_HOME}/jars/iceberg-azure-bundle-1.5.2.jar ]; then + echo 'Downloading azure bundle jar...' + wget -O ${SPARK_HOME}/jars/iceberg-azure-bundle-1.5.2.jar 'https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-azure-bundle/1.5.2/iceberg-azure-bundle-1.5.2.jar' + if ! [ -f ${SPARK_HOME}/jars/iceberg-azure-bundle-1.5.2.jar ]; then + if [[ "${OSTYPE}" == "darwin"* ]]; then + echo "Detected OS: mac. Running 'brew install wget' to try again." + brew install wget + wget -O ${SPARK_HOME}/jars/iceberg-azure-bundle-1.5.2.jar 'https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-azure-bundle/1.5.2/iceberg-azure-bundle-1.5.2.jar' + fi + fi +else + echo 'Verified azure bundle jar already installed' +fi +if ! [ -f ${SPARK_HOME}/jars/iceberg-gcp-bundle-1.5.2.jar ]; then + echo 'Downloading gcp bundle jar...' + wget -O ${SPARK_HOME}/jars/iceberg-gcp-bundle-1.5.2.jar 'https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-gcp-bundle/1.5.2/iceberg-gcp-bundle-1.5.2.jar' + if ! [ -f ${SPARK_HOME}/jars/iceberg-gcp-bundle-1.5.2.jar ]; then + if [[ "${OSTYPE}" == "darwin"* ]]; then + echo "Detected OS: mac. Running 'brew install wget' to try again." + brew install wget + wget -O ${SPARK_HOME}/jars/iceberg-gcp-bundle-1.5.2.jar 'https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-gcp-bundle/1.5.2/iceberg-gcp-bundle-1.5.2.jar' + fi + fi +else + echo 'Verified gcp bundle jar already installed' +fi + +# Ensure Spark boilerplate conf is set +echo 'Verifying Spark conf...' +if grep -q 'POLARIS_TESTCONF_V5' ${SPARK_CONF} 2>/dev/null; then + echo 'Verified spark conf' +else + echo 'Setting spark conf...' + # Instead of clobbering existing spark conf, just comment it all out in case it was customized carefully.
+ sed -i 's/^/# /' ${SPARK_CONF} +cat << EOF >> ${SPARK_CONF} + +# POLARIS_TESTCONF_V5 +spark.jars.packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.hadoop:hadoop-aws:3.4.0,software.amazon.awssdk:bundle:2.23.19,software.amazon.awssdk:url-connection-client:2.23.19 +spark.hadoop.fs.s3.impl org.apache.hadoop.fs.s3a.S3AFileSystem +spark.hadoop.fs.AbstractFileSystem.s3.impl org.apache.hadoop.fs.s3a.S3A +spark.sql.variable.substitute true + +spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby + +spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog +spark.sql.catalog.polaris.type=rest +spark.sql.catalog.polaris.uri=http://${POLARIS_HOST:-localhost}:8181/api/catalog +spark.sql.catalog.polaris.warehouse=snowflake +spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=true +spark.sql.catalog.polaris.client.region=us-west-2 +EOF + echo 'Success!' +fi + +# setup python venv and install polaris client library and test dependencies +pushd $SCRIPT_DIR && ./pyspark-setup.sh && popd + +# bootstrap dependencies so that future queries don't need to wait for the downloads. +# this is mostly useful for building the Docker image with all needed dependencies +${SPARK_HOME}/bin/spark-sql -e "SELECT 1" diff --git a/regtests/t_hello_world/ref/hello_world.sh.ref b/regtests/t_hello_world/ref/hello_world.sh.ref new file mode 100755 index 0000000000..cd0875583a --- /dev/null +++ b/regtests/t_hello_world/ref/hello_world.sh.ref @@ -0,0 +1 @@ +Hello world! diff --git a/regtests/t_hello_world/src/hello_world.sh b/regtests/t_hello_world/src/hello_world.sh new file mode 100755 index 0000000000..5c880a51f5 --- /dev/null +++ b/regtests/t_hello_world/src/hello_world.sh @@ -0,0 +1,17 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +echo "Hello world!" diff --git a/regtests/t_oauth/test_oauth2_tokens.py b/regtests/t_oauth/test_oauth2_tokens.py new file mode 100644 index 0000000000..02b3839a3a --- /dev/null +++ b/regtests/t_oauth/test_oauth2_tokens.py @@ -0,0 +1,66 @@ +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +Simple class to test OAuth endpoints in the Polaris Service. 
+""" +import argparse +import requests + + +def main(base_uri, client_id, client_secret): + """ + Args: + base_uri: The Base URI (ex: http://localhost:8181) + client_id: The Client ID of the OAuth2 Client to Use + client_secret: The Client Secret of the OAuth2 Client to Use + """ + oauth_uri = base_uri + '/api/catalog/v1/oauth/tokens' + headers = {} # may have client id / secret in the future + payload = { + "client_id": client_id, + "client_secret": client_secret, + "grant_type": "client_credentials" + } + r = requests.post( + oauth_uri, + headers=headers, + data=payload) + data = r.json() + + if 'error' in data: + # Cannot continue at this point + print("Unable to obtain an OAuth Token, see error below") + print(data) + return + + # Extract the access token from the response + token = data['access_token'] + print("Successfully obtained OAuth token\n\n") + + # Call a sample endpoint; "/config" is a good smoke test + headers = {"Authorization": f"Bearer {token}"} + config_uri = base_uri + "/api/catalog/v1/config" + r = requests.get(config_uri, headers=headers) + print(r.text) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument("--base-uri", help="The Base Polaris Server URI (ex: http://localhost:8181)", type=str) + parser.add_argument("--client-id", help="The Client ID of the OAuth2 Client Integration", type=str) + parser.add_argument("--client-secret", help="The Client Secret of the OAuth2 Client Integration", type=str) + args = parser.parse_args() + main(args.base_uri, args.client_id, args.client_secret) diff --git a/regtests/t_pyspark/src/conftest.py b/regtests/t_pyspark/src/conftest.py new file mode 100644 index 0000000000..2b3fce2ac7 --- /dev/null +++ b/regtests/t_pyspark/src/conftest.py @@ -0,0 +1,144 @@ +# Copyright (c) 2024 Snowflake Computing Inc.
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import codecs +import os +from typing import List + +import pytest + +from polaris.catalog.api.iceberg_catalog_api import IcebergCatalogAPI +from polaris.catalog.api_client import ApiClient as CatalogApiClient +from polaris.management import Catalog, AwsStorageConfigInfo, ApiClient, PolarisDefaultApi, Configuration, \ + CreateCatalogRequest, GrantCatalogRoleRequest, CatalogRole, ApiException, AddGrantRequest, CatalogGrant, \ + CatalogPrivilege, CreateCatalogRoleRequest + + +@pytest.fixture +def polaris_host(): + return os.getenv('POLARIS_HOST', 'localhost') + + +@pytest.fixture +def polaris_port(): + return int(os.getenv('POLARIS_PORT', '8181')) + + +@pytest.fixture +def polaris_url(polaris_host, polaris_port): + return f"http://{polaris_host}:{polaris_port}/api/management/v1" + + +@pytest.fixture +def polaris_catalog_url(polaris_host, polaris_port): + return f"http://{polaris_host}:{polaris_port}/api/catalog" + +@pytest.fixture +def test_bucket(): + return os.getenv('AWS_STORAGE_BUCKET') + +@pytest.fixture +def aws_role_arn(): + return os.getenv('AWS_ROLE_ARN') + +@pytest.fixture +def catalog_client(polaris_catalog_url): + """ + Create an iceberg catalog client with root credentials + :param polaris_catalog_url: + :param snowman: + :return: + """ + client = CatalogApiClient( + Configuration(access_token=os.getenv('REGTEST_ROOT_BEARER_TOKEN', 'principal:root;realm:default-realm'), + host=polaris_catalog_url)) + return 
IcebergCatalogAPI(client) + + +@pytest.fixture +def snowflake_catalog(root_client, catalog_client, test_bucket, aws_role_arn): + storage_conf = AwsStorageConfigInfo(storage_type="S3", + allowed_locations=[f"s3://{test_bucket}/polaris_test/"], + role_arn=aws_role_arn) + catalog_name = 'snowflake' + catalog = Catalog(name=catalog_name, type='INTERNAL', properties={ + "default-base-location": f"s3://{test_bucket}/polaris_test/snowflake_catalog", + "client.credentials-provider": "software.amazon.awssdk.auth.credentials.SystemPropertyCredentialsProvider" + }, + storage_config_info=storage_conf) + catalog.storage_config_info = storage_conf + try: + root_client.create_catalog(create_catalog_request=CreateCatalogRequest(catalog=catalog)) + resp = root_client.get_catalog(catalog_name=catalog.name) + root_client.assign_catalog_role_to_principal_role(principal_role_name='service_admin', + catalog_name=catalog_name, + grant_catalog_role_request=GrantCatalogRoleRequest( + catalog_role=CatalogRole(name='catalog_admin'))) + writer_catalog_role = create_catalog_role(root_client, resp, 'admin_writer') + root_client.add_grant_to_catalog_role(catalog_name, writer_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=catalog_name, + type='catalog', + privilege=CatalogPrivilege.CATALOG_MANAGE_CONTENT))) + root_client.assign_catalog_role_to_principal_role(principal_role_name='service_admin', + catalog_name=catalog_name, + grant_catalog_role_request=GrantCatalogRoleRequest( + catalog_role=writer_catalog_role)) + yield resp + finally: + namespaces = catalog_client.list_namespaces(catalog_name) + for n in namespaces.namespaces: + clear_namespace(catalog_name, catalog_client, n) + catalog_roles = root_client.list_catalog_roles(catalog_name) + for r in catalog_roles.roles: + if r.name != 'catalog_admin': + root_client.delete_catalog_role(catalog_name, r.name) + root_client.delete_catalog(catalog_name=catalog_name) + + +def create_catalog_role(api, catalog, role_name): + 
catalog_role = CatalogRole(name=role_name) + try: + api.create_catalog_role(catalog_name=catalog.name, + create_catalog_role_request=CreateCatalogRoleRequest(catalog_role=catalog_role)) + return api.get_catalog_role(catalog_name=catalog.name, catalog_role_name=role_name) + except ApiException: # role may already exist; return the existing one + return api.get_catalog_role(catalog_name=catalog.name, catalog_role_name=role_name) + + +def clear_namespace(catalog: str, catalog_client: IcebergCatalogAPI, namespace: List[str]): + formatted_namespace = format_namespace(namespace) + tables = catalog_client.list_tables(catalog, formatted_namespace) + for t in tables.identifiers: + catalog_client.drop_table(catalog, format_namespace(t.namespace), t.name, purge_requested=True) + views = catalog_client.list_views(catalog, formatted_namespace) + for v in views.identifiers: + catalog_client.drop_view(catalog, format_namespace(v.namespace), v.name) + nested_namespaces = catalog_client.list_namespaces(catalog, parent=formatted_namespace) + for n in nested_namespaces.namespaces: + clear_namespace(catalog, catalog_client, n) + catalog_client.drop_namespace(catalog, formatted_namespace) + + +def format_namespace(namespace): + return codecs.decode("1F", "hex").decode("UTF-8").join(namespace) + + +@pytest.fixture +def root_client(polaris_host, polaris_url): + client = ApiClient(Configuration(access_token=os.getenv('REGTEST_ROOT_BEARER_TOKEN', 'principal:root;realm:default-realm'), + host=polaris_url)) + api = PolarisDefaultApi(client) + return api diff --git a/regtests/t_pyspark/src/iceberg_spark.py b/regtests/t_pyspark/src/iceberg_spark.py new file mode 100644 index 0000000000..0e22749ca4 --- /dev/null +++ b/regtests/t_pyspark/src/iceberg_spark.py @@ -0,0 +1,122 @@ +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Spark connector with different catalog types.""" +from typing import Any, Dict, List, Optional, Union + +from pyspark.errors import PySparkRuntimeError +from pyspark.sql import SparkSession + + +class IcebergSparkSession: + """Create a Spark session that connects to Polaris. + + The session is expected to be used within a with statement, as in: + + with IcebergSparkSession( + credentials=f"{client_id}:{client_secret}", + aws_region='us-west-2', + polaris_url="http://polaris:8181/api/catalog", + catalog_name="catalog_name" + ) as spark: + spark.sql(f"USE snowflake.{hybrid_executor.database}.{hybrid_executor.schema}") + table_list = spark.sql("SHOW TABLES").collect() + """ + + def __init__( + self, + bearer_token: str = None, + credentials: str = None, + aws_region: str = "us-west-2", + catalog_name: str = None, + polaris_url: str = None, + realm: str = 'default-realm' + ): + """Constructor for Iceberg Spark session. 
Sets the member variables.""" + self.bearer_token = bearer_token + self.credentials = credentials + self.aws_region = aws_region + self.catalog_name = catalog_name + self.polaris_url = polaris_url + self.realm = realm + + def get_catalog_name(self): + """Get the catalog name of this spark session based on catalog_type.""" + return self.catalog_name + + def get_session(self): + """Get the real spark session.""" + return self.spark_session + + def sql(self, query: str, args: Optional[Union[Dict[str, Any], List]] = None, **kwargs): + """Wrapper for the sql function of SparkSession.""" + return self.spark_session.sql(query, args, **kwargs) + + def __enter__(self): + """Initial method for Iceberg Spark session. Creates a Spark session with specified configs. + """ + packages = [ + "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0", + "org.apache.hadoop:hadoop-aws:3.4.0", + "software.amazon.awssdk:bundle:2.23.19", + "software.amazon.awssdk:url-connection-client:2.23.19", + ] + excludes = ["org.checkerframework:checker-qual", "com.google.errorprone:error_prone_annotations"] + + packages_string = ",".join(packages) + excludes_string = ",".join(excludes) + catalog_name = self.get_catalog_name() + + creds = self.credentials + credConfig = f"spark.sql.catalog.{catalog_name}.credential" + if self.bearer_token is not None: + creds = self.bearer_token + credConfig = f"spark.sql.catalog.{catalog_name}.token" + spark_session_builder = ( + SparkSession.builder.config("spark.jars.packages", packages_string) + .config("spark.jars.excludes", excludes_string) + .config("spark.sql.iceberg.vectorization.enabled", "false") + .config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") + .config("spark.history.fs.logDirectory", "/home/iceberg/spark-events") + .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") + .config( + "spark.hadoop.fs.s3a.aws.credentials.provider", + 
"org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider", + ) + .config( + f"spark.sql.catalog.{catalog_name}", "org.apache.iceberg.spark.SparkCatalog" + ) + .config(f"spark.sql.catalog.{catalog_name}.header.X-Iceberg-Access-Delegation", "true") + .config(f"spark.sql.catalog.{catalog_name}.type", "rest") + .config(f"spark.sql.catalog.{catalog_name}.uri", self.polaris_url) + .config(f"spark.sql.catalog.{catalog_name}.warehouse", self.catalog_name) + .config(f"spark.sql.catalog.{catalog_name}.scope", 'PRINCIPAL_ROLE:ALL') + .config(f"spark.sql.catalog.{catalog_name}.header.realm", self.realm) + .config(f"spark.sql.catalog.{catalog_name}.client.region", self.aws_region) + .config(credConfig, creds) + .config("spark.ui.showConsoleProgress", False) + ) + + self.spark_session = spark_session_builder.getOrCreate() + self.quiet_logs(self.spark_session.sparkContext) + return self + + def quiet_logs(self, sc): + logger = sc._jvm.org.apache.log4j + logger.LogManager.getLogger("org").setLevel(logger.Level.ERROR) + logger.LogManager.getLogger("akka").setLevel(logger.Level.ERROR) + + def __exit__(self, exc_type, exc_val, exc_tb): + """Destructor for Iceberg Spark session. Stops the Spark session.""" + self.spark_session.stop() diff --git a/regtests/t_pyspark/src/test_spark_sql_s3_with_privileges.py b/regtests/t_pyspark/src/test_spark_sql_s3_with_privileges.py new file mode 100644 index 0000000000..313cd4334b --- /dev/null +++ b/regtests/t_pyspark/src/test_spark_sql_s3_with_privileges.py @@ -0,0 +1,904 @@ +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import codecs +import os +import time +import uuid +from urllib.parse import unquote + +import boto3 +import botocore +import pytest +from py4j.protocol import Py4JJavaError + +from botocore.exceptions import ClientError + +from iceberg_spark import IcebergSparkSession +from polaris.catalog import CreateNamespaceRequest, CreateTableRequest, ModelSchema, StructField +from polaris.catalog.api.iceberg_catalog_api import IcebergCatalogAPI +from polaris.catalog.api.iceberg_o_auth2_api import IcebergOAuth2API +from polaris.catalog.api_client import ApiClient as CatalogApiClient +from polaris.catalog.configuration import Configuration +from polaris.management import ApiClient as ManagementApiClient +from polaris.management import PolarisDefaultApi, Principal, PrincipalRole, CatalogRole, \ + CatalogGrant, CatalogPrivilege, ApiException, CreateCatalogRoleRequest, CreatePrincipalRoleRequest, \ + CreatePrincipalRequest, AddGrantRequest, GrantCatalogRoleRequest, GrantPrincipalRoleRequest + + +@pytest.fixture +def snowman(polaris_url, polaris_catalog_url, root_client, snowflake_catalog): + """ + create the snowman principal with full table/namespace privileges + :param root_client: + :param snowflake_catalog: + :return: + """ + snowman_name = "snowman" + table_writer_rolename = "table_writer" + snowflake_writer_rolename = "snowflake_writer" + try: + snowman = create_principal(polaris_url, polaris_catalog_url, root_client, snowman_name) + writer_principal_role = create_principal_role(root_client, table_writer_rolename) + writer_catalog_role = 
create_catalog_role(root_client, snowflake_catalog, snowflake_writer_rolename) + root_client.assign_catalog_role_to_principal_role(principal_role_name=writer_principal_role.name, + catalog_name=snowflake_catalog.name, + grant_catalog_role_request=GrantCatalogRoleRequest( + catalog_role=writer_catalog_role)) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, writer_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.TABLE_FULL_METADATA))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, writer_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.VIEW_FULL_METADATA))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, writer_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.TABLE_WRITE_DATA))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, writer_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.NAMESPACE_FULL_METADATA))) + + root_client.assign_principal_role(snowman.principal.name, + grant_principal_role_request=GrantPrincipalRoleRequest( + principal_role=writer_principal_role)) + yield snowman + finally: + root_client.delete_principal(snowman_name) + root_client.delete_principal_role(principal_role_name=table_writer_rolename) + root_client.delete_catalog_role(catalog_role_name=snowflake_writer_rolename, catalog_name=snowflake_catalog.name) + + +@pytest.fixture +def reader(polaris_url, polaris_catalog_url, root_client, snowflake_catalog): + """ + create the test_reader principal with table/namespace list and read privileges + + :param root_client: + :param snowflake_catalog: + :return: + """ + reader_principal_name = 'test_reader' + reader_principal_role_name = 
"table_reader" + reader_catalog_role_name = 'snowflake_reader' + try: + reader = create_principal(polaris_url, polaris_catalog_url, root_client, reader_principal_name) + reader_principal_role = create_principal_role(root_client, reader_principal_role_name) + reader_catalog_role = create_catalog_role(root_client, snowflake_catalog, reader_catalog_role_name) + + root_client.assign_catalog_role_to_principal_role(principal_role_name=reader_principal_role.name, + catalog_name=snowflake_catalog.name, + grant_catalog_role_request=GrantCatalogRoleRequest( + catalog_role=reader_catalog_role)) + root_client.assign_principal_role(reader.principal.name, + grant_principal_role_request=GrantPrincipalRoleRequest( + principal_role=reader_principal_role)) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, reader_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.TABLE_READ_DATA))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, reader_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.TABLE_LIST))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, reader_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.TABLE_READ_PROPERTIES))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, reader_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.NAMESPACE_LIST))) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, reader_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.NAMESPACE_READ_PROPERTIES))) + yield reader + finally: + root_client.delete_principal(reader_principal_name) + 
root_client.delete_principal_role(principal_role_name=reader_principal_role_name) + root_client.delete_catalog_role(catalog_role_name=reader_catalog_role_name, catalog_name=snowflake_catalog.name) + + +@pytest.fixture +def snowman_catalog_client(polaris_catalog_url, snowman): + """ + Create an iceberg catalog client with snowman credentials + :param polaris_catalog_url: + :param snowman: + :return: + """ + client = CatalogApiClient(Configuration(username=snowman.principal.client_id, + password=snowman.credentials.client_secret, + host=polaris_catalog_url)) + oauth_api = IcebergOAuth2API(client) + token = oauth_api.get_token(scope='PRINCIPAL_ROLE:ALL', client_id=snowman.principal.client_id, + client_secret=snowman.credentials.client_secret, + grant_type='client_credentials', + _headers={'realm': 'default-realm'}) + + return IcebergCatalogAPI(CatalogApiClient(Configuration(access_token=token.access_token, + host=polaris_catalog_url))) + +@pytest.fixture +def creator_catalog_client(polaris_catalog_url, creator): + """ + Create an iceberg catalog client with TABLE_CREATE credentials + :param polaris_catalog_url: + :param creator: + :return: + """ + client = CatalogApiClient(Configuration(username=creator.principal.client_id, + password=creator.credentials.client_secret, + host=polaris_catalog_url)) + oauth_api = IcebergOAuth2API(client) + token = oauth_api.get_token(scope='PRINCIPAL_ROLE:ALL', client_id=creator.principal.client_id, + client_secret=creator.credentials.client_secret, + grant_type='client_credentials', + _headers={'realm': 'default-realm'}) + + return IcebergCatalogAPI(CatalogApiClient(Configuration(access_token=token.access_token, + host=polaris_catalog_url))) + + +@pytest.fixture +def creator(polaris_url, polaris_catalog_url, root_client, snowflake_catalog): + """ + create the creator principal with only TABLE_CREATE privileges + :param root_client: + :param snowflake_catalog: + :return: + """ + creator_name = "creator" + principal_role = 
"creator_principal_role" + catalog_role = "creator_catalog_role" + try: + creator = create_principal(polaris_url, polaris_catalog_url, root_client, creator_name) + creator_principal_role = create_principal_role(root_client, principal_role) + creator_catalog_role = create_catalog_role(root_client, snowflake_catalog, catalog_role) + + root_client.assign_catalog_role_to_principal_role(principal_role_name=creator_principal_role.name, + catalog_name=snowflake_catalog.name, + grant_catalog_role_request=GrantCatalogRoleRequest( + catalog_role=creator_catalog_role)) + root_client.add_grant_to_catalog_role(snowflake_catalog.name, creator_catalog_role.name, + AddGrantRequest(grant=CatalogGrant(catalog_name=snowflake_catalog.name, + type='catalog', + privilege=CatalogPrivilege.TABLE_CREATE))) + root_client.assign_principal_role(creator.principal.name, + grant_principal_role_request=GrantPrincipalRoleRequest( + principal_role=creator_principal_role)) + yield creator + finally: + root_client.delete_principal(creator_name) + root_client.delete_principal_role(principal_role_name=principal_role) + root_client.delete_catalog_role(catalog_role_name=catalog_role, catalog_name=snowflake_catalog.name) + + +@pytest.fixture +def reader_catalog_client(polaris_catalog_url, reader): + """ + Create an iceberg catalog client with test_reader credentials + :param polaris_catalog_url: + :param reader: + :return: + """ + client = CatalogApiClient(Configuration(username=reader.principal.client_id, + password=reader.credentials.client_secret, + host=polaris_catalog_url)) + oauth_api = IcebergOAuth2API(client) + token = oauth_api.get_token(scope='PRINCIPAL_ROLE:ALL', client_id=reader.principal.client_id, + client_secret=reader.credentials.client_secret, + grant_type='client_credentials', + _headers={'realm': 'default-realm'}) + + return IcebergCatalogAPI(CatalogApiClient(Configuration(access_token=token.access_token, + host=polaris_catalog_url))) + + 
+@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_credentials(root_client, snowflake_catalog, polaris_catalog_url, snowman, reader): + """ + Basic spark test - using snowman, create namespaces and a table. Insert into the table and read records back. + + Using the reader principal's credentials verify read access. Validate the reader cannot insert into the table. + :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :param reader: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + spark.sql('CREATE TABLE iceberg_table (col1 int, col2 string)') + spark.sql('SHOW TABLES') + spark.sql("""INSERT INTO iceberg_table VALUES + (10, 'mystring'), + (20, 'anotherstring'), + (30, null) + """) + count = spark.sql("SELECT * FROM iceberg_table").count() + assert count == 3 + + # switch users to the reader. 
we can query, show namespaces, but we can't insert + with IcebergSparkSession(credentials=f'{reader.principal.client_id}:{reader.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + count = spark.sql("SELECT * FROM iceberg_table").count() + assert count == 3 + try: + spark.sql("""INSERT INTO iceberg_table VALUES + (10, 'mystring'), + (20, 'anotherstring'), + (30, null) + """) + pytest.fail("Expected exception when trying to write without permission") + except Py4JJavaError: + # a bare except here would also swallow the exception raised by pytest.fail above + print("Exception caught attempting to write without permission") + + # switch back to delete stuff + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('USE db1.schema') + spark.sql('DROP TABLE iceberg_table') + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('DROP NAMESPACE db1.schema') + spark.sql('DROP NAMESPACE db1') + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_cannot_create_table_outside_of_namespace_dir(root_client, snowflake_catalog, polaris_catalog_url, snowman, reader): + """ + Basic spark test - using snowman, create a namespace and try to create a table outside of the namespace directory. This should fail. 
+ :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :param reader: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + table_location = snowflake_catalog.properties.default_base_location + '/db1/outside_schema/table_outside_namespace' + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + try: + spark.sql(f"CREATE TABLE iceberg_table_outside_namespace (col1 int, col2 string) LOCATION '{table_location}'") + pytest.fail("Expected to fail when creating table outside of namespace directory") + except Py4JJavaError as e: + assert "is not in the list of allowed locations" in e.java_exception.getMessage() + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_creates_table_in_custom_namespace_dir(root_client, snowflake_catalog, polaris_catalog_url, snowman, reader): + """ + Basic spark test - using snowman, create a namespace with a custom location and create a table in it. Verify that the table's metadata is written under the custom namespace location. 
+ :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :param reader: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + namespace_location = snowflake_catalog.properties.default_base_location + '/db1/custom_location' + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql(f"CREATE NAMESPACE db1.schema LOCATION '{namespace_location}'") + spark.sql('USE db1.schema') + spark.sql("CREATE TABLE table_in_custom_namespace_location (col1 int, col2 string)") + assert spark.sql("SELECT * FROM table_in_custom_namespace_location").count() == 0 + # check the metadata and assert the custom namespace location is used + entries = spark.sql("SELECT file FROM db1.schema.table_in_custom_namespace_location.metadata_log_entries").collect() + assert namespace_location in entries[0][0] + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_can_create_table_in_custom_allowed_dir(root_client, snowflake_catalog, polaris_catalog_url, snowman, reader): + """ + Basic spark test - using snowman, create a namespace with a custom location and create a table at an explicit location inside it. This should succeed because the table location falls within the namespace's custom location. 
+ :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :param reader: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + table_location = snowflake_catalog.properties.default_base_location + '/db1/custom_schema_location/table_outside_namespace' + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql(f"CREATE NAMESPACE db1.schema LOCATION '{snowflake_catalog.properties.default_base_location}/db1/custom_schema_location'") + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + # this is supported because it is inside of the custom namespace location + spark.sql(f"CREATE TABLE iceberg_table_outside_namespace (col1 int, col2 string) LOCATION '{table_location}'") + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_cannot_create_view_overlapping_table(root_client, snowflake_catalog, polaris_catalog_url, snowman, reader): + """ + Basic spark test - using snowman, create a table and then try to create a view whose location overlaps the table's location. This should fail. 
+ :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :param reader: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + table_location = snowflake_catalog.properties.default_base_location + '/db1/schema/table_dir' + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql(f"CREATE NAMESPACE db1.schema LOCATION '{snowflake_catalog.properties.default_base_location}/db1/schema'") + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + spark.sql(f"CREATE TABLE my_iceberg_table (col1 int, col2 string) LOCATION '{table_location}'") + try: + spark.sql(f"CREATE VIEW disallowed_view (int, string) TBLPROPERTIES ('location'= '{table_location}') AS SELECT * FROM my_iceberg_table") + pytest.fail("Expected to fail when creating a view at a location that overlaps an existing table") + except Py4JJavaError as e: + assert "conflicts with existing table or namespace at location" in e.java_exception.getMessage() + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_credentials_can_delete_after_purge(root_client, snowflake_catalog, polaris_catalog_url, snowman, + snowman_catalog_client, test_bucket): + """ + Using snowman, create namespaces and a table. Insert into the table in multiple operations and update existing records + to generate multiple metadata.json files and manifests. Drop the table with purge=true. Poll S3 and validate all of + the files are deleted. 
+ :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :param reader: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + table_name = f'iceberg_test_table_{str(uuid.uuid4())[-10:]}' + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + spark.sql(f'CREATE TABLE {table_name} (col1 int, col2 string)') + spark.sql('SHOW TABLES') + + # several inserts and an update, which should cause earlier files to show up as deleted in the later manifests + spark.sql(f"""INSERT INTO {table_name} VALUES + (10, 'mystring'), + (20, 'anotherstring'), + (30, null) + """) + spark.sql(f"""INSERT INTO {table_name} VALUES + (40, 'mystring'), + (50, 'anotherstring'), + (60, null) + """) + spark.sql(f"""INSERT INTO {table_name} VALUES + (70, 'mystring'), + (80, 'anotherstring'), + (90, null) + """) + spark.sql(f"UPDATE {table_name} SET col2='changed string' WHERE col1 BETWEEN 20 AND 50") + count = spark.sql(f"SELECT * FROM {table_name}").count() + + assert count == 9 + + # fetch aws credentials to examine the metadata files + response = snowman_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), table_name, + "true") + assert response.config is not None + assert 's3.access-key-id' in response.config + assert 's3.secret-access-key' in response.config + assert 's3.session-token' in response.config + + s3 = boto3.client('s3', + aws_access_key_id=response.config['s3.access-key-id'], + aws_secret_access_key=response.config['s3.secret-access-key'], + aws_session_token=response.config['s3.session-token']) + + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=f'polaris_test/snowflake_catalog/db1/schema/{table_name}/data/') + 
assert objects is not None + assert 'Contents' in objects + assert len(objects['Contents']) >= 4 # the exact count varies - at least one file for each insert and one for the update + print(f"Found {len(objects['Contents'])} data files in S3 before drop") + + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=f'polaris_test/snowflake_catalog/db1/schema/{table_name}/metadata/') + assert objects is not None + assert 'Contents' in objects + assert len(objects['Contents']) == 15 # 5 metadata.json files, 4 manifest lists, and 6 manifests + print(f"Found {len(objects['Contents'])} metadata files in S3 before drop") + + # use the api client to ensure the purge flag is set to true + snowman_catalog_client.drop_table(snowflake_catalog.name, + codecs.decode("1F", "hex").decode("UTF-8").join(['db1', 'schema']), table_name, + purge_requested=True) + spark.sql('DROP NAMESPACE db1.schema') + spark.sql('DROP NAMESPACE db1') + print("Dropped table with purge - waiting for files to be deleted") + attempts = 0 + + # watch the data directory. 
metadata will be deleted first, so if data directory is clear, we can expect + # metadata directory to be clear also + while 'Contents' in objects and len(objects['Contents']) > 0 and attempts < 60: + time.sleep(1) # seconds, not milliseconds ;) + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=f'polaris_test/snowflake_catalog/db1/schema/{table_name}/data/') + attempts = attempts + 1 + + if 'Contents' in objects and len(objects['Contents']) > 0: + pytest.fail(f"Expected all data to be deleted, but found data files {objects['Contents']}") + + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=f'polaris_test/snowflake_catalog/db1/schema/{table_name}/metadata/') + if 'Contents' in objects and len(objects['Contents']) > 0: + pytest.fail(f"Expected all metadata to be deleted, but found metadata files {objects['Contents']}") + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +# @pytest.mark.skip(reason="This test is flaky") +def test_spark_credentials_can_create_views(snowflake_catalog, polaris_catalog_url, snowman): + """ + Using snowman, create namespaces and a table. Insert into the table in multiple operations and update existing records + to generate multiple metadata.json files and manifests. Create a view on the table. Verify the state of the view + matches the state of the table. 
+ :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + table_name = f'iceberg_test_table_{str(uuid.uuid4())[-10:]}' + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('SHOW NAMESPACES') + spark.sql('USE db1.schema') + spark.sql(f'CREATE TABLE {table_name} (col1 int, col2 string)') + spark.sql('SHOW TABLES') + + # several inserts + spark.sql(f"""INSERT INTO {table_name} VALUES + (10, 'mystring'), + (20, 'anotherstring'), + (30, null) + """) + spark.sql(f"""INSERT INTO {table_name} VALUES + (40, 'mystring'), + (50, 'anotherstring'), + (60, null) + """) + spark.sql(f"""INSERT INTO {table_name} VALUES + (70, 'mystring'), + (80, 'anotherstring'), + (90, null) + """) + # verify the view reflects the current state of the table + spark.sql(f"CREATE VIEW {table_name}_view AS SELECT col2 FROM {table_name} where col1 > 30 ORDER BY col1 DESC") + view_records = spark.sql(f"SELECT * FROM {table_name}_view").collect() + assert len(view_records) == 6 + assert len(view_records[0]) == 1 + assert view_records[1][0] == 'anotherstring' + assert view_records[5][0] == 'mystring' + + # Update some records. 
Assert the view reflects the new state + spark.sql(f"UPDATE {table_name} SET col2='changed string' WHERE col1 BETWEEN 20 AND 50") + view_records = spark.sql(f"SELECT * FROM {table_name}_view").collect() + assert len(view_records) == 6 + assert view_records[5][0] == 'changed string' + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_credentials_s3_direct_with_write(root_client, snowflake_catalog, polaris_catalog_url, + snowman, snowman_catalog_client, test_bucket): + """ + Create two tables using Spark. Then call the loadTable api directly with snowman token to fetch the vended credentials + for the first table. + Verify that the credentials returned to snowman can read and write to the table's directory in S3, but don't allow + reads or writes to the other table's directory + :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman_catalog_client: + :param reader_catalog_client: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('USE db1.schema') + spark.sql('CREATE TABLE iceberg_table (col1 int, col2 string)') + spark.sql('CREATE TABLE iceberg_table_2 (col1 int, col2 string)') + + table2_metadata = snowman_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), + "iceberg_table_2", + "s3_direct_with_write_table2").metadata_location + response = snowman_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), "iceberg_table", + "s3_direct_with_write") + assert response.config is not None + assert 's3.access-key-id' in response.config + assert 's3.secret-access-key' in response.config + assert 
's3.session-token' in response.config + + s3 = boto3.client('s3', + aws_access_key_id=response.config['s3.access-key-id'], + aws_secret_access_key=response.config['s3.secret-access-key'], + aws_session_token=response.config['s3.session-token']) + + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix='polaris_test/snowflake_catalog/db1/schema/iceberg_table/metadata/') + assert objects is not None + assert 'Contents' in objects + assert len(objects['Contents']) > 0 + + metadata_file = next(f for f in objects['Contents'] if f['Key'].endswith('metadata.json')) + assert metadata_file is not None + + metadata_contents = s3.get_object(Bucket=test_bucket, Key=metadata_file['Key']) + assert metadata_contents is not None + assert metadata_contents['ContentLength'] > 0 + + put_object = s3.put_object(Bucket=test_bucket, Key=f"{metadata_file['Key']}.bak", + Body=metadata_contents['Body'].read()) + assert put_object is not None + assert 'VersionId' in put_object + assert put_object['VersionId'] is not None + + # list files in the other table's directory. 
The access policy should restrict this + try: + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix='polaris_test/snowflake_catalog/db1/schema/iceberg_table_2/metadata/') + pytest.fail('Expected exception listing file outside of table directory') + except botocore.exceptions.ClientError as error: + print(error) + + try: + metadata_contents = s3.get_object(Bucket=test_bucket, Key=table2_metadata) + pytest.fail("Expected exception reading file outside of table directory") + except botocore.exceptions.ClientError as error: + print(error) + + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('USE db1.schema') + spark.sql('DROP TABLE iceberg_table') + spark.sql('DROP TABLE iceberg_table_2') + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('DROP NAMESPACE db1.schema') + spark.sql('DROP NAMESPACE db1') + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'false').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_credentials_s3_direct_without_write(root_client, snowflake_catalog, polaris_catalog_url, + snowman, reader_catalog_client, test_bucket): + """ + Create two tables using Spark. Then call the loadTable api directly with test_reader token to fetch the vended + credentials for the first table. + Verify that the credentials returned to test_reader allow reads, but don't allow writes to the table's directory + and don't allow reads or writes anywhere else on S3. This verifies that Polaris's authz model not only prevents + users from updating metadata to enforce read-only access, but also uses credential scoping to enforce restrictions at + the storage layer. 
+ :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param reader_catalog_client: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('USE db1.schema') + spark.sql('CREATE TABLE iceberg_table (col1 int, col2 string)') + spark.sql('CREATE TABLE iceberg_table_2 (col1 int, col2 string)') + + table2_metadata = reader_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), + "iceberg_table_2", + "s3_direct_with_write_table2").metadata_location + + response = reader_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), "iceberg_table", + "s3_direct_without_write") + assert response.config is not None + assert 's3.access-key-id' in response.config + assert 's3.secret-access-key' in response.config + assert 's3.session-token' in response.config + + s3 = boto3.client('s3', + aws_access_key_id=response.config['s3.access-key-id'], + aws_secret_access_key=response.config['s3.secret-access-key'], + aws_session_token=response.config['s3.session-token']) + + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix='polaris_test/snowflake_catalog/db1/schema/iceberg_table/metadata/') + assert objects is not None + assert 'Contents' in objects + assert len(objects['Contents']) > 0 + + metadata_file = next(f for f in objects['Contents'] if f['Key'].endswith('metadata.json')) + assert metadata_file is not None + + metadata_contents = s3.get_object(Bucket=test_bucket, Key=metadata_file['Key']) + assert metadata_contents is not None + assert metadata_contents['ContentLength'] > 0 + + # try to write. 
Expect it to fail + try: + put_object = s3.put_object(Bucket=test_bucket, Key=f"{metadata_file['Key']}.bak", + Body=metadata_contents['Body'].read()) + pytest.fail("Expect exception trying to write to table directory") + except botocore.exceptions.ClientError as error: + print(error) + + # list files in the other table's directory. The access policy should restrict this + try: + objects = s3.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix='polaris_test/snowflake_catalog/db1/schema/iceberg_table_2/metadata/') + pytest.fail('Expected exception listing file outside of table directory') + except botocore.exceptions.ClientError as error: + print(error) + + try: + metadata_contents = s3.get_object(Bucket=test_bucket, Key=table2_metadata) + pytest.fail("Expected exception reading file outside of table directory") + except botocore.exceptions.ClientError as error: + print(error) + + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('USE db1.schema') + spark.sql('DROP TABLE iceberg_table') + spark.sql('DROP TABLE iceberg_table_2') + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('DROP NAMESPACE db1.schema') + spark.sql('DROP NAMESPACE db1') + + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'false').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def test_spark_credentials_s3_direct_without_read( + snowflake_catalog, snowman_catalog_client, creator_catalog_client, test_bucket): + """ + Create a table using `creator`, which does not have TABLE_READ_DATA and ensure that credentials to read the table + are not vended. 
+ """ + snowman_catalog_client.create_namespace( + prefix=snowflake_catalog.name, + create_namespace_request=CreateNamespaceRequest( + namespace=["some_schema"] + ) + ) + + response = creator_catalog_client.create_table( + prefix=snowflake_catalog.name, + namespace="some_schema", + x_iceberg_access_delegation="true", + create_table_request=CreateTableRequest( + name="some_table", + var_schema=ModelSchema( + type = 'struct', + fields = [], + ) + ) + ) + + assert not response.config + + snowman_catalog_client.drop_table( + prefix=snowflake_catalog.name, + namespace="some_schema", + table="some_table" + ) + snowman_catalog_client.drop_namespace( + prefix=snowflake_catalog.name, + namespace="some_schema" + ) + + +def create_principal(polaris_url, polaris_catalog_url, api, principal_name): + principal = Principal(name=principal_name, type="SERVICE") + try: + principal_result = api.create_principal(CreatePrincipalRequest(principal=principal)) + + token_client = CatalogApiClient(Configuration(username=principal_result.principal.client_id, + password=principal_result.credentials.client_secret, + host=polaris_catalog_url)) + oauth_api = IcebergOAuth2API(token_client) + token = oauth_api.get_token(scope='PRINCIPAL_ROLE:ALL', client_id=principal_result.principal.client_id, + client_secret=principal_result.credentials.client_secret, + grant_type='client_credentials', + _headers={'realm': 'default-realm'}) + rotate_client = ManagementApiClient(Configuration(access_token=token.access_token, + host=polaris_url)) + rotate_api = PolarisDefaultApi(rotate_client) + + rotate_credentials = rotate_api.rotate_credentials(principal_name=principal_name) + return rotate_credentials + except ApiException as e: + if e.status == 409: + return rotate_api.rotate_credentials(principal_name=principal_name) + else: + raise e + +@pytest.mark.skipif(os.environ.get('AWS_TEST_ENABLED', 'False').lower() != 'true', reason='AWS_TEST_ENABLED is not set or is false') +def 
test_spark_credentials_s3_scoped_to_metadata_data_locations(root_client, snowflake_catalog, polaris_catalog_url, + snowman, snowman_catalog_client, test_bucket): + """ + Create a table using Spark. Then call the loadTable api directly with snowman token to fetch the vended credentials + for the table. + Verify that the credentials returned to snowman only work for locations ending with the metadata or data directory + :param root_client: + :param snowflake_catalog: + :param polaris_catalog_url: + :param snowman_catalog_client: + :param reader_catalog_client: + :return: + """ + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('CREATE NAMESPACE db1') + spark.sql('CREATE NAMESPACE db1.schema') + spark.sql('USE db1.schema') + spark.sql('CREATE TABLE iceberg_table_scope_loc(col1 int, col2 string)') + spark.sql(f'''CREATE TABLE iceberg_table_scope_loc_slashes (col1 int, col2 string) LOCATION \'s3://{test_bucket}/polaris_test/snowflake_catalog/db1/schema/iceberg_table_scope_loc_slashes/path_with_slashes///////\'''') + + prefix1 = 'polaris_test/snowflake_catalog/db1/schema/iceberg_table_scope_loc' + prefix2 = 'polaris_test/snowflake_catalog/db1/schema/iceberg_table_scope_loc_slashes/path_with_slashes' + response1 = snowman_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), + "iceberg_table_scope_loc", + "s3_scoped_table_locations") + response2 = snowman_catalog_client.load_table(snowflake_catalog.name, unquote('db1%1Fschema'), + "iceberg_table_scope_loc_slashes", + "s3_scoped_table_locations_with_slashes") + assert response1 is not None + assert response2 is not None + assert response1.metadata_location.startswith(f"s3://{test_bucket}/{prefix1}/metadata/") + # ensure that the slashes are removed before "/metadata/" + assert 
response2.metadata_location.startswith(f"s3://{test_bucket}/{prefix2}/metadata/") + + s3_1 = boto3.client('s3', + aws_access_key_id=response1.config['s3.access-key-id'], + aws_secret_access_key=response1.config['s3.secret-access-key'], + aws_session_token=response1.config['s3.session-token']) + + s3_2 = boto3.client('s3', + aws_access_key_id=response2.config['s3.access-key-id'], + aws_secret_access_key=response2.config['s3.secret-access-key'], + aws_session_token=response2.config['s3.session-token']) + for client, prefix in [(s3_1, prefix1), (s3_2, prefix2)]: + objects = client.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=f'{prefix}/metadata/') + assert objects is not None + assert 'Contents' in objects, f'listing metadata files failed in prefix: {prefix}/metadata/' + + objects = client.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=f'{prefix}/data/') + assert objects is not None + # no insert was executed, so there should be no data files + assert 'Contents' not in objects, f'No contents should be in prefix: {prefix}/data/' + + # listing files in the table's other directories should fail.
The access policy should restrict this + # even for metadata and data, the prefix needs a trailing `/` + for invalidPrefix in [f'{prefix}/other_directory/', f'{prefix}/metadata', f'{prefix}/data']: + try: + client.list_objects(Bucket=test_bucket, Delimiter='/', + Prefix=invalidPrefix) + pytest.fail(f'Expected exception listing files outside of allowed table directories, but it succeeded on location: {invalidPrefix}') + except botocore.exceptions.ClientError as error: + assert error.response['Error']['Code'] == 'AccessDenied', 'Expected exception AccessDenied, but got: ' + error.response['Error']['Code'] + ' on location: ' + invalidPrefix + + with IcebergSparkSession(credentials=f'{snowman.principal.client_id}:{snowman.credentials.client_secret}', + catalog_name=snowflake_catalog.name, + polaris_url=polaris_catalog_url) as spark: + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('USE db1.schema') + spark.sql('DROP TABLE iceberg_table_scope_loc PURGE') + spark.sql('DROP TABLE iceberg_table_scope_loc_slashes PURGE') + spark.sql(f'USE {snowflake_catalog.name}') + spark.sql('DROP NAMESPACE db1.schema') + spark.sql('DROP NAMESPACE db1') + +def create_catalog_role(api, catalog, role_name): + catalog_role = CatalogRole(name=role_name) + try: + api.create_catalog_role(catalog_name=catalog.name, + create_catalog_role_request=CreateCatalogRoleRequest(catalog_role=catalog_role)) + return api.get_catalog_role(catalog_name=catalog.name, catalog_role_name=role_name) + except ApiException as e: + return api.get_catalog_role(catalog_name=catalog.name, catalog_role_name=role_name) + + +def create_principal_role(api, role_name): + principal_role = PrincipalRole(name=role_name) + try: + api.create_principal_role(CreatePrincipalRoleRequest(principal_role=principal_role)) + return api.get_principal_role(principal_role_name=role_name) + except ApiException as e: + return api.get_principal_role(principal_role_name=role_name) diff --git
a/regtests/t_spark_sql/ref/spark_sql_azure_blob.sh.ref b/regtests/t_spark_sql/ref/spark_sql_azure_blob.sh.ref new file mode 100755 index 0000000000..5c18f802cd --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_azure_blob.sh.ref @@ -0,0 +1,35 @@ +{"defaults":{"default-base-location":"abfss://polaris-container@polarisadls.blob.core.windows.net/polaris-test/spark_sql_blob_catalog/"},"overrides":{"prefix":"spark_sql_azure_blob_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int); +spark-sql ()> show tables in db1; +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> + > insert into tbl1 values (123), (234); +spark-sql (db1.schema1)> select * from tbl1; +123 +234 +spark-sql (db1.schema1)> + > drop table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; +spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/ref/spark_sql_azure_dfs.sh.ref b/regtests/t_spark_sql/ref/spark_sql_azure_dfs.sh.ref new file mode 100755 index 0000000000..422389565d --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_azure_dfs.sh.ref @@ -0,0 +1,35 @@ +{"defaults":{"default-base-location":"abfss://polaris-container@polarisadls.dfs.core.windows.net/polaris-test/spark_sql_dfs_catalog/"},"overrides":{"prefix":"spark_sql_azure_dfs_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; 
+spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int); +spark-sql ()> show tables in db1; +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> + > insert into tbl1 values (123), (234); +spark-sql (db1.schema1)> select * from tbl1; +123 +234 +spark-sql (db1.schema1)> + > drop table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; +spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/ref/spark_sql_basic.sh.ref b/regtests/t_spark_sql/ref/spark_sql_basic.sh.ref new file mode 100755 index 0000000000..1ab8f91896 --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_basic.sh.ref @@ -0,0 +1,39 @@ +{"defaults":{"default-base-location":"file:///tmp/spark_sql_s3_catalog"},"overrides":{"prefix":"spark_sql_basic_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int); +spark-sql ()> show tables in db1; +spark-sql ()> show tables in db1.schema1; +tbl1 +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> show tables; +tbl1 +spark-sql (db1.schema1)> + > insert into tbl1 values (123), (234); +spark-sql (db1.schema1)> select * from tbl1; +123 +234 +spark-sql (db1.schema1)> + > drop table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; 
+spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/ref/spark_sql_gcp.sh.ref b/regtests/t_spark_sql/ref/spark_sql_gcp.sh.ref new file mode 100755 index 0000000000..f083b9a0af --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_gcp.sh.ref @@ -0,0 +1,35 @@ +{"defaults":{"default-base-location":"gs://polaris-test1/polaris_test/spark_sql_gcp_catalog/"},"overrides":{"prefix":"spark_sql_gcp_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int); +spark-sql ()> show tables in db1; +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> + > insert into tbl1 values (123), (234); +spark-sql (db1.schema1)> select * from tbl1; +123 +234 +spark-sql (db1.schema1)> + > drop table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; +spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/ref/spark_sql_s3.sh.ref b/regtests/t_spark_sql/ref/spark_sql_s3.sh.ref new file mode 100755 index 0000000000..885663c151 --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_s3.sh.ref @@ -0,0 +1,35 @@ +{"defaults":{"default-base-location":"s3://datalake-storage-team/polaris_test/spark_sql_s3_catalog"},"overrides":{"prefix":"spark_sql_s3_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> 
show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int); +spark-sql ()> show tables in db1; +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> + > insert into tbl1 values (123), (234); +spark-sql (db1.schema1)> select * from tbl1; +123 +234 +spark-sql (db1.schema1)> + > drop table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; +spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/ref/spark_sql_s3_cross_region.sh.ref b/regtests/t_spark_sql/ref/spark_sql_s3_cross_region.sh.ref new file mode 100644 index 0000000000..957214cc17 --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_s3_cross_region.sh.ref @@ -0,0 +1,35 @@ +{"defaults":{"default-base-location":"s3://sfc-role-stage-for-reg-test-do-not-modify-write-only/polaris_test/spark_sql_s3_cross_region_catalog/"},"overrides":{"prefix":"spark_sql_s3_cross_region_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int); +spark-sql ()> show tables in db1; +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> + > insert into tbl1 values (123), (234); +spark-sql (db1.schema1)> select * from tbl1; +123 +234 +spark-sql (db1.schema1)> + > drop 
table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; +spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/ref/spark_sql_views.sh.ref b/regtests/t_spark_sql/ref/spark_sql_views.sh.ref new file mode 100755 index 0000000000..44e64f2c29 --- /dev/null +++ b/regtests/t_spark_sql/ref/spark_sql_views.sh.ref @@ -0,0 +1,52 @@ +{"defaults":{"default-base-location":"file:///tmp/spark_sql_s3_catalog"},"overrides":{"prefix":"spark_sql_views_catalog"}} +Catalog created +spark-sql (default)> use polaris; +spark-sql ()> show namespaces; +spark-sql ()> create namespace db1; +spark-sql ()> create namespace db2; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> + > create namespace db1.schema1; +spark-sql ()> show namespaces; +db1 +db2 +spark-sql ()> show namespaces in db1; +db1.schema1 +spark-sql ()> + > create table db1.schema1.tbl1 (col1 int, col2 string); +spark-sql ()> show tables in db1; +spark-sql ()> show tables in db1.schema1; +tbl1 +spark-sql ()> use db1.schema1; +spark-sql (db1.schema1)> show tables; +tbl1 +spark-sql (db1.schema1)> + > insert into tbl1 values (123, 'hello'), (234, 'world'); +spark-sql (db1.schema1)> select * from tbl1; +123 hello +234 world +spark-sql (db1.schema1)> + > create view db1.schema1.v1 (strcol) as select col2 from tbl1 order by col1 DESC; +spark-sql (db1.schema1)> show views in db1.schema1; +db1.schema1 v1 false +spark-sql (db1.schema1)> select * from v1; +world +hello +spark-sql (db1.schema1)> + > update tbl1 set col2 = 'world2' where col1 = 234; +spark-sql (db1.schema1)> select * from v1; +world2 +hello +spark-sql (db1.schema1)> + > drop view v1; +spark-sql (db1.schema1)> drop table tbl1 purge; +spark-sql (db1.schema1)> show tables; +spark-sql (db1.schema1)> drop namespace db1.schema1; 
+spark-sql (db1.schema1)> drop namespace db1; +spark-sql (db1.schema1)> show namespaces; +db2 +spark-sql (db1.schema1)> drop namespace db2; +spark-sql (db1.schema1)> show namespaces; +spark-sql (db1.schema1)> diff --git a/regtests/t_spark_sql/src/spark_sql_azure_blob.sh b/regtests/t_spark_sql/src/spark_sql_azure_blob.sh new file mode 100755 index 0000000000..ccf4d0cd2c --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_azure_blob.sh @@ -0,0 +1,65 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:realm1}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d "{\"name\": \"spark_sql_azure_blob_catalog\", \"id\": 101, \"type\": \"INTERNAL\", \"readOnly\": false, \"properties\": {\"default-base-location\": \"${AZURE_BLOB_TEST_BASE}/polaris-test/spark_sql_blob_catalog/\"}, \"storageConfigInfo\": {\"storageType\": \"AZURE\", \"allowedLocations\": [\"${AZURE_BLOB_TEST_BASE}/polaris-test/spark_sql_blob_catalog2/\"], \"tenantId\": \"${AZURE_TENANT_ID}\"}}" > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_azure_blob_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign. 
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_azure_blob_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_azure_blob_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" --conf spark.sql.catalog.polaris.warehouse=spark_sql_azure_blob_catalog; +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int); +show tables in db1; +use db1.schema1; + +insert into tbl1 values (123), (234); +select * from tbl1; + +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_azure_blob_catalog > /dev/stderr diff --git a/regtests/t_spark_sql/src/spark_sql_azure_dfs.sh b/regtests/t_spark_sql/src/spark_sql_azure_dfs.sh new file mode 100755 index 0000000000..d183f7eec3 --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_azure_dfs.sh @@ -0,0 +1,65 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:realm1}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d "{\"name\": \"spark_sql_azure_dfs_catalog\", \"id\": 101, \"type\": \"INTERNAL\", \"readOnly\": false, \"properties\": {\"default-base-location\": \"${AZURE_DFS_TEST_BASE}/polaris-test/spark_sql_dfs_catalog/\"}, \"storageConfigInfo\": {\"storageType\": \"AZURE\", \"allowedLocations\": [\"${AZURE_DFS_TEST_BASE}/polaris-test/spark_sql_dfs_catalog2/\"], \"tenantId\": \"$AZURE_TENANT_ID\"}}" > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_azure_dfs_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign. 
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_azure_dfs_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_azure_dfs_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" --conf spark.sql.catalog.polaris.warehouse=spark_sql_azure_dfs_catalog +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int); +show tables in db1; +use db1.schema1; + +insert into tbl1 values (123), (234); +select * from tbl1; + +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_azure_dfs_catalog > /dev/stderr diff --git a/regtests/t_spark_sql/src/spark_sql_basic.sh b/regtests/t_spark_sql/src/spark_sql_basic.sh new file mode 100755 index 0000000000..c53628bb45 --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_basic.sh @@ -0,0 +1,67 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:realm1}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d '{"name": "spark_sql_basic_catalog", "id": 100, "type": "INTERNAL", "readOnly": false, "properties": {"default-base-location": "file:///tmp/spark_sql_s3_catalog"}, "storageConfigInfo": {"storageType": "FILE", "allowedLocations": ["file:///tmp"]}}' > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_basic_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign. 
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_basic_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_basic_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" --conf spark.sql.catalog.polaris.warehouse=spark_sql_basic_catalog +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int); +show tables in db1; +show tables in db1.schema1; +use db1.schema1; +show tables; + +insert into tbl1 values (123), (234); +select * from tbl1; + +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_basic_catalog > /dev/stderr diff --git a/regtests/t_spark_sql/src/spark_sql_gcp.sh b/regtests/t_spark_sql/src/spark_sql_gcp.sh new file mode 100755 index 0000000000..b7bd6e03c4 --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_gcp.sh @@ -0,0 +1,65 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:realm1}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d "{\"name\": \"spark_sql_gcp_catalog\", \"id\": 100, \"type\": \"INTERNAL\", \"readOnly\": false, \"properties\": {\"default-base-location\": \"${GCS_TEST_BASE}/polaris_test/spark_sql_gcp_catalog/\"}, \"storageConfigInfo\": {\"storageType\": \"GCS\", \"allowedLocations\": [\"${GCS_TEST_BASE}/polaris_test/spark_sql_gcp_catalog2/\"]}}" > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_gcp_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign. 
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_gcp_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_gcp_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" --conf spark.sql.catalog.polaris.warehouse=spark_sql_gcp_catalog +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int); +show tables in db1; +use db1.schema1; + +insert into tbl1 values (123), (234); +select * from tbl1; + +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_gcp_catalog > /dev/stderr diff --git a/regtests/t_spark_sql/src/spark_sql_s3.sh b/regtests/t_spark_sql/src/spark_sql_s3.sh new file mode 100755 index 0000000000..922567ff45 --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_s3.sh @@ -0,0 +1,70 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ -z "$AWS_TEST_ENABLED" ] || [ "$AWS_TEST_ENABLED" != "true" ]; then + echo "AWS_TEST_ENABLED is not set to 'true'. Skipping test." + exit 0 +fi + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:realm1}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d "{\"name\": \"spark_sql_s3_catalog\", \"id\": 100, \"type\": \"INTERNAL\", \"readOnly\": false, \"properties\": {\"default-base-location\": \"s3://datalake-storage-team/polaris_test/spark_sql_s3_catalog\"}, \"storageConfigInfo\": {\"storageType\": \"S3\", \"allowedLocations\": [\"${AWS_TEST_BASE}/polaris_test/\"], \"roleArn\": \"arn:aws:iam::631484165566:role/datalake-storage-integration-role\"}}" > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_s3_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign. 
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_s3_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_s3_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" --conf spark.sql.catalog.polaris.warehouse=spark_sql_s3_catalog +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int); +show tables in db1; +use db1.schema1; + +insert into tbl1 values (123), (234); +select * from tbl1; + +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_s3_catalog > /dev/stderr diff --git a/regtests/t_spark_sql/src/spark_sql_s3_cross_region.sh b/regtests/t_spark_sql/src/spark_sql_s3_cross_region.sh new file mode 100644 index 0000000000..3e75831dcc --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_s3_cross_region.sh @@ -0,0 +1,72 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ -z "$AWS_CROSS_REGION_TEST_ENABLED" ] || [ "$AWS_CROSS_REGION_TEST_ENABLED" != "true" ]; then + echo "AWS_CROSS_REGION_TEST_ENABLED is not set to 'true'. Skipping test." + exit 0 +fi + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:realm1}" +BUCKET="${AWS_CROSS_REGION_BUCKET}" +ROLE_ARN="${AWS_ROLE_FOR_CROSS_REGION_BUCKET}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d "{\"name\": \"spark_sql_s3_cross_region_catalog\", \"id\": 100, \"type\": \"INTERNAL\", \"readOnly\": false, \"properties\": {\"default-base-location\": \"s3://${BUCKET}/polaris_test/spark_sql_s3_cross_region_catalog/\"}, \"storageConfigInfo\": {\"storageType\": \"S3\", \"allowedLocations\": [\"s3://${BUCKET}/polaris_test/\"], \"roleArn\": \"${ROLE_ARN}\"}}" > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_s3_cross_region_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign.
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_s3_cross_region_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_s3_cross_region_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" --conf spark.sql.catalog.polaris.warehouse=spark_sql_s3_cross_region_catalog +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int); +show tables in db1; +use db1.schema1; + +insert into tbl1 values (123), (234); +select * from tbl1; + +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_s3_cross_region_catalog > /dev/stderr diff --git a/regtests/t_spark_sql/src/spark_sql_views.sh b/regtests/t_spark_sql/src/spark_sql_views.sh new file mode 100755 index 0000000000..a6a50c47e2 --- /dev/null +++ b/regtests/t_spark_sql/src/spark_sql_views.sh @@ -0,0 +1,77 @@ +#!/bin/bash + +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +SPARK_BEARER_TOKEN="${REGTEST_ROOT_BEARER_TOKEN:-principal:root;realm:default-realm}" + +curl -i -X POST -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs \ + -d '{"name": "spark_sql_views_catalog", "id": 100, "type": "INTERNAL", "readOnly": false, "properties": {"default-base-location": "file:///tmp/spark_sql_s3_catalog"}, "storageConfigInfo": {"storageType": "FILE", "allowedLocations": ["file:///tmp"]}}' > /dev/stderr + +# Add TABLE_WRITE_DATA to the catalog's catalog_admin role since by default it can only manage access and metadata +curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_views_catalog/catalog-roles/catalog_admin/grants \ + -d '{"type": "catalog", "privilege": "TABLE_WRITE_DATA"}' > /dev/stderr + +# For now, also explicitly assign the catalog_admin to the service_admin. Remove once GS fully rolled out for auto-assign. 
+curl -i -X PUT -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/principal-roles/service_admin/catalog-roles/spark_sql_views_catalog \ + -d '{"name": "catalog_admin"}' > /dev/stderr + +curl -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + "http://${POLARIS_HOST:-localhost}:8181/api/catalog/v1/config?warehouse=spark_sql_views_catalog" +echo +echo "Catalog created" +cat << EOF | ${SPARK_HOME}/bin/spark-sql -S --conf spark.sql.catalog.polaris.token="${SPARK_BEARER_TOKEN}" \ + --conf spark.sql.catalog.polaris.warehouse=spark_sql_views_catalog \ + --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions +use polaris; +show namespaces; +create namespace db1; +create namespace db2; +show namespaces; + +create namespace db1.schema1; +show namespaces; +show namespaces in db1; + +create table db1.schema1.tbl1 (col1 int, col2 string); +show tables in db1; +show tables in db1.schema1; +use db1.schema1; +show tables; + +insert into tbl1 values (123, 'hello'), (234, 'world'); +select * from tbl1; + +create view db1.schema1.v1 (strcol) as select col2 from tbl1 order by col1 DESC; +show views in db1.schema1; +select * from v1; + +update tbl1 set col2 = 'world2' where col1 = 234; +select * from v1; + +drop view v1; +drop table tbl1 purge; +show tables; +drop namespace db1.schema1; +drop namespace db1; +show namespaces; +drop namespace db2; +show namespaces; +EOF + +curl -i -X DELETE -H "Authorization: Bearer ${SPARK_BEARER_TOKEN}" -H 'Accepts: application/json' -H 'Content-Type: application/json' \ + http://${POLARIS_HOST:-localhost}:8181/api/management/v1/catalogs/spark_sql_views_catalog > /dev/stderr diff --git a/server-templates/api.mustache b/server-templates/api.mustache new file mode 100644 index 0000000000..951b036f39 --- /dev/null +++ 
b/server-templates/api.mustache @@ -0,0 +1,115 @@ +/* + * Copyright (C) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package {{package}}; + +{{#imports}} +import {{import}}; +{{/imports}} + +import io.polaris.core.resource.TimedApi; + +import java.util.Map; +import java.util.List; + +import java.io.InputStream; + +import jakarta.annotation.security.RolesAllowed; + +import {{javaxPackage}}.ws.rs.Consumes; +import {{javaxPackage}}.ws.rs.Produces; +import {{javaxPackage}}.ws.rs.DELETE; +import {{javaxPackage}}.ws.rs.GET; +import {{javaxPackage}}.ws.rs.HEAD; +import {{javaxPackage}}.ws.rs.PATCH; +import {{javaxPackage}}.ws.rs.POST; +import {{javaxPackage}}.ws.rs.PUT; +import {{javaxPackage}}.ws.rs.Path; +import {{javaxPackage}}.ws.rs.DefaultValue; +import {{javaxPackage}}.ws.rs.PathParam; +import {{javaxPackage}}.ws.rs.HeaderParam; +import {{javaxPackage}}.ws.rs.QueryParam; +import {{javaxPackage}}.ws.rs.FormParam; +import {{javaxPackage}}.ws.rs.core.Response; +import {{javaxPackage}}.servlet.http.HttpServletRequest; +import {{javaxPackage}}.servlet.http.HttpServletResponse; +import {{javaxPackage}}.ws.rs.core.Context; +import {{javaxPackage}}.ws.rs.core.SecurityContext; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +{{#useBeanValidation}} +import {{javaxPackage}}.validation.constraints.*; +import {{javaxPackage}}.validation.Valid; +{{/useBeanValidation}} 
+{{#operations}}{{#operation}}{{#isMultipart}}import org.jboss.resteasy.plugins.providers.multipart.MultipartFormDataInput; +{{/isMultipart}}{{/operation}}{{/operations}} +{{! +Note that this template is copied /modified from +https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/api.mustache +It is updated to remove all swagger annotations +}} +/** + * The {{{baseName}}} API interface + * + * This file is automatically generated by the OpenAPI Code Generator based on configuration in the + * build.gradle file. + * + */ +@Path("{{contextPath}}{{commonPath}}"){{#hasConsumes}} +@Consumes({ {{#consumes}}"{{{mediaType}}}"{{^-last}}, {{/-last}}{{/consumes}} }){{/hasConsumes}}{{#hasProduces}} +@Produces({ {{#produces}}"{{{mediaType}}}"{{^-last}}, {{/-last}}{{/produces}} }){{/hasProduces}} +{{>generatedAnnotation}} +{{#operations}} +public class {{classname}} { + private static final Logger LOGGER = LoggerFactory.getLogger({{classname}}.class); + + private final {{classname}}Service service; + + public {{classname}}({{classname}}Service service) { + this.service = service; + } + +{{#operation}} + /** + * {{^notes}}{{{summary}}}{{/notes}}{{{notes}}} + * + * Response type {@link {{{returnBaseType}}}} + *{{#allParams}} @param {{paramName}} {{#required}}Required -{{/required}} {{description}} + *{{/allParams}}{{#responses}} + * @return {{{code}}} - {{{message}}}{{/responses}} + */ + @{{httpMethod}}{{#subresourceOperation}} + @Path("{{{path}}}"){{/subresourceOperation}}{{#hasConsumes}} + @Consumes({ {{#consumes}}"{{{mediaType}}}"{{^-last}}, {{/-last}}{{/consumes}} }){{/hasConsumes}}{{#hasProduces}} + @Produces({ {{#produces}}"{{{mediaType}}}"{{^-last}}, {{/-last}}{{/produces}} }){{/hasProduces}}{{#hasAuthMethods}} + {{#authMethods}}{{#isOAuth}}@RolesAllowed({ {{#scopes}}"{{scope}}"{{^-last}}, {{/-last}}{{/scopes}} }){{/isOAuth}}{{/authMethods}}{{/hasAuthMethods}} + 
@TimedApi("{{metricsPrefix}}.{{baseName}}.{{nickname}}") + public Response {{nickname}}({{#isMultipart}}MultipartFormDataInput input,{{/isMultipart}}{{#allParams}}{{>queryParams}}{{>pathParams}}{{>headerParams}}{{>bodyParams}}{{^isMultipart}}{{>formParams}},{{/isMultipart}}{{#isMultipart}}{{^isFormParam}},{{/isFormParam}}{{/isMultipart}}{{/allParams}}@Context SecurityContext securityContext) { +{{! Don't log form or header params in case there are secrets, e.g., OAuth tokens }} + LOGGER.atDebug().setMessage("Invoking {{baseName}} with params") + .addKeyValue("operation", "{{nickname}}"){{#allParams}}{{^isHeaderParam}}{{^isFormParam}} + .addKeyValue("{{paramName}}", {{^isBodyParam}}{{paramName}}{{/isBodyParam}}{{#isBodyParam}}String.valueOf({{paramName}}){{/isBodyParam}}){{/isFormParam}}{{/isHeaderParam}}{{/allParams}} + .log(); + + Response ret = + service.{{nickname}}({{#isMultipart}}input,{{/isMultipart}}{{#allParams}}{{^isMultipart}}{{paramName}},{{/isMultipart}}{{#isMultipart}}{{^isFormParam}}{{paramName}},{{/isFormParam}}{{/isMultipart}}{{/allParams}}securityContext); + LOGGER.debug("Completed execution of {{nickname}} API with status code {}", ret.getStatus()); + return ret; + } +{{/operation}} +} +{{/operations}} diff --git a/server-templates/apiService.mustache b/server-templates/apiService.mustache new file mode 100644 index 0000000000..176cc6426a --- /dev/null +++ b/server-templates/apiService.mustache @@ -0,0 +1,57 @@ +/* + * Copyright (C) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package {{package}}; + +{{#operations}}{{#operation}}{{#isMultipart}}import org.jboss.resteasy.plugins.providers.multipart.MultipartFormDataInput; +{{/isMultipart}}{{/operation}}{{/operations}} + +{{#imports}}import {{import}}; +{{/imports}} + +import java.util.List; + +import java.io.InputStream; + +{{#useBeanValidation}} +import {{javaxPackage}}.validation.constraints.*; +import {{javaxPackage}}.validation.Valid; +{{/useBeanValidation}} +import {{javaxPackage}}.ws.rs.core.Response; +import {{javaxPackage}}.ws.rs.core.SecurityContext; + +{{! +Note that this template is copied from https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/apiService.mustache +It is here to remove some unsupported imports and to update the default implementation to return a +501 response code +}} +/** + * Service interface for implementations of the {{classname}}Service. Provides default + * implementations for all service methods that return 501 error codes (not implemented). + * + * This file is automatically generated by the OpenAPI Code Generator based on configuration in the + * build.gradle file in the module. 
+ * + */ +{{>generatedAnnotation}} +{{#operations}} +public interface {{classname}}Service { + {{#operation}} + default Response {{nickname}}({{#isMultipart}}MultipartFormDataInput input,{{/isMultipart}}{{#allParams}}{{>serviceQueryParams}}{{>servicePathParams}}{{>serviceHeaderParams}}{{>serviceBodyParams}}{{^isMultipart}}{{>serviceFormParams}},{{/isMultipart}}{{#isMultipart}}{{^isFormParam}},{{/isFormParam}}{{/isMultipart}}{{/allParams}}SecurityContext securityContext) { + return Response.status(501).build(); // not implemented + } + {{/operation}} +} +{{/operations}} \ No newline at end of file diff --git a/server-templates/apiServiceImpl.mustache b/server-templates/apiServiceImpl.mustache new file mode 100644 index 0000000000..33534e434f --- /dev/null +++ b/server-templates/apiServiceImpl.mustache @@ -0,0 +1,58 @@ +/* + * Copyright (C) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package {{package}}.impl; + +{{#operations}}{{#operation}}{{#isMultipart}}import org.jboss.resteasy.plugins.providers.multipart.MultipartFormDataInput; +{{/isMultipart}}{{/operation}}{{/operations}} + +import {{package}}.{{classname}}Service; +{{#imports}}import {{import}}; +{{/imports}} + +import java.util.List; + +import java.io.InputStream; + +{{#useBeanValidation}} +import {{javaxPackage}}.validation.constraints.*; +import {{javaxPackage}}.validation.Valid; +{{/useBeanValidation}} +import {{javaxPackage}}.ws.rs.core.Response; +import {{javaxPackage}}.ws.rs.core.SecurityContext; + +{{! +Note that this template is copied from https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/apiServiceImpl.mustache +It is here to remove some unsupported imports (ApiResponseMessage, openapi.tools.*) +}} +/** + * Default implementation of the {{classname}}Service. Provides default + * implementations for all service methods that return 501 error codes (not implemented). + * + * This file is automatically generated by the OpenAPI Code Generator based on configuration in the + * build.gradle file in the module. 
+ * + * DO NOT EDIT THIS FILE BY HAND - CHANGES WILL BE AUTOMATICALLY OVERWRITTEN + */ +{{>generatedAnnotation}} +{{#operations}} +public class {{classname}}ServiceImpl implements {{classname}}Service { + {{#operation}} + public Response {{nickname}}({{#isMultipart}}MultipartFormDataInput input,{{/isMultipart}}{{#allParams}}{{>serviceQueryParams}}{{>servicePathParams}}{{>serviceHeaderParams}}{{>serviceBodyParams}}{{^isMultipart}}{{>serviceFormParams}},{{/isMultipart}}{{#isMultipart}}{{^isFormParam}},{{/isFormParam}}{{/isMultipart}}{{/allParams}}SecurityContext securityContext) { + return Response.status(501).build(); // not implemented + } + {{/operation}} +} +{{/operations}} \ No newline at end of file diff --git a/server-templates/bodyParams.mustache b/server-templates/bodyParams.mustache new file mode 100644 index 0000000000..b6b8d354cd --- /dev/null +++ b/server-templates/bodyParams.mustache @@ -0,0 +1,4 @@ +{{! +Note that this template is copied from https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/bodyParams.mustache +It is here to remove some unsupported swagger annotations +}}{{#isBodyParam}}{{#useBeanValidation}}{{#required}} @NotNull{{/required}} @Valid{{/useBeanValidation}} {{{dataType}}} {{paramName}}{{/isBodyParam}} \ No newline at end of file diff --git a/server-templates/formParams.mustache b/server-templates/formParams.mustache new file mode 100644 index 0000000000..187ea8a849 --- /dev/null +++ b/server-templates/formParams.mustache @@ -0,0 +1,4 @@ +{{! 
+Note that this template is copied from https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/formParams.mustache +It is here to remove some unsupported swagger annotations +}}{{#isFormParam}}{{^isFile}}@FormParam("{{baseName}}") {{{dataType}}} {{paramName}}{{/isFile}}{{/isFormParam}} \ No newline at end of file diff --git a/server-templates/headerParams.mustache b/server-templates/headerParams.mustache new file mode 100644 index 0000000000..f062941d44 --- /dev/null +++ b/server-templates/headerParams.mustache @@ -0,0 +1,4 @@ +{{! +Note that this template is copied from https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/headerParams.mustache +It is here to remove some unsupported swagger annotations +}}{{#isHeaderParam}}@HeaderParam("{{baseName}}") {{{dataType}}} {{paramName}}{{/isHeaderParam}} \ No newline at end of file diff --git a/server-templates/pojo.mustache b/server-templates/pojo.mustache new file mode 100644 index 0000000000..a274de17c0 --- /dev/null +++ b/server-templates/pojo.mustache @@ -0,0 +1,234 @@ +/* + * Copyright (C) 2024 Snowflake Computing Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +import io.swagger.annotations.*; +{{#useBeanValidation}}import jakarta.validation.Valid;{{/useBeanValidation}} +{{#additionalPropertiesType}} +import com.fasterxml.jackson.annotation.JsonValue; +{{/additionalPropertiesType}} +{{! +Note that this template is copied /modified from +https://github.com/OpenAPITools/openapi-generator/blob/640ef9d9448a4a008af90eca9cc84c8a78ec87ec/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/pojo.mustache +It is updated to remove all swagger annotations and support builders and immutability +}} + +{{#description}}@ApiModel(description="{{{.}}}"){{/description}}{{>additionalModelTypeAnnotations}}{{>generatedAnnotation}}{{#discriminator}}{{>typeInfoAnnotation}}{{/discriminator}}{{#vendorExtensions.x-class-extra-annotation}} + {{{vendorExtensions.x-class-extra-annotation}}} +{{/vendorExtensions.x-class-extra-annotation}}public class {{classname}} {{#parent}}extends {{{.}}}{{/parent}} {{#vendorExtensions.x-implements}}{{#-first}}implements {{{.}}}{{/-first}}{{^-first}}, {{{.}}}{{/-first}}{{/vendorExtensions.x-implements}} { +{{#serializableModel}} + private static final long serialVersionUID = 1L; +{{/serializableModel}} +{{#vars}}{{#isEnum}}{{^isContainer}} + {{>enumClass}}{{/isContainer}}{{#isContainer}}{{#mostInnerItems}} + {{>enumClass}}{{/mostInnerItems}}{{/isContainer}}{{/isEnum}} +{{#vendorExtensions.x-field-extra-annotation}} + {{{vendorExtensions.x-field-extra-annotation}}} +{{/vendorExtensions.x-field-extra-annotation}} +{{#useBeanValidation}}{{>beanValidation}}{{^isPrimitiveType}}{{^isDate}}{{^isDateTime}}{{^isString}}{{^isFile}} @Valid +{{/isFile}}{{/isString}}{{/isDateTime}}{{/isDate}}{{/isPrimitiveType}}{{/useBeanValidation}} private final {{{datatypeWithEnum}}} {{name}};{{/vars}} +{{#vars}} + /** + {{#description}} + * {{.}} + {{/description}} + {{#minimum}} + * minimum: {{.}} + {{/minimum}} + {{#maximum}} + * maximum: {{.}} + {{/maximum}} + **/ + 
{{#vendorExtensions.x-extra-annotation}}{{{vendorExtensions.x-extra-annotation}}} + {{/vendorExtensions.x-extra-annotation}}@ApiModelProperty({{#example}}example = "{{{.}}}", {{/example}}{{#required}}required = {{required}}, {{/required}}value = "{{{description}}}") + @JsonProperty(value = "{{baseName}}"{{#required}}, required = true{{/required}}) + public {{{datatypeWithEnum}}} {{getter}}() { + return {{name}}; + } + +{{/vars}} + {{#vendorExtensions.x-java-all-args-constructor}} + @JsonCreator + public {{classname}}({{#vendorExtensions.x-java-all-args-constructor-vars}}@JsonProperty(value = "{{baseName}}"{{#required}}, required = true{{/required}}) {{{datatypeWithEnum}}} {{name}}{{^-last}}, {{/-last}}{{/vendorExtensions.x-java-all-args-constructor-vars}}) { + {{#parent}} + super({{#parentVars}}{{name}}{{^-last}}, {{/-last}}{{/parentVars}}); + {{/parent}} + {{#vars}} + this.{{name}} = {{#defaultValue}}Objects.requireNonNullElse({{name}}, {{{.}}}){{/defaultValue}}{{^defaultValue}}{{name}}{{/defaultValue}}; + {{/vars}} + } + {{/vendorExtensions.x-java-all-args-constructor}} + {{^vendorExtensions.x-java-all-args-constructor}} + @JsonCreator + public {{classname}}({{#allVars}}@JsonProperty("{{baseName}}") {{{datatypeWithEnum}}} {{name}}{{^-last}}, {{/-last}}{{/allVars}}) { + {{#parent}} + super({{#parentVars}}{{name}}{{^-last}}, {{/-last}}{{/parentVars}}); + {{/parent}} + {{#vars}} + this.{{name}} = {{#defaultValue}}Objects.requireNonNullElse({{name}}, {{{.}}}){{/defaultValue}}{{^defaultValue}}{{name}}{{/defaultValue}}; + {{/vars}} + } + {{/vendorExtensions.x-java-all-args-constructor}} + + + {{#hasOptional}} + {{#hasRequired}} + public {{classname}}({{#requiredVars}}{{{datatypeWithEnum}}} {{name}}{{^-last}}, {{/-last}}{{/requiredVars}}) { + {{#parent}} + super({{#parentRequiredVars}}{{name}}{{^-last}}, {{/-last}}{{/parentRequiredVars}}); + {{/parent}} + {{#vars}} + {{#required}} + this.{{name}} = {{#defaultValue}}Objects.requireNonNullElse({{name}}, 
{{{.}}}){{/defaultValue}}{{^defaultValue}}{{name}}{{/defaultValue}}; + {{/required}} + {{^required}} + this.{{name}} = {{#defaultValue}}{{{.}}}{{/defaultValue}}{{^defaultValue}}null{{/defaultValue}}; + {{/required}} + {{/vars}} + } + {{/hasRequired}} + {{/hasOptional}} + + {{^hasChildren}} + public static Builder builder() { + return new Builder(); + } + {{#hasRequired}} + public static Builder builder({{#requiredVars}}{{{datatypeWithEnum}}} {{name}}{{^-last}}, {{/-last}}{{/requiredVars}}) { + return new Builder({{#requiredVars}}{{name}}{{^-last}}, {{/-last}}{{/requiredVars}}); + } + {{/hasRequired}} + {{/hasChildren}} + + {{#additionalPropertiesType}} + @JsonValue + public Map toMap() { + Map map = new HashMap<>(this); + {{#vars}} + map.put("{{baseName}}", {{name}}); + {{/vars}} + return map; + } + {{/additionalPropertiesType}} + + public static final class Builder { + {{#vendorExtensions.x-java-all-args-constructor}} + {{#vendorExtensions.x-java-all-args-constructor-vars}} + private {{{datatypeWithEnum}}} {{name}}; + {{/vendorExtensions.x-java-all-args-constructor-vars}} + {{/vendorExtensions.x-java-all-args-constructor}} + {{^vendorExtensions.x-java-all-args-constructor}} + {{#allVars}} + private {{{datatypeWithEnum}}} {{name}}; + {{/allVars}} + {{/vendorExtensions.x-java-all-args-constructor}} + {{#additionalPropertiesType}} + private Map additionalProperties = new HashMap<>(); + {{/additionalPropertiesType}} + private Builder() { + } + {{#hasRequired}} + private Builder({{#requiredVars}}{{{datatypeWithEnum}}} {{name}}{{^-last}}, {{/-last}}{{/requiredVars}}) { + {{#requiredVars}} + this.{{name}} = {{#defaultValue}}Objects.requireNonNullElse({{name}}, {{{.}}}){{/defaultValue}}{{^defaultValue}}{{name}}{{/defaultValue}}; + {{/requiredVars}} + } + {{/hasRequired}} + +{{#vendorExtensions.x-java-all-args-constructor}} + {{#vendorExtensions.x-java-all-args-constructor-vars}} + public Builder {{setter}}({{{datatypeWithEnum}}} {{name}}) { + this.{{name}} = {{name}}; + 
return this; + } + {{/vendorExtensions.x-java-all-args-constructor-vars}} +{{/vendorExtensions.x-java-all-args-constructor}} +{{^vendorExtensions.x-java-all-args-constructor}} + {{#allVars}} + public Builder {{setter}}({{{datatypeWithEnum}}} {{name}}) { + this.{{name}} = {{name}}; + return this; + } + {{/allVars}} +{{/vendorExtensions.x-java-all-args-constructor}} +{{#additionalPropertiesType}} + public Builder addProperty(String key, {{additionalPropertiesType}} value) { + additionalProperties.put(key, value); + return this; + } + + public Builder putAll(Map values) { + additionalProperties.putAll(values); + return this; + } +{{/additionalPropertiesType}} + + + public {{classname}} build() { +{{#vendorExtensions.x-java-all-args-constructor}} + {{classname}} inst = new {{classname}}({{#vendorExtensions.x-java-all-args-constructor-vars}}{{name}}{{^-last}}, {{/-last}}{{/vendorExtensions.x-java-all-args-constructor-vars}}); + {{#additionalPropertiesType}} + inst.putAll(additionalProperties); + {{/additionalPropertiesType}} + return inst; +{{/vendorExtensions.x-java-all-args-constructor}} +{{^vendorExtensions.x-java-all-args-constructor}} + {{classname}} inst = new {{classname}}({{#allVars}}{{name}}{{^-last}}, {{/-last}}{{/allVars}}); + {{#additionalPropertiesType}} + inst.putAll(additionalProperties); + {{/additionalPropertiesType}} + return inst; +{{/vendorExtensions.x-java-all-args-constructor}} + } + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + {{classname}} {{classVarName}} = ({{classname}}) o;{{#hasVars}} + return {{#parent}}super.equals(o) && {{/parent}}{{#vars}}Objects.equals(this.{{name}}, {{classVarName}}.{{name}}){{^-last}} && + {{/-last}}{{#-last}};{{/-last}}{{/vars}}{{/hasVars}}{{^hasVars}}{{#parent}}return super.equals(o);{{/parent}}{{^parent}}return true;{{/parent}}{{/hasVars}} + } + + @Override + public int hashCode() { + return 
{{^hasVars}}{{#parent}}super.hashCode(){{/parent}}{{^parent}}1{{/parent}}{{/hasVars}}{{#hasVars}}Objects.hash({{#vars}}{{#parent}}super.hashCode(), {{/parent}}{{name}}{{^-last}}, {{/-last}}{{/vars}}){{/hasVars}}; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("class {{classname}} {\n"); + {{#parent}}sb.append(" ").append(toIndentedString(super.toString())).append("\n");{{/parent}} + {{#vars}}sb.append(" {{name}}: ").append({{#isPassword}}"*"{{/isPassword}}{{^isPassword}}toIndentedString({{name}}){{/isPassword}}).append("\n"); + {{/vars}}sb.append("}"); + return sb.toString(); + } + + /** + * Convert the given object to string with each line indented by 4 spaces + * (except the first line). + */ + private String toIndentedString(Object o) { + if (o == null) { + return "null"; + } + return o.toString().replace("\n", "\n "); + } +} \ No newline at end of file diff --git a/server-templates/queryParams.mustache b/server-templates/queryParams.mustache new file mode 100644 index 0000000000..3bcbe0368b --- /dev/null +++ b/server-templates/queryParams.mustache @@ -0,0 +1,18 @@ +{{! +Note that this template is copied from https://github.com/OpenAPITools/openapi-generator/blob/783e68c7acbbdcbb2282d167d1644b069f12d486/modules/openapi-generator/src/main/resources/JavaJaxRS/resteasy/queryParams.mustache +It is here to remove some unsupported swagger annotations +}}{{#isQueryParam}}{{! + + }}{{^isContainer}}{{! + }}{{#defaultValue}}{{! + }} @DefaultValue("{{{defaultValue}}}"){{! + }}{{/defaultValue}}{{! + }}{{/isContainer}}{{! + + }} @QueryParam("{{baseName}}"){{! + + }}{{#useBeanValidation}} {{>beanValidation}}{{/useBeanValidation}}{{! + + }} {{{dataType}}} {{paramName}}{{! + +}}{{/isQueryParam}} \ No newline at end of file diff --git a/settings.gradle b/settings.gradle new file mode 100644 index 0000000000..5f58877c66 --- /dev/null +++ b/settings.gradle @@ -0,0 +1,38 @@ +/* + * Copyright (c) 2024 Snowflake Computing Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +if (!JavaVersion.current().isCompatibleWith(JavaVersion.VERSION_21)) { + throw new GradleException(""" + + Build aborted... + + The Apache Polaris build requires Java 21. + + + """) +} + +rootProject.name = "polaris" + +Properties projects = new Properties() +file("gradle/projects.main.properties").withInputStream { projects.load(it) } +projects.entrySet().forEach { + final def name = it.key as String + include(name) + final def prj = project(":${name}") + prj.name = name + prj.projectDir = file(it.value) +} diff --git a/setup.sh b/setup.sh new file mode 100755 index 0000000000..f2a5be377f --- /dev/null +++ b/setup.sh @@ -0,0 +1,35 @@ +#!/bin/bash + +# +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +CURRENT_DIR=$(pwd) + +# deploy the registry +echo "Building Kind Registry..." 
+sh ./kind-registry.sh + +# build and deploy the server image +echo "Building polaris image..." +docker build -t localhost:5001/polaris -f Dockerfile . +echo "Pushing polaris image..." +docker push localhost:5001/polaris +echo "Loading polaris image to kind..." +kind load docker-image localhost:5001/polaris:latest + +echo "Applying kubernetes manifests..." +kubectl delete -f k8/deployment.yaml --ignore-not-found +kubectl apply -f k8/deployment.yaml diff --git a/spec/docs.yaml b/spec/docs.yaml new file mode 100644 index 0000000000..3099721e16 --- /dev/null +++ b/spec/docs.yaml @@ -0,0 +1,27 @@ +openapi: 3.0.0 + +info: + title: Polaris Catalog Documentation + x-logo: + url: ./img/logos/polaris-catalog-stacked-logo.svg + altText: Polaris Catalog Logo + description: + $ref: ../docs/quickstart.md + contact: + email: community [at] polaris.io + url: https://github.com/polaris-catalog/polaris + license: + name: Apache v2.0 + url: https://github.com/polaris-catalog/polaris/blob/main/LICENSE + +tags: + - name: Polaris Catalog Overview + description: + $ref: ../docs/overview.md + - name: Polaris Catalog Entities + description: + $ref: ../docs/entities.md + - name: Access Control + description: + $ref: ../docs/access-control.md + \ No newline at end of file diff --git a/spec/index.yaml b/spec/index.yaml new file mode 100644 index 0000000000..22a516b13e --- /dev/null +++ b/spec/index.yaml @@ -0,0 +1,6860 @@ +openapi: 3.0.0 +info: + title: Polaris Catalog Documentation + x-logo: + url: ./img/logos/polaris-catalog-stacked-logo.svg + altText: Polaris Catalog Logo + description: "\n\n# Quick Start\n\nThis guide serves as an introduction to several key entities that can be managed with Polaris, describes how to build and deploy Polaris locally, and finally includes examples of how to use Polaris with Spark and Trino.\n\n## Prerequisites\n\nThis guide covers building Polaris, deploying it locally or via [Docker](https://www.docker.com/), and interacting with it using the command-line 
interface and [Apache Spark](https://spark.apache.org/). Before proceeding with Polaris, be sure to satisfy the relevant prerequisites listed here. \n\n### Building and Deploying Polaris\n\nTo get the latest Polaris code, you'll need to clone the repository using [git](https://git-scm.com/). You can install git using [homebrew](https://brew.sh/):\n\n```\nbrew install git\n```\n\nThen, use git to clone the Polaris repo:\n\n```\ncd ~\ngit clone https://github.com/polaris-catalog/polaris.git\n```\n\n#### With Docker\n\nIf you plan to deploy Polaris inside [Docker](https://www.docker.com/), you'll need to install Docker itself. This can be done using [homebrew](https://brew.sh/):\n\n```\nbrew install docker\n```\n\nOnce installed, make sure Docker is running. This can be done on macOS with:\n\n```\nopen -a Docker\n```\n\n#### From Source\n\nIf you plan to build Polaris from source yourself, you will need to satisfy a few prerequisites first.\n\nPolaris is built using [gradle](https://gradle.org/) and is compatible with Java 21. We recommend the use of [jenv](https://www.jenv.be/) to manage multiple Java versions. For example, to install Java 21 via [homebrew](https://brew.sh/) and configure it with jenv: \n\n```\ncd ~/polaris\nbrew install openjdk@21 gradle@8 jenv\njenv add $(brew --prefix openjdk@21)\njenv local 21\n```\n\n### Connecting to Polaris\n\nPolaris is compatible with any [Apache Iceberg](https://iceberg.apache.org/) client that supports the REST API. Depending on the client you plan to use, refer to the prerequisites below.\n\n#### With Spark\n\nIf you want to connect to Polaris with [Apache Spark](https://spark.apache.org/), you'll need to start by cloning Spark. As [above](#building-and-deploying-polaris), make sure [git](https://git-scm.com/) is installed first. You can install it with [homebrew](https://brew.sh/):\n\n```\nbrew install git\n```\n\nThen, clone Spark and check out a versioned branch.
This guide uses [Spark 3.5.0](https://spark.apache.org/releases/spark-release-3-5-0.html).\n\n```\ncd ~\ngit clone https://github.com/apache/spark.git\ncd ~/spark\ngit checkout branch-3.5\n```\n\n## Deploying Polaris \n\nPolaris can be deployed via a lightweight Docker image or as a standalone process. Before starting, be sure that you've satisfied the relevant [prerequisites](#building-and-deploying-polaris) detailed above.\n\n### Docker Image\n\nTo start using Polaris in Docker, launch Polaris while Docker is running:\n\n```\ncd ~/polaris\ndocker compose -f docker-compose.yml up --build\n```\n\nOnce the `polaris-polaris` container is up, you can continue to [Defining a Catalog](#defining-a-catalog).\n\n### Building Polaris\n\nRun Polaris locally with:\n\n```\ncd ~/polaris\n./gradlew runApp\n```\n\nYou should see output for some time as Polaris builds and starts up. Eventually, you won’t see any more logs and should see messages that resemble the following:\n\n```\nINFO [...] [main] [] o.e.j.s.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@...\nINFO [...] [main] [] o.e.j.server.AbstractConnector: Started application@...\nINFO [...] [main] [] o.e.j.server.AbstractConnector: Started admin@...\nINFO [...] [main] [] o.eclipse.jetty.server.Server: Started Server@...\n```\n\nAt this point, Polaris is running.\n\n## Bootstrapping Polaris\n\nFor this tutorial, we'll launch an instance of Polaris that stores entities only in-memory. This means that any entities that you define will be destroyed when Polaris is shut down. It also means that Polaris will automatically bootstrap itself with root credentials. For more information on how to configure Polaris for production usage, see the [docs](./configuring-polaris-for-production.md).\n\nWhen Polaris is launched using in-memory mode, the root `CLIENT_ID` and `CLIENT_SECRET` can be found in stdout on initial startup.
For example:\n\n```\nBootstrapped with credentials: {\"client-id\": \"XXXX\", \"client-secret\": \"YYYY\"}\n```\n\nBe sure to take note of these credentials, as we'll be using them below.\n\n## Defining a Catalog\n\nIn Polaris, the [catalog](./entities.md#catalog) is the top-level entity that objects like [tables](./entities.md#table) and [views](./entities.md#view) are organized under. With a Polaris service running, you can create a catalog like so:\n\n```\ncd ~/polaris\n\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n catalogs \\\n create \\\n --storage-type s3 \\\n --default-base-location ${DEFAULT_BASE_LOCATION} \\\n --role-arn ${ROLE_ARN} \\\n quickstart_catalog\n```\n\nThis will create a new catalog called **quickstart_catalog**. \n\nThe `DEFAULT_BASE_LOCATION` you provide will be the default location that objects in this catalog should be stored in, and the `ROLE_ARN` you provide should be a [Role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) with access to read and write data in that location. These credentials will be provided to engines reading data from the catalog once they have authenticated with Polaris using credentials that have access to those resources.\n\nIf you’re using a storage type other than S3, such as Azure, you’ll provide a different type of credential than a Role ARN. For more details on supported storage types, see the [docs](./entities.md#storage-type). \n\nAdditionally, if Polaris is running somewhere other than `localhost:8181`, you can specify the correct hostname and port by providing `--host` and `--port` flags. For the full set of options supported by the CLI, please refer to the [docs](./command-line-interface.md).\n\n\n### Creating a Principal and Assigning it Privileges\n\nWith a catalog created, we can create a [principal](./entities.md#principal) that has access to manage that catalog.
For details on how to configure the Polaris CLI, see [the section above](#defining-a-catalog) or refer to the [docs](./command-line-interface.md).\n\n```\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n principals \\\n create \\\n quickstart_user\n\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n principal-roles \\\n create \\\n quickstart_user_role\n\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n catalog-roles \\\n create \\\n --catalog quickstart_catalog \\\n quickstart_catalog_role\n```\n\n\nBe sure to provide the necessary credentials, hostname, and port as before.\n\nWhen the `principals create` command completes successfully, it will return the credentials for this new principal. Be sure to note these down for later. For example:\n\n```\n./polaris ... principals create example\n{\"clientId\": \"XXXX\", \"clientSecret\": \"YYYY\"}\n```\n\nNow, we grant the principal the [principal role](./entities.md#principal-role) we created, and grant the [catalog role](./entities.md#catalog-role) we created to that principal role. For more information on these entities, please refer to the linked documentation.\n\n```\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n principal-roles \\\n grant \\\n --principal quickstart_user \\\n quickstart_user_role\n\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n catalog-roles \\\n grant \\\n --catalog quickstart_catalog \\\n --principal-role quickstart_user_role \\\n quickstart_catalog_role\n```\n\nNow, we’ve linked our principal to the catalog via roles like so:\n\n![Principal to Catalog](./img/quickstart/privilege-illustration-1.png \"Principal to Catalog\")\n\nIn order to give this principal the ability to interact with the catalog, we must assign some [privileges](./entities.md#privileges).
For the time being, we will give this principal the ability to fully manage content in our new catalog. We can do this with the CLI like so:\n\n```\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n privileges \\\n --catalog quickstart_catalog \\\n --catalog-role quickstart_catalog_role \\\n catalog \\\n grant \\\n CATALOG_MANAGE_CONTENT\n```\n\nThis grants the [catalog privilege](./entities.md#privilege) `CATALOG_MANAGE_CONTENT` to our catalog role, linking everything together like so:\n\n![Principal to Catalog with Catalog Role](./img/quickstart/privilege-illustration-2.png \"Principal to Catalog with Catalog Role\")\n\n`CATALOG_MANAGE_CONTENT` has create/list/read/write privileges on all entities within the catalog. The same privilege could be granted to a namespace, in which case the principal could create/list/read/write any entity under that namespace.\n\n## Using Iceberg & Polaris\n\nAt this point, we’ve created a principal and granted it the ability to manage a catalog. We can now use an external engine to assume that principal, access our catalog, and store data in that catalog using [Apache Iceberg](https://iceberg.apache.org/).\n\n### Connecting with Spark\n\nTo use a Polaris-managed catalog in [Apache Spark](https://spark.apache.org/), we can configure Spark to use the Iceberg catalog REST API. \n\nThis guide uses [Apache Spark 3.5](https://spark.apache.org/releases/spark-release-3-5-0.html), but be sure to find [the appropriate iceberg-spark package for your Spark version](https://mvnrepository.com/artifact/org.apache.iceberg/iceberg-spark).
With a local Spark clone on the `branch-3.5` branch, we can run the following:\n\n_Note: the credentials provided here are those for our principal, not the root credentials._\n\n```\nbin/spark-shell \\\n--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.hadoop:hadoop-aws:3.4.0 \\\n--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \\\n--conf spark.sql.catalog.quickstart_catalog.warehouse=quickstart_catalog \\\n--conf spark.sql.catalog.quickstart_catalog.header.X-Iceberg-Access-Delegation=true \\\n--conf spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog \\\n--conf spark.sql.catalog.quickstart_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog \\\n--conf spark.sql.catalog.quickstart_catalog.uri=http://localhost:8181/api/catalog \\\n--conf spark.sql.catalog.quickstart_catalog.credential='XXXX:YYYY' \\\n--conf spark.sql.catalog.quickstart_catalog.scope='PRINCIPAL_ROLE:ALL' \\\n--conf spark.sql.catalog.quickstart_catalog.token-refresh-enabled=true\n```\n\n\nReplace `XXXX` and `YYYY` with the client ID and client secret generated when you created the `quickstart_user` principal.\n\nSimilar to the CLI commands above, this configures Spark to use the Polaris running at `localhost:8181` as a catalog. If your Polaris server is running elsewhere, be sure to update the configuration appropriately.\n\nFinally, note that we include the `hadoop-aws` package here.
If your table is using a different filesystem, be sure to include the appropriate dependency.\n\nOnce the Spark session starts, we can create a namespace and table within the catalog:\n\n```\nspark.sql(\"USE quickstart_catalog\")\nspark.sql(\"CREATE NAMESPACE IF NOT EXISTS quickstart_namespace\")\nspark.sql(\"CREATE NAMESPACE IF NOT EXISTS quickstart_namespace.schema\")\nspark.sql(\"USE NAMESPACE quickstart_namespace.schema\")\nspark.sql(\"\"\"\n\tCREATE TABLE IF NOT EXISTS quickstart_table (\n\t\tid BIGINT, data STRING\n\t) \nUSING ICEBERG\n\"\"\")\n```\n\nWe can now use this table like any other:\n\n```\nspark.sql(\"INSERT INTO quickstart_table VALUES (1, 'some data')\")\nspark.sql(\"SELECT * FROM quickstart_table\").show(false)\n. . .\n+---+---------+\n|id |data |\n+---+---------+\n|1 |some data|\n+---+---------+\n```\n\nIf at any time access is revoked...\n\n```\n./polaris \\\n --client-id ${CLIENT_ID} \\\n --client-secret ${CLIENT_SECRET} \\\n privileges \\\n --catalog quickstart_catalog \\\n --catalog-role quickstart_catalog_role \\\n catalog \\\n revoke \\\n CATALOG_MANAGE_CONTENT\n```\n\nSpark will lose access to the table:\n\n```\nspark.sql(\"SELECT * FROM quickstart_table\").show(false)\n\norg.apache.iceberg.exceptions.ForbiddenException: Forbidden: Principal 'quickstart_user' with activated PrincipalRoles '[]' and activated ids '[6, 7]' is not authorized for op LOAD_TABLE_WITH_READ_DELEGATION\n```\n" + contact: + email: community [at] polaris.io + url: https://github.com/polaris-catalog/polaris + license: + name: Apache v2.0 + url: https://github.com/polaris-catalog/polaris/blob/main/LICENSE +servers: + - url: '{scheme}://{host}/api/management/v1' + description: Server URL when the port can be inferred from the scheme + variables: + scheme: + description: The scheme of the URI, either http or https. 
+ default: https + host: + description: The host address for the specified server + default: localhost + - url: '{scheme}://{host}/{basePath}' + description: Server URL when the port can be inferred from the scheme + variables: + scheme: + description: The scheme of the URI, either http or https. + default: https + host: + description: The host address for the specified server + default: localhost + basePath: + description: Optional prefix to be appended to all routes + default: '' + - url: '{scheme}://{host}:{port}/{basePath}' + description: Generic base server URL, with all parts configurable + variables: + scheme: + description: The scheme of the URI, either http or https. + default: https + host: + description: The host address for the specified server + default: localhost + port: + description: The port used when addressing the host + default: '443' + basePath: + description: Optional prefix to be appended to all routes + default: '' +tags: + - name: Polaris Catalog Overview + description: >+ + + + + Polaris Catalog is a catalog implementation for Apache Iceberg built on + the open source Apache Iceberg REST protocol. + + + With Polaris Catalog, you can provide centralized, secure read and write + access across different REST-compatible query engines to your Iceberg + tables. + + + ![Conceptual diagram of Polaris Catalog.](./img/overview.svg "Polaris + Catalog overview") + + + ## Key concepts + + + This section introduces key concepts associated with using Polaris + Catalog. + + + In the following diagram, a sample [Polaris Catalog + structure](./overview.md#catalog) with nested + [namespaces](./overview.md#namespace) is shown for Catalog1. 
No tables + + or namespaces have been created yet for Catalog2 or Catalog3: + + + ![Diagram that shows an example Polaris Catalog + structure.](./img/sample-catalog-structure.svg "Sample Polaris Catalog + structure") + + + ### Catalog + + + In Polaris Catalog, you can create one or more catalog resources to + organize Iceberg tables. + + + Configure your catalog by setting values in the storage configuration for + S3, Azure, or Google Cloud Storage. An Iceberg catalog enables a + + query engine to manage and organize tables. The catalog forms the first + architectural layer in the [Iceberg table + specification](https://iceberg.apache.org/spec/#overview) and must + support: + + + - Storing the current metadata pointer for one or more Iceberg tables. A + metadata pointer maps a table name to the location of that table's + current metadata file. + + - Performing atomic operations so that you can update the current + metadata pointer for a table to the metadata pointer of a new version of + the table. + + To learn more about Iceberg catalogs, see the [Apache Iceberg + documentation](https://iceberg.apache.org/concepts/catalog/). + + + #### Catalog types + + + A catalog can be one of the following two types: + + + - Internal: The catalog is managed by Polaris. Tables from this catalog + can be read and written in Polaris. + + + - External: The catalog is externally managed by another Iceberg catalog + provider (for example, Snowflake, Glue, Dremio Arctic). Tables from + this catalog are synced to Polaris. These tables are read-only in Polaris. In the current release, only the Snowflake external catalog is supported. + + A catalog is configured with a storage configuration that can point to S3, + Azure storage, or GCS. + + + To create a new catalog, see [Create a catalog](./create-a-catalog.md + "Create a catalog"). + + + ### Namespace + + + You create *namespaces* to logically group Iceberg tables within a + catalog.
A catalog can have one or more namespaces. You can also create + + nested namespaces. Iceberg tables belong to namespaces. + + + ### Iceberg tables & catalogs + + + In an internal catalog, an Iceberg table is registered in Polaris Catalog, + but read and written via query engines. The table data and + + metadata are stored in your external cloud storage. The table uses Polaris + Catalog as the Iceberg catalog. + + + If you have tables that use Snowflake as the Iceberg catalog + (Snowflake-managed tables), you can sync these tables to an external + + catalog in Polaris Catalog. If you sync this catalog to Polaris Catalog, + it appears as an external catalog in Polaris Catalog. The table data and + + metadata are stored in your external cloud storage. The Snowflake query + engine can read from or write to these tables. However, other query + + engines can only read from these tables. + + + **Important** + + + To ensure that the access privileges defined for a catalog are enforced + + correctly, you must: + + + - Ensure a directory only contains the data files that belong to a + single table. + + - Create a directory hierarchy that matches the namespace hierarchy + for the catalog. + + For example, if a catalog includes: + + + - Top-level namespace namespace1 + + + - Nested namespace namespace1a + + + - A customers table, which is grouped under nested namespace + namespace1a + + - An orders table, which is grouped under nested namespace namespace1a + + + The directory hierarchy for the catalog must be: + + + - /namespace1/namespace1a/customers/\ + + - /namespace1/namespace1a/orders/\ + + + ### Service principal + + + A service principal is an entity that you create in Polaris Catalog. Each + service principal encapsulates credentials that you use to connect + + to Polaris Catalog. + + + Query engines use service principals to connect to catalogs. + + + Polaris Catalog generates a Client ID and Client Secret pair for each + service principal.
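Clients exchange this Client ID and Client Secret pair for a bearer token before issuing catalog requests. The following sketch shows what that exchange looks like; the `/v1/oauth/tokens` path and form fields are assumed from the Iceberg REST catalog OAuth2 convention (not stated in this document), and `XXXX`/`YYYY` stand in for real credentials:

```python
# Sketch: build a client-credentials token request for a service principal.
# The endpoint path and form fields are assumed from the Iceberg REST catalog
# OAuth2 convention; verify them against your Polaris deployment.
from urllib.parse import urlencode

def token_request(base_url: str, client_id: str, client_secret: str,
                  scope: str = "PRINCIPAL_ROLE:ALL") -> tuple[str, str]:
    """Return (url, form-encoded body) for a client_credentials grant."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return f"{base_url}/v1/oauth/tokens", body

url, body = token_request("http://localhost:8181/api/catalog", "XXXX", "YYYY")
```

Engines such as Spark perform this exchange automatically when configured with `credential` and `scope`, as in the quickstart Spark configuration.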
+ + + The following table displays example service principals that you might + create in Polaris Catalog: + + | Service connection name | Description | + | --------------------------- | ----------- | + | Flink ingestion | For Apache Flink to ingest streaming data into Iceberg tables. | + | Spark ETL pipeline | For Apache Spark to run ETL pipeline jobs on Iceberg tables. | + | Snowflake data pipelines | For Snowflake to run data pipelines for transforming data in Iceberg tables. | + | Trino BI dashboard | For Trino to run BI queries for powering a dashboard. | + | Snowflake AI team | For Snowflake to run AI jobs on data in Iceberg tables. | + + ### Service connection + + + A service connection represents a REST-compatible engine (such as Apache + Spark, Apache Flink, or Trino) that can read from and write to Polaris + + Catalog. When creating a new service connection, the Polaris administrator + grants the service principal that is created with the new service + + connection either a new or existing principal role. A principal role + is a resource in Polaris that you can use to logically group Polaris + + service principals together and grant privileges on securable objects. For + more information, see [Principal role](./access-control.md#principal-role + "Principal role"). Polaris Catalog uses a role-based access control (RBAC) + model to grant service principals access to resources. For more + information, + + see [Access control](./access-control.md "Access control"). For a diagram + of this model, see [RBAC model](./access-control.md#rbac-model "RBAC + model"). + + + If the Polaris administrator grants the service principal for the new + service connection a new principal role, the service principal + + doesn't have any privileges granted to it yet.
When securing the catalog + that the new service connection will connect to, the Polaris + + administrator grants privileges to catalog roles and then grants these + catalog roles to the new principal role. As a result, the service + + principal for the new service connection is bestowed with these + privileges. For more information about catalog roles, see [Catalog + role](./access-control.md#catalog-role "Catalog role"). + + + If the Polaris administrator grants an existing principal role to the + service principal for the new service connection, the service principal + + is bestowed with the privileges granted to the catalog roles that are + granted to the existing principal role. If needed, the Polaris + + administrator can grant additional catalog roles to the existing principal + role or remove catalog roles from it to adjust the privileges + + bestowed to the service principal. For an example of how RBAC works in + Polaris, see [RBAC example](./access-control.md#rbac-example "RBAC + example"). + + + ### Storage configuration + + + A storage configuration stores a generated identity and access management + (IAM) entity for your external cloud storage and is created + + when you create a catalog. The storage configuration is used to set the + values to connect Polaris Catalog to your cloud storage. During the + + catalog creation process, an IAM entity is generated and used to create a + trust relationship between the cloud storage provider and Polaris + + Catalog. + + + When you create a catalog, you supply the following information about your + external cloud storage: + + + | Cloud storage provider | Information | + + | -----------------------| ----------- | + + | Amazon S3 |

  • Default base location for your Amazon S3 + bucket
  • Locations for your Amazon S3 bucket
  • S3 role + ARN
  • External ID (optional)
| + + | Google Cloud Storage (GCS) |
  • Default base location for your GCS + bucket
  • Locations for your GCS bucket&#13;
| + + | Azure |
  • Default base location for your Microsoft Azure + container
  • Locations for your Microsoft Azure + container
  • Azure tenant ID
| + + + ## Example workflow + + + In the following example workflow, Bob creates an Iceberg table named + Table1 and Alice reads data from Table1. + + + 1. Bob uses Apache Spark to create the Table1 table under the + Namespace1 namespace in the Catalog1 catalog and insert values into + Table1. + + Bob can create Table1 and insert data into it, because he is using a + service connection with a service principal that is bestowed with + the privileges to perform these actions. + + 2. Alice uses Snowflake to read data from Table1. + + Alice can read data from Table1, because she is using a service + connection with a service principal with a catalog integration that + is bestowed with the privileges to perform this action. Alice + creates an unmanaged table in Snowflake to read data from Table1. + + ![Diagram that shows an example workflow for Polaris + Catalog](./img/example-workflow.svg "Example workflow for Polaris + Catalog") + + + ## Security and access control + + + This section describes security and access control. + + + ### Credential vending + + + To secure interactions with service connections, Polaris Catalog vends + temporary storage credentials to the query engine during query + + execution. These credentials allow the query engine to run the query + without needing to have access to your external cloud storage for + + Iceberg tables. This process is called credential vending. + + + ### Identity and access management (IAM) + + + Polaris Catalog uses the identity and access management (IAM) entity to + securely connect to your storage for accessing table data, Iceberg + + metadata, and manifest files that store the table schema, partitions, and + other metadata. Polaris Catalog retains the IAM entity for your + + storage location. + + + ### Access control + + + Polaris Catalog enforces the access control that you configure across all + tables registered with the service, and governs security for all + + queries from query engines in a consistent manner. 
+ + + Polaris uses a role-based access control (RBAC) model that lets you + centrally configure access for Polaris service principals to catalogs, + + namespaces, and tables. + + + Polaris RBAC uses two different role types to delegate privileges: + + + - **Principal roles:** Granted to Polaris service principals and + analogous to roles in other access control systems that you grant to + service principals. + + - **Catalog roles:** Configured with certain privileges on Polaris + catalog resources, and granted to principal roles. + + For more information, see [Access control](./access-control.md "Access + control"). + + x-displayName: Polaris Catalog Overview + - name: Polaris Catalog Entities + description: > + + + + This page documents various entities that can be managed in Polaris. + + + ## Catalog + + + A catalog is a top-level entity in Polaris that may contain other entities + like [namespaces](#namespace) and [tables](#table). These map directly to + [Apache Iceberg catalogs](https://iceberg.apache.org/concepts/catalog/). + + + For information on managing catalogs with the REST API or for more + information on what data can be associated with a catalog, see [the API + docs](../regtests/client/python/docs/CreateCatalogRequest.md). + + + ### Storage Type + + + All catalogs in Polaris are associated with a _storage type_. Valid + Storage Types are `S3`, `Azure`, and `GCS`. The `FILE` type is + additionally available for testing. Each of these types relates to a + different storage provider where data within the catalog may reside. + Depending on the storage type, various other configurations may be set for + a catalog including credentials to be used when accessing data inside the + catalog. + + + For details on how to use Storage Types in the REST API, see [the API + docs](../regtests/client/python/docs/StorageConfigInfo.md).
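As a sketch of how a storage type appears in a create-catalog request, the snippet below builds an S3-backed payload. The field names are modeled on the CLI flags from the Quick Start (`--storage-type`, `--default-base-location`, `--role-arn`) and are assumptions; the CreateCatalogRequest API docs are authoritative, and the bucket and role ARN below are placeholders:

```python
# Sketch of a create-catalog request body for an S3-backed catalog.
# Field names are modeled on the quickstart CLI flags and may differ from
# the real schema; see the CreateCatalogRequest API docs.
import json

def create_catalog_payload(name: str, base_location: str, role_arn: str) -> dict:
    return {
        "name": name,
        "properties": {"default-base-location": base_location},
        "storageConfigInfo": {"storageType": "S3", "roleArn": role_arn},
    }

payload = create_catalog_payload(
    "quickstart_catalog",
    "s3://example-bucket/polaris/",                   # placeholder location
    "arn:aws:iam::123456789012:role/polaris-access",  # placeholder role ARN
)
print(json.dumps(payload, indent=2))
```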
+ + + ## Namespace + + + A namespace is a logical entity that resides within a [catalog](#catalog) + and can contain other entities such as [tables](#table) or [views](#view). + Some other systems may refer to namespaces as _schemas_ or _databases_. + + + In Polaris, namespaces can be nested up to 16 levels. For example, + `a.b.c.d.e.f.g` is a valid namespace. `b` is said to reside within `a`, + and so on. + + + For information on managing namespaces with the REST API or for more + information on what data can be associated with a namespace, see [the API + docs](../regtests/client/python/docs/CreateNamespaceRequest.md). + + + + ## Table + + + Polaris tables are entities that map to [Apache Iceberg + tables](https://iceberg.apache.org/docs/nightly/configuration/). + + + For information on managing tables with the REST API or for more + information on what data can be associated with a table, see [the API + docs](../regtests/client/python/docs/CreateTableRequest.md). + + + ## View + + + Polaris views are entities that map to [Apache Iceberg + views](https://iceberg.apache.org/view-spec/). + + + For information on managing views with the REST API or for more + information on what data can be associated with a view, see [the API + docs](../regtests/client/python/docs/CreateViewRequest.md). + + + ## Principal + + + Polaris principals are unique identities that can be used to represent + users or services. Each principal may have one or more [principal + roles](#principal-role) assigned to it for the purpose of accessing + catalogs and the entities within them. + + + For information on managing principals with the REST API or for more + information on what data can be associated with a principal, see [the API + docs](../regtests/client/python/docs/CreatePrincipalRequest.md). + + + ## Principal Role + + + Polaris principal roles are labels that may be granted to + [principals](#principal).
Each principal may have one or more principal + roles, and the same principal role may be granted to multiple principals. + Principal roles may be assigned based on the persona or responsibilities + of a given principal, or on how that principal will need to access + different entities within Polaris. + + + For information on managing principal roles with the REST API or for more + information on what data can be associated with a principal role, see [the + API docs](../regtests/client/python/docs/CreatePrincipalRoleRequest.md). + + + + ## Catalog Role + + + Polaris catalog roles are labels that may be granted to + [catalogs](#catalog). Each catalog may have one or more catalog roles, and + the same catalog role may be granted to multiple catalogs. Catalog roles + may be assigned based on the nature of data that will reside in a catalog, + or by the groups of users and services that might need to access that + data. + + + Each catalog role may have multiple [privileges](#privilege) granted to + it, and each catalog role can be granted to one or more [principal + roles](#principal-role). This is the mechanism by which principals are + granted access to entities inside a catalog such as namespaces and tables. + + + ## Privilege + + + Polaris privileges are granted to [catalog roles](#catalog-role) in order + to grant principals with a given principal role some degree of access to + catalogs with a given catalog role. When a privilege is granted to a + catalog role, any principal roles granted that catalog role receive the + privilege. In turn, any principals who are granted that principal role + receive it. + + + A privilege can be scoped to any entity inside a catalog, including the + catalog itself. 
+ + + For a list of supported privileges for each privilege class, see the API + docs: + + * [Table Privileges](../regtests/client/python/docs/TablePrivilege.md) + + * [View Privileges](../regtests/client/python/docs/ViewPrivilege.md) + + * [Namespace + Privileges](../regtests/client/python/docs/NamespacePrivilege.md) + + * [Catalog Privileges](../regtests/client/python/docs/CatalogPrivilege.md) + x-displayName: Polaris Catalog Entities + - name: Access Control + description: > + + + + This section provides information about how access control works for + Polaris Catalog. + + + Polaris Catalog uses a role-based access control (RBAC) model, in which + the Polaris administrator assigns access privileges to catalog roles, + + and then grants service principals access to resources by assigning + catalog roles to principal roles. + + + The key concepts to understanding access control in Polaris are: + + + - **Securable object** + + - **Principal role** + + - **Catalog role** + + - **Privilege** + + + ## Securable object + + + A securable object is an object to which access can be granted. Polaris + + has the following securable objects: + + + - Catalog + + - Namespace + + - Iceberg table + + - View + + + ## Principal role + + + A principal role is a resource in Polaris that you can use to logically + group Polaris service principals together and grant privileges on + + securable objects. + + + Polaris supports a many-to-one relationship between service principals and + principal roles. For example, to grant the same privileges to + + multiple service principals, you can grant a single principal role to + those service principals. A service principal can be granted one + + principal role. When registering a service connection, the Polaris + administrator specifies the principal role that is granted to the + + service principal. + + + You don't grant privileges directly to a principal role. 
Instead, you + configure object permissions at the catalog role level, and then grant + + catalog roles to a principal role. + + + The following table shows examples of principal roles that you might + configure in Polaris: + + + | Principal role name | Description | + + | -----------------------| ----------- | + + | Data_engineer | A role that is granted to multiple service principals + for running data engineering jobs. | + + | Data_scientist | A role that is granted to multiple service principals + for running data science or AI jobs. | + + + ## Catalog role + + + A catalog role belongs to a particular catalog resource in Polaris and + specifies a set of permissions for actions on the catalog, or on objects + + in the catalog, such as catalog namespaces or tables. You can create one + or more catalog roles for a catalog. + + + You grant privileges to a catalog role, and then grant the catalog role to + a principal role to bestow the privileges to one or more service + + principals. + + + **Note** + + + If you update the privileges bestowed to a service principal, the updates + won't take effect for up to one hour. This means that if you + + revoke or grant some privileges for a catalog, the updated privileges + won't take effect on any service principal with access to that catalog + + for up to one hour. + + + Polaris also supports a many-to-many relationship between catalog roles + and principal roles. You can grant the same catalog role to one or more + + principal roles. Likewise, a principal role can be granted one or more + catalog roles. + + + The following table displays examples of catalog roles that you might + + configure in Polaris: + + + | Example Catalog role | Description | + + | -----------------------| ----------- | + + | Catalog administrators | A role that has been granted multiple + privileges to emulate full access to the catalog.&#13;
Principal + roles that have been granted this role are permitted to create, alter, + read, write, and drop tables in the catalog. | + + | Catalog readers | A role that has been granted read-only privileges + to tables in the catalog.
Principal roles that have been + granted this role are allowed to read from tables in the catalog. | + + | Catalog contributor | A role that has been granted read and write + access privileges to all tables that belong to the catalog.
Principal roles that have been granted this role are allowed to perform + read and write operations on tables in the catalog. | + + + ## RBAC model + + + The following diagram illustrates the RBAC model used by Polaris Catalog. + For each catalog, the Polaris administrator assigns access + + privileges to catalog roles, and then grants service principals access to + resources by assigning catalog roles to principal roles. Polaris + + supports a many-to-one relationship between service principals and + principal roles. + + + ![Diagram that shows the RBAC model for Polaris + Catalog.](./img/rbac-model.svg "Polaris Catalog RBAC model") + + + ## Access control privileges + + + This section describes the privileges that are available in the Polaris + access control model. Privileges are granted to catalog roles, catalog + + roles are granted to principal roles, and principal roles are granted to + service principals to specify the operations that service principals can + + perform on objects in Polaris. + + + To grant the full set of privileges (drop, list, read, write, etc.) on an + object, you can use the *full privilege* option. + + + ### Table privileges + + + **Note** + + + The TABLE_FULL_METADATA full privilege doesn't grant access to the + TABLE_READ_DATA or TABLE_WRITE_DATA individual privileges. + + + | Full privilege | Individual privilege | Description | + + | -----------------------| ----------- | ---- | + + | TABLE_FULL_METADATA | TABLE_CREATE | Enables registering a table with + the catalog. | + + | | TABLE_DROP | Enables dropping a table from the catalog. | + + | | TABLE_LIST | Enables listing any tables in the catalog. | + + | | TABLE_READ_PROPERTIES | Enables reading + [properties](https://iceberg.apache.org/docs/nightly/configuration/#table-properties) + of the table. | + + | | TABLE_WRITE_PROPERTIES | Enables configuring + [properties](https://iceberg.apache.org/docs/nightly/configuration/#table-properties) + for the table. 
| + + | N/A | TABLE_READ_DATA | Enables reading data from the table by receiving + short-lived read-only storage credentials from the catalog. | + + | N/A | TABLE_WRITE_DATA | Enables writing data to the table by receiving + short-lived read+write storage credentials from the catalog. | + + + ### View privileges + + + | Full privilege | Individual privilege | Description | + + | -----------------------| ----------- | ---- | + + | VIEW_FULL_METADATA | VIEW_CREATE | Enables registering a view with the + catalog. | + + | | VIEW_DROP | Enables dropping a view from the catalog. | + + | | VIEW_LIST | Enables listing any views in the catalog. | + + | | VIEW_READ_PROPERTIES | Enables reading all the view properties. | + + | | VIEW_WRITE_PROPERTIES | Enables configuring view properties. | + + + ### Namespace privileges + + + | Full privilege | Individual privilege | Description | + + | -----------------------| ----------- | ---- | + + | NAMESPACE_FULL_METADATA | NAMESPACE_CREATE | Enables creating a + namespace in a catalog. | + + | | NAMESPACE_DROP | Enables dropping the namespace from the catalog. | + + | | NAMESPACE_LIST | Enables listing any object in the namespace, + including nested namespaces and tables. | + + | | NAMESPACE_READ_PROPERTIES | Enables reading all the namespace + properties. | + + | | NAMESPACE_WRITE_PROPERTIES | Enables configuring namespace + properties. | + + + ### Catalog privileges + + + | Privilege | Description | + + | -----------------------| ----------- | + + | CATALOG_MANAGE_ACCESS | Includes the ability to grant or revoke + privileges on objects in a catalog to catalog roles, and the ability to + grant or revoke catalog roles to or from principal roles. | + + | CATALOG_MANAGE_CONTENT | Enables full management of content for the + catalog. This privilege encompasses the following + privileges:
  • CATALOG_MANAGE_METADATA
  • TABLE_FULL_METADATA
  • NAMESPACE_FULL_METADATA
  • VIEW_FULL_METADATA
  • TABLE_WRITE_DATA
  • TABLE_READ_DATA
  • CATALOG_READ_PROPERTIES
  • CATALOG_WRITE_PROPERTIES
+ | + + | CATALOG_MANAGE_METADATA | Enables full management of the catalog, as + well as catalog roles, namespaces, and tables. | + + | CATALOG_READ_PROPERTIES | Enables listing catalogs and reading + properties of the catalog. | + + | CATALOG_WRITE_PROPERTIES | Enables configuring catalog properties. | + + + ## RBAC example + + + The following diagram illustrates how RBAC works in Polaris, and + + includes the following users: + + + - **Alice**: A service admin who signs up for Polaris. Alice can + create service principals. She can also create catalogs and + namespaces, and configure access control for Polaris resources. + + > **Note** + + > + + > The service principal for Alice is not visible in the Polaris Catalog + + > user interface. + + + - **Bob**: A data engineer who uses Snowpipe Streaming (in Snowflake) + and Apache Spark connections to interact with Polaris. + + - Alice has created a service principal for Bob. It has been + granted the Data_engineer principal role, which in turn has been + granted the following catalog roles: Catalog contributor and + Data administrator (for both the Silver and Gold zone catalogs + in the following diagram). + + - The Catalog contributor role grants permission to create + namespaces and tables in the Bronze zone catalog. + + - The Data administrator roles grant full administrative rights to + the Silver zone catalog and Gold zone catalog. + + - **Mark**: A data scientist who uses Snowflake AI services to + interact with Polaris. + + - Alice has created a service principal for Mark. It has been + granted the Data_scientist principal role, which in turn has + been granted the catalog role named Catalog reader. + + - The Catalog reader role grants read-only access for a catalog + named Gold zone catalog. 
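The grant chain above (service principal → principal role → catalog roles → privileges) can be sketched as a small lookup. The role and privilege names come from the examples in this section, but the data structures are purely illustrative, not a real Polaris API:

```python
# Illustrative sketch of RBAC privilege resolution; names mirror the
# examples above, but the structures themselves are hypothetical.

# Privileges granted to each catalog role (simplified to one catalog).
catalog_role_privileges = {
    "Catalog contributor": {"NAMESPACE_CREATE", "TABLE_CREATE"},
    "Catalog reader": {"TABLE_READ_DATA", "TABLE_LIST"},
}

# Catalog roles granted to each principal role (many-to-many).
principal_role_grants = {
    "Data_engineer": ["Catalog contributor"],
    "Data_scientist": ["Catalog reader"],
}

# Each service principal is granted one principal role (many-to-one).
principal_roles = {"bob": "Data_engineer", "mark": "Data_scientist"}

def effective_privileges(principal: str) -> set:
    """Resolve a service principal's privileges through its roles."""
    privileges = set()
    for catalog_role in principal_role_grants[principal_roles[principal]]:
        privileges |= catalog_role_privileges[catalog_role]
    return privileges
```

Resolving `effective_privileges("mark")` yields only read privileges, matching the Catalog reader example, while `"bob"` resolves to the create privileges of the Catalog contributor role.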
+ + ![Diagram that shows an example of how RBAC works in Polaris + Catalog.](./img/rbac-example.svg "Polaris Catalog RBAC example") + x-displayName: Access Control + - name: polaris-management-service_other + x-displayName: other + - name: Configuration API + x-displayName: Configuration API + - name: OAuth2 API + x-displayName: OAuth2 API + - name: Catalog API + x-displayName: Catalog API +paths: + /catalogs: + get: + operationId: listCatalogs + description: List all catalogs in this polaris service + responses: + '200': + description: List of catalogs in the polaris service + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Catalogs' + '403': + description: The caller does not have permission to list catalog details + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + post: + operationId: createCatalog + description: Add a new Catalog + requestBody: + description: The Catalog to create + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_CreateCatalogRequest + responses: + '201': + description: Successful response + '403': + description: The caller does not have permission to create a catalog + '404': + description: The catalog does not exist + '409': + description: A catalog with the specified name already exists + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /catalogs/{catalogName}: + parameters: + - name: catalogName + in: path + description: The name of the catalog + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: getCatalog + description: Get the details of a catalog + responses: + '200': + description: The catalog details + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Catalog' + '403': + 
description: The caller does not have permission to read catalog details + '404': + description: The catalog does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: updateCatalog + description: Update an existing catalog + requestBody: + description: The catalog details to use in the update + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_UpdateCatalogRequest + responses: + '200': + description: The catalog details + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Catalog' + '403': + description: The caller does not have permission to update catalog details + '404': + description: The catalog does not exist + '409': + description: >- + The entity version doesn't match the currentEntityVersion; retry + after fetching latest version + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + delete: + operationId: deleteCatalog + description: >- + Delete an existing catalog. This is a cascading operation that deletes + all metadata, including principals, roles and grants. If the catalog is + an internal catalog, all tables and namespaces are dropped without + purge. 
+ responses: + '204': + description: Success, no content + '403': + description: The caller does not have permission to delete a catalog + '404': + description: The catalog does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principals: + get: + operationId: listPrincipals + description: List the principals for the current catalog + responses: + '200': + description: List of principals for this catalog + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Principals' + '403': + description: The caller does not have permission to list catalog admins + '404': + description: The catalog does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + post: + operationId: createPrincipal + description: Create a principal + requestBody: + description: The principal to create + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_CreatePrincipalRequest + responses: + '201': + description: Successful response + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_PrincipalWithCredentials + '403': + description: The caller does not have permission to add a principal + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principals/{principalName}: + parameters: + - name: principalName + in: path + description: The principal name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: getPrincipal + description: Get the principal details + responses: + '200': + description: The requested principal + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Principal' + '403': + description: The caller does not have permission to 
get principal details + '404': + description: The catalog or principal does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: updatePrincipal + description: Update an existing principal + requestBody: + description: The principal details to use in the update + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_UpdatePrincipalRequest + responses: + '200': + description: The updated principal + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Principal' + '403': + description: The caller does not have permission to update principal details + '404': + description: The principal does not exist + '409': + description: >- + The entity version doesn't match the currentEntityVersion; retry + after fetching latest version + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + delete: + operationId: deletePrincipal + description: Remove a principal from polaris + responses: + '204': + description: Success, no content + '403': + description: The caller does not have permission to delete a principal + '404': + description: The principal does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principals/{principalName}/rotate: + parameters: + - name: principalName + in: path + description: The user name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + post: + operationId: rotateCredentials + description: >- + Rotate a principal's credentials. The new credentials will be returned + in the response. This is the only API, aside from createPrincipal, that + returns the user's credentials. This API is *not* idempotent. 
+ responses: + '200': + description: The principal details along with the newly rotated credentials + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_PrincipalWithCredentials + '403': + description: The caller does not have permission to rotate credentials + '404': + description: The principal does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principals/{principalName}/principal-roles: + parameters: + - name: principalName + in: path + description: The name of the target principal + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: listPrincipalRolesAssigned + description: List the roles assigned to the principal + responses: + '200': + description: List of roles assigned to this principal + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRoles' + '403': + description: The caller does not have permission to list roles + '404': + description: The principal or catalog does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: assignPrincipalRole + description: Add a role to the principal + requestBody: + description: The principal role to assign + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_GrantPrincipalRoleRequest + responses: + '201': + description: Successful response + '403': + description: >- + The caller does not have permission to assign a role to the + principal + '404': + description: The catalog, the principal, or the role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principals/{principalName}/principal-roles/{principalRoleName}: + parameters: + - name: 
principalName + in: path + description: The name of the target principal + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: principalRoleName + in: path + description: The name of the role + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + delete: + operationId: revokePrincipalRole + description: Remove a role from a catalog principal + responses: + '204': + description: Success, no content + '403': + description: >- + The caller does not have permission to remove a role from the + principal + '404': + description: The catalog or principal does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principal-roles: + get: + operationId: listPrincipalRoles + description: List the principal roles + responses: + '200': + description: List of principal roles + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRoles' + '403': + description: The caller does not have permission to list principal roles + '404': + description: The catalog does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + post: + operationId: createPrincipalRole + description: Create a principal role + requestBody: + description: The principal to create + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_CreatePrincipalRoleRequest + responses: + '201': + description: Successful response + '403': + description: The caller does not have permission to add a principal role + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principal-roles/{principalRoleName}: + parameters: + - name: principalRoleName + in: path + description: The principal role name + 
required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: getPrincipalRole + description: Get the principal role details + responses: + '200': + description: The requested principal role + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRole' + '403': + description: The caller does not have permission to get principal role details + '404': + description: The principal role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: updatePrincipalRole + description: Update an existing principalRole + requestBody: + description: The principalRole details to use in the update + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_UpdatePrincipalRoleRequest + responses: + '200': + description: The updated principal role + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRole' + '403': + description: The caller does not have permission to update principal role details + '404': + description: The principal role does not exist + '409': + description: >- + The entity version doesn't match the currentEntityVersion; retry + after fetching latest version + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + delete: + operationId: deletePrincipalRole + description: Remove a principal role from polaris + responses: + '204': + description: Success, no content + '403': + description: The caller does not have permission to delete a principal role + '404': + description: The principal role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principal-roles/{principalRoleName}/principals: + parameters: + - name: principalRoleName + 
in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: listAssigneePrincipalsForPrincipalRole + description: List the Principals to whom the target principal role has been assigned + responses: + '200': + description: >- + List the Principals to whom the target principal role has been + assigned + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_Principals' + '403': + description: The caller does not have permission to list principals + '404': + description: The principal role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principal-roles/{principalRoleName}/catalog-roles/{catalogName}: + parameters: + - name: principalRoleName + in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: catalogName + in: path + required: true + description: The name of the catalog where the catalogRoles reside + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: listCatalogRolesForPrincipalRole + description: Get the catalog roles mapped to the principal role + responses: + '200': + description: The list of catalog roles mapped to the principal role + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRoles' + '403': + description: The caller does not have permission to list catalog roles + '404': + description: The principal role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: assignCatalogRoleToPrincipalRole + description: Assign a catalog role to a principal role + requestBody: + 
description: The catalog role to assign + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_GrantCatalogRoleRequest + responses: + '201': + description: Successful response + '403': + description: The caller does not have permission to assign a catalog role + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /principal-roles/{principalRoleName}/catalog-roles/{catalogName}/{catalogRoleName}: + parameters: + - name: principalRoleName + in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: catalogName + in: path + description: The name of the catalog that contains the role to revoke + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: catalogRoleName + in: path + description: The name of the catalog role that should be revoked + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + delete: + operationId: revokeCatalogRoleFromPrincipalRole + description: Remove a catalog role from a principal role + responses: + '204': + description: Success, no content + '403': + description: The caller does not have permission to revoke a catalog role + '404': + description: The principal role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /catalogs/{catalogName}/catalog-roles: + parameters: + - name: catalogName + in: path + description: The catalog for which we are reading/updating roles + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: listCatalogRoles + description: List existing roles in the catalog + responses:
+ '200': + description: The list of roles that exist in this catalog + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRoles' + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + post: + operationId: createCatalogRole + description: Create a new role in the catalog + requestBody: + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_CreateCatalogRoleRequest + responses: + '201': + description: Successful response + '403': + description: The principal is not authorized to create roles + '404': + description: The catalog does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /catalogs/{catalogName}/catalog-roles/{catalogRoleName}: + parameters: + - name: catalogName + in: path + description: The catalog for which we are retrieving roles + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: catalogRoleName + in: path + description: The name of the role + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: getCatalogRole + description: Get the details of an existing role + responses: + '200': + description: The specified role details + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRole' + '403': + description: The principal is not authorized to read role data + '404': + description: The catalog or the role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: updateCatalogRole + description: Update an existing role in the catalog + requestBody: + content: + application/json: + schema: + $ref: >- + 
#/components/schemas/Polaris_Management_Service_UpdateCatalogRoleRequest + responses: + '200': + description: The specified role details + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRole' + '403': + description: The principal is not authorized to update roles + '404': + description: The catalog or the role does not exist + '409': + description: >- + The entity version doesn't match the currentEntityVersion; retry + after fetching latest version + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + delete: + operationId: deleteCatalogRole + description: >- + Delete an existing role from the catalog. All associated grants will + also be deleted + responses: + '204': + description: Success, no content + '403': + description: The principal is not authorized to delete roles + '404': + description: The catalog or the role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/principal-roles: + parameters: + - name: catalogName + in: path + required: true + description: The name of the catalog where the catalog role resides + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: catalogRoleName + in: path + required: true + description: The name of the catalog role + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: listAssigneePrincipalRolesForCatalogRole + description: >- + List the PrincipalRoles to which the target catalog role has been + assigned + responses: + '200': + description: >- + List the PrincipalRoles to which the target catalog role has been + assigned + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRoles' + '403': + description: 
The caller does not have permission to list principal roles + '404': + description: The catalog or catalog role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants: + parameters: + - name: catalogName + in: path + required: true + description: The name of the catalog where the role will receive the grant + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + - name: catalogRoleName + in: path + required: true + description: The name of the role receiving the grant (must exist) + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + get: + operationId: listGrantsForCatalogRole + description: List the grants the catalog role holds + responses: + '200': + description: List of all grants given to the role in this catalog + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_GrantResources' + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + put: + operationId: addGrantToCatalogRole + description: Add a new grant to the catalog role + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Polaris_Management_Service_AddGrantRequest' + responses: + '201': + description: Successful response + '403': + description: The principal is not authorized to create grants + '404': + description: The catalog or the role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + post: + operationId: revokeGrantFromCatalogRole + description: >- + Delete a specific grant from the role. This may be a subset or a + superset of the grants the role has. In case of a subset, the role will + retain the grants not specified. 
If the `cascade` parameter is true, + grant revocation will have a cascading effect - that is, if a principal + has specific grants on a subresource, and grants are revoked on a parent + resource, the grants present on the subresource will be revoked as well. + By default, this behavior is disabled and grant revocation only affects + the specified resource. + parameters: + - name: cascade + in: query + schema: + type: boolean + default: false + description: If true, the grant revocation cascades to all subresources. + requestBody: + content: + application/json: + schema: + $ref: >- + #/components/schemas/Polaris_Management_Service_RevokeGrantRequest + responses: + '201': + description: Successful response + '403': + description: The principal is not authorized to create grants + '404': + description: The catalog or the role does not exist + tags: + - polaris-management-service_other + security: + - Polaris_Management_Service_OAuth2: [] + /v1/config: + get: + tags: + - Configuration API + summary: List all catalog configuration settings + operationId: getConfig + parameters: + - name: warehouse + in: query + required: false + schema: + type: string + description: Warehouse location or identifier to request from the service + description: >2- + All REST clients should first call this route to get catalog configuration properties from the server to configure the catalog and its HTTP client. Configuration from the server consists of two sets of key/value pairs. + - defaults - properties that should be used as default configuration; + applied before client configuration + + - overrides - properties that should be used to override client + configuration; applied after defaults and client configuration + + + Catalog configuration is constructed by setting the defaults, then + client- provided configuration, and finally overrides. The final + property set is then used to configure the catalog. 
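A minimal sketch of this merge order, using the example values shown in this section (the client-side setting here is hypothetical):

```python
# Sketch of the catalog configuration merge order: defaults first, then
# client configuration, then server overrides, which always win.
defaults = {"clients": "4"}                          # server defaults
client_config = {"clients": "8"}                     # hypothetical client setting
overrides = {"warehouse": "s3://bucket/warehouse/"}  # server overrides

# Later dictionaries take precedence over earlier ones.
config = {**defaults, **client_config, **overrides}
```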
+ + + For example, a default configuration property might set the size of the + client pool, which can be replaced with a client-specific setting. An + override might be used to set the warehouse location, which is stored on + the server rather than in client configuration. + + + Common catalog configuration settings are documented at + https://iceberg.apache.org/docs/latest/configuration/#catalog-properties + responses: + '200': + description: Server specified configuration values. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CatalogConfig + example: + overrides: + warehouse: s3://bucket/warehouse/ + defaults: + clients: '4' + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/oauth/tokens: + post: + tags: + - OAuth2 API + summary: Get a token using an OAuth2 flow + operationId: getToken + description: >- + Exchange credentials for a token using the OAuth2 client credentials + flow or token exchange. + + + This endpoint is used for three purposes - + + 1. To exchange client credentials (client ID and secret) for an access + token This uses the client credentials flow. + + 2. To exchange a client token and an identity token for a more specific + access token This uses the token exchange flow. + + 3. 
To exchange an access token for one with the same claims and a + refreshed expiration period. This uses the token exchange flow. + + + For example, a catalog client may be configured with client credentials + from the OAuth2 Authorization flow. This client would exchange its + client ID and secret for an access token using the client credentials + request with this endpoint (1). Subsequent requests would then use that + access token. + + + Some clients may also handle sessions that have additional user context. + These clients would use the token exchange flow to exchange a user token + (the "subject" token) from the session for a more specific access token + for that user, using the catalog's access token as the "actor" token + (2). The user ID token is the "subject" token and can be any token type + allowed by the OAuth2 token exchange flow, including an unsecured JWT + token with a sub claim. This request should use the catalog's bearer + token in the "Authorization" header. + + + Clients may also use the token exchange flow to refresh a token that is + about to expire by sending a token exchange request (3). The request's + "subject" token should be the expiring token. This request should use + the subject token in the "Authorization" header. 
+ requestBody: + required: true + content: + application/x-www-form-urlencoded: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_OAuthTokenRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_OAuthTokenResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_OAuthErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_OAuthErrorResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_OAuthErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + get: + tags: + - Catalog API + summary: >- + List namespaces, optionally providing a parent namespace to list + underneath + description: >- + List all namespaces at a certain level, optionally starting from a given + parent namespace. If table accounting.tax.paid.info exists, using + 'SELECT NAMESPACE IN accounting' would translate into `GET + /namespaces?parent=accounting` and must return a namespace, + ["accounting", "tax"] only. Using 'SELECT NAMESPACE IN accounting.tax' + would translate into `GET /namespaces?parent=accounting%1Ftax` and must + return a namespace, ["accounting", "tax", "paid"]. If `parent` is not + provided, all top-level namespaces should be listed. + operationId: listNamespaces + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_page-token' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_page-size' + - name: parent + in: query + description: >- + An optional namespace, underneath which to list namespaces. If not + provided or empty, all top-level namespaces should be listed. If + parent is a multipart namespace, the parts must be separated by the + unit separator (`0x1F`) byte. 
+ required: false + allowEmptyValue: true + schema: + type: string + example: accounting%1Ftax + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ListNamespacesResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: >- + Not Found - Namespace provided in the `parent` query parameter is + not found. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NoSuchNamespaceExample: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + post: + tags: + - Catalog API + summary: Create a namespace + description: >- + Create a namespace, with an optional set of properties. The server might + also add properties, such as `last_modified_time` etc. 
+ operationId: createNamespace + requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CreateNamespaceRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_CreateNamespaceResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '406': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnsupportedOperationResponse + '409': + description: Conflict - The namespace already exists + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceAlreadyExists: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NamespaceAlreadyExistsError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + get: + tags: + - Catalog API + summary: Load the metadata properties for a namespace + operationId: loadNamespaceMetadata + description: Return all stored metadata properties for a given namespace + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_GetNamespaceResponse + '400': + $ref: >- + 
#/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - Namespace not found + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NoSuchNamespaceExample: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + head: + tags: + - Catalog API + summary: Check if a namespace exists + operationId: namespaceExists + description: Check if a namespace exists. The response does not contain a body. 
+ responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - Namespace not found + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NoSuchNamespaceExample: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + delete: + tags: + - Catalog API + summary: Drop a namespace from the catalog. Namespace must be empty. + operationId: dropNamespace + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - Namespace to delete does not exist. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NoSuchNamespaceExample: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/properties: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + post: + tags: + - Catalog API + summary: Set or remove properties on a namespace + operationId: updateProperties + description: >- + Set and/or remove properties on a namespace. The request body specifies + a list of properties to remove and a map of key value pairs to update. + + Properties that are not in the request are not modified or removed by + this call. + + Server implementations are not required to support namespace properties. 
+ requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_UpdateNamespacePropertiesRequest + examples: + UpdateAndRemoveProperties: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_UpdateAndRemoveNamespacePropertiesRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UpdateNamespacePropertiesResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - Namespace not found + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceNotFound: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '406': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnsupportedOperationResponse + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '422': + description: >- + Unprocessable Entity - A property key was included in both + `removals` and `updates` + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + UnprocessableEntityDuplicateKey: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_UnprocessableEntityDuplicateKey + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + 
/v1/{prefix}/namespaces/{namespace}/tables: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + get: + tags: + - Catalog API + summary: List all table identifiers underneath a given namespace + description: Return all table identifiers under this namespace + operationId: listTables + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_page-token' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_page-size' + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ListTablesResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceNotFound: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + post: + tags: + - Catalog API + summary: Create a table in the given namespace + description: >- + Create a table or start a create transaction, like atomic CTAS. + + + If `stage-create` is false, the table is created immediately. 
+ + + If `stage-create` is true, the table is not created, but table metadata + is initialized and returned. The service should prepare as needed for a + commit to the table commit endpoint to complete the create transaction. + The client uses the returned metadata to begin a transaction. To commit + the transaction, the client sends all create and subsequent changes to + the table commit route. Changes from the table create operation include + changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the + initial table state. + operationId: createTable + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_data-access' + requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CreateTableRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_CreateTableResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceNotFound: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '409': + description: Conflict - The table already exists + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceAlreadyExists: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_TableAlreadyExistsError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + 
#/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/register: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + post: + tags: + - Catalog API + summary: >- + Register a table in the given namespace using given metadata file + location + description: Register a table using given metadata file location. + operationId: registerTable + requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RegisterTableRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_LoadTableResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceNotFound: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '409': + description: Conflict - The table already exists + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + NamespaceAlreadyExists: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_TableAlreadyExistsError + '419': + $ref: >- + 
#/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/tables/{table}: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_table' + get: + tags: + - Catalog API + summary: Load a table from the catalog + operationId: loadTable + description: >- + Load a table from the catalog. + + + The response contains both configuration and table metadata. The + configuration, if non-empty, is used as additional configuration for the + table that overrides catalog configuration. For example, this + configuration may change the FileIO implementation to be used for the + table. + + + The response also contains the table's full metadata, matching the table + metadata JSON file. + + + The catalog configuration may contain credentials that should be used + for subsequent requests for the table. The configuration key "token" is + used to pass an access token to be used as a bearer token for table + requests. Otherwise, a token may be passed using an RFC 8693 token type + as a configuration key. For example, + "urn:ietf:params:oauth:token-type:jwt=". + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_data-access' + - in: query + name: snapshots + description: >- + The snapshots to return in the body of the metadata. Setting the + value to `all` would return the full set of snapshots currently + valid for the table. Setting the value to `refs` would load all + snapshots referenced by branches or tags. 
+ + Default if no param is provided is `all`. + required: false + schema: + type: string + enum: + - all + - refs + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_LoadTableResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToLoadDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + post: + tags: + - Catalog API + summary: Commit updates to a table + operationId: updateTable + description: >- + Commit updates to a table. + + + Commits have two parts, requirements and updates. Requirements are + assertions that will be validated before attempting to make and commit + changes. For example, `assert-ref-snapshot-id` will check that a named + ref's snapshot ID has a certain value. + + + Updates are changes to make to table metadata. For example, after + asserting that the current main ref is at the expected snapshot, a + commit may add a new child snapshot and set the ref to the new snapshot + id. 
+ + + Create table transactions that are started by createTable with + `stage-create` set to true are committed using this route. Transactions + should include all changes to the table, including table initialization, + like AddSchemaUpdate and SetCurrentSchemaUpdate. The `assert-create` + requirement is used to ensure that the table was not created + concurrently. + requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CommitTableRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_CommitTableResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToUpdateDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '409': + description: >- + Conflict - CommitFailedException, one or more requirements failed. + The client may retry. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '500': + description: >- + An unknown server-side problem occurred; the commit state is + unknown. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Internal Server Error + type: CommitStateUnknownException + code: 500 + '502': + description: >- + A gateway or proxy received an invalid response from the upstream + server; the commit state is unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Invalid response from the upstream server + type: CommitStateUnknownException + code: 502 + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + '504': + description: A server-side gateway timeout occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Gateway timed out during commit + type: CommitStateUnknownException + code: 504 + 5XX: + description: A server-side problem that might not be addressable on the client. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Bad Gateway + type: InternalServerError + code: 502 + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + delete: + tags: + - Catalog API + summary: Drop a table from the catalog + operationId: dropTable + description: Remove a table from the catalog + parameters: + - name: purgeRequested + in: query + required: false + description: >- + Whether the user requested to purge the underlying table's data and + metadata + schema: + type: boolean + default: false + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, Table to drop does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToDeleteDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + head: + tags: + - Catalog API + summary: Check if a table exists + operationId: tableExists + description: >- + Check if a table exists within a given namespace. 
The response does not + contain a body. + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, Table not found + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToLoadDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/tables/rename: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + post: + tags: + - Catalog API + summary: Rename a table from its current name to a new name + description: >- + Rename a table from one identifier to another. It's valid to move a + table across namespaces, but the server implementation is not required + to support it. 
+ operationId: renameTable + requestBody: + description: >- + Current table identifier to rename and new table identifier to rename + to + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RenameTableRequest + examples: + RenameTableSameNamespace: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_RenameTableSameNamespace + required: true + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: >- + Not Found - NoSuchTableException, Table to rename does not exist - + NoSuchNamespaceException, The target namespace of the new table + identifier does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToRenameDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + NamespaceToRenameToDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '406': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnsupportedOperationResponse + '409': + description: >- + Conflict - The target identifier to rename to already exists as a + table or view + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + summary: The requested table identifier already exists + value: + error: + message: The given table already exists + type: AlreadyExistsException + code: 409 + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + 
#/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/tables/{table}/metrics: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_table' + post: + tags: + - Catalog API + summary: Send a metrics report to this endpoint to be processed by the backend + operationId: reportMetrics + requestBody: + description: The request containing the metrics report to be sent + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ReportMetricsRequest + required: true + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToLoadDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - 
Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/tables/{table}/notifications: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_table' + post: + tags: + - Catalog API + summary: Sends a notification to the table + operationId: sendNotification + requestBody: + description: The request containing the notification to be sent + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_NotificationRequest + required: true + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToLoadDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/transactions/commit: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + post: + 
tags: + - Catalog API + summary: Commit updates to multiple tables in an atomic operation + operationId: commitTransaction + requestBody: + description: >- + Commit updates to multiple tables in an atomic operation + + + A commit for a single table consists of a table identifier with + requirements and updates. Requirements are assertions that will be + validated before attempting to make and commit changes. For example, + `assert-ref-snapshot-id` will check that a named ref's snapshot ID has + a certain value. + + + Updates are changes to make to table metadata. For example, after + asserting that the current main ref is at the expected snapshot, a + commit may add a new child snapshot and set the ref to the new + snapshot id. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CommitTransactionRequest + required: true + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + examples: + TableToUpdateDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError + '409': + description: >- + Conflict - CommitFailedException, one or more requirements failed. + The client may retry. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '500': + description: >- + An unknown server-side problem occurred; the commit state is + unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Internal Server Error + type: CommitStateUnknownException + code: 500 + '502': + description: >- + A gateway or proxy received an invalid response from the upstream + server; the commit state is unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Invalid response from the upstream server + type: CommitStateUnknownException + code: 502 + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + '504': + description: A server-side gateway timeout occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Gateway timed out during commit + type: CommitStateUnknownException + code: 504 + 5XX: + description: A server-side problem that might not be addressable on the client. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Bad Gateway + type: InternalServerError + code: 502 + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/views: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + get: + tags: + - Catalog API + summary: List all view identifiers underneath a given namespace + description: Return all view identifiers under this namespace + operationId: listViews + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_page-token' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_page-size' + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ListTablesResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + NamespaceNotFound: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - 
Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + post: + tags: + - Catalog API + summary: Create a view in the given namespace + description: Create a view in the given namespace. + operationId: createView + requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CreateViewRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_LoadViewResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + NamespaceNotFound: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '409': + description: Conflict - The view already exists + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + NamespaceAlreadyExists: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_ViewAlreadyExistsError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/namespaces/{namespace}/views/{view}: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + - $ref: 
'#/components/parameters/Apache_Iceberg_REST_Catalog_API_namespace' + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_view' + get: + tags: + - Catalog API + summary: Load a view from the catalog + operationId: loadView + description: >- + Load a view from the catalog. + + + The response contains both configuration and view metadata. The + configuration, if non-empty, is used as additional configuration for the + view that overrides catalog configuration. + + + The response also contains the view's full metadata, matching the view + metadata JSON file. + + + The catalog configuration may contain credentials that should be used + for subsequent requests for the view. The configuration key "token" is + used to pass an access token to be used as a bearer token for view + requests. Otherwise, a token may be passed using an RFC 8693 token type + as a configuration key. For example, + "urn:ietf:params:oauth:token-type:jwt=<JWT-token>". + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_LoadViewResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchViewException, view to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + ViewToLoadDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchViewError + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + -
Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + post: + tags: + - Catalog API + summary: Replace a view + operationId: replaceView + description: Commit updates to a view. + requestBody: + required: true + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CommitViewRequest + responses: + '200': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_LoadViewResponse + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchViewException, view to load does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + ViewToUpdateDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchViewError + '409': + description: Conflict - CommitFailedException. The client may retry. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '500': + description: >- + An unknown server-side problem occurred; the commit state is + unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + example: + error: + message: Internal Server Error + type: CommitStateUnknownException + code: 500 + '502': + description: >- + A gateway or proxy received an invalid response from the upstream + server; the commit state is unknown. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + example: + error: + message: Invalid response from the upstream server + type: CommitStateUnknownException + code: 502 + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + '504': + description: A server-side gateway timeout occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + example: + error: + message: Gateway timed out during commit + type: CommitStateUnknownException + code: 504 + 5XX: + description: A server-side problem that might not be addressable on the client. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + example: + error: + message: Bad Gateway + type: InternalServerError + code: 502 + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + delete: + tags: + - Catalog API + summary: Drop a view from the catalog + operationId: dropView + description: Remove a view from the catalog + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: Not Found - NoSuchViewException, view to drop does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + ViewToDeleteDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchViewError + '419': + $ref: >- + 
#/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + head: + tags: + - Catalog API + summary: Check if a view exists + operationId: viewExists + description: >- + Check if a view exists within a given namespace. This request does not + return a response body. + responses: + '204': + description: Success, no content + '400': + description: Bad Request + '401': + description: Unauthorized + '404': + description: Not Found + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] + /v1/{prefix}/views/rename: + parameters: + - $ref: '#/components/parameters/Apache_Iceberg_REST_Catalog_API_prefix' + post: + tags: + - Catalog API + summary: Rename a view from its current name to a new name + description: >- + Rename a view from one identifier to another. It's valid to move a view + across namespaces, but the server implementation is not required to + support it. 
+ operationId: renameView + requestBody: + description: Current view identifier to rename and new view identifier to rename to + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RenameTableRequest + examples: + RenameViewSameNamespace: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_RenameViewSameNamespace + required: true + responses: + '204': + description: Success, no content + '400': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse + '401': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse + '403': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ForbiddenResponse + '404': + description: >- + Not Found - NoSuchViewException, view to rename does not exist - + NoSuchNamespaceException, The target namespace of the new identifier + does not exist + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + examples: + ViewToRenameDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchViewError + NamespaceToRenameToDoesNotExist: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError + '406': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_UnsupportedOperationResponse + '409': + description: >- + Conflict - The target identifier to rename to already exists as a + table or view + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel + example: + summary: The requested view identifier already exists + value: + error: + message: The given view already exists + type: AlreadyExistsException + code: 409 + '419': + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse + '503': + $ref: >- + 
#/components/responses/Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse + 5XX: + $ref: >- + #/components/responses/Apache_Iceberg_REST_Catalog_API_ServerErrorResponse + security: + - Apache_Iceberg_REST_Catalog_API_OAuth2: + - catalog + - Apache_Iceberg_REST_Catalog_API_BearerAuth: [] +components: + securitySchemes: + Polaris_Management_Service_OAuth2: + type: oauth2 + description: Uses OAuth 2 with client credentials flow + flows: + implicit: + authorizationUrl: '{scheme}://{host}/api/v1/oauth/tokens' + scopes: {} + Apache_Iceberg_REST_Catalog_API_OAuth2: + type: oauth2 + description: >- + This scheme is used for OAuth2 authorization. + + + For unauthorized requests, services should return an appropriate 401 or + 403 response. Implementations must not return altered success (200) + responses when a request is unauthenticated or unauthorized. + + If a separate authorization server is used, substitute the tokenUrl with + the full token path of the external authorization server, and use the + resulting token to access the resources defined in the spec. + flows: + clientCredentials: + tokenUrl: /v1/oauth/tokens + scopes: + catalog: Allows interacting with the Config and Catalog APIs + Apache_Iceberg_REST_Catalog_API_BearerAuth: + type: http + scheme: bearer + schemas: + Polaris_Management_Service_Catalogs: + type: object + description: A list of Catalog objects + properties: + catalogs: + type: array + items: + $ref: '#/components/schemas/Polaris_Management_Service_Catalog' + required: + - catalogs + Polaris_Management_Service_CreateCatalogRequest: + type: object + description: Request to create a new catalog + properties: + catalog: + $ref: '#/components/schemas/Polaris_Management_Service_Catalog' + required: + - catalog + Polaris_Management_Service_Catalog: + type: object + description: >- + A catalog object. A catalog may be internal or external. External + catalogs are managed entirely by an external catalog interface.
Third + party catalogs may be other Iceberg REST implementations or other + services with their own proprietary APIs + properties: + type: + type: string + enum: + - INTERNAL + - EXTERNAL + description: the type of catalog - internal or external + default: INTERNAL + name: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + description: The name of the catalog + properties: + type: object + properties: + default-base-location: + type: string + additionalProperties: + type: string + required: + - default-base-location + createTimestamp: + type: integer + format: int64 + description: >- + The creation time represented as unix epoch timestamp in + milliseconds + lastUpdateTimestamp: + type: integer + format: int64 + description: >- + The last update time represented as unix epoch timestamp in + milliseconds + entityVersion: + type: integer + description: >- + The version of the catalog object used to determine if the catalog + metadata has changed + storageConfigInfo: + $ref: '#/components/schemas/Polaris_Management_Service_StorageConfigInfo' + required: + - name + - type + - storageConfigInfo + - properties + discriminator: + propertyName: type + mapping: + INTERNAL: '#/components/schemas/Polaris_Management_Service_PolarisCatalog' + EXTERNAL: '#/components/schemas/Polaris_Management_Service_ExternalCatalog' + Polaris_Management_Service_PolarisCatalog: + type: object + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_Catalog' + description: >- + The base catalog type - this contains all the fields necessary to + construct an INTERNAL catalog + Polaris_Management_Service_ExternalCatalog: + description: An externally managed catalog + type: object + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_Catalog' + - type: object + properties: + remoteUrl: + type: string + description: URL to the remote catalog API + Polaris_Management_Service_StorageConfigInfo: + type: object + description: 
A storage configuration used by catalogs + properties: + storageType: + type: string + enum: + - S3 + - GCS + - AZURE + - FILE + description: >- + The cloud provider type this storage is built on. FILE is supported + for testing purposes only + allowedLocations: + type: array + items: + type: string + example: >- + For AWS [s3://bucketname/prefix/], for AZURE + [abfss://container@storageaccount.blob.core.windows.net/prefix/], + for GCP [gs://bucketname/prefix/] + required: + - storageType + discriminator: + propertyName: storageType + mapping: + S3: '#/components/schemas/Polaris_Management_Service_AwsStorageConfigInfo' + AZURE: >- + #/components/schemas/Polaris_Management_Service_AzureStorageConfigInfo + GCS: '#/components/schemas/Polaris_Management_Service_GcpStorageConfigInfo' + FILE: >- + #/components/schemas/Polaris_Management_Service_FileStorageConfigInfo + Polaris_Management_Service_AwsStorageConfigInfo: + type: object + description: aws storage configuration info + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_StorageConfigInfo' + properties: + roleArn: + type: string + description: the aws role arn that grants privileges on the S3 buckets + example: arn:aws:iam::123456789001:principal/abc1-b-self1234 + externalId: + type: string + description: >- + an optional external id used to establish a trust relationship with + AWS in the trust policy + userArn: + type: string + description: the aws user arn used to assume the aws role + example: arn:aws:iam::123456789001:user/abc1-b-self1234 + required: + - roleArn + Polaris_Management_Service_AzureStorageConfigInfo: + type: object + description: azure storage configuration info + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_StorageConfigInfo' + properties: + tenantId: + type: string + description: the tenant id that the storage accounts belong to + multiTenantAppName: + type: string + description: the name of the azure client application + consentUrl: + type: string + 
description: URL to the Azure permissions request page + required: + - tenantId + Polaris_Management_Service_GcpStorageConfigInfo: + type: object + description: gcp storage configuration info + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_StorageConfigInfo' + properties: + gcsServiceAccount: + type: string + description: a Google cloud storage service account + Polaris_Management_Service_FileStorageConfigInfo: + type: object + description: file storage configuration info + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_StorageConfigInfo' + Polaris_Management_Service_UpdateCatalogRequest: + description: Updates to apply to a Catalog + type: object + properties: + currentEntityVersion: + type: integer + description: >- + The version of the object onto which this update is applied; if the + object changed, the update will fail and the caller should retry + after fetching the latest version. + properties: + type: object + additionalProperties: + type: string + storageConfigInfo: + $ref: '#/components/schemas/Polaris_Management_Service_StorageConfigInfo' + Polaris_Management_Service_Principals: + description: A list of Principals + type: object + properties: + principals: + type: array + items: + $ref: '#/components/schemas/Polaris_Management_Service_Principal' + required: + - principals + Polaris_Management_Service_PrincipalWithCredentials: + description: >- + A user with its client id and secret.
This type is returned when a new + principal is created or when its credentials are rotated + type: object + properties: + principal: + $ref: '#/components/schemas/Polaris_Management_Service_Principal' + credentials: + type: object + properties: + clientId: + type: string + clientSecret: + type: string + required: + - principal + - credentials + Polaris_Management_Service_CreatePrincipalRequest: + type: object + properties: + principal: + $ref: '#/components/schemas/Polaris_Management_Service_Principal' + credentialRotationRequired: + type: boolean + description: >- + If true, the initial credentials can only be used to call + rotateCredentials + Polaris_Management_Service_Principal: + description: A Polaris principal. + type: object + properties: + name: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + clientId: + type: string + description: >- + The output-only OAuth clientId associated with this principal if + applicable + properties: + type: object + additionalProperties: + type: string + createTimestamp: + type: integer + format: int64 + lastUpdateTimestamp: + type: integer + format: int64 + entityVersion: + type: integer + description: >- + The version of the principal object used to determine if the + principal metadata has changed + required: + - name + Polaris_Management_Service_UpdatePrincipalRequest: + description: Updates to apply to a Principal + type: object + properties: + currentEntityVersion: + type: integer + description: >- + The version of the object onto which this update is applied; if the + object changed, the update will fail and the caller should retry + after fetching the latest version. 
+ properties: + type: object + additionalProperties: + type: string + required: + - currentEntityVersion + - properties + Polaris_Management_Service_PrincipalRoles: + type: object + properties: + roles: + type: array + items: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRole' + required: + - roles + Polaris_Management_Service_GrantPrincipalRoleRequest: + type: object + properties: + principalRole: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRole' + Polaris_Management_Service_CreatePrincipalRoleRequest: + type: object + properties: + principalRole: + $ref: '#/components/schemas/Polaris_Management_Service_PrincipalRole' + Polaris_Management_Service_PrincipalRole: + type: object + properties: + name: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + description: The name of the role + properties: + type: object + additionalProperties: + type: string + createTimestamp: + type: integer + format: int64 + lastUpdateTimestamp: + type: integer + format: int64 + entityVersion: + type: integer + description: >- + The version of the principal role object used to determine if the + principal role metadata has changed + required: + - name + Polaris_Management_Service_UpdatePrincipalRoleRequest: + description: Updates to apply to a Principal Role + type: object + properties: + currentEntityVersion: + type: integer + description: >- + The version of the object onto which this update is applied; if the + object changed, the update will fail and the caller should retry + after fetching the latest version. 
+ properties: + type: object + additionalProperties: + type: string + required: + - currentEntityVersion + - properties + Polaris_Management_Service_CatalogRoles: + type: object + properties: + roles: + type: array + items: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRole' + description: The list of catalog roles + required: + - roles + Polaris_Management_Service_GrantCatalogRoleRequest: + type: object + properties: + catalogRole: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRole' + Polaris_Management_Service_CreateCatalogRoleRequest: + type: object + properties: + catalogRole: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogRole' + Polaris_Management_Service_CatalogRole: + type: object + properties: + name: + type: string + minLength: 1 + maxLength: 256 + pattern: ^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$ + description: The name of the role + properties: + type: object + additionalProperties: + type: string + createTimestamp: + type: integer + format: int64 + lastUpdateTimestamp: + type: integer + format: int64 + entityVersion: + type: integer + description: >- + The version of the catalog role object used to determine if the + catalog role metadata has changed + required: + - name + Polaris_Management_Service_UpdateCatalogRoleRequest: + description: Updates to apply to a Catalog Role + type: object + properties: + currentEntityVersion: + type: integer + description: >- + The version of the object onto which this update is applied; if the + object changed, the update will fail and the caller should retry + after fetching the latest version. 
+ properties: + type: object + additionalProperties: + type: string + required: + - currentEntityVersion + - properties + Polaris_Management_Service_ViewPrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - VIEW_CREATE + - VIEW_DROP + - VIEW_LIST + - VIEW_READ_PROPERTIES + - VIEW_WRITE_PROPERTIES + - VIEW_FULL_METADATA + Polaris_Management_Service_TablePrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - TABLE_DROP + - TABLE_LIST + - TABLE_READ_PROPERTIES + - VIEW_READ_PROPERTIES + - TABLE_WRITE_PROPERTIES + - TABLE_READ_DATA + - TABLE_WRITE_DATA + - TABLE_FULL_METADATA + Polaris_Management_Service_NamespacePrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - CATALOG_MANAGE_CONTENT + - CATALOG_MANAGE_METADATA + - NAMESPACE_CREATE + - TABLE_CREATE + - VIEW_CREATE + - NAMESPACE_DROP + - TABLE_DROP + - VIEW_DROP + - NAMESPACE_LIST + - TABLE_LIST + - VIEW_LIST + - NAMESPACE_READ_PROPERTIES + - TABLE_READ_PROPERTIES + - VIEW_READ_PROPERTIES + - NAMESPACE_WRITE_PROPERTIES + - TABLE_WRITE_PROPERTIES + - VIEW_WRITE_PROPERTIES + - TABLE_READ_DATA + - TABLE_WRITE_DATA + - NAMESPACE_FULL_METADATA + - TABLE_FULL_METADATA + - VIEW_FULL_METADATA + Polaris_Management_Service_CatalogPrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - CATALOG_MANAGE_CONTENT + - CATALOG_MANAGE_METADATA + - CATALOG_READ_PROPERTIES + - CATALOG_WRITE_PROPERTIES + - NAMESPACE_CREATE + - TABLE_CREATE + - VIEW_CREATE + - NAMESPACE_DROP + - TABLE_DROP + - VIEW_DROP + - NAMESPACE_LIST + - TABLE_LIST + - VIEW_LIST + - NAMESPACE_READ_PROPERTIES + - TABLE_READ_PROPERTIES + - VIEW_READ_PROPERTIES + - NAMESPACE_WRITE_PROPERTIES + - TABLE_WRITE_PROPERTIES + - VIEW_WRITE_PROPERTIES + - TABLE_READ_DATA + - TABLE_WRITE_DATA + - NAMESPACE_FULL_METADATA + - TABLE_FULL_METADATA + - VIEW_FULL_METADATA + Polaris_Management_Service_AddGrantRequest: + type: object + properties: + grant: + $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + 
Polaris_Management_Service_RevokeGrantRequest: + type: object + properties: + grant: + $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + Polaris_Management_Service_ViewGrant: + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + - type: object + properties: + namespace: + type: array + items: + type: string + viewName: + type: string + minLength: 1 + maxLength: 256 + privilege: + $ref: '#/components/schemas/Polaris_Management_Service_ViewPrivilege' + required: + - namespace + - viewName + - privilege + Polaris_Management_Service_TableGrant: + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + - type: object + properties: + namespace: + type: array + items: + type: string + tableName: + type: string + minLength: 1 + maxLength: 256 + privilege: + $ref: '#/components/schemas/Polaris_Management_Service_TablePrivilege' + required: + - namespace + - tableName + - privilege + Polaris_Management_Service_NamespaceGrant: + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + - type: object + properties: + namespace: + type: array + items: + type: string + privilege: + $ref: >- + #/components/schemas/Polaris_Management_Service_NamespacePrivilege + required: + - namespace + - privilege + Polaris_Management_Service_CatalogGrant: + allOf: + - $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + - type: object + properties: + privilege: + $ref: '#/components/schemas/Polaris_Management_Service_CatalogPrivilege' + required: + - privilege + Polaris_Management_Service_GrantResource: + type: object + discriminator: + propertyName: type + mapping: + catalog: '#/components/schemas/Polaris_Management_Service_CatalogGrant' + namespace: '#/components/schemas/Polaris_Management_Service_NamespaceGrant' + table: '#/components/schemas/Polaris_Management_Service_TableGrant' + view: '#/components/schemas/Polaris_Management_Service_ViewGrant' + properties: + type: 
+ type: string + enum: + - catalog + - namespace + - table + - view + required: + - type + Polaris_Management_Service_GrantResources: + type: object + properties: + grants: + type: array + items: + $ref: '#/components/schemas/Polaris_Management_Service_GrantResource' + required: + - grants + Apache_Iceberg_REST_Catalog_API_ErrorModel: + type: object + description: >- + JSON error payload returned in a response with further details on the + error + required: + - message + - type + - code + properties: + message: + type: string + description: Human-readable error message + type: + type: string + description: Internal type definition of the error + example: NoSuchNamespaceException + code: + type: integer + minimum: 400 + maximum: 600 + description: HTTP response code + example: 404 + stack: + type: array + items: + type: string + Apache_Iceberg_REST_Catalog_API_CatalogConfig: + type: object + description: Server-provided configuration for the catalog. + required: + - defaults + - overrides + properties: + overrides: + type: object + additionalProperties: + type: string + description: >- + Properties that should be used to override client configuration; + applied after defaults and client configuration. + defaults: + type: object + additionalProperties: + type: string + description: >- + Properties that should be used as default configuration; applied + before client configuration. 
+ Apache_Iceberg_REST_Catalog_API_CreateNamespaceRequest: + type: object + required: + - namespace + properties: + namespace: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Namespace' + properties: + type: object + description: Configured string to string map of properties for the namespace + example: + owner: Hank Bendickson + default: {} + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_UpdateNamespacePropertiesRequest: + type: object + properties: + removals: + type: array + uniqueItems: true + items: + type: string + example: + - department + - access_group + updates: + type: object + example: + owner: Hank Bendickson + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_RenameTableRequest: + type: object + required: + - source + - destination + properties: + source: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableIdentifier' + destination: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableIdentifier' + Apache_Iceberg_REST_Catalog_API_Namespace: + description: Reference to one or more levels of a namespace + type: array + items: + type: string + example: + - accounting + - tax + Apache_Iceberg_REST_Catalog_API_PageToken: + description: >- + An opaque token that allows clients to make use of pagination for list + APIs (e.g. ListTables). Clients may initiate the first paginated request + by sending an empty query parameter `pageToken` to the server. + + Servers that support pagination should identify the `pageToken` + parameter and return a `next-page-token` in the response if there are + more results available. After the initial request, the value of + `next-page-token` from each response must be used as the `pageToken` + parameter value for the next request. The server must return `null` + value for the `next-page-token` in the last response. 
+ + Servers that support pagination must return all results in a single + response with the value of `next-page-token` set to `null` if the query + parameter `pageToken` is not set in the request. + + Servers that do not support pagination should ignore the `pageToken` + parameter and return all results in a single response. The + `next-page-token` must be omitted from the response. + + Clients must interpret either `null` or missing response value of + `next-page-token` as the end of the listing results. + type: string + nullable: true + Apache_Iceberg_REST_Catalog_API_TableIdentifier: + type: object + required: + - namespace + - name + properties: + namespace: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Namespace' + name: + type: string + nullable: false + Apache_Iceberg_REST_Catalog_API_PrimitiveType: + type: string + example: + - long + - string + - fixed[16] + - decimal(10,2) + Apache_Iceberg_REST_Catalog_API_StructField: + type: object + required: + - id + - name + - type + - required + properties: + id: + type: integer + name: + type: string + type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Type' + required: + type: boolean + doc: + type: string + Apache_Iceberg_REST_Catalog_API_StructType: + type: object + required: + - type + - fields + properties: + type: + type: string + enum: + - struct + fields: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_StructField' + Apache_Iceberg_REST_Catalog_API_ListType: + type: object + required: + - type + - element-id + - element + - element-required + properties: + type: + type: string + enum: + - list + element-id: + type: integer + element: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Type' + element-required: + type: boolean + Apache_Iceberg_REST_Catalog_API_MapType: + type: object + required: + - type + - key-id + - key + - value-id + - value + - value-required + properties: + type: + type: string + enum: + - map + key-id: + 
type: integer + key: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Type' + value-id: + type: integer + value: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Type' + value-required: + type: boolean + Apache_Iceberg_REST_Catalog_API_Type: + oneOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PrimitiveType' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_StructType' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ListType' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_MapType' + Apache_Iceberg_REST_Catalog_API_Schema: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_StructType' + - type: object + properties: + schema-id: + type: integer + readOnly: true + identifier-field-ids: + type: array + items: + type: integer + Apache_Iceberg_REST_Catalog_API_Expression: + oneOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AndOrExpression' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_NotExpression' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SetExpression' + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_LiteralExpression + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_UnaryExpression' + Apache_Iceberg_REST_Catalog_API_ExpressionType: + type: string + example: + - eq + - and + - or + - not + - in + - not-in + - lt + - lt-eq + - gt + - gt-eq + - not-eq + - starts-with + - not-starts-with + - is-null + - not-null + - is-nan + - not-nan + Apache_Iceberg_REST_Catalog_API_AndOrExpression: + type: object + required: + - type + - left + - right + properties: + type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ExpressionType' + enum: + - and + - or + left: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Expression' + right: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Expression' + 
Apache_Iceberg_REST_Catalog_API_NotExpression: + type: object + required: + - type + - child + properties: + type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ExpressionType' + enum: + - not + child: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Expression' + Apache_Iceberg_REST_Catalog_API_UnaryExpression: + type: object + required: + - type + - term + - value + properties: + type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ExpressionType' + enum: + - is-null + - not-null + - is-nan + - not-nan + term: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Term' + value: + type: object + Apache_Iceberg_REST_Catalog_API_LiteralExpression: + type: object + required: + - type + - term + - value + properties: + type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ExpressionType' + enum: + - lt + - lt-eq + - gt + - gt-eq + - eq + - not-eq + - starts-with + - not-starts-with + term: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Term' + value: + type: object + Apache_Iceberg_REST_Catalog_API_SetExpression: + type: object + required: + - type + - term + - values + properties: + type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ExpressionType' + enum: + - in + - not-in + term: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Term' + values: + type: array + items: + type: object + Apache_Iceberg_REST_Catalog_API_Term: + oneOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Reference' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TransformTerm' + Apache_Iceberg_REST_Catalog_API_Reference: + type: string + example: + - column-name + Apache_Iceberg_REST_Catalog_API_TransformTerm: + type: object + required: + - type + - transform + - term + properties: + type: + type: string + enum: + - transform + transform: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Transform' + term: + $ref: 
'#/components/schemas/Apache_Iceberg_REST_Catalog_API_Reference' + Apache_Iceberg_REST_Catalog_API_Transform: + type: string + example: + - identity + - year + - month + - day + - hour + - bucket[256] + - truncate[16] + Apache_Iceberg_REST_Catalog_API_PartitionField: + type: object + required: + - source-id + - transform + - name + properties: + field-id: + type: integer + source-id: + type: integer + name: + type: string + transform: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Transform' + Apache_Iceberg_REST_Catalog_API_PartitionSpec: + type: object + required: + - fields + properties: + spec-id: + type: integer + readOnly: true + fields: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_PartitionField + Apache_Iceberg_REST_Catalog_API_SortDirection: + type: string + enum: + - asc + - desc + Apache_Iceberg_REST_Catalog_API_NullOrder: + type: string + enum: + - nulls-first + - nulls-last + Apache_Iceberg_REST_Catalog_API_SortField: + type: object + required: + - source-id + - transform + - direction + - null-order + properties: + source-id: + type: integer + transform: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Transform' + direction: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SortDirection' + null-order: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_NullOrder' + Apache_Iceberg_REST_Catalog_API_SortOrder: + type: object + required: + - order-id + - fields + properties: + order-id: + type: integer + readOnly: true + fields: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SortField' + Apache_Iceberg_REST_Catalog_API_Snapshot: + type: object + required: + - snapshot-id + - timestamp-ms + - manifest-list + - summary + properties: + snapshot-id: + type: integer + format: int64 + parent-snapshot-id: + type: integer + format: int64 + sequence-number: + type: integer + format: int64 + timestamp-ms: + type: integer + format: 
int64 + manifest-list: + type: string + description: Location of the snapshot's manifest list file + summary: + type: object + required: + - operation + properties: + operation: + type: string + enum: + - append + - replace + - overwrite + - delete + additionalProperties: + type: string + schema-id: + type: integer + Apache_Iceberg_REST_Catalog_API_SnapshotReference: + type: object + required: + - type + - snapshot-id + properties: + type: + type: string + enum: + - tag + - branch + snapshot-id: + type: integer + format: int64 + max-ref-age-ms: + type: integer + format: int64 + max-snapshot-age-ms: + type: integer + format: int64 + min-snapshots-to-keep: + type: integer + Apache_Iceberg_REST_Catalog_API_SnapshotReferences: + type: object + additionalProperties: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SnapshotReference' + Apache_Iceberg_REST_Catalog_API_SnapshotLog: + type: array + items: + type: object + required: + - snapshot-id + - timestamp-ms + properties: + snapshot-id: + type: integer + format: int64 + timestamp-ms: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_MetadataLog: + type: array + items: + type: object + required: + - metadata-file + - timestamp-ms + properties: + metadata-file: + type: string + timestamp-ms: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_TableMetadata: + type: object + required: + - format-version + - table-uuid + properties: + format-version: + type: integer + minimum: 1 + maximum: 2 + table-uuid: + type: string + location: + type: string + last-updated-ms: + type: integer + format: int64 + properties: + type: object + additionalProperties: + type: string + schemas: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Schema' + current-schema-id: + type: integer + last-column-id: + type: integer + partition-specs: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PartitionSpec' + default-spec-id: + type: 
integer + last-partition-id: + type: integer + sort-orders: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SortOrder' + default-sort-order-id: + type: integer + snapshots: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Snapshot' + refs: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SnapshotReferences + current-snapshot-id: + type: integer + format: int64 + last-sequence-number: + type: integer + format: int64 + snapshot-log: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SnapshotLog' + metadata-log: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_MetadataLog' + statistics-files: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_StatisticsFile + partition-statistics-files: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_PartitionStatisticsFile + Apache_Iceberg_REST_Catalog_API_SQLViewRepresentation: + type: object + required: + - type + - sql + - dialect + properties: + type: + type: string + sql: + type: string + dialect: + type: string + Apache_Iceberg_REST_Catalog_API_ViewRepresentation: + oneOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SQLViewRepresentation + Apache_Iceberg_REST_Catalog_API_ViewHistoryEntry: + type: object + required: + - version-id + - timestamp-ms + properties: + version-id: + type: integer + timestamp-ms: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_ViewVersion: + type: object + required: + - version-id + - timestamp-ms + - schema-id + - summary + - representations + - default-namespace + properties: + version-id: + type: integer + timestamp-ms: + type: integer + format: int64 + schema-id: + type: integer + description: Schema ID to set as current, or -1 to set last added schema + summary: + type: object + additionalProperties: + type: string + representations: + type: array + items: + $ref: >- 
+ #/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewRepresentation + default-catalog: + type: string + default-namespace: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Namespace' + Apache_Iceberg_REST_Catalog_API_ViewMetadata: + type: object + required: + - view-uuid + - format-version + - location + - current-version-id + - versions + - version-log + - schemas + properties: + view-uuid: + type: string + format-version: + type: integer + minimum: 1 + maximum: 1 + location: + type: string + current-version-id: + type: integer + versions: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewVersion' + version-log: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewHistoryEntry + schemas: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Schema' + properties: + type: object + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_BaseUpdate: + discriminator: + propertyName: action + mapping: + assign-uuid: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssignUUIDUpdate + upgrade-format-version: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_UpgradeFormatVersionUpdate + add-schema: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSchemaUpdate' + set-current-schema: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetCurrentSchemaUpdate + add-spec: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddPartitionSpecUpdate + set-default-spec: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetDefaultSpecUpdate + add-sort-order: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSortOrderUpdate + set-default-sort-order: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetDefaultSortOrderUpdate + add-snapshot: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSnapshotUpdate + set-snapshot-ref: >- + 
#/components/schemas/Apache_Iceberg_REST_Catalog_API_SetSnapshotRefUpdate + remove-snapshots: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemoveSnapshotsUpdate + remove-snapshot-ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemoveSnapshotRefUpdate + set-location: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetLocationUpdate + set-properties: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetPropertiesUpdate + remove-properties: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemovePropertiesUpdate + add-view-version: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddViewVersionUpdate + set-current-view-version: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetCurrentViewVersionUpdate + set-statistics: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetStatisticsUpdate + remove-statistics: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemoveStatisticsUpdate + set-partition-statistics: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetPartitionStatisticsUpdate + remove-partition-statistics: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemovePartitionStatisticsUpdate + type: object + required: + - action + properties: + action: + type: string + Apache_Iceberg_REST_Catalog_API_AssignUUIDUpdate: + description: >- + Assigning a UUID to a table/view should only be done when creating the + table/view. 
It is not safe to re-assign the UUID if a table/view already + has a UUID assigned + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - uuid + properties: + action: + type: string + enum: + - assign-uuid + uuid: + type: string + Apache_Iceberg_REST_Catalog_API_UpgradeFormatVersionUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - format-version + properties: + action: + type: string + enum: + - upgrade-format-version + format-version: + type: integer + Apache_Iceberg_REST_Catalog_API_AddSchemaUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - schema + properties: + action: + type: string + enum: + - add-schema + schema: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Schema' + last-column-id: + type: integer + description: >- + The highest assigned column ID for the table. This is used to ensure + columns are always assigned an unused ID when evolving schemas. When + omitted, it will be computed on the server side. 
+ Apache_Iceberg_REST_Catalog_API_SetCurrentSchemaUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - schema-id + properties: + action: + type: string + enum: + - set-current-schema + schema-id: + type: integer + description: Schema ID to set as current, or -1 to set last added schema + Apache_Iceberg_REST_Catalog_API_AddPartitionSpecUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - spec + properties: + action: + type: string + enum: + - add-spec + spec: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PartitionSpec' + Apache_Iceberg_REST_Catalog_API_SetDefaultSpecUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - spec-id + properties: + action: + type: string + enum: + - set-default-spec + spec-id: + type: integer + description: >- + Partition spec ID to set as the default, or -1 to set last added + spec + Apache_Iceberg_REST_Catalog_API_AddSortOrderUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - sort-order + properties: + action: + type: string + enum: + - add-sort-order + sort-order: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SortOrder' + Apache_Iceberg_REST_Catalog_API_SetDefaultSortOrderUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - sort-order-id + properties: + action: + type: string + enum: + - set-default-sort-order + sort-order-id: + type: integer + description: >- + Sort order ID to set as the default, or -1 to set last added sort + order + Apache_Iceberg_REST_Catalog_API_AddSnapshotUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - snapshot + properties: + action: + type: string + enum: + - add-snapshot + 
snapshot: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Snapshot' + Apache_Iceberg_REST_Catalog_API_SetSnapshotRefUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SnapshotReference + required: + - action + - ref-name + properties: + action: + type: string + enum: + - set-snapshot-ref + ref-name: + type: string + Apache_Iceberg_REST_Catalog_API_RemoveSnapshotsUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - snapshot-ids + properties: + action: + type: string + enum: + - remove-snapshots + snapshot-ids: + type: array + items: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_RemoveSnapshotRefUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - ref-name + properties: + action: + type: string + enum: + - remove-snapshot-ref + ref-name: + type: string + Apache_Iceberg_REST_Catalog_API_SetLocationUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - location + properties: + action: + type: string + enum: + - set-location + location: + type: string + Apache_Iceberg_REST_Catalog_API_SetPropertiesUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - updates + properties: + action: + type: string + enum: + - set-properties + updates: + type: object + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_RemovePropertiesUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - removals + properties: + action: + type: string + enum: + - remove-properties + removals: + type: array + items: + type: string + Apache_Iceberg_REST_Catalog_API_AddViewVersionUpdate: + allOf: + - $ref: 
'#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - view-version + properties: + action: + type: string + enum: + - add-view-version + view-version: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewVersion' + Apache_Iceberg_REST_Catalog_API_SetCurrentViewVersionUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - view-version-id + properties: + action: + type: string + enum: + - set-current-view-version + view-version-id: + type: integer + description: >- + The view version id to set as current, or -1 to set last added view + version id + Apache_Iceberg_REST_Catalog_API_SetStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - snapshot-id + - statistics + properties: + action: + type: string + enum: + - set-statistics + snapshot-id: + type: integer + format: int64 + statistics: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_StatisticsFile' + Apache_Iceberg_REST_Catalog_API_RemoveStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - snapshot-id + properties: + action: + type: string + enum: + - remove-statistics + snapshot-id: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_SetPartitionStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - partition-statistics + properties: + action: + type: string + enum: + - set-partition-statistics + partition-statistics: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_PartitionStatisticsFile + Apache_Iceberg_REST_Catalog_API_RemovePartitionStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BaseUpdate' + required: + - action + - snapshot-id + properties: + action: + type: string + enum: + - 
remove-partition-statistics + snapshot-id: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_TableUpdate: + anyOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssignUUIDUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_UpgradeFormatVersionUpdate + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSchemaUpdate' + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetCurrentSchemaUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddPartitionSpecUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetDefaultSpecUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSortOrderUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetDefaultSortOrderUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSnapshotUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetSnapshotRefUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemoveSnapshotsUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemoveSnapshotRefUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetLocationUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetPropertiesUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemovePropertiesUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetStatisticsUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemoveStatisticsUpdate + Apache_Iceberg_REST_Catalog_API_ViewUpdate: + anyOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssignUUIDUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_UpgradeFormatVersionUpdate + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AddSchemaUpdate' + - $ref: >- + 
#/components/schemas/Apache_Iceberg_REST_Catalog_API_SetLocationUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetPropertiesUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_RemovePropertiesUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AddViewVersionUpdate + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_SetCurrentViewVersionUpdate + Apache_Iceberg_REST_Catalog_API_TableRequirement: + discriminator: + propertyName: type + mapping: + assert-create: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertCreate' + assert-table-uuid: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertTableUUID' + assert-ref-snapshot-id: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertRefSnapshotId + assert-last-assigned-field-id: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertLastAssignedFieldId + assert-current-schema-id: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertCurrentSchemaId + assert-last-assigned-partition-id: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertLastAssignedPartitionId + assert-default-spec-id: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertDefaultSpecId + assert-default-sort-order-id: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertDefaultSortOrderId + type: object + required: + - type + properties: + type: + type: string + Apache_Iceberg_REST_Catalog_API_AssertCreate: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + type: object + description: The table must not already exist; used for create transactions + required: + - type + properties: + type: + type: string + enum: + - assert-create + Apache_Iceberg_REST_Catalog_API_AssertTableUUID: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: The table UUID must match the requirement's `uuid` + 
required: + - type + - uuid + properties: + type: + type: string + enum: + - assert-table-uuid + uuid: + type: string + Apache_Iceberg_REST_Catalog_API_AssertRefSnapshotId: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: >- + The table branch or tag identified by the requirement's `ref` must + reference the requirement's `snapshot-id`; if `snapshot-id` is `null` or + missing, the ref must not already exist + required: + - type + - ref + - snapshot-id + properties: + type: + type: string + enum: + - assert-ref-snapshot-id + ref: + type: string + snapshot-id: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_AssertLastAssignedFieldId: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: >- + The table's last assigned column id must match the requirement's + `last-assigned-field-id` + required: + - type + - last-assigned-field-id + properties: + type: + type: string + enum: + - assert-last-assigned-field-id + last-assigned-field-id: + type: integer + Apache_Iceberg_REST_Catalog_API_AssertCurrentSchemaId: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: >- + The table's current schema id must match the requirement's + `current-schema-id` + required: + - type + - current-schema-id + properties: + type: + type: string + enum: + - assert-current-schema-id + current-schema-id: + type: integer + Apache_Iceberg_REST_Catalog_API_AssertLastAssignedPartitionId: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: >- + The table's last assigned partition id must match the requirement's + `last-assigned-partition-id` + required: + - type + - last-assigned-partition-id + properties: + type: + type: string + enum: + - assert-last-assigned-partition-id + last-assigned-partition-id: + type: integer + 
Apache_Iceberg_REST_Catalog_API_AssertDefaultSpecId: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: >- + The table's default spec id must match the requirement's + `default-spec-id` + required: + - type + - default-spec-id + properties: + type: + type: string + enum: + - assert-default-spec-id + default-spec-id: + type: integer + Apache_Iceberg_REST_Catalog_API_AssertDefaultSortOrderId: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + description: >- + The table's default sort order id must match the requirement's + `default-sort-order-id` + required: + - type + - default-sort-order-id + properties: + type: + type: string + enum: + - assert-default-sort-order-id + default-sort-order-id: + type: integer + Apache_Iceberg_REST_Catalog_API_ViewRequirement: + discriminator: + propertyName: type + mapping: + assert-view-uuid: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_AssertViewUUID' + type: object + required: + - type + properties: + type: + type: string + Apache_Iceberg_REST_Catalog_API_AssertViewUUID: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewRequirement' + description: The view UUID must match the requirement's `uuid` + required: + - type + - uuid + properties: + type: + type: string + enum: + - assert-view-uuid + uuid: + type: string + Apache_Iceberg_REST_Catalog_API_LoadTableResult: + description: > + Result used when a table is successfully loaded. + + + + The table metadata JSON is returned in the `metadata` field. The + corresponding file location of table metadata should be returned in the + `metadata-location` field, unless the metadata is not yet committed. For + example, a create transaction may return metadata that is staged but not + committed. + + Clients can check whether metadata has changed by comparing metadata + locations after the table has been created. 
+ + + + The `config` map returns table-specific configuration for the table's + resources, including its HTTP client and FileIO. For example, config may + contain a specific FileIO implementation class for the table depending + on its underlying storage. + + + + The following configurations should be respected by clients: + + + ## General Configurations + + + - `token`: Authorization bearer token to use for table requests if + OAuth2 security is enabled + + + ## AWS Configurations + + + The following configurations should be respected when working with + tables stored in AWS S3 + - `client.region`: region to configure client for making requests to AWS + - `s3.access-key-id`: id for credentials that provide access to the data in S3 + - `s3.secret-access-key`: secret for credentials that provide access to data in S3 + - `s3.session-token`: if present, this value should be used as the session token + - `s3.remote-signing-enabled`: if `true`, remote signing should be performed as described in the `s3-signer-open-api.yaml` specification + type: object + required: + - metadata + properties: + metadata-location: + type: string + description: May be null if the table is staged as part of a transaction + metadata: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableMetadata' + config: + type: object + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_CommitTableRequest: + type: object + required: + - requirements + - updates + properties: + identifier: + description: >- + Table identifier to update; must be present for + CommitTransactionRequest + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableIdentifier' + requirements: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableRequirement + updates: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableUpdate' + Apache_Iceberg_REST_Catalog_API_CommitViewRequest: + type: object + required: + - 
updates + properties: + identifier: + description: View identifier to update + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableIdentifier' + requirements: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewRequirement + updates: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewUpdate' + Apache_Iceberg_REST_Catalog_API_CommitTransactionRequest: + type: object + required: + - table-changes + properties: + table-changes: + type: array + items: + description: Table commit request; must provide an `identifier` + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CommitTableRequest + Apache_Iceberg_REST_Catalog_API_CreateTableRequest: + type: object + required: + - name + - schema + properties: + name: + type: string + location: + type: string + schema: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Schema' + partition-spec: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PartitionSpec' + write-order: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_SortOrder' + stage-create: + type: boolean + properties: + type: object + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_RegisterTableRequest: + type: object + required: + - name + - metadata-location + properties: + name: + type: string + metadata-location: + type: string + Apache_Iceberg_REST_Catalog_API_CreateViewRequest: + type: object + required: + - name + - schema + - view-version + - properties + properties: + name: + type: string + location: + type: string + schema: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Schema' + view-version: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewVersion' + description: >- + The view version to create; the schema-id sent within the + view-version will be replaced with the id assigned to the provided schema + properties: + type: object + additionalProperties: + type: string + 
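`CommitTransactionRequest` above wraps one or more `CommitTableRequest` objects, each of which must carry an `identifier` when used inside a transaction. A minimal sketch, with the namespace, table name, and UUID invented for illustration:

```python
import json

# Hypothetical multi-table transaction body: each table change names its
# table via `identifier` and states its preconditions in `requirements`.
commit_txn = {
    "table-changes": [
        {
            "identifier": {"namespace": ["accounting"], "name": "sales"},
            "requirements": [
                {
                    "type": "assert-table-uuid",
                    "uuid": "eb26bdb1-a1d8-4aa6-990e-da940875492c",
                }
            ],
            "updates": [],
        }
    ]
}
body = json.dumps(commit_txn)
```

If any requirement in any table change fails, the whole transaction is rejected.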
Apache_Iceberg_REST_Catalog_API_LoadViewResult: + description: > + Result used when a view is successfully loaded. + + + + The view metadata JSON is returned in the `metadata` field. The + corresponding file location of view metadata is returned in the + `metadata-location` field. + + Clients can check whether metadata has changed by comparing metadata + locations after the view has been created. + + + The `config` map returns view-specific configuration for the view's + resources. + + + The following configurations should be respected by clients: + + + ## General Configurations + + + - `token`: Authorization bearer token to use for view requests if OAuth2 + security is enabled + type: object + required: + - metadata-location + - metadata + properties: + metadata-location: + type: string + metadata: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ViewMetadata' + config: + type: object + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_TokenType: + type: string + enum: + - urn:ietf:params:oauth:token-type:access_token + - urn:ietf:params:oauth:token-type:refresh_token + - urn:ietf:params:oauth:token-type:id_token + - urn:ietf:params:oauth:token-type:saml1 + - urn:ietf:params:oauth:token-type:saml2 + - urn:ietf:params:oauth:token-type:jwt + description: |- + Token type identifier, from RFC 8693 Section 3 + + See https://datatracker.ietf.org/doc/html/rfc8693#section-3 + Apache_Iceberg_REST_Catalog_API_OAuthClientCredentialsRequest: + description: |- + OAuth2 client credentials request + + See https://datatracker.ietf.org/doc/html/rfc6749#section-4.4 + type: object + required: + - grant_type + - client_id + - client_secret + properties: + grant_type: + type: string + enum: + - client_credentials + scope: + type: string + client_id: + type: string + description: >- + Client ID + + + This can be sent in the request body, but OAuth2 recommends sending + it in a Basic Authorization header. 
+ client_secret: + type: string + description: >- + Client secret + + + This can be sent in the request body, but OAuth2 recommends sending + it in a Basic Authorization header. + Apache_Iceberg_REST_Catalog_API_OAuthTokenExchangeRequest: + description: |- + OAuth2 token exchange request + + See https://datatracker.ietf.org/doc/html/rfc8693 + type: object + required: + - grant_type + - subject_token + - subject_token_type + properties: + grant_type: + type: string + enum: + - urn:ietf:params:oauth:grant-type:token-exchange + scope: + type: string + requested_token_type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TokenType' + subject_token: + type: string + description: Subject token for token exchange request + subject_token_type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TokenType' + actor_token: + type: string + description: Actor token for token exchange request + actor_token_type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TokenType' + Apache_Iceberg_REST_Catalog_API_OAuthTokenRequest: + anyOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_OAuthClientCredentialsRequest + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_OAuthTokenExchangeRequest + Apache_Iceberg_REST_Catalog_API_CounterResult: + type: object + required: + - unit + - value + properties: + unit: + type: string + value: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_TimerResult: + type: object + required: + - time-unit + - count + - total-duration + properties: + time-unit: + type: string + count: + type: integer + format: int64 + total-duration: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_MetricResult: + anyOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_CounterResult' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TimerResult' + Apache_Iceberg_REST_Catalog_API_Metrics: + type: object + additionalProperties: + $ref: 
'#/components/schemas/Apache_Iceberg_REST_Catalog_API_MetricResult' + example: + metrics: + total-planning-duration: + count: 1 + time-unit: nanoseconds + total-duration: 2644235116 + result-data-files: + unit: count + value: 1 + result-delete-files: + unit: count + value: 0 + total-data-manifests: + unit: count + value: 1 + total-delete-manifests: + unit: count + value: 0 + scanned-data-manifests: + unit: count + value: 1 + skipped-data-manifests: + unit: count + value: 0 + total-file-size-bytes: + unit: bytes + value: 10 + total-delete-file-size-bytes: + unit: bytes + value: 0 + Apache_Iceberg_REST_Catalog_API_ReportMetricsRequest: + anyOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ScanReport' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_CommitReport' + required: + - report-type + properties: + report-type: + type: string + Apache_Iceberg_REST_Catalog_API_ScanReport: + type: object + required: + - table-name + - snapshot-id + - filter + - schema-id + - projected-field-ids + - projected-field-names + - metrics + properties: + table-name: + type: string + snapshot-id: + type: integer + format: int64 + filter: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Expression' + schema-id: + type: integer + projected-field-ids: + type: array + items: + type: integer + projected-field-names: + type: array + items: + type: string + metrics: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Metrics' + metadata: + type: object + additionalProperties: + type: string + Apache_Iceberg_REST_Catalog_API_CommitReport: + type: object + required: + - table-name + - snapshot-id + - sequence-number + - operation + - metrics + properties: + table-name: + type: string + snapshot-id: + type: integer + format: int64 + sequence-number: + type: integer + format: int64 + operation: + type: string + metrics: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Metrics' + metadata: + type: object + additionalProperties: + 
type: string + Apache_Iceberg_REST_Catalog_API_NotificationRequest: + required: + - notification-type + properties: + notification-type: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_NotificationType + payload: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableUpdateNotification + Apache_Iceberg_REST_Catalog_API_NotificationType: + type: string + enum: + - UNKNOWN + - CREATE + - UPDATE + - DROP + Apache_Iceberg_REST_Catalog_API_TableUpdateNotification: + type: object + required: + - table-name + - timestamp + - table-uuid + - metadata-location + properties: + table-name: + type: string + timestamp: + type: integer + format: int64 + table-uuid: + type: string + metadata-location: + type: string + metadata: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableMetadata' + Apache_Iceberg_REST_Catalog_API_OAuthError: + type: object + required: + - error + properties: + error: + type: string + enum: + - invalid_request + - invalid_client + - invalid_grant + - unauthorized_client + - unsupported_grant_type + - invalid_scope + error_description: + type: string + error_uri: + type: string + Apache_Iceberg_REST_Catalog_API_OAuthTokenResponse: + type: object + required: + - access_token + - token_type + properties: + access_token: + type: string + description: The access token, for client credentials or token exchange + token_type: + type: string + enum: + - bearer + - mac + - N_A + description: |- + Access token type for client credentials or token exchange + + See https://datatracker.ietf.org/doc/html/rfc6749#section-7.1 + expires_in: + type: integer + description: >- + Lifetime of the access token in seconds for client credentials or + token exchange + issued_token_type: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TokenType' + refresh_token: + type: string + description: Refresh token for client credentials or token exchange + scope: + type: string + description: Authorization scope for client 
credentials or token exchange + Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse: + description: JSON wrapper for all error responses (non-2xx) + type: object + required: + - error + properties: + error: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel' + additionalProperties: false + example: + error: + message: The server does not support this operation + type: UnsupportedOperationException + code: 406 + Apache_Iceberg_REST_Catalog_API_CreateNamespaceResponse: + type: object + required: + - namespace + properties: + namespace: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Namespace' + properties: + type: object + additionalProperties: + type: string + description: Properties stored on the namespace, if supported by the server. + example: + owner: Ralph + created_at: '1452120468' + default: {} + Apache_Iceberg_REST_Catalog_API_GetNamespaceResponse: + type: object + required: + - namespace + properties: + namespace: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Namespace' + properties: + type: object + description: >- + Properties stored on the namespace, if supported by the server. If + the server does not support namespace properties, it should return + null for this field. If namespace properties are supported, but none + are set, it should return an empty object. 
+ additionalProperties: + type: string + example: + owner: Ralph + transient_lastDdlTime: '1452120468' + default: {} + nullable: true + Apache_Iceberg_REST_Catalog_API_ListTablesResponse: + type: object + properties: + next-page-token: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PageToken' + identifiers: + type: array + uniqueItems: true + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TableIdentifier + Apache_Iceberg_REST_Catalog_API_ListNamespacesResponse: + type: object + properties: + next-page-token: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PageToken' + namespaces: + type: array + uniqueItems: true + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_Namespace' + Apache_Iceberg_REST_Catalog_API_UpdateNamespacePropertiesResponse: + type: object + required: + - updated + - removed + properties: + updated: + description: List of property keys that were added or updated + type: array + uniqueItems: true + items: + type: string + removed: + description: List of properties that were removed + type: array + items: + type: string + missing: + type: array + items: + type: string + description: >- + List of properties requested for removal that were not found in the + namespace's properties. Represents a partial success response. + Servers do not need to implement this. 
+ nullable: true + Apache_Iceberg_REST_Catalog_API_CommitTableResponse: + type: object + required: + - metadata-location + - metadata + properties: + metadata-location: + type: string + metadata: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TableMetadata' + Apache_Iceberg_REST_Catalog_API_StatisticsFile: + type: object + required: + - snapshot-id + - statistics-path + - file-size-in-bytes + - file-footer-size-in-bytes + - blob-metadata + properties: + snapshot-id: + type: integer + format: int64 + statistics-path: + type: string + file-size-in-bytes: + type: integer + format: int64 + file-footer-size-in-bytes: + type: integer + format: int64 + blob-metadata: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BlobMetadata' + Apache_Iceberg_REST_Catalog_API_BlobMetadata: + type: object + required: + - type + - snapshot-id + - sequence-number + - fields + properties: + type: + type: string + snapshot-id: + type: integer + format: int64 + sequence-number: + type: integer + format: int64 + fields: + type: array + items: + type: integer + properties: + type: object + Apache_Iceberg_REST_Catalog_API_PartitionStatisticsFile: + type: object + required: + - snapshot-id + - statistics-path + - file-size-in-bytes + properties: + snapshot-id: + type: integer + format: int64 + statistics-path: + type: string + file-size-in-bytes: + type: integer + format: int64 + Apache_Iceberg_REST_Catalog_API_BooleanTypeValue: + type: boolean + example: true + Apache_Iceberg_REST_Catalog_API_IntegerTypeValue: + type: integer + example: 42 + Apache_Iceberg_REST_Catalog_API_LongTypeValue: + type: integer + format: int64 + example: 9223372036854776000 + Apache_Iceberg_REST_Catalog_API_FloatTypeValue: + type: number + format: float + example: 3.14 + Apache_Iceberg_REST_Catalog_API_DoubleTypeValue: + type: number + format: double + example: 123.456 + Apache_Iceberg_REST_Catalog_API_DecimalTypeValue: + type: string + description: >- + Decimal type 
values are serialized as strings. Decimals with a positive + scale serialize as numeric plain text, while decimals with a negative + scale use scientific notation and the exponent will be equal to the + negated scale. For instance, a decimal with a positive scale is + '123.4500', with zero scale is '2', and with a negative scale is + '2E+20' + example: '123.4500' + Apache_Iceberg_REST_Catalog_API_StringTypeValue: + type: string + example: hello + Apache_Iceberg_REST_Catalog_API_UUIDTypeValue: + type: string + format: uuid + pattern: ^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$ + maxLength: 36 + minLength: 36 + description: >- + UUID type values are serialized as a 36-character lowercase string in + standard UUID format as specified by RFC-4122 + example: eb26bdb1-a1d8-4aa6-990e-da940875492c + Apache_Iceberg_REST_Catalog_API_DateTypeValue: + type: string + format: date + description: Date type values follow the 'YYYY-MM-DD' ISO-8601 standard date format + example: '2007-12-03' + Apache_Iceberg_REST_Catalog_API_TimeTypeValue: + type: string + description: >- + Time type values follow the 'HH:MM:SS.ssssss' ISO-8601 format with + microsecond precision + example: '22:31:08.123456' + Apache_Iceberg_REST_Catalog_API_TimestampTypeValue: + type: string + description: >- + Timestamp type values follow the 'YYYY-MM-DDTHH:MM:SS.ssssss' ISO-8601 + format with microsecond precision + example: '2007-12-03T10:15:30.123456' + Apache_Iceberg_REST_Catalog_API_TimestampTzTypeValue: + type: string + description: >- + TimestampTz type values follow the 'YYYY-MM-DDTHH:MM:SS.ssssss+00:00' + ISO-8601 format with microsecond precision, and a timezone offset + (+00:00 for UTC) + example: '2007-12-03T10:15:30.123456+00:00' + Apache_Iceberg_REST_Catalog_API_TimestampNanoTypeValue: + type: string + description: >- + Timestamp_ns type values follow the 'YYYY-MM-DDTHH:MM:SS.sssssssss' + ISO-8601 format with nanosecond precision + example: '2007-12-03T10:15:30.123456789' + 
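The decimal serialization rule above can be checked with Python's `decimal` module, whose string form happens to follow the same scale convention described for `DecimalTypeValue`:

```python
from decimal import Decimal

# Positive scale: plain numeric text, trailing zeros preserved.
positive = str(Decimal("123.4500"))
# Zero scale: a plain integer string.
zero = str(Decimal("2"))
# Negative scale: scientific notation; the exponent equals the negated scale.
negative = str(Decimal("2E+20"))
```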
Apache_Iceberg_REST_Catalog_API_TimestampTzNanoTypeValue: + type: string + description: >- + Timestamp_ns type values follow the + 'YYYY-MM-DDTHH:MM:SS.sssssssss+00:00' ISO-8601 format with nanosecond + precision, and a timezone offset (+00:00 for UTC) + example: '2007-12-03T10:15:30.123456789+00:00' + Apache_Iceberg_REST_Catalog_API_FixedTypeValue: + type: string + description: >- + Fixed length type values are stored and serialized as an uppercase + hexadecimal string preserving the fixed length + example: 78797A + Apache_Iceberg_REST_Catalog_API_BinaryTypeValue: + type: string + description: >- + Binary type values are stored and serialized as an uppercase hexadecimal + string + example: 78797A + Apache_Iceberg_REST_Catalog_API_CountMap: + type: object + properties: + keys: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IntegerTypeValue + description: List of integer column ids for each corresponding value + values: + type: array + items: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_LongTypeValue' + description: List of Long values, matched to 'keys' by index + example: + keys: + - 1 + - 2 + values: + - 100 + - 200 + Apache_Iceberg_REST_Catalog_API_ValueMap: + type: object + properties: + keys: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IntegerTypeValue + description: List of integer column ids for each corresponding value + values: + type: array + items: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_PrimitiveTypeValue + description: List of primitive type values, matched to 'keys' by index + example: + keys: + - 1 + - 2 + values: + - 100 + - test + Apache_Iceberg_REST_Catalog_API_PrimitiveTypeValue: + oneOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_BooleanTypeValue + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IntegerTypeValue + - $ref: 
'#/components/schemas/Apache_Iceberg_REST_Catalog_API_LongTypeValue' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_FloatTypeValue' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_DoubleTypeValue' + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_DecimalTypeValue + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_StringTypeValue' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_UUIDTypeValue' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_DateTypeValue' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_TimeTypeValue' + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TimestampTypeValue + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TimestampTzTypeValue + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TimestampNanoTypeValue + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_TimestampTzNanoTypeValue + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_FixedTypeValue' + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_BinaryTypeValue' + Apache_Iceberg_REST_Catalog_API_FileFormat: + type: string + enum: + - avro + - orc + - parquet + Apache_Iceberg_REST_Catalog_API_ContentFile: + discriminator: + propertyName: content + mapping: + data: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_DataFile' + position-deletes: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_PositionDeleteFile + equality-deletes: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_EqualityDeleteFile + type: object + required: + - spec-id + - content + - file-path + - file-format + - file-size-in-bytes + - record-count + properties: + content: + type: string + file-path: + type: string + file-format: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_FileFormat' + spec-id: + type: integer + partition: + type: array + items: + $ref: >- + 
#/components/schemas/Apache_Iceberg_REST_Catalog_API_PrimitiveTypeValue + description: >- + A list of partition field values ordered based on the fields of the + partition spec specified by the `spec-id` + example: + - 1 + - bar + file-size-in-bytes: + type: integer + format: int64 + description: Total file size in bytes + record-count: + type: integer + format: int64 + description: Number of records in the file + key-metadata: + allOf: + - $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_BinaryTypeValue + description: Encryption key metadata blob + split-offsets: + type: array + items: + type: integer + format: int64 + description: List of splittable offsets + sort-order-id: + type: integer + Apache_Iceberg_REST_Catalog_API_DataFile: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ContentFile' + type: object + required: + - content + properties: + content: + type: string + enum: + - data + column-sizes: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_CountMap' + description: Map of column id to total count, including null and NaN + value-counts: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_CountMap' + description: Map of column id to null value count + null-value-counts: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_CountMap' + description: Map of column id to null value count + nan-value-counts: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_CountMap' + description: Map of column id to number of NaN values in the column + lower-bounds: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ValueMap' + description: Map of column id to lower bound primitive type values + upper-bounds: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ValueMap' + description: Map of column id to upper bound primitive type values + Apache_Iceberg_REST_Catalog_API_PositionDeleteFile: + allOf: + - 
$ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ContentFile' + required: + - content + properties: + content: + type: string + enum: + - position-deletes + Apache_Iceberg_REST_Catalog_API_EqualityDeleteFile: + allOf: + - $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ContentFile' + required: + - content + properties: + content: + type: string + enum: + - equality-deletes + equality-ids: + type: array + items: + type: integer + description: List of equality field IDs + parameters: + Apache_Iceberg_REST_Catalog_API_namespace: + name: namespace + in: path + required: true + description: >- + A namespace identifier as a single string. Multipart namespace parts + should be separated by the unit separator (`0x1F`) byte. + schema: + type: string + examples: + singlepart_namespace: + value: accounting + multipart_namespace: + value: accounting%1Ftax + Apache_Iceberg_REST_Catalog_API_prefix: + name: prefix + in: path + schema: + type: string + required: true + description: An optional prefix in the path + Apache_Iceberg_REST_Catalog_API_table: + name: table + in: path + description: A table name + required: true + schema: + type: string + example: sales + Apache_Iceberg_REST_Catalog_API_view: + name: view + in: path + description: A view name + required: true + schema: + type: string + example: sales + Apache_Iceberg_REST_Catalog_API_data-access: + name: X-Iceberg-Access-Delegation + in: header + description: > + Optional signal to the server that the client supports delegated access + via a comma-separated list of access mechanisms. The server may choose + to supply access via any or none of the requested mechanisms. + + + Specific properties and handling for `vended-credentials` is documented + in the `LoadTableResult` schema section of this spec document. + + + The protocol and specification for `remote-signing` is documented in + the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. 
+ required: false + schema: + type: string + enum: + - vended-credentials + - remote-signing + style: simple + explode: false + example: vended-credentials,remote-signing + Apache_Iceberg_REST_Catalog_API_page-token: + name: pageToken + in: query + required: false + allowEmptyValue: true + schema: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_PageToken' + Apache_Iceberg_REST_Catalog_API_page-size: + name: pageSize + in: query + description: >- + For servers that support pagination, this signals an upper bound of the + number of results that a client will receive. For servers that do not + support pagination, clients may receive results larger than the + indicated `pageSize`. + required: false + schema: + type: integer + minimum: 1 + responses: + Apache_Iceberg_REST_Catalog_API_OAuthTokenResponse: + description: OAuth2 token response for client credentials or token exchange + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_OAuthTokenResponse + Apache_Iceberg_REST_Catalog_API_OAuthErrorResponse: + description: OAuth2 error response + content: + application/json: + schema: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_OAuthError' + Apache_Iceberg_REST_Catalog_API_BadRequestErrorResponse: + description: >- + Indicates a bad request error. It could be caused by an unexpected + request body format or other forms of request validation failure, such + as invalid json. Usually serves application/json content, although in + some cases simple text/plain content might be returned by the server's + middleware. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Malformed request + type: BadRequestException + code: 400 + Apache_Iceberg_REST_Catalog_API_UnauthorizedResponse: + description: >- + Unauthorized. Authentication is required and has failed or has not yet + been provided. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Not authorized to make this request + type: NotAuthorizedException + code: 401 + Apache_Iceberg_REST_Catalog_API_ForbiddenResponse: + description: Forbidden. Authenticated user does not have the necessary permissions. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Not authorized to make this request + type: NotAuthorizedException + code: 403 + Apache_Iceberg_REST_Catalog_API_UnsupportedOperationResponse: + description: >- + Not Acceptable / Unsupported Operation. The server does not support this + operation. + content: + application/json: + schema: + $ref: '#/components/schemas/Apache_Iceberg_REST_Catalog_API_ErrorModel' + example: + error: + message: The server does not support this operation + type: UnsupportedOperationException + code: 406 + Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse: + description: JSON wrapper for all error responses (non-2xx) + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: The server does not support this operation + type: UnsupportedOperationException + code: 406 + Apache_Iceberg_REST_Catalog_API_CreateNamespaceResponse: + description: >- + Represents a successful call to create a namespace. Returns the + namespace created, as well as any properties that were stored for the + namespace, including those the server might have added. Implementations + are not required to support namespace properties. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CreateNamespaceResponse + example: + namespace: + - accounting + - tax + properties: + owner: Ralph + created_at: '1452120468' + Apache_Iceberg_REST_Catalog_API_GetNamespaceResponse: + description: >- + Returns a namespace, as well as any properties stored on the namespace + if namespace properties are supported by the server. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_GetNamespaceResponse + Apache_Iceberg_REST_Catalog_API_ListTablesResponse: + description: A list of table identifiers + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ListTablesResponse + examples: + ListTablesResponseNonEmpty: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_ListTablesNonEmptyExample + ListTablesResponseEmpty: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_ListTablesEmptyExample + Apache_Iceberg_REST_Catalog_API_ListNamespacesResponse: + description: A list of namespaces + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_ListNamespacesResponse + examples: + NonEmptyResponse: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_ListNamespacesNonEmptyExample + EmptyResponse: + $ref: >- + #/components/examples/Apache_Iceberg_REST_Catalog_API_ListNamespacesEmptyExample + Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse: + description: >- + Credentials have timed out. If possible, the client should refresh + credentials and retry. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Credentials have timed out + type: AuthenticationTimeoutException + code: 419 + Apache_Iceberg_REST_Catalog_API_ServiceUnavailableResponse: + description: >- + The service is not ready to handle the request. The client should wait + and retry. + + + The service may additionally send a Retry-After header to indicate when + to retry. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Slow down + type: SlowDownException + code: 503 + Apache_Iceberg_REST_Catalog_API_ServerErrorResponse: + description: >- + A server-side problem that might not be addressable from the client + side. Used for server 5xx errors without more specific documentation in + individual routes. + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse + example: + error: + message: Internal Server Error + type: InternalServerError + code: 500 + Apache_Iceberg_REST_Catalog_API_UpdateNamespacePropertiesResponse: + description: JSON data response for a synchronous update properties request. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_UpdateNamespacePropertiesResponse + example: + updated: + - owner + removed: + - foo + missing: + - bar + Apache_Iceberg_REST_Catalog_API_CreateTableResponse: + description: Table metadata result after creating a table + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_LoadTableResult + Apache_Iceberg_REST_Catalog_API_LoadTableResponse: + description: Table metadata result when loading a table + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_LoadTableResult + Apache_Iceberg_REST_Catalog_API_LoadViewResponse: + description: View metadata result when loading a view + content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_LoadViewResult + Apache_Iceberg_REST_Catalog_API_CommitTableResponse: + description: >- + Response used when a table is successfully updated. + + The table metadata JSON is returned in the metadata field. The + corresponding file location of table metadata must be returned in the + metadata-location field. Clients can check whether metadata has changed + by comparing metadata locations. 
+ content: + application/json: + schema: + $ref: >- + #/components/schemas/Apache_Iceberg_REST_Catalog_API_CommitTableResponse + examples: + Apache_Iceberg_REST_Catalog_API_ListTablesEmptyExample: + summary: An empty list for a namespace with no tables + value: + identifiers: [] + Apache_Iceberg_REST_Catalog_API_ListNamespacesEmptyExample: + summary: An empty list of namespaces + value: + namespaces: [] + Apache_Iceberg_REST_Catalog_API_ListNamespacesNonEmptyExample: + summary: A non-empty list of namespaces + value: + namespaces: + - - accounting + - tax + - - accounting + - credits + Apache_Iceberg_REST_Catalog_API_ListTablesNonEmptyExample: + summary: A non-empty list of table identifiers + value: + identifiers: + - namespace: + - accounting + - tax + name: paid + - namespace: + - accounting + - tax + name: owed + Apache_Iceberg_REST_Catalog_API_MultipartNamespaceAsPathVariable: + summary: A multi-part namespace, as represented in a path parameter + value: accounting%1Ftax + Apache_Iceberg_REST_Catalog_API_NamespaceAsPathVariable: + summary: A single part namespace, as represented in a path parameter + value: accounting + Apache_Iceberg_REST_Catalog_API_NamespaceAlreadyExistsError: + summary: The requested namespace already exists + value: + error: + message: The given namespace already exists + type: AlreadyExistsException + code: 409 + Apache_Iceberg_REST_Catalog_API_NoSuchTableError: + summary: The requested table does not exist + value: + error: + message: The given table does not exist + type: NoSuchTableException + code: 404 + Apache_Iceberg_REST_Catalog_API_NoSuchViewError: + summary: The requested view does not exist + value: + error: + message: The given view does not exist + type: NoSuchViewException + code: 404 + Apache_Iceberg_REST_Catalog_API_NoSuchNamespaceError: + summary: The requested namespace does not exist + value: + error: + message: The given namespace does not exist + type: NoSuchNamespaceException + code: 404 + 
Apache_Iceberg_REST_Catalog_API_RenameTableSameNamespace: + summary: Rename a table in the same namespace + value: + source: + namespace: + - accounting + - tax + name: paid + destination: + namespace: + - accounting + - tax + name: owed + Apache_Iceberg_REST_Catalog_API_RenameViewSameNamespace: + summary: Rename a view in the same namespace + value: + source: + namespace: + - accounting + - tax + name: paid-view + destination: + namespace: + - accounting + - tax + name: owed-view + Apache_Iceberg_REST_Catalog_API_TableAlreadyExistsError: + summary: The requested table identifier already exists + value: + error: + message: The given table already exists + type: AlreadyExistsException + code: 409 + Apache_Iceberg_REST_Catalog_API_ViewAlreadyExistsError: + summary: The requested view identifier already exists + value: + error: + message: The given view already exists + type: AlreadyExistsException + code: 409 + Apache_Iceberg_REST_Catalog_API_UnprocessableEntityDuplicateKey: + summary: >- + The request body either has the same key multiple times in what should + be a map with unique keys or the request body has keys in two or more + fields which should be disjoint sets. + value: + error: + message: >- + The request cannot be processed as there is a key present multiple + times + type: UnprocessableEntityException + code: 422 + Apache_Iceberg_REST_Catalog_API_UpdateAndRemoveNamespacePropertiesRequest: + summary: >- + An update namespace properties request with both properties to remove + and properties to upsert. 
+ value: + removals: + - foo + - bar + updates: + owner: Raoul +x-tagGroups: + - name: Polaris Catalog Documentation + tags: + - Polaris Catalog Overview + - Polaris Catalog Entities + - Access Control + - name: Polaris Management Service + tags: + - polaris-management-service_other + - name: Apache Iceberg REST Catalog API + tags: + - Configuration API + - OAuth2 API + - Catalog API diff --git a/spec/polaris-management-service.yml b/spec/polaris-management-service.yml new file mode 100644 index 0000000000..9dd1a7e85d --- /dev/null +++ b/spec/polaris-management-service.yml @@ -0,0 +1,1340 @@ +# Copyright (c) 2024 Snowflake Computing Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +openapi: 3.0.3 +info: + title: Polaris Management Service + version: 0.0.1 + description: + Defines the management APIs for using Polaris to create and manage Iceberg catalogs and their principals +servers: + - url: "{scheme}://{host}/api/management/v1" + description: Server URL when the port can be inferred from the scheme + variables: + scheme: + description: The scheme of the URI, either http or https. + default: https + host: + description: The host address for the specified server + default: localhost +# All routes are currently configured using an Authorization header. 
+security: + - OAuth2: [] + +paths: + /catalogs: + get: + operationId: listCatalogs + description: List all catalogs in this polaris service + responses: + 200: + description: List of catalogs in the polaris service + content: + application/json: + schema: + $ref: "#/components/schemas/Catalogs" + 403: + description: "The caller does not have permission to list catalog details" + post: + operationId: createCatalog + description: Add a new Catalog + requestBody: + description: The Catalog to create + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/CreateCatalogRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The caller does not have permission to create a catalog" + 404: + description: "The catalog does not exist" + 409: + description: "A catalog with the specified name already exists" + + /catalogs/{catalogName}: + parameters: + - name: catalogName + in: path + description: The name of the catalog + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: getCatalog + description: Get the details of a catalog + responses: + 200: + description: The catalog details + content: + application/json: + schema: + $ref: "#/components/schemas/Catalog" + 403: + description: "The caller does not have permission to read catalog details" + 404: + description: "The catalog does not exist" + + put: + operationId: updateCatalog + description: Update an existing catalog + requestBody: + description: The catalog details to use in the update + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/UpdateCatalogRequest" + responses: + 200: + description: The catalog details + content: + application/json: + schema: + $ref: "#/components/schemas/Catalog" + 403: + description: "The caller does not have permission to update catalog details" + 404: + description: "The catalog does not exist" 
+ 409: + description: "The entity version doesn't match the currentEntityVersion; retry after fetching latest version" + + delete: + operationId: deleteCatalog + description: Delete an existing catalog. This is a cascading operation that deletes all metadata, including principals, + roles and grants. If the catalog is an internal catalog, all tables and namespaces are dropped without purge. + responses: + 204: + description: "Success, no content" + 403: + description: "The caller does not have permission to delete a catalog" + 404: + description: "The catalog does not exist" + + /principals: + get: + operationId: listPrincipals + description: List the principals for the current catalog + responses: + 200: + description: List of principals for this catalog + content: + application/json: + schema: + $ref: "#/components/schemas/Principals" + 403: + description: "The caller does not have permission to list catalog admins" + 404: + description: "The catalog does not exist" + + post: + operationId: createPrincipal + description: Create a principal + requestBody: + description: The principal to create + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/CreatePrincipalRequest" + responses: + 201: + description: "Successful response" + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalWithCredentials" + 403: + description: "The caller does not have permission to add a principal" + + /principals/{principalName}: + parameters: + - name: principalName + in: path + description: The principal name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: getPrincipal + description: Get the principal details + responses: + 200: + description: The requested principal + content: + application/json: + schema: + $ref: "#/components/schemas/Principal" + 403: + description: "The caller does not have permission to get principal 
details" + 404: + description: "The catalog or principal does not exist" + + put: + operationId: updatePrincipal + description: Update an existing principal + requestBody: + description: The principal details to use in the update + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/UpdatePrincipalRequest" + responses: + 200: + description: The updated principal + content: + application/json: + schema: + $ref: "#/components/schemas/Principal" + 403: + description: "The caller does not have permission to update principal details" + 404: + description: "The principal does not exist" + 409: + description: "The entity version doesn't match the currentEntityVersion; retry after fetching latest version" + + delete: + operationId: deletePrincipal + description: Remove a principal from polaris + responses: + 204: + description: "Success, no content" + 403: + description: "The caller does not have permission to delete a principal" + 404: + description: "The principal does not exist" + + /principals/{principalName}/rotate: + parameters: + - name: principalName + in: path + description: The user name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + post: + operationId: rotateCredentials + description: Rotate a principal's credentials. The new credentials will be returned in the response. This is the only + API, aside from createPrincipal, that returns the user's credentials. This API is *not* idempotent. 
+ responses: + 200: + description: The principal details along with the newly rotated credentials + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalWithCredentials" + 403: + description: "The caller does not have permission to rotate credentials" + 404: + description: "The principal does not exist" + + /principals/{principalName}/principal-roles: + parameters: + - name: principalName + in: path + description: The name of the target principal + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: listPrincipalRolesAssigned + description: List the roles assigned to the principal + responses: + 200: + description: List of roles assigned to this principal + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalRoles" + 403: + description: "The caller does not have permission to list roles" + 404: + description: "The principal or catalog does not exist" + + put: + operationId: assignPrincipalRole + description: Add a role to the principal + requestBody: + description: The principal role to assign + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/GrantPrincipalRoleRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The caller does not have permission to assign a role to the principal" + 404: + description: "The catalog, the principal, or the role does not exist" + + /principals/{principalName}/principal-roles/{principalRoleName}: + parameters: + - name: principalName + in: path + description: The name of the target principal + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: principalRoleName + in: path + description: The name of the role + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: 
'^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + delete: + operationId: revokePrincipalRole + description: Remove a role from a catalog principal + responses: + 204: + description: "Success, no content" + 403: + description: "The caller does not have permission to remove a role from the principal" + 404: + description: "The catalog or principal does not exist" + + /principal-roles: + get: + operationId: listPrincipalRoles + description: List the principal roles + responses: + 200: + description: List of principal roles + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalRoles" + 403: + description: "The caller does not have permission to list principal roles" + 404: + description: "The catalog does not exist" + + post: + operationId: createPrincipalRole + description: Create a principal role + requestBody: + description: The principal role to create + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/CreatePrincipalRoleRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The caller does not have permission to add a principal role" + + /principal-roles/{principalRoleName}: + parameters: + - name: principalRoleName + in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: getPrincipalRole + description: Get the principal role details + responses: + 200: + description: The requested principal role + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalRole" + 403: + description: "The caller does not have permission to get principal role details" + 404: + description: "The principal role does not exist" + + put: + operationId: updatePrincipalRole + description: Update an existing principal role + requestBody: + description: The principal role details to use in the update + required: true + content: + 
application/json: + schema: + $ref: "#/components/schemas/UpdatePrincipalRoleRequest" + responses: + 200: + description: The updated principal role + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalRole" + 403: + description: "The caller does not have permission to update principal role details" + 404: + description: "The principal role does not exist" + 409: + description: "The entity version doesn't match the currentEntityVersion; retry after fetching latest version" + + delete: + operationId: deletePrincipalRole + description: Remove a principal role from polaris + responses: + 204: + description: "Success, no content" + 403: + description: "The caller does not have permission to delete a principal role" + 404: + description: "The principal role does not exist" + + /principal-roles/{principalRoleName}/principals: + parameters: + - name: principalRoleName + in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: listAssigneePrincipalsForPrincipalRole + description: List the Principals to whom the target principal role has been assigned + responses: + 200: + description: List the Principals to whom the target principal role has been assigned + content: + application/json: + schema: + $ref: "#/components/schemas/Principals" + 403: + description: "The caller does not have permission to list principals" + 404: + description: "The principal role does not exist" + + /principal-roles/{principalRoleName}/catalog-roles/{catalogName}: + parameters: + - name: principalRoleName + in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: catalogName + in: path + required: true + description: The name of the catalog where the catalogRoles reside + schema: + type: string + 
minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: listCatalogRolesForPrincipalRole + description: Get the catalog roles mapped to the principal role + responses: + 200: + description: The list of catalog roles mapped to the principal role + content: + application/json: + schema: + $ref: "#/components/schemas/CatalogRoles" + 403: + description: "The caller does not have permission to list catalog roles" + 404: + description: "The principal role does not exist" + + put: + operationId: assignCatalogRoleToPrincipalRole + description: Assign a catalog role to a principal role + requestBody: + description: The catalog role to assign + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/GrantCatalogRoleRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The caller does not have permission to assign a catalog role" + + /principal-roles/{principalRoleName}/catalog-roles/{catalogName}/{catalogRoleName}: + parameters: + - name: principalRoleName + in: path + description: The principal role name + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: catalogName + in: path + description: The name of the catalog that contains the role to revoke + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: catalogRoleName + in: path + description: The name of the catalog role that should be revoked + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + delete: + operationId: revokeCatalogRoleFromPrincipalRole + description: Remove a catalog role from a principal role + responses: + 204: + description: "Success, no content" + 403: + description: "The caller does not have permission to revoke a catalog role" + 404: + 
description: "The principal role does not exist" + + /catalogs/{catalogName}/catalog-roles: + parameters: + - name: catalogName + in: path + description: The catalog for which we are reading/updating roles + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: listCatalogRoles + description: List existing roles in the catalog + responses: + 200: + description: The list of roles that exist in this catalog + content: + application/json: + schema: + $ref: "#/components/schemas/CatalogRoles" + post: + operationId: createCatalogRole + description: Create a new role in the catalog + requestBody: + content: + application/json: + schema: + $ref: "#/components/schemas/CreateCatalogRoleRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The principal is not authorized to create roles" + 404: + description: "The catalog does not exist" + + /catalogs/{catalogName}/catalog-roles/{catalogRoleName}: + parameters: + - name: catalogName + in: path + description: The catalog for which we are retrieving roles + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: catalogRoleName + in: path + description: The name of the role + required: true + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: getCatalogRole + description: Get the details of an existing role + responses: + 200: + description: The specified role details + content: + application/json: + schema: + $ref: "#/components/schemas/CatalogRole" + 403: + description: "The principal is not authorized to read role data" + 404: + description: "The catalog or the role does not exist" + + put: + operationId: updateCatalogRole + description: Update an existing role in the catalog + requestBody: + content: + application/json: + schema: 
+ $ref: "#/components/schemas/UpdateCatalogRoleRequest" + responses: + 200: + description: The specified role details + content: + application/json: + schema: + $ref: "#/components/schemas/CatalogRole" + 403: + description: "The principal is not authorized to update roles" + 404: + description: "The catalog or the role does not exist" + 409: + description: "The entity version doesn't match the currentEntityVersion; retry after fetching latest version" + + delete: + operationId: deleteCatalogRole + description: Delete an existing role from the catalog. All associated grants will also be deleted + responses: + 204: + description: "Success, no content" + 403: + description: "The principal is not authorized to delete roles" + 404: + description: "The catalog or the role does not exist" + + /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/principal-roles: + parameters: + - name: catalogName + in: path + required: true + description: The name of the catalog where the catalog role resides + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: catalogRoleName + in: path + required: true + description: The name of the catalog role + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: listAssigneePrincipalRolesForCatalogRole + description: List the PrincipalRoles to which the target catalog role has been assigned + responses: + 200: + description: List the PrincipalRoles to which the target catalog role has been assigned + content: + application/json: + schema: + $ref: "#/components/schemas/PrincipalRoles" + 403: + description: "The caller does not have permission to list principal roles" + 404: + description: "The catalog or catalog role does not exist" + + /catalogs/{catalogName}/catalog-roles/{catalogRoleName}/grants: + parameters: + - name: catalogName + in: path + required: true + description: The name of the 
catalog where the role will receive the grant + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + - name: catalogRoleName + in: path + required: true + description: The name of the role receiving the grant (must exist) + schema: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + get: + operationId: listGrantsForCatalogRole + description: List the grants the catalog role holds + responses: + 200: + description: List of all grants given to the role in this catalog + content: + application/json: + schema: + $ref: "#/components/schemas/GrantResources" + put: + operationId: addGrantToCatalogRole + description: Add a new grant to the catalog role + requestBody: + content: + application/json: + schema: + $ref: "#/components/schemas/AddGrantRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The principal is not authorized to create grants" + 404: + description: "The catalog or the role does not exist" + post: + operationId: revokeGrantFromCatalogRole + description: + Delete a specific grant from the role. This may be a subset or a superset of the grants the role has. In case of + a subset, the role will retain the grants not specified. If the `cascade` parameter is true, grant revocation + will have a cascading effect - that is, if a principal has specific grants on a subresource, and grants are revoked + on a parent resource, the grants present on the subresource will be revoked as well. By default, this behavior + is disabled and grant revocation only affects the specified resource. + parameters: + - name: cascade + in: query + schema: + type: boolean + default: false + description: If true, the grant revocation cascades to all subresources. 
+ requestBody: + content: + application/json: + schema: + $ref: "#/components/schemas/RevokeGrantRequest" + responses: + 201: + description: "Successful response" + 403: + description: "The principal is not authorized to revoke grants" + 404: + description: "The catalog or the role does not exist" + +components: + securitySchemes: + OAuth2: + type: oauth2 + description: Uses OAuth 2 with client credentials flow + flows: + implicit: + authorizationUrl: "{scheme}://{host}/api/v1/oauth/tokens" + scopes: {} + + schemas: + Catalogs: + type: object + description: A list of Catalog objects + properties: + catalogs: + type: array + items: + $ref: "#/components/schemas/Catalog" + required: + - catalogs + + CreateCatalogRequest: + type: object + description: Request to create a new catalog + properties: + catalog: + $ref: "#/components/schemas/Catalog" + required: + - catalog + + Catalog: + type: object + description: A catalog object. A catalog may be internal or external. External catalogs are managed entirely by + an external catalog interface.
Third party catalogs may be other Iceberg REST implementations or other services + with their own proprietary APIs + properties: + type: + type: string + enum: + - INTERNAL + - EXTERNAL + description: the type of catalog - internal or external + default: INTERNAL + name: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + description: The name of the catalog + properties: + type: object + properties: + default-base-location: + type: string + additionalProperties: + type: string + required: + - default-base-location + createTimestamp: + type: integer + format: "int64" + description: The creation time represented as unix epoch timestamp in milliseconds + lastUpdateTimestamp: + type: integer + format: "int64" + description: The last update time represented as unix epoch timestamp in milliseconds + entityVersion: + type: integer + description: The version of the catalog object used to determine if the catalog metadata has changed + storageConfigInfo: + $ref: "#/components/schemas/StorageConfigInfo" + required: + - name + - type + - storageConfigInfo + - properties + discriminator: + propertyName: type + mapping: + INTERNAL: "#/components/schemas/PolarisCatalog" + EXTERNAL: "#/components/schemas/ExternalCatalog" + + + PolarisCatalog: + type: object + allOf: + - $ref: "#/components/schemas/Catalog" + description: The base catalog type - this contains all the fields necessary to construct an INTERNAL catalog + + ExternalCatalog: + description: An externally managed catalog + type: object + allOf: + - $ref: "#/components/schemas/Catalog" + - type: object + properties: + remoteUrl: + type: string + description: URL to the remote catalog API + + StorageConfigInfo: + type: object + description: A storage configuration used by catalogs + properties: + storageType: + type: string + enum: + - S3 + - GCS + - AZURE + - FILE + description: The cloud provider type this storage is built on. 
FILE is supported for testing purposes only + allowedLocations: + type: array + items: + type: string + example: "For AWS [s3://bucketname/prefix/], for AZURE [abfss://container@storageaccount.blob.core.windows.net/prefix/], for GCP [gs://bucketname/prefix/]" + required: + - storageType + discriminator: + propertyName: storageType + mapping: + S3: "#/components/schemas/AwsStorageConfigInfo" + AZURE: "#/components/schemas/AzureStorageConfigInfo" + GCS: "#/components/schemas/GcpStorageConfigInfo" + FILE: "#/components/schemas/FileStorageConfigInfo" + + AwsStorageConfigInfo: + type: object + description: aws storage configuration info + allOf: + - $ref: '#/components/schemas/StorageConfigInfo' + properties: + roleArn: + type: string + description: the aws role arn that grants privileges on the S3 buckets + example: "arn:aws:iam::123456789001:principal/abc1-b-self1234" + externalId: + type: string + description: an optional external id used to establish a trust relationship with AWS in the trust policy + userArn: + type: string + description: the aws user arn used to assume the aws role + example: "arn:aws:iam::123456789001:user/abc1-b-self1234" + required: + - roleArn + + AzureStorageConfigInfo: + type: object + description: azure storage configuration info + allOf: + - $ref: '#/components/schemas/StorageConfigInfo' + properties: + tenantId: + type: string + description: the tenant id that the storage accounts belong to + multiTenantAppName: + type: string + description: the name of the azure client application + consentUrl: + type: string + description: URL to the Azure permissions request page + required: + - tenantId + + GcpStorageConfigInfo: + type: object + description: gcp storage configuration info + allOf: + - $ref: '#/components/schemas/StorageConfigInfo' + properties: + gcsServiceAccount: + type: string + description: a Google cloud storage service account + + FileStorageConfigInfo: + type: object + description: file storage configuration info (for testing purposes only) + allOf: + - 
$ref: '#/components/schemas/StorageConfigInfo' + + UpdateCatalogRequest: + description: Updates to apply to a Catalog + type: object + properties: + currentEntityVersion: + type: integer + description: The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. + properties: + type: object + additionalProperties: + type: string + storageConfigInfo: + $ref: "#/components/schemas/StorageConfigInfo" + + Principals: + description: A list of Principals + type: object + properties: + principals: + type: array + items: + $ref: "#/components/schemas/Principal" + required: + - principals + + PrincipalWithCredentials: + description: A user with its client id and secret. This type is returned when a new principal is created or when its + credentials are rotated + type: object + properties: + principal: + $ref: "#/components/schemas/Principal" + credentials: + type: object + properties: + clientId: + type: string + clientSecret: + type: string + required: + - principal + - credentials + + CreatePrincipalRequest: + type: object + properties: + principal: + $ref: '#/components/schemas/Principal' + credentialRotationRequired: + type: boolean + description: If true, the initial credentials can only be used to call rotateCredentials + + Principal: + description: A Polaris principal. 
+ type: object + properties: + name: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + clientId: + type: string + description: The output-only OAuth clientId associated with this principal if applicable + properties: + type: object + additionalProperties: + type: string + createTimestamp: + type: integer + format: "int64" + lastUpdateTimestamp: + type: integer + format: "int64" + entityVersion: + type: integer + description: The version of the principal object used to determine if the principal metadata has changed + required: + - name + + UpdatePrincipalRequest: + description: Updates to apply to a Principal + type: object + properties: + currentEntityVersion: + type: integer + description: The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. + properties: + type: object + additionalProperties: + type: string + required: + - currentEntityVersion + - properties + + PrincipalRoles: + type: object + properties: + roles: + type: array + items: + $ref: "#/components/schemas/PrincipalRole" + required: + - roles + + GrantPrincipalRoleRequest: + type: object + properties: + principalRole: + $ref: '#/components/schemas/PrincipalRole' + + CreatePrincipalRoleRequest: + type: object + properties: + principalRole: + $ref: '#/components/schemas/PrincipalRole' + + PrincipalRole: + type: object + properties: + name: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + description: The name of the role + properties: + type: object + additionalProperties: + type: string + createTimestamp: + type: integer + format: "int64" + lastUpdateTimestamp: + type: integer + format: "int64" + entityVersion: + type: integer + description: The version of the principal role object used to determine if the principal role metadata has changed + required: + - name + + 
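The `Update*Request` schemas above all share one optimistic-concurrency contract: the client echoes the `entityVersion` it last read back as `currentEntityVersion`, and the server rejects the update if the entity has changed in the meantime. A minimal sketch of that contract, where the helper function and in-memory entity layout are illustrative assumptions, not part of the spec:

```python
# Illustrative sketch of the currentEntityVersion/entityVersion contract used
# by UpdatePrincipalRequest, UpdatePrincipalRoleRequest, etc. The function and
# entity layout here are assumptions for demonstration, not part of the API.

def apply_update(stored: dict, update: dict) -> dict:
    """Apply an update body to a stored entity, enforcing the version check."""
    if update["currentEntityVersion"] != stored["entityVersion"]:
        # The caller should re-fetch the entity and retry with the new version.
        raise ValueError("stale currentEntityVersion; fetch latest and retry")
    merged = dict(stored)
    merged["properties"] = {**stored.get("properties", {}), **update["properties"]}
    merged["entityVersion"] = stored["entityVersion"] + 1
    return merged

principal = {"name": "etl-user", "entityVersion": 3, "properties": {}}
updated = apply_update(
    principal, {"currentEntityVersion": 3, "properties": {"team": "data-eng"}}
)
# updated carries entityVersion 4; retrying the same update with version 3 fails.
```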
UpdatePrincipalRoleRequest: + description: Updates to apply to a Principal Role + type: object + properties: + currentEntityVersion: + type: integer + description: The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. + properties: + type: object + additionalProperties: + type: string + required: + - currentEntityVersion + - properties + + CatalogRoles: + type: object + properties: + roles: + type: array + items: + $ref: "#/components/schemas/CatalogRole" + description: The list of catalog roles + required: + - roles + + GrantCatalogRoleRequest: + type: object + properties: + catalogRole: + $ref: '#/components/schemas/CatalogRole' + + CreateCatalogRoleRequest: + type: object + properties: + catalogRole: + $ref: '#/components/schemas/CatalogRole' + + CatalogRole: + type: object + properties: + name: + type: string + minLength: 1 + maxLength: 256 + pattern: '^(?!\s*[s|S][y|Y][s|S][t|T][e|E][m|M]\$).*$' + description: The name of the role + properties: + type: object + additionalProperties: + type: string + createTimestamp: + type: integer + format: "int64" + lastUpdateTimestamp: + type: integer + format: "int64" + entityVersion: + type: integer + description: The version of the catalog role object used to determine if the catalog role metadata has changed + required: + - name + + UpdateCatalogRoleRequest: + description: Updates to apply to a Catalog Role + type: object + properties: + currentEntityVersion: + type: integer + description: The version of the object onto which this update is applied; if the object changed, the update will fail and the caller should retry after fetching the latest version. 
+ properties: + type: object + additionalProperties: + type: string + required: + - currentEntityVersion + - properties + + ViewPrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - VIEW_CREATE + - VIEW_DROP + - VIEW_LIST + - VIEW_READ_PROPERTIES + - VIEW_WRITE_PROPERTIES + - VIEW_FULL_METADATA + + TablePrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - TABLE_DROP + - TABLE_LIST + - TABLE_READ_PROPERTIES + - VIEW_READ_PROPERTIES + - TABLE_WRITE_PROPERTIES + - TABLE_READ_DATA + - TABLE_WRITE_DATA + - TABLE_FULL_METADATA + + NamespacePrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - CATALOG_MANAGE_CONTENT + - CATALOG_MANAGE_METADATA + - NAMESPACE_CREATE + - TABLE_CREATE + - VIEW_CREATE + - NAMESPACE_DROP + - TABLE_DROP + - VIEW_DROP + - NAMESPACE_LIST + - TABLE_LIST + - VIEW_LIST + - NAMESPACE_READ_PROPERTIES + - TABLE_READ_PROPERTIES + - VIEW_READ_PROPERTIES + - NAMESPACE_WRITE_PROPERTIES + - TABLE_WRITE_PROPERTIES + - VIEW_WRITE_PROPERTIES + - TABLE_READ_DATA + - TABLE_WRITE_DATA + - NAMESPACE_FULL_METADATA + - TABLE_FULL_METADATA + - VIEW_FULL_METADATA + + CatalogPrivilege: + type: string + enum: + - CATALOG_MANAGE_ACCESS + - CATALOG_MANAGE_CONTENT + - CATALOG_MANAGE_METADATA + - CATALOG_READ_PROPERTIES + - CATALOG_WRITE_PROPERTIES + - NAMESPACE_CREATE + - TABLE_CREATE + - VIEW_CREATE + - NAMESPACE_DROP + - TABLE_DROP + - VIEW_DROP + - NAMESPACE_LIST + - TABLE_LIST + - VIEW_LIST + - NAMESPACE_READ_PROPERTIES + - TABLE_READ_PROPERTIES + - VIEW_READ_PROPERTIES + - NAMESPACE_WRITE_PROPERTIES + - TABLE_WRITE_PROPERTIES + - VIEW_WRITE_PROPERTIES + - TABLE_READ_DATA + - TABLE_WRITE_DATA + - NAMESPACE_FULL_METADATA + - TABLE_FULL_METADATA + - VIEW_FULL_METADATA + + AddGrantRequest: + type: object + properties: + grant: + $ref: '#/components/schemas/GrantResource' + + RevokeGrantRequest: + type: object + properties: + grant: + $ref: '#/components/schemas/GrantResource' + + ViewGrant: + allOf: + - $ref: 
'#/components/schemas/GrantResource' + - type: object + properties: + namespace: + type: array + items: + type: string + viewName: + type: string + minLength: 1 + maxLength: 256 + privilege: + $ref: '#/components/schemas/ViewPrivilege' + required: + - namespace + - viewName + - privilege + + TableGrant: + allOf: + - $ref: '#/components/schemas/GrantResource' + - type: object + properties: + namespace: + type: array + items: + type: string + tableName: + type: string + minLength: 1 + maxLength: 256 + privilege: + $ref: '#/components/schemas/TablePrivilege' + required: + - namespace + - tableName + - privilege + + NamespaceGrant: + allOf: + - $ref: '#/components/schemas/GrantResource' + - type: object + properties: + namespace: + type: array + items: + type: string + privilege: + $ref: '#/components/schemas/NamespacePrivilege' + required: + - namespace + - privilege + + + CatalogGrant: + allOf: + - $ref: '#/components/schemas/GrantResource' + - type: object + properties: + privilege: + $ref: '#/components/schemas/CatalogPrivilege' + required: + - privilege + + GrantResource: + type: object + discriminator: + propertyName: type + mapping: + catalog: '#/components/schemas/CatalogGrant' + namespace: '#/components/schemas/NamespaceGrant' + table: '#/components/schemas/TableGrant' + view: '#/components/schemas/ViewGrant' + properties: + type: + type: string + enum: + - catalog + - namespace + - table + - view + required: + - type + + GrantResources: + type: object + properties: + grants: + type: array + items: + $ref: "#/components/schemas/GrantResource" + required: + - grants diff --git a/spec/redocly.yaml b/spec/redocly.yaml new file mode 100644 index 0000000000..d410af6aaa --- /dev/null +++ b/spec/redocly.yaml @@ -0,0 +1,11 @@ +theme: + openapi: + theme: + typography: + fontFamily: 'Roboto, sans-serif' + logo: + gutter: '15px' + +seo: + title: Polaris Catalog Documentation + description: Learn how to work with the Open Source Polaris Catalog for Apache Iceberg \ No 
newline at end of file diff --git a/spec/rest-catalog-open-api.yaml b/spec/rest-catalog-open-api.yaml new file mode 100644 index 0000000000..9c7e61e75f --- /dev/null +++ b/spec/rest-catalog-open-api.yaml @@ -0,0 +1,4154 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +--- +openapi: 3.0.3 +info: + title: Apache Iceberg REST Catalog API + license: + name: Apache 2.0 + url: https://www.apache.org/licenses/LICENSE-2.0.html + version: 0.0.1 + description: + Defines the specification for the first version of the REST Catalog API. + Implementations should ideally support both Iceberg table specs v1 and v2, with priority given to v2. +servers: + - url: "{scheme}://{host}/{basePath}" + description: Server URL when the port can be inferred from the scheme + variables: + scheme: + description: The scheme of the URI, either http or https. + default: https + host: + description: The host address for the specified server + default: localhost + basePath: + description: Optional prefix to be appended to all routes + default: "" + - url: "{scheme}://{host}:{port}/{basePath}" + description: Generic base server URL, with all parts configurable + variables: + scheme: + description: The scheme of the URI, either http or https. 
+ default: https + host: + description: The host address for the specified server + default: localhost + port: + description: The port used when addressing the host + default: "443" + basePath: + description: Optional prefix to be appended to all routes + default: "" +# All routes are currently configured using an Authorization header. +security: + - OAuth2: [catalog] + - BearerAuth: [] + +paths: + /v1/config: + + get: + tags: + - Configuration API + summary: List all catalog configuration settings + operationId: getConfig + parameters: + - name: warehouse + in: query + required: false + schema: + type: string + description: Warehouse location or identifier to request from the service + description: + " + All REST clients should first call this route to get catalog configuration + properties from the server to configure the catalog and its HTTP client. + Configuration from the server consists of two sets of key/value pairs. + + - defaults - properties that should be used as default configuration; applied before client configuration + + - overrides - properties that should be used to override client configuration; applied after defaults and client configuration + + + Catalog configuration is constructed by setting the defaults, then client- + provided configuration, and finally overrides. The final property set is then + used to configure the catalog. + + + For example, a default configuration property might set the size of the + client pool, which can be replaced with a client-specific setting. An + override might be used to set the warehouse location, which is stored + on the server rather than in client configuration. + + + Common catalog configuration settings are documented at + https://iceberg.apache.org/docs/latest/configuration/#catalog-properties + " + responses: + 200: + description: Server specified configuration values. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/CatalogConfig' + example: { + "overrides": { + "warehouse": "s3://bucket/warehouse/" + }, + "defaults": { + "clients": "4" + } + } + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/oauth/tokens: + + post: + tags: + - OAuth2 API + summary: Get a token using an OAuth2 flow + operationId: getToken + description: + Exchange credentials for a token using the OAuth2 client credentials flow or token exchange. + + + This endpoint is used for three purposes - + + 1. To exchange client credentials (client ID and secret) for an access token + This uses the client credentials flow. + + 2. To exchange a client token and an identity token for a more specific access token + This uses the token exchange flow. + + 3. To exchange an access token for one with the same claims and a refreshed expiration period + This uses the token exchange flow. + + + For example, a catalog client may be configured with client credentials from the OAuth2 + Authorization flow. This client would exchange its client ID and secret for an access token + using the client credentials request with this endpoint (1). Subsequent requests would then + use that access token. + + + Some clients may also handle sessions that have additional user context. These clients would + use the token exchange flow to exchange a user token (the "subject" token) from the session + for a more specific access token for that user, using the catalog's access token as the + "actor" token (2). 
The user ID token is the "subject" token and can be any token type + allowed by the OAuth2 token exchange flow, including an unsecured JWT token with a sub claim. + This request should use the catalog's bearer token in the "Authorization" header. + + + Clients may also use the token exchange flow to refresh a token that is about to expire by + sending a token exchange request (3). The request's "subject" token should be the expiring + token. This request should use the subject token in the "Authorization" header. + parameters: + - name: Authorization + in: header + schema: + type: string + required: false + requestBody: + required: true + content: + application/x-www-form-urlencoded: + schema: + $ref: '#/components/schemas/OAuthTokenRequest' + responses: + 200: + $ref: '#/components/responses/OAuthTokenResponse' + 400: + $ref: '#/components/responses/OAuthErrorResponse' + 401: + $ref: '#/components/responses/OAuthErrorResponse' + 5XX: + $ref: '#/components/responses/OAuthErrorResponse' + security: + - BearerAuth: [] + + /v1/{prefix}/namespaces: + parameters: + - $ref: '#/components/parameters/prefix' + + get: + tags: + - Catalog API + summary: List namespaces, optionally providing a parent namespace to list underneath + description: + List all namespaces at a certain level, optionally starting from a given parent namespace. + If table accounting.tax.paid.info exists, using 'SELECT NAMESPACE IN accounting' would + translate into `GET /namespaces?parent=accounting` and must return a namespace, ["accounting", "tax"] only. + Using 'SELECT NAMESPACE IN accounting.tax' would + translate into `GET /namespaces?parent=accounting%1Ftax` and must return a namespace, ["accounting", "tax", "paid"]. + If `parent` is not provided, all top-level namespaces should be listed.
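The multipart-namespace encoding described above (parts joined by the `0x1F` unit separator byte, then percent-encoded in the query string) can be sketched as:

```python
from urllib.parse import quote

UNIT_SEP = "\x1f"  # multipart namespace parts are joined with the 0x1F byte

def parent_param(parts: list) -> str:
    """Encode a multipart namespace for the `parent` query parameter."""
    return quote(UNIT_SEP.join(parts), safe="")

# ["accounting", "tax"] encodes to "accounting%1Ftax", as in the example above.
```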
+ operationId: listNamespaces + parameters: + - $ref: '#/components/parameters/page-token' + - $ref: '#/components/parameters/page-size' + - name: parent + in: query + description: + An optional namespace, underneath which to list namespaces. + If not provided or empty, all top-level namespaces should be listed. + If parent is a multipart namespace, the parts must be separated by the unit separator (`0x1F`) byte. + required: false + allowEmptyValue: true + schema: + type: string + example: "accounting%1Ftax" + responses: + 200: + $ref: '#/components/responses/ListNamespacesResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - Namespace provided in the `parent` query parameter is not found. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NoSuchNamespaceExample: + $ref: '#/components/examples/NoSuchNamespaceError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + post: + tags: + - Catalog API + summary: Create a namespace + description: + Create a namespace, with an optional set of properties. + The server might also add properties, such as `last_modified_time` etc. 
+ operationId: createNamespace + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/CreateNamespaceRequest' + responses: + 200: + $ref: '#/components/responses/CreateNamespaceResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 406: + $ref: '#/components/responses/UnsupportedOperationResponse' + 409: + description: Conflict - The namespace already exists + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NamespaceAlreadyExists: + $ref: '#/components/examples/NamespaceAlreadyExistsError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + + get: + tags: + - Catalog API + summary: Load the metadata properties for a namespace + operationId: loadNamespaceMetadata + description: Return all stored metadata properties for a given namespace + responses: + 200: + $ref: '#/components/responses/GetNamespaceResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - Namespace not found + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NoSuchNamespaceExample: + $ref: '#/components/examples/NoSuchNamespaceError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + head: + tags: + - 
Catalog API + summary: Check if a namespace exists + operationId: namespaceExists + description: + Check if a namespace exists. The response does not contain a body. + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - Namespace not found + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NoSuchNamespaceExample: + $ref: '#/components/examples/NoSuchNamespaceError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + delete: + tags: + - Catalog API + summary: Drop a namespace from the catalog. Namespace must be empty. + operationId: dropNamespace + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - Namespace to delete does not exist. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NoSuchNamespaceExample: + $ref: '#/components/examples/NoSuchNamespaceError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/properties: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + + post: + tags: + - Catalog API + summary: Set or remove properties on a namespace + operationId: updateProperties + description: + Set and/or remove properties on a namespace. 
+ The request body specifies a list of properties to remove and a map + of key value pairs to update. + + Properties that are not in the request are not modified or removed by this call. + + Server implementations are not required to support namespace properties. + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/UpdateNamespacePropertiesRequest' + examples: + UpdateAndRemoveProperties: + $ref: '#/components/examples/UpdateAndRemoveNamespacePropertiesRequest' + responses: + 200: + $ref: '#/components/responses/UpdateNamespacePropertiesResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - Namespace not found + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NamespaceNotFound: + $ref: '#/components/examples/NoSuchNamespaceError' + 406: + $ref: '#/components/responses/UnsupportedOperationResponse' + 422: + description: Unprocessable Entity - A property key was included in both `removals` and `updates` + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + UnprocessableEntityDuplicateKey: + $ref: '#/components/examples/UnprocessableEntityDuplicateKey' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/tables: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + + get: + tags: + - Catalog API + summary: List all table identifiers underneath a given namespace + description: Return all table identifiers under this namespace + operationId: listTables + parameters: + - $ref: '#/components/parameters/page-token' + 
- $ref: '#/components/parameters/page-size' + responses: + 200: + $ref: '#/components/responses/ListTablesResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NamespaceNotFound: + $ref: '#/components/examples/NoSuchNamespaceError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + post: + tags: + - Catalog API + summary: Create a table in the given namespace + description: + Create a table or start a create transaction, like atomic CTAS. + + + If `stage-create` is false, the table is created immediately. + + + If `stage-create` is true, the table is not created, but table metadata is initialized and returned. + The service should prepare as needed for a commit to the table commit endpoint to complete the create + transaction. The client uses the returned metadata to begin a transaction. To commit the transaction, + the client sends all create and subsequent changes to the table commit route. Changes from the table + create operation include changes like AddSchemaUpdate and SetCurrentSchemaUpdate that set the initial + table state. 
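The staged-create flow described above boils down to two request bodies: a `CreateTableRequest` with `stage-create` set, then a commit to the table commit route carrying an `assert-create` requirement. The dict shapes follow the schemas named in this spec; the helper functions themselves are illustrative only, not part of any client library:

```python
# Illustrative request bodies for the staged-create transaction described
# above. Field names follow CreateTableRequest / CommitTableRequest in this
# spec; the helper functions are assumptions for demonstration.

def stage_create_request(name: str, schema: dict) -> dict:
    # Step 1: stage the create. The table is not created yet, but the
    # service returns initialized table metadata for the transaction.
    return {"name": name, "schema": schema, "stage-create": True}

def commit_create_request(updates: list) -> dict:
    # Step 2: commit all changes, including the initial ones (e.g.
    # add-schema / set-current-schema updates), with an assert-create
    # requirement so a concurrent create fails the commit.
    return {"requirements": [{"type": "assert-create"}], "updates": updates}

req = stage_create_request("events", {"type": "struct", "fields": []})
commit = commit_create_request(
    [{"action": "add-schema", "schema": {"type": "struct", "fields": []}}]
)
```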
+ operationId: createTable + parameters: + - $ref: '#/components/parameters/data-access' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/CreateTableRequest' + responses: + 200: + $ref: '#/components/responses/CreateTableResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NamespaceNotFound: + $ref: '#/components/examples/NoSuchNamespaceError' + 409: + description: Conflict - The table already exists + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableAlreadyExists: + $ref: '#/components/examples/TableAlreadyExistsError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/register: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + + post: + tags: + - Catalog API + summary: Register a table in the given namespace using a given metadata file location + description: + Register a table using a given metadata file location.
+ + operationId: registerTable + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/RegisterTableRequest' + responses: + 200: + $ref: '#/components/responses/LoadTableResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + NamespaceNotFound: + $ref: '#/components/examples/NoSuchNamespaceError' + 409: + description: Conflict - The table already exists + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableAlreadyExists: + $ref: '#/components/examples/TableAlreadyExistsError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/tables/{table}: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + - $ref: '#/components/parameters/table' + + get: + tags: + - Catalog API + summary: Load a table from the catalog + operationId: loadTable + description: + Load a table from the catalog. + + + The response contains both configuration and table metadata. The configuration, if non-empty, is used + as additional configuration for the table that overrides catalog configuration. For example, this + configuration may change the FileIO implementation to be used for the table. + + + The response also contains the table's full metadata, matching the table metadata JSON file. + + + The catalog configuration may contain credentials that should be used for subsequent requests for the + table.
The configuration key "token" is used to pass an access token to be used as a bearer token + for table requests. Otherwise, a token may be passed using an RFC 8693 token type as a configuration + key. For example, "urn:ietf:params:oauth:token-type:jwt=". + parameters: + - $ref: '#/components/parameters/data-access' + - in: query + name: snapshots + description: + The snapshots to return in the body of the metadata. Setting the value to `all` would + return the full set of snapshots currently valid for the table. Setting the value to + `refs` would load all snapshots referenced by branches or tags. + + If no param is provided, the default is `all`. + required: false + schema: + type: string + enum: [all, refs] + responses: + 200: + $ref: '#/components/responses/LoadTableResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToLoadDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + post: + tags: + - Catalog API + summary: Commit updates to a table + operationId: updateTable + description: + Commit updates to a table. + + + Commits have two parts, requirements and updates. Requirements are assertions that will be validated + before attempting to make and commit changes. For example, `assert-ref-snapshot-id` will check that a + named ref's snapshot ID has a certain value. + + + Updates are changes to make to table metadata.
For example, after asserting that the current main ref + is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new + snapshot id. + + + Create table transactions that are started by createTable with `stage-create` set to true are + committed using this route. Transactions should include all changes to the table, including table + initialization, like AddSchemaUpdate and SetCurrentSchemaUpdate. The `assert-create` requirement is + used to ensure that the table was not created concurrently. + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/CommitTableRequest' + responses: + 200: + $ref: '#/components/responses/CommitTableResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToUpdateDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 409: + description: + Conflict - CommitFailedException, one or more requirements failed. The client may retry. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 500: + description: + An unknown server-side problem occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Internal Server Error", + "type": "CommitStateUnknownException", + "code": 500 + } + } + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 502: + description: + A gateway or proxy received an invalid response from the upstream server; the commit state is unknown. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Invalid response from the upstream server", + "type": "CommitStateUnknownException", + "code": 502 + } + } + 504: + description: + A server-side gateway timeout occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Gateway timed out during commit", + "type": "CommitStateUnknownException", + "code": 504 + } + } + 5XX: + description: + A server-side problem that might not be addressable on the client. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Bad Gateway", + "type": "InternalServerError", + "code": 502 + } + } + + delete: + tags: + - Catalog API + summary: Drop a table from the catalog + operationId: dropTable + description: Remove a table from the catalog + parameters: + - name: purgeRequested + in: query + required: false + description: Whether the user requested to purge the underlying table's data and metadata + schema: + type: boolean + default: false + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, Table to drop does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToDeleteDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + head: + tags: + - Catalog API + summary: Check if a table exists + operationId: 
tableExists + description: + Check if a table exists within a given namespace. The response does not contain a body. + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, Table not found + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToLoadDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/tables/rename: + parameters: + - $ref: '#/components/parameters/prefix' + + post: + tags: + - Catalog API + summary: Rename a table from its current name to a new name + description: + Rename a table from one identifier to another. It's valid to move a table + across namespaces, but the server implementation is not required to support it. 
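+      # A RenameTableRequest pairs the current identifier with the target identifier,
+      # both using the TableIdentifier schema. Illustrative body (names are hypothetical):
+      #
+      #   {
+      #     "source": {"namespace": ["accounting"], "name": "sales"},
+      #     "destination": {"namespace": ["accounting"], "name": "sales_v2"}
+      #   }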
+ operationId: renameTable + requestBody: + description: Current table identifier to rename and new table identifier to rename to + content: + application/json: + schema: + $ref: '#/components/schemas/RenameTableRequest' + examples: + RenameTableSameNamespace: + $ref: '#/components/examples/RenameTableSameNamespace' + required: true + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found + - NoSuchTableException, Table to rename does not exist + - NoSuchNamespaceException, The target namespace of the new table identifier does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToRenameDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + NamespaceToRenameToDoesNotExist: + $ref: '#/components/examples/NoSuchNamespaceError' + 406: + $ref: '#/components/responses/UnsupportedOperationResponse' + 409: + description: Conflict - The target identifier to rename to already exists as a table or view + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: + $ref: '#/components/examples/TableAlreadyExistsError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/tables/{table}/metrics: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + - $ref: '#/components/parameters/table' + + post: + tags: + - Catalog API + summary: Send a metrics report to this endpoint to be processed by the backend + operationId: reportMetrics + requestBody: + description: The request containing the metrics report to be sent + 
content: + application/json: + schema: + $ref: '#/components/schemas/ReportMetricsRequest' + required: true + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToLoadDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/tables/{table}/notifications: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + - $ref: '#/components/parameters/table' + + post: + tags: + - Catalog API + summary: Sends a notification to the table + operationId: sendNotification + requestBody: + description: The request containing the notification to be sent + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRequest' + required: true + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToLoadDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: 
'#/components/responses/ServerErrorResponse' + + /v1/{prefix}/transactions/commit: + parameters: + - $ref: '#/components/parameters/prefix' + + post: + tags: + - Catalog API + summary: Commit updates to multiple tables in an atomic operation + operationId: commitTransaction + requestBody: + description: + Commit updates to multiple tables in an atomic operation + + + A commit for a single table consists of a table identifier with requirements and updates. + Requirements are assertions that will be validated before attempting to make and commit changes. + For example, `assert-ref-snapshot-id` will check that a named ref's snapshot ID has a certain value. + + + Updates are changes to make to table metadata. For example, after asserting that the current main ref + is at the expected snapshot, a commit may add a new child snapshot and set the ref to the new + snapshot id. + content: + application/json: + schema: + $ref: '#/components/schemas/CommitTransactionRequest' + required: true + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchTableException, table to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + examples: + TableToUpdateDoesNotExist: + $ref: '#/components/examples/NoSuchTableError' + 409: + description: + Conflict - CommitFailedException, one or more requirements failed. The client may retry. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 500: + description: + An unknown server-side problem occurred; the commit state is unknown. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Internal Server Error", + "type": "CommitStateUnknownException", + "code": 500 + } + } + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 502: + description: + A gateway or proxy received an invalid response from the upstream server; the commit state is unknown. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Invalid response from the upstream server", + "type": "CommitStateUnknownException", + "code": 502 + } + } + 504: + description: + A server-side gateway timeout occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Gateway timed out during commit", + "type": "CommitStateUnknownException", + "code": 504 + } + } + 5XX: + description: + A server-side problem that might not be addressable on the client. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Bad Gateway", + "type": "InternalServerError", + "code": 502 + } + } + + /v1/{prefix}/namespaces/{namespace}/views: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + + get: + tags: + - Catalog API + summary: List all view identifiers underneath a given namespace + description: Return all view identifiers under this namespace + operationId: listViews + parameters: + - $ref: '#/components/parameters/page-token' + - $ref: '#/components/parameters/page-size' + responses: + 200: + $ref: '#/components/responses/ListTablesResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + NamespaceNotFound: + $ref: '#/components/examples/NoSuchNamespaceError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + post: + tags: + - Catalog API + summary: Create a view in the given namespace + description: + Create a view in the given namespace. 
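+      # A CreateViewRequest carries the view name, a schema, an initial view version
+      # with at least one SQL representation, and a properties map. Partial sketch
+      # only (names and SQL are hypothetical); see the CreateViewRequest schema for
+      # the full required shape:
+      #
+      #   {
+      #     "name": "daily_sales",
+      #     "schema": {"type": "struct", "fields": [...]},
+      #     "view-version": {
+      #       "representations": [
+      #         {"type": "sql", "sql": "SELECT ...", "dialect": "spark"}
+      #       ],
+      #       ...
+      #     },
+      #     "properties": {}
+      #   }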
+ operationId: createView + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/CreateViewRequest' + responses: + 200: + $ref: '#/components/responses/LoadViewResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: Not Found - The namespace specified does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + NamespaceNotFound: + $ref: '#/components/examples/NoSuchNamespaceError' + 409: + description: Conflict - The view already exists + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + ViewAlreadyExists: + $ref: '#/components/examples/ViewAlreadyExistsError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/namespaces/{namespace}/views/{view}: + parameters: + - $ref: '#/components/parameters/prefix' + - $ref: '#/components/parameters/namespace' + - $ref: '#/components/parameters/view' + + get: + tags: + - Catalog API + summary: Load a view from the catalog + operationId: loadView + description: + Load a view from the catalog. + + + The response contains both configuration and view metadata. The configuration, if non-empty, is used + as additional configuration for the view that overrides catalog configuration. + + + The response also contains the view's full metadata, matching the view metadata JSON file. + + + The catalog configuration may contain credentials that should be used for subsequent requests for the + view. The configuration key "token" is used to pass an access token to be used as a bearer token + for view requests.
Otherwise, a token may be passed using an RFC 8693 token type as a configuration + key. For example, "urn:ietf:params:oauth:token-type:jwt=". + responses: + 200: + $ref: '#/components/responses/LoadViewResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchViewException, view to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + ViewToLoadDoesNotExist: + $ref: '#/components/examples/NoSuchViewError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + post: + tags: + - Catalog API + summary: Replace a view + operationId: replaceView + description: + Commit updates to a view. + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/CommitViewRequest' + responses: + 200: + $ref: '#/components/responses/LoadViewResponse' + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchViewException, view to load does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + ViewToUpdateDoesNotExist: + $ref: '#/components/examples/NoSuchViewError' + 409: + description: + Conflict - CommitFailedException. The client may retry. + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 500: + description: + An unknown server-side problem occurred; the commit state is unknown.
+ content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + example: { + "error": { + "message": "Internal Server Error", + "type": "CommitStateUnknownException", + "code": 500 + } + } + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 502: + description: + A gateway or proxy received an invalid response from the upstream server; the commit state is unknown. + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + example: { + "error": { + "message": "Invalid response from the upstream server", + "type": "CommitStateUnknownException", + "code": 502 + } + } + 504: + description: + A server-side gateway timeout occurred; the commit state is unknown. + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + example: { + "error": { + "message": "Gateway timed out during commit", + "type": "CommitStateUnknownException", + "code": 504 + } + } + 5XX: + description: + A server-side problem that might not be addressable on the client. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + example: { + "error": { + "message": "Bad Gateway", + "type": "InternalServerError", + "code": 502 + } + } + + delete: + tags: + - Catalog API + summary: Drop a view from the catalog + operationId: dropView + description: Remove a view from the catalog + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found - NoSuchViewException, view to drop does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + ViewToDeleteDoesNotExist: + $ref: '#/components/examples/NoSuchViewError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + head: + tags: + - Catalog API + summary: Check if a view exists + operationId: viewExists + description: + Check if a view exists within a given namespace. This request does not return a response body. + responses: + 204: + description: Success, no content + 400: + description: Bad Request + 401: + description: Unauthorized + 404: + description: Not Found + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + /v1/{prefix}/views/rename: + parameters: + - $ref: '#/components/parameters/prefix' + + post: + tags: + - Catalog API + summary: Rename a view from its current name to a new name + description: + Rename a view from one identifier to another. It's valid to move a view + across namespaces, but the server implementation is not required to support it. 
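+      # Views reuse the RenameTableRequest schema; a cross-namespace move is expressed
+      # simply by changing the destination namespace (names here are hypothetical):
+      #
+      #   {
+      #     "source": {"namespace": ["reporting"], "name": "daily_sales"},
+      #     "destination": {"namespace": ["archive"], "name": "daily_sales"}
+      #   }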
+ operationId: renameView + requestBody: + description: Current view identifier to rename and new view identifier to rename to + content: + application/json: + schema: + $ref: '#/components/schemas/RenameTableRequest' + examples: + RenameViewSameNamespace: + $ref: '#/components/examples/RenameViewSameNamespace' + required: true + responses: + 204: + description: Success, no content + 400: + $ref: '#/components/responses/BadRequestErrorResponse' + 401: + $ref: '#/components/responses/UnauthorizedResponse' + 403: + $ref: '#/components/responses/ForbiddenResponse' + 404: + description: + Not Found + - NoSuchViewException, view to rename does not exist + - NoSuchNamespaceException, The target namespace of the new identifier does not exist + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + examples: + ViewToRenameDoesNotExist: + $ref: '#/components/examples/NoSuchViewError' + NamespaceToRenameToDoesNotExist: + $ref: '#/components/examples/NoSuchNamespaceError' + 406: + $ref: '#/components/responses/UnsupportedOperationResponse' + 409: + description: Conflict - The target identifier to rename to already exists as a table or view + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + example: + $ref: '#/components/examples/ViewAlreadyExistsError' + 419: + $ref: '#/components/responses/AuthenticationTimeoutResponse' + 503: + $ref: '#/components/responses/ServiceUnavailableResponse' + 5XX: + $ref: '#/components/responses/ServerErrorResponse' + + +components: + ####################################################### + # Common Parameter Definitions Used In Several Routes # + ####################################################### + parameters: + namespace: + name: namespace + in: path + required: true + description: + A namespace identifier as a single string. + Multipart namespace parts should be separated by the unit separator (`0x1F`) byte. 
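+      # For example, the two-level namespace ["accounting", "tax"] is joined with the
+      # 0x1F unit separator and then percent-encoded in the URL path, so the path
+      # segment becomes "accounting%1Ftax".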
+ schema: + type: string + examples: + singlepart_namespace: + value: "accounting" + multipart_namespace: + value: "accounting%1Ftax" + + prefix: + name: prefix + in: path + schema: + type: string + required: true + description: An optional prefix in the path + + table: + name: table + in: path + description: A table name + required: true + schema: + type: string + example: "sales" + + view: + name: view + in: path + description: A view name + required: true + schema: + type: string + example: "sales" + + data-access: + name: X-Iceberg-Access-Delegation + in: header + description: > + Optional signal to the server that the client supports delegated access + via a comma-separated list of access mechanisms. The server may choose + to supply access via any or none of the requested mechanisms. + + + Specific properties and handling for `vended-credentials` is documented + in the `LoadTableResult` schema section of this spec document. + + + The protocol and specification for `remote-signing` is documented in + the `s3-signer-open-api.yaml` OpenApi spec in the `aws` module. + + required: false + schema: + type: string + enum: + - vended-credentials + - remote-signing + style: simple + explode: false + example: "vended-credentials,remote-signing" + + page-token: + name: pageToken + in: query + required: false + allowEmptyValue: true + schema: + $ref: '#/components/schemas/PageToken' + + page-size: + name: pageSize + in: query + description: + For servers that support pagination, this signals an upper bound of the number of results that a client will receive. + For servers that do not support pagination, clients may receive results larger than the indicated `pageSize`. 
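+      # Illustrative flow: a client first sends pageSize=10 with no pageToken; a server
+      # that supports pagination returns up to 10 identifiers plus a next-page-token,
+      # and the client repeats the request with pageToken set to that value until
+      # next-page-token is null or missing.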
+ required: false + schema: + type: integer + minimum: 1 + + ############################## + # Application Schema Objects # + ############################## + schemas: + + ErrorModel: + type: object + description: JSON error payload returned in a response with further details on the error + required: + - message + - type + - code + properties: + message: + type: string + description: Human-readable error message + type: + type: string + description: Internal type definition of the error + example: NoSuchNamespaceException + code: + type: integer + minimum: 400 + maximum: 600 + description: HTTP response code + example: 404 + stack: + type: array + items: + type: string + + CatalogConfig: + type: object + description: Server-provided configuration for the catalog. + required: + - defaults + - overrides + properties: + overrides: + type: object + additionalProperties: + type: string + description: + Properties that should be used to override client configuration; applied after defaults and client configuration. + defaults: + type: object + additionalProperties: + type: string + description: + Properties that should be used as default configuration; applied before client configuration. 
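+          # Illustrative precedence: the effective configuration is defaults, overlaid
+          # by client-provided configuration, overlaid by overrides. E.g. defaults
+          # {"clients": "4"} plus client {"clients": "5"} plus overrides
+          # {"warehouse": "prod"} yields {"clients": "5", "warehouse": "prod"}
+          # (keys and values here are hypothetical).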
+ + CreateNamespaceRequest: + type: object + required: + - namespace + properties: + namespace: + $ref: '#/components/schemas/Namespace' + properties: + type: object + description: Configured string to string map of properties for the namespace + example: {"owner": "Hank Bendickson"} + default: {} + additionalProperties: + type: string + + UpdateNamespacePropertiesRequest: + type: object + properties: + removals: + type: array + uniqueItems: true + items: + type: string + example: ["department", "access_group"] + updates: + type: object + example: {"owner": "Hank Bendickson"} + additionalProperties: + type: string + + RenameTableRequest: + type: object + required: + - source + - destination + properties: + source: + $ref: '#/components/schemas/TableIdentifier' + destination: + $ref: '#/components/schemas/TableIdentifier' + + Namespace: + description: Reference to one or more levels of a namespace + type: array + items: + type: string + example: ["accounting", "tax"] + + PageToken: + description: + An opaque token that allows clients to make use of pagination for list APIs + (e.g. ListTables). Clients may initiate the first paginated request by sending an empty + query parameter `pageToken` to the server. + + Servers that support pagination should identify the `pageToken` parameter and return a + `next-page-token` in the response if there are more results available. After the initial + request, the value of `next-page-token` from each response must be used as the `pageToken` + parameter value for the next request. The server must return `null` value for the + `next-page-token` in the last response. + + Servers that support pagination must return all results in a single response with the value + of `next-page-token` set to `null` if the query parameter `pageToken` is not set in the + request. + + Servers that do not support pagination should ignore the `pageToken` parameter and return + all results in a single response. 
The `next-page-token` must be omitted from the response. + + Clients must interpret either `null` or missing response value of `next-page-token` as + the end of the listing results. + + type: string + nullable: true + + TableIdentifier: + type: object + required: + - namespace + - name + properties: + namespace: + $ref: '#/components/schemas/Namespace' + name: + type: string + nullable: false + + PrimitiveType: + type: string + example: + - "long" + - "string" + - "fixed[16]" + - "decimal(10,2)" + + StructField: + type: object + required: + - id + - name + - type + - required + properties: + id: + type: integer + name: + type: string + type: + $ref: '#/components/schemas/Type' + required: + type: boolean + doc: + type: string + + StructType: + type: object + required: + - type + - fields + properties: + type: + type: string + enum: ["struct"] + fields: + type: array + items: + $ref: '#/components/schemas/StructField' + + ListType: + type: object + required: + - type + - element-id + - element + - element-required + properties: + type: + type: string + enum: ["list"] + element-id: + type: integer + element: + $ref: '#/components/schemas/Type' + element-required: + type: boolean + + MapType: + type: object + required: + - type + - key-id + - key + - value-id + - value + - value-required + properties: + type: + type: string + enum: ["map"] + key-id: + type: integer + key: + $ref: '#/components/schemas/Type' + value-id: + type: integer + value: + $ref: '#/components/schemas/Type' + value-required: + type: boolean + + Type: + oneOf: + - $ref: '#/components/schemas/PrimitiveType' + - $ref: '#/components/schemas/StructType' + - $ref: '#/components/schemas/ListType' + - $ref: '#/components/schemas/MapType' + + Schema: + allOf: + - $ref: '#/components/schemas/StructType' + - type: object + properties: + schema-id: + type: integer + readOnly: true + identifier-field-ids: + type: array + items: + type: integer + + Expression: + oneOf: + - $ref: 
'#/components/schemas/AndOrExpression' + - $ref: '#/components/schemas/NotExpression' + - $ref: '#/components/schemas/SetExpression' + - $ref: '#/components/schemas/LiteralExpression' + - $ref: '#/components/schemas/UnaryExpression' + + ExpressionType: + type: string + example: + - "eq" + - "and" + - "or" + - "not" + - "in" + - "not-in" + - "lt" + - "lt-eq" + - "gt" + - "gt-eq" + - "not-eq" + - "starts-with" + - "not-starts-with" + - "is-null" + - "not-null" + - "is-nan" + - "not-nan" + + AndOrExpression: + type: object + required: + - type + - left + - right + properties: + type: + $ref: '#/components/schemas/ExpressionType' + enum: ["and", "or"] + left: + $ref: '#/components/schemas/Expression' + right: + $ref: '#/components/schemas/Expression' + + NotExpression: + type: object + required: + - type + - child + properties: + type: + $ref: '#/components/schemas/ExpressionType' + enum: ["not"] + child: + $ref: '#/components/schemas/Expression' + + UnaryExpression: + type: object + required: + - type + - term + - value + properties: + type: + $ref: '#/components/schemas/ExpressionType' + enum: ["is-null", "not-null", "is-nan", "not-nan"] + term: + $ref: '#/components/schemas/Term' + value: + type: object + + LiteralExpression: + type: object + required: + - type + - term + - value + properties: + type: + $ref: '#/components/schemas/ExpressionType' + enum: ["lt", "lt-eq", "gt", "gt-eq", "eq", "not-eq", "starts-with", "not-starts-with"] + term: + $ref: '#/components/schemas/Term' + value: + type: object + + SetExpression: + type: object + required: + - type + - term + - values + properties: + type: + $ref: '#/components/schemas/ExpressionType' + enum: ["in", "not-in"] + term: + $ref: '#/components/schemas/Term' + values: + type: array + items: + type: object + + Term: + oneOf: + - $ref: '#/components/schemas/Reference' + - $ref: '#/components/schemas/TransformTerm' + + Reference: + type: string + example: + - "column-name" + + TransformTerm: + type: object + required: 
+ - type + - transform + - term + properties: + type: + type: string + enum: ["transform"] + transform: + $ref: '#/components/schemas/Transform' + term: + $ref: '#/components/schemas/Reference' + + Transform: + type: string + example: + - "identity" + - "year" + - "month" + - "day" + - "hour" + - "bucket[256]" + - "truncate[16]" + + PartitionField: + type: object + required: + - source-id + - transform + - name + properties: + field-id: + type: integer + source-id: + type: integer + name: + type: string + transform: + $ref: '#/components/schemas/Transform' + + PartitionSpec: + type: object + required: + - fields + properties: + spec-id: + type: integer + readOnly: true + fields: + type: array + items: + $ref: '#/components/schemas/PartitionField' + + SortDirection: + type: string + enum: ["asc", "desc"] + + NullOrder: + type: string + enum: ["nulls-first", "nulls-last"] + + SortField: + type: object + required: + - source-id + - transform + - direction + - null-order + properties: + source-id: + type: integer + transform: + $ref: '#/components/schemas/Transform' + direction: + $ref: '#/components/schemas/SortDirection' + null-order: + $ref: '#/components/schemas/NullOrder' + + SortOrder: + type: object + required: + - order-id + - fields + properties: + order-id: + type: integer + readOnly: true + fields: + type: array + items: + $ref: '#/components/schemas/SortField' + + Snapshot: + type: object + required: + - snapshot-id + - timestamp-ms + - manifest-list + - summary + properties: + snapshot-id: + type: integer + format: int64 + parent-snapshot-id: + type: integer + format: int64 + sequence-number: + type: integer + format: int64 + timestamp-ms: + type: integer + format: int64 + manifest-list: + type: string + description: Location of the snapshot's manifest list file + summary: + type: object + required: + - operation + properties: + operation: + type: string + enum: ["append", "replace", "overwrite", "delete"] + additionalProperties: + type: string + 
schema-id: + type: integer + + SnapshotReference: + type: object + required: + - type + - snapshot-id + properties: + type: + type: string + enum: ["tag", "branch"] + snapshot-id: + type: integer + format: int64 + max-ref-age-ms: + type: integer + format: int64 + max-snapshot-age-ms: + type: integer + format: int64 + min-snapshots-to-keep: + type: integer + + SnapshotReferences: + type: object + additionalProperties: + $ref: '#/components/schemas/SnapshotReference' + + SnapshotLog: + type: array + items: + type: object + required: + - snapshot-id + - timestamp-ms + properties: + snapshot-id: + type: integer + format: int64 + timestamp-ms: + type: integer + format: int64 + + MetadataLog: + type: array + items: + type: object + required: + - metadata-file + - timestamp-ms + properties: + metadata-file: + type: string + timestamp-ms: + type: integer + format: int64 + + TableMetadata: + type: object + required: + - format-version + - table-uuid + properties: + format-version: + type: integer + minimum: 1 + maximum: 2 + table-uuid: + type: string + location: + type: string + last-updated-ms: + type: integer + format: int64 + properties: + type: object + additionalProperties: + type: string + # schema tracking + schemas: + type: array + items: + $ref: '#/components/schemas/Schema' + current-schema-id: + type: integer + last-column-id: + type: integer + # partition spec tracking + partition-specs: + type: array + items: + $ref: '#/components/schemas/PartitionSpec' + default-spec-id: + type: integer + last-partition-id: + type: integer + # sort order tracking + sort-orders: + type: array + items: + $ref: '#/components/schemas/SortOrder' + default-sort-order-id: + type: integer + # snapshot tracking + snapshots: + type: array + items: + $ref: '#/components/schemas/Snapshot' + refs: + $ref: '#/components/schemas/SnapshotReferences' + current-snapshot-id: + type: integer + format: int64 + last-sequence-number: + type: integer + format: int64 + # logs + snapshot-log: + $ref: 
'#/components/schemas/SnapshotLog' + metadata-log: + $ref: '#/components/schemas/MetadataLog' + # statistics + statistics-files: + type: array + items: + $ref: '#/components/schemas/StatisticsFile' + partition-statistics-files: + type: array + items: + $ref: '#/components/schemas/PartitionStatisticsFile' + + SQLViewRepresentation: + type: object + required: + - type + - sql + - dialect + properties: + type: + type: string + sql: + type: string + dialect: + type: string + + ViewRepresentation: + oneOf: + - $ref: '#/components/schemas/SQLViewRepresentation' + + ViewHistoryEntry: + type: object + required: + - version-id + - timestamp-ms + properties: + version-id: + type: integer + timestamp-ms: + type: integer + format: int64 + + ViewVersion: + type: object + required: + - version-id + - timestamp-ms + - schema-id + - summary + - representations + - default-namespace + properties: + version-id: + type: integer + timestamp-ms: + type: integer + format: int64 + schema-id: + type: integer + description: Schema ID to set as current, or -1 to set last added schema + summary: + type: object + additionalProperties: + type: string + representations: + type: array + items: + $ref: '#/components/schemas/ViewRepresentation' + default-catalog: + type: string + default-namespace: + $ref: '#/components/schemas/Namespace' + + ViewMetadata: + type: object + required: + - view-uuid + - format-version + - location + - current-version-id + - versions + - version-log + - schemas + properties: + view-uuid: + type: string + format-version: + type: integer + minimum: 1 + maximum: 1 + location: + type: string + current-version-id: + type: integer + versions: + type: array + items: + $ref: '#/components/schemas/ViewVersion' + version-log: + type: array + items: + $ref: '#/components/schemas/ViewHistoryEntry' + schemas: + type: array + items: + $ref: '#/components/schemas/Schema' + properties: + type: object + additionalProperties: + type: string + + BaseUpdate: + discriminator: + 
propertyName: action + mapping: + assign-uuid: '#/components/schemas/AssignUUIDUpdate' + upgrade-format-version: '#/components/schemas/UpgradeFormatVersionUpdate' + add-schema: '#/components/schemas/AddSchemaUpdate' + set-current-schema: '#/components/schemas/SetCurrentSchemaUpdate' + add-spec: '#/components/schemas/AddPartitionSpecUpdate' + set-default-spec: '#/components/schemas/SetDefaultSpecUpdate' + add-sort-order: '#/components/schemas/AddSortOrderUpdate' + set-default-sort-order: '#/components/schemas/SetDefaultSortOrderUpdate' + add-snapshot: '#/components/schemas/AddSnapshotUpdate' + set-snapshot-ref: '#/components/schemas/SetSnapshotRefUpdate' + remove-snapshots: '#/components/schemas/RemoveSnapshotsUpdate' + remove-snapshot-ref: '#/components/schemas/RemoveSnapshotRefUpdate' + set-location: '#/components/schemas/SetLocationUpdate' + set-properties: '#/components/schemas/SetPropertiesUpdate' + remove-properties: '#/components/schemas/RemovePropertiesUpdate' + add-view-version: '#/components/schemas/AddViewVersionUpdate' + set-current-view-version: '#/components/schemas/SetCurrentViewVersionUpdate' + set-statistics: '#/components/schemas/SetStatisticsUpdate' + remove-statistics: '#/components/schemas/RemoveStatisticsUpdate' + set-partition-statistics: '#/components/schemas/SetPartitionStatisticsUpdate' + remove-partition-statistics: '#/components/schemas/RemovePartitionStatisticsUpdate' + type: object + required: + - action + properties: + action: + type: string + + AssignUUIDUpdate: + description: Assigning a UUID to a table/view should only be done when creating the table/view. 
It is not safe to re-assign the UUID if a table/view already has a UUID assigned + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - uuid + properties: + action: + type: string + enum: ["assign-uuid"] + uuid: + type: string + + UpgradeFormatVersionUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - format-version + properties: + action: + type: string + enum: ["upgrade-format-version"] + format-version: + type: integer + + AddSchemaUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - schema + properties: + action: + type: string + enum: ["add-schema"] + schema: + $ref: '#/components/schemas/Schema' + last-column-id: + type: integer + description: The highest assigned column ID for the table. This is used to ensure columns are always assigned an unused ID when evolving schemas. When omitted, it will be computed on the server side. + + SetCurrentSchemaUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - schema-id + properties: + action: + type: string + enum: ["set-current-schema"] + schema-id: + type: integer + description: Schema ID to set as current, or -1 to set last added schema + + AddPartitionSpecUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - spec + properties: + action: + type: string + enum: ["add-spec"] + spec: + $ref: '#/components/schemas/PartitionSpec' + + SetDefaultSpecUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - spec-id + properties: + action: + type: string + enum: ["set-default-spec"] + spec-id: + type: integer + description: Partition spec ID to set as the default, or -1 to set last added spec + + AddSortOrderUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - sort-order + properties: + action: + type: string + enum: ["add-sort-order"] + sort-order: + $ref: '#/components/schemas/SortOrder' + + 
SetDefaultSortOrderUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - sort-order-id + properties: + action: + type: string + enum: ["set-default-sort-order"] + sort-order-id: + type: integer + description: Sort order ID to set as the default, or -1 to set last added sort order + + AddSnapshotUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - snapshot + properties: + action: + type: string + enum: ["add-snapshot"] + snapshot: + $ref: '#/components/schemas/Snapshot' + + SetSnapshotRefUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + - $ref: '#/components/schemas/SnapshotReference' + required: + - action + - ref-name + properties: + action: + type: string + enum: ["set-snapshot-ref"] + ref-name: + type: string + + RemoveSnapshotsUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - snapshot-ids + properties: + action: + type: string + enum: ["remove-snapshots"] + snapshot-ids: + type: array + items: + type: integer + format: int64 + + RemoveSnapshotRefUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - ref-name + properties: + action: + type: string + enum: ["remove-snapshot-ref"] + ref-name: + type: string + + SetLocationUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - location + properties: + action: + type: string + enum: ["set-location"] + location: + type: string + + SetPropertiesUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - updates + properties: + action: + type: string + enum: ["set-properties"] + updates: + type: object + additionalProperties: + type: string + + RemovePropertiesUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - removals + properties: + action: + type: string + enum: ["remove-properties"] + removals: + type: array + items: + type: string + + AddViewVersionUpdate: + allOf: + 
- $ref: '#/components/schemas/BaseUpdate' + required: + - action + - view-version + properties: + action: + type: string + enum: ["add-view-version"] + view-version: + $ref: '#/components/schemas/ViewVersion' + + SetCurrentViewVersionUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - view-version-id + properties: + action: + type: string + enum: ["set-current-view-version"] + view-version-id: + type: integer + description: The view version id to set as current, or -1 to set last added view version id + + SetStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - snapshot-id + - statistics + properties: + action: + type: string + enum: ["set-statistics"] + snapshot-id: + type: integer + format: int64 + statistics: + $ref: '#/components/schemas/StatisticsFile' + + RemoveStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - snapshot-id + properties: + action: + type: string + enum: ["remove-statistics"] + snapshot-id: + type: integer + format: int64 + + SetPartitionStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - partition-statistics + properties: + action: + type: string + enum: ["set-partition-statistics"] + partition-statistics: + $ref: '#/components/schemas/PartitionStatisticsFile' + + RemovePartitionStatisticsUpdate: + allOf: + - $ref: '#/components/schemas/BaseUpdate' + required: + - action + - snapshot-id + properties: + action: + type: string + enum: ["remove-partition-statistics"] + snapshot-id: + type: integer + format: int64 + + TableUpdate: + anyOf: + - $ref: '#/components/schemas/AssignUUIDUpdate' + - $ref: '#/components/schemas/UpgradeFormatVersionUpdate' + - $ref: '#/components/schemas/AddSchemaUpdate' + - $ref: '#/components/schemas/SetCurrentSchemaUpdate' + - $ref: '#/components/schemas/AddPartitionSpecUpdate' + - $ref: '#/components/schemas/SetDefaultSpecUpdate' + - $ref: 
'#/components/schemas/AddSortOrderUpdate' + - $ref: '#/components/schemas/SetDefaultSortOrderUpdate' + - $ref: '#/components/schemas/AddSnapshotUpdate' + - $ref: '#/components/schemas/SetSnapshotRefUpdate' + - $ref: '#/components/schemas/RemoveSnapshotsUpdate' + - $ref: '#/components/schemas/RemoveSnapshotRefUpdate' + - $ref: '#/components/schemas/SetLocationUpdate' + - $ref: '#/components/schemas/SetPropertiesUpdate' + - $ref: '#/components/schemas/RemovePropertiesUpdate' + - $ref: '#/components/schemas/SetStatisticsUpdate' + - $ref: '#/components/schemas/RemoveStatisticsUpdate' + + ViewUpdate: + anyOf: + - $ref: '#/components/schemas/AssignUUIDUpdate' + - $ref: '#/components/schemas/UpgradeFormatVersionUpdate' + - $ref: '#/components/schemas/AddSchemaUpdate' + - $ref: '#/components/schemas/SetLocationUpdate' + - $ref: '#/components/schemas/SetPropertiesUpdate' + - $ref: '#/components/schemas/RemovePropertiesUpdate' + - $ref: '#/components/schemas/AddViewVersionUpdate' + - $ref: '#/components/schemas/SetCurrentViewVersionUpdate' + + TableRequirement: + discriminator: + propertyName: type + mapping: + assert-create: '#/components/schemas/AssertCreate' + assert-table-uuid: '#/components/schemas/AssertTableUUID' + assert-ref-snapshot-id: '#/components/schemas/AssertRefSnapshotId' + assert-last-assigned-field-id: '#/components/schemas/AssertLastAssignedFieldId' + assert-current-schema-id: '#/components/schemas/AssertCurrentSchemaId' + assert-last-assigned-partition-id: '#/components/schemas/AssertLastAssignedPartitionId' + assert-default-spec-id: '#/components/schemas/AssertDefaultSpecId' + assert-default-sort-order-id: '#/components/schemas/AssertDefaultSortOrderId' + type: object + required: + - type + properties: + type: + type: "string" + + AssertCreate: + allOf: + - $ref: "#/components/schemas/TableRequirement" + type: object + description: The table must not already exist; used for create transactions + required: + - type + properties: + type: + type: string + 
enum: ["assert-create"] + + AssertTableUUID: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: The table UUID must match the requirement's `uuid` + required: + - type + - uuid + properties: + type: + type: string + enum: ["assert-table-uuid"] + uuid: + type: string + + AssertRefSnapshotId: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: + The table branch or tag identified by the requirement's `ref` must reference the requirement's `snapshot-id`; + if `snapshot-id` is `null` or missing, the ref must not already exist + required: + - type + - ref + - snapshot-id + properties: + type: + type: string + enum: ["assert-ref-snapshot-id"] + ref: + type: string + snapshot-id: + type: integer + format: int64 + + AssertLastAssignedFieldId: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: + The table's last assigned column id must match the requirement's `last-assigned-field-id` + required: + - type + - last-assigned-field-id + properties: + type: + type: string + enum: ["assert-last-assigned-field-id"] + last-assigned-field-id: + type: integer + + AssertCurrentSchemaId: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: + The table's current schema id must match the requirement's `current-schema-id` + required: + - type + - current-schema-id + properties: + type: + type: string + enum: ["assert-current-schema-id"] + current-schema-id: + type: integer + + AssertLastAssignedPartitionId: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: + The table's last assigned partition id must match the requirement's `last-assigned-partition-id` + required: + - type + - last-assigned-partition-id + properties: + type: + type: string + enum: ["assert-last-assigned-partition-id"] + last-assigned-partition-id: + type: integer + + AssertDefaultSpecId: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: + The table's default spec id must match the 
requirement's `default-spec-id` + required: + - type + - default-spec-id + properties: + type: + type: string + enum: ["assert-default-spec-id"] + default-spec-id: + type: integer + + AssertDefaultSortOrderId: + allOf: + - $ref: "#/components/schemas/TableRequirement" + description: + The table's default sort order id must match the requirement's `default-sort-order-id` + required: + - type + - default-sort-order-id + properties: + type: + type: string + enum: ["assert-default-sort-order-id"] + default-sort-order-id: + type: integer + + ViewRequirement: + discriminator: + propertyName: type + mapping: + assert-view-uuid: '#/components/schemas/AssertViewUUID' + type: object + required: + - type + properties: + type: + type: "string" + + AssertViewUUID: + allOf: + - $ref: "#/components/schemas/ViewRequirement" + description: The view UUID must match the requirement's `uuid` + required: + - type + - uuid + properties: + type: + type: string + enum: ["assert-view-uuid"] + uuid: + type: string + + LoadTableResult: + description: | + Result used when a table is successfully loaded. + + + The table metadata JSON is returned in the `metadata` field. The corresponding file location of table metadata should be returned in the `metadata-location` field, unless the metadata is not yet committed. For example, a create transaction may return metadata that is staged but not committed. + Clients can check whether metadata has changed by comparing metadata locations after the table has been created. + + + The `config` map returns table-specific configuration for the table's resources, including its HTTP client and FileIO. For example, config may contain a specific FileIO implementation class for the table depending on its underlying storage. 
+ + + The following configurations should be respected by clients: + + ## General Configurations + + - `token`: Authorization bearer token to use for table requests if OAuth2 security is enabled + + ## AWS Configurations + + The following configurations should be respected when working with tables stored in AWS S3 + - `client.region`: region to configure client for making requests to AWS + - `s3.access-key-id`: id for credentials that provide access to the data in S3 + - `s3.secret-access-key`: secret for credentials that provide access to data in S3 + - `s3.session-token`: if present, this value should be used as the session token + - `s3.remote-signing-enabled`: if `true` remote signing should be performed as described in the `s3-signer-open-api.yaml` specification + type: object + required: + - metadata + properties: + metadata-location: + type: string + description: May be null if the table is staged as part of a transaction + metadata: + $ref: '#/components/schemas/TableMetadata' + config: + type: object + additionalProperties: + type: string + + CommitTableRequest: + type: object + required: + - requirements + - updates + properties: + identifier: + description: Table identifier to update; must be present for CommitTransactionRequest + $ref: '#/components/schemas/TableIdentifier' + requirements: + type: array + items: + $ref: '#/components/schemas/TableRequirement' + updates: + type: array + items: + $ref: '#/components/schemas/TableUpdate' + + CommitViewRequest: + type: object + required: + - updates + properties: + identifier: + description: View identifier to update + $ref: '#/components/schemas/TableIdentifier' + requirements: + type: array + items: + $ref: '#/components/schemas/ViewRequirement' + updates: + type: array + items: + $ref: '#/components/schemas/ViewUpdate' + + CommitTransactionRequest: + type: object + required: + - table-changes + properties: + table-changes: + type: array + items: + description: Table commit request; must provide 
an `identifier` + $ref: '#/components/schemas/CommitTableRequest' + + CreateTableRequest: + type: object + required: + - name + - schema + properties: + name: + type: string + location: + type: string + schema: + $ref: '#/components/schemas/Schema' + partition-spec: + $ref: '#/components/schemas/PartitionSpec' + write-order: + $ref: '#/components/schemas/SortOrder' + stage-create: + type: boolean + properties: + type: object + additionalProperties: + type: string + + RegisterTableRequest: + type: object + required: + - name + - metadata-location + properties: + name: + type: string + metadata-location: + type: string + + CreateViewRequest: + type: object + required: + - name + - schema + - view-version + - properties + properties: + name: + type: string + location: + type: string + schema: + $ref: '#/components/schemas/Schema' + view-version: + $ref: '#/components/schemas/ViewVersion' + description: The view version to create; the schema-id sent within the view-version will be replaced with the id assigned to the provided schema + properties: + type: object + additionalProperties: + type: string + + LoadViewResult: + description: | + Result used when a view is successfully loaded. + + + The view metadata JSON is returned in the `metadata` field. The corresponding file location of view metadata is returned in the `metadata-location` field. + Clients can check whether metadata has changed by comparing metadata locations after the view has been created. + + The `config` map returns view-specific configuration for the view's resources. 
+ + The following configurations should be respected by clients: + + ## General Configurations + + - `token`: Authorization bearer token to use for view requests if OAuth2 security is enabled + + type: object + required: + - metadata-location + - metadata + properties: + metadata-location: + type: string + metadata: + $ref: '#/components/schemas/ViewMetadata' + config: + type: object + additionalProperties: + type: string + + TokenType: + type: string + enum: + - urn:ietf:params:oauth:token-type:access_token + - urn:ietf:params:oauth:token-type:refresh_token + - urn:ietf:params:oauth:token-type:id_token + - urn:ietf:params:oauth:token-type:saml1 + - urn:ietf:params:oauth:token-type:saml2 + - urn:ietf:params:oauth:token-type:jwt + description: + Token type identifier, from RFC 8693 Section 3 + + + See https://datatracker.ietf.org/doc/html/rfc8693#section-3 + + OAuthClientCredentialsRequest: + description: + OAuth2 client credentials request + + + See https://datatracker.ietf.org/doc/html/rfc6749#section-4.4 + type: object + required: + - grant_type + - client_id + - client_secret + properties: + grant_type: + type: string + enum: + - client_credentials + scope: + type: string + client_id: + type: string + description: + Client ID + + + This can be sent in the request body, but OAuth2 recommends sending it in + a Basic Authorization header. + client_secret: + type: string + description: + Client secret + + + This can be sent in the request body, but OAuth2 recommends sending it in + a Basic Authorization header. 
+ + OAuthTokenExchangeRequest: + description: + OAuth2 token exchange request + + + See https://datatracker.ietf.org/doc/html/rfc8693 + type: object + required: + - grant_type + - subject_token + - subject_token_type + properties: + grant_type: + type: string + enum: + - urn:ietf:params:oauth:grant-type:token-exchange + scope: + type: string + requested_token_type: + $ref: '#/components/schemas/TokenType' + subject_token: + type: string + description: Subject token for token exchange request + subject_token_type: + $ref: '#/components/schemas/TokenType' + actor_token: + type: string + description: Actor token for token exchange request + actor_token_type: + $ref: '#/components/schemas/TokenType' + + OAuthTokenRequest: + anyOf: + - $ref: '#/components/schemas/OAuthClientCredentialsRequest' + - $ref: '#/components/schemas/OAuthTokenExchangeRequest' + + CounterResult: + type: object + required: + - unit + - value + properties: + unit: + type: string + value: + type: integer + format: int64 + + TimerResult: + type: object + required: + - time-unit + - count + - total-duration + properties: + time-unit: + type: string + count: + type: integer + format: int64 + total-duration: + type: integer + format: int64 + + MetricResult: + anyOf: + - $ref: '#/components/schemas/CounterResult' + - $ref: '#/components/schemas/TimerResult' + + Metrics: + type: object + additionalProperties: + $ref: '#/components/schemas/MetricResult' + example: + "metrics": { + "total-planning-duration": { + "count": 1, + "time-unit": "nanoseconds", + "total-duration": 2644235116 + }, + "result-data-files": { + "unit": "count", + "value": 1, + }, + "result-delete-files": { + "unit": "count", + "value": 0, + }, + "total-data-manifests": { + "unit": "count", + "value": 1, + }, + "total-delete-manifests": { + "unit": "count", + "value": 0, + }, + "scanned-data-manifests": { + "unit": "count", + "value": 1, + }, + "skipped-data-manifests": { + "unit": "count", + "value": 0, + }, + "total-file-size-bytes": 
{ + "unit": "bytes", + "value": 10, + }, + "total-delete-file-size-bytes": { + "unit": "bytes", + "value": 0, + } + } + + ReportMetricsRequest: + anyOf: + - $ref: '#/components/schemas/ScanReport' + - $ref: '#/components/schemas/CommitReport' + required: + - report-type + properties: + report-type: + type: string + + ScanReport: + type: object + required: + - table-name + - snapshot-id + - filter + - schema-id + - projected-field-ids + - projected-field-names + - metrics + properties: + table-name: + type: string + snapshot-id: + type: integer + format: int64 + filter: + $ref: '#/components/schemas/Expression' + schema-id: + type: integer + projected-field-ids: + type: array + items: + type: integer + projected-field-names: + type: array + items: + type: string + metrics: + $ref: '#/components/schemas/Metrics' + metadata: + type: object + additionalProperties: + type: string + + CommitReport: + type: object + required: + - table-name + - snapshot-id + - sequence-number + - operation + - metrics + properties: + table-name: + type: string + snapshot-id: + type: integer + format: int64 + sequence-number: + type: integer + format: int64 + operation: + type: string + metrics: + $ref: '#/components/schemas/Metrics' + metadata: + type: object + additionalProperties: + type: string + + NotificationRequest: + required: + - notification-type + properties: + notification-type: + $ref: '#/components/schemas/NotificationType' + payload: + $ref: '#/components/schemas/TableUpdateNotification' + + NotificationType: + type: string + enum: + - UNKNOWN + - CREATE + - UPDATE + - DROP + + TableUpdateNotification: + type: object + required: + - table-name + - timestamp + - table-uuid + - metadata-location + properties: + table-name: + type: string + timestamp: + type: integer + format: int64 + table-uuid: + type: string + metadata-location: + type: string + metadata: + $ref: '#/components/schemas/TableMetadata' + + OAuthError: + type: object + required: + - error + properties: + error: 
+ type: string + enum: + - invalid_request + - invalid_client + - invalid_grant + - unauthorized_client + - unsupported_grant_type + - invalid_scope + error_description: + type: string + error_uri: + type: string + + OAuthTokenResponse: + type: object + required: + - access_token + - token_type + properties: + access_token: + type: string + description: + The access token, for client credentials or token exchange + token_type: + type: string + enum: + - bearer + - mac + - N_A + description: + Access token type for client credentials or token exchange + + + See https://datatracker.ietf.org/doc/html/rfc6749#section-7.1 + expires_in: + type: integer + description: + Lifetime of the access token in seconds for client credentials or token exchange + issued_token_type: + $ref: '#/components/schemas/TokenType' + refresh_token: + type: string + description: Refresh token for client credentials or token exchange + scope: + type: string + description: Authorization scope for client credentials or token exchange + + IcebergErrorResponse: + description: JSON wrapper for all error responses (non-2xx) + type: object + required: + - error + properties: + error: + $ref: '#/components/schemas/ErrorModel' + additionalProperties: false + example: + { + "error": { + "message": "The server does not support this operation", + "type": "UnsupportedOperationException", + "code": 406 + } + } + + CreateNamespaceResponse: + type: object + required: + - namespace + properties: + namespace: + $ref: '#/components/schemas/Namespace' + properties: + type: object + additionalProperties: + type: string + description: + Properties stored on the namespace, if supported by the server. + example: {"owner": "Ralph", "created_at": "1452120468"} + default: {} + + GetNamespaceResponse: + type: object + required: + - namespace + properties: + namespace: + $ref: '#/components/schemas/Namespace' + properties: + type: object + description: + Properties stored on the namespace, if supported by the server. 
+ If the server does not support namespace properties, it should return null for this field. + If namespace properties are supported, but none are set, it should return an empty object. + additionalProperties: + type: string + example: {"owner": "Ralph", "transient_lastDdlTime": "1452120468"} + default: {} + nullable: true + + ListTablesResponse: + type: object + properties: + next-page-token: + $ref: '#/components/schemas/PageToken' + identifiers: + type: array + uniqueItems: true + items: + $ref: '#/components/schemas/TableIdentifier' + + ListNamespacesResponse: + type: object + properties: + next-page-token: + $ref: '#/components/schemas/PageToken' + namespaces: + type: array + uniqueItems: true + items: + $ref: '#/components/schemas/Namespace' + + UpdateNamespacePropertiesResponse: + type: object + required: + - updated + - removed + properties: + updated: + description: List of property keys that were added or updated + type: array + uniqueItems: true + items: + type: string + removed: + description: List of properties that were removed + type: array + items: + type: string + missing: + type: array + items: + type: string + description: + List of properties requested for removal that were not found + in the namespace's properties. Represents a partial success response. + Servers do not need to implement this. 
+ nullable: true + + CommitTableResponse: + type: object + required: + - metadata-location + - metadata + properties: + metadata-location: + type: string + metadata: + $ref: '#/components/schemas/TableMetadata' + + StatisticsFile: + type: object + required: + - snapshot-id + - statistics-path + - file-size-in-bytes + - file-footer-size-in-bytes + - blob-metadata + properties: + snapshot-id: + type: integer + format: int64 + statistics-path: + type: string + file-size-in-bytes: + type: integer + format: int64 + file-footer-size-in-bytes: + type: integer + format: int64 + blob-metadata: + type: array + items: + $ref: '#/components/schemas/BlobMetadata' + + BlobMetadata: + type: object + required: + - type + - snapshot-id + - sequence-number + - fields + properties: + type: + type: string + snapshot-id: + type: integer + format: int64 + sequence-number: + type: integer + format: int64 + fields: + type: array + items: + type: integer + properties: + type: object + + PartitionStatisticsFile: + type: object + required: + - snapshot-id + - statistics-path + - file-size-in-bytes + properties: + snapshot-id: + type: integer + format: int64 + statistics-path: + type: string + file-size-in-bytes: + type: integer + format: int64 + + BooleanTypeValue: + type: boolean + example: true + + IntegerTypeValue: + type: integer + example: 42 + + LongTypeValue: + type: integer + format: int64 + example: 9223372036854775807 + + FloatTypeValue: + type: number + format: float + example: 3.14 + + DoubleTypeValue: + type: number + format: double + example: 123.456 + + DecimalTypeValue: + type: string + description: + "Decimal type values are serialized as strings. Decimals with a positive scale serialize as numeric plain + text, while decimals with a negative scale use scientific notation and the exponent will be equal to the + negated scale. 
For instance, a decimal with a positive scale is '123.4500', with zero scale is '2', + and with a negative scale is '2E+20'" + example: "123.4500" + + StringTypeValue: + type: string + example: "hello" + + UUIDTypeValue: + type: string + format: uuid + pattern: '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$' + maxLength: 36 + minLength: 36 + description: + "UUID type values are serialized as a 36-character lowercase string in standard UUID format as specified + by RFC-4122" + example: "eb26bdb1-a1d8-4aa6-990e-da940875492c" + + DateTypeValue: + type: string + format: date + description: + "Date type values follow the 'YYYY-MM-DD' ISO-8601 standard date format" + example: "2007-12-03" + + TimeTypeValue: + type: string + description: + "Time type values follow the 'HH:MM:SS.ssssss' ISO-8601 format with microsecond precision" + example: "22:31:08.123456" + + TimestampTypeValue: + type: string + description: + "Timestamp type values follow the 'YYYY-MM-DDTHH:MM:SS.ssssss' ISO-8601 format with microsecond precision" + example: "2007-12-03T10:15:30.123456" + + TimestampTzTypeValue: + type: string + description: + "TimestampTz type values follow the 'YYYY-MM-DDTHH:MM:SS.ssssss+00:00' ISO-8601 format with microsecond precision, + and a timezone offset (+00:00 for UTC)" + example: "2007-12-03T10:15:30.123456+00:00" + + TimestampNanoTypeValue: + type: string + description: + "Timestamp_ns type values follow the 'YYYY-MM-DDTHH:MM:SS.sssssssss' ISO-8601 format with nanosecond precision" + example: "2007-12-03T10:15:30.123456789" + + TimestampTzNanoTypeValue: + type: string + description: + "Timestamp_ns type values follow the 'YYYY-MM-DDTHH:MM:SS.sssssssss+00:00' ISO-8601 format with nanosecond + precision, and a timezone offset (+00:00 for UTC)" + example: "2007-12-03T10:15:30.123456789+00:00" + + FixedTypeValue: + type: string + description: + "Fixed length type values are stored and serialized as an uppercase hexadecimal string + preserving the fixed length" 
+ example: "78797A" + + BinaryTypeValue: + type: string + description: + "Binary type values are stored and serialized as an uppercase hexadecimal string" + example: "78797A" + + CountMap: + type: object + properties: + keys: + type: array + items: + $ref: '#/components/schemas/IntegerTypeValue' + description: "List of integer column ids for each corresponding value" + values: + type: array + items: + $ref: '#/components/schemas/LongTypeValue' + description: "List of Long values, matched to 'keys' by index" + example: + { + "keys": [1, 2], + "values": [100,200] + } + + ValueMap: + type: object + properties: + keys: + type: array + items: + $ref: '#/components/schemas/IntegerTypeValue' + description: "List of integer column ids for each corresponding value" + values: + type: array + items: + $ref: '#/components/schemas/PrimitiveTypeValue' + description: "List of primitive type values, matched to 'keys' by index" + example: + { + "keys": [1, 2], + "values": [100, "test"] + } + + PrimitiveTypeValue: + oneOf: + - $ref: '#/components/schemas/BooleanTypeValue' + - $ref: '#/components/schemas/IntegerTypeValue' + - $ref: '#/components/schemas/LongTypeValue' + - $ref: '#/components/schemas/FloatTypeValue' + - $ref: '#/components/schemas/DoubleTypeValue' + - $ref: '#/components/schemas/DecimalTypeValue' + - $ref: '#/components/schemas/StringTypeValue' + - $ref: '#/components/schemas/UUIDTypeValue' + - $ref: '#/components/schemas/DateTypeValue' + - $ref: '#/components/schemas/TimeTypeValue' + - $ref: '#/components/schemas/TimestampTypeValue' + - $ref: '#/components/schemas/TimestampTzTypeValue' + - $ref: '#/components/schemas/TimestampNanoTypeValue' + - $ref: '#/components/schemas/TimestampTzNanoTypeValue' + - $ref: '#/components/schemas/FixedTypeValue' + - $ref: '#/components/schemas/BinaryTypeValue' + + FileFormat: + type: string + enum: + - avro + - orc + - parquet + + ContentFile: + discriminator: + propertyName: content + mapping: + data: '#/components/schemas/DataFile' 
+ position-deletes: '#/components/schemas/PositionDeleteFile' + equality-deletes: '#/components/schemas/EqualityDeleteFile' + type: object + required: + - spec-id + - content + - file-path + - file-format + - file-size-in-bytes + - record-count + properties: + content: + type: string + file-path: + type: string + file-format: + $ref: '#/components/schemas/FileFormat' + spec-id: + type: integer + partition: + type: array + items: + $ref: '#/components/schemas/PrimitiveTypeValue' + description: + "A list of partition field values ordered based on the fields of the partition spec specified by the + `spec-id`" + example: [1, "bar"] + file-size-in-bytes: + type: integer + format: int64 + description: "Total file size in bytes" + record-count: + type: integer + format: int64 + description: "Number of records in the file" + key-metadata: + allOf: + - $ref: '#/components/schemas/BinaryTypeValue' + description: "Encryption key metadata blob" + split-offsets: + type: array + items: + type: integer + format: int64 + description: "List of splittable offsets" + sort-order-id: + type: integer + + DataFile: + allOf: + - $ref: '#/components/schemas/ContentFile' + type: object + required: + - content + properties: + content: + type: string + enum: ["data"] + column-sizes: + allOf: + - $ref: '#/components/schemas/CountMap' + description: "Map of column id to total size on disk" + value-counts: + allOf: + - $ref: '#/components/schemas/CountMap' + description: "Map of column id to total count, including null and NaN" + null-value-counts: + allOf: + - $ref: '#/components/schemas/CountMap' + description: "Map of column id to null value count" + nan-value-counts: + allOf: + - $ref: '#/components/schemas/CountMap' + description: "Map of column id to number of NaN values in the column" + lower-bounds: + allOf: + - $ref: '#/components/schemas/ValueMap' + description: "Map of column id to lower bound primitive type values" + upper-bounds: + allOf: + - $ref: '#/components/schemas/ValueMap' + 
description: "Map of column id to upper bound primitive type values" + + PositionDeleteFile: + allOf: + - $ref: '#/components/schemas/ContentFile' + required: + - content + properties: + content: + type: string + enum: ["position-deletes"] + + EqualityDeleteFile: + allOf: + - $ref: '#/components/schemas/ContentFile' + required: + - content + properties: + content: + type: string + enum: ["equality-deletes"] + equality-ids: + type: array + items: + type: integer + description: "List of equality field IDs" + + ############################# + # Reusable Response Objects # + ############################# + responses: + + OAuthTokenResponse: + description: OAuth2 token response for client credentials or token exchange + content: + application/json: + schema: + $ref: '#/components/schemas/OAuthTokenResponse' + + OAuthErrorResponse: + description: OAuth2 error response + content: + application/json: + schema: + $ref: '#/components/schemas/OAuthError' + + BadRequestErrorResponse: + description: + Indicates a bad request error. It could be caused by an unexpected request + body format or other forms of request validation failure, such as invalid json. + Usually serves application/json content, although in some cases simple text/plain content might + be returned by the server's middleware. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Malformed request", + "type": "BadRequestException", + "code": 400 + } + } + + # Note that this is a representative example response for use as a shorthand in the spec. + # The fields `message` and `type` as indicated here are not presently prescriptive. + UnauthorizedResponse: + description: + Unauthorized. Authentication is required and has failed or has not yet been provided. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Not authorized to make this request", + "type": "NotAuthorizedException", + "code": 401 + } + } + + # Note that this is a representative example response for use as a shorthand in the spec. + # The fields `message` and `type` as indicated here are not presently prescriptive. + ForbiddenResponse: + description: Forbidden. Authenticated user does not have the necessary permissions. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Not authorized to make this request", + "type": "NotAuthorizedException", + "code": 403 + } + } + + # Note that this is a representative example response for use as a shorthand in the spec. + # The fields `message` and `type` as indicated here are not presently prescriptive. + UnsupportedOperationResponse: + description: Not Acceptable / Unsupported Operation. The server does not support this operation. + content: + application/json: + schema: + $ref: '#/components/schemas/ErrorModel' + example: { + "error": { + "message": "The server does not support this operation", + "type": "UnsupportedOperationException", + "code": 406 + } + } + + IcebergErrorResponse: + description: JSON wrapper for all error responses (non-2xx) + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "The server does not support this operation", + "type": "UnsupportedOperationException", + "code": 406 + }} + + CreateNamespaceResponse: + description: + Represents a successful call to create a namespace. + Returns the namespace created, as well as any properties that were stored for the namespace, + including those the server might have added. Implementations are not required to support namespace + properties. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/CreateNamespaceResponse' + example: { + "namespace": ["accounting", "tax"], + "properties": {"owner": "Ralph", "created_at": "1452120468"} + } + + GetNamespaceResponse: + description: + Returns a namespace, as well as any properties stored on the namespace if namespace properties + are supported by the server. + content: + application/json: + schema: + $ref: '#/components/schemas/GetNamespaceResponse' + + ListTablesResponse: + description: A list of table identifiers + content: + application/json: + schema: + $ref: '#/components/schemas/ListTablesResponse' + examples: + ListTablesResponseNonEmpty: + $ref: '#/components/examples/ListTablesNonEmptyExample' + ListTablesResponseEmpty: + $ref: '#/components/examples/ListTablesEmptyExample' + + ListNamespacesResponse: + description: A list of namespaces + content: + application/json: + schema: + $ref: '#/components/schemas/ListNamespacesResponse' + examples: + NonEmptyResponse: + $ref: '#/components/examples/ListNamespacesNonEmptyExample' + EmptyResponse: + $ref: '#/components/examples/ListNamespacesEmptyExample' + + AuthenticationTimeoutResponse: + description: + Credentials have timed out. If possible, the client should refresh credentials and retry. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Credentials have timed out", + "type": "AuthenticationTimeoutException", + "code": 419 + } + } + + ServiceUnavailableResponse: + description: + The service is not ready to handle the request. The client should wait and retry. + + + The service may additionally send a Retry-After header to indicate when to retry. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Slow down", + "type": "SlowDownException", + "code": 503 + } + } + + ServerErrorResponse: + description: + A server-side problem that might not be addressable from the client + side. Used for server 5xx errors without more specific documentation in + individual routes. + content: + application/json: + schema: + $ref: '#/components/schemas/IcebergErrorResponse' + example: { + "error": { + "message": "Internal Server Error", + "type": "InternalServerError", + "code": 500 + } + } + + UpdateNamespacePropertiesResponse: + description: JSON data response for a synchronous update properties request. + content: + application/json: + schema: + $ref: '#/components/schemas/UpdateNamespacePropertiesResponse' + example: { + "updated": ["owner"], + "removed": ["foo"], + "missing": ["bar"] + } + + CreateTableResponse: + description: Table metadata result after creating a table + content: + application/json: + schema: + $ref: '#/components/schemas/LoadTableResult' + + LoadTableResponse: + description: Table metadata result when loading a table + content: + application/json: + schema: + $ref: '#/components/schemas/LoadTableResult' + + LoadViewResponse: + description: View metadata result when loading a view + content: + application/json: + schema: + $ref: '#/components/schemas/LoadViewResult' + + CommitTableResponse: + description: + Response used when a table is successfully updated. + + The table metadata JSON is returned in the metadata field. The corresponding file location of table metadata must be returned in the metadata-location field. Clients can check whether metadata has changed by comparing metadata locations. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/CommitTableResponse' + + ####################################### + # Common examples of different values # + ####################################### + examples: + + ListTablesEmptyExample: + summary: An empty list for a namespace with no tables + value: { + "identifiers": [] + } + + ListNamespacesEmptyExample: + summary: An empty list of namespaces + value: { + "namespaces": [] + } + + ListNamespacesNonEmptyExample: + summary: A non-empty list of namespaces + value: { + "namespaces": [ + ["accounting", "tax"], + ["accounting", "credits"] + ] + } + + ListTablesNonEmptyExample: + summary: A non-empty list of table identifiers + value: { + "identifiers": [ + {"namespace": ["accounting", "tax"], "name": "paid"}, + {"namespace": ["accounting", "tax"], "name": "owed"} + ] + } + + MultipartNamespaceAsPathVariable: + summary: A multi-part namespace, as represented in a path parameter + value: "accounting%1Ftax" + + NamespaceAsPathVariable: + summary: A single part namespace, as represented in a path parameter + value: "accounting" + + NamespaceAlreadyExistsError: + summary: The requested namespace already exists + value: { + "error": { + "message": "The given namespace already exists", + "type": "AlreadyExistsException", + "code": 409 + } + } + + NoSuchTableError: + summary: The requested table does not exist + value: { + "error": { + "message": "The given table does not exist", + "type": "NoSuchTableException", + "code": 404 + } + } + + NoSuchViewError: + summary: The requested view does not exist + value: { + "error": { + "message": "The given view does not exist", + "type": "NoSuchViewException", + "code": 404 + } + } + + NoSuchNamespaceError: + summary: The requested namespace does not exist + value: { + "error": { + "message": "The given namespace does not exist", + "type": "NoSuchNamespaceException", + "code": 404 + } + } + + RenameTableSameNamespace: + summary: Rename a table in the same 
namespace + value: { + "source": {"namespace": ["accounting", "tax"], "name": "paid"}, + "destination": {"namespace": ["accounting", "tax"], "name": "owed"} + } + + RenameViewSameNamespace: + summary: Rename a view in the same namespace + value: { + "source": {"namespace": ["accounting", "tax"], "name": "paid-view"}, + "destination": {"namespace": ["accounting", "tax"], "name": "owed-view"} + } + + TableAlreadyExistsError: + summary: The requested table identifier already exists + value: { + "error": { + "message": "The given table already exists", + "type": "AlreadyExistsException", + "code": 409 + } + } + + ViewAlreadyExistsError: + summary: The requested view identifier already exists + value: { + "error": { + "message": "The given view already exists", + "type": "AlreadyExistsException", + "code": 409 + } + } + + # This is an example response and is not meant to be prescriptive regarding the message or type. + UnprocessableEntityDuplicateKey: + summary: + The request body either has the same key multiple times in what should be a map with unique keys + or the request body has keys in two or more fields which should be disjoint sets. + value: { + "error": { + "message": "The request cannot be processed as there is a key present multiple times", + "type": "UnprocessableEntityException", + "code": 422 + } + } + + UpdateAndRemoveNamespacePropertiesRequest: + summary: An update namespace properties request with both properties to remove and properties to upsert. + value: { + "removals": ["foo", "bar"], + "updates": {"owner": "Raoul"} + } + + securitySchemes: + OAuth2: + type: oauth2 + description: + This scheme is used for OAuth2 authorization. + + + For unauthorized requests, services should return an appropriate 401 or + 403 response. Implementations must not return altered success (200) + responses when a request is unauthenticated or unauthorized. 
+ + If a separate authorization server is used, substitute the tokenUrl with + the full token path of the external authorization server, and use the + resulting token to access the resources defined in the spec. + flows: + clientCredentials: + tokenUrl: /v1/oauth/tokens + scopes: + catalog: Allows interacting with the Config and Catalog APIs + BearerAuth: + type: http + scheme: bearer \ No newline at end of file
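
The `CountMap` and `ValueMap` schemas in this spec pair a `keys` array of column ids with a `values` array matched by index. As an illustration only (the helper below is hypothetical and not part of the spec or the Polaris codebase), a client could decode that wire shape into an ordinary mapping like this:

```python
def decode_indexed_map(obj):
    """Decode a CountMap/ValueMap-style JSON object into a dict.

    Per the spec, entries in `values` are matched to `keys` by index,
    so the two lists must have equal length.
    """
    keys = obj.get("keys", [])
    values = obj.get("values", [])
    if len(keys) != len(values):
        raise ValueError("'keys' and 'values' must have equal length")
    return dict(zip(keys, values))


# Example payloads taken from the spec's CountMap and ValueMap examples:
count_map = decode_indexed_map({"keys": [1, 2], "values": [100, 200]})
value_map = decode_indexed_map({"keys": [1, 2], "values": [100, "test"]})
print(count_map)  # {1: 100, 2: 200}
print(value_map)  # {1: 100, 2: 'test'}
```

The same decoding applies to every `CountMap`-typed metric on `DataFile` (`column-sizes`, `value-counts`, `null-value-counts`, `nan-value-counts`) and to the `ValueMap`-typed bounds (`lower-bounds`, `upper-bounds`).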