Conversation

@adutra (Contributor) commented Jun 12, 2025

Now ready for review.

I'm implementing my own suggestion of using a Checkstyle rule to check for forbidden packages.
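
For context, here is a minimal Gradle Kotlin DSL sketch of wiring such a rule in; the config file path is a placeholder and not necessarily what this PR uses:

```kotlin
// build.gradle.kts -- sketch only; the rules-file path below is a placeholder
plugins {
  checkstyle
}

checkstyle {
  // The referenced XML file would contain the IllegalImport module that
  // bans the forbidden (shaded/relocated) packages
  configFile = rootProject.file("codestyle/checkstyle.xml")
}
```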

Contributor Author

Checkstyle depends on antlr4-runtime.


// Ensure Checkstyle runs after jandex to avoid task dependency issues
tasks
  .matching { it.name.startsWith("checkstyle") }
Member

This is a trap and prevents "configure on demand": `.matching` gives you the fully configured tasks.
I'd use

tasks.withType<Checkstyle>().configureEach { ...
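
A minimal sketch of that suggestion; the `jandex` task name is an assumption for illustration, only the `withType`/`configureEach` pattern comes from the comment above:

```kotlin
// Lazy: only Checkstyle tasks that actually get realized are configured
tasks.withType<Checkstyle>().configureEach {
  // Ordering constraint corresponding to the "runs after jandex" comment
  // in the diff; the task name "jandex" is assumed here
  mustRunAfter(tasks.named("jandex"))
}
```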

if (project.extra.has("duplicated-project-sources")) {
  // skip the style check for duplicated projects
  tasks.withType<Checkstyle>().configureEach { enabled = false }
val checkstyleVersion =
Member

Nit: move this into the checkstyle { block.
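
A sketch of what the nit might look like, assuming the version is resolved from a `checkstyle` entry in the version catalog (an assumption; only "move it into the `checkstyle {` block" comes from the comment):

```kotlin
checkstyle {
  // version-catalog lookup is an assumption made for illustration
  toolVersion = libs.versions.checkstyle.get()
}
```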

<!-- See http://checkstyle.org/config_imports.html -->
<module name="IllegalImport">
  <property name="illegalPkgs"
            value="org.apache.iceberg.shaded, org.apache.iceberg.relocated, org.testcontainers.shaded"/>
@dimas-b (Contributor) commented Jun 13, 2025

nit: I wonder if we could have `**.shaded.**`, `**.relocated.**`?

Contributor

it looks like regexp is allowed 🤔

@github-project-automation github-project-automation bot moved this from PRs In Progress to Ready to merge in Basic Kanban Board Jun 13, 2025
@snazy (Member) commented Jun 13, 2025

Not against this PR, but wouldn't it be better to not even have those dependencies that bring relocated code or not have dependencies on uber jars at all, as long as there are alternatives?

@adutra (Contributor, Author) commented Jun 13, 2025

> Not against this PR, but wouldn't it be better to not even have those dependencies that bring relocated code or not have dependencies on uber jars at all, as long as there are alternatives?

Well, we certainly could try to ban the Iceberg shaded dependencies, but AFAIR the Spark Client module is fairly complex in terms of dependency management.

And as for Testcontainers, I think that if your tests require Testcontainers, you get the Testcontainers shaded dependencies (e.g. Awaitility) "for free" 🤷‍♂️ – and there is no way to exclude them since they are in the same jar.

}

// TODO: replace the check using gradlew checkstyle plugin
tasks.register("checkNoDisallowedImports") {
Contributor

Thanks for helping get rid of that

@adutra adutra merged commit f38ac8a into apache:main Jun 16, 2025
9 of 20 checks passed
@adutra adutra deleted the checkstyle-imports branch June 16, 2025 08:51
@github-project-automation github-project-automation bot moved this from Ready to merge to Done in Basic Kanban Board Jun 16, 2025
snazy added a commit to snazy/polaris that referenced this pull request Nov 20, 2025
* Cleanup unnecessary files in client/python (apache#1878)

Cleanup unnecessary files in `client/python`

* Bump version in version.txt

With the release/1.0.0 branch being cut, we should bump this to reflect the current state of main

* JDBC: Refactor DatabaseOps (apache#1843)

* moves the databaseType computation from JDBCMetastoreManagerFactory to DbOperations
* wraps the bootstrap in a transaction
* refactors the Production Readiness checks for Postgres

* Fix two wrong links in README.md (apache#1879)

* Avoid using org.testcontainers.shaded.** (apache#1876)

* main: Update dependency io.smallrye.config:smallrye-config-core to v3.13.2 (apache#1888)

* main: Update registry.access.redhat.com/ubi9/openjdk-21-runtime Docker tag to v1.22-1.1749462970 (apache#1887)

* main: Update dependency boto3 to v1.38.36 (apache#1886)

* fix(build): Fix deprecation warnings in PolarisIntegrationTestExtension (apache#1895)

* Enable patch version updates for maintained Polaris version (apache#1891)

Polaris 1.x will be a supported/maintained release. It is crucial to apply bug and security fixes to such release branches.

Therefore, this change enables patch-version updates for Polaris 1.*

* Add Polaris community meeting record for 2025-06-12 (apache#1892)

* Do not use relative path inside CLI script

Issue apache#1868 reported that the Polaris script can fail when it's run from an unexpected path. The recent addition of a reference to `./gradlew` looks incorrect here, and should be changed to use an absolute path.

Fixes apache#1868

* feat(build): Add Checkstyle plugin and an IllegalImport rule (apache#1880)

* Python CI: pin mypy version to avoid CI failure due to new release (apache#1903)

Mypy published a new release, 1.16.1, and it caused our CI to fail for about 20 minutes due to a missing wheel (the upload had not completed):
```
 | Unable to find installation candidates for mypy (1.16.1)
    | 
    | This is likely not a Poetry issue.
    | 
    |   - 14 candidate(s) were identified for the package
    |   - 14 wheel(s) were skipped as your project's environment does not support the identified abi tags
    | 
    | Solutions:
    | Make sure the lockfile is up-to-date. You can try one of the following;
    | 
    |     1. Regenerate lockfile: poetry lock --no-cache --regenerate
    |     2. Update package     : poetry update --no-cache mypy
    | 
    | If neither works, please first check to verify that the mypy has published wheels available from your configured source that are compatible with your environment- ie. operating system, architecture (x86_64, arm64 etc.), python interpreter.
    | 

```
This PR temporarily restricts the mypy version to avoid similar issues.

We may consider bringing poetry.lock back into git tracking so that we don't automatically update test dependencies all the time.

* Remove `.github/CODEOWNERS` (apache#1902)

As per this [dev-ML discussion](https://lists.apache.org/thread/jjr5w3hslk755yvxy8b3z45c7094cxdn)

* Rename quarkus as runtime (apache#1695)

* Rename runtime/test-commons to runtime/test-common (for consistency with module name) (apache#1906)

* docs: Add `Polaris Evolution` page (apache#1890)

---------

Co-authored-by: Eric Maynard <[email protected]>

* feat(ci): Split Java Gradle CI in many jobs to reduce execution time (apache#1897)

* Add webpage for Generic Table support (apache#1889)

* add change

* add comment

* address feedback

* update limitations

* update docs

* update doc

* address feedback

* Improve the parsing and validation of UserSecretReferenceUrns (apache#1840)

This change addresses all the TODOs found in the org.polaris.core.secrets package.
Main changes:

- Create a helper to parse, validate and build the URN strings. 
- Use Regex instead of `String.split()`.
- Add Precondition checks to ensure that the URN is valid and the UserSecretManager matches the expected type. 
- Remove the now unused `GLOBAL_INSTANCE` of the UnsafeInMemorySecretsManager.

Testing 
- Existing `UnsafeInMemorySecretsManagerTest` captures most of the functional changes. 
- Added `UserSecretReferenceUrnHelperTest` to capture the utilities exposed.
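
A minimal sketch of the regex-plus-precondition approach described above, using a purely hypothetical URN layout (`urn:polaris-secret:<manager-type>:<secret-id>`); the real format and helper API are defined in apache#1840, not here:

```kotlin
// Hypothetical URN layout, for illustration only:
//   urn:polaris-secret:<manager-type>:<secret-id>
private val URN_PATTERN = Regex("^urn:polaris-secret:([a-z0-9-]+):(\\S+)$")

data class SecretUrn(val managerType: String, val secretId: String)

fun parseSecretUrn(urn: String, expectedManagerType: String): SecretUrn {
  val match = URN_PATTERN.matchEntire(urn)
  // Precondition: the URN must match the expected layout
  require(match != null) { "Invalid user secret reference URN: $urn" }
  val (managerType, secretId) = match.destructured
  // Precondition: the URN must target the secrets manager that is in use
  require(managerType == expectedManagerType) {
    "URN manager type '$managerType' does not match expected '$expectedManagerType'"
  }
  return SecretUrn(managerType, secretId)
}
```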

* Reuse shadowJar for spark client bundle jar maven publish (apache#1857)

* fix spark client

* fix test failure and address feedback

* fix error

* update regression test

* update classifier name

* address comment

* add change

* update doc

* update build and readme

* add back jr

* update dependency

* add change

* update

* update tests

* remove merge service file

* update readme

* update readme

* fix(ci): Remove dummy "build" job from Gradle CI (apache#1911)

Since apache#1897, the jobs in gradle.yaml changed and the "build" job was split into many smaller jobs. But since it was a required job, it couldn't be removed immediately.

* main: Update Quarkus Platform and Group to v3.23.3 (apache#1797)

* main: Update Quarkus Platform and Group to v3.23.3

* Adopt polaris-admin test invocation

---------

Co-authored-by: Robert Stupp <[email protected]>

* Feature: Rollback compaction on conflict (apache#1285)

The intention is to make the catalog smarter: revert compaction commits in case of a conflict, to let the writers who are actually adding or removing data in the table succeed. In a sense, compaction is always treated as a lower-priority process.

Presently the REST catalog client creates the snapshot and asks the REST server to apply it, expressed as a combination of a requirement and an update.

Polaris could apply some basic inference and generate metadata updates, given a property enabled at the table level, by reverting the commit that was created by compaction and letting the write succeed.
I had a PR in OSS that essentially did this at the client end, but we think it is best to do this at the server end, to support more such clients.

How to use this
Enable the catalog-level configuration polaris.config.rollback.compaction.on-conflicts.enabled. When this is enabled, Polaris will roll back those REPLACE-operation snapshots that carry the property polaris.internal.rollback.compaction.on-conflict in their snapshot summary, in order to resolve conflicts at the server end.
A sample use case: a Polaris deployment has this config enabled, and auto compaction (a maintenance job) is updating the table state, adding the snapshot summary property polaris.internal.rollback.compaction.on-conflict=true. A backfill process that has been running for 8 hours now wants to commit but can't, because the compaction job committed first. In this case the client reaches out to Polaris, which checks whether the compaction (REPLACE) snapshot has this property; if it does, Polaris rolls it back and lets the writer succeed.
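
For illustration only, a hedged sketch of how a compaction job could tag its REPLACE snapshot with that summary property through Iceberg's core API; the function name and the table/file parameters are placeholders, and only the property name comes from the description above:

```kotlin
import org.apache.iceberg.DataFile
import org.apache.iceberg.Table

// Sketch only: `table`, `rewrittenFiles` and `compactedFiles` are placeholders.
fun commitTaggedCompaction(
  table: Table,
  rewrittenFiles: Set<DataFile>,
  compactedFiles: Set<DataFile>
) {
  table
    .newRewrite()
    .rewriteFiles(rewrittenFiles, compactedFiles)
    // Mark this REPLACE snapshot so Polaris may roll it back on conflict
    .set("polaris.internal.rollback.compaction.on-conflict", "true")
    .commit()
}
```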

Devlist: https://lists.apache.org/thread/8k8t77dgk1vc124fnb61932bdp9kf1lc

* NoSQL: nits

* `AutoCloseable` for `PersistenceTestExtension`
* checkstyle adoptions

* fix: unify bootstrap credentials and standardize POLARIS setup (apache#1905)

- unified formatting across docker, gradle
- reverted secret to s3cr3t
- updated docker-compose, README, conftest.py

use POLARIS for consistency across docker, gradle and others.

* Add doc for rollback config (apache#1919)

* Revert "Reuse shadowJar for spark client bundle jar maven publish (apache#1857)" (apache#1921)

This reverts commit 1f7f127.

The shadowJar plugin actually stops publishing the original jar, which is not what the Spark client intends to publish for the --packages usage.

Reverting it for now; we will follow up with a better way to reuse the shadow jar plugin, likely with a separate bundle project.

* fix(build): Gradle caching effectively not working (apache#1922)

Using a `custom()` spotless formatter check effectively disables caching: see `com.diffplug.gradle.spotless.FormatExtension#custom(java.lang.String, com.diffplug.spotless.FormatterFunc)`, which uses `globalState`, a `NeverUpToDateBetweenRuns`. This change refactors this to be cacheable.

We also already have an errorprone rule, so we can get rid of the spotless step entirely.

* Update spark client to use the shaded iceberg-core in iceberg-spark-runtime to avoid spark compatibilities issue (apache#1908)

* add change

* add comment

* update change

* add comment

* add change

* add tests

* add comment

* clean up style check

* update build

* Revert "Reuse shadowJar for spark client bundle jar maven publish (apache#1857)"

This reverts commit 1f7f127.

* Reuse shadowJar for spark client bundle jar maven publish (apache#1857)

* fix spark client

* fix test failure and address feedback

* fix error

* update regression test

* update classifier name

* address comment

* add change

* update doc

* update build and readme

* add back jr

* update dependency

* add change

* update

* update tests

* remove merge service file

* update readme

* update readme

* update checkstyle

* rebase with main

* Revert "Reuse shadowJar for spark client bundle jar maven publish (apache#1857)"

This reverts commit 40f4d36.

* update checkstyle

* revert change

* address comments

* trigger tests

* Last merged commit 93938fd

---------

Co-authored-by: Honah (Jonas) J. <[email protected]>
Co-authored-by: Eric Maynard <[email protected]>
Co-authored-by: Prashant Singh <[email protected]>
Co-authored-by: Yufei Gu <[email protected]>
Co-authored-by: Dmitri Bourlatchkov <[email protected]>
Co-authored-by: Mend Renovate <[email protected]>
Co-authored-by: Alexandre Dutra <[email protected]>
Co-authored-by: JB Onofré <[email protected]>
Co-authored-by: Eric Maynard <[email protected]>
Co-authored-by: Yun Zou <[email protected]>
Co-authored-by: Pooja Nilangekar <[email protected]>
Co-authored-by: Seungchul Lee <[email protected]>