Add webpage for Generic Table support #1889
Conversation
dimas-b
left a comment
Thanks for writing this doc, @gh-yzou ! I think it is very valuable for end users (even though the feature itself is still "beta").
Using curl seems to be an example rather than a requirement here, right? I suppose any tool capable of making HTTP requests will work just as well.
Good point, I updated the wording to "using tools such as curl".
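For illustration (not part of the PR itself): a minimal sketch of calling the Generic Tables API with a plain HTTP client, here Python's requests. The base URL, token handling, catalog/namespace names, and exact endpoint path are assumptions; the Catalog API spec linked from the doc is the authoritative reference.

```python
# Minimal sketch: any HTTP-capable tool works, curl is just one example.
# URL, token, catalog, and namespace below are placeholders/assumptions.
import requests

POLARIS_URL = "http://localhost:8181/api/catalog"   # assumed local endpoint
TOKEN = "<access-token>"                             # obtained out of band
CATALOG = "my_catalog"                               # hypothetical catalog name
NAMESPACE = "my_namespace"                           # hypothetical namespace

resp = requests.get(
    f"{POLARIS_URL}/polaris/v1/{CATALOG}/namespaces/{NAMESPACE}/generic-tables",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # expected: a listing of generic table identifiers
```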
What is the Operation for making changes to the data in a Generic Table?
The client does that, not the server
Do you mean that the server does not "know" about such table changes? If so, it certainly deserves a dedicated paragraph 😅 As for me, I tend to view the Generic Tables API as something similar to the Iceberg REST Catalog API, which does control commits and, by extension, conflict resolution on the server side.
This is much closer to the Spark "HMS" catalog integration. The catalog itself is unaware of anything about the underlying table except for some loosely defined metadata about it. It's up to the engine (and plugins in that engine) to determine exactly how loading or committing data actually occurs based on that metadata.
You could imagine use cases such as a CSV-based table or a JDBC table. When these are stored in the HMS by Spark, the HMS doesn't know how to actually interact with the metadata.
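To make the analogy concrete, here is a small PySpark sketch of a CSV-backed table registered through a catalog: the catalog only stores the definition (format, options, location), while Spark's CSV data source performs all reads and writes. The table name and location are made up for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-table-example").getOrCreate()

# The catalog records only this definition; it never inspects the CSV files.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo_csv_table (id INT, name STRING)
    USING csv
    OPTIONS (header 'true')
    LOCATION 's3://my-bucket/tables/demo_csv_table'
""")

# Reads and writes are executed entirely by the engine based on that definition.
spark.sql("SELECT * FROM demo_csv_table").show()
```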
I do not challenge this mode of operation from a technical POV. I mean that it was a surprise for me and might be a surprise to other people coming from the Iceberg REST Catalog side. I'd appreciate it if this aspect were discussed in more detail in this doc.
I will add more description in the limitation section.
Thx for the update, @gh-yzou !
nit: maybe use a swagger.io reference as in #1879 ?
Ideally this should point to the same YAML version as the version of the doc (e.g. 1.0.0 vs. main).... not sure how to do it, though 🤔
Updated the link to point to swagger.io, but it seems we only have the catalog bundle YAML, so I updated the text to "Catalog API Spec" as well.
Does the Spark plugin use any specific property names? If yes, it would be good to add a section for them:
- as an illustration,
- to avoid name clashes with other use cases.
Do you mean whether the Spark plugin takes a specific property it receives from Spark and converts it to a reserved key name? Today, we do not do this. All properties are defined by Spark, and Spark has the right to update any of the property keys, which I don't think is a good idea to document here for Generic Tables.
I'm afraid I'm a bit lost here 😅
I mean: do we know what specific properties are currently set/retrieved from this properties list by Polaris code on the client or server side?
Is the properties property used by any code now (apologies that I did not review the Spark Client end-to-end)?
I think the answer to your question is no, it's not used. Actually per #1785 we don't propagate the properties from client to server, which seems incorrect to me. But it does mean there's not some special property to call out here.
So today, Polaris doesn't set or retrieve properties on the server side. On the client side, we do retrieve the "provider" and "location" properties in the Spark client and translate them into the format and location. However, this is more of a contract between Spark and the Spark client, which may not be suitable to mention on the Generic Table webpage. I can mention that our Polaris Spark client today looks into the table properties and translates the provider and location into our catalog format and location, but I don't think we want to imply that the properties must contain fields like "provider" or "location" from Spark.
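For readers following along, a conceptual sketch (not the actual Polaris Spark client code) of the translation described here: the client maps Spark's "provider" and "location" into the format and base location sent to the catalog, and passes everything else through as opaque properties. The endpoint path and payload field names are assumptions; see the Catalog API spec.

```python
import requests

def create_generic_table(base_url: str, token: str, catalog: str,
                         namespace: str, name: str, spark_props: dict) -> dict:
    payload = {
        "name": name,
        # Assumed mapping: Spark's "provider" becomes the table format.
        "format": spark_props.get("provider", "delta"),
        # Assumed mapping: Spark's "location" becomes the base location.
        "base-location": spark_props.get("location"),
        # Remaining properties are passed through; the server does not interpret them.
        "properties": {k: v for k, v in spark_props.items()
                       if k not in ("provider", "location")},
    }
    resp = requests.post(
        f"{base_url}/polaris/v1/{catalog}/namespaces/{namespace}/generic-tables",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()
```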
Suppose a Generic Table named A exists and has some properties usable from Spark via our new Spark Client. Now, if another client wants to query table A via the Generic Tables API, how should that (new) client interpret existing table properties?
The new client may not be aware of the Spark Client, but the properties are observable via API. I believe this aspect of the Generic Tables API ought to be clarified in this doc.
Isn't it the same for Iceberg tables? There may be some property on a table that my Spark job knows how to interpret, but yours doesn't. The REST catalog itself does not take any actions based on generic table properties currently.
True, but the REST catalog spec is not owned / defined by Polaris, while this one is.
I do not insist on a complete enumeration of possibilities. I think it should be sufficient to make a broad statement, for example: at this time, the contents of the properties map are not strictly defined and their interpretation is delegated to client / engine implementations, including interoperability concerns.
In fact, this is what happens with Iceberg tables too, IMHO, but I believe it is valuable to be explicit in specs.
Sounds good! I added the following description:
- Currently, there is no reserved property key defined.
- The definition and interpretation are delegated to client or engine implementations.
thx - sgtm
snazy
left a comment
This is a good start.
I've phrased some of my comments as questions as a reader of the doc.
Generally, the phrasing needs to be more precise and terms need to be explained before being used/referred to.
What I'm missing is the reason for "generic", because the only implemented use case is very specific. I.e. the text is missing an explanation of how other formats would/could be represented. The reference to "structured" implies an explicit exclusion of "semi/unstructured" data.
The doc should IMO also describe the various interactions, edge cases, and failure scenarios from a client integration's view.
I was not aware of "No commit coordination or update capability provided at the catalog service level." This is IMHO a very serious issue, because it means that there is absolutely no guarantee that the state is consistent. Older changes can overwrite newer changes (ordering of request executions). This means (data) consistency issues.
What is a "generic table"?
What does "basic" and "management" mean here?
I just meant the operations listed below. I removed the unclear wording and now just point to the list of operations.
This prevents leveraging "object store friendly paths", no?
Do you mean volume usage? I think how volumes are going to be supported in Polaris has not been discussed yet. Since this is a beta feature, if we decide to support use cases with multiple locations, we can evolve quickly to support this.
To me this is a very serious issue. No way to coordinate changes means there will be consistency issues.
Not all formats even have a way to do transactional commits. The basic premise here is to behave like the Spark Catalog with HMS (or Unity), which has these same guarantees for any source.
For example, registering a Cassandra table would work, but there is nothing in the Polaris world that would (or could) manage commits for a C* source.
Another example would be Delta Lake, which can only optionally (in 4.0) use a catalog-based commit coordinator and usually does not even consider the catalog when making commits.
Polaris is only guaranteeing a consistent view of the metadata about the entity, not any guarantees about the underlying data.
Not all formats even have a way to do transactional commits.
Delta has, no?
Polaris is only guaranteeing a consistent view of the metadata about the entity, not any guarantees about the underlying data.
How is a consistent view on metadata ensured?
I mean, this blog post mentions: "A data catalog serves as the central registry for a table’s metadata. It manages transactions and table state, as well as access controls and read/write interoperability."
Because the metadata only exists in Polaris. Only this set of properties.
Not all formats even have a way to do transactional commits.
Delta has, no?
Delta does so without using the catalog, and has an optional "commit coordinator" that uses another API, which is not provided here. So, like users of HMS for a Delta table, they would need to use a third-party commit coordinator if they wanted to use the optional "commit coordinator".
Polaris is only guaranteeing a consistent view of the metadata about the entity, not any guarantees about the underlying data.
How is a consistent view on metadata ensured?
I think you may want to check out the original design docs here. The "metadata" we are talking about here refers to what the user puts in their CREATE statement that talks to Polaris. That is the only thing Polaris knows about the table, and it is the part that will not change. Again, this is similar to how the HMS works with Spark, or how Iceberg originally worked with the HMS catalog implementation. The catalog is essentially just holding a bag of properties that we will maintain. Changes to these properties (not yet allowed) would be atomic, but they are essentially disconnected from the underlying format.
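As a rough illustration of that "bag of properties", the record the catalog holds for a generic table is conceptually something like the snippet below. Field names and values are illustrative only; the Catalog API spec is authoritative.

```python
# Illustrative shape of a generic table record as discussed above.
generic_table_record = {
    "name": "my_delta_table",
    "format": "delta",                                        # loosely defined format hint
    "base-location": "s3://my-bucket/tables/my_delta_table",  # optional
    "doc": "registered via the Polaris Spark client",         # optional description
    "properties": {"owner": "data-eng"},                      # opaque to the server
}
# The catalog stores and returns this record, but it never opens the Delta log
# (or CSV files, etc.) that the record points to.
```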
So if basically only the table name is there, what's a user's benefit for it?
I'm confused -- are we cross-examining the feature here or documenting it?
The benefit is as is documented here; you can use Delta and other non-Iceberg tables in Spark using the Spark connector. The doc walks you through how that works. If that benefit is unclear in the doc, let's fix that.
So if basically only the table name is there, what's a user's benefit for it?
As @RussellSpitzer mentioned, the milestone Polaris accomplishes today is enabling Polaris as a centralized catalog service for the Spark Catalog. Furthermore, for Delta, the state is inside the Delta log; as long as the client is able to load the Delta log, "base-location" is the only information needed to enable access to the Delta table. I can try to make this clearer in the doc.
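To illustrate the point: once a client has the registered "base-location", reading the Delta table is purely an engine-side concern. A minimal PySpark sketch, assuming a SparkSession already configured with the Delta Lake package and a made-up location:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

base_location = "s3://my-bucket/tables/my_delta_table"  # value obtained from the catalog
df = spark.read.format("delta").load(base_location)     # Delta log is resolved by the engine
df.show()
```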
What I really do not like is implying promises ("A data catalog serves as the central registry for a table’s metadata. It manages transactions and table state, as well as access controls and read/write interoperability.") which just do not hold true: there is no way to manage transactions and table state, control access, etc.
WRT what's in Polaris - it's incomplete (I suspect there's no doubt there).
I don't actually think it's incomplete in that context? I wouldn't imagine we would ever support those things for this endpoint and I don't think it would be a surprise to any user of Spark who uses this?
I agree with @snazy 's point that the blog post (link above) is not really aligned with the Generic Tables API as far as the catalog being a "central registry" for a table's metadata is concerned. The Generic Table API actually diminishes the role of the catalog as a metadata registry by delegating most of the metadata loading to the client (even location is optional in the API).
That said, as far as this PR is concerned, I believe it should be sufficient to describe actual behaviour of Polaris in this respect. There is certainly room for improvements in terms of clarity and precision in this doc, but I think the current state of this PR is probably acceptable for 1.0.
There is some prior discussion around this naming choice in the dev ML: https://lists.apache.org/thread/9jcx656ybkn132qw94g5wh8n5nmkg1d9
yup - but we cannot expect readers to know the whole dev-ML history
Since there's no "update" API, what happens if there's a mistake in the initial create request? Will the client have to delete and re-create the table?
I do not mean to cause API changes at this point, just trying to clarify things for potential non-Polaris readers.
Currently, yes. I think adding update is a good idea but the intent here should be to document the existing behavior.
IIRC the motivation to not have update in v0 was due to a potential lack of clarity around what responsibilities the catalog takes on for updates (i.e. it's not the same as an Iceberg update where the catalog writes metadata).
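For clarity, the current workaround sketched in Python with requests: since there is no update endpoint, fixing a mistaken definition means dropping the catalog entry and re-creating it. The endpoint paths are assumptions, and the drop does not touch the underlying data.

```python
import requests

def replace_generic_table(base_url, token, catalog, namespace, name, new_payload):
    headers = {"Authorization": f"Bearer {token}"}
    table_path = (f"{base_url}/polaris/v1/{catalog}"
                  f"/namespaces/{namespace}/generic-tables/{name}")
    # 1. Drop the existing catalog entry (the table's data/files are untouched).
    requests.delete(table_path, headers=headers).raise_for_status()
    # 2. Re-create the entry with the corrected definition.
    resp = requests.post(
        f"{base_url}/polaris/v1/{catalog}/namespaces/{namespace}/generic-tables",
        json=new_payload,
        headers=headers,
    )
    resp.raise_for_status()
    return resp.json()
```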
Sorry for the late reply! As we don't have update capability today, a rename will require re-creation. I added some description in the section "Generic Table API vs. Iceberg Table API".
Thanks again for making this doc page, @gh-yzou ! I think I'm done with comments from my side. I'd be fine with merging this PR. Not approving only to allow other reviewers to have another round of comments.
Nit: can we reword a bit like this?
- Current: "The Generic Table support today is very limited:"
- Suggested: "Current limitations of Generic Table support:"
updated
@snazy When we came up with the name "generic table", the intention was to evolve the support across different table formats. I added some more description at the top to help make the naming clearer. The whole feature is currently marked as beta, which indicates that things are still evolving.
dimas-b
left a comment
Thanks for making this doc page, @gh-yzou !
I think it is sufficient to inform users about the Generic Tables API in 1.0.
nit: the term "load", given other conversations under this PR, is still a bit confusing, IMHO, because it resonates with Iceberg's loadTable, which provides tables's metadata... However, this call is more like "get properties". All in all, I think it's ok since subsequent doc sections provide more clarity.
The API itself is actually named loadGenericTable, so I don't think it's exactly misleading. It does load the generic table's metadata, which is whatever metadata was registered in the catalog during createGenericTable. This is very similar to the behavior of, say, the HMS's getTable. Actually cracking open the metadata.json and returning its contents in the IRC is the exception, not the rule.
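For readers, a hedged sketch of that "load" behavior: the call returns whatever was registered at create time, and nothing is read from the table's files. The path and response fields below are assumptions based on this discussion, not a verified spec excerpt.

```python
import requests

resp = requests.get(
    "http://localhost:8181/api/catalog/polaris/v1/my_catalog"
    "/namespaces/my_namespace/generic-tables/my_delta_table",
    headers={"Authorization": "Bearer <access-token>"},
)
resp.raise_for_status()
table = resp.json()
# Expect roughly the registered fields back (name, format, base-location,
# properties); no metadata.json or Delta log is opened by the server.
print(table)
```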
This is a confusing one, but to match the behavior of other systems with similar functionality, I think "load" or "get" is probably correct.
Yeah, I was mainly trying to match the terminology of other systems by using "load" or "get", and I don't think "load" has to refer specifically to the Iceberg metadata.
This PR probably needs rebasing to catch up with CI changes (even though it's only a doc change).
@dimas-b Thanks! I just rebased.
Add a webpage for generic table support.
Fixes #1881