
Conversation

@fivetran-ashokborra (Contributor) commented Apr 22, 2025

Fixes #480

  • Added support for SSE-KMS for S3

Comment on lines 188 to 253
@singhpk234 (Contributor), Apr 22, 2025

We should not grant these permissions on `*`; we should discuss further what the best way is to pass the KMS key via a catalog prop (along with the role ARN ...).

cc @eric-maynard @dennishuo

Contributor Author

How about passing `*` as the default if the key is not configured via a prop?
Or do you want to support KMS only when the key ARN is present?

Contributor

IMHO, we should not give out `*`; this can be misused. I would prefer to have the KMS permission enabled via config, and when we configure it, we also configure the KMS key ARN.

Contributor Author

Changed it to the regional level and started a thread on the dev list.

Member

Maybe it's worth defining the resource as configurable?

Contributor Author

I have thought about it. While it's possible to control this at the policy level, as in the statement below, making it configurable would mean the policy grants just the right permissions. I don't have a strong opinion on that.

{
            "Sid": "DenyObjectsThatAreNotSSEKMS",
            "Effect": "Deny",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "{bucket}/*",
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "{key-id}"
                }
            }
}

Contributor

Maybe it's worth defining the resource as configurable?

I would also say that we should allow automatic determination of bucket keys without requiring configuration, since this information is already available.

@adnanhemani (Contributor) commented Apr 25, 2025

Agreed here, in general, with the other comments that a `*` resource for these KMS actions is likely a bad idea.

But I'm still struggling with a proper threat model here. The only major security threat I can see is that a data admin has used different encryption keys for files within the same table - and is using encryption key access as a way of controlling access to the overall data in the table. In this case, they would need to tell Polaris what access logic it should use to decide which KMS keys to vend credentials for - which is something we do not support, AFAIK.

The other potential case could be that a malicious user uses the `*` resource on KMS to attempt to decrypt something on the local filesystem that was encrypted using the same KMS key. For SSE, it would not be possible for a user to decrypt something in an S3 path that is not explicitly listed in the credential-vended policy (per my understanding). If a user attempts to download an S3 file to the local filesystem using some other credentials, they will need the KMS policy even to download this file in the first place.

So, in my opinion, this comes back to: do we care about the use case where different files in the same table are encrypted using different KMS keys, and do we need to ensure that credentials only allow such access per file? If so, how can we solve this? Or is there a different threat model that I have not considered here that shows an active vulnerability?

cc @singhpk234 @eric-maynard @dennishuo

@collado-mike (Contributor)

But I'm still struggling with a proper threat model here.

You're not creative enough to be a bad guy 🤣

The threat here is that the vended credentials are used outside of the scope of accessing the data in this table. There's nothing preventing a malicious user from using these credentials to call KMS to decrypt any key defined for any file elsewhere in S3. Or non-S3 resources entirely.

At minimum, the IAM policy should define an encryption context, but preferably the specific KMS key and the encryption context.

@fivetran-ashokborra (Contributor Author)

But I'm still struggling with a proper threat model here.

You're not creative enough to be a bad guy 🤣

The threat here is that the vended credentials are used outside of the scope of accessing the data in this table. There's nothing preventing a malicious user from using these credentials to call KMS to decrypt any key defined for any file elsewhere in S3. Or non-S3 resources entirely.

At minimum, the IAM policy should define an encryption context, but preferably the specific KMS key and the encryption context.

Note that we aren't talking about one single key ARN, but possibly more than one: you can configure S3 to use a new key ARN for future objects, and each S3 object's metadata also contains the key ARN it was encrypted with.

Maybe we should instead focus on restricting access by encryption context; otherwise we would need to list all the objects we are accessing and issue metadata requests for all of them just to collect the list of needed keys.

Every time there's a KMS key change, do we want Polaris to have updated information about it?

@collado-mike (Contributor)

The typical pattern is to use a single KMS master key to encrypt all files for a table. Each file will be encrypted with a different data encryption key, but the master key is typically the same.

The way I propose to model this is to explicitly assign a KMS master key to each table - if it changes, we can add a list of secondary keys - so that the policy explicitly grants encrypt/decrypt privileges using only the specified KMS key(s). Additionally, include the S3 path(s) as the encryption context so that the master key can only be used to decrypt data within the specified subpath.
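For illustration, a minimal sketch (placeholders only, and not code from this PR) of what such a statement could look like with the AWS SDK IamStatement builder that this PR already uses elsewhere:

import software.amazon.awssdk.policybuilder.iam.IamConditionOperator;
import software.amazon.awssdk.policybuilder.iam.IamEffect;
import software.amazon.awssdk.policybuilder.iam.IamStatement;

// Sketch: allow encrypt/decrypt only with the table's assigned KMS master key,
// and only when the S3 encryption context falls under the table's prefix.
IamStatement tableKmsStatement =
    IamStatement.builder()
        .effect(IamEffect.ALLOW)
        .addAction("kms:GenerateDataKey")
        .addAction("kms:Decrypt")
        .addAction("kms:Encrypt")
        // placeholder for the per-table master key ARN
        .addResource("arn:aws:kms:region:account:key/key_id")
        // placeholder S3 path used as the encryption context restriction
        .addCondition(
            IamConditionOperator.STRING_LIKE,
            "kms:EncryptionContext:aws:s3:arn",
            "arn:aws:s3:::bucket-name/prefix/*")
        .build();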

@adnanhemani (Contributor) commented Apr 29, 2025

The threat here is that the vended credentials are used outside of the scope of accessing the data in this table. There's nothing preventing a malicious user from using these credentials to call KMS to decrypt any key defined for any file elsewhere in S3. Or non-S3 resources entirely.

I'm not sure I agree here, @collado-mike. What does prevent a malicious user from doing these things is the fact that the STS credentials are already pre-scoped to the set of S3 prefixes that Polaris sets when generating the credentials. Malicious users are unable to cross over permission sets from different STS sessions to perform actions spanning those sessions, at least with regard to SSE. And to make sure we are on the same page, AWS does not allow combining IAM permissions from multiple sessions.

If you are implying that Polaris is unable to scope S3 resources to a granular-enough level (say, it is unable to scope S3 permissions to the table level), then I'd say that's a massive issue that needs to be addressed outside of KMS support, but my quick reading of the code does not support this implication.


To make this concrete, let me state an example:

  • A malicious user has access to credentials for an IAM session that has the following policies: [allow on "s3://a/b/c/d/" with actions "s3:GetObject", allow on "*" with actions "kms:Decrypt"].
  • This malicious user also has access to a different set of credentials for an IAM session that has the policy: [allow on "s3://*"]. Say that all objects in S3 are encrypted with SSE-KMS with KMS key k1.

It would not be possible for this malicious user to access any S3 file outside of prefix "s3://a/b/c/d/" with just these two sets of credentials - S3 would not allow them to download any object (even in encrypted form) using the second set of credentials due to the way that SSE-KMS and SSE-S3 work (which require the same session to have the kms:Decrypt action; S3 will not let a request succeed for an object that is encrypted with SSE-KMS/S3 when KMS permissions do not exist within that IAM session).

If you disagree, please do share a concrete example that shows an active vulnerability - I'm interested in seeing if there's something obvious I've missed altogether.


If my above statements are true, then I think we have landed back on something similar to what you are now recommending: is there a security use case today where customers use different KMS keys to encrypt files within the same table/S3 prefix? How does a user make Polaris aware of the different KMS keys in such a use case?

The way I propose to model this is to explicitly assign a KMS master key to each table

Is this something that is already supported in Iceberg? I did not find this - or is the intention for Polaris to manage this KMS master key on behalf of the customer? If so, what is Polaris' story regarding tables that are being imported into Polaris after being created?

@collado-mike (Contributor)

To make this concrete, let me state an example:

A malicious user has access to credentials for an IAM session that has the following policies: [allow on "s3://a/b/c/d/" with actions "s3:GetObject", allow on "*" with actions "kms:Decrypt"].
This malicious user also has access to a different set of credentials for an IAM session that has the policy: [allow on "s3://*"]. Say that all objects in S3 are encrypted with SSE-KMS with KMS key k1.

Now do the same example with client-side encryption with KMS-managed keys and tell me how the IAM policy prevents the user from decrypting keys for any object in S3. Or how the policy prevents the user from decrypting keys for anything that's not in S3.

There's nothing in that policy that prevents the user from using the keys outside of the context of S3-SSE

@adnanhemani (Contributor)

Synced offline with @collado-mike - while there is still no issue for users who are only using SSE, if the user is using CSE anywhere else within their AWS account, this could present a security issue, as credentials vended by Polaris could be used by a malicious user against CSE objects. Rather than supporting this feature as-is with a warning to the customer about the potential security side effects of enabling both CSE and SSE, I think we will have to hold the line on ensuring that the credentials are scoped more restrictively.

Found some examples of how to use EncryptionContext and kms:ViaService that we can add to this PR and that should bring the right amount of restriction: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-config-for-kms-objects.html#replications (look for the header: "Example – Using SSE-KMS with separate destination buckets")

I'm still forming an opinion on whether "*" resources are acceptable if we use both the EncryptionContext and (potentially, if we want to restrict to SSE) kms:ViaService. The attack vector that Mike is pointing out above is mitigated when using EncryptionContext - so I'm still trying to see if there's a way for the wildcard resources to be exploited.
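For illustration, a rough sketch (not code from this PR; region, account, bucket, and prefix are placeholders, and imports are as in the earlier sketch) of how the EncryptionContext and kms:ViaService conditions from that example could be expressed with the same SDK policy builder:

// Sketch: even with a wildcard key resource, restrict key usage to (a) S3
// calling KMS on the caller's behalf (SSE only, no client-side use) and
// (b) objects whose encryption context falls under the vended S3 prefix.
IamStatement kmsViaS3Statement =
    IamStatement.builder()
        .effect(IamEffect.ALLOW)
        .addAction("kms:GenerateDataKey")
        .addAction("kms:Decrypt")
        .addResource("arn:aws:kms:region:account:key/*")
        // placeholder region in the S3 service endpoint
        .addCondition(
            IamConditionOperator.STRING_EQUALS,
            "kms:ViaService",
            "s3.<region>.amazonaws.com")
        .addCondition(
            IamConditionOperator.STRING_LIKE,
            "kms:EncryptionContext:aws:s3:arn",
            "arn:aws:s3:::bucket-name/prefix/*")
        .build();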

@collado-mike (Contributor)

while there is still not an issue with users who are only using SSE, if the user is using CSE anywhere else within their AWS account, this could present a security issue as credentials vended by Polaris could be used by a malicious user against CSE objects.

Notably, if any KMS keys that the IAM role has access to are being used for anything client-side (e.g., encrypted DDB, local files, or anything really), the caller can use the vended credentials to immediately start decrypting anything with those keys.

I know there is some churn in the encryption configuration in Iceberg, but I also know that s3.sse.key is a property that's currently used to specify the KMS key associated with a table. The getSubscopedCreds API doesn't take in table properties today, but can we make that change to allow specific keys to be passed in and added to the policy?

@adnanhemani (Contributor)

I know there is some churn in the encryption configuration in Iceberg, but I also know that s3.sse.key is a property that's currently used to specify the KMS key associated with a table.

The way I read this, I believe the s3.sse.key property is on the catalog rather than the table - and I don't see this key being persisted in the table metadata either. I'm not an Iceberg expert, but I'm guessing it wouldn't be right to rely on, since the user can change this input in their session and we would not know whether they have access to this KMS key from the session itself.

@fivetran-ashokborra (Contributor Author) commented May 2, 2025

I have listed all the approaches that come to mind for making the policy granular.

Approach 1: Implicit Key ARN retrieval

In this approach, the customer doesn't need to provide any key ARNs explicitly. However, the role must have the GetBucketEncryption permission to retrieve the active key ARN.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyObjectsThatAreNotSSEKMS",
            "Effect": "Deny",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/prefix/*",
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:region:account:key/key_id"
                }
            }
        },
        {
            "Sid": "AllowUseOfKMS",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:region:account:key/*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::account-id:role/specific-role-name"
                },
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix/*"
                }
            }
        },
        {
            "Sid": "KMSEncrypt",
            "Effect": "Allow",
            "Action": "kms:Encrypt",
            "Resource": "arn:aws:kms:region:account:key/key_id",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::account-id:role/specific-role-name"
                },
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix/*"
                }
            }
        }

    ]
}

Pros:

  • No need for the customer to explicitly provide the KMS key ARN.

Cons:

  • The IAM policy may still grant decryption access to all KMS keys within the account’s region.

  • An additional step is required to fetch the active KMS key ARN.

Note: the region is implicitly inferred from the KMS key ARN.

Approach 2: Explicit KMS Key ARN input

This approach requires the customer to provide the KMS Key ARN as part of the input.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyObjectsThatAreNotSSEKMS",
            "Effect": "Deny",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/prefix/*",
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:region:account:key/key_id"
                }
            }
        },
        {
            "Sid": "AllowUseOfKMS",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:region:account:key/*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::account-id:role/specific-role-name"
                },
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix/*"
                }
            }
        },
        {
            "Sid": "KMSEncrypt",
            "Effect": "Allow",
            "Action": "kms:Encrypt",
            "Resource": "arn:aws:kms:region:account:key/key_id",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::account-id:role/specific-role-name"
                },
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix/*"
                }
            }
        }

    ]
}

Pros:

  • Eliminates the need for GetBucketEncryption permission to retrieve the active KMS Key ARN.

Cons:

  • The IAM policy may still allow decryption access to all KMS keys within the account’s region.
  • Customers are responsible for updating the catalog if the KMS key changes
  • The KMS Key ARN must be stored in the entities table.

Approach 3: Explicit Key ARN Input with Fine-Grained Control

This approach involves storing all relevant KMS Key ARNs in the entities table.

{
    "Version": "2012-10-17",
    "Statement": [
         {
            "Sid": "AllowUseOfKMS",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt",
                "kms:Encrypt"
            ],
            "Resource": ["arn:aws:kms:region:account:key/key_id1","arn:aws:kms:region:account:key/key_id2","arn:aws:kms:region:account:key/key_id3"],
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::account-id:role/specific-role-name"
                },
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix/*"
                }
            }
        }
    ]
}

Pros:

  • IAM policy grants access only to the specific keys required.

Cons:

  • Customers are responsible for updating the catalog if the KMS key changes.
  • All key changes must be persisted over time.

Approach 4: Region-Wide Key Access

This approach takes the region as input from the customer and grants access to all KMS keys within that region.


{
    "Version": "2012-10-17",
    "Statement": [
         {
            "Sid": "AllowUseOfKMS",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt",
                "kms:Encrypt"
            ],
            "Resource": "arn:aws:kms:region:account:key/*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::account-id:role/specific-role-name"
                },
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix/*"
                }
            }
        }
    ]
}

Pros:

  • Simplified policy requiring minimal customer input.

Cons:

  • Broad policy grants access to all keys within a region.

I believe Approach 4 is more aligned with Polaris’s use case, even though it involves broader access by allowing permissions to all KMS keys within a region. Here are my thoughts:

  • In practice, customers typically grant KMS key access only to specific users and roles.

  • S3 uses Forward Access Sessions (FAS) to handle encryption and decryption. As a result, explicit KMS Key ARNs are often not required for the calling application.

  • However, S3 doesn’t expose APIs to retrieve previously used KMS Key ARNs. To determine which keys were used, we would need to list all objects, extract their metadata, and infer the keys—an inefficient and complex process.

Meanwhile, Approaches 2 and 3 introduce additional maintenance overhead, and Approach 1 requires elevated permissions (GetBucketEncryption), which may not always be feasible.

We can add this condition as well:

 "Condition": {
    "StringEquals": {
      "kms:ViaService": [
        "s3.<region>.amazonaws.com"
      ]
    }
 }

@adnanhemani (Contributor)

I believe Approach 4 is more aligned with Polaris’s use case, even though it involves broader access by allowing permissions to all KMS keys within a region.

While I agree that Approach 4 is the right way to go, I don't think it's because of limiting the KMS keys by region. Limiting that blast radius does not actually do anything to mitigate the attack vector that @collado-mike and I reasoned about above. I'd say it's more that requiring the EncryptionContext ensures this IAM session cannot be misused against other AWS resources. Using kms:ViaService will also be helpful in the meantime to ensure that we only support SSE via S3 - I'd recommend adding that to Approach 4 for a tighter security guarantee.

S3 uses Forward Access Sessions (FAS) to handle encryption and decryption. As a result, explicit KMS permissions are often not required for the calling application.

Clarification: while you're correct that S3 uses FAS, it does require that the original IAM session you used to call S3 has the required KMS permissions.

Thoughts @collado-mike @dennishuo ?

@pavibhai (Contributor)

@fivetran-ashokborra

Here is what I am thinking of.

  • Configuration at the Catalog Level for KMS ARNs
    • Empty indicates no KMS vending required
    • * is allowed
    • The values specified here are used as the resource
  • During the creation of the policy statement, the following needs to happen
    • Use the KMS ARNs from the Catalog definition as resources
    • Two separate statements, one for read and another for write
    • Each statement comes with a condition on the encryption context S3 ARN to restrict it to the allowed read or write paths accordingly

This avoids making another call to S3 to determine the encryption settings while still restricting the use of these credentials to only the specified paths. A rough sketch of the two statements is below.
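Roughly, and only as a sketch of the shape (the catalog-level key list and the read/write path variables are hypothetical names, not existing Polaris fields), the two statements could be built like this with the policy builder this PR already uses:

import java.util.List;
import software.amazon.awssdk.policybuilder.iam.IamConditionOperator;
import software.amazon.awssdk.policybuilder.iam.IamEffect;
import software.amazon.awssdk.policybuilder.iam.IamStatement;

// Hypothetical inputs: KMS ARNs from the catalog definition, plus the
// allowed read/write S3 paths already known when sub-scoping credentials.
List<String> catalogKmsArns = List.of("arn:aws:kms:region:account:key/key_id");
List<String> readPaths = List.of("arn:aws:s3:::bucket-name/prefix/*");
List<String> writePaths = List.of("arn:aws:s3:::bucket-name/prefix/*");

// Read statement: decrypt only, conditioned on the allowed read paths.
IamStatement.Builder kmsRead =
    IamStatement.builder()
        .effect(IamEffect.ALLOW)
        .addAction("kms:Decrypt")
        .addAction("kms:DescribeKey");
catalogKmsArns.forEach(kmsRead::addResource);
kmsRead.addConditions(
    IamConditionOperator.STRING_LIKE, "kms:EncryptionContext:aws:s3:arn", readPaths);

// Write statement: data-key generation/encryption, conditioned on the write paths.
IamStatement.Builder kmsWrite =
    IamStatement.builder()
        .effect(IamEffect.ALLOW)
        .addAction("kms:GenerateDataKey")
        .addAction("kms:Encrypt");
catalogKmsArns.forEach(kmsWrite::addResource);
kmsWrite.addConditions(
    IamConditionOperator.STRING_LIKE, "kms:EncryptionContext:aws:s3:arn", writePaths);

// policyBuilder is the existing IamPolicy.Builder used in this PR.
policyBuilder.addStatement(kmsRead.build());
policyBuilder.addStatement(kmsWrite.build());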

Hope that makes sense; do let me know. I know the scope is different from what you started with initially - if it helps, I can try to put up a PR towards the above scope if it's beyond what you were initially planning.

cc: @collado-mike

policyBuilder.addStatement(
IamStatement.builder()
.effect(IamEffect.ALLOW)
.addAction("kms:GenerateDataKey")
Contributor

Can we separate out the read and write paths into two different statements?

.addAction("kms:GenerateDataKey")
.addAction("kms:Decrypt")
.addAction("kms:DescribeKey")
.addResource(getKMSArnPrefix(roleARN) + region + ":" + awsAccountId + ":key/*")
Contributor

Can this come from the catalog configuration instead?

Contributor Author

Do you mean the key? I was thinking explicit key support could be added in a follow-up PR, where we can support KMS at the table level.

@flyrain added this to the 1.1.0 milestone Jul 28, 2025
}

private static String getS3Endpoint(String roleArn, String region) {
if (roleArn.contains("aws-cn")) {
@pavibhai (Contributor), Jul 30, 2025

We could do the following instead:

Region regionObj = Region.of(region);
String.format("s3.%s.%s", regionObj.id(), regionObj.metadata().partition().dnsSuffix())

@dimas-b (Contributor) left a comment

This looks like a valuable feature to me 👍 Thanks for working on it @fivetran-ashokborra !

.key("ENABLE_KMS_SUPPORT_FOR_S3")
.catalogConfig("polaris.config.enable-kms-support-for-s3")
.description("If true, enables KMS support for S3 storage integration")
.defaultValue(KmsSupportLevel.NONE)
Contributor

It would be nice to add an end-to-end test for non-default values... Otherwise, how do we know they work if configured? 😉

Ideally, we should test with MinIO too... It looks like MinIO has KMS.

Contributor Author

Do we have Java-based e2e tests? I can work on adding one for this.


private boolean isKMSSupported(CallContext callContext) {
return !callContext
.getRealmConfig()
.getConfig(KMS_SUPPORT_LEVEL_S3)
Contributor

With this approach to getting the config value, is KmsSupportLevel.TABLE relevant?

Contributor Author

I'm deferring table-level support to a follow-up PR.

Contributor

How is this method going to work in the follow-up PR? :)


policyBuilder.addStatement(allowGetObjectStatementBuilder.build());

if (isKMSSupported(callContext)) {
Contributor

I tend to think the KMS config rather belongs in AwsStorageConfigurationInfo.

We can still keep the feature flag as a global kill switch (in case unexpected behaviour occurs at runtime), but whether to use KMS is probably something to be configured at each specific storage integration point. WDYT?

Contributor Author

I agree that it's related to storageConfig. But that would mean bringing bucket-specific config into the Catalog configuration.

Contributor

Why bucket-specific? Right now it's just an on/off flag. It looks similar to the pathStyleAccess setting (#2012)

Contributor

My point is that isKMSSupported is a global setting (in this PR), but there may be multiple catalogs in the system. Enabling KMS for one will change the behaviour of the "storage integration" code for all S3 catalogs.

@jbonofre modified the milestones: 1.1.0, 1.2.0 Aug 27, 2025
@fivetran-ashokborra (Contributor Author)

I'm back on it. Will address the comments sometime this week.

@singhpk234 modified the milestones: 1.2.0, 1.3.0 Sep 26, 2025
@dimas-b (Contributor) commented Sep 30, 2025

@fivetran-ashokborra : Could you please resolve conflicts?

@snazy (Member) commented Nov 4, 2025

Ping @fivetran-ashokborra

@dimas-b (Contributor) commented Nov 4, 2025

I believe #2802 is the more recent proposal for implementing KMS.

Also, there are still some conceptual discussions about per-catalog and per-table config in https://lists.apache.org/thread/to7lbdw1k6dym5c6gnk1nm32vqdw24qk

@fivetran-ashokborra (Contributor Author) commented Nov 5, 2025

I'm happy to continue working on this feature if it's acceptable to have a default * config at the catalog level instead of specifying a key. The way I look at it is to develop in phases. We can handle each of them separately.

  1. Default * config at the catalog level
  2. KMS Key configuration at the catalog level
  3. KMS Key configuration at the table level

@dimas-b (Contributor) commented Nov 6, 2025

@fivetran-ashokborra : now that we have this PR and #2802, would you mind replying to the dev ML thread to sync up on the impl. direction? https://lists.apache.org/thread/2k5dqhz35fcshs2o083hv7pxyrkgrbk9



Development

Successfully merging this pull request may close these issues.

[BUG] Polaris can't use S3 when KMS is enabled

9 participants