Conversation

@viirya
Member

@viirya viirya commented Mar 5, 2021

What changes were proposed in this pull request?

This patch adds a config spark.yarn.kerberos.renewal.excludeHadoopFileSystems which lists the filesystems to be excluded from delegation token renewal at YARN.

Why are the changes needed?

MapReduce jobs can instruct YARN to skip renewal of tokens obtained from certain hosts by specifying the hosts with the configuration mapreduce.job.hdfs-servers.token-renewal.exclude=<host1>,<host2>,...,<hostN>.

Spark, however, seems to lack a similar option, so job submission fails if YARN fails to renew the DelegationToken for any of the remote HDFS clusters. The failure in DT renewal can happen for many reasons, for example when the remote HDFS does not trust the Kerberos identity of the YARN cluster. We have a customer facing such an issue.

Does this PR introduce any user-facing change?

No, if the config is not set. Yes, as users can use this config to instruct YARN not to renew delegation tokens from certain filesystems. A minimal usage sketch follows.
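For illustration, a minimal sketch of setting both the access and exclusion configs programmatically; the filesystem URL below is a made-up example, not taken from this PR.

import org.apache.spark.SparkConf

// Hypothetical example: obtain a delegation token for a remote HDFS cluster,
// but tell YARN not to renew it.
val conf = new SparkConf()
  .set("spark.kerberos.access.hadoopFileSystems", "hdfs://remote-nn.example.com:8020")
  .set("spark.yarn.kerberos.renewal.excludeHadoopFileSystems", "hdfs://remote-nn.example.com:8020")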

How was this patch tested?

It is hard to write a unit test for this. We verified that it works with the customer running this fix in their production environment.

@github-actions github-actions bot added the CORE label Mar 5, 2021
@SparkQA

SparkQA commented Mar 5, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40402/

@SparkQA

SparkQA commented Mar 5, 2021

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40402/

@SparkQA

SparkQA commented Mar 5, 2021

Test build #135820 has finished for PR 31761 at commit e11cdb9.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

@dongjoon-hyun dongjoon-hyun left a comment

According to the PR description, is this a bug, @viirya ?

@dongjoon-hyun
Member

cc @tgravescs

@viirya
Member Author

viirya commented Mar 6, 2021

I think it is not a bug. Spark currently doesn't support excluding specific filesystems from DT renewal, while MapReduce does. So a user with such a requirement cannot submit a Spark job to YARN, because YARN is unable to renew the DT.

filesystems.foreach { fs =>
logInfo(s"getting token for: $fs with renewer $renewer")
fs.addDelegationTokens(renewer, creds)
if (fsToExclude.contains(fs.getUri.getHost)) {
Member

Since this excludes all file systems on that host, shall we revise the description in line 107?

Member Author

The hosts on which the file systems to be excluded from token renewal?

Member

Ya. Maybe.
BTW, we accept hadoopFileSystems via 'spark.kerberos.renewal.exclude.hadoopFileSystem' and convert them to hosts?
Could you describe some examples where we cannot accept hosts directly?

Member

I'm also wondering why unlike the MR config we don't use hosts directly but rather URLs.

Member Author

No, we can accept hosts directly. It doesn't matter here. Either a URL or a host can achieve the same result we want.

It takes file system URLs because I wanted it to be consistent with other configs like KERBEROS_FILESYSTEMS_TO_ACCESS.

Member

@dongjoon-hyun dongjoon-hyun left a comment

Thank you, @viirya . It looks like a useful feature.

@SparkQA

SparkQA commented Mar 6, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40417/

@SparkQA

SparkQA commented Mar 6, 2021

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40417/

@SparkQA

SparkQA commented Mar 6, 2021

Test build #135835 has finished for PR 31761 at commit d6c8f9e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

@dongjoon-hyun dongjoon-hyun left a comment

+1, looks safe. I added one more comment. (https://github.com/apache/spark/pull/31761/files#r588953655)

Also, cc @sunchao and @mridulm

Member

@sunchao sunchao left a comment

Thanks @dongjoon-hyun for pinging.

private[spark] val KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE =
  ConfigBuilder("spark.kerberos.renewal.exclude.hadoopFileSystems")
    .doc("The list of Hadoop filesystem URLs whose hosts will be excluded from " +
Member

Is this restricted to HDFS URLs only? Can "hosts" be either a nameservice name or a namenode name? Some examples might help.

Member Author

Because the configuration value will be parsed as file system URLs to get the file systems, a simple host name doesn't work here.

e.g.

scala> new Path("hdfs.namenode.net").getFileSystem(new Configuration()).getUri.getHost
res9: String = null
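
For comparison, a quick sketch of the same lookup done at the URI level with a full filesystem URL (hostname is illustrative):

scala> new Path("hdfs://nn1.example.com:8020/").toUri.getHost
res10: String = nn1.example.com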

Member

It makes sense for spark.kerberos.access.hadoopFileSystems to be URLs since Spark needs to instantiate FileSystems from them. But for this case I'm not sure if it's necessary: we can just parse the config into a set of host names and check whether the file systems above contain them:

val hostsToExclude = sparkConf.get(KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE).toSet
filesystems.filter(fs => !hostsToExclude.contains(fs.getUri.getHost)).foreach { fs =>
  ...
}

I'm fine either way though since it also makes sense to keep it consistent with spark.kerberos.access.hadoopFileSystems. BTW I think we'll need to update security.md for the new config.

Member Author

Thanks. Let me update security.md.

Contributor

So just to be clear, these filesystems are ones the user specified in spark.kerberos.access.hadoopFileSystems, so Spark wants to get initial tokens for them, but then later we don't want them renewed, correct? We should perhaps update the doc to mention that.
It would be nice to know if this works with other cluster managers like k8s - @ifilonenko @mccheah maybe?

Contributor

@HeartSaVioR Spark actually behaves this way (w/o this change): if a DT is created, then it's renewed all the time except in the following cases:

  • keytab used but no principal provided
  • ccache used but it has no kerberos credentials

Please see the code here:

/** @return Whether delegation token renewal is enabled. */
def renewalEnabled: Boolean = sparkConf.get(KERBEROS_RENEWAL_CREDENTIALS) match {
  case "keytab" => principal != null
  case "ccache" => UserGroupInformation.getCurrentUser().hasKerberosCredentials()
  case _ => false
}

Contributor

@HeartSaVioR HeartSaVioR Mar 12, 2021

Ah OK, I think I missed part of @tgravescs's comment. My bad.

His point wasn't that the new config is not necessary at all. His point was that the problem will occur only for defaultFs and/or stageFs, so the new config could be simplified instead of being general but a bit verbose.

Member Author

So just to be clear, these filesystems are ones the user specified in spark.kerberos.access.hadoopFileSystems, so Spark wants to get initial tokens for them, but then later we don't want them renewed, correct? We should perhaps update the doc to mention that.
It would be nice to know if this works with other cluster managers like k8s - @ifilonenko @mccheah maybe?

Hmm, for the customer case, the requirement is to prevent YARN from renewing the obtained delegation token. It doesn't matter whether the file system is defaultFs/stageFs or one of those specified in spark.kerberos.access.hadoopFileSystems. That said, Spark can still obtain tokens for the ones in spark.kerberos.access.hadoopFileSystems, but we don't want YARN to renew them.

Contributor

I was simply trying to clarify the exact case being hit here, make sure there wasn't an alternate solution, and make sure the docs are clear. The specific case being hit could affect the solution.

I think there are comments on this review that are very confusing, which is why I wanted clarification. Some indicate that Spark doesn't get initial tokens, others say that in this case the tokens were already acquired, etc.

In the end my comment comes down to: I think we should update the security.md doc to mention renewal in the Kerberos section for Hadoop filesystems, to help explain this to users.

Member Author

Okay, thanks @tgravescs. In short, I think this config doesn't affect Spark's own token handling; it only prevents YARN from renewing tokens, regardless of whether the tokens are obtained in advance or by Spark. Let me update security.md and try to explain it. A rough sketch of the behavior is below.
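
A minimal sketch of the idea, close to the diff shown earlier but simplified and not the exact merged code: excluded hosts get an empty renewer so the YARN RM skips renewing their tokens.

filesystems.foreach { fs =>
  if (fsToExclude.contains(fs.getUri.getHost)) {
    // YARN RM skips renewing a token whose renewer is empty
    fs.addDelegationTokens("", creds)
  } else {
    logInfo(s"getting token for: $fs with renewer $renewer")
    fs.addDelegationTokens(renewer, creds)
  }
}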

@HyukjinKwon
Member

cc @HeartSaVioR and @gaborgsomogyi FYI

@github-actions github-actions bot added the DOCS label Mar 7, 2021
@SparkQA

SparkQA commented Mar 7, 2021

Test build #135838 has finished for PR 31761 at commit faa85fd.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

logInfo(s"getting token for: $fs with renewer $renewer")
fs.addDelegationTokens(renewer, creds)
if (fsToExclude.contains(fs.getUri.getHost)) {
// RM skips renewing token with empty renewer
Contributor

Just to understand properly, does RM mean ResourceManager? Delegation token handling is now done not only for Spark on YARN but for other resource schedulers as well, so it's probably better to remove the specific resource scheduler's term/details.

Member Author

Actually you remind me that I should document that this is only for YARN. As the PR description mentions, this is YARN behavior. I am not sure if other resource schedulers follow the same behavior here.

Member

@sunchao sunchao Mar 9, 2021

Hmm, does it only apply to YARN though? It seems Spark has its own HadoopDelegationTokenManager which is separate from YARN. Also, Spark has its own renewal logic and I'm not sure how the empty-string renewer approach works in that case. See HadoopDelegationTokenManager.scheduleRenewal.

Member Author

We only tested it under YARN and verified that it skips the renewal at YARN if we give an empty renewer. I'm not sure if other resource schedulers follow this behavior, so I can document the config to mention that it is known to work for YARN.

Contributor

@gaborgsomogyi gaborgsomogyi Mar 10, 2021

Oh gosh, maybe I'm working on too much stuff in parallel and was watching bad TV here :)
This morning I put other tasks away, took the time to re-check/re-test the code, and it has no influence on the initial obtain.
The comment can be resolved and I'm fine with the code as-is.

@HeartSaVioR
Contributor

I haven't encountered such a case so I don't feel qualified to approve, but I see what the PR is trying to do and it makes sense. Generally looks OK.

fetchDelegationTokens(renewer, filesystems, creds)
fetchDelegationTokens(renewer, filesystems, creds, hadoopConf, sparkConf)

val renewIntervals = creds.getAllTokens.asScala.filter {
Contributor

Just curious: suppose we have a fs excluded from renewal; my understanding of the behavior is that it fetches the delegation token but won't renew it. Do I understand correctly? I'm also curious how these credentials are handled from here, e.g. whether they're not AbstractDelegationTokenIdentifier, or token.renew fails but the Try swallows the exception.

Member Author

The customer uses spark-submit to submit Spark jobs to YARN, and I think the customer already obtains the delegation token for the remote HDFS. YARN will try to renew the delegation token when we submit the job to it, but the YARN cluster is not able to renew it and fails.

Contributor

@gaborgsomogyi gaborgsomogyi left a comment

Thanks for the contribution, +1 on the direction. Left a question about something I don't yet understand.

.toSequence
.createWithDefault(Nil)

private[spark] val KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE =
Contributor

Since HadoopFSDelegationTokenProvider obtains the initial tokens and doesn't just renew them, why do we call it ...renewal...?

Member Author

It's related to YARN behavior. YARN will try to renew the delegation token when the job is submitted to it. For the customer case, the delegation token is already obtained when calling spark-submit, but YARN is unable to renew the DT and refuses the job. This config is for such cases, to keep YARN from renewing the DT.

Contributor

I see the point. On the other hand, the other use case is that the user lets Spark obtain the tokens, and the added configuration has an effect in that case too. For that case the name is misleading because it affects the initial obtain as well.

Member Author

@viirya viirya Mar 8, 2021

Do you mean that with an empty renewer, addDelegationTokens doesn't obtain the initial token if the token is not present?

Contributor

I mean the code works because I've tested it. My suggestion is not to name the config variable ...renewal... because it has an effect on the initial obtain too. The code is good as-is, and good job :)

Member Author

Sorry, I am confused. :) So you said this configuration also affects the initial obtain. Does that mean that if users put a file system in the configuration, Spark won't obtain an initial token for it? Or did I misunderstand the above comments?

@SparkQA

SparkQA commented Mar 8, 2021

Test build #135862 has finished for PR 31761 at commit 397ebd2.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


// The hosts of the file systems to be excluded from token renewal
val fsToExclude = sparkConf.get(KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE)
  .map(new Path(_).getFileSystem(hadoopConf).getUri.getHost)
Member

nit: we don't have to get the file system here, maybe just

map(new Path(_).toUri.getHost)
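
A small sketch of what that nit would look like applied to the snippet above (illustrative, not necessarily the merged code):

// Derive the hosts to exclude directly from the configured URLs,
// without instantiating a FileSystem for each of them.
val fsToExclude = sparkConf.get(KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE)
  .map(new Path(_).toUri.getHost)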

@mridulm
Contributor

mridulm commented Mar 10, 2021

Curious about this use case @viirya ... how does the DT get renewed? Or is the DT required only at initial job submission time?
Any reasonably long-running job will fail if the acquired token can't be renewed, right?

@mridulm
Contributor

mridulm commented Mar 10, 2021

If enabling this means the application cannot be long running, is that documented some place @viirya ? If not, can you add a note ? Thx.

@viirya
Member Author

viirya commented Mar 11, 2021

If enabling this means the application cannot be long running, is that documented some place @viirya ? If not, can you add a note ? Thx.

This should only apply to a token which is obtained before calling spark-submit and which YARN is unable to renew. So excluding it from renewal suggests this is not for long-running applications? I can add a note to the config doc if you think it is better.

@gaborgsomogyi
Contributor

Not renewing specific tokens implicitly means the workload won't work after its validity period (so I don't feel it must be added), though it doesn't hurt to add such a note to be 100% clear. Good examples are the cases I've mentioned here.

@viirya
Member Author

viirya commented Mar 15, 2021

Thanks all for the review. I updated the config doc and security.md. Hopefully it addresses the concerns.

docs/security.md Outdated
A comma-separated list of Hadoop filesystems whose hosts will be excluded from delegation
token renewal at the resource scheduler. For example, <code>spark.kerberos.renewal.exclude.hadoopFileSystems=hdfs://nn1.com:8032,
hdfs://nn2.com:8032</code>. This is known to work under YARN for now, so the YARN Resource Manager won't renew tokens for the application.
Note that as the resource scheduler does not renew the token, the application might not be long running once the token expires.
Contributor

"might not be long running" is a little bit soft in my view. Can you imagine a situation where the token expires and the workload goes on successfully?

Contributor

The workload can definitely go on until it tries to access that filesystem. There is no reason it couldn't fully succeed, depending on where it's writing data or whether it writes data at all.
We definitely could clarify though, perhaps something more like:
", so any application running longer than the original token expiration that tries to use that token will fail."

Contributor

So my intent was actually to add more description up around line 792 and add to the "When using a Hadoop filesystem" bit.
Perhaps something like:

service hosting the user's home directory and staging directory. Spark will renew Hadoop filesystem delegation tokens before their expiration unless the token is excluded via spark.kerberos.renewal.exclude.hadoopFileSystems. Please note that if the token is not renewed, any application that attempts to access the filesystem associated with that token after it expires will likely fail.

Member Author

@viirya viirya Mar 15, 2021

As far as I understand, this config doesn't affect Spark's renewal of tokens, but avoids YARN RM's renewal. That is different. The token might become invalid if Spark doesn't renew it before expiration. Whether Spark renews the token depends on #31761 (comment). Please correct me if I misunderstand it.

Contributor

Oh right. Please update the description on the issue to say what you are really trying to do here.

I'm not sure that is completely true either, though. getTokenRenewalInterval fetches the delegation tokens in fetchDelegationTokens and then calls renew on them to get the next expiration time. I'm assuming your changes in fetchDelegationTokens make renew not work. What does renew return in this case where the renewer is empty? It looks like it should throw an exception, going by the base Hadoop classes I looked at.
Assuming it's not the actual next renewal time (since you say it doesn't get renewed): if this token's expiration is actually less than all the other tokens', then we won't properly renew it on the Spark side either. That is the case where the keytab was specified. If it throws an exception, that is even worse.

I'm also wondering out loud if this config would ever apply to anything other than YARN. There is no renewal component in k8s that I'm aware of.

Member Author

@viirya viirya Mar 15, 2021

Okay, this sounds like a good point to me. I think I should not change the behavior of getTokenRenewalInterval because it is not related to the issue here. The renew call in getTokenRenewalInterval is just used to obtain the renewal interval for all FS tokens.

An exception thrown during an individual renew call will be ignored. The actual renewal interval will be the minimum among all intervals. But yes, this could be a behavior change if the token with the empty renewer actually has the minimum interval and we can no longer get it because of the exception.

Let me restore the behavior of getTokenRenewalInterval to make it safer.

I assume this config is only for YARN-specific behavior. I documented the YARN specifics explicitly in the config doc/security.md. Is there anything else I should do for it?
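
For reference, a rough sketch of the interval computation being discussed; the names and signature are illustrative, not the exact getTokenRenewalInterval code.

import scala.util.Try
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.token.{Token, TokenIdentifier}

// Each token's renew() is attempted; failures (e.g. from an empty renewer)
// are swallowed, and the smallest successful interval wins.
def renewalInterval(tokens: Seq[Token[_ <: TokenIdentifier]], issueDate: Long,
    hadoopConf: Configuration): Option[Long] = {
  val intervals = tokens.flatMap { token =>
    Try(token.renew(hadoopConf) - issueDate).toOption
  }
  if (intervals.isEmpty) None else Some(intervals.min)
}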

Contributor

I had a suggestion above for rephrasing the text here, something more like:

Note that, as the resource scheduler does not renew the token, any application running longer than the original token expiration that tries to use that token will likely fail.

@SparkQA

SparkQA commented Mar 16, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40669/

@SparkQA

SparkQA commented Mar 16, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40669/

.createWithDefault(Nil)

private[spark] val KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE =
  ConfigBuilder("spark.kerberos.renewal.exclude.hadoopFileSystems")
Contributor

I also think we should change the name of this. We have spark.kerberos.renewal.credentials, which is about how Spark renews. This has the same prefix but is not about Spark renewal; it's about resource scheduler renewal.
Honestly, at this point, if no one knows of a k8s component that could do this, I would almost say we make the config YARN-specific: spark.yarn.kerberos.renewal.excludeHadoopFileSystems. I would actually like to see rm or resourceScheduler in the name to make it even more clear, but that gets really long. Open to other opinions, but we can always make a generic one later. If someone has an idea of how this would be used with other schedulers we can leave it generic rather than YARN-specific, but I want something else in the name to show it's resource scheduler specific.

Member Author

spark.yarn.kerberos.renewal.excludeHadoopFileSystems looks okay to me.
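
A sketch of how the renamed, YARN-scoped config entry could look; the variable name, version, and doc text here are illustrative rather than the exact merged code.

private[spark] val KERBEROS_FILESYSTEMS_TO_EXCLUDE_FROM_RENEWAL =
  ConfigBuilder("spark.yarn.kerberos.renewal.excludeHadoopFileSystems")
    .doc("The list of Hadoop filesystem URLs whose hosts will be excluded from " +
      "delegation token renewal at resource scheduler (currently only YARN is known to work).")
    .version("3.2.0")
    .stringConf
    .toSequence
    .createWithDefault(Nil)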

@viirya viirya changed the title [SPARK-34295][CORE] Exclude filesystems from token renewal [SPARK-34295][CORE] Exclude filesystems from token renewal at YARN Mar 17, 2021
@viirya
Member Author

viirya commented Mar 20, 2021

retest this please

@viirya
Member Author

viirya commented Mar 20, 2021

@tgravescs Thanks for the review. Do you have any other comment I need to address?

@SparkQA

SparkQA commented Mar 20, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40871/

@SparkQA

SparkQA commented Mar 20, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40871/

@SparkQA

SparkQA commented Mar 20, 2021

Test build #136289 has finished for PR 31761 at commit 71acd5c.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

docs/security.md Outdated
<td>3.0.0</td>
</tr>
<tr>
<td><code>spark.yarn.kerberos.renewal.excludeHadoopFileSystems</code></td>
Contributor

This should be in the YARN-specific Kerberos section -> YARN-specific Kerberos Configuration (http://spark.apache.org/docs/latest/running-on-yarn.html), and we should link to it from the security.md doc.

Member Author

Moved. Thanks.

@tgravescs
Contributor

Sorry for my delay, I was out. Mostly good; I think if we just move the docs to the YARN section it's good.

@viirya
Member Author

viirya commented Mar 22, 2021

Thank you @tgravescs.

I'll leave this open for one or two more days, then I will merge it if there are no more comments.

@SparkQA

SparkQA commented Mar 22, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40947/

@SparkQA

SparkQA commented Mar 22, 2021

Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/40947/

@SparkQA

SparkQA commented Mar 22, 2021

Test build #136362 has finished for PR 31761 at commit cf8da75.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@viirya
Member Author

viirya commented Mar 24, 2021

Thanks all for the review. Merging to master.
