Conversation

@HeartSaVioR
Contributor

What changes were proposed in this pull request?

This patch fixes a missed spot: the test initializes FileStreamSinkLog with the sink's "output" directory instead of its "metadata" directory, so the verification against the sink log was a no-op.

Why are the changes needed?

Without the fix, the verification against the sink log was a no-op.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Checked with a debugger in the test, and verified that allFiles() returns non-zero entries. (Before the fix it returned zero entries, as there is no metadata in the "output" directory.)
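The mistake and its fix can be illustrated with a minimal sketch. This is not the merged test code; it only assumes Spark's internal FileStreamSinkLog constructor (version, session, path) and the FileStreamSink.metadataDir constant ("_spark_metadata"):

```scala
// Before: the sink log was pointed at the sink's output directory,
// which holds data files but no sink metadata, so allFiles() was
// always empty and the assertion verified nothing.
val badLog = new FileStreamSinkLog(FileStreamSinkLog.VERSION, spark, outputDir.toString)

// After: point it at the "_spark_metadata" subdirectory that
// FileStreamSink actually writes its log entries to.
val metadataDir = new Path(outputDir, FileStreamSink.metadataDir)
val sinkLog = new FileStreamSinkLog(FileStreamSinkLog.VERSION, spark, metadataDir.toString)
assert(sinkLog.allFiles().nonEmpty)
```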

@HeartSaVioR
Contributor Author

cc. @gengliangwang @cloud-fan @HyukjinKwon as they've reviewed #26671

@gengliangwang
Member

I am not very familiar with streaming but LGTM

@HeartSaVioR
Contributor Author

Ah that's OK. Thanks for reviewing :) cc @dongjoon-hyun @zsxwing and @gatorsmile from #26639 (original PR of SPARK-29999)

@SparkQA

SparkQA commented Jun 26, 2020

Test build #124532 has finished for PR 28930 at commit f7e6b4e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


def getMetadataLogPath(path: Path, hadoopConf: Configuration, sqlConf: SQLConf): Path = {
  val metadataDir = new Path(path, FileStreamSink.metadataDir)
  val fs = metadataDir.getFileSystem(hadoopConf)
Contributor

shall we pass the fs as a parameter as well?
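The suggested variant would look roughly like this (a sketch only, not the merged code; it assumes the same FileStreamSink helpers shown in the diff above):

```scala
// Hypothetical signature that takes the FileSystem from the caller
// instead of resolving it from the Hadoop configuration, so callers
// that already hold an fs handle avoid a second lookup.
def getMetadataLogPath(fs: FileSystem, path: Path, sqlConf: SQLConf): Path = {
  val metadataDir = new Path(path, FileStreamSink.metadataDir)
  FileStreamSink.checkEscapedMetadataPath(fs, metadataDir, sqlConf)
  metadataDir
}
```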

def getMetadataLogPath(path: Path, hadoopConf: Configuration, sqlConf: SQLConf): Path = {
  val metadataDir = new Path(path, FileStreamSink.metadataDir)
  val fs = metadataDir.getFileSystem(hadoopConf)
  FileStreamSink.checkEscapedMetadataPath(fs, metadataDir, sqlConf)
Contributor

this is new code. So this PR fixes a bug instead of just fixing a test?

Contributor Author

@HeartSaVioR Jun 29, 2020

Oh sorry, I should have explained earlier. This is to avoid the test duplicating the same logic that FileStreamSink uses, so we don't break the test if we change that logic later. It's a refactor, and doesn't mean there's a bug in FileStreamSink.
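The point of the refactor, sketched under the same assumptions (the getMetadataLogPath signature is from the diff above; the surrounding test names are illustrative):

```scala
// Test side: reuse the production helper rather than re-deriving the
// metadata path by hand, so a future change to FileStreamSink's layout
// cannot silently desynchronize the test from the implementation.
val metadataLogPath = FileStreamSink.getMetadataLogPath(
  outputDir, spark.sessionState.newHadoopConf(), spark.sessionState.conf)
val sinkLog = new FileStreamSinkLog(
  FileStreamSinkLog.VERSION, spark, metadataLogPath.toString)
```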

@cloud-fan
Contributor

retest this please

@SparkQA

SparkQA commented Jun 29, 2020

Test build #124629 has finished for PR 28930 at commit 8f949a1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan
Contributor

thanks, merging to master/3.0!

@cloud-fan closed this in 5472170 Jun 30, 2020
cloud-fan pushed a commit that referenced this pull request Jun 30, 2020
… directory

### What changes were proposed in this pull request?

This patch fixes a missed spot: the test initializes FileStreamSinkLog with the sink's "output" directory instead of its "metadata" directory, so the verification against the sink log was a no-op.

### Why are the changes needed?

Without the fix, the verification against the sink log was a no-op.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Checked with a debugger in the test, and verified that `allFiles()` returns non-zero entries. (Before the fix it returned zero entries, as there is no metadata in the "output" directory.)

Closes #28930 from HeartSaVioR/SPARK-29999-FOLLOWUP-fix-test.

Authored-by: Jungtaek Lim (HeartSaVioR) <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
(cherry picked from commit 5472170)
Signed-off-by: Wenchen Fan <[email protected]>
@HeartSaVioR
Contributor Author

Thanks all for reviewing and merging!

@HeartSaVioR deleted the SPARK-29999-FOLLOWUP-fix-test branch June 30, 2020 08:16