Conversation

@10110346
Contributor

What changes were proposed in this pull request?

In fact, unroll memory is also storage memory, so I think it is unnecessary to actually release the unroll memory and then acquire storage memory again.
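The accounting idea behind this PR can be sketched as a minimal, runnable model (hypothetical names; Spark's real MemoryStore and MemoryManager are more involved): under a unified pool, "transferring" unroll memory to storage can be pure bookkeeping, with no release-then-reacquire round trip.

```scala
// Hypothetical model of the proposed accounting change; not Spark's actual API.
object UnrollAccountingSketch {
  var storagePoolUsed = 0L       // bytes held in the shared storage pool
  var unrollMemoryForBlock = 0L  // bytes tracked as unroll memory for a block

  // Unroll memory is carved out of the same storage pool.
  def acquireUnrollMemory(bytes: Long): Unit = {
    storagePoolUsed += bytes
    unrollMemoryForBlock += bytes
  }

  // The PR's idea: once the block is fully unrolled, drop only the unroll
  // bookkeeping. The bytes stay accounted in the storage pool, so there is
  // no need to really release them and then acquire the same amount again.
  def transferUnrollToStorage(bytes: Long): Unit = {
    unrollMemoryForBlock -= bytes
    // storagePoolUsed is intentionally unchanged
  }
}
```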

How was this patch tested?

Existing unit test

@SparkQA

SparkQA commented Feb 26, 2018

Test build #87669 has finished for PR 20676 at commit 0574be5.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 27, 2018

Test build #87685 has finished for PR 20676 at commit 0574be5.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 27, 2018

Test build #87704 has finished for PR 20676 at commit 6e549ee.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 27, 2018

Test build #87705 has finished for PR 20676 at commit 496f2fb.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@10110346
Contributor Author

retest this please.

@SparkQA

SparkQA commented Feb 27, 2018

Test build #87713 has finished for PR 20676 at commit 496f2fb.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

unrollMemoryUsedByThisBlock += amountToRequest
unrollMemoryUsedByThisBlock = size
}
} else if (size < unrollMemoryUsedByThisBlock) {
Member

Did #19285 miss this check before? Oops! This could waste storage memory. Thanks for adding it.
Also, cc @cloud-fan @ConeyLiu

Contributor

In #19285, we first release unrollMemoryUsedByThisBlock bytes of unroll memory, and then we request entry.size bytes of storage memory. So there is no waste of resources here.

Contributor Author

There was no problem before, because it releases all of the unroll memory at line 257.

Member

Ah, I see, glad to hear that.

}
// In fact, unroll memory is also storage memory, so it is unnecessary
// to actually release the unroll memory
releaseUnrollMemoryForThisTask(memoryMode, unrollMemoryUsedByThisBlock, false)
Member

Did you check the other callers of this function? Are you sure the third parameter defaults to true for every other call, and is false only here?

Contributor Author

@10110346 10110346 Feb 28, 2018

I am sure the third parameter defaults to true in all other calls.

Member

Well, it seems a little awkward. How about changing the behavior of transferUnrollToStorage() (which was removed in #19285, see L242), rather than releaseUnrollMemoryForThisTask?

releaseUnrollMemoryForThisTask(memoryMode, unrollMemoryUsedByThisBlock)
val success = memoryManager.acquireStorageMemory(blockId, entry.size, memoryMode)
assert(success, "transferring unroll memory to storage memory failed")
}
Member

There is a conceptual difference between unroll memory and storage memory, and it seems reasonable to transfer unroll memory to storage memory from that conceptual point of view, even though it makes little difference in the underlying storage memory. I'm not sure whether this change causes any side effects; if not, I'll approve it.

@ConeyLiu
Contributor

This is for compatibility reasons. The memory management code also supports the legacy memory manager (StaticMemoryManager). In StaticMemoryManager, storage memory and unroll memory are managed separately. So we can't say:

In fact, unroll memory is also storage memory

@10110346
Contributor Author

In StaticMemoryManager, storage memory and unroll memory are managed separately, but unroll memory is still also storage memory, so we do not need to actually release the unroll memory; we just need to release it from unrollMemoryMap. @ConeyLiu
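A sketch of what "just release the memory for unrollMemoryMap" could mean (a hypothetical simplification of MemoryStore's per-task unroll bookkeeping, not the real implementation): only the per-task map is updated, and no bytes are returned to the pool.

```scala
import scala.collection.mutable

// Hypothetical simplification of MemoryStore's per-task unroll bookkeeping.
object UnrollMapSketch {
  val unrollMemoryMap = mutable.HashMap[Long, Long]() // taskAttemptId -> bytes

  // Update only the bookkeeping map; deliberately do not touch the pool.
  def releaseBookkeepingOnly(taskAttemptId: Long, bytes: Long): Unit = {
    val current = unrollMemoryMap.getOrElse(taskAttemptId, 0L)
    val remaining = current - math.min(current, bytes)
    if (remaining == 0L) unrollMemoryMap.remove(taskAttemptId)
    else unrollMemoryMap(taskAttemptId) = remaining
  }
}
```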

@ConeyLiu
Contributor

Yeah, I see that. I'm not sure it's OK to change it, though. I think we should follow the interface design, not the underlying implementation.

@Ngone51
Member

Ngone51 commented Feb 28, 2018

Hi, @ConeyLiu. I don't think it's about compatibility, because both StaticMemoryManager and UnifiedMemoryManager call the same function to release unroll memory, releaseUnrollMemory(); see https://github.com/apache/spark/blob/b14993e1fcb68e1c946a671c6048605ab4afdf58/core/src/main/scala/org/apache/spark/memory/MemoryManager.scala#L165
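The shared structure being pointed at can be sketched like this (simplified, hypothetical signatures, not Spark's actual class): both managers inherit one releaseUnrollMemory from the base class, and it simply delegates to releaseStorageMemory.

```scala
// Simplified, hypothetical sketch of the MemoryManager hierarchy.
abstract class MemoryManagerSketch {
  protected var storageUsed = 0L
  def storageMemoryUsed: Long = storageUsed
  def acquireStorageMemory(bytes: Long): Boolean
  def releaseStorageMemory(bytes: Long): Unit = { storageUsed -= bytes }
  // Inherited unchanged by both the static and unified managers:
  final def releaseUnrollMemory(bytes: Long): Unit = releaseStorageMemory(bytes)
}

class StaticManagerSketch(max: Long) extends MemoryManagerSketch {
  def acquireStorageMemory(bytes: Long): Boolean =
    if (storageUsed + bytes <= max) { storageUsed += bytes; true } else false
}
```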

@cloud-fan
Contributor

cloud-fan commented Feb 28, 2018

I don't think unroll memory is storage memory from the interface's point of view; it's only true for the unified memory manager. I'm -1 on this change unless you can convince the community to remove the static memory manager entirely, or slightly update the memory manager interface to clarify the semantics of unroll memory.

@10110346
Contributor Author

10110346 commented Mar 5, 2018

Thanks all, I will close this PR.

@10110346 10110346 closed this Mar 5, 2018
