6 changes: 6 additions & 0 deletions src/hotspot/share/nmt/mallocTracker.cpp
@@ -207,6 +207,12 @@ void* MallocTracker::record_free_block(void* memblock) {

deaccount(header->free_info());

if (ZapCHeap) {
// To do this zapping, we need to know the block size.
// This is why we have to do it here, and not in os::free.
memset(memblock, freeBlockPad, header->size());
}

header->mark_block_as_dead();

return (void*)header;
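The zap-on-free idea in this hunk can be sketched outside HotSpot. `kFreePad` and `zap_block` below are hypothetical stand-ins (HotSpot uses `freeBlockPad` and does the zapping inside `MallocTracker::record_free_block`, where the NMT header makes the block size available):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Hypothetical pad byte standing in for HotSpot's freeBlockPad.
static const uint8_t kFreePad = 0xAB;

// Fill the payload with the pad pattern just before release, so a
// use-after-free reads a recognizable value instead of stale data.
// The block size must be known here, which is why HotSpot does this
// in the NMT layer rather than in os::free, where the size is gone.
static void zap_block(void* block, size_t size) {
  std::memset(block, kFreePad, size);
}
```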
3 changes: 3 additions & 0 deletions src/hotspot/share/runtime/globals.hpp
@@ -483,6 +483,9 @@ const int ObjectAlignmentInBytes = 8;
develop(bool, ZapFillerObjects, trueInDebug, \
"Zap filler objects") \
\
develop(bool, ZapCHeap, trueInDebug, \


I have this vague recollection that maybe we used to do something like this, and decided to stop
because it really badly hurt performance in some cases. I know debug builds aren't expected to
be performant, but there's slow and then there's really unpleasant to use. Maybe make this
default to false and require explicit opt-in?

Member Author

This is a legitimate concern. We have been optimizing/guarding zapping code over the years, because excessive zapping is sometimes not worth it. That said, the utility of diagnostic zapping lies in it being enabled by default. If we had this zapping in place, JDK-8364501 would have been trivial to find. So we already know it is useful.

To estimate rough costs of doing this extra work, I ran Linux x86_64 server fastdebug tier1 with and without the patch, and here are the results:

# Before
62589.94s user 5358.93s system 4015% cpu 28:16.24 total
62453.49s user 5388.42s system 3993% cpu 28:18.60 total
62363.92s user 5347.49s system 3976% cpu 28:22.75 total

# After
62803.82s user 5350.01s system 3983% cpu 28:31.05 total
63868.84s user 5415.74s system 3997% cpu 28:33.04 total
63864.74s user 5521.71s system 4051% cpu 28:37.57 total

So there is an impact, but I would be hard-pressed to call it really bad.

The upside for this PR is that we can now summarily turn off malloc/realloc/free zapping, if we want to.

Member

+1. Ideally, os::malloc and friends should not be terribly hot. That's why we have custom allocators for heavy fine-grained use cases like C2.

ZapCHeap may be a bit misleading as a name, since all it does is zap on free.

Zap on malloc would also be useful. If we are worried about speed, zapping the first 1-2 words would already give 95% of the effect, since that is in high likelihood the location of some important struct members. And there is some probability that the libc touches memory in the vicinity of the block start during allocation, so it is probably already paged in.
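The cheaper variant suggested here could look like the following sketch; `zap_block_prefix`, the word count, and the pad value are illustrative choices, not part of the PR:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Zap only the first few words of a block. Struct members that matter
// (vtable pointers, link fields) tend to sit at the start, and the
// allocator has likely touched that cache line already, so this is cheap.
static void zap_block_prefix(void* block, size_t size, uint8_t pad) {
  const size_t kZapWords = 2;  // illustrative choice from the comment
  std::memset(block, pad, std::min(size, kZapWords * sizeof(uintptr_t)));
}
```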

Member Author

Zap on malloc would also be useful.

I don't understand. ZapCHeap, as implemented in the current PR, zaps on malloc as well. Actually, it just wraps the already existing zapping code with flag guards. I used to call the flag ZapFreeCHeap, but then realized we already do the malloc/realloc side, so I just claimed those to be part of the same zapping feature.

Member

Okay, I see. So it's the other way around, we zap newly allocated memory, but not free memory. I thought you were doing that, according to your comment:

When NMT is enabled (which it is for debug builds), we can also do this for frees, as NMT tells us the size of the freed block.

Member Author

@shipilev shipilev Aug 15, 2025

The PR does all three; take a look. Existing code already zaps on the malloc/realloc paths, and those paths are now under the new ZapCHeap flag. New code zaps on the free path, also under ZapCHeap. So in the end, ZapCHeap covers malloc/realloc/free, and thus has a proper name.

Member

Ah okay. I should not review on Friday evenings. All good, then.


@shipilev Thanks for doing some performance testing. Yeah, that doesn't look too bad.

Change looks good.

"Zap allocated/freed C heap space") \
\
develop(bool, ZapTLAB, trueInDebug, \
"Zap allocated TLABs") \
develop(bool, TestingAsyncLoggingDeathTest, false, \
6 changes: 3 additions & 3 deletions src/hotspot/share/runtime/os.cpp
@@ -666,8 +666,8 @@ void* os::malloc(size_t size, MemTag mem_tag, const NativeCallStack& stack) {
if (CDSConfig::is_dumping_static_archive()) {
// Need to deterministically fill all the alignment gaps in C++ structures.
::memset(inner_ptr, 0, size);
} else {
DEBUG_ONLY(::memset(inner_ptr, uninitBlockPad, size);)
} else if (ZapCHeap) {
::memset(inner_ptr, uninitBlockPad, size);
}
DEBUG_ONLY(break_if_ptr_caught(inner_ptr);)
return inner_ptr;
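The malloc-side zapping this hunk puts under the flag can be sketched standalone; `malloc_with_zap` is a hypothetical stand-in for the `os::malloc` logic, with `uninitBlockPad` replaced by an explicit parameter:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Fill a fresh allocation with an "uninitialized" pattern so a read
// before initialization shows up as the pad byte rather than whatever
// the underlying allocator left behind.
static void* malloc_with_zap(size_t size, uint8_t uninit_pad) {
  void* p = std::malloc(size);
  if (p != nullptr) {
    std::memset(p, uninit_pad, size);
  }
  return p;
}
```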
@@ -740,7 +740,7 @@ void* os::realloc(void *memblock, size_t size, MemTag mem_tag, const NativeCallStack& stack) {

#ifdef ASSERT
assert(old_size == free_info.size, "Sanity");
if (old_size < size) {
if (ZapCHeap && old_size < size) {
// We also zap the newly extended region.
::memset((char*)new_inner_ptr + old_size, uninitBlockPad, size - old_size);
}
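The realloc hunk zaps only the newly extended region, preserving the copied old contents. A standalone sketch of that pattern (hypothetical helper, not HotSpot code, with the pad value as a parameter):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Grow a block and zap only the added tail: the first old_size bytes
// carry real data that realloc preserved, so only [old_size, new_size)
// gets the uninitialized-pad treatment.
static void* realloc_with_tail_zap(void* p, size_t old_size,
                                   size_t new_size, uint8_t pad) {
  void* np = std::realloc(p, new_size);
  if (np != nullptr && new_size > old_size) {
    std::memset(static_cast<char*>(np) + old_size, pad, new_size - old_size);
  }
  return np;
}
```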