Commit 6cfd53f

pmundt authored and torvalds committed

nommu: fix kobjsize() for SLOB and SLUB
kobjsize() has been abusing page->index as a method for sorting out
compound order, which blows up both for page cache pages and for SLOB's
reuse of the index in struct slob_page. Presently we are not able to
accurately size arbitrary pointers that don't come from kmalloc(), so
the best we can do is sort out the compound order from the head page if
it's a compound page, or default to 0-order if it's impossible to
ksize() the object.

Obviously this leaves quite a bit to be desired in terms of object
sizing accuracy, but the behaviour is unchanged over the existing
implementation, while fixing the page->index oopses originally reported
here:

	http://marc.info/?l=linux-mm&m=121127773325245&w=2

Accuracy could also be improved by having SLUB and SLOB both set PG_slab
on ksizeable pages, rather than just handling the __GFP_COMP cases
regardless of the PG_slab setting, as made possible with Pekka's
patches:

	http://marc.info/?l=linux-kernel&m=121139439900534&w=2
	http://marc.info/?l=linux-kernel&m=121139440000537&w=2
	http://marc.info/?l=linux-kernel&m=121139440000540&w=2

This is primarily a bugfix for nommu systems for 2.6.26, with the aim
being to gradually kill off kobjsize() and its particular brand of
object abuse entirely.

Reviewed-by: Pekka Enberg <[email protected]>
Signed-off-by: Paul Mundt <[email protected]>
Acked-by: David Howells <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 6f09bdf commit 6cfd53f

File tree

1 file changed: +26 -4 lines

mm/nommu.c

Lines changed: 26 additions & 4 deletions
@@ -104,21 +104,43 @@ EXPORT_SYMBOL(vmtruncate);
 unsigned int kobjsize(const void *objp)
 {
 	struct page *page;
+	int order = 0;
 
 	/*
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
+	if (!objp)
 		return 0;
 
+	if ((unsigned long)objp >= memory_end)
+		return 0;
+
+	page = virt_to_head_page(objp);
+	if (!page)
+		return 0;
+
+	/*
+	 * If the allocator sets PageSlab, we know the pointer came from
+	 * kmalloc().
+	 */
 	if (PageSlab(page))
 		return ksize(objp);
 
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
+	/*
+	 * The ksize() function is only guaranteed to work for pointers
+	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
+	 * always to be compound pages, here.
+	 */
+	if (PageCompound(page))
+		order = compound_order(page);
 
-	return (PAGE_SIZE << page->index);
+	/*
+	 * Finally, handle arbitrary pointers that don't set PageSlab.
+	 * Default to 0-order in the case when we're unable to ksize()
+	 * the object.
+	 */
+	return PAGE_SIZE << order;
 }
 
 /*