Commit d5ea5e5

jankara authored and akpm00 committed
readahead: properly shorten readahead when falling back to do_page_cache_ra()
When we succeed in creating some folios in page_cache_ra_order() but then need to fallback to single page folios, we don't shorten the amount to read passed to do_page_cache_ra() by the amount we've already read. This then results in reading more and also in placing another readahead mark in the middle of the readahead window which confuses readahead code. Fix the problem by properly reducing number of pages to read. Unlike previous attempt at this fix (commit 7c87758) which had to be reverted, we are now careful to check there is indeed something to read so that we don't submit negative-sized readahead. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Jan Kara <[email protected]> Cc: Matthew Wilcox <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
1 parent 7a1eb89 commit d5ea5e5

File tree: 1 file changed (+10, -3 lines)


mm/readahead.c

Lines changed: 10 additions & 3 deletions
@@ -448,7 +448,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t index = readahead_index(ractl);
+	pgoff_t start = readahead_index(ractl);
+	pgoff_t index = start;
 	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
@@ -506,12 +507,18 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	/*
 	 * If there were already pages in the page cache, then we may have
 	 * left some gaps. Let the regular readahead code take care of this
-	 * situation.
+	 * situation below.
 	 */
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size, ra->async_size);
+	/*
+	 * ->readahead() may have updated readahead window size so we have to
+	 * check there's still something to read.
+	 */
+	if (ra->size > index - start)
+		do_page_cache_ra(ractl, ra->size - (index - start),
+				 ra->async_size);
 }
 
 static unsigned long ractl_max_pages(struct readahead_control *ractl,
