Commit 4e89dce

Vijayanand Jitta authored and Will Deacon committed
iommu/iova: Retry from last rb tree node if iova search fails
Whenever a new iova alloc request comes in, the iova is always searched
from the cached node and the nodes previous to the cached node. So even
if free iova space is available in the nodes after the cached node, the
iova allocation can still fail. Consider the following sequence of iova
allocs and frees on 1GB of iova space:

1) alloc - 500MB
2) alloc - 12MB
3) alloc - 499MB
4) free  - 12MB (which was allocated in step 2)
5) alloc - 13MB

After the above sequence we have 12MB of free iova space, and the
cached node points to the iova pfn of the last 13MB alloc, which is the
lowest iova pfn of that iova space. Now if we get an alloc request for
2MB, we only search from the cached node downward for free iova, and as
there isn't any, the iova alloc fails even though 12MB of iova space is
free. To avoid such iova search failures, retry from the last rb tree
node when the iova search fails; this searches the entire tree and
finds an iova if one is available.

Signed-off-by: Vijayanand Jitta <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
1 parent f8394f2 commit 4e89dce

File tree

1 file changed: +17 -6 lines changed


drivers/iommu/iova.c

Lines changed: 17 additions & 6 deletions
@@ -184,8 +184,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	struct rb_node *curr, *prev;
 	struct iova *curr_iova;
 	unsigned long flags;
-	unsigned long new_pfn;
+	unsigned long new_pfn, retry_pfn;
 	unsigned long align_mask = ~0UL;
+	unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
 
 	if (size_aligned)
 		align_mask <<= fls_long(size - 1);
@@ -198,15 +199,25 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 
 	curr = __get_cached_rbnode(iovad, limit_pfn);
 	curr_iova = rb_entry(curr, struct iova, node);
+	retry_pfn = curr_iova->pfn_hi + 1;
+
+retry:
 	do {
-		limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
-		new_pfn = (limit_pfn - size) & align_mask;
+		high_pfn = min(high_pfn, curr_iova->pfn_lo);
+		new_pfn = (high_pfn - size) & align_mask;
 		prev = curr;
 		curr = rb_prev(curr);
 		curr_iova = rb_entry(curr, struct iova, node);
-	} while (curr && new_pfn <= curr_iova->pfn_hi);
-
-	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
+	} while (curr && new_pfn <= curr_iova->pfn_hi && new_pfn >= low_pfn);
+
+	if (high_pfn < size || new_pfn < low_pfn) {
+		if (low_pfn == iovad->start_pfn && retry_pfn < limit_pfn) {
+			high_pfn = limit_pfn;
+			low_pfn = retry_pfn;
+			curr = &iovad->anchor.node;
+			curr_iova = rb_entry(curr, struct iova, node);
+			goto retry;
+		}
 		iovad->max32_alloc_size = size;
 		goto iova32_full;
 	}
