
Commit 43e6a04

bbrezillon authored and mehmetb0 committed
drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()
If any of the page or sgt allocations failed, we shouldn't release the
pages ref we got earlier, otherwise we will end up with unbalanced
get/put_pages() calls. We should instead leave everything in place and
let the BO release function deal with extra cleanup when the object is
destroyed, or let the fault handler try again next time it's called.

Fixes: 187d292 ("drm/panfrost: Add support for GPU heap allocations")
Cc: <[email protected]>
Reviewed-by: Steven Price <[email protected]>
Reviewed-by: AngeloGioacchino Del Regno <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Co-developed-by: Dmitry Osipenko <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]

CVE-2024-35951

(backported from commit 1fc9af8)
[hui: This fix commit can't be applied cleanly to J and F because a
prerequisite commit, 21aa27d ("drm/shmem-helper: Switch to reservation
lock"), is missing there. That prerequisite would introduce a
significant change, so it can't be brought into J and F. I therefore
edited the fix commit accordingly, changing "goto err_unlock" to
"goto err_bo".]
Signed-off-by: Hui Wang <[email protected]>
Acked-by: Mehmet Basaran <[email protected]>
Acked-by: Chris Chiu <[email protected]>
Signed-off-by: Stefan Bader <[email protected]>
1 parent 48171e0 commit 43e6a04

File tree

1 file changed, 9 insertions(+), 4 deletions(-)

drivers/gpu/drm/panfrost/panfrost_mmu.c

Lines changed: 9 additions & 4 deletions
@@ -465,12 +465,19 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	mapping_set_unevictable(mapping);
 
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
+		/* Can happen if the last fault only partially filled this
+		 * section of the pages array before failing. In that case
+		 * we skip already filled pages.
+		 */
+		if (pages[i])
+			continue;
+
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
 			mutex_unlock(&bo->base.pages_lock);
 			ret = PTR_ERR(pages[i]);
 			pages[i] = NULL;
-			goto err_pages;
+			goto err_bo;
 		}
 	}
 
@@ -480,7 +487,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
 	if (ret)
-		goto err_pages;
+		goto err_bo;
 
 	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
 	if (ret)
@@ -500,8 +507,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 
 err_map:
 	sg_free_table(sgt);
-err_pages:
-	drm_gem_shmem_put_pages(&bo->base);
-err_bo:
+err_bo:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;
