Conversation

@nitinrawat123
Contributor

nvme_pci_prp_iter_next() had a race condition in which dma_need_unmap() could return true, indicating that DMA unmapping is needed, while iod->dma_vecs was still NULL, leading to a NULL pointer dereference.

This occurred because:

  1. The dma_vecs array is allocated in nvme_pci_setup_data_prp()
  2. nvme_pci_prp_iter_next() checks dma_need_unmap() but does not verify that the dma_vecs allocation succeeded
  3. If the allocation failed or the race occurred, accessing dma_vecs[0] would crash the kernel

The crash manifested as:

  • dma_size:0 unmap:0 initially, then dma_size:0 unmap:1
  • nr_dma_vecs:0 dma_vecs:0x0 (NULL pointer)
  • Unable to handle kernel NULL pointer dereference at virtual address 0x0.

Fix this by adding an iod->dma_vecs NULL check to the condition in nvme_pci_prp_iter_next(), ensuring DMA vector operations occur only when the dma_vecs array has been successfully allocated.

Signed-off-by: Nitin Rawat <[email protected]>