
Commit 441ed12

isilencedinguyen702 authored and committed
io_uring: fix UAF due to missing POLLFREE handling
[ upstream commit 791f346 ]

Fixes a problem described in 50252e4 ("aio: fix use-after-free due to missing POLLFREE handling") and copies the approach used there. In short, we have to forcibly eject a poll entry when we meet POLLFREE. We can't rely on io_poll_get_ownership() as we can't wait for potentially running tw handlers, so we use the fact that wqs are RCU freed. See Eric's patch and comments for more details.

Reported-by: Eric Biggers <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reported-and-tested-by: [email protected]
Fixes: 221c5eb ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/4ed56b6f548f7ea337603a82315750449412748a.1642161259.git.asml.silence@gmail.com
[axboe: drop non-functional change from patch]
Signed-off-by: Jens Axboe <[email protected]>
[pavel: backport]
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
1 parent: f1684a8
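For context, the patch relies on the POLLFREE contract from Eric Biggers' earlier aio/epoll work, which the commit message references: a waitqueue whose memory can die before its pollers must be torn down with wake_up_pollfree(), and its owner must RCU-delay the actual free. Below is a minimal, hypothetical owner-side sketch of that contract; struct short_lived_object and both helpers are invented names for illustration only, not anything in this patch.

/*
 * Hypothetical illustration only, not part of this patch: how the owner of
 * a short-lived waitqueue is expected to tear it down.
 */
#include <linux/wait.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct short_lived_object {
	wait_queue_head_t wqh;		/* may be polled by epoll/aio/io_uring */
	struct rcu_head rcu;
};

static struct short_lived_object *short_lived_object_create(void)
{
	struct short_lived_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		init_waitqueue_head(&obj->wqh);
	return obj;
}

static void short_lived_object_destroy(struct short_lived_object *obj)
{
	/* Make every poller forcibly eject itself and latch POLLFREE. */
	wake_up_pollfree(&obj->wqh);

	/*
	 * The free must be RCU-delayed: a poller racing with the wakeup may
	 * still take obj->wqh.lock under rcu_read_lock(), which is exactly
	 * what the new io_poll_remove_entry() in the diff below depends on.
	 */
	kfree_rcu(obj, rcu);
}

In-tree owners that signal POLLFREE this way (signalfd, binder) follow the same pattern; the diff below adds the poller side of the contract to io_uring.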

File tree: 1 file changed (+50, -8 lines)

fs/io_uring.c

Lines changed: 50 additions & 8 deletions
@@ -5369,23 +5369,41 @@ static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
 
 static inline void io_poll_remove_entry(struct io_poll_iocb *poll)
 {
-	struct wait_queue_head *head = poll->head;
+	struct wait_queue_head *head = smp_load_acquire(&poll->head);
 
-	spin_lock_irq(&head->lock);
-	list_del_init(&poll->wait.entry);
-	poll->head = NULL;
-	spin_unlock_irq(&head->lock);
+	if (head) {
+		spin_lock_irq(&head->lock);
+		list_del_init(&poll->wait.entry);
+		poll->head = NULL;
+		spin_unlock_irq(&head->lock);
+	}
 }
 
 static void io_poll_remove_entries(struct io_kiocb *req)
 {
 	struct io_poll_iocb *poll = io_poll_get_single(req);
 	struct io_poll_iocb *poll_double = io_poll_get_double(req);
 
-	if (poll->head)
-		io_poll_remove_entry(poll);
-	if (poll_double && poll_double->head)
+	/*
+	 * While we hold the waitqueue lock and the waitqueue is nonempty,
+	 * wake_up_pollfree() will wait for us.  However, taking the waitqueue
+	 * lock in the first place can race with the waitqueue being freed.
+	 *
+	 * We solve this as eventpoll does: by taking advantage of the fact that
+	 * all users of wake_up_pollfree() will RCU-delay the actual free.  If
+	 * we enter rcu_read_lock() and see that the pointer to the queue is
+	 * non-NULL, we can then lock it without the memory being freed out from
+	 * under us.
+	 *
+	 * Keep holding rcu_read_lock() as long as we hold the queue lock, in
+	 * case the caller deletes the entry from the queue, leaving it empty.
+	 * In that case, only RCU prevents the queue memory from being freed.
+	 */
+	rcu_read_lock();
+	io_poll_remove_entry(poll);
+	if (poll_double)
 		io_poll_remove_entry(poll_double);
+	rcu_read_unlock();
 }
 
 /*
@@ -5523,6 +5541,30 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 					 wait);
 	__poll_t mask = key_to_poll(key);
 
+	if (unlikely(mask & POLLFREE)) {
+		io_poll_mark_cancelled(req);
+		/* we have to kick tw in case it's not already */
+		io_poll_execute(req, 0);
+
+		/*
+		 * If the waitqueue is being freed early but someone is already
+		 * holds ownership over it, we have to tear down the request as
+		 * best we can. That means immediately removing the request from
+		 * its waitqueue and preventing all further accesses to the
+		 * waitqueue via the request.
+		 */
+		list_del_init(&poll->wait.entry);
+
+		/*
+		 * Careful: this *must* be the last step, since as soon
+		 * as req->head is NULL'ed out, the request can be
+		 * completed and freed, since aio_poll_complete_work()
+		 * will no longer need to take the waitqueue lock.
+		 */
+		smp_store_release(&poll->head, NULL);
+		return 1;
+	}
+
 	/* for instances that support it check for an event match first */
 	if (mask && !(mask & poll->events))
 		return 0;
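The two hunks pair a release store with an acquire load: once io_poll_wake() publishes poll->head = NULL with smp_store_release(), any later smp_load_acquire() in io_poll_remove_entry() that observes NULL is also guaranteed to observe the preceding list_del_init(), so the request never touches the dying waitqueue or its lock again. A stripped-down sketch of that pairing, with hypothetical names outside the io_uring context (not the patch itself):

/* Hypothetical sketch of the release/acquire pairing used above. */
#include <linux/wait.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct poll_entry {
	struct wait_queue_head *head;	/* NULL once forcibly ejected */
	struct wait_queue_entry wait;
};

/* Wake-callback path: runs under the waitqueue lock when POLLFREE fires. */
static void eject_on_pollfree(struct poll_entry *pe)
{
	list_del_init(&pe->wait.entry);
	/*
	 * Release: anyone who later observes head == NULL with an acquire
	 * load also observes the list_del_init() above, so the entry is
	 * already off the dying queue and its lock is never needed.
	 */
	smp_store_release(&pe->head, NULL);
}

/* Task-context removal path: caller holds rcu_read_lock(). */
static void remove_entry(struct poll_entry *pe)
{
	struct wait_queue_head *head = smp_load_acquire(&pe->head);

	if (!head)
		return;	/* already ejected by POLLFREE */

	spin_lock_irq(&head->lock);
	list_del_init(&pe->wait.entry);
	pe->head = NULL;
	spin_unlock_irq(&head->lock);
}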
