
Commit fc22d94

Benjamin Tissoires authored and Alexei Starovoitov committed
bpf: replace bpf_timer_cancel_and_free with a generic helper
For the same reason as most bpf_timer* functions, we need almost the same for workqueues. So extract the generic part out of bpf_timer_cancel_and_free() so that bpf_wq_cancel_and_free() can reuse it.

Signed-off-by: Benjamin Tissoires <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
1 parent: 073f11b

File tree

1 file changed, 25 insertions(+), 17 deletions(-)

kernel/bpf/helpers.c

Lines changed: 25 additions & 17 deletions
@@ -1413,36 +1413,44 @@ static const struct bpf_func_proto bpf_timer_cancel_proto = {
 	.arg1_type	= ARG_PTR_TO_TIMER,
 };
 
-/* This function is called by map_delete/update_elem for individual element and
- * by ops->map_release_uref when the user space reference to a map reaches zero.
- */
-void bpf_timer_cancel_and_free(void *val)
+static struct bpf_async_cb *__bpf_async_cancel_and_free(struct bpf_async_kern *async)
 {
-	struct bpf_async_kern *timer = val;
-	struct bpf_hrtimer *t;
+	struct bpf_async_cb *cb;
 
-	/* Performance optimization: read timer->timer without lock first. */
-	if (!READ_ONCE(timer->timer))
-		return;
+	/* Performance optimization: read async->cb without lock first. */
+	if (!READ_ONCE(async->cb))
+		return NULL;
 
-	__bpf_spin_lock_irqsave(&timer->lock);
+	__bpf_spin_lock_irqsave(&async->lock);
 	/* re-read it under lock */
-	t = timer->timer;
-	if (!t)
+	cb = async->cb;
+	if (!cb)
 		goto out;
-	drop_prog_refcnt(&t->cb);
+	drop_prog_refcnt(cb);
 	/* The subsequent bpf_timer_start/cancel() helpers won't be able to use
 	 * this timer, since it won't be initialized.
 	 */
-	WRITE_ONCE(timer->timer, NULL);
+	WRITE_ONCE(async->cb, NULL);
 out:
-	__bpf_spin_unlock_irqrestore(&timer->lock);
+	__bpf_spin_unlock_irqrestore(&async->lock);
+	return cb;
+}
+
+/* This function is called by map_delete/update_elem for individual element and
+ * by ops->map_release_uref when the user space reference to a map reaches zero.
+ */
+void bpf_timer_cancel_and_free(void *val)
+{
+	struct bpf_hrtimer *t;
+
+	t = (struct bpf_hrtimer *)__bpf_async_cancel_and_free(val);
+
 	if (!t)
 		return;
 	/* Cancel the timer and wait for callback to complete if it was running.
 	 * If hrtimer_cancel() can be safely called it's safe to call kfree(t)
 	 * right after for both preallocated and non-preallocated maps.
-	 * The timer->timer = NULL was already done and no code path can
+	 * The async->cb = NULL was already done and no code path can
 	 * see address 't' anymore.
 	 *
 	 * Check that bpf_map_delete/update_elem() wasn't called from timer
@@ -1451,7 +1459,7 @@ void bpf_timer_cancel_and_free(void *val)
 	 * return -1). Though callback_fn is still running on this cpu it's
 	 * safe to do kfree(t) because bpf_timer_cb() read everything it needed
 	 * from 't'. The bpf subprog callback_fn won't be able to access 't',
-	 * since timer->timer = NULL was already done. The timer will be
+	 * since async->cb = NULL was already done. The timer will be
 	 * effectively cancelled because bpf_timer_cb() will return
 	 * HRTIMER_NORESTART.
 	 */
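
The commit message says the generic part is extracted so that bpf_wq_cancel_and_free() can reuse it. As a rough sketch of that intended reuse, not part of this commit, a workqueue counterpart could sit on top of the new helper as below; struct bpf_work, its delete_work member, and the deferred teardown are assumptions based on the series' stated direction, not code this patch introduces.

/* Hypothetical sketch only: reusing __bpf_async_cancel_and_free() for
 * workqueues. bpf_work and delete_work are assumed names; this commit
 * defines neither.
 */
void bpf_wq_cancel_and_free(void *val)
{
	struct bpf_work *work;

	/* Clear async->cb under the lock and retrieve the old callback
	 * state, exactly as bpf_timer_cancel_and_free() does above.
	 */
	work = (struct bpf_work *)__bpf_async_cancel_and_free(val);
	if (!work)
		return;

	/* A work item cannot always be cancelled synchronously from this
	 * context, so hand the final teardown to a dedicated work item
	 * (assumed field) instead of freeing inline as the hrtimer path does.
	 */
	schedule_work(&work->delete_work);
}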
