Commit 924de3b

fork: Have new threads join on-going signal group stops
There are only two signals that are delivered to every member of a signal group: SIGSTOP and SIGKILL. Signal delivery requires that every signal appear to be delivered either before or after a clone syscall. SIGKILL terminates the clone, so it does not need to be considered. That leaves only SIGSTOP to consider when creating new threads.

Today, in the event of a group stop, TIF_SIGPENDING will get set and the fork will restart, ensuring the fork syscall participates in the group stop.

A fork (especially of a process with a lot of memory) is one of the most expensive system calls, so we really only want to restart a fork when necessary. It is easy to check whether a SIGSTOP is ongoing and have the new thread join it immediately after the clone completes, making it appear that the clone completed just before the SIGSTOP. The calculate_sigpending function will see the bits set in jobctl and set TIF_SIGPENDING to ensure the new task takes the slow path to userspace.

V2: The call to task_join_group_stop was moved before the new task is added to the thread group list. This should not matter, as sighand->siglock is held over the addition of the threads, the call to task_join_group_stop, and do_signal_stop. But the change is trivial, and it is one less thing to worry about when reading the code.

Signed-off-by: "Eric W. Biederman" <[email protected]>
Parent: 4390e9e · Commit: 924de3b

3 files changed: 31 additions (+), 12 deletions (−)

include/linux/sched/signal.h

Lines changed: 2 additions & 0 deletions
@@ -385,6 +385,8 @@ static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
 	signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
 }
 
+void task_join_group_stop(struct task_struct *task);
+
 #ifdef TIF_RESTORE_SIGMASK
 /*
  * Legacy restore_sigmask accessors. These are inefficient on

kernel/fork.c

Lines changed: 15 additions & 12 deletions
@@ -1934,18 +1934,20 @@ static __latent_entropy struct task_struct *copy_process(
 		goto bad_fork_cancel_cgroup;
 	}
 
-	/*
-	 * Process group and session signals need to be delivered to just the
-	 * parent before the fork or both the parent and the child after the
-	 * fork. Restart if a signal comes in before we add the new process to
-	 * it's process group.
-	 * A fatal signal pending means that current will exit, so the new
-	 * thread can't slip out of an OOM kill (or normal SIGKILL).
-	 */
-	recalc_sigpending();
-	if (signal_pending(current)) {
-		retval = -ERESTARTNOINTR;
-		goto bad_fork_cancel_cgroup;
+	if (!(clone_flags & CLONE_THREAD)) {
+		/*
+		 * Process group and session signals need to be delivered to just the
+		 * parent before the fork or both the parent and the child after the
+		 * fork. Restart if a signal comes in before we add the new process to
+		 * it's process group.
+		 * A fatal signal pending means that current will exit, so the new
+		 * thread can't slip out of an OOM kill (or normal SIGKILL).
+		 */
+		recalc_sigpending();
+		if (signal_pending(current)) {
+			retval = -ERESTARTNOINTR;
+			goto bad_fork_cancel_cgroup;
+		}
 	}
 
 
@@ -1982,6 +1984,7 @@ static __latent_entropy struct task_struct *copy_process(
 		current->signal->nr_threads++;
 		atomic_inc(&current->signal->live);
 		atomic_inc(&current->signal->sigcnt);
+		task_join_group_stop(p);
 		list_add_tail_rcu(&p->thread_group,
 				  &p->group_leader->thread_group);
 		list_add_tail_rcu(&p->thread_node,

kernel/signal.c

Lines changed: 14 additions & 0 deletions
@@ -373,6 +373,20 @@ static bool task_participate_group_stop(struct task_struct *task)
 	return false;
 }
 
+void task_join_group_stop(struct task_struct *task)
+{
+	/* Have the new thread join an on-going signal group stop */
+	unsigned long jobctl = current->jobctl;
+	if (jobctl & JOBCTL_STOP_PENDING) {
+		struct signal_struct *sig = current->signal;
+		unsigned long signr = jobctl & JOBCTL_STOP_SIGMASK;
+		unsigned long gstop = JOBCTL_STOP_PENDING | JOBCTL_STOP_CONSUME;
+		if (task_set_jobctl_pending(task, signr | gstop)) {
+			sig->group_stop_count++;
+		}
+	}
+}
+
 /*
  * allocate a new signal queue record
  * - this may be called without locks if and only if t == current, otherwise an
