author: Aruna Ramakrishna <[email protected]> 2025-07-09 17:33:28 +0000
committer: Peter Zijlstra <[email protected]> 2025-07-14 08:59:31 +0000
commit: 36569780b0d64de283f9d6c2195fd1a43e221ee8 (patch)
tree: 94292b3c406e76d645dbc9e2bd0118240ed9c00d /tools/lib/bpf/bpf_prog_linfo.c
parent: Linux 6.16-rc6 (diff)
sched: Change nr_uninterruptible type to unsigned long
The commit e6fe3f422be1 ("sched: Make multiple runqueue task counters 32-bit") changed nr_uninterruptible to an unsigned int. But the nr_uninterruptible value for each CPU runqueue can grow to a large number, sometimes exceeding INT_MAX. This is valid if, over time, a large number of tasks are migrated off of one CPU after going into an uninterruptible state. Only the sum of all nr_uninterruptible values across all CPUs yields the correct result, as explained in a comment in kernel/sched/loadavg.c.

Change the type of nr_uninterruptible back to unsigned long to prevent overflows, and thus the miscalculation of the load average.

Fixes: e6fe3f422be1 ("sched: Make multiple runqueue task counters 32-bit")
Signed-off-by: Aruna Ramakrishna <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Diffstat (limited to 'tools/lib/bpf/bpf_prog_linfo.c')
0 files changed, 0 insertions, 0 deletions