author     Vladimir Davydov <[email protected]>   2013-09-15 13:49:13 +0000
committer  Ingo Molnar <[email protected]>          2013-09-20 09:59:36 +0000
commit     b18855500fc40da050512d9df82d2f1471e59642 (patch)
tree       bceb77d57c1a89fe67c98467b9565e9f27d150da /lib/lockref.c
parent     Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/k... (diff)
sched/balancing: Fix 'local->avg_load > sds->avg_load' case in calculate_imbalance()
In busiest->group_imb case we can come to calculate_imbalance() with
local->avg_load >= busiest->avg_load >= sds->avg_load. This can result
in imbalance overflow, because it is calculated as follows

	env->imbalance = min(
		max_pull * busiest->group_power,
		(sds->avg_load - local->avg_load) * local->group_power
	) / SCHED_POWER_SCALE;

As a result we can end up constantly bouncing tasks from one cpu to
another if there are pinned tasks.

Fix this by skipping the assignment and assuming imbalance=0 in case
local->avg_load > sds->avg_load.

[ The bug can be caught by running 2*N cpuhogs pinned to two logical
  cpus belonging to different cores on an HT-enabled machine with N
  logical cpus: just look at se.nr_migrations growth. ]

Signed-off-by: Vladimir Davydov <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/8f596cc6bc0e5e655119dc892c9bfcad26e971f4.1379252740.git.vdavydov@parallels.com
Signed-off-by: Ingo Molnar <[email protected]>
Diffstat (limited to 'lib/lockref.c')
0 files changed, 0 insertions, 0 deletions
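Note: the diffstat above is filtered to lib/lockref.c, so the actual
sched/fair.c change is not shown on this page. As a standalone sketch of
the failure mode the commit message describes (not kernel code and not
the upstream diff; the values, min_ul() helper, and single group_power
are made up for illustration), the following userspace C program shows
how the unsigned subtraction wraps when local->avg_load > sds->avg_load,
so min() picks the max_pull term and reports a spurious nonzero
imbalance, whereas the fix's guard assumes imbalance=0 in that case.

/* Standalone illustration only -- hypothetical values, not kernel code. */
#include <stdio.h>

#define SCHED_POWER_SCALE 1024UL	/* scaling unit used in the formula */

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Hypothetical group statistics with local->avg_load > sds->avg_load. */
	unsigned long local_avg_load   = 2048;	/* local->avg_load */
	unsigned long busiest_avg_load = 2048;	/* busiest->avg_load */
	unsigned long sds_avg_load     = 1536;	/* sds->avg_load */
	unsigned long group_power      = 1024;	/* same power for both groups */
	unsigned long max_pull         = busiest_avg_load - sds_avg_load;
	unsigned long imbalance;

	/*
	 * Before the fix: (sds->avg_load - local->avg_load) wraps around to
	 * a huge unsigned value, so min() selects the max_pull term and a
	 * nonzero imbalance is reported even though the local group is
	 * already above the domain average.
	 */
	imbalance = min_ul(max_pull * group_power,
			   (sds_avg_load - local_avg_load) * group_power)
		    / SCHED_POWER_SCALE;
	printf("without guard: imbalance = %lu\n", imbalance);

	/*
	 * After the fix: skip the calculation and assume imbalance = 0
	 * when local->avg_load > sds->avg_load.
	 */
	if (local_avg_load > sds_avg_load)
		imbalance = 0;
	printf("with guard:    imbalance = %lu\n", imbalance);

	return 0;
}

Compiled and run, the first line prints a nonzero imbalance (512 with
these made-up numbers) while the guarded version prints 0, which is the
behaviour the fix restores and what stops the task bouncing described
above.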