author		Alexei Starovoitov <[email protected]>	2023-02-15 23:40:06 +0000
committer	Alexei Starovoitov <[email protected]>	2023-02-15 23:40:06 +0000
commit		3538a0fbbd81bc131afe48b4cf02895735944359 (patch)
tree		c36bec94b00c337da6266687758715ec9eaa0f92 /tools/bpf/bpftool/prog.c
parent		Merge branch 'Improvements for BPF_ST tracking by verifier' (diff)
parent		selftests/bpf: Add test case for element reuse in htab map (diff)
Merge branch 'Use __GFP_ZERO in bpf memory allocator'
Hou Tao says:

====================

From: Hou Tao <[email protected]>

Hi,

This patchset fixes the hard lockup found when checking how htab handles
element reuse in the bpf memory allocator. The immediate reuse of freed
elements will reinitialize special fields (e.g., bpf_spin_lock) in the
htab map value, which may corrupt a lookup done with the BPF_F_LOCK flag
(it acquires the bpf spin lock during value copying) and lead to a hard
lockup, as shown in patch #2. Patch #1 fixes it by using __GFP_ZERO when
allocating the object from slab, making the behavior similar to the
preallocated hash-table case.

Please see the individual patches for more details. Comments are always
welcome.

Regards

Change Log:
v1:
 * Use __GFP_ZERO instead of a ctor to avoid retpoline overhead (from Alexei)
 * Add comments for check_and_init_map_value() (from Alexei)
 * Split the __GFP_ZERO patches out of the original patchset to unblock
   the development work of others.

RFC: https://lore.kernel.org/bpf/[email protected]
====================

Signed-off-by: Alexei Starovoitov <[email protected]>
Diffstat (limited to 'tools/bpf/bpftool/prog.c')
0 files changed, 0 insertions, 0 deletions