| author | Kuniyuki Iwashima <[email protected]> | 2022-06-21 17:19:10 +0000 |
|---|---|---|
| committer | David S. Miller <[email protected]> | 2022-06-22 11:59:43 +0000 |
| commit | b6e811383062f88212082714db849127fa95142c (patch) | |
| tree | 5999695ff674144ba8e4ec4199c30015203fcecf /net/unix/diag.c | |
| parent | af_unix: Include the whole hash table size in UNIX_HASH_SIZE. (diff) | |
| download | kernel-b6e811383062f88212082714db849127fa95142c.tar.gz kernel-b6e811383062f88212082714db849127fa95142c.zip | |
af_unix: Define a per-netns hash table.
This commit adds a per-netns hash table for AF_UNIX, whose size is fixed
at UNIX_HASH_SIZE for now.
The first implementation defines the per-netns hash table as a single array
of buckets, each holding a lock and a list:

```c
struct unix_hashbucket {
	spinlock_t		lock;
	struct hlist_head	head;
};

struct netns_unix {
	struct unix_hashbucket	*hash;
	...
};
```
But Eric pointed out the memory cost: the struct has a padding hole because
sizeof(spinlock_t) is 4 bytes (or more if LOCKDEP is enabled), so on 64-bit
each bucket occupies 16 bytes, 4 of which are wasted. [0] That overhead could
be expensive on a host with thousands of netns and few AF_UNIX sockets. For
this reason, the per-netns hash table instead uses two dense arrays:
```c
struct unix_table {
	spinlock_t		*locks;
	struct hlist_head	*buckets;
};

struct netns_unix {
	struct unix_table	table;
	...
};
```
Note that the length of each list has a far greater performance impact than
lock contention, so sharing locks across buckets could be an option. However,
per-netns locks and lists still perform better than global locks combined
with per-netns lists. [1]
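For example, under per-netns locks and lists a lookup only touches its own
netns' state; the sketch below is illustrative (the helper name and the
simplified address comparison are assumptions, not the patch's code):

```c
/* Sketch: walk one bucket of the calling netns' table under that
 * bucket's per-netns lock; no global lock or list is touched.
 */
static struct sock *unix_lookup_by_name(struct net *net, unsigned int hash,
					struct sockaddr_un *sunname, int len)
{
	struct sock *sk;

	spin_lock(&net->unx.table.locks[hash]);
	sk_for_each(sk, &net->unx.table.buckets[hash]) {
		struct unix_sock *u = unix_sk(sk);

		if (u->addr && u->addr->len == len &&
		    !memcmp(u->addr->name, sunname, len)) {
			sock_hold(sk);
			spin_unlock(&net->unx.table.locks[hash]);
			return sk;
		}
	}
	spin_unlock(&net->unx.table.locks[hash]);
	return NULL;
}
```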
Also, this patch drops struct netns_unix from struct net entirely when
CONFIG_UNIX is disabled.
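The struct net side of that change amounts to a conditional member, along
these lines (a sketch of the pattern rather than the verbatim hunk):

```c
/* include/net/net_namespace.h: per-netns AF_UNIX state only exists
 * when AF_UNIX support is compiled in.
 */
struct net {
	...
#if IS_ENABLED(CONFIG_UNIX)
	struct netns_unix	unx;
#endif
	...
};
```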
[0]: https://lore.kernel.org/netdev/CANn89iLVxO5aqx16azNU7p7Z-nz5NrnM5QTqOzueVxEnkVTxyg@mail.gmail.com/
[1]: https://lore.kernel.org/netdev/[email protected]/
Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Diffstat (limited to 'net/unix/diag.c')
0 files changed, 0 insertions, 0 deletions
