
Commit 9544d60

edumazet authored and kuba-moo committed
inet: change lport contribution to inet_ehashfn() and inet6_ehashfn()
In order to speed up __inet_hash_connect(), we want to ensure hash values
for <source address, port X, destination address, destination port> are
not randomly spread, but monotonically increasing.

Goal is to allow __inet_hash_connect() to derive the hash value of a
candidate 4-tuple with a single addition in the following patch in the
series.

Given:

hash_0 = inet_ehashfn(saddr, 0, daddr, dport)
hash_sport = inet_ehashfn(saddr, sport, daddr, dport)

Then (hash_sport == hash_0 + sport) for all sport values.

As far as I know, there is no security implication with this change.

After this patch, when __inet_hash_connect() has to try XXXX candidates,
the hash table buckets are contiguous and packed, allowing a better use
of cpu caches and hardware prefetchers.

Tested:

Server: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog
Client: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog -c -H server

Before this patch:

utime_start=0.271607
utime_end=3.847111
stime_start=18.407684
stime_end=1997.485557
num_transactions=1350742
latency_min=0.014131929
latency_max=17.895073144
latency_mean=0.505675853
latency_stddev=2.125164772
num_samples=307884
throughput=139866.80

perf top on client:

56.86% [kernel] [k] __inet6_check_established
17.96% [kernel] [k] __inet_hash_connect
13.88% [kernel] [k] inet6_ehashfn
 2.52% [kernel] [k] rcu_all_qs
 2.01% [kernel] [k] __cond_resched
 0.41% [kernel] [k] _raw_spin_lock

After this patch:

utime_start=0.286131
utime_end=4.378886
stime_start=11.952556
stime_end=1991.655533
num_transactions=1446830
latency_min=0.001061085
latency_max=12.075275028
latency_mean=0.376375302
latency_stddev=1.361969596
num_samples=306383
throughput=151866.56

perf top:

50.01% [kernel] [k] __inet6_check_established
20.65% [kernel] [k] __inet_hash_connect
15.81% [kernel] [k] inet6_ehashfn
 2.92% [kernel] [k] rcu_all_qs
 2.34% [kernel] [k] __cond_resched
 0.50% [kernel] [k] _raw_spin_lock
 0.34% [kernel] [k] sched_balance_trigger
 0.24% [kernel] [k] queued_spin_lock_slowpath

There is indeed an increase of throughput and reduction of latency.

Signed-off-by: Eric Dumazet <[email protected]>
Reviewed-by: Kuniyuki Iwashima <[email protected]>
Tested-by: Jason Xing <[email protected]>
Reviewed-by: Jason Xing <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
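The identity above is easy to check outside the kernel. Below is a minimal
userspace C sketch, not kernel code: mix() is an arbitrary stand-in for the
jhash-based __inet_ehashfn(), and the addresses, ports, and secret are made
up. The point is that any fixed mixing of (laddr, faddr, fport, secret)
preserves hash_sport == hash_0 + sport once lport is added outside the mix.

/* Userspace demo of the additive lport contribution; mix() stands in
 * for the kernel's jhash-based __inet_ehashfn(). */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t mix(uint32_t laddr, uint32_t faddr,
		    uint16_t fport, uint32_t secret)
{
	/* Placeholder mixing; the exact function does not matter. */
	uint32_t h = laddr ^ secret;

	h = (h << 13) | (h >> 19);
	h ^= faddr + ((uint32_t)fport << 16);
	return h * 0x9e3779b1u;
}

/* After the patch: lport contributes by plain addition. */
static uint32_t ehashfn(uint32_t laddr, uint16_t lport,
			uint32_t faddr, uint16_t fport, uint32_t secret)
{
	return lport + mix(laddr, faddr, fport, secret);
}

int main(void)
{
	const uint32_t secret = 0xdeadbeef;
	uint32_t hash_0 = ehashfn(0x7f000001, 0, 0x0a000001, 443, secret);

	for (unsigned int sport = 32768; sport < 32773; sport++) {
		uint32_t h = ehashfn(0x7f000001, (uint16_t)sport,
				     0x0a000001, 443, secret);

		/* hash_sport == hash_0 + sport for every sport */
		printf("sport=%u h=%" PRIu32 " hash_0+sport=%" PRIu32 "\n",
		       sport, h, hash_0 + sport);
	}
	return 0;
}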
Parent: f8ece40 · commit 9544d60

2 files changed, +4 −4 lines

net/ipv4/inet_hashtables.c

Lines changed: 2 additions & 2 deletions
@@ -35,8 +35,8 @@ u32 inet_ehashfn(const struct net *net, const __be32 laddr,
 {
 	net_get_random_once(&inet_ehash_secret, sizeof(inet_ehash_secret));
 
-	return __inet_ehashfn(laddr, lport, faddr, fport,
-			      inet_ehash_secret + net_hash_mix(net));
+	return lport + __inet_ehashfn(laddr, 0, faddr, fport,
+				      inet_ehash_secret + net_hash_mix(net));
 }
 EXPORT_SYMBOL_GPL(inet_ehashfn);

net/ipv6/inet6_hashtables.c

Lines changed: 2 additions & 2 deletions
@@ -35,8 +35,8 @@ u32 inet6_ehashfn(const struct net *net,
 	lhash = (__force u32)laddr->s6_addr32[3];
 	fhash = __ipv6_addr_jhash(faddr, tcp_ipv6_hash_secret);
 
-	return __inet6_ehashfn(lhash, lport, fhash, fport,
-			       inet6_ehash_secret + net_hash_mix(net));
+	return lport + __inet6_ehashfn(lhash, 0, fhash, fport,
+				       inet6_ehash_secret + net_hash_mix(net));
 }
 EXPORT_SYMBOL_GPL(inet6_ehashfn);
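To see why the commit message speaks of contiguous, packed buckets: with the
additive lport contribution, consecutive candidate source ports land in
consecutive ehash buckets, so a connect()-time port search walks adjacent
cache lines rather than jumping around the table. A small userspace sketch,
where hash_0 and the 65536-bucket table size are fictitious stand-ins for
inet_ehashfn(saddr, 0, daddr, dport) and the kernel's real ehash sizing:

/* Userspace sketch, not kernel code: consecutive ports probe
 * consecutive buckets once lport is a plain additive term. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define EHASH_MASK 0xffffu	/* pretend table of 65536 buckets */

int main(void)
{
	/* Fictitious value of inet_ehashfn(saddr, 0, daddr, dport). */
	uint32_t hash_0 = 0x5a3c9e17u;

	for (unsigned int port = 32768; port < 32776; port++) {
		uint32_t hash = hash_0 + port;	/* one addition per candidate */

		printf("port=%u -> bucket=%" PRIu32 "\n",
		       port, hash & EHASH_MASK);
	}
	return 0;
}

Before the patch, each candidate port required a full hash computation and
the resulting buckets were scattered; after it, the buckets for a port range
form one contiguous run, which is what the follow-up patch in the series can
exploit with a single addition per candidate.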
