
Commit acdc9fc

Author: Martin Schwidefsky (authored and committed)
s390/bitops: implement cache friendly test_and_set_bit_lock
The generic implementation of test_and_set_bit_lock in include/asm-generic uses the standard test_and_set_bit operation. This is done with either a 'csg' or a 'laog' instruction. For both versions the cache line is fetched exclusively, even if the bit is already set. The result is an increase in cache traffic; for a contended lock this is a bad idea.

Acked-by: Hendrik Brueckner <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
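Editorial context, not part of the commit message: the asm-generic fallback being replaced simply maps the lock variants onto the ordinary atomic bit operations, roughly as paraphrased below; this is a sketch of include/asm-generic/bitops/lock.h from memory, not a verbatim quote.

/* Paraphrased sketch of the generic fallback this commit overrides. */
#define test_and_set_bit_lock(nr, addr)	test_and_set_bit(nr, addr)

#define clear_bit_unlock(nr, addr)		\
do {						\
	smp_mb__before_atomic();		\
	clear_bit(nr, addr);			\
} while (0)

Because the fallback always goes through test_and_set_bit(), every lock attempt performs the interlocked update and pulls the cache line in exclusively; the s390 override below avoids that by testing the bit with a plain read first.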
1 parent 5614dd9 commit acdc9fc

1 file changed (+22, -1)

arch/s390/include/asm/bitops.h

Lines changed: 22 additions & 1 deletion
@@ -276,6 +276,28 @@ static inline int test_bit(unsigned long nr, const volatile unsigned long *ptr)
 	return (*addr >> (nr & 7)) & 1;
 }
 
+static inline int test_and_set_bit_lock(unsigned long nr,
+					volatile unsigned long *ptr)
+{
+	if (test_bit(nr, ptr))
+		return 1;
+	return test_and_set_bit(nr, ptr);
+}
+
+static inline void clear_bit_unlock(unsigned long nr,
+				    volatile unsigned long *ptr)
+{
+	smp_mb__before_atomic();
+	clear_bit(nr, ptr);
+}
+
+static inline void __clear_bit_unlock(unsigned long nr,
+				      volatile unsigned long *ptr)
+{
+	smp_mb();
+	__clear_bit(nr, ptr);
+}
+
 /*
  * Functions which use MSB0 bit numbering.
  * On an s390x system the bits are numbered:
@@ -446,7 +468,6 @@ static inline int fls(int word)
 #include <asm-generic/bitops/ffz.h>
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/hweight.h>
-#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/le.h>
 #include <asm-generic/bitops/ext2-atomic-setbit.h>
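For context, here is a minimal sketch of the kind of caller this optimization targets: a bit-spinlock style acquire/release loop. It is illustrative only and not part of the commit; the names example_bit_lock/example_bit_unlock and the spin-wait structure are assumptions, while test_and_set_bit_lock(), clear_bit_unlock(), test_bit() and cpu_relax() are the existing kernel primitives.

/*
 * Illustrative sketch only -- not part of this commit.
 * A bit-spinlock style acquire/release built on the primitives above.
 */
#include <linux/bitops.h>
#include <asm/processor.h>	/* cpu_relax() */

static inline void example_bit_lock(unsigned long nr, volatile unsigned long *ptr)
{
	while (test_and_set_bit_lock(nr, ptr)) {
		/*
		 * Lock already held: spin with plain reads. When the
		 * outer test_and_set_bit_lock() is retried, the s390
		 * variant also starts with test_bit(), so an already-set
		 * bit costs only a shared read instead of an exclusive
		 * cache-line fetch on every attempt.
		 */
		do {
			cpu_relax();
		} while (test_bit(nr, ptr));
	}
}

static inline void example_bit_unlock(unsigned long nr, volatile unsigned long *ptr)
{
	/* Release semantics: barrier before the clear, via clear_bit_unlock(). */
	clear_bit_unlock(nr, ptr);
}

On a contended lock most iterations hit the already-set fast path, which is exactly the cache-traffic case the commit message describes; __clear_bit_unlock() is the non-atomic counterpart for callers that know no other bits in the word are updated concurrently.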
