Commit 951eede

npiggin authored and mpe committed
powerpc/64: Handle linker stubs in low .text code
Very large kernels may require linker stubs for branches from HEAD text code. The linker may place these stubs before the HEAD text sections, which breaks the assumption that HEAD text is located at 0 (or the .text section being located at 0x7000/0x8000 on Book3S kernels).

Provide an option to create a small section just before the .text section with an empty 256 - 4 bytes, and adjust the start of the .text section to match. The linker will tend to put stubs in that section and not break our relative-to-absolute offset assumptions.

This causes a small waste of space on common kernels, but allows large kernels to build and boot. For now, it is an EXPERT config option, defaulting to =n, but a reference is provided for it in the build-time check for such breakage. This is good enough for allyesconfig and custom users / hackers.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
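The "relative-to-absolute offset assumption" the message refers to can be illustrated with a toy model (not kernel code; the addresses and the `abs_addr` helper below are made up for illustration): head code converts a link-time label into an absolute address by assuming `start_text` sits exactly at `text_start`, and stubs inserted before `start_text` silently break that identity unless `text_start` is bumped by the reserved 0x100 bytes.

```python
def abs_addr(label, start_text, text_start):
    # Toy version of the relative-to-absolute trick: treat the label's
    # offset from start_text as an offset from text_start. This is only
    # correct when start_text is actually linked at text_start.
    return (label - start_text) + text_start

TEXT_START = 0x8000  # illustrative Book3S-style .text base

# No stubs: start_text lands at TEXT_START and the identity holds.
start_text = TEXT_START
label = start_text + 0x40
assert abs_addr(label, start_text, TEXT_START) == label

# 0x100 bytes of linker stubs placed before start_text: start_text
# moves, text_start does not, and the computed address is off by 0x100.
start_text = TEXT_START + 0x100
label = start_text + 0x40
assert abs_addr(label, start_text, TEXT_START) == label - 0x100

# The workaround in this commit: reserve the 0x100 bytes deliberately
# (.linker_stub_catch) and set text_start = (start) + 0x100, restoring
# the identity.
text_start = TEXT_START + 0x100
assert abs_addr(label, start_text, text_start) == label
```
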
1 parent 4ea8065 commit 951eede

File tree

3 files changed: +34 additions, -0 deletions

arch/powerpc/Kconfig

Lines changed: 11 additions & 0 deletions
@@ -455,6 +455,17 @@ config PPC_TRANSACTIONAL_MEM
 	---help---
 	  Support user-mode Transactional Memory on POWERPC.
 
+config LD_HEAD_STUB_CATCH
+	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
+	depends on PPC64
+	default n
+	help
+	  Very large kernels can cause linker branch stubs to be generated by
+	  code in head_64.S, which moves the head text sections out of their
+	  specified location. This option can work around the problem.
+
+	  If unsure, say "N".
+
 config DISABLE_MPROFILE_KERNEL
 	bool "Disable use of mprofile-kernel for kernel tracing"
 	depends on PPC64 && CPU_LITTLE_ENDIAN
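Because the new option is prompted only `if EXPERT` and defaults to n, enabling the workaround in a kernel .config would look like the following (illustrative fragment, not part of this commit):

```
CONFIG_EXPERT=y
CONFIG_LD_HEAD_STUB_CATCH=y
```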

arch/powerpc/include/asm/head-64.h

Lines changed: 18 additions & 0 deletions
@@ -63,11 +63,29 @@
 	. = 0x0;						\
 start_##sname:
 
+/*
+ * .linker_stub_catch section is used to catch linker stubs from being
+ * inserted in our .text section, above the start_text label (which breaks
+ * the ABS_ADDR calculation). See kernel/vmlinux.lds.S and tools/head_check.sh
+ * for more details. We would prefer to just keep a cacheline (0x80), but
+ * 0x100 seems to be how the linker aligns branch stub groups.
+ */
+#ifdef CONFIG_LD_HEAD_STUB_CATCH
+#define OPEN_TEXT_SECTION(start)				\
+	.section ".linker_stub_catch","ax",@progbits;		\
+linker_stub_catch:						\
+	. = 0x4;						\
+	text_start = (start) + 0x100;				\
+	.section ".text","ax",@progbits;			\
+	.balign 0x100;						\
+start_text:
+#else
 #define OPEN_TEXT_SECTION(start)				\
 	text_start = (start);					\
 	.section ".text","ax",@progbits;			\
 	. = 0x0;						\
 start_text:
+#endif
 
 #define ZERO_FIXED_SECTION(sname, start, end)			\
 	sname##_start = (start);				\

arch/powerpc/kernel/vmlinux.lds.S

Lines changed: 5 additions & 0 deletions
@@ -103,6 +103,11 @@ SECTIONS
 	 * section placement to work.
 	 */
 	.text BLOCK(0) : AT(ADDR(.text) - LOAD_OFFSET) {
+#ifdef CONFIG_LD_HEAD_STUB_CATCH
+		*(.linker_stub_catch);
+		. = . ;
+#endif
+
 #else
 	.text : AT(ADDR(.text) - LOAD_OFFSET) {
 		ALIGN_FUNCTION();
