
Compare commits


25 Commits

Author SHA1 Message Date
Linus Torvalds
b57b17e88b parisc architecture fixes for kernel v6.7-rc1 (part 2):
- Include upper 5 address bits of physical address in iitlbp
 - Prevent booting 64-bit kernels on PA1.x machines
 - parport-gsc: mark init function static
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQS86RI+GtKfB8BJu973ErUQojoPXwUCZVB1AgAKCRD3ErUQojoP
 Xy+GAP0TlgE7ExHBB4jBpGf6uFuP0broznCeclPD4Bd0gngVhQEAz5v5m0FkJVVI
 5nOlKbBzLU4Mt9WYEbqNhmoNrklvYQo=
 =T1IA
 -----END PGP SIGNATURE-----

Merge tag 'parisc-for-6.7-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux

Pull parisc architecture fixes from Helge Deller:

 - Include the upper 5 address bits when inserting TLB entries on a
   64-bit kernel.

   On physical machines those are ignored, but in qemu it's nice to have
   them included and to be correct.

 - Stop the 64-bit kernel and show a warning if someone tries to boot on
   a machine with a 32-bit CPU

 - Fix a "no previous prototype" warning in parport-gsc

* tag 'parisc-for-6.7-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Prevent booting 64-bit kernels on PA1.x machines
  parport: gsc: mark init function static
  parisc/pgtable: Do not drop upper 5 address bits of physical address
2023-11-12 11:05:31 -08:00
Linus Torvalds
4eeee6636a LoongArch changes for v6.7
1, Support PREEMPT_DYNAMIC with static keys;
 2, Relax memory ordering for atomic operations;
 3, Support BPF CPU v4 instructions for LoongArch;
 4, Some build and runtime warning fixes.
 -----BEGIN PGP SIGNATURE-----
 
 iQJKBAABCAA0FiEEzOlt8mkP+tbeiYy5AoYrw/LiJnoFAmVQWXgWHGNoZW5odWFj
 YWlAa2VybmVsLm9yZwAKCRAChivD8uImepDTEACS808EsgSNIM1+JwldhdqKOErt
 XDWlLuIddVpenInx8F+9GnZJzKBU+wl+Ow5ejcVarjcecIJDv5UhoVrbhpeOHkfv
 RszRXQR4p/ZNSFvdraYDjjJ9UX6bp5rq7vMUC2d9bLazMauAfwf7T/HJ5qj9OYZi
 RLlcwaKo2UQHYsT7nJicjh0qpH1YpZQBYTaUUCwzilzB6vAIOTf6X12vFmhtM/i+
 5RIPnesMA1IQSm2ywUODpDHCs7Pirvy8aJvx0CsYdi3xl1yg3pUS6u69Ms61uWlw
 29yYhNbWmVnDikTVLTNISDb/jwto5SAVB2KQKBhF1trF4ZBNE6r7sP4m2tfllYo9
 KXK9tm0U8McS5o46Qd5er6eEnxL7mEeAsc12tNKUYOMe3SIkmHJmj/rZQOtpsiBg
 zqQsYkGUfO2VAwMWiGke8dxPZElOYwZ3UCOpbEpXEXy3NW71VJTIuQFGmsYKJhdy
 3xaAtQxdffE5yUTt2j3Y8Mex2b2oSUBSF263imsZjzWOOxd480iaoejtamf1V779
 bElevzZjMDmbiQ7kiVSf96TWc7iYcSv33jhP4DorKIqnPseYPfrXEeD1xY7JV+IU
 kkvSlO0hAJzVMmQgu5n0PPT1wrVpuvwtbsfcRobIkr1vktZyLaKHRq7rh4R5HTRL
 ZUUm6c0kUDywGT+J4A==
 =bmFe
 -----END PGP SIGNATURE-----

Merge tag 'loongarch-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson

Pull LoongArch updates from Huacai Chen:

 - support PREEMPT_DYNAMIC with static keys

 - relax memory ordering for atomic operations

 - support BPF CPU v4 instructions for LoongArch

 - some build and runtime warning fixes

* tag 'loongarch-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  selftests/bpf: Enable cpu v4 tests for LoongArch
  LoongArch: BPF: Support signed mod instructions
  LoongArch: BPF: Support signed div instructions
  LoongArch: BPF: Support 32-bit offset jmp instructions
  LoongArch: BPF: Support unconditional bswap instructions
  LoongArch: BPF: Support sign-extension mov instructions
  LoongArch: BPF: Support sign-extension load instructions
  LoongArch: Add more instruction opcodes and emit_* helpers
  LoongArch/smp: Call rcutree_report_cpu_starting() earlier
  LoongArch: Relax memory ordering for atomic operations
  LoongArch: Mark __percpu functions as always inline
  LoongArch: Disable module from accessing external data directly
  LoongArch: Support PREEMPT_DYNAMIC with static keys
2023-11-12 10:58:08 -08:00
Linus Torvalds
5dd2020f33 powerpc fixes for 6.7 #2
- Finish a refactor of pgprot_framebuffer() which depended on some changes
    that were merged via the drm tree.
 
  - Fix some kernel-doc warnings to quieten the bots.
 
 Thanks to: Nathan Lynch, Thomas Zimmermann.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCAAxFiEEJFGtCPCthwEv2Y/bUevqPMjhpYAFAmVQIV8THG1wZUBlbGxl
 cm1hbi5pZC5hdQAKCRBR6+o8yOGlgKTmEACKY2QHnc8ppY2V3W2D62q336OXU8Jj
 ljJdPj/4dMlbFxi7RcUHhENGx97KN7pJX/bIOYv+iK4C34B1sM/sMG6OxXzWrlJw
 ff2MnxE3ekljFerPdtx0fu3upCsr93hB3spm+/9pb/5V5SViK/gJt70dLUJuZ4ei
 Y4AW0mnS4dMNMPZDGwI9GHbjCdq1GAbG9JdfDWbltKu2G3zNuM4MTa0IVJY/kHgU
 8dbrPcs4LooC/RXJDTVdpBpShKg4i5sejcK30BP8qV0EXuez09lIRSk464n4aBEi
 LWnKavsLOAAGYhEFCuBsn/ZFbWUWCmV6ARcC7ydZ+ukhZi+0iioPMh1dGO0Bo+rP
 qesGLMddvsRZHInFN44NLDFVv03NA4V97LazvLQoUKSw8Oyt7aglLCmy+3YZL5Pd
 Zny/Pi5Vq3Ma45lqGuafoaT2qhERz4Z3tbedtRcdO3APVnvtGtgWUUPym8xNKAe4
 mOx0R1EzVdD3QXjh1Fwi9We69tdu5yRDmu+qne07x2T/vJN5zPR9k6sZXkuv85zH
 jX53GlVyLTLXVuD00pFcL9/wjlWhzFHk2BUCg8scKgkqdadN323uZ9qhyn1/VJFt
 E+2j0vLUlRA3Bj+WqcbY8TNq7HsDo91nt1ceYDtnHmRiZcSjRj/rh+cNyd28j+Zk
 Z4hXJkznVjBHAw==
 =Qaeg
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-6.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:

 - Finish a refactor of pgprot_framebuffer() which depended
   on some changes that were merged via the drm tree

 - Fix some kernel-doc warnings to quieten the bots

Thanks to Nathan Lynch and Thomas Zimmermann.

* tag 'powerpc-6.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/rtas: Fix ppc_rtas_rmo_buf_show() kernel-doc
  powerpc/pseries/rtas-work-area: Fix rtas_work_area_reserve_arena() kernel-doc
  powerpc/fb: Call internal __phys_mem_access_prot() in fbdev code
  powerpc: Remove file parameter from phys_mem_access_prot()
  powerpc/machdep: Remove trailing whitespaces
2023-11-12 10:50:38 -08:00
Helge Deller
a406b8b424 parisc: Prevent booting 64-bit kernels on PA1.x machines
Bail out early with an error message when trying to boot a 64-bit kernel
on 32-bit machines. This extends the previous commit so that the check
also covers true 64-bit kernels.

Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 591d2108f3abc ("parisc: Add runtime check to prevent PA2.0 kernels on PA1.x machines")
Cc:  <stable@vger.kernel.org> # v6.0+
2023-11-10 16:17:32 +01:00
Arnd Bergmann
8e8e46a603 parport: gsc: mark init function static
This is only used locally, so mark it static to avoid a warning:

drivers/parport/parport_gsc.c:395:5: error: no previous prototype for 'parport_gsc_init' [-Werror=missing-prototypes]

Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Helge Deller <deller@gmx.de>
2023-11-10 08:41:23 +01:00
Hengqi Chen
1d375d6546 selftests/bpf: Enable cpu v4 tests for LoongArch
Enable the cpu v4 tests for LoongArch. Currently, we don't have a BPF
trampoline in the LoongArch JIT, so the fentry test `test_ptr_struct_arg`
still fails; a follow-up will address it.

Test result attached below:

  # ./test_progs -t verifier_sdiv,verifier_movsx,verifier_ldsx,verifier_gotol,verifier_bswap
  #316/1   verifier_bswap/BSWAP, 16:OK
  #316/2   verifier_bswap/BSWAP, 16 @unpriv:OK
  #316/3   verifier_bswap/BSWAP, 32:OK
  #316/4   verifier_bswap/BSWAP, 32 @unpriv:OK
  #316/5   verifier_bswap/BSWAP, 64:OK
  #316/6   verifier_bswap/BSWAP, 64 @unpriv:OK
  #316     verifier_bswap:OK
  #330/1   verifier_gotol/gotol, small_imm:OK
  #330/2   verifier_gotol/gotol, small_imm @unpriv:OK
  #330     verifier_gotol:OK
  #338/1   verifier_ldsx/LDSX, S8:OK
  #338/2   verifier_ldsx/LDSX, S8 @unpriv:OK
  #338/3   verifier_ldsx/LDSX, S16:OK
  #338/4   verifier_ldsx/LDSX, S16 @unpriv:OK
  #338/5   verifier_ldsx/LDSX, S32:OK
  #338/6   verifier_ldsx/LDSX, S32 @unpriv:OK
  #338/7   verifier_ldsx/LDSX, S8 range checking, privileged:OK
  #338/8   verifier_ldsx/LDSX, S16 range checking:OK
  #338/9   verifier_ldsx/LDSX, S16 range checking @unpriv:OK
  #338/10  verifier_ldsx/LDSX, S32 range checking:OK
  #338/11  verifier_ldsx/LDSX, S32 range checking @unpriv:OK
  #338     verifier_ldsx:OK
  #349/1   verifier_movsx/MOV32SX, S8:OK
  #349/2   verifier_movsx/MOV32SX, S8 @unpriv:OK
  #349/3   verifier_movsx/MOV32SX, S16:OK
  #349/4   verifier_movsx/MOV32SX, S16 @unpriv:OK
  #349/5   verifier_movsx/MOV64SX, S8:OK
  #349/6   verifier_movsx/MOV64SX, S8 @unpriv:OK
  #349/7   verifier_movsx/MOV64SX, S16:OK
  #349/8   verifier_movsx/MOV64SX, S16 @unpriv:OK
  #349/9   verifier_movsx/MOV64SX, S32:OK
  #349/10  verifier_movsx/MOV64SX, S32 @unpriv:OK
  #349/11  verifier_movsx/MOV32SX, S8, range_check:OK
  #349/12  verifier_movsx/MOV32SX, S8, range_check @unpriv:OK
  #349/13  verifier_movsx/MOV32SX, S16, range_check:OK
  #349/14  verifier_movsx/MOV32SX, S16, range_check @unpriv:OK
  #349/15  verifier_movsx/MOV32SX, S16, range_check 2:OK
  #349/16  verifier_movsx/MOV32SX, S16, range_check 2 @unpriv:OK
  #349/17  verifier_movsx/MOV64SX, S8, range_check:OK
  #349/18  verifier_movsx/MOV64SX, S8, range_check @unpriv:OK
  #349/19  verifier_movsx/MOV64SX, S16, range_check:OK
  #349/20  verifier_movsx/MOV64SX, S16, range_check @unpriv:OK
  #349/21  verifier_movsx/MOV64SX, S32, range_check:OK
  #349/22  verifier_movsx/MOV64SX, S32, range_check @unpriv:OK
  #349/23  verifier_movsx/MOV64SX, S16, R10 Sign Extension:OK
  #349/24  verifier_movsx/MOV64SX, S16, R10 Sign Extension @unpriv:OK
  #349     verifier_movsx:OK
  #361/1   verifier_sdiv/SDIV32, non-zero imm divisor, check 1:OK
  #361/2   verifier_sdiv/SDIV32, non-zero imm divisor, check 1 @unpriv:OK
  #361/3   verifier_sdiv/SDIV32, non-zero imm divisor, check 2:OK
  #361/4   verifier_sdiv/SDIV32, non-zero imm divisor, check 2 @unpriv:OK
  #361/5   verifier_sdiv/SDIV32, non-zero imm divisor, check 3:OK
  #361/6   verifier_sdiv/SDIV32, non-zero imm divisor, check 3 @unpriv:OK
  #361/7   verifier_sdiv/SDIV32, non-zero imm divisor, check 4:OK
  #361/8   verifier_sdiv/SDIV32, non-zero imm divisor, check 4 @unpriv:OK
  #361/9   verifier_sdiv/SDIV32, non-zero imm divisor, check 5:OK
  #361/10  verifier_sdiv/SDIV32, non-zero imm divisor, check 5 @unpriv:OK
  #361/11  verifier_sdiv/SDIV32, non-zero imm divisor, check 6:OK
  #361/12  verifier_sdiv/SDIV32, non-zero imm divisor, check 6 @unpriv:OK
  #361/13  verifier_sdiv/SDIV32, non-zero imm divisor, check 7:OK
  #361/14  verifier_sdiv/SDIV32, non-zero imm divisor, check 7 @unpriv:OK
  #361/15  verifier_sdiv/SDIV32, non-zero imm divisor, check 8:OK
  #361/16  verifier_sdiv/SDIV32, non-zero imm divisor, check 8 @unpriv:OK
  #361/17  verifier_sdiv/SDIV32, non-zero reg divisor, check 1:OK
  #361/18  verifier_sdiv/SDIV32, non-zero reg divisor, check 1 @unpriv:OK
  #361/19  verifier_sdiv/SDIV32, non-zero reg divisor, check 2:OK
  #361/20  verifier_sdiv/SDIV32, non-zero reg divisor, check 2 @unpriv:OK
  #361/21  verifier_sdiv/SDIV32, non-zero reg divisor, check 3:OK
  #361/22  verifier_sdiv/SDIV32, non-zero reg divisor, check 3 @unpriv:OK
  #361/23  verifier_sdiv/SDIV32, non-zero reg divisor, check 4:OK
  #361/24  verifier_sdiv/SDIV32, non-zero reg divisor, check 4 @unpriv:OK
  #361/25  verifier_sdiv/SDIV32, non-zero reg divisor, check 5:OK
  #361/26  verifier_sdiv/SDIV32, non-zero reg divisor, check 5 @unpriv:OK
  #361/27  verifier_sdiv/SDIV32, non-zero reg divisor, check 6:OK
  #361/28  verifier_sdiv/SDIV32, non-zero reg divisor, check 6 @unpriv:OK
  #361/29  verifier_sdiv/SDIV32, non-zero reg divisor, check 7:OK
  #361/30  verifier_sdiv/SDIV32, non-zero reg divisor, check 7 @unpriv:OK
  #361/31  verifier_sdiv/SDIV32, non-zero reg divisor, check 8:OK
  #361/32  verifier_sdiv/SDIV32, non-zero reg divisor, check 8 @unpriv:OK
  #361/33  verifier_sdiv/SDIV64, non-zero imm divisor, check 1:OK
  #361/34  verifier_sdiv/SDIV64, non-zero imm divisor, check 1 @unpriv:OK
  #361/35  verifier_sdiv/SDIV64, non-zero imm divisor, check 2:OK
  #361/36  verifier_sdiv/SDIV64, non-zero imm divisor, check 2 @unpriv:OK
  #361/37  verifier_sdiv/SDIV64, non-zero imm divisor, check 3:OK
  #361/38  verifier_sdiv/SDIV64, non-zero imm divisor, check 3 @unpriv:OK
  #361/39  verifier_sdiv/SDIV64, non-zero imm divisor, check 4:OK
  #361/40  verifier_sdiv/SDIV64, non-zero imm divisor, check 4 @unpriv:OK
  #361/41  verifier_sdiv/SDIV64, non-zero imm divisor, check 5:OK
  #361/42  verifier_sdiv/SDIV64, non-zero imm divisor, check 5 @unpriv:OK
  #361/43  verifier_sdiv/SDIV64, non-zero imm divisor, check 6:OK
  #361/44  verifier_sdiv/SDIV64, non-zero imm divisor, check 6 @unpriv:OK
  #361/45  verifier_sdiv/SDIV64, non-zero reg divisor, check 1:OK
  #361/46  verifier_sdiv/SDIV64, non-zero reg divisor, check 1 @unpriv:OK
  #361/47  verifier_sdiv/SDIV64, non-zero reg divisor, check 2:OK
  #361/48  verifier_sdiv/SDIV64, non-zero reg divisor, check 2 @unpriv:OK
  #361/49  verifier_sdiv/SDIV64, non-zero reg divisor, check 3:OK
  #361/50  verifier_sdiv/SDIV64, non-zero reg divisor, check 3 @unpriv:OK
  #361/51  verifier_sdiv/SDIV64, non-zero reg divisor, check 4:OK
  #361/52  verifier_sdiv/SDIV64, non-zero reg divisor, check 4 @unpriv:OK
  #361/53  verifier_sdiv/SDIV64, non-zero reg divisor, check 5:OK
  #361/54  verifier_sdiv/SDIV64, non-zero reg divisor, check 5 @unpriv:OK
  #361/55  verifier_sdiv/SDIV64, non-zero reg divisor, check 6:OK
  #361/56  verifier_sdiv/SDIV64, non-zero reg divisor, check 6 @unpriv:OK
  #361/57  verifier_sdiv/SMOD32, non-zero imm divisor, check 1:OK
  #361/58  verifier_sdiv/SMOD32, non-zero imm divisor, check 1 @unpriv:OK
  #361/59  verifier_sdiv/SMOD32, non-zero imm divisor, check 2:OK
  #361/60  verifier_sdiv/SMOD32, non-zero imm divisor, check 2 @unpriv:OK
  #361/61  verifier_sdiv/SMOD32, non-zero imm divisor, check 3:OK
  #361/62  verifier_sdiv/SMOD32, non-zero imm divisor, check 3 @unpriv:OK
  #361/63  verifier_sdiv/SMOD32, non-zero imm divisor, check 4:OK
  #361/64  verifier_sdiv/SMOD32, non-zero imm divisor, check 4 @unpriv:OK
  #361/65  verifier_sdiv/SMOD32, non-zero imm divisor, check 5:OK
  #361/66  verifier_sdiv/SMOD32, non-zero imm divisor, check 5 @unpriv:OK
  #361/67  verifier_sdiv/SMOD32, non-zero imm divisor, check 6:OK
  #361/68  verifier_sdiv/SMOD32, non-zero imm divisor, check 6 @unpriv:OK
  #361/69  verifier_sdiv/SMOD32, non-zero reg divisor, check 1:OK
  #361/70  verifier_sdiv/SMOD32, non-zero reg divisor, check 1 @unpriv:OK
  #361/71  verifier_sdiv/SMOD32, non-zero reg divisor, check 2:OK
  #361/72  verifier_sdiv/SMOD32, non-zero reg divisor, check 2 @unpriv:OK
  #361/73  verifier_sdiv/SMOD32, non-zero reg divisor, check 3:OK
  #361/74  verifier_sdiv/SMOD32, non-zero reg divisor, check 3 @unpriv:OK
  #361/75  verifier_sdiv/SMOD32, non-zero reg divisor, check 4:OK
  #361/76  verifier_sdiv/SMOD32, non-zero reg divisor, check 4 @unpriv:OK
  #361/77  verifier_sdiv/SMOD32, non-zero reg divisor, check 5:OK
  #361/78  verifier_sdiv/SMOD32, non-zero reg divisor, check 5 @unpriv:OK
  #361/79  verifier_sdiv/SMOD32, non-zero reg divisor, check 6:OK
  #361/80  verifier_sdiv/SMOD32, non-zero reg divisor, check 6 @unpriv:OK
  #361/81  verifier_sdiv/SMOD64, non-zero imm divisor, check 1:OK
  #361/82  verifier_sdiv/SMOD64, non-zero imm divisor, check 1 @unpriv:OK
  #361/83  verifier_sdiv/SMOD64, non-zero imm divisor, check 2:OK
  #361/84  verifier_sdiv/SMOD64, non-zero imm divisor, check 2 @unpriv:OK
  #361/85  verifier_sdiv/SMOD64, non-zero imm divisor, check 3:OK
  #361/86  verifier_sdiv/SMOD64, non-zero imm divisor, check 3 @unpriv:OK
  #361/87  verifier_sdiv/SMOD64, non-zero imm divisor, check 4:OK
  #361/88  verifier_sdiv/SMOD64, non-zero imm divisor, check 4 @unpriv:OK
  #361/89  verifier_sdiv/SMOD64, non-zero imm divisor, check 5:OK
  #361/90  verifier_sdiv/SMOD64, non-zero imm divisor, check 5 @unpriv:OK
  #361/91  verifier_sdiv/SMOD64, non-zero imm divisor, check 6:OK
  #361/92  verifier_sdiv/SMOD64, non-zero imm divisor, check 6 @unpriv:OK
  #361/93  verifier_sdiv/SMOD64, non-zero imm divisor, check 7:OK
  #361/94  verifier_sdiv/SMOD64, non-zero imm divisor, check 7 @unpriv:OK
  #361/95  verifier_sdiv/SMOD64, non-zero imm divisor, check 8:OK
  #361/96  verifier_sdiv/SMOD64, non-zero imm divisor, check 8 @unpriv:OK
  #361/97  verifier_sdiv/SMOD64, non-zero reg divisor, check 1:OK
  #361/98  verifier_sdiv/SMOD64, non-zero reg divisor, check 1 @unpriv:OK
  #361/99  verifier_sdiv/SMOD64, non-zero reg divisor, check 2:OK
  #361/100 verifier_sdiv/SMOD64, non-zero reg divisor, check 2 @unpriv:OK
  #361/101 verifier_sdiv/SMOD64, non-zero reg divisor, check 3:OK
  #361/102 verifier_sdiv/SMOD64, non-zero reg divisor, check 3 @unpriv:OK
  #361/103 verifier_sdiv/SMOD64, non-zero reg divisor, check 4:OK
  #361/104 verifier_sdiv/SMOD64, non-zero reg divisor, check 4 @unpriv:OK
  #361/105 verifier_sdiv/SMOD64, non-zero reg divisor, check 5:OK
  #361/106 verifier_sdiv/SMOD64, non-zero reg divisor, check 5 @unpriv:OK
  #361/107 verifier_sdiv/SMOD64, non-zero reg divisor, check 6:OK
  #361/108 verifier_sdiv/SMOD64, non-zero reg divisor, check 6 @unpriv:OK
  #361/109 verifier_sdiv/SMOD64, non-zero reg divisor, check 7:OK
  #361/110 verifier_sdiv/SMOD64, non-zero reg divisor, check 7 @unpriv:OK
  #361/111 verifier_sdiv/SMOD64, non-zero reg divisor, check 8:OK
  #361/112 verifier_sdiv/SMOD64, non-zero reg divisor, check 8 @unpriv:OK
  #361/113 verifier_sdiv/SDIV32, zero divisor:OK
  #361/114 verifier_sdiv/SDIV32, zero divisor @unpriv:OK
  #361/115 verifier_sdiv/SDIV64, zero divisor:OK
  #361/116 verifier_sdiv/SDIV64, zero divisor @unpriv:OK
  #361/117 verifier_sdiv/SMOD32, zero divisor:OK
  #361/118 verifier_sdiv/SMOD32, zero divisor @unpriv:OK
  #361/119 verifier_sdiv/SMOD64, zero divisor:OK
  #361/120 verifier_sdiv/SMOD64, zero divisor @unpriv:OK
  #361     verifier_sdiv:OK
  Summary: 5/163 PASSED, 0 SKIPPED, 0 FAILED

  # ./test_progs -t ldsx_insn
  test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec
  test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec
  libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22
  libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524
  test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524)
  #116/1   ldsx_insn/map_val and probed_memory:FAIL
  #116/2   ldsx_insn/ctx_member_sign_ext:OK
  #116/3   ldsx_insn/ctx_member_narrow_sign_ext:OK
  #116     ldsx_insn:FAIL

  All error logs:
  test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec
  test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec
  libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22
  libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524
  test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524)
  #116/1   ldsx_insn/map_val and probed_memory:FAIL
  #116     ldsx_insn:FAIL
  Summary: 0/2 PASSED, 0 SKIPPED, 1 FAILED

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:21 +08:00
Hengqi Chen
7b6b13d329 LoongArch: BPF: Support signed mod instructions
Add support for signed mod instructions.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:21 +08:00
Hengqi Chen
2425c9e002 LoongArch: BPF: Support signed div instructions
Add support for signed div instructions.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:21 +08:00
Hengqi Chen
9ddd2b8d1a LoongArch: BPF: Support 32-bit offset jmp instructions
Add support for 32-bit offset jmp instructions. Currently, we use the b
instruction for such jumps; its 26-bit offset field, counted in 4-byte
units, gives a range of ±128MB, which should be large enough for BPF
progs.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:16 +08:00
Hengqi Chen
4ebf9216e7 LoongArch: BPF: Support unconditional bswap instructions
Add support for the unconditional bswap instruction. Since LoongArch is
always little-endian, treat the unconditional bswap the same as a
big-endian conversion.
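
As a hedged illustration (plain C, not the JIT code; the function names
here are made up for the example), this is why the two forms coincide on
a little-endian architecture:

  #include <stdint.h>

  /* What a BPF_FROM_BE conversion does on a little-endian host: swap. */
  static inline uint32_t be_conv32(uint32_t x)
  {
          return __builtin_bswap32(x);
  }

  /* The v4 unconditional bswap always swaps, so on LoongArch (always
   * little-endian) it can reuse the FROM_BE code path unchanged. */
  static inline uint32_t bswap32_uncond(uint32_t x)
  {
          return be_conv32(x);
  }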

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:16 +08:00
Hengqi Chen
f48012f161 LoongArch: BPF: Support sign-extension mov instructions
Add support for sign-extension mov instructions.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:16 +08:00
Hengqi Chen
7111afe8fb LoongArch: BPF: Support sign-extension load instructions
Add support for sign-extension load instructions.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:16 +08:00
Hengqi Chen
add2802440 LoongArch: Add more instruction opcodes and emit_* helpers
This patch adds more instruction opcodes and their corresponding emit_*
helpers which will be used in later patches.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:15 +08:00
Huacai Chen
a2ccf46333 LoongArch/smp: Call rcutree_report_cpu_starting() earlier
rcutree_report_cpu_starting() must be called before cpu_probe() to avoid
the following lockdep splat, which is triggered by calling __alloc_pages()
when CONFIG_PROVE_RCU_LIST=y:

 =============================
 WARNING: suspicious RCU usage
 6.6.0+ #980 Not tainted
 -----------------------------
 kernel/locking/lockdep.c:3761 RCU-list traversed in non-reader section!!
 other info that might help us debug this:
 RCU used illegally from offline CPU!
 rcu_scheduler_active = 1, debug_locks = 1
 1 lock held by swapper/1/0:
  #0: 900000000c82ef98 (&pcp->lock){+.+.}-{2:2}, at: get_page_from_freelist+0x894/0x1790
 CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.6.0+ #980
 Stack : 0000000000000001 9000000004f79508 9000000004893670 9000000100310000
         90000001003137d0 0000000000000000 90000001003137d8 9000000004f79508
         0000000000000000 0000000000000001 0000000000000000 90000000048a3384
         203a656d616e2065 ca43677b3687e616 90000001002c3480 0000000000000008
         000000000000009d 0000000000000000 0000000000000001 80000000ffffe0b8
         000000000000000d 0000000000000033 0000000007ec0000 13bbf50562dad831
         9000000005140748 0000000000000000 9000000004f79508 0000000000000004
         0000000000000000 9000000005140748 90000001002bad40 0000000000000000
         90000001002ba400 0000000000000000 9000000003573ec8 0000000000000000
         00000000000000b0 0000000000000004 0000000000000000 0000000000070000
         ...
 Call Trace:
 [<9000000003573ec8>] show_stack+0x38/0x150
 [<9000000004893670>] dump_stack_lvl+0x74/0xa8
 [<900000000360d2bc>] lockdep_rcu_suspicious+0x14c/0x190
 [<900000000361235c>] __lock_acquire+0xd0c/0x2740
 [<90000000036146f4>] lock_acquire+0x104/0x2c0
 [<90000000048a955c>] _raw_spin_lock_irqsave+0x5c/0x90
 [<900000000381cd5c>] rmqueue_bulk+0x6c/0x950
 [<900000000381fc0c>] get_page_from_freelist+0xd4c/0x1790
 [<9000000003821c6c>] __alloc_pages+0x1bc/0x3e0
 [<9000000003583b40>] tlb_init+0x150/0x2a0
 [<90000000035742a0>] per_cpu_trap_init+0xf0/0x110
 [<90000000035712fc>] cpu_probe+0x3dc/0x7a0
 [<900000000357ed20>] start_secondary+0x40/0xb0
 [<9000000004897138>] smpboot_entry+0x54/0x58

raw_smp_processor_id() is required because the instrumented
smp_processor_id() would call into lockdep before RCU has declared the
CPU to be watched for readers.

See also commit 29368e093921 ("x86/smpboot: Move rcu_cpu_starting() earlier"),
commit de5d9dae150c ("s390/smp: move rcu_cpu_starting() earlier") and commit
99f070b62322 ("powerpc/smp: Call rcu_cpu_starting() earlier").
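
A commented sketch of the resulting ordering in start_secondary() (taken
from the diff further down; the comments are added here for illustration):

  cpu = raw_smp_processor_id();            /* no lockdep instrumentation */
  set_my_cpu_offset(per_cpu_offset(cpu));
  rcutree_report_cpu_starting(cpu);        /* RCU now watches this CPU   */
  cpu_probe();                             /* may call __alloc_pages()   */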

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:15 +08:00
WANG Rui
affef66b65 LoongArch: Relax memory ordering for atomic operations
This patch relaxes the implementation of the atomic operations where the
required memory ordering still allows it, which helps improve performance
on LA664+.

Unixbench with full threads (8)
                                           before       after
  Dhrystone 2 using register variables   203910714.2  203909539.8   0.00%
  Double-Precision Whetstone                 37930.9        37931   0.00%
  Execl Throughput                           29431.5      29545.8   0.39%
  File Copy 1024 bufsize 2000 maxblocks    6645759.5      6676320   0.46%
  File Copy 256 bufsize 500 maxblocks      2138772.4    2144182.4   0.25%
  File Copy 4096 bufsize 8000 maxblocks   11640698.4     11602703  -0.33%
  Pipe Throughput                          8849077.7    8917009.4   0.77%
  Pipe-based Context Switching             1255108.5    1287277.3   2.56%
  Process Creation                           50825.9      50442.1  -0.76%
  Shell Scripts (1 concurrent)               25795.8      25942.3   0.57%
  Shell Scripts (8 concurrent)                3812.6       3835.2   0.59%
  System Call Overhead                     9248212.6    9353348.6   1.14%
                                                                  =======
  System Benchmarks Index Score               8076.6       8114.4   0.47%
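
For reference, a minimal sketch of the two instruction forms the updated
macros expand to for add_return (LoongArch-only inline asm, mirroring the
diff below; the function names are made up):

  /* Fully ordered form: the _db variant embeds a full barrier. */
  static inline int add_return_full(int i, int *p)
  {
          int result;

          __asm__ __volatile__(
                  "amadd_db.w %1, %2, %0"
                  : "+ZB" (*p), "=&r" (result)
                  : "r" (i)
                  : "memory");
          return result + i;
  }

  /* Relaxed form: plain amadd.w, no barrier. Acquire/release callers
   * are mapped to the fully ordered form instead. */
  static inline int add_return_relaxed(int i, int *p)
  {
          int result;

          __asm__ __volatile__(
                  "amadd.w %1, %2, %0"
                  : "+ZB" (*p), "=&r" (result)
                  : "r" (i)
                  : "memory");
          return result + i;
  }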

Signed-off-by: WANG Rui <wangrui@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:15 +08:00
Nathan Chancellor
71945968d8 LoongArch: Mark __percpu functions as always inline
A recent change to the optimization pipeline in LLVM reveals some
fragility around the inlining of LoongArch's __percpu functions, which
manifests as a BUILD_BUG() failure:

  In file included from kernel/sched/build_policy.c:17:
  In file included from include/linux/sched/cputime.h:5:
  In file included from include/linux/sched/signal.h:5:
  In file included from include/linux/rculist.h:11:
  In file included from include/linux/rcupdate.h:26:
  In file included from include/linux/irqflags.h:18:
  arch/loongarch/include/asm/percpu.h:97:3: error: call to '__compiletime_assert_51' declared with 'error' attribute: BUILD_BUG failed
     97 |                 BUILD_BUG();
        |                 ^
  include/linux/build_bug.h:59:21: note: expanded from macro 'BUILD_BUG'
     59 | #define BUILD_BUG() BUILD_BUG_ON_MSG(1, "BUILD_BUG failed")
        |                     ^
  include/linux/build_bug.h:39:37: note: expanded from macro 'BUILD_BUG_ON_MSG'
     39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
        |                                     ^
  include/linux/compiler_types.h:425:2: note: expanded from macro 'compiletime_assert'
    425 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
        |         ^
  include/linux/compiler_types.h:413:2: note: expanded from macro '_compiletime_assert'
    413 |         __compiletime_assert(condition, msg, prefix, suffix)
        |         ^
  include/linux/compiler_types.h:406:4: note: expanded from macro '__compiletime_assert'
    406 |                         prefix ## suffix();                             \
        |                         ^
  <scratch space>:86:1: note: expanded from here
     86 | __compiletime_assert_51
        | ^
  1 error generated.

If these functions are not inlined (which the compiler is free to do
even with functions marked with the standard 'inline' keyword), the
BUILD_BUG() in the default case cannot be eliminated since the compiler
cannot prove it is never used, resulting in a build failure due to the
error attribute.

Mark these functions as __always_inline to guarantee inlining so that
the BUILD_BUG() only triggers when the default case genuinely cannot be
eliminated due to an unexpected size.
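
As a minimal illustration (assuming GCC/Clang attribute syntax; this is
not the kernel's real BUILD_BUG() machinery), the pattern only compiles
when the default branch is provably dead at every call site:

  /* Calls that survive optimization trigger a compile-time error. */
  extern void bad_percpu_size(void) __attribute__((error("bad size")));

  static inline __attribute__((always_inline))
  unsigned long percpu_read_sketch(const void *ptr, int size)
  {
          switch (size) {
          case 4:
                  return *(const volatile unsigned int *)ptr;
          case 8:
                  return *(const volatile unsigned long *)ptr;
          default:
                  bad_percpu_size(); /* must be eliminated as dead code */
                  return 0;
          }
  }

With __always_inline and a compile-time-constant size, an optimizing
build can always prove the default branch unreachable, so the error only
fires for a genuinely unexpected size.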

Cc:  <stable@vger.kernel.org>
Closes: https://github.com/ClangBuiltLinux/linux/issues/1955
Fixes: 46859ac8af52 ("LoongArch: Add multi-processor (SMP) support")
Link: 1a2e77cf9e
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:15 +08:00
WANG Rui
21eb2bfe27 LoongArch: Disable module from accessing external data directly
The distance between vmlinux and a module can be too large for external
data to be accessed directly via PC-relative addressing, so it must go
through the GOT instead.

When compiling a module with GCC, the option `-mdirect-extern-access` is
disabled by default. Clang's `-fdirect-access-external-data` is enabled
by default, so it needs to be explicitly disabled.
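
A hypothetical example of the access pattern in question (the symbol name
is made up): for an extern object, the compiler can either address it
PC-relatively, which only reaches roughly ±2GiB, or load its address from
the GOT first, which works at any distance:

  extern int ext_counter;  /* lives in vmlinux, far away from the module */

  int module_read_counter(void)
  {
          /* direct:  pcalau12i + ld.w against the symbol itself
           * via GOT: pcalau12i + ld.d of the GOT entry, then ld.w */
          return ext_counter;
  }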

Signed-off-by: WANG Rui <wangrui@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:07 +08:00
Huacai Chen
80c7889de7 LoongArch: Support PREEMPT_DYNAMIC with static keys
Since commit 4e90d0522a688371402c ("riscv: support PREEMPT_DYNAMIC with
static keys"), the infrastructure is complete and we can simply select
HAVE_PREEMPT_DYNAMIC_KEY to enable PREEMPT_DYNAMIC on LoongArch because
we already support static keys.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-11-08 14:12:01 +08:00
Helge Deller
166b0110d1 parisc/pgtable: Do not drop upper 5 address bits of physical address
When calculating the pfn for the iitlbt/idtlbt instructions, do not
drop the upper 5 address bits. This doesn't seem to have an effect on
physical hardware, which uses fewer physical address bits, but in qemu
the missing bits are visible.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc:  <stable@vger.kernel.org>
2023-11-07 19:48:30 +01:00
Nathan Lynch
644b6025bc powerpc/rtas: Fix ppc_rtas_rmo_buf_show() kernel-doc
From a W=1 build:

>> arch/powerpc/kernel/rtas-proc.c:771: warning: Function parameter or member 'm' not described in
>> 'ppc_rtas_rmo_buf_show'
>> arch/powerpc/kernel/rtas-proc.c:771: warning: Function parameter or member 'v' not described in
>> 'ppc_rtas_rmo_buf_show'

Add the missing parameter descriptions.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202309211645.1Lvwmbv4-lkp@intel.com/
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231106-rtas-trivial-v1-2-61847655c51f@linux.ibm.com
2023-11-07 13:13:45 +11:00
Nathan Lynch
65083333d3 powerpc/pseries/rtas-work-area: Fix rtas_work_area_reserve_arena() kernel-doc
From a W=1 build:

>> arch/powerpc/platforms/pseries/rtas-work-area.c:189: warning: Function parameter or member 'limit' not
>> described in 'rtas_work_area_reserve_arena'

Add the missing description of the limit parameter.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202309131221.Bm1pg96n-lkp@intel.com/
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231106-rtas-trivial-v1-1-61847655c51f@linux.ibm.com
2023-11-07 13:13:44 +11:00
Thomas Zimmermann
deebe5f607 powerpc/fb: Call internal __phys_mem_access_prot() in fbdev code
Call __phys_mem_access_prot() from the fbdev mmap helper
pgprot_framebuffer(). This avoids having to pass a NULL file argument.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230922080636.26762-6-tzimmermann@suse.de
2023-11-06 15:24:52 +11:00
Thomas Zimmermann
1f92a844c3 powerpc: Remove file parameter from phys_mem_access_prot()
Remove 'file' parameter from struct machdep_calls.phys_mem_access_prot
and its implementation in pci_phys_mem_access_prot(). The file is not
used on PowerPC. By removing it, a later patch can simplify fbdev's
mmap code, which uses phys_mem_access_prot() on PowerPC.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
[mpe: Rebase on unrelated changes to phys_mem_access_prot()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230922080636.26762-5-tzimmermann@suse.de
2023-11-06 15:21:33 +11:00
Thomas Zimmermann
322948c319 powerpc/machdep: Remove trailing whitespaces
Fix coding style. No functional changes.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230922080636.26762-4-tzimmermann@suse.de
2023-11-06 15:15:15 +11:00
Huacai Chen
a6bdc082ad Merge 'bpf-next 2023-10-16' into loongarch-next
LoongArch architecture changes for 6.7 (BPF CPU v4 support) depend on
the bpf changes in order to work and to avoid conflicts in the selftests,
so merge them to create a base.
2023-11-01 10:55:00 +08:00
24 changed files with 245 additions and 95 deletions

View File

@@ -136,6 +136,7 @@ config LOONGARCH
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_PREEMPT_DYNAMIC_KEY
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RETHOOK
select HAVE_RSEQ

View File

@@ -68,6 +68,8 @@ LDFLAGS_vmlinux += -static -n -nostdlib
ifdef CONFIG_AS_HAS_EXPLICIT_RELOCS
cflags-y += $(call cc-option,-mexplicit-relocs)
KBUILD_CFLAGS_KERNEL += $(call cc-option,-mdirect-extern-access)
KBUILD_AFLAGS_MODULE += $(call cc-option,-fno-direct-access-external-data)
KBUILD_CFLAGS_MODULE += $(call cc-option,-fno-direct-access-external-data)
KBUILD_AFLAGS_MODULE += $(call cc-option,-mno-relax) $(call cc-option,-Wa$(comma)-mno-relax)
KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax) $(call cc-option,-Wa$(comma)-mno-relax)
else

View File

@@ -36,19 +36,19 @@
static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \
__asm__ __volatile__( \
"am"#asm_op"_db.w" " $zero, %1, %0 \n" \
"am"#asm_op".w" " $zero, %1, %0 \n" \
: "+ZB" (v->counter) \
: "r" (I) \
: "memory"); \
}
#define ATOMIC_OP_RETURN(op, I, asm_op, c_op) \
static inline int arch_atomic_##op##_return_relaxed(int i, atomic_t *v) \
#define ATOMIC_OP_RETURN(op, I, asm_op, c_op, mb, suffix) \
static inline int arch_atomic_##op##_return##suffix(int i, atomic_t *v) \
{ \
int result; \
\
__asm__ __volatile__( \
"am"#asm_op"_db.w" " %1, %2, %0 \n" \
"am"#asm_op#mb".w" " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
@@ -56,13 +56,13 @@ static inline int arch_atomic_##op##_return_relaxed(int i, atomic_t *v) \
return result c_op I; \
}
#define ATOMIC_FETCH_OP(op, I, asm_op) \
static inline int arch_atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
#define ATOMIC_FETCH_OP(op, I, asm_op, mb, suffix) \
static inline int arch_atomic_fetch_##op##suffix(int i, atomic_t *v) \
{ \
int result; \
\
__asm__ __volatile__( \
"am"#asm_op"_db.w" " %1, %2, %0 \n" \
"am"#asm_op#mb".w" " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
@@ -72,29 +72,53 @@ static inline int arch_atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
#define ATOMIC_OPS(op, I, asm_op, c_op) \
ATOMIC_OP(op, I, asm_op) \
ATOMIC_OP_RETURN(op, I, asm_op, c_op) \
ATOMIC_FETCH_OP(op, I, asm_op)
ATOMIC_OP_RETURN(op, I, asm_op, c_op, _db, ) \
ATOMIC_OP_RETURN(op, I, asm_op, c_op, , _relaxed) \
ATOMIC_FETCH_OP(op, I, asm_op, _db, ) \
ATOMIC_FETCH_OP(op, I, asm_op, , _relaxed)
ATOMIC_OPS(add, i, add, +)
ATOMIC_OPS(sub, -i, add, +)
#define arch_atomic_add_return arch_atomic_add_return
#define arch_atomic_add_return_acquire arch_atomic_add_return
#define arch_atomic_add_return_release arch_atomic_add_return
#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
#define arch_atomic_sub_return arch_atomic_sub_return
#define arch_atomic_sub_return_acquire arch_atomic_sub_return
#define arch_atomic_sub_return_release arch_atomic_sub_return
#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
#define arch_atomic_fetch_add arch_atomic_fetch_add
#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add
#define arch_atomic_fetch_add_release arch_atomic_fetch_add
#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
#define arch_atomic_fetch_sub arch_atomic_fetch_sub
#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub
#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub
#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, I, asm_op) \
ATOMIC_OP(op, I, asm_op) \
ATOMIC_FETCH_OP(op, I, asm_op)
ATOMIC_FETCH_OP(op, I, asm_op, _db, ) \
ATOMIC_FETCH_OP(op, I, asm_op, , _relaxed)
ATOMIC_OPS(and, i, and)
ATOMIC_OPS(or, i, or)
ATOMIC_OPS(xor, i, xor)
#define arch_atomic_fetch_and arch_atomic_fetch_and
#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and
#define arch_atomic_fetch_and_release arch_atomic_fetch_and
#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
#define arch_atomic_fetch_or arch_atomic_fetch_or
#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or
#define arch_atomic_fetch_or_release arch_atomic_fetch_or
#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
#define arch_atomic_fetch_xor arch_atomic_fetch_xor
#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor
#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor
#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
#undef ATOMIC_OPS
@@ -172,18 +196,18 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
static inline void arch_atomic64_##op(long i, atomic64_t *v) \
{ \
__asm__ __volatile__( \
"am"#asm_op"_db.d " " $zero, %1, %0 \n" \
"am"#asm_op".d " " $zero, %1, %0 \n" \
: "+ZB" (v->counter) \
: "r" (I) \
: "memory"); \
}
#define ATOMIC64_OP_RETURN(op, I, asm_op, c_op) \
static inline long arch_atomic64_##op##_return_relaxed(long i, atomic64_t *v) \
#define ATOMIC64_OP_RETURN(op, I, asm_op, c_op, mb, suffix) \
static inline long arch_atomic64_##op##_return##suffix(long i, atomic64_t *v) \
{ \
long result; \
__asm__ __volatile__( \
"am"#asm_op"_db.d " " %1, %2, %0 \n" \
"am"#asm_op#mb".d " " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
@@ -191,13 +215,13 @@ static inline long arch_atomic64_##op##_return_relaxed(long i, atomic64_t *v) \
return result c_op I; \
}
#define ATOMIC64_FETCH_OP(op, I, asm_op) \
static inline long arch_atomic64_fetch_##op##_relaxed(long i, atomic64_t *v) \
#define ATOMIC64_FETCH_OP(op, I, asm_op, mb, suffix) \
static inline long arch_atomic64_fetch_##op##suffix(long i, atomic64_t *v) \
{ \
long result; \
\
__asm__ __volatile__( \
"am"#asm_op"_db.d " " %1, %2, %0 \n" \
"am"#asm_op#mb".d " " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
@@ -207,29 +231,53 @@ static inline long arch_atomic64_fetch_##op##_relaxed(long i, atomic64_t *v) \
#define ATOMIC64_OPS(op, I, asm_op, c_op) \
ATOMIC64_OP(op, I, asm_op) \
ATOMIC64_OP_RETURN(op, I, asm_op, c_op) \
ATOMIC64_FETCH_OP(op, I, asm_op)
ATOMIC64_OP_RETURN(op, I, asm_op, c_op, _db, ) \
ATOMIC64_OP_RETURN(op, I, asm_op, c_op, , _relaxed) \
ATOMIC64_FETCH_OP(op, I, asm_op, _db, ) \
ATOMIC64_FETCH_OP(op, I, asm_op, , _relaxed)
ATOMIC64_OPS(add, i, add, +)
ATOMIC64_OPS(sub, -i, add, +)
#define arch_atomic64_add_return arch_atomic64_add_return
#define arch_atomic64_add_return_acquire arch_atomic64_add_return
#define arch_atomic64_add_return_release arch_atomic64_add_return
#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
#define arch_atomic64_sub_return arch_atomic64_sub_return
#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return
#define arch_atomic64_sub_return_release arch_atomic64_sub_return
#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
#define arch_atomic64_fetch_add arch_atomic64_fetch_add
#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add
#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add
#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub
#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
#undef ATOMIC64_OPS
#define ATOMIC64_OPS(op, I, asm_op) \
ATOMIC64_OP(op, I, asm_op) \
ATOMIC64_FETCH_OP(op, I, asm_op)
ATOMIC64_FETCH_OP(op, I, asm_op, _db, ) \
ATOMIC64_FETCH_OP(op, I, asm_op, , _relaxed)
ATOMIC64_OPS(and, i, and)
ATOMIC64_OPS(or, i, or)
ATOMIC64_OPS(xor, i, xor)
#define arch_atomic64_fetch_and arch_atomic64_fetch_and
#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and
#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and
#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
#define arch_atomic64_fetch_or arch_atomic64_fetch_or
#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or
#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or
#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor
#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
#undef ATOMIC64_OPS

View File

@@ -65,6 +65,8 @@ enum reg2_op {
revbd_op = 0x0f,
revh2w_op = 0x10,
revhd_op = 0x11,
extwh_op = 0x16,
extwb_op = 0x17,
iocsrrdb_op = 0x19200,
iocsrrdh_op = 0x19201,
iocsrrdw_op = 0x19202,
@@ -572,6 +574,8 @@ static inline void emit_##NAME(union loongarch_instruction *insn, \
DEF_EMIT_REG2_FORMAT(revb2h, revb2h_op)
DEF_EMIT_REG2_FORMAT(revb2w, revb2w_op)
DEF_EMIT_REG2_FORMAT(revbd, revbd_op)
DEF_EMIT_REG2_FORMAT(extwh, extwh_op)
DEF_EMIT_REG2_FORMAT(extwb, extwb_op)
#define DEF_EMIT_REG2I5_FORMAT(NAME, OP) \
static inline void emit_##NAME(union loongarch_instruction *insn, \
@@ -623,6 +627,9 @@ DEF_EMIT_REG2I12_FORMAT(lu52id, lu52id_op)
DEF_EMIT_REG2I12_FORMAT(andi, andi_op)
DEF_EMIT_REG2I12_FORMAT(ori, ori_op)
DEF_EMIT_REG2I12_FORMAT(xori, xori_op)
DEF_EMIT_REG2I12_FORMAT(ldb, ldb_op)
DEF_EMIT_REG2I12_FORMAT(ldh, ldh_op)
DEF_EMIT_REG2I12_FORMAT(ldw, ldw_op)
DEF_EMIT_REG2I12_FORMAT(ldbu, ldbu_op)
DEF_EMIT_REG2I12_FORMAT(ldhu, ldhu_op)
DEF_EMIT_REG2I12_FORMAT(ldwu, ldwu_op)
@@ -701,9 +708,12 @@ static inline void emit_##NAME(union loongarch_instruction *insn, \
insn->reg3_format.rk = rk; \
}
DEF_EMIT_REG3_FORMAT(addw, addw_op)
DEF_EMIT_REG3_FORMAT(addd, addd_op)
DEF_EMIT_REG3_FORMAT(subd, subd_op)
DEF_EMIT_REG3_FORMAT(muld, muld_op)
DEF_EMIT_REG3_FORMAT(divd, divd_op)
DEF_EMIT_REG3_FORMAT(modd, modd_op)
DEF_EMIT_REG3_FORMAT(divdu, divdu_op)
DEF_EMIT_REG3_FORMAT(moddu, moddu_op)
DEF_EMIT_REG3_FORMAT(and, and_op)
@@ -715,6 +725,9 @@ DEF_EMIT_REG3_FORMAT(srlw, srlw_op)
DEF_EMIT_REG3_FORMAT(srld, srld_op)
DEF_EMIT_REG3_FORMAT(sraw, sraw_op)
DEF_EMIT_REG3_FORMAT(srad, srad_op)
DEF_EMIT_REG3_FORMAT(ldxb, ldxb_op)
DEF_EMIT_REG3_FORMAT(ldxh, ldxh_op)
DEF_EMIT_REG3_FORMAT(ldxw, ldxw_op)
DEF_EMIT_REG3_FORMAT(ldxbu, ldxbu_op)
DEF_EMIT_REG3_FORMAT(ldxhu, ldxhu_op)
DEF_EMIT_REG3_FORMAT(ldxwu, ldxwu_op)

View File

@@ -32,7 +32,7 @@ static inline void set_my_cpu_offset(unsigned long off)
#define __my_cpu_offset __my_cpu_offset
#define PERCPU_OP(op, asm_op, c_op) \
static inline unsigned long __percpu_##op(void *ptr, \
static __always_inline unsigned long __percpu_##op(void *ptr, \
unsigned long val, int size) \
{ \
unsigned long ret; \
@@ -63,7 +63,7 @@ PERCPU_OP(and, and, &)
PERCPU_OP(or, or, |)
#undef PERCPU_OP
static inline unsigned long __percpu_read(void *ptr, int size)
static __always_inline unsigned long __percpu_read(void *ptr, int size)
{
unsigned long ret;
@@ -100,7 +100,7 @@ static inline unsigned long __percpu_read(void *ptr, int size)
return ret;
}
static inline void __percpu_write(void *ptr, unsigned long val, int size)
static __always_inline void __percpu_write(void *ptr, unsigned long val, int size)
{
switch (size) {
case 1:
@@ -132,8 +132,8 @@ static inline void __percpu_write(void *ptr, unsigned long val, int size)
}
}
static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
int size)
static __always_inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
int size)
{
switch (size) {
case 1:

View File

@@ -504,8 +504,9 @@ asmlinkage void start_secondary(void)
unsigned int cpu;
sync_counter();
cpu = smp_processor_id();
cpu = raw_smp_processor_id();
set_my_cpu_offset(per_cpu_offset(cpu));
rcutree_report_cpu_starting(cpu);
cpu_probe();
constant_clockevent_init();

View File

@@ -411,7 +411,11 @@ static int add_exception_handler(const struct bpf_insn *insn,
off_t offset;
struct exception_table_entry *ex;
if (!ctx->image || !ctx->prog->aux->extable || BPF_MODE(insn->code) != BPF_PROBE_MEM)
if (!ctx->image || !ctx->prog->aux->extable)
return 0;
if (BPF_MODE(insn->code) != BPF_PROBE_MEM &&
BPF_MODE(insn->code) != BPF_PROBE_MEMSX)
return 0;
if (WARN_ON_ONCE(ctx->num_exentries >= ctx->prog->aux->num_exentries))
@@ -450,7 +454,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
{
u8 tm = -1;
u64 func_addr;
bool func_addr_fixed;
bool func_addr_fixed, sign_extend;
int i = insn - ctx->prog->insnsi;
int ret, jmp_offset;
const u8 code = insn->code;
@@ -468,8 +472,23 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
/* dst = src */
case BPF_ALU | BPF_MOV | BPF_X:
case BPF_ALU64 | BPF_MOV | BPF_X:
move_reg(ctx, dst, src);
emit_zext_32(ctx, dst, is32);
switch (off) {
case 0:
move_reg(ctx, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case 8:
move_reg(ctx, t1, src);
emit_insn(ctx, extwb, dst, t1);
break;
case 16:
move_reg(ctx, t1, src);
emit_insn(ctx, extwh, dst, t1);
break;
case 32:
emit_insn(ctx, addw, dst, src, LOONGARCH_GPR_ZERO);
break;
}
break;
/* dst = imm */
@@ -534,39 +553,71 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
/* dst = dst / src */
case BPF_ALU | BPF_DIV | BPF_X:
case BPF_ALU64 | BPF_DIV | BPF_X:
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
if (!off) {
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
emit_sext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_sext_32(ctx, t1, is32);
emit_insn(ctx, divd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst / imm */
case BPF_ALU | BPF_DIV | BPF_K:
case BPF_ALU64 | BPF_DIV | BPF_K:
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
if (!off) {
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
move_imm(ctx, t1, imm, false);
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, dst, is32);
emit_insn(ctx, divd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst % src */
case BPF_ALU | BPF_MOD | BPF_X:
case BPF_ALU64 | BPF_MOD | BPF_X:
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
if (!off) {
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
emit_sext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_sext_32(ctx, t1, is32);
emit_insn(ctx, modd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst % imm */
case BPF_ALU | BPF_MOD | BPF_K:
case BPF_ALU64 | BPF_MOD | BPF_K:
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
if (!off) {
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
move_imm(ctx, t1, imm, false);
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, dst, is32);
emit_insn(ctx, modd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = -dst */
@@ -712,6 +763,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
break;
case BPF_ALU | BPF_END | BPF_FROM_BE:
case BPF_ALU64 | BPF_END | BPF_FROM_LE:
switch (imm) {
case 16:
emit_insn(ctx, revb2h, dst, dst);
@@ -828,7 +880,11 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
/* PC += off */
case BPF_JMP | BPF_JA:
jmp_offset = bpf2la_offset(i, off, ctx);
case BPF_JMP32 | BPF_JA:
if (BPF_CLASS(code) == BPF_JMP)
jmp_offset = bpf2la_offset(i, off, ctx);
else
jmp_offset = bpf2la_offset(i, imm, ctx);
if (emit_uncond_jmp(ctx, jmp_offset) < 0)
goto toofar;
break;
@@ -879,31 +935,56 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
case BPF_LDX | BPF_PROBE_MEM | BPF_W:
case BPF_LDX | BPF_PROBE_MEM | BPF_H:
case BPF_LDX | BPF_PROBE_MEM | BPF_B:
/* dst_reg = (s64)*(signed size *)(src_reg + off) */
case BPF_LDX | BPF_MEMSX | BPF_B:
case BPF_LDX | BPF_MEMSX | BPF_H:
case BPF_LDX | BPF_MEMSX | BPF_W:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
sign_extend = BPF_MODE(insn->code) == BPF_MEMSX ||
BPF_MODE(insn->code) == BPF_PROBE_MEMSX;
switch (BPF_SIZE(code)) {
case BPF_B:
if (is_signed_imm12(off)) {
emit_insn(ctx, ldbu, dst, src, off);
if (sign_extend)
emit_insn(ctx, ldb, dst, src, off);
else
emit_insn(ctx, ldbu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, ldxbu, dst, src, t1);
if (sign_extend)
emit_insn(ctx, ldxb, dst, src, t1);
else
emit_insn(ctx, ldxbu, dst, src, t1);
}
break;
case BPF_H:
if (is_signed_imm12(off)) {
emit_insn(ctx, ldhu, dst, src, off);
if (sign_extend)
emit_insn(ctx, ldh, dst, src, off);
else
emit_insn(ctx, ldhu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, ldxhu, dst, src, t1);
if (sign_extend)
emit_insn(ctx, ldxh, dst, src, t1);
else
emit_insn(ctx, ldxhu, dst, src, t1);
}
break;
case BPF_W:
if (is_signed_imm12(off)) {
emit_insn(ctx, ldwu, dst, src, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, ldptrw, dst, src, off);
if (sign_extend)
emit_insn(ctx, ldw, dst, src, off);
else
emit_insn(ctx, ldwu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, ldxwu, dst, src, t1);
if (sign_extend)
emit_insn(ctx, ldxw, dst, src, t1);
else
emit_insn(ctx, ldxwu, dst, src, t1);
}
break;
case BPF_DW:

View File

@@ -475,13 +475,13 @@
* to a CPU TLB 4k PFN (4k => 12 bits to shift) */
#define PAGE_ADD_SHIFT (PAGE_SHIFT-12)
#define PAGE_ADD_HUGE_SHIFT (REAL_HPAGE_SHIFT-12)
#define PFN_START_BIT (63-ASM_PFN_PTE_SHIFT+(63-58)-PAGE_ADD_SHIFT)
/* Drop prot bits and convert to page addr for iitlbt and idtlbt */
.macro convert_for_tlb_insert20 pte,tmp
#ifdef CONFIG_HUGETLB_PAGE
copy \pte,\tmp
extrd,u \tmp,(63-ASM_PFN_PTE_SHIFT)+(63-58)+PAGE_ADD_SHIFT,\
64-PAGE_SHIFT-PAGE_ADD_SHIFT,\pte
extrd,u \tmp,PFN_START_BIT,PFN_START_BIT+1,\pte
depdi _PAGE_SIZE_ENCODING_DEFAULT,63,\
(63-58)+PAGE_ADD_SHIFT,\pte
@@ -489,8 +489,7 @@
depdi _HUGE_PAGE_SIZE_ENCODING_DEFAULT,63,\
(63-58)+PAGE_ADD_HUGE_SHIFT,\pte
#else /* Huge pages disabled */
extrd,u \pte,(63-ASM_PFN_PTE_SHIFT)+(63-58)+PAGE_ADD_SHIFT,\
64-PAGE_SHIFT-PAGE_ADD_SHIFT,\pte
extrd,u \pte,PFN_START_BIT,PFN_START_BIT+1,\pte
depdi _PAGE_SIZE_ENCODING_DEFAULT,63,\
(63-58)+PAGE_ADD_SHIFT,\pte
#endif

View File

@@ -70,9 +70,8 @@ $bss_loop:
stw,ma %arg2,4(%r1)
stw,ma %arg3,4(%r1)
#if !defined(CONFIG_64BIT) && defined(CONFIG_PA20)
/* This 32-bit kernel was compiled for PA2.0 CPUs. Check current CPU
* and halt kernel if we detect a PA1.x CPU. */
#if defined(CONFIG_PA20)
/* check for 64-bit capable CPU as required by current kernel */
ldi 32,%r10
mtctl %r10,%cr11
.level 2.0

View File

@@ -8,12 +8,7 @@ static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
/*
* PowerPC's implementation of phys_mem_access_prot() does
* not use the file argument. Set it to NULL in preparation
* of later updates to the interface.
*/
return phys_mem_access_prot(NULL, PHYS_PFN(offset), vm_end - vm_start, prot);
return __phys_mem_access_prot(PHYS_PFN(offset), vm_end - vm_start, prot);
}
#define pgprot_framebuffer pgprot_framebuffer

View File

@@ -10,7 +10,7 @@
#include <linux/export.h>
struct pt_regs;
struct pci_bus;
struct pci_bus;
struct device_node;
struct iommu_table;
struct rtc_time;
@@ -78,8 +78,8 @@ struct machdep_calls {
unsigned char (*nvram_read_val)(int addr);
void (*nvram_write_val)(int addr, unsigned char val);
ssize_t (*nvram_write)(char *buf, size_t count, loff_t *index);
ssize_t (*nvram_read)(char *buf, size_t count, loff_t *index);
ssize_t (*nvram_size)(void);
ssize_t (*nvram_read)(char *buf, size_t count, loff_t *index);
ssize_t (*nvram_size)(void);
void (*nvram_sync)(void);
/* Exception handlers */
@@ -102,12 +102,11 @@
*/
long (*feature_call)(unsigned int feature, ...);
/* Get legacy PCI/IDE interrupt mapping */
/* Get legacy PCI/IDE interrupt mapping */
int (*pci_get_legacy_ide_irq)(struct pci_dev *dev, int channel);
/* Get access protection for /dev/mem */
pgprot_t (*phys_mem_access_prot)(struct file *file,
unsigned long pfn,
pgprot_t (*phys_mem_access_prot)(unsigned long pfn,
unsigned long size,
pgprot_t vma_prot);

View File

@@ -105,9 +105,7 @@ extern void of_scan_pci_bridge(struct pci_dev *dev);
extern void of_scan_bus(struct device_node *node, struct pci_bus *bus);
extern void of_rescan_bus(struct device_node *node, struct pci_bus *bus);
struct file;
extern pgprot_t pci_phys_mem_access_prot(struct file *file,
unsigned long pfn,
extern pgprot_t pci_phys_mem_access_prot(unsigned long pfn,
unsigned long size,
pgprot_t prot);

View File

@@ -120,9 +120,15 @@ static inline void mark_initmem_nx(void) { }
int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep, pte_t entry, int dirty);
pgprot_t __phys_mem_access_prot(unsigned long pfn, unsigned long size,
pgprot_t vma_prot);
struct file;
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
static inline pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot)
{
return __phys_mem_access_prot(pfn, size, vma_prot);
}
#define __HAVE_PHYS_MEM_ACCESS_PROT
void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);

View File

@@ -521,8 +521,7 @@ int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
* PCI device, it tries to find the PCI device first and calls the
* above routine
*/
pgprot_t pci_phys_mem_access_prot(struct file *file,
unsigned long pfn,
pgprot_t pci_phys_mem_access_prot(unsigned long pfn,
unsigned long size,
pgprot_t prot)
{

View File

@@ -752,6 +752,8 @@ static int ppc_rtas_tone_volume_show(struct seq_file *m, void *v)
/**
* ppc_rtas_rmo_buf_show() - Describe RTAS-addressable region for user space.
* @m: seq_file output target.
* @v: Unused.
*
* Base + size description of a range of RTAS-addressable memory set
* aside for user space to use as work area(s) for certain RTAS

View File

@@ -35,18 +35,18 @@ unsigned long long memory_limit;
unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot)
pgprot_t __phys_mem_access_prot(unsigned long pfn, unsigned long size,
pgprot_t vma_prot)
{
if (ppc_md.phys_mem_access_prot)
return ppc_md.phys_mem_access_prot(file, pfn, size, vma_prot);
return ppc_md.phys_mem_access_prot(pfn, size, vma_prot);
if (!page_is_ram(pfn))
vma_prot = pgprot_noncached(vma_prot);
return vma_prot;
}
EXPORT_SYMBOL(phys_mem_access_prot);
EXPORT_SYMBOL(__phys_mem_access_prot);
#ifdef CONFIG_MEMORY_HOTPLUG
static DEFINE_MUTEX(linear_mapping_mutex);

View File

@@ -184,6 +184,7 @@ machine_arch_initcall(pseries, rtas_work_area_allocator_init);
/**
* rtas_work_area_reserve_arena() - Reserve memory suitable for RTAS work areas.
* @limit: Upper limit for memblock allocation.
*/
void __init rtas_work_area_reserve_arena(const phys_addr_t limit)
{

View File

@@ -392,7 +392,7 @@ static struct parisc_driver parport_driver __refdata = {
.remove = __exit_p(parport_remove_chip),
};
int parport_gsc_init(void)
static int parport_gsc_init(void)
{
return register_parisc_driver(&parport_driver);
}

View File

@@ -7,7 +7,8 @@
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_s390)) && __clang_major__ >= 18
defined(__TARGET_ARCH_s390) || defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
const volatile int skip = 0;
#else
const volatile int skip = 1;

View File

@@ -6,7 +6,8 @@
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
SEC("socket")

View File

@@ -6,7 +6,8 @@
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
SEC("socket")

View File

@@ -6,7 +6,8 @@
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
SEC("socket")

View File

@@ -6,7 +6,8 @@
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
SEC("socket")

View File

@@ -6,7 +6,8 @@
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
SEC("socket")