Merge tag 'mm-nonmm-stable-2025-12-06-11-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

 - "panic: sys_info: Refactor and fix a potential issue" (Andy Shevchenko)
   fixes a build issue and does some cleanup in lib/sys_info.c

 - "Implement mul_u64_u64_div_u64_roundup()" (David Laight)
   enhances the 64-bit math code on behalf of a PWM driver and beefs up
   the test module for these library functions

 - "scripts/gdb/symbols: make BPF debug info available to GDB" (Ilya Leoshkevich)
   makes BPF symbol names, sizes, and line numbers available to the GDB
   debugger

 - "Enable hung_task and lockup cases to dump system info on demand" (Feng Tang)
   adds a sysctl which can be used to cause additional info dumping when
   the hung-task and lockup detectors fire

 - "lib/base64: add generic encoder/decoder, migrate users" (Kuan-Wei Chiu)
   adds a general base64 encoder/decoder to lib/ and migrates several
   users away from their private implementations

 - "rbree: inline rb_first() and rb_last()" (Eric Dumazet)
   makes TCP a little faster

 - "liveupdate: Rework KHO for in-kernel users" (Pasha Tatashin)
   reworks the KEXEC Handover interfaces in preparation for Live Update
   Orchestrator (LUO), and possibly for other future clients

 - "kho: simplify state machine and enable dynamic updates" (Pasha Tatashin)
   increases the flexibility of KEXEC Handover, also in preparation for
   LUO

 - "Live Update Orchestrator" (Pasha Tatashin)
   is a major new feature targeted at cloud environments. Quoting the
   cover letter:

      This series introduces the Live Update Orchestrator, a kernel
      subsystem designed to facilitate live kernel updates using a
      kexec-based reboot. This capability is critical for cloud
      environments, allowing hypervisors to be updated with minimal
      downtime for running virtual machines. LUO achieves this by
      preserving the state of selected resources, such as memory,
      devices and their dependencies, across the kernel transition.

      As a key feature, this series includes support for preserving
      memfd file descriptors, which allows critical in-memory data, such
      as guest RAM or any other large memory region, to be maintained in
      RAM across the kexec reboot.

   Mike Rapoport merits a mention here for his extensive review and
   testing work.

 - "kexec: reorganize kexec and kdump sysfs" (Sourabh Jain)
   moves the kexec and kdump sysfs entries from /sys/kernel/ to
   /sys/kernel/kexec/ and adds back-compatibility symlinks which can
   hopefully be removed one day

 - "kho: fixes for vmalloc restoration" (Mike Rapoport)
   fixes a BUG which was being hit during KHO restoration of vmalloc()
   regions

* tag 'mm-nonmm-stable-2025-12-06-11-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (139 commits)
  calibrate: update header inclusion
  Reinstate "resource: avoid unnecessary lookups in find_next_iomem_res()"
  vmcoreinfo: track and log recoverable hardware errors
  kho: fix restoring of contiguous ranges of order-0 pages
  kho: kho_restore_vmalloc: fix initialization of pages array
  MAINTAINERS: TPM DEVICE DRIVER: update the W-tag
  init: replace simple_strtoul with kstrtoul to improve lpj_setup
  KHO: fix boot failure due to kmemleak access to non-PRESENT pages
  Documentation/ABI: new kexec and kdump sysfs interface
  Documentation/ABI: mark old kexec sysfs deprecated
  kexec: move sysfs entries to /sys/kernel/kexec
  test_kho: always print restore status
  kho: free chunks using free_page() instead of kfree()
  selftests/liveupdate: add kexec test for multiple and empty sessions
  selftests/liveupdate: add simple kexec-based selftest for LUO
  selftests/liveupdate: add userspace API selftests
  docs: add documentation for memfd preservation via LUO
  mm: memfd_luo: allow preserving memfd
  liveupdate: luo_file: add private argument to store runtime state
  mm: shmem: export some functions to internal.h
  ...

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_COMPILER_H
#define __LINUX_COMPILER_H
#include <linux/compiler_types.h>
#ifndef __ASSEMBLY__
#ifdef __KERNEL__
/*
* Note: DISABLE_BRANCH_PROFILING can be used by special low-level code
* to disable branch tracing on a per-file basis.
*/
void ftrace_likely_update(struct ftrace_likely_data *f, int val,
			  int expect, int is_constant);
#if defined(CONFIG_TRACE_BRANCH_PROFILING) \
&& !defined(DISABLE_BRANCH_PROFILING) && !defined(__CHECKER__)
#define likely_notrace(x) __builtin_expect(!!(x), 1)
#define unlikely_notrace(x) __builtin_expect(!!(x), 0)
#define __branch_check__(x, expect, is_constant) ({		\
	long ______r;						\
	static struct ftrace_likely_data			\
		__aligned(4)					\
		__section("_ftrace_annotated_branch")		\
		______f = {					\
			.data.func = __func__,			\
			.data.file = __FILE__,			\
			.data.line = __LINE__,			\
		};						\
	______r = __builtin_expect(!!(x), expect);		\
	ftrace_likely_update(&______f, ______r,			\
			     expect, is_constant);		\
	______r;						\
})
/*
* Using __builtin_constant_p(x) to ignore cases where the return
* value is always the same. This idea is taken from a similar patch
* written by Daniel Walker.
*/
# ifndef likely
# define likely(x) (__branch_check__(x, 1, __builtin_constant_p(x)))
# endif
# ifndef unlikely
# define unlikely(x) (__branch_check__(x, 0, __builtin_constant_p(x)))
# endif
#ifdef CONFIG_PROFILE_ALL_BRANCHES
/*
* "Define 'is'", Bill Clinton
* "Define 'if'", Steven Rostedt
*/
#define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
#define __trace_if_var(cond) (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))
#define __trace_if_value(cond) ({			\
	static struct ftrace_branch_data		\
		__aligned(4)				\
		__section("_ftrace_branch")		\
		__if_trace = {				\
			.func = __func__,		\
			.file = __FILE__,		\
			.line = __LINE__,		\
		};					\
	(cond) ?					\
		(__if_trace.miss_hit[1]++, 1) :		\
		(__if_trace.miss_hit[0]++, 0);		\
})
#endif /* CONFIG_PROFILE_ALL_BRANCHES */
#else
# define likely(x) __builtin_expect(!!(x), 1)
# define unlikely(x) __builtin_expect(!!(x), 0)
# define likely_notrace(x) likely(x)
# define unlikely_notrace(x) unlikely(x)
#endif
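/*
* Illustrative use of the annotations above (editor's sketch; the
* function and names are hypothetical, not part of this header):
*
*	static int parse_flags(const char *s, unsigned long *out)
*	{
*		if (unlikely(!s))	// error path, expected cold
*			return -EINVAL;
*		if (likely(*s))		// common case, expected hot
*			return kstrtoul(s, 0, out);
*		*out = 0;
*		return 0;
*	}
*
* With CONFIG_TRACE_BRANCH_PROFILING enabled, each annotated branch is
* additionally counted via ftrace_likely_update(); likely_notrace() and
* unlikely_notrace() skip that accounting.
*/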
/* Optimization barrier */
#ifndef barrier
/* The "volatile" is due to gcc bugs */
# define barrier() __asm__ __volatile__("": : :"memory")
#endif
#ifndef barrier_data
/*
* This version exists to prevent dead store elimination on @ptr,
* where gcc and llvm may behave differently when otherwise using a
* normal barrier(): while gcc gets along with a normal barrier(),
* llvm needs an explicit input variable to be assumed clobbered. The
* issue is as follows: while the inline asm might access any memory
* it wants, the compiler could have fit all of @ptr into registers
* instead, and since @ptr never escaped from there, it could prove
* that the inline asm wasn't touching any of it. This version works
* well with both compilers, i.e. we're telling the compiler that the
* inline asm absolutely may see the contents of @ptr.
* See also: https://llvm.org/bugs/show_bug.cgi?id=15495
*/
# define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
#endif
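/*
* Illustrative use (editor's sketch, hypothetical function): without
* the barrier, the compiler may delete the memset() of a dying local
* buffer as a dead store; barrier_data() makes it assume the cleared
* bytes can still be read. memzero_explicit() is built on this pattern.
*
*	static void use_secret_key(void)
*	{
*		u8 key[32];
*
*		get_random_bytes(key, sizeof(key));
*		// ... use key ...
*		memset(key, 0, sizeof(key));
*		barrier_data(key);	// keep the clearing stores alive
*	}
*/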
/* workaround for GCC PR82365 if needed */
#ifndef barrier_before_unreachable
# define barrier_before_unreachable() do { } while (0)
#endif
/* Unreachable code */
#ifdef CONFIG_OBJTOOL
/* Annotate a C jump table to allow objtool to follow the code flow */
#define __annotate_jump_table __section(".data.rel.ro.c_jump_table")
#else /* !CONFIG_OBJTOOL */
#define __annotate_jump_table
#endif /* CONFIG_OBJTOOL */
/*
* Mark a position in code as unreachable. This can be used to
* suppress control flow warnings after asm blocks that transfer
* control elsewhere.
*/
#define unreachable() do {		\
	barrier_before_unreachable();	\
	__builtin_unreachable();	\
} while (0)
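/*
* Illustrative use (editor's sketch, hypothetical function): after an
* asm statement that never returns, unreachable() silences "control
* reaches end of non-void function" warnings and keeps objtool's view
* of the control flow correct.
*
*	static void __noreturn hypothetical_halt(void)
*	{
*		asm volatile("1: jmp 1b");	// spins forever (x86 sketch)
*		unreachable();
*	}
*/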
/*
* KENTRY - kernel entry point
* This can be used to annotate symbols (functions or data) that are used
* without their linker symbol being referenced explicitly. For example,
* interrupt vector handlers, or functions in the kernel image that are found
* programmatically.
*
* Not required for symbols exported with EXPORT_SYMBOL, or initcalls. Those
* are handled in their own way (with KEEP() in linker scripts).
*
* KENTRY can be avoided if the symbols in question are marked as KEEP() in the
* linker script. For example, an architecture could KEEP() its entire
* boot/exception vector code rather than annotate each function and data.
*/
#ifndef KENTRY
# define KENTRY(sym)						\
	extern typeof(sym) sym;					\
	static const unsigned long __kentry_##sym		\
	__used							\
	__attribute__((__section__("___kentry+" #sym)))		\
	= (unsigned long)&sym;
#endif
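/*
* Illustrative use (editor's sketch, hypothetical symbol): a handler
* referenced only from a linker script or asm can be protected from
* --gc-sections by force-emitting a pointer to it:
*
*	void hypothetical_vector(void);
*	KENTRY(hypothetical_vector)	// __used pointer in "___kentry+..."
*/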
#ifndef RELOC_HIDE
# define RELOC_HIDE(ptr, off)					\
	({ unsigned long __ptr;					\
	   __ptr = (unsigned long) (ptr);			\
	   (typeof(ptr)) (__ptr + (off)); })
#endif
#define absolute_pointer(val) RELOC_HIDE((void *)(val), 0)
#ifndef OPTIMIZER_HIDE_VAR
/* Make the optimizer believe the variable can be manipulated arbitrarily. */
#define OPTIMIZER_HIDE_VAR(var)					\
	__asm__ ("" : "=r" (var) : "0" (var))
#endif
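/*
* Illustrative use (editor's sketch, hypothetical function): laundering
* a value through OPTIMIZER_HIDE_VAR() defeats constant folding and
* value tracking, e.g. so a measurement loop is not optimized away.
*
*	static u64 churn(u64 v, int iters)
*	{
*		while (iters--) {
*			OPTIMIZER_HIDE_VAR(v);	// v is "unknown" again here
*			v = v * 2654435761UL + 1;
*		}
*		return v;
*	}
*/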
/* Format: __UNIQUE_ID_<name>_<__COUNTER__> */
#define __UNIQUE_ID(name)					\
	__PASTE(__UNIQUE_ID_,					\
		__PASTE(name,					\
			__PASTE(_, __COUNTER__)))
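/*
* Illustrative expansion (editor's sketch): with __COUNTER__ currently
* at 7, __UNIQUE_ID(foo) pastes to __UNIQUE_ID_foo_7; each expansion
* consumes a fresh counter value, so multiple uses in one translation
* unit never collide.
*/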
/**
* data_race - mark an expression as containing intentional data races
*
* This data_race() macro is useful for situations in which data races
* should be forgiven. One example is diagnostic code that accesses
* shared variables but is not a part of the core synchronization design.
* For example, if accesses to a given variable are protected by a lock,
* except for diagnostic code, then the accesses under the lock should
* be plain C-language accesses and those in the diagnostic code should
* use data_race(). This way, KCSAN will complain if buggy lockless
* accesses to that variable are introduced, even if the buggy accesses
* are protected by READ_ONCE() or WRITE_ONCE().
*
* This macro *does not* affect normal code generation, but is a hint
* to tooling that data races here are to be ignored. If the access must
* be atomic *and* KCSAN should ignore the access, use both data_race()
* and READ_ONCE(), for example, data_race(READ_ONCE(x)).
*/
#define data_race(expr)						\
({								\
	__kcsan_disable_current();				\
	__auto_type __v = (expr);				\
	__kcsan_enable_current();				\
	__v;							\
})
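/*
* Illustrative use (editor's sketch; 'stats' and its lock are
* hypothetical): the updater holds the lock and uses plain accesses,
* while diagnostic code deliberately races and tells KCSAN so.
*
*	spin_lock(&stats_lock);
*	stats.count++;		// plain access, protected by the lock
*	spin_unlock(&stats_lock);
*
*	// elsewhere, lockless and racy by design:
*	pr_debug("count=%lu\n", data_race(stats.count));
*/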
#ifdef __CHECKER__
#define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) (0)
#else /* __CHECKER__ */
#define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)sizeof(struct {_Static_assert(!(e), msg);}))
#endif /* __CHECKER__ */
/* &a[0] degrades to a pointer: a different type from an array */
#define __is_array(a) (!__same_type((a), &(a)[0]))
#define __must_be_array(a)	__BUILD_BUG_ON_ZERO_MSG(!__is_array(a), \
							"must be array")
#define __is_byte_array(a)	(__is_array(a) && sizeof((a)[0]) == 1)
#define __must_be_byte_array(a)	__BUILD_BUG_ON_ZERO_MSG(!__is_byte_array(a), \
							"must be byte array")
/*
* If the "nonstring" attribute isn't available, we have to return true
* so the __must_*() checks pass when "nonstring" isn't supported.
*/
#if __has_attribute(__nonstring__) && defined(__annotated)
#define __is_cstr(a) (!__annotated(a, nonstring))
#define __is_noncstr(a) (__annotated(a, nonstring))
#else
#define __is_cstr(a) (true)
#define __is_noncstr(a) (true)
#endif
/* Require C Strings (i.e. NUL-terminated) lack the "nonstring" attribute. */
#define __must_be_cstr(p)					\
	__BUILD_BUG_ON_ZERO_MSG(!__is_cstr(p),			\
				"must be C-string (NUL-terminated)")
#define __must_be_noncstr(p)					\
	__BUILD_BUG_ON_ZERO_MSG(!__is_noncstr(p),		\
				"must be non-C-string (not NUL-terminated)")
/*
* Use __typeof_unqual__() when available.
*
* XXX: Remove test for __CHECKER__ once
* sparse learns about __typeof_unqual__().
*/
#if CC_HAS_TYPEOF_UNQUAL && !defined(__CHECKER__)
# define USE_TYPEOF_UNQUAL 1
#endif
/*
* Define TYPEOF_UNQUAL() to use __typeof_unqual__() as the typeof
* operator when available, to return the unqualified type of the
* expression.
*/
#if defined(USE_TYPEOF_UNQUAL)
# define TYPEOF_UNQUAL(exp) __typeof_unqual__(exp)
#else
# define TYPEOF_UNQUAL(exp) __typeof__(exp)
#endif
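/*
* Illustrative behavior (editor's sketch):
*
*	volatile int vi;
*	TYPEOF_UNQUAL(vi) tmp;	// plain 'int' with __typeof_unqual__,
*				// 'volatile int' on older compilers
*/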
#endif /* __KERNEL__ */
#if defined(CONFIG_CFI) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
/*
* Force a reference to the external symbol so the compiler generates
* __kcfi_typid.
*/
#define KCFI_REFERENCE(sym) __ADDRESSABLE(sym)
#else
#define KCFI_REFERENCE(sym)
#endif
/**
* offset_to_ptr - convert a relative memory offset to an absolute pointer
* @off: the address of the 32-bit offset value
*/
static inline void *offset_to_ptr(const int *off)
{
	return (void *)((unsigned long)off + *off);
}
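/*
* Illustrative use (editor's sketch): relative references store a
* 32-bit offset from their own address rather than a 64-bit absolute
* pointer, halving table size and avoiding load-time relocations.
*
*	// tooling emits a table entry initialized to (target - &entry)
*	extern const int entry;
*	void *target = offset_to_ptr(&entry);	// recovers the address
*/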
#endif /* __ASSEMBLY__ */
/*
* Force the compiler to emit 'sym' as a symbol, so that we can reference
* it from inline assembler. Necessary in case 'sym' could be inlined
* otherwise, or eliminated entirely due to lack of references that are
* visible to the compiler.
*/
#define ___ADDRESSABLE(sym, __attrs)					\
	static void * __used __attrs					\
	__UNIQUE_ID(__PASTE(addressable_, sym)) = (void *)(uintptr_t)&sym;
#define __ADDRESSABLE(sym)						\
	___ADDRESSABLE(sym, __section(".discard.addressable"))
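/*
* Illustrative use (editor's sketch, hypothetical function): a symbol
* referenced only from inline asm is invisible to the compiler and may
* be dropped; planting a discarded pointer keeps it alive.
*
*	static void hypothetical_probe(void) { }
*	__ADDRESSABLE(hypothetical_probe)
*	// asm(".quad hypothetical_probe") can now always resolve
*/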
/*
* This returns a constant expression while determining if an argument is
* a constant expression, most importantly without evaluating the argument.
* Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
*
* Details:
* - sizeof() yields an integer constant expression, and does not evaluate
* the value of its operand; it only examines the type of its operand.
* - The result of comparing two integer constant expressions is also
* an integer constant expression.
* - The first literal "8" isn't important. It could be any literal value.
* - The second literal "8" is to avoid warnings about unaligned pointers;
* this could otherwise just be "1".
* - (long)(x) is used to avoid warnings about 64-bit types on 32-bit
* architectures.
* - The C Standard defines "null pointer constant", "(void *)0", as
* distinct from other void pointers.
* - If (x) is an integer constant expression, then the "* 0l" resolves
* it into an integer constant expression of value 0. Since it is cast to
* "void *", this makes the second operand a null pointer constant.
* - If (x) is not an integer constant expression, then the second operand
* resolves to a void pointer (but not a null pointer constant: the value
* is not an integer constant 0).
* - The conditional operator's third operand, "(int *)8", is an object
* pointer (to type "int").
* - The behavior (including the return type) of the conditional operator
* ("operand1 ? operand2 : operand3") depends on the kind of expressions
* given for the second and third operands. This is the central mechanism
* of the macro:
* - When one operand is a null pointer constant (i.e. when x is an integer
* constant expression) and the other is an object pointer (i.e. our
* third operand), the conditional operator returns the type of the
* object pointer operand (i.e. "int *"). Here, within the sizeof(), we
* would then get:
* sizeof(*((int *)(...))) == sizeof(int) == 4
* - When one operand is a void pointer (i.e. when x is not an integer
* constant expression) and the other is an object pointer (i.e. our
* third operand), the conditional operator returns a "void *" type.
* Here, within the sizeof(), we would then get:
* sizeof(*((void *)(...))) == sizeof(void) == 1
* - The equality comparison to "sizeof(int)" therefore depends on (x):
* sizeof(int) == sizeof(int) (x) was a constant expression
* sizeof(int) != sizeof(void) (x) was not a constant expression
*/
#define __is_constexpr(x) \
	(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
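/*
* Illustrative behavior (editor's sketch; BUILD_BUG_ON() comes from
* <linux/build_bug.h>):
*
*	int n = 4;
*
*	BUILD_BUG_ON(!__is_constexpr(4));		// 4 is an ICE: passes
*	BUILD_BUG_ON(!__is_constexpr(sizeof(n)));	// sizeof is an ICE: passes
*	// __is_constexpr(n) evaluates to 0: n is not a constant expression
*
* Because the argument only ever appears under sizeof(), even
* __is_constexpr(n++) evaluates nothing and has no side effects.
*/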
/*
* Whether 'type' is a signed type or an unsigned type. Supports scalar types,
* bool and also pointer types.
*/
#define is_signed_type(type) (((type)(-1)) < (__force type)1)
#define is_unsigned_type(type) (!is_signed_type(type))
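/*
* Illustrative behavior (editor's sketch):
*
*	is_signed_type(int)		// true:  (int)-1 < 1
*	is_signed_type(unsigned char)	// false: (unsigned char)-1 == 255
*	is_signed_type(char *)		// false: pointers compare as unsigned
*	is_signed_type(bool)		// false: (bool)-1 is 1, not less than 1
*/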
/*
* Useful shorthand for "is this condition known at compile-time?"
*
* Note that the condition may involve non-constant values,
* but the compiler may know enough about the details of the
* values to determine that the condition is statically true.
*/
#define statically_true(x) (__builtin_constant_p(x) && (x))
/*
* Similar to statically_true() but produces a constant expression
*
* To be used in conjunction with macros, such as BUILD_BUG_ON_ZERO(),
* which require their input to be a constant expression and for which
* statically_true() would otherwise fail.
*
* This is a trade-off: const_true() requires all its operands to be
* compile time constants. Otherwise, it would always return false,
* even in the most trivial cases like:
*
* true || non_const_var
*
* By contrast, statically_true() is able to fold more complex
* tautologies and will return true on expressions such as:
*
* !(non_const_var * 8 % 4)
*
* For the general case, statically_true() is better.
*/
#define const_true(x) __builtin_choose_expr(__is_constexpr(x), x, false)
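/*
* Illustrative contrast (editor's sketch; 'v' is a hypothetical
* non-constant variable, and folding assumes normal optimization):
*
*	statically_true(!(v * 8 % 4))	// true: the compiler folds the tautology
*	const_true(!(v * 8 % 4))	// false: v is not a constant expression
*	const_true(sizeof(long) == 8)	// a constant expression either way
*
* Only const_true() is itself a constant expression, so only it may be
* used inside BUILD_BUG_ON_ZERO() and similar.
*/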
/*
* This is needed in functions which generate the stack canary, see
* arch/x86/kernel/smpboot.c::start_secondary() for an example.
*/
#define prevent_tail_call_optimization() mb()
#include <asm/rwonce.h>
#endif /* __LINUX_COMPILER_H */