It has been reported that split_kernel_leaf_mapping() is trying to sleep in non-sleepable context. It does this when acquiring the pgtable_split_lock mutex, if either CONFIG_DEBUG_PAGEALLOC or CONFIG_KFENCE is enabled; these features change linear map permissions within softirq context during memory allocation and/or freeing. All other paths into this function are called from sleepable context and so are safe.

But it turns out that the memory whose permissions these two features may attempt to modify is always mapped by pte, so there is no need to attempt to split the mapping. So let's exit early in these cases and avoid attempting to take the mutex.

There is one wrinkle to this approach; late-initialized kfence allocates its pool from the buddy allocator, and that memory may be block mapped. So we must hook that allocation and convert it to pte mappings up front. Previously this was done as a side-effect of kfence protecting all the individual pages in its pool at init-time, but this no longer works due to the added early exit path in split_kernel_leaf_mapping(). So instead, do this via the existing arch_kfence_init_pool() arch hook, and reuse the existing linear_map_split_to_ptes() infrastructure.

Closes: https://lore.kernel.org/all/f24b9032-0ec9-47b1-8b95-c0eeac7a31c5@roeck-us.net/
Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <groeck@google.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Will Deacon <will@kernel.org>
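To make the ordering concrete, the shape of the fix in split_kernel_leaf_mapping() looks roughly like the sketch below. This is an illustration, not the literal upstream diff: range_is_pte_mapped() and split_kernel_leaf_mapping_locked() are hypothetical names standing in for whatever the real patch uses; only pgtable_split_lock is named by the commit message itself.

/*
 * Sketch only: exit before taking the mutex when the range is known to
 * be pte-mapped, so the DEBUG_PAGEALLOC/KFENCE softirq callers never
 * sleep.
 */
static int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
{
	int ret;

	/*
	 * DEBUG_PAGEALLOC and KFENCE only change permissions on memory
	 * that is already mapped by pte, and they may call in from
	 * softirq context; there is nothing to split for them.
	 */
	if (range_is_pte_mapped(start, end))	/* hypothetical helper */
		return 0;

	/* All remaining callers are in sleepable context. */
	mutex_lock(&pgtable_split_lock);
	ret = split_kernel_leaf_mapping_locked(start, end);	/* hypothetical */
	mutex_unlock(&pgtable_split_lock);

	return ret;
}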
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * arm64 KFENCE support.
 *
 * Copyright (C) 2020, Google LLC.
 */

#ifndef __ASM_KFENCE_H
#define __ASM_KFENCE_H

#include <asm/set_memory.h>

static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	set_memory_valid(addr, 1, !protect);

	return true;
}

#ifdef CONFIG_KFENCE
extern bool kfence_early_init;
static inline bool arm64_kfence_can_set_direct_map(void)
{
	return !kfence_early_init;
}
bool arch_kfence_init_pool(void);
#else /* CONFIG_KFENCE */
static inline bool arm64_kfence_can_set_direct_map(void) { return false; }
#endif /* CONFIG_KFENCE */

#endif /* __ASM_KFENCE_H */
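For context on the arch_kfence_init_pool() declaration above, the mmu.c side described by the commit message would walk the late-allocated pool and convert any block mappings to ptes. A minimal sketch, assuming a range-based wrapper around the linear_map_split_to_ptes() infrastructure (the real signature may differ); __kfence_pool and KFENCE_POOL_SIZE are the standard KFENCE pool symbols:

#include <linux/kfence.h>

/* Sketch: split the late-initialized kfence pool to ptes up front. */
bool arch_kfence_init_pool(void)
{
	unsigned long start = (unsigned long)__kfence_pool;
	unsigned long end = start + KFENCE_POOL_SIZE;

	/* An early-initialized pool is already mapped by pte. */
	if (kfence_early_init)
		return true;

	/*
	 * A late-initialized pool comes from the buddy allocator and may
	 * be block mapped; split it now so the early exit in
	 * split_kernel_leaf_mapping() remains safe for this memory.
	 */
	return linear_map_split_to_ptes(start, end) == 0;	/* assumed signature */
}

With this in place, kfence_protect_page() can keep calling set_memory_valid() on single pages from any context, since the pool it operates on is guaranteed to be pte-mapped by the time it runs.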