
kernel/kexec: fix IMA when allocation happens in CMA area

*** Bug description ***

When I tested kexec with the latest kernel, I ran into the following warning:

[   40.712410] ------------[ cut here ]------------
[   40.712576] WARNING: CPU: 2 PID: 1562 at kernel/kexec_core.c:1001 kimage_map_segment+0x144/0x198
[...]
[   40.816047] Call trace:
[   40.818498]  kimage_map_segment+0x144/0x198 (P)
[   40.823221]  ima_kexec_post_load+0x58/0xc0
[   40.827246]  __do_sys_kexec_file_load+0x29c/0x368
[...]
[   40.855423] ---[ end trace 0000000000000000 ]---

*** How to reproduce ***

This bug is triggered only when the kexec target address is allocated in
the CMA area. If no CMA area is reserved in the kernel, use the "cma="
option on the kernel command line (for example "cma=256M") to reserve
one, then load the new kernel via kexec_file_load().

*** Root cause ***
Commit 07d24902977e ("kexec: enable CMA based contiguous allocation")
allocates the kexec target directly in the CMA area to avoid copying
during the kexec jump. In that case there are no IND_SOURCE entries for
the kexec segment, but the current implementation of
kimage_map_segment() assumes that IND_SOURCE pages exist and maps them
into a contiguous virtual address range with vmap().
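
Roughly, the pre-fix mapping path looks like the sketch below. This is a
simplified illustration, not the exact kernel/kexec_core.c code: the
helper name map_segment_via_ind_source() and the exact sanity check are
made up for this example. It pairs each IND_SOURCE page with its
IND_DESTINATION address, collects the pages that land inside the
segment, and stitches them together with vmap(). A CMA-backed segment
contributes no IND_SOURCE entries, so the collection comes up empty and
the WARN fires.

	#include <linux/kexec.h>
	#include <linux/mm.h>
	#include <linux/pfn.h>
	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	/* Simplified sketch of the pre-fix logic; not the exact kernel code. */
	static void *map_segment_via_ind_source(struct kimage *image,
						unsigned long addr,
						unsigned long size)
	{
		unsigned long eaddr = addr + size;
		unsigned int npages = PFN_UP(eaddr) - PFN_DOWN(addr);
		struct page **src_pages;
		kimage_entry_t *ptr, entry;
		unsigned long dest = 0;
		void *vaddr = NULL;
		unsigned int i = 0;

		src_pages = kmalloc_array(npages, sizeof(*src_pages), GFP_KERNEL);
		if (!src_pages)
			return NULL;

		/*
		 * Pair each IND_SOURCE page with its IND_DESTINATION address
		 * and collect the pages that fall inside [addr, eaddr).
		 * for_each_kimage_entry() is the local helper macro from
		 * kernel/kexec_core.c.
		 */
		for_each_kimage_entry(image, ptr, entry) {
			if (entry & IND_DESTINATION) {
				dest = entry & PAGE_MASK;
			} else if (entry & IND_SOURCE) {
				if (dest >= addr && dest < eaddr)
					src_pages[i++] = pfn_to_page(entry >> PAGE_SHIFT);
				dest += PAGE_SIZE;
			}
		}

		/*
		 * A CMA-backed segment has no IND_SOURCE entries at all, so
		 * the sanity check fails and the warning in the trace above
		 * (kexec_core.c:1001) fires instead of mapping anything.
		 */
		if (!WARN_ON(i < npages))
			vaddr = vmap(src_pages, npages, VM_MAP, PAGE_KERNEL);

		kfree(src_pages);
		return vaddr;
	}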

*** Solution ***
If the IMA segment is allocated in the CMA area, use its page_address()
directly, and make kimage_unmap_segment() call vunmap() only on buffers
that actually came from vmap().
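
For illustration, a caller pairs the two helpers like this after the
fix. The real consumer is the IMA kexec code shown in the backtrace;
example_touch_ima_segment() is a hypothetical function, not part of the
kernel.

	/* Hypothetical caller, for illustration only. */
	static int example_touch_ima_segment(struct kimage *image, int idx)
	{
		void *buf = kimage_map_segment(image, idx);

		if (!buf)
			return -ENOMEM;

		/*
		 * buf is either page_address() of the CMA block (linear
		 * mapping) or a vmap()ed alias of the scattered source
		 * pages; either way it is one contiguous buffer.
		 */
		memset(buf, 0, image->segment[idx].memsz); /* stand-in for real work */

		/* Safe in both cases: only vmalloc-space addresses get vunmap()ed. */
		kimage_unmap_segment(buf);
		return 0;
	}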

Link: https://lkml.kernel.org/r/20251216014852.8737-2-piliu@redhat.com
Fixes: 07d24902977e ("kexec: enable CMA based contiguous allocation")
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Cc: Alexander Graf <graf@amazon.com>
Cc: Steven Chen <chenste@linux.microsoft.com>
Cc: Mimi Zohar <zohar@linux.ibm.com>
Cc: Roberto Sassu <roberto.sassu@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit a3785ae5d3 (parent fe55ea8593)
Author:    Pingfan Liu <piliu@redhat.com>, 2025-12-16 09:48:52 +08:00
Committer: Andrew Morton <akpm@linux-foundation.org>


--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -960,13 +960,17 @@ void *kimage_map_segment(struct kimage *image, int idx)
 	kimage_entry_t *ptr, entry;
 	struct page **src_pages;
 	unsigned int npages;
+	struct page *cma;
 	void *vaddr = NULL;
 	int i;
 
+	cma = image->segment_cma[idx];
+	if (cma)
+		return page_address(cma);
 	addr = image->segment[idx].mem;
 	size = image->segment[idx].memsz;
 	eaddr = addr + size;
 
 	/*
 	 * Collect the source pages and map them in a contiguous VA range.
 	 */
@@ -1007,7 +1011,8 @@ void *kimage_map_segment(struct kimage *image, int idx)
 
 void kimage_unmap_segment(void *segment_buffer)
 {
-	vunmap(segment_buffer);
+	if (is_vmalloc_addr(segment_buffer))
+		vunmap(segment_buffer);
 }
 
 struct kexec_load_limit {