On Thu, Aug 07, 2025 at 07:13:50PM +0600, Svetlana Parfenova wrote:
> On 07/08/2025 00.57, Kees Cook wrote:
>> On Wed, Aug 06, 2025 at 10:18:14PM +0600, Svetlana Parfenova wrote:
>>> Preserve the original ELF e_flags from the executable in the core
>>> dump header instead of relying on compile-time defaults
>>> (ELF_CORE_EFLAGS or the value from the regset view). This ensures
>>> that ABI-specific flags in the dump file match the actual binary
>>> being executed.
>>>
>>> Save the e_flags field into the mm_struct during ELF binary loading
>>> (in load_elf_binary()), and retrieve it later during core dump
>>> generation (in fill_note_info()). Use this saved value to populate
>>> the e_flags field in the core dump ELF header.
>>>
>>> Add a new Kconfig option, CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS, to
>>> guard this behavior. Although motivated by a RISC-V use case, the
>>> mechanism is generic and can be applied to all architectures.
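
For reference, the flow being described is roughly the following. This
is only a sketch based on the text above, so the mm_struct field name
(saved_e_flags) and the exact hook points are assumptions rather than
the actual patch:

/* fs/binfmt_elf.c: load_elf_binary(), once the ELF header is read */
#ifdef CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS
	current->mm->saved_e_flags = elf_ex->e_flags;
#endif

/* fs/binfmt_elf.c: fill_note_info(), when the core header is built */
#ifdef CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS
	fill_elf_header(elf, phdrs, ELF_ARCH, current->mm->saved_e_flags);
#else
	fill_elf_header(elf, phdrs, ELF_ARCH, ELF_CORE_EFLAGS);
#endif

The sketch ignores the regset path for brevity; the gated branch simply
swaps the compile-time constant for the value saved at exec time.
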
>> In the general case, is e_flags mismatched? i.e. why hide this
>> behind a Kconfig? Put another way, if I enabled this Kconfig and
>> dumped core from some regular x86_64 process, will e_flags be
>> different?

> The Kconfig option is currently restricted to the RISC-V
> architecture because it is not clear to me whether other
> architectures need the actual e_flags value from the ELF header. If
> this option is disabled, the core dump always uses a compile-time
> value for e_flags, regardless of which method is selected
> (ELF_CORE_EFLAGS or CORE_DUMP_USE_REGSET). That constant does not
> necessarily reflect the actual e_flags of the running process (at
> least on RISC-V), which can vary depending on how the binary was
> compiled. So I added a third method of obtaining e_flags that
> reflects the real value, and gated it behind a Kconfig option since
> not all users may need it.

Can you check if the ELF e_flags and the hard-coded e_flags actually
differ on other architectures? I'd rather avoid using the Kconfig so
we can have a common execution path for all architectures.
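
For comparing the values, readelf -h reports the Flags field for both
the executable and the resulting core file. Alternatively, a small
standalone checker along these lines (illustrative only, not part of
the patch) prints e_flags directly:

/*
 * eflags.c: print the ELF e_flags of a file (executable or core dump).
 * Build and run: gcc -o eflags eflags.c; ./eflags ./a.out; ./eflags core
 */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
		return 1;
	}

	FILE *f = fopen(argv[1], "rb");
	if (!f) {
		perror(argv[1]);
		return 1;
	}

	/* Read the identification bytes to pick the right header size. */
	unsigned char ident[EI_NIDENT];
	if (fread(ident, 1, EI_NIDENT, f) != EI_NIDENT ||
	    memcmp(ident, ELFMAG, SELFMAG) != 0) {
		fprintf(stderr, "%s: not an ELF file\n", argv[1]);
		fclose(f);
		return 1;
	}
	rewind(f);

	/*
	 * e_flags is a 32-bit word in both the 32- and 64-bit headers.
	 * Byte order is assumed to match the host (native check only).
	 */
	Elf32_Word flags;
	if (ident[EI_CLASS] == ELFCLASS64) {
		Elf64_Ehdr eh;
		if (fread(&eh, sizeof(eh), 1, f) != 1)
			goto short_read;
		flags = eh.e_flags;
	} else {
		Elf32_Ehdr eh;
		if (fread(&eh, sizeof(eh), 1, f) != 1)
			goto short_read;
		flags = eh.e_flags;
	}

	printf("%s: e_flags = 0x%x\n", argv[1], flags);
	fclose(f);
	return 0;

short_read:
	fprintf(stderr, "%s: truncated ELF header\n", argv[1]);
	fclose(f);
	return 1;
}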