Re: [RFC RESEND] binfmt_elf: preserve original ELF e_flags in core dumps

From: Svetlana Parfenova
Date: Fri Aug 08 2025 - 11:54:53 EST


On 08/08/2025 03.14, Kees Cook wrote:
On Thu, Aug 07, 2025 at 07:13:50PM +0600, Svetlana Parfenova wrote:
On 07/08/2025 00.57, Kees Cook wrote:
On Wed, Aug 06, 2025 at 10:18:14PM +0600, Svetlana Parfenova wrote:
Preserve the original ELF e_flags from the executable in the
core dump header instead of relying on compile-time defaults
(ELF_CORE_EFLAGS or the value from the regset view). This
ensures that the ABI-specific flags in the dump file match the
actual binary being executed.

Save the e_flags field during ELF binary loading (in
load_elf_binary()) into the mm_struct, and later retrieve it
during core dump generation (in fill_note_info()). Use this
saved value to populate the e_flags in the core dump ELF
header.

Add a new Kconfig option, CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS,
to guard this behavior. Although motivated by a RISC-V use
case, the mechanism is generic and can be applied to all
architectures.
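Roughly, the mechanism looks like this (simplified sketch; the mm
field name below is only a placeholder, not necessarily what the
patch uses):

    /* fs/binfmt_elf.c, load_elf_binary(): remember the e_flags of
     * the ELF header being executed (field name illustrative). */
    current->mm->saved_e_flags = elf_ex->e_flags;

    /* fs/binfmt_elf.c, fill_note_info(): prefer the saved value
     * when the new option is enabled. */
    if (IS_ENABLED(CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS))
        flags = dump_task->mm->saved_e_flags;
    fill_elf_header(elf, phdrs, machine, flags);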

In the general case, is e_flags mismatched? i.e. why hide this
behind a Kconfig? Put another way, if I enabled this Kconfig and
dumped core from some regular x86_64 process, will e_flags be
different?


The Kconfig option is currently restricted to the RISC-V
architecture because it is not clear to me whether other
architectures need the actual e_flags value from the ELF header.
If the option is disabled, the core dump always uses a
compile-time value for e_flags, regardless of which method is
selected: ELF_CORE_EFLAGS or CORE_DUMP_USE_REGSET. That constant
does not necessarily reflect the actual e_flags of the running
process (at least on RISC-V), which can vary depending on how the
binary was compiled. So I added a third method that obtains the
real e_flags, and gated it behind a Kconfig option, since not all
users may need it.
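Either way the value is fixed at build time; the relevant part of
fill_note_info() is roughly this (paraphrased from fs/binfmt_elf.c):

    #ifdef CORE_DUMP_USE_REGSET
        view = task_user_regset_view(dump_task);
        machine = view->e_machine;
        flags = view->e_flags;      /* fixed per regset view */
    #else
        view = NULL;
        machine = ELF_ARCH;
        flags = ELF_CORE_EFLAGS;    /* compile-time constant, 0 unless
                                     * the arch overrides it */
    #endif
        ...
        fill_elf_header(elf, phdrs, machine, flags);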

Can you check if the ELF e_flags and the hard-coded e_flags actually
differ on other architectures? I'd rather avoid using the Kconfig so
we can have a common execution path for all architectures.


I checked various architectures, and most don't use e_flags in core
dumps at all - it is simply zero. For x86 this is valid, since it
doesn't define any e_flags values. However, architectures like ARM
do have meaningful e_flags, yet it is still set to zero in core
dumps. I guess the real question isn't core dump correctness, but
whether tools like GDB actually rely on e_flags to provide debug
information. It seems most architectures either don't use it or can
operate without it. RISC-V looks like the black sheep here: GDB
relies on e_flags to determine the ABI and interpret the core dump
correctly.
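For context, the RISC-V psABI encodes the ABI in e_flags, roughly
these bits (values as in binutils' elf/riscv.h):

    #define EF_RISCV_RVC              0x0001  /* compressed insns */
    #define EF_RISCV_FLOAT_ABI        0x0006  /* float ABI mask */
    #define EF_RISCV_FLOAT_ABI_SOFT   0x0000
    #define EF_RISCV_FLOAT_ABI_SINGLE 0x0002
    #define EF_RISCV_FLOAT_ABI_DOUBLE 0x0004
    #define EF_RISCV_FLOAT_ABI_QUAD   0x0006
    #define EF_RISCV_RVE              0x0008
    #define EF_RISCV_TSO              0x0010

So for a typical rv64gc binary "readelf -h" reports e.g.
Flags: 0x5, "RVC, double-float ABI", while the core dump today
carries 0x0, which describes a soft-float, non-RVC ABI regardless
of how the binary was actually built.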

What if I rework my patch the following way:
- remove the Kconfig option;
- add a function/macro that overrides e_flags with the value taken
  from the process, but that is only applied if the architecture
  explicitly opts in (rough sketch below).

Would that be a better approach?
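Something along these lines (all names below are made up, just to
show the shape):

    /* generic fallback, e.g. in include/linux/elf.h */
    #ifndef arch_coredump_e_flags
    #define arch_coredump_e_flags(mm, default_flags) (default_flags)
    #endif

    /* RISC-V opts in, e.g. in arch/riscv/include/asm/elf.h */
    #define arch_coredump_e_flags(mm, default_flags) \
        ((mm)->saved_e_flags)

    /* fs/binfmt_elf.c, fill_note_info() */
    flags = arch_coredump_e_flags(dump_task->mm, flags);
    fill_elf_header(elf, phdrs, machine, flags);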

--
Best regards,
Svetlana Parfenova