On Fri, 24 Oct 2008, Steven Rostedt wrote:

> Some architectures do not support a way to read the irq flags that
> is set from "local_irq_save(flags)" to determine if interrupts were
> disabled or enabled. Ftrace uses this information to display to the
> user if the trace occurred with interrupts enabled or disabled.
Both alpha

	#define irqs_disabled() (getipl() == IPL_MAX)

and m68k

	static inline int irqs_disabled(void)
	{
		unsigned long flags;

		local_save_flags(flags);
		return flags & ~ALLOWINT;
	}

do have irqs_disabled(), but they don't have irqs_disabled_flags().
M68knommu has both, but they don't check the same thing:

	#define irqs_disabled() \
	({ \
		unsigned long flags; \
		local_save_flags(flags); \
		((flags & 0x0700) == 0x0700); \
	})

	static inline int irqs_disabled_flags(unsigned long flags)
	{
		if (flags & 0x0700)
			return 0;
		else
			return 1;
	}
Is there a semantic difference between them (except that the latter takes the
flags as a parameter)?
Or can we just extract the core logic of irqs_disabled() into
irqs_disabled_flags()?