Re: Hardware Error Kernel Mini-Summit

From: Ingo Molnar
Date: Wed May 19 2010 - 03:09:57 EST

* Borislav Petkov <bp@xxxxxxxxx> wrote:

> From: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>
> Date: Tue, May 18, 2010 at 09:14:09PM -0400
> > - Errors that occur frequently. That is broken
> > hardware of one time or another. I want to know
> > about that so I can schedule down time to replace my
> > memory before I get an uncorrected ECC error.
> > Errors of this kind are likely happening frequently
> > enough as to impact performance.
> This is exactly the reason why we need better error
> logging and reporting than a plain log.
> [ ... lots of specific details snipped ... ]

Basically, the idea behind the generic structured logging
framework (the perf events kernel subsystem) is to have
ASCII output where desired (e.g. for critical errors), but
also a well-specified event format that user-space tools
can parse.

Plus there's the need for a fast, lightweight, flexible
event-passing mechanism - which the perf events transport
provides: arbitrary-size in-memory ring-buffers, poll()
and epoll support, etc.

perf events supports all these different usecases and
comes with a (constantly growing) set of events already
defined upstream. We've got more than a dozen upstream
subsystems that have defined events, and over a hundred
individual events. There's a rapidly growing tool space
that makes case-by-case use of these event sources to
measure/observe various aspects of the system.

Regarding dmesg, there's a WIP patch on lkml that
integrates printks into this framework as well - makes
each printk also available as a special string event.

That way a tool gets programmatic access to printk output
(without having to interact with the syslog buffer
itself), together with all the other structured log
sources, while humans can still see what is happening.

