On Sat, Jul 2, 2011 at 1:01 AM, Marco Stornelli
<marco.stornelli@xxxxxxxxx> wrote:
> It was easy because the record size had a fixed length (4096), so
> maybe at this point the record size information alone is sufficient.
> I see a small problem, however. I think we could use the debugfs
> interface to dump the log in an easy way, but we should also be able
> to dump it via /dev/mem. Especially on embedded systems, debugfs may
> not be mounted or not available at all. So maybe, as you said below,
> with these new patches we need a memset over the whole memory area
> when the first dump is taken. However, the original idea was to store
> old dumps as well. In addition, there is the problem of catching an
> oops right after a start-up that "cleans" the area before we can read
> it; at that point we have lost the previous dumps. To solve this we
> could use a "reset" parameter, but I think all of this is a little
> overkill. Maybe we can just bump up the record size if needed. What
> do you think?
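
On the /dev/mem point: agreed, it stays doable without debugfs as long
as userspace knows where the area lives. A rough userspace sketch,
assuming STRICT_DEVMEM doesn't get in the way; MEM_ADDRESS, MEM_SIZE
and RECORD_SIZE are placeholders for whatever the platform was
configured with:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEM_ADDRESS 0x8f000000UL	/* placeholder: area start */
#define MEM_SIZE    0x00040000UL	/* placeholder: area size */
#define RECORD_SIZE 4096UL		/* placeholder: record size */

int main(void)
{
	unsigned long off;
	char *area;
	int fd;

	fd = open("/dev/mem", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}

	area = mmap(NULL, MEM_SIZE, PROT_READ, MAP_SHARED, fd,
		    (off_t)MEM_ADDRESS);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Dump every slot; without a header check, stale or empty
	 * slots come out as garbage, which is part of the problem. */
	for (off = 0; off + RECORD_SIZE <= MEM_SIZE; off += RECORD_SIZE)
		fwrite(area + off, 1, RECORD_SIZE, stdout);

	munmap(area, MEM_SIZE);
	close(fd);
	return 0;
}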
The problem with a fixed record size of 4K is that it is not very
flexible, as some setups may need more dump data (and 4K is not that
much). Setting the record size via a module parameter or platform data
doesn't seem like a huge problem to me: even without debugfs you should
be able to export the record size somehow (since you were the one who
set it through the parameter in the first place) and then get the dumps
from /dev/mem.
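
As a concrete sketch of the parameter route (the variable name and
default below are mine, not the driver's current code):

#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned long record_size = 4096;	/* assumed default */
/* Mode 0444 also exposes the value under
 * /sys/module/<module>/parameters/record_size, so a /dev/mem-based
 * dumper can read back whatever size was configured at load time. */
module_param(record_size, ulong, 0444);
MODULE_PARM_DESC(record_size, "size in bytes of each oops dump record");

With the parameter read-only in sysfs, userspace never has to guess the
record size before walking the area.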
I've thought more about this problem today and came up with the
following alternative: have a debugfs entry that returns one
record-size chunk at a time, starting with the first entry and checking
each one for the header (and maybe for the presence of the timestamp,
to be sure). It would return each valid entry, skipping over the
invalid ones, and return an empty result once it reaches the end of the
memory zone. There could also be a "reset" entry so you can start over
from the first record. This way you wouldn't lose old entries and the
result would still be pretty easy to parse.
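
Roughly like this, as an untested kernel-side sketch; the "====" header
signature and all the names (oops_buf, next_record, the "ramoops"
directory) are illustrative assumptions, not the driver's actual
layout:

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>

static char *oops_buf;		/* assumed: mapping of the persistent area */
static size_t oops_buf_sz;	/* assumed: total size of the area */
static size_t record_size;	/* assumed: from module param/platform data */
static size_t next_record;	/* iterator: offset of the next slot to try */

static bool record_valid(const char *rec)
{
	/* Assumed convention: every valid record starts with "====" */
	return !memcmp(rec, "====", 4);
}

/* Returns one valid record per read(), skipping invalid slots, and 0
 * (EOF) past the end of the zone. Assumes count >= record_size. */
static ssize_t dump_read(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	while (next_record + record_size <= oops_buf_sz) {
		const char *rec = oops_buf + next_record;
		loff_t pos = 0;

		next_record += record_size;
		if (record_valid(rec))
			return simple_read_from_buffer(buf, count, &pos,
						       rec, record_size);
	}
	return 0;
}

/* Writing anything to the "reset" entry restarts the iteration. */
static ssize_t reset_write(struct file *file, const char __user *buf,
			   size_t count, loff_t *ppos)
{
	next_record = 0;
	return count;
}

static const struct file_operations dump_fops = {
	.owner = THIS_MODULE,
	.read  = dump_read,
};

static const struct file_operations reset_fops = {
	.owner = THIS_MODULE,
	.write = reset_write,
};

static int __init dump_debugfs_init(void)
{
	struct dentry *dir = debugfs_create_dir("ramoops", NULL);

	debugfs_create_file("dump", 0400, dir, NULL, &dump_fops);
	debugfs_create_file("reset", 0200, dir, NULL, &reset_fops);
	return 0;
}

Then "cat" on the dump entry walks the valid records one read() at a
time, and a write to "reset" rewinds, so old entries are never lost.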