Re: [PATCH v1 0/4] perf: enable compression of record mode trace to save storage space

From: Alexey Budankov
Date: Mon Jan 14 2019 - 06:26:45 EST


On 14.01.2019 14:03, Jiri Olsa wrote:
> On Mon, Jan 14, 2019 at 11:43:31AM +0300, Alexey Budankov wrote:
>> Hi,
>> On 09.01.2019 20:28, Jiri Olsa wrote:
>>> On Mon, Dec 24, 2018 at 04:21:33PM +0300, Alexey Budankov wrote:
>>>>
>>>> buffers for asynchronous trace writing serve that purpose.
<SNIP>
>>>
>>> I don't like that it's only for aio, I can't really see why it's
>>
>> For serial streaming, on CPU bound workloads under full system utilization,
>> compression can induce more runtime overhead and increase data loss, because
>> the amount of code on the performance critical path grows; the size of the
>> written data shrinks, of course, but still. Feeding kernel buffer content to
>> a syscall from user space code gets extended with an intermediate copy into
>> user space memory and the compression math done on it in the middle.
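
To illustrate what the extra work on that path looks like, a rough sketch
(all names here are made up and zstd is used purely as an example codec):

#include <unistd.h>
#include <zstd.h>

/* serial path today: the mapped kernel buffer goes straight to the syscall */
static ssize_t push_plain(int fd, const void *kbuf, size_t size)
{
        return write(fd, kbuf, size);
}

/* the same path with compression: an intermediate user space buffer plus
 * the compression math sit between the kernel buffer and write() */
static ssize_t push_compressed(int fd, const void *kbuf, size_t size,
                               void *scratch, size_t scratch_size)
{
        size_t csize = ZSTD_compress(scratch, scratch_size, kbuf, size, 1);

        if (ZSTD_isError(csize))
                return -1;

        return write(fd, scratch, csize);
}
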
>>
>>> a problem for normal data.. can't we just have one layer before and
>>> stream the data to the compress function instead of the file (or aio
>>> buffers).. and that compress function would spit out 64K size COMPRESSED
>>> events, which would go to file (or aio buffers)
>>
>> It is already almost like that. Compression could be bridged through the
>> AIO buffers but still streamed to the file serially using record__pushfn(),
>> which would make some sense for moderate profiling cases on systems without
>> AIO support and thus without trace streaming based on it.
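
For the layering you describe, I imagine something like the sketch below: a
compress step that consumes raw trace data and emits self contained COMPRESSED
records of at most 64K, which could then go either to write() or into the AIO
buffers. The record layout and all names are made up for illustration, and
zstd is only an example codec:

#include <stdint.h>
#include <zstd.h>

/* made-up record layout, just to show the idea of self-contained chunks */
struct compressed_event {
        uint32_t type;          /* some new COMPRESSED record type */
        uint32_t size;          /* header + compressed payload, <= 64K */
        char     data[];
};

#define COMPRESSED_RECORD_MAX   (64 * 1024)

/* Compress one raw span into a caller provided 64K record buffer and
 * return the total record size to stream out, 0 on compression error.
 * The caller would cut the raw trace so that every chunk fits. */
static size_t compress_record(void *record_buf, const void *raw, size_t raw_size)
{
        struct compressed_event *ev = record_buf;
        size_t max_payload = COMPRESSED_RECORD_MAX - sizeof(*ev);
        size_t csize = ZSTD_compress(ev->data, max_payload, raw, raw_size, 1);

        if (ZSTD_isError(csize))
                return 0;

        ev->type = 0;           /* placeholder id for a new user space event */
        ev->size = sizeof(*ev) + csize;

        return ev->size;
}
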
>>
>>>
>>> the report side would process them (decompress) on the session layer
>>> before the tool callbacks are called
>>
>> It is already pretty similar to that.
>
> hum, AFAICS you do that in the report code, not on the session layer

Correct. The decompressor and the handling of compressed data chunks could be
moved to the session related code.
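
Something along these lines, as a rough sketch: the session layer would spot
a COMPRESSED record, decompress its payload and feed the recovered events
through the usual dispatch, so tool callbacks only ever see ordinary events.
The record layout and names are made up (matching the sketch above), and zstd
is only an example codec:

#include <stdint.h>
#include <stddef.h>
#include <zstd.h>

/* same made-up record layout as in the sketch above */
struct compressed_event {
        uint32_t type;
        uint32_t size;          /* header + compressed payload */
        char     data[];
};

/* stand-ins for the real session object and event dispatch */
struct session;
int session__deliver_raw(struct session *s, const void *buf, size_t size);

/* Decompress one COMPRESSED record and push the recovered stream of
 * ordinary records through the usual dispatch. */
static int session__process_compressed(struct session *s,
                                       const struct compressed_event *ev,
                                       void *scratch, size_t scratch_size)
{
        size_t payload = ev->size - sizeof(*ev);
        size_t dsize = ZSTD_decompress(scratch, scratch_size, ev->data, payload);

        if (ZSTD_isError(dsize))
                return -1;

        return session__deliver_raw(s, scratch, dsize);
}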

Thanks,
Alexey

>
> jirka
>