Re: [PATCH 01/13] kdbus: add documentation

From: Andy Lutomirski
Date: Wed Feb 04 2015 - 18:03:37 EST


On Tue, Feb 3, 2015 at 2:09 AM, Daniel Mack <daniel@xxxxxxxxxx> wrote:
> Hi Andy,
>
> On 02/02/2015 09:12 PM, Andy Lutomirski wrote:
>> On Feb 2, 2015 1:34 AM, "Daniel Mack" <daniel@xxxxxxxxxx> wrote:
>
>>> That's right, but again: if an application wants to gather this kind of
>>> information about tasks it interacts with, it can do so today by looking
>>> at /proc or similar sources. Desktop machines already do exactly that;
>>> the kernel code executed in such cases very much resembles that in
>>> metadata.c and is certainly not cheaper. kdbus just makes such
>>> information more accessible when requested. Which information is
>>> collected is defined by bit-masks on both the sender and the receiver
>>> connections, and most applications will effectively use only a very
>>> limited set by default if they go through one of the more high-level
>>> libraries.
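
For reference, the /proc path amounts to something like this in
userspace (a minimal sketch, not code from this patch set; the
pid-reuse race it suffers from is exactly what send-time metadata is
meant to avoid):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Read the Uid:/Gid: lines for a task out of /proc/<pid>/status.
 * Inherently racy: the pid can be recycled between the time we
 * learn it and the time we open the file. */
static int read_task_creds(pid_t pid)
{
	char path[64], line[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/status", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "Uid:", 4) || !strncmp(line, "Gid:", 4))
			fputs(line, stdout);

	fclose(f);
	return 0;
}

int main(void)
{
	return read_task_creds(getpid());
}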
>>
>> I should rephrase a bit. Kdbus doesn't require use of send-time
>> metadata. It does, however, strongly encourage it, and it sounds like
>
> On the kernel level, kdbus just *offers* that, just like sockets offer
> SO_PASSCRED. On the userland level, kdbus helps applications get that
> information race-free, and more easily and quickly than they otherwise
> could.
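
For comparison, the SO_PASSCRED mechanism mentioned above looks roughly
like this over an AF_UNIX socketpair (a minimal sketch, error handling
omitted):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	char buf[1], cbuf[CMSG_SPACE(sizeof(struct ucred))];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg;
	int sv[2], on = 1;

	socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);

	/* ask the kernel to attach sender credentials to each message */
	setsockopt(sv[1], SOL_SOCKET, SO_PASSCRED, &on, sizeof(on));

	write(sv[0], "x", 1);
	recvmsg(sv[1], &msg, 0);

	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
	    cmsg->cmsg_type == SCM_CREDENTIALS) {
		struct ucred uc;

		memcpy(&uc, CMSG_DATA(cmsg), sizeof(uc));
		printf("pid=%d uid=%u gid=%u\n", (int)uc.pid, uc.uid, uc.gid);
	}
	return 0;
}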
>
>> systemd and other major users will use send-time metadata. Once that
>> happens, it's ABI (even if it's purely in userspace), and changing it
>> is asking for security holes to pop up. So you'll be mostly stuck
>> with it.
>
> We know we can't break the ABI. At most, we could deprecate item types
> and introduce new ones, but of course we want to avoid that at all
> costs. However, I fail to see how that is specific to send-time
> metadata, or to kdbus in general, as all ABIs have to be kept stable.
>
>> Do you have some simple benchmark code you can share? I'd like to
>> play with it a bit.
>
> Sure, it's part of the self-test suite. Call it with "-t benchmark" to
> run the benchmark as an isolated test with verbose output. The code for
> that lives in test-benchmark.c.

I see "latencies" of around 20 microseconds with lockdep and context
tracking off. For example:

stats (UNIX): 226730 packets processed, latency (nsecs) min/max/avg
3845 // 34828 // 4069
stats (KDBUS): 37103 packets processed, latency (nsecs) min/max/avg
19123 // 99660 // 20696

This is IMO not very good: roughly 5x the UNIX socket average. With
memfds off:

stats (UNIX): 226061 packets processed, latency (nsecs) min/max/avg
3885 // 32019 // 4079
stats (KDBUS): 83284 packets processed, latency (nsecs) min/max/avg
10525 // 42578 // 10932

With memfds off and the payload set to 8 bytes:

stats (KDBUS): 77669 packets processed, latency (nsecs) min/max/avg
9963 // 64325 // 11645
stats (UNIX): 253695 packets processed, latency (nsecs) min/max/avg
2986 // 56094 // 3565

Am I missing something here? This is slow enough that a lightweight
userspace dbus daemon should be able to outperform kdbus, or at least
come very close.

It would be kind of nice to know how long just the send call takes, too.
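
Something like this would do it for the UNIX side (a minimal sketch;
the same clock_gettime() bracketing could go around the kdbus send
ioctl in test-benchmark.c):

#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static long long ts_nsec(const struct timespec *ts)
{
	return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

int main(void)
{
	char payload[8] = "payload";
	struct timespec start, end;
	int sv[2];

	socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);

	/* bracket just the send syscall, excluding wakeup and receive */
	clock_gettime(CLOCK_MONOTONIC, &start);
	write(sv[0], payload, sizeof(payload));
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("send: %lld nsec\n", ts_nsec(&end) - ts_nsec(&start));
	return 0;
}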

--Andy