Re: [PATCH 1/1] net: cdc_ncm: Allow for dwNtbOutMaxSize to be unset or zero

From: Bjørn Mork
Date: Fri Dec 03 2021 - 05:40:55 EST


Hello Lee!

Jakub Kicinski <kuba@xxxxxxxxxx> writes:

> On Thu, 2 Dec 2021 14:34:37 +0000 Lee Jones wrote:
>> Currently, due to the sequential use of min_t() and clamp_t() macros,
>> in cdc_ncm_check_tx_max(), if dwNtbOutMaxSize is not set, the logic
>> sets tx_max to 0. This is then used to allocate the data area of the
>> SKB requested later in cdc_ncm_fill_tx_frame().
>>
>> This does not cause an issue presently because, when memory is
>> allocated during the initialisation phase of SKB creation, more memory
>> (512b) is allocated than is required for the SKB headers alone (320b),
>> leaving some space (512b - 320b = 192b) for CDC data (172b).
>>
>> However, if more elements (for example 3 x u64 = [24b]) were added to
>> one of the SKB header structs, say 'struct skb_shared_info',
>> increasing its original size (320b [320b aligned]) to something larger
>> (344b [384b aligned]), then suddenly the CDC data (172b) no longer
>> fits in the spare SKB data area (512b - 384b = 128b).
>>
>> Consequently, the SKB bounds checking fails and the kernel panics:
>>
>> skbuff: skb_over_panic: text:ffffffff830a5b5f len:184 put:172 \
>> head:ffff888119227c00 data:ffff888119227c00 tail:0xb8 end:0x80 dev:<NULL>
>>
>> ------------[ cut here ]------------
>> kernel BUG at net/core/skbuff.c:110!
>> RIP: 0010:skb_panic+0x14f/0x160 net/core/skbuff.c:106
>> <snip>
>> Call Trace:
>> <IRQ>
>> skb_over_panic+0x2c/0x30 net/core/skbuff.c:115
>> skb_put+0x205/0x210 net/core/skbuff.c:1877
>> skb_put_zero include/linux/skbuff.h:2270 [inline]
>> cdc_ncm_ndp16 drivers/net/usb/cdc_ncm.c:1116 [inline]
>> cdc_ncm_fill_tx_frame+0x127f/0x3d50 drivers/net/usb/cdc_ncm.c:1293
>> cdc_ncm_tx_fixup+0x98/0xf0 drivers/net/usb/cdc_ncm.c:1514
>>
>> By overriding the max value with the default CDC_NCM_NTB_MAX_SIZE_TX
>> when it is not offered through the system-provided params, we ensure
>> that enough data space is allocated to handle the CDC data, meaning
>> no crash will occur.

Just out of curiosity: Is this a real device, or was this the result
of fuzzing around?

Not that it matters - it's obviously a bug to fix in any case. Good catch!

(We probably have many more bugs of the same kind, since we mostly
assume the device presents semi-sane values in the NCM parameter
struct.)

>> diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
>> index 24753a4da7e60..e303b522efb50 100644
>> --- a/drivers/net/usb/cdc_ncm.c
>> +++ b/drivers/net/usb/cdc_ncm.c
>> @@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
>> min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
>>
>> max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
>> + if (max == 0)
>> + max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
>>
>> /* some devices set dwNtbOutMaxSize too low for the above default */
>> min = min(min, max);

It's been a while since I looked at this, so excuse me if I read it
wrongly. But I think we need to catch more illegal/impossible values
than just zero here? Any buffer size which cannot hold a single
datagram is pointless.

Trying to figure out what I possibly meant to do with that

min = min(min, max);

I don't think it makes any sense? Does it? The "min" value we've
carefully calculated allows for one max-sized datagram plus headers. I
don't think we should ever continue with a smaller buffer than that.
Or are there cases where this is valid?
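
(Concretely: with a device leaving dwNtbOutMaxSize at zero - the case
Lee hit - max becomes 0 after the min_t(), min = min(min, 0) collapses
our carefully calculated lower bound to 0 as well, and the subsequent
clamp_t() then happily clamps tx_max to 0.)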

So that really should have been catching this bug with a

max = max(min, max);

or maybe more readable

if (max < min)
	max = min;
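
I.e. keeping your zero check and replacing the bogus min(min, max),
something like this (just a sketch of what I mean, completely
untested):

	min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);

	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
	if (max == 0)
		max = CDC_NCM_NTB_MAX_SIZE_TX;	/* dwNtbOutMaxSize not set */

	/* some devices set dwNtbOutMaxSize too low or to other impossible
	 * values - never continue with a buffer smaller than one max sized
	 * datagram plus headers
	 */
	if (max < min)
		max = min;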

What do you think?


Bjørn