Re: [GIT] Networking

From: Hannes Frederic Sowa
Date: Mon Nov 09 2015 - 05:38:18 EST


Hello,

Ingo Molnar <mingo@xxxxxxxxxx> writes:

> * Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>> Does anybody have any particular other "uhhuh, overflow in multiplication"
>> issues in mind? Because the interface for a saturating multiplication (or
>> addition, for that matter) would actually be much easier. And would be trivial
>> to have as an inline asm for compatibility with older versions of gcc too.
>>
>> Then you could just do that jiffies conversion - or allocation, for that matter
>> - without any special overflow handling at all. Doing
>>
>> buf = kmalloc(sat_mul(sizeof(x), nr), GFP_KERNEL);
>>
>> would just magically work.
>
> Exactly: saturation is the default behavior for many GPU vector/pixel attributes
> as well, to simplify and speed up the code and the hardware. I always wanted our
> ABIs to saturate instead of duplicating complexity with overflow failure logic.
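
For concreteness, I read the proposed sat_mul as something along these
lines (only my sketch of the idea, nothing that exists in-tree; it is
built on the compiler overflow builtin, and the inline-asm fallback for
older gcc that Linus mentions is left out):

#include <stdint.h>

/*
 * Hypothetical saturating multiply: clamp to SIZE_MAX on overflow
 * instead of wrapping around.
 */
static inline size_t sat_mul(size_t a, size_t b)
{
	size_t res;

	if (__builtin_mul_overflow(a, b, &res))
		return SIZE_MAX;
	return res;
}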

I don't think saturation arithmetic is useful at all in the kernel as a
replacement for overflow/wrap-around checks. Linus' example introduces a
discrepancy between the number of bytes the caller expects and the
number of bytes actually allocated. Imagine sat_mul did its arithmetic
in signed char and kmalloc took only a signed char as its size argument:
the discrepancy could be huge, and that is exactly the kind of mismatch
that leads to security vulnerabilities. The call should definitely error
out here instead of allocating memory of some other size and handing it
back to the caller.
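
To make that thought experiment concrete (a userspace sketch, with
signed char chosen only to keep the numbers small):

#include <limits.h>
#include <stdio.h>

/* Saturating multiply done in signed char, as in the thought experiment. */
static signed char sat_mul_s8(signed char a, signed char b)
{
	int res = (int)a * (int)b;

	if (res > SCHAR_MAX)
		return SCHAR_MAX;
	if (res < SCHAR_MIN)
		return SCHAR_MIN;
	return (signed char)res;
}

int main(void)
{
	signed char size = 64, nr = 4;

	/* The caller believes it gets room for 4 * 64 = 256 bytes ... */
	int wanted = (int)size * (int)nr;

	/* ... but the saturated size is only 127 bytes. */
	signed char len = sat_mul_s8(size, nr);

	printf("requested %d bytes, allocation sized %d bytes\n",
	       wanted, (int)len);
	return 0;
}

Writing the 256 bytes the caller asked for into a 127-byte buffer is a
plain heap overflow.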

> In the kernel the first point of failure is missing overflow checks. The second
> point of failure are buggy overflow checks. We can eliminate both if we just use
> safe operations that produce output that never exits the valid range. This also
> happens to result in the simplest code. We should start thinking of overflow
> checks as rootkit enablers.

Sorry, I don't understand that at all. If anything, I fear sat_mul is
the rootkit enabler here. If saturation logic makes you allocate a
smaller chunk of memory than the caller actually asked for, that can
easily lead to memory corruption and hard-to-diagnose bugs.
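
What I would rather see is an explicit check that fails the allocation
instead of silently shrinking it, roughly like this (again just a
sketch with made-up names, using the compiler builtin; in the kernel
the malloc would of course be kmalloc with an error return):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Overflow-checked multiply: report the overflow to the caller instead
 * of wrapping or saturating, so the allocation can be refused outright.
 */
static inline int checked_mul(size_t a, size_t b, size_t *res)
{
	return __builtin_mul_overflow(a, b, res) ? -EOVERFLOW : 0;
}

static void *alloc_array(size_t size, size_t nr)
{
	size_t bytes;

	if (checked_mul(size, nr, &bytes))
		return NULL;	/* error out, do not shrink the request */
	return malloc(bytes);
}

int main(void)
{
	/* 8 * SIZE_MAX cannot be represented in a size_t, so this must fail. */
	void *p = alloc_array(8, SIZE_MAX);

	printf("overflowing request was %s\n", p ? "allocated?!" : "refused");
	free(p);
	return 0;
}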

> And note how much this simplifies review and static analysis: if this is the
> dominant model used in new kernel code then the analysis (human or machine) would
> only have to ensure that no untrusted input values get multiplied (or added) in an
> unsafe way. It would not have to be able to understand and track any 'overflow
> logic' through a maze of return paths, and validate whether the 'overflow logic'
> is correct for all input parameter ranges...

Sorry, I don't really understand that proposal. :/

> The flip side is marginally less ABI robustness: random input parameters due to
> memory corruption will just saturate and produce nonsensical results. I don't
> think it's a big issue, and I also think the simplicity of input parameter
> validation is _way_ more important than our behavior to random input - but I've
> been overruled in the past when trying to introduce saturating ABIs, so saturation
> is something people sometimes find inelegant.

If those nonsensical results turn into memory corruption, I don't agree
here either. I think we need to be very precise when dealing with
overflows.

Bye,
Hannes