Re: [PATCH] mm: SLAB freelist randomization

From: Thomas Garnier
Date: Mon Apr 18 2016 - 15:52:35 EST


I agree, if we had a generic way to pass entropy across boots on all
architectures, that would be amazing. I will let the SLAB maintainers
decide whether to require CONFIG_ARCH_RANDOM or just document the
limitation.

On Mon, Apr 18, 2016 at 12:36 PM, Laura Abbott <labbott@xxxxxxxxxx> wrote:
> On 04/18/2016 08:59 AM, Thomas Garnier wrote:
>>
>> I will send the next version today. Note that get_random_bytes_arch
>> is used because at that stage we have 0 bits of entropy. It seemed
>> like a better idea to use the arch version, which will fall back on
>> the get_random_bytes sub-API in the worst case.
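
For reference, the fallback behaviour described above is roughly the
following (a sketch only; get_seed() is a hypothetical helper, not code
from the patch or from drivers/char/random.c):

#include <linux/random.h>

/* Prefer the architecture RNG (e.g. RDRAND on x86); fall back to
 * get_random_bytes() when no arch RNG is available, which is
 * approximately what get_random_bytes_arch() does per word. */
static void get_seed(unsigned long *seed)
{
	if (!arch_get_random_long(seed))
		get_random_bytes(seed, sizeof(*seed));
}
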
>>
>
> This is unfortunate for ARM/ARM64. Those platforms don't have a standard
> method for getting random numbers, so until additional entropy is added
> get_random_bytes will always return the same seed, and indeed I always
> see the same shuffle in a quick test on arm64. For KASLR, the workaround
> was to require the bootloader to pass in entropy. It might be good to
> either document this or require that this only be used with
> CONFIG_ARCH_RANDOM.
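
One way Laura's CONFIG_ARCH_RANDOM option could be encoded (a
hypothetical sketch, not part of the patch): skip the shuffle unless the
architecture can actually deliver boot-time entropy.

#include <linux/random.h>

/* Return true only when a real arch RNG backs the shuffle; without
 * CONFIG_ARCH_RANDOM the arch_get_random_long() stub always returns
 * false, so the freelists would keep their initial order. */
static bool __init freelist_random_usable(void)
{
	unsigned long dummy;

	return arch_get_random_long(&dummy);
}
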
>
>
>
>> On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier <thgarnie@xxxxxxxxxx>
>> wrote:
>>>
>>> Thanks for the comments. I will address them in a v2 early next week.
>>>
>>> If anyone has other comments, please let me know.
>>>
>>> Thomas
>>>
>>> On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches <joe@xxxxxxxxxxx> wrote:
>>>>
>>>> On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
>>>>>
>>>>> On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier <thgarnie@xxxxxxxxxx>
>>>>> wrote:
>>>>>>
>>>>>> Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize
>>>>>> the SLAB freelist. The list is randomized during initialization
>>>>>> of a new set of pages. The order for the different freelist sizes
>>>>>> is pre-computed at boot for performance. This security feature
>>>>>> reduces the predictability of the kernel SLAB allocator against
>>>>>> heap overflows, rendering attacks much less stable.
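
The boot-time pre-computation described in the changelog amounts to a
Fisher-Yates shuffle of an index list; a minimal sketch under that
assumption (shuffle_freelist_master() is a hypothetical name, and the
actual patch may draw its randomness differently):

#include <linux/random.h>

typedef unsigned short freelist_idx_t;	/* stand-in for mm/slab.c's type */

/* Fill list with 0..count-1, then Fisher-Yates shuffle it in place,
 * so every new set of slab pages can reuse the pre-shuffled order
 * instead of re-randomizing on each page allocation. */
static void __init shuffle_freelist_master(freelist_idx_t *list,
					   size_t count)
{
	size_t i, j;
	freelist_idx_t tmp;
	unsigned int rand;

	for (i = 0; i < count; i++)
		list[i] = i;

	for (i = count - 1; i > 0; i--) {
		get_random_bytes_arch(&rand, sizeof(rand));
		j = rand % (i + 1);
		tmp = list[i];
		list[i] = list[j];
		list[j] = tmp;
	}
}
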
>>>>
>>>>
>>>> trivia:
>>>>
>>>>>> @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache *cachep, int index)
>>>>
>>>> []
>>>>>>
>>>>>> + */
>>>>>> +static freelist_idx_t master_list_2[2];
>>>>>> +static freelist_idx_t master_list_4[4];
>>>>>> +static freelist_idx_t master_list_8[8];
>>>>>> +static freelist_idx_t master_list_16[16];
>>>>>> +static freelist_idx_t master_list_32[32];
>>>>>> +static freelist_idx_t master_list_64[64];
>>>>>> +static freelist_idx_t master_list_128[128];
>>>>>> +static freelist_idx_t master_list_256[256];
>>>>>> +static struct m_list {
>>>>>> + size_t count;
>>>>>> + freelist_idx_t *list;
>>>>>> +} master_lists[] = {
>>>>>> + { ARRAY_SIZE(master_list_2), master_list_2 },
>>>>>> + { ARRAY_SIZE(master_list_4), master_list_4 },
>>>>>> + { ARRAY_SIZE(master_list_8), master_list_8 },
>>>>>> + { ARRAY_SIZE(master_list_16), master_list_16 },
>>>>>> + { ARRAY_SIZE(master_list_32), master_list_32 },
>>>>>> + { ARRAY_SIZE(master_list_64), master_list_64 },
>>>>>> + { ARRAY_SIZE(master_list_128), master_list_128 },
>>>>>> + { ARRAY_SIZE(master_list_256), master_list_256 },
>>>>>> +};
>>>>
>>>>
>>>> static const struct m_list?
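
Applied, Joe's suggestion would make the lookup table itself read-only;
the master_list_* arrays it points to still get shuffled, so they stay
writable. A sketch, reusing the declarations from the quoted hunk:

/* const: the table of {count, pointer} pairs is never written after
 * build time; only the pointed-to arrays are modified. */
static const struct m_list {
	size_t count;
	freelist_idx_t *list;
} master_lists[] = {
	{ ARRAY_SIZE(master_list_2), master_list_2 },
	{ ARRAY_SIZE(master_list_4), master_list_4 },
	{ ARRAY_SIZE(master_list_8), master_list_8 },
	{ ARRAY_SIZE(master_list_16), master_list_16 },
	{ ARRAY_SIZE(master_list_32), master_list_32 },
	{ ARRAY_SIZE(master_list_64), master_list_64 },
	{ ARRAY_SIZE(master_list_128), master_list_128 },
	{ ARRAY_SIZE(master_list_256), master_list_256 },
};
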
>>>>
>