Re: [PATCH] drm/panfrost: Prevent race when handling page fault

From: Steven Price
Date: Fri Sep 06 2019 - 08:42:56 EST


On 06/09/2019 12:10, Rob Herring wrote:
> On Thu, Sep 5, 2019 at 1:11 PM Steven Price <steven.price@xxxxxxx> wrote:
>>
>> When handling a GPU page fault, addr_to_drm_mm_node() is used to
>> translate the GPU address to a buffer object. However, it is possible
>> for the buffer object to be freed after the function has returned,
>> resulting in a use-after-free of the BO.
>>
>> Change addr_to_drm_mm_node to return the panfrost_gem_object with an
>> extra reference on it, preventing the BO from being freed until after
>> the page fault has been handled.
>>
>> Signed-off-by: Steven Price <steven.price@xxxxxxx>
>> ---
>>
>> I've managed to trigger this, generating the following stack trace.
>
> Humm, the assumption was that a fault could only happen during a job
> and so a reference would already be held. Otherwise, couldn't the GPU
> also be accessing the BO after it is freed?

Ah, I guess I missed that in the commit message. This assumes that user
space doesn't include the BO in the job even though the GPU then goes on
to access it. AIUI Mesa wouldn't do this, but a buggy or malicious user
space could easily do it to crash the kernel.

> Also, looking at this again, I think we need to hold the mm_lock
> around the drm_mm_for_each_node().

Ah, good point. Squashing in the following should do the trick:

----8<----
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index ccc536d2e489..41f297aa259c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -396,26 +396,33 @@ static struct panfrost_gem_object *
 addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
 {
 	struct panfrost_gem_object *bo = NULL;
+	struct panfrost_file_priv *priv;
 	struct drm_mm_node *node;
 	u64 offset = addr >> PAGE_SHIFT;
 	struct panfrost_mmu *mmu;
 
 	spin_lock(&pfdev->as_lock);
 	list_for_each_entry(mmu, &pfdev->as_lru_list, list) {
-		struct panfrost_file_priv *priv;
-		if (as != mmu->as)
-			continue;
+		if (as == mmu->as)
+			break;
+	}
+	if (as != mmu->as)
+		goto out;
 
-		priv = container_of(mmu, struct panfrost_file_priv, mmu);
-		drm_mm_for_each_node(node, &priv->mm) {
-			if (offset >= node->start &&
-			    offset < (node->start + node->size)) {
-				bo = drm_mm_node_to_panfrost_bo(node);
-				drm_gem_object_get(&bo->base.base);
-				goto out;
-			}
+	priv = container_of(mmu, struct panfrost_file_priv, mmu);
+
+	spin_lock(&priv->mm_lock);
+
+	drm_mm_for_each_node(node, &priv->mm) {
+		if (offset >= node->start &&
+		    offset < (node->start + node->size)) {
+			bo = drm_mm_node_to_panfrost_bo(node);
+			drm_gem_object_get(&bo->base.base);
+			break;
 		}
 	}
+
+	spin_unlock(&priv->mm_lock);
 out:
 	spin_unlock(&pfdev->as_lock);
 	return bo;
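
For completeness, the caller of addr_to_drm_mm_node() then has to drop the
extra reference once the fault has been handled, otherwise the BO would be
leaked. A minimal sketch of that caller-side pattern (the wrapper name and
error handling below are illustrative, not the exact panfrost fault-handler
code, and it assumes drm_gem_object_put_unlocked() is the right put helper
for this kernel):

/*
 * Illustrative only: assumes the usual panfrost_device.h/panfrost_gem.h
 * context. Look up the BO for the faulting address, service the fault,
 * then drop the reference taken by addr_to_drm_mm_node() so the BO can
 * still be freed once user space releases its handle.
 */
static int handle_gpu_fault(struct panfrost_device *pfdev, int as, u64 addr)
{
	struct panfrost_gem_object *bo;

	bo = addr_to_drm_mm_node(pfdev, as, addr);
	if (!bo)
		return -ENOENT;	/* no BO mapped at this address */

	/* ... grow the heap / map the pages for the faulting address ... */

	drm_gem_object_put_unlocked(&bo->base.base);
	return 0;
}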