On Fri, Aug 08, 2025 at 01:15:12PM +0800, Baolu Lu wrote:
> Yep, using guard(spinlock)() for scope-bound lock management sacrifices
> +static void kernel_pte_work_func(struct work_struct *work)
> +{
> +	struct ptdesc *ptdesc, *next;
> +
> +	iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);
> +
> +	guard(spinlock)(&kernel_pte_work.lock);
> +	list_for_each_entry_safe(ptdesc, next, &kernel_pte_work.list, pt_list) {
> +		list_del_init(&ptdesc->pt_list);
> +		pagetable_dtor_free(ptdesc);
> +	}
Do a list_move from kernel_pte_work.list to an on-stack list head and
then immediately release the lock. There is no reason to hold the spinlock
while doing the frees, and also no reason to do the list_del_init; that
memory probably gets zeroed in pagetable_dtor_free().
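
Something along these lines (untested, just sketching the shape; it keeps
the names from the hunk above and uses list_splice_init() to move the
whole pending list in one step):

static void kernel_pte_work_func(struct work_struct *work)
{
	struct ptdesc *ptdesc, *next;
	LIST_HEAD(free_list);		/* on-stack list head */

	iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);

	/* Hold the lock only long enough to steal the pending list. */
	scoped_guard(spinlock, &kernel_pte_work.lock)
		list_splice_init(&kernel_pte_work.list, &free_list);

	/* Free outside the lock; no per-entry list_del_init() needed. */
	list_for_each_entry_safe(ptdesc, next, &free_list, pt_list)
		pagetable_dtor_free(ptdesc);
}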
Jason