Re: [RFC PATCH 02/21] x86/virt/tdx: Enhance tdh_mem_page_aug() to support huge pages

From: Yan Zhao
Date: Tue Jul 08 2025 - 04:52:00 EST


On Thu, Apr 24, 2025 at 11:04:28AM +0800, Yan Zhao wrote:
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index f5e2a937c1e7..a66d501b5677 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -1595,9 +1595,18 @@ u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u
Per the discussion in the DPAMT series [*]:
"hpa here points to a 2M region that pamt_pages covers. We don't have
struct page that represents it. Passing 4k struct page would be
misleading IMO."

Should we update tdh_mem_page_aug() accordingly to take an hpa,
or should it take a struct folio instead?

[*] https://lore.kernel.org/all/3coaqkcfp7xtpvh2x4kph55qlopupknm7dmzqox6fakzaedhem@a2oysbvbshpm/


> .rdx = tdx_tdr_pa(td),
> .r8 = page_to_phys(page),
> };
> + unsigned long nr_pages = 1 << (level * 9);
> + struct folio *folio = page_folio(page);
> + unsigned long idx = 0;
> u64 ret;
>
> - tdx_clflush_page(page);
> + if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
> + (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
> + return -EINVAL;
> +
> + while (nr_pages--)
> + tdx_clflush_page(nth_page(page, idx++));
> +
> ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);
>
> *ext_err1 = args.rcx;
> --
> 2.43.2
>