Re: [PATCH v2 05/10] MIPS: Refactor mips_cps_core_entry implementation
From: Jiaxun Yang
Date: Thu Nov 09 2023 - 08:13:01 EST
On Wed, Nov 8, 2023, at 4:30 PM, Gregory CLEMENT wrote:
> Hello Jiaxun,
>
>> Now the exception vectors for CPS systems are allocated on the fly
>> with memblock as well.
>>
>> It will try to allocate from KSEG1 first, and then try to allocate
>> in low 4G if possible.
>>
>> The main reset vector is now generated by uasm, to avoid tons
>> of patches to the code. Other vectors are copied to the location
>> later.
>>
>> Signed-off-by: Jiaxun Yang <jiaxun.yang@xxxxxxxxxxx>
>> ---
>
>> +
>> +static int __init setup_cps_vecs(void)
>> +{
> [...]
>> +
>> + /* We want to ensure cache is clean before writing uncached mem */
>> + blast_dcache_range(TO_CAC(cps_vec_pa), TO_CAC(cps_vec_pa) + BEV_VEC_SIZE);
>
> In my case this call failed because when setup_cps_vecs is called, the
> cache information is not initialized yet!
>
> As a workaround I moved the cpu_cache_init() call before
> plat_smp_setup() in the /arch/mips/kernel/setup.c file.
>
> Obviously it is not the right thing to do, but it shows that the
> cache-related functions are called too early. For example, in
> blast_dcache_range, the value returned by cpu_dcache_line_size was 0
> instead of 64, because the value cpu_data[0].dcache.linesz was not set
> yet.
Oops, that's a problem!
>
> So I wonder how it managed to work in your setup. What is the machine
> running in QEMU?
I'm using QEMU Boston with vmlinux only.
QEMU does not emulate the cache at all, so this won't be a problem on
QEMU, but it may be a problem on actual hardware.
The proper solution might be to leave the allocation here but move the
uasm generation to a later point.
>
> Does it use something like the following line?
> #define cpu_dcache_line_size() 32
>
>
>> + bc_wback_inv(TO_CAC(cps_vec_pa), BEV_VEC_SIZE);
>> + __sync();
>> +
>> + cps_vec = (void *)TO_UNCAC(cps_vec_pa);
>> + mips_cps_build_core_entry(cps_vec);
>> +
>> + memcpy(cps_vec + 0x200, &excep_tlbfill, 0x80);
>> + memcpy(cps_vec + 0x280, &excep_xtlbfill, 0x80);
>> + memcpy(cps_vec + 0x300, &excep_cache, 0x80);
>> + memcpy(cps_vec + 0x380, &excep_genex, 0x80);
>> + memcpy(cps_vec + 0x400, &excep_intex, 0x80);
>> + memcpy(cps_vec + 0x480, &excep_ejtag, 0x80);
>> +
>> + /* Make sure no prefetched data in cache */
>> + blast_inv_dcache_range(TO_CAC(cps_vec_pa), TO_CAC(cps_vec_pa) + BEV_VEC_SIZE);
>> + bc_inv(TO_CAC(cps_vec_pa), BEV_VEC_SIZE);
>> + __sync();
>> +
>> + return 0;
>> +}
>
> [...]
>
>> /* If we have an FPU, enroll ourselves in the FPU-full mask */
>> @@ -110,10 +241,14 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
>> {
>> unsigned ncores, core_vpes, c, cca;
>> bool cca_unsuitable, cores_limited;
>> - u32 *entry_code;
>>
>> mips_mt_set_cpuoptions();
>>
>> + if (!core_entry_reg) {
>> + pr_err("core_entry address unsuitable, disabling smp-cps\n");
>> + goto err_out;
>> + }
>> +
>> /* Detect whether the CCA is unsuited to multi-core SMP */
>> cca = read_c0_config() & CONF_CM_CMASK;
>> switch (cca) {
>> @@ -145,20 +280,6 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
>> (cca_unsuitable && cpu_has_dc_aliases) ? " & " : "",
>> cpu_has_dc_aliases ? "dcache aliasing" : "");
>>
>> - /*
>> - * Patch the start of mips_cps_core_entry to provide:
>> - *
>> - * s0 = kseg0 CCA
>> - */
>> - entry_code = (u32 *)&mips_cps_core_entry;
>> - uasm_i_addiu(&entry_code, 16, 0, cca);
>> - UASM_i_LA(&entry_code, 17, (long)mips_gcr_base);
>> - BUG_ON((void *)entry_code > (void *)&mips_cps_core_entry_patch_end);
>> - blast_dcache_range((unsigned long)&mips_cps_core_entry,
>> - (unsigned long)entry_code);
>> - bc_wback_inv((unsigned long)&mips_cps_core_entry,
>> - (void *)entry_code - (void *)&mips_cps_core_entry);
>> - __sync();
>
> The original code here was called later during boot from
> kernel_init_freeable() which is called by kernel_init() after all the
> calls in start_kernel. That's why there was no issue before the move.
I guess moving the uasm generation code here will be helpful :-)
>
> Gregory
>
>>
--
- Jiaxun