Re: Using DT overlays for adding virtual hardware

From: Jan Kiszka
Date: Fri Jun 10 2016 - 10:57:46 EST


On 2016-06-09 09:22, Arnd Bergmann wrote:
> On Wednesday, June 8, 2016 6:39:08 PM CEST Jan Kiszka wrote:
>>>>
>>>
>>> I just don't see how an ACPI based hypervisor can ever be certified for
>>> safety critical applications. It might be possible, but it would be
>>> an enormous undertaking; perhaps a subset without AML, but then again,
>>> can you even boot an ACPI box without it?
>>
>> ACPI is out of scope for us. We will probably continue to feed the
>> hypervisor with static platform information, generated in advance and
>> validated. Can be DT-based one day, but even that is more complex to
>> parse than our current structures.
>>
>> But does ACPI usually mean that the kernel no longer has DT support and
>> would not be able to handle any overlay? That could be a killer.
>
> The kernel always has DT support built-in, but there may be some code
> paths that do not look at DT properties when it was booted from ACPI.
>
> In particular, communicating things like interrupt mappings may be
> hard, as they are represented very differently on ACPI, so you no
> longer have an 'interrupt-parent' node to point to from your overlay.
>
> It's hard to say how things would work out when trying to load DT
> overlays in this configuration. My guess is that it's actually
> easier to do on x86 (which doesn't normally rely on ACPI for
> describing the core system) than on arm64.

OK. But let's see whether there really are systems with ACPI and without
pre-existing PCI. For now, I would say the probability is low: ACPI
usually means server, and servers love PCI...
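
Just to illustrate the interrupt-parent dependency Arnd mentions, the kind
of overlay fragment we are talking about would look roughly like this.
Everything below is a made-up sketch - node name, addresses and the &gic
label are placeholders - and the label reference is exactly what has no
target when the base description comes from ACPI instead of DT:

/dts-v1/;
/plugin/;

/ {
        fragment@0 {
                /* placeholder target; a real overlay would target the
                   node under which the virtual device should appear */
                target-path = "/";
                __overlay__ {
                        virt-device@f0000000 {
                                /* made-up compatible string, for
                                   illustration only */
                                compatible = "vendor,virt-device";
                                /* assumes 1-cell addresses/sizes at
                                   the target node */
                                reg = <0xf0000000 0x1000>;
                                /* &gic assumes a labelled interrupt
                                   controller node in the base DT -
                                   exactly what is missing on an
                                   ACPI-booted system */
                                interrupt-parent = <&gic>;
                                interrupts = <0 42 4>;
                        };
                };
        };
};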

>
>>> DT is safer since it contains state only.
>>>
>>>> To be clear, I'm not arguing *against* overlays as such, just making
>>>> sure that we're not prematurely choosing a solution just because it's
>>>> the one we're aware of.
>>
>> I'm open to any suggestion that is simple. Maybe we can extend a
>> trivial existing PCI host driver (like pci-host-generic) to work
>> without DT overlays - that would also be fine, at least from the
>> Jailhouse POV. However, not needing a kernel patch at all would be
>> even better.
>
> A few more observations:
>
> - you can easily have an arbitrary number of PCI host bridges, so you
> can always add another PCI bridge just for the virtual devices even
> on systems that have access to physical PCI devices in passthrough.
>
> - PCIe hotplugging seems well-defined enough to just make that work,
> without needing DT overlays.

The point is about adding virtual devices when there is no physical PCI
bus - when there is one, we can already sneak them in between the
physical devices.

Granted, once we run out of free slots, more work is needed: either via
virtual bridges (but the hypervisor is the last place we'd like to
touch), by forcing Linux to scan slots outside of the physical topology,
or by making it create bridge stubs for virtual devices that are not
assigned to a physical bus. But those are all PCI topics, not directly
related to the original point of adding the host bridge.
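
To make the pci-host-generic idea from above a bit more concrete: what
the overlay (or a static description) would essentially have to provide
is a node along these lines. This is only a rough sketch following the
existing host-generic-pci binding; all addresses and sizes are invented,
and a real node would additionally need an interrupt-map or an MSI
parent, which brings back the interrupt question from above:

        pci@40000000 {
                compatible = "pci-host-ecam-generic";
                device_type = "pci";
                /* assumes the parent bus uses #address-cells = <2> and
                   #size-cells = <2>; adjust reg/ranges otherwise */
                reg = <0x0 0x40000000 0x0 0x01000000>; /* ECAM window,
                   16 MB = 1 MB of config space per bus, 16 buses */
                bus-range = <0 15>;
                #address-cells = <3>;
                #size-cells = <2>;
                /* one 32-bit non-prefetchable MMIO window; the
                   addresses are placeholders the hypervisor would
                   have to pick */
                ranges = <0x02000000 0x0 0x41000000
                          0x0 0x41000000
                          0x0 0x01000000>;
        };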

>
> - The really tricky question is what to do about passthrough of
> host devices that are not PCI. The current generation of server
> class arm64 machines tend to have a bunch of those, and the
> expectation seems to be that hardware passthrough is the only
> way to get decent I/O performance to make up for the relatively
> slow CPU cores. If you are only concerned about emulated devices,
> that won't be a problem though.

Yes, that is tricky, but more from the analytical POV: which devices, or
which parts of devices, can we hand out to guests without jeopardizing
system integrity? There are no generic answers here, for sure.

Jan

--
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux