Re: Linux guest kernel threat model for Confidential Computing

From: Dr. David Alan Gilbert
Date: Wed Jan 25 2023 - 10:30:09 EST


* Daniel P. Berrangé (berrange@xxxxxxxxxx) wrote:
> On Wed, Jan 25, 2023 at 01:42:53PM +0000, Dr. David Alan Gilbert wrote:
> > * Greg Kroah-Hartman (gregkh@xxxxxxxxxxxxxxxxxxx) wrote:
> > > On Wed, Jan 25, 2023 at 12:28:13PM +0000, Reshetova, Elena wrote:
> > > > Hi Greg,
> > > >
> > > > You mentioned a couple of times (last time in this recent thread:
> > > > https://lore.kernel.org/all/Y80WtujnO7kfduAZ@xxxxxxxxx/) that we ought to start
> > > > discussing the updated threat model for the kernel, so this email is a start in this direction.
> > >
> > > Any specific reason you didn't cc: the linux-hardening mailing list?
> > > This seems to be in their area as well, right?
> > >
> > > > As we have shared before in various lkml threads/conference presentations
> > > > ([1], [2], [3] and many others), for the Confidential Computing guest kernel, we have a
> > > > change in the threat model where the guest kernel no longer trusts the hypervisor.
> > >
> > > That is, frankly, a very funny threat model. How realistic is it really
> > > given all of the other ways that a hypervisor can mess with a guest?
> >
> > It's what a lot of people would like; in the early attempts it was easy
> > to defeat, but in TDX and SEV-SNP the hypervisor has a lot less that it
> > can mess with - remember that it's not just the memory that is encrypted,
> > so is the register state, and the guest gets to see changes to its
> > mappings and has a lot of control over interrupt injection etc.
> >
> > > So what do you actually trust here? The CPU? A device? Nothing?
> >
> > We trust the actual physical CPU, provided that it can prove that it's a
> > real CPU with the CoCo hardware enabled. Both the SNP and TDX hardware
> > can perform an attestation signed by the CPU to prove to someone
> > external that the guest is running on a real trusted CPU.
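
(To make that a little more concrete: from inside an SNP guest the signed
report comes back from the firmware via the sev-guest driver.  The sketch
below is illustrative only and untested - it assumes a 5.19+ kernel with
/dev/sev-guest, the "example-nonce" is made up, and all error handling plus
the actual parsing/verification of the report is left out.  TDX has an
equivalent report mechanism.)

  /*
   * Hypothetical example only: fetch an SNP attestation report through
   * the sev-guest driver.  No error handling, no parsing/verification
   * of the returned report.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/sev-guest.h>

  int main(void)
  {
          struct snp_report_req req = { 0 };
          struct snp_report_resp resp = { 0 };
          struct snp_guest_request_ioctl guest_req = {
                  .msg_version = 1,
                  .req_data = (__u64)(unsigned long)&req,
                  .resp_data = (__u64)(unsigned long)&resp,
          };
          int fd = open("/dev/sev-guest", O_RDWR);

          if (fd < 0)
                  return 1;

          /* 64 bytes of caller data get bound into the signed report,
           * e.g. a nonce from whoever is asking for the attestation. */
          memcpy(req.user_data, "example-nonce", 13);

          if (ioctl(fd, SNP_GET_REPORT, &guest_req) < 0) {
                  close(fd);
                  return 1;
          }

          /* resp.data now holds the firmware-signed report; ship it
           * (plus the cert chain) to whoever needs convincing. */
          printf("got a report in a %zu byte buffer\n", sizeof(resp.data));
          close(fd);
          return 0;
  }
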
> >
> > Note that the trust is limited:
> > a) We don't trust that we can make forward progress - if something
> > does something bad it's OK for the guest to stop.
> > b) We don't trust devices; we handle that by having the guest do
> > normal encryption, e.g. just LUKS on the disk and normal encrypted
> > networking. [There are a lot of schemes people are working on for how
> > the guest gets the keys etc for that.]
>
> I think we need to say more precisely what we mean by 'trust', as it
> can have quite a broad interpretation.
>
> As a baseline requirement, in the context of confidential computing the
> guest would not trust the hypervisor with data that needs to remain
> confidential, but would generally still expect it to provide a faithful
> implementation of a given device.
>
> IOW, the guest would expect the implementation of virtio-blk devices to
> be functionally correct per the virtio-blk specification, but would not
> trust the host to protect the confidentiality of any data stored on the
> disk.
>
> Any virtual device exposed to the guest that can transfer potentially
> sensitive data needs to have some form of guest-controlled encryption
> applied. For disks this is easy with FDE such as LUKS; for NICs it is
> already best practice for services to use TLS. Other devices may not
> have good existing options for applying encryption.
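
Right; for the disk case the guest can do the unlock itself once it has
the key.  Something roughly like this untested libcryptsetup sketch - the
helper name, device path and where the key actually comes from are all
made up for illustration; the point is just that the key never passes
through the host:

  /*
   * Hypothetical sketch: activate a LUKS2 volume with a key the guest
   * obtained itself (e.g. released after attestation), so the host only
   * ever sees ciphertext on the virtio-blk device.
   * Build with -lcryptsetup; error handling trimmed.
   */
  #include <stddef.h>
  #include <libcryptsetup.h>

  int unlock_guest_disk(const char *dev, const char *name,
                        const char *key, size_t key_len)
  {
          struct crypt_device *cd = NULL;
          int rc;

          rc = crypt_init(&cd, dev);              /* e.g. "/dev/vda2" */
          if (rc < 0)
                  return rc;

          rc = crypt_load(cd, CRYPT_LUKS2, NULL); /* read the LUKS2 header */
          if (rc < 0)
                  goto out;

          /* 'key' came from the guest's own key-release flow, not from
           * the host-supplied config. */
          rc = crypt_activate_by_passphrase(cd, name, CRYPT_ANY_SLOT,
                                            key, key_len, 0);
  out:
          crypt_free(cd);
          return rc;
  }
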
>
> If the guest has a virtual keyboard, mouse and graphical display, which
> are backed by a VNC/RDP server on the host, then all of that is visible
> to the host. There are no pre-existing solutions I know of that could
> offer easy confidentiality for basic console I/O from the start of guest
> firmware onwards. The best option is to spawn a VNC/RDP server in the
> guest at some point during boot. That means you can't log in to the
> guest in single-user mode with your root password, though, without
> compromising it.
>
> The problem also applies to common solutions today where the host passes
> config data into the guest, for consumption by tools like cloud-init.
> This has been used in the past to inject an SSH key, for example, or to
> set the guest root password. Such data received from the host can no
> longer be trusted, as the host can see the data, or substitute its own
> SSH key(s) in order to gain access. Cloud-init needs to get its config
> data from a trusted source, likely an external attestation server.
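
Or at least the guest has to be able to check where the data really came
from.  One possibility (entirely made up here, not what anyone has agreed
on) is to require the config blob to be signed by a key baked into the
measured guest image; a rough OpenSSL sketch, with the function name and
payload framing invented for illustration:

  /*
   * Hypothetical sketch: only consume host-supplied config if it carries
   * a valid signature from a key the guest already trusts (e.g. one that
   * is part of the measured image).  OpenSSL 1.1.1+ one-shot verify; key
   * loading and payload framing not shown.
   */
  #include <stddef.h>
  #include <openssl/evp.h>

  int config_is_trusted(EVP_PKEY *trusted_key,
                        const unsigned char *blob, size_t blob_len,
                        const unsigned char *sig, size_t sig_len)
  {
          EVP_MD_CTX *ctx = EVP_MD_CTX_new();
          int ok = 0;

          if (!ctx)
                  return 0;

          if (EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL,
                                   trusted_key) == 1 &&
              EVP_DigestVerify(ctx, sig, sig_len, blob, blob_len) == 1)
                  ok = 1;   /* signed by the trusted key; safe to consume */

          EVP_MD_CTX_free(ctx);
          return ok;
  }
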
>
>
> A further challenge surrounds the handling of undesirable devices. A goal
> of OS development has been to ensure that both coldplugged and hotplugged
> devices "just work" out of the box with zero guest admin config required.
> To some extent this is contrary to what a confidential guest will want.
> It doesn't want a getty spawned on any console that is exposed, and it
> doesn't want to use a virtio-rng exposed by the host, which could be
> feeding non-random data.
>
>
> Protecting against malicious implementations of devices is conceivably
> interesting, as a hardening task. A malicious host may try to take
> advantage of the guest OS device driver implementation to exploit the
> guest kernel, with the end goal of getting it into a state where it can
> be made to reveal confidential data that was otherwise protected.

I think this is really what the Intel stuff is trying to protect
against.
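
The kind of thing that hardening has to catch is the driver trusting a
value the device (i.e. the host) handed back.  A toy, made-up example of
the defensive pattern - treat anything read from host-writable memory as
attacker controlled, read it once and bounds-check it before use:

  /*
   * Toy illustration of the hardening mindset, not real driver code:
   * the struct lives in memory the host can scribble on, so the length
   * it reports is read once and range-checked before it is used.
   */
  #include <stdint.h>
  #include <string.h>

  #define RESP_MAX 256

  struct shared_resp {            /* host-writable shared memory */
          uint32_t len;
          uint8_t  data[RESP_MAX];
  };

  /* Returns bytes copied, or -1 if the host-supplied length is bogus
   * (a well-behaved device never triggers that path). */
  int copy_resp(const volatile struct shared_resp *resp,
                uint8_t *dst, size_t dst_len)
  {
          uint32_t len = resp->len;   /* read once: it can change under us */

          if (len > RESP_MAX || len > dst_len)
                  return -1;          /* never let the host pick the copy size */

          memcpy(dst, (const void *)resp->data, len);
          return (int)len;
  }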

Dave

> With regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o- https://fstop138.berrange.com :|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
>
--
Dr. David Alan Gilbert / dgilbert@xxxxxxxxxx / Manchester, UK