Re: [PATCH v43 01/15] Linux Random Number Generator

From: Simo Sorce
Date: Mon Jan 10 2022 - 14:41:42 EST


On Mon, 2022-01-10 at 19:44 +0100, Jason A. Donenfeld wrote:
> On Mon, Jan 10, 2022 at 4:08 PM Marcelo Henrique Cerri
> <marcelo.cerri@xxxxxxxxxxxxx> wrote:
> > > Just to confirm, this little patch here gives you FIPS certification?
> > It does
>
> On Mon, Jan 10, 2022 at 7:29 PM Eric Biggers <ebiggers@xxxxxxxxxx> wrote:
> > Now, the idea of certifying the whole kernel as a FIPS cryptographic module is
> > stupid

Note that it is not the whole kernel; a "module boundary" is drawn
around the crypto API and its vicinity.
It would be really nice if this whole "boundary" could be built as a
single binary module to be loaded into the kernel in FIPS mode. That
way we could update the rest of the kernel without rebuilding the
module, but we are not there yet.

Rebuilding the kernel does technically invalidate the certification;
however, NIST itself tells people to care first about the security of
their systems, as long as the vendor is undergoing or has committed to
certification of the patched kernel.

There is an assumption of good faith.

> Alright, so if that's the case, then what you ostensibly want is:
> a) Some cryptoapi users to use crypto_rng_get_bytes, as they already
> do now. (In a private thread with Simo, I pointed out a missing place
> and encouraged him to send a patch for that but none has arrived.)

I noted your point, I just haven't had time to act on it.

> b) Userspace to use some other RNG.
>
> (a) is basically already done.
>
> (b) can be accomplished in userspace by just (i) disabling getrandom()
> (making it return ENOSYS), and then (ii) replacing the /dev/urandom
> path with a CUSE device or similar.

While this is technically possible it is not very helpful, as it
requires downstream patching of userspace programs, most of which have
neither runtime nor compile-time switches to change the random device
they use.

> I suppose (b.i) might be able to be done with some bpf seccomp cgroup
> situation. Or, if that's problematic, somebody could propose a
> "disable getrandom(2)" cmdline option. That doesn't seem very hard.
> And (b.ii) could use combined inputs from /dev/urandom and whatever
> FIPSy userspace jitter entropy daemon you have.
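(For the record, (b.i) itself is not hard. A hedged sketch with
libseccomp, assuming the filter is installed early in process startup,
would look roughly like this:

  /* Make getrandom(2) fail with ENOSYS so that callers which know how
   * to fall back read /dev/urandom instead.  Build with -lseccomp. */
  #include <errno.h>
  #include <seccomp.h>

  static int disable_getrandom(void)
  {
          scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
          int rc = -1;

          if (!ctx)
                  return -1;
          if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOSYS),
                               SCMP_SYS(getrandom), 0) == 0)
                  rc = seccomp_load(ctx);
          seccomp_release(ctx);
          return rc;
  }

The filter is not the problem; everything that has to happen around it
is.)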

It is simply easier to patch /dev/[u]random and getrandom() to use a
certified DRBG in FIPS mode. We did consider all the options you
mention, but we couldn't find a good reason to take on more work and
build a more complicated solution when it is simple to wire the
correct DRBG up to the random devices userspace applications already
use, which are the de facto standard API for obtaining good random
numbers.
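
To be concrete, the in-kernel side of that wiring is small. A minimal,
hedged sketch against the existing crypto_rng API ("stdrng" is the
alias the DRBG implementations register under; a real patch would
cache the tfm and handle reseed policy instead of allocating per call):

  #include <crypto/rng.h>

  /* Fill @buf from the crypto API RNG; "stdrng" resolves to the
   * highest-priority registered generator, which in FIPS mode is
   * expected to be a certified DRBG.  Sketch only. */
  static int fips_drbg_read(u8 *buf, unsigned int len)
  {
          struct crypto_rng *rng;
          int ret;

          rng = crypto_alloc_rng("stdrng", 0, 0);
          if (IS_ERR(rng))
                  return PTR_ERR(rng);

          ret = crypto_rng_reset(rng, NULL, 0); /* seed from defaults */
          if (!ret)
                  ret = crypto_rng_get_bytes(rng, buf, len);

          crypto_free_rng(rng);
          return ret;
  }

The rest is plumbing something like this into the /dev/[u]random and
getrandom() paths when fips=1.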

> In order to prevent the actual security from regressing on this, all
> you have to do is ensure that you're always using at least 32 bytes
> from the kernel's real /dev/urandom, and then whatever you add on top
> of that becomes just for the certification aspect. As your various
> green compliance checkboxes change over time and per region, you can
> just swap out the extra-paper-pushing-bytes-on-top with whatever the
> particular requirements of a certification body are. And you get to do
> this all in userspace.

You can do the whole jitterbug in userspace, but that is simply not
efficient and is too disruptive (see the point above about patching
all the downstream users).
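
(Just so we are talking about the same thing: the combining step you
describe is tiny. A hedged sketch, where /run/fips-entropy is a
made-up path standing in for whatever the jitter entropy daemon
exports:

  /* XOR 32 bytes from the kernel's /dev/urandom with 32 bytes from an
   * extra, certification-oriented source; since the kernel bytes are
   * uniform and independent of the extra source, the result is no
   * weaker than /dev/urandom alone. */
  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  static int read_exact(const char *path, unsigned char *buf, size_t len)
  {
          int fd = open(path, O_RDONLY);
          size_t off = 0;

          if (fd < 0)
                  return -1;
          while (off < len) {
                  ssize_t n = read(fd, buf + off, len - off);
                  if (n <= 0) {
                          close(fd);
                          return -1;
                  }
                  off += (size_t)n;
          }
          close(fd);
          return 0;
  }

  int get_random_combined(unsigned char out[32])
  {
          unsigned char k[32], e[32];
          int i;

          if (read_exact("/dev/urandom", k, sizeof(k)) ||
              read_exact("/run/fips-entropy", e, sizeof(e))) /* hypothetical */
                  return -1;
          for (i = 0; i < 32; i++)
                  out[i] = k[i] ^ e[i];
          memset(k, 0, sizeof(k));
          memset(e, 0, sizeof(e));
          return 0;
  }

The cost is not this function, it is repointing every consumer at it.)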

>
> Marcelo/Simo - could you tell me what you find deficient about that
> plan? It strikes me that this would give you maximum flexibility and
> pretty much accomplish the goals?

My goal is to deviate as little as possible, both in kernel and in
userspace, from what the upstreams do. Creating new interfaces is
easy; making people use them is almost impossible. Witness the process
of getting people to use getrandom().


Let me also add that the NIST requirements are not capricious. They
are written by people who study entropy sources and random number
generation as their job and know what they are doing; I err on the
side of giving them credit. The requirements set by the various
SP 800-90A/90B/90C documents are about raising the bar, to guarantee
that random number generators are actually "certifiably" good. There
are entropy assessments performed by the labs as part of the
certification process to ensure that the entropy source is a good one
and produces output that passes randomness tests. I personally think
the kernel would benefit from implementing those "checkboxes"; it is
basically like putting a sane CI/CD and testing environment in place.

To answer Ted:
every certification program necessarily requires a certain amount of
bureaucracy, especially when governments are involved, but that doesn't
mean that it's all security theater.

The FIPS certification process has changed over the years as well, not
just the requirements. Until a few years ago the requirement to use
FIPS-certified cryptography could be waived, and because very few
consumer programs were certified, a lot of agencies considered it just
a burden and didn't care much. That has changed: it is now required as
a matter of law for most government agencies, and the waiver process
has been discontinued. So we really do need to provide FIPS
certification to our public sector customers. Moreover, various other
security standards now reference the FIPS standards, so this is
extending beyond government agencies and their contractors.

FIPS is painful for those of us undergoing certification, but as a
program it also has positive effects. We scrutinize all cryptographic
modules a lot more than we used to, we have a lot more testing than we
used to, and we have a lot more confidence in the solidity of the
cryptography provided in the Linux world, also thanks to this
scrutiny. I wish the certification process were less painful, for
sure, but I believe it does add value when done sensibly.

Simo.

--
Simo Sorce
RHEL Crypto Team
Red Hat, Inc