Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider
From: Daniel Golle
Date: Tue Mar 12 2024 - 09:12:54 EST
On Tue, Mar 12, 2024 at 01:57:39PM +0100, Ulf Hansson wrote:
> On Tue, 12 Mar 2024 at 13:30, Daniel Golle <daniel@xxxxxxxxxxxxxx> wrote:
> >
> > Hi Ulf,
> >
> > On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@xxxxxxxxxxxxxx> wrote:
> > > >
> > > > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > > > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > > > calibration EEPROM data.
> > > >
> > > > Implement an NVMEM provider backed by block devices, as the NVMEM
> > > > framework is typically used to have kernel drivers read and use
> > > > binary data from EEPROMs, efuses, flash memory (MTD), ...
> > > >
> > > > In order to be able to reference hardware partitions on an eMMC, add code
> > > > to bind each hardware partition to a specific firmware subnode.
> > > >
> > > > This series is meant to open the discussion on what exactly the
> > > > device tree schema for block devices and partitions should look like,
> > > > and even whether using the block layer to back the NVMEM device is
> > > > the way to go at all -- to me it seemed to be a good solution because
> > > > it will be reusable, e.g. for (normal, software GPT or MBR) partitions
> > > > of an NVMe SSD.
> > > >
> > > > This series has previously been submitted on July 19th 2023[1] and most of
> > > > the basic idea did not change since.
> > > >
> > > > However, the recent introduction of bdev_file_open_by_dev() allows us
> > > > to get rid of most uses of block layer internals, which supposedly was
> > > > the main objection raised by Christoph Hellwig back then.
> > > >
> > > > Most of the other comments received on the first RFC have also been
> > > > addressed; what remains, however, is the use of class_interface
> > > > (lacking an alternative way to get notifications about addition or
> > > > removal of block devices from the system). As this has been criticized
> > > > in the past I'm specifically interested in suggestions on how to solve
> > > > this in another way -- ideally without having to implement a whole new
> > > > way for in-kernel notifications of appearing or disappearing block
> > > > devices...
> > > >
> > > > And, in a way just like in the case of MTD and UBI, I believe acting
> > > > as an NVMEM provider *is* a functionality which belongs to the block
> > > > layer itself and, unlike e.g. filesystems, is inconvenient to
> > > > implement elsewhere.
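
To make the discussion more concrete, here is a much-simplified sketch of
what the provider core does: an nvmem device registered per (opted-in)
block device, discovered via a class_interface on the block class, with
reads going through the file returned by bdev_file_open_by_dev(). This is
illustrative only, not the literal code from the patches (opt-in checks,
error unwinding and unregistration omitted):

#include <linux/blkdev.h>
#include <linux/device/class.h>
#include <linux/fs.h>
#include <linux/nvmem-provider.h>
#include <linux/slab.h>

struct blk_nvmem {
	struct nvmem_device *nvmem;
	struct file *bdev_file;
};

/* NVMEM reg_read callback: read @bytes at @offset from the block device */
static int blk_nvmem_reg_read(void *priv, unsigned int offset,
			      void *val, size_t bytes)
{
	struct blk_nvmem *bnv = priv;
	loff_t pos = offset;

	if (kernel_read(bnv->bdev_file, val, bytes, &pos) != bytes)
		return -EIO;

	return 0;
}

/* called by the driver core for every block device added to the system */
static int blk_nvmem_add_dev(struct device *dev)
{
	struct nvmem_config config = {};
	struct blk_nvmem *bnv;

	/* here: skip devices which didn't opt in or lack a fwnode */

	bnv = kzalloc(sizeof(*bnv), GFP_KERNEL);
	if (!bnv)
		return -ENOMEM;

	/* file-based open, no poking at block layer internals */
	bnv->bdev_file = bdev_file_open_by_dev(dev->devt, BLK_OPEN_READ,
					       NULL, NULL);
	if (IS_ERR(bnv->bdev_file))
		return PTR_ERR(bnv->bdev_file);

	config.dev = dev;
	config.read_only = true;
	config.priv = bnv;
	config.reg_read = blk_nvmem_reg_read;

	bnv->nvmem = nvmem_register(&config);
	return PTR_ERR_OR_ZERO(bnv->nvmem);	/* error unwinding omitted */
}

static struct class_interface blk_nvmem_class_intf = {
	.class = &block_class,
	.add_dev = blk_nvmem_add_dev,
	/* .remove_dev would tear the NVMEM device down again */
};

/* module init: class_interface_register(&blk_nvmem_class_intf); */

The class_interface at the bottom is exactly the part I'd like to find a
better replacement for, if one exists.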
> > >
> > > I don't object to the above. However, to keep things scalable at the
> > > block device driver level, such as the MMC subsystem, I think we
> > > should avoid having *any* knowledge about the binary format at these
> > > kinds of lower levels.
> > >
> > > Even if most of the NVMEM format is managed elsewhere, the support for
> > > NVMEM partitions seems to be dealt with from the MMC subsystem too.
> >
> > In an earlier iteration of this RFC it was requested to make NVMEM
> > support opt-in (instead of opt-out for mtdblock and ubiblock, which
> > already have their own NVMEM provider implementations).
> > Hence at least a change to opt-in for NVMEM support is required in the
> > MMC subsystem, together with making sure that MMC devices have their
> > fwnode assigned.
>
> So, the NVMEM support needs to be turned on (opt-in) for each and
> every block device driver?
>
> It's not a big deal for me - and I would be happy to apply such a
> change. On the other hand, it is just some binary data that is stored
> on the flash, why should MMC have to opt-in or opt-out at all? It
> should be the upper layers who decide what to store on the flash, not
> the MMC subsystem, if you get my point.
>
I agree, and that's exactly how I originally wrote it. However, in the
first round of review it was requested to be done that way (i.e. opt-in
for each subsystem, rather than opt-out for subsystems already
providing NVMEM in another way, such as MTD or UBI), see here:
https://patchwork.kernel.org/comment/25432948/
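
Just to illustrate the scale of it: the opt-in itself boils down to a few
lines in the block driver's probe path, e.g. for MMC roughly like this
(simplified; GENHD_FL_NVMEM is the flag this series proposes, and the
child node name is just an example):

/* inside mmc_blk_probe(), simplified: opt the disk in to the NVMEM
 * provider and hand it the matching firmware node so that NVMEM cells
 * can be looked up via DT. Flag and node name here are illustrative.
 */
struct fwnode_handle *fwnode;

fwnode = device_get_named_child_node(&card->dev, "block");
if (fwnode)
	device_set_node(disk_to_dev(md->disk), fwnode);

md->disk->flags |= GENHD_FL_NVMEM;	/* flag added by this series */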
> >
> > > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
> >
> > Absolutely, maybe my wording was not clear, but that's exactly what
> > I'm suggesting here. There are no added parsers nor any knowledge
> > about binary formats in this patchset.
>
> Right, but there are new DT bindings added in the $subject series that
> allow us to describe NVMEM partitions for an eMMC. Why isn't that
> parsed from the MBR/GPT, etc, rather than encoded in DT?
The added dt-bindings merely allow us to **identify** partitions by
their PARTNAME, PARTNO or PARTUUID, so we can reference them in DT.
We'd still rely on MBR or GPT to do the actual parsing of the on-disk
format.
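
To make that concrete: referencing e.g. a GPT partition named "factory"
could look roughly like this (node and property names are illustrative --
settling the exact schema is precisely what this RFC is about):

mmc {
	card@0 {
		compatible = "mmc-card";
		reg = <0>;

		block {
			compatible = "block-device";

			partitions {
				/* matched against PARTNAME from GPT */
				block-partition-factory {
					partname = "factory";

					nvmem-layout {
						compatible = "fixed-layout";
						#address-cells = <1>;
						#size-cells = <1>;

						macaddr: mac-address@24 {
							reg = <0x24 0x6>;
						};
					};
				};
			};
		};
	};
};

A consumer would then reference the cell the usual way:

&wifi {
	nvmem-cells = <&macaddr>;
	nvmem-cell-names = "mac-address";
};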
>
> >
> > Or did I misunderstand your comment?
>
> Maybe. I am just trying to understand this, so apologies if you find
> my questions silly. :-)
Let's make sure we're all on the same page and that everything is
fully understood by everyone. Everyone has to bear the noise, but I
guess that's ok ;)
Cheers
Daniel