Re: atomisp current issues

From: Laurent Pinchart
Date: Thu Nov 04 2021 - 08:47:43 EST


Hi Mauro,

On Thu, Nov 04, 2021 at 10:50:51AM +0000, Mauro Carvalho Chehab wrote:
> On Thu, 4 Nov 2021 11:53:55 +0200, Laurent Pinchart wrote:
> > On Wed, Nov 03, 2021 at 01:54:18PM +0000, Mauro Carvalho Chehab wrote:
> > > Hi,
> > >
> > > From what I've seen so far, these are the main issues with regard to the V4L2
> > > API that need to be addressed before a generic V4L2 application can work with it.
> > >
> > > MMAP support
> > > ============
> > >
> > > Despite having some MMAP code in it, the current implementation is broken.
> > > Fixing it is not trivial, as it would require fixing its HMM support,
> > > which plays several tricks.
> > >
> > > The best would be to replace it with something simpler. If this is similar
> > > enough to IPU3, perhaps one idea would be to replace its HMM code with
> > > videobuf2 + IPU3 HMM code.
> > >
> > > As this is not trivial, I'm postponing this task. If someone has enough
> > > time, it would be great to have this fixed.
> > >
> > > From my side, I opted to add support for USERPTR on camorama:
> > >
> > > https://github.com/alessio/camorama
> >
> > We should *really* phase out USERPTR support.
>
> I'm not a big fan of userptr, but why should we phase it out?

Because USERPTR is broken by design. It gives a false promise to
userspace that a user pointer can be DMA'ed to, and this isn't generally
true. Even if buffers are carefully allocated to be compatible with the
device requirements, there are corner cases that prevent making a
mechanism based on get_user_pages() a first class citizen. In any case,
USERPTR makes life more difficult for the kernel.

There may be some use cases for which USERPTR could be an appropriate
solution, but now that we have DMABUF (and of course MMAP), I see no
reason to continue supporting USERPTR forever, and certainly not adding
new users.
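
To make the alternative concrete on the application side, the MMAP path
boils down to something like this (untested sketch, error handling
omitted, the helper name is made up):

#include <stddef.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

/* Map the first driver-allocated capture buffer of an already-open node. */
static void *map_first_buffer(int fd, size_t *len)
{
        struct v4l2_requestbuffers req;
        struct v4l2_buffer buf;

        memset(&req, 0, sizeof(req));
        req.count = 4;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;          /* driver allocates the buffers */
        ioctl(fd, VIDIOC_REQBUFS, &req);

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);       /* returns the mmap offset/length */

        *len = buf.length;
        return mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED,
                    fd, buf.m.offset);
}

The same application can switch to V4L2_MEMORY_DMABUF and queue fds
exported by another device, without any of the get_user_pages() corner
cases.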

> > Worst case, you may support
> > DMABUF only if MMAP is problematic, but I don't really see why it would
> > be easy to map an imported buffer and difficult to map a buffer
> > allocated by the driver. videobuf2 should be used.
>
> Yeah, atomisp should be migrated to VB2, and such migration is listed in
> its TODO file. However, this is a complex task, as its memory management
> code is very complex.

Have a look at GPU memory management, and you'll find the atomisp driver
very simple in comparison :-)

I'm also pretty sure that drivers/staging/media/atomisp/pci/hmm/ could
be rewritten to use more of the existing kernel frameworks.
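
Roughly, the per-node queue setup with videobuf2 would boil down to
something like the sketch below (assuming contiguous DMA buffers;
atomisp_init_vb2_queue and atomisp_vb2_ops are made-up names, and the
vb2_ops callbacks still have to be written):

#include <media/videobuf2-v4l2.h>
#include <media/videobuf2-dma-contig.h>

/* queue_setup/buf_prepare/buf_queue/start_streaming/... to be filled in */
static const struct vb2_ops atomisp_vb2_ops;

static int atomisp_init_vb2_queue(struct vb2_queue *q, void *priv,
                                  struct mutex *lock, struct device *dev)
{
        q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        q->io_modes = VB2_MMAP | VB2_DMABUF;    /* note: no VB2_USERPTR */
        q->drv_priv = priv;
        q->buf_struct_size = sizeof(struct vb2_v4l2_buffer);
        q->ops = &atomisp_vb2_ops;
        q->mem_ops = &vb2_dma_contig_memops;
        q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
        q->lock = lock;
        q->dev = dev;

        return vb2_queue_init(q);
}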

> Maybe we could try to use the IPU3 code on it,
> replacing the current HMM logic, but I'm not sure if the implementation there
> would be compatible.

I'd be surprised if the IPU3 was compatible.

> In any case, the current priority is to make the driver work, fixing
> the V4L2 API implementation, which has several issues.
>
> ...
>
> > > Video devices
> > > =============
> > >
> > > Currently, 11 video devices are created:
> > >
> > > $ for i in $(ls /dev/video*|sort -k2 -to -n); do echo -n $i:; v4l2-ctl -D -d $i|grep Name; done
> > > /dev/video0: Name : ATOMISP ISP CAPTURE output
> > > /dev/video1: Name : ATOMISP ISP VIEWFINDER output
> > > /dev/video2: Name : ATOMISP ISP PREVIEW output
> > > /dev/video3: Name : ATOMISP ISP VIDEO output
> > > /dev/video4: Name : ATOMISP ISP ACC
> > > /dev/video5: Name : ATOMISP ISP MEMORY input
> > > /dev/video6: Name : ATOMISP ISP CAPTURE output
> > > /dev/video7: Name : ATOMISP ISP VIEWFINDER output
> > > /dev/video8: Name : ATOMISP ISP PREVIEW output
> > > /dev/video9: Name : ATOMISP ISP VIDEO output
> > > /dev/video10: Name : ATOMISP ISP ACC
> > >
> > > That seems to have been written to satisfy some Android-based app, but we don't
> > > really need all of those.
> > >
> > > I'm thinking of commenting out the part of the code which creates all of those,
> > > keeping just "ATOMISP ISP PREVIEW output", as I don't think we need all
> > > of those.
> >
> > Why is that? Being able to capture multiple streams in different
> > resolutions is important for lots of applications; the viewfinder
> > resolution is often different from the video streaming and/or still
> > capture resolution. Scaling after capture is often expensive (and there
> > are memory bandwidth and power constraints to take into account too). A
> > single-stream device may be better than nothing, but it's time to move
> > to the 21st century.
>
> True, but having multiple video nodes at this moment is not helping,
> especially since only one of those nodes (PREVIEW) is actually working at
> the moment.
>
> So, this is more a strategy to help focus on making this work
> properly, and not a statement that those nodes would be dropped.
>
> I'd say that the "final" version of atomisp - once it gets
> fixed, cleaned up and becomes MC-controlled - should support
> all such features, and have its pipelines set up via libcamera.

I have no issue with phasing development (I have few issues with the
atomisp driver in general, actually, as it's in staging), but the goal
should be kept in mind to make sure development goes in the right
direction.
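
For reference, a generic application consuming two streams would simply
open two video nodes and negotiate a different format on each, along
these lines (illustration only; the node paths, resolutions and helper
name are made up):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Open a video node and negotiate a capture format; returns the fd. */
static int open_and_set_format(const char *node, unsigned int w, unsigned int h)
{
        struct v4l2_format fmt;
        int fd = open(node, O_RDWR);

        if (fd < 0)
                return -1;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = w;
        fmt.fmt.pix.height = h;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
                close(fd);
                return -1;
        }

        return fd;      /* keep it open for buffer setup and streaming */
}

/* e.g. viewfinder and still capture at different resolutions:   */
/* int vf = open_and_set_format("/dev/video2", 640, 480);        */
/* int still = open_and_set_format("/dev/video0", 1920, 1080);   */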

--
Regards,

Laurent Pinchart