Re: [spi-devel-general] [PATCH] atmel_spi: support zero length transfer

From: Ned Forrester
Date: Fri Feb 22 2008 - 14:37:14 EST


David Brownell wrote:
>> However, if the transfer is by DMA, note that the PXA255 and PXA270
>> Developer's Manuals have the following language regarding DMA lengths:
>>
>> LEN = 0 means zero bytes for descriptor-fetch transactions.
>> LEN = 0 is an invalid setting for no-descriptor-fetch
>> transactions. ...
>>
>> Because the pxa2xx_spi driver does not currently use DMA descriptors,
>> zero length DMAs are invalid.
>
> In that case the pxa2xx_spi driver should add a special case to
> avoid starting such transfers in DMA mode.

So, what I think you said is that it would be better for pxa2xx_spi to
silently ignore a zero-length transfer, passing it back with the rest of
the message when all is complete, than to reject the whole message. I
see no reason why that could not be done, though it may be tricky to set
other things like SSP modes and chip select while *not* starting the
DMA. It would have to be tested, so I'm not sure when I could try that.
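Just to sketch what I think you mean (untested, and the helper name is
made up, not the real pxa2xx_spi internals):

#include <linux/delay.h>
#include <linux/spi/spi.h>

/* hypothetical helper: skip the DMA engine entirely for len == 0,
 * but still honor the per-transfer delay before moving on
 */
static void maybe_start_dma(struct spi_transfer *t)
{
	if (t->len == 0) {
		if (t->delay_usecs)
			udelay(t->delay_usecs);
		return;		/* leave CS and SSP setup alone */
	}

	/* ... normal no-descriptor DMA setup, LEN = t->len ... */
}

The tricky part is everything around that early return: chip select
and SSP mode setup would still have to happen for the zero-length case.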

>> I agree with Marc: any such delay will be undefined, in the general
>> case. It might work for a specific driver implementation.
>
> Is that what Marc said? I couldn't tell. In any case, I disagree;
> the semantics of that delay are clearly defined.

Maybe I am missing something. Aren't we talking about a transfer in a
message, with or without other transfers, whose only unique
characteristic is that its length is zero? What is clearly defined
about what should happen during that transfer? I think (maybe we are
all confused) that Marc and I are talking about the delay, likely
short, caused by processing or ignoring that zero-length transfer.

Or are you and Atsushi talking about using spi_transfer.delay_usecs
*with* a zero-length transfer to effectively put a delay between the
assertion of CS and the start of the first clock? If so, then I guess I
missed the original point. Sorry.
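For concreteness, I imagine the usage from a protocol driver would look
something like this (just a sketch against the current spi.h API; the
50 usec figure is invented):

#include <linux/spi/spi.h>

static int write_after_cs_delay(struct spi_device *spi,
				const u8 *buf, size_t len)
{
	struct spi_transfer t[2] = {
		{
			.len = 0,		/* no data moved */
			.delay_usecs = 50,	/* made-up delay after CS asserts */
		},
		{
			.tx_buf = buf,
			.len = len,
		},
	};
	struct spi_message m;

	spi_message_init(&m);
	spi_message_add_tail(&t[0], &m);
	spi_message_add_tail(&t[1], &m);
	return spi_sync(spi, &m);
}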

--

By the way, reading spi.h again, it looks like spi_transfer.delay_usecs
is supposed to be inserted between the last bit movement of a transfer
and any change of CS at the end of that transfer. Is that right? I
think that pxa2xx_spi is dropping CS, if requested, immediately at the
end of a transfer, and then putting spi_transfer.delay_usecs between
that transfer and the next.
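In other words, my reading of spi.h is the first ordering below, while
pxa2xx_spi seems to do the second (a sketch only; drop_cs() stands in
for whatever the controller driver actually uses):

#include <linux/delay.h>
#include <linux/spi/spi.h>

/* ordering spi.h seems to ask for, per transfer */
static void end_of_transfer_per_spi_h(struct spi_transfer *t,
				      void (*drop_cs)(void))
{
	if (t->delay_usecs)
		udelay(t->delay_usecs);	/* delay while CS is still active */
	if (t->cs_change)
		drop_cs();		/* only then change chip select */
}

/* what I think pxa2xx_spi does today */
static void end_of_transfer_pxa2xx_today(struct spi_transfer *t,
					 void (*drop_cs)(void))
{
	if (t->cs_change)
		drop_cs();		/* CS dropped first... */
	if (t->delay_usecs)
		udelay(t->delay_usecs);	/* ...then the delay */
}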

--
Ned Forrester nforrester@xxxxxxxx
Oceanographic Systems Lab 508-289-2226
Applied Ocean Physics and Engineering Dept.
Woods Hole Oceanographic Institution Woods Hole, MA 02543, USA
http://www.whoi.edu/sbl/liteSite.do?litesiteid=7212
http://www.whoi.edu/hpb/Site.do?id=1532
http://www.whoi.edu/page.do?pid=10079
