Re: [PATCH v8 4/4] bus: mhi: Add userspace client interface driver

From: Hemant Kumar
Date: Thu Oct 22 2020 - 19:25:49 EST


Hi Loic,

On 10/22/20 3:20 AM, Loic Poulain wrote:
Hi Hemant,

A few comments inline.

On Thu, 22 Oct 2020 at 10:22, Hemant Kumar <hemantk@xxxxxxxxxxxxxx> wrote:

This MHI client driver allows userspace clients to transfer
raw data between an MHI device and the host using standard file
operations. The driver instantiates a UCI device object, which is
associated with a device file node. The UCI device object instantiates
a UCI channel object when the device file node is opened. The UCI
channel object is used to manage MHI channels by calling MHI core APIs
for read and write operations. MHI channels are started as part of
device open(). MHI channels remain in the started state until the last
release() is called on the UCI device file node. The device file node
is created with the format

[..]
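For context, the open()/release() lifetime described above boils down to
roughly the following sketch (not the exact patch code; udev->lock,
mhi_uci_dev_start_chan() and mhi_uci_chan_release() are placeholder names):

static int mhi_uci_open(struct inode *inode, struct file *filp)
{
	struct uci_dev *udev = container_of(inode->i_cdev,
					    struct uci_dev, cdev);
	int ret = 0;

	mutex_lock(&udev->lock);
	if (!udev->uchan) {
		/* first open(): create the channel object, start channels */
		ret = mhi_uci_dev_start_chan(udev);
	} else {
		/* subsequent opens share the already-running channels */
		kref_get(&udev->uchan->ref_count);
	}
	if (!ret)
		filp->private_data = udev;
	mutex_unlock(&udev->lock);

	return ret;
}

static int mhi_uci_release(struct inode *inode, struct file *filp)
{
	struct uci_dev *udev = filp->private_data;

	/* the last put stops the MHI channels and frees the uci_chan */
	mutex_lock(&udev->lock);
	kref_put(&udev->uchan->ref_count, mhi_uci_chan_release);
	mutex_unlock(&udev->lock);

	return 0;
}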
+
+/**
+ * struct uci_chan - MHI channel for a UCI device
+ * @udev: associated UCI device object
+ * @ul_wq: wait queue for writer
+ * @write_lock: mutex write lock for ul channel
+ * @dl_wq: wait queue for reader
+ * @read_lock: mutex read lock for dl channel
+ * @dl_lock: spin lock
+ * @pending: list of dl buffers userspace is waiting to read
+ * @cur_buf: current buffer userspace is reading
+ * @dl_size: size of the current dl buffer userspace is reading
+ * @ref_count: uci_chan reference count
+ */
+struct uci_chan {
+	struct uci_dev *udev;
+	wait_queue_head_t ul_wq;
+
+	/* ul channel lock to synchronize multiple writes */
+	struct mutex write_lock;
+
+	wait_queue_head_t dl_wq;
+
+	/* dl channel lock to synchronize multiple reads */
+	struct mutex read_lock;
+
+	/*
+	 * protects pending and cur_buf members in bh context, channel release,
+	 * read and poll
+	 */
+	spinlock_t dl_lock;

Maybe I'm wrong, but I think it would be clearer and simpler for
dl_lock to be only a lock for the pending RX list (e.g. pending_lock),
for protecting against concurrent access in chardev read ops
(consumer) and MHI download callback (producer). cur_buf is the
currently read buffer, and so could be simply protected by the
read_lock mutex (never accessed from bh/irq callback context).
You have a good point. I can protect the pending-list operations with a
spinlock and call it pending_lock; it would be used in the dl_xfer
callback, channel release, read and poll. I will use the read lock for
cur_buf in read().
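The read path would then look roughly like this (a sketch only, with the
pending_lock rename applied):

/* bh context (dl xfer callback): the producer takes only pending_lock */
spin_lock_bh(&uchan->pending_lock);
list_add_tail(&ubuf->node, &uchan->pending);
spin_unlock_bh(&uchan->pending_lock);

/* read(): cur_buf is only touched under the read_lock mutex */
mutex_lock(&uchan->read_lock);
if (!uchan->cur_buf) {
	spin_lock_bh(&uchan->pending_lock);
	ubuf = list_first_entry_or_null(&uchan->pending,
					struct uci_buf, node);
	if (ubuf)
		list_del(&ubuf->node);
	spin_unlock_bh(&uchan->pending_lock);
	uchan->cur_buf = ubuf;
}
/* ... copy_to_user() from uchan->cur_buf->data ... */
mutex_unlock(&uchan->read_lock);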

+
+	struct list_head pending;
+	struct uci_buf *cur_buf;
+	size_t dl_size;
+	struct kref ref_count;
+};
+
+/**
[..]
+static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+			   struct mhi_result *mhi_result)
+{
+	struct uci_dev *udev = dev_get_drvdata(&mhi_dev->dev);
+	struct uci_chan *uchan = udev->uchan;
+	struct device *dev = &mhi_dev->dev;
+	struct uci_buf *ubuf;
+	size_t dl_buf_size = udev->mtu - sizeof(*ubuf);
+
+	dev_dbg(dev, "status: %d receive_len: %zu\n",
+		mhi_result->transaction_status, mhi_result->bytes_xferd);
+
+	if (mhi_result->transaction_status == -ENOTCONN) {
+		kfree(mhi_result->buf_addr);
+		return;
+	}

It would be more robust to test only the transaction_status values that
allow you to consider the buffer valid, AFAIU 0 and -EOVERFLOW.
That would prevent this code from breaking if new transaction_status
errors are returned in the future (e.g. -EIO...).
I agree, I will use this instead:

if (mhi_result->transaction_status &&
    mhi_result->transaction_status != -EOVERFLOW)
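So the top of the callback becomes (a minimal sketch of the change):

	/* free the buffer on any error other than an overflow */
	if (mhi_result->transaction_status &&
	    mhi_result->transaction_status != -EOVERFLOW) {
		kfree(mhi_result->buf_addr);
		return;
	}

	/*
	 * -EOVERFLOW means the data was truncated to the buffer size
	 * but is still valid, so it is handed to the reader as usual.
	 */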


+
+	ubuf = mhi_result->buf_addr + dl_buf_size;
+	ubuf->data = mhi_result->buf_addr;
+	ubuf->len = mhi_result->bytes_xferd;
+	spin_lock_bh(&uchan->dl_lock);
+	list_add_tail(&ubuf->node, &uchan->pending);
+	spin_unlock_bh(&uchan->dl_lock);
+
+	wake_up(&uchan->dl_wq);
+}
+
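For reference, the ubuf pointer arithmetic above relies on each inbound
buffer being allocated with room for the struct uci_buf bookkeeping at
its tail; the queueing side would look roughly like this
(mhi_uci_queue_inbound() is a placeholder name):

static int mhi_uci_queue_inbound(struct uci_dev *udev)
{
	size_t dl_buf_size = udev->mtu - sizeof(struct uci_buf);
	void *buf;

	/* single allocation: payload first, struct uci_buf in the tail */
	buf = kmalloc(udev->mtu, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* expose only the payload area to the device */
	return mhi_queue_buf(udev->mhi_dev, DMA_FROM_DEVICE, buf,
			     dl_buf_size, MHI_EOT);
}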

Thanks,
Hemant
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project