[RFC PATCH 14/15] epoll: support polling from userspace for ep_poll()

From: Roman Penyaev
Date: Wed Jan 09 2019 - 11:41:01 EST


When the epfd is polled from userspace and the user calls epoll_wait():

1. If the user ring is not fully consumed (i.e. head != tail), return
-ESTALE, indicating that some action on the user side is required
(see the sketch after this list).

2. If events were routed to klists, memory was probably expanded or a
shrink is still pending. Shrink if needed and transfer all collected
events from the kernel lists to the user ring.

3. Ensure with a WARN that ep_send_events() can't be called from
ep_poll() when the epfd is pollable from userspace.

4. Wait for events on the wait queue; always return -ESTALE when
awakened, indicating that events have to be consumed from the user ring.
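
For illustration, here is a minimal sketch of the userspace side of
this contract: call epoll_wait(), and on -ESTALE drain the ring until
head catches up with tail. The real uapi (ring header layout, item
format, how the ring is mmap'ed) is defined by the earlier patches in
this series and is not quoted here, so uring_header, uring_item and
handle_event below are hypothetical stand-ins, as are the power-of-two
slot masking and the memory ordering:

#include <sys/epoll.h>
#include <errno.h>

/* Hypothetical ring layout, NOT the real uapi from this series */
struct uring_header {
	unsigned int head;	/* consumer index, advanced by userspace */
	unsigned int tail;	/* producer index, advanced by the kernel */
	unsigned int nr;	/* number of slots, assumed a power of two */
};

struct uring_item {
	unsigned int events;
	unsigned long long data;
};

/* Application callback, a stand-in for real event handling */
extern void handle_event(unsigned long long data, unsigned int events);

static int wait_and_consume(int epfd, struct uring_header *hdr,
			    struct uring_item *items)
{
	struct epoll_event ev;	/* never filled in for a user-polled epfd */
	unsigned int tail;
	int rc;

	rc = epoll_wait(epfd, &ev, 1, -1);
	if (rc >= 0)
		return rc;	/* not expected for a user-polled epfd */
	if (errno != ESTALE)
		return -1;	/* a real error */

	/* ESTALE: events are already in the ring, consume head..tail */
	tail = __atomic_load_n(&hdr->tail, __ATOMIC_ACQUIRE);
	while (hdr->head != tail) {
		struct uring_item *item = &items[hdr->head & (hdr->nr - 1)];

		handle_event(item->data, item->events);
		/* publish the new head so the kernel can reuse the slot */
		__atomic_store_n(&hdr->head, hdr->head + 1, __ATOMIC_RELEASE);
	}
	return 0;
}

Note that for a user-polled epfd the events buffer passed to
epoll_wait() is never written: per point 3 above ep_send_events()
cannot be reached, so -ESTALE is the only way the kernel reports that
the ring has something to consume.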

Signed-off-by: Roman Penyaev <rpenyaev@xxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Davidlohr Bueso <dbueso@xxxxxxx>
Cc: Jason Baron <jbaron@xxxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Andrea Parri <andrea.parri@xxxxxxxxxxxxxxxxxxxx>
Cc: linux-fsdevel@xxxxxxxxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
---
fs/eventpoll.c | 46 +++++++++++++++++++++++++++++++++++++---------
1 file changed, 37 insertions(+), 9 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 2b38a3d884e8..5de640fcf28b 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -523,7 +523,8 @@ static inline bool ep_user_ring_events_available(struct eventpoll *ep)
 static inline int ep_events_available(struct eventpoll *ep)
 {
 	return !list_empty_careful(&ep->rdllist) ||
-		READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR;
+		READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR ||
+		ep_user_ring_events_available(ep);
 }
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
@@ -2411,6 +2412,8 @@ static int ep_send_events(struct eventpoll *ep,
 {
 	struct ep_send_events_data esed;
 
+	WARN_ON(ep_polled_by_user(ep));
+
 	esed.maxevents = maxevents;
 	esed.events = events;
 
@@ -2607,6 +2610,24 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,

 	lockdep_assert_irqs_enabled();
 
+	if (ep_polled_by_user(ep)) {
+		if (ep_user_ring_events_available(ep))
+			/* First, all events from the ring have to be consumed */
+			return -ESTALE;
+
+		if (ep_events_routed_to_klists(ep)) {
+			res = ep_transfer_events_and_shrink_uring(ep);
+			if (unlikely(res < 0))
+				return res;
+			if (res)
+				/*
+				 * Events were transferred from klists to
+				 * user ring
+				 */
+				return -ESTALE;
+		}
+	}
+
 	if (timeout > 0) {
 		struct timespec64 end_time = ep_set_mstimeout(timeout);
 
@@ -2695,14 +2716,21 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	__set_current_state(TASK_RUNNING);
 
 send_events:
-	/*
-	 * Try to transfer events to user space. In case we get 0 events and
-	 * there's still timeout left over, we go trying again in search of
-	 * more luck.
-	 */
-	if (!res && eavail &&
-	    !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
-		goto fetch_events;
+	if (!res && eavail) {
+		if (!ep_polled_by_user(ep)) {
+			/*
+			 * Try to transfer events to user space. In case we get
+			 * 0 events and there's still timeout left over, we go
+			 * trying again in search of more luck.
+			 */
+			res = ep_send_events(ep, events, maxevents);
+			if (!res && !timed_out)
+				goto fetch_events;
+		} else {
+			/* The user has to deal with the ring on their own */
+			res = -ESTALE;
+		}
+	}
 
 	if (waiter) {
 		spin_lock_irq(&ep->wq.lock);
--
2.19.1