Re: [RFC][PATCH] vhost/vsock: Add vsock_list file to map cid with vhost tasks

From: Steven Rostedt
Date: Fri May 07 2021 - 12:09:48 EST


On Fri, 7 May 2021 17:43:32 +0200
Stefano Garzarella <sgarzare@xxxxxxxxxx> wrote:

> >The start/stop of a seq_file() is made for taking locks. I do this with all
> >my code in ftrace. Yeah, there's a while loop between the two, but that's
> >just to fill the buffer. It's not that long and it never goes to userspace
> >between the two. You can even use this for spin locks (but I wouldn't
> >recommend doing it for raw ones).
>
> Ah okay, thanks for the clarification!
>
> I was worried because building with `make C=2` I had these warnings:
>
> ../drivers/vhost/vsock.c:944:13: warning: context imbalance in 'vsock_start' - wrong count at exit
> ../drivers/vhost/vsock.c:963:13: warning: context imbalance in 'vsock_stop' - unexpected unlock
>
> Maybe we need to annotate the functions somehow.

Yep, it should have been:

static void *vsock_start(struct seq_file *m, loff_t *pos)
__acquires(rcu)
{
[...]

}

static void vsock_stop(struct seq_file *m, void *p)
__releases(rcu)
{
[...]
}

static int vsock_show(struct seq_file *m, void *v)
__must_hold(rcu)
{
[...]
}


And guess what? I just copied those annotations from sock_hash_seq_start(),
sock_hash_seq_show() and sock_hash_seq_stop() from net/core/sock_map.c
which is doing exactly the same thing ;-)

So there's definitely precedent for this.
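
For reference, here's roughly what the fully annotated pattern looks like
when ->start() takes the RCU read lock and ->stop() drops it. This is just
an illustrative sketch with made-up names (example_seq_*), not the actual
vsock code:

#include <linux/seq_file.h>
#include <linux/rcupdate.h>

/* Sketch only: a single-record iterator, just to show where the
 * annotations go and which callback takes/drops the lock.
 */
static void *example_seq_start(struct seq_file *m, loff_t *pos)
__acquires(rcu)
{
	rcu_read_lock();		/* taken here, held across ->show() */
	return *pos ? NULL : SEQ_START_TOKEN;
}

static void *example_seq_next(struct seq_file *m, void *v, loff_t *pos)
{
	++*pos;
	return NULL;			/* nothing more to show in this sketch */
}

static int example_seq_show(struct seq_file *m, void *v)
__must_hold(rcu)
{
	seq_puts(m, "example\n");	/* runs under the RCU read lock */
	return 0;
}

static void example_seq_stop(struct seq_file *m, void *v)
__releases(rcu)
{
	rcu_read_unlock();		/* balances the lock from ->start() */
}

With that, sparse sees a matched acquire/release pair and the context
imbalance warnings go away.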

>
> >
> >>
> >> >+
> >> >+ iter->index = -1;
> >> >+ iter->node = NULL;
> >> >+ t = vsock_next(m, iter, NULL);
> >> >+
> >> >+ for (; iter->index < HASH_SIZE(vhost_vsock_hash) && l < *pos;
> >> >+ t = vsock_next(m, iter, &l))
> >> >+ ;
> >>
> >> A while() might have been more readable...
> >
> >Again, I just cut and pasted from my other code.
> >
> >If you have a good idea on how to implement this with netlink (something
> >that ss or netstat can display), I think that's the best way to go.
>
> Okay, I'll take a look and get back to you.
> If it's too complicated, we can go ahead with this patch.

Awesome, thanks!
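
For what it's worth, a while() version of the quoted loop would look
roughly like this (same behaviour as the for(;;) above, just a different
loop form):

	iter->index = -1;
	iter->node = NULL;
	t = vsock_next(m, iter, NULL);

	while (iter->index < HASH_SIZE(vhost_vsock_hash) && l < *pos)
		t = vsock_next(m, iter, &l);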

-- Steve