On Thu, Sep 24, 2015 at 9:17 PM, Ashish Samant <ashish.samant@xxxxxxxxxx> wrote:
> We did some performance testing without these patches and with these
> patches (with the -o clone_fd option specified). We did 2 types of tests:
>
> 1. Throughput test: We did some parallel dd tests to read/write to a FUSE
> based database fs on a system with 8 NUMA nodes and 288 cpus. The
> performance here is almost equal to the per-numa patches we submitted a
> while back. Please find results attached.
>
> With the new change, contention on the spinlock is significantly reduced,
> hence the latency caused by NUMA is not visible. Even in the earlier case,
> scalability was not a big problem if we bound all processes (fuse worker
> and user (dd threads)) to a single NUMA node. The problem was only seen
> when threads spread out across NUMA nodes and contended for the spin lock.

Interesting. This means that serving the request on a different NUMA
node than the one where the request originated doesn't appear to make
the performance much worse.
Thanks,
Miklos