Questions about process statistics

From: wy11
Date: Sun Jan 22 2017 - 00:05:25 EST



Hello,

Recently I noticed that in earlier kernel versions, the scheduler was vulnerable to the tick-based time-accounting attack described in
http://static.usenix.org/event/sec07/tech/full_papers/tsafrir/tsafrir_html/
which has since been fixed by the introduction of CFS and nanosecond-granularity accounting.

However, the statistics exported from the kernel to /proc/stat still appear to be updated on every tick by update_process_times, and their granularity is jiffies.
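To illustrate the granularity point: the "cpu" lines in /proc/stat are cumulative counts of clock ticks (USER_HZ), so userspace has to divide by the tick rate to get seconds, and no event shorter than a tick is visible. A minimal sketch, with an illustrative sample line (the field names follow proc(5); on Linux the tick rate can be queried with os.sysconf('SC_CLK_TCK')):

```python
# Each value in a "cpu" line of /proc/stat is a cumulative tick count
# (USER_HZ, commonly 100), not nanoseconds, so resolution is one jiffy.
def parse_cpu_line(line, ticks_per_sec):
    fields = line.split()
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    # Convert tick counts to seconds for the first seven fields.
    return {name: int(value) / ticks_per_sec
            for name, value in zip(names, fields[1:])}

# Illustrative values, not taken from a real machine:
sample = "cpu  4705 150 1120 1550000 220 0 35"
seconds = parse_cpu_line(sample, 100)   # assuming USER_HZ == 100
```

Anything a process does between two ticks is simply absent from these counters.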

In my view, userspace applications that rely on these statistics would still be exposed to a time-accounting attack, since a process can run entirely between two ticks and so evade being accounted (please correct me if I'm wrong). Is there a particular reason that /proc/stat only achieves jiffy granularity? Would it be possible to update the statistics on every context switch instead of on every tick, and to read the TSC for a more accurate time value?
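The evasion pattern I have in mind is the one from the Tsafrir paper: burn CPU for less than one tick, then sleep across the tick boundary so the sampling in update_process_times never observes the process on-CPU. A rough sketch under assumed HZ=100 (timings are illustrative, and on kernels with nanosecond accounting the charged time will no longer be near zero):

```python
import os
import time

TICK = 1.0 / 100           # assuming HZ == 100, i.e. a 10 ms tick

def burn(seconds):
    # Busy-loop on the CPU for roughly the given wall-clock duration.
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

start_cpu = os.times()
for _ in range(20):
    burn(TICK * 0.5)       # run for about half a tick...
    time.sleep(TICK)       # ...then sleep across the tick boundary
end_cpu = os.times()

# Under purely tick-sampled accounting, the charged user time here could
# be far less than the wall-clock CPU time actually consumed.
charged = end_cpu.user - start_cpu.user
```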

Also, I noticed that the acct_rss_mem1/acct_vm_mem1 fields in task_struct are updated on every tick, so a malicious process is able to occupy a large amount of memory between two ticks without it being accounted. Would it be possible to update the accumulated memory every time the memory size changes (for example, in insert_page), by adding the previous size multiplied by the elapsed time interval? I'd like to know whether this could help avoid the time-accounting attack and achieve more accurate statistics.
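The event-driven scheme proposed above can be sketched as follows: instead of sampling the current size on every tick, charge the old size for exactly the interval it was held, whenever the size changes. This is a hypothetical illustration, not the kernel's actual accounting code:

```python
# Hypothetical event-driven memory-time accounting: accumulate
# (previous size) * (elapsed time) at every size change, so no
# between-tick allocation escapes the integral.
class MemAccount:
    def __init__(self, now):
        self.size = 0          # current memory size (e.g. in pages)
        self.last = now        # timestamp of the last size change
        self.mem_time = 0      # integral of size over time (page-seconds)

    def resize(self, new_size, now):
        # Charge the old size for the interval it was actually held.
        self.mem_time += self.size * (now - self.last)
        self.size = new_size
        self.last = now

acct = MemAccount(now=0)
acct.resize(100, now=1)    # grow to 100 pages at t=1
acct.resize(0, now=3)      # shrink to 0 at t=3: held 100 pages for 2s
# acct.mem_time is now 200 page-seconds
```

With tick sampling, the same process could have held those 100 pages entirely between two ticks and been charged nothing.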

I'd appreciate it if you could answer my questions. Thanks a lot.

Best Regards,

Wenqiu