Re: DRAM unreliable under specific access pattern

From: Pavel Machek
Date: Thu Jan 08 2015 - 11:53:04 EST


On Thu 2015-01-08 13:03:25, One Thousand Gnomes wrote:
> On Mon, 5 Jan 2015 18:26:07 -0800
> Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:

> > > I don't know for sure, but it looks likely to me according to a claim
> > > in the paper (8MB). But it still can be somebody else's data: a 644
> > > file on hugetlbfs mmap()ed r/o by anyone.
>
> That's less of a concern, I think. As far as I can tell, what actually
> gets hit depends on how the memory is wired. I'm not clear on whether
> it's within the range or not.

I think it can hit outside the specified area, yes.

> > > When I read the paper I thought that the vdso would be an interesting
> > > target for the attack, but with all these constraints in place, it's
> > > hard to aim the attack at anything widely used.
> > >
> >
> > The vdso and the vvar page are both at probably-well-known physical
> > addresses, so you can at least target the kernel a little bit. I
> > *think* that kASLR helps a little bit here.
>
> SMEP likewise if you were able to use 1GB to corrupt matching lines
> elsewhere in RAM (eg the syscall table), but that would I think depend
> how the RAM is physically configured.
>
> That's why the large-TLB case worries me. With 4K pages, and to an extent
> with 2MB pages, it's actually quite hard to line up an attack if you know
> something about the target. With 1GB hugepages you control the lower bits
> of the physical address precisely. The question is whether that merely
> enables you to decide where to shoot yourself, or whether it goes beyond
> that?

I think you shoot pretty much randomly. Some cells are more likely to
flip and some are less likely, but that depends on the concrete DRAM chip.

> (Outside HPC anyway: for HPC cases it bites both ways I suspect - you've
> got the ability to ensure you don't hit those access patterns while using
> 1GB pages, but also nothing to randomise stuff to make them unlikely if
> you happen to have worst case aligned data).

I don't think it is a problem for HPC. You really can't do this by
accident. You need a very specific pattern of DRAM accesses. Make it 10
times slower, and the DRAM can handle it.

Actually, I don't think you can trigger it without executing cache-flush
instructions.


Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html