Hey Ted! Funny having you chime in on this discussion. I think
I took your name in vain earlier on. :-)
On Wed, Apr 19, 2000 at 02:31:29PM -0400, Theodore Y. Ts'o wrote:
> Date: Wed, 19 Apr 2000 09:59:36 -0400
> From: "Michael H. Warfield" <mhw@wittsend.com>
> > First of all, you can mess up the address of the string by
> > randomizing the initial stack pointer. One might be willing
> > to use 10 to 16 bits of randomness. If a failed attack kills
> > the daemon, it is not likely the attack will succeed.
> Now this one actually has some prospects. Most exploits
> require some knowledge of at least a few addresses on the stack.
> 16 bits of randomness would walk your stack over 1/4 of a meg (stacks
> are aligned on at least a 4 byte boundary, remember). That's a lot
> of space. You could still do an effective job with only 6 bits, though.
> Random guessing gives 1 chance in 64, and you only get one shot at it...
> That's got some possibilities and would only consume 256 bytes of stack
> per process. You could scale that up if you thought it necessary or
> even make it run time configurable.
> Keep in mind that you don't necessarily get one shot at things ----
> apache for example has a watcher process which will restart worker
> processes that have core'd themselves. So you can try an arbitrary number
> of times to guess the stack pointer, until you finally get it right.
> The same is of course true of any program fired out of inetd.conf ---
> like telnetd, ftpd, etc.
Good points... Of course, the inetd case is not totally unbounded.
You'll run into the infamous "server respawning too quickly - shutting down
for five minutes" type problem, which will limit some of that kind of action
given large enough numbers. You still have the Murphy principle involved,
though, which says it WILL happen sooner or later.
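	Just to put rough numbers on it: assuming something like 40
spawns a minute before that five-minute lockout kicks in (the exact
throttle varies from inetd to inetd, so treat those figures as made up
for illustration), a blind guess against 16 bits of randomness works
out roughly like this:

#include <stdio.h>

int main(void)
{
	double positions   = 65536.0;          /* 2^16 possible stack bases */
	double avg_guesses = positions / 2.0;

	/* ASSUMED throttle: 40 spawns per minute, then 5 minutes locked out */
	double burst   = 40.0;
	double lockout = 5.0;
	double rate    = burst / (1.0 + lockout);   /* guesses per minute */

	printf("odds per guess : 1 in %.0f\n", positions);
	printf("average guesses: %.0f\n", avg_guesses);
	printf("average time   : %.1f hours\n", avg_guesses / rate / 60.0);
	return 0;
}

	With those assumed numbers it comes out to 80-odd hours of the
daemon coring itself every few seconds. Not exactly subtle, but as you
say, somebody somewhere will sit through it.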
So what would be reasonable? If we combined random starting
addresses within a memory page with random page addresses within a certain
range, at what point do we reduce the chances of success enough to
render an attack ineffective?
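	Just to make the mechanics concrete, something along these lines
is what I have in mind (an untested sketch with made-up names and
constants, not code from any real tree, and a real version would pull
its entropy from the kernel rather than random()):

#include <stdlib.h>

#define PAGE_SIZE    4096UL
#define PAGE_BITS    6        /* 2^6  = 64 candidate top pages       */
#define OFFSET_BITS  10       /* 2^10 = 1024 aligned slots per page  */

unsigned long randomize_stack_top(unsigned long stack_top)
{
	unsigned long page_slide, in_page;

	/* slide the whole stack down by 0..63 pages */
	page_slide = ((unsigned long)random() & ((1UL << PAGE_BITS) - 1))
			* PAGE_SIZE;

	/* then burn 0..4092 bytes inside the top page, 4-byte aligned */
	in_page = ((unsigned long)random() & ((1UL << OFFSET_BITS) - 1)) * 4UL;

	/* 6 + 10 = 16 bits of uncertainty in the final address */
	return stack_top - page_slide - in_page;
}

	The two sources of randomness just multiply, so the page bits and
the in-page bits add up to whatever total we decide we can afford.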
Ok... I understand the "at what point" part is a question that cannot
really be answered. There are too many independent variables, such as the
level of access, how rapidly an exploit can be delivered and recycled (the
inetd limitation), how valuable the trophy is (some attacks won't buy you
enough gain to make them worthwhile, but others could warrant days of
attacks), and how high the chance of detection is (noisy exploits would
have to be effective in fewer shots than "quiet" or stealth exploits).
If we grant an even distribution of starting addresses within
a memory page (how big is our granularity there? I haven't looked
at the memory management stuff in a looonnnggg time) we end up wasting
an average of 1/2 of the smallest allocation of memory per process to
implement this scheme. We also need a fairly significant address space
to map the stack at various page offsets, but I don't think that's a
serious concern given the address space that's available. We could
use the full 16 bits of randomness and reduce the probability to 1:65536
at the cost of 1/4 Meg of address space, while still only wasting an
average of 1/2 page of real memory at the smallest allocation.
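	Put as numbers, with the same assumptions as the sketch above
(4K pages on i386, 4-byte alignment, 16 bits total):

#include <stdio.h>

int main(void)
{
	unsigned long page  = 4096;          /* assumed i386 page size    */
	unsigned long slots = 1UL << 16;     /* 65536 possible positions  */
	unsigned long span  = slots * 4;     /* 4-byte granularity        */

	printf("address space reserved: %lu KB\n", span / 1024);
	printf("real memory wasted    : ~%lu bytes/process\n", page / 2);
	return 0;
}

	A quarter meg of per-process address space and a couple of KB of
real memory on average seems like a cheap price for 1:65536 odds.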
Any thoughts on whether there is any value in this approach and
how much?
> - Ted
Mike
--
  Michael H. Warfield    |  (770) 985-6132   |  mhw@WittsEnd.com
  (The Mad Wizard)       |  (770) 331-2437   |  http://www.wittsend.com/mhw/
  NIC whois:  MHW9       |  An optimist believes we live in the best of all
 PGP Key: 0xDF1DD471     |  possible worlds.  A pessimist is sure of it!