dan at doxpara.com
Mon Jan 11 13:18:35 UTC 2016
On Monday, January 11, 2016, Scott Guthery <sbg at acw.com> wrote:
> ... in general we close off more bugs than we open nesting security layers.
> 1) The only situation in which this may be true is when a small team
> designs all the layers, all the way down to the iron. Even in this case
> there is no evidence to support the assertion and there are numerous
> anecdotes that deny it.
No one team designs all the layers, ever, because the story of computer
science is that of nesting layers all the way down to the analog (which
occasionally does bite us, hello Rowhammer).
Spend enough time in massive bug databases and eventually you care about
only one thing: who can reach this broken code? What level of access does
an attacker need to get here in the first place? Access restrictions
actually do compose.
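A minimal sketch of that composition claim (all names here are illustrative, not from any real system): a bug guarded by two independent access layers is reachable only by the intersection of the callers each layer admits.

```python
# Hypothetical sketch: two independent access layers compose.
# An attacker must satisfy BOTH predicates to reach the broken code;
# is_on_internal_network and is_authenticated are illustrative names.

def is_on_internal_network(request):
    # Layer 1: a crude network ACL (10.0.0.0/8 = "internal").
    return request.get("src_ip", "").startswith("10.")

def is_authenticated(request):
    # Layer 2: a stand-in application auth check.
    return request.get("token") == "valid"

def reachable_broken_code(request):
    if not is_on_internal_network(request):
        raise PermissionError("blocked at network layer")
    if not is_authenticated(request):
        raise PermissionError("blocked at auth layer")
    # Only callers that pass both layers can ever trigger this bug.
    return "buggy code executed"

# The set of callers who can reach the bug is the intersection of the
# sets each layer admits: restrictions compose by narrowing reach.
```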
Put another way, you _will_ compose systems that _do_ nest security
layers, and that's not _always_ bad. Cross-layer bugs are a thing (and I
exploit them ruthlessly), but that doesn't mean you shouldn't have multiple
layers, or that they don't generally play pretty well together.
Layering is how humans make systems scale. To be honest, it's also how we
secure things: by squeezing interactions between subsystems into a language
we can model (as opposed to flat memory spaces).
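A toy illustration of that squeezing (this is my sketch, not anything from the thread): instead of letting subsystems poke at each other's raw state, every cross-layer interaction must parse as one of a tiny, enumerable set of operations we can actually reason about.

```python
# Hedged sketch: cross-subsystem interactions forced through a tiny
# "language" (two operations) instead of a flat shared memory space.
# The op names and store are illustrative.

_store = {}  # internal state, never touched directly by callers

ALLOWED_OPS = {"read", "write"}  # the entire interaction language

def handle(msg):
    # Every interaction must parse as an allowed op; anything that
    # falls outside the modeled language is rejected up front.
    op = msg.get("op")
    if op not in ALLOWED_OPS:
        raise ValueError("unmodeled interaction rejected")
    if op == "read":
        return _store.get(msg["key"])
    _store[msg["key"]] = msg["value"]
    return "ok"
```

Because the interface is this small, you can enumerate and model every way one subsystem can affect another, which is exactly what a flat memory space denies you.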
> 2) People can write code faster than they can find and fix bugs.
> 3) The number of bugs is in direct proportion to lines of code.
...sort of true. It depends on what you mean by "bugs." For example, you
can write an infinite number of lines of code that can't be reached by
anyone but root, and add no security bugs, because the attacker is already
root. However, you can add many other types of bugs, and generally will.
> All that said, isn't the point to not create bugs in the first place?
> (Unless, of course, you're paid to find them. Low-paid code writers and
> high-paid code fixers brings to mind one hand washing the other. See
> software contracts for Obamacare connectors. )
The point is to not have exploitable bugs, however we may achieve that. As
it happens, preventing bugs early is great, since bugs get exponentially
more expensive the longer they take to find. But strictly speaking, we're
looking to protect systems.
> Cheers, Scott
> P.S. Wouldn't it be more honest to start calling them 'faults' or 'errors'
> or 'failures' rather than 'bugs'?