darren.highfill at us.pwc.com
Fri Jan 15 15:10:09 UTC 2016
The point in this thread that I haven't been able to get away from is the question of what effect layering, modularity, and composition (called LMC here for convenience) have on the propagation of unintended design attributes. Does LMC have a net positive or negative effect on security?
I started to form a mental image of LMC resembling a graphical portrayal of integral calculus: as the block size approaches either extreme (very small or very large), the chart (or program) starts to look monolithic again.
If, in fact, LMC has some effect, positive or negative, this would imply an optimal way to use it. Specifically, we would seek an optimal block size for LMC, presumably up to a point of diminishing returns where other factors start to dominate.
Are we saying that this optimal size corresponds to how much code one person or another can hold in their head? Do we have any research that points to a correlation between bug density and module size? What does the module size to bug density curve look like?
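One way to make the question concrete is a toy model. The sketch below is purely illustrative, not empirical: it assumes (hypothetically) that very small modules pay a fixed per-module interface/integration overhead, while very large modules grow superlinearly in comprehension cost, which together yield a U-shaped defect-density curve with an interior optimum. The function name and both parameters are made up for the sketch.

```python
import math

def defects_per_kloc(size_kloc, a=2.0, b=0.05):
    """Toy model, not real data: a/s captures per-module integration
    overhead (dominates when modules are tiny), b*s captures growth in
    complexity with size (dominates when modules are huge)."""
    return a / size_kloc + b * size_kloc

# The minimum of a/s + b*s sits at s* = sqrt(a/b), found by setting
# the derivative -a/s^2 + b to zero.
optimal = math.sqrt(2.0 / 0.05)

print(f"toy optimum: {optimal:.1f} KLOC per module")
for s in (0.5, 2.0, optimal, 20.0, 50.0):
    print(f"{s:6.1f} KLOC -> {defects_per_kloc(s):.2f} defects/KLOC")
```

If bug density really did follow a curve of this shape, "how much code one person can hold in their head" would show up in the model as the point where the second term starts to dominate; whether the empirical curve looks anything like this is exactly the research question above.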
> On Jan 12, 2016, at 2:53 PM, Meredith L. Patterson <clonearmy at gmail.com> wrote:
>> On Tue, Jan 12, 2016 at 8:02 PM, Will Sargent <will.sargent at gmail.com> wrote:
>>> On Tue, Jan 12, 2016 at 10:29 AM, Dan Kaminsky <dan at doxpara.com> wrote:
>>> It's not random, how people use a programming language. It's weird, people treat code like math instead of the cognitive science problem it actually is.
>> There is a "cryptographers speak math and expect results" problem, but security researchers also say things like "untrusted input" and expect app programmers to know what that means. So there's a cultural approach problem as well as a language problem.
> Well, the cultural problem is that there are a lot of overlapping dialects for talking *about* programming, all of them ontologically specialised, even if the end products of each one of them run on the same architectures. Each one is an attempt to solve the cognitive science problem Dan's talking about.
> I'd argue that people are used to treating code like math because historically math has been the main thing that code is responsible *for*. Arguably this is still true today (see: embedded), but these days people more and more need to treat code as a decision-making process. Which is what makes "what kinds of decisions can be made deterministically vs. which ones have to be made heuristically" an interesting boundary question.