[langsec-discuss] Is computation half the story?

Jacob Torrey jacob at jacobtorrey.com
Wed Apr 1 16:15:29 UTC 2015


I've had similar thoughts, and a rather hasty blog post I wrote a while
back may be of interest:
http://blog.jacobtorrey.com/towards-a-new-model-of-computational-expressiveness

- Jacob

On Sun, Mar 29, 2015 at 8:50 PM, <travis+ml-langsec at subspacefield.org>
wrote:

> On Thu, Mar 26, 2015 at 06:22:59PM -0600, Taylor Hornby wrote:
> > ...wherein I make the distinction between a machine's computational
> > abilities (i.e. which languages can it decide?) and a machine's
> > "informational" abilities (i.e. how can the machine influence the
> > outside world? what APIs is it allowed to call?).
> >
> > I chose the term "informational" for lack of a better word because
> > it is about information entering and exiting the machine, or moving
> > between "parts" of the machine.
>
> I think this is referred to somewhat dismissively as "side effects" of
> computation, and they've always been underrated IMHO.
>
> It's actually very, very rare that I am (well, was) doing
> something that can be modelled as purely functional.  At least half
> of the work of any program I worked on was usually I/O of some kind.
>
> Obviously this is not true of biotech informatics and DOE stuff, but
> it's definitely true of video games; they are low-latency low-jitter
> I/O engines, not offline raytracing engines.
>
> So yes, but so far nobody has really decided what they want to model
> about I/O, or why.  It's messy and complicated and hard to measure
> because there are few guarantees from the underlying hardware.  For
> example, on modern hardware, array lookup isn't actually constant
> time:
>
> http://cr.yp.to/antiforgery/cachetiming-20050414.pdf
>
> We deal a lot with this in timing side channel attacks:
>
>
> http://www.subspacefield.org/security/security_concepts/index.html#toc-Subsection-31.2
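>
> As a minimal sketch of the simplest member of that family (early-exit
> comparison, rather than the cache-timing on table lookups in the paper
> above, but the same class of leak), in C:
>
>     #include <stddef.h>
>     #include <stdint.h>
>
>     /* Early-exit comparison: runtime depends on where the first
>      * mismatch occurs, so an attacker who can measure latency can
>      * recover a secret (e.g. a MAC) one byte at a time. */
>     int leaky_memeq(const uint8_t *a, const uint8_t *b, size_t n) {
>         for (size_t i = 0; i < n; i++)
>             if (a[i] != b[i]) return 0;
>         return 1;
>     }
>
>     /* Data-independent variant: always touch every byte and fold the
>      * differences into an accumulator, so timing doesn't depend on
>      * the secret (at the source level, anyway; compilers and the
>      * cache can still surprise you, which is the paper's point). */
>     int ct_memeq(const uint8_t *a, const uint8_t *b, size_t n) {
>         uint8_t acc = 0;
>         for (size_t i = 0; i < n; i++)
>             acc |= a[i] ^ b[i];
>         return acc == 0;
>     }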
>
> > I concluded the post by claiming computer science has no general theory*
> > of this property. We understand computation well from computability and
> > complexity theory, but "informational" capabilities are only understood
> > through limited models like ACLs, Bell-LaPadula, noninterference, etc.
> >
> > Those models are properties systems should have in order to be called
> > secure. I'm thinking more along the lines of starting with a given
> > system, then quantifying its "power" and proving theorems about what
> > it can and can't do. Most importantly, relating the power of one given
> > system to another given system.
>
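> Of the models on that list, noninterference at least has a crisp core
> statement one could try to generalize (Goguen-Meseguer, roughly): for
> a machine M whose input actions are split into high (H) and low (L),
>
>     forall traces t:  obs_L(M(t)) = obs_L(M(purge_H(t)))
>
> i.e., deleting every high action from an input trace must not change
> anything a low-level observer can see.
>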
> Doing static analysis, I deal a lot with "sources" and "sinks", which
> capture what can influence what.  A source is generally some input
> to the program, a sink is some sensitive consumer of that data, and
> occasionally you detect/model untaint routines that allow the data to
> flow without problem.  This untainting is rarely as effective as it
> seems.
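>
> A toy illustration of that pattern (a hypothetical program; the
> sanitizer is deliberately weak, which is exactly what I mean):
>
>     #include <stdio.h>
>     #include <stdlib.h>
>
>     /* An "untaint" routine that looks plausible but only strips
>      * semicolons; backticks, $(), newlines, etc. still get through. */
>     static void naive_untaint(char *s) {
>         for (; *s; s++)
>             if (*s == ';')
>                 *s = '_';
>     }
>
>     int main(int argc, char **argv) {
>         if (argc < 2)
>             return 1;
>         char cmd[256];
>         naive_untaint(argv[1]);  /* source: attacker-controlled argv */
>         snprintf(cmd, sizeof cmd, "ls %s", argv[1]);
>         return system(cmd);      /* sink: shell command execution */
>     }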
>
> I think a common security intuition involves some belief that most
> software is already deputized and easily confused, and that trust
> boundaries (e.g. process address space, the user/kernel barrier,
> separate systems, the guest-to-host VM boundary) are our best attempt
> at "firewalls" in the original sense of the word.  In addition to
> static analysis, which models what should happen by spec (code) on an
> idealized system, I model what can happen, with the assumption that
> only systems explicitly designed to be security barriers tend to be
> even moderately effective against knowledgeable attackers.  This level
> of generalization is necessary to cover "unknown unknown" deviations
> from ideal machines.  It exists in threat modelling and is commonly
> lumped under "security architecture", but I haven't seen it treated
> formally.
>
> So in some ways it is more like writing portable code, which deals
> with assumptions about the underlying systems and the guarantees they
> provide.  That's actually a useful metaphor: nothing is universally
> portable or universally secure, so one has to decide where to expend
> effort based on available resources and educated intuition about
> potential but hard-to-quantify risks.
>
> > I would appreciate references to the literature.
>
> Can't help you with the theory, as that seems to usually solve
> problems I don't have (never seen an infinite tape), but on the
> pragmatic side:
>
> "The Art of Software Security Assessment" by Dowd
> "Threat Modelling: Designing for Security" by Shostack
> --
> http://www.subspacefield.org/~travis/
> "Computer crime, the glamor crime of the 1970s, will become in the
> 1980s one of the greatest sources of preventable business loss."
> John M. Carroll, "Computer Security", first edition cover flap, 1977
>