[langsec-discuss] the frontier of decidability & TFP
paulburchard at gmail.com
Fri Jun 12 00:54:05 UTC 2015
I asked a similar question in this forum a while back and Evan Sultanik was
kind enough to supply some good references. Here they are, along with my
summary after reading.
On May 28, 2014 1:59 AM, "Paul Burchard" <paulburchard at gmail.com> wrote:
> Thanks again for all the great references. Here is my initial take on
the underlying theoretical foundations.
> In terms of pure language restrictions, the basic one underlying these
approaches is the restriction to bounded recursion (primitive recursive
functions). This seems like a useful compromise: it allows full security
analysis, in time bounded by the recursion bounds, at the cost of reduced
computational power. It is the same type of restriction Peter mentioned in
going from general finite state machines to acyclic ones, which still
appears to have substantial applications.
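The primitive recursive restriction can be made concrete with a small sketch (Python, with purely illustrative names): every function built from an explicit recursor performs a number of steps fixed by its input, so an analyzer can bound its running time up front.

```python
# A minimal sketch of the primitive recursive fragment via an explicit
# recursor: f(0) = base, f(n + 1) = step(n, f(n)). The step count is
# fixed by the input, so running time is bounded before execution.
# All names here are illustrative, not from any particular system.

def primrec(base, step):
    def f(n):
        acc = base
        for i in range(n):
            acc = step(i, acc)
        return acc
    return f

add3 = primrec(3, lambda i, acc: acc + 1)    # n + 3
double = primrec(0, lambda i, acc: acc + 2)  # 2 * n

print(add3(4), double(5))  # 7 10
```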
> A different approach is to restrict the security questions that may be
asked in exchange for greater (but still not Turing complete) computational
power. These approaches are characterized by clear security questions but
rather implicit models of computation. The static analysis tools you
mention (for example to prevent recursion-based DoS) are pessimistic
predictors of certain types of bad behavior; they can be thought of as
defining a reduced computational model that eliminates some perfectly good
code (to continue the example, eliminating any code where tainted inputs
can affect recursion bounds, a sort of relativized bounded recursion).
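The pessimistic rule just described (reject any code where tainted inputs can affect recursion bounds, even if the code is in fact well behaved) can be sketched as a simple taint propagation over assignments; the tiny program representation below is hypothetical.

```python
# Sketch of a pessimistic taint check: reject any program in which an
# attacker-controlled value can flow into a loop bound. The straight-line
# assignment IR here is purely illustrative.

TAINTED = {"user_input"}

def loop_bound_is_tainted(assignments, bound_var):
    """Propagate taint to a fixed point through (dst, [srcs]) pairs,
    then ask whether the loop bound depends on a tainted source."""
    tainted = set(TAINTED)
    changed = True
    while changed:
        changed = False
        for dst, srcs in assignments:
            if dst not in tainted and any(s in tainted for s in srcs):
                tainted.add(dst)
                changed = True
    return bound_var in tainted

# n = user_input; m = n + one; a loop bounded by m is rejected.
prog = [("n", ["user_input"]), ("m", ["n", "one"])]
print(loop_bound_is_tainted(prog, "m"))    # True  (rejected)
print(loop_bound_is_tainted(prog, "one"))  # False (accepted)
```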
Another approach of this type, pointed out to me by Matt DeMoss, is called
"secure multi-execution"; it guarantees that a partially ordered set of
security levels is respected, at the cost of time linear in the number of
security levels (the code is executed once per level). Notably, only
misbehaving code has its behavior altered, while conforming code can be
fully Turing complete.
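One way to picture this is a minimal two-level sketch, assuming just LOW and HIGH levels and a single maskable input; the names and structure are illustrative, not the published algorithm.

```python
# Minimal sketch of secure multi-execution over two levels {LOW, HIGH}:
# the program runs once per level, and in the LOW run the HIGH input is
# replaced by a fixed default, so the LOW output cannot depend on HIGH
# data. Illustrative only; not the full partially-ordered construction.

LOW, HIGH = 0, 1
DEFAULT = 0

def sme(program, low_in, high_in):
    outputs = {}
    for level in (LOW, HIGH):
        # Inputs above the current level are masked with a default.
        visible_high = high_in if level >= HIGH else DEFAULT
        low_out, high_out = program(low_in, visible_high)
        # Each run supplies only the outputs at its own level.
        outputs[level] = low_out if level == LOW else high_out
    return outputs

# A leaky program: it copies the HIGH input to the LOW output.
def leaky(low, high):
    return high, high

print(sme(leaky, low_in=7, high_in=42))  # {0: 0, 1: 42} -- leak suppressed
```

Well-behaved programs (whose LOW output never depends on HIGH input) compute the same outputs under this scheme as under normal execution; only the leaky one is altered.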
> Finally, there is the approach of not restricting the computational power
but using heuristics for answering security questions, which may be
successful in many practical cases but could potentially be at least
exponentially expensive. You can think of this as making the set of
security questions that can be feasibly answered rather implicit. The
constraint solvers you mention fall into this type, for example
transforming the security question into a SAT instance (which can be
exponentially expensive to solve in the worst case).
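To make that worst case explicit, here is a brute-force SAT check (not a real solver like STP or Z3): the security question is encoded as clauses and solved by enumerating all 2^n assignments. The encoding of the example question is hypothetical.

```python
# Sketch: reducing a question to SAT and solving by brute force, which
# makes the worst-case exponential cost explicit. A clause is a list of
# signed variable indices: positive means the variable must be True,
# negative means False (DIMACS-style, illustrative only).

from itertools import product

def sat(clauses, n_vars):
    """Return a satisfying assignment, or None. O(2^n) worst case."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits
    return None

# "Can the auth check be skipped (x1) while the log is silenced (x2)?"
# encoded, say, as (x1) AND (x2) AND (NOT x1 OR NOT x2): unsatisfiable.
print(sat([[1], [2], [-1, -2]], 2))  # None
print(sat([[1, 2], [-1, -2]], 2))    # a model exists
```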
> Let me know if I have misunderstood anything.
> On Mon, May 26, 2014 at 08:55:59PM -0700, Paul Burchard wrote:
> > What are known results mitigating the basic computer science problem
> > that it's normally expensive or impossible to answer the fundamental
> > language security question: what might some code do with the powers
> > it's given?
> It depends on what question you're ultimately trying to answer. While
> answering the question,
> "What might this code do *under all possible inputs* with the
> powers it is given?"
> is usually intractable, if not undecidable, it is important to
> remember that solving the "dual" of the problem is often much easier:
> "Does there exist an input that causes this code to [perform a
> specific behavior]?" In other words, it is often much easier to find
> *at least one* certificate (i.e., input) that exercises a certain
> desired behavior (e.g., halting). If there are only a limited number
> of behaviors about which one is concerned, this approach can often be
> tractable, if not even efficient.
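This "dual" can be sketched as a plain enumeration over candidate inputs, stopping at the first certificate; the target function below is of course hypothetical.

```python
# Sketch of the dual approach: rather than proving a property over all
# inputs, search for a single certificate input that triggers the
# behavior of interest. The target function is hypothetical.

def target(x):
    # Misbehaves on one specific input.
    if x == 1337:
        raise RuntimeError("boom")
    return x * 2

def find_certificate(behavior, inputs):
    """Return the first input exercising `behavior`, or None."""
    for x in inputs:
        try:
            target(x)
        except RuntimeError:
            if behavior == "crash":
                return x
    return None

print(find_certificate("crash", range(10_000)))  # 1337
```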
> I may be side-stepping your actual question here, but my point is that
> it can often be fruitful to constrain the set of inputs as opposed to
> constraining the language.
> There are already many program analysis techniques [1,2] like concolic
> execution that provide an "oracle" for answering questions like, "Does
> there exist a set of inputs that causes [a specific pointer] to be
> used after it is freed?" Constraint solvers [3,4] allow us to query
> for inputs that exercise almost arbitrarily complex behaviors, many of
> which are tractable to solve. There has also been some recent success
> at using machine learning to classify the behavioral characteristics
> (as opposed to logical flow) of black-box binaries [5]. Liveness and
> deadlock can also be relatively efficiently determined using solely
> static analysis and model-checking [6,7].
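As a toy illustration of the model-checking side, here is an explicit-state deadlock search over a finite transition system (two threads acquiring two locks in opposite order); the encoding is illustrative, not any particular tool's.

```python
# Sketch of explicit-state model checking for deadlock: breadth-first
# search of a finite transition system, flagging any reachable non-final
# state that has no successors. The two-lock system is a toy example.

from collections import deque

def find_deadlock(initial, successors, is_final):
    """Return a reachable, non-final state with no successors, or None."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        succs = successors(state)
        if not succs and not is_final(state):
            return state
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return None

# State: (pc1, pc2, holderA, holderB). pc 0 = start, 1 = holds its first
# lock, 2 = finished (both of its locks released on the final step).
def successors(state):
    pc1, pc2, a, b = state
    out = []
    if pc1 == 0 and a is None: out.append((1, pc2, 1, b))        # T1 takes A
    if pc1 == 1 and b is None: out.append((2, pc2, None, None))  # T1 takes B, finishes
    if pc2 == 0 and b is None: out.append((pc1, 1, a, 2))        # T2 takes B
    if pc2 == 1 and a is None: out.append((pc1, 2, None, None))  # T2 takes A, finishes
    return out

is_final = lambda s: s[0] == 2 and s[1] == 2

# The classic interleaving where each thread holds one lock is found.
print(find_deadlock((0, 0, None, None), successors, is_final))  # (1, 1, 1, 2)
```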
> <shameless-self-promotion>My employer, Digital Operatives [8], has
> developed a static analysis tool for detecting liveness and performing
> arbitrary constraint satisfaction and optimization on
> Now, back to your actual question. It's clear that
> non-Turing-complete languages can be useful. For example,
> Hofstadter's old BlooP [9] language can do useful computations like
> computing factorials; however, it explicitly disallows unbounded
> (infinite) loops, so the halting problem is decidable for it. A
> somewhat more recent---but still old---example is the CHARITY
> programming language [10], which uses category theory [11] to
> guarantee termination. Finally, you probably want to read the Hume
> Report [12], which describes a hierarchical system architecture that
> provides layers of non-Turing-completeness for security.
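BlooP's bounded loop can be mimicked in a few lines: the trip count is evaluated once, before the body runs, so the body cannot extend its own loop and every program in the fragment halts. A small sketch, with illustrative names:

```python
# Sketch of a BlooP-style bounded loop: the trip count is fixed before
# the body executes, so the body cannot grow its own bound and every
# program in this fragment provably terminates.

def bloop_loop(n_times, body, state):
    """Run `body` exactly `n_times` times; the bound cannot change."""
    for _ in range(n_times):
        state = body(state)
    return state

# Factorial in the bounded fragment:
# LOOP n TIMES over (i, acc) -> (i + 1, acc * (i + 1)).
def factorial(n):
    _, acc = bloop_loop(n, lambda s: (s[0] + 1, s[1] * (s[0] + 1)), (0, 1))
    return acc

print(factorial(6))  # 720
```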
> - Evan
> [1] http://llvm.org/pubs/2008-12-OSDI-KLEE.html
> [2] http://s2e.epfl.ch/
> [3] http://chicory.stanford.edu/PAPERS/STP-ganesh-07.pdf
> [4] http://z3.codeplex.com/
> [5] http://www.blackhat.com/us-13/briefings.html#Saxe
> [6] http://www.cs.utexas.edu/~shmat/shmat_csf09.pdf
> [8] http://www.digitaloperatives.com/
> [9] http://en.wikipedia.org/wiki/BlooP_and_FlooP
> [10] http://pll.cpsc.ucalgary.ca/charity1/www/home.html
> [11] http://en.wikipedia.org/wiki/Category_theory
> [12] http://www-fp.cs.st-andrews.ac.uk/hume/report/hume-report.pdf
> Evan A. Sultanik, Ph.D.
> Chief Scientist
> Digital Operatives, LLC
> Mobile: +1 215 919 7234
On Jun 11, 2015 6:55 PM, <travis+ml-langsec at subspacefield.org> wrote:
> When we have problems that are undecidable, such as the halting
> problem, the HRU safety problem, or Rice's Theorem, I wonder how
> frequently they are undecidable in practice, and precisely what
> restrictions one can place on a program to make it decidable. I see
> there's some
> For example, I rarely encounter self-modifying code in "well behaved"
> programs*, and its negative effect on halting decidability should be
> obvious.
> [*] Old-style signal trampoline and lazy-binding excepted;
> that's systems-level deep magic.
> The idea is two-fold:
> 1) To precisely specify a model of computation more powerful than a
> PDA and less powerful than a TM which can still be analyzed.
> 2) To define the category of analyzable programs such that one can
> reason about them, and assume the worst about programs which
> deliberately evade analysis ("encourage the transparent, penalize the
> tricky").
> Yay! While writing/researching this question I may have found the
> answers to it:
> [A] termination analysis is program analysis which attempts to
> determine whether the evaluation of a given program will
> definitely terminate. Because the halting problem is
> undecidable, termination analysis cannot be total. The aim is
> to find the answer "program does terminate" (or "program does
> not terminate") whenever this is possible. Without success the
> algorithm (or human) working on the termination analysis may
> answer with "maybe" or continue working infinitely long.
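The three-valued answer in the quote above can be illustrated by a (necessarily partial) analyzer for loops of the shape `while x > 0: x += d`; the loop shape and names are illustrative.

```python
# Sketch of a partial termination analysis for "while x > 0: x += d".
# It answers "terminates", "loops", or "maybe", and is never wrong:
# a strictly decreasing x is a ranking function, a non-decreasing x
# starting positive never escapes, and an unknown increment gets "maybe".

def analyze(x0, d):
    if x0 <= 0:
        return "terminates"  # the body never runs
    if d is None:
        return "maybe"       # increment not statically known
    return "terminates" if d < 0 else "loops"

print(analyze(10, -1), analyze(10, 1), analyze(10, None))
```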
> Total functional programming (also known as strong functional
> programming, to be contrasted with ordinary, or weak
> functional programming) is a programming paradigm that
> restricts the range of programs to those that are provably
> terminating.
> Although those links will require deep meditation, I figured they
> may interest other readers, so I didn't just scrub this whole
> thing. The last link, TFP, is particularly exciting, as one
> could potentially have programs with specific construction, contracts,
> or proofs which could allow the user to execute them while limiting
> their powers. In essence, the burden of proof for security is shifted
> from the user to the program author.
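The TFP idea of trading general recursion for structural recursion can be sketched with a fold: if the only recursion scheme available consumes its argument, termination comes for free. A minimal sketch, with illustrative names:

```python
# Sketch of the total-functional restriction: programs are written
# against a structural recursion scheme (a fold over lists), so every
# recursive call consumes part of the input and termination follows.

def fold(xs, nil, cons):
    """Structural recursion on a list: the only recursion allowed."""
    acc = nil
    for x in reversed(xs):
        acc = cons(x, acc)
    return acc

length = lambda xs: fold(xs, 0, lambda _, n: n + 1)
total  = lambda xs: fold(xs, 0, lambda x, n: x + n)

print(length([5, 6, 7]), total([5, 6, 7]))  # 3 18
```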
> Incidentally, not terminating or being misbehaved is a show-stopper
> for executing in kernel mode, which would be a very efficient place to
> run such code.
> "Computer crime, the glamor crime of the 1970s, will become in the
> 1980s one of the greatest sources of preventable business loss."
> John M. Carroll, "Computer Security", first edition cover flap, 1977
> langsec-discuss mailing list
> langsec-discuss at mail.langsec.org