[langsec-discuss] Is computation half the story?
zeroskillor at zeroskillor.org
Sat Apr 4 16:11:09 UTC 2015
Dear LangSec people,
first of all, this is an interesting discussion, with a lot of topics
combined into one thread. I hope I have understood the conversation
correctly and can add some valuable information to it.
If the concerns are "just" security and so-called threat modelling, then
I have to say that the questions which came up during the discussion
reach the boundaries of what we know today about our "world" and bear on
how we model our universe and the systems we have. My points are highly
theoretical and philosophical.
1 ] -- The "Computational Ability" and "Informational Ability" Problem
The following quotation is a way of "answering" a different problem, but
it may be a good analogy for your question about "Computational Ability"
and "Informational Ability". In the end your robot has the same problem
as a human being, so the old philosophical problem called the
"mind-body problem" arises here too, I think:
"According to Searle then, there is no more a mind–body problem than
there is a macro–micro economics problem. They are different levels of
description of the same set of phenomena. [...] But Searle is careful to
maintain that the mental – the domain of qualitative experience and
understanding – is autonomous and has no counterpart on the microlevel;
any redescription of these macroscopic features amounts to a kind of [...]"
—Joshua Rust, John Searle
This is my favorite answer to the question, because boundaries/models
are sometimes helpful, but they cannot give you a complete picture of
everything.
2 ] -- Some Theory for Computational Ability and Informational Ability
in a general sense:
There is the idea of digital physics; perhaps we are referring to these
ideas, intentionally or not, in this discussion. One interesting
approach is by Jürgen Schmidhuber and has the interesting title
"Algorithmic Theories of Everything".
For an introduction, I would recommend the following:
and if you have a lot of time, I can recommend the full paper
"Algorithmic Theories of Everything".
3 ] -- "Ultimate" Security
Regarding the security aspects, the ultimate security would for me be an
automaton using, on the one hand, formally proven code/parsers for its
input and, on the other hand, formally proven hardware which is only
able to execute the proven input. The connection between N systems
should be organized over a formally proven protocol. So there would be
no room for weird machines in this formal system.
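To make the "execute only proven input" idea concrete, here is a minimal
sketch (my own illustration, not anything from the papers discussed): a
recognizer built from an explicit transition table stands in for the
formally verified parser, and the "machine" refuses to act on anything
the recognizer rejects. The toy language (ab)+ and the `execute` action
are hypothetical placeholders.

```python
def make_dfa(transitions, start, accepting):
    """Build a recognizer from an explicit transition table.

    Any symbol without a listed transition sends the machine to an
    implicit reject state, so malformed input is never executed.
    """
    def accepts(s):
        state = start
        for ch in s:
            state = transitions.get((state, ch))
            if state is None:       # undefined transition: reject
                return False
        return state in accepting
    return accepts

# Toy language: one or more "ab" pairs, i.e. (ab)+
recognizer = make_dfa(
    transitions={("q0", "a"): "q1", ("q1", "b"): "q2", ("q2", "a"): "q1"},
    start="q0",
    accepting={"q2"},
)

def execute(payload):
    """Act on input only after the recognizer has accepted it."""
    if not recognizer(payload):
        raise ValueError("rejected: input outside the recognized language")
    return len(payload) // 2        # placeholder for the real action
```

The point of the parse-then-execute split is that the execution stage
only ever sees members of a well-defined language, which is the part a
formal proof could actually be attached to.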
But in the end we will find a way of exploiting these through unknown
side channels, as we have seen in the past. We are operating in a
physical world, after all.
Thank you all for this interesting discussion. I hope the long read did
not bore you.
All the best
flo aka zeroskillor
On 04/02/15 04:57, Matt DeMoss wrote:
> Have you seen the paper, "Towards a Theory of Application
> Compartmentalisation?" The protocol-centered approach taken there jibes
> well with what you wrote about "informational ability."
> On Wed, Apr 1, 2015 at 10:46 PM, Andrew Ruef <munin at mimisbrunnr.net> wrote:
>> isn't this captured in the definitions of quantified information flow?
>>> On Apr 1, 2015, at 22:28, Taylor Hornby <havoc at defuse.ca> wrote:
>>>> On 04/01/2015 10:15 AM, Jacob Torrey wrote:
>>>> I've had similar thoughts, and a rather hasty blog post I wrote a while
>>>> back may be of interest:
>>>> - Jacob
>>> Thanks for that link. I'm glad to see others thinking about this!
>>> Your blog post inspired me to try to define "isolation" using Turing
>>> machines as a model. If you can do it for a Turing machine, then that
>>> should apply to any more specific model by the Church-Turing thesis.
>>> I failed terribly. I was trying to say something along the lines of: If
>>> A and B are disjoint subsets of tape indices, then A is isolated from
>>> B iff you can freeze the machine at any time, wiggle the tape cells in
>>> A, and the cells in B won't be affected by your wiggling for the
>>> remainder of the computation (and vice-versa).
>>> That doesn't work because the sets A and B have to depend on the input
>>> length (I'll omit the proof; consider the language of strings containing
>>> a "1").
>>> The whole notion doesn't make much sense for a Turing machine on
>>> a single input (we're just saying "these are cells the TM never
>>> meaningfully uses, even though it might read/write them"), but if you
>>> allow parts of the inputs to be chosen by different actors, the idea
>>> makes more sense.
>>> You can come up with a reasonable definition for a constant number of
>>> actors. If there are K actors, let A1, A2, ..., AK be disjoint sets and
>>> give the TM K read-only input tapes plus one work tape, where input tape
>>> i is contained in Ai, and so on...
>>> But that's not good enough. Real systems interact with an arbitrary
>>> number of actors, each wanting to be isolated from the others.
>>> So here's a question. Is it possible to give any TM-based definition of
>>> isolation that (1) doesn't depend on the number of actors or input
>>> length, and (2) is more insightful than
>>> On any K-tuple input (W1, W2, ..., WK) the machine outputs a K-tuple
>>> (R1, R2, ..., RK) and if Wi is fixed, Ri is fixed no matter how you
>>> change the other Wj's.
>>> That definition doesn't satisfy me because it has nothing to do with
>>> computation; it's just a property of a *function* that a TM might
>>> compute. It doesn't expose any Turing-machine internals to reason about.
>>> Is there a good definition that does?
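(Coming back to Taylor's K-tuple property above: even if it says nothing
about the machine's internals, it can at least be phrased as a
falsifiable predicate. The following is my own sketch, with the machine
abstracted to a plain function from input tuples to output tuples; names
like `violates_noninterference` are invented for illustration. Sampling
candidate inputs can only refute the property, never prove it.)

```python
from itertools import product

def violates_noninterference(machine, slot, candidates):
    """Search for a counterexample to: fixing input slot `slot`
    fixes output slot `slot`, no matter how the other inputs vary.

    `machine` maps a K-tuple of inputs to a K-tuple of outputs;
    `candidates` is a list of K-tuples of inputs to try. Returns a
    witness pair of input tuples, or None if no violation is found.
    """
    for w1, w2 in product(candidates, repeat=2):
        if w1[slot] == w2[slot] and machine(w1)[slot] != machine(w2)[slot]:
            return (w1, w2)   # other actors' inputs leaked into this slot
    return None

# A toy "machine": slot 0 depends only on its own input, but slot 1
# also depends on slot 0's input, so actor 1 is not isolated.
leaky = lambda w: (w[0] * 2, w[0] + w[1])
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
```

This only checks the extensional, function-level property Taylor finds
unsatisfying; it deliberately illustrates why the definition says
nothing about how the machine computes.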
>>> langsec-discuss mailing list
>>> langsec-discuss at mail.langsec.org