[langsec-discuss] fun in the turing tar pit

travis+ml-langsec at subspacefield.org travis+ml-langsec at subspacefield.org
Tue May 5 02:53:21 UTC 2015

I found this while searching for a quote about how fun debugging TeX
is, and it seemed to resonate:


	January 24th, 2008 

	Today's Social Issue is this: programmers adore Turing tar
	pits. Turing tar pits are addictive. You can render
	programmers completely paralyzed from any practical standpoint
	by showing them a tar pit and convincing them to jump into it.

	A Turing tar pit is a Turing-complete virtual machine that
	doesn't support straightforward implementation of
	bread-and-butter programmer stuff. Bread-and-butter programmer
	stuff includes arithmetic, variables, loops, data structures,
	functions and modules and other such trivial and awfully handy
	goodies. A canonical example of a Turing tar pit is the
	Brainf*ck programming language. Implementing a decimal
	calculator or a sokoban game in sed is also a consensus
	example of Turing tar pit swimming as far as I know. By
	"consensus", I mean that nobody writes software that people
	are actually supposed to use on top of those VMs.


	I'll tell you why it happens. When you write code in a
	full-featured programming language, clearly you can do a lot
	of useful things. Because, like, everybody can. So the job has
	to be quite special to give you satisfaction; if the task is
	prosaic, and most of them are, there's little pride you're
	going to feel. But if the language is crippled, it's a whole
	different matter. "Look, a loop using templates!" Trivial
	stuff becomes an achievement, which feels good. I like feeling
	good. Templates are powerful! What do you mean by "solving a
	non-problem?" Of course I'm solving real problems! How else
	are you going to do compile-time loops in C++? How else are
	you going to modify env vars of the parent shell?! I'm using
	my proficiency with my tools and knowledge of their advanced
	features to solve problems! Damn, I'm good!


	Seriously, it seems like in 85% of the contexts where
	something is called "powerful", it really means "useless and
	dangerous". Unlike most entries in the Modern Software
	Industry Dictionary, I don't consider this word a meaningless
	cheerleader noise. I think it actually carries semantics,
	making it a pretty good warning sign.

The more I do security reviews of source code, the more I think that
being able to reason that a certain portion of the code cannot do
something is valuable.  I (and security generally) deal, to a first
approximation, in negations: the code cannot violate the integrity of
the data, it cannot reveal it to the end user, and so on.  Just as the
absence of goto helped with understanding code, so too the absence of
eval helps me.

I'm musing out loud here, but even if unsafe constructs cannot be
banned outright, as I assume they cannot, making them explicit would
be quite helpful.  For example, requiring that user data be explicitly
untainted before use, or that private data be decrypted only through
an opaque handle.

A calculus of what cannot happen when certain code executes would help
as well.  Every programmer understands what language constructs do,
but the security reviewer has to reason about what they cannot do, and
right now there seems to be little help with that.

"Computer crime, the glamor crime of the 1970s, will become in the
1980s one of the greatest sources of preventable business loss."
John M. Carroll, "Computer Security", first edition cover flap, 1977