[langsec-discuss] Harmful Consequences of Postel's Maxim
travis+ml-langsec at subspacefield.org
Wed Jul 8 22:38:01 UTC 2015
On Mon, Jul 06, 2015 at 11:32:26AM -0700, Derick Winkworth wrote:
> Note section 6.
> The authors of the draft propose replacing this maxim with:
> "Protocol designs and implementations should be maximally strict."
Neat. Great first step.
It's probably not that simple.
It would be wise to study variations in widely deployed protocols with
a great deal more detail than the HTTP/1.1 spec:
Zalewski's "The Tangled Web"
Ivan Ristic's "Bulletproof SSL and TLS"
nmap TCP/IP fingerprinting
In general, DWIM (error-correcting) systems are probably dangerous,
but unfortunately common in browsers due to poor HTML practices in
the early days. Whether or not it was a good idea at the time, it
affects us negatively now.
Another issue is market-leading software non-compliance. For example,
you can't read most of my emails on an iPad because it does not show
multipart MIME emails. Outlook shows my signatures as attachments
with the filename ATTrandomnumber.DAT which confuses people even when
I explain it in my signature file. Exchange cloud silently drops
them. I've not signed this email so that such people can actually
read it. Typically, OS vendors simply don't care about products
beyond some work/marketshare threshold, whereas optional software
must work better to get people to download it. I'm wondering if I
should simply give up. How to incentivize large market-making "good
enough" vendors to implement the full spec or none at all is an
interesting question. Actually, this RFC points to the incentive
structure, but only implicitly; fixing incentives is the only way to
fix these problems - cf. Ross Anderson's Security Economics
publications.
It's often safest for systems to parse into an intermediate form (an
AST), using only known extensions and tags, and reserialize using a
very conservative (strict) encoding - the "conservative in what you
send" half of Postel's Maxim.
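A minimal sketch of that parse-then-reserialize pattern, using JSON as a stand-in wire format (the KNOWN_KEYS whitelist and field names are hypothetical):

```python
import json

# Hypothetical whitelist of fields this implementation understands.
KNOWN_KEYS = {"version", "method", "payload"}

def canonicalize(raw: bytes) -> bytes:
    """Parse strictly, keep only known fields, reserialize conservatively."""
    msg = json.loads(raw)              # strict parse; malformed input raises
    if not isinstance(msg, dict):
        raise ValueError("top-level object required")
    known = {k: msg[k] for k in msg if k in KNOWN_KEYS}
    # Conservative re-encoding: sorted keys, no extra whitespace, ASCII only.
    return json.dumps(known, sort_keys=True,
                      separators=(",", ":"), ensure_ascii=True).encode()
```

Unknown extensions are dropped on the way through, so downstream consumers only ever see the strict canonical form.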
TLS has an interesting method for distinguishing, at the protocol
level, a required versus optional tag (in the TLV format), so that an
implementation can decide to fail or ignore the tag/extension.
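Setting aside TLS's actual extension mechanics, the general idea can be sketched with a toy TLV format in which a high bit on the tag marks it as critical (the tag numbers and layout here are invented):

```python
CRITICAL = 0x80            # high bit set: receiver must understand this tag
KNOWN_TAGS = {0x01, 0x02}  # tags this (hypothetical) implementation handles

def parse_tlv(data: bytes) -> dict:
    """Parse tag-length-value records; fail on unknown critical tags,
    silently skip unknown optional ones."""
    fields, i = {}, 0
    while i < len(data):
        tag, length = data[i], data[i + 1]
        value = data[i + 2 : i + 2 + length]
        if len(value) != length:
            raise ValueError("truncated record")
        base = tag & ~CRITICAL
        if base in KNOWN_TAGS:
            fields[base] = value
        elif tag & CRITICAL:
            raise ValueError(f"unknown critical tag {base:#x}")
        # else: unknown optional tag, safe to ignore
        i += 2 + length
    return fields
```

The point is that the sender, not the receiver, declares whether ignoring a field is acceptable, so the fail-or-ignore decision is part of the protocol rather than implementation folklore.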
This subject (protocol design for robustness in the face of variegated
implementations) needs vastly more study by some polymaths who can
work at both high and very low levels.
In particular, concerns revolve around:
Ambiguous wording in the standard
Where to check inputs / preconditions / validity
Dealing with incorrectness
The idea of where to check input is interesting. Trying to check
preconditions at every function entry point can lead to extremely
brittle code. Some things like randomness are properties of the
source (provenance) not testable properties of the data itself.
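One alternative to re-checking preconditions at every entry point is to validate once at the trust boundary and hand internal code a distinct type that can only be built via that validation; a sketch (Port and parse_port are illustrative names, not from the original):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    """Validated-once wrapper: constructed only at the trust boundary."""
    number: int

def parse_port(raw: str) -> Port:
    n = int(raw)                 # raises ValueError on non-numeric junk
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return Port(n)

def connect(port: Port) -> str:
    # Internal code takes the validated type and need not re-check it.
    return f"connecting to port {port.number}"
```

Internal functions then carry the guarantee in their signatures instead of in brittle repeated assertions.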
On the last point, see Ferguson and Schneier, Cryptography
Engineering, esp. Sec 8.4.3 Assertions. How to respond to invalid
inputs is hard. Simply exiting is not always a good idea. What if
you are a security monitoring system? OS kernel? A plug-in library
that runs in-process in the browser (which would take out every tab)?
What if exiting and respawning trigger excessive resource consumption?
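For a long-running system like the monitors above, one policy is to drop and log the bad record rather than abort the process; a sketch, assuming a parse function that raises ValueError on malformed input:

```python
import logging

log = logging.getLogger("monitor")

def handle_record(record, parse):
    """Process one input record; invalid input is logged and skipped,
    not fatal. Aborting on one bad record would let an attacker silence
    a monitoring system with a single malformed packet."""
    try:
        return parse(record)
    except ValueError as err:
        log.warning("dropping malformed record (%s): %r", err, record[:32])
        return None
```

Whether drop-and-log is the right policy is itself context-dependent; the point is that the response to invalid input has to be a deliberate design decision, not whatever the default exception behavior happens to be.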
How to merely report an error is hard too. For example, trying to
mount a filesystem on Unix once gave me "mount: invalid argument".
This turned out to be perror() reporting the EINVAL errno returned by
the mount system call, not an error detected by the userlevel
application. The message was confusing because it carried no context
about which argument was invalid or which layer rejected it.
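One way to avoid that failure mode is to wrap the raw errno with caller-level context before it reaches the user; a hypothetical sketch (os.stat stands in for the real mount(2) call, which Python does not expose directly):

```python
import os

def mount_report(source, target):
    """Attach caller-level context to a bare errno so that a message
    like 'Invalid argument' becomes actionable."""
    try:
        os.stat(target)  # stand-in for the actual mount(2) system call
    except OSError as err:
        raise OSError(err.errno,
                      f"mounting {source!r} on {target!r}: "
                      f"{os.strerror(err.errno)} "
                      f"(from the underlying system call)") from err
```

The chained exception preserves the original errno for programs while the message tells a human which operation, on which objects, actually failed.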
Merely describing an ideal form for an RFC is an interesting exercise.
English is often too ambiguous. It seems that one needs to create
multiple ways of describing the desired behavior (e.g. state machines,
English wording, etc.) as well as supporting rationale (what you are
trying to accomplish and what you are trying to prevent) to provide
maximal redundancy and context. Ideally, then, the RFC could be
checked for internal consistency during the draft process.
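As a toy illustration of machine-checkable redundancy, the same protocol could be given both as prose and as an explicit transition table, with an automated consistency check over the table (the states here are invented, loosely TCP-flavored):

```python
# Hypothetical spec fragment: the protocol described as an explicit
# transition table, alongside its English prose elsewhere in the RFC.
TRANSITIONS = {
    ("CLOSED", "open"): "LISTEN",
    ("LISTEN", "syn"): "SYN_RCVD",
    ("SYN_RCVD", "ack"): "ESTABLISHED",
    ("ESTABLISHED", "close"): "CLOSED",
}

def check_consistency(transitions):
    """Flag target states that have no outgoing transitions defined,
    i.e. states the prose might mention but the table leaves dangling."""
    sources = {state for state, _ in transitions}
    return {t for t in transitions.values() if t not in sources}
```

This only scratches the surface (real drafts would want reachability, determinism, and prose-to-table cross-checks), but even a check this crude would have to be run against a formal artifact, which is the point.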
One could easily write a book on the subject.
http://www.subspacefield.org/~travis/ | if spammer then john at subspacefield.org
"Computer crime, the glamor crime of the 1970s, will become in the
1980s one of the greatest sources of preventable business loss."
John M. Carroll, "Computer Security", first edition cover flap, 1977