[langsec-discuss] Fwd: ShellShock bug and langsec relation

travis+ml-langsec at subspacefield.org
Sat Sep 27 20:49:05 UTC 2014


On Fri, Sep 26, 2014 at 02:32:02PM -0400, Sergey Bratus wrote:
> Very true. The LangSec implication of "any input is a program" is
> almost trivial here: input placed in environment variables was not
> just driving some state changes in the consuming software logic, but
> evaluated as shell commands, straight up!

Yes.  From what I have read, this is a classic injection vulnerability:
mixing the data and control signaling domains.  In this case, the
parser handled both data and control, the environment was dutifully
fed in, and since no "risky" functions were being abused, this
probably passed most vulnerability scanners; apart from feeding
unfiltered envp[] data into a powerful parsing/execution engine, it
worked as (implicitly) designed.
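To make the "evaluated as shell commands, straight up" point concrete,
here is the widely circulated one-liner check (not from the original
mail; the variable name `x` and the payload are illustrative):

```shell
# A function-definition-shaped string is placed in the environment, then
# bash is spawned.  On a vulnerable (pre-patch) bash, the trailing
# "echo vulnerable" is executed while the environment is parsed; on a
# patched bash, only "patched" is printed.
env x='() { :;}; echo vulnerable' bash -c 'echo patched'
```

Note that nothing here calls a "risky" function in the audited program:
the powerful interpreter is bash itself, parsing its inherited environment.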

Notable historical examples of the same, in reverse chronological order:
XSS (HTML injection) & SQLi
Shell Command Injection (was the old finger bug one of these?)
Blue Box (in-band signaling of 2600 Hz on the data line)
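The common flaw in all of the examples above is attacker data being
re-parsed as control.  A minimal sketch (illustrative, not from the
original mail; `pattern` and its payload are made up):

```shell
pattern='foo; echo INJECTED'

# Unsafe: the data is spliced into a string that is parsed a second time
# as shell code, so "; echo INJECTED" becomes a second command.
eval "echo matching $pattern"

# Safe: the data travels as a single quoted argument and is never
# re-parsed, so the payload is printed literally, not executed.
echo matching "$pattern"
```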

> As we are working on LangSec guidelines for code review, one item is
> very clear: identifying the parts of the target that directly
> receive inputs and interpret them. In the LangSec threat/attack
> model, input is the program and the input-handling code is the
> interpreter for that program; thus a general description of how the
> interpreter works is a good starting point. In many cases, the
> workings of the input-driven computation are relatively obscure and
> include memory corruption and other hallmarks of "weird machines",
> etc. In Shellshock case, that interpreter works exactly as it does
> in the intended computation case :)

In fact, the bulk of the software security assessment process revolves
around noting sensitive/vulnerable functions and
adversary-controllable inputs, and making connections between the two.
If you don't have an input that can drive a behavior, you usually
can't exploit it.  Starting from the inputs is an "outside-in"
approach that works in some cases, but can get extremely complicated
when done manually against something with complex inputs, like modern
web apps, or mashups in the browser with rich media plugins.  In those
cases it may be more economical to start with vulnerable functions and
work outwards, or simply find and eliminate them, because determining
exploitability can be far more expensive than fixing the code.
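The "inside-out" direction described above is often bootstrapped with
something as blunt as a grep over the source tree; a rough sketch
(the sink list and `src/` path are illustrative assumptions):

```shell
# Flag classic risky sinks, then trace each hit's arguments backwards
# toward adversary-controllable input.  -r recurses, -n prints line
# numbers, -E enables extended regexes.
grep -rnE 'system\(|popen\(|exec[lv]p?\(|eval' src/
```

This finds candidates, not vulnerabilities; the expensive step is still
connecting each sink to an input an adversary controls.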

What constitutes "adversary controllable" is a rather interesting and
potentially subtle question too.  The anonymous attack surface is the
obvious thing, but there are nth-order indirect attack trees that can
get you control over things that are not naively expected, especially
in today's cloud-based, open-source, highly internetworked world.  As
an academic exercise, a case study analyzing the supply chain as an
attack vector might be interesting too.  Remember that supply chains
are not just for hardware: they include software, firmware, and
hardware, and there are dependencies that jump boundaries, as when the
EDA software is trusted to do hardware layout accurately, or when the
hardware is trusted to execute software correctly.  The Target breach,
where a networked HVAC device gave access to a corporate network,
which gave access to POS terminals that processed physical magstripe
data, might be an interesting starting example of such interactions in
security dependencies.
-- 
http://www.subspacefield.org/~travis/
I'm feeling a little uncertain about this random generator of numbers.