Monday, January 14, 2013

SecAppDev 2013, 4-8 March, Leuven, Belgium

Dear all,

We are pleased to announce SecAppDev Leuven 2013, an intensive one-week course in secure application development. The course is organized by secappdev.org, a non-profit organization that aims to broaden security awareness in the development community and advance secure software engineering practices. The course is a joint initiative with KU Leuven and Solvay Brussels School of Economics and Management.

SecAppDev 2013 is the 9th edition of our widely acclaimed course, attended by an international audience from a broad range of industries, including financial services, telecom, consumer electronics and media, and taught by leading software security experts, including:
  • Prof. dr. ir. Bart Preneel, who heads COSIC, the renowned crypto lab. 
  • Ken van Wyk, co-founder of the CERT Coordination Center and widely acclaimed author and lecturer. 
  • Dr. Steven Murdoch of the University of Cambridge Computer Laboratory's security group, well known for his research in anonymity and banking system security. 
  • Jim Manico, an OWASP board member. 
  • John Steven, a sought-after architect for high-performance, scalable JEE systems. 
When we ran our first annual course in 2005, the emphasis was on awareness and security basics, but as the field matured and a thriving security training market developed, we felt it was not appropriate to compete as a non-profit organization. Our focus has hence shifted to providing a platform for leading-edge and experimental material from thought leaders in academia and industry. We look to academics to provide research results that are ready to break into the mainstream, and we attract people with an industrial background to try out new content and formats.

The course takes place from March 4th to 8th in the Faculty Club, Leuven, Belgium.

For more information visit the web site: http://secappdev.org.

  • Places are limited, so register early to avoid disappointment.
  • Registration is on a first-come, first-served basis.
  • A 25% discount is available for Early Bird registration until January 15th.
  • Alumni, public servants and independents receive a 50% discount.

I hope that we will be able to welcome you or your colleagues to our course.

Kind regards,

Lieven
-- Lieven Desmet
http://secappdev.org

Friday, January 4, 2013

Handling Untrusted JSON Safely

JSON (JavaScript Object Notation) is quickly becoming the de facto way to transport structured text data over the Web, a job also performed by XML. JSON is a limited subset of JavaScript's object literal notation, so you can think of JSON as just a part of the JavaScript language. JSON objects can represent simple name-value pairs as well as lists of values, as in the small example below.
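
For instance, a tiny made-up JSON document that mixes name-value pairs with a list of values:

{
    "user": "alice",
    "roles": ["admin", "editor"],
    "active": true
}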

BUT with JSON comes JavaScript, and with JavaScript comes the potential for JavaScript injection, the most critical type of Cross Site Scripting (XSS).

Just like XML, JSON data needs to be parsed to be utilized in software. The two major locations within a Web application architecture where JSON is parsed are client-side in the browser and server-side in application code.

Parsing JSON can be a dangerous procedure if the JSON text contains untrusted data. For example, if you parse untrusted JSON in a browser using the JavaScript "eval" function and that text contains JavaScript code, the code will execute at parse time.

From http://www.json.org/js.html
"To convert a JSON text into an object, you can use the eval() function. eval() invokes the JavaScript compiler. Since JSON is a proper subset of JavaScript, the compiler will correctly parse the text and produce an object structure. The text must be wrapped in parens to avoid tripping on an ambiguity in JavaScript's syntax.
var myObject = eval('(' + myJSONtext + ')'); 
The eval function is very fast. However, it can compile and execute any JavaScript program, so there can be security issues."
So, the essential question is: How can programmers and applications parse untrusted JSON safely?

Parsing JSON safely, Client Side

The most common way to parse JSON safely in a modern browser is to use the JSON.parse method built into JavaScript. Here is a good reference that describes the state of JSON.parse browser support. And for legacy browsers that do not support native JSON parsing, there is always Douglas Crockford's JSON parsing library.

Parsing JSON in the browser is often the result of an asynchronous request returning JSON to the browser. Another technique that is becoming more common is to embed JSON directly in a Web page server-side and then parse and render that JSON in the browser. The mechanism of embedding JSON safely in a Web page is described here:

https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet#RULE_.233.1_-_HTML_escape_JSON_values_in_an_HTML_context_and_read_the_data_with_JSON.parse

Step 1 embeds the JSON in the Web page safely through HTML entity encoding:

<span style="display:none" id="init_data">
    <%= data.to_json %>  <!-- data is HTML escaped -->
</span>

Steps 2 and 3 decode the JSON data and then parse it safely:

 <script>
    // unescapes the content of the span
    var jsonText = document.getElementById('init_data').innerHTML;
    // parse untrusted JSON safely
    var initData = JSON.parse(jsonText);
 </script>

Parsing JSON safely, Server Side

It's important to use a formal JSON parser when handling untrusted JSON on the server. For example, Java programmers can utilize the OWASP JSON Sanitizer for Java. The OWASP JSON Sanitizer project aspires to accomplish the following goals:
"Given JSON-like content, converts it to valid JSON. 
This can be attached at either end of a data-pipeline to help satisfy Postel's principle: 
Be conservative in what you do, be liberal in what you accept from others            
Applied to JSON-like content from others, it will produce well-formed JSON that should satisfy any parser you use. 
Applied to your output before you send, it will coerce minor mistakes in encoding and make it easier to embed your JSON in HTML and XML."
The OWASP JSON Sanitizer project was created and is maintained by Mike Samuel, an esteemed member of the Google Application Security Team. For more information on the OWASP JSON Sanitizer, please visit the OWASP JSON Sanitizer Google Code page.
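
As a rough, illustrative sketch of how the sanitizer might sit in front of the rest of your server-side code: the static JsonSanitizer.sanitize method and the com.google.json package are taken from the project's published documentation, while the wrapping class and the sample input are invented for this example; check the project page for the current API.

import com.google.json.JsonSanitizer;

public class JsonIntake {

    // Coerce untrusted, JSON-like input into well-formed JSON before it is
    // handed to a strict parser or embedded in a page.
    public static String toWellFormedJson(String untrustedJsonish) {
        return JsonSanitizer.sanitize(untrustedJsonish);
    }

    public static void main(String[] args) {
        // JSON-like content: an unquoted key, single quotes, a trailing comma --
        // exactly the kind of input the project description says it cleans up.
        String dirty = "{foo: 'bar', 'count': 3,}";
        System.out.println(toWellFormedJson(dirty));
    }
}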


I hope this article helps you on your way to safer parsing of JSON in your applications. Please drop me a line if you have any questions at [email protected].



Jim Manico is the VP of Security Architecture for WhiteHat Security, a web security firm. Jim is also a board member for the OWASP foundation where he manages and participates in several projects.




Sunday, February 13, 2011

Taming the Beast

The recent cross-platform numerical parsing DoS bug has been named the "Mark of the Beast". Some claim that this bug was first reported as early as 2001.

This is a significant bug in (at least) PHP and Java. Similar issues have affected Ruby in the past. This bug has left a number of servers, web frameworks and custom web applications vulnerable to an easily exploitable Denial of Service.

Oracle has patched this vulnerability, but several non-Oracle JVMs have yet to release a patch. Tactical patching may be prudent for your environment.

Here are three filters that may help you tame this beast of a bug.

1) Ryan Barnett deployed a series of ModSecurity rules and documented several options here: http://blog.spiderlabs.com/2011/02/java-floating-point-dos-attack-protection.html

2) Bryan Sullivan from Adobe came up with the following Java-based blacklist filter. This check is quite accurate at *rejecting input* in the DoS-able JVM numeric range. The fix, while simple, does indeed reject a series of normally good values, as the short demo after the listing shows.

// Strip the decimal point and reject anything containing the magic digit sequence.
public static boolean containsMagicDoSNumber(String s) {
    return s.replace(".", "").contains("2225073858507201");
}
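
A quick, self-contained demo (the demo class name is invented, and the method above is duplicated so the snippet compiles on its own): the check catches the literal known to hang vulnerable JVMs, but it also flags 2.2250738585072014E-308, which is simply the smallest normal double (Double.MIN_NORMAL) and entirely harmless.

public class DoSFilterDemo {

    public static boolean containsMagicDoSNumber(String s) {
        return s.replace(".", "").contains("2225073858507201");
    }

    public static void main(String[] args) {
        // The literal that hangs vulnerable JVMs is caught...
        System.out.println(containsMagicDoSNumber("2.2250738585072012e-308")); // true

        // ...but so is the smallest normal double (Double.MIN_NORMAL),
        // a perfectly legitimate value: a false positive of the blacklist.
        System.out.println(containsMagicDoSNumber("2.2250738585072014E-308")); // true
    }
}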

3) The following data sanitization code came from Brian Chess at HP/Fortify. This approach detects the evil range before calling parseDouble and returns the official IEEE value for any double in this most evil range (2.2250738585072014E-308).

private static BigDecimal bigBad;
private static BigDecimal smallBad;

static {
    BigDecimal one = new BigDecimal(1);
    BigDecimal two = new BigDecimal(2);
    BigDecimal tiny = one.divide(two.pow(1022));

    // 2^(-1022) - 2^(-1076)
    bigBad = tiny.subtract(one.divide(two.pow(1076)));
    // 2^(-1022) - 2^(-1075)
    smallBad = tiny.subtract(one.divide(two.pow(1075)));
}

public static Double parseSafeDouble(String input) throws InvalidParameterException {

    if (input == null) throw new InvalidParameterException("input is null");

    BigDecimal bd;
    try {
        bd = new BigDecimal(input);
    } catch (NumberFormatException e) {
        throw new InvalidParameterException("can't parse number");
    }

    if (bd.compareTo(smallBad) >= 0 && bd.compareTo(bigBad) <= 0) {
        // If you get here you know you're looking at a bad value. The final
        // value for any double in this range is supposed to be the following safe number.
        System.out.println("BAD NUMBER DETECTED - returning 2.2250738585072014E-308");
        return new Double("2.2250738585072014E-308");
    }

    // safe number, return double value
    return bd.doubleValue();
}
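
And a short usage sketch, assuming the fields and method above live in a class named SafeDoubleParser (a name introduced here just for the demo):

public class SafeDoubleDemo {

    public static void main(String[] args) {
        // The literal known to hang vulnerable JVMs falls inside [smallBad, bigBad]
        // and is returned as the safe IEEE value 2.2250738585072014E-308.
        System.out.println(SafeDoubleParser.parseSafeDouble("2.2250738585072012e-308"));

        // An ordinary value is converted normally via BigDecimal.
        System.out.println(SafeDoubleParser.parseSafeDouble("3.14159"));
    }
}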

Sunday, January 9, 2011

Touchpoints and BSIMM hurt AppSec

Conjecture: BSIMM and Touchpoints are harmful to developers and organizations seeking cost effective application security based risk reduction.

Let’s start with the flaws of Touchpoints:

1. Touchpoints make security separate from development
2. Touchpoints are all verification, not building secure apps
3. Touchpoints cover only the SDLC (one app), not full-bore appsec program planning across an entire application portfolio
4. Touchpoints make security a cost, not an opportunity for improvement in other aspects of software dev
5. Touchpoints are negative, vulnerability-focused, not positive, controls-centric thinking
6. Touchpoints are basically hacking ourselves secure, not assurance-evidence based
7. Touchpoints are trivial in the sense that they are just a concept with no backing... just a picture and a book. No meat!
8. Touchpoints are designed to sell tools - not totally, but somewhat
9. Touchpoints are not free and open (Creative Commons, anyone?)

BSIMM continues with this tradition.

Does your organization really care if the software you are writing is secure, or is it a burden and a chore? No amount of process will fix not caring. BSIMM does almost nothing to create a culture of good security practices for developers. It is, again, 80% verification activities. It extends the tradition of the Touchpoints model, which was 100% verification.

BSIMM and touchpoints do not go down and dirty to figure out how to actually make software secure.

And frankly, that’s what the entire world really needs right now.

Wednesday, June 30, 2010

Injection-safe templating languages

The state of the art for Cross Site Scripting (XSS) software engineering defense is, of course, contextual output encoding. This involves manually escaping/encoding each piece of user data within the right context of an HTML document. The best programmer-centric OWASP resource around XSS defense can be found here: http://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
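
To make "contextual" concrete, here is a minimal, illustrative Java sketch using the OWASP ESAPI encoder, assuming ESAPI is on the classpath and configured; the class, method and variable names other than the ESAPI encoder calls are invented for the example. Each piece of untrusted data is escaped with the encoder that matches the HTML context it is placed in.

import org.owasp.esapi.ESAPI;

public class ProfileRenderer {

    // Each untrusted value gets the encoder for the context it lands in.
    public static String render(String untrustedName, String untrustedTitle, String untrustedMsg) {
        StringBuilder html = new StringBuilder();

        // HTML element content context
        html.append("<p>Hello, ")
            .append(ESAPI.encoder().encodeForHTML(untrustedName))
            .append("</p>");

        // HTML attribute value context
        html.append("<div title=\"")
            .append(ESAPI.encoder().encodeForHTMLAttribute(untrustedTitle))
            .append("\"></div>");

        // JavaScript data value context
        html.append("<script>var msg = '")
            .append(ESAPI.encoder().encodeForJavaScript(untrustedMsg))
            .append("';</script>");

        return html.toString();
    }
}

Each call must match the context it is used in; applying the wrong encoder for a context reintroduces the injection risk.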

However, manually escaping user data can be a complex, error-prone and time-consuming process - especially if you are battling DOM-based XSS vulnerabilities. We need a more efficient way. We need our frameworks to defend against XSS automatically so programmers can focus on innovation and functionality.

The future of XSS defense is HTML templating languages that are injection-safe by default.

Thanks to Mike Samuel from Google's AppSec team for pointing these projects out to me.

First we have GXP : http://code.google.com/p/gxp/ . It's an older Google offering that is much closer structurally to JSP and so possibly a better option for someone who has a bunch of broken JSPs and wants to migrate piecemeal to a better system.

There are also Java libraries like http://gxp.googlecode.com/svn/trunk/javadoc/com/google/gxp/html/HtmlClosure.html - this library throws exceptions that are captured in the Java type system, which makes auditing them, logging, and assertions around them fairly easy. They've done a really bad job documenting and advocating GXP, but it's very well thought out, easy to use, and feature complete. https://docs.google.com/a/google.com/present/view?id=dcbpz3ck_8gphq8bdt is the best intro.

Another angle on the problem of generating safe HTML is http://google-caja.googlecode.com/svn/changes/mikesamuel/string-interpolation-29-Jan-2008/trunk/src/js/com/google/caja/interp/index.html which talks about ways to redefine string interpolation in languages like perl and PHP.

Marcel Laverdet from Facebook is trying another tack for PHP with his XHP scheme : http://www.facebook.com/notes/facebook-engineering/xhp-a-new-way-to-write-php/294003943919 . Rasmus has publicly been very skeptical of XHP, but I think a lot of his criticisms were a result of conflating XHP with other Facebook PHP schemes, such as precompilation to C and the like.

And of course, there is the Google Auto-Escape project to keep a close eye on. It was first announced on March 31st, 2009. http://googleonlinesecurity.blogspot.com/2009/03/reducing-xss-by-way-of-automatic.html

Today, we need to manually output encode each piece of user driven data that we display. Perhaps tomorrow, our frameworks will do that work for us.

Tuesday, March 30, 2010

Shure SM-7B

Thank you to OWASP for this new studio-quality microphone, a Shure SM-7B. It's an incredible piece of equipment that makes my life a lot easier - and takes up a lot less space in my very crowded computer area.
I have quite a few podcasts on deck - including a 5 show batch to be released in sync with the Top Ten release!

Thanks all.

Aloha,
Jim

Thursday, January 21, 2010

How bad is it?

Thank you to John Menerick and Ben Nagy for entertaining my questions on the Daily Dave list.

Q: Is the recent ie6 0-day anything special?

John: Not really. Not as special as the NT <-> Win 7 issue recently highlighted.

Q: How many similar 0-days are for sale on the black market?

John: Quite a few.

Ben: I'd love to see your basis for this assertion. I'm not saying that in the "I don't believe you" sense, only in the "everyone always says that but nobody ever puts up any facts" sense.

Q: What is the rate/difficulty for discovery of new windows-based 0-days for the common MS and Adobe products that are installed on almost every corporate client? (I heard Dave mention that discovery is getting more difficult)?

John: Not terribly difficult for someone who is dedicated. Then again, my idea of difficult is much different from the avg. person

Ben: I think that while finding 0-days might be 'not terribly difficult', selecting and properly weaponising useful 0-days from the masses of dreck your fuzzer spits out IS difficult - at least in my experience. There was some discussion of the 'too many bugs' problem on this list previously and I know several of the other fuzzing guys are currently researching the same area. Of course you'd explain this to your 'avg. person', as well as explaining that the skillset for finding bugs is not necessarily the same as the skillset for writing reliable exploits for them, and that 'dedication' may not sufficiently substitute for either.

Lurene Grenier: I really feel that the "selecting good crashes" problem is not that hard to overcome if you have a proper bucketing system, and the ability to do just a bit of auto-triage at crash time. For example, the fuzzer I use now both separates crashes by what it perceives to be the base issue at hand, and provides a brief notes file with some information about the crash and what is controlled. This requires just a bit of sense in providing fuzzed input, and very little smarts on the part of the debugger. I really think the next step is automating that brain-jutsu; much of it is hard to keep in your head, but not hard to do in code.

Using this output, it's pretty easy to spend a lazy morning with your coffee grepping the notes files for the sorts of things you usually find to be reliably exploitable. From there you can call in your 30 ninjas and have at.

Creating reliable exploits is for sure the hardest part, but once you've done the initial work on a program, the next few exploits in it are of course more quickly and easily done.

As for the thought experiment, I think that the benefit of the top four researchers is that they've trained themselves over a long period of time (and with passion) to have a very good set of pattern-recognition tools which they call instincts. They know how to get crashes, and they know having seen one crash what's likely to find more. They know how to think about a process to get proper execution, and they're rewarded by success emotionally which makes the lesson learned this time around stick for when they need it again.

I honestly think that there is more pattern recognition "muscle-memory" type skill involved in RE, bug hunting, and exploit dev than pure mechanical process, which is why the numbers are so skewed. It's like taking 4 native speakers of a language (who love to read!) and 100 students of general linguistics with a zillion dollars. Who will read a book in the language faster?

Q: How easy is discovery for someone with resources like the Chinese government?

John: Much simpler.

Ben: Setting aside the previous point that discovery is only the start, I think it's instructive to consider which elements of the process scale well with money.

Finding the bugs: You need a fuzzing infrastructure that scales - running peach on one laptop with 30 ninjas standing around it with IDA Pro open is not going to work. Also consider tracking what you've already tested, tracking the results, storing all the crashes, blah blah blah. This does scale well with money, but it's an area that not as many people have looked at as I would like.

Seeing which bugs are exploitable: Using a naive approach, this scales horribly poorly with money - non-linearly, to put it mildly. There are only so many analysts you will be able to hire that have enough smarts to look at a non-trivial bug and correctly determine its exploitability. You only have to look at some of the Immunity guys' (hi Kostya) records with turning bugs that other people had discarded as DoS or "Just Too Hard" into tight exploits. Even for ninjas, it's slow. There is research being done into doing 'some' of this process automatically (well, I'm doing some, and I know a couple of other guys are too, so that counts), but I don't know of anyone that has a great result in the area yet - I'd love to be corrected.

Creating nice, reliable exploits: I'd assert that this is like the previous point, but even harder. To be honest, it's not really my thing, so probably one of the people that write exploits for a living would be better to comment, but from talking to those kind of guys, it's often a very long road from 'woo we control ebx' to reliable exploitation, especially against modern OSes and modern software that has lots of stuff built in to make your life harder. I don't know how much of the process can really be automated - I mean there are some nice things like the (old now) EEREAP and newer windbg extensions from the Metasploit guys that will find you jump targets according to parameters and so forth, but up until now I was labouring under the impression that a lot of it remains brain-jitsu, which is hard to scale linearly with money.

So, while I think that 'simpler' is certainly unassailable, I would need more than a two word assertion to be convinced that it is 'much' simpler. If you give one team a million dollars and 100 people selected at random from the top 10% graduating computer science and you give the other team their pick of any 4 researchers in the world and 3 imacs, whom does the smart money think will produce more weapons grade 0day after 6 months?

(No it's not a fair comparison. It's a thought experiment.)

Food for thought, perhaps, since sound bites need little care and feeding.

Q: How bad is it really?

John: Look at the CVSSv2 score and adjust it to the environments where you determine "how bad it is." It could be much worse.

Q: I suspect we are just looking at one grain of sand in a beach of 0-days....

John: Correct. No one wants to let everyone else know what cards they hold in their hand, the tools in their toolbox, etc....