Wednesday, December 23, 2009

If I were cyberczar

(Read to the tune of "Rage Against the Machine : Killing in the Name")

1) I would defeat SQL Injection. This would be a multi-phased plan focusing on programmer tools and programmer training. The main use of any Federal funding I could secure would be to build the world's best open source SQL escaping library so that legacy code could be retrofitted (a short sketch follows after this list).

2) I would lobby the Feds to create a new branch of the military: ARMY - NAVY - AIR FORCE - CYBERFORCE - MARINES. The problem is that big, and we are losing the game.

3) I would take copious notes from day 1. This is a thankless job with heaps of responsibility and absolutely no power. Might as well get a good book deal out of the experience.
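
To make item 1 concrete: the primary defense against SQL injection is the parameterized query; the escaping library described in item 1 would be a retrofit for legacy code that cannot easily be restructured around it. Here is a minimal JDBC sketch of the parameterized approach, where the accounts table and its columns are purely illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    // VULNERABLE pattern the escaping library would have to retrofit:
    //   "SELECT * FROM accounts WHERE id = '" + accountId + "'"
    // PREFERRED pattern: bind the value so it can never alter the query structure.
    public ResultSet findAccount(Connection conn, String accountId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, owner, balance FROM accounts WHERE id = ?");
        ps.setString(1, accountId); // sent as data, not concatenated as SQL
        return ps.executeQuery();
    }
}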

Sunday, November 22, 2009

OWASP Top 5 rc1 released!

I'm very impressed with the latest OWASP Top 10 2010 release candidate. But if a 10-item list is too long for you in this era of 140-character tweets, I present to you the unauthorized, reductionistic OWASP Top 5.

And the OWASP Top 5 is:

1) Injection Flaws
2) Broken Authentication
3) Broken Access Control
4) Broken Encryption
5) Security Misconfiguration

The OWASP Top 5 team felt that A2 (XSS) could be considered another kind of injection problem. Like most injection flaws, XSS is controlled by contextual encoding.
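
To illustrate what "contextual encoding" means in practice, here is a hedged sketch using the ESAPI encoder (assuming ESAPI and its configuration are available; the untrusted value is a placeholder). The point is that the same input needs a different encoding for each output context:

import org.owasp.esapi.ESAPI;

public class ContextualEncodingDemo {
    public static void main(String[] args) {
        // Placeholder attacker-controlled input
        String untrusted = "\"/><script>alert(1)</script>";

        // The same value must be encoded differently for each output context.
        String inHtmlBody      = ESAPI.encoder().encodeForHTML(untrusted);
        String inHtmlAttribute = ESAPI.encoder().encodeForHTMLAttribute(untrusted);
        String inJavaScript    = ESAPI.encoder().encodeForJavaScript(untrusted);
        String inCss           = ESAPI.encoder().encodeForCSS(untrusted);

        System.out.println("HTML body:      " + inHtmlBody);
        System.out.println("HTML attribute: " + inHtmlAttribute);
        System.out.println("JavaScript:     " + inJavaScript);
        System.out.println("CSS:            " + inCss);
    }
}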

A4 (Direct Object Reference), A5 (CSRF), A6 (Failure to Restrict URL Access) and A8 (Unvalidated Redirects and Forwards) can be considered classes of access control/authorization flaws. I think that A4/A6/A8 all easily fit into the access control category. But CSRF as just an access control problem? Yes! Authentication validates WHO you are. Authorization/Access Control validates WHAT you can do. CSRF tokens are just one piece of that task/activity validation.
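
Here is a rough sketch of that idea in a Java servlet setting (the attribute and parameter names are made up for illustration): the token is minted once per session, embedded in every form as a hidden field, and checked before any state-changing action is authorized.

import java.security.SecureRandom;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Mint one random token per session and embed it in every form as a hidden field.
    public static String tokenFor(HttpSession session) {
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) {
                hex.append(String.format("%02x", b & 0xff));
            }
            token = hex.toString();
            session.setAttribute("csrfToken", token);
        }
        return token;
    }

    // The access control check: does this request prove it came from a page we served?
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) {
            return false;
        }
        String expected = (String) session.getAttribute("csrfToken");
        String submitted = request.getParameter("csrfToken");
        return expected != null && expected.equals(submitted);
    }
}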

A9 (Insecure Cryptographic Storage) and A10 (Insufficient Transport Layer Security) are two sides of the same data encryption lifecycle.

Hats off to the OWASP Top Ten team. This brief reductionism is just a form of OWASP Top 10 flattery! :)



Saturday, November 14, 2009

Hardware Security

A recent article in "Foreign Affairs" magazine titled "Securing the Information Highway" (co-authored by General Wesley Clark and Peter Levin) caught my attention. Interesting stuff, with a focus on hardware security.

http://www.foreignaffairs.com/articles/65499/wesley-k-clark-and-peter-l-levin/securing-the-information-highway

Their basic thesis is that there is simply no way to stop the threat of "electronic infiltration, data theft, and hardware sabotage" and that securing the nation's infrastructure is "neither cost effective or technically feasible".

They suggest:

1) Risk Management: "US must develop an integrated strategy that addresses ... the sprawling communications network to the individual chips inside computers"
2) Diversification of the country's digital infrastructure
3) Secure the hardware supply chain

Worthwhile read.

Monday, August 17, 2009

justifying the focus on insider threat

Thank you to Mat Caughron for authoring this most excellent entry.

It is common to have the insider threat dismissed as a scare tactic or
worst-case scenario, and I believe this is a mistake.

We are all about the business value of risk.

Most enterprise companies have to protect themselves from malicious
insiders at all times and this affects the design of their software,
specifically the need for least privilege and generally all
requirements surrounding logging and internal controls.  My thinking
is that if you want to have a seat at the table during the beginning
phases of the software development life cycle, it is best to master
the concerns and business needs imposed by this type of risk.

Granted, our industry seems to generate snake oil by the barrel, which
is all the more reason for us to take these threats seriously and
calmly seek publicly documented data on real cases.

Indeed, one would hope the information security professional is
someone who helps to establish the boundaries of trust in systems
being built, not someone who vacuums up the pieces of broken projects,
however well such housekeeping pays.


Some references not yet mentioned in this thread:

Report from 1999 by NSTISSAM:
  
http://www.cnss.gov/Assets/pdf/nstissam_infosec_1-99.pdf
Focus is on mechanisms more than specific incidents though a few are mentioned.

USSS report with Carnegie Mellon on insider threat, focus on
infrastructure and financial services industries, dated 2004/05/08:
 
http://www.secretservice.gov/ntac/its_report_050516.pdf
 
http://www.secretservice.gov/ntac/its_report_040820.pdf
 
http://www.treasury.gov/usss/ntac/gov%20ExecSummary%202008_0108.pdf
Each sampling set is around 50 incidents or less.

Department of Energy is grappling with this as the disruptions from
insiders could be high impact:
 
http://www.cio.energy.gov/documents/Tues_1400_SalonII_Randall.pdf

Belani / Willis web application incident response and forensics talk
considers insider threats with two great examples:
   
www.blackhat.com/presentations/bh-usa-06/BH-US-06-Willis.pdf
Also presented in Seattle at an OWASP chapter meeting.

None of these reports, however, can compare in detail to the data set
of the Privacy Rights Clearinghouse's chronological list of data
breaches.
 
http://www.privacyrights.org/ar/ChronDataBreaches.htm

Until about 2006, the PRC list identified insider threat incidents as
"Dishonest insider." After that, employee-instigated events are
described in greater detail but are therefore harder to search. A
quick look here should be enough to convince most on this webappsec
list that the impact from insider threats is not insignificant.

As software security professionals, we can help to mitigate insider
threat problems and our value in doing so should not be
underestimated.

The commonplace nature of OWASP-top-ten type flaws should not prevent
us from acknowledging their utility in the hands of a malicious
employee, developer, manager, etc.


Mat Caughron CISSP
(408) 910-1266

Sunday, August 9, 2009

When to use OWASP AntiSamy?

OWASP AntiSamy is a software engineering tool that allows a programmer to verify user-driven HTML/CSS input against a whitelist policy to ensure that it does not contain XSS.

But when do you use it?

1) If you accept "normal text data" from a user, then
a) (input validation) Use the ESAPI validator for input validation (functions OTHER than getValidSafeHTML)
b) (output encoding) Use the ESAPI encoding library for contextual output encoding when displaying dynamic data in a web browser
1. encodeForHTML
2. encodeForJavaScript
3. encodeForHTMLAttribute
4. encodeForCSS

2) If you accept HTML from a user, you need to use AntiSamy
a) (input validation) You must validate and CHANGE (make it safer) HTML that you accept from a user with AntiSamy (which can be called via ESAPI - getValidSafeHTML)
b) (output translation) You can optionally use AntiSamy for output translation (it does not encode; it only makes HTML "safer")
1. This is crucial when you have legacy HTML in your data storage mechanism that may still contain XSS (a sketch of both cases follows below)
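
Here is a hedged sketch of both cases, assuming ESAPI and AntiSamy are on the classpath and configured; the context labels, length limits, validator type name, and policy file name are placeholders:

import org.owasp.esapi.ESAPI;
import org.owasp.esapi.errors.ValidationException;
import org.owasp.validator.html.AntiSamy;
import org.owasp.validator.html.CleanResults;
import org.owasp.validator.html.Policy;

public class UserContentHandling {

    // Case 1a: plain text in. "SafeString" must match a validation pattern
    // defined in ESAPI.properties; the context label and length limit are illustrative.
    public String acceptPlainText(String input) throws ValidationException {
        return ESAPI.validator().getValidInput("comment", input, "SafeString", 200, false);
    }

    // Case 1b: contextual encoding on the way out to the browser.
    public String renderPlainText(String storedValue) {
        return ESAPI.encoder().encodeForHTML(storedValue);
    }

    // Case 2a: user-supplied HTML is validated and rewritten on input via ESAPI's AntiSamy hook.
    public String acceptUserHtml(String submittedHtml) throws ValidationException {
        return ESAPI.validator().getValidSafeHTML("profilePage", submittedHtml, 20000, false);
    }

    // Case 2b: legacy HTML already in storage is translated just before display,
    // calling AntiSamy directly against a whitelist policy file you supply.
    public String cleanLegacyHtml(String storedHtml) throws Exception {
        Policy policy = Policy.getInstance("antisamy-policy.xml"); // hypothetical policy file name
        CleanResults results = new AntiSamy().scan(storedHtml, policy);
        return results.getCleanHTML();
    }
}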

Saturday, August 8, 2009

Real world cookie length limits

Daniel Stenberg recently posted some interesting test code and browser results describing the maximum amount of data that can be stored in a cookie:

****

... I just went ahead and wrote a CGI script that redirects to itself and grows a
cookie and stores its length in a URL field like "cookie.cgi?len=200" until
the length in the URL and the actual cookie length no longer matches.

Here's a few results from various browsers:

Firefox 3.0.12: 4000
Firefox 3.5: 4000
curl 7.19.5: 4999
IE 8: 5000
Opera 10.00 beta: 4000
Android 1.5 browser: 4000
Chrome 3.0.195.6: 4000
Wget 1.11.4: 7000[*]
mobile safari (iphone): 8000
lynx 2.8.7dev.9: 4000

I think we can safely say that most browsers support at least 4000 characters
of cookie contents.

[*] = this reports "500 Internal Server Error" on 8000, which I don't
understand why but haven't bothered much more about.

The test is live here: http://daniel.haxx.se/test/longcookie.cgi Feel free to
use it if you want to try out other browsers, without torturing it of course!

And the perl script that runs it looks like this:

require "CGI.pm";

$len = CGI::param('len');
$c = CGI::cookie('data');

print "Content-Type: text/html\n";

if($len == length($c)) {
    $c .= "A" x 1000;
    $len += 1000;
    print "Set-Cookie: data=$c\n";
    print "Location: longcookie.cgi?len=$len\n";
    print "\nmoo\n";
}
else {
    printf "\nMax cookie length: %d\n", length($c);
}
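
Given those numbers, a reasonable server-side habit is to keep any single cookie well under the 4000-character floor observed above. A minimal sketch in a Java servlet setting (the 4000-character budget comes straight from the results; the helper is hypothetical):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieBudget {

    // Roughly the smallest per-cookie limit observed in the tests above.
    private static final int MAX_COOKIE_CHARS = 4000;

    // Only set the cookie if name + value fit comfortably inside the budget.
    public static boolean setBoundedCookie(HttpServletResponse response, String name, String value) {
        if (name.length() + value.length() + 1 > MAX_COOKIE_CHARS) { // +1 for the '=' separator
            return false; // too big; persist server-side (e.g. in the session) instead
        }
        response.addCookie(new Cookie(name, value));
        return true;
    }
}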

Saturday, July 18, 2009

Open letter to the Struts 1.x team on AUTOCOMPLETE

I'm a big fan of Struts 1.3.x. I currently use Struts 1.3.10, the latest release of the 1.x Struts line.

I would like the ability to disable autocomplete in an HTML form. Sadly (from a security perspective), almost every browser enables autocomplete by default. We need to explicitly add autocomplete="off" to our form HTML - on both the form and form element tags of HTML 4.01+ pages. This is a very basic security protection. Wanting to prevent the browser from caching credit card numbers, PII and other critical user data is a no-brainer; appsec 101.

Now, the recent 1.3.10 release made a great stride in this direction. For the first time, the main Struts 1.3.x branch supports the autocomplete attribute (which defensive coders need - just to disable this feature via HTML!). But it is still not enabled by default in Struts! I need to modify the Struts TLD XML file in order to enable the autocomplete attribute on the form and form element tags, which takes me off the main branch of Struts 1.3.x.
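
For reference, this is the kind of markup the taglib needs to be able to emit; a plain HTML sketch with made-up field names, mirroring the form-level and field-level placement described above:

<form action="/checkout" method="post" autocomplete="off">
  <!-- the form-level attribute covers the whole form; repeating it on
       sensitive fields mirrors the "form and form element" request above -->
  <input type="text" name="creditCardNumber" autocomplete="off" />
  <input type="password" name="password" autocomplete="off" />
  <input type="submit" value="Pay" />
</form>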

I implore you to consider enabling the autocomplete attribute by default, so we can turn the feature off - without having to customize our version of Struts 1.3.x! The best security is "secure by default", and this request moves us in that direction.

Jim Manico
OWASP, Intrinsic Security Working Group