Shared posts

05 Nov 14:40

How To Start With Software Security

by Ray Sinnema

The software security field sometimes feels a bit negative.

The focus is on things that went wrong and people are constantly told what not to do.

Build Security In

One often-heard piece of advice is that security cannot be bolted on as an afterthought; it has to be built in.

But how do we do that? I’ve written earlier about two approaches: Cigital’s TouchPoints and Microsoft’s Security Development Lifecycle (SDL).

The Touchpoints are good, but rather high-level and not so actionable for developers starting out with security. The SDL is also good, but rather heavyweight and difficult to adopt for smaller organizations.

The Software Assurance Maturity Model (SAMM)

We need a framework that we can ease into in an iterative manner. It should also provide concrete guidance for developers who don’t necessarily have a lot of background in the security field.

Enter OWASP’s SAMM:

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization.

SAMM assumes four business functions in developing software and assigns three security practices to each of those:

[Diagram: the twelve OpenSAMM security practices, three per business function]

For each practice, three maturity levels are defined, in addition to an implicit Level 0 where the practice isn’t performed at all. Each level has an objective and several activities to meet the objective.

To get a baseline of the current security status, you perform an assessment, which consists of answering questions about each of the practices. The result of an assessment is a scorecard. Comparing scorecards over time gives insight into evolving security capabilities.

With these building blocks in place, you can build a roadmap for improving your capabilities.

A roadmap consists of phases in which certain practices are improved so that they reach a higher level. SAMM even provides roadmap templates for specific types of organizations to get you started quickly.
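As a rough sketch of how comparing scorecards over time might look: the practice names below come from SAMM, but the scores and the compareScorecards helper are invented for illustration (SAMM defines maturity levels 0–3 per practice).

```javascript
// Compare two scorecards: for each practice, how many maturity
// levels were gained (or lost) between two assessments.
function compareScorecards(previous, current) {
  const changes = {};
  for (const practice of Object.keys(current)) {
    changes[practice] = current[practice] - (previous[practice] || 0);
  }
  return changes;
}

// Hypothetical assessment results from two points in time.
const q1Scores = { 'Threat Assessment': 0, 'Code Review': 1, 'Security Testing': 1 };
const q3Scores = { 'Threat Assessment': 1, 'Code Review': 2, 'Security Testing': 1 };

const delta = compareScorecards(q1Scores, q3Scores);
// delta['Code Review'] is 1: that practice moved up one maturity level.
```

A diff like this is the kind of insight the scorecard comparison is meant to give: which practices improved during a roadmap phase, and which stalled.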

What Do You Think?

Do you think the SAMM is actionable? Would it help your organization build out a roadmap for improving its security capabilities? Please leave a comment.


Filed under: Application Security, Information Security, Software Development Tagged: maturity, OWASP, SAMM, SDL, security, Touchpoints
29 Oct 18:26

Weighing the Cost of Security

by Matt Presson
Today I want to take a little detour from the Robust Application Security track and focus on a common problem that many application security professionals struggle to manage every day: what to do when speed to market and functionality trump the addition of security into a product.

As an aside, the idea for this post comes from an excellent post by Peter Gillard-Moss entitled Weighing the Cost of Expediency.

The Problem

As almost every security professional knows, there WILL come a time when you are consulting on a project and either the project team or "the business" chooses to forgo adding some bit of security into the project for the sake of getting to market faster, adding some last-minute must-have feature, or saving project cost. We will assume here that they are not being intentionally negligent, as that is really another case entirely. Similar to the situation Peter presents in his post, the team assures you that they will add that bit of security during the next release cycle or the first round of bug fixes, which is tentatively scheduled for next month; and by the way, your company is not a huge target for hackers anyway, so what's the harm? Assuming you agree with the proposal, everything continues on, and now it is time for the security to be added. Except it's not on the agenda! Here is where you, the project management, and the development team went wrong.

The Issues

The simple fact is that you, the project manager, and the development team all screwed up. The reason this happened is that, just like in Peter's scenario, your reasoning was flawed. You all chose the quick win, thinking that all things would be equal down the road. The problem is, nothing is EVER equal down the road. Things change, especially priorities.

Issue #1 - Adding security in will be a priority (later)

The first issue with the reasoning given above is the assumption that adding in security will be a priority during the next release cycle or the first round of bug fixes.

As any developer will tell you, when you have functional bugs that are potentially causing customers issues and thus preventing the company from making money, or as much money as they would like to be making, THIS is the priority. Companies are in business to make money. They do this with a passion. Unless you are a security company, security is not generally seen as helping to achieve this goal. Therefore, that means developers are going to be focused on getting those functional bugs fixed, tested, and a new version of the code pushed to production so that customers can give your company more money. Where does this leave security? Exactly where it was when you started this release cycle, on the back burner. Where it is likely to stay.

Secondly, it is EXTREMELY HARD to add security functionality to a project after it has been built. The reality is that instead of making the product better, the most likely outcome is that something breaks instead. What's more, the trickier and more pervasive the issue you are trying to correct, the more likely it becomes that something will no longer work "as it should." Of course this all eventually leads down the path where "what changed" is attributed to "the new security checks," which will inevitably be "backed out to get everything working again." At this point, you have another long road ahead of you trying to get that functionality added back in, but where it doesn't impact anything. If you are lucky, you will get a stern talking-to by "the business," but everything will move on relatively quickly. Worst case, polish off that resume of yours ... you may need it.

Of course, the good thing about this issue is that it is rather easy to prevent. The solution is to get involved as early as you can in your company's SDLC, and to integrate security into the process at key stages. The earlier you can have the requirements incorporated into the project documentation, the better off you will be.

Issue #2 - We just aren't a target

It should go without saying that in reality not everyone is equally targeted by hackers, hacktivists, or other groups. Not every company has really interesting data worth stealing. However, just because you are not a bank, or do not work for a government agency or subcontractor, does not mean that you will never experience a breach or be the target of a "crime of opportunity." In essence, what I am saying here is: do not let your perceived lack of target-ability lead you to make rash decisions on whether or not to incorporate security. A breach is a breach, whether you were compromised by Anonymous, an APT, or the three-year-old halfway around the world who just learned how to use Havij.

Issue #3 - The cost of adding security will be the same

When it comes to adding security into a project without breaking the entire thing, the costs of doing so increase exponentially as more functionality is added. Depending on the type of security fix you are trying to add, the cost can go up even faster. For instance, if your developers add proper output encoding from the start, the cost is relatively small. They test it as they are working and minimal effort is required. In addition, they are much more likely to encode each parameter in every spot than if they have to come back a month after it has been deployed to production and retrofit EVERY PAGE. In the latter case, they now have to touch every page in the app, remember all the different parameters, find every place they are used, and hope that they neither forgot one nor broke something in the process. The sheer time and effort (read: money) required to do this is ridiculous. Now, if you consider harder problems such as access control issues, which affect page flows, or SQL injection, which affects how your application communicates with its database, you can begin to see that doing it right the first time is the only way to do it. Having to re-architect or rewrite your entire data access layer is NOT something that you want to have to tell your lead developer ... especially if they have moved on to another project and do not have the time to focus on it like they should.
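As a minimal sketch of the output encoding discussed above: the encodeHtml helper here is a hypothetical illustration, not a substitute for the encoder your framework ships with.

```javascript
// Encode the characters that are significant in HTML so that a
// reflected parameter is rendered as text, not interpreted as markup.
// The ampersand must be replaced first, or earlier entities get mangled.
function encodeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A hostile parameter comes out inert:
encodeHtml('<script>alert(1)</script>');
// → '&lt;script&gt;alert(1)&lt;/script&gt;'
```

Doing this in one place at render time, from the first sprint, is exactly the cheap option the paragraph above contrasts with retrofitting every page later.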

The Solution

There is no shortcut to security. It should be just as much a part of a system's initial requirements as the functional specifications given by the business. To be effective at our jobs we have to get out of this operational, IT mindset and start to think in terms of the way our companies do business, and the ways that we can influence it. This is the way that you get ahead of the curve and start winning. This is where you really start to make a difference and become a partner to the business. We are all "the business." We work FOR our companies just like the analysts and project managers do. To make a difference we have to start contributing to what they see as important. We are not here to say "No" all the time. Instead, we are here to say "Yes, we can do that as long as we take care to safeguard against these risks." Trust me, when phrased the right way in language that they can understand, these people can be some of your best friends. When you build that trust, they can also be some of your biggest supporters.

Break out of IT. Stop being "the security guy/gal." Be the person who provides solutions to the risks that the company NEEDS to take in order to stay competitive. When you can do that, then you will know you have succeeded. Some stuff to think about.

Until next time, happy hacking!

@matt_presson
14 Oct 17:25

Five Golden Rules For A Successful Bug Bounty Program

by Luca Carettoni
Bug bounty programs have become a popular complement to already existing security practices, like secure software development and security testing. In this space, there are successful examples with many bugs reported and copious rewards paid out. For vendors, users and researchers, this seems to be a mutually beneficial ecosystem.

The truth is that not all bug bounty initiatives have been so successful. In some cases, a great idea was poorly executed, resulting in frustration and headaches for both vendors and researchers. Ineffective programs are mainly caused by security immaturity, as not all companies are ready and motivated enough to organize and maintain such initiatives. Bug bounties are a great complement to other practices but cannot completely substitute for professional penetration tests and source code analysis. Many organizations fail to understand that and jump on the bounties bandwagon without having mature security practices in place.

Talking with a bunch of friends during BlackHat/Defcon, we came up with a list of five golden rules to set your bug bounty program up for success. Although the list is not exhaustive, it was built by collecting opinions from numerous peers and should be a good representation of what security researchers expect.

If you are a vendor considering starting a similar initiative, please read it carefully.

The Five Golden Rules:

1. Build trust, with facts
Security testing is based on trust, between client and provider. Trust is important during testing, and especially crucial during disclosure time. As a vendor, make sure to provide as much clarity as you can. For duplicate bugs, increase your transparency by providing more details to the reporter (e.g. date/time of the initial disclosure, original bug ID, etc.). Also, fixing bugs and claiming that they are not relevant (thus non-eligible to rewards) is a perfect way to lose trust.

2. Fast turnaround
Security researchers are happy to spend extra time explaining bugs and providing workarounds; however, they also expect to get notified (and rewarded) at the same reasonable speed. From reporting the bug to paying out rewards, you should have a fast turnaround. Fast means days, not months. Even if you need more time to fix the bug, pay out the reward immediately and explain in detail the complexity of rolling out the patch. Decoupling internal development life cycles from bounties allows you to be flexible with external reporters while maintaining your standard company processes.

3. Get security experts
If you expect web security bugs, make sure to have web security experts around you. For memory corruption vulnerabilities, you need people able to understand root causes and to investigate application crashes. Either internally or through leveraging trusted parties, this aspect is crucial for your reputation. Many of us have experienced situations in which we had to explain basic vulnerabilities and how to replicate those issues. In several cases, the interlocutors were software engineers and not security folks: we simply talk different languages and use different tools.

4. Adequate rewards
Make sure that your monetary rewards are aligned with the market. What's adequate? Check Mozilla, Facebook, Google, Etsy and many others. If you don't have enough budget, just set up a wall of fame, send nice swag and be creative. For instance, you could decide to pay for specific classes of bugs or medium-high impact vulnerabilities only. Always paying at the low end of your rewards range, even for critical security bugs, is just pathetic. Before starting, crunch some numbers by reviewing past penetration test reports performed by recognized consulting boutiques.

5. Non-eligible bugs
Clarify the scope of the program by providing concrete examples, eligible domains and types of bugs that are commonly rejected. Even so, you will have to reject submissions for a multitude of reasons: be as clear and transparent as possible. Spend a few minutes explaining the reason for rejection, especially when the researcher has over-estimated the severity or not properly evaluated the issue.

Happy Bug Hunting, Happy Bug Squashing!
08 Oct 06:31

Cross Site Request Forgery in JS Web Apps

Ensuring that attackers don’t forge requests in your web applications can be a tricky business, one that often requires a hand-rolled solution.

As soon as you have a session, you need to start thinking about cross site request forgery (CSRF). Every request to your site will contain authentication cookies, and HTML forms don’t abide by the same origin policy (SOP).

One method of ensuring that destructive requests (PUTs/POSTs/DELETEs) to your site are made from your domain is to only allow requests with a Content-Type header of application/json. The only way to set this header is via Ajax, and Ajax requests are limited to the same domain.

However, there have been active vectors in the past that have allowed header injection (such as some of the Flash exploits), and Egor, who is the expert in these things, assures me it’s not enough.

The classic method of preventing CSRF attacks is via a token that you pass with every destructive request. The idea is that attackers can’t get hold of this token (because of the SOP), and thus can’t forge requests.

If you’re using Rails, you get this for free. If you’re using Rack, then Rack CSRF is your best bet. It’ll deal with generating tokens and checking requests. The only part you have to handle is on the client side.

jQuery has a neat feature called ajaxPrefilter, which lets you provide a callback to be invoked on every Ajax request. You can pass a CSRF token to the client side via, say, a meta tag, then set a header using ajaxPrefilter.

var CSRF_HEADER = 'X-CSRF-Token';

var setCSRFToken = function(securityToken) {
  jQuery.ajaxPrefilter(function(options, _, xhr) {
    // Only attach the token to same-origin requests.
    if (!xhr.crossDomain) {
      xhr.setRequestHeader(CSRF_HEADER, securityToken);
    }
  });
};

setCSRFToken($('meta[name="csrf-token"]').attr('content'));

In the example above we’re retrieving our CSRF token from a meta tag in the page. Then we’re ensuring that any local Ajax requests forward the token as part of the request’s header.

19 Sep 10:01

5 ways to tackle an insufficient HTTPS implementation

by Troy Hunt

Earlier this year I wrote about 5 ways to implement HTTPS in an insufficient manner (and leak sensitive data). The entire premise of the post was that following a customer raising concerns about their SSL implementation, Top CashBack went on to assert that everything that needed to be protected, was. Except it wasn’t, at least not sufficiently and that’s the rub with SSL; it’s not about having it or not having it, it’s about understanding the nuances of transport layer protection and getting all the nuts and bolts of it right.

Every now and then I write posts like that and every now and then the company involved doesn't do very much about it at all (hi Tesco!). But this case is a little bit different; this time Top CashBack deserves some credit not only for fixing their issues, but for objectively reaching out to discuss the findings and making some very pragmatic, balanced decisions about which pieces of HTTPS to implement and, importantly, which ones not to.

The purpose of this post is to show how simple many of these fixes can be and to also point out some of the real challenges that organisations face when rolling out HTTPS on a broader basis. They’re both interesting stories and are a worthwhile addendum to the original post.

Solution 1: Sensitive data always goes over HTTPS

This is a bit of a biggie because it’s really HTTPS 101; you send anything sensitive over the wire, it gets transport layer protection. End of story, no more negotiations. Here’s what used to happen:

Registration page on Top CashBack

Not only was the password field loaded over HTTP (which of course we know means that it may be manipulated by an attacker to do nasty things even if it posts to HTTPS), it also posted to HTTP. Now it does this:

Registration page without a password field

But wait – where’s the HTTPS? Well at this point you’re not entrusting the site with anything of a sensitive nature so for the moment, there’s nothing to protect (and that includes forms that you might then entrust with sensitive data). Anyway, try to join and you’re now taken here:

Registration page requesting the password

Now we’re seeing HTTPS and of course now I’m also prompted for sensitive data in the form of a password. There’s now the opportunity to see the HTTPS scheme in the address bar and if desired, inspect the certificate before entrusting the site with your credentials.

Why not load the first page over HTTPS? I’ll come back to that, let’s tick off the other boxes first.

Solution 2: HTTP content doesn’t go into HTTPS pages

Browsers get rather unhappy about this and rightly so; once you whack data from an insecure connection into a page loaded over a secure connection then you can no longer have confidence in the overall integrity of the page. I show you how to do rather nasty things with this in my video on mixed content warnings.

Previously, verifying an email account resulted in this:

Top CashBack email authentication page

That’s a rather unhappy little browser warning down the bottom. Now, however, the page looks like, well, pretty much the one above but without the warning. You see it only takes one little absolute reference to an insecure scheme, say to embed a JavaScript file, and you get hit with mixed content warnings. That’s easily fixed by using either domain relative paths such as “../scripts/foo.js” or protocol relative paths such as “//topcashback.co.uk/scripts/foo.js”. They’ve opted for the former and the warning is gone. Job done.
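A one-line sketch of the protocol-relative rewrite described above; the helper name is hypothetical, and the URL is the example path from the paragraph.

```javascript
// Strip the scheme from an absolute URL so the browser inherits the
// scheme of the page that references it: over HTTPS, the resource
// loads over HTTPS too, and the mixed content warning goes away.
function toProtocolRelative(url) {
  return url.replace(/^https?:/i, '');
}

toProtocolRelative('http://topcashback.co.uk/scripts/foo.js');
// → '//topcashback.co.uk/scripts/foo.js'
```

Domain-relative paths like “../scripts/foo.js” already pass through unchanged, which is why either option fixes the warning.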

Solution 3: No more auth cookies over HTTP

The auth cookie is that little piece of stateful data that gets sent in the request header across the stateless protocol that is HTTP. It’s what keeps us authenticated across requests and it’s also what an attacker uses to hijack a session. Send it over an insecure connection and you’ve got serious issues.

After logging in and navigating back to the homepage served over HTTP, here’s what we used to see:

Home page loaded over HTTP with links to "My account" and "Log out"

See the “My account” info in the top right of the page? The site knew you were still logged in because the auth cookie was sent with the unencrypted request. Whoops. Here’s what it looks like now:

Home page still loaded over HTTP with links to "My account" and "Log out"

Wait – what?! It’s still the same – an HTTP request and there’s a link to the account. Well it is and it isn’t the same and the difference is very important. To explain exactly what’s going on, let’s move onto the fourth point.

Solution 4. Auth cookies get marked as secure

The idea of cookies is that they persist state across requests. I know I’ve already said that but it’s important. The other thing that’s important is that you can have multiple cookies for one site. Yes, yes, I know that you probably know that too but here’s the important bit: the security profile of those cookies doesn’t need to be the same and there are cases where it’s perfectly legitimate to send a cookie across an insecure connection, it just can’t contain anything of value to an attacker.

It makes more sense when you look at the cookies for the site in Chrome’s developer tools (it frankly does a much better job at this than IE’s):

Two auth cookies in Chrome's developer tools

What you’re seeing right up the top is two auth cookies – one is secure, one is not. When the .TCB-BAS cookie is present you’ll get nice “My account” and “Log out” links so you get a sense of persistence. In fact you can even hijack this cookie as an attacker would and then follow the “My account” link but – but – before seeing anything of a personal nature, you’ll see this:

Prompt to login when no secure auth cookie exists

What this means is that there’s a two-tiered auth cookie approach. The secure cookie – that’s .TCB-AUTH-1 – must be present in order to view account info. This means that you have the best of both worlds in the sense that one cookie allows the user to still be identified on insecure pages whilst the other cookie will only work over HTTPS and that’s the one that does the sensitive work. Oh, and just to clarify a point at the end of my earlier post, whilst a bunch of personal data can be pulled out from the account section, apparently banking data is obfuscated which is good news on the security front.
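As a sketch of what those two Set-Cookie headers might look like: the cookie names mirror the ones in the screenshot, but the attribute choices here are illustrative assumptions, not Top CashBack's actual configuration.

```javascript
// Two-tiered auth cookies: a low-value cookie that may travel over
// HTTP (enough to show "My account" links), and a high-value cookie
// whose Secure attribute stops the browser sending it over plain HTTP.
function buildAuthCookies(basToken, authToken) {
  return [
    '.TCB-BAS=' + basToken + '; Path=/; HttpOnly',
    '.TCB-AUTH-1=' + authToken + '; Path=/; HttpOnly; Secure'
  ];
}

const headers = buildAuthCookies('abc123', 'def456');
// headers[1] carries "Secure", so hijacking traffic on an HTTP page
// only ever exposes the low-value cookie.
```

Stealing the first cookie gets an attacker a personalised-looking homepage and nothing more; the second one, which gates account data, never appears on the wire unencrypted.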

But why not just serve everything over HTTPS? I know, I know, I’m getting to that, one more thing first.

Solution 5. Not relying on HTTP to load login forms

The last major HTTPS issue they had was that when clicking the “Login” button on an insecure page, you got this:

Login form with a padlock

Don’t be fooled by the padlock image! The login form is actually a secure page loaded into an iframe, only thing is that you don’t know that and because it’s embedded in an HTTP page then a man in the middle could have actually loaded their own login form instead. In fact here’s a video of exactly how that works.

Now hitting the login button from anywhere gives you this:

Dedicated login page loaded over HTTPS

This is a fundamentally different security profile as you can observe the HTTPS scheme, inspect the certificate and establish the authenticity of the page before entering your credentials. Oh – and the only padlock in sight is the one presented by the browser as a means of independent verification that the connection is indeed secure.

In some ways this is a usability concession as it requires an entirely new page to be loaded before the user can login rather than just populating a little iframe. Of course this pattern could still be used if the parent page was served over HTTPS, so why isn’t it? Why not just make everything HTTPS? Ok, let’s tackle that:

Why not HTTPS everywhere?

Ultimately, we’re going to see a lot more HTTPS everywhere and by everywhere I mean each and every resource being served over a secure connection; the page itself, the JavaScript, the CSS, the images – you name it – it will increasingly only be available over a secure scheme. But we’re in a transitional phase as an industry and as of now there are some hurdles to overcome.

One of the best analyses I’ve seen of the challenges HTTPS everywhere poses is from Nick Craver of Stack Overflow where earlier this year he wrote about Stackoverflow.com: the road to SSL. Nick makes many important points in his post including:

  1. Their ad network (Adzerk) needs to support SSL
  2. Their image hosting service (i.stack.imgur.com) needs to support SSL
  3. Images embedded from other services need to be loaded over SSL
  4. There’s a cost impact from CDNs
  5. There are considerations around the impact on web sockets
  6. Load balancers need to terminate the SSL connection before hitting the web farm
  7. Multiple domains add challenges to the cert validity
  8. They simply don’t know the impact on SEO – and that’s a critical one

Stack Overflow, of course, is a significantly larger proposition than Top CashBack but they have a number of similar issues they’re facing. For example, the SEO issue – how will search engines behave if they end up HTTP 301’ing (assuming a permanent redirect) to every resource they’ve already indexed? And the biggie (which is arguably the showstopper) is that Google doesn’t support SSL on the Adsense network (at least not at the time of writing) so there’s an entire revenue stream out the window. It’s hard to argue with that.
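The permanent-redirect behaviour that the SEO question hinges on can be sketched as follows; this is a hypothetical helper assuming a straight scheme swap, not code from either site.

```javascript
// Map any HTTP request to a 301 (permanent) redirect at the same path
// under HTTPS; this is the response search engines would see for every
// previously indexed resource.
function redirectToHttps(requestUrl) {
  if (requestUrl.startsWith('https://')) {
    return null; // already secure, nothing to do
  }
  return {
    status: 301,
    location: requestUrl.replace(/^http:/i, 'https:')
  };
}

redirectToHttps('http://example.com/questions/1');
// → { status: 301, location: 'https://example.com/questions/1' }
```

Every indexed page suddenly answering with a 301 is exactly the wholesale change whose SEO impact Nick flags as unknown.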

Security needs to be tackled with a healthy dose of pragmatism and frankly that’s often missing in these discussions. At the end of the day, security is but one of many considerations in the melting pot that is running an online business and ultimately it has to support the objectives of that business. Of course one of the objectives is to not get pwned and that’s where a discussion of risk versus impact versus cost comes into play. This probably wasn’t originally done at Top CashBack with full cognisance of the “risk” component, but it’s reassuring to now see this understood and IMHO a healthier balance struck between those objectives. Even still, as you’ll read in some of the comments on their blog post, change is not without impact so credit is due when an effort is made to improve things.

Looking to the future, if I were to be involved in a new project today where there was a need to use any SSL whatsoever, then I'd be very inclined to just secure the whole shebang over HTTPS. There'd be no worry about SEO, you could be selective with 3rd party content providers and there'd be no retrofit effort. The old concern of adverse performance impact on the client or server has been pretty comprehensively disproven (at least any tangible impact has), as have concerns about caching. SSL has matured. Browsers have matured. It might just be time for HTTPS everywhere.

Disclosure

Top CashBack responded well to my original post – their focus was on improving their security position and they were open and receptive to feedback. We spoke on the phone, we emailed, and we had a couple of points of differing views, but it was always informed and constructive. There was no financial engagement sought or offered nor any oversight on this post; they simply got on with the job of securing their environment. Kudos to them.

12 Sep 17:56

The Three Patterns of Software Development for SDLC Security

by Rohit Sethi

A one-size-fits-all approach to Software Development Life Cycle (SDLC) security doesn’t work. Practitioners often find that development teams all have different processes – many seem to be special snowflakes, rejecting a single SDLC security program. This may not be much comfort to somebody who needs to lead an SDLC security initiative across a large organization – but in our experience it is possible to build an application security program that works for different development teams by recognizing that each SDLC tends to fall into one of three patterns: Waterfall, Agile, and Continuous Development/No Process. Your secure SDLC initiative should provide a toolkit that works for each without severely impacting developers’ productivity. Our whitepaper presents detailed guidance on how to embed security requirements into each.

Characteristics of the Three Patterns for SDLC Security:

1. Waterfall: Development with big upfront design.

  • Managed by a central person or team of Project Managers (PMs).
  • May be iterative, but generally has long release cycles (i.e. quarterly, bi-annual or annual releases).
  • Common in highly regulated industries, large enterprises, and software vendors who create expensive-to-patch software (e.g. shipped software, embedded devices).

2. Agile: Iterative development

  • No formal project management as compared to waterfall. Scrum masters are responsible for watching over process while product owners are responsible for setting priorities.
  • Each release results in shippable software – typically 1-4 week releases.
  • This tends to be the most popular style for internal applications, mobile applications, and increasingly external-facing web-based applications. In general, we see agile as the most common pattern of development for new software.

3. Continuous development/no process: Either hyper-optimized with automation, leveraging continuous integration tools like Jenkins and configuration management systems, OR with absolutely no development process or standardized tooling, such as Application Lifecycle Management (ALM) tools. Both styles impact security requirements in the same ways:

  • Releases and even iterations are completely removed from the picture – software is in a continuous state of release, with no chance to embed security ahead of time.
  • Absolute minimization of process overhead.
  • Cost of a defect is low, since it’s relatively easy to deploy a fix.
  • Continuous development is very popular with eCommerce companies and other Internet-based businesses.

Each style tends to have different needs from a secure SDLC standpoint:

 1. Waterfall

  • Willingness to spend time up front to “do it right” – if and only if the business thinks security is a priority.
  • With sufficient buy in, design-time analysis such as threat modeling, and longer cycles on security activities such as a full-scale code review are conducted.
  • Can accommodate several different security assessment techniques.
  • Cost of fixing a security vulnerability can be extreme, and the window of risk exposure can be particularly long if it involves end users patching their systems.

2. Agile:

  • Primarily feature driven, particularly when adopting user stories as the primary method for conveying requirements.
  • Typically do not have any process for managing non-functional requirements.
  • Can adopt security into iteration planning process by baking security requirements into product backlog.
  • Emphasis on automated testing, whenever possible – may be able to accommodate manual testing from QA or security teams.
  • Cost of fixing security vulnerabilities/window of risk is lower than waterfall, but there is still an emphasis on shipping defect-free software.

3. Continuous development / no process:

  • Obsessed with automation and protecting developers from process overhead. Anything that requires developers to take time away from coding is often met with fierce resistance.
  • No ability to plan up-front except on a per-feature or per-change basis.
  • Often willing to invest in building security features into frameworks, and in automated front-end tools that shield developers from security overhead.

Recognizing the three patterns and providing toolkits that work for each can dramatically lower the resistance for a SDLC security initiative. Read our guide on how to embed requirements into each.


28 Aug 18:28

Small Business Authority's Survey Shows Overwhelming Majority of Independent Business Owners Believe Their Website Is Secure

Findings from the August survey: 86 percent of business owners feel that their current website is secure
27 Aug 17:53

Incentives And Organizational Alignment (Or Lack Thereof)

The lack of incentives for security effectiveness remains a problem for security professionals. Until we define legitimate success criteria as the basis to align the organization around security, nothing will change
27 Aug 17:46

Wall Street traders charged with stealing company code via email

by Neil McAllister

Note to self ... add attachment 'secret-algorithms.zip' ...

Three men have been charged with pilfering trade secrets from a Wall Street firm after two of them emailed themselves computer code belonging to their former employer from their company email accounts.…

27 Aug 17:46

Contractors Are Now Using Encrypted Calls and Texts for Legal Advice

Posted by InfoSec News on Aug 27

http://www.nextgov.com/cybersecurity/2013/08/contractors-now-using-encrypted-calls-and-text-legal-advice/69341/

By Aliya Sternstein
Nextgov
August 26, 2013

With economic espionage and domestic surveillance creating a climate of
cyber insecurity, some intellectual property attorneys now employ
encrypted communications to correspond with federal contractor clients.

Tools such as RedPhone, a mobile voice app, and Silent Circle, a text,
video...
27 Aug 17:39

Poison Ivy RAT becoming the AK-47 of cyber-espionage attacks

by John Leyden

Just because it's simple to use doesn't mean the user is low-rent

The Poison Ivy Remote Access Tool (RAT) - often considered a tool for novice "script kiddies" - has become a ubiquitous feature of cyber-espionage campaigns, according to experts.…

26 Aug 18:57

Tesla Model S vulnerable to hackers?

Posted by InfoSec News on Aug 26

http://www.autoblog.com/2013/08/25/tesla-model-s-vulnerable-hackers/

By Damon Lowney
AutoBlog
Aug 25th 2013

Next time you walk by a parked Tesla and its sunroof is opening and
closing with nobody sitting inside or around it, you could be witnessing a
hacker moment. For all of its strengths as a car, the Model S reportedly
has a weak spot: the security of its API (application programming
interface) authentication, according to an article in...
26 Aug 18:56

PayPal fixes critical account switcheroo bug after researcher tipoff

by Iain Thomson

All your account are belong to us

PayPal has fixed a critical flaw that allowed an attacker to delete any account at will and replace it with one of their own.…

21 Aug 14:36

Microsoft's Patch Pitfalls Underscore Tradeoffs For Securing Systems

As the software giant works to fix the shortcomings in its latest set of patches, security experts debate whether "trust the patch" is still the best course
20 Aug 13:18

Backdoor in popular ad-serving software opens websites to remote hijacking

by Dan Goodin

If you installed the OpenX ad server in the past nine months, there's a chance hackers have a backdoor that gives them administrative control over your Web server, in many cases including passwords stored in databases, security researchers warned.

The hidden code in the open-source ad software was discovered by a reader of Heise Online (Microsoft Translator), a well-known German tech news site, and it has since been confirmed by researchers from Sucuri. It has gone undetected since November and allows attackers to execute any PHP code of their choice on sites running a vulnerable OpenX version.

Coca-Cola, Bloomberg, Samsung, CBS Interactive, and eHarmony are just a small sampling of companies the OpenX website lists as customers. The software company, which also sells a proprietary version of the software, has raised more than $75 million in venture capital as of February 2013.

20 Aug 13:18

Twitter rolls out two-factor authentication that’s simpler, more secure

by Dan Goodin

Twitter has unveiled a new login verification feature that largely replaces the two-factor authentication system it rolled out in May to prevent a rash of password phishing attacks hitting its users.

The new system relies on strong encryption to provide iOS and Android smartphone users with an end-to-end solution that's not vulnerable to compromised SMS delivery channels. Unlike the current system, it also does away with the use of a "shared secret" between end users and Twitter, since the secrets are often just as vulnerable as passwords to phishing and other types of attacks. The cryptographic key used to approve login requests stays on a user's phone and is managed by the Twitter app itself. In addition to being more resistant to attack, the system is easier to use, company officials said.

"Now you can enroll in login verification and approve login requests right from the Twitter app on iOS and Android," Twitter security engineer Alex Smolen wrote in a blog post published Tuesday. "Simply tap a button on your phone and you're good to go. This means you don't have to wait for a text message and then type in the code each time you sign in on twitter.com."

20 Aug 13:18

Welcome to the “Internet of Things,” where even lights aren’t hacker safe

by Dan Goodin
A frame from a video demonstration showing a proof-of-concept malware attack on a smartphone-controlled light system from Philips. (Image: Nitesh Dhanjani)

Weaknesses in a popular brand of light system controlled by computers and smartphones can be exploited by attackers to cause blackouts that are remedied only by removing the wireless device that receives the commands, a security researcher said.

The vulnerabilities in the Hue LED lighting system made by Philips are another example of the risks posed by connecting thermostats, door locks, and other everyday devices to the Internet so they can be controlled by someone in the next room or across town. While the so-called Internet of Things phenomenon brings convenience and new capabilities to gadgets, they come at a cost. Namely, they're susceptible to the same kinds of hack attacks that have plagued computer users for decades. The ability to load a Web page that causes house or office lights to go black could pose risks that go well beyond the typical computer threat.

"Lighting is critical to physical security," Nitesh Dhanjani, the researcher who discovered the weaknesses and developed proof-of-concept attacks that exploit them, wrote in a blog post published Tuesday. "Smart lightbulb systems are likely to be deployed in current and new residential and corporate constructions. An abuse case such as the ability of an intruder to remotely shut off lighting in locations such as hospitals and other public venues can result in serious consequences."

06 Aug 17:59

Lumpy milk and exploding yoghurt? Your fridge could be riddled with MALWARE

by Jasper Hamill

Security bod predicts future where virus writers steal your lunch

Antivirus guru AVG is preparing for a future where even fridges and freezers are targeted by malware, the firm's chief operating officer has said.…

06 Aug 17:59

Child porn hidden in legit hacked websites: 100s redirected to sick images

by Brid-Aine Parnell

So warns the Internet Watch Foundation

Innocent companies' websites are being hacked to serve images of child sex abuse, the Internet Watch Foundation has warned.…

06 Aug 17:59

Car Hackers Release Tools

Researchers who hacked Toyota Prius and Ford Escape hope to foster future 'car-in-a-box' model for tinkering with vehicle security issues
06 Aug 17:53

Researchers find trojanized banking app that exploits critical Android bug

by Dan Goodin
A page displayed by the trojanized app found by Trend Micro. (Image: Trend Micro)

Researchers have unearthed another malicious app exploiting a critical vulnerability in Google's Android OS that allows attackers to inject malicious code into legitimate programs without invalidating their digital signature.

The threat poses as an update for the official Android app available to customers of NH Nonghyup Bank, one of South Korea's biggest financial institutions, according to a blog post published Friday by researchers from antivirus provider Trend Micro. By exploiting the so-called master-key vulnerability in the mobile OS, this malware bears the same cryptographic signature found in the legitimate release, even though the update contains malicious code that uploads user credentials to a remote server.

The good news is that the app verification tool Google released in Android 4.2 late last year flags these malicious apps. And according to this recent post, Google developers have added the protection to earlier versions and turned it on by default. The verification tool checks the authenticity of apps downloaded both from the official Google Play marketplace and alternative sources as well. As an added safety measure, users should avoid these alternative marketplaces unless there's a strong case for doing otherwise.

06 Aug 17:53

Windows Phones susceptible to password theft when connecting to rogue Wi-Fi

by Dan Goodin

Smartphones running Microsoft's Windows Phone operating system are vulnerable to attacks that can extract the user credentials needed to log in to sensitive corporate networks, the company warned Monday.

The vulnerability resides in a Wi-Fi authentication scheme known as PEAP-MS-CHAPv2, which Windows Phones use to access wireless networks protected by version 2 of the Wi-Fi Protected Access protocol. Cryptographic weaknesses in the Microsoft-developed technology allow attackers to recover a phone's encrypted domain credentials when it connects to a rogue access point. By exploiting vulnerabilities in the MS-CHAPv2 cryptographic protocol, the adversary could then decrypt the data.

"An attacker-controlled system could pose as a known Wi-Fi access point, causing the victim's device to automatically attempt to authenticate with the access point and in turn allow the attacker to intercept the victim's encrypted domain credentials," the Microsoft advisory warned. "An attacker could then exploit cryptographic weaknesses in the PEAP-MS-CHAPv2 protocol to obtain the victim's domain credentials."

30 Jul 01:45

Tampering with a car’s brakes and speed by hacking its computers: A new how-to

by Dan Goodin
Unsafe at any speed: the speedometer of a 2010 Toyota Prius that has been hacked to report an incorrect reading. (Image: Chris Valasek)

Just about everything these days ships with tiny embedded computers that are designed to make users' lives easier. High-definition TVs, for instance, can run Skype and Pandora and connect directly to the Internet, while heating systems have networked interfaces that allow people to crank up the heat on their way home from work. But these newfangled features can often introduce opportunities for malicious hackers. Witness "Smart TVs" from Samsung or a popular brand of software for controlling heating systems in businesses.

Now, security researchers are turning their attention to the computers in cars, which typically contain as many as 50 distinct ECUs—short for electronic control units—that are all networked together. Cars have relied on on-board computers for some three decades, but for most of that time, the circuits mostly managed low-level components. No more. Today, ECUs control or finely tune a wide array of critical functions, including steering, acceleration, braking, and dashboard displays. More importantly, as university researchers documented in papers published in 2010 and 2011, on-board components such as CD players, Bluetooth for hands-free calls, and "telematics" units for OnStar and similar road-side services make it possible for an attacker to remotely execute malicious code.

The research is still in its infancy, but its implications are unsettling. Trick a driver into loading the wrong CD or connecting the Bluetooth to the wrong handset, and it's theoretically possible to install malicious code on one of the ECUs. Since the ECUs communicate with one another using little or no authentication, there's no telling how far the hack could extend.

26 Jul 18:19

Android 'Master Key' DEMON APPS sniffed out in China

by Richard Chirgwin

Send for the doctor, send your IMEI to attackers

Virus-hunter Symantec says the Android master key vulnerability is being exploited in China, where half-a-dozen apps have showed up with malicious content hiding behind a supposedly-safe crypto key.…

26 Jul 18:19

LinkedIn snaps shut OAuth login token snaffling vulnerability

by John Leyden

Just in case anyone wanted to add CEO of Yahoo! to your online CV

Facebook-for-bosses website LinkedIn has fixed a security vulnerability that potentially allowed anyone to swipe users' OAuth login tokens.…

26 Jul 18:19

Five charged as Feds bust largest credit-card hack in history

by Iain Thomson

Hundreds of millions stolen from biggest names in US

Federal prosecutors in New Jersey say they've busted what could be the biggest credit card hacking fraud in US history, with companies such as NASDAQ, 7-Eleven, and Dow Jones falling prey to an Eastern European criminal gang.…

26 Jul 18:08

Feds Indict Five In Massive Credit-Card Data Breach Scheme

'Hacker 1' and 'Hacker 2' from the Heartland Payment Systems breach indictment were named today among the five defendants in latest breach charges that resulted in 160 million stolen credit card numbers and hundreds of millions of dollars in losses
26 Jul 18:08

Somebody's Watching You: Hacking IP Video Cameras

Major holes in network video recorders (NVRs) could result in a major physical security and privacy FAIL
22 Jul 15:50

Titsup Apple Developer Centre mystery: Database interloper fingered

by Richard Chirgwin

Brooding silence bugs code-cutters

After days of silence over an outage that's outraged developers, Apple has announced that its Developer Centre was subject to an attempted intrusion.…

22 Jul 15:50

NTRO releases guidelines to protect against cyber attacks

Posted by InfoSec News on Jul 22

http://articles.timesofindia.indiatimes.com/2013-07-20/india/40694913_1_cyber-attacks-ntro-guidelines

By Rakhi Chakrabarty
The Times of India
July 20, 2013

NEW DELHI: Cyber attacks on ministries, including home, external affairs,
power and telecom, could soon constitute cyber-terrorism and could be
punished with life imprisonment. Tough new guidelines were released by
national security advisor Shivshankar Menon on Friday by which these...