Shared posts

27 Jul 16:25

Victory Lap for Ask Patents

by Joel Spolsky

There are a lot of people complaining about lousy software patents these days. I say, stop complaining, and start killing them. It took me about fifteen minutes to stop a crappy Microsoft patent from being approved. Got fifteen minutes? You can do it too.

In a minute, I’ll tell you that story. But first, a little background.

Software developers don’t actually invent very much. The number of actually novel, non-obvious inventions in the software industry that maybe, in some universe, deserve a government-granted monopoly is, perhaps, two.

The other 40,000-odd software patents issued every year are mostly garbage that any working programmer could “invent” three times before breakfast. Most issued software patents aren’t “inventions” as most people understand that word. They’re just things that any first-year student learning Java should be able to do as a homework assignment in two hours.

Nevertheless, a lot of companies large and small have figured out that patents are worth money, so they try to file as many as they possibly can. They figure they can generate a big pile of patents as an inexpensive byproduct of the R&D work they’re doing anyway, just by sending some lawyers around the halls to ask programmers what they’re working on, and then attempting to patent everything. Almost everything they find is either obvious or has been done before, so it shouldn’t be patentable, but they use some sneaky tricks to get these things through the patent office.

The first technique is to try to make the language of the patent as confusing and obfuscated as possible. That actually makes it harder for a patent examiner to identify prior art or evaluate if the invention is obvious.

A bonus side effect of writing an incomprehensible patent is that it works better as an infringement trap. Many patent owners, especially the troll types, don’t really want you to avoid their patent. Often they actually want you to infringe their patent, and then build a big business that relies on that infringement, and only then do they want you to find out about the patent, so you are in the worst possible legal position and can be extorted successfully. The harder the patent is to read, the more likely it will be inadvertently infringed.

The second technique to getting bad software patents issued is to use a thesaurus. Often, software patent applicants make up new terms to describe things with perfectly good, existing names. A lot of examiners will search for prior art using, well, search tools. They have to; no single patent examiner can possibly be aware of more than (rounding to nearest whole number) 0% of the prior art which might have invalidated the application.

Since patent examiners rely so much on keyword searches, when you submit your application, if you can change some of the keywords in your patent to be different than the words used everywhere else, you might get your patent through even when there’s blatant prior art, because by using weird, made-up words for things, you’ve made that prior art harder to find. 

Now on to the third technique. Have you ever seen a patent application that appears ridiculously broad? (“Good lord, they’re trying to patent CARS!”). Here’s why. The applicant is deliberately overreaching, that is, striving to get the broadest possible patent knowing that the worst thing that can happen is that the patent examiner whittles their claims down to what they were entitled to patent anyway.

Let me illustrate that as simply as I can. At the heart of a patent is a list of claims: the things you allege to have invented that you will get a monopoly on if your patent is accepted.

An example might help. Imagine a simple application with these three claims:

1. A method of transportation
2. The method of transportation in claim 1, wherein there is an engine connected to wheels
3. The method of transportation in claim 2, wherein the engine runs on water

Notice that claim 2 mentions claim 1, and narrows it... in other words, it claims a strict subset of things from claim 1.
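A toy Python model (entirely hypothetical, just to make the nesting concrete) shows how each dependent claim covers a strict subset of its parent:

    # Each dependent claim covers a strict subset of what its parent covers.
    transports = {"horse", "bicycle", "gas car", "water car"}

    claim1 = transports                                    # any method of transportation
    claim2 = {t for t in claim1 if t.endswith("car")}      # ...with an engine and wheels
    claim3 = {t for t in claim2 if t.startswith("water")}  # ...where the engine runs on water

    assert claim3 < claim2 < claim1  # each claim is strictly narrower than the last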

Now, suppose you invented the water-powered car. When you submit your patent, you might submit it this way even knowing that there’s prior art for “methods of transportation” and you can’t really claim all of them as your invention. The theory is that (a) hey, you might get lucky! and (b) even if you don’t get lucky and the first claim is rejected, the narrower claims will still stand.

What you’re seeing is just a long shot lottery ticket, and you have to look deep into the narrower claims to see what they really expect to get. And you never know, the patent office might be asleep at the wheel and BOOM you get to extort everyone who makes, sells, buys, or rides transportation.

So anyway, a lot of crappy software patents get issued and the more that get issued, the worse it is for software developers.

The patent office got a little bit of heat about this. The America Invents Act changed the law to allow the public to submit examples of prior art while a patent application is being examined. And that’s why the USPTO asked us to set up Ask Patents, a Stack Exchange site where software developers like you can submit examples of prior art to stop crappy software patents even before they’re issued.

Sounds hard, right?

At first I honestly thought it was going to be hard. Would we even be able to find vulnerable applications? The funny thing is that when I looked at a bunch of software patent applications at random I came to realize that they were all bad, which makes our job much easier.

Take patent application US 20130063492 A1, submitted by Microsoft. An Ask Patents user submitted this call for prior art on March 26th.

I tried to find prior art for this just to see how hard it was. First I read the application. Well, to be honest, I kind of glanced at the application. In fact I skipped the abstract and the description and went straight to the claims. Dan Shapiro has a great blog post called How to Read a Patent in 60 Seconds, which taught me how to do this.

This patent was, typically, obfuscated, and it used terms like “pixel density” for something that every other programmer in the world would call “resolution,” either accidentally (because Microsoft’s lawyers were not programmers), or, more likely, because the obfuscation makes it that much harder to search.

Without reading too deeply, I realized that this patent is basically trying to say “Sometimes you have a picture that you want to scale to different resolutions. When this happens, you might want to have multiple versions of the image available at different resolutions, so you can pick the one that’s closest and scale that.”
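Any working programmer could sketch that in a few lines. For instance, a minimal Python illustration (my own toy version with made-up sizes, not code from the patent):

    # Given pre-rendered versions of an image at several widths, pick the
    # closest one and compute the factor needed to scale it to the target.
    def pick_closest(available_widths, target_width):
        return min(available_widths, key=lambda w: abs(w - target_width))

    versions = [16, 32, 48, 256]           # icon pre-rendered at four widths
    chosen = pick_closest(versions, 200)   # -> 256, the nearest available size
    scale = 200 / chosen                   # -> 0.78125, factor to scale it by
    print(chosen, scale)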

This didn’t seem novel to me. I was pretty sure that the Win32 API already had a feature to do something like that. I remembered that it was common to provide multiple icons at different resolutions and in fact I was pretty sure that the operating system could pick one based on the resolution of the display. So I spent about a minute with Google and eventually (bing!) found this interesting document entitled Writing DPI-Aware Win32 Applications [PDF] written by Ryan Haveson and Ken Sykes at, what a coincidence, Microsoft.

And it was written in 2008, while Microsoft’s new patent application was trying to claim that this “invention” was “invented” in 2011. Boom. Prior art found, and deployed.

Total time elapsed, maybe 10 minutes. One of the participants on Ask Patents pointed out that the patent application referred to something called “scaling sets.” I wasn’t sure what that was supposed to mean but I found a specific part of the older Microsoft document that demonstrated this “invention” without using the same word, so I edited my answer a bit to point it out. Here’s my complete answer on AskPatents.

Mysteriously, whoever it was that posted the request for prior art checked the Accepted button on Stack Exchange. We thought this might be the patent examiner, but it was posted with a generic username.

At that point I promptly forgot about it, until May 21 (two months later), when I got this email from Micah Siegel (Micah is our full-time patent expert):

The USPTO rejected Microsoft's Resizing Imaging Patent!

The examiner referred specifically to Prior Art cited in Joel's answer ("Haveson et al").

Here is the actual document rejecting the patent. It is a clean sweep, starting on page 4 and continuing throughout, rejecting the application as obvious in view of Haveson.

Micah showed me a document from the USPTO confirming that they had rejected the patent application, and the rejection relied very heavily on the document I found. This was, in fact, the first “confirmed kill” of Ask Patents, and it was really surprisingly easy. I didn’t have to do the hard work of studying everything in the patent application and carefully proving that it was all prior art: the examiner did that for me. (It’s a pleasure to read him demolish the patent in question, all twenty claims, if that kind of schadenfreude amuses you).

(If you want to see the rejection, go to Public Pair and search for publication number US 20130063492 A1. Click on Image File Wrapper, and look at the non-final rejection of 4-11-2013. Microsoft is, needless to say, appealing the decision, so this crappy patent may re-surface.) Update October 2013: the patent received a FINAL REJECTION from the USPTO!

There is, though, an interesting lesson here. Software patent applications are of uniformly poor quality. They are remarkably easy to find prior art for. Ask Patents can be used to block them with very little work. And this kind of individual destruction of one software patent application at a time might start to make a dent in the mountain of bad patents getting granted.

My dream is that when big companies hear about how friggin’ easy it is to block a patent application, they’ll use Ask Patents to start messing with their competitors. How cool would it be if Apple, Samsung, Oracle and Google got into a Mexican Standoff on Ask Patents? If each of those companies had three or four engineers dedicating a few hours every day to picking off their competitors’ applications, the granting of patents to those companies would grind to a halt. Wouldn’t that be something!

Got 15 minutes? Go to Ask Patents right now, and see if one of these RFPAs covers a topic you know something about, and post any examples you can find. They’re hidden in plain view; most of the prior art you need for software patents can be found on Google. Happy hunting!


22 Jul 14:19

Social Media

prepend

I haven't shared xkcd in a long time.

The social media reaction to this asteroid announcement has been sharply negative. Care to respond?

17 Jul 18:32

Stopping Big Pharma From Using New .pharmacy Domain To Block Legal Pharmacies They Don't Like

by Mike Masnick
A few months ago, we wrote about how Big Pharma -- a collection of the largest pharmaceutical companies -- has been trying to get the .pharmacy generic top level domain from ICANN. The whole idea, of course, is actually to use it to block legitimate pharmacies that offer reimportation of drugs at cheaper prices. As you hopefully know, pharmaceutical companies jack up prices for Americans, and you can often get the exact same drugs from Canadian pharmacies. In fact, some US politicians -- including President Obama -- have supported this "reimportation" or "parallel importation" as a way to reduce the costs of healthcare.

But Big Pharma loves to conflate legitimate Canadian pharmacies with "rogue pharmacies" that sell either counterfeit drugs or just fake drugs. Of course, some studies have shown that much of the product from "rogue pharmacies" is actually genuine (killing off your customers isn't good business...), but the legal Canadian pharmacies still have a much higher level of legitimacy. And the pharmaceutical companies hate that, because they like their monopoly pricing.

RxRights, an organization that represents many of those Canadian pharmacies who are helping Americans get more affordable medicines so they can, you know, stay alive, has put together a petition asking ICANN not to support Big Pharma's digital landgrab.
Due to the applicant's history of actions and positions, we have little doubt that, if approved, NABP will prevent safe, regulated and licensed Canadian and other international online pharmacies from registering domains in the .pharmacy gTLD. Such an action would block trusted distance care providers from utilizing the gTLD that global consumers will come to regard as a mark of authenticity for safe medication.

Large pharmaceutical companies and NABP-member U.S. pharmacies oppose personal importation because Americans can obtain identical, legitimate, but lower cost prescription medications through licensed online pharmacies domiciled outside the U.S. This opposition is significant in light of the fact that U.S. pharmaceutical companies have largely funded NABP's application for .pharmacy. Further, NABP uses funding from pharmaceutical companies and U.S. pharmacies for its programs and activities. We believe that it is an inherent conflict of interest. Through its .pharmacy application, NABP seeks to control access to affordable medication not just for Americans but for all global consumers with an Internet connection.

Large pharmaceutical firms seek to keep drug prices high for as long as possible through a combination of dubious and aggressive tactics, including regional differential pricing arrangements and payments to prevent the availability of lower cost generics. Their backing of the .pharmacy application seeks to extend these inflated pricing measures to the Internet retail sector.

The .pharmacy gTLD must be operated in a manner that ensures that this unique global Internet resource provides benefits to all consumers seeking access to safe and affordable medicines no matter where they reside. ICANN must act in the global public interest by ensuring that NABP cannot endanger hundreds of thousands of lives through control of .pharmacy. A fully inclusive advisory board that includes legitimate online pharmacies and consumer groups from around the world should set the registration policies for .pharmacy.

The problem of drug affordability is a global issue. Within the U.S., unique among wealthy nations, it is dire--the Commonwealth Fund reports that 50 million Americans chose not to fill a prescription last year because of high U.S. costs. If NABP's .pharmacy application is approved, access to affordable medicine will be further restricted through the denial of domain registrations to licensed and regulated providers of lower cost prescription drugs, compounding this public health crisis.

Please don't let that happen.


08 Jun 00:49

APIs: What Are The Common Obstacles?

by Ernesto Ramirez


Today’s guest post comes to us from Eric Jain, the lead developer behind Zenobase and a wonderful contributor to our community.

At last month’s QS Europe 2013 conference, developers gathered at a breakout session to compile a list of common obstacles encountered when using the APIs of popular, QS-related services. We hope that this list of obstacles will be useful to toolmakers who have developed APIs for their tools or are planning to provide such APIs.

  1. No API, or an incomplete API that exposes only aggregate data, not the actual data that was recorded.
  2. Custom authentication mechanisms (instead of e.g. OAuth), or custom extensions (e.g. for refreshing tokens with OAuth 1.0a).
  3. OAuth tokens that expire.
  4. Timestamps that lack time zone offsets: Some applications need to know how much time has elapsed between two data points (not possible if all times are local), or what e.g. the hour of the day was (not possible if all times are converted to UTC).
  5. Can’t retrieve data points going back more than a few days or weeks, because at least one separate request has to be made for each day, instead of being able to use a begin/end timestamp and offset/limit parameters (see the sketch after this list).
  6. Numbers that don’t retain their precision (1 != 1.0 != 1.00), or are changed due to unit conversion (71kg = 156.528lbs = 70.9999kg?).
  7. No SSL, or SSL with a certificate that is not widely supported.
  8. Data that lacks unique identifiers (for traceability), or doesn’t include its provenance (if obtained from another service).
  9. No sandbox with test data for APIs that expose data from hardware devices.
  10. No dedicated channel for advance notifications of API changes.
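To make obstacles 4 and 5 concrete, here is a minimal sketch of a request to a hypothetical API that avoids both; the endpoint and parameter names are invented for illustration:

    # Timezone-aware ISO 8601 timestamps (obstacle 4) plus begin/end and
    # offset/limit paging (obstacle 5) let a client backfill months of
    # history in a handful of requests.
    import requests  # third-party HTTP client: pip install requests

    params = {
        "begin": "2013-05-01T00:00:00-07:00",  # offsets preserved, not forced to UTC
        "end": "2013-06-01T00:00:00-07:00",
        "offset": 0,
        "limit": 500,
    }
    resp = requests.get("https://api.example.com/v1/datapoints", params=params)
    for point in resp.json()["datapoints"]:
        print(point["timestamp"], point["value"])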

This list is by no means complete, but rather a starting point that we hope will kick off a discussion around best practices.

The post APIs: What Are The Common Obstacles? appeared first on Quantified Self.

07 Jun 12:25

Triangulating On Truth – The Totalitarian State

by Michael Arrington

The Guardian broke a big story yesterday – a court document authorizing the FBI and NSA to secretly collect customer phone records. All of them, for all Verizon customers.

Then today the Washington Post broke an even bigger story – a leaked presentation stating that the NSA is “tapping directly into the central servers of nine leading U.S. Internet companies” to collect information on users. The project is code-named PRISM.

These are the huge repositories of user information from Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, Apple. Dropbox, we’re told, is “coming soon.” Twitter is noticeably absent.

Then the counter stories – most of the companies mentioned in the NSA presentation have denied that the NSA has access to their servers. And people are pointing out that the Verizon order doesn’t include actual phone conversations, just the metadata around those conversations.

On the WP story, that means one of these things must be true:

1. The NSA presentation is fake and the Washington Post got duped, or

2. Microsoft, Yahoo, Google, Facebook, Apple, etc. are lying, or

3. The presentation is real, and the companies are carefully drafting responses so that they aren’t technically lying.

I believe the third option above is the truth.

The denials are all worded too similarly and too specifically:

Comparing denials from tech companies, a clear pattern emerges: Apple denied ever hearing of the program and notes they “do not provide any government agency with direct access to our servers and any agency requesting customer data must get a court order;” Facebook claimed they “do not provide any government organisation with direct access to Facebook servers;” Google said it “does not have a ‘back door’ for the government to access private user data”; And Yahoo said they “do not provide the government with direct access to our servers, systems, or network.” Most also note that they only release user information as the law compels them to.

How else could these companies be supplying the data? Easy, by simply sending a copy of all data to the NSA. Verizon’s court order, for example, required that they send call data daily.

The companies sending the data have immunity from prosecution and are also prohibited from disclosing that the NSA has requested or received the data.

The truth of what’s going on becomes obvious.

The U.S. government is compelling companies to turn over all personal information of users to the NSA. They have immunity for this, and they are absolutely prohibited from admitting it.

The result is a massive NSA database that includes information about everything we do online, and everything we do offline that has any online ghost (checkins, photos, etc.).

If twenty years from now the government wants to listen to my phone calls from today, they’ll be able to, because they’re all being stored. Or see who I voted for, or who I associate with. A simple AI can parse all this and profile me. And a hostile government, intent on attacking political enemies, can target me (or anyone).

If you missed this story from May read it now. Former FBI counterterrorism agent Tim Clemente says that the U.S. government already has the ability to listen to past phone calls:

CLEMENTE: “No, there is a way. We certainly have ways in national security investigations to find out exactly what was said in that conversation. It’s not necessarily something that the FBI is going to want to present in court, but it may help lead the investigation and/or lead to questioning of her. We certainly can find that out.”

BURNETT: “So they can actually get that? People are saying, look, that is incredible.”

CLEMENTE: “No, welcome to America. All of that stuff is being captured as we speak whether we know it or like it or not.”

That’s why Mathew Ingram is totally correct when he says that we desperately need “a stateless repository for leaks” (such as WikiLeaks) to have any chance of fighting back.

But what I would like to see right now is for people at these internet companies to stand up and say the truth, all of it, about their dealings with the NSA.

It doesn’t matter if it’s the CEOs or lower level employees. It can be anonymous or on the record. Unless that Washington Post presentation is a fraud, then a lot of people in Silicon Valley know what’s going on, or parts of what’s going on. They have a duty to stand up to the government, and their employers, and tell the world the truth.

Because right now it certainly looks to me like we’re living in a totalitarian state. And the amount of control that state has over all of us, through intimidation and fear, will only grow over time.

Now is the time to stand up and talk, and be a hero.

Or not, and be complicit.

For my part, I don’t give a damn that Senator Feinstein and others in our government say that this is “called protecting America.”

It doesn’t protect America; it’s Orwellian, and it kills liberty and freedom on a scale never seen before. It’s not a way to stop terrorism. It IS terrorism.

The courts are allowing this. The government loves this. The only ones left to oppose it are us.


05 Jun 12:51

Forget quantified self wristbands. OMsignal raises $1M to make bio-sensing smart apparel

by Michael Carney


With the wearable computing and quantified self movements more popular than ever, it’s looking more and more likely that we are approaching the day where we will constantly be collecting data about our every action and the environment around us. Wristbands, headbands, belt clips, and eyewear have been the initial form factors for the sensors that collect this data, but these can be intrusive and soon the market will demand more coverage than can be offered by devices of such physically limited footprint.

With this in mind, many innovators are already working on what they believe is the next generation of wearable computing: bio-sensing clothing. One such company, OMsignal, announced $1 million in seed funding today from Real Ventures, Golden Venture Partners, and TechStars CEO David Cohen. The company is working on a full apparel line that continuously monitors physical activity, ECG data, and breathing patterns, transmitting that data to an accompanying smartphone app. In addition to raw bio data, the user will receive interpretations of tension levels, or “emotive” states, and subsequent prescriptions for how to improve.

The idea is that these products will ultimately be worn throughout the day, CEO Stéphane Marceau says, not just during athletic activity or other specific events. In its introductory video, the company demonstrates a wife notified of her husband’s rising stress levels who then sends a reassuring text, a daughter who’s able to get her father to a hospital after being alerted to his dramatically elevated heart rate, and parents lovingly monitoring the heartbeat of both mother and child throughout the day.

OMsignal apparel is not available for purchase yet. Prototypes of its shirt have been completed, and the company has developed proprietary manufacturing techniques to enable fully automated mass production – which is no small feat. But before the product is rolled out to the consumer market, it will be given to third-party developers who demonstrate an interest in building applications for the platform. The concept is heavily inspired by the preliminary rollout of Google Glass, minus the celebrity aspect. The first 100 units will be available in Q4 of this year.

The company is already working with several interested parties in the medical field, Marceau says, as well as other hobbyists, such as a developer who plans to build a lamp that changes intensity according to the user’s heart rate. The possibilities are endless, and will surely be tested by the participating developers.

We are entering a brave new world of ubiquitous data collection, but while the potential benefits are great, the risks are equally frightening. I’ve written previously about the possibility that user data could be used to deny medical coverage and credit, to target advertising, or to discriminate in other unpleasant ways. As the amount of data that we collect and share increases, these possibilities only grow more probable and frightening.

Just yesterday, Google Executive Chairman Eric Schmidt declared the need to build a platform to aggregate and make sense of all this user data. Fortunately Schmidt’s vision called for anonymized data, but as we’ve seen before, even the best intentions around privacy and security can have unintended (and unpleasant) consequences.

OMsignal is far from the only company working in the bio-sensing clothing category. I’ve personally spoken to two others who are currently in stealth mode but which have similarly spent more than a year developing products which will soon hit the market. Marceau and the founders of these other companies each compared their technology to GORE-TEX in that they’d like to see it licensed to major performance apparel brands for integration into their products. With that said, it’s likely that we’ll see smart clothing on retail shelves in the next 12 months.

When smart apparel does finally hit the market it will be priced like other wearable computing products – think on the order of $100 per unit, plus or minus a few dollars.  But because bio-sensing fabrics are lightweight, unobtrusive, and machine washable, they are far more flexible than traditional electronics in their applications, making them an attractive alternative at similar pricepoints. As the technology grows more ubiquitous and the market expands, it’s likely that prices will fall significantly, making way for mass-market adoption.

The big problem with quantified self products today is that nearly all lack the prescriptive aspect, or in other words, they deliver great data but don’t tell you what to do about it. For example, it’s nice to know that you’re not sleeping well and that you don’t get enough exercise. But, without first understanding the consequences of these two shortcomings and then receiving practical advice for improving them, the data is of little value. OMsignal is aware of this fact and is working to deliver new and innovative solutions. But Marceau admits that, like others in the space, his company hasn’t yet nailed it.

It’s possible that one day soon heart attacks, low blood sugar, rising insulin levels, and even pregnancy will be detected first by our apparel rather than by a blood test or another deliberate medical procedure. This is in addition to the more superficial benefits of athletic performance tracking enabled by innovation in this category. And while there are ethical minefields that remain to be crossed, the real benefits of such information are too exciting not to explore.

We are barely scratching the surface of wearable computing. Today, it is our bodily systems (nervous, respiratory, endocrine, etc.) that constantly monitor and measure every aspect of our physical performance. Soon, we may all have that same level of awareness sitting in the palm of our hands. And with it, hopefully we can create a healthier, fitter, happier, and more productive society.

Michael Carney

Michael Carney has spent his career exploring the world of early stage technology as an investor and entrepreneur and has participated in building companies in multiple countries within North and South America and Asia. Ultimately, he is an enthusiast of all things shiny and electronic and is inspired by those who build businesses and regularly tackle difficult problems. You can follow Michael on Twitter @mcarney.

    


05 Jun 10:30

The Problems with CALEA-II

by schneier

The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it's really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won't do much to hinder actual criminals and terrorists.

As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communication happens over the Internet.

What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google's servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there's no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI's proposal. Companies that refuse to comply would be fined $25,000 a day.

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else's eavesdropping. That's just not possible. It's impossible to build a communications system that allows the FBI surreptitious access but doesn't allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.

This is an old debate, and one we've been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn't want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan "Don't buy the U.S. stuff -- it's lousy."

In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?

In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.

In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google's system for providing surveillance data for the FBI.

The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems -- not very difficult -- or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don't understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure -- and less secure -- product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won't buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

The FBI's short-sighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.

What to do, then? The FBI claims that the Internet is "going dark," and that it's simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there's more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI's surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype's security.) That's just part of the evolution of technology, and one that on balance is a positive thing.

Think of it this way: We don't hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn't good enough.

Finally there's a general principle at work that's worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.

This essay originally appeared in Foreign Policy.

03 Jun 18:03

Open Source Architecture

Thanks to reader JP for this link about a plan for open-source home architecture.

If the plan works, someday a consumer will be able to download buildable home construction plans and order custom-cut materials to make home-building a breeze, and much cheaper. The biggest obstacle will be the complexity of local building standards. But I can imagine local governments embracing the system if it allows them to input their unique building requirements into it as a filter. It might be a big money-saver for the town because every proposed building plan that comes through the system will already meet codes.

The thing that is missing from the downloadable architecture system, as far as I know, is the design element. I don't want to build a house that is nothing but a bunch of boring rectangles arranged over a foundation. I want a house that is designed with function in mind first. And for that I'd like to see some sort of "best design" subsection for the downloadable home plans.

I often think about how hard it was to design our current kitchen layout. Kitchens are a challenge because ideally you want everything to be next to everything else, which is physically impossible. Our biggest error was putting the flatware drawer in a place that guaranteed someone would always be standing in front of it when someone else needed a fork. It seems like a small thing, but design is about getting all of the small stuff right. I'd love to upload the design of our kitchen, after moving that one drawer, and see how it stacks up to other kitchen designs.

Current home design is all about appearance over function because consumers buy homes that look great while having a hard time imagining all the little functional flaws such as a lack of storage space, how sound travels, and that sort of thing. The big homebuilders and architects design for the camera, not for the consumer. When homes are designed to meet the best standards of function as voted by actual homeowners, the value of the typical home will skyrocket at the same time the cost of construction drops.

I predict that in twenty years nearly every house that exists today will be seen as a "tear down" because new construction will be cheap and new home designs will be extraordinary.

Imagine picking your house design over the Internet with the intention of doing much of the work yourself, perhaps with your own crew of helpers. You pick the design, pick the start date, and click BUY. From that point on, the system starts delivering materials according to a fixed schedule that the buyer can modify on the fly. You show up at the construction site in the morning and several Google self-driving delivery trucks are already waiting. Your construction-bot unloads the trucks and stages the building material where you want it.

You walk up to the pile of materials and use your smartphone to read the bar code and call up detailed step-by-step directions for what you need to do. You'll have exactly the tools you need because the system warned you a week ahead to be ready for this phase. If you don't own a nail gun, for example, that is added to your delivery at the same time as the materials.

I can also imagine that in this world of pre-cut materials we would see more of a snap-together building system that is easy for non-handy people to manage.

There is an enormous home construction/retrofit phase ahead of us, perhaps ten years out.

31 May 13:29

Will mammoths be brought back to life?


Nick Thompson:

Grigoriev told The Siberian Times newspaper it was the first time mammoth blood had been discovered and called it “the best preserved mammoth in the history of paleontology.”

“We suppose that the mammoth fell into water or got bogged down in a swamp, could not free herself and died. Due to this fact the lower part of the body, including the lower jaw, and tongue tissue, was preserved very well,” he said.

Grigoriev called the liquid blood “priceless material” for the university’s joint project with South Korean scientists who are hoping to clone a woolly mammoth, which has been extinct for thousands of years.

Hold on to your butts…

[via @zamosta]

29 May 14:29

Nassim Nicholas Taleb on Risk Perception

by schneier

From his Facebook page:

An illustration of how the news are largely created, bloated and magnified by journalists. I have been in Lebanon for the past 24h, and there were shells falling on a suburb of Beirut. Yet the news did not pass the local *social filter* and did [not] reach me from social sources.... The shelling is the kind of thing that is only discussed in the media because journalists can use it self-servingly to weave a web-worthy attention-grabbing narrative.

It is only through people away from the place discovering it through Google News or something even more stupid, the NYT, that I got the information; these people seemed impelled to inquire about my safety.

What kills people in Lebanon: cigarettes, sugar, coca cola and other chemical monstrosities, iatrogenics, hypochondria, overtreament (Lipitor etc.), refined wheat pita bread, fast cars, lack of exercise, angry husbands (or wives), etc., things that are not interesting enough to make it to Google News.

A Roman citizen 2000 years ago was more calibrated in his risk assessment than an internet user today....

29 May 14:26

Understanding the API options for securely delegating access to your AWS account

by jscharf

Thinking about building a secure delegation solution to grant temporary access to your AWS account?  This week’s guest blogger Kai Zhao, Product Manager on our AWS Identity and Access Management (IAM) team, will discuss some considerations when deciding on an approach:

________________________________________________________________________________

Introduction

Using temporary security credentials (“sessions”) enables you to securely delegate access to your AWS environment to one or more users or applications, without having to share your long-term credentials (i.e. password or secret access key).  Use cases include cross-account access (enabling users from one AWS account to access resources in another) and single sign-on to AWS (enabling users authenticated within your enterprise to access AWS without re-authentication).

Many customers have asked for guidance on how to build delegation solutions that grant temporary access to their AWS environment.  This blog post will cover two AWS APIs that you can use for this purpose (sts:GetFederationToken and sts:AssumeRole), how to call each API, and the benefits of using one versus the other.  

Please be aware that this blog post will dive deep into some technical details.  It’s helpful to first have a basic understanding of IAM and how to make programmatic AWS API calls with IAM users.  You may want to brush up first by reviewing the Using IAM and Using Temporary Security Credentials documentation.

What’s in a session?

AWS supports access delegation through the AWS Security Token Service (STS), which enables you to request temporary security credentials that can be used like long-lived access credentials, but have limited duration and permissions. 

If you’ve ever signed an AWS API request using your access key and secret key, then using temporary security credentials should be familiar.  They consist of an access key ID, secret access key, a session token, and an expiration time.  Use the keys to sign API requests and pass in the token as an additional parameter (which AWS uses to verify that the temporary access keys are valid).
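As a concrete illustration, here is a minimal sketch using boto3, the AWS SDK for Python, of handing a set of temporary credentials to a service client; the key values are placeholders:

    import boto3

    creds = {  # as returned in the "Credentials" element of an STS response
        "AccessKeyId": "ASIA...",        # placeholder values
        "SecretAccessKey": "wJalr...",
        "SessionToken": "AQoDYXdz...",
    }

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],  # extra parameter vs. long-term keys
    )
    print(s3.list_buckets())  # succeeds or fails per the session's permissions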

Understanding how permissions are derived

Before we get into the details of how to call each API, let’s review a key difference between GetFederationToken and AssumeRole.  Both return a set of temporary security credentials, but they differ in how the permissions associated with those credentials are derived.

Calling GetFederationToken requires an IAM user or root.  The resulting permissions inherit the permissions of the caller, scoped down by the permissions attached in the request (if you don’t attach permissions, the resulting permissions will be deny *).  The actual permissions associated with the session are the intersection of the caller’s base permissions and the permissions attached as an API parameter.  If you use an IAM user to call GetFederationToken, you’ll want this user to have the minimum permissions necessary to cover every federated user’s use case, since the actual delegated permissions are always a subset of the caller’s privileges.

Unlike GetFederationToken, AssumeRole sessions derive their permissions from the role policies that you’ve pre-defined, scoped down by the optional permissions attached in the request.  The actual permissions associated with the session are the intersection of the role’s permissions and the permissions attached as an API parameter.  Only an IAM user or another role with permissions to call AssumeRole can assume a role.  By default, you can define up to 250 roles (use the IAM Limit Increase form to request more), and your users/applications can assume any role you allow them to assume.  As a result, you can create role permission profiles tailored for individual use cases and don’t have to pass in a policy with your AssumeRole requests (though you can if you wish). 

It’s important to understand that with either GetFederationToken or AssumeRole, the permissions are evaluated each time an AWS API call is made.  That means that even though you cannot revoke a session, you can always modify the permissions associated with it, even after the session has been issued.  Simply modify the permissions on the IAM user (for GetFederationToken) or IAM role (for AssumeRole), and the permissions on the session will automatically be affected as well.

How do you use these APIs?

GetFederationToken and AssumeRole also differ in their parameters.  Here’s a rundown of the possible parameters for each, followed by a sketch that calls both APIs.

  • GetFederationToken
    • DurationSeconds: The duration, in seconds, that the session should last (15 min – 36 hours).
    • Name: The name of the federated user associated with the credentials.
    • Policy: A policy specifying the permissions to associate with the credentials.  This is used to scope down the permissions derived from the IAM user making the GetFederationToken request. Note, if you don’t specify any policy here, the resulting temporary credentials will not have any permissions to access AWS resources.
  • AssumeRole
    • DurationSeconds: The duration, in seconds, that the session should last (15 min – 1 hour).
    • RoleSessionName: An identifier for the assumed role session. Analogous to the “Name” parameter used in sts:GetFederationToken.
    • Policy: A policy specifying the permissions to associate with the credentials.  This is used to scope down the permissions derived from the role policy.  Note that if a policy is not specified in the request, the resulting permissions will just inherit the policy associated with the role.
    • ExternalID: A unique identifier used if the role is to be assumed by a third party on behalf of their customers.
    • RoleARN: The Amazon Resource Name (ARN) of the role being assumed.
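Putting those parameters together, a minimal boto3 sketch (the role ARN, names, and scope-down policy are made up for illustration):

    import json
    import boto3

    sts = boto3.client("sts")  # signed with an IAM user's long-term credentials

    # The session gets the INTERSECTION of the base permissions and this policy.
    scope_down = json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }],
    })

    federated = sts.get_federation_token(
        Name="app-user-42",    # identifies the federated user in logs
        Policy=scope_down,     # omit this and the session can access nothing
        DurationSeconds=3600,
    )

    assumed = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ExampleDelegation",
        RoleSessionName="app-user-42",
        Policy=scope_down,     # optional further scope-down of the role's policy
        DurationSeconds=3600,
    )

    print(federated["Credentials"]["Expiration"])
    print(assumed["Credentials"]["Expiration"])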

How do I choose which API?

When deciding which API to use, you should consider what services are required for your use case and where you want to maintain the policies associated with your federated users.

How do you want to maintain the policies associated with your delegated users?

If you prefer to maintain permissions solely within your organization, GetFederationToken is the better choice.  Since the base permissions will be derived from the IAM user making the request and you need to cover your entire delegated user base, this IAM user will require the combination of all permissions of all federated users. 

If you prefer to maintain permissions within AWS, choose AssumeRole, since the base permissions for the temporary credentials will be derived from the policy on a role. Like a GetFederationToken request, you can optionally scope down permissions by attaching a policy to the request.  Using this method, the IAM user credentials used by your proxy server require only the ability to call sts:AssumeRole, as sketched below.
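Under that model, the policy attached to the proxy's IAM user can be as small as this sketch (the role ARN is made up):

    # The proxy's IAM user needs only permission to call sts:AssumeRole on
    # the delegation role; all other permissions live on the role itself.
    proxy_user_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::123456789012:role/ExampleDelegation",
        }],
    }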

What AWS services do you want to use?

Refer to the STS documentation for an up-to-date list of services and their support for different session types.  Note that temporary security credentials received from GetFederationToken cannot be used to call IAM or STS.

Getting Started

Here are some useful resources to get you started with AWS delegation technologies:

  • Familiarize yourself with the STS developer guide
  • Try out our pre-packaged federation application, which enables single sign-on to AWS from Active Directory
  • Watch our re:Invent 2012 presentation on the topic of delegating access to your AWS environment

 

Jim

 

28 May 19:26

The Downsides of Being Smart

by Stephen J. Dubner
prepend

I wish I was smart.

A podcast listener named Amy Young writes in with interesting comments about our recent “Can You Be Too Smart For Your Own Good?” episode:

As I hold a Ph.D., I too feel well qualified to speak on topics I know nothing about.  Actually, since the Ph.D. is in psychology, I am somewhat qualified to speak about the topic; however, most of my info comes from having a very bright son and having to do a lot of research to try to figure out how to raise him.

One downfall of being particularly bright is that you are often lonely.  You see and think of stuff that most other people don’t see or understand, so it can be hard to feel a genuine connection with most others.  What is really exciting to you goes right over the heads of most others.  As you get older this gets to be easier to solve by finding your flock, but I think loneliness in the formative years always sticks to you.  

Another downfall is that exceptionally bright people have a high drop-out rate from school, particularly high school. It seems counterintuitive until you spend a day in our public school system.  Bright kids see school as not providing any useful information and find it creates a lot of boring busy work.  On that note, a really great topic for you to explore is the economic impact of the teacher’s union’s stronghold on the American public education system. 

Also, in terms of gender and smarts, a downfall of being bright is social exclusion, which can be devastating for most girls.  As for the low marriage rates among bright women, I think most bright women avoided marriage in the past as it often meant staying home to perfect souffles and iron underwear.  I would imagine that to be torturous for bright women and could possibly be the inspiration for the Rolling Stones’ song “Mother’s Little Helper.”

28 May 14:46

HISP to HISP communications

by John Halamka

As Massachusetts works through the details of building a trust fabric for health information exchange, we have been working through another set of challenges in HISP to HISP communication.

Meaningful use Stage 2 requires EHRs to support the Direct Project implementation guide, which uses SMTP/SMIME as a transport protocol.   Optionally, Stage 2 also supports XDR/SOAP.

In the world of standards, "OR" always means "AND" for implementers.   Massachusetts needs to support HISPs that allow XDR as well as those which only allow SMTP/SMIME.   This gets confusing when there is a mismatch between the sender's protocol and the receiver's protocol, requiring a HISP to convert XDR to SMTP/SMIME or SMTP/SMIME to XDR.

There are 4 basic scenarios to think through:
1. An SMTP/SMIME sender to an SMTP/SMIME receiver
2. An SMTP/SMIME sender to an XDR receiver
3. An XDR sender to an SMTP/SMIME receiver
4. An XDR sender to an XDR receiver

Scenarios 1 and 4 could be done without a HISP at all if the EHR fully implements the Direct Standard including certificate discovery.
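To make scenario 1 concrete, here is a minimal Python sketch of the SMTP leg; the addresses and host are hypothetical, and the S/MIME signing and encryption step (the hard part of a real Direct implementation) is deliberately elided:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "clinician@direct.hospital-a.example"
    msg["To"] = "clinician@direct.hospital-b.example"
    msg["Subject"] = "Direct message"
    # In a real Direct exchange the body would be the S/MIME signed and
    # encrypted payload, using certificates found via DNS/LDAP discovery.
    msg.set_content("(S/MIME-encrypted clinical document would go here)")

    with smtplib.SMTP("smtp.hisp-a.example", 25) as smtp:
        smtp.send_message(msg)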

Cases 2 and 3 require thoughtful security planning to support end to end encryption between two HISPs.

These slides provide the detail of what must be done for Cases 2 and 3.  

The challenge of supporting XDR is that the HISP must act as the agent of senders and receivers, holding their private key for use in the conversion from/to SMTP/SMIME.

As Massachusetts continues to enhance its state HIE capabilities and connect many other HISPs (eClinicalWorks, Cerner, Surescripts, AthenaHealth etc.) to state government users and those using the Massachusetts HISP as part of their EHR (Partners, BIDMC, Atrius, Tufts Medical Center, Meditech users, NextGen users etc.), we now know what must be done to provide end to end encryption among different HISPs and users connected via 2 protocol choices.

We're learning once again that optionality in standards seems like a good idea, but ultimately adds expense and complexity.
Everyone on the HIT Standards Committee knows my bias - offer no optionality and replace the existing SMTP/SMIME and XDR approaches with RESTful APIs such as the Mitre hdata initiative.

Maybe for Stage 3!

22 May 01:23

AWS Achieves First FedRAMP(SM) Agency ATOs

by cwoolf

I’m very excited to share that AWS is now a FedRAMP-compliant cloud service provider. See the Amazon press release here. This is game-changing news for our U.S. government customers and systems integrators and other companies that provide products and services to the U.S. government because:

  1. It provides agencies a standardized approach to security assessment, authorization, and continuous monitoring for AWS products and services. Prior to the FedRAMP process, government security assessments of cloud providers were not standardized; each varied greatly in scope and depth and were an inefficient use of time and resources. Through FedRAMP, agencies now have a mechanism to obtain comprehensive AWS security assessment documentation and to perform an evaluation of our environment. Agencies can immediately request access to the AWS FedRAMP package by submitting a FedRAMP Package Access Request Form and begin moving through the process to evaluate our platform and authorize AWS for sensitive government workloads.
  2. It demonstrates the AWS environment meets the high bar of the FedRAMP security and control requirements. This means U.S. government customers can immediately start leveraging the Authority to Operate (ATO) provided by the Department of Health and Human Services (HHS) to use the AWS cloud. Kevin Charest, HHS Chief Information Security Officer, shared that by using AWS, all of the HHS Operating Divisions can now “reduce duplicative efforts, inconsistencies, and cost inefficiencies associated with current security authorization processes.”
  3. It provides agencies with the immediate ability to comply with the Office of Management and Budget’s (OMB) mandate to “use FedRAMP when conducting risk assessments, security authorizations, and granting ATOs for all Executive department or agency use of cloud services” (FedRAMP Policy Memo, OMB). 

This announcement is timely for federal CIOs. Budget, cybersecurity and workforce challenges rose to the top in TechAmerica's 23rd annual Survey of Federal CIOs published earlier this month (http://www.techamerica.org/cio-survey/). The survey found that 94% of CIO respondents have already or are planning to adopt public or private cloud services. Numerous U.S. government agencies are using the wide range of AWS services today.  And now AWS FedRAMP compliance eases the regulatory compliance burden for all government customers, as the Department of Health and Human Services has already demonstrated by being the first federal agency to publicly authorize the use of AWS under FedRAMP.

Want to learn more about AWS and FedRAMP? Check out answers to frequently asked questions on the AWS FedRAMP FAQ site.

 

Chad Woolf

Director, AWS Risk and Compliance

 

09 Apr 17:17

Commencement at Columbia University

by John Halamka
I recently spoke at Columbia University to the graduates of their healthcare IT certificate program.    I used these slides.

I started with an overview of my top 10 buzzwords for 2013 describing a strategy for each of them:

Secure and Compliant - BIDMC has a multi-million dollar security enhancement program with 14 work streams

Hosted in the Cloud - BIDMC operates 3 private clouds to deliver web-based clinical applications to thousands of users

Service Oriented Architecture - BIDMC builds its own clinical systems using service oriented architectures and enterprise service bus approaches

Business Intelligence - 200 million observations on 2 million patients are searchable via innovative self service data mining tools

Social Networking - BIDMC uses social networking ideas in its clinical applications and uses commercial social networking sites extensively for marketing

Green - BIDMC's data centers support a 25%  annual increase in computing power and storage without increasing our power usage

Federated and Distributed - BIDMC's merger and acquisition strategy depends upon data sharing among peers rather than centralizing clinical applications into one common system

Patient Centered - BIDMC's patient and family engagement strategy includes Patientsite, OpenNotes, and new mobile friendly tools

Mobile BYOD - BIDMC applications run on iPhones, Android devices and iPads (with appropriate encryption)

Foundational for Healthcare Reform - BIDMC has embraced global capitated risk, with over 65% of patients in risk contracts in 2013.

I also described the leadership characteristics needed to implement all these concepts in large complex organizations:

Guidance - A consistent vision that everyone can understand and support.

Priority Setting - A sense of urgency that sets clear mandates for what to do and, importantly, what not to do.

Sponsorship - "Air Cover" when a project runs into difficulty.   Communication with the Board, Senior Leadership, and the general organization as needed.

Resources - A commitment to provide staff, operating budget, and capital to ensure project success.

Dispute resolution - Mediation when stakeholders cannot agree how or when to do a project.

Decision making - Active listening and participation when tough decisions need to be made.

Compassion - Empathy for the people involved in change management challenges.

Support - Trust for the managers overseeing work and respect for the plans they produce that balance stress creation and relief.

Responsiveness - Availability via email, phone,  or in person when issues need to be escalated.

Equanimity - Emotional evenness that is highly predictable no matter what happens day to day.

The students were a great, energetic and inquisitive group.   I want to thank Columbia for the opportunity to speak with them.

16 Mar 22:13

Shake Shack coming to Atlanta?