Shared posts

10 May 04:53

Digital Photo Editing Workflow

by Sean McHugh
This article summarizes the most important steps to incorporate into your digital photo editing workflow (aka "post-processing workflow"). It isn't necessarily a procedure that you'll want to implement with all of your photos, but whenever you want to get that "keeper" looking just right, these steps can make all the difference.
20 Oct 03:26

Mac App Store: The Subtle Exodus

The Mac App Store was released in January 2011 and marked the beginning of a great new distribution channel. Even though it lacked some bells and whistles, the developer community was hopeful the problems would be addressed in due course. Unfortunately, it has been years and there's no sign that the core issues will ever be addressed. When notable developers are abandoning your platform, cannot do the right thing for their customers and are delaying their MAS submissions, something is very, very broken. I believe that this inaction is harmful to the whole Mac community, consumers and developers alike.

Let me make it absolutely clear why I'm writing this. First and foremost, it's because I deeply care about the Mac platform and its future; it pains me to see developers abandoning it. The Mac App Store can be so much better: it can sustain businesses and foster an ecosystem that values and rewards innovation and high-quality software. But if you talk to developers behind the scenes or explore the Mac App Store, you'll find something completely different.

Before we look at what the Mac App Store can do better, let's take a moment and give credit where it's due. The Mac App Store is simply the most convenient way to purchase and download software, bar none. Unfortunately, that's where the good things end.

There is plenty of evidence that one of the effects of the Mac App Store is to push developers into unsustainable pricing – some might say it is designed that way. We'll look into that in a bit, but let's get one thing straight. I see comments along the lines of "The MAS might be worse for developers but that means it's better for consumers". This presents a false dichotomy in which developers and consumers are somehow at odds, and it couldn't be further from the truth.

The relationship between consumers and developers is symbiotic; one cannot exist without the other. If the Mac App Store is a hostile environment for developers, we are going to end up in a situation where software either won't be supported anymore or, even worse, won't be made at all. And the result is the same the other way around – if there are no consumers, businesses will go bankrupt and no software will be made. The Mac App Store can work in ways that benefit developers and consumers alike; it doesn't have to be one or the other.

If the MAS is harmful to either developers or consumers, in the long term, it will be inevitably harmful to both.

Trials

Let's talk about trials. The idea behind them is very straightforward and has been around for decades. Simply put, when it comes down to evaluating a product, you just have to test it. Looking at screenshots, videos and reviews gives you a good starting point, but it does not tell the full story. The consumer is the best judge of whether an app fulfills their needs.

We should not be requiring upfront payment from consumers to evaluate software – it makes no sense. Let them try everything they want and, naturally, the best-value software will win – usually the highest-quality software with a reasonable price tag (there will be multiple winners at each price tier). But high-quality software is expensive, and very few people can afford to throw $30, $50 or $100 at an app on the off chance it might turn out to be adequate for their needs.

As a consequence, developers producing high-quality apps reduce their prices significantly, to levels at which consumers can afford to take a gamble. Those prices are usually too low to sustain continued development at the same standard of quality. We end up in a situation where the market could easily support higher-priced quality software, but that's ruled out by the arbitrary restrictions of the Mac App Store. We have regressed from "let the best-value app win" to "let the cheapest app win", which is counter-productive in the long term. Worse still, the downward price spiral creates unrealistic expectations among consumers that all software should be extremely cheap and anything more than $5 is a rip-off.

Supporting app trials would be trivial and would provide a completely seamless experience. You would be able to download any app, play with it and purchase it with a single click – effectively zero additional complexity. For example, "Try" could be shown beside the price, and once the trial has expired, the button would revert to just showing the price, possibly with a subtle subtitle saying "Trial expired".

Clear app trial mockup
Trying out an app would be a click away and easily accessible.

The current workaround is for developers to put trials on their websites, but the number of people who are even aware of this is probably too small to make a real difference. Alternatively, intentionally crippled versions of apps branded as "Lite" variants get distributed for free on the MAS as a way to emulate trials. But judging an app by a version deliberately made inferior is not a good idea, simply because it is not truly representative – the features that make the difference between purchasing or not are most likely only in the full product.

Another possibility would be to have a free app with an in-app purchase which unlocks all the features. But this approach is effectively the same as Lite, exhibiting the same issues, with the only difference being that the Lite and non-Lite versions are unified into a single app. Expect to see one-star reviews from users who downloaded the free app and never expected that it wouldn't actually be free – even if the developer had the best intentions, the end result would be a bad customer experience.

Many apps are fundamentally incompatible with this model and it's practically impossible to remove enough features while still keeping the free part useful – the end result is crippleware.

Upgrades

Paid upgrades seem to be a really controversial topic, but it's quite simple. Businesses need a continuing revenue stream to operate; that's just basic economics. The reality is that there simply are not enough new customers to sustain most companies, which means that either the software they make will be abandoned at some point or the customer base will need to step in and provide continued funding.

There are multiple ways to provide that continued funding (e.g., subscriptions, ads, IAPs), but if you take a deeper look, you will realise that all of them are inferior to paid upgrades. Do you want to be bombarded with piecemeal in-app purchases every few releases? Do you want to be continually spammed with annoying ads? Do you want to buy "free" apps which trick you into paying more and more?

Paid upgrades provide a very simple proposition – the developer has put a significant amount of work into the next major iteration of the app and existing users can decide whether they're willing to upgrade. And if they choose to do so, they will be rewarded for their repeat custom by getting a discount.

Supporting paid upgrades does not require much additional complexity, if any at all. We know that Apple already keep multiple versions of apps, so that you can install them on devices that do not support the latest OS. If the developer publishes a paid upgrade and if a customer already owns the app, they will see an "Upgrade" button next to "Installed" – that's it. Consumers don't have to upgrade, they can happily keep using the version they purchased as long as they like.

First-class support for paid upgrades means that developers don't have to lose all the customer reviews and ratings, which usually take years to accumulate. It's a big slap in the face to start all over again from step one.

Monodraw app upgrade mockup
When a new major version is published, an "Upgrade" button would appear for people who have already purchased the app.

Sandboxing

App sandboxing is a great security feature that assures users that apps cannot wreak havoc on the whole system and have only the minimum access they require. While sandboxing was not a requirement when the Mac App Store launched, all apps submitted now have to be sandboxed.

Developers found themselves having to make a hard choice – severely degrade the functionality of their software or leave the MAS. Consequently, entire categories of apps cannot be distributed via the MAS because they cannot be sandboxed without crippling their feature set. There is a growing number of abandoned apps which are severely lagging behind their direct-download counterparts. We are talking about some very popular apps, not obscure hacks that require kernel extensions. For example, the MAS version of SourceTree was last updated in November 2012, almost 2 years ago (v1.5.6 vs v2.0.2 at the time of writing).

This should have been unacceptable. Either the appropriate entitlements and solutions should have been in place or sandboxing should not have become a requirement for the MAS. It puts developers in a very tough position of having to either artificially limit their app (which results in bad user experiences blamed on the developers) or abandon the MAS.

And we will never find out about all the apps that could have existed but were never created because they could not be distributed on the MAS (they might have depended on the revenue the MAS would bring and are uneconomical without it).

Customer Reviews

The idea behind customer reviews is a good one – the best judges of your software are the people who use it. Unfortunately, there are several problems in practice. Firstly, reviews are negatively biased, as people are more likely to leave a bad review after an unpleasant experience. This has led to developers begging for reviews in various ways, usually by (distastefully) popping up a dialog or asking in the release notes.

But every story has two sides. The inability of developers to actually respond to customer concerns is extremely damaging to the product and leads to lost sales. This would be perfectly fair if the bad reviews were based on constructive criticism, but the vast majority of the time that's simply not the case – just look at the one-star reviews of apps which have a high average rating.

Final Words

The Mac App Store restricts, in multiple ways, the tools at developers' disposal for sustaining a business and distributing high-quality apps. It should not discriminate against tried-and-true practices that inarguably benefit both consumers and developers – the market should be left to decide that. We cannot know whether paid upgrades, for example, are a better fit when we do not even have the option to use them.

Joe Steel: The decay of the Mac App Store over the last few years is pretty subtle. Developers are not leaving en masse, all at once. One by one, as new updates are being developed, they weigh the pros and cons for them, and their customers, and they pull out.

My ultimate fear is that the complacent state of the Mac App Store will lead to the slow erosion of the Mac indie community. The MAS is the best place to get your software: it comes bundled with your OS and it's very convenient. But when all the issues compound, developers will vote with their feet and continue the slow exodus. I feel that Apple needs to encourage the availability of high-quality software rather than quantity over quality – the first step would be addressing the core issues that have been known for years. The Mac platform will be a much worse place if we prioritise short-term gains, boasting about hundreds of thousands of free abandonware titles, rather than concentrating on the long-term fundamentals needed to sustain a healthy and innovative ecosystem.

You should follow @milend for more.


22 Sep 23:57

Authors, Computers and Hints About What Survives for Born Digital Literary Archives

by Bill LeFurgy
By outcast104, on Flickr

There is an endless fascination about how writers work. What time they start, where they do it, how much they drink–all of it is grist for literary gawkers. When I peep into writerly habits, I wonder about computers. How much do writers revise on the screen? What kind of digital record do they keep of the drafting process? How do digital drafts correlate with paper drafts?

While others have delved into these questions–see  Saving-Over, Over-Saving, and the Future Mess of Writers’ Digital Archives, for example–the issue as a whole remains ill-defined. Even after 30 plus years of cursor-blinking relentlessness, computers still jockey with paper for the affection of authors. At the highest level of abstraction that appears safe to make, we can say that most writers generate an interwoven, hybrid mix of paper piles and computer files. What elements of this mix survive to document the creative process is difficult to say.

A clue can be found in the attitude of writers toward the computer. To this end, the interviews in The Paris Review are helpful. The publication has several long-running interview series, including The Art of Fiction, in which well-known authors share insights into their work. Starting about 1980, interviewers began asking about “word processors,” and a few years later about “computers.” There are wonderful vignettes about how this new technology was welcomed–or not–into the creative process. Not much is said about what records are kept, although one can draw some inferences. It is important to note that these are views frozen in time and may have changed for the individuals involved.

My overall impression from the authors interviewed is that most have an attitude toward computers that ranges from hostility to workmanlike acceptance. At one extreme there is the poet Yves Bonnefoy, who declared “No word processors! I use a little typewriter; at the same time, often on the same page, I write by hand. The old typewriter makes the paper more present, still a part of oneself,” he says. “What appears on the screen of a word processor is a mist; it’s somewhere else.” David Mamet also has no use for computers: “The idea of taking everything and cramming it into this little electronic box designed by some nineteen-year-old in Silicon Valley . . . I can’t imagine it. ” Ted Hughes goes even further. When the interviewer asked him if “word processing is a new discipline,” Hughes replied that “It’s a new discipline that these particular children haven’t learned…. I think I recognize among some modern novels the supersonic hand of the word processor uncurbed.”

Some accept the computer but feel it is a devil’s bargain. “It has ruined my life! Really on a daily basis it ruins my life,” said Ann Beattie. Why not quit it? “I can’t. I’m hooked…. The computer can more or less keep up with my thoughts. No electric typewriter, not even an IBM Selectric could possibly keep up with my thoughts. So now I’m keeping up with my thoughts. But they’re the thoughts of a deranged, deeply unhappy person because of working on the computer for twenty years.”

Ann Beattie

Utilitarian acquiescence is a common refrain. Tahar Ben Jelloun said “I write by hand. The word processor is for articles, because newspapers demand that the text be on a diskette.”  Beryl Bainbridge states “I write directly on the word processor, but I use the machine as a posh typewriter and print out every page the moment it’s finished.” A.S. Byatt has a firmly practical view. “I write anything serious by hand still. This isn’t a trivial question,” she said.  “On the other hand I do my journalism on the computer with the word count. I love the word count.”

Some writers do profess appreciation. “In the mideighties I was a grateful convert to computers,” says Ian McEwan. “Word processing is more intimate, more like thinking itself. In retrospect, the typewriter seems a gross mechanical obstruction. I like the provisional nature of unprinted material held in the computer’s memory—like an unspoken thought. I like the way sentences or passages can be endlessly reworked, and the way this faithful machine remembers all your little jottings and messages to yourself. Until, of course, it sulks and crashes.”

When the interviewer said to John Hersey, “I understand that you are a great fan of the word processor,” it provoked a long reply.

I was introduced to the idea very early. In 1972… a young electrical engineer, Peter Weiner, was developing a program called the Yale Editor on the big university computer, with the hope that there would be terminals in every Yale office. They were curious to see whether it would work for somebody who was doing more or less imaginative work, and they asked me if I’d be interested in trying it. My habit up to that point had been to write first drafts in longhand and then do a great deal of revision on the typewriter. I had just finished the longhand draft of a novel that was eventually called My Petition for More Space, so I thought, well, I’ll try the revision on this machine. And I found it just wonderfully convenient; a huge and very versatile typewriter was what it amounted to…. So when these badly-named machines—processor! God!—came on the market some years later, I was really eager to find one. I think there’s a great deal of nonsense about computers and writers; the machine corrupts the writer, unless you write with a pencil you haven’t chosen the words, and so on. But it has made revision much more inviting to me, because when I revised before on the typewriter, there was a commitment of labor in typing a page; though I might have an urge to change the page, I was reluctant to retype it. But with this machine, there’s no cost of labor in revision at all, so I’ve found that I’ve spent much more time, much more care, in revision since I started using it.

Perhaps the most elegant appreciation for the computer came from Louis Begley. In response to a question about “when he grabbed a pen” and decided to write his first novel, Begley said:

I didn’t grab a pen. I typed Wartime Lies on my laptop. I remember exactly how it happened. In 1989, I decided to take a four-month sabbatical leave from my law firm, and I started the book on August 1, which was the first day of the sabbatical. I did not announce to my wife, Anka, or even to myself that this was what I was going to do. But, just a week before the sabbatical began, in a torrential downpour, I went to a computer store and bought a laptop. Now why would I have bought a laptop if I wasn’t going to use it? So I must have had the book in my mind.

As I noted earlier, the question of what writers actually save rarely comes up in these interviews. But Jonathan Lethem is an exception.

Interviewer:  You work on a computer. Do early drafts get printed out and archived?

Lethem: No, I never print anything out, only endlessly manipulate the words on the screen, carving fiction in ether. I enjoy keeping the book amorphous and fluid until the last possible moment. There’s no paper trail, I destroy the traces of revision by overwriting the same disk every day when I back up my work. In that sense, it occurs to me now, I’m more like the painter I trained to be—my early sketching is buried beneath the finished layer of oil and varnish.

That idea–digital traces of earlier starts, stops, changes and corrections buried under a finished product–is the ideal metaphor for the contemporary computer-aided writer. But where earlier sketching is an integral part of the painting itself, digital drafts live a physically separate, and likely undervalued, life of precarious materiality.

18 Aug 18:16

August 17, 2014

Nellynette Torres Ramirez

currency exchange!


Only 5 days left to submit for BAHFest!
24 May 03:57

Spritz Integration

A couple months ago we came across a new technology called Spritz. It’s a tool that helps you read faster. Given how far behind I typically am in my Old Reader queue, we thought it would be a good thing to try out in the application. We were so happy with the results that we’ve decided to roll out the beta Spritz integration to our users today.

To enable Spritz, you’ll need to go into your settings and click the Spritz checkbox. You’ll then see the Spritz icon in your feeds which you can click on (or use the ‘i’ hotkey). The first time through you’ll need to create a Spritz account, but after that it should be clear sailing and fast reading.

Here’s an article about the new functionality on TNW.

Let us know what you think!

24 May 03:46

Ten Years Ago in ALA: Art Direction and Drop Shadows

by Mike Pick


Writing for web designers is a tricky blend of trying to predict and shape the near-future while keeping your feet firmly grounded in the practical concerns of the here-and-now. Ten years ago this month in Issue 180, A List Apart published Stephen Hay’s Art Direction and the Web, a tidy piece that still resonates today.

For those who have grown weary of the Great Flatness Debates of the present, Stephen’s piece is refreshingly rooted in communication design. The article provides a solid outline of the principles of art direction by discussing the importance of creative themes and rhetorical devices in your work, and follows up with some practical tips on how to incorporate these concepts into your workflow. It’s a good read for today’s designer, as it is mainly focused on thoughtfulness and process, and unencumbered by jaggy screenshots of the pre-anti-aliased web.

On the other hand, Onion Skinned Drop Shadows, written by Brian Williams for Issue 182, is a direct example of a technique that is now utterly obsolete. Like Faux Columns and Sliding Doors, this technique demonstrates an incredible amount of ingenuity that seems ridiculously kludgey today, when drop shadows are so easily created with a single line of CSS that an entire movement has sprung up to argue against them.

Also from May 2004: Print It Your Way by Derek Featherstone, a guide to creating custom user print stylesheets for Firefox, and Separation: The Web Designer’s Dilemma, a rumination from Michael Cohen on the ongoing concern over keeping content separate from layout.

Finally, a bonus flashback: zeldman.com from 10 years ago, with Issue 182 featured in the sidebar!

10 years ago this month, @zeldman linked to my site for the first time. I was such a fan that I took a screenshot. pic.twitter.com/Kz4rWgKZFG

— Mr. Andrew Clarke (@Malarkey) May 13, 2014
24 May 03:05

A Brave New World... Cover

Bantam has changed the cover for our forthcoming concordance, THE WORLD OF ICE & FIRE, several times.

You've seen a couple of the earlier mockups here and on my website.

Now here's the latest and... I am told... final cover.


WOIAF cover

How do you like it?

WORLD OF ICE & FIRE will be out in the fall. It is going to be a gorgeous book, a big coffee table volume with lots and lots of stunning artwork, and tons of fake history. We were supposed to provide 50,000 words of text, but... ah... I got carried away.

24 May 03:04

Network Literacy Mini-Course

by hrheingold
24 May 02:54

Transforming care, breaking down barriers

by Dr. Israel Green-Hopkins

“The practice of medicine is an art based on science” – Sir William Osler

This was famously stated over 100 years ago by Dr. Osler, the father of modern medicine and the physician who laid the foundation for professionalism in healthcare. In this vein, a panel of experts from Boston-area hospitals and elsewhere recently convened to attack the question “What needs to change to get doctors back to the patients?” While the primary theme was identifying the obstacles preventing physicians in the 21st century from reaching the bedside, the undertone was undoubtedly the evolving use of health information technology, digital health, and the need to clarify how health systems function.

If you attended, then I hope you gleaned some valuable insights into how health IT can transform our practices and the overall quality of healthcare while simultaneously relieving physicians of the burden of navigating the evolving health IT environment. In this two-part series, I’ll explore some of the themes presented and look at the challenges to be overcome as well as some proposed solutions.

We now have EHRs, but have our care processes – with respect to documentation – actually changed relative to the paper era?
Dr. Steven Stack, past president of the American Medical Association (AMA), commented on the migration to electronic health records (EHRs) and its effect on documentation. From templates and macros to the ability to carry on epic dialogues within charts, a substantial number of clinicians, including those in training as I have witnessed, treat documentation in the EHR as a clean, white notepad upon which anything and everything can be written. Has this produced fewer meaningful words and impaired our ability to effectively care for patients? Although there’s no evidence to support this, the anecdotes from the panelists certainly speak to a palpable reality. Furthermore, the complexity of documentation in some EHRs exponentially increases the time clinicians must spend at the computer, easily impeding their ability to spend time at their patients’ bedside.

So, what is the next evolutionary step? Dr. John Halamka, CIO of Beth Israel Deaconess Medical Center and co-chair of the National HIT Standards Committee, coined a term I strongly favor: clinical relevance. As our system evolves toward integrated care management, clinical documentation may transform into a more purposefully designed tool that constructively contributes to the overall care of the patient. Clinical relevance – whether it be about the actions to be taken for the diabetic patient with rising blood glucose or the recently discharged heart failure patient whose home-monitoring alerts are approaching the red zone for weight gain – may prevail in the next phase of documentation design. This would counter the current methods of documentation, which are organized around the reimbursement system and have an unclear impact on the quality of care.

Specific solutions might include collaborative note structures wherein multiple providers contribute to a note that intelligently structures quality metrics and disease templates. In turn, clinicians may be able to untie themselves from some of the elements of documentation and data filtering, allowing them to return to the bedside. Certainly, the ICD-10 challenges that providers will soon face, regardless of any delay, may further shift the balance on clinically relevant documentation by putting one more hurdle between physicians and relevant care. Yet such a transformation in clinical documentation would mean overhauling how we approach care entirely—is healthcare ready for this? Maybe what matters most is improving our ability to understand the data we currently feed into our EHRs.

The data is there, but how do we transform it into wisdom?
Healthcare data is growing at a remarkably rapid pace, and multiple panel members supported this with anecdotes from their own institutions. One of our greatest challenges now is making sense of that data and using it in meaningful, real-time ways. As Dr. Halamka put it, “we are already overwhelmed with data, what we need is information, knowledge and wisdom.” The pronounced physician dissatisfaction with the EHR, documented in a study from the AMA/RAND Corporation, should weigh heavily when approaching the design and impact of health IT solutions meant to bring us this wisdom. The appropriate visualization of data is crucial to effective patient care in every setting. Communication is based on data, and when that information is challenging to find or requires active searching, it leads to errors and has an adverse effect on care. Tools that improve our awareness and analysis of the data available within the EHR at the point of care will undoubtedly improve the efficiency of the system. Though not elaborated on in the panel, areas such as predictive analytics and real-time data visualization need to be explored further to provide use cases and solutions.

Is it possible that, once regulatory pressures ease in late 2014, our field may finally have the breathing room to devote time and energy to innovative health IT solutions and, in turn, help us see the petabytes of data in a meaningful form? It may be, and examples such as the adoption of Google Glass in the Beth Israel Emergency Department demonstrate the resolve of health IT leaders to push forward with innovative agendas. With practicing clinicians pushing the envelope in applying technology-driven solutions to the most pressing problems, such as those discussed by the panelists, we should start to see words put into action. Industry and academic leaders who creatively brainstorm interventions will serve as the futurists who transform our care.

 


24 May 02:50

DEVONthink Pro Pinboard Bookmarks Importer v0.2

A couple of days ago Andreas Zeitler joined Pinboard after I gave him access to my account so he could try out some of the features. Like me, he is a user and fan of DEVONthink Pro Office. Coincidentally, he uses my script for importing Pinboard bookmarks into DTPO.

Andreas discovered a couple of bugs, fixed them and submitted an updated version of the script. He added some user feedback when the script finishes and changed the keychain scripting to match more cases. He also fixed the title of the password window. Thanks!

Update: There was a minor issue with v0.2, so I just uploaded v0.3 and updated the link. Thanks again, Andreas.

Second Update: There are some encoding issues when you have special characters in your password. If the script exits with an error number, try to change your password so that it only contains alphanumeric characters. For more information read the whole thread in the DEVONtechnologies user forum.

Download DEVONthink Pro Pinboard Importer Script 0.3.

The installation instructions are still in the first post.

24 May 02:47

Kitchen of the Week: Warming Trend in a 1920s Georgian (11 photos)

The previous kitchen in this 1920s Georgian house was cold. Not just chilly because of the expanse of tile on the countertops and floors, but downright drafty thanks to a poorly insulated addition on the back. While the layout worked adequately enough for this Atlanta...

27 Jul 15:22

Improving the security of your SSH private key files

by Martin Kleppmann


Ever wondered how those key files in ~/.ssh actually work? How secure are they, really?

As you probably do too, I use ssh many times every single day — every git fetch and git push, every deploy, every login to a server. And recently I realised that to me, ssh was just some crypto voodoo that I had become accustomed to using, but I didn’t really understand. That’s a shame — I like to know how stuff works. So I went on a little journey of discovery, and here are some of the things I found.

When you start reading about “crypto stuff”, you very quickly get buried in an avalanche of acronyms. I will briefly mention the acronyms as we go along; they don’t help you understand the concepts, but they are useful in case you want to Google for further details.

Quick recap: If you’ve ever used public key authentication, you probably have a file ~/.ssh/id_rsa or ~/.ssh/id_dsa in your home directory. This is your RSA/DSA private key, and ~/.ssh/id_rsa.pub or ~/.ssh/id_dsa.pub is its public key counterpart. Any machine you want to log in to needs to have your public key in ~/.ssh/authorized_keys on that machine. When you try to log in, your SSH client uses a digital signature to prove that you have the private key; the server checks that the signature is valid, and that the public key is authorized for your username; if all is well, you are granted access.
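
If you have never set this up, the whole arrangement above can be reproduced with a couple of commands. This is only a minimal sketch: the hostname is a placeholder, ssh-copy-id may not be installed by default on every system, and you can always append the .pub file to the server’s ~/.ssh/authorized_keys by hand instead.

$ ssh-keygen -t rsa -b 2048        # writes ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
$ ssh-copy-id user@server.example  # appends your public key to ~/.ssh/authorized_keys on the server
$ ssh user@server.example          # subsequent logins authenticate with the key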

So what is actually inside this private key file?

The unencrypted private key format

Everyone recommends that you protect your private key with a passphrase (otherwise anybody who steals the file from you can log into everything you have access to). If you leave the passphrase blank, the key is not encrypted. Let’s look at this unencrypted format first, and consider passphrase protection later.

An SSH private key file typically looks something like this:

-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEArCQG213utzqE5YVjTVF5exGRCkE9OuM7LCp/FOuPdoHrFUXk
y2MQcwf29J3A4i8zxpES9RdSEU6iIEsow98wIi0x1/Lnfx6jG5Y0/iQsG1NRlNCC
aydGvGaC+PwwWiwYRc7PtBgV4KOAVXMZdMB5nFRaekQ1ksdH/360KCGgljPtzTNl
09e97QBwHFIZ3ea5Eih/HireTrRSnvF+ywmwuxX4ubDr0ZeSceuF2S5WLXH2+TV0
   ... etc ... lots of base64 blah blah ...
-----END RSA PRIVATE KEY-----

The private key is an ASN.1 data structure, serialized to a byte string using DER, and then Base64-encoded. ASN.1 is roughly comparable to JSON (it supports various data types such as integers, booleans, strings and lists/sequences that can be nested in a tree structure). It’s very widely used for cryptographic purposes, but it has somehow fallen out of fashion with the web generation (I don’t know why, it seems like a pretty decent format).

To look inside, let’s generate a fake RSA key without a passphrase using ssh-keygen, and then decode it using asn1parse:

$ ssh-keygen -t rsa -N '' -f test_rsa_key
$ openssl asn1parse -in test_rsa_key
    0:d=0  hl=4 l=1189 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :00
    7:d=1  hl=4 l= 257 prim: INTEGER           :C36EB2429D429C7768AD9D879F98C...
  268:d=1  hl=2 l=   3 prim: INTEGER           :010001
  273:d=1  hl=4 l= 257 prim: INTEGER           :A27759F60AEA1F4D1D56878901E27...
  534:d=1  hl=3 l= 129 prim: INTEGER           :F9D23EF31A387694F03AD0D050265...
  666:d=1  hl=3 l= 129 prim: INTEGER           :C84415C26A468934F1037F99B6D14...
  798:d=1  hl=3 l= 129 prim: INTEGER           :D0ACED4635B5CA5FB896F88BB9177...
  930:d=1  hl=3 l= 128 prim: INTEGER           :511810DF9AFD590E11126397310A6...
 1061:d=1  hl=3 l= 129 prim: INTEGER           :E3A296AE14E7CAF32F7E493FDF474...

Alternatively, you can paste the Base64 string into Lapo Luchini’s excellent JavaScript ASN.1 decoder. You can see that the ASN.1 structure is quite simple: a sequence of nine integers. Their meaning is defined in RFC 2313. The first integer is a version number (0), and the third number is quite small (65537) – the public exponent e. The two important numbers are the 2048-bit integers that appear second and fourth in the sequence: the RSA modulus n, and the private exponent d. These numbers are used directly in the RSA algorithm. The remaining five numbers can be derived from n and d, and are only cached in the key file to speed up certain operations.
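
If you prefer labels to raw ASN.1 offsets, openssl can pretty-print the same key with each component named. The output below is abridged and annotated by hand (the # comments are not part of openssl’s output); it is just a quick way to match the nine integers above to their roles:

$ openssl rsa -in test_rsa_key -noout -text
Private-Key: (2048 bit)
modulus: ...                        # n (the large integer printed second by asn1parse)
publicExponent: 65537 (0x10001)     # e
privateExponent: ...                # d
prime1: ...                         # p
prime2: ...                         # q
exponent1: ...                      # d mod (p-1)
exponent2: ...                      # d mod (q-1)
coefficient: ...                    # (inverse of q) mod p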

DSA keys are similar, a sequence of six integers:

$ ssh-keygen -t dsa -N '' -f test_dsa_key
$ openssl asn1parse -in test_dsa_key
    0:d=0  hl=4 l= 444 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :00
    7:d=1  hl=3 l= 129 prim: INTEGER           :E497DFBFB5610906D18BCFB4C3CCD...
  139:d=1  hl=2 l=  21 prim: INTEGER           :CF2478A96A941FB440C38A86F22CF...
  162:d=1  hl=3 l= 129 prim: INTEGER           :83218C0CA49BA8F11BE40EE1A7C72...
  294:d=1  hl=3 l= 128 prim: INTEGER           :16953EA4012988E914B466B9C37CB...
  425:d=1  hl=2 l=  21 prim: INTEGER           :89A356E922688EDEB1D388258C825...

Passphrase-protected keys

Next, in order to make life harder for an attacker who manages to steal your private key file, you protect it with a passphrase. How does this actually work?

$ ssh-keygen -t rsa -N 'super secret passphrase' -f test_rsa_key
$ cat test_rsa_key
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,D54228DB5838E32589695E83A22595C7

3+Mz0A4wqbMuyzrvBIHx1HNc2ZUZU2cPPRagDc3M+rv+XnGJ6PpThbOeMawz4Cbu
lQX/Ahbx+UadJZOFrTx8aEWyZoI0ltBh9O5+ODov+vc25Hia3jtayE51McVWwSXg
wYeg2L6U7iZBk78yg+sIKFVijxiWnpA7W2dj2B9QV0X3ILQPxbU/cRAVTd7AVrKT
    ... etc ...
-----END RSA PRIVATE KEY-----

We’ve gained two header lines, and if you try to parse that Base64 text, you’ll find it’s no longer valid ASN.1. That’s because the entire ASN.1 structure we saw above has been encrypted, and the Base64-encoded text is the output of the encryption. The header tells us the encryption algorithm that was used: AES-128 in CBC mode. The 128-bit hex string in the DEK-Info header is the initialization vector (IV) for the cipher. This is pretty standard stuff; all common crypto libraries can handle it.

But how do you get from the passphrase to the AES encryption key? I couldn’t find it documented anywhere, so I had to dig through the OpenSSL source to find it:

  1. Append the first 8 bytes of the IV to the passphrase, without a separator (serves as a salt).
  2. Take the MD5 hash of the resulting string (once).

That’s it. To prove it, let’s decrypt the private key manually (using the IV/salt from the DEK-Info header above):

$ tail -n +4 test_rsa_key | grep -v 'END ' | base64 -D |    # get just the binary blob
  openssl aes-128-cbc -d -iv D54228DB5838E32589695E83A22595C7 -K $(
    ruby -rdigest/md5 -e 'puts Digest::MD5.hexdigest(["super secret passphrase",0xD5,0x42,0x28,0xDB,0x58,0x38,0xE3,0x25].pack("a*cccccccc"))'
  ) |
  openssl asn1parse -inform DER

…which prints out the sequence of integers from the RSA key in the clear. Of course, if you want to inspect the key, it’s much easier to do this:

$ openssl rsa -text -in test_rsa_key -passin 'pass:super secret passphrase'

but I wanted to demonstrate exactly how the AES key is derived from the password. This is important because the private key protection has two weaknesses:

  • The digest algorithm is hard-coded to be MD5, which means that without changing the format, it’s not possible to upgrade to another hash function (e.g. SHA-1). This could be a problem if MD5 turns out not to be good enough.
  • The hash function is only applied once — there is no stretching. This is a problem because MD5 and AES are both fast to compute, and thus a short passphrase is quite easy to break with brute force.

If your private SSH key ever gets into the wrong hands, e.g. because someone steals your laptop or your backup hard drive, the attacker can try a huge number of possible passphrases, even with moderate computing resources. If your passphrase is a dictionary word, it can probably be broken in a matter of seconds.
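
To put rough numbers on it: suppose the attacker can test on the order of a million candidate passphrases per second (a modest assumption, since each guess costs just one MD5 and one AES decryption). A six-character lowercase passphrase gives 26^6 ≈ 3.1 × 10^8 possibilities, which falls in around five minutes; a passphrase drawn from a 100,000-word dictionary falls in a fraction of a second.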

That was the bad news: the passphrase on your SSH key isn’t as useful as you thought it was. But there is good news: you can upgrade to a more secure private key format, and everything continues to work!

Better key protection with PKCS#8

What we want is to derive a symmetric encryption key from the passphrase, and we want this derivation to be slow to compute, so that an attacker needs to buy more computing time if they want to brute-force the passphrase. If you’ve seen the use bcrypt meme, this should sound very familiar.

For SSH private keys, there are a few standards with clumsy names (acronym alert!) that can help us out:

  • PKCS #5 (RFC 2898) defines PBKDF2 (Password-Based Key Derivation Function 2), an algorithm for deriving an encryption key from a password by applying a hash function repeatedly. PBES2 (Password-Based Encryption Scheme 2) is also defined here; it simply means using a PBKDF2-generated key with a symmetric cipher.
  • PKCS #8 (RFC 5208) defines a format for storing encrypted private keys that supports PBKDF2. OpenSSL transparently supports private keys in PKCS#8 format, and OpenSSH uses OpenSSL, so if you’re using OpenSSH that means you can swap your traditional SSH key files for PKCS#8 files and everything continues to work as normal!

I don’t know why ssh-keygen still generates keys in SSH’s traditional format, even though a better format has been available for years. Compatibility with servers is not a concern, because the private key never leaves your machine. Fortunately it’s easy enough to convert to PKCS#8:

$ mv test_rsa_key test_rsa_key.old
$ openssl pkcs8 -topk8 -v2 des3 \
    -in test_rsa_key.old -passin 'pass:super secret passphrase' \
    -out test_rsa_key -passout 'pass:super secret passphrase'

If you try using this new PKCS#8 file with an SSH client, you should find that it works exactly the same as the file generated by ssh-keygen. But what’s inside it?

$ cat test_rsa_key
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFDjBABgkqhkiG9w0BBQ0wMzAbBgkqhkiG9w0BBQwwDgQIOu/S2/v547MCAggA
MBQGCCqGSIb3DQMHBAh4q+o4ELaHnwSCBMjA+ho9K816gN1h9MAof4stq0akPoO0
CNvXdtqLudIxBq0dNxX0AxvEW6exWxz45bUdLOjQ5miO6Bko0lFoNUrOeOo/Gq4H
dMyI7Ot1vL9UvZRqLNj51cj/7B/bmfa4msfJXeuFs8jMtDz9J19k6uuCLUGlJscP
    ... etc ...
-----END ENCRYPTED PRIVATE KEY-----

Notice that the header/footer lines have changed (BEGIN ENCRYPTED PRIVATE KEY instead of BEGIN RSA PRIVATE KEY), and the plaintext Proc-Type and DEK-Info headers have gone. In fact, the whole key file is once again an ASN.1 structure:

$ openssl asn1parse -in test_rsa_key
    0:d=0  hl=4 l=1294 cons: SEQUENCE
    4:d=1  hl=2 l=  64 cons: SEQUENCE
    6:d=2  hl=2 l=   9 prim: OBJECT            :PBES2
   17:d=2  hl=2 l=  51 cons: SEQUENCE
   19:d=3  hl=2 l=  27 cons: SEQUENCE
   21:d=4  hl=2 l=   9 prim: OBJECT            :PBKDF2
   32:d=4  hl=2 l=  14 cons: SEQUENCE
   34:d=5  hl=2 l=   8 prim: OCTET STRING      [HEX DUMP]:3AEFD2DBFBF9E3B3
   44:d=5  hl=2 l=   2 prim: INTEGER           :0800
   48:d=3  hl=2 l=  20 cons: SEQUENCE
   50:d=4  hl=2 l=   8 prim: OBJECT            :des-ede3-cbc
   60:d=4  hl=2 l=   8 prim: OCTET STRING      [HEX DUMP]:78ABEA3810B6879F
   70:d=1  hl=4 l=1224 prim: OCTET STRING      [HEX DUMP]:C0FA1A3D2BCD7A80DD61F4C0287F8B2D...

Use Lapo Luchini’s JavaScript ASN.1 decoder to display a nice ASN.1 tree structure:

Sequence (2 elements)
|- Sequence (2 elements)
|  |- Object identifier: 1.2.840.113549.1.5.13            // using PBES2 from PKCS#5
|  `- Sequence (2 elements)
|     |- Sequence (2 elements)
|     |  |- Object identifier: 1.2.840.113549.1.5.12      // using PBKDF2 -- yay! :)
|     |  `- Sequence (2 elements)
|     |     |- Byte string (8 bytes): 3AEFD2DBFBF9E3B3    // salt
|     |     `- Integer: 2048                              // iteration count
|     `- Sequence (2 elements)
|          Object identifier: 1.2.840.113549.3.7          // encrypted with Triple DES, CBC
|          Byte string (8 bytes): 78ABEA3810B6879F        // initialization vector
`- Byte string (1224 bytes): C0FA1A3D2BCD7A80DD61F4C0287F8B2DAB46A43E...  // encrypted key blob

The format uses OIDs, numeric codes allocated by a registration authority to unambiguously refer to algorithms. The OIDs in this key file tell us that the encryption scheme is pkcs5PBES2, that the key derivation function is PBKDF2, and that the encryption is performed using des-ede3-cbc. The hash function can be explicitly specified if needed; here it’s omitted, which means that it defaults to HMAC-SHA1.

The nice thing about having all those identifiers in the file is that if better algorithms are invented in future, we can upgrade the key file without having to change the container file format.

You can also see that the key derivation function uses an iteration count of 2,048. Compared to just one iteration in the traditional SSH key format, that’s good — it means that it’s much slower to brute-force the passphrase. The number 2,048 is currently hard-coded in OpenSSL; I hope that it will be configurable in future, as you could probably increase it without any noticeable slowdown on a modern computer.
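
As an aside, OpenSSL releases newer than the one this article was written against expose the iteration count on the command line as an -iter option to openssl pkcs8 (run openssl pkcs8 -help to check whether yours has it). If it does, the earlier conversion can be repeated with a much higher count, for example:

$ openssl pkcs8 -topk8 -v2 des3 -iter 100000 \
    -in test_rsa_key.old -passin 'pass:super secret passphrase' \
    -out test_rsa_key -passout 'pass:super secret passphrase'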

Conclusion: better protection for your SSH private keys

If you already have a strong passphrase on your SSH private key, then converting it from the traditional private key format to PKCS#8 is roughly comparable to adding two extra keystrokes to your passphrase, for free. And if you have a weak passphrase, you can take your private key protection from “easily breakable” to “slightly harder to break”.

It’s so easy, you can do it right now:

$ mv ~/.ssh/id_rsa ~/.ssh/id_rsa.old
$ openssl pkcs8 -topk8 -v2 des3 -in ~/.ssh/id_rsa.old -out ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
# Check that the converted key works; if yes, delete the old one:
$ rm ~/.ssh/id_rsa.old

The openssl pkcs8 command asks for a passphrase three times: once to unlock your existing private key, and twice for the passphrase for the new key. It doesn’t matter whether you use a new passphrase for the converted key or keep it the same as the old key.

Not all software can read the PKCS#8 format, but that’s fine — only your SSH client needs to be able to read the private key, after all. From the server’s point of view, storing the private key in a different format changes nothing at all.

Update: Brendan Thompson has wrapped this conversion in a handy shell script called keycrypt.

Update: to undo this change

On Mac OS X 10.9 (Mavericks), the default installation of OpenSSH no longer supports PKCS#8 private keys for some reason. If you followed the instructions above, you may no longer be able to log into your servers. Fortunately, it’s easy to convert your private key from PKCS#8 format back into the traditional key format:

$ mv ~/.ssh/id_rsa ~/.ssh/id_rsa.pkcs8
$ openssl pkcs8 -in ~/.ssh/id_rsa.pkcs8 -out ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ ssh-keygen -f ~/.ssh/id_rsa -p

The openssl command decrypts the key, and the ssh-keygen command re-encrypts it using the traditional SSH key format.