Shared posts

02 Mar 03:46

If You Want To Persuade, Get Out Of Email

by Richard Millington

In 2007, my wife and I were in Morocco.

A friend of a family we stayed with invited us to see his carpet store.

We were young and dumb, so we agreed.

You can guess what happened next.

First he showed us around the store. He explained the history of the store and the remarkable story of how some of the rugs ended up in his possession. Then he brought in tea and began asking us about our lives. He asked about the homes we lived in, the kind of rugs we bought (none), and noted what an amazing feeling it is to invite your parents to your first home.

Finally, he asked which of the rugs we liked best and said…just for us…he could make a really good deal.

If we had the money (and could squeeze a big rug into small backpacks) we probably would have bought one.

Notice the persuasive elements in action. He had referred credibility from our friend, he was personally likable (showed interest in us/complimented us), he created a narrative around his store (which we could share with friends when we bought the rug), he made tea (reciprocity), and he provided options (which rug do you like the best?).

He was even smart enough to sit next to us instead of opposite us.

Now compare this with walking through the crowded, noisy markets of Marrakech. Each vendor shouts loudly to make the sale.

Email is that crowded, noisy marketplace. It is the most competitive place to persuade anyone of anything. If you’re trying to deal with detractors or convert newcomers into regulars using email alone, you’re going to face an uphill struggle.

If you want to persuade someone, get them out of email.

Use email to set up a call or organize a webinar. Invite people to meet up for a coffee.

As salespeople know well, it’s pointless to try to persuade anyone by email. 60% of the challenge is getting the meeting. The moment someone hears your voice, they develop a stronger sense of empathy and liking towards you.

For coordination and simple tasks, email works well. But if you’re trying to persuade someone to do something, use any other medium. If you’re dealing with detractors, making a sale, trying to get someone to engage more frequently – email is the weakest method. The associations with spam, self-involved tasks, and the noisy marketplace are just too strong.

The best use for email is to get out of email.

02 Mar 03:45

Recommended on Medium: "Slack, I’m Breaking Up with You" in Better People

“Revolutions have never lightened the burden of tyranny; they have only shifted it to another shoulder.” — George Bernard Shaw

Continue reading on Medium »

02 Mar 03:45

Musings on the Economics of Commercial and Open Educational Resources



David Wiley, iterating toward openness, Mar 03, 2016


"The market for textbooks is distorted," argues Phil Hill. "There is absolutely no reason that a digital textbook rental should cost five times what a physical textbook rental costs." I would pause to observe that the use of the word 'distorted' implies there is some 'natural' state of the market, from which I guess we could infer what prices 'should' be, but of course there is no such thing. But I digress. Why do we think textbooks should be cheaper when they're digital? David Wiley argues, "When concrete expressions of ideas, knowledge, skills, and attitudes are converted from a physical into a digital format, this changes them from private goods back to being public goods, once again making them easier to share (i.e., they are nonrivalrous and nonexcludable)." But copyright law "changes these public goods into club goods, once again making them difficult to share." With club goods, you cannot even resell what you have purchased. "Publishers have worked hard to establish a licensing norm and copyright regime that insures that you never own any digital products – you simply license access to them." This prevents any secondary market of used digital texts from emerging and keeps prices high. So used print texts end up costing less than digital texts.

02 Mar 03:44

Apple Music Connect, Seven Months Later

by Federico Viticci

Dave Wiskus has been trying Connect, the social feature of Apple Music, for seven months. He now reports back on how the experiment has been going:

Imagine a social network where you can’t see how many followers you have, can’t contact any of them directly, can’t tell how effective your posts are, can’t easily follow others, and can’t even change your avatar.

Welcome to Apple Music Connect.

I first wrote about my experiences with Connect last year:

Someone asked why I believed that Connect would ever be better than Ping, Apple’s previous attempt at socialifying iTunes. Ping’s mistake was that it tried to connect listeners to each other, as a way of discovering new music. Apple Music has re-thought that problem in some very interesting ways, and early indications are that the new approach works. For the social component, Connect wants to be about connecting artists with their listeners, but at the moment, it falls short.

These are early days, and there’s hope.

The morning after I posted this I awoke to an email from Trent Reznor. He had spoken to Eddy Cue and the team about my concerns, and wanted to assure me that they were being addressed.

Apple has had seven months to get their s*** together. Have they?

The whole post is damning. I hate to say this, but so far those people who originally made fun of Connect as another Ping have a point.

02 Mar 03:44

Google Maps for iPhone Adds Pit Stops

by Federico Viticci

Aditya Dhanrajani, writing on the Google Maps blog:

Life is full of the unexpected—things that send us scrambling for a gas station in the middle of nowhere, looking up a florist on our way home from work or searching for a restaurant as we tour the back roads of our latest vacation destination. Finding and navigating to these last-minute pit stops used to force you out of navigation mode in Google Maps—and away from the traffic updates, turn-by-turn directions and map you rely on to stay on track.

That changed last October with an update to Google Maps for Android that lets you add detours to your route, without ever leaving navigation mode. And starting today, this feature will start rolling out on iOS as well, in any country where we offer navigation—more than 100 worldwide. So no matter where you’re from, where you are, or whether you use Android or iPhone, making a pit stop is now a breeze.

This is going to be useful in Rome (which I don't know well enough) and it's another differentiator from Apple Maps. I also like that they added the ability to 3D Touch the app icon and start navigation to your Home address.

02 Mar 03:44

Hey, What About All Those Trump Magazines That Failed?

by Rex Hammock

I was starting to believe the magazine industry was being ripped off.

Until this past weekend, I hadn’t seen any Trump-branded magazines among the many lists of Trump failures now appearing. I recalled some, as I blogged about them nine years ago.

Even better, my friend Dylan Stableford, now covering the campaign for Yahoo News, was then at the trade magazine Folio: and, in 2007, was already covering the parade of Trump magazine failures.

Trump Style | Started in 1997, Failed circa 2004
Trump World | Started in 2004, Failed 2007
Trump Magazine | Started in 2007, Failed circa 2009

There have probably been more Trump magazines launched (and failed).
I know just who to ask. Oh, Mr. Magazine?

P.S. This feels like the early days of this blog.

02 Mar 03:44

DMV Report: Google Self-Driving Car Hit City Bus While Changing Lanes

by Mary Beth Quirk

Google has been quick to point out in the past that its self-driving cars haven’t been at fault for any of the accidents they’ve been involved in. In what could be the first incident that’s the driverless car’s fault, a California Department of Motor Vehicles report says a Google Lexus hit a city bus while in autonomous mode.

According to the report [PDF] (first noted by writer Mark Harris on Twitter), the Google autonomous vehicle, or AV, was traveling in autonomous mode in the right-hand lane, as it was attempting to turn right on a red light on Feb. 14.

But the Google AV had to stop and go around sandbags that were positioned around a storm drain in its way; when the light turned green, the car let a few cars pass and then started to move into the center of the lane to pass the sandbags.

“A public transit bus was approaching from behind,” the report says. “The Google AV test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue.”

That wasn’t the case: about three seconds later, the Google AV came into contact with the side of the bus, the report says. The car was going less than 2 mph, while the bus was traveling at 15 mph when they hit each other. The Google vehicle sustained body damage to the left front fender, the left front wheel and one of its driver’s-side sensors. There were no injuries reported at the scene.

In Google’s February monthly report for its self-driving cars, which will be published on March 1, the company will provide further details of the Feb. 14 incident. In the report, which Consumerist has read, the company calls the Valentine’s Day incident a “tricky set of circumstances” that’s helped it “improve an important skill for navigating similar roads.”

The company echoes its report to the DMV, saying that the car and the test driver both predicted that the bus would yield to the vehicle, because they were ahead of it.

“And we can imagine the bus driver assumed we were going to stay put,” the report says. “Unfortunately, all these assumptions led us to the same spot in the lane at the same time,” adding that this isn’t a situation unique to self-driving vehicles, but a type of misunderstanding that humans encounter on the road every day.

Though trying to predict each other’s movements is a normal part of driving, Google says, the company does acknowledge that its car wasn’t blameless.

“In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision,” the report says. “That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.”

The company says it’s reviewed the incident in its simulator and has tweaked its software as a result.

“From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future,” the report says.

02 Mar 03:44

Two Billion People, One API-Powered Identity Solution

by Anurag

Recently at the Mobile World Congress in Barcelona, GSMA announced that its Mobile Connect digital identity authentication service is now accessible to two billion people via 34 telco operators in 21 countries. The secure universal login solution has essentially become the telco industry’s answer to Facebook Connect.

From a tentative beginning, when operators were unsure of the value of collaborating with their competitors, the program has gained significant momentum and new converts in countries like India, where six operators are putting entirely new levels of cooperation in place. They’re establishing common commercial terms and a shared vision for Mobile Connect as a key part of the country’s Digital India initiative.

Two very difficult problems have been solved in the past several years. First, the Mobile Connect standard (built on OpenID Connect) has finally been adopted by forward-thinking operators who’ve realized that subscriber identity is a gateway to the monetization of OTT (over-the-top) services. Second, the problem of how to discover the telco operator for any incoming subscriber with a Mobile Connect identity has been solved with a global API exchange that manages interoperability across operators.
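Since Mobile Connect is built on OpenID Connect, the core of an authentication request is an ordinary OIDC authorization-code request aimed at the subscriber's operator, found via the discovery step. Here is a rough, minimal sketch (all endpoint, client, and redirect values are made up; real deployments add operator-specific scopes and parameters):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_auth_request(authorize_endpoint, client_id, redirect_uri, state, nonce):
    """Build a standard OpenID Connect authorization-code request URL."""
    params = {
        "response_type": "code",   # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",         # OIDC requires the 'openid' scope
        "state": state,            # CSRF protection, echoed back by the operator
        "nonce": nonce,            # binds the resulting ID token to this request
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

# Hypothetical operator endpoint; in Mobile Connect it would come from discovery.
url = build_auth_request(
    "https://operator.example/oidc/authorize",
    "demo-client", "https://sp.example/callback",
    state="abc123", nonce="n-42",
)
print(url)
```

The `state` and `nonce` values would normally be random per request; they are fixed here only to keep the example deterministic.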

Operators are no longer arguing the merits of a simple, powerful mobile-based authentication solution. The race is now on to find how service providers and partners can benefit from such a service. Early pilots and trials have shown positive results in countries from the U.K. and Finland to Pakistan and Sri Lanka.

What’s next? How to enable banks, merchants, local and federal government service providers, travel agencies, and gaming companies to offer a seamless customer experience through Mobile Connect authentication. The doors have been flung wide open to 2 billion people, and the party is on. Now the question is: will they come?

Learn more about API Exchange, a solution developed by Apigee for GSMA that enables operators to federate between their individual APIs to deliver cross-operator reach. API Exchange is the federation mechanism for Mobile Connect; its identity authentication solution is Apigee's Identity APIx.

And check out my presentation at Mobile Connect.


Thoughts? Comments? Questions? Join the conversation (or start one) in the Apigee Community.


02 Mar 03:41

100,000 and counting

by Sandy James Planner


We have been talking about the virtual and sharing economies, and Vancouver hit a milestone today. In the parlance of Malcolm Gladwell, Vancouverites are “early adopters” of new technologies and ways of doing things. The rampant success of car sharing is a key example: Vancouver is the first city in the world to exceed 100,000 Car2Go members. Given that Vancouver’s population is 603,502 according to the 2011 Census, that means roughly 1 in 6 residents is a member of this car sharing service.
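The 1-in-6 figure is simple arithmetic, easy to check against the census number:

```python
population = 603502   # Vancouver, 2011 Census
members = 100000      # Car2Go membership milestone

ratio = population / members
print(f"about 1 in {ratio:.0f} residents is a member")  # about 1 in 6
```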

And there are some fun statistics: the average trip is six kilometers, and Vancouverites have driven 33.6 million kilometers since the company’s inception in 2011.

Car2Go has 1,250 Smart cars, with another 1,000 vehicles offered by the other three car share companies: Evo, Modo and Zipcar.

I like the convenience of the service, and I have many friends who tried car sharing and liked it so much they sold their car. There are, however, challenges: for folks living outside of Vancouver and parts of North Vancouver, these cars are not readily available. To make a business out of car sharing, there needs to be higher density and a frequency of users sufficient to make a profit. That required density is something that the other Metro Vancouver municipalities do not have, yet.

The Vancouver Sun article by Matthew Robinson on car share delves into the car share business, and also gives a thumbs up to the forthcoming Vancouver bike-share service.


02 Mar 03:41

Where are those Boomers Going?

by Sandy James Planner

Green Acres was a television program that ran from 1965 to 1971, featuring an older socialite couple from New York City leaving their Manhattan digs for the rustic charm of the country. The photo above is from that program.

But is this happening locally? I had coffee with a business owner who caters to baby boomers. He is looking for solid trends in where this age cohort is going and what they are doing with that cashed-out equity from Vancouver real estate. The Vancouver Sun is also curious, and Nick Eagland’s article suggests Boomers are buying the farm in the Fraser Valley. While the numbers are not huge, it is suggestive of a trend for boomers to cash out and relocate to more salubrious and pastoral farms and acreages. So much for everyone looking for that townhouse or condo downtown.

The New York Times examines the opposite: the plight of older citizens in the New York City suburbs who hoped to cash out of their large houses for a small pied-à-terre in New York City, where they could age in place with minimal stairs and no snow shovelling.

Those boomers are, however, too late: in the last fifteen years the values of apartments in Manhattan and Brooklyn have risen more than the values of the suburban homes, leaving them stranded in the suburbs.

And back to Vancouver: Ladner and Tsawwassen in Delta have seen an increase of almost 40 per cent in property values in the last year, and other Metro municipalities have also seen increases. Is this due to first-timers buying into the market, or older empty nesters looking for their own version of those Green Acres?


02 Mar 03:11

Folding bikes for condo living and downtown commuting: What’s new in foldies

by dandy


The Salamander, above, by Guelph-based WIKE.


Story by Derek Rayside

Images provided by manufacturers

Folding bikes are a great option for city living: you can take them with you into your condo or office and tuck them away securely and unobtrusively. They are also easy to transport in your car or boat. I keep my folding bike under the desk in my office.

Ontario has been a hotbed of innovation in folding bike design recently. There are three new products, invented locally, that are coming to market this summer.

The Helix (www.RideHelix.ca) has the largest wheel size and frame of any folding bike, yet is one of the smallest when folded. It promises the ride quality of a Bike Friday with the compact foldability of a Brompton. The all-titanium frame is very light. Helix was invented and is manufactured here in Toronto, and it raised over $2M in pre-production orders on Kickstarter.com last year (a Kickstarter record).
The Revelo (www.Revelo.ca) is a very small electric bike with a radical yet old-fashioned design: the pedals are attached directly to the front wheel. It's like a miniature penny-farthing. What's old is new again. The original prototype was an OCAD thesis project about five years ago, and it is now market-ready. Both the Revelo and Helix promotional videos are shot on the newly renovated Queens Quay.
The Wike Salamander (www.Wicycle.com) is an ingenious folding family bicycle invented and manufactured in Guelph. It converts between a jogging stroller for two young children and a 7.5'-long bakfiets-style cargo bike, and you can do the conversion in seconds with the children in the stroller cabin. Put the kids in the (fully enclosed) cabin upstairs in your apartment, stroll downstairs, convert to bike mode outside, ride down the street to dance class, convert back to stroller mode, and walk into the community centre. Your kids will never be exposed to the elements (unless you open the canopy). Amazing!
The Salamander will make its début at the Toronto International Bike Show March 4-6, CNE Better Living Centre.
The established names in quality folding bikes, which you can buy in shops today, include Brompton, Dahon, Tern, Giant, and Bike Friday. You can also get an inexpensive Schwinn at Canadian Tire. There is a good folding bike for every person and situation. See you in the bike lanes!


Derek's family recently replaced their car with an Onderwater Triple Tandem bike and a Burley D'Lite trailer. They use the bike as their main mode of transportation for all children's activities, grocery shopping, and even date nights. They use the new bike lanes on Queens Quay, Simcoe, Sherbourne, Adelaide, and Richmond. When he's not riding the family bicycle, Derek works as a software engineer.

Related on the dandyBLOG:

Kickstarter and kickstands: Innovators ask for money for bikes online

Coldest (not really) Day of the Year Ride 2016

Green Living Show 2015 recap

Icycle ice race 2016

External: Toronto International Bike Show 2016

02 Mar 03:10

Leak: Android N to come with a redesigned notification shade and Quick Settings panel

by Rajesh Pandey
With Lollipop, Google tweaked the notification panel and Quick Settings panel in Android, and it looks like with Android N the company is all set to redesign the notification shade once again. The functionality of the panel remains largely the same, though it is getting a major cosmetic overhaul.
02 Mar 03:10

Falling in love again with ancient audio gear

by Doc Searls

pas3x

Back when I was a freshman in college, I tried to build what was already legendary audio gear: a Dynaco PAS-3X preamplifier, and a Stereo 35 power amplifier. Both were available only as kits, and I screwed them up. I mean, I wasn’t bad with a soldering iron, but I sucked at following directions.

So my cousin Ron (that’s him on the left) came to my rescue, fixing all my mistakes and making both chunks of iron sing like bells. In the process he decided to build a PAS-3X of his own, along with a Heathkit A111 power amplifier.

I wore out my Dynacos by the late ’70s. (Along with my KLH Model 18 tuner and AR turntable with a Shure V15 Type II Improved phono cartridge.) Ron’s worked at his Mom’s place for a few years, and were eventually retired to a cabinet, where I spotted them a few years ago at her house in Maine. When I asked about them, she said “Take ’em away.” So I did. After that they languished behind furniture, first at our apartment in Massachusetts and then at the one here in New York.

So a few days ago, after my old Kenwood receiver crapped out, I decided on a lark to give Ron’s old gear a try. I had no faith it would work. After all, it was fifty-year-old iron that hadn’t been turned on in forty years or more. Worse, it wasn’t solid-state stuff. These things were filled with vacuum tubes, and had components and wiring that had surely rotted to some degree with age.

So I plugged them in, made all the required connections between the two units and a pair of Polk speakers (which date from the ’90s), and then fed in some music from the collection on my iPhone.

Amazing: they work. Beautifully. Some knobs make scratchy sounds when I turn them, and every once in a while the right channel drops out, requiring that I re-plug an input. But other than that, it’s all fine. The Heathkit, which has the size and heft of a car battery, could heat a room, even though it produces only 14 watts per channel. When it’s running, it’s too hot to pick up. But the sound is just freaking amazing. Much better than the Kenwood, which is a very nice receiver. I’m sure it’s the tubes. The sound is very warm and undistorted. Vocals especially are vivid and clear. The bass is tight. The high end is a bit understated, but with plenty of detail. (Here’s a test report from 1966.)

My original plan was to sell them eventually on eBay, since these kinds of things can bring hundreds of dollars apiece. But now I love them too much to do that.

I mean, these things make me want to sit and listen to music, and it’s been a long time since any gear has done that.

They also connect me to Ron, who sadly passed several years ago. He was my big brother when we were growing up, and a totally great guy. (He was also cool in a vintage sense of the word, at least to me. And you had to love his red ’60 Chevy Impala convertible, which he drove until he joined the Army, as I recall.)

So I gotta keep ’em.

02 Mar 03:09

Xiaomi – All mod cons.

by windsorr


The new Mi5 has everything except profit.

  • Xiaomi’s strategy of getting users on board is working but the need to make money from them is becoming increasingly acute.
  • Registrations for its latest flagship launch, the Mi5, have passed 16.8m for today’s flash sale in China, which does not come as an enormous surprise.
  • This is because fans have been waiting 18 months for an upgrade and Xiaomi seems to be virtually giving the phone away.
  • Everything about the Mi5 is high end except the screen, which has an acceptable 1080p resolution.
  • The Mi5 sports a 16MP sensor, a Snapdragon 820 processor, 3GB of RAM and 32GB of storage for an incredibly reasonable $305.
  • At the same time, the company has said that it has passed 170m users, but there is no sign of monetising them.
  • One of the main reasons for this is that a large proportion of its users are not using a Xiaomi device.
  • RFM calculates that, at the end of Q4 15A, there were 103.2m users with a Xiaomi device, leaving 66.8m who have used one of the 69 or more mods that are available to put MIUI on a non-Xiaomi device.
  • I believe that the vast majority of these ‘mods’ are outside of China where Xiaomi has no ecosystem and instead pushes Google.
  • This means that the effective user base from which it could potentially make money is actually around 100m.
  • Xiaomi has chosen the hardware route of monetisation but unlike Apple, the ecosystem is clearly not exclusive to the device.
  • Consequently, should Xiaomi’s ecosystem become popular, it will be unable to put its prices up because users will be able to download a ‘mod’ and get the ecosystem for free.
  • This is why I think that Xiaomi will have to either shut down the ‘mods’ or start charging for them to begin the monetisation of its ecosystem.
  • This is still a long way in the future, and the Xiaomi ecosystem still needs an awful lot of work before it gets to the point where it can begin to make money for its owner.
  • Xiaomi currently offers media consumption, messaging and shopping, giving it 25% coverage of the Radio Free Mobile Digital Life Pie.
  • In order to be considered a fully-fledged ecosystem, it needs more services which in turn necessitates heavy investment.
  • Xiaomi’s rivals (Baidu, Tencent, Alibaba and China Mobile) are all investing heavily and critically they also have internal cash flow to pay for it.
  • Xiaomi does not, and it will be very difficult to come back to the market, as this would necessitate raising money at far below the 2014 valuation of $42bn.
  • I have recently valued Xiaomi at $5.9bn (see here) and I see nothing in these more recent numbers that leads me to change that view.
  • There is pent up demand for the Mi5 which is likely to cause a bump in shipments but once this has been satisfied units are likely to revert to around 18m per quarter.
  • Furthermore, I suspect that the bulk of the demand is coming from existing Xiaomi users, meaning that the user number is unlikely to see a sudden jump.
  • Hence, I think that Xiaomi remains in a very difficult position despite its early lead in China, as it does not have the internal cash flow to out-invest its large rivals.
  • I can see it being forced into the arms of one of the larger Chinese players as further fund raising is needed and this is going to be a real challenge.
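The user-base arithmetic in the bullets above is straightforward to reproduce (figures as stated by RFM):

```python
total_users = 170.0        # m, claimed MIUI users at end of Q4 15A
on_xiaomi_devices = 103.2  # m, RFM's estimate of users on Xiaomi hardware

# Users running MIUI via a 'mod' on a non-Xiaomi device
mod_users = total_users - on_xiaomi_devices
print(f"{mod_users:.1f}m users on non-Xiaomi devices")  # 66.8m

# RFM treats only the device-based users (~100m) as monetisable
effective_base = on_xiaomi_devices
print(f"effective base: ~{effective_base:.0f}m")
```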


02 Mar 03:09

Telus to charge $2 per month for data block feature with ‘enhanced capabilities’

by Ian Hardy

In an effort to reduce bill shock from data overages, Telus will be adding a new feature to its arsenal that will give customers “peace of mind,” at the cost of $2 per month.

Telus, and other Canadian carriers, already have the Data Block feature enabled to ensure wireless subscribers are notified when data charges exceed $50 and all data services are blocked. Effective March 3rd, Telus is rolling out a data block feature with ‘enhanced capabilities’ that blocks data but includes access to picture/video messaging, Telus’ website, and its My Account app.

“The new feature blocks wireless data at the network level, while allowing picture and video messaging, and access to telus.com and My Account so customers with the feature can continue to manage their accounts from the device,” said a Telus spokesperson in a statement to MobileSyrup. “The $2/month feature provides customers with peace of mind and cost-control, preventing accidental data usage at pay-per-usage or overage rates. This change will not affect customers that already have data blocked at the network level.”

Customers will be able to add, change, or remove the new data block feature from any Telus app, store, or call centre.

Telus recently reported its Q4 earnings and recorded $3.21 billion in revenues with over 8.5 million wireless subscribers.

02 Mar 03:08

Developers: Apple’s App Review Needs Big Improvements

by Graham Spencer

Since the App Store launched in 2008, every app and every app update has gone through a process of App Review. Run by a team within Apple, App Review’s objective is to keep the App Store free of apps that are malicious, broken, dangerous, or offensive, or that infringe upon any of Apple’s App Store Review Guidelines. For developers who want to have their app on the iOS, Mac, or tvOS App Store, App Review is an unavoidable necessity that they deal with regularly. But in public, little is heard about App Review, except on the few occasions when it has made a high-profile or controversial app rejection (such as the iOS 8 widgets saga) or mistakenly approved an app that should never have been approved (such as the app requiring players to kill Aboriginal Australians).

Earlier this year we set out to get a better understanding of what developers think about App Review. We wanted to hear about their positive and negative experiences with App Review, and find out how it could be improved. It is hard to ignore, from the results of our survey of 172 developers,1 that beneath the surface there is a simmering frustration with numerous aspects of App Review. There is no question that App Review still mostly works and very few want to get rid of it, but developers are facing a process that can be slow (sometimes excruciatingly so), inconsistent, marred by incompetence, and opaque, with poor communication. What fuels the frustration is that after months of hard work developing an app, App Review is the final hurdle that developers must overcome, and yet it can often cause big delays or kill an app before it ever sees the light of day.

Developer frustration at App Review might seem inconsequential, or inside-baseball, but the reality is that it does have wider implications. The app economy has blossomed into a massive industry, with Apple itself boasting that it has paid developers nearly $40 billion since 2008 and is responsible (directly and indirectly) for employing 4 million people in the iOS app economy across the US, Europe and China. As a result, what might have been a small problem with App Review 5 years ago is a much bigger problem today, and will be a much, much bigger problem in another 5 years’ time.

App Review is not in a critical condition, but there is a very real possibility that today’s problems with App Review are, to some degree, silently stifling app innovation and harming the quality of apps on the App Store. It would be naïve of Apple to ignore the significant and numerous concerns that developers have about the process.

eBook Version

An eBook version of this story is available to Club MacStories members for free as part of their subscription.

A Club MacStories membership costs $5/month or $50/year and it contains some great perks (including a weekly newsletter with exclusive original content – here's a sample issue).

You can subscribe here.

(Note: If you only care about the eBook, you can subscribe and immediately turn off auto-renewal in your member profile. I'd love for you to try out Club MacStories for at least a month, though.)

Download the EPUB files from your Club MacStories profile.

If you're a Club MacStories member, you will find a .zip download in the Downloads section of your profile, which can be accessed at macstories.memberful.com. The .zip archive contains two EPUB files – one optimized for iBooks (with footnote popovers), the other for most EPUB readers.

If you're looking for a way to download the file on iOS, check out this post.

The Negatives of App Review

Slow

The loudest complaint amongst the developers we surveyed is that App Review is too slow. We specifically asked developers about the speed of App Review, and the numbers speak for themselves. A whopping 78% of those surveyed rated the speed of App Review in negative terms (bad or terrible), whilst just 10% rated it in positive terms (good or excellent).

Not only did 4 out of 5 developers rank the speed of App Review as bad or terrible, but in the extended answer section of the survey the speed of App Review was repeatedly brought up as a complaint and an area which developers thought Apple must improve upon.

Apple does not publish detailed App Review processing times – the only information they provide is an infrequently updated table which tells developers how many submissions were reviewed (in percentage terms) in the last 5 business days.2 But aside from this one page on Apple’s website, developers are given no estimate or indication as to how long App Review might take to approve or reject their app. From our survey, the general consensus amongst developers was that it usually took about a week, but plenty also noted it could be shorter or longer than that.

Fortunately there is also an unofficial source of data on the speed of App Review – AppReviewTimes.com by Dave Verwer of iOS Dev Weekly. Operating since 2011, the site crowd-sources its data from developers who post on Twitter how long their app took to be approved by App Review, appending the #iosreviewtime or #macreviewtime hashtag. Anyone can then view the average App Review time for the last 14 days.

Unlike the couple of dozen estimates of App Review times from our survey, the data from AppReviewTimes.com numbered around 7,000 for the iOS App Store and just over 350 for the Mac App Store. That data confirms that the average App Review processing time can be fairly accurately stated as “about one week”.3 The average time does vary month by month, but the average rarely drops below 6 days and is often at 8 or more days.
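As a rough illustration of how such a crowd-sourced trailing average can be computed, here is a minimal sketch in Python. The numbers and field layout are made up for illustration; this is not AppReviewTimes.com's actual data or implementation.

```python
from datetime import datetime, timedelta

# Hypothetical crowd-sourced reports: (date of the tweet, days the review took).
# These values are illustrative only, not real AppReviewTimes.com data.
reports = [
    (datetime(2016, 2, 20), 8),
    (datetime(2016, 2, 25), 7),
    (datetime(2016, 2, 27), 6),
    (datetime(2016, 3, 10), 9),  # falls outside the 14-day window used below
]

def trailing_average(reports, as_of, window_days=14):
    """Average review time over reports from the trailing `window_days` days."""
    cutoff = as_of - timedelta(days=window_days)
    recent = [days for date, days in reports if cutoff <= date <= as_of]
    return sum(recent) / len(recent) if recent else None

print(trailing_average(reports, as_of=datetime(2016, 3, 1)))  # → 7.0
```

The averaging itself is trivial; the value of a site like this comes from the volume of reports, which smooths out the wide variance in individual review times.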

Developers are frustrated by the slow speed of App Review because it is more than just a minor annoyance – there are real consequences when App Review takes a week to approve apps. The most significant consequence is that software bugs take longer to fix: not only must the developer find the bug and write a fix, they must then wait for App Review to approve that fix. We heard from a few developers who expressed their frustration and despair at situations where a bug fix (sometimes a critical one) took over a week to be approved. One developer noted that those situations have “been really terrible for customer satisfaction”.

Whilst developers can request an expedited review, these requests are not always granted. In fact, many developers are instinctively cautious (perhaps overly so) about requesting one, fearful that it may prevent them from being granted an expedited review in a future (more dire) situation, under the belief that Apple will only grant a developer a certain number of expedited reviews each year.

Beyond delayed bug fixes, a slow App Review process means developers must plan ahead and allocate significant amounts of time for App Review when planning time-sensitive releases – reducing the amount of development time that can occur. One developer noted that they now incorporate a month of slack time into product launches just so they can handle a few App Review rejections.

The slow speed of App Review can also be a drag on innovation, as one developer noted:

Outside of the App Store, most software development has consolidated on agile methodologies and quick iteration, but that is just not possible when there is a minimum of a week for a release to go out.

Finally, the slow speed of App Review makes cross-platform simultaneous releases a bit of a logistical nightmare, as well as making marketing planning more difficult.

Sometimes the slow speed of App Review can be devastating. One developer told me about their app for the new Apple TV which they submitted for review about a week before the Apple TV launched. For an app to be on the App Store when a product launches (whether it is the Apple TV, iOS 9, or the iPhone 6s’ 3D Touch) is incredibly important – not only does Apple’s App Store usually feature great apps that take advantage of the new product, but third-party sites like MacStories also do the same. Unfortunately for this developer, App Review took 14 days to review their Apple TV app, at which point most of the excitement for Apple TV apps had died down and their app was barely mentioned.

Inconsistent

Inconsistency from App Review was another major recurring theme in the survey responses. Numerous developers gave examples where App Review had approved an update containing new features, only to reject a subsequent update for those features which had previously been approved. The most frustrating of those examples were when the update was a bug fix – meaning the developer, trying to quickly resolve an issue for their users, would now have to take more time either modifying their app to comply or appeal the decision (which may not succeed).

One such example was when a small bug fix led to App Review rejecting an app because it required registration. But the app, which had been on the App Store for five years, had always required registration and all of their competitors did the same thing. In the end the app was approved, but it took about a month of appeals and several phone calls to Apple from the developer.

Most concerning was the idea, alluded to in a few of the responses, that some apps were being left to stagnate and die because developers felt that going through App Review again was too risky. These developers felt it was better to just let the app die slowly than risk going through what they feel is an inconsistent App Review, which might reject a long-standing feature – the removal of which would instantly kill the utility of the app.

One developer was working on an emoji-centric keyboard extension for the launch of iOS 8, but kept running into rejections. After a number of these rejections they got the sense that Apple wanted to limit the keyboard extension API to alphanumeric keyboards, not emoji or symbols – so the developer stopped working on the feature. The only problem was that when iOS 8 launched, there were a bunch of emoji keyboard extensions from their competitors. Given that experience, the developer now says it “makes it hard to find the energy to update the app further, or work on any other productivity-oriented app (not for lack of ideas)”.

A few developers provided similar stories of how App Review had rejected their app for some reason which the developer disputed, and after a few messages back and forth it was obvious to the developers that they weren't making any progress. Rather than continue the argument, these developers decided to reject their own app, and then resubmit the app a few days or weeks later. All of these stories ended in the same way: App Review approved the new build (with no changes) with no questions asked.

Another developer submitted an app with a Notification Center widget which automatically hid itself – accomplished using public APIs. It was rejected, and the developer appealed the decision without success. After the developer commented on the rejection on Twitter, some Apple engineers reportedly told App Review that this was a valid use case. The developer resubmitted the app, but it was rejected another three times before finally being accepted – and then, even after being approved, it was rejected yet again. This developer also told me that outside of their day job, they no longer build apps for the App Store “because of App Review”.

Poor Communication

A few developers wrote in and described how an app had been stuck in the App Review process for weeks, even months. What frustrated many of these developers was not just the excruciatingly long time in review, but the utter lack of communication from Apple as to why they were in App Review limbo. One developer said that App Review did not respond to any requests, and it was not until they contacted Apple’s Developer Technical Support team that the silence was finally broken.

There is a mechanism for developers to send messages to the App Review team, but a common sentiment amongst those who commented on it was that it can often be (or at least appear to be) futile. One developer said App Review simply sends them “canned responses” and another developer even described the feeling of communicating with App Review as “like sending a message in a bottle”.

Illustration by Thomas Fink-Jensen

To the average consumer it may be surprising to learn that App Review frequently calls developers to discuss issues and rejections – which, as one developer pointed out, is quite impressive given the massive size of the App Store today. The only problem is that most of these calls are not very useful to developers. That same developer called them “robots”, which makes sense when you read this explanation from another developer:

When someone from App Review calls you on the phone to let you know your app has been rejected, the person who calls has little idea what exactly you are doing wrong and no ability to make policy decisions. They generally won’t let you talk to the actual decision-makers, meaning if they don’t understand what you are doing, it is virtually impossible to get on the same page.

Illustration by Thomas Fink-Jensen

App Store Review Guidelines

This is a story about App Review, and as such it also means that the App Store Review Guidelines (Guidelines) must inevitably be discussed. The Guidelines are effectively the laws of the App Store and App Review is both the police and the court system of the App Store.

We asked developers about the “clarity” of the App Store Review Guidelines, and this was the most positively rated aspect of App Review in our survey, with 44% rating it positively and 30% negatively. But some developers still criticised the Guidelines as being purposefully vague. A handful expressed frustration at having their app rejected with App Review simply citing an extremely broad rule in the Guidelines. Another developer explained that whilst the Guidelines can be fairly straightforward on their face, "developers are often left guessing about how Apple will enforce them".

A key criticism from a number of developers was that App Review would use the Guidelines in their most conservative interpretation, allowing App Review to restrict apps and features that they did not like. The consequence of this impression is that some developers say that they avoid developing new apps if something has never been done before, because they view it as incredibly risky. As one developer said, they do not want to “spend months on [a new app], only to have it rejected”.

This developer concern materialized in a particularly extraordinary way with the release of iOS 8 and the ability for developers to create widgets for Notification Center. One of the apps updated for iOS 8 (and approved by App Review) to take advantage of this capability was PCalc, which put a basic calculator in Notification Center. It got a lot of media attention, and Apple even featured PCalc in the App Store’s “Great Apps for iOS 8” collection. But at the same time that PCalc was being featured, App Review changed its mind and told the developer of PCalc to remove the widget as it breached the App Store Review Guidelines.4 The developer of PCalc, James Thomson, went public about Apple’s request to pull the calculator widget, and a public backlash ensued. Fortunately the saga ended on a positive note: Apple ultimately reversed its decision and allowed PCalc and other calculator widgets to remain in the App Store.

But PCalc was not the only app affected by Apple’s shifting of the goal posts on Notification Center widgets. Two of the other affected apps were Drafts (seemingly because it used buttons in its widget) and Launcher (because it launched other apps). Like PCalc, Drafts and Launcher both had their widgets approved, then revoked, before Apple approved them again (after public backlash against the rejections). It took Drafts' widget a few weeks to return to the App Store, but in the case of Launcher it took six months before the app could return.

The Notification Center widget flip-flopping was, and remains, a real low point in Apple’s developer relations history. It also highlights the central problem with the App Store Review Guidelines: Apple was happy to twist their vague wording to suit its purposes at the expense of developers. There was no rule against calculator widgets, or buttons in widgets, or widgets that launch other apps.5 These were smart developers who built widgets they believed (rightly) to be within the Guidelines, and in fact App Review initially approved them because they did fall within the Guidelines. When Apple subsequently rejected them, it cited vague guidelines and was evasive when the developers tried to understand and resolve the issue.

The Notification Center debacle left scars that go beyond just the developers directly affected. It set a troubling precedent in which Apple callously wielded the Guidelines as a weapon, simply because it disapproved of how an app worked, despite the apps offering genuine functionality to users. There are undoubtedly other developers who have shelved apps or features that pushed the boundaries of iOS because they believe it too risky to spend time developing something that might go against Apple’s vision for iOS apps. That should concern Apple.

The App Store Review Guidelines should be like a surgeon’s scalpel, able to carefully and precisely cut out the malicious, deeply offensive, and broken apps. Instead, the Guidelines are more like a bludgeoning club that is barbarically swung, often knocking out the good along with the bad.

Illustration by Thomas Fink-Jensen

A separate frustration borne of the Guidelines is the “metadata rejection”. This refers to when something in an app’s description or screenshots breaches the Guidelines, and because of the way the App Review process works, the developer has to resubmit their entire app update and go through the whole App Review process again. Developers find these particularly annoying because it can be trivially easy to accidentally trigger a metadata rejection, and App Review can be notoriously inconsistent about what is permitted and what is not.

Incompetent

One of the most concerning aspects of the survey was that there were quite a few examples of what could only really be described as App Review incompetence.

In one instance a developer ran into an issue where App Review kept rejecting an app because the reviewers could not see any of its content. The developer and their beta testers did not have this issue, and eventually the developer concluded that it must be a problem with App Review’s own network and iCloud configuration. It took several weeks until the developer reached some Apple employees who accepted that conclusion and convinced App Review to approve the app.

Another developer once had an update rejected because there was no demo account. Except there was, and the developer had listed the demo account in their app review request. So the developer resubmitted the app without changing anything and the app was approved without issue.

One developer had an app update held “in review” for 32 days. Whenever the developer contacted App Review during this time, they were told nothing was wrong. Eventually the developer contacted someone they knew at Apple, which resulted in a call back from someone in App Review, who asked why the developer was using unreleased HomeKit devices that they should not have access to. The only wrinkle was, the developer was not using any unreleased HomeKit devices.

Another developer had their app repeatedly face metadata rejections because the screenshots of their app featured comic book covers. App Review said the developer did not have the right to use the covers and the developer was forced to use the covers of comic books from the public domain. But as the developer rightly pointed out, there are hundreds of apps which also feature the covers and posters from movies, TV shows, books, and albums.

To some degree it is hard to tell whether developers experience this kind of incompetence regularly. Nonetheless, here is a grab-bag of some other examples, because they really are quite ridiculous:

  • A developer had to send App Review a video on how IRC worked after it rejected the developer's app twice.
  • App Review rejected an app for sharing personal information because it used Game Center for multiplayer.
  • App Review approved Minecraft: Pocket Edition 2 – a scam app that had nothing to do with Mojang's real Minecraft.
  • App Review rejected an App Store screenshot because it contained the iPhone Home button – despite the developer using Apple's own marketing images.
  • App Review approved dozens of apps which had been infected with the XcodeGhost malware, potentially affecting tens of millions of iOS users.

The Positives of App Review

It must be noted that not every submission focused on the shortcomings of App Review – there were many submissions which spoke highly of App Review in one way or another. The vast majority fit into the following categories:

Helpful

Many of the positive App Review comments related to situations where App Review had been helpful to a developer. A common example of how App Review can be helpful, provided by a number of developers in our survey, was that App Review had discovered a particular bug in their app which the developer and their beta testers had missed. In one instance App Review notified a developer about a graphical bug that only appeared in certain versions of OS X and sent the developer screenshots which documented the bug. Similarly, an update of an app from a different developer was crashing on an older system and App Review sent crash logs to help the developer quickly fix the problem before it was released to customers.

One developer also mentioned that the Developer Relations team at Apple can provide further information and assistance when a feature is rejected, or provide guidance when a developer is concerned that a feature in development may not be approved.

Contacts at Apple

Many developers had great words to say about various contacts they had made within Apple (inside and outside of the App Review team). Developers explained that these contacts were incredibly helpful in situations where the developer had run into an App Review problem and was having no luck in resolving the situation via the normal channels. Although many developers classified the utility of personal contacts at Apple as a positive, others pointed out that if developers had to ask for help from contacts at Apple, it highlighted the very failure of the App Review process.

Expedited Reviews

Apple is not oblivious to the problem of a slow App Review process, because it allows developers to request an ‘expedited review’ which effectively permits a particular update to be fast-tracked through App Review. Apple describes it as follows:

If you face extenuating circumstances, you can request the review of your app to be expedited. These circumstances include fixing a critical bug in your app on the App Store or releasing your app to coincide with an event you are directly associated with.

Apple will not grant every expedited review request, but developers who have been granted one say the process was fast – usually seeing their app reviewed within 24 hours. It is worth noting, though, that an expedited review simply allows a developer's app to jump to the front of the review queue and skip the "Waiting for Review" phase.6

Suggested Solutions to Improve App Review

We also asked developers to make suggestions on how Apple could improve App Review. Although we have not included every suggestion, these were the most frequently mentioned. Because these suggestions are from many different developers, note that some suggestions may be in direct conflict with each other; we are not suggesting that Apple adopt every one of them.

Speed, Speed, Speed

App Review needs to get faster; one week is too long. This was a nearly universal sentiment amongst the developers surveyed. In addition to the obvious “hire more people”, there were a range of other suggestions for how Apple might accomplish faster reviews.

Prioritized Reviews: Some developers felt that those apps which are featured, or are about to be featured should get priority reviews; “left hand, please talk to right hand”.7

Minor Releases: Bug fixes and releases with minor tweaks should receive a streamlined review process, or perhaps some kind of review which can be done automatically, flagging potential issues to App Review’s human reviewers. (Counter-point: how do you define what a minor release is?)

Trusted Developers: If an app and/or developer is in good standing and can be trusted, it is felt by some that minor updates and bug fixes should get a “mini” review or no review at all – leaving full reviews for major new features. (Counter-point: how do you define a developer in good standing or what a minor update is?)

Paid Expedited Reviews: It would be controversial, but it was suggested that developers could pay to jump to the front of the line to guarantee an expedited review for critical bugs or to meet a launch deadline. (Counter-point: The danger of course is that this policy would favor the developers with deep pockets and put many independent developers at a disadvantage.)

Roll-Back Updates

If an app has a critical bug, the developer has two options: quickly fix the bug and submit the update for review (perhaps requesting an expedited review), or temporarily remove the app from sale whilst the fix goes through review. But a few developers provided examples where their bug-fix update had itself been rejected for one reason or another. This is a nightmare for developers: either more and more users are exposed to the critical bug, or they lose even more customers because their app is not available for sale. To reduce the adverse impact, a number of developers suggested the rather obvious solution of allowing a developer to roll back to a previous version of their app whilst they go through the bug-fixing and App Review process.

Partial App Store Release

Bugs are inevitable, but with automatic app updates on iOS and a slow App Review process, a critical bug can quickly wreak (potentially irreversible) damage on an app’s reputation. In addition to the roll-back functionality, it would be useful to allow developers to release an app update to a portion of their user base to ensure everything goes smoothly. Some game developers have done something similar when launching new games by initially limiting their app’s availability to a smaller country such as New Zealand – but this “hack” only works for new apps, not updates.
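Staged rollouts of this kind are commonly implemented with a deterministic hash of the user ID, so each user consistently falls inside or outside the rollout cohort. Here is a minimal sketch of the idea under that assumption – the function and its parameters are hypothetical illustrations, not an actual App Store API:

```python
import hashlib

def in_rollout(user_id: str, update_version: str, percent: int) -> bool:
    """Deterministically place a user in the first `percent` of 100 buckets.

    Hashing user_id together with the version string reshuffles the cohort
    for each new update, so the same users aren't always the early testers.
    """
    digest = hashlib.sha256(f"{user_id}:{update_version}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Serve the 2.1 update to roughly 10% of users first; widen the rollout
# (or roll back) depending on the crash reports that come in.
eligible = [u for u in ("alice", "bob", "carol", "dave") if in_rollout(u, "2.1", 10)]
```

Because the bucketing is a pure function of the inputs, no per-user state needs to be stored: the server can answer "is this user in the rollout?" identically on every request.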

Greater Transparency

What amplifies the problem with App Review’s slowness is that developers have absolutely no idea how long the process is going to take. As one developer quipped, you get more feedback and transparency when you buy a pizza online than you do from App Review. Give developers some estimates so that they can better plan their marketing and development of their next update.

Better Communication

A common suggestion was to improve communication from App Review in various ways. Some wanted App Review to respond quicker; others wanted a dedicated contact so that they could build rapport and work through issues with one person who knows the background of their app. One developer eloquently explained why better communication is important:

“Developers simply want to have a more direct conversation with the person reviewing their app and have the opportunity to discuss or clarify contentious issues in a normal fashion, rather than firing a message into the black box and wait for an App Store Review Guideline to be cited back at you”.

Pre-Approval of Innovative Features

As explained above, some developers are hesitant to pour significant time and resources into innovative features that don’t exist on the App Store yet. Give these developers some certainty by offering a pre-approval process in which a developer can give a detailed explanation of what they want to do, and App Review can provide guidance as to whether it would be approved and which aspects of the idea risk falling foul of the Guidelines.

Remove the Threat of Retribution

One of the most absurd lines in the App Store Review Guidelines goes as follows: “If you run to the press and trash us, it never helps”.

A strong argument could be made that without the media highlighting the absurdity of Apple’s position in the Notification Center widgets saga, those widgets would still be banned today. I think we can all agree, whether we use those widgets or not, that iOS would be worse if that was the case.

There’s a reason why we rely on the media to hold governments and corporations to account; it forces them to be better. Accountability is important, and it is disappointing that one of the opening lines of the Guidelines actively discourages, arguably even threatens, developers from holding App Review (and Apple as a whole) to account for its decisions.

Double-Blind Rejections

If an app is rejected, have it automatically re-reviewed by another member of App Review (without informing them that it has been rejected once before). This will help prevent arbitrary rejections and encourage App Review to deeply consider situations in which different App Reviewers come to different conclusions. To prevent waste, obvious breaches of rules such as defamatory or offensive references to religious, cultural or ethnic groups, could be rejected without the need for a second rejection.

Reduce Severity of Metadata Rejections

Rather than require an app go through the whole App Review process again, Apple could allow the app to be approved and instead give developers 24-72 hours to correct any metadata errors. Alternatively, approve the app but require the developer to fix the metadata error (and provide an undertaking to Apple that they have fixed it) before the developer can let the update go live on the App Store. Either option would prevent the wasted time and resources that occurs when a developer has to go through the whole App Review process again.

Clarify the Guidelines

The main complaint about the App Store Review Guidelines is that they can be purposefully vague and have been applied by App Review in ways that are extremely broad. This creates uncertainty for developers, which leads to developers being risk-averse and creating fewer innovative apps. One way to resolve this is to add more detail to the Guidelines and add specificity as to what is not permitted.

Change Attitude

One developer put into words what many others alluded to: App Review seems to have a policy of being conservative at first, and then only over time loosening their restrictions. In that developer’s opinion, it should be the reverse in order to encourage experimentation and innovation. As another developer pointed out, at this point in the life of the App Store it really is quite bizarre when App Review “nitpicks” on an app’s user experience.

Narrow the Scope

Another developer touched upon a similar suggestion – radically reduce the scope of App Review and stop the attempts at being an arbiter of how third-party developers use iOS features in their apps. Keep the focus of App Review on stopping malicious, dangerous and offensive apps and let developers create anything else, even if it may go against Apple’s design philosophy.

Make App Review Optional

I debated whether or not to include this one, because I don't think it'll ever happen. It certainly was not a popular opinion amongst the developers surveyed, but it was suggested enough times to merit a brief discussion. Essentially these developers were advocating a system on iOS similar to that which exists on the Mac. They want users to be able to go to the App Store where apps are required to go through App Review, or allow users to download and install apps from outside the App Store (with no App Review). Many of these developers suggested implementing something like the Mac's Gatekeeper security system as compulsory for all apps (not just those on the App Store) to keep users safe.

Increase Scrutiny

On the flipside, a few developers expressed a desire for App Review to actually increase scrutiny. These developers felt that there should be a higher standard of quality for apps to be approved by App Review.

Shine Some Light on App Review

One developer described App Review as a black box, and a lack of transparency surrounding App Review has clearly been a recurring theme in developer complaints. Apple could pull away the curtain and explain to a member of the press, or at a WWDC session, just how App Review works internally. It could explain the challenges App Review faces, highlight the changes it is making to improve the process for developers, and introduce members of the App Review team to the developer community.

Illustration by Thomas Fink-Jensen

Conclusion

Apple has thrived since the introduction of the iPhone, and a large part of that success can be attributed to the third-party developers who have created a rich ecosystem of apps for the App Store. It should cause concern at Apple that many of those developers have such a low opinion of App Review – a pivotal process that every developer must deal with on a regular basis.

App Review is killing my love of software development. I don't like punting on good features or app ideas because I'm afraid they might draw a rejection. I dread it every time I hit the submit button. "What kind of silly thing are they going to flag me on this time?" is always in the back of my mind. App Review is a process that brings about fear and apprehension in every iOS and Mac developer I know. There's something seriously wrong with that.

I have little doubt that Apple wants its developers to succeed, but I suspect the huge success of the App Store has overshadowed the real problems with App Review – compounded by the limited exposure that those inside Apple have to the App Review process. It is important that Apple takes steps to resolve or alleviate the problems that developers have raised with us. The current situation sees Apple stifling innovation in small but notable ways, causing undue frustration, and lowering the morale of developers.

The situation is far from critical, but it is one that should be addressed in meaningful ways by Apple in the coming months.

Credits

Thank you to every developer who took the time to answer my survey last month. Your feedback was absolutely invaluable, and this story would not exist without it.

The fantastic illustrations that accompany this article are all thanks to the talented Thomas Fink-Jensen; you can follow him on Twitter or visit his website.

Thanks to Dave Verwer of iOS Dev Weekly who kindly provided us with the full archive of raw data from AppReviewTimes.com.

Finally, a big thank you to Federico Viticci, John Voorhees, Myke Hurley, and everyone else who read drafts and helped with the editing of this article.


  1. 70% were iOS-only developers, 29% were iOS & Mac developers, and 1% were Mac-only developers. Although we didn’t specifically ask developers, there appeared to be a mix of hobbyists, independent developers, contractors, and those who develop apps as their full time jobs. The response we got from those 172 developers was astounding and incredibly helpful, with their written responses exceeding 20,000 words. 
  2. When we checked this table on 28 February 2016, the information in the table was 9 days old, having last been updated on 19 February 2016. I also wonder if this table gives insight into Apple's goals regarding App Review speed – are they simply aiming for 100% of apps reviewed within 5 days? 
  3. Interestingly enough, at the time of publishing, App Review times have fallen to some of the lowest levels in App Store history (4 days for iOS and 3 days for the Mac). But it remains to be seen whether this is a temporary improvement or a more permanent fall in the processing times of App Review. 
  4. Apple’s reasoning to the developer was that “Notification Center widgets on iOS cannot perform any calculations”. 
  5. In fact the PCalc rejection was particularly absurd to many, because Apple includes a calculator widget for Notification Center on OS X. 
  6. Expedited reviews do little to expedite the "In Review" phase – as one developer discovered when it took their app 2 weeks to be approved, despite being granted an expedited review. 
  7. Although I wasn't able to confirm this information, it was suggested that large companies with chart-topping apps, and those that have business deals with Apple, already get priority treatment. It wouldn't surprise me, but nonetheless, take that information with a grain of salt. 
02 Mar 03:08

"In both Middle America and Middle England, among both rednecks and chavs, voters who have had more..."

“In both Middle America and Middle England, among both rednecks and chavs, voters who have had more than they can stomach of being patronised, nudged, nagged and basically treated as diseased bodies to be corrected rather than lively minds to be engaged are now putting their hope into a different kind of politics.”

-

Brendan O’Neill, From Trumpmania to Euroscepticism: Revenge of the Plebs

02 Mar 01:51

Have you ever wondered why there aren't more trees in the Mediterranean ...

mkalus shared this story from Fefes Blog.

Have you ever wondered why there aren't more trees in the Mediterranean region? The Romans cut them down to build ships. No, really! A reader comments:
Personally, I'd be very pleased if you pointed the media landscape […] to the fact that the flood maps were updated at the beginning of this year. (e.g. http://flood.firetree.net/)

Until recently these only went up to 35m; after the Antarctic ice survey was completed, the maps were adjusted and now show the predicted 60 metres.

[…]

Which brings us to the responsibility you identified. We do indeed have that towards the Syrians. Syria was once the richest province in the Roman Empire.

But once the trees were gone, the North Africans simply moved north and searched for forests all the way to Austria. Wood was the oil of its day in the resource pyramid.

And one more detail:

Responsibility or not, once the phosphate fertilizer, a fossil resource mined mainly in North Africa, runs out, things will get seriously uncomfortable here in Europe. […] According to the UN, the known deposits will last until 2050 at constant consumption (and consumption never stays constant). Once we've dealt with that problem, water is next.
02 Mar 01:34

Samsung reveals why it dropped the microSD card slot on Galaxy S6 and S6 edge

by Rajesh Pandey
Samsung was widely criticised for its decision to drop the microSD card slot from the Galaxy S6 and Galaxy S6 edge. The presence of a microSD card slot was among the key strengths of Samsung’s flagship devices, so it made little sense for the company to drop it. No wonder the company ended up reintroducing the feature with the Galaxy S7 and Galaxy S7 edge this year. Continue reading →
02 Mar 01:34

Deals: Our streaming content pick for the best TV around $500, the TCL 50FS3800, is down to $400 (from $420)

by WC Staff

Best Deals: Our streaming content pick for the best TV around $500, the TCL 50FS3800, is down to $400 (from $420) [Amazon]

02 Mar 01:34

Wind Mobile restructuring WINDtab on March 22

by Ian Hardy

Wind Mobile has gone through many changes since acquiring wireless spectrum in 2008. The carrier was recently purchased by Shaw for $2.6 billion, which is still pending approval from Industry Canada, and it seems there will be changes to its plan offerings later this month.

According to Wind Mobile’s terms and conditions, its WINDtab system will once again be restructured. WINDtab currently deducts 10 percent of your monthly payment from the outstanding balance on your account. However, effective March 22nd, Wind will scrap this approach to simplify customers’ monthly bills.

“We’ve simplified how WINDtab works. As of your next invoice, WINDtab balances will now decrease by an equal amount each month, and you will be able to make partial payments too. The new monthly WINDtab reduction will be calculated by dividing the remaining WINDtab balance by the number of months until Pay Off Promise,” Wind stated in a note to subscribers.

The new calculation is as follows:
Current WINDtab balance
divided by
Number of months until Pay Off Promise
equals
Monthly amount to be deducted from the total WINDtab balance.

Wind Mobile gave this outline as:

Old calculation:
$40 monthly plan fee x 10% = $4 monthly WINDtab reduction
After 12 months: $4 x 12 = $48 total WINDtab reduction
$250 – $48 = $202 balance owing

New calculation:
$250 ÷ 24 months = $10.42 monthly WINDtab reduction
After 12 months: $10.42 x 12 = $125.04 total WINDtab reduction
$250 – $125.04 = $124.96 balance owing
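In code, the new scheme is a single division (a quick illustrative sketch; rounding fractional cents to two decimal places is an assumption, since Wind's note doesn't specify it):

```python
def windtab_monthly_reduction(balance, months_to_payoff):
    """Equal monthly reduction under the new WINDtab scheme."""
    return round(balance / months_to_payoff, 2)

monthly = windtab_monthly_reduction(250.00, 24)        # 10.42
reduced_after_year = round(monthly * 12, 2)            # 125.04
balance_owing = round(250.00 - reduced_after_year, 2)  # 124.96
```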

Overall, it seems that Wind Mobile wants you to upgrade to newer devices faster, especially with handset prices quickly declining.

In addition, Wind stated, “we will check if this change to WINDtab benefits you retroactively, and if so give you a one-time credit towards your existing WINDtab balance.”

Source Wind
02 Mar 01:33

Do You Send Out Emails To Your Community Like This?

by Richard Millington

2 years ago I worked with a very capable, personable, and friendly community manager.

Let’s call her Sarah.

Sarah wasn’t an expert in her field, but had the traits to be great at building communities.

Her first email out to her community read something like:

“Hey everyone,

I’ve just been hired to replace (name) as the new community manager here!

I’m delighted to be working with you all and can’t wait to meet you in person.

I’ve noticed we’ve never had a place to share what you’re working on. So I’ve started one here.

Tell me your name, what kind of work you’re doing, and how I can help.

If you have any questions, reply to this address.

Speak to you all soon!

Sarah”

This email was short, enthusiastic, and direct.

If you follow the tips out there, this is exactly the kind of email you would write for your audience too.

Sarah referenced someone they already knew, she demonstrated her passion, and she created a place for people to introduce themselves.

But this email killed any credibility she hoped to gain with the audience.

So what happened?

The problem is that she was dealing with a group of technical experts working at a high level within the organization. This isn’t how that group speaks to one another.

The message screamed ‘not one of us’ and ‘low priority’ at a time when they were keen to connect with high value people like themselves.

The tone of the email was wrong. The call to action was wrong. It didn’t reflect authority or credibility.

Sarah followed all the free tips she could find and wrote an email that killed her credibility.

We Have The Tips, Now To Focus On The Execution

We’ve conducted endless interviews, surveys, and met up with dozens of you over the past year.

One of the biggest challenges is this: you’re following all the free tips you can find and still not getting the level of engagement you want.

You’re paying thousands of dollars for platforms every year, just as much again in staff costs, and it’s not driving the level of engagement you need.

The problem isn’t that we need more free tips; the problem lies in psychology.

A Lesson From Seth Godin

Back in 2008 I did an internship with Seth Godin in New York.

I wasn’t alone; there were interns working virtually too.

We spent a lot of time planning, strategizing, and breaking down each other’s plans to rebuild them better.

One day Seth wrote something that stuck with me.   

I’ve lost the quote, so I’ll paraphrase:

‘Doing strategy and blue sky thinking is fun. But it’s a tiny portion of what makes us successful.

Ultimately it’s the empathizing, persuading, influencing, and cajoling people which brings success.

If you can’t do this, you’re probably not a community builder’

This is as true today as it was back in 2008.

We Never Talk About The Biggest Challenges

All these free tips aren’t helping us increase engagement.

Many of you have spent hundreds of thousands of dollars to boost engagement and you’re still not getting the results you want.

The problem isn’t the tips; it’s that we’ve never tried to get better at the core skills needed to implement them.

80% of doing this work is about developing incredible skills in persuasion, influence, and building rapport.

Very few of the messages we write are persuasive. Most don’t use any of the principles of persuasion. Most don’t embrace any of the psychological techniques that can help you get members to do what you need them to do.

Sarah’s email wasn’t bad in isolation; in many communities it might have been great. She just didn’t have the deep engagement skills she needed.

Here are a few examples:

  • Audience profiling. Understanding the audience’s current views and favorability towards the idea, learning who they respect and why, identifying the keywords they use when interacting with each other, etc.
  • Credibility. Knowing how to gain credibility and build rapport with key figures. Knowing to get referrals from the top people rather than introducing yourself in a newcomer email. Learning to do deeper interviews that gain respect from the audience.
  • Persuasion. Knowing how to write and structure every message persuasively. Creating the right structure, using the right words, constructing the right narrative, deploying the right metaphors, etc.
  • Motivation. Being able to identify the key motivations of group clusters and use those motivations in any call to action. Ensuring the call to action aligns with personal goals.

These are some of the skills that will drive engagement, not another massive list of free tips.

Sarah, like many people, consumed all the tips she could find and destroyed her credibility in her very first message. Hers is an extreme example, but just one of many.

Free tips are useful, but understanding psychology, persuasion, influence, building rapport, and credibility will help you so much more.  

We’ve spent 20 months now building the Advanced Engagement Methods program.

We’ve developed something I’m immensely proud of: a single training and coaching program that coaches you in the skills that will make you better at driving real, meaningful engagement through psychological principles.

To put this together we’ve tested hundreds of ideas, interviewed dozens of experts doing deep engagement work, and lined up several experts to give sessions during the program.

We’ve reviewed 500+ academic articles and pulled out the best insights to help you.

We’ve tested the methods in many different fields; internal and external engagement efforts, knowledge management and non-profits, content creation and social media.

This isn’t for beginners, it’s for people who have been in the field for several years already and consumed as many free tips as possible.

If you can learn and deploy these techniques effectively, you will be able to drive up engagement without spending another $50k on yet another new community platform.

And this is the final week you can sign up for the program.

I really hope a few of you will join us.  

www.feverbee.com/aem

(Fee per person is 3420, group rates available)

02 Mar 01:33

Turncoats: A New Architectural Debate

by pricetags

From Tony Osborn:

We are hosting an architectural debate in Vancouver that I’m sure your readers will be interested in. It’s a local version of a debate format that was started in London. It promises to be a really exciting night, which is more than you can say for most of the events in our field. The website is www.turncoats.ca.

Some links to posts about the London version – here, here and here.

.

Turncoats

Vancouver’s Architectural debates are rubbish. We’ve all been there: a panel of similar designers with similar views taking it in turns to talk at length about their similar work – too polite, too deferential, too dull. At best they are lukewarm love-ins, critically impotent, elitist and stuffy. Turncoats is a shot in the arm. Framed by theatrically provocative opening gambits, a series of free debates will rugby tackle six fundamental issues facing contemporary practice with a playful and combative format designed to ferment open and critical discussion, turning conventional consensus on its head.

Original Sin

We deride the derivative, we mock mimics, we fear facsimiles. Why? Hollywood reboots movies, theatre directors restage plays, musicians make covers. The best cultural production comes from the clear consensus that iterating is inventive, yet in architecture we despise copying above all things. Our elitist and egotistical obsession with cosmetic novelty necessitates the endless, pointless reinvention of form, reducing architecture to a spectacle of super-size billboard branding. Is bad originality preferable to a brilliant copy? Bullshit!

The Panel:

    • Clinton Cuddington is an architect and the founding principal of Measured Architecture Inc., an award-winning full-service architectural firm specializing in high quality, high performance modern buildings.
    • Fernanda Hannah teaches design and is the co-owner of Monzu and Hannah Design, a local firm focusing on residential and reclaimed wood designs. She has lived and worked in Barcelona, New York and Mexico City.
    • Javier Campos is an architect and founder of Campos Studio. His work includes several highly-awarded buildings, public art pieces, and competition entries.
    • Alicia Medina is a cofounder and director at the Laboratory for Housing Alternatives (LOHA), as well as an intern architect at Marianne Amodio Architecture Studio. Her work has crossed boundaries between architecture, interior design, graphic design, public space installation and craft brewing.
    .

Friday, April 1

6 pm

$10

DUDOC Dutch Urban Design Centre – 1445 West Georgia Street

Get a ticket


02 Mar 01:33

How to Deploy Software


Make your team’s deploys as boring as hell and stop stressing about it.

Let's talk deployment

Whenever you make a change to your codebase, there's always going to be a risk that you're about to break something.

No one likes downtime, no one likes cranky users, and no one enjoys angry managers. So the act of deploying new code to production tends to be a pretty stressful process.

It doesn't have to be as stressful, though. There's one phrase I'm going to be reiterating over and over throughout this whole piece:

Your deploys should be as boring, straightforward, and stress-free as possible.

Deploying major new features to production should be as easy as starting a flamewar on Hacker News about spaces versus tabs. They should be easy for new employees to understand, they should be defensive towards errors, and they should be well-tested far before the first end-user ever sees a line of new code.

This is a long — sorry not sorry! — written piece specifically about the high-level aspects of deployment: collaboration, safety, and pace. There's plenty to be said for the low-level aspects as well, but those are harder to generalize across languages and, to be honest, a lot closer to being solved than the high-level process aspects. I love talking about how teams work together, and deployment is one of the most critical parts of working with other people. I think it's worth your time to evaluate how your team is faring, from time to time.

A lot of this piece stems from both my experiences during my five-year tenure at GitHub and during my last year of advising and consulting with a whole slew of tech companies big and small, with an emphasis on improving their deployment workflows (which have ranged from "pretty respectable" to "I think the servers must literally be on fire right now"). In particular, one of the startups I'm advising is Dockbit, whose product is squarely aimed at collaborating on deploys, and much of this piece stems from conversations I've had with their team. There's so many different parts of the puzzle that I thought it'd be helpful to get it written down.

I'm indebted to some friends from different companies who gave this a look-over and helped shed some light on their respective deploy perspectives: Corey Donohoe (Heroku), Jesse Toth (GitHub), Aman Gupta (GitHub), and Paul Betts (Slack). I continually found it amusing how the different companies might have taken different approaches but generally all focused on the same underlying aspects of collaboration, risk, and caution. I think there's something universal here.

Anyway, this is a long intro and for that I'd apologize, but this whole goddamn writeup is going to be long anyway, so deal with it, lol.

Table of Contents

  1. Goals

    Aren't deploys a solved problem?

  2. Prepare

    Start prepping for the deploy by thinking about testing, feature flags, and your general code collaboration approach.

  3. Branch

    Branching your code is really the fundamental part of deploying. You're segregating any possible unintended consequences of the new code you're deploying. Start thinking about different approaches involved with branch deploys, auto deploys on master, and blue/green deploys.

  4. Control

    The meat of deploys. How can you control the code that gets released? Deal with different permissions structures around deployment and merges, develop an audit trail of all your deploys, and keep everything orderly with deploy locks and deploy queues.

  5. Monitor

    Cool, so your code's out in the wild. Now you can fret about the different monitoring aspects of your deploy, gathering metrics to prove your deploy, and ultimately making the decision as to whether or not to roll back your changes.

  6. Conclusion

    "What did we learn, Palmer?"
    "I don't know, sir."
    "I don't fuckin' know either. I guess we learned not to do it again."
    "Yes, sir."

How to Deploy Software was originally published on March 1, 2016.

Goals

Aren't deploys a solved problem?

If you’re talking about the process of taking lines of code and transferring them onto a different server, then yeah, things are pretty solved and are pretty boring. You’ve got Capistrano in Ruby, Fabric in Python, Shipit in Node, all of AWS, and hell, even FTP is going to stick around for probably another few centuries. So tools aren’t really a problem right now.

So if we have pretty good tooling at this point, why do deploys go wrong? Why do people ship bugs at all? Why is there downtime? We’re all perfect programmers with perfect code, dammit.

Obviously things happen that you didn’t quite anticipate. And that’s where I think deployment is such an interesting area for small- to medium-sized companies to focus on. Very few areas will give you a bigger bang for your buck. Can you build processes into your workflow that anticipate these problems early? Can you use different tooling to make collaborating on your deploys easier?

This isn't a tooling problem; this is a process problem.

The vast, vast majority of startups I've talked to the last few years just don't have a good handle on what a "good" deployment workflow looks like from an organizational perspective.

You don't need release managers, you don't need special deploy days, you don't need all hands on deck for every single deploy. You just need to take some smart approaches.

Prepare

Start with a good foundation

You've got to walk before you run. I think there's a trendy aspect of startups out there that all want to get on the coolest new deployment tooling, but when you pop in and look at their process they're spending 80% of their time futzing with the basics. If they were to streamline that first, everything else would fall in place a lot quicker.

Tests

Testing is the easiest place to start. It's not necessarily part of the literal deployment process, but it has a tremendous impact on it.

There's a lot of tricks that depend on your language or your platform or your framework, but as general advice: test your code, and speed those tests up.

My favorite quote about this was written by Ryan Tomayko in GitHub's internal testing docs:

We can make good tests run fast but we can't make fast tests be good.

So start with a good foundation: have good tests. Don't skimp out on this, because it impacts everything else down the line.

Once you start having a quality test suite that you can rely upon, though, it's time to start throwing money at the problem. If you have any sort of revenue or funding behind your team, almost the number one area you should spend money on is whatever you run your tests on. If you use something like Travis CI or CircleCI, run parallel builds if you can and double whatever you're spending today. If you run on dedicated hardware, buy a huge server.

The amount of benefit I've seen companies gain by moving to a faster test suite is one of the most important productivity benefits you can earn, simply because it impacts iteration feedback cycles, time to deploy, developer happiness, and inertia. Throw money at the problem: servers are cheap, developers are not.

I made an informal Twitter poll asking my followers just how fast their test suite ran. Granted, it's hard to account for microservices, language variation, the surprising number of people who didn't have any tests at all, and full-stack vs quicker unit tests, but it still became pretty clear that most people are going to be waiting at least five minutes after a push to see the build status:

How fast should fast really be? GitHub's tests generally ran within 2-3 minutes while I was there. We didn't have a lot of integration tests, which allowed for relatively quick test runs, but in general the faster you can run them the faster you're going to have that feedback loop for your developers.

There are a lot of projects around aimed at helping parallelize your builds. There's parallel_tests and test-queue in Ruby, for example. There's a good chance you'll need to write your tests differently if your tests aren't yet fully independent from each other, but that's really something you should be aiming to do in either case.
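The shape of what those tools do can be sketched in a few lines (illustrative only: `run_test_file` is a stand-in, and real runners isolate tests in separate processes rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def run_test_file(path):
    # Stand-in for "run the tests in this file and report pass/fail".
    return (path, True)

def run_suite(test_files, workers=4):
    # Fan the suite out across workers; this only speeds things up
    # if the test files are fully independent of one another.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test_file, test_files))
```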

Feature Flags

The other aspect of all this is to start looking at your code and transitioning it to support multiple deployed codepaths at once.

Again, our goal is that your deploys should be as boring, straightforward, and stress-free as possible. The natural stress point of deploying any new code is running into problems you can't foresee, and you ultimately impact user behavior (i.e., they experience downtime and bugs). Bad code is going to end up getting deployed even if you have the best programmers in the universe. Whether that bad code impacts 100% of users or just one user is what's important.

One easy way to handle this is with feature flags. Feature flags have been around since, well, technically since the if statement was invented, but the first time I remember really hearing about a company's usage of feature flags was Flickr's 2009 post, Flipping Out.

These allow us to turn on features that we are actively developing without being affected by the changes other developers are making. It also lets us turn individual features on and off for testing.

Having features in production that only you can see, or only your team can see, or all of your employees can see provides for two things: you can test code in the real world with real data and make sure things work and "feel right", and you can get real benchmarks as to the performance and risk involved if the feature got rolled out to the general population of all your users.

The huge benefit of all of this means that when you're ready to deploy your new feature, all you have to do is flip one line to true and everyone sees the new code paths. It makes that typically-scary new release deploy as boring, straightforward, and stress-free as possible.
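At its simplest, a feature flag is just a conditional around the new codepath. A minimal sketch (flag names and in-memory storage are illustrative; real systems keep flags in a database or config service so they can be flipped without a deploy):

```python
FLAGS = {
    "new_dashboard": {"enabled": False, "allowed_users": {"alice"}},
}

def feature_enabled(flag, user=None):
    f = FLAGS.get(flag)
    if f is None:
        return False
    if f["enabled"]:
        return True                    # flipped on for everyone
    return user in f["allowed_users"]  # staff-only preview in production

# In a request handler:
#   if feature_enabled("new_dashboard", user=current_user):
#       render_new_dashboard()
#   else:
#       render_old_dashboard()
```

Shipping the feature then really is a matter of flipping one value to true.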

Provably-correct deploys

As an additional step, feature flags provide a great way to prove that the code you're about to deploy won't have adverse impacts on performance and reliability. There's been a number of new tools and behaviors in recent years that help you do this.

I wrote a lot about this a couple years back in my companion written piece to my talk, Move Fast and Break Nothing. The gist of it is to run both codepaths of the feature flag in production and only return the results of the legacy code, collect statistics on both codepaths, and be able to generate graphs and statistical data on whether the code you're introducing to production matches the behavior of the code you're replacing. Once you have that data, you can be sure you won't break anything. Deploys become boring, straightforward, and stress-free.

Move Fast Break Nothing screenshot

GitHub open-sourced a Ruby library called Scientist to help abstract a lot of this away. The library's being ported to most popular languages at this point, so it might be worth your time to look into this if you're interested.
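The pattern Scientist abstracts away can be sketched like this (illustrative, not Scientist's actual API): run both codepaths, compare the results, publish the stats, and always return the legacy result:

```python
import time

def experiment(name, control, candidate, publish):
    # Run the legacy codepath; its result is what the caller gets.
    t0 = time.perf_counter()
    control_result = control()
    control_ms = (time.perf_counter() - t0) * 1000

    # Run the candidate too, but never let it break production.
    t0 = time.perf_counter()
    try:
        candidate_result, candidate_error = candidate(), None
    except Exception as e:
        candidate_result, candidate_error = None, e
    candidate_ms = (time.perf_counter() - t0) * 1000

    publish({
        "experiment": name,
        "matched": candidate_error is None and control_result == candidate_result,
        "control_ms": control_ms,
        "candidate_ms": candidate_ms,
    })
    return control_result
```

Graph the published stats over a few days of production traffic and you have the evidence that the new codepath behaves, before a single user ever depends on it.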

The other leg of all of this is percentage rollout. Once you're pretty confident that the code you're deploying is accurate, it's still prudent to only roll it out to a small percentage of users first to double-check and triple-check nothing unforeseen is going to break. It's better to break things for 5% of users instead of 100%.

There's plenty of libraries that aim to help out with this, ranging from Rollout in Ruby, Togglz in Java, fflip in JavaScript, and many others. There's also startups tackling this problem too, like LaunchDarkly.
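The core of a percentage rollout is a stable hash, so each user gets a consistent answer and the enabled population only ever grows as you raise the percentage (a sketch; the libraries above add storage, UIs, and group targeting on top of this):

```python
import hashlib

def rollout_enabled(flag, user_id, percent):
    # Hash flag + user into a stable bucket from 0-99; a user's bucket
    # never changes, so raising `percent` only ever adds users.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```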

It's also worth noting that this doesn't have to be a web-only thing. Native apps can benefit from this exact behavior too. Take a peek at GroundControl for a library that handles this behavior in iOS.


Feeling good with how you're building your code out? Great. Now that we got that out of the way, we can start talking about deploys.

Branch

Organize with branches

A lot of the organizational problems surrounding deployment stem from a lack of communication between the person deploying new code and the rest of the people who work on the app with her. You want everyone to know the full scope of changes you're pushing, and you want to avoid stepping on anyone else's toes while you do it.

There's a few interesting behaviors that can be used to help with this, and they all depend on the simplest unit of deployment: the branch.

Code branches

By "branch", I mean a branch in Git, or Mercurial, or whatever you happen to be using for version control. Cut a branch early, work on it, and push it up to your preferred code host (GitLab, Bitbucket, etc).

You should also be using pull requests, merge requests, or other code review to keep track of discussion on the code you're introducing. Deployments need to be collaborative, and using code review is a big part of that. We'll touch on pull requests in a bit more detail later in this piece.

Code Review

The topic of code review is long, complicated, and pretty specific to your organization and your risk profile. I think there's a couple important areas common to all organizations to consider, though:

  • Your branch is your responsibility. The companies I've seen who have tended to be more successful have all had this idea that the ultimate responsibility of the code that gets deployed falls upon the person or people who wrote that code. They don't throw code over the wall to some special person with deploy powers or testing powers and then get up and go to lunch. Those people certainly should be involved in the process of code review, but the most important part of all of this is that you are responsible for your code. If it breaks, you fix it… not your poor ops team. So don't break it.

  • Start reviews early and often. You don't need to finish a branch before you can request comments on it. If you can open a code review with imaginary code to gauge interest in the interface, for example, the twenty minutes spent doing that and getting told "no, let's not do this" are far preferable to blowing two weeks on the full implementation instead.

  • Someone needs to review. How you do this can depend on the organization, but certainly getting another pair of eyes on code can be really helpful. For more structured companies, you might want to explicitly assign people to the review and demand they review it before it goes out. For less structured companies, you could mention different teams to see who's most readily available to help you out. In either end of the spectrum, you're setting expectations that someone needs to lend you a hand before storming off and deploying code solo.

Branch and deploy pacing

There's an old joke that's been passed around from time to time about code review. Whenever you open a code review on a branch with six lines of code, you're more likely to get a lot of teammates dropping in and picking apart those six lines left and right. But when you push a branch that you've been working on for weeks, you'll usually just get people commenting with a quick 👍🏼 looks good to me!

Basically, developers are usually just a bunch of goddamn lazy trolls.

You can use that to your advantage, though: build software using quick, tiny branches and pull requests. Make them small enough to where it's easy for someone to drop in and review your pull in a couple minutes or less. If you build massive branches, it will take a massive amount of time for someone else to review your work, and that leads to a general slow-down with the pace of development.

Confused at how to make everything so small? This is where those feature flags from earlier come into play. When my team of three rebuilt GitHub Issues in 2014, we had shipped probably hundreds of tiny pull requests to production behind a feature flag that only we could see. We deployed a lot of partially-built components before they were "perfect". It made it a lot easier to review code as it was going out, and it made it quicker to build and see the new product in a real-world environment.

You want to deploy quickly and often. A team of ten could probably deploy at least 7-15 branches a day without too much fretting. Again, the smaller the diff, the more boring, straightforward, and stress-free your deploys become.

Branch deploys

When you're ready to deploy your new code, you should always deploy your branch before merging. Always.

View your entire repository as a record of fact. Whatever you have on your master branch (or whatever you've changed your default branch to be) should be noted as being the absolute reflection of what is on production. In other words, you can always be sure that your master branch is "good" and is a known state where the software isn't breaking.

Branches are the question. If you merge your branch into master first and then deploy master, you no longer have an easy way of determining what your good, known state is without doing an icky rollback in version control. It's not necessarily rocket science to do, but if you deploy something that breaks the site, the last thing you want to do is have to think about anything. You just want an easy out.

This is why it's important that your deploy tooling allows you to deploy your branch to production first. Once you're sure that performance hasn't suffered, there are no stability issues, and your feature is working as intended, then you can merge it. The whole point of having this process is not for when things work; it's for when things don't work. And when things don't work, the solution is boring, straightforward, and stress-free: you redeploy master. That's it. You're back to your known "good" state.

Auto-deploys

Part of all that is to have a stronger idea of what your "known state" is. The easiest way of doing that is to have a simple rule that's never broken:

Unless you're testing a branch, whatever is deployed to production is always reflected by the master branch.

The easiest way I've seen to handle this is to just always auto-deploy the master branch if it's changed. It's a pretty simple ruleset to remember, and it encourages people to make branches for all but the most risk-free commits.

There's a number of features in tooling that will help you do this. If you're on a platform like Heroku, it might have an option that lets you automatically deploy new versions on specific branches. CI providers like Travis CI will also allow auto-deploys on build success. And self-hosted tools like Heaven and hubot-deploy — tools we'll talk about in greater detail in the next section — will help you manage this as well.
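The rule itself fits in a few lines. This is an illustrative stand-in for what your CI or deploy tooling would evaluate on every push, not any particular provider's API:

```python
# A sketch of the "auto-deploy master" ruleset: deploy automatically
# only for green builds of the default branch. Everything else goes
# through a branch deploy first.

DEFAULT_BRANCH = "master"

def should_auto_deploy(branch, build_passed):
    """Return True if this push should be deployed without a human."""
    return build_passed and branch == DEFAULT_BRANCH
```

Anything that fails this check either waits for its build to go green or gets deployed manually as a branch deploy.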

Auto-deploys are also helpful when you do merge the branch you're working on into master. Your tooling should pick up a new revision and deploy the site again. Even though the content of the software isn't changing (you're effectively redeploying the same codebase), the SHA-1 does change, which makes it more explicit as to what the current known state of production is (which again, just reaffirms that the master branch is the known state).

Blue-green deploys

Martin Fowler has pushed this idea of blue-green deployment since his 2010 article (which is definitely worth a read). In it, Fowler talks about the concept of using two identical production environments, which he calls "blue" and "green". Blue might be the "live" production environment, and green might be the idle production environment. You can then deploy to green, verify that everything is working as intended, and make a seamless cutover from blue to green. Production gains the new code without a lot of risk.

One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production.

This is a pretty powerful idea, and it's become even more powerful with the growing popularity of virtualization, containers, and generally having environments that can be easily thrown away and forgotten. Instead of having a simple blue/green deployment, you can spin up production environments for basically everything in the visual light spectrum.
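A blue-green cutover can be sketched as two identical environments behind a router. The class and names here are illustrative assumptions, not any particular tool's interface:

```python
# A sketch of blue-green deployment: deploy to the idle environment,
# verify it, then flip the router. Live traffic is never interrupted.

class BlueGreen:
    def __init__(self):
        self.envs = {"blue": None, "green": None}  # env -> deployed revision
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, revision):
        """Deploy a revision to the idle environment; returns its name."""
        target = self.idle
        self.envs[target] = revision
        return target

    def cutover(self):
        """After verifying the idle environment, flip the router to it."""
        self.live = self.idle
        return self.live
```

If the new code misbehaves after cutover, rolling back is just flipping the router again; the previous environment is still sitting there untouched.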

There's a multitude of reasons behind doing this, from having disaster recovery available to having additional time to test critical features before users see them, but my favorite is the additional ability to play with new code.

Playing with new code ends up being pretty important in the product development cycle. Certainly a lot of problems should be caught earlier in code review or through automated testing, but if you're trying to do real product work, it's sometimes hard to predict how something will feel until you've tried it out for an extended period of time with real data. This is why blue-green deploys in production are more important than having a simple staging server whose data might be stale or completely fabricated.

What's more, if you have a specific environment that you've spun up with your code deployed to it, you can start bringing different stakeholders on board earlier in the process. Not everyone has the technical chops to pull your code down on their machine and spin your code up locally — nor should they! If you can show your new live screen to someone in the billing department, for example, they can give you some realistic feedback on it before it goes out live to the whole company. That can catch a ton of bugs and problems early on.

Heroku Pipelines

Whether or not you use Heroku, take a look at how they've been building out their concept of "Review Apps" in their ecosystem: apps get deployed straight from a pull request and can be immediately played with in the real world instead of just being viewed through screenshots or long-winded "this is what it might work like in the future" paragraphs. Get more people involved early before you have a chance to inconvenience them with bad product later on.

Control

Controlling the deployment process

Look, I'm totally the hippie liberal yuppie when it comes to organizational matters in a startup: I believe strongly in developer autonomy, a bottom-up approach to product development, and generally will side with the employee rather than management. I think it makes for happier employees and better product. But with deployment, well, it's a pretty important, all-or-nothing process to get right. So I think adding some control around the deployment process makes a lot of sense.

Luckily, deployment tooling is an area where adding restrictions ends up freeing teammates up from stress, so if you do it right it's going to be a huge, huge benefit instead of what people might traditionally think of as a blocker. In other words, your process should facilitate work getting done, not get in the way of work.

Audit trails

I'm kind of surprised at how many startups I've seen that are unable to quickly bring up an audit log of deployments. There might be some sort of paper trail in a chat room transcript somewhere, but it's not something that is readily accessible when you need it.

The benefit of some type of audit trail for your deployments is basically what you'd expect: you'd be able to find out who deployed what to where and when. Every now and then you'll run into problems that don't manifest themselves until hours, days, or weeks after deployment, and being able to jump back and tie it to a specific code change can save you a lot of time.

A lot of services will generate these types of deployment listings fairly trivially for you. Amazon CodeDeploy and Dockbit, for example, have a lot of tooling around deploys in general but also serve as a nice trail of what happened when. GitHub's excellent Deployment API is a nice way to integrate with your external systems while still plugging deploy status directly into Pull Requests:

GitHub's deployment API

If you're playing on expert mode, plug your deployments and deployment times into one of the many, many time series databases and services like InfluxDB, Grafana, Librato, or Graphite. The ability to compare any given metric and layer deployment metrics on top of it is incredibly powerful: seeing a gradual increase of exceptions starting two hours ago might be curious at first, but not if you see an obvious deploy happen right at that time, too.
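One payoff of keeping deploy timestamps around: given the moment a metric started misbehaving, you can look up the deploy that immediately preceded it. A minimal sketch, assuming deploys are logged as (timestamp, sha) pairs:

```python
# Given a list of logged deploys and the time an anomaly started,
# find the most recent deploy at or before that moment. Timestamps
# here are plain numbers for simplicity (e.g. unix time).

def deploy_before(deploys, spike_time):
    """deploys: list of (timestamp, sha) tuples.
    Returns the latest deploy at or before spike_time, or None."""
    candidates = [d for d in deploys if d[0] <= spike_time]
    return max(candidates, default=None)
```

That turns "exceptions started climbing two hours ago, curious" into "exceptions started climbing right after deadbeef went out" in one lookup.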

Deploy locking

Once you reach the point of having more than one person in a codebase, you're naturally going to have problems if multiple people try to deploy different code at once. While it's certainly possible to have multiple branches deployed to production at once — and it's advisable, as you grow past a certain point — you do need to have the tooling set up to deal with those deploys. Deploy locking is the first thing to take a look at.

Deploy locking is basically what you'd expect it to be: locking production so that only one person can deploy code at a time. There's many ways to do this, but the important part is that you make this visible.

The simplest way to achieve this visibility is through chat. A common pattern might be to set up deploy commands that simultaneously lock production like:

/deploy <app>/<branch> to <environment>

i.e.,

/deploy api/new-permissions to production

This makes it clear to everyone else in chat that you're deploying. I've seen a few companies hop in Slack and mention everyone in the Slack deploy room with @here I'm deploying […]!. I think that's unnecessary, and only serves to distract your coworkers. By just tossing it in the room you'll be visible enough. If it's been a while since a deploy and it's not immediately obvious whether production is being used, you can add an additional chat command that returns the current state of production.

There's a number of pretty easy ways to plug this type of workflow into your chat. Dockbit has a Slack integration that adds deploy support to different rooms. There's also an open source option called SlashDeploy that integrates GitHub Deployments with Slack and gives you this workflow as well (as well as handling other aspects like locking).

Another possibility that I've seen is to build web tooling around all of this. Slack has a custom internal app that provides a visual interface to deployment. Pinterest has open sourced their web-based deployment system. You can take the idea of locking to many different forms; it just depends on what's most impactful for your team.

Once a deploy's branch has been merged to master, production should automatically unlock for the next person to use.

There's a certain amount of decorum required while locking production. Certainly you don't want people stuck waiting to deploy because a careless programmer forgot they left production locked. Automatically unlocking on a merge to master is helpful, and you can also set up periodic reminders to mention the deployer if the environment has been locked for longer than 10 minutes, for instance. The idea is to shit and get off the pot as soon as possible.
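The locking rules above (one deployer at a time, unlock on merge to master, remind after a timeout) can be sketched as a small class. This is illustrative only; a chat bot would call these methods on /deploy, on merge events, and on a timer:

```python
import time

class DeployLock:
    """One deployer holds production at a time; frees on merge to master."""

    def __init__(self, timeout=600):       # remind after 10 minutes
        self.owner = None
        self.locked_at = None
        self.timeout = timeout

    def lock(self, user, now=None):
        """Take the lock. Fails if someone else is already deploying."""
        if self.owner is not None and self.owner != user:
            return False
        self.owner = user
        self.locked_at = now if now is not None else time.time()
        return True

    def unlock_on_merge(self, merged_by):
        """Once the deploy's branch merges to master, free the lock."""
        if merged_by == self.owner:
            self.owner = None

    def needs_reminder(self, now):
        """True if the lock has been held past the timeout."""
        return self.owner is not None and now - self.locked_at > self.timeout
```

The important part, as above, is that all of these state changes are surfaced in chat so everyone can see who holds production and why.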

Deploy queueing

Once you have a lot of deployment locks happening and you have a lot of people on board deploying, you're obviously going to have some deploy contention. For that, draw from your deepest resolve of Britishness inside of you, and form a queue.

A deploy queue has a couple parts: 1) if there's a wait, add your name to the end of the list, and 2) allow for people to cut the line (sometimes Really Important Deploys Need To Happen Right This Minute and you need to allow for that).

The only problem with deploy queueing is having too many people queued to deploy. GitHub's been facing this internally the last year or so; come Monday when everybody wants to deploy their changes, the list of those looking to deploy can be an hour or more long. I'm not particularly a microservices advocate, but I think deploy queues specifically see a nice benefit if you're able to split things off from a majestic monolith.
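The two queue rules can be sketched in a few lines; the `urgent` flag here is the hypothetical cut-the-line mechanism:

```python
# A sketch of a deploy queue: join at the back, but Really Important
# Deploys That Need To Happen Right This Minute can cut to the front.

class DeployQueue:
    def __init__(self):
        self.waiting = []

    def join(self, user, urgent=False):
        if urgent:
            self.waiting.insert(0, user)   # cut the line
        else:
            self.waiting.append(user)      # back of the queue

    def next_deployer(self):
        """Called when production unlocks; returns who deploys next."""
        return self.waiting.pop(0) if self.waiting else None
```

Wiring `next_deployer` to the lock's unlock event means nobody has to babysit chat waiting for their turn.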

Permissions

There's a number of methods to help restrict who can deploy and how someone can deploy.

2FA is one option. Hopefully your employee's chat account won't get popped, and hopefully they have other security measures enabled on their machine (full disk encryption, strong passwords, etc.). But for a little more peace of mind you can require a 2FA process to deploy.

You might get 2FA from your chat provider already. Campfire and Slack, for example, both support 2FA. If you want it to happen every time you deploy, however, you can build a challenge/response process into the process. Heroku and Basecamp both have a process like that internally, for instance.

Another possibility to handle the who side of permissions is to investigate what I tend to call "riding shotgun". I've seen a number of companies with either informal or formal processes or tooling for ensuring that at least one senior developer is involved in every deploy. There's no reason you can't build a 2FA-style process like that into a chat client, for example, requiring both the deployer and the senior developer riding shotgun to confirm that the code can go out.

Monitor

Admire and check your work

Once you've got your code deployed, it's time to verify that what you just did actually did what you intended it to do.

Check the playbook

All deploys should really hit the exact same game plan each time, no matter if it's a frontend change or a backend change or anything else. You're going to want to check to see if the site is still up, if the performance took a sudden turn for the worse, if error rates started elevating, or if there's an influx of new support issues. It's to your advantage to streamline that game plan.

If you have multiple sources of information for all of these aspects, try putting a link to each of these dashboards in your final deploy confirmation in chat, for example. That'll remind everyone every time to look and verify they're not impacting any metrics negatively.

Ideally, this should all be drawn from one source. Then it's easier to direct a new employee, for example, towards the important metrics to look at while making their first deploy. Pinterest's Teletraan, for example, has all of this in one interface.

Metrics

There's a number of metrics you can collect and compare that will help you determine whether you just made a successful deploy.

The most obvious, of course, is the general error rate. Has it dramatically shot up? If so, you probably should redeploy master and go ahead and fix those problems. You can automate a lot of this, and even automate the redeploy if the error rate crosses a certain threshold. Again, if you assume the master branch is always a known state you can roll back to, it becomes much easier to automate rollbacks when a deploy triggers a slew of exceptions.
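The threshold check itself is simple. The multiplier below is an illustrative choice, not a recommendation, and real tooling would smooth both rates over a window first:

```python
# A sketch of the auto-rollback rule: compare the post-deploy error
# rate against the pre-deploy baseline, and redeploy master when it
# crosses a threshold. The 3x multiplier is an arbitrary example.

def should_rollback(baseline_rate, current_rate, multiplier=3.0):
    """True if errors have climbed past `multiplier` times the baseline."""
    return current_rate > baseline_rate * multiplier

# In deploy tooling, the consequence is always the same boring action:
# if should_rollback(pre_deploy_rate, post_deploy_rate):
#     deploy("master")   # back to the known good state
```

The reason this is safe to automate at all is the earlier invariant: master is always the known good state, so the rollback action never requires a human decision.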

The deployments themselves are interesting metrics to keep on hand as well. Zooming out over the last year or so can give you a good sense of whether you're scaling the development pace up, or whether there are problems and things are slowing down. You can also take it a step further and collect metrics on who's doing the deploying and, though I haven't heard of anyone doing this explicitly yet, tie error rates back to the deployer and develop a good measurement of who the reliable deployers on the team are.

Post-deploy cleanup

The final bit of housework that's required is the cleanup.

The slightly aggressively-titled Feature Toggles are one of the worst kinds of Technical Debt talks a bit about this. If you're building things with feature flags and staff deployments, you run the risk of complicating the long-term sustainability of your codebase:

The plumbing and scaffolding logic to support branching in code becomes a nasty form of technical debt, from the moment each feature switch is introduced. Feature flags make the code more fragile and brittle, harder to test, harder to understand and maintain, harder to support, and less secure.

You don't need to do this right after a deploy; if you have a bigger feature or bugfix that needs to go out, you'll want to spend your time monitoring metrics instead of immediately deleting code. You should do it at some point after the deploy, though. If you have a large release, you can make it part of your shipping checklist to come back and remove code maybe a day or a week after it's gone out. One approach I liked to take was to prepare two pull requests: one that toggles the feature flag (i.e., ships the feature to everyone), and one that cleans up and removes all the excess code you introduced. When I'm sure that I haven't broken anything and it looks good, I can just merge the cleanup pull request later without a lot of thinking or development.

You should celebrate this internally, too: it's the final sign that your coworker has successfully finished what they were working on. And everyone likes it when a diff is almost entirely red. Removing code is fun.

Deleted branch

You can also delete the branch when you're done with it. There's nothing wrong with deleting branches once they've served their purpose. If you're using GitHub's pull requests, for example, you can always restore a deleted branch, so you'll benefit from having it cleared out of your branch list without actually losing any data. This step can be automated, too: periodically run a script that looks for stale branches that have been merged into master, and delete them.
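The sweep can be kept as pure logic that a periodic job feeds with branch data from your Git host's API; the field names and the 30-day cutoff here are illustrative assumptions:

```python
# A sketch of the stale-branch sweep: given branch metadata, return the
# names that are safe to delete (merged into master and untouched for a
# while). A cron job would fetch real data and delete what this returns.

def stale_branches(branches, now, max_age_days=30):
    """branches: list of dicts with 'name', 'merged' (bool), and
    'last_commit' (unix timestamp). Returns branch names to delete."""
    cutoff = now - max_age_days * 86400
    return [b["name"] for b in branches
            if b["merged"] and b["last_commit"] < cutoff]
```

Only ever deleting merged branches is what makes this boring: on a host that can restore deleted branches, nothing is actually lost.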

Neato

The whole ballgame

I only get emotional about two things: a moving photo of a Golden Retriever leaning with her best friend on top of a hill overlooking an ocean looking towards a beautiful sunset, and deployment workflows. The reason I care so much about this stuff is because I really do think it's a critical part of the whole ballgame. At the end of the day, I care about two things: how my coworkers are feeling, and how good the product I'm working on is. Everything else stems from those two aspects for me.

Deployments can cause stress and frustration, particularly if your company's pace of development is sluggish. They can also slow you down and prevent you from getting features and fixes out to your users.

I think it's worthwhile to think about this, and worthwhile to improve your own workflows. Spend some time and get your deploys to be as boring, straightforward, and stress-free as possible. It'll pay off.

Written by Zach Holman. Thanks for reading.

If you liked this, you might like some of the other things I've written. If you didn't like this, well, they're not all winners.

Did reading this leave you with questions, or do you have anything you'd like to talk about? Feel free to drop by my ask-me-anything repository on GitHub and file a new issue so we can chat about it in the open with other people in the community.

I hope we eventually domesticate sea otters.

02 Mar 01:32

Credentials: not finished changing

by carlacasilli

Image from page 176 of "The dodo and its kindred; or, The history, affinities, and osteology of the dodo, solitaire, and other extinct birds of the islands Mauritius, Rodriguez and Bourbon" (1848)

“When you’re finished changing, you’re finished.” —Benjamin Franklin

From the gold standard to the floating exchange rate
The world of credentialing is evolving. Degrees have long been considered the basic unit of educational currency. But it appears that we’re experiencing an accelerating shift away from the gold standard of degrees and toward a more inclusive credentialing world that embraces badges, microcredentials and nanodegrees and is based on a market-driven floating exchange rate.

For the last decade we’ve lived through increasing degree inflation, watching jobs that previously required only a high school diploma become jobs that require not just an Associate’s degree but a Bachelor’s degree. In extreme forms of degree inflation some of those same jobs now require a Master’s degree or a Doctorate, or even post-doctoral work. What has happened to make this necessary? Have jobs changed that much in the intervening years? Is the world exponentially more complex? Could it be the degree itself causing these problems? My response to that last question is yes.

if u cn rd ths…
Our dependence on degrees as the primary means by which we collectively judge what someone knows and can do has effectively turned degrees into social and cultural shorthand. An unfortunate and increasingly inaccurate shorthand. An interpretive shorthand that attempts to speak to an individual’s qualifications well beyond what formal education currently provides, and one that gives informational short shrift to all stakeholders, perhaps most disappointingly, to the learners themselves. Why must the degree be so opaque? What does a transcript tell the student of their accomplishments other than the grades they received in a prescribed pathway? How does a course grade correlate to an amorphous future job? Is an A equal to a Job Grade Level V review?

Coke vs. Pepsi (or the problem with big educational brands)
Brand recognition should not be the calling card that gets most people in the workplace or college door. But it is. Ultimately, that’s primarily what our credentialing system has reified: brand. And not the brand value of the exiting learner—no, that would most likely be an incredibly useful metric. Instead the shorthand / reification focuses on the brand of the credential-issuing institution. Thankfully we’ve begun to see a questioning of this confused metric from both industry (EY + Penguin Random House as noted in Cracking the Credentialing Club) and through calls for research into college and university success rates in the somewhat de-fanged College Scorecard. These credential tremors are indicative of a larger and maybe-not-impending-but-already-happening tectonic shift occurring in education, learning, credentialing, and assessment.

Evolve or Die
I suggest that there is an implicit choice currently available to the credentialing world: evolve or die. You’ll note that this is the same choice that confronts every living thing and it affects both the small no-name brand and the very large, multinational brand. Academic and business systems are living things and must evolve in order to stay useful and relevant. The popular and useful thing of yesteryear may fade quickly into obscurity—and you don’t want to be the flightless bird in this story.

Consequently, we need to ask what we think we’re expressing when we hand out degrees and certificates. What is missing? What opportunities do new credentials like open badges offer? What are we hoping to effect and why? These questions must always acknowledge that context and audience are essential components of any answer. Because the credentialing environment is not composed solely of monolithic, faceless institutions struggling to survive, but rather thoughtful, circumspect individuals making hard choices about the potential, cost, and value of credentials in their own personal evolution.

Much more soon.
Talk to me at cmcasilli [at] gmail [dot] com

01 Mar 23:14

Bitcoin Guarantees Strong, not Eventual, Consistency

by Emin Gün Sirer
Blockchain

It has somehow become a common adage that Bitcoin is eventually consistent. We now have both academics and developers claiming that Bitcoin provides a laughably weak consistency level that is reserved solely for first-generation NoSQL datastores.

All of these people are wrong.

In this post, I want to dispel the myth of Bitcoin's eventual consistency once and for all: Bitcoin provides an incredibly strong consistency guarantee, far stronger than eventual consistency. Specifically, it guarantees serializability, with a probability that is exponentially decreasing with latency. Let's see what this means, and discuss why so many people get it wrong.

The Fallacious Eventual Consistency Claim

The error in thinking that Bitcoin is eventually consistent stems from looking at the operation of the blockchain and observing that the last few blocks of a blockchain are subject to rearrangement. Indeed, in Bitcoin and cryptocurrencies based on Nakamoto consensus, the last few blocks of the blockchain may be replaced by competing forks.

It's tempting to look at the way the blockchain operates and say "a-ha, because the contents of the blockchain are subject to change, it's clearly eventually consistent." This narrative might sound sensible to valley developers who have been indoctrinated by data-store companies who are packaging software that would not pass muster as failed master's theses. The same companies have been pushing bad science to justify the fact that they cannot implement a consistency guarantee in their data stores. Not surprisingly, this argument is flat out wrong.

The Eventual Consistency Claim is Flawed

First of all, if one were to buy the premise of this argument, that we should look at the entirety of the blockchain to evaluate Bitcoin's consistency, then the conclusion we must draw would be that Bitcoin is inconsistent. There is absolutely no guarantee that a transaction that has been observed in an orphaned block will be there after a reorganization, and therefore there is no eventual consistency guarantee.

This argument, that Bitcoin is as worthless a database as MongoDB, is more accurate than the argument that Bitcoin is eventually consistent, but is still completely wrong. The root cause of the wrong conclusion here is that the people making these arguments about Bitcoin's weak consistency have a messed up analysis framework.

The Right Way to Evaluate Databases

Blockchain

Here's the correct way to evaluate the consistency of distributed databases, including Bitcoin.

Every database-like system on earth comes with a write protocol and a corresponding read protocol. When evaluating the properties of a system, we examine the behavior of that system when we go through these protocols. We do not peek behind the covers into the internal state of the system; we do not dissect it apart; and most of all, we do not examine the values of internal variables, find a variable that changes, and scream "hah! eventually consistent!"

The suffix of the Bitcoin blockchain acts, in essence, as the scratch space of the consensus algorithm. For example, in Paxos (an algorithm that makes the strongest possible guarantee and yields a serializable timeline), a leader can seemingly flip-flop -- it starts out proposing value v1, but can end up learning some other value v2, and having the whole system accept v2. It'd be a folly to look at this and say "Paxos is eventually consistent: look, the leader flip-flopped." We need to look at the output of the protocol, not its transient states or internal variables.

In general, we cannot take a God's eye view into distributed systems and judge them by what we see from that privileged vantage point. What matters, what systems are judged by, is what they expose to their clients through their aforementioned read and write protocols.

Bitcoin Provides a Strong Consistency Guarantee

And a completely different picture emerges when we go through Bitcoin's read protocol. To wit, the read protocol for Bitcoin is to discard the last Ω blocks of the blockchain. Satoshi goes into a detailed analysis of what the value of Ω ought to be, and derives the equation as a function of the probability of observing an anomaly.

The nice thing about Bitcoin's read protocol is that it is parameterizable. Merchants can make this probability arbitrarily small. Because the probability drops exponentially with Ω, it's easy to pick an Ω such that the chance of observing a blockchain reorganization is less likely than having the processor miscompute values due to strikes by alpha particles. If that's not sufficient for pedants, one can pick a slightly higher Ω such that the chances of observing a reorganization are lower than the likelihood that all the oxygen molecules in the room, via random Brownian motion, will pile up into one corner of the room and leave one to suffocate in the other corner.

Satoshi suggests an Ω value of 6, in use by most merchants today. The read protocol simply discards the last 6 blocks, so reorganizations in the suffix of length 6 are not visible at all to clients. Sure, someone with a God's eye view could observe orphan chains of length 5 all day long, but it would not matter -- the Bitcoin read protocol will mask such reorganizations, and any inconsistency they might have led to, from clients.

A similar argument exists for the write protocol that I won't go through. You get the point: if you discard the last Ω blocks like Satoshi told you to, and you confirm using the appropriately trimmed blockchain, your chances of observing an anomaly are exponentially small.
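To make "exponentially small" concrete, here is a sketch of the calculation from section 11 of the Bitcoin whitepaper: the probability that an attacker controlling a fraction q of the hash power ever overtakes the honest chain after z confirmations. For q = 0.1 and z = 6, the whitepaper's tables put this at roughly 0.0002:

```python
import math

def attacker_success(q, z):
    """Probability an attacker with hash-power fraction q ever catches up
    after z blocks, per section 11 of the Bitcoin whitepaper."""
    p = 1.0 - q
    lam = z * (q / p)                     # expected attacker progress
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total
```

Each extra block of Ω multiplies the anomaly probability down by roughly a constant factor, which is exactly why merchants can dial the risk as low as they like.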

What if Someone Uses A Low Ω?

Some people may actively choose to configure a system to provide weaker guarantees than it could give, for reasons of convenience or performance. In the same way I can configure an Oracle installation to violate its ACID guarantees, someone can use Bitcoin in a manner that does not take advantage of Bitcoin's strong guarantees.

For instance, the merchants who accept 0-confirmations have chosen to forego the strong guarantees of Bitcoin for convenience. They are using a read protocol that doesn't even involve the blockchain -- there is no guarantee that the transactions they see on the peer-to-peer network will ever be mined. This is not what Satoshi advised, but it may be a sensible choice for a non-risk-averse merchant. Of course, we now know how to build much faster, much higher throughput blockchains that provide better guarantees than Bitcoin's 0-conf transactions, but that's a different issue.

Why Is This So Hard?

There is a lot of confusion among software developers when it comes to consistency in databases. The muddled thinking that pervades the valley when it comes to issues of consistency probably stems from two sources: (1) academics' failure to provide a simple, accessible framework for reasoning about consistency, coupled with the misleading framework embodied in CAP, and (2) certain companies' deliberate, decade-long effort to muddy up the consistency landscape by falsely claiming that strong consistency is an impossible property to achieve under any circumstance. Surely Google's Spanner is globally distributed and consistent, as are a slew of distributed databases that are part of the recent research wave that HyperDex started.

Just because the developers of certain cheap NoSQL data stores cannot guarantee consistency doesn't mean that eventual consistency is the best anyone can do. And Bitcoin's $6B+ value is most definitely not predicated on something as hokey as eventual consistency.

Hope this is useful for establishing a better foundation for evaluating systems, especially open source systems such as Bitcoin whose internal state is visible. It's tempting to look at that state, but consistency properties need to be evaluated solely by the outputs of the read and write protocols.

01 Mar 23:14

Startups, go to Slovenia now!

by Bogomil Shopov

I met with 3 Slovenian guys last week, just after my training session with the StartupYard teams, and it became something more interesting than just having a beer with chips in a pub in the middle of Prague’s most awesome district – Smichov. (Yeah, Google and Skype are just around the corner + some of the most successful Czech companies)

First I asked one of them – Petra – if she could see the face in this ad:
before

She said yes, and after a few days I got this one from her.

after

Then I asked Edina and Luka about more stuff, and I even witnessed a bet over the population of Slovenia. I forgot who won, but we had fun, and all of this took like 45 minutes.

I got excited and exchanged a few more e-mails with all of them about different topics, and they are so awesome that I have decided to create an event in Slovenia soon to make the community even more successful.

Go to Slovenia

Now let’s get to the point – if you are a startup from Europe, those awesome and creative guys have their call open for just 14 more days – until March 15th.

Here’s why you must apply:

Another reason?

Watch the video. I love it!

Click here to learn more about their offer and how they can help you build, test and grow your product.

P.S. This is not a paid CTA :)

The post Startups, go to Slovenia now! appeared first on Bogomil Shopov.

01 Mar 23:14

"With every job, you should have something to lose, something to gain, something to learn."

“With every job, you should have something to lose, something to gain, something to learn.”

-

Alison Beard, Life’s Work: An Interview with Kevin Spacey

01 Mar 23:14

@AdamMGrant

@AdamMGrant: