Shared posts

15 Sep 23:05

How to be an effective knowledge worker and ‘manage yourself’

by Doug Belshaw

As I’ve mentioned in a previous post, at the moment I’m reading eight books on repeat every morning. One of these is Peter Drucker’s magnificent Managing Oneself. I’ve actually gifted it to a couple of Critical Friend clients as it’s so good.

There are some great insights in there, and some sections in particular I’d like to share here. First off, it’s worth defining terms. Thomas Davenport, in his book Thinking for a Living, defines knowledge workers in the following way:

Knowledge workers have high degrees of expertise, education, or experience, and the primary purpose of their jobs involves the creation, distribution or application of knowledge.

So I’m guessing that almost everyone reading this fits into the category ‘knowledge worker’. I certainly identify as one, as my hands are much better suited to touch-typing the thoughts that come out of my head, sparked by the things that I’m reading, than building walls and moving things around!

Drucker says that we knowledge workers are in a unique position in history:

Knowledge workers in particular have to learn to ask a question that has not been asked before: What should my contribution be? To answer it, they must address three distinct elements: What does the situation require? Given my strengths, my way of performing, and my values, how can I make the greatest contribution to what needs to be done? And finally, What results have to be achieved to make a difference?

This is a difficult thing to do and, to my mind, one that hierarchies are not great at solving. Every time I’m re-immersed in an organisation with a strict hierarchy, I’m always struck by how much time is wasted by the friction and griping that they cause. You have to be much more of a ‘grown-up’ to flourish in a non-paternalistic culture.

Drucker explains that knowledge workers who must ‘manage themselves’ need to take control of their relationships. This has two elements:

The first is to accept the fact that other people are as much individuals as you yourself are. They perversely insist on behaving like human beings. This means that they too have their strengths; they too have their ways of getting things done; they too have their values. To be effective, therefore, you have to know the strengths, the performance modes, and the values of your coworkers.
[…]
The second part of relationship responsibility is taking responsibility for communication. Whenever I, or any other consultant, start to work with an organization, the first thing I hear about are all the personality conflicts. Most of these arise from the fact that people do not know what other people are doing and how they do their work, or what contribution the other people are concentrating on and what results they expect. And the reason they do not know is that they have not asked and therefore have not been told.

The answer, of course, is to become a much more transparent organisation. Although The Open Organization is a book I’d happily recommend to everyone, I do feel that it conflates the notion of ‘transparency’ (which I’d define as something internal to the organisation) and ‘openness’ (which I see as the approach it takes externally).  For me, every organisation can and should become more transparent — and most will find that openness lends significant business advantages.

Transparency means that you can see the ‘audit trail’ for decisions, that there’s a way of plugging your ideas into others, that there’s a place where you can, as an individual ‘pull’ information down (rather than have it ‘pushed’ upon you). In short, transparency means nowhere to hide, and a ruthless, determined focus on the core mission of the organisation.

Hierarchies are the default way in which we organise people, but that doesn’t mean that they’re the best way of doing so. Part of the reason I’m so excited to be part of a co-operative is that, for the first time in history, I can work as effectively with colleagues  I consider my equals, without a defined hierarchy, and across continents and timezones. It’s incredible.

What this does mean, of course, is that you have to know what it is that you do, where your strengths lie, and how you best interact with others. Just as not everyone is a ‘morning person’, so some people prefer talking on the phone to a video conference, or via instant message than by email.

Drucker again:

Even people who understand the importance of taking responsibility for relationships often do not communicate sufficiently with their associates. They are afraid of being thought presumptuous or inquisitive or stupid. They are wrong. Whenever someone goes to his or her associates and says, “This is what I am good at. This is how I work. These are my values. This is the contribution I plan to concentrate on and the results I should be expected to deliver,” the response is always, “This is most helpful. But why didn’t you tell me earlier?”

[…]

Organizations are no longer built on force but on trust. The existence of trust between people does not necessarily mean that they like one another. It means that they understand one another. Taking responsibility for relationships is therefore an absolute necessity. It is a duty. Whether one is a member of the organization, a consultant to it, a supplier, or a distributor, one owes that responsibility to all one’s coworkers: those whose work one depends on as well as those who depend on one’s own work.

Reflecting on the way you work best means that you can deal confidently with others who may have a different style to you. It means it won’t take them weeks, months, or even years to figure out that you really aren’t  going to read an email longer than a couple of paragraphs.

[This] enables a person to say to an opportunity, an offer, or an assignment, “Yes, I will do that. But this is the way I should be doing it. This is the way it should be structured. This is the way the relationships should be. These are the kind of results you should expect from me, and in this time frame, because this is who I am.”

It’s a great book and, reading it at the same time as The Concise Mastery by Robert Greene is, I have to say, a revelation.

Image CC BY-NC gaftels

15 Sep 23:05

Twitter Favorites: [storyneedle] "Design thinking" has become obsessed with making designers more influential, instead of making users influential. That needs to change.

Michael Andrews @storyneedle
"Design thinking" has become obsessed with making designers more influential, instead of making users influential. That needs to change.
15 Sep 23:04

Apple confirms iPhone 7 Plus and Jet Black iPhone 7 stock sold out

by Ian Hardy

A year ago, Apple set an opening weekend record by selling 13 million iPhone 6s and iPhone 6s Plus smartphones in just three days. Unfortunately, for the iPhone 7 and iPhone 7 Plus launch, Apple has opted not to release sales stats, but has given an indication as to how desirable the new devices are.

According to a report in Reuters, Apple has completely sold out of the 5.5-inch iPhone 7 Plus in all colours, while the new Jet Black model of the 4.7-inch iPhone 7 is out of initial stock. Apple also stated there will be “limited quantities” available for the Rose Gold, Gold, Silver and Black iPhone 7.

Apple spokeswoman Trudy Muller said, “We sincerely appreciate our customers’ patience as we work hard to get the new iPhone into the hands of everyone who wants one as quickly as possible.”

Apple went live with pre-orders on September 9th and Canadians have the option to purchase directly or through a carrier partner.

Canadian carriers reported earlier this week that iPhone 7 pre-orders have been “very successful.” The new iPhone will officially launch on Friday, September 16th in Canada.

The first iPhone was released in the United States in 2007 and Apple recently announced it sold its billionth iPhone.

Related: iPhone 7 review: Apple sets the stage for 2017

Source: Reuters
15 Sep 23:04

Simple Demo of Green Screen Principle in a Jupyter Notebook Using MyBinder

by Tony Hirst

One of my favourite bits of edtech in the form of open educational technology infrastructure at the moment is mybinder (code), which allows you to fire up a semi-customised Docker container and run Jupyter notebooks based on the contents of a github repository. This makes it trivial to share interactive, Jupyter notebook demos, as long as you’re happy to make your notebooks public and pop them into github.

As an example, here’s a simple notebook I knocked up yesterday to demonstrate how we could create a composited image from a foreground image captured against a green screen, and a background image we wanted to place behind our foregrounded character.

The recipe was based on one I found in a Bryn Mawr College demo (Bryn Mawr is one of the places I look to for interesting ways of using Jupyter notebooks in an educational context.)

The demo works by looking at each pixel in turn in the foreground (greenscreened) image and checking its RGB colour value. If it looks to be green, use the corresponding pixel from the background image in the composited image; if it’s not green, use the colour values of the pixel in the foreground image.
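
By way of illustration, here’s a minimal sketch of that pixel-by-pixel idea in Python using Pillow. It isn’t the notebook’s actual code, and the file names and green thresholds are just placeholders:

from PIL import Image

def composite(fg_path, bg_path, out_path):
    # Foreground shot against a green screen; background to show through it
    fg = Image.open(fg_path).convert("RGB")
    bg = Image.open(bg_path).convert("RGB").resize(fg.size)
    out = Image.new("RGB", fg.size)
    w, h = fg.size
    for x in range(w):
        for y in range(h):
            r, g, b = fg.getpixel((x, y))
            # Crude "is this pixel green screen?" test; thresholds are illustrative
            if g > 100 and g > 1.4 * r and g > 1.4 * b:
                out.putpixel((x, y), bg.getpixel((x, y)))
            else:
                out.putpixel((x, y), (r, g, b))
    out.save(out_path)

composite("foreground.png", "background.png", "composited.png")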

The trick comes in setting appropriate threshold values to detect the green coloured background. Using Jupyter notebooks and ipywidgets, it’s easy enough to create a demo that lets you try out different “green detection” settings using sliders to select RGB colour ranges. And using mybinder, it’s trivial to share a copy of the working notebook – fire up a container and look for the Green screen.ipynb notebook: demo notebooks on mybinder.
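
Here’s a rough, self-contained sketch of that interactive version, using numpy masking rather than a pixel loop, with ipywidgets range sliders for the thresholds. Again, the slider ranges and file names are assumptions rather than the notebook’s own settings:

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from ipywidgets import interact, IntRangeSlider

fg_img = Image.open("foreground.png").convert("RGB")
fg = np.array(fg_img)
bg = np.array(Image.open("background.png").convert("RGB").resize(fg_img.size))

def show(r, g, b):
    # A pixel counts as "green screen" if each channel falls inside its selected range
    mask = ((fg[..., 0] >= r[0]) & (fg[..., 0] <= r[1]) &
            (fg[..., 1] >= g[0]) & (fg[..., 1] <= g[1]) &
            (fg[..., 2] >= b[0]) & (fg[..., 2] <= b[1]))
    out = np.where(mask[..., None], bg, fg)
    plt.imshow(out)
    plt.axis("off")
    plt.show()

interact(show,
         r=IntRangeSlider(value=(0, 80), min=0, max=255),
         g=IntRangeSlider(value=(120, 255), min=0, max=255),
         b=IntRangeSlider(value=(0, 80), min=0, max=255))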

green_screen_-_tm112

(You can find the actual notebook code on github here.)

I was going to say that one of the things I don’t think you can do at the moment is share a link to an actual notebook, but in that respect I’d be wrong… The reason I thought so was that to launch a mybinder instance, eg from the psychemedia/ou-tm11n github repo, you’d use a URL of the form http://mybinder.org/repo/psychemedia/ou-tm11n; this then launches a container instance at a dynamically created location – eg http://SOME_IP_ADDRESS/user/SOME_CONTAINER_ID – with a URL and container ID that you don’t know in advance.

The notebook contents of the repo are copied into a notebooks folder in the container when the container image is built from the repo, and accessed down that path on the container URL, such as http://SOME_IP_ADDRESS/user/SOME_CONTAINER_ID/notebooks/Green%20screen%20-%20tm112.ipynb.

However, on checking, it seems that any path added to the mybinder call is passed along and appended to the URL of the dynamically created container.

Which means you can add the path to a notebook in the repo to the notebooks/ path when you call mybinder – http://mybinder.org/repo/psychemedia/ou-tm11n/notebooks/Green%20screen%20-%20tm112.ipynb – and the path will be passed through to the launched container.

In other words, you can share a link to a live notebook running on a dynamically created container – such as this one – by calling mybinder with the local path to the notebook.

You can also go back up to the Jupyter notebook homepage from a notebook page by going up a level in the URL to the notebooks folder, eg http://mybinder.org/repo/psychemedia/ou-tm11n/notebooks/ .

I like mybinder a bit more each day:-)


15 Sep 20:36

Dreamy Multi-functional Furniture

by Alison Mazurek

I am currently on the hunt for a small side table for our living room to replace our large wooden sculptural one. Our current table, while unique and beautiful, is too big. It measures about 1'w x 2'l x 18" tall and it is heavy. We need a table that allows light to pass through to make the space appear larger, and also a smaller horizontal surface (see previous rant here) to collect fewer things. I really only need the table to hold a drink and a book.

Then this table came across my screen and I fell for it. Of course it's by Menu (link here) and way out of my budget. Not only is it beautiful and simple, it is wonderfully functional. It flips over to create a tray (a helpful deterrent for toddlers and babies) and then, just when I thought it did enough, it transforms into a stool! With our extending dining table for dinners with friends, we are often short a chair. This small modern table solves two problems at once.

So needless to say, my search for a side table is over but I need some time to save up $$$.  Unless any readers have come across something as pretty and functional? Let me know!

15 Sep 06:54

We need some rationality back in our paranoid politics

by admin

Our city’s politics is not only divisive; it’s getting paranoid. Everyone seems to be ready to entertain the most outlandish conspiracies about their rivals, and even members of their own ideological camp. Our politics is becoming pathological.

15 Sep 06:43

Old Geek

I’m one. We’re not exactly common on the ground; my profession, apparently not content with having excluded a whole gender, is mostly doing without the services of a couple of generations.

This was provoked by a post from James Gosling, which I’ll reproduce because it was on Facebook and I don’t care to learn how to link into there:

Almost every old friend of mine is screwed. In the time between Sun and LRI I’d get lines like “We normally don’t hire people your age, but in your case we might make an exception”. In my brief stint at Google I had several guys in their 30s talk about cosmetic surgery. It's all tragically crazy.

He’d linked to It’s Tough Being Over 40 in Silicon Valley, by Carol Hymowitz and Robert Burson on Bloomberg. It’s saddening, especially the part about trying to look younger.

I’ve seen it at home, too; my wife Lauren is among the most awesome project managers I’ve known, is proficient with several software technologies, and is polished and professional. While she has an OK consulting biz, she occasionally sees a full-time job that looks interesting. But these days usually doesn’t bother reaching out; 40-plus women are basically not employable in the technology sector.

On the other hand

To be fair, not everyone wants to go on programming into their life’s second half. To start with, managers and marketers make more money. Also, lots of places make developers sit in rows in poorly-lit poorly-ventilated spaces, with not an atom of peace or privacy. And then, who, male or female, wants to work where there are hardly any women?

And even if you do want to stay technical, and even if you’re a superb coder, chances are that after two or three decades of seniority you’re going to make a bigger contribution helping other people out, reviewing designs, running task forces, advising executives, and so on.

Finally, there’s a bad thing that can happen: If you help build something important and impactful, call it X, it’s easy to slip into year after year of being the world’s greatest expert on X, and when X isn’t important and impactful any more, you’re in a bad place.

But having said all that, Bay Area tech culture totally has a blind spot, just another part of their great diversity suckage. It’s hurting them as much as all the demographics they exclude, but apparently not enough to motivate serious action.

Can old folks code?

I don’t know about the rest of the world, but they can at Amazon and Google. There are all these little communities at Google: Gayglers, Jewglers, and my favorite, the Greyglers; that’s the only T-shirt I took with me and still wear. The Greyglers are led by Vint Cerf, who holds wine-and-cheese events (good wine, good cheese) when he visits Mountain View from his regular DC digs. I’m not claiming it’s a big population, but includes people who are doing serious shit with core technology that you use every day.

There’s no equivalent at Amazon, but there is the community of Principal Engineers (I’m one), a tiny tribe in Amazon’s huge engineering army. There are a few fresh-faced youthful PEs, but on average we tend to grizzle and sag more than just a bit. And if you’re a group trying to do something serious, it’s expected you’ll have a PE advising or mentoring or even better, coding.

Like I do. Right now there’s code I wrote matching and routing millions and millions of Events every day, which makes me happy.

Not that that much of my time goes into it — in fact, I helped Events more with planning and politicking than coding. But a few weeks ago I got an idea for another project I’d been helping out with, a relatively cheap, fast way to do something that isn’t in the “Minimum Viable Product” that always ships, but would be super-useful. I decided it would be easier to build it than convince someone else, so… well, it turned out that I had to invent an intermediate language, and a parser for it, and I haven’t been blogging and, erm, seem a little short on sleep.

Advice

Are you getting middle-aged-or-later and find you still like code? I think what’s most helped me hang on is my attention span, comparable to a gnat’s. I get bored really, really fast and so I’m always wandering away from what I was just doing and poking curiously at the new shiny.

On top of which I’ve been extra lucky. The evidence says my taste is super-mainstream: whatever it is I find shiny is likely to appeal to millions of others too.

Anyhow, I don’t usually wear a T-shirt, but when I do, it’s this one.

15 Sep 06:41

Elisa reports 5 GB data use on average per user per month, 13 GB during the Olympics

by Martin

Back in July, DNA/Elisa of Finland reported that their average user consumes 5 GB of data per month.

Personally I don’t do a lot of video streaming while I’m on LTE, so my current data consumption on my smartphone is around 1.5 GB a month these days; you can imagine how much streaming must be going on in Finland.

Even more seems to have been used in August during the Olympics. Mobile Europe reports an exceptional DNA/Elisa mobile data use of 13 GB per user last month. That makes me wonder how the networks are coping!? According to the picture in the first link, Swedish users are also consuming quite a lot of mobile data but nowhere near as much as users in Finland. And still, even though the network I used in Finland had 50 MHz of LTE on air, speeds were down from 40 Mbit/s during the daytime to single digits in the evening (my UE was not CA capable…).

15 Sep 06:41

Three Very Different Ways To Analyse An Online Community

by Richard Millington

Most people work from a simple assumption (e.g. “higher levels of activity per member lead to an increase in retention rates”). This means you measure activity per member and design your engagement activity to maximise activity per member.

The downside is you haven’t proved whether the relationship is true (i.e. does it correlate well?), nor how much influence levels of activity have on an individual’s retention rate (do they account for 100% or 5% of the increase in retention?), nor whether the relationship is linear (does it drop off after a certain level?).

You could blindly be pursuing more activity when data might show you that 3 posts per member, per month, is enough.

A better approach is to test a falsifiable hypothesis (e.g. sample members by levels of activity and compare this with customer retention rates to prove whether the relationship exists and how influential the level of activity is in that relationship). You could then focus on increasing the level of activity from a specific segment to see if the retention rate among that segment rises (this isn’t a natural experiment, but it’s still good). You might find that there is a relationship between increased levels of activity and retention, but that it’s nonlinear: after 5 posts per month, there is little additional impact.

Now instead of trying to get every member super active, you focus on ensuring they make 5 good contributions per month. This changes how you work a little.

An even better approach is to run a regression analysis to identify which variables correlate with increased levels of member retention. You might find that increased activity accounts for a 27% increase in retention rates. This also highlights other key variables. You might find direct messages between members have a 25% influence, opening newsletters has an 18% influence, and adding a profile picture has a 10% influence. You can now test these relationships and build a mechanistic model.
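
As a hedged illustration of the kind of analysis being described (the file, column names, and retention flag below are hypothetical, not taken from the post), the correlation check and the regression might look something like this in Python:

import pandas as pd
import statsmodels.api as sm

# Hypothetical member-level export: one row per member
df = pd.read_csv("members.csv")

# Step 1: does activity correlate with retention at all?
print(df["posts_per_month"].corr(df["retained"]))

# Step 2: which variables matter, and how much?
X = sm.add_constant(df[["posts_per_month", "direct_messages",
                        "newsletter_opens", "has_profile_picture"]])
model = sm.Logit(df["retained"], X).fit()  # "retained" is assumed to be 0/1
print(model.summary())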

Now instead of trying to maximise member activity at all costs, you might spend more time on newsletter subject lines and content, persuading members to add their profile picture, and ensuring members befriend each other to increase the number of direct messages.

This isn’t easy to do, but that’s exactly what makes it valuable. You can build a specific, tested model that will show you exactly what you need to spend your time on to achieve results. This lies beyond the endless hunt for more engagement. It’s where you can take your work to a more advanced, strategic level.

15 Sep 02:45

Recommended on Medium: "Why Silicon Valley is all wrong about Apple’s AirPods" in Chris Messina

So you think Apple is a tech company? No, you’re wrong.

Continue reading on Chris Messina »

15 Sep 02:44

Android 7.0 Nougat LTE update headed to Nexus 6 and 9 ‘in the coming weeks’

by Patrick O'Rourke

Although Android 7.0 Nougat launched just a few weeks ago, over-the-air and factory images of the new OS for the Nexus 6 have not yet been released.

According to Android Police, however, it looks like Nexus 6 owners won’t have to wait long for the latest version of Android to hit their device. Google let the publication know that Nougat will hit the Nexus 6 and Nexus 9 LTE at some point in the next few weeks.

The company didn’t release a specific timetable for the OS to drop or indicate why the update was delayed.

If you’re still interested in enrolling in Google’s Android N Developer Preview program, register for it here.

Related: Nexus 6P Nougat update has been halted while Huawei works on solution

15 Sep 02:42

Cycling Support in San Francisco and Vancouver – and not

mkalus shared this story from Rolandt shared items on The Old Reader (RSS):
Ah yes, the whole stretch along first beach to second beach is a gong-show. From the way they did the cycle path in front of Cactus club to the idiocy of having the bike lane towards the beach along Beach Ave. instead of next to the road.

sf

Over four-fifths of likely voters (83 percent) believe that bicycling is good for San Francisco, and that bicycling in the city should be comfortable and attractive to everyone from small children to seniors.

  • For the first time, a majority of San Franciscans (51 percent) report biking occasionally, and 31 percent report riding regularly, meaning a few times a month or more.
  • A supermajority of San Franciscans (72 percent) support restricting private autos on Market Street.
  • Most San Franciscans (56 percent) support dramatically increasing City spending on bike infrastructure from around 0.5 percent to eight percent of the City’s transportation budget.
  • Two-thirds of San Franciscans (66 percent) support building a network of cross-town bike lanes connecting every neighborhood in San Francisco, even at the expense of travel lanes and parking spots.
  • Twice as many San Francisco voters are likely to ride a bike on unprotected bike lanes (57 percent) than on streets without bike lanes (28 percent). Likely riders jump up to 65 percent on physically protected bike lanes.
  • Most voters (54 percent) would like to bike more frequently than they do presently.
  • With so many people reporting that biking is good for our city, and expressing the desire for better infrastructure and incorporating biking more into their lives, it is no surprise that a supermajority (68 percent) believe that City leaders are not doing enough to encourage biking.

The poll was conducted by David Binder Research, surveying 402 likely voters by cell phone and land line between Saturday, Aug. 20 and Tuesday, Aug. 23, and was commissioned by the San Francisco Bicycle Coalition. The margin of error is 4.9 percent. Findings include: …

 

PT: As a past NPA City Councillor, I continue to be amazed that the party has not tried to reposition itself on cycling issues, still aligning itself with whatever group is currently pissed off and dog-whistling to voters that cycling will not be a priority.

That’s most evident with the Park Board on which the NPA holds a majority.  Some of the most controversial projects fall in their jurisdiction – notably Kits Park.  But their indifference extends to other parks that are on the cycling network or are major destinations – especially Jericho and Stanley Parks.  There has been no upgrading of the cycling infrastructure in years – and no indication that there will be any time soon.

The most egregious example: Second Beach in Stanley Park, a major junction for the thousands of cyclists and pedestrians who are deliberately placed in conflict:

There is one point where the yellow line switches from being a centre line for cyclists to a separation indicator for peds and bikes, without any clear indication of what’s happening.  It’s been this way for years – and apparently no one at Parks cares very much.

ped-path1

And then, of course, there’s this at the entrance to Jericho:

jericho-3

The Park Board couldn’t make a stronger statement, could it?

Ask a representative about their approach, and you’ll hear about studies and plans and consultations.  But it’s also clear that there will be no political leadership, and that there is unlikely to be any action, much less a major commitment of resources, any time soon.

Which is odd.  Given the explosive growth in active transportation and the clear benefits, politicians today generally want to align themselves with this movement.  But more than that, why would the Park Board continue to maintain an unacceptable status quo as the quality of infrastructure is upgraded all around them?  Their failure to address deficiencies and outstanding conflicts will only become more apparent – and annoying.

Sure, it won’t be easy to deal with unhappy constituents who see paving parkland as an unacceptable and unnecessary intrusion (regardless of the success of the Seaside route through English Bay and the False Creek parks).  But the NPA prides itself on being the party that can get things done, and on doing it in a more balanced way than Vision.

And yet their indifference on this file only illustrates the opposite.


15 Sep 02:41

Qualcomm’s Clear Sight tech mimics the human eye to create a new dual camera system

by Rose Behar

With Huawei, LG and Apple all getting in on the fun, it seems that dual camera systems are the future. Setting its sights on that market, Qualcomm has developed a new technology called Clear Sight that provides OEMs with the basis they need to support dual cameras.

Qualcomm states in its release that the new technology is “designed to mimic the attributes of the human eye” in order to enhance image quality in low light. The chipset giant says the system works by pairing one camera that has a colour sensor with one that has a black and white sensor, likening the former to cones and the latter to rods.

screen-shot-2016-09-14-at-2-54-20-pm

While the colour sensor camera is great for capturing beautiful colours in bright light, the black and white sensor is best for low-light scenarios, as it lets in three times the light of its partner. When merged together, says the company, the combined images provide a super sharp, full-colour low-light image.

This is a different approach from some of the other dual camera technologies currently on the market. Rather than enhancing low light, Apple is focusing on improved zoom, while the LG V20 offers wide-angle snaps.

For manufacturers to use the technology, they’ll have to be running either the Snapdragon 820 or 821, which places the target squarely on the premium phone market. Considering the popularity of Qualcomm chips, this likely means consumers will see a lot more dual camera phones in the future.

Related: Qualcomm Snapdragon 821 supports Google Daydream VR and loads apps 10 percent faster

Source: Qualcomm
15 Sep 01:31

Discover the APIs

files/images/Microsoft_APIs.JPG


Microsoft Cognitive Services, Sept 17, 2016


This is just one of several services available from various companies around the web, but I'm linking to it because it's illustrative. The mechanism is simple: first you create an identity (this later allows you to pay for the services), and then you take advantage of Microsoft's APIs to access advanced cognitive tools for your application. For example: somebody submits a photo to your website; you send the photo to Microsoft, and Microsoft tells you what emotion the photo is expressing. It's commodified artificial intelligence (AI) and it's here now. We were using the Bing API last year; here's the migration guide to the new search API.
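
To give a rough sense of the pattern, a call from Python might look like the sketch below. The endpoint URL, headers, and response shape are illustrative and may have changed, so check Microsoft's current Cognitive Services documentation before relying on them:

import requests

SUBSCRIPTION_KEY = "your-key-here"  # issued when you register an identity
URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"

with open("photo.jpg", "rb") as f:
    resp = requests.post(
        URL,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=f.read(),
    )

# Each detected face is returned with per-emotion confidence scores
for face in resp.json():
    print(face)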

[Link] [Comment]
15 Sep 00:07

Cycling Support in San Francisco and Vancouver – and not

by pricetags

sf

Over four-fifths of likely voters (83 percent) believe that bicycling is good for San Francisco, and that bicycling in the city should be comfortable and attractive to everyone from small children to seniors.

  • For the first time, a majority of San Franciscans (51 percent) report biking occasionally, and 31 percent report riding regularly, meaning a few times a month or more.
  • A supermajority of San Franciscans (72 percent) support restricting private autos on Market Street.
  • Most San Franciscans (56 percent) support dramatically increasing City spending on bike infrastructure from around 0.5 percent to eight percent of the City’s transportation budget.
  • Two-thirds of San Franciscans (66 percent) support building a network of cross-town bike lanes connecting every neighborhood in San Francisco, even at the expense of travel lanes and parking spots.
  • Twice as many San Francisco voters are likely to ride a bike on unprotected bike lanes (57 percent) than on streets without bike lanes (28 percent). Likely riders jump up to 65 percent on physically protected bike lanes.
  • Most voters (54 percent) would like to bike more frequently than they do presently.
  • With so many people reporting that biking is good for our city, and expressing the desire for better infrastructure and incorporating biking more into their lives, it is no surprise that a supermajority (68 percent) believe that City leaders are not doing enough to encourage biking.

The poll was conducted by David Binder Research, surveying 402 likely voters by cell phone and land line between Saturday, Aug. 20 and Tuesday, Aug. 23, and was commissioned by the San Francisco Bicycle Coalition. The margin of error is 4.9 percent. Findings include: …

 

PT: As a past NPA City Councillor, I continue to be amazed that the party has not tried to reposition itself on cycling issues, still aligning itself with whatever group is currently pissed off and dog-whistling to voters that cycling will not be a priority.

That’s most evident with the Park Board on which the NPA holds a majority.  Some of the most controversial projects fall in their jurisdiction – notably Kits Park.  But their indifference extends to other parks that are on the cycling network or are major destinations – especially Jericho and Stanley Parks.  There has been no upgrading of the cycling infrastructure in years – and no indication that there will be any time soon.

The most egregious example: Second Beach in Stanley Park, a major junction for the thousands of cyclists and pedestrians who are deliberately placed in conflict:

The worst case is  the point where the yellow line switches from being a centre line for cyclists to a separation indicator for peds and bikes, without any clear indication of what’s happening.  It’s been this way for years – and apparently no one at Parks cares very much.

ped-path1

And then, of course, there’s this at the entrance to Jericho:

jericho-3

The Park Board couldn’t make a stronger statement, could it?

Ask a representative about their approach, and you’ll hear about studies and plans and consultations.  But it’s also clear that there will be no political leadership, and that there is unlikely to be any action, much less a major commitment of resources, any time soon.

Which is odd.  Given the explosive growth in active transportation and the clear benefits, politicians today generally want to align themselves with this movement.  But more than that, why would the Park Board continue to maintain an unacceptable status quo as the quality of infrastructure is upgraded all around them?  Their failure to address deficiencies and outstanding conflicts will only become more apparent – and annoying.

Sure, it won’t be easy to deal with unhappy constituents who see paving parkland as an unacceptable and unnecessary intrusion (regardless of the success of the Seaside route through English Bay and the False Creek parks).  But the NPA prides itself on being the party that can get things done, and on doing it in a more balanced way than Vision.

And yet their indifference on this file only illustrates the opposite.


15 Sep 00:04

Google Nexus 6, Nexus 9 LTE Will be Updated to Android 7.0 in the ‘Coming Weeks’

by Evan Selleck
At the tail-end of August, a few different Nexus-branded devices, including the Nexus 6P and the Nexus 5X, began receiving their over-the-air (OTA) update to Android 7.0 Nougat. Continue reading →
15 Sep 00:04

Austin Mann’s iPhone 7 Plus Camera Review in Rwanda

by Federico Viticci

I consider the iPhone a computer with a camera more than a computer that makes phone calls. Therefore, Austin Mann's annual iPhone camera review is my favorite of the bunch. I've been linking them for the past couple of years, and I find Austin's approach always fascinating and well-presented.

This time, Austin has outdone himself. To properly test the iPhone 7 Plus' camera with optical zoom, they've flown to Rwanda in collaboration with Nat Geo Travel and Nat Geo Adventure to track gorillas and take close-up pictures, timelapses, test wide-color gamut photos, and more.

He writes:

As many of you know, in the past I've created this review in Iceland twice, Patagonia and Switzerland, but this year I wanted to really change things up. With indicators pointing toward possibilities of optical zoom I asked myself: where's the best place in the world to test optical zoom? Africa of course.

So this year, in collaboration with Nat Geo Travel + Nat Geo Adventure we’ve set out to get you the answers. I'm writing you from deep in the Nyungwe rain forest in southwest Rwanda… we've been tracking gorillas in the north, boating Lake Kivu in the west and running through tea plantations in the south… all with the iPhone 7 Plus in hand.

I've had a blast playing with the wide spectrum of new features and tech but as always, our mission is to find out the answer to one question:

How does all this new tech make my pictures (and videos) better than before?

The result is beautiful. The video "review" is a mini-documentary/short film about tracking down mountain gorillas, and it's 9 minutes long. Seeing how they found the gorillas brought a big smile to my face, and you can see how the zoom interface of the iPhone 7 Plus was useful for that purpose.

Watch it below, and go check out Austin's photos and summary of the experience here.

→ Source: austinmann.com

15 Sep 00:04

Why Do I Suddenly Have To Log In Now To Use The Graphics Card I’ve Had For Years?

by Kate Cox
mkalus shared this story from Consumerist.

Gaming can be, well, a kind of consumer-unfriendly industry. Players who build and upgrade their own PCs, though, usually expect a level of control over their experience that console gaming may not offer. And anything that changes that is not likely to go over well, as a change to certain Nvidia software is demonstrating.

PC gaming enthusiasts, unlike console players, get to pick what parts make their computer run. The graphics card, or GPU, is likely to be the most expensive single part most players put into a gaming PC. They run $200 – $700 each, and at their most basic are the key part that make games look pretty and go fast.

Tech company Nvidia is far and away the most popular maker of graphics cards among PC gamers. The Steam hardware survey, which is a reasonably decent proxy, currently finds about 58% of players are using Nvidia hardware. (ATI, Nvidia’s main competitor, comes in second place with a share of about 24%.)

When you install an Nvidia GPU into your computer, it comes with a piece of management software called the GeForce Experience that lets you access all the card’s many features and settings. It’s been bundled with Nvidia cards for years, but the most recent version — 3.0, automatically downloading to users’ systems now — comes with a major change that’s making some users unhappy: a mandatory login.

In order to use most features of the GeForce experience, Nvidia card owners now have to create an account (or link to their Facebook or Google profiles) and log in to the program with a username and password. Given that a GPU is a piece of hardware that works perfectly well offline and does not require an internet connection to do its job, this has struck some players as questionable at best. Especially as the software is specifically tied to optimizing the hardware on one specific PC — this is not a thing you will need to log into and access remotely or from another device.

So, being curious, we asked Nvidia why version 3.0 requires a login to use features that had previously been available to all users without logging in, and what the benefits to Nvidia and the consumer were from requiring the login now.

In response, a spokesperson for Nvidia told Consumerist, “Users with an account can take advantage of the latest GeForce Experience release features including GameStream pairing, Share technology, and more, as well as random prizes and giveaways. They can also leave feedback directly within the application as well.”

All of which is a well-phrased selling point, but not terribly useful for answering consumers’ questions.

However, a real answer is pretty much front and center. Unique user accounts sure are great for one big thing: marketing stuff.

Version 3.0 includes a big ol’ splash screen showing “several tiles that display details about GeForce news and features, partner games, and more when clicked on,” as PC World put it. That makes marketing stuff to you even more central to the Experience than it previously had been.

But any data collection comes with one big question: what of privacy?

The GeForce Experience FAQ splits the data the app collects into two pools: identifiable and aggregated. Only the aggregate data goes outside the company, Nvidia says: “GeForce Experience does not share any personally identifiable information outside the company. NVIDIA may share aggregate-level data with select partners, but does not share user-level data.”

The data it does collect is subject to Nvidia’s existing privacy policy, which does, of course, say that your information will be used for marketing purposes. While it will not sell your data, Nvidia’s policy says, “We may from time to time share your Personal Information with our business partners, resellers, affiliates, service providers, consulting partners and others in an effort to better serve you.”

When asked about collecting personal data, the Nvidia representative specified that the GeForce Experience collects data needed to recommend the correct driver update and optimal game settings — that’s mostly system-level data, including hardware configuration, operating system, language, installed games, game settings, and current driver version, and it’s all data that previous versions of GeForce would have needed to collect as well.

“We utilize multiple layers of security, both active and passive to protect our customer’s identity,” Nvidia said. “This includes active monitoring and blocking of traffic, logging, various levels of encryption, rapid incident response and remediation. Nvidia is fully compliant with both federal and local privacy regulations.”

In short: game publishers get aggregate data about how people are playing their games, and you eventually get some targeted ads. In this, your Nvidia software joins a long list of basically everything else that operates that way. The difference is, it didn’t work that way when consumers dropped $500 on a product, and opting out by choosing a new product has a very high cost.

The good news for Nvidia GPU owners, such as it is, is that while GeForce experience is convenient and centralized, it’s not actually mandatory to use. You can still download drivers directly from the website, choose optimal game performance settings using in-game menus, and set display settings using Windows options. And while the ability to run a framerate counter, upload, stream, or record directly from the GeForce experience is handy, it’s not the only software that can do so. PC gaming platforms like Steam, Origin, and even Xbox Live (in Windows 10) offer recording functions, Twitch or YouTube integration, and/or screenshot capabilities, as do third-party applications.





14 Sep 23:59

How RAW Changes iPhone Photography

by Federico Viticci

Ben McCarthy, writing for iMore:

Editing RAW files feels like a huge leap forward in terms of mobile photography: With iOS 10, the iPhone is evolving from a great camera for taking casual photos with into a capable professional tool. It still has plenty of limitations, but I suspect we've passed a tipping point.

But shooting while out and about is one thing. What about using the iPhone in a studio? I gathered together a couple of friends to do a little impromptu photoshoot to see how the iPhone would hold up.

Ben is the developer of Obscura, which I featured in my review yesterday because of its native RAW support on iOS 10. He makes some good points on the limitations and advantages of shooting RAW on iPhone.

→ Source: imore.com

14 Sep 23:58

Why all this talk about the “median” income?

by Josh Bernoff

The US Census Bureau released figures yesterday showing that, in 2015, the median annual household income in America increased by 5.2% over a year earlier. That’s an increase of $2,798, to $56,516 per year. Why talk about the “median” and how is that different from an average? I’ll explain. Put simply, when you’re talking about income, the … Continued
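
As a quick, made-up illustration of why the median is the better headline figure for income, compare the mean and the median of a small sample in which one household earns far more than the rest:

import statistics

# Hypothetical annual household incomes; the last one is an extreme outlier
incomes = [28000, 41000, 52000, 56500, 61000, 75000, 1500000]

print(statistics.mean(incomes))    # ~259,000 -- dragged up by the outlier
print(statistics.median(incomes))  # 56,500 -- closer to the "typical" household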

The post Why all this talk about the “median” income? appeared first on without bullshit.

14 Sep 23:57

Compilation à la mode

by Kristina Chodorow
Sundae from Black Tap, incidentally around the corner from the NYC Google office.

Bazel lets you set up various “modes” of compilation. There are several built-in (fast, optimized, debug) and you can define your own. The built in ones are:

  • Fast: build your program as quickly as possible. This is generally best for development (when you want a tight compile/edit loop) and is the default, when you don’t specify anything. Your build’s output is generated in bazel-out/local-fastbuild.
  • Optimized: code is compiled to run fast, but may take longer to build. You can get this by running bazel build -c opt //your:target and its output will be generated in bazel-out/local-opt. This mode is the best for code that will be deployed.
  • Debug: this leaves in symbols and generally optimizes code for running through a debugger. You can get this by running bazel build -c dbg //your:target and its output will be generated in bazel-out/local-dbg.

You don’t have to actually know where the build’s output is stored: Bazel will update the bazel-bin/bazel-genfiles symlinks for you automatically at the end of your build, so they’ll always point to the right bazel-out subdirectory.

Because each flavor’s output is stored in a different directory, each mode effectively has an entirely separate set of build artifacts, so you can get incremental builds when switching between modes. On the downside: when you build in a new mode for the first time, it’s essentially a clean build.

Note: -c is short for --compilation_mode, but everyone just says -c.

Defining a new mode

Okay, you can’t really define your own mode without tons of work, but you can create named sets of options (which is probably what you wanted anyway unless you’re writing your own toolchain).

For example, let’s say I generally have several flags I want to run with when I’m trying to debug a failing test. I can create a “gahhh” config in my ~/.bazelrc as follows:

test:gahhh --test_output=all
test:gahhh --nocache_test_results
test:gahhh --verbose_failures
test:gahhh -c dbg

The “test” part indicates what command this applies to (“build”, “query” or “startup” are other useful ones). The “:gahhh” names the config, and then I give the option. Now, when I get frustrated, I can run:

$ bazel test --config gahhh //my:target

and I don’t have to remember the four options that I want to use.
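
The same trick works for the other commands. As a hypothetical example (these particular flags aren’t from the post), a “build” config for address-sanitizer builds could be defined in ~/.bazelrc like this:

build:asan -c dbg
build:asan --copt=-fsanitize=address
build:asan --linkopt=-fsanitize=address

and then invoked with bazel build --config asan //my:target.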

14 Sep 23:57

Rogers’ CEO Guy Laurence says Canadian content should be funded on a ‘platform neutral basis’

by Patrick O'Rourke

During a presentation at the Canadian Club, Guy Laurence, Rogers president and CEO, as well as self-described ‘new Canadian,’ bestowed our nation with words of telecom and national branding wisdom.

While Laurence says he admires Canada’s “quiet patriotism,” he feels it’s time for our country to “step out of the shadows and into the light on the world stage.”

Rogers’ CEO believes that Canada has two distinct problems related to its brand. The first is that our country’s brand is not “well defined” in the world, and the second is that Canada is not ambitious enough when it comes to self-promotion.

Laurence goes on to discuss Canadian media and artist success stories like Vice, Justin Bieber and the forever dreamy Ryan Reynolds and Ryan Gosling, before delving into a world he’s more familiar with, the telecom industry.

“Leading media companies, like Rogers, also have a role to play. So we look forward to participating in the Minister’s upcoming consultation. I’m glad to see that her consultation paper is promoting Canadian content globally, and if it’s not clear from my remarks today, we support a funding model that exploits this huge opportunity,” writes Laurence.

“We’re asking the government to recognize that there is enough money in the system already. We don’t need more funds – we need to consolidate the alphabet soup of funds so we can reduce complexity and administrative costs,” he continued.

Rogers’ CEO describes a digital-first future where content isn’t bound by platforms and is instead distributed on a more neutral basis.

“We’re asking for content to be funded on a platform neutral basis… for content to be created for all distribution platforms, whether it’s a TV screen, a movie screen, or a smartphone screen. Consumers are going digital. Rogers is going digital. Canada needs to go digital. Content should end up anywhere and everywhere it makes sense.”

The Canadian Club is a club in Toronto that meets several times a month to hear lunchtime speeches given by invited guests from the fields of politics, law, business, the arts, the media, and other prominent fields.

To read Laurence’s entire Canadian Club speech check out this link.

Source: Rogers
14 Sep 23:56

Can Drupal outdo native applications?

by Dries

I've made no secret of my interest in the open web, so it won't come as a surprise that I'd love to see more web applications and fewer native applications. Nonetheless, many argue that "the future of the internet isn't the web" and that it's only a matter of time before walled gardens like Facebook and Google — and the native applications which serve as their gatekeepers — overwhelm the web as we know it today: a public, inclusive, and decentralized common good.

I'm not convinced. Native applications seem to be winning because they offer a better user experience. So the question is: can open web applications, like those powered by Drupal, ever match up to the user experience exemplified by native applications? In this blog post, I want to describe inversion of control, a technique now common in web applications and that could benefit Drupal's own user experience.

Native applications versus web applications

Using a native application — for the first time — is usually a high-friction, low-performance experience because you need to download, install, and open the application (Android's streamed apps notwithstanding). Once installed, native applications offer unique access to smartphone capabilities such as hardware APIs (e.g. microphone, GPS, fingerprint sensors, camera), events such as push notifications, and gestures such as swipes and pinch-and-zoom. Unfortunately, most of these don't have corresponding APIs for web applications.

A web application, on the other hand, is a low-friction experience upon opening it for the first time. While native applications can require a large amount of time to download initially, web applications usually don't have to be installed and launched. Nevertheless, web applications do incur the constraint of low performance when there is significant code weight or dozens of assets that have to be downloaded from the server. As such, one of the unique challenges facing web applications today is how to emulate a native user experience without the drawbacks that come with a closed, opaque, and proprietary ecosystem.

Inversion of control

In the spirit of open source, the Drupal Association invited experts from the wider front-end community to speak at DrupalCon New Orleans, including from Ember and Angular. Ed Faulkner, a member of the Ember core team and contributor to the API-first initiative, delivered a fascinating presentation about how Drupal and Ember working in tandem can enrich the user experience.

One of Ember's primary objectives is to demonstrate how web applications can be indistinguishable from native applications. And one of the key ideas of JavaScript frameworks like Ember is inversion of control, in which the client side essentially "takes over" from the server side by driving requirements and initiating actions. In the traditional page delivery model, the server is in charge, and the end user has to wait for the next page to be delivered and rendered through a page refresh. With inversion of control, the client is in charge, which enables fluid transitions from one place in the web application to another, just like native applications.

Before the advent of JavaScript and AJAX, distinct states in web applications could be defined only on the server side as individual pages and requested and transmitted via a round trip to the server, i.e. a full page refresh. Today, the client can retrieve application states asynchronously rather than depending on the server for a completely new page load. This improves perceived performance. I discuss the history of this trend in more detail in this blog post.

Through inversion of control, JavaScript frameworks like Ember provide much more than seamless interactions and perceived performance enhancements; they also offer client-side storage and offline functionality when the client has no access to the server. As a result, inversion of control opens a door to other features requiring the empowerment of the client beyond just client-driven interactions. In fact, because the JavaScript code is run on a client such as a smartphone rather than on the server, it would be well-positioned to access other hardware APIs, like near-field communication, as web APIs become available.

Inversion of control in end user experiences

Application-like animation using Ember and Drupal
When a user clicks a teaser image on the homepage of an Ember-enhanced Drupal.com, the page seamlessly transitions into the full content page for that teaser, with the teaser image as a reference point, even though the URL changes.

In response to our recent evaluation of JavaScript frameworks and their compatibility with Drupal, Ed applied the inversion of control principle to Drupal.com using Ember. Ed's goal was to enhance Drupal.com's end user experience with Ember to make it more application-like, while also preserving Drupal's editorial and rendering capabilities as much as possible.

Ed's changes are not in production on Drupal.com, but in his demo, clicking a teaser image causes it to "explode" to become the hero image of the destination page. Pairing Ember with Drupal in this way allows a user to visually and mentally transition from a piece of teaser content to its corresponding page via an animated transition between pages — all without a page refresh. The animation is very impressive and the animated GIF above doesn't do it full justice. While this transition across pages is similar to behavior found in native mobile applications, it's not currently possible out of the box in Drupal without extensive client-side control.

Rather than the progressively decoupled approach, which embeds JavaScript-driven components into a Drupal-rendered page, Ed's implementation inverts control by allowing Ember to render what is emitted by Drupal. Ember maintains control over how URLs are loaded in the browser by controlling URLs under its responsibility; take a look at Ed's DrupalCon presentation to better understand how Drupal and Ember interact in this model.

These impressive interactions are possible using the Ember plugin Liquid Fire. Fewer than 20 lines of code were needed to build the animations in Ed's demo, much like how SDKs for native mobile applications provide easy-to-implement animations out of the box. Of course, Ember isn't the only tool capable of this kind of functionality. The RefreshLess module for Drupal by Wim Leers (Acquia) also uses client-side control to enable navigating across pages with minimal server requests. Unfortunately, RefreshLess can't tap into Liquid Fire or other Ember plugins.

Inversion of control in editorial experiences

In-place editing using Ember and Drupal
In CardStack Editor, an editorial interface with transitions and animations is superimposed onto the content page in a manner similar to outside-in, and the editor benefits from an in-context, in-preview experience that updates in real time.

We can apply this principle of inversion of control not only to the end user experience but also to editorial experiences. The last demos in Ed's presentation depict CardStack Editor, a fully decoupled Ember application that uses inversion of control to overlay an administrative interface to edit Drupal content, much like in-place editing.

CardStack Editor communicates with Drupal's web services in order to retrieve and manipulate content to be edited, and in this example Drupal serves solely as a central content repository. This is why the API-first initiative is so important; it enables developers to use JavaScript frameworks to build application-like experiences on top of and backed by Drupal. And with the help of SDKs like Waterwheel.js (a native JavaScript library for interacting with Drupal's REST API), Drupal can become a preferred choice for JavaScript developers.
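
To make the “central content repository” idea concrete, here is a hedged sketch of reading content over Drupal’s web services from any HTTP client (Python’s requests in this case). It assumes a Drupal 8 site with the core REST module enabled; the site URL and node ID are placeholders, and Waterwheel.js wraps this same kind of request natively in JavaScript:

import requests

BASE = "https://example.com"  # placeholder Drupal 8 site with core REST enabled

# Fetch a node as JSON
resp = requests.get(BASE + "/node/1", params={"_format": "json"})
node = resp.json()

# Drupal serializes each field as a list of value objects
print(node["title"][0]["value"])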

Inversion of control as the rule or exception?

Those of you following the outside-in work might have noticed some striking similarities between outside-in and the work Ed has been doing: both use inversion of control. The primary purpose of our outside-in interfaces is to provide for an in-context editing experience in which state changes take effect live before your eyes; hence the need for inversion of control.

Thinking about the future, we have to answer the following question: does Drupal want inversion of control to be the rule or the exception? We don't have to answer that question today or tomorrow, but at some point we should.

If the answer to that question is "the rule", we should consider embracing a JavaScript framework like Ember. The constellation of tools we have in jQuery, Backbone, and the Drupal AJAX framework makes using inversion of control much harder to implement than it could be. With a JavaScript framework like Ember as a standard, implementation could accelerate by becoming considerably easier. That said, there are many other factors to consider, including the costs of developing and hosting two codebases in different languages.

In the longer term, client-side frameworks like Ember will allow us to build web applications which compete with and even exceed native applications with regard to perceived performance, built-in interactions, and a better developer experience. But these frameworks will also enrich interactions between web applications and device hardware, potentially allowing them to react to pinch-and-zoom, issue native push notifications, and even interact with lower-level devices.

In the meantime, I maintain my recommendation of (1) progressive decoupling as a means to begin exploring inversion of control and (2) a continued focus on the API-first initiative to enable application-like experiences to be developed on Drupal.

Conclusion

I'm hopeful Drupal can exemplify how the open web will ultimately succeed over native applications and walled gardens. Through the API-first initiative, Drupal will provide the underpinnings for web and native applications. But is it enough?

Inversion of control is an important principle that we can apply to Drupal to improve how we power our user interactions and build robust experiences for end users and editors that rival native applications. Doing so will enable us to enhance our user experience long into the future in ways that we may not even be able to think of now. I encourage the community to experiment with these ideas around inversion of control and consider how we can apply them to Drupal.

Special thanks to Preston So for contributions to this blog post and to Angie Byron, Wim Leers, Kevin O'Leary, Matt Grill, and Ted Bowman for their feedback during its writing.

14 Sep 23:55

Interesting practicalities

by russell davies

It's tomorrow! We kick off at 7pm. Aiming to be done by 9.30.

Speakers:

Please come and find me when you arrive, so I know you're there. Then all you'll need to do is come up to the stage about 15 minutes before your talk so we can sort out tech etc. The timetable will be roughly the same as this one, with a few minor tweaks.

Attendees:

It's going to be hot. We're not providing any drinks, so you might want to bring some water.

You might find a small beaker of liquid on your chair when you get there. Don't drink this! It's required for the first talk (don't worry, it's not Alby's talk about polonium poisoning).

Bring a pencil!

I'll confess to feeling nervous about tomorrow. Back when we first did these things, expectations were very low and we only just managed to clear the bar. It was remarkable enough that some random person was able to just put on a conference without any apparent ability or experience. That seems more commonplace these days, which is a good thing, but I hope you're not all expecting too much. Ah well, we'll see. Tomorrow!

14 Sep 23:55

Who says Microsoft doesn't listen?

by Volker Weber
One of the ongoing feedback items we’ve heard is how the apps that come preinstalled with Windows will reinstall after each upgrade – particularly noticeable for our Insiders that receive multiple flights per month. We’ve heard your feedback, and starting with Build 14926, when your PC updates it will check for apps that have been uninstalled, and it will preserve that state once the update has completed. This means if you uninstall any of the apps included in Windows 10 such as the Mail app or Maps app, they will not get reinstalled after you update to a newer build going forward.

No more 3D Builder, Get Skype, Get Office, Solitaire, Minesweeper etc. We will see if they can keep Xbox off, because that is a bit harder to get rid of.

Thank you, Microsoft.

More >

[Thanks, David]

14 Sep 23:54

Apple’s first iPhone 7 commercial shines the spotlight on its camera

by Ian Hardy

Apple’s new iPhone 7 and iPhone 7 Plus are set to officially launch in Canada on September 16th.

Teasing the phones’ arrival on store shelves, the Cupertino, California-based company has just released its first commercial promoting the iPhone 7 and 7 Plus. The 33-second spot highlights the 7’s new, upgraded camera features, stereo speakers, and water resistance.

Usually, Apple commercials are upbeat and full of vivid colours. This time around, however, Apple opted for a dark theme, likely in reference to the new Jet Black and matte Black iterations of the iPhone, as well as the handset’s ability to snap great photos in low-light conditions.

Related: iPhone 7 and iPhone 7 Plus review

14 Sep 23:50

Recommended on Medium: "A new way to catch up in Slack" in Several People Are Typing — The Official Slack Blog

Breeze through your messages in less time and fewer clicks with All Unreads

Continue reading on Several People Are Typing — The Official Slack Blog »

14 Sep 23:50

The Overselling of Open

Jim Groom, bavatuesdays, Sept 17, 2016


Jim Groom revisits a post from Michael Caulfield and points to "that first sentence 'but institutions, they are what make these things last' that I deeply question." He asks, "who, when all was said and done, saved more than a decade of web history in the form of Geocities from deletion at the hands of Yahoo! in 2010? Was it other corporations? Higher ed? The government? Nope, it was dozens of rogue archivists, technologists, artists, and librarians from around the world that cared." Exactly. See also his video, "Can we imagine tech infrastructure as an OER?"

[Link] [Comment]

14 Sep 22:48

Always Be Shipping

by Mark Finkle

We all want to ship as fast as possible, while making sure we can control the quality of our product. Continuous deployment means we can ship at any time, right? Well, we still need to balance the unstable and stable parts of the codebase.

Web Deploys vs Application Deploys

The ability to control changes in your stable codebase is usually the limiting factor in how quickly and easily you can ship your product to people. For example, web products can ship frequently because it’s somewhat easy to control the state of the product people are using. When something is updated on the website, users get the update when loading the content or refreshing the page. With mobile applications, it can be harder to control the version of the product people are using. After pushing an update to the store, people need to update the application on their devices. This takes time and it’s disruptive. It’s typical for several versions of a mobile application to be active at any given time.

It’s common for mobile application development to use time-based deployment windows, such as 2 or 4 weeks. Every few weeks, the unstable codebase is promoted to the stable codebase and tasks (features and bug fixes) which are deemed stable are made ready to deploy. Getting ready to deploy could mean running a short Beta, to test the release candidate with a larger, more varied, test group.

It’s important to remember, these deployment windows are not development sprints! They are merely opportunities to deploy stable code. Some features or bug fixes could take many weeks to complete. Once complete, the code can be deployed at the next window.

Tracking the Tasks

Just because you use 2 week deployment windows doesn’t mean you can really ship a quality product every 2 weeks. The deployment window is an artificial framework we create to add some structure to the process. At the core, we need to be able to track the tasks. What is a task? Let’s start with something that’s easy to visualize: a feature.

What work goes into getting a feature shipped?

  • Planning: Define and scope the work.
  • Design: Design the UI and experience.
  • Coding: Do the implementation. Iterate with designers & product managers.
  • Reviewing: Examine & run the code, looking for problems. Code is ready to land after a successful review. Otherwise, it goes back to coding to fix issues.
  • Testing: Test that the feature is working correctly and nothing broke in the process. Defects might require sending the work back to development.
  • Push to Stable: Once implemented, tested and verified, the code can be moved to the stable codebase.

In the old days, this was a waterfall approach. These days, we can use iterative, overlapping processes. A flow might crudely look like this:

[Diagram: feature cycle]

Each of these steps takes a non-zero amount of time. Some have to be repeated. The goal is to create a feature that has the desired behavior and at a known level of quality. Note that landing the code is not the final step. The work can only be called complete when it’s been verified as stable enough to ship.

Bug fixes are similar to features. The flow might look like this:

[Diagram: bug-fix cycle]

Imagine you have many of these flows happening at the same time. All ongoing work happens on the unstable codebase; as each piece is completed, tested, and verified at an acceptable level of quality, it can be moved to the stable codebase. Try very hard to keep work directly on the stable codebase to a minimum – usually just disabling/enabling code or backing out unstable code.
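
One common way to do that disabling and enabling is a runtime feature flag; here is a minimal sketch, where the flag name and the composer functions are hypothetical and real teams often drive flags from build configuration or a remote service:

  // A minimal runtime feature-flag sketch (all names are hypothetical).
  const featureFlags = {
    newComposer: false, // still unstable, so it stays off in the stable codebase
  };

  function isEnabled(flag) {
    return featureFlags[flag] === true;
  }

  function renderNewComposer() { return 'new composer (unstable)'; }
  function renderClassicComposer() { return 'classic composer (stable)'; }

  function renderComposer() {
    if (isEnabled('newComposer')) {
      // Work-in-progress path: only exercised once the flag is flipped on.
      return renderNewComposer();
    }
    // Verified, shipping path.
    return renderClassicComposer();
  }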

Crash Landings

One practice I’ve seen happen on development teams is attempting to crash land code right before a deployment window. This is bad for a few reasons:

  • It forces many code reviews to happen simultaneously across the team, leading to delays since code review is an iterative cycle.
  • It forces large amounts of code to be merged during a short time period, likely leading to merge conflicts – leading to more delays.
  • It forces a lot of testing to happen at the same time, leading to backlogs and delays. Especially since testing, fixing and verifying is an iterative cycle.

The end result is anticlimactic for everyone: code landed at a deployment window is almost never shipped in that window. In fact, the delays caused by crash landing lead to a lot of code missing the deployment window.

[Diagram: crash landing]

Smooth Landings

A different approach is to spread out the code landings. Allow code reviews and testing/fixing cycles to happen in a more balanced manner. More code is verified as stable and can ship in the deployment window. Code that is not stable is disabled via build-time or runtime flags or, in extreme cases, backed out of the stable codebase.

[Diagram: smooth landing]

This balanced approach also reduces the stress that accompanies rushing code reviews and testing. The process becomes more predictable and even enjoyable. Teams thrive in healthy environments.

Once you get comfortable with deployment windows and sprints being very different things, you could even start getting more creative with deployments. Could you deploy weekly? I think it’s possible, but the limiting factor becomes your ability to create stable builds, test and verify those builds and submit those builds to the store. Yes, you still need to test the release candidates and react to any unexpected outcomes from the testing. Testing the release candidates with a larger group (Beta testing) will usually turn up issues not found in other testing. At larger scales, many things thought to be only hypothetical become reality and might need to be addressed. Allowing for this type of beta testing improves quality, but may limit how short a deployment window can be.

Remember, it’s difficult to undo or remove an unexpected issue from a mobile application user population. Users are just stuck with the problem until they get around to updating to a fixed version.

I’ve seen some companies use short deployment window techniques for internal test releases, so it’s certainly possible. Automation has to play a key role, as does tracking and triaging the bugs. Risk assessment is a big part of shipping software. Know your risks, ship your software.

14 Sep 22:48

Copyleft and data: databases as poor subject

by Luis Villa

tl;dr: Open licensing works when you strike a healthy balance between obligations and reuse. Data, and how it is used, is different from software in ways that change that balance, making reasonable compromises in software (like attribution) suddenly become insanely difficult barriers.

In my last post, I wrote about how database law is a poor platform to build a global public copyleft license on top of. Of course, whether you can have copyleft in data only matters if copyleft in data is a good idea. When we compare software (where copyleft has worked reasonably well) to databases, we’ll see that databases are different in ways that make even “minor” obligations like attribution much more onerous.

Card Puncher from the 1920 US Census.

How works are combined

In software copyleft, the most common scenarios to evaluate are merging two large programs, or copying one small file into a much larger program. In this scenario, understanding how licenses work together is fairly straightforward: you have two licenses. If they can work together, great; if they can’t, then you don’t go forward, or, if it matters enough, you change the license on your own work to make it work.

In contrast, data is often combined in three ways that are significantly different than software:

  • Scale: Instead of a handful of projects, data is often combined from hundreds of sources, so doing a license conflicts analysis if any of those sources have conflicting obligations (like copyleft) is impractical. Peter Desmet did a great job of analyzing this in the context of an international bio-science dataset, which has 11,000+ data sources.
  • Boundaries: There are some cases where hundreds of pieces of software are combined (like operating systems and modern web services) but they have “natural” places to draw a boundary around the scope of the copyleft. Examples of this include the kernel-userspace boundary (useful when dealing with the GPL and Linux kernel), APIs (useful when dealing with the LGPL), or software-as-a-service (where no software is “distributed” in the classic sense at all). As a result, no one has to do much analysis of how those pieces fit together. In contrast, no natural “lines” have emerged around databases, so either you have copyleft that eats the entire combined dataset, or you have no copyleft. ODbL attempts to manage this with the concept of “independent” databases and produced works, but after this recent case I’m not sure even those tenuous attempts hold as a legal matter anymore.
  • Authorship: When you combine a handful of pieces of software, most of the time you also control the licensing of at least one of those pieces of software, and you can adjust the licensing of that piece as needed. (Widely-used exceptions to this rule, like OpenSSL, tend to be rare.) In other words, if you’re writing a Linux kernel driver, or a WordPress theme, you can choose the license to make sure it complies. That’s not necessarily the case in data combinations: if you’re making use of large public data sets, you’re often combining many other data sources where you aren’t the author. So if some of them have conflicting license obligations, you’re stuck.

How attribution is managed

Attribution in large software projects is painful enough that lawyers have written a lot on it, and open-source operating systems vendors have built somewhat elaborate systems to manage it. This isn’t just a problem for copyleft: it is also a problem for the supposedly easy case of attribution-only licenses.

Now, again, instead of dozens of authors, often employed by the same copyright-owner, imagine hundreds or thousands. And instead of combining these pieces in basically the same way each time you build the software, imagine that every time you run a different query, you have to provide different attribution data (because the relevant slices of data may have different sources or authors). That’s data!

The least-bad “solution” here is to (1) tag every field (not just data source) with licensing information, and (2) have data-reading software create new, accurate attribution information every time a new view into the data is created. (I actually know of at least one company that does this internally!) This is not impossible, but it is a big burden on data software developers, who must now include a lawyer in their product design team. Most of them will just go ahead and violate the licenses instead, pass the burden on to their users to figure out what the heck is going on, or both.
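
As a rough sketch of that least-bad approach (the field names, sources, and licenses below are made up), each field carries its own provenance, and the attribution list is rebuilt from whichever fields a given view actually used:

  // A minimal sketch: every field is tagged with its source and license,
  // and attribution is regenerated per view. All names are hypothetical.
  const row = {
    name:       { value: 'Springfield', source: 'National Gazetteer', license: 'CC-BY-4.0' },
    population: { value: 30720,         source: '1920 US Census',     license: 'Public domain' },
  };

  // Build a de-duplicated attribution list for exactly the fields a view used.
  function attributionFor(fieldsUsed) {
    const seen = new Set();
    for (const field of fieldsUsed) {
      seen.add(`${field.source} (${field.license})`);
    }
    return [...seen];
  }

  // A view that only shows population needs only the census attribution:
  console.log(attributionFor([row.population]));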

Who creates data

Most software is either under a very standard and well-understood open source license, or is produced by a single entity (or often even a single person!) that retains copyright and can adjust that license based on their needs. So if you find a piece of software that you’d like to use, you can either (1) just read their standard FOSS license, or (2) call them up and ask them to change it. (They might not change it, but at least they can if they want to.) This helps make copyleft problems manageable: if you find a true incompatibility, you can often ask the source of the problem to fix it, or fix it yourself (by changing the license on your software).

Data sources typically can’t solve problems by relicensing, because many of the most important data sources are not authored by a single company or single author. In particular:

  • Governments: Lots of data is produced by governments, where licensing changes can literally require an act of the legislature. So if you do anything that goes against their license, or two different governments release data under conflicting licenses, you can’t just call up their lawyers and ask for a change.
  • Community collaborations: The biggest open software relicensing that’s ever been done (Mozilla) required getting permission from a few thousand people. Successful online collaboration projects can have 1-2 orders of magnitude more contributors than that, making relicensing even harder. Wikidata solved this the right way: by going with CC0.

What is the bottom line?

Copyleft (and, to a lesser extent, attribution licenses) works when the obligations placed on a user are in balance with the benefits those users receive. If they aren’t in balance, the materials don’t get used. Ultimately, if the data does not get used, our egos feel good (we released this!) but no one benefits, and regardless of the license, no one gets attributed and no new material is released. Unfortunately, even minor requirements like attribution can throw the balance out of whack. So if we genuinely want to benefit the world with our data, we probably need to let it go.

So what to do?

So if data is legally hard to build a license for, and the nature of data makes copyleft (or even attribution!) hard, what to do? I’ll go into that in my next post.