Shared posts

07 Sep 23:06

Who sponsors Drupal development?

by Dries

There exist millions of Open Source projects today, but many of them aren't sustainable. Scaling Open Source projects in a sustainable manner is difficult. A prime example is OpenSSL, which plays a critical role in securing the internet. Despite its importance, the entire OpenSSL development team is relatively small, consisting of 11 people, 10 of whom are volunteers. In 2014, security researchers discovered Heartbleed, a critical security bug that exposed millions of websites. Like OpenSSL, most Open Source projects fail to scale their resources. Notable exceptions are the Linux kernel, Debian, Apache, Drupal, and WordPress, which have foundations, multiple corporate sponsors, and many contributors that help these projects scale.

We (Dries Buytaert, the founder and project lead of Drupal and co-founder and Chief Technology Officer of Acquia, and Matthew Tift, a Senior Developer at Lullabot and Drupal 8 configuration system co-maintainer) believe that the Drupal community has a shared responsibility to build Drupal and that those who get more from Drupal should consider giving more. We examined commit data to help understand who develops Drupal, how much of that work is sponsored, and where that sponsorship comes from. We will illustrate that the Drupal community is far ahead in understanding how to sustain and scale the project. We will show that the Drupal project is a healthy project with a diverse community of contributors. Nevertheless, in Drupal's spirit of always striving to do better, we will also highlight areas where our community can and should do better.

Who is working on Drupal?

In the spring of 2015, after proposing ideas about giving credit and discussing various approaches at length, Drupal.org added the ability for people to attribute their work to an organization or customer in the Drupal.org issue queues. Maintainers of Drupal themes and modules can award issue credits to people who help resolve issues with code, comments, design, and more.

Example issue credit on Drupal.org
A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

Drupal.org's credit system captures all the issue activity on Drupal.org. This is primarily code contributions, but also includes some (but not all) of the work on design, translations, documentation, etc. It is important to note that contributing in the issues on Drupal.org is not the only way to contribute. There are other activities—for instance, sponsoring events, promoting Drupal, providing help and mentoring—important to the long-term health of the Drupal project. These activities are not currently captured by the credit system. Additionally, we acknowledge that parts of Drupal are developed on GitHub and that credits might get lost when those contributions are moved to Drupal.org. For the purposes of this post, however, we looked only at the issue contributions captured by the credit system on Drupal.org.

What we learned is that in the 12-month period from July 1, 2015 to June 30, 2016 there were 32,711 issue credits—both to Drupal core as well as all the contributed themes and modules—attributed to 5,196 different individual contributors and 659 different organizations.

Despite the large number of individual contributors, a relatively small number do the majority of the work. Approximately 51% of the contributors involved got just one credit. The top 30 contributors (the top 0.5% of contributors) account for over 21% of the total credits, indicating that these individuals put an incredible amount of time and effort into developing Drupal and its contributed modules:

Rank Username Issues
1 dawehner 560
2 DamienMcKenna 448
3 alexpott 409
4 Berdir 383
5 Wim Leers 382
6 jhodgdon 381
7 joelpittet 294
8 heykarthikwithu 293
9 mglaman 292
10 drunken monkey 248
11 Sam152 237
12 borisson_ 207
13 benjy 206
14 edurenye 184
15 catch 180
16 slashrsm 179
17 phenaproxima 177
18 mbovan 174
19 tim.plunkett 168
20 rakesh.gectcr 163
21 martin107 163
22 dsnopek 152
23 mikeryan 150
24 jhedstrom 149
25 xjm 147
26 hussainweb 147
27 stefan.r 146
28 bojanz 145
29 penyaskito 141
30 larowlan 135
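The concentration figures quoted above (the share of contributors with a single credit, and the share of total credits held by the top 30) are straightforward to compute from a raw credit log. A minimal sketch, using a tiny made-up log rather than the actual Drupal.org export:

```python
from collections import Counter

# Hypothetical credit log: one entry per issue credit, keyed by username.
# A real analysis would use the Drupal.org credit-system export instead.
credits = (
    ["dawehner"] * 560
    + ["DamienMcKenna"] * 448
    + ["occasional_a", "occasional_b"]  # one credit each
)

def one_credit_share(credit_log):
    """Fraction of contributors who received exactly one credit."""
    counts = Counter(credit_log)
    return sum(1 for n in counts.values() if n == 1) / len(counts)

def top_n_share(credit_log, n):
    """Share of all credits held by the n most-credited contributors."""
    counts = Counter(credit_log)
    top = counts.most_common(n)
    return sum(c for _, c in top) / len(credit_log)
```

With the toy log above, half the contributors have exactly one credit and the two prolific contributors hold almost all of the credits; run against the real export, the same two functions would reproduce the 51% and 21% figures.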

How much of the work is sponsored?

As mentioned above, from July 1, 2015 to June 30, 2016, 659 organizations contributed code to Drupal.org. Drupal is used by more than one million websites. The vast majority of the organizations behind these Drupal websites never participate in the development of Drupal; they use the software as it is and do not feel the need to help drive its development.

Technically, Drupal started out as a 100% volunteer-driven project. But nowadays, the data suggests that the majority of the code on Drupal.org is sponsored by organizations in Drupal's ecosystem. For example, of the 32,711 commit credits we studied, 69% of the credited work is "sponsored".

We then looked at the distribution of how many of the credits are given to volunteers versus given to individuals doing "sponsored work" (i.e. contributing as part of their paid job):

Contributions top range

Looking at the top 100 contributors, for example, 23% of their credits are the result of contributing as volunteers and 56% of their credits are attributed to a corporate sponsor. The remainder, roughly 21% of the credits, are not attributed. Attribution is optional so this means it could either be volunteer-driven, sponsored, or both.

As can be seen on the graph, the ratio of volunteer versus sponsored credits doesn't meaningfully change as we look beyond the top 100—the only thing that changes is that more credits are not attributed. This might be explained by the fact that occasional contributors might not be aware of or understand the credit system, or could not be bothered with setting up organizational profiles for their employer or customers.

As shown in jamadar's screenshot above, a credit can be marked as both volunteer and sponsored at the same time. This could be the case when someone does the minimum required work to satisfy the customer's need, but uses his or her spare time to add extra functionality. We can also look at the number of credits that are exclusively volunteer credits. Of the 7,874 credits marked as volunteer, 43% (3,376 credits) had only the volunteer box checked and 57% (4,498 credits) were also partially sponsored. These 3,376 credits are one of our best metrics for measuring volunteer-only contributions. This suggests that only 10% of the 32,711 commit credits we examined were contributed exclusively by volunteers. This number is a stark contrast to the 12,888 credits that were "purely sponsored", which account for 39% of the total credits. In other words, there were roughly four times as many "purely sponsored" credits as "purely volunteer" credits.
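The "purely volunteer", "purely sponsored", and unattributed buckets are disjoint combinations of the same two checkboxes. A minimal sketch of that bucketing, with made-up records (the field names are illustrative, not the Drupal.org schema):

```python
# Each issue credit carries two independent checkboxes: volunteer and
# sponsored. These records are hypothetical, one per combination.
credit_log = [
    {"volunteer": True,  "sponsored": False},  # purely volunteer
    {"volunteer": True,  "sponsored": True},   # both boxes checked
    {"volunteer": False, "sponsored": True},   # purely sponsored
    {"volunteer": False, "sponsored": False},  # not attributed
]

purely_volunteer = sum(1 for c in credit_log if c["volunteer"] and not c["sponsored"])
purely_sponsored = sum(1 for c in credit_log if c["sponsored"] and not c["volunteer"])
unattributed = sum(1 for c in credit_log if not c["volunteer"] and not c["sponsored"])
```

Because the two flags are independent, the four buckets partition the log exactly, which is why the 3,376 + 4,498 split above sums back to the 7,874 volunteer-flagged credits.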

When we looked at the 5,196 users, rather than credits, we found somewhat different results. A similar percentage of all users had exclusively volunteer credits: 14% (741 users). But the percentage of users with exclusively sponsored credits is only 50% higher: 21% (1,077 users). Thus, when we look at the data this way, we find that users who only do sponsored work tend to contribute quite a bit more than users who only do volunteer work.

None of these methodologies are perfect, but they all point to a conclusion that most of the work on Drupal is sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal. We believe there is a healthy ratio between sponsored and volunteer contributions.

Who is sponsoring the work?

Because we established that most of the work on Drupal is sponsored, we know it is important to track and study which organizations contribute to Drupal. Despite 659 different organizations contributing to Drupal, approximately 50% of them received 4 credits or fewer. The top 30 organizations (roughly the top 5%) account for about 29% of the total credits, which suggests that the top 30 companies play a crucial role in the health of the Drupal project. The graph below shows the top 30 organizations and the number of credits they received between July 1, 2015 and June 30, 2016:

Contributions top organizations

While not immediately obvious from the graph above, different types of companies are active in Drupal's ecosystem, and we propose the following categorization to discuss our ecosystem.

Category Description
Traditional Drupal businesses Small-to-medium-sized professional services companies that make money primarily using Drupal. They typically employ less than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Lullabot (shown on graph) or Chapter Three (shown on graph).
Digital marketing agencies Larger full-service agencies that have marketing led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They are typically larger, with the larger agencies employing thousands of people. Examples are Sapient (shown on graph) or AKQA.
System integrators Larger companies that specialize in bringing together different technologies into one solution. Examples are Accenture, TATA Consultancy Services, Capgemini or CI&T.
Technology and infrastructure companies Examples are Acquia (shown on graph), Lingotek (shown on graph), BlackMesh, RackSpace, Pantheon or Platform.sh.
End-users Examples are Pfizer (shown on graph), Examiner.com (shown on graph) or NBC Universal.

Most of the top 30 sponsors are traditional Drupal companies. Sapient (120 credits) is the only digital marketing agency showing up in the top 30. No system integrator shows up in the top 30. The first system integrator is CI&T, which ranked 31st with 102 credits. As far as system integrators are concerned, CI&T is a smaller player with between 1,000 and 5,000 employees. Other system integrators with credits are Capgemini (43 credits), Globant (26 credits), and TATA Consultancy Services (7 credits). We didn't see any code contributions from Accenture, Wipro or IBM Global Services. We expect these will come as most of them are building out Drupal practices. For example, we know that IBM Global Services already has over 100 people doing Drupal work.

Contributions by organization type

When we look beyond the top 30 sponsors, we see that roughly 82% of the code contribution on Drupal.org comes from the traditional Drupal businesses. About 13% of the contributions come from infrastructure and software companies, though that category is mostly dominated by one company, Acquia. This means that, Acquia aside, the technology and infrastructure companies, digital marketing agencies, system integrators and end-users are not meaningfully contributing code to Drupal.org today. In an ideal world, the pie chart above would be sliced in equal sized parts.

How can we explain that imbalance? We believe the two biggest reasons are: (1) Drupal's strategic importance and (2) the level of maturity with Drupal and Open Source. Several of the traditional Drupal agencies have been involved with Drupal for 10 years and almost entirely depend on Drupal. Given both their expertise and dependence on Drupal, they are the most likely to look after Drupal's development and well-being. These organizations are typically recognized as Drupal experts and sought out by organizations that want to build a Drupal website. Contrast this with most of the digital marketing agencies and system integrators, who have the size to work with a diversified portfolio of content management platforms and are just getting started with Drupal and Open Source. They deliver digital marketing solutions and aren't necessarily sought out for their Drupal expertise. As their Drupal practices grow in size and importance, this could change, and when it does, we expect them to contribute more. Right now many of the digital marketing agencies and system integrators have little or no experience with Open Source, so it is important that we motivate them to contribute and then teach them how to contribute.

There are two main business reasons for organizations to contribute: (1) it improves their ability to sell and win deals and (2) it improves their ability to hire. Companies that contribute to Drupal tend to promote their contributions in RFPs and sales pitches to win more deals. Contributing to Drupal also results in being recognized as a great place to work for Drupal experts.

We also should note that many organizations in the Drupal community contribute for reasons that would not seem to be explicitly economically motivated. More than 100 credits were sponsored by colleges or universities, such as the University of Waterloo (45 credits). More than 50 credits came from community groups, such as the Drupal Bangalore Community and the Drupal Ukraine Community. Other nonprofits and government organizations that appeared in our data include the Drupal Association (166 credits), National Virtual Library of India (25 credits), Center for Research Libraries (20 credits), and Welsh Government (9 credits).

Infrastructure and software companies

Infrastructure and software companies play a different role in our community. These companies are less reliant on professional services (building Drupal websites) and primarily make money from selling subscription-based products.

Acquia, Pantheon and Platform.sh are venture-backed Platform-as-a-Service companies born out of the Drupal community. Rackspace and AWS are public companies hosting thousands of Drupal sites each. Lingotek offers cloud-based translation management software for Drupal.

Contributions by technology companies

The graph above suggests that Pantheon and Platform.sh have barely contributed code on Drupal.org during the past year. (Platform.sh only became an independent company 6 months ago after they split off from CommerceGuys.) The chart also does not reflect sponsored code contributions on GitHub (such as drush), Drupal event sponsorship, and the wide variety of value that these companies add to Drupal and other Open Source communities.

Consequently, these data show that the Drupal community needs to do a better job of enticing infrastructure and software companies to contribute code to Drupal.org. The Drupal community has a long tradition of encouraging organizations to share code on Drupal.org rather than keep it behind firewalls. While the spirit of the Drupal project cannot be reduced to any single ideology (not every organization can or will share their code), we would like to see organizations continue to prioritize collaboration over individual ownership. Our aim is not to criticize those who do not contribute, but rather to help foster an environment worthy of contribution.

End users

We saw two end-users in the top 30 corporate sponsors: Pfizer (158 credits) and Examiner.com (132 credits). Other notable end-users that are actively giving back are Workday (52 credits), NBC Universal (40 credits), the University of Waterloo (45 credits) and CARD.com (33 credits). The end users that tend to contribute to Drupal use Drupal for a key part of their business and often have an internal team of Drupal developers.

Given that there are hundreds of thousands of Drupal end-users, we would like to see more end-users among the top 30 sponsors. We recognize that a lot of digital agencies don't want, or are not legally allowed, to attribute their customers. We hope that will change as Open Source adoption continues to grow.

Given the vast amount of Drupal users, we believe encouraging end-users to contribute could be a big opportunity. Being credited on Drupal.org gives them visibility in the Drupal community and recognizes them as a great place for Open Source developers to work.

The uneasy alliance with corporate contributions

As mentioned above, when community-driven Open Source projects grow, the need grows for organizations to help drive their development. This almost always creates an uneasy alliance between volunteers and corporations.

This theory played out in the Linux community well before it played out in the Drupal community. The Linux project is 25 years old now and has seen a steady increase in the number of corporate contributors for roughly 20 years. While Linux companies like Red Hat and SUSE rank highly on the contribution list, so do non-Linux-centric companies such as Samsung, Intel, Oracle and Google. The major theme in this story is that all of these corporate contributors were using Linux as an integral part of their business.

The 659 organizations that contribute to Drupal (which include corporations) number roughly three times the organizations that sponsor development of the Linux kernel, "one of the largest cooperative software projects ever attempted". In fairness, Linux has a different ecosystem than Drupal. The Linux business ecosystem includes various large organizations (Red Hat, Google, Intel, IBM and SUSE) for whom Linux is very strategic. As a result, many of them employ dozens of full-time Linux contributors and invest millions of dollars in Linux each year.

In the Drupal community, Acquia has had people dedicated full-time to Drupal starting nine years ago when it hired Gábor Hojtsy to contribute to Drupal core full-time. Today, Acquia has about 10 developers contributing to Drupal full-time. They work on core, contributed modules, security, user experience, performance, best practices, and more. Their work has benefited untold numbers of people around the world, most of whom are not Acquia customers.

In response to Acquia’s high level of participation in the Drupal project, as well as to the number of Acquia employees that hold leadership positions, some members of the Drupal community have suggested that Acquia wields its influence and power to control the future of Drupal for its own commercial benefit. But neither of us believe that Acquia should contribute less. Instead, we would like to see more companies provide more leadership to Drupal and meaningfully contribute on Drupal.org.

Who is sponsoring the top 30 contributors?

Rank Username Issues Volunteer Sponsored Not specified Sponsors
1 dawehner 560 84.1% 77.7% 9.5% Drupal Association (182), Chapter Three (179), Tag1 Consulting (160), Cando (6), Acquia (4), Comm-press (1)
2 DamienMcKenna 448 6.9% 76.3% 19.4% Mediacurrent (342)
3 alexpott 409 0.2% 97.8% 2.2% Chapter Three (400)
4 Berdir 383 0.0% 95.3% 4.7% MD Systems (365), Acquia (9)
5 Wim Leers 382 31.7% 98.2% 1.8% Acquia (375)
6 jhodgdon 381 5.2% 3.4% 91.3% Drupal Association (13), Poplar ProductivityWare (13)
7 joelpittet 294 23.8% 1.4% 76.2% Drupal Association (4)
8 heykarthikwithu 293 99.3% 100.0% 0.0% Valuebound (293), Drupal Bangalore Community (3)
9 mglaman 292 9.6% 96.9% 0.7% Commerce Guys (257), Bluehorn Digital (14), Gaggle.net, Inc. (12), LivePerson, Inc (11), Bluespark (5), DPCI (3), Thinkbean, LLC (3), Digital Bridge Solutions (2), Matsmart (1)
10 drunken monkey 248 75.4% 55.6% 2.0% Acquia (72), StudentFirst (44), epiqo (12), Vizala (9), Sunlime IT Services GmbH (1)
11 Sam152 237 75.9% 89.5% 10.1% PreviousNext (210), Code Drop (2)
12 borisson_ 207 62.8% 36.2% 15.9% Acquia (67), Intracto digital agency (8)
13 benjy 206 0.0% 98.1% 1.9% PreviousNext (168), Code Drop (34)
14 edurenye 184 0.0% 100.0% 0.0% MD Systems (184)
15 catch 180 3.3% 44.4% 54.4% Third and Grove (44), Tag1 Consulting (36), Drupal Association (4)
16 slashrsm 179 12.8% 96.6% 2.8% Examiner.com (89), MD Systems (84), Acquia (18), Studio Matris (1)
17 phenaproxima 177 0.0% 94.4% 5.6% Acquia (167)
18 mbovan 174 7.5% 100.0% 0.0% MD Systems (118), ACTO Team (43), Google Summer of Code (13)
19 tim.plunkett 168 14.3% 89.9% 10.1% Acquia (151)
20 rakesh.gectcr 163 100.0% 100.0% 0.0% Valuebound (138), National Virtual Library of India (NVLI) (25)
21 martin107 163 4.9% 0.0% 95.1%
22 dsnopek 152 0.7% 0.0% 99.3%
23 mikeryan 150 0.0% 89.3% 10.7% Acquia (112), Virtuoso Performance (22), Drupalize.Me (4), North Studio (4)
24 jhedstrom 149 0.0% 83.2% 16.8% Phase2 (124), Workday, Inc. (36), Memorial Sloan Kettering Cancer Center (4)
25 xjm 147 0.0% 81.0% 19.0% Acquia (119)
26 hussainweb 147 2.0% 98.6% 1.4% Axelerant (145)
27 stefan.r 146 0.7% 0.7% 98.6% Drupal Association (1)
28 bojanz 145 2.1% 83.4% 15.2% Commerce Guys (121), Bluespark (2)
29 penyaskito 141 6.4% 95.0% 3.5% Lingotek (129), Cocomore AG (5)
30 larowlan 135 34.1% 63.0% 16.3% PreviousNext (85), Department of Justice & Regulation, Victoria (14), amaysim Australia Ltd. (1), University of Adelaide (1)

We observe that the top 30 contributors are sponsored by 45 organizations. This kind of diversity is aligned with our desire not to see Drupal controlled by a single organization. The top 30 contributors and the 45 organizations are from many different parts of the world and work with customers large and small. We could still benefit from more diversity, though. The top 30 lacks digital marketing agencies, large system integrators and end-users, all of whom could contribute meaningfully to making Drupal better for themselves and others.

Evolving the credit system

The credit system gives us quantifiable data about where our community's contributions come from, but that data is not perfect. Here are a few suggested improvements:

  1. We need to find ways to recognize non-code contributions as well as code contributions outside of Drupal.org (i.e. on GitHub). Lots of people and organizations spend hundreds of hours putting together local events, writing documentation, translating Drupal, mentoring new contributors, and more—and none of that gets captured by the credit system.
  2. We'd benefit by finding a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might have gotten a credit for 30 minutes of work. We could, for example, consider the issue credit data in conjunction with Git commit data regarding insertions, deletions, and files changed.
  3. We could try to leverage the credit system to encourage more companies, especially those that do not contribute today, to participate in large-scale initiatives. Dries presented some ideas two years ago in his DrupalCon Amsterdam keynote and Matthew has suggested other ideas, but we are open to more suggestions on how we might bring more contributors into the fold using the credit system.
  4. We could segment out organization profiles between end users and different kinds of service providers. Doing so would make it easier to see who the top contributors are in each segment and perhaps foster more healthy competition among peers. In turn, the community could learn about the peculiar motivations within each segment.

Like Drupal the software, the credit system on Drupal.org is a tool that can evolve, but that ultimately will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements. In highlighting the organizations that sponsor work on Drupal.org, we hope to provoke responses that help evolve the credit system into something that incentivizes businesses to sponsor more work and that allows more people the opportunity to participate in our community, learn from others, teach newcomers, and make positive contributions. We view Drupal as a productive force for change and we wish to use the credit system to highlight (at least some of) the work of our diverse community of volunteers, companies, nonprofits, governments, schools, universities, individuals, and other groups.

Conclusion

Our data shows that Drupal is a vibrant and diverse community, with thousands of contributors, that is constantly evolving and improving the software. While here we have examined issue credits mostly through the lens of sponsorship, in future analyses we plan to consider the same issue credits in conjunction with other publicly-disclosed Drupal user data, such as gender identification, geography, seasonal participation, mentorship, and event attendance.

Our analysis of the Drupal.org credit data concludes that most of the contributions to Drupal are sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal.

As a community, we need to understand that a healthy Open Source ecosystem is a diverse ecosystem that includes more than traditional Drupal agencies. The traditional Drupal agencies and Acquia contribute the most but we don't see a lot of contribution from the larger digital marketing agencies, system integrators, technology companies, or end-users of Drupal—we believe that might come as these organizations build out their Drupal practices and Drupal becomes more strategic for them.

To grow and sustain Drupal, we should support those that contribute to Drupal, and find ways to get those that are not contributing involved in our community. We invite you to help us figure out how we can continue to strengthen our ecosystem.

We hope to repeat this work in 1 or 2 years' time so we can track our evolution. Special thanks to Tim Lehnen (Drupal Association) for providing us the credit system data and supporting us during our research.

07 Sep 23:03

Construction Begins on Seaside Greenway II

by Ken Ohrn

The second phase of this project on Point Grey Road gets underway this week as city crews prepare to expand sidewalks, install benches and public fountains, and carry out sewer and water upgrades.

Presumably, there will be some opposition to this.

pgr-ii

Point Grey Road at the end of Phase I, June 2014.

Local news coverage HERE.  City overview on Seaside Greenway Phase II HERE. Broader overview of the Seaside Greenway HERE.

Excerpt of City overview: 

Phase 2:

  • Improvements to walking conditions, public realm, expanded green space and connections to waterfront parks along Point Grey Road
  • Construction to be coordinated with sewer replacement
  • Seaside Greenway Completion:  Completes a critical 2 km gap in the Seaside Greenway, running from the Vancouver Convention Centre to Spanish Banks.

07 Sep 21:16

Today in Transit: 153 years, 7 months, 28 days ago

by pricetags

From John Graham:

On 10 January 1863, the Metropolitan Railway opens the world's first underground railway, between Paddington (then called Bishop's Road) and Farringdon Street.  This is what the first ride looked like:

first-ride

And transit riders think they’re in cattle cars today!


07 Sep 21:15

Snapchat’s rumoured augmented reality device could be one step closer to production

by Rose Behar

News of a stealthy hardware project by Snapchat first hit the internet in March when CNET published a report that said the startup was recruiting wearable technology experts.

The theory put forward at the time was that the device would be a pair of augmented reality glasses built on technology Snapchat obtained in its 2014 acquisition of Vergence Labs, a startup that made a Google Glass-type device with video recording capabilities. Backing up that idea was a picture of CEO Evan Spiegel on vacation published by Business Insider in which he appears to be wearing a prototype model — a pair of black sunglasses with circular cameras at the edges and thick frames to hide the battery.

evan spiegel snapchat glasses

Now the Financial Times has spotted Snapchat’s entry into the Bluetooth Special Interest Group (SIG), indicating that Snapchat’s hardware device is now one step closer to becoming a reality, as well as the fact that it will likely connect to smartphones.

There are no strong indications of what exactly Snapchat’s potential augmented reality glasses might be used for, but, knowing Snapchat, it could be for something as simple as just seeing its filters — for instance, the popular rainbow barf option — overlaid on your friends’ faces in real life. 

A hardware device could be a good strategic move for Snapchat, which is currently facing stiff competition from Instagram; Instagram recently released a feature called Stories that copies much of Snapchat's core functionality.

Instagram has a larger user-base of 300 million daily active users, while Snapchat purportedly has 150 million, according to an early June report from Bloomberg.

Image source: AKM-GSI/Business Insider

Related: Snapchat launches Geostickers in select cities, not yet available in Canada

07 Sep 21:15

5G: Will it be Sliced... or Hacked?

by Dean Bubley

I've been giving a lot of thought recently to 5G - the technology, major use-cases, likely business models, timelines and implications for adjacent sectors such as IoT.

5G fits into both my own TelcoFuturism analyst/advisory work on the intersections of multiple technologies in telecoms, and also my secondary role working with STL Partners as Associate Director and lead analyst of its Future of the Network research programme (link). 

A philosophical split is emerging among operators and vendors:
  •  "One network to rule them all" idealists
  • "Make it functional ASAP & add other stuff later" pragmatists
There are various nuances, middle-ground thinkers and shades of grey, but in general the former tend to be companies driven by the core and services domains, and the latter have a radio/access bias.

The core-network group tends to view things through the lens of NFV, and with a 2020 target date. It sees a world that spans diverse 5G use-cases from smartphones to sensors to vehicle-to-vehicle communications, taking in police cars and replacing FTTH and WiFi along the way. It wants to use sophisticated MANO (management and orchestration) layers and next-gen OSS to create network "slices", supposedly from "end to end". Such slices would, in theory, be optimised for different business models, or verticals, or virtual networks - leaning heavily on policy-management, differentiated QoS and assorted other big service-layer machinery. Mobile edge-computing would, ideally, extend the operator's cloud infrastructure into a distributed, Amazon-beating "fog". Often, terms like "HetNet" will be added in, with the notion that 5G can absorb (and assimilate) WiFi, LPWAN, corporate networks and anything else into a unified service-based fabric.

The other group is driven by more pragmatic concerns - "better faster cheaper" upgrades to 4G in a 2018-19 timeframe (and 2017 trials), replacing DSL in rural areas where fibre is too expensive but cable is growing, more spectral efficiency to squeeze more usage out of frequency allocations, lower-cost mobile broadband for emerging markets, better cell-edge coverage, and (ideally) lower power consumption for the RAN. Perhaps unsurprisingly, they focus more on the nuts and bolts of radio propagation in different bands, different modulation mechanisms, frame structures needed to optimise latencies - as well as practicalities such as small-cell backhaul. Business model discussions are secondary - or at least decoupled - although obviously there is a large IoT element again. The core network may well remain the same for some time, and 5G access will not necessarily imply NFV/SDN deployment as a pre-requisite. (I've spoken with CTOs who are quite happy without virtualisation any time soon).

In my view, it is the latter group that better understands the "hard" technology compromises that need to be made, as well as the timing considerations around deployment, spectrum availability - and the implied competition from diverse substitute technologies like SigFox, gigabit-speed cable, near-ubiquitous WiFi and even next-gen satellite (assuming no more SpaceX Falcons have unfortunate "anomalies"). A key concern is how to squeeze the ultra-low latency capabilities into a network architecture that also supports low-cost, mass IoT deployment.

Conversely, the other camp is often guilty of wishful-thinking. "Let's control flying public-safety robots with millisecond latency & QoS via MEC nodes & 6GHz+ licensed-band 5G from totally virtualised & sliced service creation & activation platforms". This would probably work as the basis of a 2023 Michael Bay movie, but faces quite a few obstacles as a near-term mobile operator strategy. [Note: this concept is only a very mild exaggeration of some of the things I've had suggested to me by 5G zealots].

There are various practical and technical issues that limit the sci-fi visions coming true, but I want to just note a couple of them here:
  • It is far from clear that there will be enough ultra-performance end-points to justify having the millisecond-latency tail wag the 5G dog. I'd guesstimate a realistic 100 million-or-so device target, out of a universe of 10-20bn connections. Unless the related ARPU is huge (and the margin after all the costly QoS/slicing gubbins is added in), it's not justifiable if it delays the wider market or adds extra complexity. Given that 100m would also likely be thin-sliced further with vertical-specific requirements (cars, emergency, medical, drones, machinery etc.), the scale argument looks even weaker.
  • A significant brake on NFV at the moment is the availability of well-trained professionals and developers. As one telco exec put it to me recently "we don't have the resources to make the architects' dreams come true". And this is for current NFV uses and architectures. Now consider the multi-way interdependencies between NFV + 5G + verticals + Cloud/MEC. The chances of telcos and their vendors building large and capable "5G slice" teams rapidly are very small. What would a "5G Slice development kit" look like? How exactly would an IoT specialist create a 5G-enabled robot anti-collision system for a manufacturing plant with arc-welders generating radio interference, for example? (And let's leave aside for now the question of what 5G NFV slices look like to regulators concerned about neutrality....)
In other words, I think that the "slice" concept is being over-hyped. It sounds great in principle, but it's being driven by the same core-network folk who've been trying to sell "differentiated QoS" in mobile for 15+ years. It took 7+ years to even get zero value-add VoLTE phone calls to work well on 4G with QoS, when that service had been specced and defined to within an inch of its life by committee. The convenient IoT/NFV/5G developer SDK and corresponding QoSPaaS isn't appearing any time soon.

That said, I'm sure that there will be some basic forms of network-slicing that appear - perhaps for the public-safety networks that are moving to 4G/5G whether it's truly ready or not. But the vision of 10, 100 or 1000 differentiated 5G slices all working as a nicely-oiled and orchestrated machine is for the birds.

Instead, I think the right metaphor is hacking not slicing. I don't mean hack in the malware/blackhat-in-a-basement sense, but in terms of taking one bit of technology and tuning/customising it and creating derivatives to serve specific purposes.

We already see this with 4G. There's a mainstream version of LTE (and LTE-A and enhancements), but there's also PS-LTE for public safety, NB-IoT for low-end IoT, and LTE-U/LAA/MuLTEfire for unlicensed spectrum. Those are essentially "hacks" - they're quite different in important ways. They benefit from the LTE mother-spec's scale effects and maturity - and probably would not have evolved as standalone concepts without it. In a way, the original railway GSM-R version of GSM was a similar hack.


I think 5G will need something similar. As well as the so-called "New Radio", there is also work being done on a next-gen core - but there may well also be spin-off variants and hacks. This could allow the mainstream technology to avoid some possibly-intractable compromises, and could also be a way to bring in vertical specialists that currently think the mobile industry doesn't "get" their requirements - as per my recent post that telecoms can't just be left to the telcos (link).

As usual, the biggest risk to the mobile industry is strategic over-reach. If it persists in trying to define 5G as an all-encompassing monolithic architecture, with the hope of replacing all fixed and private networks, it will fail. The risk is that if it tries to create a jack-of-all-trades, it will likely end up as master-of-none. 5G has huge potential - but also needs a dose of pragmatism, given that it is running alongside a variety of adjacent technologies that look like potential disruptors.

Ignore the sneers that SigFox is just 2016-era WiMAX and look at the ever-present use of 3rd-party WiFi as a signpost - and the emergence of WiGig. Look too at the threat SD-WAN poses to MPLS and NFV-powered NaaS in fixed-line enterprise networks (link) - an illustration of the power of software in subverting telco-standardised business models. This time around, non-3GPP wireless is "serious" - especially where it leans on IEEE and ethernet.

In its fullest version, the "slice" concept is far too grandiose and classically 1990s-era telco-esque. Hacking is much more Internet/IETF-style "rough consensus and running code". It will win.

A forthcoming report on the Roadmap for 5G will be published as part of STL Partners' research stream on the Future of the Network soon. Please contact STL or myself (information AT disruptive-analysis DOT com) for more details, or for inquiries about custom advisory work and speaking engagements on 5G, NFV, LPWAN and related themes.
07 Sep 21:15

University of Waterloo develops dedicated hub for smart vehicle research

by Jessica Vomiero

The University of Waterloo will officially unveil its Green and Intelligent Automotive (GAIA) Research Facility tomorrow.

The facility will be located entirely in a renovated space in the Engineering 3 building on campus. The lab will be the latest addition to the Waterloo Centre for Automotive Research (WatCAR). Once opened, the research centre will be the largest of its kind in Canada.

Statements released to MobileSyrup claim that the research centre will contain tools such as dynamometers as well as a rolling road to simulate real world driving.

The purpose of this centre is to “test and refine innovations for the next generation of smart, clean vehicles under one roof.”

The movement towards clean and autonomous transportation has been active across Canada and around the world. Other Canadian universities such as the University of Ottawa and the University of British Columbia have been recognized for their research contributions to the smart car initiative.

Furthermore, GM Canada announced in July its intentions to hire 1,000 engineers to develop connected, electric and autonomous cars through partnerships with Canadian universities.

Image credit: JasonParis

Related: The man behind the MacBook Air is now overseeing Apple’s self-driving car project

07 Sep 21:15

A Short Guide for Students Interested in a Statistics PhD Program

This summer I had several conversations with undergraduate students seeking career advice. All were interested in data analysis and were considering graduate school. I also frequently receive requests for advice via email. We have posted on this topic before, for example here and here, but I thought it would be useful to share this short guide I put together based on my recent interactions.

It’s OK to be confused

When I was a college senior I didn’t really understand what Applied Statistics was nor did I understand what one does as a researcher in academia. Now I love being an academic doing research in applied statistics. But it is hard to understand what being a researcher is like until you do it for a while. Things become clearer as you gain more experience. One important piece of advice is to carefully consider advice from those with more experience than you. It might not make sense at first, but I can tell today that I knew much less than I thought I did when I was 22.

Should I even go to graduate school?

Yes. An undergraduate degree in mathematics, statistics, engineering, or computer science provides a great background, but some more training greatly increases your career options. You may be able to learn on the job, but note that a masters can be as short as a year.

A masters or a PhD?

If you want a career in academia or as a researcher in industry or government you need a PhD. In general, a PhD will give you more career options. If you want to become a data analyst or research assistant, a masters may be enough. A masters is also a good way to test out if this career is a good match for you. Many people do a masters before applying to PhD Programs. The rest of this guide focuses on those interested in a PhD.

What discipline?

There are many disciplines that can lead you to a career in data science: Statistics, Biostatistics, Astronomy, Economics, Machine Learning, Computational Biology, and Ecology are examples that come to mind. I did my PhD in Statistics and got a job in a Department of Biostatistics. So this guide focuses on Statistics/Biostatistics.

Note that once you finish your PhD you have a chance to become a postdoctoral fellow and further focus your training. By then you will have a much better idea of what you want to do and will have the opportunity to choose a lab that closely matches your interests.

What is the difference between Statistics and Biostatistics?

Short answer: very little. I treat them as the same in this guide. Long answer: read this.

How should I prepare during my senior year?

Math

Good grades in math and statistics classes are almost a requirement. Good GRE scores help and you need to get a near perfect score in the Quantitative Reasoning part of the GRE. Get yourself a practice book and start preparing. Note that to survive the first two years of a statistics PhD program you need to prove theorems and derive relatively complicated mathematical results. If you can’t easily handle the math part of the GRE, this will be quite challenging.

When choosing classes note that the area of math most related to your stat PhD courses is Real Analysis. The area of math most used in applied work is Linear Algebra, specifically matrix theory including understanding eigenvalues and eigenvectors. You might not make the connection between what you learn in class and what you use in practice until much later. This is totally normal.

If you don’t feel ready, consider doing a masters first. But also, get a second opinion. You might be being too hard on yourself.

Programming

You will be using a computer to analyze data so knowing some programming is a must these days. At a minimum, take a basic programming class. Other computer science classes will help especially if you go into an area dealing with large datasets. In hindsight, I wish I had taken classes on optimization and algorithm design.

Know that learning to program and learning a computer language are different things. You need to learn to program. The choice of language is up for debate. If you only learn one, learn R. If you learn three, learn R, Python and C++.

Knowing Linux/Unix is an advantage. If you have a Mac try to use the terminal as much as possible. On Windows get an emulator.

Writing and Communicating

My biggest educational regret is that, as a college student, I underestimated the importance of writing. To this day I am correcting that mistake.

Your success as a researcher greatly depends on how well you write and communicate. Your thesis, papers, grant proposals and even emails have to be well written. So practice as much as possible. Take classes, read works by good writers, and practice. Consider starting a blog even if you don’t make it public. Also note that in academia, job interviews will involve a 50 minute talk as well as several conversations about your work and future plans. So communication skills are also a big plus.

But wait, why so much math?

The PhD curriculum is indeed math heavy. Faculty often debate the possibility of changing the curriculum. But regardless of differing opinions on what is the right amount, math is the foundation of our discipline. Although it is true that you will not directly use much of what you learn, I don’t regret learning so much abstract math because I believe it positively shaped the way I think and attack problems.

Note that after the first two years you are pretty much done with courses and you start on your research. If you work with an applied statistician you will learn data analysis via the apprenticeship model. You will learn the most, by far, during this stage. So be patient. Watch these two Karate Kid scenes for some inspiration.

What department should I apply to?

The top 20-30 departments are practically interchangeable in my opinion. If you are interested in applied statistics make sure you pick a department with faculty doing applied research. Note that some professors focus their research on the mathematical aspects of statistics. By reading some of their recent papers you will be able to tell. An applied paper usually shows data (not simulated) and motivates a subject area challenge in the abstract or introduction. A theory paper shows no data at all or uses it only as an example.

Can I take a year off?

Absolutely. Especially if it’s to work in a data related job. In general, maturity and life experiences are an advantage in grad school.

What should I expect when I finish?

You will have many, many options. The demand for your expertise is great and growing. As a result there are many high-paying options. If you want to become an academic I recommend doing a postdoc. Here is why. But there are many other options as we describe here and here.

07 Sep 21:15

Why the headphone jack must die


Wednesday morning, Apple (AAPL) will unveil the iPhone 7 and 7 Plus. As you’ve probably heard, they won’t have headphone jacks.

Yes, that’s right: The headphone jack is going away, starting now.

And not just on the iPhone. In fact, Apple’s not even first. Moto led the revolution in the US with its gorgeous new Moto Droid Z phones—which have no headphone jacks. In China, LeEco and other companies are already omitting the jack. Other brands worldwide will be following suit.

The Moto Droid Z phones have already ditched the jack.

I can see why you might find this news shocking and upsetting. Eliminate the headphone jack? That’s insane! We need the headphone jack! How are we supposed to listen to our music, our YouTube videos, our Facebook clips? Are we supposed to just throw away the expensive headphones we’ve bought?

Whoa there, folks. You will still be able to use headphones. The electronics companies are eliminating only the round 3.5-millimeter jack, not the ability to listen.

You’ll still have three options for listening through headphones or earbuds:

  • Wirelessly. Sooner or later, everything goes wireless: Internet connections, file transfers, even power charging. And already, sales of wireless Bluetooth headphones are growing faster than wired ones for the first time. The convenience and sound quality of Bluetooth buds have been steadily improving, and they’ll only get better when Bluetooth 5.0 comes out at year’s end.
  • Using Apple’s earbuds. Apple will include new earbuds with the iPhone that plug into the Lightning (charging) jack. Other companies already make headphones that plug straight into the Lightning jack, too. On Android phones, you’ll plug the included earbuds into the USB-C jack.
  • Using an adapter. If you have a favorite pair of older ‘phones, you can plug them into the Lightning or USB-C jack with a little adapter. (Apple includes one in the iPhone 7 box; you can buy additional ones for $9 each.) Some will have splitters so that you can charge your phone and plug in headphones simultaneously.

Nobody loves adapters, of course—unless there’s a really good reason for them. In this case, there are at least two.

Reason 1: Age and bulk

The 3.5-millimeter jack is the oldest technology that’s still in your phone. This connector debuted with the transistor radio in the early 1960s; it was, for example, on the Sony EFM-117J radio, which came out in 1964. This is the audio connector of the 8-track tape player (1967-ish) and the Sony Walkman (1979).

This is not cutting-edge technology.

In short, the jack that everyone’s whining about is 52 years old.

As a result, it’s bulky—and in a phone, bulk = death.

“The device makers would love to get rid of that jack. It makes your phone thicker than it has to be,” Brad Saunders told me last fall.

He’s the Intel executive who led the charge to develop USB-C, which is quickly replacing the standard USB and Micro USB connectors on new phones, tablets, and laptops from Microsoft (MSFT), Google (GOOG), Apple (on the 12-inch MacBook), Samsung, and others. More specifically, Saunders sits at the nexus of 600 electronics companies—and knows what’s coming.

He points out that the 3.5-millimeter jack, by today’s standards, is huge on the inside. The cylinder that accommodates your headphone jack is now among the bulkiest components of your phone!

Inside the phone, the headphone-jack assembly is ridiculously bulky by today's standards.

The headphone jack is what’s preventing phones from getting any thinner. It’s the limiting factor.

A lot of people really love thin phones. But if you don’t care about thinness, you can put it another way: The headphone jack is what’s preventing phone batteries from getting bigger. You do care about battery life, right?

Fact is, jacks and connectors come and go. And every time there’s a transition, we, the people, wail and moan. “Don’t move my cheese!” we cry. (I’ll cop to it: I’ve sometimes been among them.)

But if you look back, you can see how foolish some of our foot-stomping was. We screamed bloody murder when Apple eliminated the floppy drive, and again when it replaced the SCSI and ADB jacks with USB. Face it: We, the people, really don’t make good product designers.

I mean, check out the back of this 1985 Mac Plus. Can you spot the only connector that hasn’t been replaced by something smaller, faster, and more efficient? Is this what we want our smartphones to look like?

Reason #2: Sound quality

Your music is digital. All of it: The songs you buy, the songs you stream.

Alas, the 3.5-millimeter jack is analog.

Your phone contains a cheap consumer digital-to-analog converter, whose job it is to convert the signal output from your digital music files to your ancient analog headphone jack. So no matter how much sound quality is locked away in those files, by the time it reaches your headphones, you’ve lost some audio quality along the way.

In the post-headphone jack era, your music will remain digital until it reaches the headphones, which can have a much nicer converter. You’ll skip over that analog conversion business—and get better-sounding audio.

The headphone jack must die

There may be other reasons to get rid of the headphone jack, too. Maybe it’ll be easier to make our phones waterproof. Maybe it’ll lower the cost of the phones, and goose the reliability.

But for most people, better battery life, thinner phones, and improved audio quality are reasons enough.

The biggest legitimate worry about the post-3.5-millimeter era is that we’ll lose compatibility. You won’t be able to plug my Android earbuds into your iPhone, or whatever. You’ll need one adapter for Lightning devices, and one for USB-C devices like Android phones. (Or, more realistically, you’ll have your Lightning earbuds and one adapter for Android phones, or vice versa.)

Well, Bluetooth is already a universal standard, so there’s that. And for wired headphones, maybe, if we’re lucky, Apple will get smart and move to the sensationally great USB-C that most Android phone makers are using.

In the meantime, let the presses roll. The 3.5-millimeter jack: Dead at 52 after a long, productive life.

David Pogue is the founder of Yahoo Tech; here’s how to get his columns by email. On the Web, he’s davidpogue.com. On Twitter, he’s @pogue. On email, he’s poguester@yahoo.com. He welcomes non-toxic comments in the Comments below.

Read more from Yahoo Finance on Apple’s September 2016 keynote:

Why I’m buying the new Apple Watch

Why I’ll never ever want an Apple Watch

Why the iPhone headphone jack must die

07 Sep 21:14

LG V20 Specs

by Rajesh Pandey
LG has just unveiled the V20, its latest flagship handset and multimedia powerhouse. Coming with a 5.7-inch screen, the V20 sports plenty of unique features that make it stand out from other devices in the market.
07 Sep 21:12

Twitter Favorites: [TheRobertDayton] I've worked on both the downtown east side of Vancouver and the financial district of Toronto. The financial district is far more depressing

Robert Dayton @TheRobertDayton
I've worked on both the downtown east side of Vancouver and the financial district of Toronto. The financial district is far more depressing
07 Sep 21:12

Twitter Favorites: [chapter_three] 8x8 Case Study: Cutting costs by giving users a high-quality, self-serve knowledge base. https://t.co/OFqs3XMetV https://t.co/3a9wAFbhs1

Chapter Three @chapter_three
8x8 Case Study: Cutting costs by giving users a high-quality, self-serve knowledge base. j.mp/2bRkB8r pic.twitter.com/3a9wAFbhs1
07 Sep 21:12

Twitter Favorites: [bmann] Repeat after me: Slack is not a chat system. If that’s what you clone, you won’t get anywhere. https://t.co/OzYDxMQE5T

Boris Mann @bmann
Repeat after me: Slack is not a chat system. If that’s what you clone, you won’t get anywhere. twitter.com/Techmeme/statu…
07 Sep 15:06

Vancouver’s new immigrants not here to make money but to find a better quality of life — and policymakers haven’t caught up

by Frances Bula

As some of you sharper readers may have noticed, I’ve become interested in our significant new bloc of immigrants, those people from mainland China.

(Okay, I’ve always been interested. I got an Asia Pacific Foundation fellowship in 1990 that allowed me to live in China for three months, another different fellowship that took me to Hong Kong briefly in 1994, and I’ve watched the migrations from Hong Kong, Taiwan, and mainland China ever since, occasionally reporting on them)

There are so many people writing about this new group, but the overall coverage has been strange and dehumanizing. No one ever actually talks to any of these new immigrants or tries to understand them. They’re just “investors” here to “park their money.” Or they’re outright criminals.

Undoubtedly there are people like that from China. The legitimate stories about crimes or abuses deserve to be covered and some other reporters are doing that. Good on them.

But I’m interested in the people who are coming here, why they’re here and what they make of their life in Canada. I wrote a big story about a month ago that was the result of several months of talking to more than a dozen people and trying to get a handle on their lives in China and here. (It’s here.)

It’s a little strange that more reporters haven’t done some of this. Usually that’s a first move in journalism. If there’s an interesting sub-group in town, you go out and talk to people and find out who they are. I know some fellow reporters haven’t because they’re worried about exposing this group to the blasts of hatred that unmistakably proliferate on social media. I’m hoping more people will start to do more reporting on this new group of immigrants (about 150,000 — three times the number from Taiwan) in future.

Even at the universities, there isn’t much exploration going on that I know of.  I asked UBC geography professor David Ley, who did wonderful, sensitive, and empathetic research on the new immigrants arriving from Hong Kong and Taiwan in the 1990s (much of it in his Millionaire Migrants book), if he knew of anyone doing research on this new group. He didn’t.

But, over in sociology, it turned out someone was working on something.

My story today is a far-too-condensed summary of a new study, where UBC PhD student Jing Zhao interviewed almost three dozen people who were about to immigrate or had immigrated to Vancouver, and sociology professor Nathanael Lauster analyzed and co-wrote the results. (The full 31-page study is here, for those wanting more details or source material to quibble with my reporting.)

They found, as I had, that some in this group, despite their privileges (they’re usually well educated and are comfortable financially, if not the billionaires and fuerdai so beloved of many reporters), see themselves as refugees from China, with its rigid education system, terrible air and water pollution, dicey food quality, and restrictive policies on having children.

And they don’t care that much about starting new businesses or getting jobs here because that’s not why they came here. It’s a turnaround from the way many, many academics and policymakers think about immigration, which is usually seen as being strictly about improving economic life. As Lauster says, it’s about time we understand that and maybe adapt our thinking about who these new trans-national immigrants are.


07 Sep 15:06

How to Recruit

by rands

From a recruiting perspective, the best engineering manager I’ve worked with established her reputation with two hires. It went like this:

ME: “We need to build an iOS team, and while we have talented engineers, we don’t have time to train the current team on iOS, it’ll be faster to hire.”

HER: “Great, who should we hire?”

ME: “Here’s the perfect profile. We’ll never get him, but he’s an incredible, well-known iOS engineer who is not only productive but also a phenomenal teacher. He’d be a perfect seed for the team. We need an engineer like him.”

HER: “Why not hire him?”

ME: “You’ll never get him. Everyone is throwing everything at him.”

Three months later, the long-shot hire I thought we had no chance of getting signed an offer letter. Two months later, same story: I mentioned an unattainable hire, which was promptly followed by the hiring of that specific engineer.

You think there is a trick. You think we threw huge amounts of money at these engineers – we didn’t. Standard compensation packages. You think we promised an impossible role – we didn’t. Build the first version of an iOS application with a talented group of engineers.

There is no trick other than carving out time every single day to do the job of recruiting.

The Recruiting Rules for Engineering

Let’s start with the ground rules. For every open job on your team, you need to spend one hour a day on recruiting-related activities. Cap that investment at 50% of your time. No open reqs? There’s still important and ongoing work you need to do on a regular basis that I’ll describe below.

Take a minute to digest that prior paragraph because it might be a shock to a great many engineering managers out there. 50% of my time? Yup. But we have a fully functional and talented recruiting organization. That’s super and will make your life better, but 50% of your time still stands. Why? I’m glad you asked.

On the list of work you can do to build and maintain a healthy and productive engineering team, the work involved in discovering, recruiting, selling, and hiring the humans for your team is quite likely the most important work you can do. The humans on your team are not only responsible for all the work; they are the heartbeat of the culture. We spend a lot of time talking about culture in high technology, but the simple fact is the culture is built and cared for by the humans who do the work. Your ability to shape the culture is a function of your ability to hire a diverse set of humans who are going to be additive to that culture.

Let’s begin.

A Recruiting Primer

A good way to think about your recruiting work is to delve into how the recruiting process fits together. Here it is:

[Funnel chart: the recruiting pipeline for The Rands Software Consortium]

This is the hypothetical funnel chart for The Rands Software Consortium, and we’re hiring! I love funnel charts because they help frame multiple lenses of information into a single digestible view.

  • Applications – humans who have either applied for a role or were sourced by an internal or external party.
  • Screened – humans who made it past a first round screen process.
  • Qualified – humans who made it through a more critical screening process. Think coding challenge or technical phone screen designed to gather more signal.
  • Interviews – humans who entered the formal interview process.
  • Onsites – humans who were in the building for an interview.
  • Offers – humans who received an offer.
  • Hires – humans who accepted their offer.

This fake graph is for roughly six months. The “In Stage” number shows you how long on average the candidate is spending in each stage, the gray percent numbers down the middle show you what percent of candidates are making it through a stage, and the “Total” number on the right shows you total candidates per stage in the period.
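The pass-through math behind a funnel like this is simple to reproduce yourself. As a rough sketch (the stage names mirror the list above, but the counts here are invented for illustration, not taken from the chart):

```python
# Hypothetical six-month funnel: (stage, total candidates in stage).
# Stage names follow the article; the counts are made up for this example.
stages = [
    ("Applications", 1200),
    ("Screened", 480),
    ("Qualified", 240),
    ("Interviews", 120),
    ("Onsites", 60),
    ("Offers", 15),
    ("Hires", 9),
]

def pass_through(funnel):
    """Percent of candidates surviving each stage-to-stage transition."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((f"{prev_name} -> {name}", round(100 * n / prev_n, 1)))
    return rates

for transition, pct in pass_through(stages):
    print(f"{transition}: {pct}%")
# With these invented counts, Applications -> Screened is 40.0%
# and Offers -> Hires is 60.0%.
```

Even a toy table like this makes the informed questions easier to ask: a suspiciously high pass percentage at the coding exercise, or a stage where candidates linger, jumps right out.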

Before we talk about where you should be spending 50% of your time, we first need to make sure we have two agreements with our recruiting team:

  1. Agree on the states and the definitions of candidates in your pipeline. The above fake chart is just one example and your flow might be different. What are those states? How does a candidate enter and exit a specific state?
  2. With #1 defined, you now need to agree to make it ridiculously easy to access this information.

With all of these reports in place and running smoothly, you can learn about the efficiency of the different parts of your recruiting process and you can ask informed questions. Where are candidates spending the most time and why? We gather the most signal at the coding exercise and the interview – shouldn’t those pass percentages be lower? How long is a candidate spending in each part of the process? Is that the experience we want them to have?

This article assumes you have a fully functional and talented recruiting team. These humans are essential to you effectively doing your part of the recruiting gig. Part of their job is to give a clear and consistent lens into both the health of the entire recruiting funnel as well as status for any candidate in any state in the pipeline. When you have this, you’ll better understand where to invest your time.

Discover, Understand, and Delight

This article is not about the traditional recruiting pipeline and the familiar work you’re already doing. This piece is about the work of recruiting you are neglecting. Let’s call this the engineering recruiting pipeline; it’s a pipeline built right on top of the funnel I described above. Our different states are based not on how we measure candidate progress but on the evolving mindset of the candidate traversing the process. There are three states I consider: Discover, Understand, and Delight.

Discover, first, is the state of mind of any qualified candidate who does not yet know about the opportunity on your team and at your company. It is your job to first find these humans and help them discover the desire to work with you at this company.

In recruiting parlance, those who find these candidates within recruiting are ‘sourcers.’ Their job is to look at your job description and then find humans who fit the bill. Sourcers cast their nets very wide and fill that top of funnel with as many qualified candidates as possible. Sourcing is also your job during Discovery, but your time is more targeted because you have intimate knowledge of the gig. More importantly, you have likely worked directly with humans who you know can do the job. Let’s operationalize this fact by building The Must List.

Make a list. Fire up a blank spreadsheet and start typing because you’re going to want to capture a bunch of different data and as it grows, you’re going to want to slice and dice it in different ways. This is a list of each and every person who you’ve worked with who you want to work with again. You must work with them again.

Every person. Doesn’t matter if they’re an engineer or not. Keep typing. Doesn’t matter if they’re available or not. Write down their name, their current company, their current role, and why they’re on this list. Done? Ok, put it away for a day and then come back. You missed important humans.

There are two use cases for The Must List. First, whenever a new gig opens on my team, I fire up the List and see if there is anyone on the list who might fit the bill and I mail them a friendly note. Hi. How are you? Got a gig and I must work with you again. Coffee? More often than not if we haven’t spoken recently, this human and I will get coffee regardless of their interest in the role because these are dear friends. Much more often than not, they are happy in their current gigs. Sometimes they’ll know folks who might fit the bill. Rarely, very rarely, they’ll come in for an interview. When coffee is done, I update the remaining columns in the spreadsheet: last contacted date, current status, next steps, and notes that capture their current context.

The second use case for The Must List is my monthly review. Every month or so whether or not I have a relevant open gig on my team, I review the list and see whom I have not spoken with recently. Time for a mail? Ok: Hi. How are you? Coffee? Again, they’re rarely interested in switching gigs, but if they happen to be looking, I will move mountains to work with them again.
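The Must List is simple enough to model as structured data. Here is a minimal sketch, in Python, of the columns described above plus the monthly “whom have I not spoken with recently?” review. The names, dates, and the 90-day staleness cutoff are all hypothetical, invented purely for illustration:

```python
from datetime import date, timedelta

# Hypothetical Must List entries using the columns described above:
# name, company, role, why, last contacted, status, next steps, notes.
must_list = [
    {"name": "Alex", "company": "Acme", "role": "Engineer",
     "why": "shipped v2 together", "last_contacted": date(2016, 3, 1),
     "status": "happy in current gig", "next_steps": "coffee in Q3", "notes": ""},
    {"name": "Sam", "company": "Globex", "role": "Designer",
     "why": "great systems thinker", "last_contacted": date(2016, 8, 20),
     "status": "open to chat", "next_steps": "intro to team", "notes": ""},
]

def monthly_review(entries, today, stale_after_days=90):
    """Return everyone you haven't spoken with recently -- time for a mail."""
    cutoff = today - timedelta(days=stale_after_days)
    return [e["name"] for e in entries if e["last_contacted"] < cutoff]

print(monthly_review(must_list, today=date(2016, 9, 1)))  # → ['Alex']
```

A spreadsheet does the same job, of course; the point is only that the review is a simple filter on the “last contacted” column.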

Return on time invested in the Discover state is going to feel a lot lower than within our forthcoming Understand and Delight states because it’s hard to measure progress. There are currently 42 humans on my Must List, and if I get one of them in the building in a given year, I’m giddy. However, these are my people, and the time spent investing in this network almost always pays unexpected dividends in ways that have nothing to do with whether I can hire them. These are my people, and they know other people who might fit the gig or whom I should simply meet. They observe the world in interesting ways, and I want to hear those observations.

In Discover, you are making targeted strategic investments in your network. The reason these folks are on your Must List is that you have seen the work they can do with your own eyes. You built a bond with these folks in a prior life and these small investments of your time strengthen and reaffirm that bond. The value of this network is a function of the number and the strength of these connections.

Understand. A candidate has passed through the very crowded top of funnel and has reached the evaluation portion. If you look at the hypothetical funnel numbers, this candidate is statistically unlikely to make it to the offer stage. Whether this particular candidate is getting an offer or not, your job is to Understand.

The recruiting funnel focus here is, “Do they have the necessary skills?” The interview process is designed to gather and triangulate this information from the candidate. Your focus during Understanding is to again consider candidate mindset. While they are getting peppered with skill qualification questions, they are also wondering, “Who is this engineering team?,” “What do they value?,” and “Where are they headed?”

Homework. Step away from your digital device this moment and ask a random engineer who is a part of your interview loops the three questions I asked above. Done? Ok, do it with another engineer. How do the answers compare? Is it the same narrative? Is it a compelling narrative?

The explanation of the culture of an engineering team is usually left to happenstance: the last few minutes of an interview where you ask the candidate, “Do you have any questions for me?” This lazy question is cast out with the hope that the candidate responds with a dull, generic query like, “What’s it like to work here?” You respond with your well-practiced recital of “I love it here!” and “We’re solving hard problems!” which sounded great six months ago, but now sounds, well, rehearsed.

Your responsibility is to make sure the candidates understand your mission, culture, and values.2 While they will organically pick up some of this content during interviews, you need to make sure it’s one person’s job to responsibly and clearly tell the engineering story. This is not an interview; the point is to clearly explain the shape of the place they might work and – bonus points – you are going to organically learn about them during the discussion of the character of your company.

There are two scenarios for a candidate passing through Understand. Scenario A: they receive an offer and the time spent understanding paves the way to a rich offer conversation and allows them to hit the ground running when they arrive. Scenario B: they don’t get an offer, but they leave clear about you, the character of your team, and your mission. Recruiters call the time spent interviewing “the candidate experience” and I would suggest that whether they get an offer or not, Understanding is the cornerstone of exceptional candidate experience.

Delight. Congratulations! You’re making an offer to an engineer. Going back and looking at those funnel ratios, you can see this is a statistically unlikely event. Let’s not screw it up, ok?

New managers erroneously think when we make an offer that, “We’ve made a hire!” Experienced managers and recruiters know “They’re not here until they are sitting in that chair.” If you and your recruiting team have done your work, the presentation of the offer is a formality because you already know their life situation and goals. Offer construction, presentation, and negotiation are an entirely different article, but it’s a clear sign that you missed critical information somewhere in the candidate experience if the negotiation process is unexpectedly laborious or littered with surprises.

And they accept! Hooray! We’re still not done because they are still not sitting in that chair. Let’s welcome them. Let’s Delight them.

The nightmare scenario is a candidate declining an offer they already accepted. I think it’s professionally bad form, but it happens more often than you’d expect. Put yourself back in their shoes: they likely have an existing gig where everyone knows their name and they know where the good coffee lives. Even after a phone screen, an at-home coding exercise, a day-long round of interviews, two other phone calls, and assorted emails, you and your engineering team remain an unknown quantity. In the middle of the night, when the demons of doubt show up, you represent an uncertain future, and your job during Delight is to help them imagine their future here.

Reflecting on my experience in this state, I think of how I act after I’ve accepted the offer. After the initial high of receiving and accepting the offer passes, what do I do? I reread the offer letter. I review the benefits. I go to the company website and I examine every word. What am I looking for? Why am I continuing to research? I continue to vet my decision.

The offer letter is an important document. It contains the definitive details of compensation and benefits and these are important facts, but during this critical time of consideration, I want these future co-workers delighted with a Real Offer Letter.

I send the Real Offer Letter a week before the start date. I write a note each time that captures the following:

  1. My current observations of the company, the team, and our collective challenges.
  2. The first three large projects I expect the new hire to work on, why I think these projects are important, and why I think the new hire is uniquely qualified to work on them.
  3. As best as I can, I explain the growth path for the new hire.

Nothing in this letter should be news. In fact, if there are any surprises in this note, there was a screw-up somewhere in the funnel. The purpose of this letter is to acknowledge that we are done with the business of hiring, and we are now beginning the craft of the work.

During the post-offer-accept time, most companies send a note… a gift. I’ve received (and appreciated) flowers, a terrarium, and brief handwritten notes. Thoughtful gifts, but small thoughts. At a time when a new hire is deeply considering this change in their career, I want them chewing on the big thoughts. I want them understanding the humans they are joining and their mission. I want them to concretely understand what they will be working on, and I want them to understand the potential upside for their career.

50% of Your Time

The work of recruiting is a shared responsibility. Yes, you can be a successful hiring manager devoting less than 50% of your time. Yes, all of the funnel work can be completed by a recruiter; many of my best recruiting moves came from watching and working with talented recruiters.

The situation I want to avoid is a hiring manager who delegates the entire recruiting process to their fully functional and talented recruiting team. For Discover, Understand, and Delight there are critical leadership skills you need to learn and refine. In Discover, it’s understanding the power of persistent, serendipitous networking; in Understand, it’s both knowing how to tell the tale of your company and being able to hear the tale of the candidate. Finally, in Delight, it’s the ability to discern the best way to delight this candidate at a time when their worry and risk aversion are strongest.

Recruiting and engineering must have a symbiotic, force-multiplying relationship because the work they do together – the work of building a healthy and productive team – defines the success of your team and your company.


  1. Not really. This is fake data, but it’s fake data based on experience. I’m making a couple of assumptions regarding this fake company. It’s around 500 employees. It’s in hyper-growth, which means it has 100+ open reqs. Your company or team is likely in a different stage of growth, but much of the strategy of this piece still applies. 
  2. Yes, this means you’ve defined your mission, culture, and values and everyone agrees with these definitions. 
07 Sep 15:04

TRAVEL BACK IN TIME WITH ME ON A HISTORY WALK THROUGH STRATHCONA, VANCOUVER'S OLD EAST END - September 10th at 10am

by James Johnstone
1890s era CVA photo of the East End taken from what is now Olympic Village. The bridge is Main Street
Vancouver's East End had an amazing history before it had hipsters! Come with me on a walk back in time and find out why this neighbourhood is so cool!

Departs: 696 East Hastings (at Heatley) Saturday, September 10 at 10am. Duration 2.5 to 3 hours depending on the size of the group.

Content: The humble East End was the first Vancouver home to thousands of people fresh off the boat or train arriving from all over the world. Street by street, block by block, the East End developed ethnic enclaves. This neighbourhood boasted the first Synagogue and first Jewish neighbourhood, Vancouver's first Little Italy, Japantown, and Vancouver's only Black identified neighbourhood, Hogan's Alley. Some blocks were dominated by Scandinavians, others by Yugoslavs, Russians and Ukrainians. Over the years the East End became Chinatown's residential district, home to renowned authors Wayson Choy (The Jade Peony/Paper Shadows) and Paul Yee (Salt Water City/Ghost Train). 

Home to three historic red light districts, an unsettling mix of non-British, mostly working class immigrants, three of Vancouver's four Depression era hobo camps, innumerable bootleg joints, even gangs, Vancouver's East End was often viewed by outsiders as an unsavoury, even dangerous place where "those people" lived.  


But it was also home to Angelo Branca, who went on to become Supreme Court Justice for British Columbia, Canada's "Amelia Earhart" Tosca Trasolini, boxing legends Jimmy McLarnin and Phil Palmer, NDP Premier Dave Barrett, media personality, musician, filmmaker and actress Sook Yin Lee, CBC programmer, poet and author Bill Richardson, Canadian singing legend k. d. lang, and the Montreal Bakery where the "royal buns" were baked for the 1939 visit to Vancouver by King George VI and Queen Elizabeth. And that is just scratching the surface!


Every one of Strathcona's houses has a story to tell. Want to time travel? Come for a History Walk through Vancouver's oldest and most fascinating neighbourhood, the East End.

Cost: $20 per person

Parking: There is plenty of free parking along Heatley Avenue, Hastings Street, and Keefer Street further South.
  
E-mail: historywalks@gmail.com to reserve a space or for more information.  

This is the last History Walk along this particular route for the Autumn 2016 season. The final History Walk of this season is on September 17th and is a new 2.5 hour route through the Working/Wild Side of Strathcona and focuses on the area south of Prior Street, along Union Street and the Kiwassa section of Strathcona between the railway tracks and Clark Drive. 

See TripAdvisor Reviews. History Walks in Vancouver have received an Award of Excellence from TripAdvisor. Currently ranked 44 of 144 Tours and Activities in Vancouver, it is the top rated "one man show" of all these tours and activities. 

06 Sep 19:50

Uber: Whoops

by pricetags

The blame: China.  (Does everything get blamed on China?)

Consolation: They’re sitting on $8 billion in cash.


06 Sep 19:50

#OpenBadges: the Milestones of a re-Decentralised Web

by Serge

In the digital world we live in, the main ground is possessed by the few, the Digital-Landlords. A whole paraphernalia of digital rights management, technologies, contracts, lawyers, regulations voted under influence and the cyber police make sure that we do not infringe their rights. To live on their lands often means accepting a relationship close to serfdom or digital slavery. To have a name, one has to pay a fee; that is if you want to have a domain of your own and not depend on someone else (a sub-domain) — come and join us at ePIC to hear what Jim Groom has to say on this!

The Emperor’s New Clothes has become The Commoner’s New Clothes: we believe that we are dressed-up, yet we walk naked

We, the digital-commoners, possess very little, if anything at all, at least nothing worth transmitting to our heirs. Not even our name… We should express our gratitude for having been relieved from the anxiety of inheritance, spared the burden of building the walls of our privacy and wearing clothes to protect our intimacy. In this world, the tale of The Emperor’s New Clothes has become The Commoner’s New Clothes: we believe that we are dressed-up, yet we walk naked. As for Digital-Landlords, they simply see a flock of sheep waiting to be shorn.

But digital-commoners are no sheep. We know that if the real world is (almost) a sphere and its surface limited, the digital world has no predefined boundaries! We are free to add to it and establish our own fiefdom if that’s what we want. De-centralisation and aggregation of loosely coupled elements were at the core of the original design of the Internet. The way the Web has grown has obfuscated this central element, and Tim Berners-Lee, the person who is credited with having invented the World Wide Web (Stephen Downes disagrees with that characterisation) is calling for the re-decentralisation of the web.

Open Badges are the milestones of the roads we are tracing while exploring the New Territories

As learning practitioners and citizens, we, the digital-commoners, must take our own share in this re-decentralisation endeavour. In fact, as the development of Open Badges testifies, the work has already started: the enclosures of formal education credentials are now challenged to be open to the commoners. With Open Badges, digital-commoners now have the power to create their own verifiable claims, call on others’ endorsements and establish local and global trust networks. Open Badges are the milestones of the roads we are tracing while exploring the New Territories. We are now able to write our names on them without having to pay our dues to Digital-Landlords, nor their servants. And we can conquer new spaces without having to spoil it for anyone else as the re-decentralised World Wide Web is a land of plenty.

To explore further and chart the New Territories of learning recognition with other digital-commoners, join us in Bologna this October!

NB: this article is inspired by Audrey Watters’ post: A Domain of One’s Own in a Post-Ownership Society

06 Sep 19:50

Eastlink coverage to expand into St John’s and the rest of Newfoundland

by Igor Bonifacic

Wireless competition in one of Canada’s oldest cities is about to heat up. Eastlink announced this week it is expanding into the capital of Newfoundland and Labrador, St John’s.

According to The Chronicle Herald, the Halifax-based telecom has submitted applications to build two new wireless towers in the historic city. Eastlink also plans to build a tower in nearby Paradise to serve communities throughout the Avalon Peninsula, which is home to more than half of the island’s population. The company’s foray into St John’s is part of a larger plan to expand throughout Newfoundland and Labrador.

While it builds its own network on the island, residents of St John’s who sign up for wireless service with Eastlink will be able to roam on Bell Aliant’s network free of charge when they’re outside of the city.

“We have been pleased to bring much-needed competition to the wireless market in Nova Scotia and Prince Edward Island, and we are grateful for the positive response from customers who have chosen to trust Eastlink as their wireless service provider,” said Matthew MacLellan, president of Eastlink Wireless, in a statement.

In a press release issued to The Herald, the company said “leading network speed” was one of its competitive advantages against incumbent Bell Aliant. The statement apparently drew a laugh from a Bell Aliant spokesperson who said, “Bell’s LTE wireless network is ranked as best in Canada and we’ve been the fastest-growing wireless company in the country in 2016. We’re always ready to compete.”

Image credit: Flickr — Wichan Yingyongsomsawas

Related: Eastlink goes live with wireless service in Sudbury, Ontario

06 Sep 19:48

Box Relay is Notes all over again

by Volker Weber
Box Relay is a new type of workflow solution that is targeted at business users who need to create, share and track simple workflows. It is designed to help you get your work done more efficiently. Box Relay will not replace traditional business process and case management tools but will complement them.

If you have been in this business long enough, the irony of Box Relay is not lost on you. This is exactly how Lotus Notes started. Move simple workflows out of email. Bottom up, from the departmental level.

Box Relay is designed for the extended enterprise and workflows can include participants in other companies such as your customers, suppliers and partners. There is no need for IT to define external users in your directory - Box Relay utilizes the capabilities in Box to support external users securely and efficiently.

This time however, it works across company borders.

More >

06 Sep 18:05

Experience British Art with Artificial Intelligence Technologies

by Catherine Chapman for The Creators Project

Left, Construction takes place next to the Changi Airport control tower for Project Jewel in Singapore, August 17, 2016. REUTERS/Edgar Su. Right, L.S. Lowry. Industrial Landscape. 1955. Tate. Presented by the Trustees of the Chantrey Bequest 1956. © The estate of L.S. Lowry/ DACS 2016.

Creating a snapshot of the museum’s vast collection of British paintings, Tate Britain now matches artificial intelligence (AI) with photo reportage to explore traditional British art. As part of the museum’s 2016 IK Prize—an annual award for a project seeking to enhance the art gallery experience with digital technology—the Recognition project is anchored by a multifaceted algorithm comparing the similarities between pieces of art and up-to-the-minute photojournalism from Reuters. The program will work over a three-month period to produce a virtual gallery of past and present works, using four categorical processes to produce the matches: object recognition, facial recognition, composition recognition, and context recognition.

The team behind Recognition is Fabrica, a communication research center based in Italy that works across digital storytelling disciplines. At the project launch, the team said, “We can’t wait to see what inspiring, insightful, humorous and thought-provoking relationships Recognition unearths between how the world is represented in British art and up-to-the-minute news.”

Left, Holiday makers swim in the Bassin d'Arcachon as warm summer temperatures continue in Arcachon, southwestern France, August 16, 2016. REUTERS/Regis Duvignau. Right, Henry Scott Tuke. August Blue. 1893–4. Tate. Presented by the Trustees of the Chantrey Bequest 1894.

Using art to demonstrate AI capabilities, the project gives visitors an inside look into how machines think, mimicking the human brain but with noticeable differences: similar shapes and colors will pair a picture of L.S. Lowry’s Industrial Landscape (1955) with an image of Singapore’s Changi Airport.

L.S. Lowry. Industrial Landscape. 1955. Tate. Presented by the Trustees of the Chantrey Bequest 1956. © The estate of L.S. Lowry/ DACS 2016.

“AI is nowhere near as sophisticated as a human, whether they be an art expert or not,” says Tony Guillan, Tate IK Prize Producer. “The sophistication and flexibility of our brains can consider lots of different things simultaneously while cross-corresponding references against our memories, emotions, dispositions and personalities. AI can’t do that but what it’s trying to do here is take four simple criteria and blend them together in a way that the brain kind of works in. It’s a simulation of a level of human understanding.”
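Taking “four simple criteria” and blending them together, as Guillan describes, suggests something like a weighted combination of per-criterion similarity scores. The actual Recognition algorithm is not public; the weights and scores below are invented purely for illustration:

```python
# Hypothetical sketch of blending four similarity criteria into one match
# score. The weights and the example scores are assumptions, not the
# project's real values; each similarity is taken to lie in [0, 1].
WEIGHTS = {"object": 0.3, "facial": 0.3, "composition": 0.2, "context": 0.2}

def match_score(similarities):
    """Weighted blend of per-criterion similarities into a single score."""
    return sum(WEIGHTS[k] * similarities[k] for k in WEIGHTS)

# A painting/photo pair that shares shapes and colors but no faces:
pair = {"object": 0.8, "facial": 0.0, "composition": 0.9, "context": 0.5}
print(round(match_score(pair), 2))  # → 0.52
```

A blend like this is how a pair can rank highly overall even when one criterion (here, facial similarity) contributes nothing.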

Construction takes place next to the Changi Airport control tower for Project Jewel in Singapore, August 17, 2016. REUTERS/Edgar Su.

The Recognition experience allows viewers to make their own matches between painting and photograph, inputting this data into the algorithm in hopes that it will learn from personal human experiences. Photojournalism from a Reuters live feed is used as it's a commonly accepted notion of providing a window into the world, thus creating a comparison of visual media both aesthetically and thematically. “What I love about Recognition is that it raises fundamental philosophical questions,” says Guillan. “Do we represent the world differently in painting than to the way we do now? Has the world been depicted in similar ways using different media? Is art relevant?”

Left, Eunuchs apply make-up before Raksha Bandhan festival celebrations in a red light area in Mumbai, India, August 17, 2016. REUTERS/Danish Siddiqui. Right, Sir Peter Lely. Two Ladies of the Lake Family. C.1660. Tate. Purchased with assistance from the Art Fund 1955.

In collaboration with Tate and Microsoft, the team at Fabrica received a £15,000 prize and a £90,000 production budget. Recognition is on display at Tate Britain through November. See more of the project online here.

Related:

Artificial Intelligence Controls These Surreal Virtual Realities

Here's What Actually Goes into Creating Artificial Intelligence

Google Makes Learning Neural Networks Free

06 Sep 18:05

Ottawa: Let’s Talk Parking

by Ken Ohrn

The City of Ottawa opened up discussion in 2015 on minimum parking requirements, which apparently hadn’t changed since the 1960s.  The gist of the video is that storage for cars takes up a lot of urban space and helps to enshrine motordom as the default option for travel. This, the video argues, is not a great outcome for the city or its people.

Thanks to Ian for the link to Eric Jaffe at CityLab.com (from 2015).

Staff-prepared briefing material is HERE.


06 Sep 18:05

Some news

by djcoyle

Unknown-2After a long hiatus spent researching my new book, I’m back, and I plan to be posting here regularly in coming weeks. Thanks for your patience. The new book will be called The Culture Code: The Hidden Language of Successful Groups, and it will be published next fall (2017) by Random House.

It’s based on a simple idea: beneath the surface, all high-performing groups are fundamentally the same, following the same rules. I spent the last three years visiting eight of the world’s most highly successful groups, including Pixar, Navy SEAL Team Six, the San Antonio Spurs, IDEO, a gang of jewel thieves, and others.  I found that they share a behavioral fingerprint, relentlessly generating a pattern of messages that create belonging, trust, and purpose. This book is about understanding how those messages work — and understanding how to use them to build your group’s culture.

Researching this book, as you might guess, has been addictively fascinating. I’m eager to share some material with you in the coming months, and, more important, I’m eager to hear your ideas. Looking forward to continuing the conversation, and thanks for reading.

06 Sep 18:04

Steve Jobs’ Turtleneck Is Getting Auctioned Off: Last Week in Art

by Nathaniel Ainley for The Creators Project

Steve Jobs wearing his trademark jeans and black turtleneck combo. Via

A lot went down this week in the weird and wild world of Art. Some things were more scandalous than others, some were just plain wacky—but all of them are worth knowing about. Without further ado:

+ Julien’s Live auction house is holding an online auction of personal items owned by Steve Jobs. Items for sale include one of Jobs' famous black turtlenecks, a bathrobe, and some of his electric razors. [artnet News]

+ The New York Times ended its theater, restaurant, and arts coverage for the tri-state area this week, a move that comes with the layoffs of a number of longstanding contributors. [Deadline]

+ Artist Ed Ruscha and his wife Danna donated 30 works from their private collection to the University of Oklahoma Art Museum. [Artforum]

A magnification of the mysterious (and gross-looking) white spot on Edvard Munch's The Scream. Via 

+ Researchers have finally cracked the mystery behind the white smudge on Edvard Munch’s The Scream. What was previously thought to be bird poop has been identified as wax, most likely from a candle in Munch’s studio. [NY Daily News]

+ Despite a scathing State of Conservation report, the UNESCO World Heritage Site Committee decided not to put Venice on its list of World Heritage in Danger sites. [The Art Newspaper]

+ A group of vandals cut power lines and flooded one of Burning Man’s luxury 'plug-n-play' campsites. [The Guardian]

The White Ocean sound camp, pictured above, before it was vandalized on Wednesday night. Via

+ The United Talent Agency is opening up a 4,500 sq. ft. artist and exhibition space in Los Angeles. [Deadline]

+ A group of Israeli artists, museum directors, and art educators filed a lawsuit against the country’s minister of culture, Miri Regev. [The Art Newspaper]   

+ Björk’s new exhibition, Björk Digital, opened last week at the Somerset House in London. [W Magazine]

Björk dances amongst volcanic rocks in Iceland in this screenshot from the film Black Lake by Andrew Thomas Huang. Via 

+ Two Romanian homeless women went in front of the Criminal Court of Meaux for attempting to steal the lead sheets from an Anselm Kiefer statue valued at 1.3 million pounds. [Le Parisien]

+ In the wake of July's failed military coup, Turkey’s fifth Canakkale Biennial has been canceled. [The Art Newspaper]

+ Photographer Nathan Lyons has passed away at 86. [The New York Times]

The Chandos portrait. Via 

+ The National Portrait Gallery is considering cleaning the Chandos portrait of William Shakespeare, a procedure that could cost the bard his beard. [The Art Newspaper]

+ New York Central Art Supply Inc., America’s oldest art supply store, closed its doors after 111 years of service. [artnet News]

Arthur Streeton - Sydney Harbour (1907). Via 

+ Sydney Harbour, a landscape by Australian painter Arthur Streeton, sold at auction for $2M this week. [The Guardian]

+ An English tourist has accused the Elliot Stevens art gallery at the Waldorf Astoria of selling him $100,000 worth of knockoff statues. [NY Post]

Did we miss any pressing art world stories? Let us know in the comments below!

Related:

Kanye West Finally Got His Own Art Exhibit

Trump Effigies Erected In Union Square Park

Leonardo DiCaprio's Art Auction Raises $45 Million

How Brexit Could Affect Britain's Artists

06 Sep 18:04

The Race to Save Computer-Based Art | Conservation Lab

by Noémie Jennifer for The Creators Project

Viewing Java-based net artworks in modern browsers is becoming harder and harder. Pictured: Mark Napier, net.flag, 2002. Interactive networked code (Java applet with server database). Solomon R. Guggenheim Museum, New York 2002.17. © Mark Napier

Some of the most endangered works of art in the Guggenheim’s collection are also some of the youngest. The reason? They rely on computers—which is to say, they rely on the unreliable. Fitting those temperamental, short-lived machines within the museum framework is a difficult marriage, and conservators are left to mediate the conflicts. At a conference last year, Joanna Phillips, the Guggenheim’s conservator of time-based media, explained that the care of software-based work is particularly challenging because guidelines haven’t yet been clearly formulated, but are desperately needed. “These works are aging very rapidly. Intervention is urgent in many cases,” she warns. It’s a race without a roadmap.

In the hopes of fast-tracking technical research into the museum’s 22 computer-based works—and using them as case studies to establish best practices for the field—the Guggenheim has just secured funding for a two-year fellowship dedicated to the Conserving Computer-Based Art (CCBA) initiative. But who will be best for the job? “Should it be a conservator with some knowledge of coding, or a computer scientist with enough understanding of art and conservation?” wonders Phillips during a phone conversation with The Creators Project. “There is no precedent. We’re creating a new job description.”


John F. Simon, Jr., Color Panel v1.0, 1999. Apple Powerbook 280C, software, and acrylic plastic, 13 1/2 x 10 1/2 x 2 1/2 in. Installation view: Seeing Double: Emulation in Theory and Practice, Solomon R. Guggenheim Museum, New York, March 19–May 16, 2004. Photo: David Heald © Solomon R. Guggenheim Museum

New, experimental, and interdisciplinary: Those are the three buzzwords of the CCBA initiative. Ambitious could be another: “We need to conduct a survey of all the works, create backups, identify which are high- and low-risk, and really launch an in-depth research and development phase,” notes Phillips. In this case, R&D will involve close examination of key works in their native environments, with their original hardware, and careful documentation of their “behaviors” through video recordings and detailed written notes (e.g., the red triangle hovers over the blue square for 10 seconds, then jumps to the top right corner).


NYU computer science student Jiwon Shin during her summer internship with the Guggenheim Conservation Department. Photo: Joanna Phillips © Solomon R. Guggenheim Museum

Another critical step is to analyze the source code. “It’s simply beyond our human capacity to observe open-ended durations or detect whether an image sequence is randomized or programmed,” explains Phillips in her conference talk. “Code analysis reveals that information, providing that you’re fluent in the programming language.” Cue the arrival of the computer scientists: since 2014, while preparing for the CCBA initiative, the Guggenheim has partnered with Prof. Deena Engel of NYU’s Courant Institute Department of Computer Science and worked with CS students who are interested in art and old programming languages. Line by line, the students translate the code into information that is meaningful to conservators and curators.

Paul Chan, 6th Light, 2007. Flash animation projection, silent, 14 min, dimensions variable. Solomon R. Guggenheim Museum, New York. Courtesy the artist and Greene Naftali, New York © Paul Chan

“We’re trying to build a bridge between computer science analysis and conservation assessment,” Phillips tells us, adding that the collaboration required a learning curve on both ends. “We realized we have to make them understand what our questions are first. In turn, we try to understand things they find that we hadn’t even thought about.”

Computer scientists can tell conservators, for example, whether the speed of a moving object is coded into the work, or if it’s dependent on the processing speed of the computer. When the file for Paul Chan’s Flash animation 6th Light, which originally ran on a 2006 gaming computer, was tested on a contemporary PC, the 14-minute piece was over in 10 minutes. Since the original computer has now broken down, the museum is “exploring different possibilities of migrating the Flash animation to a new environment without affecting the artist-intended playback speed. Considerations include emulating the 2006 computer with its specific processing speed, and more invasive methods, such as adjusting the frame rate of the animation to compensate for contemporary processing speeds.” They can then assess which test environment most closely reproduces the behaviors of the work in its native environment—which is why that earlier documentation is so crucial. Performing those trials will surely require the help of programmers, but ultimately, the collection caretakers in charge, such as conservators and curators (in coordination with the artists, when available), are the only ones who can make decisions about what can change, taking into consideration all of the rules and ethics of the field. “That cannot be outsourced,” remarks Phillips.
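The distinction between frame-locked and clock-based playback explains the 6th Light timing problem. The following sketch (illustrative only, with assumed numbers; it is not Paul Chan's actual code or frame rate) shows how a 14-minute piece whose speed depends on the machine finishes early on faster hardware, while time-based playback does not:

```python
# Illustrative model of frame-locked vs. time-based playback.
# All figures (12 fps authoring rate, 16.8 fps on a faster PC) are
# assumptions chosen to reproduce the 14-minute -> 10-minute shrinkage.

def playback_minutes(total_frames, authored_fps, actual_fps, frame_locked):
    """Wall-clock duration in minutes on a machine rendering at actual_fps."""
    if frame_locked:
        # Each frame is shown as fast as the machine can render it,
        # so faster hardware shortens the piece.
        return total_frames / actual_fps / 60
    # Time-based playback pins frames to the authored schedule,
    # so duration is independent of machine speed.
    return total_frames / authored_fps / 60

FRAMES = 14 * 60 * 12  # a 14-minute piece authored at 12 fps

print(playback_minutes(FRAMES, 12, 12, frame_locked=True))    # original machine: 14.0
print(playback_minutes(FRAMES, 12, 16.8, frame_locked=True))  # faster PC: roughly 10 minutes
print(playback_minutes(FRAMES, 12, 16.8, frame_locked=False)) # time-based: stays 14.0
```

Emulating the 2006 computer effectively fixes `actual_fps` at its original value, while adjusting the animation's frame rate compensates for a faster `actual_fps`; both strategies aim at the same artist-intended duration.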

Recording of a narrated screen navigation of Shu Lea Cheang’s net artwork Brandon (1998–99) in the Guggenheim conservation lab. Guggenheim conservator Joanna Phillips (left) invites NYU computer science students Jillian Zhong (middle) and Emma Dickson (right) to narrate the screen navigation in order to capture the complex interactive functions of the work, its overall site structure, and any compromised elements such as broken links or blocked Java applets. Photo: Brian Castriota © Solomon R. Guggenheim Museum

In many cases, it seems that the preservation of these works will likely involve emulation of outdated technology and migration onto newer platforms. That simply cannot be done, however, with Jason Rhoades’ Sepia Movie or John F. Simon Jr.’s Color Panel, wherein the hardware is a unique, structural component of the work. “The museum may choose to create ‘exhibition copies’ that are identified as such in an exhibition context and approved by the artist, but we can’t recreate the work itself, if the dedicated custom hardware fails. Ethically, it’s not possible,” notes the Guggenheim conservator. Works that are wholly dependent on hardware, then, seem to be particularly at risk.

Yet even when software dependency is the only restriction, further ethical quandaries can easily arise: While Siebren Versteeg’s Untitled Film II, which he created in Macromedia Director with Lingo, could probably be recreated with a contemporary language that preserves the work’s look and feel, “that would be eliminating the trace of the artist’s hand,” cautions Phillips. Versteeg has explained that Lingo was the first language he ever used, and he stuck by it in the creation of several pieces. By the time he made the work in 2006, it was already obsolete, but it remained his preferred mode of expression. Phillips, who was originally trained as a paintings conservator, likens Versteeg’s preference for Lingo to that of a painter for a certain pigment—and in the world of conservation, stripping an artwork of such an essential trait is unthinkable.

So what can possibly be done within those limitations? The answer—like so much else in this nascent specialty—is TBD.

Siebren Versteeg, Untitled Film II, 2006. 2 internet connected computers output to LCD screens, real-time obituary listings, real-time birth announcements, dimensions variable. Installation view: Guggenheim Media Conservation Lab, for inspection purposes. Photo: Kris McKay © Solomon R. Guggenheim Museum.

To learn more about the care of computer-based art, watch Joanna Phillips discuss the main issues here.

Related:

Should Perishable Art Be Saved? | Conservation Lab

How Replicas Could Save Threatened Artworks | Conservation Lab

To Preserve Digital Film Culture—Or Lose It Forever | Conservation Lab

06 Sep 18:04

Automation Technologies and the Future of Work

files/images/Automation.JPG


Irving Wladawsky-Berger, [Sept] 09, 2016


From where I sit, these estimates seem surprisingly low. I'll let Irving Wladawsky-Berger summarize: "This past July, McKinsey published a second article on its automation study, which examined in more detail the technical feasibility of automating 7 different occupational activities:

  • Physical work in predictable environments, e.g., manufacturing, food service: 78% automatable;
  • Physical work in unpredictable environments, e.g., construction, agriculture: 25%;
  • Processing data, e.g., finance, retail: 69%;
  • Data collection, e.g., transportation, utilities: 64%;
  • Stakeholder interactions, e.g., retail, finance: 20%;
  • Expertise in decision making, planning, creative tasks, e.g., professional, education: 18%;
  • Managing others, e.g., management, education: 9%;

A companion interactive website adds the ability to analyze the automation potential of over 800 occupations based on the study’s data sets." But whether or not the numbers are low, they point clearly to the fact that traditional job training is not preparing people for the future.

[Link] [Comment]
06 Sep 18:04

“Mrs. Levine, Alan’s Watching!”

files/images/Private_Party.JPG


Alan Levine, [Sept] 09, 2016


I think that what bothers me most about Facebook and the others is not so much the vileness that they expose as the fact that they are monetising it. Here are some wise words from Alan Levine: "the thing about ugliness of online spaces... do not underestimate that the stuff you are not reading in comments as just the tip of the suppressed rage/violence in people we share the non-online world with. Don’t read the comments, but be aware of them, do not ignore what they indicate about society. Do not pretend what lies beneath them doesn’t exist." We should be correcting for this, not using it as the core feature of our business model.

[Link] [Comment]
06 Sep 18:03

Stop Using Dropbox to Share Your Courses & Try This Instead

files/images/dropbox-no-course.png


Tom Kuhlmann, The Rapid E-Learning Blog, [Sept] 09, 2016


Things have changed again. In this case, it means that if you've been using services like Dropbox or Google Drive to share things like web pages, you won't be able to do this any more. Dropbox says, "If you created a website that directly displays HTML content from your Dropbox, it will no longer render in the browser. The HTML content itself will still remain in your Dropbox and can be shared." Other services - like OneDrive - have never allowed this. Tom Kuhlmann recommends that you "Use Amazon S3 or a competing service. Here’s how to set up the Amazon S3 service." Also, "Be careful of free services. Odds are they’ll be gone or remove the free part of the service and you’ll be in the same place you are today." That said, we do need a way to share our cloud-stored files online. It's something that's on my mind.
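As a rough illustration of the S3 route Kuhlmann points to: an S3 bucket set up for static website hosting needs a bucket policy that allows public reads of its objects. The bucket name below is a placeholder, not a real resource; this is the standard AWS policy shape for this use case.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForCourseFiles",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-course-bucket/*"
    }
  ]
}
```

With a policy like this attached and static website hosting enabled on the bucket, uploaded HTML course files are served directly to the browser - the behavior Dropbox removed.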

[Link] [Comment]
06 Sep 18:03

How To Work With Me


noreply@blogger.com (Stephen Downes), Half an Hour, [Sept] 08, 2016


First, see this item; it explains what's going on in the text below. Also, this is a bit incomplete; I may well revisit and revise.

[Link] [Comment]
06 Sep 18:03

MOOCs no longer massive, still attract millions

files/images/MOOCs_offered_2016.png


Dhawal Shah, VentureBeat, [Sept] 09, 2016


We're still waiting for 2016 data, of course, but it's hard to reconcile the 2015 data with statements that the MOOC era is over. The MOOC user base doubled in 2015. "The total number of students who signed up for at least one course had crossed 35 million — up from an estimated 16–18 million in 2014." And in 2016, the number of courses has doubled, and many of them are available as self-paced or multiple-cohort options. This means that the frenzied pace of MOOCs has slowed - the courses are smaller, and the interactivity is slowing. But that's good. More MOOCs, more students - and a strong future for open online learning.

[Link] [Comment]
06 Sep 18:01

September Update for BlackBerry PRIV and DTEK50 is live

by Volker Weber

ZZ62640C2C

Security Level September, 06. That is a zero day delivery. PRIV on the left with build AAG191, DTEK50 on the right with build AAG326. DTEK50 still does not have Picture Password, although this was a big update.

ZZ37212830 ZZ650F6A5F