Shared posts

23 Oct 16:49

A Guide to Bayesian Statistics

by Will Kurt

Bayesian statistics is one of my favorite topics on this blog. I love the topic so much I wrote a book on Bayesian Statistics to help anyone learn: Bayesian Statistics the Fun Way! The following post is the original guide to Bayesian Statistics that eventually became the book! If you enjoy the resources below, please consider ordering a copy of “Bayesian Statistics the Fun Way!” It includes reworked versions of these posts and tons of original content.

Getting Started with Bayes' Theorem and Prior Probability

This guide only assumes you have a basic familiarity with probability. If you can work out the chance of flipping a coin and getting 3 heads in a row, then you should be all set. If you do need a refresher the Khan Academy has some great lectures on the topic. There is no need for a background in Statistics. The beauty of Bayesian Statistics is it can be built up from basic probability.

What to read to get you excited?

The Signal and The Noise

This is a great book to get you excited about Bayesian Stats!


Nate Silver's The Signal and the Noise is a fun read and will get you very excited about Bayesian statistics. There are no strong math requirements for this book. If equations still make you a bit nervous this is a great place to build up your enthusiasm to dive in!

Recommended posts for getting started!

Bayes' Theorem with Lego

Bayes' Theorem is much easier to understand visually

The best place to start learning is with Bayes' Theorem. In this popular post, we'll cover the basics using Lego bricks. This allows you to use visual learning to build a deep intuition for the theorem. We won't be ignoring the mathematical detail either. By the end of this post, you'll be able to derive this formula from scratch!
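
For reference, the formula in question is compact enough to state in one line (H is a hypothesis, D the observed data):

P(H|D) = P(D|H) × P(H) / P(D)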


Han Solo and Bayesian Priors

"Never tell me the Odds!"

The next key topic is Bayesian Priors. Prior Probabilities allow us to model our beliefs about the world. This post focuses on C-3PO's classic underestimation of Han Solo's ability to navigate an asteroid field. Rather than simply dismissing C-3PO, we'll see how using a Prior could help him.

Use Bayes' Theorem to Investigate Food Allergies

Let's put Bayes' theorem to use!

Ever wonder if your friend is really allergic to gluten? We can use Bayes' Theorem and prior information to find out! We'll also discuss how being more certain than not is not all that certain.
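
To make that concrete, here is a minimal sketch of the kind of calculation involved, in Python, with made-up numbers (the prevalence and test error rates below are assumptions for illustration, not figures from the post):

p_allergy = 0.01           # assumed prior: 1% of people have the allergy
p_pos_given_allergy = 0.9  # assumed test sensitivity
p_pos_given_none = 0.2     # assumed false-positive rate

# Total probability of a positive test, then Bayes' Theorem.
p_pos = p_pos_given_allergy * p_allergy + p_pos_given_none * (1 - p_allergy)
p_allergy_given_pos = p_pos_given_allergy * p_allergy / p_pos

print(round(p_allergy_given_pos, 3))  # ~0.043: even a positive test leaves the allergy unlikely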


Thinking like a Bayesian

With the basics down, it's time to start really thinking like a Bayesian! In these posts, we focus on how we can model our beliefs about the world, learn about the beliefs of others and compare hypotheses the Bayesian way!

Recommended Posts to get you thinking like a Bayesian!

Bayesian Reasoning in The Twilight Zone!

That glimmer in his eye is from Bayesian Statistics!

Are you superstitious? Do you believe that a diner novelty could have mystic powers? How much would it take to convince you? Using a classic episode of the Twilight Zone we'll learn how to answer these questions mathematically. We'll learn about Bayes' Factor and how we can use it to understand everyday reasoning.

Using Bayes' Factor to Build a Voight-Kampff Test!

Just say no to p-values

We further explore Bayes' Factor by building the Voight-Kampff Test from Blade Runner! In this post, we explore how to make Bayes' Factor easier to interpret using decibel-based 'evidence'. Evidence allows us to very easily interpret the results of our test. No more confusing p-values!
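
As a rough illustration (the likelihoods below are invented; the decibel convention is Jaynes's 10 × log10 of the ratio):

import math

p_data_given_h1 = 0.08  # assumed P(data | hypothesis 1)
p_data_given_h2 = 0.01  # assumed P(data | hypothesis 2)

bayes_factor = p_data_given_h1 / p_data_given_h2
evidence_db = 10 * math.log10(bayes_factor)

print(bayes_factor, round(evidence_db, 1))  # 8.0 and 9.0 dB in favour of hypothesis 1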

Recommended Reading to really understand Bayesian thinking

Probability Theory: The Logic of Science

You could easily spend a lifetime pondering Jaynes' masterpiece.


Probability Theory: The Logic of Science is the book to read if you're fascinated with Bayesian thinking. Though there is plenty of math, this book is about philosophy. Jaynes is probably the most radical Bayesian there is. A few pages often provide me with food for thought for months. Also, if you need insults to sling at Frequentists this is the place to get them!


The catch is that this is a very challenging book. To help guide you through it, Aubrey Clayton has put together a very nice set of lectures.

Introductory lecture on Probability Theory: The Logic of Science by E.T. Jaynes. Aubrey Clayton, March 2015

 

Bayesian Analysis

Bayesian analysis is where we put what we've learned to practical use. In my experience, there are two major benefits to Bayesian statistics over classical statistics: 

The first is that you can very easily model existing information. This can very often lead to better results since the model has more to work with. Additionally, you can account for inherent bias in your analysis. By modeling the subjective parts of analysis, you can better control them.

The second is that there is no "mystery meat" in your tools. All of Bayesian statistics is built from the basics of probability. If you understand the posts so far, you have the tools to build nearly everything you'll need for practical analysis.

Recommended Posts to Learn Bayesian Analysis

Parameter Estimation

Understanding CDF plots is very important


Most probability books tell us how likely events are. We'll be told that a biased coin has a 0.6 probability of heads. In practice, we often don't know these probabilities. Determining whether or not the data comes from a biased coin is the real challenge. Parameter estimation deals with how we fill in these missing probabilities. 
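
As a small sketch of the coin example (assuming SciPy is available; the flip counts are invented), a Beta posterior fills in the missing probability:

from scipy import stats

heads, tails = 14, 6
# With a flat Beta(1, 1) prior, observing the flips gives a
# Beta(1 + heads, 1 + tails) posterior over the coin's bias.
posterior = stats.beta(1 + heads, 1 + tails)

print(posterior.mean())          # ~0.68, a point estimate of P(heads)
print(posterior.interval(0.95))  # a 95% credible interval for the bias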

Bayesian Priors for Parameter Estimation

Three priors from the same data


We've seen that priors can be used for subjective information. But quite often prior beliefs are based on hard data. In this post, we see how we can use existing data to develop priors. With data-backed priors, we can develop much better parameter estimates.
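
A hedged sketch of the same machinery with a data-backed prior (all counts invented): past observations set the Beta prior, and new data updates it.

from scipy import stats

prior_heads, prior_tails = 70, 30  # assumed historical data
new_heads, new_tails = 3, 7        # assumed new observations

prior = stats.beta(prior_heads, prior_tails)
posterior = stats.beta(prior_heads + new_heads, prior_tails + new_tails)

print(prior.mean(), posterior.mean())  # 0.7 vs ~0.66: strong prior data keeps ten new flips from swinging the estimate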

Rejection Sampling and Tricky Priors

Rejection sampling looks pretty fun!

In all of the examples we've seen so far we've had very nice probability distributions to work with. Quite often we aren't so lucky. In this post, we learn one of the simpler techniques for working with difficult prior probabilities and still getting useful results.
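
A bare-bones rejection sampler looks something like this (the "tricky" density here is invented for illustration):

import random

def f(x):
    return abs(x - 0.5) + 0.1  # an awkward, unnormalised density on [0, 1]

M = 0.6  # an upper bound for f on [0, 1]

def sample():
    # Propose uniformly; accept with probability f(x) / M.
    while True:
        x = random.random()
        if random.random() < f(x) / M:
            return x

draws = [sample() for _ in range(10000)]  # approximate samples from f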


Bayesian A/B Testing: A Hypothesis Test that Makes Sense

The beauty of two competing posteriors!

Can you derive a classical t-test from scratch? My guess is not without some difficulty. One of the benefits of Bayesian statistics is everything can be built from the basics. In this post, we combine everything covered so far to build an A/B test from scratch.
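
In the same spirit, here is a minimal Monte Carlo version (assuming NumPy and SciPy; the conversion counts are invented): draw from each variant's posterior and count how often B beats A.

import numpy as np
from scipy import stats

a_conv, a_total = 36, 1000  # assumed conversions and trials for variant A
b_conv, b_total = 50, 1000  # assumed conversions and trials for variant B

post_a = stats.beta(1 + a_conv, 1 + a_total - a_conv)
post_b = stats.beta(1 + b_conv, 1 + b_total - b_conv)

n = 100000
print(np.mean(post_b.rvs(n) > post_a.rvs(n)))  # estimated P(B's true rate > A's)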


Recommended Books for Bayesian Analysis

Doing Bayesian Data Analysis.

The next logical step after the posts here

If you've followed the posts up to this point, Doing Bayesian Data Analysis is an excellent next step. It will cover much of the material we have so far and then take you much further. The approach is mathematical, but never too challenging.

Bayesian Data Analysis.

This book is full of wonderful, practical examples

For practical Bayesian statistics, nobody gets me more excited than Andrew Gelman! This is not an easy book to work through but it is an absolute gem. The text is filled with wonderful, real-world examples that will always renew your love of Bayesian Statistics.

Here's a great video that shows off Gelman's enthusiasm for Bayesian Analysis:

Delivered by Andrew Gelman, Director, Applied Statistics Center, Columbia University, at the inaugural New York R Conference in New York City at Work-Bench on Friday, April 24th, and Saturday, April 25th.

 

It's never the end! Look out for updates!

There's always more to explore in Bayesian Statistics! I plan to continually update this guide as I write relevant posts or come across other amazing books. Please subscribe to the email list or follow me on Twitter for updates. Feel free to leave a comment on any topics you'd like to see covered!

02 May 22:16

Unbundled

by Chris Saad

How the breaking apart of traditional, rigid structures is creating a personalized, on-demand future and changing the everyday interactions of people, politics, and profit.

About this post

This post is based on a theory and a book outline I’ve been chipping away at since 2010. Since I’m probably going to be too busy to ever finish the full thing, I figured I would massively truncate and post it here so that it’s finally out in the world in some form. In the six years I’ve been thinking about this subject, it’s only become clearer with the advent of the on-demand economy, 3D printing etc. Please excuse the length!

Introduction

In Silicon Valley we’ve used the term “Unbundling” to describe the phenomenon of mobile apps breaking apart into multiple separate apps, each essentially providing more focused, single-purpose features. Think of the Facebook app being separated into Facebook + Messenger.

I believe this Unbundling phenomenon is happening almost universally across all aspects of life. It’s a meta-trend that has been happening for decades (or more) and will continue for decades to come. It’s a common process affecting many of the things happening in the world today. In fact most of the major disruptions we see (loss of traditional jobs, failing record companies, terrorism, divorce rates, the rise of fringe/underdog political candidates etc) are all, in at least some way, connected to this fundamental transition.

See the full post on Medium

02 May 22:15

notes on debian packaging for ubuntu

I’ve devoted some time over the past month to learning how to create distribution packages for Ubuntu, using the Debian packaging system. This has been a longstanding interest for me since Dane Springmeyer and Robert Coup created a Personal Package Archive (PPA) to easily and quickly distribute reliable versions of Mapnik. It’s become more important lately as programming languages have developed their own mutually-incompatible packaging systems like npm, Ruby Gems, or PyPI, while developer interest has veered toward container-style technology such as Vagrant or Docker. My immediate interest comes from an OpenAddresses question from Waldo Jaquith: what would a packaged OA geocoder look like? I’ve watched numerous software projects create Vagrantfile or Dockerfile scripts before, only to let those fall out of date and eventually become a source of unanswerable help requests. OS-level packages offer a stable, fully-enclosed download with no external dependencies on 3rd party hosting services.

The source for the package I’ve generated can be found under openaddresses/pelias-api-ubuntu-xenial on Github. The resulting package is published under ~openaddresses on Launchpad.

What does it take to prepare a package?

The staff at Launchpad, particularly Colin Watson, have been helpful in answering my questions as I moved from getting a tiny “hello world” package onto a PPA to wrapping up Mapzen’s worldwide geocoder software, Pelias.

I started with a relatively simple tutorial from Ask Ubuntu, where a community member steps you through each part of wrapping a complete Debian package from a single shell script source. This mostly worked, but there are a few tricks along the way. The various required files are often shown inside a directory called “DEBIAN”, but I’ve found that it needs to be lower-case “debian” in order to work with the various preparation scripts. The control file, debian/control, is the most important one, and has a set of required fields arranged in stanzas that must conform to a particular pattern. My first Launchpad question addressed a series of mistakes I was making.

The file debian/changelog was the second challenge. It needs to conform to an exact syntax, and it’s easiest to have the utility dch (available in the devscripts package) do it for you. You need to provide a meaningful version number, release target, notes, and signature for this file to work. The release target is usually a Debian or Ubuntu codename, in this case “xenial” for Ubuntu’s 16.04 Xenial Xerus release. The version number is also tricky; it’s generally assumed that a package maintainer is downstream from the original developer, so the version number will be a combination of upstream version and downstream release target, in my case Pelias 2.2.0 + “ubuntu” + “xenial”.

A debian/rules file is also required, but it seems to be sufficient to use a short default file that calls out to the Debian helper script dh.

I have not been able to determine how to test my debian directory locally, but I have found that the emails sent from Launchpad after using dput to post versions of my packages can be helpful when debugging. I tested with a simple package called “hellodeb”; here is a complete listing of each attempt I made to publish this package in my “hello” PPA as I learned the process.

My second Launchpad question concerned the contents of the package: why wasn’t anything being installed? The various Debian helper scripts try to do a lot of work for you, and as a newcomer it’s sometimes hard to guess where it’s being helpful, and where it’s subtly chastising you for doing the wrong thing. For example, after I determined that including an “install” make target in the default project Makefile that wrote files to $DESTDIR was the way to create output, it turned out that my attempt to install under /usr/local was being thwarted by dh_usrlocal, a script which enforces the Linux filesystem standard convention that only users should write files to /usr/local, never OS-level packages. In the end, while it’s possible to simply list everything in debian/install, it seems better to do that work in a more central and easy-to-find Makefile.

Finally, I learned through trial-and-error that the Launchpad build system prevents network access. Since Pelias is written in Node, it is necessary to send the complete code along with all dependencies under node_modules to the build system. This ensures that builds are more predictable and reliable, and circumvents many of the SNAFU situations that can result from dynamic build systems.

Rolling a new release is a four step process:

  1. Create a new entry in the debian/changelog file using dch, which will determine the version number.
  2. From inside the project directory, run debuild -k'8CBDE645' -S (“8CBDE645” is my GPG key ID, used by Launchpad to be sure that I’m me) to create a set of files with names like pelias-api_2.2.0-ubuntu1~xenial5_source.*.
  3. From outside the project directory, run dput ppa:migurski/hello pelias-api_2.2.0-ubuntu1~xenial5_source.changes to push the new package version to Launchpad.
  4. Wait.

Now, we’re at a point where a possible Dockerfile is much simpler.

02 May 22:15

Why Understanding These Four Types of Mistakes Can Help Us Learn



Eduardo Briceño, MindShift, May 05, 2016


"Mistakes are not all created equal," writes the author, "and they are not always desirable. In addition, learning from mistakes is not all automatic. In order to learn from them the most we need to reflect on our errors and extract lessons from them." Eduardo Briceño makes this point clear by identifying four types of mistakes, two of which can be seen as beneficial, and two of which really should be avoided.

[Link] [Comment]
02 May 22:15

The ‘Maker’ Movement: Understanding What the Research Says


Benjamin Herold, EdWeek Market Brief, May 05, 2016


The Maker movement began as a free-form exercise. "Typically, 'Making' involves attempting to solve a particular problem, creating a physical or digital artifact, and sharing that product with a larger audience. Often, such work is guided by the notion that process is more important than results." But as it began to be applied more in schools, it began to evolve. Diversity and inclusiveness became more important, and questions began to be asked about what was learned. This article is a good overview of some of the recent research. And it's interesting to compare the similarities between the evolution of MOOCs and the evolution of making.

[Link] [Comment]
02 May 22:15

Fifty shades of open


Jeffrey Pomerantz, Robin Peek, First Monday, May 05, 2016


This could have been much more appropriately titled, but the content of the piece is spot on. Specifically:

Open means rights
Open means access
Open means use
Open means transparent
Open means participatory
Open means enabling openness
Open means philosophically aligned with open principles

[Link] [Comment]
02 May 22:15

BlueGriffon officially recommended by the French Government

by BlueGriffon

TL;DR: BlueGriffon is now officially recommended as the HTML editor for the French Administration in its effort to rely on and promote Free Software!

I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016!!! You will find the official list of recommended software here (PDF document).

02 May 22:15

Smartphones – The second derivative.

by windsorr


Gap between ecosystem and hardware to increase this year. 

  • With the slowdown in the smartphone market more severe than even I had expected, it is Xiaomi that looks to be in real trouble.

Smartphone and ecosystem

  • Q1 16A smartphone shipments look like they have been flat or declined as much as 3% to around 340m units compared to 344m units in Q1 15A.
  • This is below RFM’s forecast of 1% growth and substantially below that which I believe most commentators and the technology industry were expecting.
  • The problem with the flattening of the market is that handset makers will have to fight even harder to find growth resulting in even greater pricing pressure.
  • This means that in revenue terms the handset market could decline by 5-10% this year.
  • This is great for consumers and for the ecosystem companies that want smartphones in the hands of as many users as possible, but for the hardware makers it is disastrous.
  • All handset makers with the exception of Samsung and Apple are barely breaking even and this added pressure could push more of them into loss making territory.
  • Consequently, I expect that this year will see an acceleration of the shakeout as the smaller companies realise that they have no hope of ever making a decent return by making commodity Android handsets.
  • This further increases my preference for the ecosystem companies as their addressable markets will keep growing despite the stagnation in the handset market.
  • The addressable market for an ecosystem is smartphone users which RFM forecasts will grow by 14% this year to 2.82bn users from 2.46bn at the end of 2015.
  • This is how the likes of Google, Facebook, Baidu, Tencent and so on will be able to post good growth this year despite the hardships being endured by hardware.

Xiaomi

  • The two exceptions to this are Apple and Xiaomi, both of which have decided to monetise their ecosystem by selling hardware.
  • However, it is there that the similarity ends as despite its growth issues, Apple is still fantastically profitable.
  • Xiaomi, on the other hand, is not, and this is the third quarter in a row in which it has lost market share.
  • To add insult to injury, it is also no longer number 1 in its home market of China, having been overtaken by both a resurgent Huawei and Oppo.
  • This leads me to believe that Xiaomi has no money to invest in its ecosystem, which will result in it falling further behind Baidu, Tencent, Alibaba and even China Mobile.
  • For Xiaomi, 2015 was a year in which it grew, but not as much as it had promised, while 2016 is looking like one where revenues could decline as much as 10%.
  • For a company that last raised money at $46bn on the promise of very rapid growth, this is a dreadful outcome as Xiaomi badly needs to invest in its ecosystem, has no money to do so and will have great difficulty in raising any more.
  • To compound its problems it also appears that usage of its ecosystem is waning (see here) which means that the loyalty of its users to its devices may also be in decline.
  • This will further hamper profitability making the outlook for Xiaomi very difficult indeed.
  • I continue to believe that any investor that can offload his shares in Xiaomi at a valuation of $27bn will be doing very well indeed.
02 May 22:14

When Documents Become Databases – Tabulizer R Wrapper for Tabula PDF Table Extractor

by Tony Hirst

Although not necessarily the best way of publishing data, data tables in PDF documents can often be extracted quite easily, particularly if the tables are regular and the cell contents reasonably spaced.

For example, official timing sheets for F1 races are published by the FIA as event and timing information in a set of PDF documents containing tabulated timing data:


In the past, I’ve written a variety of hand-crafted scrapers to extract data from the timing sheets, but the regular way in which the data is presented in the documents means that they are quite amenable to scraping using a PDF table extractor such as Tabula. Tabula exists both as a server application, accessed via a web browser, and as a service using the tabula extractor Java application.

I don’t recall how I came across it, but the tabulizer R package provides a wrapper for tabula extractor (bundled within the package) that lets you access the service via its command line calls. (One dependency you do need to take care of is to have Java installed; adding Java into an RStudio docker container would be one way of taking care of this.)

Running the default extractor command on the above PDF pulls out the data of the inner table:

extract_tables('Best Sector Times.pdf')


Where the data is spread across multiple pages, you get a data frame per page.


Note that the headings for the distinct tables are omitted. Tabula’s “table guesser” identifies the body of the table, but not the spanning column headers.

The default settings are such that tabula will try to scrape data from every page in the document.


Individual pages, or sets of pages, can be selected using the pages parameter. For example:

  • extract_tables('Lap Analysis.pdf', pages=1)
  • extract_tables('Lap Analysis.pdf', pages=c(2,3))

Specified areas for scraping can also be specified using the area parameter:

extract_tables('Lap Analysis.pdf', pages=8, guess=F, area=list(c(178, 10, 230, 500)))

The area parameter originally appeared to take co-ordinates in the form top, left, width, height, but is now fixed to take co-ordinates in the same form as those produced by the Tabula app debug console: top, left, bottom, right.

You can find the necessary co-ordinates using the tabula app: if you select an area and preview the data, the selected co-ordinates are viewable in the browser developer tools console area.


The tabula console output gives co-ordinates in the form top, left, bottom, right, which now matches the form that the tabulizer area parameter wants.


Using a combination of “guess” to find the dominant table, and specified areas, we can extract the data we need from the PDF and combine it to provide a structured and clearly labeled dataframe.

On my to do list: add this data source recipe to the Wrangling F1 Data With R book…


02 May 22:14

Lower socioeconomic status linked to lower education attainment

by Nathan Yau

Money, race, and success

The Upshot highlights research from the Stanford Center for Education Policy Analysis that looks into the relationship between a child’s parents’ socioeconomic status and their educational attainment. Researchers focused on test scores per school district in the United States.

Children in the school districts with the highest concentrations of poverty score an average of more than four grade levels below children in the richest districts.

Even more sobering, the analysis shows that the largest gaps between white children and their minority classmates emerge in some of the wealthiest communities, such as Berkeley, Calif.; Chapel Hill, N.C.; and Evanston, Ill. (Reliable estimates were not available for Asian-Americans.)

Be sure to scroll down and browse the chart that shows points for race within the same school districts. Color represents race, and connecting lines between dots show the magnitude of the differences between white, Hispanic, and black students.

If you’re interested in the data itself, you can download it from the Stanford Education Data Archive.

See also the education spending map from NPR, which suddenly takes on a new dimension.

Tags: education, race, Upshot

02 May 22:14

Handling context in "outside-in"

by Dries

In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.

The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".

When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.

Why context matters

For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:

  1. Managing context is essential to site building. Site builders commonly want to place a block or menu item that will be visible on not just one but several pages or to not all but some users. A key principle of outside-in is previewing as you edit. The challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator.
  2. Managing context is a big usability problem on its own. Even without outside-in patterns, making context simple and usable is an unsolved problem. Modules like Context and Panels have added lots of useful functionality, but all of it happens away from the rendered page.

The ingredients: user groups and page groups

To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.

To solve the problem, we recommend introducing 3 new concepts:

  1. Page groups: re-usable collections of URLs, wildcards, content types, etc.
  2. User groups: reusable collections of roles, user languages, or other user attributes.
  3. Impersonation: the ability to view the page as a user group.

Page groups

Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store, for example, may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content consisting either of a certain type (e.g. all research reports), or of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or a combination of the two (similar to the Context module, but Context never provided an in-place UI).

User groups

User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.

Impersonation

As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups. Switching between different user groups allows a page to be previewed as that type of user.

Using page groups, user groups and impersonation

Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:

  1. I'm a site builder working on a life sciences journal with a paywall and I want to place a block called "Download report" next to all entities of type "Research summary" (content type), but only to users with the role "Subscriber" (user role).
  2. I want to place a block called "Access reports" on the main page, the "About us" page, and the "Contact us" page (URL based), and all research summary pages, but only for anonymous users.

Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.

Step #1: place a block for anonymous users

Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.


Place a block for anonymous users

First the editor changes the impersonation to "Anonymous", then she places the block. She is informed about the impact of the change.

Step #2: place a block for subscribers

Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interaction happens smoothly, and with animation, so that changes that occur on the page are not missed.


Place a block for subscribers

The editor changes the impersonation to "Subscribers". When she does, the "Access reports" block is hidden, as it is not visible to subscribers. When she places the "Download report" block and chooses the "Subscriber pages" page group, she is notified about the impact of the change.

Step #3: see if you did it right

Once our editor has finished step one and two she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible for Anonymous users or vice versa. This is where impersonation comes in.


Confirm you did it right

The anonymous users need to see the "Access reports" block and subscribers need to see the "Download report" block. Impersonation lets you see what that looks like for each user group.

Summary

The idea of combining a number of contexts into a single object is not new; both Context and Panels do this. What is new here is that when you bring this to the front-end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.

02 May 22:14

From the Horse’s Mouth: Councillor Joe Mihevc goes all out for the minimum grid

by dandy


Illustration by Ian Sullivan

In our latest contribution, councillor Joe Mihevc (Ward 21, St. Paul's West) declares his support for bike lanes and the minimum grid. He also hints at Bike Share news, touts infrastructure expansion and discusses collaboration with city staff to build successful cycling projects in the city.

What are the top priorities for bike infrastructure in your ward this year?

Bike Share. The city is expanding the stations this year and there will be a few arriving for Ward 21. An announcement will be made by the city in June, so I will need to leave it at that for now, but expanding the program up the escarpment is a discussion in which I have engaged city staff over the past few years. The potential the expansion has to increase the number of people choosing to make trips by bike is exciting.

Another piece of infrastructure that I am happy about: our Ward 21 cyclists' request for a lane along Winona Avenue has made it into the Cycling Division's 10 Year Plan, which will go to the May 16th public works and infrastructure committee (PWIC) meeting and then to city council for approval.

Would you encourage PWIC, the executive committee and council to support the minimum grid?

Absolutely. I am among councillors who signed on to Cycle Toronto's minimum grid goal. I am working to assist in the effort to have the city meet those bike lane targets. I personally met with city cycling staff to discuss the successful motion by council colleagues at the March PWIC meeting, requesting staff include $20 million and $25 million annual options for a budget increase to make the minimum grid happen. That report is part of the 10 Year Plan being considered at the May 16th PWIC meeting - and I will certainly defend the minimum grid all the way to City Council.

Related on the dandyBLOG

From the Horse’s Mouth: Gord Perks on pushing for more from city hall

Bike lanes on Bloor now one council meeting away from becoming reality

Back in the saddle: Hoopdriver re-opens in new location

Celebrating community and building bike infrastructure: Ontario Bike Summit roundup

From the Horse’s Mouth: Councillor Janet Davis on improving cycling in Ward 31

Protecting vulnerable road users protects all of us

Scarborough cyclists get a spring boost with two new bike hubs

From the Horse’s Mouth: Michael Black on Building Better Bike Lanes

From the Horse’s Mouth: Councillor Mike Layton on building community support for bike lanes

From the Horse’s Mouth: Cycling forecast for 2016

From the Horse’s Mouth: Nancy Smith Lea, TCAT

From the Horse’s Mouth: Councillor Joe Cressy on bike projects in Ward 20

From the Horse’s Mouth: Jennifer Keesmaat on the best city projects of 2015, and a look at the year ahead

02 May 22:14

Ontario is investing $20 million in public EV charging stations in 2017

by Rob Attrell

The Ontario government has been ahead of the curve in trying to adapt to the probable revolution looming in car transportation, with initiatives allowing self-driving vehicles to be tested in the province, and a $325 million Green Investment Fund working to reduce the effects of climate change through meaningful, environmentally-friendly initiatives.

This week, the Ontario Ministry of Transportation announced that a $20 million grant program has been created from the Green Investment Fund to build 500 electric vehicle charging stations across Ontario next year. The Electric Vehicle Chargers Ontario program will provide 27 public and private sector groups with funds to create a network of charging stations in places where they’re needed most.

Transportation is the single largest source of greenhouse gas emissions in Ontario, with cars accounting for more atmospheric pollution than the iron, steel, cement and chemical industries combined. Currently, there are 6,400 electric vehicles on Ontario’s roads, but with the industry slowly moving toward hybrid and electric models, that number will be growing quickly in the next few years.

The province has set a goal to reduce greenhouse gas emission levels to just 20 percent of 1990 pollution levels by 2050. Making recharging stations more accessible across the province will make owning an electric vehicle, as opposed to a gas-powered one, that much easier.

Source: Ontario
02 May 22:14

BCE to acquire MTS for $3.9 billion, plans to divest one-third of the postpaid subscribers to Telus

by Ian Hardy

Wireless competition in Canada may have just been reduced.

BCE, parent company to Bell’s wireless business, has entered into an agreement to purchase Manitoba-based MTS (Manitoba Telecom Services Inc.) for $3.9 billion.

The transaction is expected to close in late 2016 or early 2017 pending regulatory approvals. As for the terms, BCE has agreed to pay MTS shareholders $40.00 per share, which is reportedly above the current price by a staggering 23.2 percent, and “values MTS at approximately 10.1 times 2016 estimated EBITDA.” The total transaction value is about $3.9 billion: BCE will purchase all shares for $3.1 billion and assume MTS’s debt of approximately $800 million. Both BCE’s and MTS’s shareholders and Boards of Directors have agreed to the terms of the deal.

Bell has also agreed to open a western Canadian headquarters in Manitoba, which will employ 6,900 people (MTS currently has 2,700 employees). To show its commitment to the province, Bell will also build out its network in the region, investing $1 billion over the next five years; specifically, its Gigabit Fibe Internet will become available 12 months after the transaction closes, its LTE network will expand, and Fibe TV will be released.

MTS currently has 565,000 subscribers and the terms also state that Bell will divest a third “of MTS dealer locations in Manitoba to Telus.” This represents approximately 140,000 subscribers going to Telus. Unfortunately, there is no indication of the dollar amount Bell will be selling the MTS subs to Telus for.

When the transaction closes, Bell and MTS have agreed to sell their services under the “Bell MTS” brand name.

Finally, fine print in the release reveals that if the “arrangement agreement is terminated in certain circumstances, including if MTS enters into a definitive agreement with respect to a superior proposal, BCE is entitled to a break-fee payment of $120 million.”

George Cope, president and CEO of BCE, stated, “BCE looks forward to being part of Manitoba’s strong growth prospects, building on the tremendous MTS legacy of technological innovation, customer service and competitive success by delivering the best broadband, wireless, internet and TV services to the people of Manitoba in communities large and small. As the headquarters for the Western operations of BCE, Bell MTS will focus on delivering the benefits of new broadband communications infrastructure, ongoing technology development and enhanced community investment to Manitobans everywhere.”

“This transaction recognizes the intrinsic value of MTS and will deliver immediate and meaningful value to MTS shareholders, while offering strong benefits to MTS customers and employees, and to the Province of Manitoba,” said Jay Forbes, President & CEO, MTS. “We are proud of our history and what we have achieved as an independent company. We believe the proposed transaction we are announcing today with BCE will allow MTS to build on our successful past and achieve even more in the future.”

Source: BCE
02 May 22:12

Understanding Bluetooth Pairing Problems

by Kevin Purdy


We’ve received some complaints from our readers about the Bluetooth devices we recommend acting up, working intermittently, or otherwise failing, especially when multiple devices are involved. The problems include failing to pair, audio hiccups, and recurring dropped connections. The situation usually involves a few Bluetooth devices—say, a phone, a smartwatch, and a car stereo—trying to get along. Sometimes the conflict is between a phone, a fitness tracker, and Bluetooth headphones. Occasionally, the issue is simply a keyboard that’s confused about different iPads. And though we haven’t heard about all three-way troubles, we have some ideas about what’s going on.

02 May 22:12

What Happened to Google Maps?

by Federico Viticci

Fascinating study by Justin O'Beirne on how Google Maps changed from 2010 to 2016 – fewer cities, more roads, and not a lot of balance between them on a map at the same zoom level.

He writes:

Unfortunately, these "optimizations" only served to exacerbate the longstanding imbalances already in the maps. As is often the case with cartography: less isn't more. Less is just less. And that's certainly the case here.

As O'Beirne also notes, the changes were likely made to provide a more pleasant viewing experience on mobile devices.

I understand his point of view – the included examples really make a solid case – but I can also see why Google may consider the average user (looking up points of interest nearby, starting navigation on their phone) and think that most users don't want that kind of cartographic detail anymore.

It'd be interesting to see the same comparisons between Apple and Google, as well as between old Apple Maps and Apple Maps today.

→ Source: justinobeirne.com

02 May 22:12

Change in Metro

by pricetags

Change and development along the rapid-transit lines as part of “Skywalking through Burnaby” on Sunday:

A big hole at the Brentwood Town Centre redevelopment. Is this the biggest single complex in the municipality’s history?


At the Commercial-Broadway station, the new east platform for westbound trains is visible:


Already the station handles more passengers in a day than YVR airport.  With the opening of the Evergreen line later, the crush would be unmanageable without a platform expansion.


02 May 22:12

Tailoring Pants for Square

by Eric Ayers

This week, the Pants project announced a 1.0 release of the open source Pants Build System. The 1.0 release of Pants indicates that the tool is ready for more widespread adoption.

Square is a proud contributor to Pants. Developers at Square have been using and contributing to Pants since 2014 to develop our Java services. When we first joined the project, we found a tool that required lots of customization and insider knowledge to install and operate. Today the tool has a streamlined installation process, extensive documentation, a clean extensible design, a vibrant community and a history of stable weekly releases.

With Pants we get:

  • Reliable, reproducible builds from the current view of the code repository
  • A streamlined development workflow
  • Easy IDE setup
  • Strong integration with third party artifact repositories
  • Consistent results when switching branches
  • A distributed build cache that can be shared with CI builders and developer laptops
  • Lots of built-in tooling to help us analyze our large build graph
  • The ability to define fine grained dependencies between code modules
  • An extensible tool that can grow with our needs

./pants compile service

To understand why Square uses a tool like Pants, it helps to understand our software lifecycle. We use a monolithic codebase (monorepo) for many of the same reasons that Google does.

We build and release services from HEAD of master. Our Java codebase is housed almost entirely in a single repo consisting of over 900 projects and 45,000 source files. In this style of development, we prefer keeping all code at HEAD consistent using global refactoring and rigorous automated testing instead of maintaining strict API backwards compatibility and long deprecation cycles within the codebase.
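
As a rough illustration (target names and paths here are hypothetical, in the style of Pants 1.x BUILD files), each module declares its own sources and fine-grained dependencies, which is what commands like ./pants compile traverse:

java_library(
  name='service-lib',
  sources=globs('*.java'),
  dependencies=[
    '3rdparty/jvm:guava',          # a third-party artifact
    'common/logging:logging-lib',  # another module in the monorepo
  ],
)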

We also have a strong commitment to using open source libraries. We extensively rely on artifacts published through the Maven Central Repository. When we upgrade a library, we can do so in one place and update the code dependencies for all services.

./pants idea service::

With such a large codebase, it becomes impractical to load the entire repo into the IDE. At Square, we primarily use IntelliJ IDEA to develop code. We use Pants to configure and launch it. Probably the most valuable feature for day to day development is simply having the ability to quickly bring up any portion of the repo in IntelliJ. With a single command, Pants configures IntelliJ to edit a module and all of its dependencies defined in the repo.

Making it easy to configure any project in the IDE means that developers can easily contribute to any project in the codebase. Being able to easily switch between branches encourages developers to collaborate. Now it is convenient to check out each other’s work locally when performing code reviews. It is easier to confine change requests to small, manageable chunks and switch between them while waiting on code reviews to complete.

./pants --help

We came to the Pants project looking for a tool to help solve problems in our build environment. Previously, we used Apache Maven to compile and package binaries. Maven is a powerful and popular tool with a modular design that makes it easy to extend with excellent support from third party tools and libraries. We had a significant investment in Maven, including many custom plugins for running code generation and supporting a distributed cache for artifacts in our continuous integration (CI) build system.

Using Maven with our “build everything from HEAD” policy strains the Maven model. Maven is designed to support editing a few modules at a time while relying on binary artifacts for most dependencies. To support building the entire repo from HEAD, we set every Maven module in the repo to a SNAPSHOT version.

Using Maven in this way works, but has drawbacks. Running a recursive compile of all dependent modules incurs a lot of overhead. We had wrapper scripts to help us try to be productive in this environment, say, to just run code generation or only run a subset of tests. Still, developers would get into trouble in some situations, often having to deal with inconsistencies between stale binary artifacts and the source on disk. For example, after using mvn install, pulling in new changes from the repo or switching back to an older branch could leave them compiling against stale code. When developers routinely question the integrity of their development environment, they waste a lot of time cleaning and rebuilding the codebase.

./pants test service:test

Our first priority was to allow developers to quickly configure their workspace in the IDE. Next, we migrated to using Pants as the tool to test and deploy artifacts in our CI builder. As of this writing, we have replaced all of our use of Maven in this repo using Pants, including:

  • Developing on developer workstations and laptops
  • Compiling and testing code in our continuous integration environments
  • Publishing artifacts to a Maven style repository
  • Integrating with third party tools like findbugs and kloc

Replacing all of our uses of Maven was not easy. We were able to do this by generating the Pants configuration using the Maven pom.xml files as a source of truth. During an interim phase we supported both tools. Through collaboration with the Pants open source community, we were able to modify Pants through hundreds of open source contributions.

./pants staging-build --deploy

Pants comes out of the box ready to edit, compile, test, and package code. Beyond that, we were able to leverage Pants' extensible module based system. A current favorite is a small custom task to deploy directly to staging environments over our internal deployment system. Along the way, other custom modules run custom code generators, gather meta information about our build process for our security infrastructure, and package the output of annotation processors into yaml files for our deployment system. Today we have about two dozen internal Pants plugins that do all those things, plus additional tools to audit our codebase, integrate with our CI system, and customize our IDE support.

Ship it!

At Square, Pants has helped us realize the promised benefits of monorepo style development at scale. Making sure the development process is consistent and reliable increases our developer velocity. Being able to quickly and reliably edit, compile, and test allows developers to concentrate on reviewing and writing code, not struggling with configuring tools. We believe that Pants is ready for more widespread adoption and encourage you to give it a try.

02 May 22:12

Managing feedback challenges writers (Survey data)

by Josh Bernoff

My survey of 547 business writers found one big pain point: managing editorial feedback. Only half of business writers get the feedback they need, and only one in three feels that their process for managing feedback works well. Business writers have little love for feedback processes In this table, the first data column summarizes all the responses; the other columns … Continue reading Managing feedback challenges writers (Survey data) →

The post Managing feedback challenges writers (Survey data) appeared first on without bullshit.

02 May 22:08

Should we end the single-family home in Vancouver?

by pricetags

An item from Ian, found in Vancity Buzz.


Greg Mitchell asks the question:

… let’s rethink the narrative. We know Vancouver is an expensive city in which to live – it seems to be all Vancouverites think about currently (yes, I’m guilty too). And we are obviously limited in terms of our land on which we can develop (mountains, ocean – enough said). So our only option is to densify – but the question is HOW to densify.

He provides, in detail, an alternative: four townhouses on a 66-foot lot.

Those details here.

 


02 May 22:08

Samsung’s IoT SmartThings platform suffers from serious security issues, says report

by Patrick O'Rourke

With the smartphone industry seemingly hitting a plateau when it comes to innovation and, perhaps more importantly, excitement, internet of things (IoT) gadgets have become one of the fastest-expanding and most interesting areas of the tech industry.

In Samsung’s case, however, a report from University of Michigan computer science researchers indicates the South Korean smartphone manufacturer’s IoT SmartThings platform suffers from security issues that could potentially allow malicious apps to operate smart locks, change access codes and set off Wi-Fi-enabled smoke detectors, as well as carry out a variety of other attacks on Samsung’s smart device line.

A malicious SmartThings app, with access to more permissions than necessary, downloaded directly from Samsung’s SmartThings store, is the source of the security issues according to the research. The problem also stems from apps being given permissions that aren’t actually required. For example, a smart lock only needs the ability to lock remotely, but SmartThings’ API links this command with a variety of others.

After installation, SmartThings apps also request additional permissions, allowing them to be linked to different apps installed on the smartphone, a move the researchers say isn’t necessary because it gives the app more access than is required.

Researchers demonstrated their discovery through an app that monitors the battery life of a variety of Samsung SmartThings products. After installing and granting the malicious but normal-looking app permissions on the smartphone, it not only monitors battery, but also has the ability to manipulate the lock’s functionality. It does this by automatically sending out an SMS to the app’s developer each time the user reprograms the smart lock’s pin code.

A second demonstration showed off an app allowing the user to program their own pin code through an app that locks and unlocks a browser. Research revealed that of the 499 apps that were part of the study, 42 percent have more privileges than are necessary, giving malicious developers ample opportunity to create exploits.

Following this discovery, the University of Michigan researchers behind the discovery say they have reached out to Samsung’s SmartThings team with their findings.

While these exploits do require user interaction, many people swiftly move through the permission section of installing an app without actually realizing what they’re giving the software access to. Researchers say that of 22 SmartThings users they surveyed, 91 percent said they would allow a battery monitoring app to check their smart lock and give the app whatever permissions it requested. However, only 14 percent said they would allow the battery app to send door access codes to a remote server.

In an email to The Verge, a SmartThings representative said the following about the study:

“The potential vulnerabilities disclosed in the report are primarily dependent on two scenarios – the installation of a malicious SmartApp or the failure of third party developers to follow SmartThings guidelines on how to keep their code secure. Following this report, we have updated our documentation to provide even better security guidance to developers.”

“Smart home devices and their associated programming platforms will continue to proliferate and will remain attractive to consumers because they provide powerful functionality. However, the findings in this paper suggest that caution is warranted as well – on the part of early adopters, and on the part of framework designers. The risks are significant, and they are unlikely to be easily addressed via simple security patches.”


02 May 22:07

The biggest single project in Burnaby history?

by pricetags

I asked below whether Brentwood Town Centre was the largest single project ever seen in Burnaby. Should have checked my e-mail to see that this just came in, via Vancity Buzz:


A master-planned community called Concord Brentwood is the latest development from Concord Pacific Developments Inc., renowned for its skyline-defining communities on Vancouver’s False Creek and Toronto’s lakefront.

Concord Brentwood will create a bustling community according to Concord Pacific senior vice president Matt Meehan. “Our next project in Burnaby, Concord Brentwood, will see 26 acres in the Brentwood neighbourhood transform into a beautiful and diverse mixed-use park-side community that completes the exciting revitalization of the Brentwood Town Centre neighbourhood.” …

Designed by award-winning architect James K.M. Cheng of Vancouver, Concord Brentwood will consist of 10 towers, most between 40 and 45 storeys tall. Tower 1 of Phase 1 will consist of 426 units on 45 storeys.

I don’t know if this is a rendering of the massing for the proposal or the final product. But if the latter, the architecture looks pretty blah. I still have no explanation for why in this region there is such a reluctance to use colour, why the palette seems so constrained – off-white or gray, beige and green glass.


02 May 22:07

Harvesting Searched for Tweets Using Python

by Tony Hirst

Via Tanya Elias/eliast05, a query regarding tools for harvesting historical tweets. I haven’t been keeping track of Twitter-related tools over the last few years, so my first thought is often “could Martin Hawksey’s TAGSexplorer do it?”!

But I’ve also had the twecoll Python/command line package on my ‘to play with’ list for a while, so I thought I’d give it a spin. Note that the code requires python to be installed (which it will be, by default, on a Mac).

On the command line, something like the following should be enough to get you up and running if you’re on a Mac (run the commands in a Terminal, available from the Utilities folder in the Applications folder). If wget is not available, download the twecoll file to the twitterstuff directory, and save it as twecoll (no suffix).

#Change directory to your home directory
$ cd

#Create a new directory - twitterstuff - in your home directory
$ mkdir twitterstuff

#Change directory into that directory
$ cd twitterstuff

#Fetch the twecoll code
$ wget https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
--2016-05-02 14:51:23--  https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
Resolving raw.githubusercontent.com... 23.235.43.133
Connecting to raw.githubusercontent.com|23.235.43.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 31445 (31K) [text/plain]
Saving to: 'twecoll'
 
twecoll                    100%[=========================================>]  30.71K  --.-KB/s   in 0.05s  
 
2016-05-02 14:51:24 (564 KB/s) - 'twecoll' saved [31445/31445]

#If you don't have wget installed, download the file from:
#https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
#and save it in the twitterstuff directory as twecoll (no suffix)

#Show the directory listing to make sure the file is there
$ ls
twecoll

#Change the permissions on the file to 'user - executable'
$ chmod u+x twecoll

#Run the command file - the ./ reads as: 'in the current directory'
$ ./twecoll tweets -q "#lak16"

Running the code the first time prompts you for some Twitter API credentials (follow the guidance on the twecoll homepage), but this only needs doing once.

Testing the app, it works – tweets are saved as a text file in the current directory, with an appropriate filename and a .twt suffix – BUT the search doesn’t go back very far in time. (Is the Twitter search API crippled, then…?)

Looking around for an alternative, I found the GetOldTweets python script, which again can be run from the command line; download the zip file from Github, move it into the twitterstuff directory, and unzip it. On the command line (if you’re still in the twitterstuff directory), run:

ls

to check the name of the folder (something like GetOldTweets-python-master) and then cd into it:

cd GetOldTweets-python-master/

to move into the unzipped folder.

Note that I found I had to install pyquery to get the script to run; on the command line, run:

easy_install pyquery

This script does not require credentials – instead, it scrapes the Twitter web search. Date ranges and result limits for the search can be set explicitly:

python Exporter.py --querysearch '#lak15' --since 2015-03-10 --until 2015-09-12 --maxtweets 500

Tweets are saved into the file output_got.csv and are semicolon-delimited.
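If you want to poke at the results from Python, something like the following minimal sketch should work. It assumes only what’s described above: a semicolon-delimited output_got.csv whose first row holds the column names (written for Python 2.7, to match the default Mac install).

# -*- coding: utf-8 -*-
#Minimal sketch: peek at the semicolon-delimited output_got.csv
#produced by GetOldTweets (Python 2.7). The column names are whatever
#the script writes, so we just print the header row and a few rows.
import csv

with open('output_got.csv', 'rb') as f:
    reader = csv.reader(f, delimiter=';')
    header = next(reader)  #first row holds the column names
    print header
    for i, row in enumerate(reader):
        if i >= 5:  #just peek at the first five tweets
            break
        print row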

A couple of things I noticed with this script: it’s slow (because it “scrolls” through page after page of Twitter search results, each of which contains only a small number of results), and on occasion it seems to hang (I’m not sure if it gets stuck in an infinite loop; on a couple of occasions I used ctrl-z to break out of it). In such a case, it doesn’t currently save results as it goes along, so you end up with nothing; reduce the --maxtweets value and try again. I also noticed that, when running the script under the default Mac python 2.7, encoding issues in tweets can break the output, so again the file doesn’t get written.

Both packages run from the command line, or can be scripted from a Python programme (though I didn’t try that). If the GetOldTweets-python package can be tightened up a bit (e.g. in respect of UTF-8/tweet-encoding issues, which are often a bugbear in Python 2.7), it looks like it could be a handy little tool. And for collecting tweets via the API (which requires authentication), rather than by scraping web results from advanced search queries, twecoll looks as if it could be quite handy too.
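For what it’s worth, the simplest way to script the tools from Python is probably just to shell out to them with the standard library’s subprocess module. An untested sketch along those lines, reusing the exact commands from above (it assumes you run each call from the directory where the relevant tool lives):

#Minimal, untested sketch: drive both tools from Python by shelling
#out to them. These are the same commands run by hand above.
import subprocess

#twecoll: collect recent tweets for a hashtag via the Twitter API
subprocess.call(['./twecoll', 'tweets', '-q', '#lak16'])

#GetOldTweets: scrape older tweets from the Twitter web search
subprocess.call(['python', 'Exporter.py',
                 '--querysearch', '#lak15',
                 '--since', '2015-03-10', '--until', '2015-09-12',
                 '--maxtweets', '500'])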


02 May 22:07

A Vanishing Vancouver – documented

by pricetags

The latest Sean Ruthen book review in Spacing:

Vancouver Vanishes (book cover)

With affordable housing in the Metro Vancouver region now a daily topic of discussion, Vancouver Vanishes – Narratives of Demolition and Revival by local novelist Caroline Adderson is a formidable addition to the conversation. Combining several photos of demolished Vancouver homes, the book’s main events are the stories and essay contributions from some of the heritage community’s most outspoken proponents, including Michael Kluckner—who provides an update since he published his similarly titled Vanishing Vancouver a few years back. The editor of the book provides an alarming snapshot of the predicament we presently find ourselves in, while Kluckner gives us a brief history on how we came to be in this predicament.

More here.


02 May 22:07

My Tablet Has Stickers

by Federico Viticci

Great piece by Steven Sinofsky, who has replaced his laptop with an iPad Pro. There are several quotable passages, but I particularly liked this one:

Most problems are solved by not doing it the old way. The most important thing to keep in mind is that when you switch to a new way of doing things, there will be a lot of flows that can be accomplished but are remarkably difficult or seem like you’re fighting the system the whole time. If that is the case, the best thing to do is step back and realize that maybe you don’t need to do that anymore or even better you don’t need a special way of doing that. When the web came along, a lot of programmers worked very hard to turn “screens” (client-server front-ends) into web pages. People wanted PF-function keys and client-side field validation added to forms. It was crazy and those web sites were horrible because the whole of the metaphor was different (and better). The best way to adapt to change is to avoid trying to turn the old thing into the new things.

This paragraph encapsulates what I went through for the past two years since I switched to the iPad as my primary computer. To this day, I still get comments from a few people who think "I'm fighting the system". And we don't have to look too far back in our past to find the opinions of those who thought the iPad Pro was a platform for people who "jump through more hoops than a circus elephant".

I've been enjoying the wave of iPad enthusiasm that the iPad Pro caused, and I still believe we're just getting started.

→ Source: medium.com

02 May 22:06

The ‘Technique’ of Blackface

by Jason A. Smith


Outrage over the Bob Marley Snapchat filter was swift following its brief appearance on the mobile application’s platform on April 20 (the ‘420’ pot-smoking holiday). The filter, which mimicked Bob Marley in appreciation of a day dedicated to smoking marijuana, let users don the hat, the dreads, and…blackface!? News outlets covered the issue quickly that day: CNNMoney and The Verge noted the negative reactions voiced on social media, and tech publisher Wired released a brief article condemning the filter as racially tone-deaf.

The racial implications of the Bob Marley filter are multifaceted, yet I would like to focus on the larger cultural logic occurring both above and behind the scenes at an organization like Snapchat. The creation of a filter that tapped into blackface iconography demonstrates the complexity of our relationship to various forms of technology – as well as how we choose to represent ourselves through those technologies. French sociologist Jacques Ellul wrote in The Technological Society of ‘technique’ as an encompassing train of thought or practice based on rationality that achieves its desired end. Ellul spoke of technique in relation to advances in technology and human affairs in the aftermath of World War II, yet his emphasis was not on the technology itself, but rather the social processes that informed the technology. This means that in relation to a mobile application like Snapchat we bring our social baggage with us when we use it, and so do developers when they decide to design a new filter. Jessie Daniels addresses racial technique in her current projects regarding colorblind racism and the internet – in which the default for tech insiders is a desire to not see race. This theoretically rich work pulls us out of the notion that technology is neutral within a society that has embedded racial meanings flowing through various actors and institutions, and where those who develop the technology we use on a daily basis are unprepared to acknowledge the racial disparities which persist, and the racial prejudice that can—and does—permeate their designs.

This understanding of technique, when combined with critical race theory, allows us to ask whether the presence of blackface in technology is any big surprise in a presumably “post-racial” world. I am positive that any critical race scholar would, without hesitation, answer, “No, it’s not.” And that’s because we are definitively not post-racial. The developers’ intentions behind the filter might have been innocent or playful, but the use of blackface within society has a long and complex history – particularly as a tool to perpetuate systemic racial inequalities by dehumanizing and “othering” African Americans in the United States. Hollywood has long been the chief perpetrator in promoting blackface, and variations of it, by deploying stereotypes that adapt to a given historical moment. Yet the racial implications of blackface extend beyond the screens on which we view film. Over the past couple of years, tensions over racialized costumes at Halloween and college parties have demonstrated the reach and continuation of blackface. With such a contemporary example of public conflict, it seems as if the tech innovators at Snapchat would have known better. I guess that is just wishful thinking. This movement of blackface from film, to parties, to the mobile app demonstrates what Ellul meant by technique. The present-day continuation of blackface is not necessarily linked to the technologies that produce it, but to the ways in which individuals develop and utilize those technologies. The presumed innocence of using blackface to ‘celebrate’ an individual, within a logic of providing ‘daily-new’ filters for consumer use, reflects a gross oversight of what blackface means within the larger cultural sphere of public life.

Racism persists in society through multiple shifts and debates, in which no actor or institution stands in isolation. The case of the Bob Marley filter only highlights the ways that historical racist images are allowed to perpetuate themselves in the present – becoming not-so-historical in the process as they reincarnate through new mediums. I have no doubt that some cases might be found of individuals using the filter, or commenting on it, in overtly racist ways. Yet, as mentioned above, voices also sprang up across social media and news sites to condemn the filter as racially insensitive. The technique of blackface is malleable in that it lingers on through practices uncritically carried out by tech developers, but those practices are also challenged through other means across various technologies. Unraveling this technique requires disrupting the structural racism that upholds it. Brushing off the filter as a misstep by Snapchat, or condemning the developers as socially out of touch, is antithetical to the critical race project – a project less interested in identifying those who fail at race relations and more interested in identifying, and subverting, the social conditions that allow racism to persist.


Jason A. Smith is a doctoral candidate in Public Sociology at George Mason University whose research centers on the areas of race and the media. His dissertation will look at the Federal Communications Commission and policy decisions regarding diversity in the media for minorities and women. Along with Bhoomi K. Thakore, he is a co-editor of the forthcoming volume Race and Contention in Twenty-first Century US Media (Routledge, May 2016). He is on Twitter occasionally.


Headline pic: Source (CC licensed and edited by the author)

02 May 22:06

Samsung Galaxy S5 starts receiving Marshmallow update in Canada

by Ian Hardy

Telus’ software update schedule was recently updated and revealed that the S5 would see the long-awaited Marshmallow update today. Well, it’s now available to download.

We’ve received several tips from Canadians across the country informing us that Android 6.0.1 Marshmallow is now available to Telus customers. The update comes in at just over 819MB and brings Google Now on Tap, an upgraded app permissions manager, native support for fingerprint sensors, and Marshmallow’s new Doze battery-saving capabilities.


Check your S5 by hitting Menu > Settings > About Device > Software Update > Update.

No indication yet from other Canadian carriers.

Update: Bell users are now reporting the update is available to download.

(Thanks to everyone who sent this in!)

02 May 22:06

The Best Android Tablets

by Chris Heinonen
Our three best Android tablet picks side by side on a wooden surface.

After spending hundreds of hours over the past several years researching and testing tablets, we think the best full-featured Android tablet for most people right now is the Asus ZenPad 3S 10. Other Android tablets are cheaper or more powerful, but the ZenPad offers the best combination of speed, display quality, and features for the price. That said, if you’re not already invested in Android, an iPad is a better tablet in general.

02 May 22:03

Next-Generation Transportation Webinar Series – May 16

by pricetags

The SFU City Program is starting the Next-Generation Transportation Webinar Series on May 16! These engaging and insightful presenters will inspire you to consider new transportation strategies to improve your communities. More details below!

Next-Generation Transportation Webinar Series

Congestion and Challenge: How cities can effectively respond to transportation demands

$20, May 16, 2016

12–1:30 pm PDT

Speaker: Daniel Firth, Sustainable Urban Mobility Strategy, City of Stockholm


There aren’t many cities with full-scale congestion pricing; Singapore, London, and Stockholm are among the best known. What conditions allowed that to happen in a city like Stockholm? What motivated it?

This webinar takes a look at what happened, with someone who was there when it happened. Daniel Firth is project manager for Stockholm’s Sustainable Urban Mobility Strategy. He’s responsible for the implementation of Stockholm’s policies on congestion pricing and strategic parking management. Before Stockholm, Daniel spent five years at Transport for London, working on the implementation and operation of the central London congestion charging scheme.

Daniel will discuss the impediments and the responses—and provide an up-to-date analysis of the results…along with suggestions on what they’d do today if they started over.

Registration/Details


02 May 22:03

How to Raise the Next Mark Zuckerberg

by Alex

Investors and tech geeks love to talk about “unicorns”: tech startups that are worth $1 billion or more. Incredibly, many of these dazzling success stories, like Facebook and Snapchat, have been launched by people in their early twenties. Other multi-million-dollar startups are regularly created by high school and college students.

So why not help your kids achieve that kind of success for themselves? As I write in today’s Wall Street Journal, the same skills that prepare your kids for the possibility of a big startup win will also prepare them for other forms of academic or professional success. For example, you can teach your kids to be problem solvers:

Most great startups address an unsolved problem—but half the art lies in recognizing that there’s a problem to solve. Teach your children to spot those opportunities by treating every complaint as a learning opportunity: Whenever your child complains about a game, site or app (or a real-world toy or experience), ask them how they would make it better. Yes, it’s annoying that you can’t play this videogame with a friend. What features would you have to add to make it a great collaborative game? I agree, this restaurant isn’t great for young people. What would your dream restaurant look like?

Read the full story in today’s Wall Street Journal.