Shared posts

08 Aug 17:09

Understanding beta binomial regression (using baseball statistics)

by David Robinson

Previously in this series:

In this series we’ve been using the empirical Bayes method to estimate batting averages of baseball players. Empirical Bayes is useful here because when we don’t have a lot of information about a batter, they’re “shrunken” towards the average across all players, as a natural consequence of the beta prior.

But there’s a complication with this approach. When players are better, they are given more chances to bat! (Hat tip to Hadley Wickham for pointing this complication out to me). That means there’s a relationship between the number of at-bats (AB) and the true batting average. For reasons I explain below, this makes our estimates systematically inaccurate.

In this post, we change our model where all batters have the same prior to one where each batter has his own prior, using a method called beta-binomial regression. We show how this new model lets us adjust for the confounding factor while still relying on the empirical Bayes philosophy. We also note that this gives us a general framework for allowing a prior to depend on known information, which will become important in future posts.

Setup

As usual, I’ll start with some code you can use to catch up if you want to follow along in R. If you want to understand what it does in more depth, check out the previous posts in this series. (As always, all the code in this post can be found here).

library(dplyr)
library(tidyr)
library(Lahman)
library(ggplot2)
theme_set(theme_bw())

# Grab career batting average of non-pitchers
# (allow players that have pitched <= 3 games, like Ty Cobb)
pitchers <- Pitching %>%
  group_by(playerID) %>%
  summarize(gamesPitched = sum(G)) %>%
  filter(gamesPitched > 3)

career <- Batting %>%
  filter(AB > 0) %>%
  anti_join(pitchers, by = "playerID") %>%
  group_by(playerID) %>%
  summarize(H = sum(H), AB = sum(AB)) %>%
  mutate(average = H / AB)

# Add player names
career <- Master %>%
  tbl_df() %>%
  dplyr::select(playerID, nameFirst, nameLast) %>%
  unite(name, nameFirst, nameLast, sep = " ") %>%
  inner_join(career, by = "playerID")

# Estimate hyperparameters alpha0 and beta0 for empirical Bayes
career_filtered <- career %>% filter(AB >= 500)
m <- MASS::fitdistr(career_filtered$average, dbeta,
                    start = list(shape1 = 1, shape2 = 10))

alpha0 <- m$estimate[1]
beta0 <- m$estimate[2]
prior_mu <- alpha0 / (alpha0 + beta0)

# For each player, update the beta prior based on the evidence
# to get posterior parameters alpha1 and beta1
career_eb <- career %>%
  mutate(eb_estimate = (H + alpha0) / (AB + alpha0 + beta0)) %>%
  mutate(alpha1 = H + alpha0,
         beta1 = AB - H + beta0) %>%
  arrange(desc(eb_estimate))

Recall that the eb_estimate column gives us estimates about each player’s batting average, estimated from a combination of each player’s record with the beta prior parameters estimated from everyone ($\alpha_0$, $\beta_0$). For example, a player with only a single at-bat and a single hit ($H = 1$, $\mbox{AB} = 1$) will have an empirical Bayes estimate of $(1 + \alpha_0) / (1 + \alpha_0 + \beta_0)$.
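
To make that concrete, here’s that estimate computed directly from the hyperparameters we just fit (a quick sketch; the exact value it prints depends on the alpha0 and beta0 estimates above):

# empirical Bayes estimate for a player with 1 hit in 1 at-bat:
# a single at-bat barely budges the estimate away from the prior mean
(1 + alpha0) / (1 + alpha0 + beta0)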

Now, here’s the complication. Let’s compare at-bats (on a log scale) to the raw batting average:

career %>%
  filter(AB >= 20) %>%
  ggplot(aes(AB, average)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  scale_x_log10()

[Figure: batting average versus AB (log scale), with a linear trend line]

We notice that batters with low ABs have more variance in our estimates- that’s a familiar pattern because we have less information about them. But notice a second trend: as the number of at-bats increases, the batting average also increases. Unlike the variance, this is not an artifact of our measurement: it’s a result of the choices of baseball managers! Better batters get played more: they’re more likely to be in the starting lineup and to spend more years playing professionally.

Now, there are many other factors that are correlated with a player’s batting average (year, position, team, etc). But this one is particularly important, because it confounds our ability to perform empirical Bayes estimation:

[Figure: batting averages shrunk towards the prior mean, shown as a horizontal red line]

That horizontal red line shows the prior mean that we’re “shrinking” towards ($\mu_0 = \alpha_0 / (\alpha_0 + \beta_0)$). Notice that it is too high for the low-AB players. For example, the median batting average for players with 5-20 at-bats is 0.167, and they get shrunk way up towards the overall average! The high-AB crowd basically stays where they are, because each has a lot of evidence.

So since low-AB batters are getting overestimated, and high-AB batters are staying where they are, we’re working with a biased estimate that is systematically overestimating batter ability. If we were working for a baseball manager (like in Moneyball), that’s the kind of mistake we could get fired for!

Accounting for AB in the model

How can we fix our model? We’ll need to have AB somehow influence our priors, particularly affecting the mean batting average. In particular, we want the typical batting average to be linearly affected by $\log(\mbox{AB})$.

First we should write out what our current model is, in the form of a generative process, in terms of how each of our variables is generated from particular distributions. Defining $p_i$ to be the true probability of hitting for batter $i$ (that is, the “true average” we’re trying to estimate), we’re assuming

$p_i \sim \mbox{Beta}(\alpha_0, \beta_0)$

$H_i \sim \mbox{Binom}(\mbox{AB}_i, p_i)$

(We’re letting the totals $\mbox{AB}_i$ be fixed and known per player). We made up this model in one of the first posts in this series and have been using it since.

I’ll point out that there’s another way to write the $p_i$ calculation, by re-parameterizing the beta distribution. Instead of parameters $\alpha_0$ and $\beta_0$, let’s write it in terms of $\mu_0$ and $\sigma_0$:

$p_i \sim \mbox{Beta}(\mu_0 / \sigma_0, (1 - \mu_0) / \sigma_0)$

Here, $\mu_0$ represents the mean batting average, while $\sigma_0$ represents how spread out the distribution is (note that $\sigma_0 = \frac{1}{\alpha_0 + \beta_0}$). When $\sigma_0$ is high, the beta distribution is very wide (a less informative prior), and when $\sigma_0$ is low, it’s narrow (a more informative prior). Way back in my first post about the beta distribution, this is basically how I chose parameters: I wanted $\mu_0 \approx .27$, and then I chose a $\sigma_0$ that would give the desired distribution that mostly lay between .210 and .350, our expected range of batting averages.
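
To see that this is just a renaming rather than a new model, here’s the conversion computed from the hyperparameters we fit above (a small sketch; nothing new is estimated):

# from (alpha0, beta0) to the (mu, sigma) parameterization
mu0 <- alpha0 / (alpha0 + beta0)
sigma0 <- 1 / (alpha0 + beta0)

# and back again: this recovers alpha0 and beta0
c(mu0 / sigma0, (1 - mu0) / sigma0)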

Now that we’ve written our model in terms of $\mu$ and $\sigma$, it becomes easier to see how a model could take AB into consideration. We simply define $\mu_i$ so that it includes $\log(\mbox{AB})$ as a linear term¹:

$\mu_i = \mu_0 + \mu_{\mbox{AB}} \cdot \log(\mbox{AB}_i)$

Then we define the batting average $p_i$ and the observed $H_i$ just like before:

$p_i \sim \mbox{Beta}(\mu_i / \sigma_0, (1 - \mu_i) / \sigma_0)$

$H_i \sim \mbox{Binom}(\mbox{AB}_i, p_i)$

This particular model is called beta-binomial regression. We already had each player represented with a binomial whose parameter was drawn from a beta, but now we’re allowing the expected value of the beta to be influenced.
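
To make the generative process concrete, here’s a sketch that simulates a single player under this model (the coefficient values here are made up purely for illustration; we fit the real ones in the next step):

# hypothetical coefficients, for illustration only
mu_0 <- 0.14
mu_AB <- 0.015
sigma_0 <- 0.002

AB <- 1000
mu_i <- mu_0 + mu_AB * log(AB)                         # prior mean now depends on AB
p_i <- rbeta(1, mu_i / sigma_0, (1 - mu_i) / sigma_0)  # draw a "true average"
H <- rbinom(1, AB, p_i)                                # observe that player's hits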

Step 1: Fit the model across all players

Going back to the basics of empirical Bayes, our first step is to fit these prior parameters: $\mu_0$, $\mu_{\mbox{AB}}$, and $\sigma_0$. When doing so, it’s OK to momentarily “forget” we’re Bayesians- we picked our $\alpha_0$ and $\beta_0$ using maximum likelihood, so it’s OK to fit these using a maximum likelihood approach as well. You can use the gamlss package for fitting beta-binomial regression using maximum likelihood.

library(gamlss)

fit <- gamlss(cbind(H, AB - H) ~ log(AB),
              data = career_eb,
              family = BB(mu.link = "identity"))

We can pull out the coefficients with the broom package (see ?gamlss_tidiers):

library(broom)

td <- tidy(fit)
td
##   parameter        term estimate std.error statistic p.value
## 1        mu (Intercept)   0.1426  0.001577      90.4       0
## 2        mu     log(AB)   0.0153  0.000215      71.2       0
## 3     sigma (Intercept)  -6.2935  0.023052    -273.0       0

This gives us our three parameters: $\mu_0 = 0.143$, $\mu_{\mbox{AB}} = 0.015$, and (since sigma has a log-link) $\sigma_0 = \exp(-6.294) \approx 0.00185$.

This means that our new prior beta distribution for a player depends on the value of AB. For example, here are our prior distributions for several values of AB:

[Figure: prior beta distributions for several values of AB]

Notice that there is still uncertainty in our prior- a player with 10,000 at-bats could have a batting average ranging from about .22 to .35. But the range of that uncertainty changes greatly depending on the number of at-bats- any player with AB = 10,000 is almost certainly better than one with AB = 10.
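
For reference, prior curves like the ones above can be drawn directly from the fitted coefficients (a sketch, plugging in the estimates from the td output above; crossing comes from tidyr):

# density of the AB-dependent beta prior over a grid of batting averages
crossing(AB = c(10, 100, 1000, 10000),
         x = seq(.1, .4, .001)) %>%
  mutate(mu = 0.1426 + 0.0153 * log(AB),
         alpha = mu / 0.00185,
         beta = (1 - mu) / 0.00185,
         density = dbeta(x, alpha, beta)) %>%
  ggplot(aes(x, density, color = factor(AB))) +
  geom_line() +
  labs(x = "Batting average", color = "AB")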

Step 2: Estimate each player’s average using this prior

Now that we’ve fit our overall model, we repeat our second step of the empirical Bayes method. Instead of using a single $\alpha_0$ and $\beta_0$ as the prior, we choose the prior for each player based on their AB. We then update using their $H$ and $\mbox{AB}$ just like before.

Here, all we need to calculate are the mu (that is, $\mu_i$) and sigma ($\sigma_0$) parameters for each person. (In this model, sigma will be the same for everyone, but that may not be true in more complex models). This can be done using the fitted method on the gamlss object (see here):

mu <- fitted(fit, parameter = "mu")
sigma <- fitted(fit, parameter = "sigma")

head(mu)
##     1     2     3     4     5     6 
## 0.286 0.281 0.273 0.262 0.279 0.284
head(sigma)
##       1       2       3       4       5       6 
## 0.00185 0.00185 0.00185 0.00185 0.00185 0.00185

Now we can calculate $\alpha_0$ and $\beta_0$ parameters for each player, according to $\alpha_0 = \mu / \sigma_0$ and $\beta_0 = (1 - \mu) / \sigma_0$. From that, we can update based on $H$ and $\mbox{AB}$ to calculate a new $\alpha_1$ and $\beta_1$ for each player.

career_eb_wAB <- career_eb %>%
  dplyr::select(name, H, AB, original_eb = eb_estimate) %>%
  mutate(mu = mu,
         alpha0 = mu / sigma,
         beta0 = (1 - mu) / sigma,
         alpha1 = alpha0 + H,
         beta1 = beta0 + AB - H,
         new_eb = alpha1 / (alpha1 + beta1))

How does this change our estimates?

[Figure: original empirical Bayes estimates versus the new AB-adjusted estimates, colored by AB]

Notice that relative to the previous empirical Bayes estimate, this one is lower for batters with low AB and about the same for high-AB batters. This fits with our earlier description- we’ve been systematically over-estimating batting averages.
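
A comparison like the figure above can be sketched in ggplot2 (not necessarily the exact code behind the plot):

career_eb_wAB %>%
  ggplot(aes(original_eb, new_eb, color = AB)) +
  geom_point() +
  geom_abline(color = "red") +              # y = x: estimate unchanged
  scale_color_continuous(trans = "log10") +
  labs(x = "Original EB estimate", y = "EB estimate with AB term")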

Here’s another way of comparing the estimation methods:

[Figure: estimates shrunk towards a sloped red trend line rather than the flat overall average]

Notice that we used to shrink batters towards the overall average (red line), but now we are shrinking them towards the overall trend- that red slope.²

Don’t forget that this change in the posteriors won’t just affect shrunken estimates. It will affect all the ways we’ve used posterior distributions in this series: credible intervals, posterior error probabilities, and A/B comparisons. Improving the model by taking AB into account will help all these results more accurately reflect reality.

What’s Next: Bayesian hierarchical modeling

This problem is in fact a simple and specific form of a Bayesian hierarchical model, where the parameters of one distribution (like $\mu$ and $\sigma$) are generated based on other distributions and parameters. While these models are often approached using more precise Bayesian methods (such as Markov chain Monte Carlo), we’ve seen that empirical Bayes can be a powerful and practical approach that helped us deal with our confounding factor.

Beta-binomial regression, and the gamlss package in particular, offers a way to fit parameters to predict “success / total” data. In this post, we’ve used a very simple model- $\mu$ linearly predicted by $\log(\mbox{AB})$. But there’s no reason we can’t include other information that we expect to influence batting average. For example, left-handed batters tend to have a slight advantage over right-handed batters- can we include that information in our model? In the next post, we’ll bring in additional information to build a more sophisticated hierarchical model. We’ll also consider some of the limitations of empirical Bayes for these situations.
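
As a teaser, since the Lahman Master table includes a bats column (L/R/B), the handedness idea mentioned above could look something like this (an untested sketch; it assumes a bats column has been joined onto career_eb, which we haven’t done in this post):

# hypothetical: requires a `bats` column joined in from Master
fit_bats <- gamlss(cbind(H, AB - H) ~ log(AB) + bats,
                   data = career_eb,
                   family = BB(mu.link = "identity"))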

Footnotes

  1. If you have some experience with regressions, you might notice a problem with this model: $\mu$ can theoretically go below 0 or above 1, which is impossible for a $\beta$ distribution (and will lead to illegal $\alpha$ and $\beta$ parameters). Thus in a real model we would use a “link function”, such as the logistic function, to keep $\mu$ between 0 and 1. I used a linear model (and mu.link = "identity" in the gamlss call) to make the math in this introduction simpler, and because for this particular data it leads to almost exactly the same answer (try it). In our next post we’ll include the logistic link (a sketch follows after these footnotes). 

  2. If you work in my old field of gene expression, you may be interested to know that empirical Bayes shrinkage towards a trend is exactly what some differential expression packages such as edgeR do with per-gene dispersion estimates. It’s a powerful concept that allows a balance between individual observations and overall expectations. 
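
Following up on footnote 1, the logistic-link version of the fit is a one-argument change (a sketch; logit is in fact the default link for the gamlss BB family):

# same model, but mu is kept between 0 and 1 via the logistic link
fit_logit <- gamlss(cbind(H, AB - H) ~ log(AB),
                    data = career_eb,
                    family = BB(mu.link = "logit"))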

02 Jun 00:53

Twitter Favorites: [johnmaeda] Erin McKean let me join the Semicolon Appreciation Society (C-coders

John Maeda @johnmaeda
Erin McKean let me join the Semicolon Appreciation Society (C-coders
02 Jun 00:52

Twitter Favorites: [T2PFilms] Today #TTC Toronto Rocket (TR) trains begin serving line 4. TR at Sheppard-Yonge waiting for their light @bradTTC https://t.co/8I84r18Ed2

T2P Films @T2PFilms
Today #TTC Toronto Rocket (TR) trains begin serving line 4. TR at Sheppard-Yonge waiting for their light @bradTTC pic.twitter.com/8I84r18Ed2
02 Jun 00:52

Twitter Favorites: [sarahjeong] Here's the metastorify of the Complete Oracle v. Google livetweets: https://t.co/G38hexlEh0

sarah jeong @sarahjeong
Here's the metastorify of the Complete Oracle v. Google livetweets: storify.com/sarahjeong/ora…
02 Jun 00:52

Twitter Favorites: [sarahjeong] And here's a piece I wrote about the cultural subtext of Oracle v. Google: https://t.co/fpfIpceFzx

sarah jeong @sarahjeong
And here's a piece I wrote about the cultural subtext of Oracle v. Google: motherboard.vice.com/read/in-google…
02 Jun 00:52

Twitter Favorites: [kaler] Sports has been great tonight! Let’s hear it for sports!

Parveen Kaler @kaler
Sports has been great tonight! Let’s hear it for sports!
02 Jun 00:52

The Internet deserves its proper noun

by Doc Searls

The NYTimes says the Mandarins of language are demoting the Internet to a common noun. It is to be just “internet” from now on. Reasons:

Thomas Kent, The A.P.’s standards editor, said the change mirrored the way the word was used in dictionaries, newspapers, tech publications and everyday life.

“In our view, it’s become wholly generic, like ‘electricity’ or ‘the telephone,’” he said. “It was never trademarked. It’s not based on any proper noun. The best reason for capitalizing it in the past may have been that the word was new. But at one point, I’ve heard, ‘phonograph’ was capitalized.”

But we never called electricity “the Electricity.” And “the telephone” referred to a single thing of which there are billions of individual examples.

What was it about “the Internet” that made us want to capitalize it in the first place? Is usage alone reason enough to stop respecting that?

Some of my tech friends say the “Internet” we’ve had for all these years is just one prototype: the first and best-known of many other possible ones.

All due respect, but: bah.

There is only one Internet just like there is only one Universe. There are other examples of neither.

Formalizing the lower-case “internet,” for whatever reason, dismisses what’s transcendent and singular about the Internet we have: a whole that is more, and other, than a sum of parts.

I know it looks like the Net is devolving into many separate systems, isolated and silo’d to some degree. We see that with messaging, for example. Hundreds of different ones, most of them incompatible, on purpose. We have specialized mobile systems that provide variously open vs. sphinctered access (such as T-Mobile’s “binge” allowance for some content sources but not others), zero-rated not-quite-internets (such as Facebook’s Free Basics) and countries such as China, where many domains and uses are locked out.

Some questions…

Would we enjoy a common network by any name today if the Internet had been lower-case from the start?

Would makers or operators of any of the parts that comprise the Internet’s whole feel any fealty to what at least ought to be the common properties of that whole? Or would they have made sure that their parts only got along, at most, with partners’ parts? Would the first considerations by those operators not have been billing and tariffs agreed to by national regulators?

Hell, would the four of us have written The Cluetrain Manifesto? Would David Weinberger and I have written World of Ends or New Clues if the Internet had lacked upper-case qualities?

Would the world experience absent distance and cost across the Giant Zero in its midst were it not for the Internet’s founding design, which left out billing and proprietary routing on purpose?

Would we have anything resembling the Internet of today if designing and building it had been left up to phone and cable companies? Or to governments (even respecting the roles government activities did play in creating the Net we do have)?

I think the answer to all of those would be no.

In The Compuserve of Things, Phil Windley begins, “On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?”

Would he, or anybody, ask such questions, or aspire to such purposes, were it not for the respect many of us pay to the upper-cased-ness of “the Internet?”

How does demoting Internet from proper to common noun not risk (or perhaps even assure) its continued devolution to a collection of closed and isolated parts that lack properties (e.g. openness and commonality) possessed only by the whole?

I don’t know. But I think these kinds of questions are important to ask, now that the keepers of usage standards have demoted what the Net’s creators made — and ignore why they made it.

If you care at all about this, please dig Archive.org’s Locking the Web open: a Call for a Distributed Web, Brewster Kahle’s post by the same title, covering more ground, and the Decentralized Web Summit, taking place on June 8-9. (I’ll be there in spirit. Alas, I have other commitments on the East Coast.)

02 Jun 00:51

Game theory reveals a chilling reason for denying climate change

by Josh Bernoff

Fixing the climate has a cost. So does failing to fix it. Both costs are huge, but if you believe that climate change is a real phenomenon, then the human and economic costs of inaction are staggering. What justifies failing to act? My analysis reveals that the explanation that many Republicans stand behind — that climate change … Continue reading Game theory reveals a chilling reason for denying climate change →


02 Jun 00:33

Mozfest 2016 for Participation

by Emma

Earlier this month a group of people met in Berlin to imagine and design Mozfest 2016.


Blending inspiration and ideas from open news, science, localization, youth, connected devices and beyond – we spent three glorious days collaborating and building a vision of a Mozfest like no other.

The Participation team emerged from this experience with a new vision for Mozillian participation we’re calling ‘Mozfest Space* Contributors’: roles designed to bring success to the goals of every space in the building. This is a very different approach from recent years, where our focus has been on participating as facilitators, helpers and learners. With this new approach, we’re inviting contribution, ownership and responsibility in shaping the event. Super, super exciting – I hope you agree!

Exploring contributor roles within Spaces, we found amazing possibilities! Open Science imagined a ‘Science Translator’ role – helping people overcome scientific jargon to connect with ideas. The Web Literacy group has big plans for their physical space, one where a ‘Set Designer’ would be incredibly helpful in making those dreams come true.


Open News and others thought about ‘Help Desk’ leads, and more than one space has suggested that the addition of technical mentors and session translators would bring diversity and connection. Can you see yet why this will be amazing?


Outreach for contributors this year will be focused squarely on finding people with the skills, passion, vision and a commitment to supporting these spaces. In many cases roles will be a key part of planning in the months leading up to Mozfest.

Also – we’re already piloting this very idea, having recently selected Priyanka Nag and Mayur Patil to be part of the Participation team’s Mozfest planning. I’m so grateful for their help and leadership in making this a fantastic experience for wranglers and contributors alike.

On July 15th we’ll post all available roles, and launch the application process.  You can find an FAQ here.

Sponsorship from the Participation Team for Mozfest 2016 will be for these roles only. The call for proposals will be run by the MozFest organizers, who will have a limited number of travel stipends available through that separate process.


* Space – an area of Mozfest with content and space built and activated under a certain theme (like Open Science, Youth Zone and Web Literacy)

* Space Wrangler – Person organizing and building a space at Mozfest


Special thanks to Elio from the Community Design Group for creating ‘Volunteer Role’ mockups!

Role avatars by freepik.

FacebookTwitterGoogle+Share

02 Jun 00:33

Notes for the summer: now and next

by Benedict Evans

What happened

Mobile became the dominant computing platform, both for consumer internet and for the tech hardware ecosystem.

Apple & Google both won the shift, in different ways (& Microsoft lost).

The internet moved on from web browser + mouse + keyboard.

Search & social changed & Google/Facebook went from total to contingent hegemony.

Google & Facebook made the jump - others did not.

 

What's happening now

Mobile exploded the desktop 'browser + mouse + keyboard' model but hasn't settled on a new one.

Basic discovery & engagement techniques on desktop break on mobile and do not have complete replacements (though they were broken in different ways on the desktop too).

Systemic instability & rapid innovation in pursuit of a third run-time (after web & apps), hopefully with new search or discovery models built in.

As part of this, Apple & Google are trying to move interaction down the stack from apps, Google & Facebook trying to move it up the stack too.

Smartphone supply chain enables many new devices, many of which are also end-points for some of these interaction models (eg Echo, Google Home).

TV starts unbundling, following music and print media, driven by (again) devices powered by the smartphone ecosystem. But very different motivations for Apple, Google & Amazon (ecosystem lever) versus Netflix (new global 'cable' channel).

Enterprise productivity still bridging from old to new ecosystem - moving from files & desktop apps to cloud and device-agnostic front-ends. In parallel, unbundling tasks from general-purpose apps (i.e. Office) to cloud-based specialized apps. Increasing attempts to roll AI/ML into this to automate many tasks or give new tools.

AI techniques emerging from labs and the last AI winter to suggest new capabilities and interaction models.

'AI' as the buzzword for everything, understood or not.

 

What happens next

AI becomes both an enabling layer for new and existing services (analogy: location, imaging) and basis of new interaction layers (NLP, Echo, Assistant), and potentially creating a new center of interaction that's out of Apple's comfort zone.

Ecommerce breaks through ~10% of (US) retail, subsuming more and more categories that are less good fits for the historic commodity online merchandising model (i.e. Amazon) & supporting new approaches to discovery (& perhaps delivery - c.f. drones).

AI as a new layer for search and discovery potentially destabilizes search & discovery models, & associated advertising models.

AI + smartphone supply chain enables more and more 'not stupid' devices - drones that can follow you, learning IoT, autonomous & semi-autonomous vehicles.

VR as the new games console (a branch rather than the next generation of computing) - but potentially much larger and embracing much broader story telling. Hardware roadmap points to mass-market product several years out, most probably based (again) on the smartphone ecosystem.

The smartphone supply chain powers everything - AI lights everything up.

Augmented/mixed reality as either the new VR, or potentially the new multi-touch, and the incarnation of lots of other AI challenges - what am I seeing, how do I display this, but more importantly, what should be displayed here?

And finally: something crucially important is being created right now that no-one else in tech is really thinking about.

02 Jun 00:32

Pendulums

by Tom Insam

So while it’s nice that I’m able to host my own email, that’s also the reason why my email isn’t end to end encrypted, and probably never will be. By contrast, WhatsApp was able to introduce end to end encryption to over a billion users with a single software update.

The ecosystem is moving

01 Jun 20:42

New 548 area code going live June 4th in southwestern Ontario

by Ian Hardy

A new number is coming to town this Saturday.

Starting June 4th, area code 548 will join 226 and 519 in serving residents of southwestern Ontario, specifically Brantford, Guelph, Kitchener-Waterloo, London, Owen Sound, Sarnia, Stratford, Windsor and Woodstock.

The CRTC and the Canadian Numbering Administrator announced that the implementation of area code 548 will enable millions of new telephone numbers, because “the growing use of communications services has led to a huge increase in the requirement for telephone numbers in this region.” Area codes 226 and 519 will continue to operate without interruption.

“The new area code will be introduced gradually across southwestern Ontario once there is no longer a sufficient supply of numbers with the existing area codes – 226 and 519. Telephone numbers will not change but subscribers may be offered a phone number with the 548 area code when they contact their provider for new or additional services,” said Glen Brown of the Canadian Numbering Administrator in a recent press release.

The 519 area code was introduced in 1953; 53 years later, 226 was added in 2006. Now, 10 years later, 548 is being added.

01 Jun 02:31

Adblocking on smartphones is a growing trend in emerging markets, study says

by Jessica Vomiero

Today, owning a laptop or desktop computer likely means invoking an adblocker to do away with those pesky pop-up advertisements, according to the January 2015 PageFair report, “The Cost of Ad Blocking.” 

This report, published in partnership with Adobe, revealed that at the beginning of last year, use of desktop ad blocking software had increased by 48 percent in the United States, 35 percent in Europe, and 41 percent around the world.

Since then, PageFair has released an additional report in partnership with Priori Data, “Adblocking Goes Mobile,” revealing that at least 419 million people around the world are using adblocking software on their smartphones. Furthermore, there are now twice as many mobile adblockers as there are desktop adblockers.

While desktop adblocking is more popular in North America and Europe, mobile adblocking has taken off in emerging markets such as China, Pakistan, India and Indonesia. According to the report, China has the highest usage of adblocking software totalling approximately 159 million users.

Over the course of 2015, use of adblocking grew by approximately 90 percent, comprising 21 percent of the world’s 1.9 billion smartphone users. This makes mobile adblocking the most popular form of adblocking in the world.

The report cited several responses from industry leaders expressing concern about the popularity of adblocking in emerging markets, which essentially means that the next wave of internet users may be invisible to digital marketers.

“This research only amplifies our concern in the rise of ad blocking across digital media. The perception that all ads can be blocked is quickly becoming reality as awareness grows. Any channel of consumption is at risk at this point,” said Jason Kint, the CEO of Digital Content Next, in a statement sent to MobileSyrup.

David Chavern, the CEO of the Newspaper Association of America added his voice to the mix, claiming that eventually, whole portions of the population may be excluded from “quality news.”

“If we don’t fix these problems, and we allow ad blockers to take over, then we will be left with small, subscription models that will exclude large portions of the public. Not being able to afford HBO is one thing. Not being able to afford quality news would be a much more serious problem,” said Chavern in a statement.

The report cites several sources, including eMarketer, StatCounter and Digicel, for download estimates; country estimates were determined by Priori Data.

It’s interesting to note that the 2015 report “The Cost of Ad Blocking” estimated the loss of global revenue at the hands of blocked advertising to be $21.8 billion. The report estimated the global cost of adblocking to reach $41.4 billion by 2016.

With the large majority of internet browsing shifting to mobile, it’s anyone’s guess how much advertisers are poised to lose as the adoption of mobile adblocking continues to grow, unless digital marketers change their approach.

Image Credit: Joe the Goat Farmer

Related reading: Opera adds native ad-blocking to Android mobile app

Source: PageFair
31 May 22:05

On Repeating Oneself

by chuttenc

This is in response to ppk’s blog post DRY: Do Repeat Yourself. If you’re not interested in Web Development or Computer Science, or just don’t want to read that post, you may stop reading now and reward yourself with a near-infinite number of kittens.

A significant middle part of my career was a sort of Web Development coming from a Computer Science background. Working on the BlackBerry 10 Browser (the first mobile browser that was, itself, written as a web page) we struggled to apply, from day 0, proper engineering practices to what ultimately was web development.

And we succeeded. Mostly. Well, it was alright.

So maybe PPK is wrong! Maybe webdev can be CS already and it’s all a tempest in a teapot!

Well, no. The BB10 Browser had one benefit over other web development… one so great and so insidious I didn’t even notice it at the time (iow, privilege): We only had to build the page to work on one browser.

Sure, the UI mostly worked if you loaded it in Chrome or Firefox. That’s how we ran our tests, after all. But at the end of the day, it only needed to work on the BB10 version of WebKit. It only needed to be performant on the BB10 version of WebKit running on just a handful of chipsets. It only needed to look pretty on the BB10 version of WebKit on a handful of phones in at most two orientations each.

We could actually exhaustively test all of our supported configurations. In a day.

And this is where true webdevs would scoff. Because uncertainty is the name of the game on the Web. And I think that’s the core of why CS practices can’t necessarily apply to Web Development.

When I was coding the BB10 Browser, as when I do other more-CS-y stuff, I could know how things would behave. I knew what would and would not work. (and I could cheat by fixing the underlying browser if I found a bug, instead of working around it). I knew what parts of the design were slow and were to be avoided, and what cheap shortcuts to employ in what situations (when to use layers, when to use opacity, when to specifically avoid asking the compositor to handle it because the shaders were quick enough (thanks to mlattanzio, kpiascik, tabbott, and cwywiorski!)). I even knew what order things were going to happen, even when the phone was under load!

In short: I knew. Because I knew, I could operate with certainty. Because of that certainty, we wrote a browser in about a year.

But I know webdev is nothing like that. I’ve been on cross-browser projects since and tried to find that sense of certainty. I’ve tried to write features for screens of bizarre dimensions and groped for that knowledge that what I’d done would work wherever it needed. I’ve struggled to find a single performant and attractive solution that would work on all renderers and have felt the sting of my CS education yelling that I shouldn’t repeat myself.

I still scoff at the bailing-wire-and-ducttape websites that litter the web and bloat it with three outdated versions of each of a half-dozen popular frameworks. But, like the weather, I know the laws that motivate it to be so. Complaining is just my way of coping.

The question becomes, then, is there anything CS that can be applied to webdev? Certainly test-driven development and other meta-dev techniques work. The principles of UI design and avoiding premature optimization are universal.

But maybe we should be thinking of it the other way around. Maybe CS should accept some parts of webdev. Maybe CS educators should spend more time on uncertainty. Then CS students will start thinking about how much uncertainty they are willing to accept in their projects in the same way we already think of how performant or cross-platform or maintainable or verifiable the result needs to be.

I hope a few CS graduate students pick up PPK’s call for “Web Development from a Computer Science Perspective” theses. There’s a lot of good material here that can help shape our industries’ shared future.

Or not. I can’t be certain.

:chutten


31 May 22:04

MediaTek says its new Pump Express tech can charge a phone to 70% in 20 minutes

by Igor Bonifacic

At Computex in Taiwan, MediaTek announced the latest iteration of its Pump Express technology.

Now on version 3.0, the tech is reportedly able to charge a smartphone to 70 percent of its maximum battery capacity in 20 minutes, a speed that’s almost twice as fast as competing solutions like Qualcomm’s Quick Charge spec, according to the company.

Pump Express 3.0 is “the world’s first solution to enable direct charge through Type-C USB power delivery,” says MediaTek, a fact that may make the spec incompatible with the official USB-C standard.

Normally, MediaTek-equipped smartphones don’t make their way to North America, making this something of a moot announcement, but just earlier this month Sony announced that one of its latest Xperia smartphones, the Xperia XA, which is set to ship with a MediaTek Helio P10 SoC, will come to Canada later this summer.

The first devices to ship with Pump Express 3.0 on board will be smartphones with the company’s P20 series processors, which are set to arrive later this year.

Source: MediaTek
31 May 22:04

"al·ter·i·ty ôlˈterədē/ noun formal the state of being other or different; otherness."

“al·ter·i·ty
ôlˈterədē/
noun formal
the state of being other or different; otherness.”

- alterity
31 May 22:04

The Big Toll Paid by Vulnerable Road Users

by Sandy James Planner

There has been press about the important ramifications of reducing vehicular speed in cities and places to 30 kilometers per hour (km/h) from 50 km/h. Studies show that vulnerable road users- those folks biking or walking without the metal frame of a vehicle to protect them- can better survive car crashes at those speeds. Pedestrians and cyclists have a 10% risk of dying in a vehicle crash at 30 km/h. That risk increases to 80% when hit by a vehicle at 50 km/h.

Dr. Perry Kendall, British Columbia’s Chief Medical Officer, has released his Annual Report entitled “Where the Rubber Meets the Road,” which identifies motor vehicle collisions as a significant threat to the health of people in this province. Although the motor vehicle collision fatality rate has declined from 18.4 deaths per 100,000 population in 1996 to 6.2 deaths per 100,000 in 2012, British Columbia has a high rate of deaths, as well as a high rate of collisions causing serious injuries- 444.5 major injuries per 100,000 population. That translates into 280 people being killed on roads annually, with another 79,000 seriously injured.

The human factors contributing to fatal crashes are speed (35.7%), distraction (28.6%) and impairment (20.4%). It is troubling that for vulnerable road users, the rates of crashes and serious injuries have been increasing, from 38.7% in 2007 to 45.7% in 2009.

The Medical Health Officer’s report is comprehensive and points out the current challenges broken down by region. The report cites road design, distraction and speed as three major contributors that can be addressed, and recommended lowering speed limits to 30 km/h in cities. Not surprisingly, Minister of Transportation Todd Stone has put the kibosh on lower speed limits, citing that this was something he has not heard about from local municipalities, and that such a change needed strong support. You would think that when the Province is also paying for health care, they would be mindful of how to keep vulnerable road users as safe as possible with minimal investment. Slower road speeds in municipalities could prevent serious injuries and deaths to pedestrians and cyclists.

31 May 16:55

Never get a broken plane seat again

Don’t choose a seat online until you’ve looked it up on this site! 

SeatGuru.com

Simply type in your flight number and you’ll find out if your seat doesn’t recline, doesn’t have a window, has a broken TV, and so on. No ugly surprises.

One thing you can control when flying this summer! 

31 May 16:55

Making the Most of Compose - DroneDeploy

by Dj Walker-Morgan

The power of software to help us understand the world is immense and companies like DroneDeploy are bringing that power to everyone; their goal is to make the sky more accessible and productive for anyone. That includes the farmer looking for better crop yields, the architect looking to restore a cathedral or the miner wanting to optimize his excavations.

To help them farm, build or manage, they all have to be able to measure the world. The company's innovative approach uses commercial drones that anyone can buy and software which programs them for the task. It teaches the drones how to automatically scan areas and return with a payload of photos.

Without intelligent software, that payload is of little use. The next step is for the images to be uploaded into the cloud where DroneDeploy stitches them together into maps and 3D models using its Map Engine. The user can then explore the completed maps and models with analytical tools within the easy-to-use mobile or web-based viewer to obtain useful insights. For example, near-infrared scans of crops can be interpreted using vegetation indices to aid searching for crop stress, or measurements can be made that can show the volumes of stockpiles around a quarry.


Cornwall Cliff by DroneDeploy on Sketchfab

More maps and models are available in the DroneDeploy gallery.

It's the software that makes the difference. "We're taking the existing drone hardware and combining it with a very powerful piece of software to make that drone into a useful tool," said Nick Pilkington, DroneDeploy's CTO, "something that's repeatable, something that's reliable, something that's safe and something that provides a huge amount of value." He explains that one of their biggest use cases is "helping farmers make better decisions".

The maps and models are then available on desktop or hand-held devices in the field. For example, a farmer could see a near-infrared scan of his fields with crop stress picked out in red. He can then go to those locations and see the "ground truth" by walking to the plants and taking notes. Pilkington explained: "DroneDeploy isn't going to tell the farmer today if it's a lack of nitrogen or it's pests, but if you're a farmer that's managing 2000 acres of 11ft corn you can get a map every morning and don't need to rely on satellite imagery." The drone imagery is unaffected by clouds which cut off satellite imagery and comes in at a higher resolution. "You can zoom in and see the individual plants which is pretty cool" says Pilkington. That same information can also be used to feed the emerging next generation of decision-making applications in precision agriculture.


The latest version of DroneDeploy's app - more at their blog

Behind the scenes, the whole process is powered by DroneDeploy's Map Engine, which plugs into Compose databases. DroneDeploy started with Compose back when it was MongoHQ. The early decision to store data in NoSQL came from the fact that they mainly operate on time-series data. Documents about flights, locations, images and other items that don't have many relations encoded made MongoDB a sensible choice for them. As a small company that didn't want to invest in self-hosting, they decided to go with a Database-as-a-Service provider, and that provider was Compose.

Early experience with MongoDB on Compose helped shape an architecture where documents were handled and then quickly moved to lower cost storage. This allowed them to optimize their costs while handling the stream of data. Over time they've expanded their database technology use. Redis came into the picture as their jobs and coordination requirements built up. These required a rapid database and Redis' in-memory processing offers a route to performance for applications. There are queues, lists and sets which can be used to represent the jobs, settings and resources available and they are all tightly managed.

DroneDeploy saw the advantages and they brought Compose Redis online to handle that. This also allowed them to lift the load on Compose MongoDB's more persistent model and boost their application performance. The timing was fortuitous as Compose had just made Redis available as an offering. "We make much heavier use of Compose than initially and now we're starting to use Redis more heavily because we have thousands of jobs being orchestrated all around the world" says Pilkington. "We've got a bunch of different job queues that manage that depending on the sort of user, allowed concurrency and type of processing job it is and all of that is coming through Redis."


Image from Mapping Drones for Professional Surveyors.

The MongoDB and Redis combination is a mix that many companies have found compelling for their core infrastructure, offloading other operations to more appropriate or cost effective databases as needed.

What Compose has given DroneDeploy is the ability to concentrate on running their business, not how to run databases. That concentration has enabled DroneDeploy to offer a product which can help millions of people understand and manage their world.

31 May 16:54

I Made You a Mixtape

by Federico Viticci

I've always loved the idea of someone else making a mixtape for me.

When I was in middle school and until the first year of high school, we didn't have the Internet at home. My parents were against buying me a PC; they thought it was a waste of time. Unlike many of my friends, I depended on books and magazines for my school research and hobbies. I was a voracious reader.

That was 2002. I wasn't exactly a music fan back then: I heard music on the radio in my mom's car on the way to school in the morning, and I occasionally slid my dad's cassette tapes in our Siemens Club 793 stereo, but he only listened to Italian music. I wanted the English stuff.

Until one day my friend Luca told me about MP3s and compact discs with hundreds of songs on them. By leaving his computer plugged in all night, he explained elatedly, he could download any music he wanted from the Internet using programs with exotic names I had never heard – WinMX, eMule, iMesh. Then, all those songs could be "burned" onto a CD as MP3s, and I could play them back for as long as I wanted with a CD player.

I was 14, we were chatting after school, and I didn't know what piracy was. And then, the surprise: because he knew I didn't have the Internet (or a computer), he had made a sample CD for me with about 30 songs on it. He gave me the CD, told me to buy a CD player for myself, and he concluded with "Get back to me soon about the songs you like. I put in a bit of everything except Italian music".

Fourteen years ago, I was handed the first mixtape someone ever made for me.


A few months later, Luca was burning a new CD for me every other week.

After listening to the first one for a few days, I got back to him and shared my impressions: I liked the mid-2000s punk rock and hip hop, didn't care for the house and electronic stuff he put in, appreciated the alternative and rock classics (I knew some of them!), and I could have gone with less R&B. Luca listened, we talked, and he followed up with a second CD. It was clearly based on my feedback: he doubled down on OutKast and Blink-182, slipped in some old Green Day and Van Halen (I'm pretty sure that was my first encounter with When I Come Around and Panama), and there was no sign of Usher. I discovered Stan by Eminem and Dido on that second mix.

This went on for months. Listening to Luca's CDs became a habit for me. I would listen as I perused his handwritten tracklists in the back of the CD covers. My mom would even ask me to "play Luca's music". I loaned a few of Luca's CDs to my classmates. I believe that Luca kept doing it for a simple reason: he was (still is) a good friend and he thought it was cool that he could download music for free and burn an extra CD for me. Talking about new songs and old gems he included in his mixes was an excuse to catch a break between classes – no texting, no selfies, just two friends discussing songs on a mixtape.


By 2009, Luca and I had gone our separate ways. He was studying Economics in Rome. We texted occasionally. Meanwhile, I had somehow managed to apply for Philosophy, drop out after three months (most of them spent drinking and smoking with my roommates), get a job, be fired after 8 months, and start a blog about Apple.

I thought I was good at my new job, but I had no idea what I was doing. Still, I was resilient; I knew that was what I wanted to do. Plus, I had promised myself I'd never have a boss again. I was 21.

I don't remember when or how I read about a Swedish company called Spotify. I think it was on a tech blog – or maybe a forum? The underlying idea immediately seemed like the future to me. I had bought an iPod Classic two years earlier (and eventually moved to an iPod touch and an iPhone, which is how I started MacStories), but the iPod and the iTunes Store came with the overhead of finding and managing music. There was no Luca making playlists for me. I ended up listening to the same music over and over out of laziness, and I wasn't discovering anything new.

Thanks to a friend who lived in London, I managed to create a fake Spotify UK account and installed the app on my iPhone and MacBook Pro.

The feeling was exhilarating. It was like walking into my favorite record store – the place where I got my copies of Siamese Dream and Mellon Collie and the Infinite Sadness – without the constraint of time and money. Spotify could stream any album, any song, any playlist. It felt liberating. And, sure, some artists weren't there and music wasn't "mine" anymore, but it didn't matter. Suddenly, any moment could be filled with music. Playlists by other users could bring back memories of songs I hadn't heard in years.

My CDs and iTunes library had fought a good fight, but it was time for something better.


It's 2012 and Brenda's Got A Baby by 2Pac is playing on my iPhone 4S. "Thank God Apple isn't making those giant Android phones", I think to myself. I can't use my right arm because I'm doing chemo and the IV machine is humming along, pumping a poisonous panacea that will save me into my veins. The iPhone 4S is small enough to be used with my left hand.

"Brenda's Got A Baby" talks about terrible things and it's arguably not 2Pac's best work, but the beat and his delivery make it one of my favorite songs on this Greatest Hits. I can't pinpoint why listening to 2Pac makes the chemo more tolerable, but it's working.

I've been jumping between different music streaming services for a few years now. Spotify keeps getting more popular and there's a rumor that Apple is also working on a streaming service, but I use Rdio now. Rdio looks great: the company's designers care about music, and the service is full of details and smaller features that make streaming songs enjoyable. I like that I can view my History so I can see what I've been into lately. It seems like I listen to Oasis when I'm resting at home and 2Pac when I'm doing chemo at the hospital. That's an odd pattern.

My favorite aspect of Rdio is the focus on people. Not only can I discover playlists by other users with a beautiful interface – I can also see what's trending among my friends and what they've been listening to. I subscribe to a few playlists and I regularly discover new artists thanks to people I follow. It doesn't feel like Luca's mixtapes (remember those CDs? Things are so much better now), but it's nice.

I used to care about owning albums. I sometimes miss that sense of music ownership, and I wonder if Rdio's going to be around forever. I like this service because they seem to know what they're doing.

The nurse notices my white iPhone and she tells me about her "Galaxy iPhone" model. I think of the tech news I'm not covering. I need to get back to work.


I had big hopes for Apple Music. After years of trying streaming services (including a stint with Google Play Music), I was certain human curation was the key element missing from the music ecosystem. With Beats Music and Iovine, Dre, and Reznor on board, I thought Apple had a good shot at building the ideal music service for me.

Apple Music launched on June 30, 2015, and I signed up right away. I didn't particularly care for the confused WWDC announcement, the buggy interface, or the problematic integration with existing iTunes libraries. I had been searching and streaming music for years at that point, so I wasn't "organizing" my music library at all. Apple Music wasn't the company's smoothest launch, but, as someone who only cared about streaming, it was alright.

I used Apple Music heavily for eight months. The novelty of Beats 1 wore off quickly, and the app's inconsistencies continued to pile up. It got the job done, though. It wasn't spectacular in any way, but at least a lot of music was available on it and the family plan was cheap.

I gradually realized, however, that I wasn't using Apple Music with the same curiosity and sense of wonder I had felt with other services in the past. Apple Music was okay, and it had a few interesting exclusives, but I also found it unsurprising and stale.

The 'For You' section – the feature I was hoping would reinvent music discovery with a mix of algorithms and human curation – turned out to be uninspired at best. No matter how much I tuned it to my preferences, the section consistently served up the same menu of albums I already knew, 'Intro To' playlists filled with hits I had listened to hundreds of times, and artists I had known for years. The same curated sections popped up weekly without changes.

I wasn't discovering anything new on Apple Music. There was no obscure indie artist I had never heard about and no curveball thrown at me. There was no sass.

I guess that For You is fine for someone who's grown tired of discovering unknown artists every week, but it's not what I'm looking for. Apple Music's For You section is, for me, the musical equivalent of comfort food.

Spotify's Discover Weekly, on the other hand, has brought back the edge I was missing from music streaming services. Discover Weekly is helping me find new music like I haven't done in years. Every Monday, I feel like I'm back in high school and staring at the handwritten back of a mixtape cover. I crave its sense of discovery.

I tried Spotify again in January. I like to challenge my preconceptions and take a tour of products I'm no longer using to know how they're doing. I had seen a few close friends praise Discover Weekly – a feature Spotify launched in 2015 – and I thought the details of how it worked were fascinating.

It took a couple of weeks for Discover Weekly to figure me out, but once it showed up, it was unlike anything I had tried in streaming services before. Discover Weekly seemed to know my preferences and idiosyncrasies. Right off the bat, it guessed it could mix Let's Talk About Your Hair by Have Mercy (which I had never heard) with Make You Smile by +44 and that I would appreciate the contrast between them. Discover Weekly is the kind of feature that brings up Fight Song by The Appleseed Cast and The Girl by City and Colour like a good friend who's not afraid to put wildly different songs on the same mixtape because he knows you're going to love them.

There were some misses, of course, and a few curveballs – but that's what you get with a mixtape. And when it works, those imperfections are offset by the rush of a single song that makes everything right. It's hard to reconcile this with words. Most recommendation features on other music services are built for reassuring coziness. Discover Weekly dares the unknown.

And that's quite the paradox, isn't it? In my experience with Discover Weekly, Spotify's feature has taken more risks with unfamiliar gems than every playlist from expert curators I was shown in Apple Music. It's funny – and ironic – how an algorithm alone can understand this better than a team of people. With all the talk on AI and machine learning that's been going on lately, this is something worth keeping in mind.

In three months of Discover Weekly, I've discovered more new artists than I did in years of streaming services, including Apple Music. None of the artists I discovered with Spotify's mixtapes were ever recommended to me by Apple Music. They might as well not have existed.

Discover Weekly brings joy into my life. I look forward to every Monday now.


There's an idea that has stuck with me since I was in middle school – music creates connections.

My old music teacher, Claudio, often refused to let us play the flute in class as other teachers in the same school did. Instead, he asked us to think of a song together as a group and recreate it with everyday objects around us – pens, pencils, cans, paper sheets, shoes, anything. Some kids didn't like the idea and their parents complained to the school.

I loved it.

The idea was, when you create a connection between people and connections between objects and melodies, you can make music with anything and anyone. His lesson for us wasn't to play a flute that we had been told to buy – he was trying to say that there's music everywhere and in every one of us if we allow ourselves to go look for it. There can always be music.

Claudio was an odd guy like that. I saw him as a rebel. He loved to travel and he often brought photographs of his trips for us to see – the solitary beauty of Africa, the majesty of Machu Picchu, fishing in Cambodia.

I was in shock for days after he was shot and killed in Honduras trying to stop a robbery. And yet, somehow, that made sense as his way to go out. A weird hero who made music with pens. That was the first funeral I ever went to.

I miss him.


It's October 2013 and Shout Out Louds have just finished playing at the Circolo degli Artisti in Rome. I'm chatting with my friends outside. The band comes out the backstage door and it seems like nobody is paying attention to them. Perhaps it's the drinks doing the job for us, but we walk up to them and say hi.

They're kind and genuine like you would expect from their songs. The singer, Adam, is fun and cordial. A few minutes pass, we're talking, drinking, laughing – I can't believe I'm actually talking with one of the bands I discovered thanks to The O.C. – but it's happening.

I was obsessed with The O.C. when I was in high school. It wasn't for the story (however captivating, still consistent with the cliches of mid-2000s teen dramas), or the California setting – it was the music. Alex Patsavas – music supervisor on The O.C. and several other shows – is responsible for introducing me and millions of other teenagers at the time to bands such as Death Cab for Cutie, Modest Mouse, Shout Out Louds, The Shins, and Rooney. In a way, every episode of The O.C. was a mixtape with a story.

Some of my friends are embarrassed to admit they first heard those bands on a teen show. But I'm not. I don't have to pretend I'm more sophisticated than I am. Music is music. It's everywhere, and it doesn't matter where it comes from.

And eventually it just comes out. "Hey man, you know I first heard one of your songs on The O.C. years ago?"

It was Go Sadness.

My friends give me the look. They had told me before the show not to bring that up because, according to them, bands don't like being associated with it. What if they had a point? But what if they didn't? I don't care.

Adam looks at me and smiles. "And here we are now", he says.

We took a picture with Adam, too.


Discover Weekly makes me feel younger. It takes me back to the days of Luca's mixtapes. To a time when I was anticipating each episode of The O.C. for its music. Simpler times, when weeks were punctuated by new songs I fell in love with.

There are often songs I don't like in Discover Weekly. It's not perfect. It's not made by people – an algorithm is behind it. But it doesn't matter. Luca's mixtapes weren't perfect; the "music" we played with pens and cans was a cacophony of tempos; the albums my friends used to listen to in Rdio didn't always match my taste. The point isn't the hand behind a collection of songs. The curator – be it a friend, a TV show supervisor, a journalist, or a computer – isn't the protagonist of a mixtape.

Discovery takes the stage. What matters is what you feel in that moment when a song that is just right comes on and punches you in the stomach. Excitement. The beautiful ache of not knowing. You turn the volume up. A single connection that branches in different directions. You hit repeat. It stays with you forever.


My CD collection at my parents' house is a bit dusty. A nondescript white case stands out among Led Zeppelin, Smashing Pumpkins, and Oasis. I pick it up. Scribbled on the cover, still readable after 14 years, it says "Federico – Mix 1".

I've always loved mixtapes.


31 May 16:53

Daily Scot: A different kind of land constraint

by pricetags

From the CBC:


The P.E.I. government has strengthened rules designed to prevent non-residents from buying up Island property. But the province’s real estate association says the changes are moving in the wrong direction.

For decades, the province has had limits on property that can be purchased by non-residents, who require the approval of the provincial cabinet to buy more than five acres of property, or a property containing more than 165 feet of shoreline.

Now the province has changed its definition of who qualifies as a resident. Instead of residing in the province for 183 days over the course of a year, residents must now live in the province for 365 days over 24 months.

For the first time, the province has also stipulated that to be considered a resident under the Lands Protection Act a person must be either a Canadian citizen or a landed immigrant, regardless of how long they’ve lived in the province. …

But the Prince Edward Island Real Estate Association says the more restrictive measures are moving the law in the wrong direction.

‘It’s a good thing to attract new people’

“I’m not 100 per cent sure what the rationale behind the legislative change was, but it certainly will have an impact on our industry,” said association president Mary Jane Webster. …

Non-residents have to pay either $550 or one per cent of the purchase price (whichever is higher) to apply to buy property under the Lands Protection Act. If their application is rejected, 50 per cent of the fee can be refunded. If the purchase is approved, none of the fee is refunded.
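To make the fee arithmetic concrete: on a $200,000 property, one per cent is $2,000, which beats the $550 floor, so that's the fee; a rejected buyer gets $1,000 back. A minimal Python sketch of the rule as described (the function names are hypothetical, mine alone):

def application_fee(purchase_price):
    # The fee is the greater of $550 or one per cent of the purchase price.
    return max(550.0, 0.01 * purchase_price)

def refund(purchase_price, approved):
    # 50 per cent of the fee is refundable, but only if the application
    # is rejected; nothing is refunded on approval.
    return 0.0 if approved else 0.5 * application_fee(purchase_price)

print(application_fee(200000))           # 2000.0
print(refund(200000, approved=False))    # 1000.0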


31 May 16:32

British-Palestinian schoolgirl expelled from public speaking competition



Middle East Monitor, Jun 03, 2016


When I was in grade five I won the public speaking competition with a speech about Sir Frederick Banting and the discovery of insulin. It was uplifting and mildly patriotic. But when I was in grade eight I won the same competition with a scathing and powerful speech against the Vietnam War. I won the following three years as well, finishing with "How to be a dictator in five easy steps". The ability to write and deliver a speech is a powerful force, and I'm glad it was developed in me. So I feel for Leanne Mohamad, a 15-year-old student at Wanstead High School in London, who won a regional final of the Jack Petchey Speak Out Challenge for a moving speech about the Palestinians called 'Birds not Bombs', but was barred from participating in the final. She should be allowed to speak.

[Link] [Comment]
31 May 16:32

Twelve months

by Volker Weber


I will just leave this here. This is what twelve consecutive months look like. We discussed the how and why a few days ago. #dontbreakthechain

31 May 16:23

T-Mobile launches new 21-day $30 USD tourist plan with 2GB of 4G LTE data

by Patrick O'Rourke

Canadians now have another affordable roaming option when visiting the United States.

U.S. carrier T-Mobile has announced a new $30 USD (about $39 CAD) prepaid tourist plan for international visitors to the U.S., which offers 1,000 minutes of domestic calling, unlimited domestic and international texting to over 140 countries and regions, and unlimited 2G data with the first 2GB at 4G LTE speeds.

As long as you own an unlocked GSM smartphone, all you need to do is visit a T-Mobile store to sign up and pay for the plan. Customers are given a free SIM card to access the carrier’s network. The plan, which works with only one line, lasts three weeks (21 days).

It’s worth noting that international phone numbers can’t be ported to the United States, so similar to Roam Mobility, customers are given a new U.S. number when they sign up for T-Mobile’s new tourist plan. International calling from the U.S. to other countries also isn’t included in the plan.

However, the Tourist Plan does include 100MB of U.S. roaming data, though other T-Mobile benefits such as Mobile Without Borders, Binge On and Music Freedom are not included. Thankfully, tethering via a mobile hotspot with the plan’s 2GB of 4G LTE data (followed by 2G speeds after) is permitted.

In comparison, Roam Mobility offers SIM cards for $9.95 CAD and a variety of daily plans, the cheapest being Talk and Data, starting at $2.95 CAD per day for unlimited global text and unlimited 2G data.

Roam’s $3.95-per-day Talk and Text plan features unlimited nationwide talk, unlimited global text and unlimited long-distance calls to Canada, as well as voicemail and caller ID. The company’s final Talk, Text and Data plan, priced at $4.95 CAD per day, offers unlimited nationwide talk, global text, data, long-distance calls to Canada, and voicemail/caller ID.

In total, a 21-day trip to the U.S. using Roam’s Talk and Data plan, which features unlimited 2G data, amounts to $48.30 CAD. Talk, Text and Data for 21 days with Roam is priced at $76.30 CAD, but also comes with 10.5GB of 4G LTE data (this plan is priced at $4.95 a day for the first 14 days, and then $1 a day after).
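Those totals follow from a two-tier day rate. The parenthetical above states the tier explicitly for Talk, Text and Data ($4.95 a day for 14 days, then $1 a day), and the $48.30 Talk and Data figure only works out if the same 14-day tier applies at its $2.95 rate – an assumption worth flagging. A minimal Python sketch:

def roam_trip_cost(daily_rate, days, tier_days=14, overflow_rate=1.00):
    # Full daily rate for the first `tier_days`, then `overflow_rate`
    # per day for the rest of the trip.
    full = min(days, tier_days) * daily_rate
    extra = max(0, days - tier_days) * overflow_rate
    return full + extra

print(round(roam_trip_cost(2.95, 21), 2))   # 48.3 -- Talk and Data
print(round(roam_trip_cost(4.95, 21), 2))   # 76.3 -- Talk, Text and Data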

Bell’s Roam Better add-on is priced at $5 a day and offers 100MB of U.S. roaming data, but also allows users to keep their Canadian number. Virgin Mobile, a Bell subsidiary brand, offers Roam Sweet Roam, a plan that also allows subscribers to continue using their Canadian number, at a cost of $5 a day for 100MB of LTE data and unlimited talk and text.

Rogers and Fido also offer similar plans at $5 per day in the U.S. for unlimited calling to Canada and the U.S., as well as data shared from subscribers’ Share Everything and Fido Pulse plans. Wind also offers an affordable $15 U.S. roaming plan that gives subscribers 1GB of full-speed data, unlimited texting and 2,400 minutes of U.S./Canada talk time.

So while T-Mobile’s new Tourist Plan may not be that great a deal considering the limited amount of LTE data customers get for $30 USD, the plan could suit some Canadians’ needs, depending on what they use their device for and how long they’re visiting the United States. What is amusing, and on some level sad, is how affordable T-Mobile’s Tourist Plan is compared to actual Canadian smartphone plans, which would cost nearly double its rate.

T-Mobile says its new Tourist Plan will launch on June 12th.

Image credit: Wikicommons – Mike Mozart

Source: T-Mobile
31 May 16:22

Spatial Computing: why Tim Cook better worry

by Robert Scoble

It is the fourth visible user interface of the personal computer era.

The first was character mode. MS-DOS.
The second was the GUI, graphical user interface. Macintosh and Windows.
The third was touch. iPhone and Android.
The fourth is spatial computing.

At each introduction of a new user interface, some companies either went away or became dramatically less important. When GUIs came along, Borland and WordPerfect both bet on the older character mode and promptly went away.

Same when touch came along. Nokia and BlackBerry bet against touch, or didn’t react nearly strongly or quickly enough. They are either gone or much less important than they once were.

I cannot see a world where Apple goes away, not with $220 billion in cash reserves.

But I CAN see a world where Tim Cook goes away and his legacy is dramatically changed.

The Apple Watch didn’t hurt him. At least not beyond a small skin scratch. That will not be true when spatial computing comes along.

First, what is spatial computing? Run a Google Search on the term and you find: “Spatial Computing is a set of ideas and technologies that will transform our lives by understanding the physical world, knowing and communicating our relation to places in that world, and navigating through those places. The transformational potential of Spatial Computing is evident.”

It is already arriving, though. Self-driving cars use spatial computing. Drones, especially ones that map out the world at some level, use spatial computing. So do robots, particularly those from Boston Dynamics, another Google company. And Google will soon bring spatial computing to smartphones with the introduction of Tango sensors that map out the world.

Finally, mixed reality glasses use spatial computing. Microsoft HoloLens is showing you spatial computing: the four video cameras on that product that map out the real world are what bring you both spatial computing and mixed reality. Magic Leap is preparing to launch products in the next 18 months, and has had $1.3 billion invested in it so far by a group of companies led by Google and Baidu. Only insiders are paying attention, but both products let you walk around the real world and see virtual items placed on it. Go to YouTube and search for HoloLens and you’ll see lots of demos of what spatial computing looks like.

Now, it isn’t clear yet to most people what will force Apple’s hand here. That’s the crazy thing. I know of several mixed-reality-spatial-computing glasses under development:

1. Magic Leap.
2. Microsoft Hololens.
3. Meta.
4. Apple??
5. Facebook. (Zuckerberg already announced he’s working on such).
6. Amazon?? (Its delivery drone has a paper-thin radar in it that my nerdy friends say is quite brilliant).

I’m seeing amazing demos, I’ve been including a few in my speeches around the world, and I’m meeting with engineers who are working on some of these projects (later this week I’ll be in Israel to meet with a few of them).

Meta and others are also developing new spatial user interfaces. If you haven’t seen the tour of Meta I did back in February (you need to be logged into Facebook to view it), you really should. There you’ll meet some of the people who developed Tango but are now building a new user interface.

Anyway, back to the point. I bet Tim Cook is going to try to follow Steve Jobs’ playbook: watch everyone else prove there’s a market, and wait until you can provide a better alternative. After all, the iPod wasn’t the first audio player. The iPhone wasn’t the first smartphone. The iPad wasn’t the first tablet.

We know Apple and Tim Cook know what’s coming. Cook hired a professor from Virginia Tech who has seen Magic Leap, or at least whose students have seen it. Cook has also bought a number of companies in VR and AR, including Metaio, which showed me monsters on top of buildings years ago.

So Cook is prepared and he seems to be working hard with teams to come up with new products. Just read MacRumors rundown of all the moves Apple is making and you’ll see Tim Cook has all the ingredients to compete in this spatial computing world.

Here is the rub: Tim Cook isn’t Steve Jobs. He doesn’t have the market thinking he’s a genius. The market will be a LOT more skeptical of Cook’s claims than it was of Jobs’. Cook rarely talks about products. He’s not someone I expect to sit down with for an evening and talk about great new products, like, say, a Tesla, and hear a stream of visionary feedback about what does and doesn’t work.

Cook doesn’t seem like the kind of guy who can get a superhuman effort out of a development team, either, the way Jobs could, which is important for focusing a team on a market window. Let’s be honest: most of the world’s great products came with quite a bit of pain on behalf of employees. Will Cook’s nicer way of working bring us a world-defining product from Apple? The jury is out.

So there are a lot of questions; heck, questions that the Apple Watch didn’t answer and, in fact, caused to get louder.

Is Cook a product guy? So far the answer is no, he’s not.

To date that hasn’t hurt his legacy. He’s still leading the best company on earth: the one with the best retail stores, the best innovation legacy, the best brand, the best supply chain, the best marketing and PR teams, the best profits. These are daunting advantages for Apple, but if Magic Leap ships and Apple can’t match it for years, you’ll see many switch brand preference. Then Tim Cook really will be gone, and his legacy will not be a sweet one, but rather a sour one, as the guy who hobbled Apple.

All this is saying is that a new user interface is coming. Will it bring with it major corporate change the way previous user interfaces have?

History says yes.

Tim Cook better worry.

On the other hand, if Tim Cook delivers in the coming spatial computing era, well, then, he will finally put Steve Jobs in a box that Apple can really move forward from.

Are you a betting person? If so, where would you put your money?

Honestly? I just am not hearing good things out of Cupertino lately and the folks at Google and Microsoft are bringing real innovations to the market.

31 May 16:21

Arbutus Corridor De-railing

by Ken Ohrn

At the intersection of Burrard St and the corridor, rail removal is done for the short stretch that crosses Burrard. The remainder will take at least a year.



31 May 07:07

How can Toronto be a ‘smarter city?’

by Jessica Vomiero

Even with new technology being revealed practically every week, the term “smart cities” remains largely undefined. When a city such as Toronto has achieved everything from economic success to cultural relevance, how does it make the jump from being just any city to an ever-elusive smart city?

The best answer to this question is that such an expansive transition could take decades. However, many would be surprised to learn that members of Toronto’s urban ecosystem had been considering these questions long before the Toronto Region Board of Trade’s first annual Smart Cities Summit.

“I don’t really care about labels. I’m much more interested in what’s behind them. People have been talking about smart cities for more than 20 years,” said John Lung, executive director of the Intelligent Community Forum of Canada, during a panel discussion at the summit.


On stage with him were Chris Dwelley, the citywide performance manager of the City of Boston; David Amborski of Ryerson University; and Harout Chitilian of the Montreal Executive Committee.

Ahead of this panel was the first keynote speaker, Mark Kleinman, the director of economic and business policy for the Mayor of London, U.K.

“One of my passions is smart cities,” he said. “I’ve been a frequent visitor to Toronto for over 10 years, but I don’t think London or any other city around the world has all the answers to this agenda.”

When most people think of a smart city, an image of Tomorrowland comes to mind, potentially featuring drones, hover jets and robot dogs.

However, a smart city refers to one that uses data and analytics to improve the overall function of the region. This includes using automation and the internet of things to make navigating the city easy for anyone with an internet connection.

Once a city has mastered its data, it will then be able to partner with organizations developing solutions to the issues indicated through the data. However, most cities don’t know where to start.

“Cities generally don’t know what their smart assets are, or even what makes human infrastructure smart,” said Kleinman.

The cities of London, Boston, Montreal, and others making strides in this space were represented at the conference and each came with a smart accolade to their name.

Boston, for example, can be credited with an initiative known as CityScore. CityScore is an online platform designed to inform the mayor how the city ranks in several areas at any given time by aggregating metrics from across the city into a single number.

“Our work really focuses on harnessing the power of data and technology to help the city deliver better services,” said Dwelley during his time on stage. He went on to say that CityScore was built from existing infrastructure, meaning data that was already available.
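The article doesn’t give Boston’s formula, but the core idea of collapsing many metrics into one number is easy to sketch: score each metric against its target, then average. A hypothetical Python sketch (the metric names, values, and targets are invented for illustration, not Boston’s actual data):

# Each metric is scored as actual/target (above 1.0 beats the target);
# for metrics where lower is better, the ratio is inverted.
metrics = {
    # name: (actual, target, lower_is_better)
    "on_time_permits_pct": (0.92, 0.90, False),
    "pothole_repair_days": (2.5, 3.0, True),
    "library_visits": (98000, 100000, False),
}

def metric_score(actual, target, lower_is_better):
    return target / actual if lower_is_better else actual / target

city_score = sum(metric_score(*v) for v in metrics.values()) / len(metrics)
print(round(city_score, 2))   # 1.07 -- above 1.0: beating targets overall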

Furthermore, Kleinman focused heavily on Tech City, a cluster of technology companies that grew up around a road interchange in East London and has increased the visibility of London’s tech scene over the years.

Where most cities are starting with this initiative is mobility, as Torontonians have observed with bated breath in our own city over the past few years. Not only has the city braved the acceptance and regulation of disruptive technologies like Uber, it has also struggled to modernize its own transit systems.

However, as Kleinman articulated during his presentation, this kind of “productive” dialogue between the public and private sectors is inevitable.

“Yes, the governance is all wrong. Please just get over it. The governance is always wrong. Where do you want to be in five years?” said Kleinman, indicating that innovation will always be ahead of government, because you can’t regulate something that doesn’t yet exist.

What about when something exists, but only a small portion of the population has access to it? That question is why the Board of Trade brought in some unexpected guests to share their insights about the digital divide.

When people think of innovation, disruptive technology, artificial intelligence and the internet of things, the Toronto Public Library doesn’t typically come to mind.

Toronto Reference Library

It would surprise many to find that Toronto’s well-established learning hub is also a bridge to some of the city’s most innovative technologies for those who wouldn’t otherwise be able to access them.

In a discussion about the global evolution of smart cities and the role of startups, scale-ups and city officials in facilitating that development, the digital divide was mentioned a grand total of two times. One of the questions facing city officials all over the world is: how can we ensure that everyone can participate in and benefit from these changes?

As Vickery Bowles, a city librarian at the Toronto Public Library, will tell you, a city where only a portion of the population has access to the latest technologies isn’t that smart.

Therefore, the Toronto Public Library announced last December that, funded by the City of Toronto, it would begin piloting a Lendable Hotspot program.

According to the Toronto Star, approximately 27 percent of city residents don’t have broadband internet access in their homes, meaning that “hot spots are a commodity.” Whether this has to do with the rising prices of broadband coverage or spotty coverage in under-serviced areas, it’s been made clear that a smart city that includes only a portion of the population isn’t that smart.

“Access to information and pathways to learning has been the great equalizer of the 20th century, but in the 21st century, access to technology is just as important,” said Bowles.

The Toronto Public Library currently runs three digital innovation hubs, at the Toronto Reference Library, Fort York and the Scarborough Civic Centre, where citizens can conduct research for entrepreneurial ventures and explore ideas that they would not otherwise be able to.


While it seems as if the internet has been around forever, it’s really only been about 20 years since it began to significantly impact the world. It isn’t reasonable to expect governments and populations to already treat internet access as a human right; that kind of shift can take something like 100 years of legislative time (we all know how slow institutional change can be).

That’s why city officials have set their sights on the internet first – so the city of Toronto can take advantage of its smart assets when the time is right. Eventually, should we have the resources, a smart city could mean vehicle to infrastructure communication, real-time business analytics, and essentially, knowing about issues before they manifest themselves in breakages, leaks and car accidents. In order for that to happen however, everyone needs to be connected to the internet.

The second mention of the digital divide came from Kristina Verner, the Director of Intelligent Communities at Waterfront Toronto. Intelligent Communities is the innovative division of Waterfront Toronto that offers Canada’s first open access, high speed broadband community network.

Verner emphasized the affordable and reliable nature of the coverage offered in the community, which will be operated by the Toronto-based telecommunications firm, Beanfield Metroconnect.

“Oftentimes in a smart city environment, you end up with an even bigger divide between the have and the have-nots. We’re making sure no one gets left behind.”

Beyond the public library, Toronto initiatives attempting to involve citizens in their own digital infrastructure include IdeaSpaceTO, a platform where city officials seek feedback from the population about the challenges they’d like to overcome, and Better Budget Toronto, an online outlet where citizens can access information about the city’s budget.

In the past few years, it’s become clear that megacities, rather than national governments, are going to be the most influential sources of change in the future. While Toronto may not be in the top five, it’s certainly in the top ten. By 2030, two thirds of the world’s population will live in cities, meaning cities will need to find a way to accommodate their growing populations.


Based on the information available, it’s clear Toronto isn’t yet equipped to do this, seeing as the city’s maxed-out capacity was mentioned over and over again throughout the summit.

However, the municipal government has made it clear that it’s having this conversation across several channels in an effort to keep up with its population and with its surrounding super-cities, which is definitely a start.

Maybe, just maybe, there’s hope for a smarter Toronto after all.

Image credit: Wikicommons – Chris McPhee

31 May 07:07

Plan 3 Months Of Community Activities In 10 Minutes

by Richard Millington

Too many people are guessing what discussion, item of content, or activity to work on next.

You don’t need to guess in 2016.

Let Google guide you. Google knows more than you about your audience. You can do 10 minutes of research and put together a big list of discussions to initiate, content to create, and activities to host.

Step 1: Enter the Basic Topic Search Terms Into Google

Let’s imagine you want to build a community about surfing.

That’s quite a broad topic with a lot of competitors. So you might carve out a niche for yourself…perhaps surfboards…and decide to build a community around this concept.

You need to figure out what audience to target (beginners? experts?), what format of content to use (guides, blogs, PDFs, videos, images?), and what type of content to create (discussions, news, resources, etc.).

Our first step would be to put surfboards into Google and look carefully at what comes up:

[Screenshot: Google search results for “surfboards”]

Note: Answerthepublic is also a useful site for relevant questions.

What do you notice here? Knowing where to buy a surfboard takes a lot of the top places, but the other categories (images and news) are really interesting.

This gives you some immediate discussion ideas for the community.

Discussion ideas based upon first search

  • Where did you buy your surfboard from? And would you buy from there again?
  • The ultimate surfboard photo thread – share your board!
  • Your favourite surfboard design (share photos!)
  • Do you think ‘competitor’s board’ helped ‘competitor’ ?

This feeds into other activities too. You might invite a top design expert for an interview, or interview someone close to the competitor to ask about their board, etc. But these questions are still far too vague for our liking.

While this is better than what 90% of community professionals do, you can still do much better by diving slightly deeper.

We want our discussions and content to be as specific as possible. So let’s look at the related searches.

Step 2: Using Related Searches To Get Specific Discussion Questions, Content, And Ideas for Activities

If we scroll to the bottom of the page, we see this:

[Screenshot: related searches for “surfboards”]

This is really useful information!

While some of this audience wants to know where to buy surfboards, a large number clearly want cheap surfboards, and others want to know how to get the right size surfboard for them.

We can also see ‘beginners’ ranks highly here.

If we click on ‘beginners surfboard’ we soon see the exact terms and questions people ask to help us refine our discussions:

[Screenshot: autocomplete suggestions for “beginners surfboard”]
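If you’d rather script this step than click around, the same suggestions are available programmatically. Here’s a minimal Python sketch using Google’s unofficial autocomplete endpoint (it’s undocumented and may change or rate-limit, so treat it as fragile):

import requests

def suggestions(seed):
    # client=firefox makes the suggest endpoint return JSON shaped
    # like [query, [suggestion, suggestion, ...]].
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": seed},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[1]

for term in ("beginners surfboard", "surfboard design", "surfboard anatomy"):
    print(term, "->", suggestions(term)[:5])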

Step 3: Compile Unique Segments and Engagement Activities For Each

We can probably see 3 distinct types of beginners here.

  1. Beginners who only want the cheapest surfboards.
  2. Beginners who want to know the best surfboards for beginners.
  3. Beginners who want the best surfboards possible (cash-rich beginners!)

If you like, you could dig further into each of these.

For now, however, we can begin to create a few categories and drop the discussions, content, and activities into relevant places.

For example:


Audience: Beginners who want cheap surfboards

Discussions:
  • What is the least you would spend on a surfboard?
  • Where did you buy your surfboard from? Would you recommend it?
  • Selling your surfboard? – post it here.

Content:
  • How to negotiate a great surfboard deal
  • Survey results – how much members would spend on their first surfboards today
  • Surfboard price list – get the latest prices that members paid for their boards

Activities:
  • #surfboardgraduation day. Sell your old surfboard to a newcomer today.
  • Interview with a surfboard scout – how Joe Smith got an [xyz] surfboard for $350!

Audience: Beginners who have money (but not knowledge) to buy the best surfboards

Discussions:
  • If you could have any surfboard you want, what would it be?
  • Can beginners custom-design a surfboard?
  • What size surfboard should I get if….
  • Just bought your first board? Share the picture here.

Content:
  • The top 5 surfboards as voted by you.
  • And the surfboard brand of the year is…
  • 5 members describe their dream surfboard if price wasn’t a factor

Activities:
  • ASK the experts: What surfboard would you buy for … ?
  • AMA with a surfboard manufacturer – get tips and tricks to get the best surfboard

Audience: Beginners who want to know how to be good beginners

Discussions:
  • What advice would you give to a newbie buying his first surfboard?
  • What size surfboard should I get if….*
  • What board are you thinking of buying? Get advice from experts.
  • Should all newcomers begin by using foam surfboards?
  • What was your first surfboard and why?

Content:
  • What our top 10 members wish they knew when they bought their first board.
  • What’s changed about surfboards in the past 3 years?
  • 15 warning signs of bad surfboards.

Activities:
  • Surfboarding for beginners induction. Join our monthly live discussion to help newcomers get the best boards for them!

*there’s some natural overlap in these.

This is all activity to target to increase engagement among a specific segment with a unique need you can satisfy.

But beginners are just one of the key segments interested in surfboards. Now consider: which other segment of surfers might have a unique interest in surfboards?

Step 4: Research The Second Biggest Segment

If we go back to the first results, we noticed that images ranked second.

Clearly images are important to a big segment of the audience…but who is this audience and what do they want?

If we click on images, we notice that design is the number one result….

[Screenshot: Google Images results for “surfboards”]

We can probably assume that designing surfboards and customising surfboards is a big segment (we probably all knew this already, but the process matters).

We can also safely assume two things here.

  1. There is a group of surfers who love to customise their own boards.
  2. This group loves sharing images of customised boards.

We can infer that their motivation is impressing each other (why else share the images?).

Let’s do a proper search for terms like surfboard design and customising surfboards to see what comes up…

[Screenshots: search results for “surfboard design” and “customising surfboards”]

We can assume that the average level of knowledge of this group is quite low (note: the danger of this process is always appealing to the newcomers/beginners who are most likely to search for knowledge).

Now we have a good list of potential engagement topics:

  • Basic knowledge & discussion of the basics.
  • Theory of surfboard designs.
  • Sharing your design.
  • Finding and seeing the designs of others.
  • Video guides on designing surfboards
  • Get to know the big names in surfboard design.
  • Learn the software involved in design.

We can start to make some further educated guesses about the different groups here:

1. Design beginners. They need to know the basics. What software to use, how to design, what products to use, what’s in style etc.

2. Design experts who want to impress others. They want to take images of their boards, share images, and build their reputation.

3. Performance enthusiasts. They care less about aesthetics and more about how the design affects the performance. They want to get every edge for the top performance.

You can drill deeper into any of these if you like to get more specific questions and discussion topics.

For example, if we dig deeper into surfboard design theory we find:

[Screenshot: autocomplete suggestions for “surfboard design theory”]

Now we have 8 potential topics (within design theory alone) around which we can initiate discussions and create content, which we know will be useful to a large number of this audience.

We’ve also discovered a potential competitor term to our own community efforts (shaping forums).

Likewise, if we dig deeper into surfboard anatomy (for the performance enthusiasts) we find:

[Screenshot: autocomplete suggestions for “surfboard anatomy”]

I haven’t surfed, but Dave Parmenter might be a good person to interview.

Discussions about fish foot boards might be interesting; discussions on insight surfboards and rails would also be quite popular.

Once again, we can start making educated guesses about what each of our 3 new audiences might want here:


Audience: Beginners designing their first surfboards

Discussions:
  • What is the least you would spend on a surfboard?
  • What colours fade and which colours last for life?
  • Where can you buy a blank board to design?
  • How did you pick a design for your board?

Content:
  • The ultimate list of resources to design your first surfboard.
  • The basics of design theory. Avoid embarrassing mistakes in your surfboard design.
  • Five enduringly cool surfboard designs

Activities:
  • Hands-on workshop – our expert will guide you through designing your first surfboard
  • Correct your mistakes. Join our team in a live panel discussion to help you correct those design errors.

Audience: Design experts who want to impress others

Discussions:
  • Who’s your favourite surfboard design expert?
  • Favourite surfboard design of all time…go!
  • What design would you love to create but can’t?
  • SHARE YOUR LATEST SURFBOARD DESIGN.
  • Struggling with a design? Post it here and get feedback.
  • Should you be culturally sensitive in your design?
  • Which designs for which location?
  • Most embarrassing design mistake…anyone?

Content:
  • The best designs from our Instagram this month.
  • Nominate your favourite design from these favourites.
  • What’s trendy in design today?
  • The story behind ‘xyz’ design (background, templates, and resources to use)

Activities:
  • Design of the month competition.
  • Interview with the design of the month winner (how he selected, designed, and created this month’s top design)
  • AMA with the world’s top surfboard designer

Audience: Performance enthusiasts

Discussions:
  • Is your board salvageable? Post pictures to get feedback.
  • How to solve the ‘xyz’ problem?
  • What is the most innovative design change you’ve seen this week?
  • Lift vs. drag – which do you prefer?
  • Speed benefits from fish foot surfboards?

Content:
  • Repairing a surfboard
  • How [person] created the [innovative performance] surfboard.
  • 15 warning signs of bad surfboards.

Activities:
  • LIVE DEBATE: [xyz] surfboard vs. [xyz] surfboard…which gives you the cutting edge?

You can design a much better table than this, I’m sure.

Now we have 2 core segments (beginners and designers), each comprising three distinct groups of people we can target with dozens of messages, pieces of content, and activities

…and we’ve only been researching potential engagement activities for 10 minutes!

Step 5: Project Planning

From here you can begin eliminating groups you don’t want to target, focusing on the activities you feel will get the best return, and delegating who is going to do which of these (and when).

This could easily be 3 months of community activity all mapped out.

What we’ve done today is to give you a really simple process to begin developing your engagement activity in a community.

You can use this to drive activity in an existing community (catering to new segments) or to launch a new community (or sub-group).

In practice you probably want to supplement your research with interviews, surveys, and studies of existing communities in the sector and similar communities. This helps you overcome the focus-on-beginners problem. Test relevant search terms, explore, see what comes up, and use that within your community.

You should be amazed at just how quickly you can put together 3 months’ worth of activity on any topic you like.


p.s. Workshop in New York next week, sign up here if you want to use psychology to build better communities.

 

31 May 07:06

Samsung finds itself in a hole. And keeps digging.

by Volker Weber
SEOUL—Samsung Electronics Co. is quietly adding more advertisements to its Internet-connected televisions as it seeks new revenue sources for its struggling TV business.

The world’s largest maker of TVs by shipments added new tile ads to the main menu bar of its premium TVs in the U.S. in June 2015 and is planning to expand the program to Europe in coming months, people familiar with the matter said.

When I discovered that my Samsung Smart TV was installing crapware, I should have taken it back to the store. Since I didn't, it is a constant reminder to never buy anything with a Samsung logo.

More >

31 May 07:06

Why bots won't replace apps anytime soon

by Volker Weber
Lately, everyone’s talking about “conversational UI” [user interface]. It’s the next big thing. But the more articles I read on the topic, the more annoyed I get. It’s taken me so long to figure out why!

Interesting perspective from Dan Grover, WeChat product manager.

More >