Shared posts

09 May 15:01

#OpenBadges: the BOTOX of education? — #BeyondCredentials

by Serge

Two weeks ago, during the Badge Alliance weekly Community Call (link), when Nate Otto presented the outcomes of the Badge Alliance Board Meeting, one of the slides (cf. below) triggered a discussion on whether Open Badges are “just about credentialing”:

Are Open Badges just about credentialing?

Earlier in April, Carla Casilli posted her reflections on “Open badges + credentials: the value of the not-credential” (link):

“Right now, we still need badges to flourish in the non-regimented space of not-credentials—a world of value that has yet to be fully realized or appreciated—where the sliding scale of social and cultural currency changes depending on context.”

Doug Belshaw responded to Carla, stating: “I just can’t see a situation where a badge wouldn’t also count as a credential — even if that wasn’t the original intention” (link). Doug further adds:

“What badges don’t have to be, even if they’re wholly contained within the ‘credential’ circle, is traditional. They can recognise all kinds of knowledge, skills, and behaviours — as well as all kinds of things we haven’t even thought of yet!”

While defending the view that badges are credentials, Doug Belshaw claims that “badges don’t have to be […] traditional,” yet it is precisely because badges tend to be “traditional” that Carla Casilli writes “we still need badges to flourish in the non-regimented space of not-credentials.” Could there be a connection between thinking of Open Badges as credentials and the reason they are not more widely used in the “non-regimented space”?

“While Open Badges could become an authentic rejuvenating medicine, many are only interested in an educational BOTOX® for a cheap facelift.”

With the growing interest of institutions of formal education in Open Badges, I am afraid that we are more likely to witness the transformation of Open Badges technology and practices to fit the needs of formal education for conformance rather than the other way around. While Open Badges could become an authentic rejuvenating medicine, many are only interested in an educational BOTOX® for a cheap facelift — Don Presant detailed one such example in Problems with “Badges for Food”.

My claim is that the vocabulary we use to describe Open Badges and the processes they support can make the difference between authentic transformation and masquerade — and avoid BOTOX® mishaps!

Credentials, credentialing and recognition

Are Open Badges just about credentialing? When the question was raised two weeks ago, Don Presant suggested using the word recognition instead. Since I agreed with Don, I would like to spell out the dangers of equating Open Badges with credentials. Not that credential is a dirty word, but, like those substances that cure in small doses and kill in larger ones, I am afraid that we are on the verge of a credentialing overdose. The good news is that there is a way to mitigate that risk, and shifting our thinking from credentialing to recognition might be part of the solution.

We cannot neglect the fact that words carry their own baggage and that meaning is contextual: it depends on who is speaking, the audience, the location, etc. — the very same joke can sound hilarious, outrageous or flat depending on whether it is told by a woman or a man, a transgender, polygender or genderless individual!

Credentialing is value laden, and some people in the world of informal education have expressed their fears about credentialing:

“At the 2010 European Adult Education Association (EAEA) / Nordic network for adult learning conference […] many participants had reservations and anxieties towards qualifications frameworks and associated validation of non-formal and informal learning. It was suggested for example that there is a risk in this sector that accredited courses might give learners the impression that they belong to the formal system, which may deter those with previous negative experiences of education from taking part. Delegates stressed that ‘learning for learning’s sake’ remains important and that there should still be support and financing for non-formal learning, study circles and development of social competences even if the learners choose this type of learning for personal development and do not intend to validate the competences. The conference delegates emphasised that it should be up to the learner to choose what should be validated and not.”
Source: 2010 update of the European inventory on validation of non-formal and informal learning [the highlights are mine].

It would be a paradox if Open Badges, an initiative dedicated to the recognition of informal learning, were to deter the learners involved in informal learning, in particular those who are “learning for learning’s sake”! Unfortunately, that is precisely what might happen if we continue insisting that Open Badges are credentials.

The crux of the problem is probably the lack of understanding of the differences between recognition and credentialing (or accrediting). To try to capture how they differ from each other, it might be interesting to analyse how they react when combined with a selection of adjectives:

The semantic value of recognition vs. credentialing

As those phrases show, recognition and credentialing do not cover the same semantic ground. In fact, one is strictly contained within the other: credentialing is a specific modality of recognition, while recognition is not a specific modality of credentialing. I like to say that credentialing is ancillary to recognition: credentialing is a servant to recognition, and it should stay in that subordinate position. Problems arise when the servant becomes the master — think of Dirk Bogarde in Joseph Losey’s The Servant. I am afraid that this is the situation we foster when we equate Open Badges with credentials.

Credentialing should remain the servant of Recognition

If we accept equating Open Badges with credentials, we lose the opportunity to address the wider field of recognition. So my question is: what should we use as a means of recognition, if Open Badges are just credentials?

As this is a rhetorical question, we will assume from now on that Open Badges are signs of recognition and that some of them are credentials.

Beyond the recognition/credentials distinction, credentials suffer from a congenital infirmity: asymmetry. When you ask someone to “show me your credentials,” what you are asking for is what others, usually authorities, have to say about that person. You are not asking what that person has to say about others, for example the people they trust, i.e. those they have issued credentials to. Yet, to gain a fuller understanding of a person, why limit ourselves to half of the information that would be available if people had the possibility to issue their own credentials?

The credentials issued by a person can be very informative: “tell me who the people you trust/endorse are, and I’ll tell you who you are.” Had the Open Badge Infrastructure been truly open, when asked to “show me your credentials” one would have been able to respond “here is my trust capital: those who trust me and those I trust” — by displaying the badges received, issued and endorsed.

Unfortunately, the congenital infirmity of credentials, asymmetry, is also hereditary: it has been transmitted to the Open Badge Infrastructure. We have failed to provide learners with the means to issue their own “credentials,” denying them the right to trust others.

As I have mentioned several times, the Open Badge Infrastructure (OBI) was primarily designed to support the formal recognition of informal learning — involuntarily paving the way for Open Badges to be hijacked by institutions of formal education, something happening right now under our noses. Supporting the informal recognition of informal learning would have required a different architecture, one empowering learners rather than institutions, one aiming at recognition rather than mere credentialing.

In the next post, we will explore the relationship between formal and informal recognition and why it is critical to make Open Badges truly open to the informal recognition of informal learning.

09 May 14:59

LinkedIn Just Made a Savvy Business Move and Nobody Noticed



John Nemo, Inc., May 12, 2016


LinkedIn has been moving in this direction for several years, and as the Inc. article notes, "Modeled after popular 'freelancer-for-hire' sites such as Fiverr and Upwork, LinkedIn's ProFinder matches customers looking for a specific type of product or service with a qualified professional." It gives rise to a new type of business model on the other end: a commercial entity with few full-time staff employing dozens of professionals on a contract basis. Ah, but here's the rub: what is to prevent a race to the bottom as individual contractors compete against each other?

[Link] [Comment]
09 May 06:52

How To Persuade People To Participate When Facts Let You Down

by Richard Millington

This article is mind-blowing.

In the 2007 local elections here in the UK, the author’s side lost 500 seats, while the other side gained 900 seats.

…and they’re spinning it as a disaster for the other side!

The other side thought they would win Bury and didn’t. Bury is a small town with a population of 60k, but that’s the story that spread the next day.

Stories trump facts by a long way. Stories are how we organize and prioritize information.

The winner isn’t the person who presents the best facts, it’s the person who spins the most seductive stories. We need to know this now more than ever.

Years ago we worked with an internal community manager who had spent the last 6 months trying to get her colleagues to participate more in her community instead of sending emails.

She had stated and restated the benefits (the facts). She had held dozens of private lunches. Everyone was convinced by her argument; they just weren’t persuaded.

At our first lunch, she was clearly distressed, frustrated, lonely, and feeling ineffective. Her boss was piling on the pressure with weekly “status update” requests.

We’ve all been there, it’s not a nice feeling.

Tell An Emotive Story

Our approach was to find and spread emotive stories instead.

There was the story about the employee whom someone called selfish because they didn’t help others.

There was the story about a rival department that had just moved everyone to a more modern engagement platform and referred to her group as the ‘Dinos’ (short for dinosaurs, I presume).

There was the story about a director of the organization who mentioned an idea he had taken from the community; when the employee later confronted him for not giving credit, the director apologised.

All of these stories were true, of course; we were just helping them to spread.

All of these stories are very emotive. Fear, jealousy, and pride are very powerful and very persuasive emotions.

All of these stories promote the community too.

Better yet, stories spread far quicker than facts. Few people share facts, everyone shares stories. Stories are persuasive and entertaining, facts aren’t.

And, of course, it began driving up participation too. People began seeing the community in a different light.

That fear, loneliness, and stress our community manager felt melted away. People enjoyed speaking to her again and hearing the latest stories (we made the latter ones funnier). They began to respond more favourably to her ideas. Most of all, she got a contract extension as the level of participation went up.

We want to help you hunt out the emotive stories that will drive your audience.

If you come to our Tactical Psychology workshop in New York on June 6 ($750), we’re going to help you develop some terrific, emotive stories for your audience.

We’re going to unlock an arsenal of tactics from the world of psychology that you can deploy within your engagement efforts.

You can learn more below:

http://newyork.feverbee.com

We have 10 seats remaining (and 1 group ticket if you want to attend with your team).

I really hope you will join us.

 

09 May 00:16

Fix for ‘Font Not Compatible’ Error on Samsung Galaxy S6, Galaxy Note 5, and Galaxy S7

by Rajesh Pandey
This might sound surprising, but one of my favorite features of TouchWiz is the ability to change the system fonts easily. Samsung has included this functionality in all of its recent flagship devices: the Galaxy S6 series, Galaxy Note 5, and Galaxy S7 series. The best part is that you are not limited to the fonts pre-loaded by Samsung; you can download additional fonts from the Google Play Store and Samsung’s own Galaxy Apps store. Continue reading →
08 May 23:55

Apple and Podcasting

by Matt

Marco Arment has a great take on how the decentralized nature of podcasting is a feature, not a bug, and Apple being more proactive there would be harmful to the ecosystem. As an aside, since I’ve been in Houston more recently, which means driving a lot, I’ve been really loving his app Overcast and I opted in to the optional paid subscription for it. I just need to get in more of a habit of listening to podcasts outside of Houston.

08 May 23:54

Bike Momma Bike

by Ken Ohrn

As Vancouver’s transportation scene changes, with new infrastructure for people on foot and on bikes, the numbers need some visibility. HERE is some of the latest data from the City of Vancouver. And HERE is the detailed report on the panel survey, for those who like to understand the numbers, and perhaps for those few who like to “lemon-pick”.

  • Daily trips by mode of travel: people on bikes 7%
  • Trips to work: people on bikes 10%
  • Total trips by bike: 131,025 per day (up 32% from 2014 to 2015)
  • Total trips by bike: 134,824 per day (if recreational trips are included)


There is also a qualitative change in who’s out there travelling around on a bike, doing ordinary day-to-day stuff. The sort of thing pictured below is becoming a commonplace sight on Vancouver’s growing network of safe and effective infrastructure for people who want to ride their bike.


Happy Mother’s Day


08 May 23:40

A Podcasting Divergence

by Federico Viticci

I almost didn't want to link to this NYT report on a meeting Apple allegedly had with "seven leading podcast professionals" (whoever they are) to hear their concerns on "several pressing issues for podcasters", but Marco Arment's response is an important one.

The takeaway from the NYT story is that Leading Podcast Professionals would love ways to have more data about podcast listening habits as well as monetization features to sell access to podcasts via iTunes. From the report:

With data like listener counts and listening duration — similar to what Apple provides app developers — the industry could accelerate quickly, said Ms. Delvac of “Call Your Girlfriend.”

And:

Expanding the industry much more, though, gets tricky. Apple does not allow shows to charge people to download episodes, for example, and does not support paid subscriptions, as many podcasters would like. Apple has stuck with an advertising model for podcasting that looks almost exactly like what Mr. Jobs predicted onstage in 2005.

And here's Marco Arment:

Big podcasters also apparently want Apple to insert itself as a financial intermediary to allow payment for podcasts within Apple’s app. We’ve seen how that goes. Trust me, podcasters, you don’t want that.

It would not only add rules, restrictions, delays, and big commissions, but it would increase Apple’s dominant role in podcasts, push out diversity, give Apple far more control than before, and potentially destroy one of the web’s last open media ecosystems.

This is a complex issue. But it's important to note that just like Leading Podcast Professionals may have their valid opinions and suggestions for Apple, there are thousands of independent podcasters who are thriving in their own niches.

Right now, the iTunes Store for podcasts is essentially a glorified catalogue of external RSS feeds with show pages, charts, curated sections, and search. And that's a beautiful thing: there's little to no barrier to entry. Anyone can make their own podcast feed, host podcast files wherever they want, and Apple's system will provide users (and other apps) with tools to search and subscribe from a unified location dedicated to podcasts. Ultimately, you own your podcast files, your RSS feeds, and the ads you sell.

What Leading Podcast Professionals would like to see seems harmless on the surface. More data? Apple could use the system they've built with App Analytics, make it work with their Podcasts app only (which does have a big slice of the market share), and display aggregate and anonymized data to podcasters. Monetization and pay-for-access? Easy: instead of giving Apple your own public RSS feed with links to files hosted somewhere on the web, upload your podcast file directly to Apple's cloud and let the service take care of access on a per-Apple ID basis – sort of like YouTube Red, but for podcast access.

This is where my beliefs diverge from those of Leading Podcast Professionals. The system they are wishing for may well solve all of their data collection and monetization woes. From my standpoint, though, it would set a dangerous precedent for two reasons:

  • If Podcast Analytics are only available in Apple's Podcasts app, it would indirectly push out innovation for third-party podcast clients. Advertisers would start requiring data that's only available when users listen via Apple's app, which would incentivize podcasters to start recommending Apple's Podcasts app to their audience, which in turn would discourage developers from making third-party podcast clients. As a podcaster, would I want my audience to listen with an app that doesn't contribute to the analytics my advertisers want?
  • In the second, darker scenario, podcast files hosted on Apple's cloud would create another walled garden, akin to the App Store (or SoundCloud), where recurring subscriptions and downloads are dependent on authorization checks from Apple's servers to make podcasts work on the user's side. As we've seen with apps, maybe that could even be great, opening up an entirely new economy. But iPhone apps were a new medium – they were born within the confines of Apple's ecosystem, and yet look where we're at now. Do we really need the established and open podcasting medium to become segmented and scrutinized at this point? At what cost for independent podcasters and podcast app makers? With which consequences for listeners?

This, I think, is where many of us – independent podcasters – diverge from the data-driven platform fetishism of Leading Podcast Professionals. If you're a big media company, chances are you're always on the lookout for enticing new "platform opportunities" to keep your audiences in a locked-down space where you can easily collect data, analyze behavior, and monetize aggressively.

Look at what's happening with article and video content on the web. As a Leading Content Professional, why wouldn't you want to have your articles on Facebook or Apple News? How about Medium? Shouldn't you consider making exclusive content for Snapchat instead? The money is flowing in, the usage on these platforms is off the charts (well, except Apple News), and if users are spending time there, shouldn't you reach them through the platform they're using? Wouldn't it be awesome if the same could be true for podcasting?

If you're a Leading Content Professional and you think that's what you want, more power (and money) to you. I understand and respect what you're doing. But the great thing about the free and decentralized web is that the aforementioned web platforms are optional and they're alternatives to an existing open field where independent makers can do whatever they want. I can own my content, offer my RSS feed to anyone, and resist the temptation of slowing down my website with 10 different JavaScript plugins to monitor what my users do. No one is forcing me to agree to the terms of a platform. My readers are free to link to my articles, copy them, print them, subscribe to my feeds, and view them in any browser or feed reader they like.

Big Platforms are scared of this openness. I see an intrinsic beauty in it that no platform, corporation, or Leading Content Professional could ever convince me to abandon.

My concern with podcasts and the iTunes Store is that, compared to the web, podcasting is still a relatively young medium, one primarily searched and discovered on a platform that's too much of a good thing.

Think about it. Podcasts on iTunes link to external RSS feeds, files are hosted somewhere else, and there's a team of people curating the best ones each week. It's a benevolent service with a convenient interface based on an open medium. Can it last forever? How much would the "improvements" craved by Leading Podcast Professionals change that? And if such changes are implemented, would the podcasting industry be able to maintain its open and decentralized nature like the web is struggling to do?

See, this isn't about arguing who's right or wrong. It's about recognizing the divergence of needs and opinions in an industry that, in many ways, is still in its formative years. I want to own and control my podcasts just like I do with my articles. I want podcasting to be a spoken extension of the written web – available to everyone, indexed with an open format, unbound by agreement terms and proprietary file formats. I want to know that, 30 years from now, I'll be able to look up one of my podcast episodes from 2016 like I can look up a 2009 blog post on my server today.

But maybe the sad reality is that the web is an anomaly. Perhaps podcasting will end up like video, largely controlled by one platform, with other companies – each with their own terms, restrictions, and walled gardens – wanting a piece of the action. And maybe it'll even nurture a new generation of entertainers, like YouTube did, and eventually we'll just accept it.

But altering the fundamentals of an existing open medium concerns me today. For podcasters, the current state of the iTunes Store is almost too good to be true. I hope Apple remembers that there's more to podcasting than Leading Podcast Professionals.


Like MacStories? Become a Member.

Club MacStories offers exclusive access to extra MacStories content, delivered every week; it's also a way to support us directly.

Club MacStories will help you discover the best apps for your devices and get the most out of your iPhone, iPad, and Mac. Plus, it's made in Italy.

Join Now
08 May 23:39

Daily Durning: NIMBYs and YIMBYs

by pricetags

A typical story comes from Bloomberg, another from Sightline. And then there’s this, on the YIMBY movement:

Ballot issue #300 and #301—a separate effort to require every development in Boulder to pay for upgrades in infrastructure and amenities—would have stifled growth in Boulder. That was the whole point for the homeowners who already reside there. (“They are coming for our neighborhoods,” read a memorable tagline from one ballot-measure advocate.)

Those measures lost at the ballot, but they galvanized Better Boulder, a coalition made up of the groups who mobilized against the November ballot measures.

Across the country, similar campaigns for downzoning, restricting housing supply, curbing public transit, and other so-called not-in-my-back-yard machinations have spurred similar movements. There’s AURA in Austin, the San Francisco Bay Area Renters Federation in San Francisco, Greater Greater Washington in Washington, D.C., and many others.

You might call the response a nationwide YIMBY movement: a nation of local policy wonks crying out, “Yes in my back yard”—yes to density, transit, and change. Next month, this movement becomes official: Boulder is hosting the first national YIMBY conference from June 17–19.


08 May 23:38

The brain in your pocket

files/images/2498000_orig.jpg


Daniel Willingham, May 11, 2016


Daniel Willingham has two tried-and-true tools he goes back to again and again: the unproven theory, and the artificial example. In this post he combines them to suggest that the internet weakens our cognitive powers. The theory in this case is 'cognitive miserliness', suggesting that "we think when we feel we have to, and otherwise avoid it." And computers in our pocket give us a new way to avoid thinking, leading to (he says) poorer results on some 'analytical problems' such as the artificial example he provides. I think the sort of study he proposes would be substantially misleading, because as our technology changes, the nature of the problems (and the thinking we have to do) changes as well, rendering moot the artificial examples Willingham uses so frequently.

[Link] [Comment]
08 May 23:38

5 Must-Have Accessories for Samsung Galaxy S7

by Rajesh Pandey
Got yourself a Galaxy S7 or Galaxy S7 edge and now looking for some accessories to pair with the handset? Since Samsung’s flagship Galaxy S handsets are among the most popular Android smartphones in the market, there are plenty of accessories available for them.  Continue reading →
08 May 23:38

Coding for Kids

by Rui Carmo

This is a (very) incomplete list of resources for teaching kids how to program.

Jun’17
  • Code Club: Online courses for kids
  • Thonny: A simple Python IDE for beginners

May’16
  • load81: A Lua-based programming environment for kids, similar to Codea
  • Blockly: Google’s Scratch-like toolkit for visual programming
  • Stencyl: A Haxe-based IDE that uses a similar approach to Scratch

Older
  • Scratch: The quintessential reference. The online version is, sadly, not as good as the original, and the iOS version is targeted at smaller children, so 6-8 year-olds have little interest in it
  • Code.org: Has a fair (but still very small) set of Portuguese-language resources
  • Codea: A Lua IDE for the iPad that has a cheaper “scratchpad” edition
  • Pythonista: Not really focused on kids, but it deserves a spot here
08 May 22:19

Replace RESTful APIs with JSON-Pure

files/images/2015-08-31-json-pure.png


Michael S. Mikowski, May 11, 2016


If you're not coding websites and web applications, this post is not for you. But for the rest of us, it's interesting to look at the evolving world of web applications development (formerly known as 'web pages'). "The primary goal is to provide the best possible user experience for modern SPAs (Single Page Applications). Certainly the practices shown below are nothing new or revolutionary. You will see echoes of SOAP, JSON RPC, JSON API, JSend, JSON LD and Hydra in the recommendations." See also: RESTful APIs - the big Lie, and Thoughts on JavaScript, CoffeeScript, Node.js, and JSON-LD.
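For readers who haven't met the pattern, here is a minimal sketch of the idea (field names are illustrative, not taken from Mikowski's spec): instead of spreading a request's meaning across an HTTP verb, a URL and a body, a JSON-Pure style API sends everything as one JSON message, usually via POST to a single endpoint, and reports errors in the response body rather than through HTTP status codes.

# REST-style update, semantics split across verb, URL and body:
#   PUT /users/42  {"name": "Ada"}
# A JSON-Pure style message carries the action in the payload itself:
request = {
    "action": "update_user",    # illustrative field name
    "request_id": "a7f3",       # lets the client match responses to requests
    "data": {"user_id": 42, "name": "Ada"},
}
# The response mirrors the request; failures are described in the body:
response = {"request_id": "a7f3", "ok": True, "data": {"user_id": 42, "name": "Ada"}}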

[Link] [Comment]
08 May 22:19

I fly 747s for a living. Here are the amazing things I see every day.

files/images/IMG_2082.0.JPG


Mark Vanhoenacker, Vox, May 11, 2016


I read this a couple of days ago while I was in Malaysia. Now I'm in my kitchen in Ontario, typing this out. This is an obvious point, but it's deep and important: "Everywhere is going on at once... All this would still be going on if I hadn't flown here. And that's equally true of London, and of all the other cities I passed in the long night, that I saw only the lights of. For everyone, and every place, it's the present." It applies equally well to my next door neighbours as to Tatevik in Armenia, Viplav in India, Dave in PEI and Doug in the UK.

[Link] [Comment]
08 May 15:23

Unlearning and Other Jedi Mind Tricks – Finding the (Creative) Force


Amy Burvall, AmusED, May 11, 2016


Creativity isn't something you have to have a special talent for. It is something that results from paying attention, following your own interests, and most of all, hard work. This is the gist of the message offered by Amy Burvall as she prefaces a list of 'Jedi mind tricks' to promote creativity (quoted and lightly edited; my own take in parentheses):

  • sense of wonder... is to be mindful of one’s surroundings and all the things your senses are taking in (I get this from travel, the beauty of everyday life, from animals and from people).
  • hone your skills of observation... be curious, stay in the question longer, take in different perspectives of things as well as people and processes (I use photography for this).
  • challenge assumptions - learn the rules so you can break them - and don’t take things at face value (not just rules - patterns, expectations, order and organizations...)
  • the best way to complain is to make things - so after you’ve deconstructed or assessed something, it’s important to move on (for me, making things usually means coding, though sometimes it means carpentry).
  • an incubation period is also necessary - this is when you step away from the creative endeavor and chill out or do something mundane (for me, this is things like cycling)
  • chefs of creativity must have a variety of tools at hand - it’s important to achieve fluency with your creativity (and this requires practice with your tools, even for no purpose).
  • being able to learn, unlearn, and re-learn anew will be imperative (just ask anyone who does web development)
  • creativity inherently involves remix and connecting the dots - dot-connecting takes practice. It’s a balance of curating (“dot collection”), finding, sense-making, and associating (this is the work I do with OLDaily, an essential part of my creative exercise).
  • having a creative mentor is so important – even if it’s not someone in your real life (these, to me, are the people who struggle with genuine enquiry - David Hume, John Stuart Mill, Ludwig Wittgenstein - and not those who simply see it as a game)
  • while we don’t want something to be overprescribed - we can thrive on conditions - rules - challenges (this isn't me so much - I don't like people telling me what to do, or not to do).
  • creativity should be scheduled (OLDaily first thing in the morning and at 4:00 p.m., morning time for priority tasks, afternoon for routine, evenings and weekends for me).
  • focus will yield flow - let all the extraneous stuff whiz by – at least for a little while – while you are intent (yes)
  • it’s key to have other, more tedious aspects of the creative task at hand that you can tackle when the big work is difficult (like doing email, reading the RSS feeds, some paperwork, cleaning and sorting).
  • we want our work to touch people - we need people to work with us and spark us on – a creative posse (I've thought about this a lot - mostly I just need someone to tell me I've done good).
  • if you are hating you are not creating (I need to work on this).

P.S. the list format here, which would work well on slides, is especially for Amy. :) Via Doug Belshaw, who also recommends this Tilt Brush video. See also Amy Burvall, Everybody is an Artist.

[Link] [Comment]
08 May 15:22

Free Artificial Intelligence (AI) software for your PC

files/images/Braina.JPG


Adrian Kingsley-Hughes, ZDNet, May 11, 2016


The 'artificial intelligence' part of Braina (I keep wanting to pronounce it 'bran ah') lies mostly in the voice recognition software and in its ability to interpret natural language requests. "It isn't just like a chat-bot; its priority is to be super functional and to help you in doing tasks. You can either type commands or speak to it and Braina will understand what you want to do." According to the review, "Braina is very utilitarian, practical, and actually very functional." I haven't tried it myself (I'm afraid to overload my laptop so I'll wait until I'm in the office). No matter how it functions, something like this application will provide a lot more support for personal productivity some time in the near future. Via Doug Peterson.

[Link] [Comment]
08 May 06:11

Dear Adobe, Please buy Flickr

by Doc Searls
A photo readers find among the most interesting of the 13,000+ aerial photos I've put on Flickr

This photo of the San Juan River in Utah is among the tens of thousands I’ve put on Flickr. It might be collateral damage if Yahoo dies or fails to sell the service to a worthy buyer.

Flickr is far from perfect, but it is also by far the best online service for serious photographers. At a time when the center of photographic gravity is drifting from arts & archives to selfies & social, Flickr remains both retro and contemporary in the best possible ways: a museum-grade treasure it would hurt terribly to lose.

Alas, it is owned by Yahoo, which is, despite Marissa Mayer’s best efforts, circling the drain.

Flickr was created and lovingly nurtured by Stewart Butterfield and Caterina Fake, from its launch in 2004 through its acquisition by Yahoo in 2005 and until their departure in 2008. Since then it’s had ups and downs. The latest down was the departure of Bernardo Hernandez in 2015.

I don’t even know who, if anybody, runs it now. It’s sinking in the ratings. According to Petapixel, it’s probably up for sale. Writes Michael Zhang, “In the hands of a good owner, Flickr could thrive and live on as a dominant photo sharing option. In the hands of a bad one, it could go the way of MySpace and other once-powerful Internet services that have withered away from neglect and lack of innovation.”

Naturally, the natives are restless. (Me too. I currently have 62,527 photos parked and curated there. They’ve had over ten million views and run about 5,000 views per day. I suppose it’s possible that nobody is more exposed in this thing than I am.)

So I’m hoping a big and successful photography-loving company will pick it up. I volunteer Adobe. It has the photo editing tools most used by Flickr contributors, and I expect it would do a better job of taking care of both the service and its customers than would Apple, Facebook, Google, Microsoft or other possible candidates.

Less likely, but more desirable, is some kind of community ownership. Anybody up for a kickstarter?

[Later…] I’m trying out 500px. Seems better than Flickr in some respects so far. Hmm… Is it possible to suck every one of my photos, including metadata, out of Flickr by its API and bring it over to 500px?
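For what it’s worth, the Flickr REST API does expose enough to attempt this. Here’s a rough, untested sketch in Python (it assumes you have an API key, and USER must be the account’s NSID; private photos and some metadata would additionally need OAuth):

import requests

API = "https://api.flickr.com/services/rest/"
KEY = "YOUR_API_KEY"     # placeholder; get one from Flickr's developer pages
USER = "NSID_GOES_HERE"  # placeholder account id

def flickr(method, **params):
    # Every Flickr REST call goes to one endpoint, with method= in the query.
    params.update(method=method, api_key=KEY, format="json", nojsoncallback=1)
    return requests.get(API, params=params).json()

page, pages = 1, 1
while page <= pages:
    # extras=url_o asks Flickr to include a link to the original-size file.
    batch = flickr("flickr.people.getPhotos", user_id=USER, page=page,
                   per_page=500, extras="url_o,description,tags,date_taken")
    pages = batch["photos"]["pages"]
    for photo in batch["photos"]["photo"]:
        print(photo["id"], photo.get("url_o"))  # hand these to a downloader
    page += 1

Getting the files into 500px is a separate problem, but at least nothing on the Flickr side is locked in.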

I also like Thomas Hawk’s excellent defense of Flickr, here.

 

08 May 06:11

Without Wonder, Without Awe

by Eugene Wallingford

Henry Miller, in "The Books in My Life" (1969):

Every day of his life the common man makes use of what men in other ages would have deemed miraculous means. In the range of invention, if not in powers of invention, the man of today is nearer to being a god than at any time in history. (So we like to believe!) Yet never was he less godlike. He accepts and utilizes the miraculous gifts of science unquestioningly; he is without wonder, without awe, reverence, zest, vitality, or joy. He draws no conclusions from the past, has no peace or satisfaction in the present, and is utterly unconcerned about the future. He is marking time.

It's curious to me that this was written around the same time as Stewart Brand's clarion call that we are as gods. The zeitgeist of the 1960s, perhaps.

"The Books in My Life" really has been an unexpected gift. As I noted back in November, I picked it up on a lark after reading a Paris Review interview with Miller, and have been reading it off and on since. Even though he writes mostly of books and authors I know little about, his personal reflections and writing style click with me. Occasionally, I pick up one of the books he discusses, ost recently Richard Jefferies's The Story of My Heart.

When other parts of the world seem out of sync, picking up the right book can change everything.

08 May 06:09

Money, Race and Success: How Your School District Compares

files/images/Socio-Economic_Status.JPG


Motoko Rich, Amanda Cox, Matthew Bloch, New York Times, May 10, 2016


On average, "Sixth graders in the richest school districts are four grade levels ahead of children in the poorest districts." As usual with American sources, the data is also distributed by race. But race doesn't define the trend; socio-economic status does. "A higher proportion of black and Hispanic children come from poor families. A new analysis of reading and math test score data from across the country confirms just how much socioeconomic conditions matter." Of course, knowing about the impact of inequality and doing something about it are two very different things. Here's the data, based on 200 million test scores. P.S. maybe this explains results showing lower scores for online schools.

[Link] [Comment]
08 May 06:09

Evernote adds new document scanning and image annotation features to its Android app

by Igor Bonifacic

Back in July of last year, Evernote announced Canadian Chris O’Neil had taken over as the company’s CEO. O’Neil came to Evernote following stints as the head of Google Canada and later Google X, and he quickly set about helping the one-time Silicon Valley darling regain the momentum it had lost over the last couple of years.

In one of its first major updates since O’Neil shut down some of the company’s tertiary products, Evernote has added several major new features to its core Android app.

To start, the company has reworked the app’s document scanning functionality. When taking a picture of a document, the app’s new automatic mode will parse said document, determining how best to accommodate its physical size and layout. It will then automatically crop the resulting image and adjust its contrast as necessary to make it easier to read on a mobile screen.


To access this functionality, launch Evernote, tap the green floating action button and select the camera note option. Evernote’s implementation of this feature is pretty slick. If the app detects a document, it will overlay a green interface element, allowing the user to pick exactly the element they want photographed. Once scanned, document images are kept in a temporary gallery while the user decides whether to keep them. Once saved, the file is treated just like any other Evernote note for the purposes of organization.

In the short time I’ve had to test this feature, I’ve found it works reasonably well, though its scans aren’t as crisp as those from Microsoft’s Office Lens.

Premium Evernote users also get access to a new business card scanning feature. Using the same tech behind its new scanning feature, the app will automatically parse, scrub and transcribe business cards.


The company has also integrated many of the features previously only present in Skitch, the dedicated drawing and annotation app it pulled from app marketplaces back in January, into Evernote proper.

Premium users can use this same functionality to also annotate PDF files.


On the visual side of things, Evernote has added support for strikethrough, subscript, and superscript formatting styles. Last but not least, the app now allows users to long-press to quickly select multiple notes from the main app screen.

Download Evernote from the Google Play Store.

Source: Evernote
08 May 06:09

Unbundled

files/images/lightbulbs.jpeg


Chris Saad, Medium, May 10, 2016


"Unbundling," says this article, "is the process of  breaking apart rigid, man made structures (i.e. bundles) into individual, atomic parts." This article is a superficial look at the process, as suggested by the definition (taking a house apart qould qualify as 'breaking apart rigid, man made structures' but is certainly not 'unbundling'). It is nonetheless useful to have a look at the different enterprises impacted by the phenomenon - everything from news media to work, war and government. And while, yes, there is "an  increased flexibility for empowered individuals to have more choices and more personalized experiences," the effect is not nearly as pervasive as the author, a manager from Uber, suggests.

[Link] [Comment]
07 May 14:33

Inside the Magic Pocket

by James Cowling

Overall Architecture

We’ve received a lot of positive feedback since announcing Magic Pocket, our in-house multi-exabyte storage system. We’re going to follow that announcement with a series of technical blog posts that offer a look behind the scenes at interesting aspects of the system, including our protection mechanisms, operational tooling, and innovations on the boundary between hardware and software. But first, we’ll need some context: in this post, we’ll give a high level architectural overview of Magic Pocket and the criteria it was designed to meet.

As we explained in our introductory post, Dropbox stores two kinds of data: file content and metadata about files and users. Magic Pocket is the system we use to store the file content. These files are split up into blocks, replicated for durability, and distributed across our infrastructure in multiple geographic regions.

Magic Pocket is based on a rather simple set of core protocols, but it’s also a big, complicated system, so we’ll necessarily need to gloss over some details. Feel free to add feedback in the comments below; we’ll do our best to delve further in future posts.

Note: Internally we just call the system “MP” so that we don’t have to feel silly saying the word “Magic” all the time. We’ll do that in this post as well.

Requirements

Immutable block storage

Magic Pocket is an immutable block storage system. It stores encrypted chunks of files up to 4 megabytes in size, and once a block is written to the system it never changes. Immutability makes our lives a lot easier.

When a user makes changes to a file on Dropbox we record all of the alterations in a separate system called FileJournal. This enables us to have the simplicity of storing immutable blocks while moving the logic that supports mutability higher up in the stack. There are plenty of large-scale storage systems that provide native support for mutable blocks, but they’re typically based on immutable storage primitives once you get down to the lower layers. 

Workload

Dropbox has a lot of data and a high degree of temporal locality. Much of that data is accessed very frequently within an hour of being uploaded and increasingly less frequently afterwards. This pattern makes sense: our users collaborate heavily within Dropbox, so a file is likely to be synced to other devices soon after upload. But we still need reliably fast access: you probably don’t look at your tax records from 1997 too often, but when you do, you want them immediately. We have a fairly “cold” storage system but with the requirement of low-latency reads for all blocks. 

To tackle this workload, we’ve built a system based on spinning media (a fancy way of saying “hard drives”), which has the advantage of being durable, cheap, storage-dense and fairly low latency—we save the solid-state drives (SSDs) for our databases and caches. We use a high degree of initial replication and caching for recent uploads, alongside a more efficient storage encoding for the rest of our data.

Durability

Durability is non-negotiable in Magic Pocket. Our theoretical durability has to be effectively infinite, to the point where loss due to an apocalyptic asteroid impact is more likely than random disk failures—at that stage, we’ll probably have bigger problems to worry about. This data is erasure-coded for efficiency and stored across multiple geographic regions with a wide degree of replication to ensure protection against calamities and natural disasters.

Scale

As an engineer, this is the fun part. Magic Pocket had to grow from our initial double-digit-petabyte prototypes to a multi-exabyte behemoth within the span of around 6 months—a fairly unprecedented transition. This required us to spend a lot of time thinking, designing, and prototyping to eliminate the bottlenecks we could foresee. This process also helped us to ensure that the architecture was sufficiently extensible, so we could change it as unforeseen requirements arose.

There were plenty of examples of unforeseen requirements along the way. In one case, traffic grew suddenly and we started saturating the routers between our network clusters. This required us to change our data placement algorithms and our request routing to better reflect cluster affinity (along with available storage capacity, cluster growth schedules, etc) and eventually to change our inter-cluster network architecture altogether.

Simplicity

As engineers we know that complexity is usually antithetical to reliability. Many of us have spent enough time writing complex consensus protocols to know that spending all day reimplementing Paxos is usually a bad idea. MP eschews quorum-style consensus or distributed coordination as much as possible, and heavily leverages points of centralized coordination when performed in a fault-tolerant and scalable manner. There were times when we could have chosen a distributed hash table or trie for our Block Index, but instead we just opted for a giant sharded MySQL cluster; this turned out to be a really great decision in terms of simplifying development and minimizing unknowns.

Data Model

Before we get to the architecture itself, first let’s work out what we’re storing.

MP stores blocks: opaque chunks of files, up to 4MB in size.

These blocks are compressed and encrypted and then passed to MP for storage. Each block needs a key or name, which for most of our use-cases is a SHA-256 hash of the block.

4MB is a pretty small amount of data in a multi-exabyte storage system, however, and too small a unit of granularity to move around whenever we need to replace a disk or erasure code some data. To make this problem tractable, we aggregate these blocks into 1GB logical storage containers called buckets. The blocks within a given bucket don’t necessarily have anything in common; they’re just blocks that happened to be uploaded around the same time.

Buckets need to be replicated across multiple physical machines for reliability. Recently uploaded blocks get replicated directly onto multiple machines, and then eventually the buckets containing the blocks are aggregated together and erasure coded for storage efficiency. We use the term volume to refer to one or more buckets replicated onto a set of physical storage nodes.


To summarize: A block, identified by its hash, gets written to a bucket. Each bucket is stored in a volume across multiple machines, in either replicated or erasure coded form.
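To make the terminology concrete, here is a toy sketch of that data model in Python (our own illustrative types and names, not Dropbox's actual code):

import hashlib

BLOCK_SIZE = 4 * 1024 * 1024   # blocks are at most 4MB
BUCKET_SIZE = 1024 ** 3        # buckets aggregate blocks into ~1GB containers

def block_key(data: bytes) -> str:
    # For most use-cases a block is named by the SHA-256 hash of its contents.
    return hashlib.sha256(data).hexdigest()

def split_into_blocks(contents: bytes):
    # Split file contents into <=4MB chunks; each becomes an immutable block.
    for i in range(0, len(contents), BLOCK_SIZE):
        chunk = contents[i:i + BLOCK_SIZE]
        yield block_key(chunk), chunk

class Bucket:
    # ~1GB of unrelated blocks that happened to be uploaded around the same
    # time. A volume is one or more buckets replicated onto a set of OSDs.
    def __init__(self, bucket_id: int):
        self.bucket_id, self.blocks, self.bytes_used = bucket_id, {}, 0

    def has_room(self, chunk: bytes) -> bool:
        return self.bytes_used + len(chunk) <= BUCKET_SIZE

    def put(self, key: str, chunk: bytes) -> None:
        self.blocks[key] = chunk
        self.bytes_used += len(chunk)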

Architecture

So now that we know our requirements and data model, what does Magic Pocket actually look like? Well, something like this:

That might not look like much, but it’s important. MP is a multi-zone architecture, with server clusters in western, central and eastern United States. Each block in MP is stored independently in at least two separate zones and then replicated reliably within these zones. This redundancy is great for avoiding natural disasters and large-scale outages but also allows us to establish very clear administrative domains and abstraction boundaries to avoid a misconfiguration or congestion collapse from cascading across zones.

[We have some extensions in the works for less-frequently accessed (“colder”) data that adopts a different multi-zone architecture than this.]

Most of the magic happens inside a zone, however, so let’s dive in.

We’ll go through these components one by one.

Frontends

These nodes accept storage requests from outside the system, and are the gateway to Magic Pocket. They determine where a block should be stored and issue commands inside MP to read or write the block.

Block Index

This is the service that maps each block to the bucket where it’s stored. You can think of this as a giant database with the following schema:

hash → cell, bucket, checksum 

(Our real schema is a little more complicated than this to support things like deletes, cross-zone replication, etc.)

The Block Index is a giant sharded MySQL cluster, fronted by an RPC service layer, plus a lot of tooling for database operations and reliability. We’d originally planned on building a dedicated key-value store for this purpose but MySQL turned out to be more than capable. We already had thousands of database nodes in service across the Dropbox stack, so this allowed us to leverage the operational competency we’ve built up around managing MySQL at scale.

We might build a more sophisticated system eventually, but we’re happy with this for now. Key-value stores are fashionable and offer high performance, but databases are highly reliable and provide an expressive data model which has allowed us to easily expand our schema and functionality over time.
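As an entirely hypothetical illustration of how a lookup against this schema might be routed: sharding by a prefix of the block hash is our assumption for the sketch below, not a description of Dropbox's actual scheme.

NUM_SHARDS = 1024  # assumed shard count, purely for illustration

def shard_for(block_hash: str) -> int:
    # Route a hash to a shard using its leading hex digits.
    return int(block_hash[:8], 16) % NUM_SHARDS

def lookup(block_hash: str, shards):
    # 'shards' stands in for a pool of MySQL connections behind the RPC layer.
    return shards[shard_for(block_hash)].query(
        "SELECT cell, bucket, checksum FROM block_index WHERE hash = %s",
        block_hash)  # -> (cell, bucket, checksum), or None if unknown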

Cross-zone replication

The cross-zone replication daemon is responsible for asynchronously replicating all block puts from one zone to the other. We write each block to a remote zone within one second of it being uploaded locally. We factor this replication delay into our durability models and ensure that the data is replicated sufficiently widely in the local zone.

Cells

Cells are self-contained logical storage clusters that store around 50PB of raw data. Whenever we want to add more space to MP we typically bring up a new cell. While the cells are completely logically independent, we stripe each cell across our racks to ensure maximal physical diversity within a cell.

Let’s dive inside a cell to see how it works:Cell

Object Storage Devices (OSDs)

The most important characters in a cell are the OSDs, storage boxes full of disks that can store over a petabyte of data in a single machine, or over 8 PB per rack. There’s some very complex logic on these devices for managing caching, disk scheduling, and data validation, but from the perspective of the rest of the system these are “dumb” nodes: they store blocks but don’t understand the cell topology or participate in distributed protocols.

Replication Table

The Replication Table is the index into the cell which maps each logical bucket of data to the volume and OSDs that bucket is stored on. Like the Block Index, the Replication Table is stored as a MySQL database but is much smaller and updated far less frequently. The working set for the Replication Table fits entirely in memory on these databases which gives us very high read throughput on a small number of physical machines.

The schema on the Replication Table looks something like this:

bucket → volume
volume → OSDs, open, type, generation 

One important concept here is the open flag, which dictates whether the volume is “open” or “closed”. An open volume is open for writing new data but nothing else. A closed volume is immutable and may be safely moved around the cell. Only a small number of volumes are open at any point in time.

The type specifies the type of volume: replicated, or encoded with one of our erasure coding schemes. The generation number is used to ensure consistency when moving volumes around to recover from a disk failure or to optimize storage layout.
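Pulling those fields together, a toy Replication Table entry and the open/closed rule might look like this (illustrative types of ours, not the real schema):

from dataclasses import dataclass
from typing import List

@dataclass
class VolumeEntry:
    volume_id: int
    osds: List[int]   # the storage nodes currently holding this volume
    open: bool        # open = accepts new writes; closed = immutable
    type: str         # "replicated" or one of the erasure coding schemes
    generation: int   # bumped on every move so stale copies can be detected

def can_accept_writes(v: VolumeEntry) -> bool:
    return v.open           # only a small number of volumes are open at once

def safe_to_move(v: VolumeEntry) -> bool:
    return not v.open       # closed volumes are immutable, hence movable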

Master

The Master is best thought of as the janitor or coordinator for the cell. It contains most of the complex protocol logic in the system, and its main job is to watch the OSDs and trigger data repair operations whenever one fails. It also coordinates background operations like creating new storage buckets when they get full, triggering garbage collection when data is deleted, or merging buckets together when they become too small after garbage collection.

The Replication Table stores the authoritative volume state so that the Master itself is entirely soft-state. Note that the Master is not on the data plane: no live traffic flows through it, and the cell can continue to serve reads if the Master is down. The cell can even receive writes without the Master, although it will eventually run out of available storage buckets without the Master creating new ones as they fill up. There are always plenty of other cells to write to if the Master isn’t around to create these new buckets.

We run a single Master per cell, which provides us a centralized point of coordination for complex data-placement decisions without the significant complexity of a distributed protocol. This centralized model does impose a limit on the size of each cell: we can support around a hundred petabytes before the memory and CPU overhead becomes a bottleneck. Fortunately, having multiple cells also happens to be very convenient from a deployment perspective and provides greater isolation to avoid cascading failures.

Volume Managers

The Volume Managers are the heavy lifters of the cell. They respond to requests from the Master to move volumes around, or to erasure code volumes. This typically means reading from a bunch of OSDs, writing to other OSDs, and then handing control back to the Master to complete the operation.

The Volume Manager processes run on the same physical hardware as the OSDs since this allows us to amortize their heavy network-capacity demands across idle storage hardware in the cell.

Protocol

Phew! You’ve made it this far, and hopefully have a reasonable understanding of the high-level Magic Pocket architecture. We’ll wrap up with a very cursory overview of some core MP protocols, which we can expound upon in future posts. Fortunately these protocols are already quite simple.

Put

The Frontends are armed with a few pieces of information in advance of receiving a Put request: they periodically contact each cell to determine how much available space it has, along with a list of open volumes that can receive new writes.

When a Put request arrives, the Frontend first checks if the block already exists (via the Block Index) and then chooses a target volume to store the block. The volume is chosen from the cells in such a way as to evenly distribute cell load and minimize network traffic between storage clusters. The Frontend then consults the Replication Table to determine the OSDs that are currently storing the volume.

The Frontend issues store commands to these OSDs, which all fsync the blocks to disk (or on-board SSD) before responding. If this was successful then the Frontend adds a new entry to the Block Index and can return successfully to the client. If any OSDs fail along the way then the Frontend just retries with another volume, potentially in another cell. If the Block Index fails then the Frontend forwards the request to the other zone. The Master periodically runs background tasks to clean up from any partial writes for failed operations.

There are some subtle details behind the scenes, but ultimately it’s rather simple. If we adopted a quorum-based protocol where the Frontend was only required to write to a subset of the OSDs in a volume then we would avoid some of these retries and potentially achieve lower tail latency but at the expense of greater complexity. Judicious management of timeouts in a retry-based scheme already results in low tail latencies and gives us performance that we’re very happy with.
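Sketched in code, the Put path described above might look roughly like this (helper names such as pick_open_volumes, lookup_osds and checksum are ours, invented for the sketch):

def put_block(block_hash, data, block_index, cells):
    if block_index.lookup(block_hash):        # dedupe: block may already exist
        return "already-stored"
    # Candidates come from the Frontends' periodic per-cell reports of free
    # space and open volumes, ordered to balance load and network traffic.
    for volume in pick_open_volumes(cells):
        osds = lookup_osds(volume)            # consult the Replication Table
        # Every OSD must fsync the block (to disk or on-board SSD) before
        # acknowledging; on any failure we simply retry with another volume.
        if all(osd.store_and_fsync(block_hash, data) for osd in osds):
            block_index.insert(block_hash, volume.cell, volume.bucket,
                               checksum(data))
            return "ok"
    raise IOError("no writable volume available in any cell")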

Get

Once we know the Put protocol, the process for serving a Get should be self-explanatory. The Frontend looks up the cell and bucket from the Block Index, then looks up the volume and OSDs from the Replication Table, and then fetches the block from one of the OSDs, retrying if one is unavailable.

As mentioned, we store both replicated data and erasure coded data in MP. Reading from a replicated volume is easy because each OSD in the volume stores all the blocks.

Reading from an erasure coded volume can be a little trickier. We encode in such a way that each block can be read in its entirety from a single given OSD, so most reads only hit a single disk spindle; this is important in reducing load on our hardware. If that OSD is unavailable then the Frontend needs to reconstruct the block by reading encoded data from the other OSDs. It performs this reconstruction with the aid of the Volume Manager.

In the encoding scheme above, the Frontend can read Block A from OSD 1, highlighted in green. If that read fails it can reconstruct Block A by reading from a sufficient number of blocks on the other OSDs, highlighted in red. Our actual encoding is a little more complicated than this and is optimized to allow reconstruction from a smaller subset of OSDs under most failure scenarios.
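And the corresponding Get path, again with invented helper names and a deliberately simplified reconstruction step:

def get_block(block_hash, block_index, cells):
    cell, bucket, _checksum = block_index.lookup(block_hash)
    volume, osds = cells[cell].replication_table.lookup(bucket)
    for osd in osds:
        data = osd.fetch(block_hash)     # usually a single disk spindle
        if data is not None:
            return data
    # Erasure-coded volume with the primary copy unavailable: rebuild the
    # block from encoded data on the surviving OSDs via a Volume Manager.
    return volume.volume_manager.reconstruct(block_hash, osds)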

Repair

The Master runs a number of different protocols to manage the volumes in a cell and to clean up after failed operations. But the most important operation the Master performs is Repair.

Repair is the operation used to re-replicate volumes whenever a disk fails. The Master continually monitors OSD health via our service discovery system and triggers a repair operation once an OSD has been offline for 15 minutes — long enough to restart a node without triggering unnecessary repairs, but short enough to provide rapid recovery and minimize any window of vulnerability.

Volumes are spread somewhat randomly throughout a cell, and each OSD holds several thousand volumes. This means that if we lose a single OSD, we can reconstruct the full set of volumes from hundreds of other OSDs simultaneously:

[Diagram: Volume placement]

In the diagram above we’ve lost OSD 3, but can recover volumes A, B and C from OSDs 1, 2, 4 and 5. In practice there are thousands of volumes per OSD, and hundreds of other OSDs they share this data with. This allows us to amortize the reconstruction traffic across hundreds of network cards and thousands of disk spindles to minimize recovery time.

The first thing the Master does when an OSD fails is to close all the volumes that were on that OSD and instruct the other OSDs to reflect this change locally. Now that the volumes are closed, we know that they won’t accept any future writes and are thus safe to move around.

The Master then builds a reconstruction plan, where it chooses a set of OSDs to copy from and a set of OSDs to replicate to, in such a way as to evenly spread load across as many OSDs as possible. This step allows us to avoid traffic spikes on particular disks or machines. The reconstruction plan allows us to provision far fewer hardware resources per OSD, and would be difficult to produce without having the Master as a central point of coordination.
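As an illustration only, a greedy least-loaded assignment captures the spirit of spreading reconstruction work; the real planner is certainly more elaborate:

```python
from collections import defaultdict

def build_reconstruction_plan(lost_volumes, replicas, healthy_osds):
    """Pick a (source, destination) pair per lost volume, spreading load.

    lost_volumes: volumes that lived on the failed OSD
    replicas:     dict of volume -> surviving OSDs that still hold it
    healthy_osds: OSDs eligible to receive new copies (assumed non-empty)
    """
    load = defaultdict(int)          # planned transfers per OSD
    plan = []
    for vol in lost_volumes:
        # Least-loaded surviving replica as the copy source...
        source = min(replicas[vol], key=lambda osd: load[osd])
        # ...and the least-loaded healthy OSD not already holding the volume.
        dest = min((osd for osd in healthy_osds if osd not in replicas[vol]),
                   key=lambda osd: load[osd])
        load[source] += 1
        load[dest] += 1
        plan.append((vol, source, dest))
    return plan
```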

We’ll gloss over the data transfer process, but it involves the volume managers copying data from the sources to the destinations, erasure coding where necessary, and then handing control back to the Master.

The final step is fairly simple, but critical: At this point the volume exists on both the source and destination OSDs, but the move hasn’t been committed yet. If the Master fails at this point, the volume will just stay in the old location and get repaired again by the new Master. To commit the repair operation, the Master first increments the generation number on the volumes on the new OSDs, and then updates the Replication Table to store the new volume-to-OSD mapping with the new generation (the commit point). Now that we’ve incremented the generation number we know that there’ll be no confusion about which OSDs hold the volume, even if the failed OSD comes back to life.
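A sketch of just this commit step, with hypothetical volume and Replication Table objects; the single table update is the commit point:

```python
def commit_repair(master, volume, new_osds):
    """Bump the volume's generation on the new OSDs, then flip the
    Replication Table mapping. The table update is the commit point:
    if the Master dies before it, a new Master simply repairs again."""
    new_generation = volume.generation + 1
    for osd in new_osds:
        osd.set_generation(volume, new_generation)
    # A stale OSD coming back to life still holds the old generation,
    # so it can never be mistaken for the current holder of the volume.
    master.replication_table.update(volume, new_osds, new_generation)
```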

This protocol ensures that any node can fail at any time without leaving the system in an inconsistent state. We’ve seen all sorts of crazy stuff in production. In one instance, a database frontend froze for a full hour before springing back to life and forwarding a request to the Replication Table, during which time the Master had also failed and restarted, issuing an entirely different set of repair operations. Our consistency protocols need to be completely solid in the face of arbitrary failures like these. The Master also runs a number of other background processes such as Reconcile, which validates OSD state and rolls back failed repairs or incomplete operations.

The open/closed volume model is key for ensuring that live traffic doesn’t interfere with background operations, and allows us to use far simpler consistency protocols than if we didn’t enforce this dichotomy.

Wrap-up

Thanks for making it this far! Hopefully this post gives some context for how Magic Pocket works and for some of our motivations.

The primary design principle here is: keep it simple! Designing a distributed storage system is a big challenge, but it's much harder to build one that operates reliably at scale and supports all of the monitoring, verification systems, and tooling that ensure it's running correctly. It's also incredibly important to make technical decisions that are the right solution to the right problem, not just because they're cool and novel. Most of MP was built by a team of fewer than half a dozen people, which forced us to focus on the things that mattered and played a big part in the success of the project.

There are obviously a lot of details that we’ve left out. (Just in case you’re about to respond, “Wait! this doesn’t work when X, Y and Z happens!”—we’ve thought about that, I promise.) Stay tuned for future blog posts where we’ll go into more detail on specific aspects about building and operating a system at this scale.

07 May 14:33

How did I not know about Yonomi

by Volker Weber

Yonomi resides on your phone and in the Cloud. No need for a hub, controller box or other additional hardware. Yonomi magically finds and enhances your existing connected devices allowing them to interact with one another in ways never before possible.

Yonomi works with many devices, among them Sonos (yay!), WeMo and Hue. Turn the lights on when you get home; turn Sonos off when you leave the house. Let Yonomi announce the weather over your bedroom Sonos when your alarm goes off in the morning.

More >

Update: Not working for me.

07 May 14:33

Comox Greenway: The results are in

by pricetags

From the City of Vancouver:

Research shows health benefits of public greenways

Cycling up 49%, auto trips down 35%

Two separate research studies show multiple health benefits for area residents from the development of the Comox-Helmcken Greenway.

We commissioned a study by the UBC Health and Community Design Lab, and partnered with the Centre for Hip Health and Mobility on another study entitled "Active Streets, Active People".

Both studies examined the effects of improved access to walking and cycling, and opportunities for social connection along the greenway.

The two studies looked at behaviour before and after the greenway was improved. Key findings:

  • 16% increase in the number of days of moderate physical activity
  • 9.8% decrease in the number of days of poor mental and physical health
  • 49% increase in cycling trips
  • 35% decrease in auto trips



06 May 21:53

The Lone Cyclists on the Port Mann Bridge

by pricetags

John Whistler and Joey Lesperance cycle over the Port Mann Bridge:

John Whistler:

The path on the bridge is excellent – no other cyclist or ped was encountered. While Surrey has done a good job integrating into their bike network, the Coquitlam side is a car hell with the entrance almost impossible to find.


06 May 21:52

Apple Music introduces 50 percent student discount that’s not available in Canada

by Jessica Vomiero

Though Apple Music has been criticized for a confusing and complex design, it’s grown at a staggering rate since it was launched nine months ago.

To further that growth, Apple is offering students a discounted subscriber rate, cutting the price of the regular plan by 50 percent. In the U.S., that means the subscriber rate for students will decrease from $9.99 USD to $4.99 USD.

The streaming service has now achieved a user base of 13 million subscribers, up from 11 million in February. In addition to a regular plan at $9.99 USD per month and a student plan at $4.99 USD per month, Apple Music also offers a family plan at $14.99 USD per month.

This plan allows up to six users to stream at a time, in contrast to Spotify, which allows only two people to share an account.

The student discount will be available to students enrolled in eligible colleges and universities across several countries including the United States, U.K., Germany, Denmark, Ireland, Australia, and New Zealand. Unfortunately, the student discount is not available to Canadian students.

The company is also working with the student verification technology provider UNiDAYS to ensure that those signing up are really enrolled in college or university.

Furthermore, it doesn’t matter which degree level users are pursuing. As long as they’re enrolled in an approved school, they will be able to take advantage of the service.

Previous reports indicate that Apple will unveil an overhauled version of Apple Music at this year’s WWDC. The update will reportedly incorporate a more user-friendly interface and will likely be released later this year alongside iOS 10.

Related reading: Apple will reportedly unveil an overhauled version of Apple Music at this year’s WWDC

Source: TechCrunch
06 May 21:52

The Case for Rail in London: Agglomeration and social inclusion

by pricetags

How do you make a case for spending big bucks to build rail transport? "Social inclusion" is not just a nice phrase in a policy paper.

By Ian Brown in the Eno Transportation Weekly

The real case was all about sustaining the growing economy of London and fostering social inclusion. There were many suggestions about spreading job creation across the city, but economists made a strong case based on London's key export: finance and business services. This led to an important city concept, in which the most efficient way of conducting such business is in one concentrated location. This is now referred to as "the agglomeration effect".

The other key component not reflected in traditional transport business cases is social inclusion. It was important to find a way of expressing this simply, rather than just saying it is an important policy issue, so it too was put in economic terms. The agglomeration effect can only work with sustainable high-volume transport (Hong Kong style), requiring a massive increase in capacity over the legacy system. For example, for every job in the financial and business services sector there are four support jobs (IT, maintenance, cleaning, etc.). These jobs do not pay as well, but the city cannot function without them.

The agglomeration effect formed the business case for the massive Crossrail project, but social inclusion was also a major factor, particularly in justifying upgrades of radial main line railway routes and the completely new Overground network with its orbital line now completed right round London. This addressed the need to provide a viable alternative to the car and importantly provided alternative non-city-center routings for many cross-city journeys. Both Moscow and Paris have adopted a similar approach.


06 May 21:52

Researchers develop skin-touch gesture user-interface for wearables

by Patrick O'Rourke

Anyone who has used a smartwatch, whether it’s the Moto 360, Apple Watch or the Gear S2, knows that navigating a minuscule touchscreen is not easy, especially when trying to accomplish precise tasks.

Researchers at Carnegie Mellon University's Future Interfaces Group have developed a solution to this problem called SkinTrack, the first wearable navigation interface to use gestures performed on the back of the forearm and hand to navigate the watch's OS.

"We can spill interaction beyond the small screen, while simultaneously enabling a larger area for interaction. Our approach still works even when the skin is covered with clothing," says the project's YouTube video.

A ring worn on the index finger emits an electrical signal that is picked up by the watch's band, allowing the system to measure the hand's position and distance and to track finger motion, all in real time. To demonstrate the technology, Carnegie Mellon researchers created their own custom smartwatch prototype, a device resembling a cheaper-looking version of the Apple Watch.
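As a purely speculative illustration (the post does not detail SkinTrack's actual signal processing), a toy model might interpolate the finger's position along the arm from the relative signal strength seen at two hypothetical electrodes in the band:

```python
def estimate_position(amp_near, amp_far, segment_length_cm=20.0):
    """Toy 1-D position estimate between two hypothetical electrodes.

    amp_near / amp_far are assumed signal amplitudes at the electrode
    nearest the hand and the one farthest from it. A relatively stronger
    far-electrode reading implies the finger is farther up the arm.
    """
    total = amp_near + amp_far
    if total == 0:
        return None          # no signal: the finger isn't touching the skin
    return segment_length_cm * (amp_far / total)
```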

Researchers pinned virtual applications to different sections of their forearm, allowing them to make phone calls, play music and even create special gestures, such as writing an A to access the smartwatch’s contact list.

While fascinating, the practical applications of SkinTrack would likely be somewhat limited, though simple gestures could be useful for activating specific applications. Imagine activating Transit App's Apple Watch Glance by tapping your forearm.

Other obstacles to mainstream consumer adoption include the large power source attached to the ring, which is necessary for the motion tracking to operate.

Source: Mashable
06 May 21:51

Daily Durning: A Tunnel for Calgary

by pricetags

From NextCity:


Global Rail News reports that engineers studying five possible options for routing the line through the Calgary city center have recommended the all-underground Option D as the best choice to take the Green Line from the Beltline across the Bow River and into downtown Calgary. Option D is expected to cost $1.3 billion, roughly $500 million more than the other four options.


06 May 18:11

Music Software: iTunes alternatives for OS X, 2016

by Rob Campbell

Yesterday I tweeted a thing about somebody losing a big chunk of their music collection because Apple Music, by way of iTunes, decided that they’d prefer to have the version stored in Apple Music instead of their original, painstakingly-curated files. Of course, Apple knows best, but I thought I’d do a survey of the alternatives, just in case I decided to keep my own music from getting cloud ganked.


Amarra

The big kahuna of audio software. Amarra enjoys the respect and recommendation of guys (yes, they’re all guys) who sit in Eames chairs listening to their music through Stax headphones with a Burson DAC and Headphone Amp and bespoke, hand-braided digital audio cables. They invariably have scotch and are listening to Windham Hill field recordings dumped straight to 1″ tape from Neumann ribbon microphones by a crazy person in a meadow. You know, the Absolute Sound and HIFI guys love this stuff, so maybe it’s worth a look. They’ve split it into three confusing versions based on the software’s capabilities and mysterious buzzwords. I feel like I’m going to need to upgrade my speaker cables to use this. And wear a smoking jacket.

Audirvana


(15 day free trial, $79 for full version, $13 for iOS remote) Initial experience: Slow iTunes synchronization. Has been running for 5 minutes and is only half-done importing. Hopefully subsequent opens are faster. It does recommend folder sync rather than using an iTunes XML file for import, but I wanted the least destructive means of use. Total time to import: 14 minutes for 14.5k files, followed by a search for missing album artwork.

Library sorting and navigation seems decent and it pulled in all of my iTunes playlists, so that’s a good start. There are a bunch of playback options, but I left everything on the defaults. I won’t claim there’s any sonic advantage over iTunes but it does do some things that should make a difference. Preloading music into memory, taking direct control of my DAC (I’m using a PreSonus Firebox while my RME Fireface is in the shop) and doing smart things with volume control should all minimize degradation from jitter and over-manipulation of the bitstream. Most of which you can get out of iTunes just by keeping the volume control at max and using the mixer controls that come with your audio interface.

The visual interface is fairly rudimentary but not hard to look at. It has all the basic controls and a little extra information around playback – bit depth and sample rate are up front. The library section is sort of iTunes-esque with a default album view that shows the cover art in a big tiled list. Curiously, there’s no mini-window to cut down on distraction.

It has Tidal and Qobuz as streaming/buying options if you’re into that.

Vox

(Free, donation-ware) This app scared the hell out of me when I launched it, asking me to sign into something called “Loop” with my Facebook account (or signup with email). It sounded like they provide “unlimited” cloud storage for your music which feels like an interesting angle. Is this Napster for the cloud crowd? This app is “Free” as in, they have a business model that will make money from your stuff. I didn’t spend a lot of time with this. Figured it wasn’t for me. They sure do talk about the design on their blog.

Fidelia


This was the first piece of software I tried as an iTunes replacement. The people on head-fi.org seemed to think it was keen, so I shelled out $35 for a license. Big mistake: always take the free trial option! Fidelia is incapable of library management; its organization features are so rudimentary as to be essentially non-existent, and useless for large collections. You're relegated to searching with the text box to find your music. The interface is also a humorously large (on non-retina displays) skeuomorphic design harking back to a big, chunky piece of stereo equipment. I've asked for a refund; no response as yet. Worse, their support forums are a collection of people yelling at them with no response for months on end. Consider this my 1-star Yelp review.

Nightingale

Remember Songbird? This appears to be a community-built version of that, maybe without all of the store integration they'd been building in. Probably dated, as the last version of Songbird was from 2013. Built on Mozilla tech for the interface. Unlike most of these other players, they make no claims of integer pipes to your DAC or audiophile sound quality. And maybe that's a good thing.

Conclusion

I'm going to give Audirvana a trial run and see how I like it. First impressions are decent, but I haven't tried to do any serious library management with it yet, and I'm not sure it's worth the trouble.

Will it completely replace iTunes? Probably not. As an owner of an iPad, I still need some way to get stuff on and off of that, and none of the options above are really meant for it. How could they be, with Apple's proprietary hardware and software double threat? I also still use iTunes as a podcast subscription tool, an internet radio interface (though I use TuneIn pretty much everywhere else and it's great), and for the odd movie and TV rental/subscription. All the stuff that has made iTunes a big, bloated pig of a system.

If iOS is any indicator, maybe Apple’s moving away from the big, everything-central, all-included machine that is iTunes and shifting towards smaller, more focused apps. Podcasts is now a separate, much-improved app on iPad. The iTunes store is a separate app. I would love to see Apple Music get out of the main program, but that’s probably unlikely.

Honestly, I think the best solution is my rsync backup to our NAS. If anything happens to my music collection, I have a full backup I can restore from and share around the house.

What are your recommendations for huge music-collection, library management tools?

06 May 18:11

Rethinking suburban roadscapes: building rapid transit greenways

by dandy


Image by Darnel Harris and Dennis Espino

Imagine a road that not only supported a range of modes of transportation, but also helped clean up our waterways. It's possible – and not all that difficult, according to Darnel Harris. Harris, who holds a Master in Environmental Studies from York University, took a long look at how active transportation, ecological preservation and infrastructure development can work together to produce more resilient communities. In a Q&A with dandyhorse, he says rapid transit greenways can help sustain suburban areas in particular.

What are rapid transit greenways, and how would they benefit people?

Rapid transit greenways use rain gardens as green barriers to separate people driving from people biking or walking along busy arterial roads. Since greenery and concrete protected barriers traditionally occupy separate spaces in the roadway, combining them frees up room for other uses.

By reallocating this space, a three-metre-wide mobility pathway can accommodate two lanes of cargo bikes, scooters and velomobiles safely next to the sidewalk on both sides of the road. People driving will know the roadway is for vehicles only. Residents and businesses using cargo bikes will be able to practically and affordably move people and goods around the local area. People walking will no longer conflict with people who are unwilling or unable to bike alongside vehicles.

The rain gardens will reduce the risk of flooding and block pollution from reaching our waterways. Apart from being cost-effective to install, these carefully designed rain gardens will look good while limiting the need for new stormwater piping. They will also drive green job growth.

Where do you see rapid transit greenways working in Toronto?

No matter where you live in Toronto, you need to access your local schools, shops and community services. Safe routes for people walking and riding bikes or velomobiles support low-carbon mobility in Toronto. Flood protection barriers and a sizeable tree canopy help mitigate severe weather. While we need these approaches across Toronto, the large distances between housing, shops and community services in the suburbs mean we need practical and dignified ways to stay mobile. Fortunately, our suburban rights-of-way are likely wide enough to handle a retrofit without really impacting people driving or walking.

Why is it important that greenways are implemented as we invest in our suburban communities?

The low-speed street design of the suburbs makes retrofitting them for cargo bike use fairly straightforward. Local streets are already ideal because traffic is infrequent, and collector roads need only cost-effective adjustments that could be handled with bike lanes or bollards. Creating protected pathways along the arterial roads that tie suburban communities together would allow residents to go about their daily lives in their community while using a bike in a safe and dignified manner.

Rapid transit greenways seem to connect infrastructure planning with environmental concerns. Why has it taken so long for us to think this way?

We have a history of planning to tame nature rather than trying to work collaboratively with it. Consider for example our approach to dealing with water. We built a stormwater system in Toronto to channel rainfall back into our waterways, rather than let it seep through the soil. That approach has caused severe riverbank erosion, allowed polluted runoff to make its way into Lake Ontario, and affected biodiversity.

Between 2000 and 2012, Toronto was hit with three storms that our previous models suggested should only occur every hundred years. Since our systems were generally built to handle storms that would occur every few years or decades, we need to rebuild our infrastructure. The sheer technical challenge and expense of replacing massive pipe systems at a time when budgets are constrained has helped open the door to considering fresh approaches.

In Toronto, who would be responsible for building (and funding) rapid transit greenways?

The City of Toronto and the Province of Ontario would both be involved in creating and maintaining greenways. The province is already investing in transit corridors through Metrolinx, and the new Greenhouse Gas Reduction Account has been created specifically to fund green infrastructure projects.

Since the City of Toronto handles our stormwater management infrastructure, it could employ local residents to maintain the rain gardens. Since road reconstruction is required to create rain garden barriers, working them into our light rail transit projects cost-effectively serves local and regional mobility simultaneously. Given that they divert large quantities of water and pollutants from the sewer system, building rain gardens compares favourably to traditional sewer system approaches over the length of their life cycle.

Is there much support for this sort of project? What sorts of challenges or obstacles will greenways face?

Last February, the Ontario Ministry of the Environment and Climate Change issued a brief that expressly supported water management approaches which seek to retain water where it falls. The City of Toronto is developing a Green Streets standard, as well as moving toward the adoption of a stormwater surcharge that is increasingly popular across Ontario.

The City of Mississauga passed a law last year requiring all future capital road projects to consider using rain gardens where appropriate. People are enthusiastic for sustainable approaches that also support local mobility, since they want to live in healthier neighbourhoods more affordably. Enabling these approaches will help Toronto meet its climate targets.

The major political obstacles to greenways lie in the way we frame mobility needs and investments in Toronto. Very often we refer to the needs of drivers, cyclists and pedestrians – as if the same person who has a driver's licence isn't already using a bike or walking! Most people fall into all three groups.

Just like the climate change debate, instead of making evidence-based decisions about right-sizing our mobility choices to the task at hand, we spend time, energy and money funding 'wars' with each other based on opposing cultural views – and getting little built.

If we embrace culturally pluralistic language that detaches practical mobility issues and reasonable climate adaptation strategies from overarching cultural narratives, we will be able to more quickly implement novel strategies to respond to our changing climate.

Look for more from Darnel in our spring print issue available for FREE at our sponsor bike shops, and for purchase online and at independent book stores in Toronto.

Subscribe to dandyhorse today to get dandy at your door this June!

Related on the dandyBLOG:

Council approves bike lane pilot on Bloor Street

Vision Zero supported by public works: So why no love for Bloor bike lanes

Bike lanes on Bloor now one council meeting away from becoming reality

Bike Spotting: Talking Bloor Street bike lanes

From the Horse’s Mouth: Councillor Joe Mihevc goes all out for the minimum grid