Shared posts

16 May 09:13

Twenty Years Ago for Daniel

Daniel Jalkut: Twenty Years

"This year, many new young people will stare down at the relatively meager salary they’ll be earning, sign away their agreement to start in two weeks, and be in for the twenty-year ride of their lives."

May 13th is an interesting day for me as well. Eleven years ago, May 13th was the last day that I would ever be working for The Man.

It's also George's birthday. We must not forget that. Happy birthday George.

16 May 09:09

Disseminating reproducible research is fundamentally a language and communication problem

Just about 10 years ago, I wrote my first of many articles about the importance of reproducible research. Since that article, one of the points I’ve made is that the key issue to resolve was one of tools and infrastructure. At the time, many people were concerned that people would not want to share data and that we had to spend a lot of energy finding ways to either compel or incentivize them to do so. But the reality was that it was difficult to answer the question “What should I do if I desperately want to make my work reproducible?” Back then, even if you could convince a clinical researcher to use R and LaTeX to create a Sweave document (!), it was not immediately obvious where they should host the document, code, and data files.

Much has happened since then. We now have knitr and Markdown for live documents (as well as iPython notebooks and the like). We also have git, GitHub, and friends, which provide free code sharing repositories in a distributed manner (unlike older systems like CVS and Subversion). With the recent announcement of the Journal of Open Source Software, posting code on GitHub can now be recognized within the current system of credits and incentives. Finally, the number of data repositories has grown, providing more places for researchers to deposit their data files.

Is the tools and infrastructure problem solved? I’d say yes. One thing that has become clear is that disseminating reproducible research is no longer a software problem. At least in R land, building live documents that can be executed by others is straightforward and not too difficult to pick up (thank you John Gruber!). For other languages there are many equivalent (if not better) tools for writing documents that mix code and text. For this kind of thing, there’s just no excuse anymore. Could things be optimized a bit for some edge cases? Sure, but the tools are perfectly fine for the vast majority of use cases.

But now there is a bigger problem that needs to be solved, which is that we do not have an effective way to communicate data analyses. One might think that publishing the full code and datasets is the perfect way to communicate a data analysis, but in a way, it is too perfect. That approach can provide too much information.

I find the following analogy useful for discussing this problem. If you look at music, one way to communicate music is to provide an audio file, a standard WAV file or something similar. In a way, that is a near-perfect representation of the music—bit-for-bit—that was produced by the performer. If I want to experience a Beethoven symphony the way that it was meant to be experienced, I’ll listen to a recording of it.

But if I want to understand how Beethoven wrote the piece—the process and the details—the recording is not a useful tool. What I look at instead is the score. The recording is a serialization of the music. The score provides an expanded representation of the music that shows all of the different pieces and how they fit together. A person with a good ear can often reconstruct the score, but this is a difficult and time-consuming task. Better to start with what the composer wrote originally.

Over centuries, classical music composers developed a language and system for communicating their musical ideas so that

  1. there was enough detail that a 3rd party could interpret the music and perform it to a level of accuracy that satisfied the composer; but
  2. it was not so prescriptive or constraining that different performers could not build on the work and incorporate their own ideas

It would seem that traditional computer code satisfies those criteria, but I don’t think so. Traditional computer code (even R code) is designed to communicate programming concepts and constructs, not to communicate data analysis constructs. For example, a for loop is not a data analysis concept, even though we may use for loops all the time in data analysis.

Because of this disconnect between computer code and data analysis, I often find it difficult to understand what a data analysis is doing, even if it is coded very well. I imagine this is what programmers felt like when programming in processor-specific assembly language. Before languages like C were developed, where high-level concepts could be expressed, you had to know the gory details of how each CPU operated.

The closest thing that I can see to a solution emerging is the work that Hadley Wickham is doing with packages like dplyr and ggplot2. The dplyr package’s verbs (filter, arrange, etc.) represent data manipulation concepts that are meaningful to analysts. But we still have a long way to go to cover all of data analysis in this way.
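For readers outside R, the same verb-based style can be sketched in Python with pandas (the dataset and numbers below are made up for illustration, and the pandas methods stand in for dplyr's verbs rather than being dplyr itself):

```python
import pandas as pd

# A tiny, made-up air-quality table (all numbers are invented for illustration).
air = pd.DataFrame({
    "city": ["NYC", "NYC", "LA", "LA"],
    "year": [2014, 2015, 2014, 2015],
    "pm25": [12.0, 9.5, 18.2, 15.1],
})

# Each step below names a data-analysis concept rather than a programming
# construct like a for loop:
result = air.query("year == 2015").sort_values("pm25")        # dplyr: filter(), then arrange()
summary = air.groupby("city", as_index=False)["pm25"].mean()  # dplyr: group_by() + summarise()
```

The point is that each step reads as an analysis concept (filter, arrange, summarize) instead of loops and index bookkeeping.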

Reproducible research is important because it is fundamentally about communicating what you have done in your work. Right now we have a sub-optimal way to communicate what was done in a data analysis, via traditional computer code. I think developing a new approach to communicating data analysis could have a few benefits:

  1. It would provide greater transparency
  2. It would allow others to more easily build on what was done in an analysis by extending or modifying specific elements
  3. It would make it easier to understand what common elements there were across many different data analyses
  4. It would make it easier to teach data analysis in a systematic and scalable way

So, any takers?

16 May 09:08

Developers put their efforts into most urban, most transit-oriented, most expensive ‘hoods

by Frances Bula

No wonder Vancouver is always in an uproar about some development or another. There’s just so much of it in the city, as I was made aware when someone pointed me to the statistics on housing starts in the region.

Vancouver, in spite of being an almost built-out city with very little greenfield (never built on) land available, has the most residential construction going on of any municipality, and by a long shot. So far this year, there are 3,500 units under construction in the city, compared to 1,400 in Burnaby and 900 in Surrey.

That’s because development in Vancouver is so much more certain — for sure, everything will get sold and, for sure, it will be for a good price. Which is what is making the city eternally attractive.

My story in the Globe on all this here. The CMHC report here. And an invaluable source for all things housing, the Metro Vancouver Housing Data Book, here.

16 May 09:08

Our network is full of stories

by openmatt

Our network is full of stories, impact and qualitative data. Colleagues and community members discover and use these narratives daily across a broad range — from communications and social media, to metrics and evaluation, to grant-writing, curriculum case studies, and grist for new outlets like the State of the Web.

Our challenge is: how do we capture and analyze these stories and qualitative data in a more systematic and rigorous way?

Can we design a unified “Story Engine” that serves multiple customers and use cases simultaneously — in ways that knit together much of our existing work? That’s the challenge we undertook in our first “Story Engine 0.1” sprint: document the goals, interview colleagues, and develop personas. Then design a process, ship a baby prototype, and test it out using some real data.

inputs

Designing a network story Engine

Here’s what we shipped in our first 3-day sprint:

  1. A prototype web site. With a “file a story tip” intake process.
  2. A draft business plan / workflow
  3. A successful test around turning network survey data into story leads
  4. Some early pattern-matching / ways to code and tag evidence narratives
  5. Documented our key learnings and next steps

1) A prototype web site

http://mzl.la/story is now a thing! It packages what we’ve done so far, plus a workbench for ongoing work. It includes:

  • “File a story” tip sheet — A quick, easy intake form for filing potential story leads to follow up on. Goal: make it fast and easy for anyone to file the “minimum viable info” we’d need to then triage and follow up. http://mzl.la/tip
  • See stories — See story tips submitted via the tip sheet. (Requires password for now, as it contains member emails.) Just a spreadsheet at this point — it will eventually become a GitHub repo for easier tasking, routing and follow-up. And maybe: a “story garden” with a prettier, more usable interface for humans to browse through and see the people and stories inside our network. http://mzl.la/leads
  • Personas — Who does this work need to serve? Who are the humans at the center of the design? We interviewed colleagues and documented their context, jobs, pains, and gains. Plus the claims they’d want to make and how they might use our findings. Focused on generating quick wins for the Mozilla Foundation grants, State of the Web, communications and metrics teams. http://mzl.la/customers
  • About — Outlining our general thinking, approach and potential methodologies. http://mzl.la/about
  • How-To Guides — (Coming soon.) Will eventually become: interview templates, guidance and training on how to conduct effective interviews, our methodology, and coding structure.

2) A draft business process / workflow 

What happens when a story tip gets filed? Who does what? Where are the decision points? We mapped some of this initial process, including things like: assess the lead, notify the right staff, conduct follow-up interviews, generate writing/ artefacts, share via social, code and analyze the story, then package and use findings.

story-engine-flow

3) Turning network survey data into stories

Our colleagues in the “Insights and Impact” team recently conducted their first survey of the network. These survey responses are rich in potential stories, evidence narratives, and qualitative data that can help test and refine our value proposition.

We tested the first piece of our baby story engine by pulling from the network survey and mapping data we just gathered.

This proved to be rich. It proved that our network surveys are not only great ways to gather quantitative data and map network relationships — they also provide rich leads for our grants / comms / M&E / strategy teams to follow up on.

Sample story leads

(Anonymous for privacy reasons):

  • “The network helps us form connections to small organizations that offer digital media and learning programs. We learn from their practices and are able to share them out to our broader network of over 1600 Afterschool providers in NYC. It also expands our staff capacity to teach Digital Media and Learning activities.”
  • “My passion is youth advocacy and fighting in solidarity with them in their corner. Being part of the network helps me do more with them, like working with libraries in the UK to develop ‘Open source library days’ led by our youths who have so much to share with us all.”
  • “The collaboration has allowed the local community to learn about the Internet and be able to contribute to it. The greatest joy is seeing young community girls being a part of this revolution through clubs. Through the process of learning they also meet local girls who share the same passion as they do.”

These are examples of leads that may be worth following up on to help flesh out theory of change, analyze trends, and tell a story about impact. Some of the leads we gathered also include critique or ways we need to do better — combined with explicit offers to help.

Draft-MLN-codes copy

4) Early pattern-matching / coding and tagging

One of our goals is to combine the power of both qualitative and quantitative data. Out of this can come tagging and codes around the benefit / value the network is providing to members. Some early patterns in the benefits network members are reporting:

  • Support — advice, links to resources, financial support, partners (“matchmaking”)
  • Connections — professional, social
  • Credibility / legitimacy of being associated with Mozilla
  • Belongingness — being part of a group and drawing strength from that
  • Skills / practises / knowhow
  • Employability / “Helped me get a job”
  • Educational opportunity / “Helped me get into school”
  • Entrepreneurship & innovation / developing new models, products, services

Imagine these as simple tags we could apply to story tickets in a repo. This will help colleagues sift, sort and follow up on specific evidence narratives that matter to them. Plus allow us to spot patterns, develop claims, and test assumptions over time.
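As a toy illustration of that tagging idea, here is a hypothetical sketch in Python (the stories and tags below are invented, not real survey data):

```python
# A hypothetical sketch of the tagging idea: each story tip carries a set of
# benefit tags, so colleagues can sift for the evidence narratives that matter
# to them. The stories and tags below are invented, not real survey data.

stories = [
    {"id": 1, "summary": "Afterschool network shares digital-learning practices", "tags": {"connections", "skills"}},
    {"id": 2, "summary": "Community girls' club learns the web together", "tags": {"belongingness", "skills"}},
    {"id": 3, "summary": "Member credits the network for a new job", "tags": {"employability"}},
]

def with_tag(tag):
    """Return the story tips carrying a given benefit tag."""
    return [s for s in stories if tag in s["tags"]]

skills_leads = with_tag("skills")  # story tips 1 and 2
```

In a real repo the same filtering would just be an issue-tracker label search, but the structure is the same: stories plus a shared, recurring vocabulary of tags.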

Value-Proposition-Canvas_03

5) Key Learnings

Some of our “a ha!” moments from this first sprint:

  • Increased empathy and understanding is key. Increasing our empathy for and understanding of network members is a key goal for this work.

This is a key muscle we want to strengthen in MoFo’s culture and systems: the ability to empathize with our members’ aspirations, challenges and benefits.

Regularly exposing staff and community to these stories from our network can ground our strategy, boost motivation, aid our change management process, and regularly spark “a ha” moments.

  • We are rich in qualitative data. We sometimes fall into a trap of assuming that what we observe, hear about and help facilitate is too ephemeral and anecdotal to be useful. In reality, it’s a rich source of data that needs to be systematically aggregated, analyzed, and fed back to teams and partners. Working on processes and frameworks to improve that was illuminating in terms of the quality of what we already have.
  • The network mapping survey is already full of great stories. Our early review and testing proved this thesis — there’s great fodder for evidence narratives / human impact in that data.
  • Connect the dots between existing work. This “story engine” work is not about creating another standalone initiative; the opportunity is to provide some process and connective tissue to good work that is already ongoing.
  • We can start to see patterns emerging in the value members are seeing in the network. We can turn these into a recurring set of themes / tags / codes that can inform our research and feedback loops.

Feedback on the network survey process:

Open-ended questions like: “what’s the value or benefit you get from the network” generate great material.

  • This survey question was a rich vein. (Mozilla Science Lab did not ask this open-ended question about value, which meant we lost an opportunity to gather great stories there — we can’t get story tips when people are selecting from a list of benefits.)
  • Criticism / suggestions for improvement are great. We’re logging people who will likely also have good critiques, not just ra-ra success stories. And (importantly) some of these critiques come with explicit offers to help.
  • Consider adding an open-ended “link or artefact field” to the survey next time. e.g., “Got a link to something cool that you made or documented as part of your interaction with the network?” This could be blog posts, videos, tweets, etc. These can generate easy wins and rich media.

Story Engine 2

What’s next?

We’ve documented our next steps here. Over the last three days, we’ve dug into how to better capture the impact of what we do. We’ve launched the first discovery phase of a design thinking process centred around: “How might we create stories that are also data?”

We’re listening, reviewing existing thinking, digging into people’s needs and context — asking “what if?” Based on the Mozilla Foundation strategy, we’ve created personas, thought about claims they might want to make, pulled from the results of a first round of surveys on network impacts (New York Hive, Open Science Lab, Mozilla Clubs), and created a prototype workflow and tip sheet. Next up: more digging, listening, and prototyping.

What would you focus on next?

If we consider what we’ve done above as version 0.1, what would you prioritize or focus on for version 0.2? Let us know!

 

16 May 09:07

Why Conventional Wisdom Is the Enemy of Marketing Innovation

by Rex Hammock

(via: Hammock.com Idea Email) By the time innovative, creative and insightful marketing trends become conventional marketing wisdom, they are no longer innovative, creative or insightful.

Conventional wisdom is where innovation goes to become institutionalized, codified and organized around an ecosystem of conferences, acronyms and buzzwords. Conventional wisdom is where innovation goes to receive venture funding, branding and a corps of true believers who are willing to master its language, metrics and software platforms.

(Continue reading on Hammock.com…)

16 May 09:06

Love thy coworker; thy work, not necessarily

by Yossi Kreinin

The whole "passionate about work" attitude irks me; save your passion for the bedroom. This is not to say that I'd rather be ridiculed for being a nerd who works too hard.

In fact, personally I'm in a strange position of a certified programming nerd with a blog and code on github, who nonetheless does it strictly for the money (I'd be a lawyer or an i-banker if I thought I could.) I'm thus on both sides of this, kinda.

So, in today's quest to change society through blogging, what am I asking society for, if neither passion nor scorn for work please me? Well, I'd rather society neither encourage nor discourage the love of work, and leave it to the individual's discretion.

From a moral angle, I base my belief on the Biblical commandment, "love thy neighbor", which I think does not dovetail into "love thy work" for a reason. From a practical angle, again I think that one's attitude to coworkers (also managers, customers and other people) is a better predictor of productivity than one's attitude to work.

People talk a lot about intrinsic vs extrinsic motivation – passion vs money – but I think they're actually very similar, and the more important distinction is personal vs social motivation.

Why? Because whether I work for fun or for the money, it's a means to my own personal end, which in itself precludes neither negligence nor fraud on my part. What makes you do the bits of work that are neither fun nor strictly necessary to get paid is that other people need it done, and you don't want to fail them.

Perhaps you disagree with my ideas on motivation. If so, here's an idea on boundaries that I hope is uncontroversial. Telling me how I should feel about my job infringes on my boundaries, which is to say that it's none of your business. If however I do a shoddy job and it becomes your problem, then I'm infringing on your boundaries, so you're absolutely entitled to complain about it. Here again requiring respect for coworkers is sensible, while requiring this or that attitude towards the work itself is not.

So:

  • Someone's attitude towards work does not predict the quality of their work
  • Asking reports and potential hires about their attitude towards work is a minor but unpleasant form of harassment
  • A corporate culture of "we're doing this thing together" beats both "we're passionate to change the world by advancing the state of the art in blah blah" and "we're laser-focused on fulfilling customers' requirements on time and within budget"

P.S. Which kind of culture do managers typically want? Often they're schizophrenic on this. They want "passionate" workers, hoping that they'll accept less money. On the other hand, the same person often doesn't care about the actual work in the worst way (he sucks at it and not having to do it anymore is management's biggest perk to him.) But what he cares about is deadlines, etc. - so he encourages a culture of shipping shit in the hope that it sorts itself out somehow (these are the people that the term "technical debt" was invented for, of course nobody is convinced by this pseudo-businessy term if they weren't already convinced about the underlying idea of "shipping shit is bad.") Of course a truly passionate worker is going to suffer mightily in the kind of culture created by the same manager who thinks he wanted this worker.

16 May 09:02

On Jane Jacobs: Contrary to the Contrarian

by pricetags

Lawrence Solomon responds in the Financial Post to Sam Sullivan’s criticisms of Jane Jacobs (here).

Jacobs FP

.

Jacobs’ critics also marked her 100th anniversary — but by bemoaning her influence. As former Vancouver Mayor Sam Sullivan told CBC, his city is only now beginning to recover from an unfortunate love affair with Jane Jacobs. In his view, her ideologically anti-high-rise, anti-development and anti-change thinking led to sprawl and unaffordable neighbourhoods except for the very few who got in early and then kept others out, profiting at their expense.

“She was very much against towers; she was against the highway, high-rise city,” Sullivan lamented. “Her own home (in Toronto’s Annex) is in an area that should have been densified a lot earlier. One of my favourite moments in life was when I got to sit on the porch with Jane Jacobs and talk about porches. I was such a big fan of hers but over time as I tried to densify Vancouver, I ran into people who were advocates of Jane Jacobs’ ideas and I realized then that there were some negative parts (to her legacy).”

Sullivan is confused, as are her devotees on the left who think she held the anti-development views they hold dear. Jane Jacobs was not “very much against towers.” I was a colleague of hers, and in literally hundreds of conversations that I had with her over a period of 25 years, not once did she ever express an animus towards high rises. Jane did object to high-rise towers for public housing — but then, she objected to all public housing — and she did object to the useless green space that planners forced on high-rise builders, calling such “tower-in-a-park” developments economically sterile. But Jane also thought some areas needed more high-rise towers. Her views on high-rises, as on everything, were never ideological; they were matter-of-fact, geared to solving real-world problems. …

Sullivan seems not to understand that Jane Jacobs’ legacy has been appropriated (misappropriated, really) by those determined to impose the type of results Jane favoured, not realizing her desired results — such as diverse neighbourhoods — can’t be imposed by planners. The misappropriators do harm, too, by failing to understand that Jane’s thinking transcends cities to include economic development and wealth creation generally. Because human ingenuity is unlimited, she believed, the planet is blessed with “unlimited resources.” In a message her fans of today would not want to hear, Jane concluded that planning for resource shortages is not planning for sustainable “economic development at all. It is planning for stagnation.”

.

Full article here.


16 May 09:01

Item From Ian: Pender Looking North

by Ken Ohrn

Frequent contributor Ian sends us this photo, stylized by Google Photos.

Ian

Pender Street looking north, Vancouver


16 May 08:59

Help: why don’t images load in https?

by Doc Searls

For some reason, many or most of the images in this blog don’t load in some browsers. Same goes for the ProjectVRM blog as well. This is new, and I don’t know exactly why it’s happening.

So far, I gather it happens only when the URL is https and not http.

Okay, here’s an experiment. I’ll add an image here in the WordPress (4.4.2) composing window, and center it in the process, all in Visual mode. Here goes:

cheddar3

Now I’ll hit “Publish,” and see what we get.

When the URL starts with https, it fails to show in—

  • Firefox (46.0.1)
  • Chrome (50.0.2661.102)
  • Brave (0.9.6)

But it does show in—

  • Opera (12.16)
  • Safari (9.1).

Now I’ll go back and edit the HTML for the image in Text mode, taking out class=”aligncenter size-full wp-image-10370” from between the img and src attributes, and bracket the whole image with the <center> and </center> tags. Here goes:

cheddar3

Hmm… The <center> tags don’t work, and I see why when I look at the HTML in Text mode: WordPress removes them. That’s new. Thus another old-school HTML tag gets sidelined. 🙁

Okay, I’ll try again to center it, this time by taking out class=”aligncenter size-full wp-image-10370” in Text mode, and clicking on the centering icon in Visual mode. When I check back in Text mode, I see WordPress has put class=”aligncenter” between img and src. I suppose that attribute is understood by WordPress’ (or the theme’s) CSS while the old <center> tags are not. Am I wrong about that?

Now I’ll hit the update button, rendering this—

cheddar3

—and check back with the browsers.

Okay, it works with all of them now, whether the URL starts with https or http.

So the apparent culprit, at least by this experiment, is centering with anything other than class=”aligncenter”. Getting that attribute in place seems to require inserting a centered image in Visual mode, editing out size-full wp-image-whatever (note: whatever is a number that’s different for every image I put in a post) in Text mode, and then going back and centering it in Visual mode, which puts class=”aligncenter” in place of what I edited out in Text mode. Fun.

Here’s another interesting (and annoying) thing. When I’m editing in the composing window, the url is https. But when I “view post” after publishing or updating, I get the http version of the blog, where I can’t see what doesn’t load in the https version. But when anybody comes to the blog by way of an external link, such as a search engine or Twitter, they see the https version, where the graphics won’t load if I haven’t fixed them manually in the manner described above.

So https is clearly breaking old things, but I’m not sure if it’s https doing it, something in the way WordPress works, or something in the theme I’m using. (In WordPress it’s hard — at least for me — to know where WordPress ends and the theme begins.)

Dave Winer has written about how https breaks old sites, and here we can see it happening on a new one as well. WordPress, or at least the version provided for https://blogs.harvard.edu bloggers, may be buggy, or behind the times with the way it marks up images. But that’s a guess.

I sure hope there is some way to gang-edit all my posts going back to 2007. If not, I’ll just have to hope readers will know to take the s out of https and re-load the page. Which, of course, nearly all of them won’t.

It doesn’t help that all the browser makers now obscure the protocol, so you can’t see whether a site is http or https, unless you copy and paste it. They only show what comes after the // in the URL. This is a very unhelpful dumbing-down “feature.”

Brave is different. The location bar isn’t there unless you mouse over it. Then you see the whole URL, including the protocol to the left of the //. But if you don’t do that, you just see a little padlock (meaning https, I gather), then (with this post) “blogs.harvard.edu | Doc Searls Weblog • Help: why don’t images load in https?” I can see why they do that, but it’s confusing.

By the way, I probably give the impression of being a highly technical guy. I’m not. The only code I know is Morse. The only HTML I know is vintage. I’m lost with <span> and <div> and wp-image-whatever, don’t hack CSS or PHP, and don’t understand why <em> is now preferable to <i> if you want to italicize something. (Fill me in if you like.)

So, technical folks, please tell us wtf is going on here (or give us your best guess), and point to simple and global ways of fixing it.

Thanks.

[Later…]

Some answer links, mostly from the comments below:

That last one, which is cited in two comments, says this:

Chris Cree who experienced the same problem discovered that the WP_SITEURL and WP_HOME constants in the wp-config.php file were configured to structure URLs with http instead of https. Cree suggests users check their settings to see which URL type is configured. If both the WordPress address and Site URLs don’t show https, it’s likely causing issues with responsive images in WordPress 4.4.

Two things here:

  1. I can’t see where in Settings the URL type is mentioned, much less configurable. But Settings has a load of categories and choices within categories, so I may be missing it.
  2. I wonder what will happen to old posts I edited to make images responsive. (Some background on that. “Responsive design,” an idea that seems to have started here in 2010, has since led to many permutations of complications in code that’s mostly hidden from people like me, who just want to write something on a blog or a Web page. We all seem to have forgotten that it was us for whom Tim Berners-Lee designed HTML in the first place.) My “responsive” hack went like this: a) I would place the image in Visual mode; b) go into Text mode; and c) carve out the stuff between img and src and add new attributes for width and height. Those would usually be something like width=”50%” and height=”image”. This was an orthodox thing to do in HTML 4.01, but not in HTML 5. Browsers seem tolerant of this approach, so far, at least for pages viewed with the http protocol. I’ve checked old posts that have images marked up that way, and it’s not a problem. Yet. (Newer browser versions may not be so tolerant.) Nearly all images, however, fail to load in Firefox, Chrome and Brave when viewed through https.

So the main questions remaining are:

  1. Is this something I can correct globally with a hack in my own blogs?
  2. If so, is the hack within the theme, the CSS, the PHP, or what?
  3. If not, is it something the übergeeks at Harvard blogs can fix?
  4. If it’s not something they can fix, is my only choice to go back and change every image from the blogs’ beginnings (or just live with the breakage)?
  5. If that’s required, what’s to keep some new change in HTML 5, or WordPress, or the next “best practice” from breaking everything that came before all over again?
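On question 1, one hedged guess: if the root cause is browsers blocking “mixed content” (images referenced by http URLs on a page served over https), then a bulk rewrite of image URLs to the protocol-relative form could fix old posts globally. The sketch below is hypothetical and not a WordPress feature; the host and file names are only examples, and a real version would have to run over the stored posts, after a backup:

```python
import re

# Hypothetical sketch, not a WordPress feature: rewrite insecure image URLs in
# a post's HTML so they also load when the page is served over https. Browsers
# block "mixed content" (http resources on an https page), which matches the
# symptom above. The host and file names here are only examples.

INSECURE_SRC = re.compile(r'src="http://(blogs\.harvard\.edu/[^"]*)"')

def secure_images(html):
    # Protocol-relative URLs ("//host/path") inherit the page's scheme,
    # so the same markup works under both http and https.
    return INSECURE_SRC.sub(r'src="//\1"', html)

post = '<img class="aligncenter" src="http://blogs.harvard.edu/doc/files/cheddar3.jpg" />'
fixed = secure_images(post)
```

Whether this is best done in the theme, a plugin, or directly against the database is exactly the kind of thing the übergeeks would have to weigh in on.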

Thanks again for all your help, folks. Much appreciated. (And please keep it coming. I’m sure I’m not alone with this problem.)

16 May 08:58

Books are back. Only the technodazzled thought they would go away

files/images/books.jpg


Simon Jenkins, The Guardian, May 17, 2016


This is just an example of some of the ridiculous assertions still being published in the traditional media. I realize that opinion columns should represent all perspectives, but the denial of reality should not be one of them. If you go into your local bookstore (if you can find a local bookstore) you'll find it selling knick-knacks, toys, food, and pretty much everything but books. People don't buy Kindles any more because they don't even want another device to read books. I'm sitting in a café right now and nobody is reading print on paper. Writing a column like this is the surest way to undermine your credibility. See also: eBook sales are not falling, despite what publishers say.

[Link] [Comment]
16 May 08:58

Universities must enter the digital age or risk facing irrelevance

files/images/lecture-hall.jpg.size.custom.crop.1086x722.jpg


Don Tapscott, Toronto Star, May 17, 2016


I have to admit that I am impressed by the way Don Tapscott has found something current, used it to reinforce his core message, and released a book with a slew of publicity that is going to keep himself (and his son Alex) employed for some time into the future. This is how you manage your career as a pundit at a high level. And maybe it will even do some good. Tapscott writes, "The digital world is challenging the very notion of a walled-in institution that excludes large numbers of people. Yet the Industrial Age model of education is hard to change. Vested interests fight change. And leaders of old paradigms are often the last to embrace the new." I see this on a daily basis. Time for a change.

[Link] [Comment]
16 May 08:58

The Tension of Swift

Let’s get this out of the way first: I write new code in Swift, both at Omni and in my personal projects, and I’m positively bugged when I have to write Objective-C code.

But Swift makes me anxious for the future, and I think I know why.

It appears to me that the Objective-C runtime will not appear everywhere Swift appears, and it also appears to me that the Objective-C runtime’s days are numbered everywhere. (I hear some of you clapping.)

But the Objective-C runtime allows AppKit and UIKit to provide some powerful features that I’d hate to do without: the Responder Chain, xib and storyboard loading, KVC, and the undo manager.

If you’re writing a Mac or iOS app — even if all your code is in Swift — you rely on these features.

* * *

At a lower level these depend, at least conceptually, on a few things:

  1. respondsToSelector — can you send a given message to a given receiver?

  2. performSelector — actually send a message, with arguments, to a given receiver.

  3. String to Selector conversion.

  4. String to Class conversion.

(Did I miss anything big?)

You do have these things in Swift — but, importantly, only with NSObject-based objects. Only with the Objective-C runtime.
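Concretely, here is a minimal sketch of those four primitives as called from Swift. The `Greeter` class is a hypothetical example of my own, not anything from Apple's frameworks; the point is that every one of these calls works only because `Greeter` inherits from NSObject:

```swift
import Foundation

// Hypothetical example class; @objc exposure is what makes the
// dynamic lookups below possible.
class Greeter: NSObject {
    @objc func greet(_ name: String) -> String {
        return "Hello, \(name)"
    }
}

let greeter = Greeter()

// 3. String-to-Selector conversion.
let sel = NSSelectorFromString("greet:")

// 1. respondsToSelector: can this receiver handle the message?
if greeter.responds(to: sel) {
    // 2. performSelector: actually send the message, with an argument.
    let result = greeter.perform(sel, with: "world")?.takeUnretainedValue()
    print(result as? String ?? "nil") // prints "Hello, world"
}

// 4. String-to-Class conversion (conceptually what xib and storyboard
// loading rely on).
if let cls = NSClassFromString("NSString") {
    print(cls)
}
```

A plain Swift class or struct offers no responds(to:) and no perform(_:) at all; that is exactly the NSObject dependency at issue here.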

* * *

What makes me nervous is Swift’s emphasis on type safety and on compile-time resolution. As long as we also have what we need from Objective-C, then that’s fine — then we still get xibs and storyboards, the Responder Chain, and so on.

But when I read the writing on the wall, I read that that’s not a permanent position. (I could be wrong.)

What’s the replacement, then? How, in other words, do we retain major features that app writers rely on unless Swift itself grows to include those dynamic features from Objective-C?

I have to think that that’s what Swift 4 will be about. If Swift 3 is about building a beautiful language (and I think it is), then Swift 4 should be about standing it up to the needs of app writers, so that it’s not forever tied by the leg to the Objective-C runtime.

I’m nervous. I have some faith, but I’m still nervous. Should I not be?

16 May 08:55

Global Temperature Change in R & D3 (without the vertigo)

by hrbrmstr

This made the rounds on social media last week:

One of the original versions was static and was not nearly as popular, but—as you can see—this one went viral.

Despite the public’s infatuation with circles (I’m lookin’ at you, pie charts), I’m not going to reproduce this polar coordinate visualization in ggplot2. I believe others have already done so (or are doing so) and you can mimic the animation pretty easily with coord_polar() and @drob’s enhanced ggplot2 animation tools.

NOTE: If you’re more interested in the stats/science than a spirograph or colorful D3 animation (below), Gavin Simpson (@ucfagls) has an awesome post with a detailed view of the HadCRUT data set.

HadCRUT in R

I noticed that the original data source had 12 fields, two of which (columns 11 & 12) are the lower and upper bounds of the 95% confidence interval of the combined effects of all the uncertainties described in the HadCRUT4 error model (measurement and sampling, bias and coverage uncertainties). The spinning vis of doom may be mesmerizing, but it only shows the median. I thought it might be fun to try to make a good-looking visualization using the CI as well (you can pick one of the other pairs to try this at home), both in R and then in D3. I chose D3 for the animated version mostly to play with the new 4.0 main branch, but I think it’s possible to do more with dynamic visualizations in D3 than it is with R (and it doesn’t require stop-motion techniques).

The following code:

  • reads in the data set (and saves it locally to be nice to their bandwidth bill)
  • does some munging to get fields we need
  • saves a version out for use with D3
  • uses geom_segment() + geom_point() to do the heavy lifting
  • colors the segments by year using the viridis palette (the Plasma version)
  • labels the plot by decade using facets and some fun facet margin “tricks” to make it look like the x-axis labels are on top

library(readr)    # read_table() / write_csv()
library(dplyr)
library(zoo)      # as.yearmon()
library(ggplot2)  # devtools::install_github("hadley/ggplot2")
library(hrbrmisc) # devtools::install_github("hrbrmstr/hrbrmisc")
library(viridis)

URL <- "http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt"
fil <- sprintf("data/%s", basename(URL))
if (!file.exists(fil)) download.file(URL, fil)

global_temps <- read_table(fil, col_names=FALSE)

global_temps %>%
  select(year_mon=1, median=2, lower=11, upper=12) %>%
  mutate(year_mon=as.Date(as.yearmon(year_mon, format="%Y/%m")),
         year=as.numeric(format(year_mon, "%Y")),
         decade=(year %/% 10) * 10,
         month=format(year_mon, "%b")) %>%
  mutate(month=factor(month, levels=month.abb)) %>%
  filter(year != 2016) -> global_temps

# for D3 vis
write_csv(global_temps, "data/temps.csv")

#+ hadcrut, fig.retina=2, fig.width=12, fig.height=6
gg <- ggplot(global_temps)
gg <- gg + geom_segment(aes(x=year_mon, xend=year_mon, y=lower, yend=upper, color=year), size=0.2)
gg <- gg + geom_point(aes(x=year_mon, y=median), color="white", shape=".", size=0.01)
gg <- gg + scale_x_date(name="Median in white", expand=c(0,0.5))
gg <- gg + scale_y_continuous(name=NULL, breaks=c(0, 1.5, 2),
                              labels=c("0°C", "1.5°C", "2.0°C"), limits=c(-1.6, 2.25))
gg <- gg + scale_color_viridis(option="C")
gg <- gg + facet_wrap(~decade, nrow=1, scales="free_x")
gg <- gg + labs(title="Global Temperature Change (1850-2016)",
                subtitle="Using lower and upper bounds of the 95% confidence interval of the combined effects of all the uncertainties described in the HadCRUT4 error model (measurement and sampling, bias and coverage uncertainties; fields 11 & 12)",
                caption="HadCRUT4 (http://www.metoffice.gov.uk/hadobs/hadcrut4/index.html)")
gg <- gg + theme_hrbrmstr_my(grid="XY")
gg <- gg + theme(panel.background=element_rect(fill="black", color="#2b2b2b", size=0.15))
gg <- gg + theme(panel.margin=margin(0,0,0,0))
gg <- gg + theme(panel.grid.major.y=element_line(color="#b2182b", size=0.25))
gg <- gg + theme(strip.text=element_text(hjust=0.5))
gg <- gg + theme(axis.title.x=element_text(hjust=0, margin=margin(t=-10)))
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.text.y=element_text(size=12, color="#b2182b"))
gg <- gg + theme(legend.position="none")
gg <- gg + theme(plot.margin=margin(10, 10, 10, 10))
gg <- gg + theme(plot.caption=element_text(margin=margin(t=-6)))
gg


My theme_hrbrmstr_my() requires the Myriad Pro font, so you’ll need to use one of the other themes in the hrbrmisc package or fill in some theme() details on your own.

HadCRUT in D3

While the static visualization is pretty, we can kick it up a bit with some basic animations. Rather than make a multi-file HTML+js+D3+CSS example, this is all self-contained (apart from the data) in a single index.html file (some folks asked for the next D3 example to be self-contained).

Some nice new features of D3 4.0 (that I ended up using here):

  • easier to use scales
  • less verbose axis creation
  • viridis is now a first-class citizen

Mike Bostock has spent much time refining the API for D3 4.0 and it shows. I’m definitely looking forward to playing with it over the rest of the year.

The vis is below but you can bust the iframe via https://rud.is/projects/hadcrut4/.

I have it setup as “click to view” out of laziness. It’s not hard to make it trigger on div scroll visibility, but this way you also get to repeat the visualization animation without it looping incessantly.

If you end up playing with the D3 code, definitely change the width. I had to make it a bit smaller to fit it into the blog theme.

Fin

You can find the source for both the R & D3 visualizations on GitHub.

16 May 08:54

The Future of Learning: Digital, Distributed, Data-Driven

George Siemens, elearnspace, May 18, 2016


"We know the names," writes George Siemens. "Vygotsky, Freire, Illich, Papert, and so on. We know the ideas. We know the vision of networks, of openness, of equity, and of a restructured system of learning that begins with learning and the learner rather than content and testing. But why doesn’t the positive change happen?" The answer, he suggests, is that these reformers were not able to integrate their ideas with systems or networks. "Ideas that change things require an integrative awareness of systems, of multiple players, and of the motivations of different agents. It is also required that we are involved in the power-shaping networks that influence how education systems are structured, even when we don’t like all of the players in the network." Ah, it's that last phrase that contains the rub. Will Richardson chimes in with a helpful reference to Sarason in the comments.

[Link] [Comment]
16 May 08:54

Why a staggering number of Americans have stopped using the Internet the way they used to

Andrea Peterson, Washington Post, May 18, 2016


The internet used to be so cool. But now people are afraid to use it. That's the gist of this article in the Washington Post, which cites statistics showing that privacy and security fears are preventing people from using the net the way they'd like to. There are many aspects to this, ranging from spying and intrusion, hacking and identity theft, catfishing and fraud, spam and spoofing, cyberbullying and more. I read from time to time (and people tell me in various meetings) that people aren't really so concerned about these issues, that it's the new reality. I don't buy it. I think people crave a secure and safe internet.

[Link] [Comment]
16 May 08:54

Interested In The Moodle Academy?

Moodle News, May 18, 2016


In what looks like "a direct response to the Canvas Network," Moodle's Gavin Henrick has announced the Moodle Academy, "a centralized MOOC hosting platform run and managed by Moodle. This is for institutions or Moodlers who want to hold a MOOC but don’t necessarily have the bandwidth to deal with the short term (massive) hit on their internal resources from a much heavier user load." Unless this platform is open in ways I don't know about, this appears to me to be more of a response to things like MoodleRooms. I'm guessing it will be located here - http://www.moodleacademy.org/ - since we have a pre-splash page Moodle install (and http://www.moodleacademy.com is still up for sale). But hey, I've been wrong before. Anyhow, the biggest problem for the use of Moodle with MOOCs has always been the need to sign in to do anything - for example, to access this page to ask Gavin Henrick for more information. See the rest of the Moodle Moot keynotes here.

[Link] [Comment]
16 May 08:54

The Future of Work and Learning 2: Chatbots

Jane Hart, Learning in the Modern Social Workplace, May 18, 2016


Ah, chatbots. Everyone's favourite potential robot teacher. They've been around for a while - here's me interviewing one that ran for president in 2000. Today they're a lot more sophisticated, sometimes even passing the Turing test (here, here and here). But bots come in all shapes and sizes and, as this article suggests, are already pervasive. "There are already bots for property searches and getting up-to-date news, as well as for booking hotels.... Esther created her own resume bot.... there is now talk of the “conversational office” (which Slack is spearheading) and how messaging bots will change workplace productivity over the next five years."

[Link] [Comment]
16 May 08:53

An international education strategy with no vision

Craig Whitsed, Wendy Green, University World News, May 18, 2016


This is a review of the recently released Australian National Strategy for International Education 2025 (40-page PDF) and it is not a positive one. Australia has been noted in recent years for an explicit focus on revenue generation from international education, and this report represents a continuation of that strategy. "The strategy has three pillars: strengthening the fundamentals, transformative partnerships and competing globally. To operationalise these pillars, the Australian government will provide A$12 million (US$8.8 million) over four years." Without commenting on the objective, I find this a small amount of money to support such wide objectives, in particular given "the closure of the Office for Learning and Teaching – the major source of funding for teaching innovation in Australian higher education."

[Link] [Comment]
16 May 08:53

Whatever happened to … Logo?

Doug Peterson, doug – off the record, May 18, 2016


Logo was an almost magical tool in its time, designed to help students learn to program. "Of course, it wasn’t 'real programming,'" writes Doug Peterson. "That was reserved for the assignments given in class. This was just fun, trying to design the most intricate things that we could." As it turns out, though, it was 'real computing' - more real than the other sort. Today, students have many more options for programming creativity. "Students might get a chance to learn using Lego Mindstorms or any of the other languages that have been created with developing coders in mind - Hopscotch, Scratch, and so much more. With the right budget, you might even get a programmable device like Sphero." For me, languages like Basic and C were my toys, and I created games.

[Link] [Comment]
16 May 08:53

Density Mandated In GTA

by Ken Ohrn

San Grewal in the Toronto Star writes about an announcement by the Province of Ontario of impending legislation intended to curtail sprawl and its inherently crippling cost.

With 3.5 million people set to move into the Greater Toronto and Hamilton Area over the next 25 years, the province is promising sweeping changes to manage smart growth and curb urban sprawl that’s crippling the region.

“There are challenges that have been before us for the last number of years,” said Ontario Finance Minister Charles Sousa, who was joined by three other ministers at an announcement in Mississauga to outline broad new measures the province is taking to properly manage future growth. “It was neglected for far too long in previous regimes.” ...

  • Requiring “pre-zoning” along transit corridors to guarantee dense development if cities want to get future transit funding.
  • Ensuring that at least 60 per cent of all new residential developments in municipalities are in existing “built-up” areas.
  • Substantially increasing employment density so greenfield spaces within cities can’t be eaten up by things such as sprawling warehouses.

Let the pearl-clutching begin, as a precedent for density increase city-wide looks like it has been set in Ontario.

Of particular interest to me is Ontario’s tie-in between transit planning, land use and infrastructure investments, given the narrow transit funding tussle now in play in Metro Vancouver. Not to mention Ontario’s Greenbelt protection, in the light of BC’s apparent intention to enable good ol’ sprawl onto our ALR and elsewhere with a 1950s debate-free program of building freeways and massive bridges. BC may have some sort of plan, but I’m not sure what it is.

A broader look at the Ontario Gov’t material is HERE, and it pertains to shaping land use in the entire “Greater Golden Horseshoe” around Toronto. Driven, it seems, by Ontario Prov gov’t plans for some $31.5 B in transit investments, this represents steps towards a green and livable region, while making best use of the money.

Building Complete Communities

Whether they are urban, suburban or rural, complete communities share many common characteristics. They are places where homes, jobs, schools, community services, parks and recreation facilities are easily accessible. Complete communities encourage active transportation, like walking or biking, support public transit, and provide opportunities for people to connect with one another.

Complete communities are more compact, occupy less land, reduce the costs of infrastructure and offer access to healthy local food. They also provide a range of employment opportunities and a mix of housing that offers a range of affordability. With all of these characteristics, complete communities contribute significantly to a high quality of life.

Ontario


16 May 08:53

An Interview about Encryption

by Eugene Wallingford

A local high school student emailed me last week to say that he was writing a research paper about encryption and the current conversation going on regarding its role in privacy and law enforcement. He asked if I would be willing to answer a few interview questions, so that he could have a few expert quotes for his paper. I'm always glad when our local schools look to the university for expertise, and I love to help young people, so I said yes.

I have never written anything here about my take on encryption, Edward Snowden, or the FBI case against Apple, so I figured I'd post my answers. Keep in mind that my expertise is in computer science. I am not a lawyer, a political scientist, or a philosopher. But I am an informed citizen who knows a little about how computers work. What follows is a lightly edited version of the answers I sent the student.

  1. Do you use encryption? If so, what do you use?

    Yes. I encrypt several disk images that hold sensitive financial data. I use encrypted files to hold passwords and links to sensitive data. My work laptop is encrypted to protect university-related data. And, like everyone else, I happily use https: when it encrypts data that travels between me and my bank and other financial institutions on the web.

  2. In light of the recent news on groups like ISIS using encryption, and the Apple v. Department of Justice, do you support legislation that eliminates or weakens powerful encryption?

    I oppose any legislation that weakens strong encryption for ordinary citizens. Any effort to weaken encryption so that the government can access data in times of need weakens encryption for all people at all times and against all intruders.

  3. Do you think the general good of encryption (protection of data and security of users) outweighs or justifies its usage when compared to the harmful aspects of it (being used by terrorists groups or criminals)?

    I do. Encryption is one of the great gifts that computer science has given humanity: the ability to be secure in one's own thoughts, possessions, and communication. Any tool as powerful as this one can be misused, or used for evil ends.

    Encryption doesn't protect us from only the U.S. government acting in good faith. It protects people from criminals who want to steal our identities and our possessions. It protects people from the U.S. government acting in bad faith. And it protects people from other governments, including governments that terrorize their own people. If I were a citizen of a repressive regime in the Middle East, Africa, Southeast Asia, or anywhere else, I would want the ability to communicate without intrusion from my government.

    Those of us who are lucky to live in safer, more secure circumstances owe this gift to the people who are not so lucky. And weakening it for anyone weakens it for everyone.

  4. What is your response to someone who justifies government suppression of encryption with phrases like "What are you hiding?" or "I have nothing to hide."?

    I think that most people believe in privacy even when they have nothing to hide. As a nation, we do not allow police to enter our homes at any time for any reason. Most people lock their doors at night. Most people pull their window shades down when they are bathing or changing clothes. Most people do not have intimate relations in public view. We value privacy for many reasons, not just when we have something illegal to hide.

    We do allow the police to enter our homes when executing a search warrant, after the authorities have demonstrated a well-founded reason to believe it contains material evidence in an investigation. Why not allow the authorities to enter our digital devices under similar circumstances? There are two reasons.

    First, as I mentioned above, weakening encryption so that the government can access data in times of legitimate need weakens encryption for everyone all the time and makes them vulnerable against all intruders, including bad actors. It is simply not possible to create entry points only for legitimate government uses. If the government suppresses encryption in order to assist law enforcement, there will be disastrous unintended side effects to essential privacy of our data.

    Second, our digital devices are different than our homes and other personal property. We live in our homes and drive our cars, but our phones, laptops, and other digital devices contain fundamental elements of our identity. For many, they contain the entirety of our financial and personal information. They also contain programs that enact common behaviors and would enable law enforcement to recreate past activity not stored on the device. These devices play a much bigger role in our lives than a house.

  5. In 2013 Edward Snowden leaked documents detailing surveillance programs that overstepped boundaries spying on citizens. Do you think Snowden became "a necessary evil" to protect citizens that were unaware of surveillance programs?

    Initially, I was unsympathetic to Snowden's attempt to evade detainment by the authorities. The more I learned about the programs that Snowden had uncovered, the more I came to see that his leak was an essential act of whistleblowing. The American people deserve to know what their government is doing. Indeed, citizens cannot direct their government if they do not know what their elected officials and government agencies are doing.

  6. In 2013 to now, the number of users that are encrypting their data has significantly risen. Do you think that Snowden's whistleblowing was the action responsible for a massive rise in Americans using encryption?

    I don't know. I would need to see some data. Encryption is a default in more software and on more devices now. I also don't know what the trend line for user encryption looked like before his release of documents.

  7. Despite recent revelations on surveillance, millions of users still don't voluntarily use encryption. Do you believe it is fear of being labeled a criminal or the idea that encryption is unpatriotic or makes them an evil person?

    I don't know. I expect that there are a number of bigger reasons, including apathy and ignorance.


  8. Default encryption on devices like iPhones, where the device is encrypted while locked with a passcode, is becoming the norm. Do you support the usage of default encryption and believe it protects users who aren't computer savvy?

    I like encryption by default on my devices. It comes with risks: if I lose my password, I lose access to my own data. I think that users should be informed that encryption is turned on by default, so that they can make informed choices.

  9. Should default encryption become required by law or distributed by the government to protect citizens from foreign governments or hackers?

    I think that we should encourage people to encrypt their data. At this point, I am skeptical of laws that would require it. I am not a legal scholar and do not know that the government has the authority to require it. I also don't know if that is really what most Americans want. We need to have a public conversation about this.

  10. Do you think other foreign countries are catching up or have caught up to the United States in terms of technical prowess? Should we be concerned?

    People in many countries have astonishing technical prowess. Certainly individual criminals and other governments are putting that prowess to use. I am concerned, which is one reason I encrypt my own data and encourage others to do so. I hope that the U.S. government and other American government agencies are using encryption in an effort to protect us. This is one reason I oppose the government mandating weakness in encryption mechanisms for its own purposes.

  11. The United States government disclosed that it was hacked and millions of employees’ information was compromised. Target suffered a breach that resulted in credit card information being stolen. Should organizations and companies be legally responsible for breaches like these? What reparations should they make?

    I am not a lawyer, but... Corporations and government agencies should take all reasonable precautions to protect the sensitive data they store about their customers and citizens. I suspect that corporations are already subject to civil suit for damages caused by data breaches, but that places burdens on people to recover damages for losses due to breached data. This is another area where we as a people need to have a deeper conversation so that we can decide to what extent we want to institute safeguards into the law.

  12. Should the US begin hacking into other countries’ infrastructures and businesses to potentially damage those countries in the future or steal trade secrets, similar to what China has done to us?

    I am not a lawyer or military expert, but... In general, I do not like the idea of our government conducting warfare on other peoples and other governments when we are not in a state of war. The U.S. should set a positive moral example of how a nation and a people should behave.

  13. Should the US be allowed to force companies and corporations to create backdoors for the government? What do you believe would be the fallout from such an event?

    No. See the third paragraph of my answer to #4.

As I re-read my answers, I realize that, even though I have thought a lot about some of these issues over the years, I have a lot more thinking to do. One of my takeaways from the interview is that the American people need to think about these issues and have public conversations in order to create good public policy and to elect officials who can effectively steward the government in a digital world. In order for this to happen, we need to teach everyone enough math and computer science that they can participate effectively in these discussions and in their own governance. This has big implications for our schools and science journalism.

16 May 08:52

The Pleasure of Reading

In a collection of memoirs on The Pleasure of Reading, Gita Mehta remembers how

In darkened dormitories with the monsoon rain beating so heavily on the tin roof it almost drowned our sobs of homesickness we could be tricked out of loneliness by a teacher reading aloud stories of Harry the Horse and his fellow citizens playing games of chance; or the fortunes of Mrs Bennett’s daughters; or Mehitabel signing off to her cockroach with the inspiring sentiment: ‘Toujours gai, Archy. Toujours gai.’

When my father went off to the war, this is what he smuggled into his duffel: Damon Runyon’s stories and Don Marquis’ Archy and Mehitabel. Dad took Notes From Underground rather than Pride and Prejudice, perhaps anticipating that Jane Austen’s tales were not really ideal for the South Pacific. I think that, later, he’d have swapped Alice in Wonderland for Dostoevsky.

Still, one sees little enough Damon Runyon these days, and hardly anyone talks about Archy and Mehitabel. It’s amazing to find in an essay that begins:

Sahib. Latest from Plato. The Republic. Also, James Hadley Chase and P. G. Wodehouse. You want Catcher in the Rye, sahib? MAD magazine? But sahib, just now unpacked. At least sample Little Dorrit by Charles Dickens.
16 May 08:52

Great insights can be found anywhere, even in a movie review.

by Stowe Boyd

What we call the present is really the past. Life is only what we remember, and all of us are soon forgotten.

Continue reading on Stowe Boyd and the Messengers »

16 May 08:52

Let’s use dots instead of dashes

by Stowe Boyd

I am going to switch over to using dots (‘.’) instead of dashes (‘-’) in hyphenated words.

Continue reading on Stowe Boyd and the Messengers »

16 May 08:52

What’s Going On With Retail?

by Stowe Boyd

Steven Sinofsky looks at the collapsing world of retail and sees disruption’s fingerprints everywhere. It’s all true as far as it goes. But…

Continue reading on Stowe Boyd and the Messengers »

16 May 08:52

America is not a young land

From my friend Albert: Racism is the bogeyman

"America is not a young land: it is old and dirty and evil before the settlers, before the Indians. The evil is there waiting."

16 May 08:52

Something big is apparently coming to Canada

by Ian Hardy

We received this piece of marketing today. It’s a small piece of paper that says “Something Big Is Coming” with the #showingyoumore hashtag. We’re thinking it has to do with Sony’s upcoming Xperia lineup this summer, but it might also be the Note 6. Or maybe it’s something completely different, and not hardware at all.

Thoughts?

16 May 08:52

Rogers Galaxy Note 5 now expected to receive Marshmallow update May 31

by Ian Hardy

Rogers customers with the Samsung Galaxy Note 5 have been patiently waiting for the update to Marshmallow.

Several carriers, including Bell, Telus and Wind Mobile, have pushed the update to their customers, while Note 5 owners on Rogers have been faced with numerous delays. Rogers does state on its OS Update Schedule page that, “Making sure these dates are accurate is our top priority. Sometimes, things change beyond our control; so all dates listed are subject to change. Thanks for understanding.”

Last week, Rogers listed the Note 5 to receive Android 6.0 Marshmallow on May 25th. Unfortunately, that date has now been pushed back to May 31st, the end of the month.

In addition, Galaxy S5 owners are still on track for Marshmallow to arrive on May 25th.

OS Upgrade Schedule

16 May 08:51

Samsung will reportedly release five Galaxy flagship smartphones in 2017, including a foldable device

by Ian Hardy

Samsung has been working on a smartphone with a foldable display for a number of years. Early roadmaps revealed the company would break the “technology barrier” necessary to start producing a foldable smartphone at scale by the end of 2016 or early 2017. Today, a new report suggests Samsung will launch five Galaxy flagship smartphones in 2017, including, at long last, its first foldable device.

The five new smartphones pegged for 2017 are the Galaxy S8, Galaxy S8 edge, Galaxy Note 7, Galaxy Note 7 edge, and something new called the Galaxy X. The standard Samsung flagships are rumoured to include 2.5K resolution AMOLED displays. As for the mysterious Galaxy X, this is Samsung’s foldable smartphone. It will reportedly feature a 4K display.

While the Galaxy X is probably not the go-to-market name for the new foldable device, earlier rumours pointed towards Samsung working on a 5-inch smartphone with a curved OLED display that transformed into a 7-inch tablet.

Back in 2011, Samsung demoed a flexible display with an 800 x 480 pixel resolution that was dubbed the “world’s thinnest resolution Flexible AMOLED display,” measuring in at 0.27mm thick.

In addition, Queen’s University recently showed off ReFlex, the world’s first full-colour, high-resolution flexible smartphone.

Related: Hands-on with ReFlex

Via Weibo 
16 May 08:51

Sunk

mkalus shared this story.



May 14, 2016 at 03:58PM
via 500px: http://ift.tt/1XrEL8C