Shared posts

01 May 17:48

Review: Fairphone 2 – What Price a Conscience?

by Steve Litchfield
For the true geek, especially one who values the planet we all exist on, the Fairphone 2 is perhaps the ultimate smartphone. That it’s demonstrably not the best ‘bang per buck’ or indeed the prettiest phone out there is almost irrelevant in light of the game-changing Unique Selling Point here – you can take the Fairphone apart. Officially. Continue reading →
01 May 18:55

"Busybot: A ‘Parasitic’ Task Management Tool For Slack" in Work Futures

by Stowe Boyd

Because Busybot and Slack look so much alike and are so tightly connected, I avoid the cognitive costs of switching.

Continue reading on Medium »

01 May 18:56

Photo



02 May 01:20

Pray, The Sequel

02 May 01:21

Unbundled

by Chris Saad

How the breaking apart of traditional, rigid structures is creating a personalized, on-demand future and changing the everyday interactions of people, politics, and profit.

About this post

This post is based on a theory and a book outline I’ve been chipping away at since 2010. Since I’m probably going to be too busy to ever finish the full thing, I figured I would massively truncate and post it here so that it’s finally out in the world in some form. In the six years I’ve been thinking about this subject, it’s only become clearer with the advent of the on-demand economy, 3D printing etc. Please excuse the length!

Introduction

In Silicon Valley we’ve used the term “Unbundling” to describe the phenomenon of mobile apps breaking apart into multiple separate apps, each essentially providing more focused, single-purpose features. Think of the Facebook app being separated into Facebook + Messenger.

I believe this Unbundling phenomenon is happening almost universally across all aspects of life. It’s a meta-trend that has been happening for decades (or more) and will continue for decades to come. It’s a common process affecting many of the things happening in the world today. In fact most of the major disruptions we see (loss of traditional jobs, failing record companies, terrorism, divorce rates, the rise of fringe/underdog political candidates etc) are all, in at least some way, connected to this fundamental transition.

See the full post on Medium

02 May 00:40

notes on debian packaging for ubuntu

I’ve devoted some time over the past month to learning how to create distribution packages for Ubuntu, using the Debian packaging system. This has been a longstanding interest for me since Dane Springmeyer and Robert Coup created a Personal Package Archive (PPA) to easily and quickly distribute reliable versions of Mapnik. It’s become more important lately as programming languages have developed their own mutually-incompatible packaging systems like npm, RubyGems, or PyPI, while developer interest has veered toward container-style technology such as Vagrant or Docker. My immediate interest comes from an OpenAddresses question from Waldo Jaquith: what would a packaged OA geocoder look like? I’ve watched numerous software projects create Vagrantfile or Dockerfile scripts before, only to let those fall out of date and eventually become a source of unanswerable help requests. OS-level packages offer a stable, fully-enclosed download with no external dependencies on 3rd party hosting services.

The source for the package I’ve generated can be found under openaddresses/pelias-api-ubuntu-xenial on Github. The resulting package is published under ~openaddresses on Launchpad.

What does it take to prepare a package?

The staff at Launchpad, particularly Colin Watson, have been helpful in answering my questions as I moved from getting a tiny “hello world” package onto a PPA to wrapping up Mapzen’s worldwide geocoder software, Pelias.

I started with a relatively simple tutorial from Ask Ubuntu, where a community member steps you through each part of wrapping a complete Debian package from a single shell script source. This mostly worked, but there are a few tricks along the way. The various required files are often shown inside a directory called “DEBIAN”, but I’ve found that it needs to be lower-case “debian” in order to work with the various preparation scripts. The control file, debian/control, is the most important one, and has a set of required fields arranged in stanzas that must conform to a particular pattern. My first Launchpad question addressed a series of mistakes I was making.
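For reference, a minimal sketch of a debian/control file for a hello-world package (names, versions and maintainer details are illustrative, not my actual package):

Source: hellodeb
Section: misc
Priority: optional
Maintainer: Jane Doe <jane@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.7

Package: hellodeb
Architecture: all
Depends: ${misc:Depends}
Description: minimal hello-world package
 A longer description goes here, indented by a single space.

The blank line matters: it separates the source stanza from the binary package stanza.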

The file debian/changelog was the second challenge. It needs to conform to an exact syntax, and it’s easiest to have the utility dch (available in the devscripts package) do it for you. You need to provide a meaningful version number, release target, notes, and signature for this file to work. The release target is usually a Debian or Ubuntu codename, in this case “xenial” for Ubuntu’s 16.04 Xenial Xerus release. The version number is also tricky; it’s generally assumed that a package maintainer is downstream from the original developer, so the version number will be a combination of upstream version and downstream release target, in my case Pelias 2.2.0 + “ubuntu” + “xenial”.
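A sketch of a dch invocation and the changelog entry it produces (package name, version and identity are illustrative):

dch --create --package hellodeb --newversion 2.2.0-ubuntu1~xenial1 --distribution xenial "Initial release."

hellodeb (2.2.0-ubuntu1~xenial1) xenial; urgency=medium

  * Initial release.

 -- Jane Doe <jane@example.com>  Mon, 02 May 2016 14:00:00 -0700

The trailer line carries the maintainer identity and timestamp; debsign uses it to choose a signing key unless you pass one explicitly with -k.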

A debian/rules file is also required, but it seems to be sufficient to use a short default file that calls out to the Debian helper script dh.
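That default file, in its entirety, is just a makefile that delegates every target to dh (the recipe line must be indented with a tab):

#!/usr/bin/make -f
%:
	dh $@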

I have not been able to determine how to test my debian directory package locally, but I have found that the emails sent from Launchpad after using dput to post versions of my packages can be helpful when debugging. I tested with a simple package called “hellodeb”; here is a complete listing of each attempt I made to publish this package in my “hello” PPA as I learned the process.

My second Launchpad question concerned the contents of the package: why wasn’t anything being installed? The various Debian helper scripts try to do a lot of work for you, and as a newcomer it’s sometimes hard to guess where it’s being helpful, and where it’s subtly chastising you for doing the wrong thing. For example, after I determined that including an “install” make target in the default project Makefile that wrote files to $DESTDIR was the way to create output, it turned out that my attempt to install under /usr/local was being thwarted by dh_usrlocal, a script which enforces the Linux filesystem standard convention that only users should write files to /usr/local, never OS-level packages. In the end, while it’s possible to simply list everything in debian/install, it seems better to do that work in a more central and easy-to-find Makefile.
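For illustration, a sketch of such an install target (the file layout here is hypothetical, not the actual Pelias one):

install:
	mkdir -p $(DESTDIR)/usr/lib/pelias-api
	cp -r node_modules lib index.js $(DESTDIR)/usr/lib/pelias-api/

The helper scripts invoke this with DESTDIR pointing at a staging directory, then package up whatever lands there.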

Finally, I learned through trial-and-error that the Launchpad build system prevents network access. Since Pelias is written in Node, it is necessary to send the complete code along with all dependencies under node_modules to the build system. This ensures that builds are more predictable and reliable, and circumvents many of the SNAFU situations that can result from dynamic build systems.
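In practice that means a vendoring step before building the source package; a sketch, assuming a standard npm project:

# populate node_modules with all runtime dependencies
npm install --production

# build the source package; node_modules travels along inside it
debuild -k'8CBDE645' -S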

Rolling a new release is a four step process:

  1. Create a new entry in the debian/changelog file using dch, which will determine the version number.
  2. From inside the project directory, run debuild -k'8CBDE645' -S (“8CBDE645” is my GPG key ID, used by Launchpad to be sure that I’m me) to create a set of files with names like pelias-api_2.2.0-ubuntu1~xenial5_source.*.
  3. From outside the project directory, run dput ppa:migurski/hello "pelias-api_2.2.0-ubuntu1~xenial5_source.changes" to push the new package version to Launchpad.
  4. Wait.

Now, we’re at a point where a possible Dockerfile is much simpler.

02 May 00:00

Why Understanding These Four Types of Mistakes Can Help Us Learn

[Image: Types of Mistakes chart]


Eduardo Briceño, MindShift, May 03, 2016


"Mistakes are not all created equal," writes the author, "and they are not always desirable. In addition, learning from mistakes is not all automatic. In order to learn from them the most we need to reflect on our errors and extract lessons from them."  Eduardo Briceñ o makes this point clear by identifying four types of mistakes, two of which can be seen as beneficial, and two of which really should be avoided.

[Link] [Comment]
02 May 00:00

The ‘Maker’ Movement: Understanding What the Research Says


Benjamin Herold, EdWeek Market Brief, May 03, 2016


The Maker movement began as a free-form exercise. "Typically, 'Making' involves attempting to solve a particular problem, creating a physical or digital artifact, and sharing that product with a larger audience. Often, such work is guided by the notion that process is more important than results." But as it began to be applied more in schools, it began to evolve. Diversity and inclusiveness became more important, and questions began to be asked about what was learned. This article is a good overview of some of the recent research. And it's interesting to compare the similarities between the evolution of MOOCs and the evolution of making.

[Link] [Comment]
02 May 00:00

Fifty shades of open


Jeffrey Pomerantz, Robin Peek, First Monday, May 03, 2016


This could have been much more appropriately titled, but the content of the piece is spot on. Specifically:

Open means rights
Open means access
Open means use
Open means transparent
Open means participatory
Open means enabling openness
Open means philosophically aligned with open principles

[Link] [Comment]
02 May 07:59

BlueGriffon officially recommended by the French Government

by BlueGriffon

TL;DR: BlueGriffon is now officially recommended as the HTML editor for the French Administration in its effort to rely on and promote Free Software!

I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016!!! You will find the official list of recommended software here (pdf document).

02 May 07:01

Smartphones – The second derivative.

by windsorr


Gap between ecosystem and hardware to increase this year. 

  • As the slowdown in the smartphone market is more severe than even I had expected, Xiaomi is looking like it is in real trouble.

Smartphone and ecosystem

  • Q1 16A smartphone shipments look like they have been flat or declined by as much as 3% to around 340m units, compared to 344m units in Q1 15A.
  • This is below RFM’s forecast of 1% growth and substantially below that which I believe most commentators and the technology industry were expecting.
  • The problem with the flattening of the market is that handset makers will have to fight even harder to find growth resulting in even greater pricing pressure.
  • This means that in revenue terms the handset market could decline by 5-10% this year.
  • This is great for consumers and for the ecosystem companies that want smartphones in the hands of as many users as possible, but for the hardware makers it is disastrous.
  • All handset makers with the exception of Samsung and Apple are barely breaking even and this added pressure could push more of them into loss making territory.
  • Consequently, I expect that this year will see an acceleration of the shakeout as the smaller companies realise that they have no hope of ever making a decent return by making commodity Android handsets.
  • This further increases my preference for the ecosystem companies as their addressable markets will keep growing despite the stagnation in the handset market.
  • The addressable market for an ecosystem is smartphone users which RFM forecasts will grow by 14% this year to 2.82bn users from 2.46bn at the end of 2015.
  • This is how the likes of Google, Facebook, Baidu, Tencent and so on will be able to post good growth this year despite the hardships being endured by hardware.

Xiaomi

  • The two exceptions to this are Apple and Xiaomi, both of which have decided to monetise their ecosystem by selling hardware.
  • However, that is where the similarity ends: despite its growth issues, Apple is still fantastically profitable.
  • Xiaomi, on the other hand, is not, and this is the third quarter in a row in which it has lost market share.
  • To add insult to injury, it is also no longer number 1 in its home market of China, having been overtaken by both a resurgent Huawei and Oppo.
  • This leads me to believe that Xiaomi has no money to invest in its ecosystem, which will result in it falling further behind Baidu, Tencent and Alibaba, and even China Mobile.
  • For Xiaomi, 2015 was a year in which it grew, but not as much as it had promised, while 2016 is looking like one in which revenues could decline by as much as 10%.
  • For a company that last raised money at a $46bn valuation on the promise of very rapid growth, this is a dreadful outcome, as Xiaomi badly needs to invest in its ecosystem, has no money to do so and will have great difficulty in raising any more.
  • To compound its problems it also appears that usage of its ecosystem is waning (see here) which means that the loyalty of its users to its devices may also be in decline.
  • This will further hamper profitability making the outlook for Xiaomi very difficult indeed.
  • I continue to believe that any investor that can offload his shares in Xiaomi at a valuation of $27bn will be doing very well indeed.
02 May 10:18

When Documents Become Databases – Tabulizer R Wrapper for Tabula PDF Table Extractor

by Tony Hirst

Although not necessarily the best way of publishing data, data tables in PDF documents can often be extracted quite easily, particularly if the tables are regular and the cell contents reasonably spaced.

For example, official timing sheets for F1 races are published by the FIA as event and timing information in a set of PDF documents containing tabulated timing data:

[Screenshot: Best Sector Times PDF, page 1]

In the past, I’ve written a variety of hand crafted scrapers to extract data from the timing sheets, but the regular way in which the data is presented in the documents means that they are quite amenable to scraping using a PDF table extractor such as Tabula. Tabula exists as both a server application, accessed via a web browser, or as a service using the tabula extractor Java application.

I don’t recall how I came across it, but the tabulizer R package provides a wrapper for tabula extractor (bundled within the package) that lets you access the service via its command line calls. (One dependency you do need to take care of is to have Java installed; adding Java into an RStudio docker container would be one way of taking care of this.)

Running the default extractor command on the above PDF pulls out the data of the inner table:

extract_tables('Best Sector Times.pdf')

[Screenshot: sector times extracted from the FIA PDF]

Where the data is spread across multiple pages, you get a data frame per page.

[Screenshot: Lap Analysis PDF, page 3 of 8]

Note that the headings for the distinct tables are omitted. Tabula’s “table guesser” identifies the body of the table, but not the spanning column headers.

The default settings are such that tabula will try to scrape data from every page in the document.

[Screenshot: data scraped from every page of the document]

Individual pages, or sets of pages, can be selected using the pages parameter. For example:

  • extract_tables('Lap Analysis.pdf', pages=1)
  • extract_tables('Lap Analysis.pdf', pages=c(2,3))

Specific areas for scraping can also be specified using the area parameter:

extract_tables('Lap Analysis.pdf', pages=8, guess=F, area=list(c(178, 10, 230, 500)))

The area parameter originally appeared to take co-ordinates in the form top, left, width, height, but it is now fixed to take co-ordinates in the same form as those produced by the tabula app debug console: top, left, bottom, right.

You can find the necessary co-ordinates using the tabula app: if you select an area and preview the data, the selected co-ordinates are viewable in the browser developer tools console area.

[Screenshot: Tabula console showing the selection co-ordinates]

The tabula console output gives co-ordinates in the form top, left, bottom, right, so they can now be passed straight through as the tabulizer area argument.

[Screenshot: header area scraped from the FIA PDF]

Using a combination of “guess” to find the dominant table, and specified areas, we can extract the data we need from the PDF and combine it to provide a structured and clearly labeled dataframe.
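A minimal sketch of that combination (the page number and header co-ordinates here are hypothetical):

library(tabulizer)

# dominant table on the page, found by the table guesser
body <- extract_tables('Lap Analysis.pdf', pages=3)

# spanning header grabbed from an explicitly specified area
hdr <- extract_tables('Lap Analysis.pdf', pages=3, guess=F,
                      area=list(c(80, 10, 120, 500)))

# label the body columns with the scraped header text
df <- as.data.frame(body[[1]], stringsAsFactors=FALSE)
names(df) <- hdr[[1]][1,]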

On my to do list: add this data source recipe to the Wrangling F1 Data With R book…


02 May 09:26

Lower socioeconomic status linked to lower education attainment

by Nathan Yau

Money, race, and success

The Upshot highlights research from the Stanford Center for Education Policy Analysis that looks into the relationship between a child’s parents’ socioeconomic status and their educational attainment. Researchers focused on test scores per school district in the United States.

Children in the school districts with the highest concentrations of poverty score an average of more than four grade levels below children in the richest districts.

Even more sobering, the analysis shows that the largest gaps between white children and their minority classmates emerge in some of the wealthiest communities, such as Berkeley, Calif.; Chapel Hill, N.C.; and Evanston, Ill. (Reliable estimates were not available for Asian-Americans.)

Be sure to browse down to the chart that shows points for race within the same school districts. Color represents race, and connecting lines between dots show the magnitude of the differences between white, Hispanic, and black students.

If you’re interested in the data itself, you can download it from the Stanford Education Data Archive.

See also the education spending map from NPR, which suddenly takes on a new dimension.

Tags: education, race, Upshot

02 May 09:28

Handling context in "outside-in"

by Dries

In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.

The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".

When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.

Why context matters

For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:

  1. Managing context is essential to site building. Site builders commonly want to place a block or menu item that will be visible on not just one but several pages or to not all but some users. A key principle of outside-in is previewing as you edit. The challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator.
  2. Managing context is a big usability problem on its own. Even without outside-in patterns, making context simple and usable is an unsolved problem. Modules like Context and Panels have added lots of useful functionality, but all of it happens away from the rendered page.

The ingredients: user groups and page groups

To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.

To solve the problem, we recommend introducing 3 new concepts:

  1. Page groups: re-usable collections of URLs, wildcards, content types, etc.
  2. User groups: reusable collections of roles, user languages, or other user attributes.
  3. Impersonation: the ability to view the page as a user group.

Page groups

Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store for example may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content consisting either of a certain type (e.g. all research reports), or of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or a combination of the two (similar to the Context module, but Context never provided an in-place UI).

User groups

User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.

Impersonation

As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups, so that the page can be previewed as that type of user.

Using page groups, user groups and impersonation

Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:

  1. I'm a site builder working on a life sciences journal with a paywall and I want to place a block called "Download report" next to all entities of type "Research summary" (content type), but only to users with the role "Subscriber" (user role).
  2. I want to place a block called "Access reports" on the main page, the "About us" page, and the "Contact us" page (URL based), and all research summary pages, but only for anonymous users.

Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.

Step #1: place a block for anonymous users

Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.


Place a block for anonymous users

First the editor changes the impersonation to "Anonymous", then she places the block. She is informed about the impact of the change.

Step #2: place a block for subscribers

Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interaction happens smoothly, and with animation, so that changes that occur on the page are not missed.


Place a block for subscribers

The editor changes the impersonation to "Subscribers". When she does, the "Access reports" block is hidden, as it is not visible for subscribers. When she places the "Download report" block and chooses the "Subscriber pages" page group, she is notified about the impact of the change.

Step #3: see if you did it right

Once our editor has finished steps one and two she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible for anonymous users or vice versa. This is where impersonation comes in.


Confirm you did it right

The anonymous users need to see the "Access reports" block and subscribers need to see the "Download report" block. Impersonation lets you see what that looks like for each user group.

Summary

The idea of combining a number of contexts into a single object is not new; both Context and Panels do this. What is new here is that when you bring this to the front-end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.

02 May 11:39

From the Horse’s Mouth: Councillor Joe Mihevc goes all out for the minimum grid

by dandy


Illustration by Ian Sullivan

In our latest contribution, councillor Joe Mihevc (Ward 21, St. Paul's West) declares his support for bike lanes and the minimum grid. He also hints at Bike Share news, touts infrastructure expansion and discusses collaboration with city staff to build successful cycling projects in the city.

What are the top priorities for bike infrastructure in your ward this year?

Bike Share. The city is expanding the stations this year and there will be a few arriving for Ward 21. An announcement will be made by the city in June, so I will need to leave it at that for now, but expanding the program up the escarpment is a discussion in which I have engaged city staff over the past few years. The potential the expansion has to increase the number of people choosing to make trips by bike is exciting.

Another piece of infrastructure that I am happy about: our Ward 21 cyclists' request for a lane along Winona Avenue has made it into the Cycling Division's 10 Year Plan, which will go to the May 16th public works and infrastructure committee (PWIC) meeting and then to city council for approval.

Would you encourage PWIC, the executive committee and council to support the minimum grid?

Absolutely. I am among councillors who signed on to Cycle Toronto's minimum grid goal. I am working to assist in the effort to have the city meet those bike lane targets. I personally met with city cycling staff to discuss the successful motion by council colleagues at the March PWIC meeting, requesting staff include $20 million and $25 million annual options for a budget increase to make the minimum grid happen. That report is part of the 10 Year Plan being considered at the May 16th PWIC meeting - and I will certainly defend the minimum grid all the way to City Council.

Related on the dandyBLOG

From the Horse’s Mouth: Gord Perks on pushing for more from city hall

Bike lanes on Bloor now one council meeting away from becoming reality

Back in the saddle: Hoopdriver re-opens in new location

Celebrating community and building bike infrastructure: Ontario Bike Summit roundup

From the Horse’s Mouth: Councillor Janet Davis on improving cycling in Ward 31

Protecting vulnerable road users protects all of us

Scarborough cyclists get a spring boost with two new bike hubs

From the Horse’s Mouth: Michael Black on Building Better Bike Lanes

From the Horse’s Mouth: Councillor Mike Layton on building community support for bike lanes

From the Horse’s Mouth: Cycling forecast for 2016

From the Horse’s Mouth: Nancy Smith Lea, TCAT

From the Horse’s Mouth: Councillor Joe Cressy on bike projects in Ward 20

From the Horse’s Mouth: Jennifer Keesmaat on the best city projects of 2015, and a look at the year ahead

02 May 11:48

Ontario is investing $20 million in public EV charging stations in 2017

by Rob Attrell

The Ontario government has been ahead of the curve in trying to adapt to the probable revolution looming in car transportation, with initiatives allowing self-driving vehicles to be tested in the province, and a $325 million Green Investment Fund working to reduce the effects of climate change through meaningful, environmentally-friendly measures.

This week, the Ontario Ministry of Transportation announced that a $20 million grant program has been created from the Green Investment Fund to build 500 electric vehicle charging stations across Ontario next year. The Electric Vehicle Chargers Ontario program will provide 27 public and private sector groups with funds to create a network of charging stations in places where they’re needed most.

Transportation is the single largest source of greenhouse gas emissions in Ontario, with cars accounting for more atmospheric pollution than the iron, steel, cement and chemical industries combined. Currently, there are 6,400 electric vehicles on Ontario’s roads, but with the industry slowly moving toward hybrid and electric models, that number will grow quickly in the next few years.

The province has set a goal of reducing greenhouse gas emission levels to just 20 percent of 1990 levels by 2050. Making recharging stations more accessible across the province will make owning an electric vehicle, as opposed to a gas-powered one, that much easier.

Source: Ontario
02 May 13:01

BCE to acquire MTS for $3.9 billion, plans to divest one-third of the postpaid subscribers to Telus

by Ian Hardy

Wireless competition in Canada may have just been reduced.

BCE, parent company to Bell’s wireless business, has entered into an agreement to purchase Manitoba-based MTS (Manitoba Telecom Services Inc.) for $3.9 billion.

The transaction is expected to close in late 2016 or early 2017, pending regulatory approvals. As for the terms, BCE has agreed to pay MTS shareholders $40.00 per share, reportedly a staggering 23.2 percent premium over the current share price, which “values MTS at approximately 10.1 times 2016 estimated EBITDA.” The total transaction value is about $3.9 billion: BCE will purchase all shares for $3.1 billion and assume approximately $800 million of MTS debt. The boards of directors of both BCE and MTS have agreed to the terms of the deal.

Bell has also agreed to open a western Canadian headquarters in Manitoba, which will employ 6,900 people (MTS currently has 2,700 employees). To show its commitment to the province, Bell will also build out its network in the region, investing $1 billion over the next five years. Specifically, its Gigabit Fibe Internet will become available 12 months after the transaction closes, and it will expand its LTE network and release Fibe TV.

MTS currently has 565,000 subscribers, and the terms also state that Bell will divest a third “of MTS dealer locations in Manitoba to Telus.” This represents approximately 140,000 subscribers going to Telus. Unfortunately, there is no indication as to the dollar amount Bell will be selling the MTS subs to Telus for.

When the transaction closes, Bell and MTS have agreed to sell their services under the “Bell MTS” brand name.

Finally, fine print in the release reveals that if the “arrangement agreement is terminated in certain circumstances, including if MTS enters into a definitive agreement with respect to a superior proposal, BCE is entitled to a break-fee payment of $120 million.”

George Cope, president and CEO of BCE, stated, “BCE looks forward to being part of Manitoba’s strong growth prospects, building on the tremendous MTS legacy of technological innovation, customer service and competitive success by delivering the best broadband, wireless, internet and TV services to the people of Manitoba in communities large and small. As the headquarters for the Western operations of BCE, Bell MTS will focus on delivering the benefits of new broadband communications infrastructure, ongoing technology development and enhanced community investment to Manitobans everywhere.”

“This transaction recognizes the intrinsic value of MTS and will deliver immediate and meaningful value to MTS shareholders, while offering strong benefits to MTS customers and employees, and to the Province of Manitoba,” said Jay Forbes, President & CEO, MTS. “We are proud of our history and what we have achieved as an independent company. We believe the proposed transaction we are announcing today with BCE will allow MTS to build on our successful past and achieve even more in the future.”

Source: BCE
02 May 12:00

Understanding Bluetooth Pairing Problems

by Kevin Purdy


We’ve received some complaints from our readers about the Bluetooth devices we recommend acting up, working intermittently, or otherwise failing, especially when multiple devices are involved. The problems include failing to pair, audio hiccups, and recurring dropped connections. The situation usually involves a few Bluetooth devices—say, a phone, a smartwatch, and a car stereo—trying to get along. Sometimes the conflict is between a phone, a fitness tracker, and Bluetooth headphones. Occasionally, the issue is simply a keyboard that’s confused about different iPads. And though we haven’t heard about all three-way troubles, we have some ideas about what’s going on.

02 May 14:44

What Happened to Google Maps?

by Federico Viticci

Fascinating study by Justin O'Beirne on how Google Maps changed from 2010 to 2016 – fewer cities, more roads, and not a lot of balance between them on a map at the same zoom level.

He writes:

Unfortunately, these "optimizations" only served to exacerbate the longstanding imbalances already in the maps. As is often the case with cartography: less isn't more. Less is just less. And that's certainly the case here.

As O'Beirne also notes, the changes were likely made to provide a more pleasant viewing experience on mobile devices.

I understand his point of view – the included examples really make a solid case – but I can also see why Google may consider the average user (looking up points of interest nearby, starting navigation on their phone) and think that most users don't want that kind of cartographic detail anymore.

It'd be interesting to see the same comparisons between Apple and Google, as well as between old Apple Maps and Apple Maps today.

→ Source: justinobeirne.com

02 May 15:58

Change in Metro

by pricetags

Change and development along the rapid-transit lines as part of “Skywalking through Burnaby” on Sunday:

A big hole at the Brentwood Town Centre redevelopment. Is this the biggest single complex in the municipality’s history?


Brentwood

.

At the Commercial-Broadway station, the new east platform for westbound trains is visible:


.

Already the station handles more passengers in a day than YVR airport. With the opening of the Evergreen line later this year, the crush would be unmanageable without a platform expansion.


02 May 04:00

Tailoring Pants for Square

by Eric Ayers

This week, the Pants project announced a 1.0 release of the open source Pants Build System. The 1.0 release of Pants indicates that the tool is ready for more widespread adoption.

Square is a proud contributor to Pants. Developers at Square have been using and contributing to Pants since 2014 to develop our Java services. When we first joined the project, we found a tool that required lots of customization and insider knowledge to install and operate. Today the tool has a streamlined installation process, extensive documentation, a clean extensible design, a vibrant community and a history of stable weekly releases.

With Pants we get:

  • Reliable, reproducible builds from the current view of the code repository
  • A streamlined development workflow
  • Easy IDE setup
  • Strong integration with third party artifact repositories
  • Consistent results when switching branches
  • A distributed build cache that can be shared with CI builders and developer laptops
  • Lots of built-in tooling to help us analyze our large build graph
  • The ability to define fine grained dependencies between code modules
  • An extensible tool that can grow with our needs

./pants compile service

To understand why Square uses a tool like Pants, it helps to understand our software lifecycle. We use a monolithic codebase (monorepo) for many of the same reasons that Google does.

We build and release services from HEAD of master. Our Java codebase is housed almost entirely in a single repo consisting of over 900 projects and 45,000 source files. In this style of development, we prefer keeping all code at HEAD consistent using global refactoring and rigorous automated testing instead of maintaining strict API backwards compatibility and long deprecation cycles within the codebase.

We also have a strong commitment to using open source libraries. We extensively rely on artifacts published through the Maven Central Repository. When we upgrade a library, we can do so in one place and update the code dependencies for all services.
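As a sketch of that "upgrade in one place" pattern (targets and paths here are hypothetical, not Square's actual tree), the fine-grained dependencies look roughly like this in Pants BUILD files:

# 3rdparty/BUILD - the single place the artifact version is declared
jar_library(
  name='guava',
  jars=[jar(org='com.google.guava', name='guava', rev='19.0')],
)

# service/lib/BUILD - modules point at the shared 3rdparty target
java_library(
  name='lib',
  sources=globs('*.java'),
  dependencies=['3rdparty:guava'],
)

Bumping rev in one BUILD file updates the dependency for every module that points at the target.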

./pants idea service::

With such a large codebase, it becomes impractical to load the entire repo into the IDE. At Square, we primarily use IntelliJ IDEA to develop code. We use Pants to configure and launch it. Probably the most valuable feature for day to day development is simply having the ability to quickly bring up any portion of the repo in IntelliJ. With a single command, Pants configures IntelliJ to edit a module and all of its dependencies defined in the repo.

Making it easy to configure any project in the IDE means that developers can easily contribute to any project in the codebase. Being able to easily switch between branches encourages developers to collaborate. Now it is convenient to check out each other’s work locally when performing code reviews. It is easier to confine change requests to small, manageable chunks and switch between them while waiting on code reviews to complete.

./pants --help

We came to the Pants project looking for a tool to help solve problems in our build environment. Previously, we used Apache Maven to compile and package binaries. Maven is a powerful and popular tool with a modular design that makes it easy to extend with excellent support from third party tools and libraries. We had a significant investment in Maven, including many custom plugins for running code generation and supporting a distributed cache for artifacts in our continuous integration (CI) build system.

Using Maven with our “build everything from HEAD” policy strains the Maven model. Maven is designed to support editing a few modules at a time while relying on binary artifacts for most dependencies. To support building the entire repo from HEAD, we set every Maven module in the repo to a SNAPSHOT version.
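As a sketch of that pattern (module co-ordinates hypothetical), each module's pom.xml carried a floating version rather than a fixed release:

<!-- version floats so every build tracks HEAD of the repo -->
<groupId>com.squareup.example</groupId>
<artifactId>service-lib</artifactId>
<version>1.0-SNAPSHOT</version>

Because Maven re-resolves SNAPSHOT versions instead of treating them as immutable releases, every module could build against the current state of the repo.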

Using Maven in this way works, but has drawbacks. Running a recursive compile of all dependent modules incurs a lot of overhead. We had wrapper scripts to help us stay productive in this environment, say to run just code generation or only a subset of tests. Still, developers would get into trouble in some situations, often having to deal with inconsistencies between stale binary artifacts and the source on disk. For example, after using mvn install, pulling in new changes from the repo or switching back to an older branch could leave them compiling against stale code. When developers routinely question the integrity of their development environment, they waste a lot of time cleaning and rebuilding the codebase.

./pants test service:test

Our first priority was to allow developers to quickly configure their workspace in the IDE. Next, we migrated to using Pants as the tool to test and deploy artifacts in our CI builder. As of this writing, we have replaced all of our use of Maven in this repo using Pants, including:

  • Developing on developer workstations and laptops
  • Compiling and testing code in our continuous integration environments
  • Publishing artifacts to a Maven style repository
  • Integrating with third party tools like findbugs and kloc

Replacing all of our uses of Maven was not easy. We were able to do this by generating the Pants configuration using the Maven pom.xml files as a source of truth. During an interim phase we supported both tools. Through collaboration with the Pants open source community, we were able to modify Pants through hundreds of open source contributions.

./pants staging-build --deploy

Pants comes out of the box ready to edit, compile, test, and package code. Beyond that, we were able to leverage Pants' extensible module based system. A current favorite is a small custom task to deploy directly to staging environments over our internal deployment system. Along the way, other custom modules run custom code generators, gather meta information about our build process for our security infrastructure, and package the output of annotation processors into yaml files for our deployment system. Today we have about two dozen internal Pants plugins that do all those things, plus additional tools to audit our codebase, integrate with our CI system, and customize our IDE support.

Ship it!

At Square, Pants has helped us realize the promised benefits of monorepo style development at scale. Making sure the development process is consistent and reliable increases our developer velocity. Being able to quickly and reliably edit, compile, and test allows developers to concentrate on reviewing and writing code, not struggling with configuring tools. We believe that Pants is ready for more widespread adoption and encourage you to give it a try.

02 May 13:31

Managing feedback challenges writers (Survey data)

by Josh Bernoff

My survey of 547 business writers found one big pain point: managing editorial feedback. Only half of business writers get the feedback they need, and only one in three feels that their process for managing feedback works well. Business writers have little love for feedback processes. In this table, the first data column summarizes all the responses; the other columns … Continue reading Managing feedback challenges writers (Survey data) →

The post Managing feedback challenges writers (Survey data) appeared first on without bullshit.

02 May 16:05

Should we end the single-family home in Vancouver?

by pricetags

An Item from Ian found in Vancity Buzz.  


.

Greg Mitchell asks the question:

… let’s rethink the narrative. We know Vancouver is an expensive city in which to live – it seems to be all Vancouverites think about currently (yes, I’m guilty too). And we are obviously limited in terms of our land on which we can develop (mountains, ocean – enough said). So our only option is to densify – but the question is HOW to densify.

He provides, in detail, an alternative:

[Rendering: four townhouses on a 66-foot lot]

.

Those details here.



02 May 16:08

Samsung’s IoT SmartThings platform suffers from serious security issues, says report

by Patrick O'Rourke

With the smartphone industry seemingly hitting a plateau when it comes to innovation, and perhaps more importantly excitement, internet of things (IoT) gadgets have become one of the fastest expanding and most interesting areas of the tech industry.

In Samsung’s case, however, a report from University of Michigan computer science researchers indicates the South Korean smartphone manufacturer’s IoT SmartThings platform suffers from security issues that could potentially allow malicious apps to operate smart locks, change access codes and set off Wi-Fi-enabled smoke detectors, as well as carry out a variety of other attacks on Samsung’s smart device line.

A malicious SmartThings app, with access to more permissions than necessary, downloaded directly from Samsung’s SmartThings store, is the source of the security issues according to the research. The problem also stems from apps being given permissions that aren’t actually required. For example, a smart lock only needs the ability to lock remotely, but SmartThings’ API links this command with a variety of others.

After installation, SmartThings apps also request additional permissions, allowing them to be linked to different apps installed on the smartphone, a move the researchers say isn’t necessary because it gives the app more access than is required.

Researchers demonstrated their discovery through an app that monitors the battery life of a variety of Samsung SmartThings products. After being installed and granted permissions on the smartphone, the malicious but normal-looking app not only monitors battery life, but also has the ability to manipulate the lock’s functionality. It does this by automatically sending an SMS to the app’s developer each time the user reprograms the smart lock’s pin code.

A second demonstration showed off an app allowing the user to program their own pin code through an app that locks and unlocks a browser. Research revealed that of the 499 apps that were part of the study, 42 percent have more privileges than are necessary, giving malicious developers ample opportunity to create exploits.

Following this discovery, the University of Michigan researchers say they have reached out to Samsung’s SmartThings team with their findings.

While these exploits do require user interaction, many people swiftly move through the permission section of installing an app without actually realizing what they’re giving the software access to. Researchers say that of 22 SmartThings users they surveyed, 91 percent said they would allow a battery monitoring app to check their smart lock and give the app whatever permissions it requested. However, only 14 percent said they would allow the battery app to send door access codes to a remote server.

In an email to The Verge, a SmartThings representative said the following about the study.

“The potential vulnerabilities disclosed in the report are primarily dependent on two scenarios – the installation of a malicious SmartApp or the failure of third party developers to follow SmartThings guidelines on how to keep their code secure. Following this report, we have updated our documentation to provide even better security guidance to developers.”

“Smart home devices and their associated programming platforms will continue to proliferate and will remain attractive to consumers because they provide powerful functionality. However, the findings in this paper suggest that caution is warranted as well – on the part of early adopters, and on the part of framework designers. The risks are significant, and they are unlikely to be easily addressed via simple security patches.”


02 May 16:45

The biggest single project in Burnaby history?

by pricetags

I asked below whether Brentwood Town Centre was the largest single project ever seen in Burnaby. Should have checked my e-mail to see that this just came in, via Vancity Buzz:

 

[Rendering: Concord Brentwood]

A master-planned community called Concord Brentwood is the latest development from Concord Pacific Developments Inc., renowned for its skyline-defining communities on Vancouver’s False Creek and Toronto’s lakefront.

Concord Brentwood will create a bustling community according to Concord Pacific senior vice president Matt Meehan. “Our next project in Burnaby, Concord Brentwood, will see 26 acres in the Brentwood neighbourhood transform into a beautiful and diverse mixed-use park-side community that completes the exciting revitalization of the Brentwood Town Centre neighbourhood.” …

Designed by award-winning architect James K.M. Cheng of Vancouver, Concord Brentwood will consist of 10 towers, most between 40 and 45 storeys tall. Tower 1 of Phase 1 will consist of 426 units on 45 storeys.

I don’t know if this is a rendering of the massing for the proposal or the final product. But if the latter, the architecture looks pretty blah. I still have no explanation for why in this region there is such a reluctance to use colour, why the palette seems so constrained – off-white or gray, beige and green glass.


02 May 17:03

Harvesting Searched for Tweets Using Python

by Tony Hirst

Via Tanya Elias/eliast05, a query regarding tools for harvesting historical tweets. I haven’t been keeping track of Twitter related tools over the last few years, so my first thought is often “could Martin Hawksey’s TAGSexplorer do it?”!

But I’ve also had the twecoll Python/command line package on my ‘to play with’ list for a while, so I thought I’d give it a spin. Note that the code requires python to be installed (which it will be, by default, on a Mac).

On the command line, something like the following should be enough to get you up and running if you’re on a Mac (run the commands in a Terminal, available from the Utilities folder in the Applications folder). If wget is not available, download the twecoll file to the twitterstuff directory, and save it as twecoll (no suffix).

#Change directory to your home directory
$ cd

#Create a new directory - twitterstuff - in you home directory
$ mkdir twitterstuff

#Change directory into that directory
$ cd twitterstuff

#Fetch the twecoll code
$ wget https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
--2016-05-02 14:51:23--  https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
Resolving raw.githubusercontent.com... 23.235.43.133
Connecting to raw.githubusercontent.com|23.235.43.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 31445 (31K) [text/plain]
Saving to: 'twecoll'
 
twecoll                    100%[=========================================>]  30.71K  --.-KB/s   in 0.05s  
 
2016-05-02 14:51:24 (564 KB/s) - 'twecoll' saved [31445/31445]

#If you don't have wget installed, download the file from:
#https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
#and save it in the twitterstuff directory as twecoll (no suffix)

#Show the directory listing to make sure the file is there
$ ls
twecoll

#Change the permissions on the file to 'user - executable'
$ chmod u+x twecoll

#Run the command file - the ./ reads as: 'in the current directory'
$ ./twecoll tweets -q "#lak16"

Running the code the first time prompts you for some Twitter API credentials (follow the guidance on the twecoll homepage), but this only needs doing once.

Testing the app, it works – tweets are saved as a text file in the current directory with an appropriate filename and suffix .twt – BUT the search doesn’t go back very far in time. (Is the Twitter search API crippled then…?)

Looking around for an alternative, I found the GetOldTweets python script, which again can be run from the command line; download the zip file from Github, move it into the twitterstuff directory, and unzip it. On the command line (if you’re still in the twitterstuff directory), run:

ls

to check the name of the folder (something like GetOldTweets-python-master) and then cd into it:

cd GetOldTweets-python-master/

to move into the unzipped folder.

Note that I found I had to install pyquery to get the script to run; on the command line, run: easy_install pyquery.

This script does not require credentials – instead it scrapes the Twitter web search. Data limits for the search can be set explicitly.

python Exporter.py --querysearch '#lak15' --since 2015-03-10 --until 2015-09-12 --maxtweets 500

Tweets are saved into the file output_got.csv and are semicolon delimited.

A couple of things I noticed with this script: it’s slow (because it “scrolls” through pages and pages of Twitter search results, which only have a small number of results on each) and on occasion it seems to hang (I’m not sure if it gets stuck in an infinite loop; on a couple of occasions I used ctrl-z to break out). In such a case, it doesn’t currently save results as you go along, so you have nothing; reduce the --maxtweets value, and try again. On occasion, when running the script under the default Mac python 2.7, I noticed that there may be encoding issues in tweets which break the output, so again the file can’t get written.

Both packages run from the command line, or can be scripted from a Python programme (though I didn’t try that). If the GetOldTweets-python package can be tightened up a bit (eg in respect of UTF-8/tweet encoding issues, which are often a bugbear in Python 2.7), it looks like it could be a handy little tool. And for collecting stuff via the API (which requires authentication), rather than by scraping web results from advanced search queries, twecoll looks as if it could be quite handy too.
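For example, a minimal, untested sketch of driving the scraper from a Python programme (assuming Exporter.py is in the current directory):

import subprocess

# run the GetOldTweets exporter as a subprocess; results land in output_got.csv
subprocess.check_call([
    'python', 'Exporter.py',
    '--querysearch', '#lak15',
    '--since', '2015-03-10',
    '--until', '2015-09-12',
    '--maxtweets', '500',
])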


01 May 15:17

"If thought corrupts language, language can also corrupt thought." in Underpaid Genius

by Stowe Boyd

— George Orwell

Continue reading on Medium »

01 May 16:18

I am working on a Cognitive Computing book

by Mark Watson, author and consultant
My latest writing project is "Creating a Cognitive Computing Laboratory on the Google Cloud Platform: A Guide for Individuals and Small Companies".

Most of the examples in the book involve text processing using the open source TensorFlow library for machine intelligence and the OpenNLP projects. I decided to concentrate on what I do now and only supply references to other areas like image and speech recognition, etc. that I don't work in anymore. I spent years working on digital signal processing for handling images and some time using time delay neural networks for speech recognition but now, my interests are mostly in intelligently processing and understanding natural language text.

I feel like I am entering the third stage of my work in natural language processing, starting in the 1980s with symbolic processing, then machine learning approaches like the Stanford NLP library and OpenNLP, and now in deep learning.

While the book specifically covers setting up a laboratory on the Google Cloud Platform, much of the DevOps material is also applicable to AWS and Azure.

I am developing the book examples in a private git repository now and they will eventually be in this github repository.


30 Apr 13:50

Twitter Favorites: [rvkgrapevine] A little Icelandic girl named Ripley meets her alien godmother, Sigourney Weaver. https://t.co/7wXeVoggE7 #cute https://t.co/Jo5RmXoaUA

Reykjavík Grapevine @rvkgrapevine
A little Icelandic girl named Ripley meets her alien godmother, Sigourney Weaver. grapevine.is/news/2016/04/3… #cute pic.twitter.com/Jo5RmXoaUA
30 Apr 20:53

Twitter Favorites: [timbray] So much we know to be true really isn’t: https://t.co/CE1vEh3zNq

Tim Bray @timbray
So much we know to be true really isn’t: fivethirtyeight.com/features/who-w…