Shared posts

21 Jun 04:04

Om on Apple and Google

by Matt

Writing for the New Yorker (!), Om Malik compares and contrasts Apple and Google.

21 Jun 07:00

Building Erlang for Android

Support for building Erlang on Android is provided in the standard Erlang source.

Build setup

I use the Erlang git version for building. Cloning can be done with:

git clone https://github.com/erlang/otp

In the xcomp directory of the cloned repository there is an erl-xcomp-arm-android.conf file that contains details for cross-compiling to Android. This can be modified per the instructions in the Erlang cross-compilation documentation, but the defaults are probably fine.

Some environment variables need to be set to locate the Android NDK:

export NDK_ROOT=/path/to/ndk/root
export NDK_PLAT=android-21

The NDK_PLAT environment variable identifies the Android API level to target when building. In this case android-21 corresponds to Android 5.0, Lollipop (see STABLE-APIS.html in the NDK documentation for the full mapping).
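If you are unsure which API levels your NDK provides, you can list them. This is just a quick check, assuming the classic NDK directory layout where each supported level has a platforms/android-NN subdirectory:

ls $NDK_ROOT/platforms

Each android-NN entry listed is a valid value for NDK_PLAT.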

Add the path to the Android version of the gcc compiler:

export PATH=$PATH:$NDK_ROOT/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin

When building from the git repository, an initial step of generating the build configuration files needs to be done. This requires autoconf version 2.59. If this version is installed on your system as the command autoconf2.59, you may need to change some symlinks or defaults for your OS, or you can edit the otp_build file to replace occurrences of autoconf with autoconf2.59.
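If you take the otp_build editing route, a single substitution is enough. The following is only a sketch, assuming GNU sed and that the 2.59 release is installed as the autoconf2.59 command; it keeps a backup of the original file in otp_build.bak:

sed -i.bak 's/\bautoconf\b/autoconf2.59/g' otp_build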

Building

Building from git requires generating build configuration files first:

./otp_build autoconf

Once generated, run the configure step. This will configure both a host version of Erlang for bootstrapping and the Android version:

./otp_build configure --xcomp-conf=xcomp/erl-xcomp-arm-android.conf

Build a bootstrap system of Erlang for the host machine, followed by one for the Android target:

./otp_build boot -a

Installing

Installing to an Android device involves running a build step that copies the files to a temporary directory, running a script that changes paths to the directory where the installation will live on the Android device, and pushing the final result to the device.

In the following series of commands I use /tmp/erlang as the temporary directory on the host system and /data/local/tmp/erlang as the install directory on the Android device. The directory /data/local/tmp is writable on non-rooted Android KitKat devices. It’s useful for testing.

./otp_build release -a /tmp/erlang
cd /tmp/erlang
./Install -cross -minimal /data/local/tmp/erlang

One of the files, bin/epmd, is a symlink, which adb push has problems with. When copying the files below I delete it and manually recreate the symlink after the push:

adb shell mkdir /data/local/tmp/erlang
cd /tmp
rm erlang/bin/epmd
adb push erlang /data/local/tmp/erlang/
adb shell ln -s /data/local/tmp/erlang/erts-6.4.1/bin/epmd \
                /data/local/tmp/erlang/bin/epmd

The adb commands assume the device is already connected and can be accessed via adb.

Running

Once the final push completes, Erlang can be run via adb shell or a terminal application on the device:

$ adb shell
$ cd /data/local/tmp/erlang
$ sh bin/erl    
Eshell V6.4.1  (abort with ^G)
1> 

You may get an error about sed not being found. This is due to a sed command run on the first argument of the erl shell script. A workaround for this is to build an Android version of sed and install it along with Erlang.
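As a rough sketch of that workaround, assuming you already have a statically linked ARM build of sed (for example from a busybox build; the binary name and location here are illustrative):

adb push sed /data/local/tmp/erlang/bin/sed
adb shell chmod 755 /data/local/tmp/erlang/bin/sed

Then, inside adb shell, put that directory on the PATH before starting erl:

$ export PATH=/data/local/tmp/erlang/bin:$PATH
$ cd /data/local/tmp/erlang
$ sh bin/erl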

Networking

I’ve tested some basic Erlang functionality and it works fine. Some tweaks are needed for networking, however. The method Erlang uses for DNS lookups doesn’t work on Android: by default it uses native system calls, but with a configuration file it can be switched to Erlang’s own internal DNS resolver. Create a file with the following contents:

{lookup, [file,dns]}.
{nameserver, {8,8,8,8}}.

In this case the nameserver for DNS lookups is hardcoded to Google’s public DNS. Ideally this would be obtained, via some Android functionality, from whatever the phone is actually configured to use, but hardcoding works for test cases. Push this file to the device and run erl with it passed as an argument, like so (inetrc is the name I used for the file in this case):

$ adb push inetrc /data/local/tmp/erlang/
$ adb shell
$ cd /data/local/tmp/erlang
$ sh bin/erl -kernel inetrc '"./inetrc"'

Network examples should now work:

1> inets:start().
ok
2> inet_res:getbyname("www.example.com",a).
{ok,{hostent,"www.example.com",[],inet,4,[{93,184,216,34}]}}
3> httpc:request(get, {"http://bluishcoder.co.nz/index.html", []}, [], []).
{ok,...}
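If name lookups still fail, it is worth checking that the resolver configuration was actually loaded. The inet:get_rc/0 call returns the recorded inet configuration, so the lookup and nameserver entries from the inetrc file should show up in its output (a hedged aside; the exact formatting may vary):

4> inet:get_rc().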

More information on the inetrc file format is available in the Erlang documentation.

Conclusion

This showed that a basic installation of Erlang works on Android. I’ve also tested on a Firefox OS phone with root access. An interesting project would be to install Erlang as a system service on either a Firefox OS or an Android AOSP build and write phone services in Erlang as a test for an Erlang-based device.

19 Jun 23:03

Twitter Favorites: [britrbennett] Hi everyone, I wrote a little about the Charleston shooting and our language surrounding racial violence http://t.co/8X3WvWLNLr

Brit Bennett @britrbennett
Hi everyone, I wrote a little about the Charleston shooting and our language surrounding racial violence nytimes.com/2015/06/19/mag…
16 Jun 12:30

Introducing UserVoice 3.0: Data-Driven Product Decisions Have Never Been Easier

by Bonnie Pecevich

We’re excited to announce the launch of UserVoice 3.0, a whole new suite of tools to help product managers better understand their customers and make data-driven product decisions like never before.

At UserVoice, we’re in the business of listening to customers and ensuring their happiness, and we practice what we preach when it comes to building our own product. We strive to delight product managers, like yourself, and help them build better products. Our new features provide relevant, actionable data so you can make informed decisions on prioritizing customer preferences in your roadmap.

Comparison Tests with SmartVote™

Our stand-out feature in this release, SmartVote™, helps build consensus within your organization by eliminating ambiguity over the direction of your product roadmap. SmartVote™ gathers statistically significant data related to customer demand, revenue, and return on investment through a single question survey designed to engage even the most casual of users (so you’ll have data points beyond your most vocal customers). The end result is a list of product ideas, both internal and external, prioritized by their impact to your business. We’re striving to help you make the decisions around what makes it into your roadmap and have the data to answer why.


Advanced Trend Reporting

Advanced Trend Reporting displays the change in velocity of user support at a glance so you can quickly identify hot or trending ideas. You’ll be able to see whether user desire for an idea is increasing more quickly or slowly and evaluate whether certain ideas and user concerns remain relevant over time. This, in addition to SmartVote™, gives you even more information to help you prioritize ideas and lets you know whether those ideas are worth building or the demand is no longer urgent. Since these reports show user opinion over a given time, you can also use these reports to close the loop after launch and determine if product changes made a positive impact after release.


Personalized Views and Notifications

Personalized Views and Notifications allows you to save specific filters and gives you one-click access to the information most relevant to you. This is especially helpful for members of bigger teams who want to just access issues and feedback around their own product. You can now save any custom view you create so it’s easier than ever to stay on top of your product.


Finally, all these features come in a slick new UI. The new design provides a great experience and improved performance. Just how much better is performance, you ask? Common activities, like browsing your list of suggestions, just got more than 2x faster with our new UI and API.

We hope you’re as excited as we are about our new features that are advancing data-driven product management. UserVoice 3.0 starts at $499/month. See detailed pricing here.

To request this upgrade, talk to your account manager. Don’t have one? No problem, email us at Iwant3point0@uservoice.com.

21 Jun 21:55

Guest post: Driverless taxis, driverless buses, and the future urban mobility mix

by Jarrett at HumanTransit.org

Antonio Loro is an urban planner who focuses on the planning implications of emerging road vehicle automation technologies. He has conducted research with TransLink and Metrolinx on the potential impacts of automated vehicles, and is currently with the Ministry of Transportation of Ontario. This article was written by Antonio Loro in his personal capacity. The views expressed in this article are the author's own and do not necessarily represent the views of the previously mentioned organizations.

As efforts to develop automated vehicles continue to speed forward, researchers have begun to explore how driverless taxis in particular could play a prominent role in the future mix of urban transportation options. Some of these early findings raise the provocative argument that driverless taxis (or self-driving or fully-automated taxis, if you prefer) could hugely reduce or even eliminate the need for buses and trains. However, careful interpretation of this research reveals that vehicles with high passenger capacities (bus and rail transit, in other words) could be superseded by lower-capacity vehicles only where there is plenty of road space to spare. Where road lanes are in shorter supply, buses and trains – which could themselves get a huge productivity boost from automation – will continue to be indispensable for moving large volumes of passengers. In such cases, driverless taxis, especially share taxis, will be ideally suited to complement higher-capacity transit, generally by focusing on areas with a surplus of road space. And in the near-term, even before such advanced automation is perfected, automated buses could start improving mobility for large numbers of urban travelers.

Researchers have begun to explore future scenarios where vehicle automation technologies have advanced to a level that enables taxis to drive without human intervention through the full network of urban roads. Recently, the ITF (International Transport Forum), a think tank within the OECD, modeled a number of scenarios to examine how these driverless taxis, once they are commonplace, could serve urban travelers on a typical weekday in Lisbon. Among the outputs of their model, one is particularly attention-grabbing: even in a scenario where 8% of trips go by foot or bike, none go by transit, and the remaining 92% go by driverless taxis that serve single passengers, ITF researchers say that the number of taxis needed would be less than a quarter of the number of cars currently in use in the Portuguese capital. Such a reduction in the size of the overall fleet of cars in the city would greatly diminish parking demand. Unsurprisingly, though, in order to serve so many trips, the fleet of taxis would be used very intensively, and the total vehicle kilometres traveled (VKT) in the city would more than double.

Remarkably, the ITF team say that despite the upsurge in VKT in this scenario, travel times would be slowed very little. Underpinning this result is a noteworthy assumption in the model: currently, according to the ITF researchers, less than 40% of available capacity on Lisbon’s roads is in use during peak periods. (The authors caution that their figures are underestimates, as they do not account for bus travel, which makes up 13% of VKT in Lisbon.) The upshot is that even in a scenario where new taxi trips and empty taxis driving to their next passengers cause VKT to double, the ITF model’s outputs suggest  that there should be road capacity to spare.

The model shows the previously very lightly used local roads absorbing much of the new VKT. Meanwhile, the most heavily traveled category of roads (“local traffic distributor roads”) rises from 43% to 69% of road capacity used. That 69% doesn’t necessarily mean traffic will be flowing smoothly, however. These percentages refer to the average capacity in use on different categories of roads – even if there is a low percentage of capacity in use on a given category of road, there could nevertheless be congestion focused at some locations on those roads. Interestingly, according to the TomTom Traffic Index, congestion may already be an issue in Lisbon, as travel times during peak periods are currently 45% to 70% longer than they would be under free-flow conditions.

 The crucial implication to highlight here is that single-passenger driverless taxis could supplant buses and trains – but only where the roads have the capacity to absorb potentially huge increases in traffic without becoming congested.

A 2014 study by the Singapore-MIT Alliance for Research and Technology (SMART) arrived at results similar to those in the ITF study, though the SMART team modeled travel speeds in a much simpler way. The SMART researchers examined a scenario with single-passenger driverless taxis (or car-share vehicles) serving all trips in Singapore. They concluded that a fleet sized at one-third of the total number of passenger vehicles currently in use in the city could serve all trips while keeping peak period waiting times for the average passenger under 15 minutes. Such a dramatic result follows from a simplifying assumption in the model: the SMART team first estimated the current average speed that taxis drive at – including both the denser and the more dispersed areas of the city – and then assumed that future driverless taxis would drive at that particular speed, regardless of their location in the road network.

Less optimistically, the taxis would be bogged down in congestion of their own making. Currently in Singapore, 63% of peak period trips go via public transit. Shifting all of those trips to taxis would generate an abundance of new VKT – including VKT produced by empty taxis moving to their next passengers. Currently, peak period travel times on Singapore’s roads are 50% to 80% longer than they would be under free-flow conditions, according to TomTom; a massive increase in VKT wouldn’t help much. 

A more recent article from the SMART team includes a brief exploration of the congestion effects of empty taxis repositioning themselves in a very simple road network. They find that their algorithm generally results in the empty vehicles traveling mainly on less busy roads, though the repositioning process could cause heavy congestion in networks where there is already congestion. This preliminary analysis suggests that much of the traffic produced by driverless taxi repositioning could be focused on the roads that are least congested to begin with. This suggestion lines up with the results of the ITF team. Interestingly, the ITF’s model of Lisbon has the biggest increases in traffic showing up on local streets in particular – which could unfortunately make them less attractive places to live, the authors caution. Such streets serve purposes other than being conduits for cars – they may be quiet routes for walking and cycling, or safe places for children to play, or inviting public spaces, for example – so injecting new car traffic could produce impacts other than congestion that are worth considering. Furthermore, while the SMART team suggests that much of the traffic created by repositioning per se might be focused on the roads that were previously least congested, the taxis that are actually carrying passengers could of course be the more important source of congestion – especially when large mode shifts, such as those seen in the scenarios described above, inevitably produce large VKT increases.

The ability of automated vehicles to use roads more efficiently could mitigate congestion resulting from increased VKT – with some caveats. Combining automation and V2V (vehicle-to-vehicle communication) technologies would enable vehicles to drive safely with short following gaps. However, the large potential capacity increases resulting from this “platooning”, where vehicles are grouped into closely-packed files, would mainly materialize on freeways, where traffic flows are less turbulent than on city streets. Automation with V2V could also boost flows through intersections by coordinating the movements of vehicles far more efficiently than traffic signals. These improvements would be constrained, though, wherever intersections are shared with entities not equipped with the requisite tech – not just cars, but pedestrians and cyclists as well. More radically, vehicles themselves could be smaller, thus occupying less road space, if the crash avoidance capabilities of automation eliminate the need for bulky, crashworthy construction. However, such a revolution in vehicle design would not be able to take over the streets until automation technologies are advanced enough and adopted widely enough to guarantee occupant safety.

 The capacity limits of roads could be less of an impediment for multi-passenger driverless taxis. Hypothetically, if 8% of trips in Lisbon were taken on foot or bike and the remaining 92% were served by driverless taxis carrying multiple passengers (most commonly three to five, but as many as eight), the ITF team estimates peak period VKT would rise by 25%. It’s a substantial increase, but much smaller than the 103% jump in the single-passenger taxi scenario discussed above. Not surprisingly, providing public transit service would further mitigate VKT. In a scenario where 22% of trips go by subway, 8% go by foot or bike, and the remaining 70% go by driverless share taxi, the model estimates peak period VKT would rise by a relatively modest 9%.

Taxis serving multiple rather than single passengers would also mean an even smaller taxi fleet would suffice. Fewer cars in the city, kept busy serving dozens of trips a day, would drastically cut the need for parking. Sidewalks and bike lanes would be among the potential uses for the freed-up land. If taxi passengers share rides, and if 22% of trips go by public transit, the ITF figures that close to 95% of all parking spaces in Lisbon could be made redundant. (This outcome depends on traffic still flowing smoothly despite a 9% increase in VKT – if traffic is slowed, a larger taxi fleet would be needed to effectively serve all trips, so more parking would be needed during periods of low demand).

The discussion above just scratches the surface of the ITF and SMART studies – it’s definitely worth reading the original articles to explore their insights and to understand how the models were constructed. Higher-fidelity models that build on the ITF and SMART efforts will improve our estimates of the potential for driverless taxis to serve urban trips; however, even without complex models, it is clear that automation won’t eliminate the need for buses and trains when large numbers of people have to move through limited space. This is one of the straightforward geometric arguments that Jarrett has made before in this blog: larger vehicles fit more people into a given length and width of right-of-way than convoys of small vehicles can carry. (To illustrate, a freeway lane might have a capacity as high as 2400 cars per hour, while Bogotá’s TransMilenio bus rapid transit system has a capacity of 45,000 people per hour per direction.)

Of course, it’s a contentious question when we will attain the holy grail of automation technology sufficiently sophisticated to enable taxis or other vehicles to drive on any road in any conditions. It may appear further in the future than some suppose; nevertheless, even before this “Level 5” technology (as defined by the Society of Automotive Engineers) is mature, there will be vehicles capable of fully automated operation under more restricted conditions. There already are – such “Level 4” vehicles are currently capable of driving themselves at low speeds when segregated from challenging traffic environments. Beginning in the near-term, these kinds of low-speed automated vehicles, perhaps looking something like Google’s famously cute prototype, could carry passengers in settings like retirement communities. They could also circulate through networks of low-speed roads in suburban neighbourhoods or business parks to provide “first and last mile” access to and from transit routes.

(Photo: ParkShuttle GRT vehicles at Kralingse Zoom station)

Both in the near-term and the long-term, one of the most effective ways to reap the mobility benefits of automation will be to apply it to buses. Even with current technology, driverless operation would be feasible for buses running on busways with adequate exclusion of other vehicles and potential hazards. And even for buses on streets with mixed traffic, some of the technical challenges of achieving full automation are eased. For example, since bus routes run along only a small subset of the larger urban road network, the challenges of building and maintaining exquisitely detailed, meticulously annotated, continually updated maps would be substantially reduced – this would be advantageous for mapping-reliant approaches to automated driving, such as Google’s approach. The drop in labour costs from automation would enable dramatically increased frequencies, and the precision of automated control could also improve reliability. Because of the imminent potential to significantly improve mobility for large numbers of travelers, buses are a key priority for the application of automation.

With Level 5 automation, driverless taxis would become feasible. But rather than usurping the place of buses, they could play a complementary role. Level 5 would of course expand the domain of driverless buses, enabling them to provide service on any road; but it’s simply because buses can move numerous passengers in little road space that they will remain indispensable. Buses could ply heavily traveled corridors; driverless taxis could operate in less dense areas and during periods of lighter travel, whether serving complete trips or feeding into bus and train networks.

Interestingly, since low labour costs for driverless buses would enable higher frequencies, smaller vehicles would suffice to provide effective service in some areas. At a certain point, it could be difficult to distinguish in appearance between a small driverless bus and a driverless share taxi. However, their functions could be distinct: for example, driverless buses could provide frequent, predictable service on fixed routes according to schedules or headways, and driverless share taxis could provide on-demand, flexible route service. On the other hand, because diverting routes to pick up and drop off multiple passengers at numerous origins and destinations would cut into the travel time and cost benefits of taxi travel, some driverless share taxis could end up providing service with more fixed routing.

The take-home message from all this is that it’s critical to strategically deploy vehicle automation technologies according to their strengths and weaknesses. At some point, Level 5 automation will be achieved and driverless taxis will become feasible – but it’s important to think beyond just driverless taxis. To create the best possible mix of urban transportation options, it’s essential to consider the advantages and disadvantages of a range of potential automated vehicles and services – including buses, driverless taxis, and low-speed vehicles. Even more important, there's no need to wait for a Level 5 world with fully-fledged driverless taxis to appear before reaping the benefits of automation – and transit agencies have an opportunity to take the lead.

 [My thoughts on this piece are at the top of the comments! -- Jarrett]

Photo: Rotterdam driverless bus prototype, 2getthere.eu 

21 Jun 21:36

Ethereum messaging for the masses (including fathers) – via infographic

by Anthony Di Iorio

When I started evangelizing bitcoin and blockchain tech back in 2012 my Dad was a hard sell. There was the common skepticism and typical counters of “what’s backing it?”, “what can be done with it?” and “what the heck is cryptography?” Back then my pitch wasn’t refined and the learning tools had just not quite matured yet…but frankly, I think more of the reason he didn’t grasp it was that he just isn’t technical and doesn’t adopt early tech.

I delicately persisted and he eventually came around. Before the end of 2012 he had secured a nice stash of coins and as the price rose by many multiples I secured more and more of his ear for future opportunities that might arise.

Now, he didn’t decide to buy Bitcoins because he ended up understanding the technology; in fact, I’m still not sure years later if he understands much of the bitcoin nitty-gritty or has wrapped his head around the potential implications that blockchain technology provides. When Ethereum came along, he most certainly didn’t understand the concept or why I decided to drop everything in late 2013 and risk sizable capital bootstrapping the project with my then co-founders Vitalik, Mihai, and Charles, and eventually the rest of the founding team, as it exists today.

Bitcoin is complex to explain; Ethereum takes it up a couple of notches. It’s not rocket science but it sure can seem like it to many people. Unheard-of terms like smart contracts, smart property, or Dapps can seem daunting to the masses. Even terms or words becoming more common, such as “cryptocurrency” or “distributed ledger system”, can still be unfamiliar to many. Let’s face it: this technology is not easy for the majority to grasp, and the messaging needs continual refinement and crafting for this to change.

Rightly so, the majority of focus and current goal of Ethereum is to get the tools into the hands of developers and to show them how they can create the products end users will eventually value. Less time and resources have been spent explaining Ethereum to end consumers, enterprise, and institutions.

With weekly announcements of “blockchain integrations” (see NASDAQ, IBM, the Honduran government and countless others) and friendly legislation proposals emerging from countries like Canada, there’s obviously increased interest from many sectors, and as more integration projects start hitting the news, more and more interest gets sparked in organizations not wanting to get left behind.

Explaining Complex Ideas

So how do we get Ethereum, and complex ideas in general, across to the general public? Well, whether it’s a company trying to communicate new innovations to investors, or an educator teaching a challenging topic in a brief amount of time, the problem is: how do you take an abundance of mostly technical information and effectively simplify and present it in an engaging and informative way?

One answer is infographics. Here are a few benefits of this medium:

  1. Infographics are more eye-catching than printed words since they usually combine images, colors and content that naturally draw the eye and appeal to those with shorter attention spans.
  2. Infographics are extremely shareable around the web using embed code and are perfect for social network sharing.
  3. If visually appealing, with consistent colors, shapes, messages and logo, they provide an effective means of brand awareness.

Below is the concept of the first general infographic for Ethereum designed for the masses. I’ll allow it to speak for itself, and I welcome feedback and questions as to why we crafted the message the way we did or why certain terms or ideas were left out. Please feel free to share this and help get the message around the web.

Ethereum Infographic Image - Beginners Guide

This is the first of a few proposed infographics. The next one will focus more on Ethereum use cases for various sectors, then one for the more technically inclined.

I’m missing Father’s Day with my Dad (and my brother’s first Father’s Day) to be in Switzerland on Ethereum business. However, I did see my Dad this week and felt a certain sense of accomplishment when, after showing him the infographic, he told me: “I finally have a clue what Ethereum is and its potential. Now what’s all this talk about Bitcoin?”

Happy Father’s Day to all you fathers out there, from Ethereum.

Anthony Di Iorio is a founder of Ethereum and currently serves as a consultant and adviser for the project. He is president and founder of Decentral and Decentral Consulting Services, offering technology consultancy services that specialize in blockchain and decentralized technology integrations for enterprise, small business and start-ups. Anthony is the cryptocurrency adviser at MaRS Discovery District, organizes the Toronto Ethereum Meetup Group and DEC_TECH (Decentralized Technologies) events, and will be lecturing this summer at the University of Nicosia’s Masters Program in Digital Currencies.

The post Ethereum messaging for the masses (including fathers) – via infographic appeared first on Ethereum Blog.

21 Jun 23:20

Mean Bosses Are Killing You

by rands

Via Christine Porath in the New York Times:

Bosses produce demoralized employees through a string of actions: walking away from a conversation because they lose interest; answering calls in the middle of meetings without leaving the room; openly mocking people by pointing out their flaws or personality quirks in front of others; reminding their subordinates of their “role” in the organization and “title”; taking credit for wins, but pointing the finger at others when problems arise. Employees who are harmed by this behavior, instead of sharing ideas or asking for help, hold back.

#

21 Jun 19:12

I’m all about the Yes: YxYY003

by John

Last weekend was the third annual gathering of awesome people in the desert, also known as Yes & Yes Yes (YxYY for short).

As I’ve written previously, this event/conference/pool party has become my most anticipated ‘thing’ each year. Why? Mostly because there is no pressure to do anything but hang out with new and old friends, in an awesome location, with a pool.

I’m not going to talk long on the topic (check out my friend, Alexandra’s great post on why she keeps coming back, Liza’s Flipboard from the event and Tantek’s post event reflections) but thought I’d post a few photo/video highlights:

The super awesome lemonade stand/advice dispensary (complete with adult lemonade) where people would sign up for ‘expert’ slots and answer questions:
YxYY003

The pool, of course:
YxYY003

The Kitten Kafe (supplied by the Palm Springs Animal Shelter):
This conference has a kitten cafe (via the local shelter) #YxYY #soawesome

Messing around with a 3D printed 360 degree GoPro camera rig with Sam (will post our results as soon as we have a chance to crunch the gigabytes of footage):
360 poolside action about to get stitched #YxYY

Printing lots of octopi for attendees in the Maker Lounge:

A video posted by John Biehler (@johnbiehler)

This year also had lots of people bringing in their own rad projects, like this holographic Princess by Chris Weisbart:

A video posted by John Biehler (@johnbiehler)

I also won a MakerBot Replicator Mini courtesy of the Salesforce.com UX team! More on that later.

Making timelapses from the sweltering roof deck:

A video posted by John Biehler (@johnbiehler)

Jonathan’s amazing drone video:

Of course, I couldn’t forget the epic Prom photo (via Wayne Racine) with Jonathan Tucker who was dining next door and grabbed us for a photo op:

On our way to Prom

Hopefully, you have more than enough reasons above to attend next year’s event!

You can see the rest of my photos from the event on Flickr.

The post I’m all about the Yes: YxYY003 appeared first on johnbiehler.com.

21 Jun 21:37

On the relevance of social science and interdisciplinarity: Some reflections

by Raul Pacheco-Vega

Every so often, the issues and questions of whether “social sciences are relevant” or “how can we social scientists impact how policy is made” come back to academic circles, both online and offline. Recently, Don Moynihan, Michael Horowitz and Jay Ulfelder have written about this issue, and I would encourage you to read their posts. As someone with an interdisciplinary background (I’m originally a chemical engineer, with a Masters in technology management and economics of technical change, and a PhD that gave me training both as a political scientist and as a human geographer) my concern is four-fold:

  • first, to be recognized as a political scientist and someone who knows the literature;
  • second, to have my human geography scholarship acknowledged and respected;
  • third, to be accepted in a department of public administration (which isn’t much of an issue in itself, as much as my own self-consciousness, because I’m more of a public policy scholar than I am a scholar of public management and public administration);
  • fourth, to have my interdisciplinary background and research understood, recognized and applied to the real world

The biggest challenge I face when writing grant proposals is not the ever-shrinking pool of money itself, but having potential reviewers understand why it is important to include political science theories, anthropological research methods, spatial analysis and geographical literature in a project on informal waste pickers, for example. I remember a wise professor of mine who said “departments say they want interdisciplinarity, but when it comes to hiring, they’ll hire someone within the discipline”. So, for example, a political science department will want to hire a political scientist (e.g. someone with a PhD in political science). This may be changing as the number of tenure-track positions also shrinks worldwide, but it’s a reality. Everyone wants “interdisciplinary work” but defends their own disciplinary silos too.

I myself find it a challenge to convince fellow academics of the importance of conducting research that crosses disciplinary boundaries. Can you imagine how open government folks will be to this type of applied research when academics themselves can’t see the importance of crossing the comfortable boundaries of their own disciplinary silo? I liked Don’s suggestions of how to take steps to bridge the academic-practitioner divide, and Michael’s clear demarcations of policy relevance and actionability. But like Jay, I would like to take a skeptical view. I particularly like this point that Jay makes:

With so much uncertainty and so much at stake, I wind up thinking that, unless their research designs have carefully addressed these assumptions, scholars—in their roles as scientists, not as citizens or advocates—should avoid that last mile and leave it to the elected officials and bureaucrats hired for that purpose.

Why is it important to let policy-makers try to push for the applicability of social science research? Because it’s hard to do without being thrown into the political arena (oftentimes, as Tressie McMillan Cottom argues, without having a structure in place within their own universities and institutions to support them). And that’s one challenge of being an advocate and a public intellectual: particularly in challenging and sensitive policy areas, scholars and their work will be attacked not on the basis of the scientific research but through damning critiques of the academic as an individual and of their personality. Because, as Tressie aptly points out, being a public intellectual “means pissing people off”. And if we want our interdisciplinary research to be relevant, we and our institutions will need to be comfortable with pissing people off: conservative and unidisciplinary academics, politicians and politicized constituencies alike.

I do like a good challenge, though, and I plan to continue to advocate for (and engage in) interdisciplinary, applied public policy research, hoping that (and pushing for) my work will be relevant, insightful and credible enough for policy-makers to implement it in the national and international policy arenas.

21 Jun 22:31

state of the map 2015

Two weeks ago I was in New York for State Of The Map 2015, the annual OpenStreetMap conference.

2014’s event was in DC, and I had kind of a hard time with it. Good event, but dispiriting to come back to the community after a year away and encounter all the same arguments and bullshit as 2013 and before.

This year in New York was awesome, though. It felt as though some kind of logjam had been cleared in the collective consciousness of OpenStreetMap. Kathleen Danielson, new international board member, delivered a mic-drop talk on diversity. New York designers and urbanists were in attendance. A wider variety of companies than ever before sponsored and presented.

I was a last-minute addition to the program, invited by Brett Camper to take over moderation of his panel on vector rendering. There is video now on the website (here is a direct Youtube link for when the permalink-haters who do the site each year break all the current URLs).

We had four participants on the panel.

Matt Blair from Mapzen works on the Tangram rendering engine, a WebGL rendering layer for in-browser delivery of responsive, dynamic, and funny maps like this sketchy style:

Mapzen has been doing a bunch of interesting things with 3D, and when I visited the office Peter Richardson had a bunch of printed Manhattan tiles on his desk, including this one of 16/19300/24630 with Grand Central and the NYPL viewed from the north:

Konstantin Käfer from Mapbox works on the new GL rendering product, and he’s been producing a regular stream of new rendering work and data format output throughout the three years I’ve known him. Konstantin shared this gorgeous animated view of a map zooming from Boston to Melbourne, showing off dynamic text rendering and frame-by-frame adjustments:

(Click for video)

Hannes Janetzek of OpenScienceMap produces a rendering product intended for scientific use. His work is used to support academic research, and he was unique on the panel for not having a commercial product on offer: “OpenScienceMap is a platform to enable researchers to implement their ideas, to cooperate with others, and to share their results.”

Steve Gifford of WhirlyGlobe-Maply is primarily in the consulting business, and his open source 3D globe rendering platform is used by the current number two app in the Apple app store, Dark Sky. The rendering output of Steve’s work is typically different from the other three, in that he’s mostly delivering zoomed-out views of entire regions rather than the street-level focus of OpenScienceMap, Mapzen, and Mapbox. It was telling that everyone except Steve identified text rendering and labeling as their primary difficulty in delivering vector-driven client-side renders.

Steve’s rendering pipelines commonly cover more raster rendering than the others, such as this screen from Dark Sky showing stormy weather over Ohio and Kentucky:

I produced a bunch of vector rendering work two years after I left Stamen, and I enjoyed moderating a panel on a topic I’m familiar with without having any skin in the game. It’s super exciting to see all of this happening now, and it feels a bit like OSM raster rendering in 2006/07, when Mapnik was still impossible to install but a growing group of people like Dane were nudging it forward into general accessibility. I give vector tiles and vector rendering another 1-2 years before it tips from weird research and supervised, commercial deployment into wide use and hacking.

22 Jun 06:02

Eastbound Cycling Access on Pacific Across Burrard Needed

by Richard Campbell

The intersection of Burrard and Pacific is currently pretty bad for everyone walking, cycling and driving. It has one of the highest numbers of motor vehicle crashes in Vancouver, due mainly to the right turn lane from Pacific to the Bridge. It is pretty dicey for cycling as well. Drivers often block the bike crossing while they are waiting to merge onto the bridge, forcing cyclists to either wait or dodge them.

Assuming the proposed changes are approved by Council, Burrard and Pacific will join Burrard and Cornwall as one of the best intersections in North America for cycling. People cycling and walking will be protected from turning vehicles by separate signal phases, greatly reducing the chances of crashes. It will also be great to be allowed to walk on the east side of the Bridge again!

However, the proposed plan is missing an all ages and abilities cycling connection eastbound along Pacific across Burrard. This connection is needed to provide basic access to businesses and residences in the area. With two right turn lanes heading onto the Bridge from Pacific, riding on the road with traffic will be an even dicier proposition than it is today.

As well, the grade on Beach and Pacific is much less than the grades on Thurlow (14%) and Hornby (10%), so Beach and Pacific is naturally a better cycling route than the other options for accessing downtown via Burrard and Hornby. This is likely why it was included in the City’s 1999 Bicycle Plan.

Without eastbound access across Burrard, there is really no reasonable bicycle access to downtown from residences and businesses in the area bounded by Harwood, the west side of Burrard, Thurlow, and the north side of Pacific. Without this access across Burrard, the only options for all but the bravest are to cycle to Bute then up to the alley that connects to Drake, or to cycle to Thurlow then down to Beach or the Seaside Greenway then back up Hornby. It’s as confusing as it sounds. Either option is several hundred metres long and requires crossing more intersections, taking more time and exposing people to greater risk of a collision. Even worse, should a visitor or wayward local find their way to the southeast side of the intersection, since all the bike lanes are one-way leading to that side of the intersection and there is no eastbound access across Pacific, the only allowed option is to cycle over the Bridge and then cycle back across the Bridge.

As this is pretty unreasonable, the result will be people not cycling, riding on the sidewalk, cycling in crosswalks or riding the wrong way in one-way bike lanes. Not safe for people cycling or walking.

Cycling Routes Should be Direct and Obvious

With Burrard Bridge and cycling in general proving popular with visitors to Vancouver, bike routes need to be obvious. I have witnessed many a tourist, some with maps open, at the north and south ends of the bridge wondering how to get anywhere.

In the following photos, four people, likely visitors, make their way across Pacific to get to the bicycle rental shop on Burrard. A sure sign that better access across Burrard is needed.


As well, the goal is to attract new people to cycling. These people will be used to navigating around the city by car, foot and transit. If cycling connections are provided along the roads that they are used to, it will be much easier for them to find their way around the city by bike.

Overloading Weak Network Links

The only other eastbound all ages cycling route across Burrard south of Comox is the Seaside Greenway. The section of the Greenway between Burrard Bridge and Hornby is rather problematic with the current levels of bicycle traffic. Providing people with a route to avoid the problem areas is a good idea that will likely reduce cycling and walking conflicts and injuries.

Possible Solutions

There are a couple of possible solutions that would provide people with good bicycle access across Burrard. Sadly, one involves removing the large tree at the intersection. The other, really worth considering, is removing the eastbound traffic lane on Pacific across Burrard, which could allow enough space for the bike lane while sparing the tree. Unlike for people cycling, where there are two bicycle rental shops, there are no destinations for which the eastbound lane on Pacific from Thurlow to Hornby provides needed motor vehicle access. Traffic on eastbound Pacific is pretty low and so is eastbound traffic on Beach. Beach also doesn’t have the long traffic light at Burrard, so at most times of the day Beach might even be faster than using Pacific. Traffic from Thurlow would just cross Pacific and head to Beach. The hill up from Beach isn’t a problem for motor vehicles either.

Please email Mayor Gregor Robertson and Council at mayorandcouncil@vancouver.ca, and copy the project team at BurrardBridgeNorth@vancouver.ca, to encourage the City to provide this badly needed cycling access.

More info on the improvements and a questionnaire at:  http://vancouver.ca/burrardbridgenorth


22 Jun 04:55

Music Streaming – Temporary gravy

by windsorr

Stop Press.

  • Apple has now reversed its position and will now pay the labels for music streams during the trial period reverting to the percentage model when users convert from trial to paid.
  • I suspect that Taylor Swift will now allow her album 1989 to be streamed by Apple Music.
  • This about face does not change my view that the real problem with the economics of music streaming is the labels rather than Apple or Spotify.

Music labels are likely to have to share more with the artists.

  • Taylor Swift’s withdrawal from Apple Music (see here) is good news for Spotify but it brings into focus where the real problems with music streaming lie.
  • Taylor Swift’s latest album 1989 will not be available on Apple Music during the three month free trial period.
  • This is because Apple will not be paying artists for their music during this period as per the deals it has struck with the labels.
  • Apple gets the free trial period for free but in return it will pay 71.5% (inside US) and 73% (outside US) of the revenues it receives back to the labels.
  • This is slightly higher than what Spotify pays (70%) but Spotify pays up during its free trials and also for its free users.
  • The labels then share this money with the artists they have as per the terms of the contracts signed with their artists.
  • This is where the big question mark lies as how much the artists receive and how much the labels keep for themselves is a black box.
  • Given how much the artists appear to hate music streaming, I am thinking that the labels keep the vast majority of the revenue for themselves and give only a tiny fraction to the artists.
  • I suspect that when a lot of these contracts with the artists were negotiated, music streaming was a minor issue given that the vast majority of revenues were coming from album sales and digital downloads.
  • Hence, it would appear likely that the percentage of revenue from a streamed track that is paid to an artist is orders of magnitude lower than the percentage from an album sale or digital download.
  • This is why I suspect that Spotify (and soon Apple Music) is deeply unpopular with the artists as they see it as the reason why their revenues are not meeting their expectations.
  • Given Taylor Swift’s withdrawal from Apple Music, it would appear that the economics of a track streamed from Apple Music will be little different than one streamed from Spotify.
  • As a result, I do not see Apple Music being any more popular with the artists than Spotify meaning that it will not be getting any advantage when it comes to securing content for its offering.
  • It also highlights the probability that the real culprits behind artist woes are the labels.
  • As contracts expire and need to be renegotiated, I suspect that much greater attention will be paid to this form of revenue generation and that the labels will have to give the artists a bigger share.
  • This also highlights one of the major weaknesses of Tidal where many of the acts on stage at its launch actually had no say on where and how their content is accessed.
  • Although the financial impact on Spotify and Apple Music of the artists getting a better deal from the labels will be neutral, there will be a benefit all round.
  • This is because if artists finally start making a decent return from music streaming they will be more willing to engage with it which will mean a larger opportunity for all players.
  • I still think that the streaming market will be carved up between Spotify and Apple as many of the other players are too small or too beset with problems to make it in the long term.

 

 

 

15 Jun 07:49

Book review: Mastering Gephi Network Visualisation by Ken Cherven

by Ian Hopkinson

A little while ago I reviewed Ken Cherven’s book Network Graph Analysis and Visualisation with Gephi; it’s fair to say I was not very complimentary about it. It was rather short and had quite a lot of screenshots. Its strength was in introducing every single element of the Gephi interface. This book, Mastering Gephi Network Visualisation by Ken Cherven, is a different, and better, book.

Networks in this context are collections of nodes connected by edges, networks are ubiquitous. The nodes may be people in a social network, and the edges their friendships. Or the nodes might be proteins and metabolic products and the edges the reaction pathways between them. Or any other of a multitude of systems. I’ve reviewed a couple of other books in this area including Barabási’s popular account of the pervasiveness of networks, Linked, and van Steen’s undergraduate textbook, Graph Theory and Complex Networks, which cover the maths of network (or graph) theory in some detail.

Mastering Gephi is a practical guide to using the Gephi network visualisation software; it covers the more theoretical material regarding networks in a peripheral fashion. Gephi is the most popular open source network visualisation system of which I’m aware; it is well-featured and under active development. Many of the network visualisations you see of, for example, twitter social networks, will have been generated using Gephi. It is a pretty complex piece of software, and if you don’t want to rely on information on the web or taught courses, then Cherven’s books are pretty much your only alternative.

The core chapters are on layouts, filters, statistics, segmenting and partitioning, and dynamic networks. Outside this there are some more general chapters, including one on exporting visualisations and an odd one on “network patterns” which introduced diffusion and contagion in networks but then didn’t go much further.

I found the layouts chapter particularly useful, it’s a review of the various layout algorithms available. In most cases there is no “correct” way of drawing a network on a 2D canvas, layout algorithms are designed to distribute nodes and edges on a canvas to enable the viewer to gain understanding of the network they represent.  From this chapter I discovered the directed acyclic graph (DAG) layout which can be downloaded as a Gephi plugin. Tip: I had to go search this plugin out manually in the Gephi Marketplace, it didn’t get installed when I indiscriminately tried to install all plugins. The DAG layout is good for showing tree structures such as organisational diagrams.

I learnt of the “Chinese Whispers” and “Markov clustering” algorithms for identifying clusters within a network in the chapter on segmenting and partitioning. These algorithms are not covered in detail but sufficient information is provided that you can try them out on a network of your choice, and go look up more information on their implementation if desired. The filtering chapter is very much about the mechanics of how to do a thing in Gephi (filter a network to show a subset of nodes), whilst the statistics chapter is more about the range of network statistical measures known in the literature.

I was aware of the ability of Gephi to show dynamic networks, ones that evolved over time, but had never experimented with this functionality. Cherven’s book provides an overview of this functionality using data from baseball as an example. The example datasets are quite appealing, they include social networks in schools, baseball, and jazz musicians. I suspect they are standard examples in the network literature, but this is no bad thing.

The book follows the advice that my old PhD supervisor gave me on giving presentations: tell the audience what you are going to tell them, tell them, and then tell them what you told them. This works well for the limited time available in a spoken presentation, since repetition helps the audience remember, but it feels a bit like overkill in a book. In a book we can flick back to remind ourselves what was written earlier.

It’s a bit frustrating that the book is printed in black and white, particularly at the point where we are asked to admire the blue and yellow parts of a network visualisation! The referencing is a little erratic with a list of books appearing in the bibliography but references to some of the detail of algorithms only found in the text.

I’m happy to recommend this book as a solid overview of Gephi for those that prefer to learn from dead tree, such as myself. It has good coverage of Gephi features, and some interesting examples. In places it is a little shallow and repetitive.

The publisher sent me this book, free of charge, for review.

18 Jun 09:14

Summer Reading for Busy Community Pros

by Richard Millington

I imagine you'll have plenty of time to read this summer.

Here are some favourites from the past year. 

1) Amanda Palmer - The Art of Asking. Look specifically at how she engages fans, promotes their problems, and creates a culture of constantly asking for and receiving help. My favourite book of the year.

2) Michael Chwe - Rational Ritual. This is a tougher read, but an important concept to understand. We need to create common knowledge among members. This common knowledge helps members coordinate and take collective action.

3) Ron Friedman - The Best Place To Work. A terrific and practical overview of self-determination theory and its application to the workplace. You don't need to make much of a leap to apply this to building a community too.

4) Matt Lieberman - Social: Why Our Brains are Wired to Connect. A mind-blowing book that covers the science behind social. If you ever want to talk intelligently about how the social parts of the human brain work, read this book.

5) Olivia Fox-Cabane - The Charisma Myth. If you're not as influential, popular, or successful at managing communities or gaining internal buy-in as you think you should be, it's probably not what you're saying - it's how you're saying it. This book has some good tips for being better in person (and online).

Happy summer. 

18 Jun 00:00

Max C Roser and 223 of his closest friends are Enemies of the Truth!



Will Thalheimer, Will at Work Learning, Jun 21, 2015


I think that the question we need to ask here is not how wrong Max Roser was, nor how much damage he has done (both of which, I think, are quantified in this post), but rather: what is it about our existing system for creating and disseminating knowledge in education that makes Max Roser's actions seem reasonable and plausible to Max Roser? Because I'm quite sure he never set out to misinform 223 people. So what led him to, first, believe that the diagram represented a form of knowledge, and second, to share it without verifying the veracity of the information? This is the fundamental problem of education in our society. It is incredibly easy to get people to remember things - too easy (which is why these 'learning outcomes' studies are so misleading). What really matters is remembering the right things, useful things, and usable things. Maybe by studying Max Roser instead of merely complaining about him we can find out how to address this.

[Link] [Comment]
18 Jun 00:00

Tap, Swipe … but not for long.



Dean Groom, Playable, Jun 21, 2015


I think it is useful to observe that different devices are used for different purposes. For example, we tend to prefer large devices - such as desktop computers - if we are using the computer for a long period of time. Now true, I read What is Code on my phone (the first time; I reread it on a desktop and enjoyed all the animations too the second time). But reading 38,000 words on a phone isn't something I normally do (what can I say? I was riveted). So generalizations like "teens are using nothing but phones" may reflect the fact that they're not working at jobs all day (or at least, not desk jobs) more than some trend about the future of computing devices.

18 Jun 21:29

Repeat after me: Alberta isn’t Greece

by Michal Rozworski

Last week it was Andrew Coyne; this week it’s Jack Mintz. Seems all the National Post’s favourite conservative commentators have suddenly decided to offer their Very Serious Advice™ to Alberta’s new government. While Coyne made a spurious comparison between raising the minimum wage and instituting a minimum income, Mintz outdoes him with an even more spurious comparison between Alberta and Greece.

Simply put, it is completely disingenuous to compare Greece to Alberta. Greece has seen its economy lose a quarter of its GDP since 2008 – a level of economic crisis unseen since the Great Depression. Unemployment has spiked to over 25%, youth unemployment is over 50% and poverty is widespread. While private creditors who participated in the pre-crisis boom have been bailed out, Greece has been forced into a vicious spiral of austerity driven by an unsustainable debt.

What’s the situation in Alberta? Alberta is still expected to grow, albeit very slowly, in 2015 according to most economists. Unemployment is up by 1% from a year ago, before the oil price crash. In part this is due to firms trying desperately to find efficiencies and cut costs to maintain profits. The picture is not rosy to be sure, but Alberta is in a wholly different category from Greece.

However, not only are Alberta’s problems completely unlike those of Greece, Mintz is wrong about Greece itself. Mintz joins the chorus of mystification that presents Greece as profligate rather than insolvent. It’s not the flow of “unsustainable deficits” but the stock of crushing debt and insolvency that is driving Greece deeper and deeper into crisis–one openly abetted by creditors hoping to make it an example for anyone else in Europe hoping to free themselves from the yoke of austerity.

Don’t just trust me on this – just ask the IMF. Its own research department has for some time been in the awkward role of playing “good cop” to the bad cop of the IMF negotiating teams in Brussels and Athens as well as their political superiors in Washington. An IMF report released in 2013 admitted that Greece should not have been put through the “extend and pretend” austerity wringer. New reports have focused on the roles of public debt reduction and inequality in hampering global recovery.

Crucially, Greece is part of a monetary union, the Eurozone, without a corresponding fiscal union. It is the worst of both worlds: it is bound to a fixed exchange rate regime within its major trading bloc without any mechanism to alleviate the effects of the uneven and unequal development this can produce. Alberta is part of a Canadian federation that shares the Canadian dollar but also engages in significant internal fiscal redistribution. As a province experiences economic decline due to an external shock (like the recent oil price shock), increased federal transfers help alleviate the economic pain.

Finally, Mintz completely ignores the specificity of oil sands investment. As I’ve written before and as explained by the Financial Times just yesterday, oil sands investments are incredibly long-term (think decades), incredibly costly and geographically fixed. While oil sands producers are revising their future investments downwards, they can’t just pack up and leave. Indeed, the investments that have been made or are scheduled to come online in the next few years will continue to produce (the worse for the planet). The oil and gas industry is expected to increase production until at least 2030.

There is a significant, several-year timeframe to jump-start the economy as the oil sands slowly adjust to lower long-term investment. Clearly the fall in oil prices is having, and will continue to have, effects on Alberta, but there is time to increase public revenues and move away from a boom-and-bust economic model.

Alberta today is an extremely low-tax jurisdiction, yet Mintz completely concentrates on the relatively small increase in corporate tax. He doesn’t mention the equally-important plans to increase personal income taxes on the highest earners. Remember too, that Alberta long ago completely eliminated its sales tax.

The fear-mongering implicit in the wild projection of $9.2 billion in lost investment from a 2% tax raise does not quite jibe with Mintz’s other projection of a meager $50 to $200 million in revenues from the same increase. If tax shifting is so easy, why would companies go to the trouble to uproot investment (remember: long-term, costly, geographically-specific) if they can just shift gains to other jurisdictions on paper? Hiring a few more tax lawyers and accountants is surely cheaper than moving shop to another jurisdiction.

Neither is increasing the minimum wage the dangerous proposition Mintz makes it out to be. That was the topic last week but, in short, it could allow the lowest-wage workers to share more widely in the wage growth that has accompanied the oil boom. It can also start to generate greater internal demand to compensate for the fall in investment.

Likewise, expanding investment in healthcare and education, and finding ways to promote new industries to slowly take over from an extractive sector in crisis, are not economically destructive policies. For instance, as climate change caused by the extractivism that currently powers Alberta’s economy continues to progress, agricultural growing lands are migrating northwards. CIBC’s former chief economist recently provocatively argued that Alberta is well primed to become a new bread-basket, and that carbon mitigation policies could help speed up the transition.

The point is not that any single policy is a magic bullet, but neither is anything on the table now a death wish. Alberta has the time, the resources and the political will to start changing its economic model. Fear-mongering, whether with spurious comparisons to Greece or grim prognostications of the results of mildly redistributive policies, is not only unhelpful but completely disingenuous.


18 Jun 22:30

“What can I, as an individual, do to stop climate change?”

by Stephen Rees

Illustration taken from GreenPeace

One of the benefits of having a blog – and one of its curses too – is that I get things in the email that other people want me to put on my blog. Or write about on my blog. This is one of those: it comes from The Nation, a magazine whose website operates behind a paywall. So I get a complimentary login to see articles which they think I will direct you to. Many are worthy, and I understand why The Nation wants to stay in business and keep paying its journalists to provide content. But, as far as possible, I continue to try to find sources that are not paywalled.

Today the news is full of two things that everybody is writing about: the new Papal encyclical and the latest American shooting atrocity. The Nation has three searing articles about that, and about how neither this church nor this date was randomly picked. And a commencement speech by Naomi Klein to the College of the Atlantic on June 6, 2015.

Mine is not going to be your average commencement address, for the simple reason that College of the Atlantic is not your average college. I mean, what kind of college lets students vote on their commencement speaker—as if this is their day or something? What’s next? Women choosing whom they are going to marry?

So, as it happens, there are a couple of things here that have resonance with me. Firstly, the Atlantic has, very wisely, closed comments on the three articles about the Charleston massacre. After yesterday, I have been seriously thinking that might not be too bad an idea here, but two comments from the Usual Suspects set me straight on that. We do have good discussions here, and one wingnut is not going to be allowed to upset that. Secondly, one of the topics that Naomi Klein addresses speaks to something I have been thinking about.

These days, I give talks about how the same economic model that superpowered multinationals to seek out cheap labor in Indonesia and China also supercharged global greenhouse-gas emissions. And, invariably, the hand goes up: “Tell me what I can do as an individual.” Or maybe “as a business owner.”

The hard truth is that the answer to the question “What can I, as an individual, do to stop climate change?” is: nothing. You can’t do anything. In fact, the very idea that we—as atomized individuals, even lots of atomized individuals—could play a significant part in stabilizing the planet’s climate system, or changing the global economy, is objectively nuts.

Recently Jane Fonda visited Jericho Beach and spoke there about pipelines and coastal tankers and whatnot, and of course the commenters weighed in as usual, being snide about how Jane chose to travel here, and thus was some kind of hypocrite because that trip used fossil fuel. Just as the same cabal has chided Al Gore for his campaigning on the same topic.

Maybe the Pope is going to be different. Maybe his speech will start the moral shift that is needed in the corridors of power to finally address the issue. Of course the fact that someone inside the Vatican leaked the encyclical (not a usual turn of events) and that Jeb Bush was already out front of it seem to point in the direction that the pontiff will be going. A bit like the way the President has had to acknowledge on gun control.

But continuing the “fair use” privilege, here is how Naomi Klein sees it towards the end of her speech:

….the weight of the world is not on any one person’s shoulders—not yours. Not Zoe’s. Not mine. It rests in the strength of the project of transformation that millions are already a part of.

That means we are free to follow our passions. To do the kind of work that will sustain us for the long run. It even means we can take breaks—in fact, we have a duty to take them. And to make sure our friends do too.

And, as it happens, you can also watch – for free – what Naomi Klein said on YouTube.

And here is what she has to say about the Pope’s new message.


18 Jun 14:26

Price Tags: The Colour of My Voice

by pricetags

Some readers are confused about which ‘voice’ is speaking in a Price Tags post at any one time. Is it me, the blogger, or someone like the Daily Scot? Is it a quote from another source? Do I speak in italics, indents or plain text?

In an experiment starting today, I’ll speak in colour (Irish green, they call it), whether in plain text or italics. I’m not sure whether the colour will be picked up in all media, but it’s worth a try to provide more clarity.

I’m trying out different colours. Let me know if it works for you.


18 Jun 14:39

Morning Thought: The Ecology of Housing Prices

by pricetags

It’s hardly an original thought to think of housing as ecological: nothing stands in isolation, no piece is separate from the whole.  My favourite example: the relationship of the West End to Downtown South in the 1990s.  As thousands of units came on stream east of Burrard, they took the pressure off the West End, which hardly changed at all in either unit numbers or rental rates, when adjusted for inflation.  It remained, as it largely does today, a lower middle-income neighbourhood.

Despite perception at the time, the West End was not an island, isolated from the changes that occurred beyond its borders.   Rather, its stability was in large part a consequence of that adjacent turbulence: those who arrived in the city could find accommodation in new buildings without having to compete with those in older rental suites.

That’s not the situation with single-family homes, since they’re not making any more of them, while the condo market, at least, seems to have flattened out with additional supply.

Which makes me wonder how much the prices of single-family homes in markets beyond those impacted directly by investment and foreign capital are secondarily affected – when, say, a boomer couple cash out their west-side bungalow for millions more than anticipated and then pass along several hundred thousand to their children to purchase east of Main.

How much has that rising tide of money lifted prices elsewhere, and for whom and how many has it kept an unaffordable city still within achievable reach?  Is that what explains how those who, by income, can’t afford to live or buy in Vancouver nonetheless do?  And is it a significant portion of the market?

Gee, some data might be useful.


18 Jun 14:15

chartier: Can we take a moment to salute whomever at Apple...





chartier:

Can we take a moment to salute whomever at Apple invented this little pull tab for opening accessories? This is serious accessory opening innovation.

Agreed.

18 Jun 15:04

Vancouver Heritage Foundation Summer Season

by pricetags

Walking Tours: Commercial Drive – Little Italy

.

If you enjoyed Italian Day last weekend, you will love this chance to learn a bit more about this diverse neighbourhood. The section of Commercial Drive known as Vancouver’s “Little Italy” represents 50 years of history. This neighbourhood has changed dramatically over the last decade with businesses reflecting the changes in the community. Join Maurice Guibord for a walk down Commercial Drive where he’ll talk about “Kits-ification,” the role of family businesses, coffee culture, and of course, development pressures.

Saturday, June 27 (limited availability) and Friday, July 10 
10am – 12pm

Register Here $15 (inc. tax)

Upcoming Walking Tours:

July 18th – The Edge of Shaughnessy w/ John Atkin
July 25th – Post War Architecture w/ Maurice Guibord

Click here to view all upcoming walking tours.

John Atkin’s walking tours on June 20th and July 4th are currently at capacity. If you would like to be added to the wait list please contact the office.


Update: The Heritage Site Finder interactive map

.

Last year VHF launched the beta version of an interactive map which plots all sites listed on Vancouver’s Heritage Register. We are happy to announce that thanks to the dedication and hard work of a wonderful team of volunteers, most of the 2,200 sites found on the map now have a current image! There is also now a mobile-friendly version thanks to the team at Split Mango Media Inc.

You can use the map as a tool to see which heritage sites are already listed or designated, and which ones aren’t but could be nominated to the Heritage Register Update. Nominations are open until September 12.
 .
Click Here to visit the latest version of the Heritage Site Finder. Please note that if you have viewed the map previously, you may need to clear your browser history to see all the updates. We are also now looking for volunteer researchers to help add information about each site listed on the map. If you are interested in helping out, please contact Special Project Coordinator Jessica Quan.


Mid-Century Modern House Tour

.

Photo credit: Martin Knowles Photo/Media

Coming up this September is the Mid-Century Modern House Tour. Spend an afternoon exploring examples of this influential style including a 1962 home designed by Arthur Erickson. Celebrating the architecture of the West Coast Modernists, this tour gets inside Vancouver examples of this remarkable style. Emphasizing a connection to nature, natural materials and simple construction methods, Mid-Century Modernism offers beauty in its simplicity. More details of this year’s tour will be released in the next few weeks.

Saturday, September 26
Tour 1 – 5 pm, Reception 5 – 7 pm

Bus $110, Self-guided $90

New for 2015! Early Bird pricing – until July 10th only
Bus $100, Self-guided $85

*Pricing does not include GST, which is applied to the non-donation cost only



17 Jun 06:00

Why Doesn't Creativity Matter in Tech Recruiting?

by James Hague

A lot of buzz last week over the author of the excellent Homebrew package manager being asked to invert a binary tree in a Google interview. I've said it before, that organizational skills beat algorithmic wizardry in most cases, probably even at Google. But maybe I'm wrong here. Maybe these jobs really are hardcore and no day goes by without implementing tricky graph searches and finding eigenvectors, and that scares me.
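
For anyone who hasn't run into the puzzle, here is a minimal sketch of the "invert a binary tree" question being referenced; the node shape and names below are purely illustrative, not anything from the actual interview:

    // Swap the left and right subtrees of every node (illustrative sketch only).
    function invertTree(node) {
      if (node === null) return null;
      const invertedLeft = invertTree(node.left);
      node.left = invertTree(node.right);
      node.right = invertedLeft;
      return node;
    }

    // Example: a root with children 2 and 3 ends up with children 3 and 2.
    const root = {
      value: 1,
      left:  { value: 2, left: null, right: null },
      right: { value: 3, left: null, right: null }
    };
    invertTree(root); // root.left.value is now 3, root.right.value is now 2

Whether being able to produce that on a whiteboard predicts anything about day-to-day work is, of course, exactly what's in question here.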

A recruiter from Google called me up a few years ago, and while I wasn't interested at the time, it made me wonder: could I make it through that kind of technical interview, where I'm building heaps and balancing trees on a whiteboard? And I think I could, with one caveat. I'd spend a month or two immersing myself in the technical, in the algorithms, in the memorization, and in the process push aside my creative and experimental tendencies.

I hope that doesn't sound pretentious, because it's a process I've experienced repeatedly. If I focus on the programming and tech, then that snowballs into more interest in technical topics, and then I'm reading programming forums and formulating tech-centric opinions. If I get too much into the creative side and don't program, then everything about coding seems much harder, and I talk myself out of projects. It's difficult to stay in the middle; I usually swing back and forth.

Is that the intent of the hardcore interview process? To find people who are pure programming athletes, who amaze passersby with non-recursive quicksorts written on a subway platform whiteboard, and aren't distracted by non-coding thoughts? It's kinda cool in a way--that level of training is impressive--but I'm unconvinced that such a technically homogeneous team is the way to go.

I've always found myself impressed by a blend between technical ability and creativity. The person who came up with and implemented a clever game design. The person doing eye-opening visualizations with D3.js or Processing. The artist using a custom-made Python tool for animation. It's not creating code so much as coding as a path to creation.

So were I running my ideal interview, I'd want to hear about side projects that aren't pure code. Have you written a tutorial with an unusual approach for an existing project? Is there a pet feature that you dissect and compare in apps you come across (e.g., color pickers)? And yes, Homebrew has a number of interesting usability decisions that are worth asking about.

(If you liked this, you might enjoy Get Good at Idea Generation.)

18 Jun 14:15

Drafts 4.3

by Federico Viticci

A great update to Drafts was released earlier this week, and it's got some interesting changes for users who manage a lot of notes or save bits of text in the same notes on a regular basis.

The Drafts extension can now offer to append/prepend whatever it receives (some text, a URL, etc.) to existing notes – useful to keep a running list of items without ending up with multiple notes or having to merge them manually every time. This is useful for me when I want to assemble lists of links I can use for MacStories or Relay.

The Drafts Share extension (used from the Share sheet in other apps) now supports appending and prepending to inbox drafts as well as capture of new drafts. To use these options in the share sheet, tap the “Append” or “Prepend” buttons at the bottom of the window and select the draft to add the text to.

You can also run an action on multiple notes at once now:

When using the “Select” and “Operations” options below the drafts list, there is now a “Select All” option to quickly select all drafts in the current tab, and a “Run Action” operation to apply an action to multiple drafts. “Select All” is particularly useful to quickly archive all drafts in the inbox, for example. The “Run Action” operation lets you quickly select multiple drafts and run an action on them. When selecting this operation, the action list will be shown to select the action to run. Some actions (such as ones that leave Drafts) are not supported for multiple selections and will be grayed out in the list.

The most impressive aspect of Drafts is how Greg Pierce manages to keep the app simple and powerful at the same time with features that are there but not in the way. That's an exercise of restraint and good design that can't be appreciated in other apps. Drafts is $9.99 on the App Store.

∞ Read this on MacStories

18 Jun 16:22

There was a Plan B after all – lots of them

by pricetags
Turns out that, like the Mayor of Surrey, the Mayor of Vancouver has a Plan B in the event the referendum fails, as, reportedly, does the Premier (raising the question for many: why did we vote in the first place?).
.

Ian Bailey in The Globe and Mail:

Vancouver working on ‘Plan B’ to fund subway ahead of plebiscite results

.

Vancouver Mayor Gregor Robertson says his administration is working on an alternative to build the Broadway subway across the city, regardless of the outcome of a controversial plebiscite on a new tax to pay for transit and other transportation projects. …

“Obviously, we’re hopeful we have a Yes vote and we can proceed as planned with the mayors’ 10-year investment, but if that doesn’t work out, we’ll go to Plan B and look at alternatives.”

However, the mayor declined to offer specifics on the plan for building the $3-billion project from Commercial Drive to Arbutus, currently hinging on plebiscite funding as well as money from the federal and B.C. governments.

Asked for details on Plan B, Mr. Robertson said, “It’s too early to say.” …

Gord Price, director of the City Program at Simon Fraser University, said he expects that Mr. Robertson is talking about some kind of levy on development to help raise money, but saluted the Vancouver leader for affirming the necessity of expanding transit in the region.

“I’m surprised. If he’s got a mechanism for capital projects of this scale, the implications are profound,” Mr. Price said in an interview on Wednesday.

Still, he said he was curious about how the city would cover operating costs for such a project.

Mr. Price suggested the entire plebiscite process has been a waste that postponed the need for hard decisions on transit in the region. B.C. Premier Christy Clark announced the plebiscite concept as part of the platform for the B.C. Liberals ahead of campaigning for the 2013 provincial election.

“The Premier forced it on us,” Mr. Price said. “Every part of it was a waste.” …

.

Correction on my part: the Broadway subway could likely cover its operating costs (depending on how much of the debt servicing is included in annual budgets, which in turn is dependent on the scale of the project and who provides the financing).

I’m skeptical that Surrey’s light-rail proposal could cover operating costs, but the same criteria apply: It will no doubt be a P3, with senior governments providing most of the funding, given the electoral importance of the municipality (expect an announcement by the Feds prior to October). A development cost charge on new development could provide some bucks without tapping existing property tax excessively. So it’s presumably possible to make that deal pencil.

But here’s the thing: rail lines do not a transit system make.  The concept of a Frequent Transit Network is primarily dependent on the bus lines that provide coverage and service to the region as a whole, and feed the rail lines that provide cross-region services at higher speeds and frequency.  (Jarrett Walker discusses Vancouver’s FTN here.)

Without that integration and continued growth of the bus network, the system as a whole fails to justify the costs of the expensive rail lines. Worse, with no new funding for TransLink, the temptation will be to cannibalize the bus system if the agency has to pay for new services – and post-referendum, there’s a very good chance of that happening.


18 Jun 18:09

Three of the best people I’ve ever worked with…

by Doug Belshaw

…were colleagues at Mozilla. Like me, they’re all doing their own thing now. It would be remiss of me not to point you in their direction, on the off-chance you’re not already aware of their work.


Laura Hilliger (@epilepticrabbit)

Laura Hilliger

I’ve known Laura for a while as she wrote her Master’s thesis on web literacy. She left her position as Education & Training Lead at Mozilla last week to pursue the next chapter of her career, explaining her move in this post.

Laura is an American living in Dresden, Germany. You should hire her to do anything learning-related as she has the creativity and capacity to get anything done that anyone throws at her! She’s super-talented.

Laura’s LinkedIn profile


John Bevan (@bevangelist)

John Bevan

John seems to know everyone. It was kind of his job to do so when he was on the Engagement team at Mozilla, getting the word out and raising money. Since then, he’s been at Nesta, and he’s also worked for the BBC, The Guardian, Rewired State, you name it.

His current focus of attention is dotcomrades, doing something about trade unions in the networked era. I went to the alpha launch event in London last week and he’s definitely onto something. Find out more and get involved via his blog post.

John’s LinkedIn profile


Kat Braybrooke (@codekat)

Kat Braybrooke

Kat is one of those people – like many I worked with at Mozilla – who defies categorisation. She’s talented technically, but driven by cultural endeavours and her sharp designer’s eye.

Leaving Mozilla in February, Kat upped sticks from Vancouver and moved to London. She’s “taking on small contracts with value-based projects that aim to make the world a better place” and is particularly interested in “web development, curriculum design, participatory research or community curation.” More about that in this blog post.

Kat’s LinkedIn profile


There are so many talented people who have left Mozilla over the last year that this could have been a fairly long list. I wanted to point you towards Laura, John, and Kat as I think there’ll be some of you who could benefit from knowing them better.

As for me, I couldn’t be happier at the moment. I’m working primarily with City & Guilds around digital strategy and Open Badges, as well as the occasional workshop and keynote for other organisations. I’m designing a little bit more capacity into my schedule from September onwards, so if you’re interested drop me a line: hello@nulldynamicskillset.com or let’s connect via my LinkedIn profile!

18 Jun 17:19

3 reasons why the phone is here to stay

by Kevin Ives

With the rise of social media and other channels your customers can use to reach out to you, is providing phone support still important?  Let’s also keep in mind that of all of the support channels, phone is the most expensive in terms of resources.  This is because while an Agent is on a phone... Read more »

The post 3 reasons why the phone is here to stay appeared first on Desk.com.

18 Jun 19:00

Google releases its official Clock app to the Play Store

by Igor Bonifacic

If you own a non-Nexus device and simply can’t take the look and feel of your smartphone’s clock app, Google has a treat for you. Continuing its trend of separating core apps from Android, the company has released its official clock app to the Play Store.

In a bit of a treat for Android owners, the version of the app that’s hitting the Play Store today is the same one that’s included with Android M. This version brings with it the ability to select the day of the week recurring alarms start on, though only Saturdays, Sundays and Mondays are an option. Beyond that, love it or hate it, it’s the same clock app that’s been with us for several years now.

Google’s first-party clock app is available on the Play Store, and is compatible with any Android device running KitKat 4.4 and up. There are also six different versions of the app available on APK Mirror, in case you don’t like how the current version looks.

18 Jun 19:07

Memorial Ride for Peter (Zhi Yong) Kang

by jnyyz

Peter Kang was killed while riding his bike last week. The driver was caught after a police chase. The memorial ride was this morning.

A group of riders gathered at the start. The mood was particularly somber as there was a memorial ride a little over a week ago, and everyone knew that there would be another in two days’ time.

These canine friends support our cause.

Bilenky’s hard at work.

Dave with the banner.

Discussing the route.

These are classmates of the deceased. They told me that Peter studied and taught mechanics in Beijing before emigrating to Canada.

Joey makes the announcements.

And we’re off, headed north on Spadina. It’s going to be a long haul up to just east of Jane/Finch.

Westbound on Davenport, the bike lane fades in and out of existence, and in any case, we are hemmed in between traffic and parked cars.

Paying our respects with a moment of silence at Tom Samson’s ghost bike.

Headed north again.

On Jane

Under Black Creek.

Now a little east on Sheppard.

Just before we turned north, a driver cut very close to the pack; one rider banged his fist against the car, and then the driver started driving erratically. We had to hem him in so that he wouldn’t run anyone over.

Preparing to turn west on Finch.

At the crash site.

On the ride home, a glimpse of the ghost bike for Adrian Dudzicki.


It is bitterly ironic that the Toronto Transportation department tweeted this picture on May 13, 2015.

Since then:


18 Jun 18:26

Why aren't the BigCo's converging on JavaScript?

Everywhere I look individual programmers are getting on board with JavaScript. It really is something. After a couple of decades of fragmentation in the development world, we now have what I called, in 1995, a consensus platform. Chances are pretty good if you and I are working on server code, we're both working in Node.js. And if you and I are writing code that runs in the browser, the chances are 100 percent that we are both working in JavaScript.
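
As a toy sketch of what that consensus looks like in practice (the file name and function below are my own illustration, not anything from this post), the same plain JavaScript can run unchanged under Node.js and in the browser:

    // label.js: one small function, two runtimes (illustrative sketch only).
    function dayLabel(date) {
      // Plain standard JavaScript; no server-only or browser-only APIs involved.
      return "Today is " + date.toISOString().slice(0, 10);
    }

    // In Node.js: node -e "console.log(require('./label.js').dayLabel(new Date()))"
    if (typeof module !== "undefined" && module.exports) {
      module.exports = { dayLabel };
    }

    // In a browser: <script src="label.js"></script>, then call dayLabel(new Date()).

That kind of sharing is what makes the consensus-platform claim concrete: the browser dictates JavaScript, and Node.js carries the same language to the server.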

Yet almost all the big companies are trying to create their own languages, presumably with proprietary or patented secret sauces, that are not JavaScript.

If we were healthy as an industry in ways that we are clearly not, we would see this coming-together as an opportunity to become more efficient. We'd be looking for opportunities to factor redundancy from our platforms, for example reducing our reliance on CSS and HTML, and perhaps eliminating the need for server code. These are serious possibilities. There isn't much functionality left that must be on the server. If we concentrated real hard, we could make those go away.

But the BigCo's seem to want the chaos? And as a result they'll need lots more programmers to maintain all the incompatible stacks. I don't think this is driven by business needs; rather, it's programmers trying to be sure they continue to have jobs. Re-inventing stuff that already works pretty well. Job security.

Reminds me of all the incompatible BigCo networking products that were swept off the table by the emergence of the web as the consensus platform in the early 90s. JavaScript is that strong a force in 2015.