Shared posts

24 Nov 13:06

Square Announces New Square Reader for Apple Pay, Contactless & Chip Cards

by Graham Spencer

Square yesterday announced a new Square Reader, designed to work with contactless payment services such as Apple Pay, as well as "chip and PIN" EMV cards. The new contactless Square Reader can be reserved from today for $49.

The new contactless Square Reader is a pocket-sized square puck (naturally), which can be used wirelessly with the Square app on an iOS or Android device. For customers with an EMV card, there's also a slot on one side of the new Square Reader to insert the card's chip into the device.

For now at least, Square's mobile payment processing for small businesses is limited to the United States, Canada and Japan. The launch of this new contactless Square Reader may help boost Square's international expansion efforts, particularly in countries such as the UK and Australia, where EMV cards are more widely adopted.

24 Nov 13:30

Beamer 3 Launches with New User Interface, Google Chromecast Support

by Graham Spencer

Following a two month public beta period, Beamer 3 was released earlier this month. Beamer, a favorite of the MacStories team, is a Mac app that allows you to easily stream video (in almost any format) to your Apple TV via AirPlay. In Beamer 3, streaming support has expanded beyond AirPlay and it can now stream to Google Chromecast.

The user interface has also been redesigned in Beamer 3, and now fits in better with OS X Yosemite and El Capitan. The new UI isn't just prettier, it's also more useful because it provides easier access to audio track and subtitle controls. Another new feature enables you to skip to the next video in your Beamer queue by double clicking the play button on the Apple Remote. All of the new features are listed here.

Beamer 3 is available for $19.99 for new customers, and existing Beamer customers can "pay what you want" to upgrade to Beamer 3. Beamer 3 requires OS X Yosemite or El Capitan.

24 Nov 15:36

Course Management and Collaborative Jupyter Notebooks via SageMathCloud

by Tony Hirst

Prompted by a joint course module team to look at options surrounding a “virtual computing lab” to support a couple of new level 1 (first year equivalent) IT and computing courses (they should know better?!;-), I had another scout around and came across SageMathCloud, which looks at first glance to be just magical:-)

SageMathCloud is an open source, cloud hosted system [code]; the free plan allows users to log in with social media credentials and create their own account space:


Once you’re in, you have a project area in which you can define different projects:

I’m guessing that projects could be used by learners to split out different projects within a course, or perhaps to use a project as the basis for a range of activities within a course.

Within a project, you have a file manager:


The file manager provides a basis for creating application-linked files; of particular interest to me is the ability to create Jupyter notebooks…


Jupyter Notebooks

Notebook files are opened into a tab. Multiple notebooks can be open in multiple tabs at the same time (though this may start to hit server performance? pandas dataframes, for example, are held in memory, and the SMC default plan could mean memory limits get hit if you try to hold too much data in memory at once?)


Notebooks are autosaved regularly – and a time slider that allows you to replay and revert to a particular version is available, which could be really useful for learners? (I’m not sure how this works – I don’t think it’s a standard Jupyter offering? I also imagine that the state of the underlying Python process gets dislocated from the notebook view if you revert? So cells would need to be rerun?)



Several users can collaborate on a project. I created another me by creating an account using a different authentication scheme (which leads to a name clash – and I think an email clash – but SMC manages to disambiguate the different identities).


As soon as a collaborator is added to a project, they share the project and the files associated with the project.


Live collaborative editing is also possible. If one me updates a notebook, the other me can see the changes happening – so a common notebook file is being updated by each client/user (I was typing in the browser on the right with one account, and watching the live update in the browser on the left, authenticated using a different account).


Real-time chatrooms can also be created and associated with a project – they look as if they might persist the chat history too?



The SageMathCloud environment seems to have been designed by educators for educators. A project owner can create a course around a project and assign students to it.

(It looks as if students can’t be collaborators on a project, so when I created a test course, I uncollaborated with my other me and then added my other me as a student.)


A course folder appears in the project area of the student’s account when they are enrolled on a course. A student can add their own files to this folder, which can then be inspected by the course administrator.


A course administrator can also add one or more of their other project folders, by name, as assignment folders. When an assignment folder is added to a course and assigned to a student, the student can see that folder, and its contents, in their corresponding course folder, where they can then work on the assignment.


The course administrator can then collect a copy of the student’s assignment folder and its contents for grading.


The marker opens the folder collected from the student, marks it, and may add feedback as annotations to the notebook files, returning the marked assignment back to the student – where it appears in another “graded” folder, along with the grade.



At first glance, I have to say I find this whole thing pretty compelling.

In an OU context, it’s easy enough imagining that we might sign up a cohort of students to a course, and then get them to add their tutor as a collaborator who can then comment – in real time – on a notebook.

A tutor might also hold a group tutorial by creating their own project and then adding their tutor group students to it as collaborators, working through a shared notebook in real time as students watch on in their own notebooks, and perhaps may direct contributions back in response to a question from the tutor.

(I don’t think there is an audio channel available within SMC, so that would have to be managed separately?)


So what else would be nice? I’ve already mentioned audio collaboration, though that’s not essential and could be easily managed by other means.

For a course like TM351, it would be nice to be able to create a composition of linked applications within a project – for example, it would be nice to be able to start a PostgreSQL or MongoDB server linked to the Jupyter server so that notebooks could interact directly with a DBMS within a project or course setting. I also note that the IPython kernel being used appears to be the 2.7 version, and wonder how easy it is to tweak the settings on the back-end, or via an administration panel somewhere, to enable other Jupyter kernels?

I also wonder how easy it would be to add in other applications that are viewable through a browser, such as OpenRefine or RStudio?

In terms of how the backend works, I wonder if the encapsulation would be useful (eg in context of Why doesn’t Sandstorm just run Docker apps?) compared to a simpler docker container model, if that indeed is what is being used?

24 Nov 13:26

Your outlines are useless. You need a fat outline.

by Josh Bernoff

When you’re planning to write, but before you’re actually writing, you create an outline. Unfortunately, most outlines are worthless. You need a better outline: a fat outline. Outlines are helpful for mapping out the structure of a long piece of writing — anything more than 1,000 words (a couple of pages). An outline ought to help the … Continue reading Your outlines are useless. You need a fat outline. →

The post Your outlines are useless. You need a fat outline. appeared first on without bullshit.

24 Nov 16:24

Google announces Android Studio 2.0 with a faster Android emulator, Instant Run and more

by Evan Selleck
Recently, Google took part in the Android Dev Summit and showed off the newest version of its Android Studio, officially unveiling version 2.0 to the world. Continue reading →
24 Nov 16:51

What’s The Opposite Of A Poisonous Jelly Bean?

by Richard Millington

Groups are persuaded more by emotive imagery than rational facts. Facts can help shape the image, but the image reigns supreme.

Consider putting the refugee situation in these terms:

“If I gave you a bag of 50,000 jelly beans and told you 100 are poisonous, you wouldn’t accept them right? Then why would we accept 50,000 refugees if some of them are bad?”

As Marginal Revolution calculates, statistically the average American is more likely to commit a murder than an incoming refugee.

Yet the imagery here matters. We can picture a bowl of jelly beans. We can imagine a poisonous jelly bean among them, and we can imagine how dangerous it would feel to eat one.

That’s powerful, emotive, imagery. The only way to beat this is to find more powerful imagery, not more powerful facts. Comparing refugees to murderers just heightens fear. Comparing refugees to asylum seekers from a bygone era just evokes more negative imagery.

One approach is to start talking about an army of incoming doctors, builders, teachers, and fresh young blood to keep America healthy, smart, young, and strong.

You can’t fight powerful images with powerful facts. You have to fight powerful images with even more powerful and more emotive images. The only thing that can defeat fear is a more unifying, more emotive, more powerful image people can identify with.

24 Nov 15:07

GPIO Zero: a friendly Python API for physical computing

by Ben Nuttall

Physical computing is one of the most engaging classroom activities, and it’s at the heart of most projects we see in the community. From flashing lights to IoT smart homes, the Pi’s GPIO pins make programming objects in the real world accessible to everybody.

Some three years ago, Ben Croston created a Python library called RPi.GPIO, which he used as part of his beer brewing process. This allowed people to control GPIO pins from their Python programs, and became a hit both in education and in personal projects. We use it in many of our free learning resources.

However, recently I’ve been thinking of ways to make this code seem more accessible. I created some simple and obvious interfaces for a few of the components I had lying around on my desk – namely the brilliant CamJam EduKits. I added interfaces for LED, Button and Buzzer, and started to look at some more interesting components – sensors, motors and even a few simple add-on boards. I got some great help from Dave Jones, author of the excellent picamera library, who added some really clever aspects to the library. I decided to call it GPIO Zero as it shares the same philosophy as PyGame Zero, which requires minimal boilerplate code to get started.


This is how you flash an LED using GPIO Zero:

from gpiozero import LED
from time import sleep

led = LED(17)

while True:
    led.on()
    sleep(1)
    led.off()
    sleep(1)

(Also see the built-in blink method)

As well as controlling individual components in obvious ways, you can also connect multiple components together.


Here’s an example of controlling an LED with a push button:

from gpiozero import LED, Button
from signal import pause

led = LED(17)
button = Button(2)

button.when_pressed = led.on
button.when_released = led.off

pause()


We’ve thought really hard to try to get the naming right, and hope people old and young will find the library intuitive once shown a few simple examples. The API has been designed with education in mind, and I’ve been demoing it to teachers to get feedback – they love it! Another thing is the idea of minimal configuration – so to use a button you don’t have to think about pull-ups and pull-downs – all you need is the pin number it’s connected to. Of course you can specify this – but the default assumes the common pull-up circuit. For example:

button_1 = Button(4)  # connected to GPIO pin 4, pull-up

button_2 = Button(5, pull_up=False)  # connected to GPIO pin 5, pull-down

Normally, if you want to detect the button being pressed you have to think about the edge falling if it’s pulled up, or rising if it’s pulled down. With GPIO Zero, the edge is configured when you create the Button object, so things like when_pressed, when_released, wait_for_press, wait_for_release just work as expected. While understanding edges is important in electronics, I don’t think it should be essential for anyone who wants to create a simple interactive project.
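
As an illustrative sketch (assuming the wiring from the two button examples above – nothing here beyond the LED and Button interfaces already shown), the same event names drive an LED from either button, whichever way it is wired:

from gpiozero import LED, Button
from signal import pause

led = LED(17)
button_1 = Button(4)                 # default pull-up wiring
button_2 = Button(5, pull_up=False)  # pull-down wiring

# The edge direction is worked out from the pull configuration,
# so when_pressed behaves the same for both buttons.
button_1.when_pressed = led.on
button_2.when_pressed = led.off

pause()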

Here’s a list of devices which are currently supported:

  • LED (also PWM LED allowing change of brightness)
  • Buzzer
  • Motor
  • Button
  • Motion Sensor
  • Light Sensor
  • Analogue-to-Digital converters MCP3004 and MCP3008
  • Robot

Also collections of components like LEDBoard (for any collection of LEDs), FishDish, Traffic HAT, generic traffic lights – and there are plenty more to come.
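
To give a flavour of how a collection behaves – a minimal sketch, assuming three LEDs wired to GPIO pins 2, 3 and 4 – an LEDBoard can be driven as a single device:

from gpiozero import LEDBoard
from time import sleep

leds = LEDBoard(2, 3, 4)  # three LEDs treated as one device

while True:
    leds.on()   # switch all three on together
    sleep(1)
    leds.off()  # and all three off together
    sleep(1)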

There’s a great feature Dave added which allows the value of output devices (like LEDs and motors) to be set to whatever the current value of an input device is, automatically, without having to poll in a loop. The following example allows the RGB values of an LED to be determined by three potentiometers for colour mixing:

from gpiozero import RGBLED, MCP3008
from signal import pause

led = RGBLED(red=2, green=3, blue=4)
red_pot = MCP3008(channel=0)
green_pot = MCP3008(channel=1)
blue_pot = MCP3008(channel=2)

led.red.source = red_pot.values
led.green.source = green_pot.values
led.blue.source = blue_pot.values

pause()


Other wacky ways to set the brightness of an LED: from a Google spreadsheet – or according to the number of instances of the word “pies” on the BBC News homepage!
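
As a rough sketch of how such a feed could be wired up – the brightness_feed generator below is a made-up stand-in for a real data source such as a spreadsheet cell or a word count – any generator yielding values between 0 and 1 can act as the source for a PWM LED:

from gpiozero import PWMLED
from signal import pause
from random import random
from time import sleep

led = PWMLED(17)

def brightness_feed():
    # Hypothetical data source: replace the random value with whatever
    # you want to measure, scaled to the range 0..1.
    while True:
        yield random()
        sleep(60)  # refresh roughly once a minute

led.source = brightness_feed()
pause()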

Alex Eames gave it a test drive and made a video of a security light project using a relay – coded in just 16 lines of code.


Yasmin Bey created a robot controlled by a Wii remote:


Version 1.0 is out now so the API will not change – but we will continue to add components and additional features. GPIO Zero is now pre-installed in the new Raspbian Jessie image available on the downloads page. You can also install it now by entering the following commands into a terminal:

sudo apt-get update

sudo apt-get install python3-gpiozero python-gpiozero

Remember – since the release of Raspbian Jessie, you no longer need to run GPIO programs with sudo – so you can just run these programs directly from IDLE or the Python shell. GPIO Zero supports both Python 2 and Python 3. Python 3 is recommended!

Let me know your suggestions for additional components and interfaces in the comments below – and use the hashtag #gpiozero to share your project code and photos!

A huge thanks goes to Ben Croston, whose excellent RPi.GPIO library sits at the foundation of everything in GPIO Zero, and to Dave Jones whose contributions have made this new library quite special.

See the GPIO Zero documentation and recipes and check out the Getting Started with GPIO Zero resource – more coming soon.

The post GPIO Zero: a friendly Python API for physical computing appeared first on Raspberry Pi.

23 Nov 02:30

Recommended on Medium: Don’t Be An Air Travel Zombie

How to Travel Like a Rational Human Being Over the Holidays

Continue reading on Medium »

23 Nov 22:49

Twitter Favorites: [knguyen] My R&B single is gonna start with me gently whispering "Nguyen."

Kevin Nguyen @knguyen
My R&B single is gonna start with me gently whispering "Nguyen."
24 Nov 00:10

Twitter Favorites: [anildash] This evolution does mean most WordPress sites are more or less centralized now. Hope that enables them to compete as a major social network.

Anil Dash @anildash
This evolution does mean most WordPress sites are more or less centralized now. Hope that enables them to compete as a major social network.
24 Nov 14:33

Twitter Favorites: [rtanglao] miss you both @sillygwailo & @katherinebailey

Roland Tanglao @rtanglao
miss you both @sillygwailo & @katherinebailey
24 Nov 16:44

Peak iPad Mini

by Neil Cybart

The iPad mini's best days are behind it. Using app analytics data from Fiksu and Mixpanel, along with my own iOS device sales estimates and projections, I was able to derive iPad mini sales since launch. Over the past two years, iPad mini sales trends have deteriorated much faster than most people think. When taking into account the move to larger iPhones and iPads, the iPad mini's value proposition has likely been weakened to such a degree that the decline in sales is permanent. We have experienced "Peak iPad mini." More importantly, by analyzing the iPad mini's sales trends, we have better insight as to where the iPhone and iPad product lines are headed and the iOS platform's overall direction when it comes to form factors. 

iPad Mini Has Fallen Out of Favor

Conventional wisdom would suggest that the iPad mini has been the better performing iPad line due to its low price and feature set. In reality, the iPad mini has seen much weaker sales trends compared to its larger 9.7-inch screen sibling. As shown in Exhibit 1, when looking at iPad mini sales on a trailing twelve month (TTM) basis, the device's multi-year sales decline becomes quite obvious. Apple would need to sell close to double the number of iPad minis sold over the last 12 months to set a new sales record, a feat that is looking very unlikely. 

Exhibit 1: iPad Mini Unit Sales (TTM)

While everyone is aware of the iPad's sales troubles, one surprising observation is that most of the iPad's sales decline can be attributed to the iPad mini line. As seen in Exhibit 2, the 7.9-inch screen form factor has seen a dramatic 50% drop in sales on a TTM basis over the past two years while the 9.7-inch iPad line has seen much more resilient sales. This trend seems counterintuitive but provides a very strong clue as to how consumers are thinking about the iPad. When taking into account this granular iPad sales data, Apple management likely had a much easier time deciding to launch the larger iPad Pro. The trend towards larger iPads has already been years in the making.  

Exhibit 2: iPad Unit Sales by Screen Size (TTM)


What Happened?

There has been a polarizing debate as to what happened to iPad sales. Some have blamed slowing sales on longer life cycles while others have pointed to iPhone cannibalization or a combination of the two. The latest theory is that iPad software issues and a flawed App Store are to blame. When considering that the smaller iPad mini is becoming less popular than the larger models, many of those reasons fall by the wayside.

In reality, the iPad was a victim of its own success. The combination of a very popular iPad 2 (decent weight, okay screen, and good battery) and the launch of a smaller iPad mini (with a low price) led to a boom in sales that resulted in iPad sales growth peaking only three months after the iPad mini went on sale.

In retrospect, the iPad mini served as a great precursor to the phenomenon known as larger screen iPhones. There was likely significant demand for an iOS device larger than the current iPhone at the time (iPhone 5's 4-inch screen) but a bit less bulky than the 9.7-inch screen iPad. Soon, all of these reasons began to be wiped away with new, larger iPhones and thinner iPads. While the iPad mini's low price meant the device was the more popular iPad for gifting around the holidays, which is likely still true today, the need for an iPad mini was increasingly being met by the iPhone and larger iPads. 

iPad Mini's Declining Value Proposition

A slowdown in iPad mini sales is not enough to lead to the conclusion that the product line will see a permanent reduction in sales. The Mac serves as a great example of this, as the product seemed to have seen peak sales, only to come roaring back in subsequent quarters. Instead, the Peak iPad mini theory is predicated on the idea that the device's position in the market has fundamentally changed, leading to a weaker value proposition and corresponding cut in sales. One example of this type of value destruction is found with the iPod and how the smartphone led to less demand for dedicated music players. Peak iPod occurred in 2008, a year after the iPhone was released.

The very significant move towards larger screen iPhones has altered the iPad mini's ultimate sales trajectory. As shown in Exhibit 3, when the iPad mini was first introduced in 2012, the world was predominately using 3.5-inch screen iPhones. Meanwhile, the Android phablet movement was only just beginning to take off.  Over the course of the next two years, iPhone sales trended towards bigger screens. Today, a vast majority of iPhones sold include a 4.7-inch or 5.5-inch screen. These larger devices are increasingly serving as better consumption devices, taking away a key value proposition previously held by the iPad mini. 

Exhibit 3: iPhone Sales Breakdown by Screen Size

Meanwhile, on the other end of the spectrum, the iPad Air's resilient sales (as shown in Exhibit 2) suggest that an increasing number of consumers are looking to use an iPad as a laptop and desktop replacement. Accordingly, the iPad mini's smaller screen is not preferred as the reduction in screen real estate leads to a less useful product for media consumption, web surfing, and other basic computing tasks. 

Some have said that the iPad is being squeezed between the iPhone and Mac. In reality, the iPad mini is being squeezed between larger iPhones and the iPad Air and iPad Pro. 

iPad Mini's Price Advantage is Overrated

The iPad mini was launched in 2012 as a defense against Android OEM competitors potentially underpricing the iPad and risking a repeat of the Windows vs. Mac battle. In reality, the iPad mini is still too expensive to compete with pure media consumption devices running Android, but there is no evidence such devices pose a threat to iOS. Many now point to price as the iPad mini's secret weapon that should not be underestimated. In reality, this strength is likely overrated. At $399, a new iPad mini is only $100 less than a new iPad Air and the same price as a year-old iPad Air. This pricing dynamic likely explains why the iPad mini sales have declined more than sales of the more expensive iPad Air models. 

Meanwhile, the year old iPad mini is still $70 more than the entry-level iPod touch, which is the lowest cost entry point for the iOS ecosystem. While some may look at the iPad mini as holding a better future than the iPod touch, that isn't exactly saying much considering the iPod touch's lackluster sales. 

iOS Sales Spectrum

The iPad mini's declining sales provide clues as to iOS form factor trends. Instead of looking at the iPhone and iPad as separate product categories, I like to think of them as existing on the same iOS spectrum but occupying different screen size segments. The fact that iPhone and iPad rely on the same mobile operating system makes this view reasonable. In Exhibit 4, iOS sales according to screen size are depicted for 2013 and 2015.

Exhibit 4: iOS Sales Spectrum (2013 vs. 2015)

In the span of two years, screen size preferences have shifted to larger screens (reflected by the blue line in Exhibit 4 shifting to the right in 2015). Notice the iOS screen sizes that saw the largest sales declines between 2013 and 2015: 3.5-inch iPhone, 4-inch iPhone, and 7.9-inch iPad. A better way to see this trend is to picture the lines in Exhibit 4 as waves, depicted in Exhibit 5. Note two differences: The new sales peaks (in blue) are now found with 4.7-inch iPhones and 9.7-inch iPads while the amplitude of the wave at 4.7-inch iPhones is increasing. Over time, it is conceivable that the wave in blue continues to shift to the right with a higher iPhone crest. 

Exhibit 5:  iOS Sales Spectrum (2013 and 2015)

A few takeaways:

1) Consumers are increasingly allowing iPhones to play a greater role in their lives. Consequentially, purchasing habits are trending to larger screen iPhones. Apple will likely look to the 5.5-inch iPhone form factor as positioned well for potential sales growth, not to mention increased revenue and profit share. There is room for Apple to slim the device's bezels, making the device easier to hold in one hand while maintaining the 5.5-inch screen. 

2) The iPad Pro is positioned well to see an increasing share of iPad sales as consumers position the device as a laptop and desktop replacement. While this replacement transition is not possible for everyone at this time, the definition of work is changing and for many consumers typing is becoming one of the last barriers to switching completely to iPad. 

3) The 7.9-inch iPad form factor is caught in the middle. It is too big to be put in a pocket or held in one hand but too small to replace a laptop or desktop.

iPad Mini Is a Fading Star

Apple is still selling too many iPad minis for the product to be mothballed. However, the more likely path will be a slow yet steady slide into irrelevancy. The product will see more sporadic refreshes, which has already happened with the iPad line, while the value proposition continues to become less compelling. Holiday quarters will represent iPad mini's best sales times, and it will possibly even outsell the iPad Air during certain months, but the elephant in the room will keep a lid on iPad mini's upside. Ultimately, the iPhone is the single biggest threat to both the iPad mini and the broader iPad category as Apple pushes forward in differentiating the iPhone Plus line. Such a device holds the best chance of being the most popular and useful iOS form factor. From Apple's perspective, having the iPad mini be cannibalized by an iOS device with a higher profit margin may actually turn out to be a long-term positive from a financial and strategic perspective.   

Receive my exclusive analysis and perspective about Apple in a daily email containing 2-3 stories (10-12 stories a week). For more information and to sign up, visit the membership page

24 Nov 19:21

Smartphone Second Life: when old devices get a second chance

by Daniel Bader

I thought I had read it wrong. Galaxy S5 Neo? What does that even mean, and why, after nearly two years, would Samsung release an almost identical version of its old flagship smartphone?

The story of the Galaxy S5 Neo is the story of a new reality for Canadians, where a weak dollar converges with the kind of component longevity the PC market began seeing eight or so years ago. With smartphone usage approaching nearly 90 percent of most carriers’ postpaid customer base, those companies need to outfit their shelves with a variety of devices at every price point.

The most notable example of this trend is Apple’s consistent re-framing of its aging handsets for new demographics. This year, the 6s and 6s Plus replaced the iPhone 6 and 6 Plus as the company’s highlight products, but instead of being decommissioned, the phones received a lower price tag and a second life. Similarly, the iPhone 5s became the company’s ostensible entry-level phone.

This strategy is now being imitated by Samsung, the next-biggest smartphone vendor in Canada. After the company released the Galaxy S6 and S6 edge, its plastic predecessors, 2014’s Galaxy S5 and 2013’s Galaxy S4, got pushed down the chain, the latter selling for free at many carriers of record.

As the Canadian dollar continues its weak showing against the Greenback, this trend is likely to continue. High-end products act as aspirational markers for the entire product line, similar to the way a car brand’s premium lineup shines a light on its more conventional sedans.


With the release of the Samsung Galaxy S5 Neo, we’ve reached peak recycling. The phone, which amounts to a regular Galaxy S5 with its Qualcomm-based chip replaced by one of Samsung’s own Exynos processors, allows carriers to position it as both a new device and one that is comfortably familiar. And when one looks at the spec sheet — a 5.1-inch HD display, an octa-core processor, a 16 megapixel camera and waterproofing — it somehow survives the barrage of scrutiny often imposed on a new smartphone.

Samsung gets away with this recycling because the market has reached a point where components — especially the important, high-margin modules like the processor, the display and camera — are cheaper to manufacture, and don’t become obsolete as quickly as they did in the past. In many cases, devices like the Galaxy S5, and its Exynos-equipped Neo counterpart, are better value propositions for many customers than buying a brand new product aimed at the mid-range market, like the Galaxy A5, which was also recently released in this country.

But those two markets are quickly converging. We’re seeing products like the Alcatel OneTouch Idol 3, Moto X Play and HTC One A9, positioned at three price points and audiences, with nearly identical internals (the Snapdragon 615 and 617 are largely differentiated by LTE capabilities). Indeed, the chasm at the top of the charts, where the high-end iPhones and Galaxies beget premiums of $200 to $300 on contract versus their counterparts from LG, Huawei, Sony and others, is becoming even more stark as the mid-range market becomes more diffuse. It’s difficult to find a bad smartphone anymore.


The narrative is even further complicated by the positioning of Canada’s flanker brands like Fido, Koodo and Virgin Mobile. While it’s easy to take for granted that the Big Three carriers, Rogers, Bell and Telus, will stock and offer practically every expensive smartphone, their sub-brands must be more careful about how they position themselves. If you live in a big Canadian city, you’re unlikely to have missed the enormous LG G3 renders associated with Koodo’s and Fido’s latest marketing campaigns, the phone’s friendly dual-tone face beckoning potential customers with, in Koodo’s case, “the best money you’ve never spent.”

When the G4 was unveiled earlier this year, it seemed likely that its predecessor would be quickly whisked away from the madding crowd, especially since the company was pursuing a strategy of recreating its flagships in smaller form factors in devices like the LG G4 Vigor. But LG caught the trend of re-purposing aging devices at just the right time, lowering the phone’s MSRP from over $700 to around $400 and enticing carriers to offer it for free on a two-year contract. And while it’s unlikely we’ll ever know the degree to which the price drop spiked sales, the fact that LG’s near-dead flagship was resurrected as what amounts to a mid-range phone speaks to the market’s need to fill such empty spaces, and to the resilience of the components found inside Android devices released over the past 18 months.

Indeed, 2015 will be seen as the year the smartphone market turned towards the middle. Whether it’s selling modified versions of aging products as new again, as with the Galaxy S5 Neo, or giving them new life, as with the LG G3, smartphones are spending a lot longer on store shelves. That, and the maturity of the platforms in general, is making it increasingly difficult for companies like Apple and Samsung to convince users to spend upwards of $400 to $600 up front for an on-contract phone.

24 Nov 17:31

A fine suburban #sunrise and a vexing #CS6 issue

by Doc Searls

Made a dawn run to the nearby Peets for some dry cappuccinos, and was bathed in glow on my return by one of the most spectacular sunrises I have ever seen. It was post-peak when I got back (to the place where I’m staying in Gold River, California), but with some underexposure and white balance tweaking, I was able to get the shots in this set here.

Alas, the shot above is not in that set. It’s a screen shot I took of an adjusted raw file that Adobe Photoshop CS6 simply refused to save. “The file could not be created,” it said. No explanation. I checked permissions. No problem there. It just refused. I just checked, and the same thing happens with all files from all directories on all drives. Photoshop is suddenly useless for editing RAW files. Any suggestions?

[Later…] An Adobe forum provided the answer here. All better now.

24 Nov 17:46

Facebook Knows A Cult When It Sees One

by Tyler Durden
mkalus shared this story from Zero Hedge.

It appears, like so many of the world's status-quo worshippers, that Facebook thinks The Fed is a 'religious center'...



h/t @hsiaoleim

24 Nov 00:00

Li-Fi has just been tested in the real world, and it's 100 times faster than Wi-Fi


Bec Crew, Science Alert, Nov 27, 2015

Just in case you thought technology was slowing down for a bit, along comes LiFi - it's like WiFi, except it uses visible light instead of radio waves. It has two advantages. First, it provides network speeds many times faster than WiFi - up to a gigabit per second. And second, it can be installed in lights, meaning that everywhere we have a light bulb, we could have internet access. Oh sure, there are many things that could go wrong - remember WiMax? - but the existence of the possibility suggests that the speed of connectivity will continue to increase. "Li-Fi was invented by Harald Haas from the University of Edinburgh, Scotland back in 2011, when he demonstrated for the first time that by flickering the light from a single LED, he could transmit far more data than a cellular tower." See also this TED talk and more in Wikipedia.

[Link] [Comment]
24 Nov 21:23

Non-technical advice for startups and open source projects

by Kristina Chodorow

A former coworker recently asked me about what had worked well (and not) at MongoDB. I realized that I actually know a bunch of things about running an open source project/startup, some of which may not be common knowledge, so I figured I’d share some here.

Things changed dramatically as the company grew and the product matured, but this post is about the project’s infancy (when the company was less than, say, 20 people). At that point, our #1 job was pleasing users. This meant everything from getting back to mailing list questions within a few minutes to shifting long-term priorities based on what users wanted. We’d regularly have a user complain about an issue and a couple of hours later we’d tell them “Please try again from head, we’ve fixed the issue.”

Unfortunately, fast turnaround meant bugs. We weren’t using code review at this point (give us a break, it was 2008), so we could get things out as fast as we could code them, but they might not… exactly… work. Or might interact badly with other parts of the system. I’m not sure what the solution to this would be: I think fast iteration was part of what made us successful. People got stuff done.


Users loved how fast we fixed things, but the support load was insane (at times) and basically could not be handled by non-engineers (i.e., we could not hire a support staff). The only reason this worked was that the founders were twice as active as anyone else on the list, setting the example. We actually had a contest running for a while that if anyone had more messages than Eliot in a month, they would get an iPad. No one ever won that iPad.

If a bug couldn’t be fixed immediately, it was still incredibly important to get back to people fast. Users didn’t actually care if you couldn’t solve their problem, they wanted a response. (I mean, they obviously preferred it if we had a solution, but we got 100% goodwill if we responded with a solution, 95% goodwill if we responded with “crap, I’ve filed a bug” within 15 minutes, 50% goodwill within a day, and 0% after that.) I’m still not as good at this as I’d like to be, but I recommend being as responsive as possible, even if you can’t actually help.

Because our users were developers (and at this point, early adopters), they generally detested "business speak." Occasionally we had a new non-technical person join who wasn't familiar with developer culture, which resulted in some really negative initial perceptions on Reddit (that was the most memorable one, but we also had some business-speak issues on Twitter and other channels).

In fact, when we first hired a non-technical person, I was really skeptical. I didn’t understand why an engineering company making a product for other engineers would need a non-engineer. Now I’d credit her with 50% of MongoDB’s success and a good deal of the developers’ happiness. Meghan would reach out to and contact users and contributors, get to know meetup and conference organizers, and was generally a “router” for all incoming requests that we didn’t know what to do with (“Can we get someone to come talk at our conference in Timbuktu?” “Could I have an internship?” “Can you send me 20 MongoDB mugs to give to my coworkers?”). In general, there was a surprising amount of non-technical stuff that was important for success. We had a piece of software that people definitely wanted and needed, but I don’t think it would have been nearly as successful without the non-technical parts.

25 Nov 00:48

Items From Ian

by Ken Ohrn
Good article in the Guardian – part of the series ‘things which aren’t about Vancouver but could be’.


There is a relationship to Vancouver in this, and it is hinted at in the Vancouver Sun  this morning.

It has been said over and over (here’s one) that there is an appearance that Vancouver only provides for the public good when there is CAC money, and the city will permit anything which provides sufficient CAC money. There is a risk with this that increasingly, Public Space is ‘brought to you by’ Private Means, instead of being truly open and free. The city seems to become reliant on a particular kind of large development to finance needed civic infrastructure, which does not guarantee conflicts but does make them seem more likely. Little Mountain is a case in point, where a reliance on a private firm to build public goods has so far produced neither, at the expense of the hundreds of households who live there. At least now, for the sake of the displaced residents, there does seem to again be progress: HERE.

When the city will not build public space itself, and relies on others, it cedes at least some of its ability to dictate time, place, schedule, needs, and content of this Private-Public ‘Pri-blic(?)’ development, and the city’s lack of an overall plan would seem to make it more difficult to anticipate what content will be needed in the first place. The public good becomes the starting point in a negotiation, and some public needs are too vital to be compromised.


A recent morning newspaper headline was “Brains versus the benefactors” with the subtitle “Cash-strapped universities are turning to donations from corporations or individuals to fund research initiatives. Their influence — real or implied – sometimes can make things awkward…” Vancouver is a city whose functions, which have historically been funded through taxes, are increasingly being funded through what are effectively donations, with the same potential for influence in tow.

25 Nov 00:45

Thunderbird 38.4.0 release still ‘In Progress’

by El Guru

Thunderbird 38.4.0 was supposed to be released in early November. Here we are at almost the end of November and still no Thunderbird 38.4.0 release. Reviewing the Status Meeting Notes from the November 17th meeting, this build is still ‘in progress’. I have also been following the Build Notes for Thunderbird 38.4.0, which currently show they are on their third build (attempt) at releasing Thunderbird 38.4.0 but are still running into issues. The next status meeting will be on November 30th and hopefully will provide some insight into whether there is going to be a 38.4.0 release or whether they will wait until mid-December 2015 and release 38.5.0 instead.

24 Nov 22:27

Immigration mega-fraud: The rich Chinese immigrants to Canada who don’t really want to live there

by ian_young

The case of Xun “Sunny” Wang, a Vancouver-area consultant jailed for masterminding the biggest immigration fraud in Canadian history, is startling in scope.

25 Nov 00:49

TV, mobile and the living room

by Benedict Evans

"I also want to share some additional thoughts on Xbox and its importance to Microsoft. As a large company, I think it's critical to define the core, but it's important to make smart choices on other businesses in which we can have fundamental impact and success."

(Translation - Xbox is no longer core to Microsoft) - Satya Nadella

The tech industry has wanted to get to the TV for decades. For a long time it was widely assumed that PCs were only a transitional device and the normal consumer computing experience and ‘interactive media’ experience would happen on the TV, with a ‘ten foot’ user interface, powered by the ‘information superhighway’. TV would become ‘interactive TV’, and that would be a big part of how ‘computing’ came to normal people.

Of course, the tech industry had made its way to the TV, in closed, single-purpose devices: games consoles on one hand and CATV set-top-boxes on the other. Tech tried to use these as bridgeheads: games consoles went online, partly because they should as games devices, but also as a step towards a larger vision, and Microsoft also tried the other route, buying WebTV. 

But none of this worked. None of the attempts to prise set-top-boxes away from the cable companies or add any interactivity beyond a better EPG went anywhere much. Xbox and Playstation going online became important for games, but not much more. Attempts to move the actual PC into the living room (Windows Media Centre, Apple’s Front Row) are probably best forgotten. (I could also mention smart TVs, but really, why bother?) So you could argue about whether Microsoft or Sony did better in the console wars, but it doesn’t really matter for that broader vision (and of course more and more gaming will shift to mobile). 

Today, we have another wave of products trying to get there - the Chromecast and the (new) Apple TV, which are really iterations of a previous wave of products (Vudu, Roku, Boxee) that never quite went mass-market either. (There's a complex discussion here about content availability, most of which is very specific to the USA.) But what interests me is that both of these are really about turning the TV into dumb glass - a peripheral for the smartphone. The Chromecast doesn’t even have an on-screen UI. They’re both about the smartphone market: they're about selling phones (they're too cheap and low-margin to make much money of themselves and most of the content money will go to the content industry), and they come from phones.

That is, even if Apple or Google finally 'win', and get their device connected to every TV in the developed world, it's something of a sideshow. The TV isn't the end point for consumer technology anymore, in either sense of the term. The consumer computing revolution went and happened anyway, without ever touching the TV. First the web, not ‘interactive media’ on the ‘information superhighway’, drove the PC into every home in the developed world, and now smartphones take a truly personal computer into every pocket on earth. And it turns out that smartphones and tablets are the way computing gets into the living room, and the way that the tech industry gets hold of video content, whatever that will mean. (This, in case it isn't clear, is why Satya Nadella said that the Xbox is no longer core to Microsoft's strategy.) There was a wonderful quote from CBS this autumn that they’re less worried about PVR ad-skipping ‘because people are too busy with their smartphones to bother skipping ads anymore’. You can see the shift pretty clearly in this chart from the UK: short form is all about smartphones and tablets, and so, increasingly, is long-form as well. 

Why would you watch a film on a phone? This is why. 

There are two things that I wonder in all of this. 

  1. First, how much of the linear schedule actually goes away, as content restrictions and UI friction finally fall away? Does everything except live events (and especially sport) fall off, with everything else going to on-demand, or does the passive, lean-back, ‘just show me something’ mode remain a large part of the large-screen experience? To the extent that viewing does move, this changes the distribution of watching across different types of content.
  2. Second, how much, really, do we need a large screen, and how much does it remain something that’s just there, turned on sometimes but used less and less, with the thing we hold in our hand actually a better watching experience? And for what kinds of content?

These are obviously linked - if we watch lots of big-budget event drama then we’ll use the big screen more and probably also use the linear schedule more. This is a little reminiscent of Hollywood’s embrace of spectacle in response to TV. So the stronger the linear schedule, the stronger live events, and the stronger big shows are, the more the big screen matters (even if just as dumb glass). The more TV changes, the more it moves device. And then, do you put that event drama you’re sort-of-watching onto the big screen so that you can do more important stuff on your phone, and look up the wikipedia page to tell you what happened when you weren’t paying attention? Maybe it's the small screen that really gets the attention either way. 

Finally, this is a case study of open and closed approaches. Games consoles' closed ecosystem delivered huge innovation in games, but not in much else. The web's open, permissionless innovation beat the closed, top-down visions of interactive TV and the information superhighway. The more abstracted, simplified and closed UX model of smartphones and especially iOS helps to take them to a much broader audience than the PC could reach, and the relative safety of installing an app due to that 'closed' aspect enables billions of installs and a new route to market for video. It's not that open or closed win, but that you need the right kind of open in the right place. 

23 Nov 23:11

This season’s 361 Podcast: Blandford’s going to love it

by Ewan

Have you been following the 361 Podcast at all? I know I’ve been banging on about it now and again here on the blog so regular readers will have a limited awareness I’m sure.

This is the podcast that myself, Rafe Blandford and Ben Smith set up ages ago. Rafe is the world famous Nokia blogger. Or more accurately, the blogster-in-chief covering Symbian. His All About Symbian (and All About Windows Phone) websites pull in millions of readers every month. Ben is the blogger-in-a-suit who founded Wireless Worker and who, when he’s not managing the purchase and installation of hundreds of millions of telecoms technology, is a real mobile geek.

We got together to record our perspective on the mobile world and the culture around it — and we’ve done it nicely, if we do say so ourselves. Well, it’s actually down to Ben’s fastidiousness with sound quality. He really does go nuts if you touch your microphone during recording. And we put in quite a bit of effort to try and keep each episode on time and relevant.

We’re now in our 11th Season — which is quite unbelievable for me. It seems like only yesterday we were standing on the beach in Cannes on the French Riviera talking about the possibilities of Nokia and BlackBerry — with the upstart Apple and Google still at risk of being nailed by a resurgent existing player. That obviously never happened, but it’s been fascinating to listen to what we were saying and how we felt at that point. 

Anyway the first episode goes live tonight. And we’re changing things up a bit: We’ve added a support feature. You can now opt to show your support for the podcast using our Patreon listing. The default option is $1 per episode. You can start and stop at any time and there’s zero commitment. We pay for the whole production ourselves and the podcast will always remain free of charge. However the opportunity to invite listeners to help contribute to our costs is particularly exciting.
It’s exciting not because we’re looking for megabucks. No. We’re fine. The very small #firstworld issue we’ve got is based on the fact we already pay the costs. It’s quite difficult to justify doing slightly more exciting things — not least to significant others wondering why we’re bothering to use up our evenings and weekends to record and produce the episodes.

For example we’ve been discussing taking Blandford to Helsinki for a day to get him to re-live the good old days with Nokia. I like the idea of getting a tour around some of the factories, offices and event spaces that hosted some of the big Nokia launches. Perhaps maybe a few interviews with some old Nokia executives? Maybe even a current one? Who knows. Arranging and recording abroad or doing an excursion like this is quite a commitment for us. So it would be particularly exciting to have a little bit more resource to play with.

So a full season of 361 Episodes — approximately 10 episodes plus a bonus — would represent $10 support. 

What could we do to encourage listeners to think about participating?

We had an idea.

In the last season we were talking about Urban Massage, a mobile app that lets you book masseur(s) to come to your home, office and hotel. Really smart. We thought it would be quite cool if we did this ‘live’ during a recording and got a therapist to come to the studio and give Blandford a treatment. 

The discomfort and horror on his face was a true picture. He objected in the strongest possible terms. That was last season. 

This season — as you’ll hear in the first episode — Rafe looked on in sheer panic as we brought up this topic as a reason to help support the Podcast.

So there you have it. Blandford will get a massage ‘live’ during one of the episodes if we hit a $100/episode target.

I think he’ll find it quite difficult to object if the listeners have voted with their dollars. What do you reckon? 😉

Find out more on our Patreon listing here:

You can subscribe to the 361 Podcast easily — search for us in almost any Podcast-compatible app, or listen online via the website at

23 Nov 00:00

Why WordPress's new Calypso interface is genius


Ben Werdmüller, Nov 26, 2015

As Ben Werdmuller says, setting up self-hosted web applications is hard. This is one of the major reasons why, say, personal web servers have never become mainstream. The new redesign of WordPress sounds like a step in the right direction. As Werdmuller writes, "we'll start to see more examples of this data-interface separation, where the logic and data will sit wherever you want, and the beautiful apps and interfaces will be powered by centralized services." It's the opposite of the classical content management service model, where the data is managed by a central server, and the interfaces sit wherever you want. It takes a bit to wrap your mind around.

[Link] [Comment]
23 Nov 00:00

The (open) future is here, it’s just not evenly distributed


Clint Lalonde, Nov 26, 2015

So this seems exactly right: "The unfortunate equation of open education w/ free text books has made the movement seem more and more myopic and less and less compelling." It's Jim Groom, and cited within this Clint Lalonde wrap-up of the recent OpenEd conference in British Columbia. And as Lalonde responds, "textbooks are so deeply ingrained in our education systems that trying to find others ways of doing education for many is very difficult, especially in an education world where we continually remove capacity for those faculty who DO want to change and experiment and try different things." Image: The Peak.

[Link] [Comment]
23 Nov 18:35

Coding Bootcamps and the New For-Profit Higher Ed

After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

Despite the massive amounts of money spent by the industry to prop it up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans students can't afford to pay back.

But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

School as “Skills Training”

In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help to meet the (purported) demands of the job market: training people with certain technical skills, particularly those skills that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

Nor is the promotion of a more business-focused education that new either.


Career Colleges: A History

Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

The curriculum of these commercial colleges was largely based around the demands of local employers alongside an economy that was changing due to the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were "elitist," the commercial colleges were "popular" – there were over 70,000 students enrolled in them in 1897, compared to just 5800 in colleges and universities – something that highlights what's a familiar refrain still today: that traditional higher ed institutions do not meet everyone's needs.


The existence of the commercial colleges became intertwined in many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to become a “self-made man.”

That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working-class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.

Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).

It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.


Promises, Promises

Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

That’s part of the apprehension about for-profit universities’ more recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college's branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks' wages averaged $50 per month.

Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.


According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

For-Profit Higher Ed: Who’s Being Served?

The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price tag on its tuition rates, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates had annual loan payments of more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
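
To make that test concrete, here is a minimal sketch of the debt-to-earnings arithmetic, assuming only the 8% and 20% thresholds described above; the poverty-line figure and the "pass if either ratio is within its threshold" rule are simplifying assumptions, not the actual regulatory formula.

# A toy version of the debt-to-earnings test sketched above. The 8% and 20%
# thresholds come from the text; the poverty-line figure and the combination
# rule are simplifying assumptions.

POVERTY_LINE = 11_770  # illustrative single-person poverty guideline, USD/year

def within_gainful_employment_thresholds(annual_loan_payment, annual_earnings):
    earnings_ratio = annual_loan_payment / annual_earnings
    discretionary = annual_earnings - 1.5 * POVERTY_LINE
    discretionary_ratio = (annual_loan_payment / discretionary
                           if discretionary > 0 else float("inf"))
    return earnings_ratio <= 0.08 or discretionary_ratio <= 0.20

# Example: $3,000/year in loan payments on $30,000 in earnings fails both tests
# (10% of earnings, roughly 24% of discretionary income).
print(within_gainful_employment_thresholds(3_000, 30_000))  # False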

The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceiving as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates repay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
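
A quick back-of-the-envelope comparison makes the point; the deposit and the 20% share come from the App Academy example above, while the salary figure is purely hypothetical and whether the deposit is ultimately refunded is not stated here.

# Rough comparison of the survey's average upfront tuition with a deferred
# "free, plus a deposit" model. The salary below is hypothetical.

average_upfront_tuition = 11_852          # survey average cited above, USD
deposit = 5_000                           # App Academy deposit, USD
income_share = 0.20                       # share of first-year salary owed
hypothetical_first_year_salary = 80_000   # illustrative figure, USD

deferred_total = deposit + income_share * hypothetical_first_year_salary
print(deferred_total)                     # 21000.0 -- well above the survey average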

According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)


The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.

Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

Mean age: 30.95

Gender
Female: 36.3%
Male: 63.1%

Race
American Indian: 1.0%
Asian American: 14.0%
Black: 5.0%
Other: 17.2%
White: 62.8%

Hispanic origin
Yes: 20.3%
No: 79.7%

US citizenship
Yes, born in the US: 78.2%
Yes, naturalized: 9.7%
No: 12.2%

Highest level of education
High school dropout: 0.2%
High school graduate: 2.6%
Some college: 14.2%
Associate’s degree: 4.1%
Bachelor’s degree: 62.1%
Master’s degree: 14.2%
Professional degree: 1.5%
Doctorate degree: 1.1%

(According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male and from more affluent neighborhoods, and MOOC students also tend to already possess Bachelor’s degrees. The median age of MITx registrants is 27.)

It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While one in 20 of all students is enrolled in a for-profit college, 1 in 10 African American students, 1 in 14 Latino students, and 1 in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students in for-profits have about half as much family income as students in not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study on for-profit colleges.)

Deming, Goldin, and Katz argue that

The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.


According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5,000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.

What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

EQUIP and the New For-Profit Higher Ed

On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

By making financial aid available for bootcamps and MOOCs, one does have to wonder if the Obama Administration is not simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.

Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.


The Forgotten Tech Ed: Community Colleges

Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they need.

This is what we know from history: as the funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and only schools to do: to offer innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise...

23 Nov 17:01

Sub-surface broadcast

by Rob Campbell


I thought I’d come up to periscope depth to drop a few words about what I’ve been up to. WordPress is a tricky vehicle to publish with and it usually devolves into extended configuration and Linux spelunking but I think I’m through the worst of it. If I can just ignore that blinking terminal window…

Anyway, I’ve been pretty much off Twitter the past couple of weeks, only dropping in to retweet a funny animal picture or cryptic sigil. It’s a funny thing, Twitter. Everybody’s so wrapped up in their little moments while there are big things happening in the world at large all around them. The juxtaposition of tech foibles, social injustices, Untappd posts and the terribleness in Paris was too much for my feeble mind to cope with, so I shut it all down. This morning’s the first time I’ve had Twitter open on my desktop machine in a week or two. Didn’t really miss it, though I’ll keep it around for DMs and the occasional drive-by.


Camera System: Check.

Last weekend I was at the Waterloo Wellington Flight Centre attending ground school for UAV pilots. In the eyes of Transport Canada, I am a certified airman. Industry Canada has licensed me to operate an aviation radio. I ordered my Big Book of Canada Aerodromes and received it in the mail on Friday with giddy enthusiasm. Yes, it’s an actual book with paper in it. No, I don’t know why it’s not on the internet. Because reasons, I’m sure. Anyway, it was a lot of fun, and the pilots at the WWFC were all wonderful people, full of stories from the world of aviation that they backed up with deep knowledge. If you would like to go legit, I recommend their program whole-heartedly. Or you could just keep flying your drone illegally, which is what basically every hobbyist is doing in the lower Canadian latitudes in controlled and restricted airspaces. Which is everywhere.

There will probably be a come-uppance. You’ve been warned.

And now a short teaser about a project I’ve been working on for lo this past year. That’s it, really. That’s the whole teaser. It’s really close to being done in a form that I’m almost happy with and I hope to have an exciting announcement later this week. Maybe Wednesday if the stars all line up.

no it’s not an app. or a startup. geez.

24 Nov 07:45

BlackBerry – High jump.

by windsorr

BlackBerry’s target of 5m devices looks much too high.

  • The new BlackBerry Priv and its rumoured successors are aimed at such a narrow niche that I doubt that they will ever make money.
  • Once this realisation has sunk in, I think that BlackBerry will abandon its hardware business and focus on its software business which has recently been bolstered with the acquisition of Good (see here).
  • I think that the main problem with this new line of attack in hardware is that apart from a tiny segment of the market, the smartphone user no longer cares about having a physical keyboard.
  • This is very similar to the issue that killed Nokia’s smartphone business in 2007.
  • Prior to the iPhone, a smartphone had to be a good phone first and everything else second and Nokia’s entire line up was based upon that supposition.
  • The arrival of the iPhone turned this on its head such that mediocre phone performance was no longer a barrier to selling devices.
  • It was Nokia’s inability to see that the market had changed that led to it losing almost all of its market share.
  • Something very similar has happened in the enterprise segment.
  • Prior to 2007, an email device had to have a decent physical keyboard upon which to type.
  • However, Apple has made the touch-based form factor so popular that almost all users have learnt to adapt to typing on a screen keyboard which has obviated the need for a physical one.
  • There are some hard-core users in the financial and government sectors who will love this device, but these are very few and far between.
  • These days, there is very little need for a physical keyboard which is why I think that BlackBerry’s Priv line of devices will only sell in tiny volumes.
  • In Q3 15A, Counterpoint Research estimates that BlackBerry sold just 700,000 units meaning that the Priv has to be a knock-out success just for BlackBerry to break even.
  • The problem here is that the smartphone market is slowing down and becoming even more competitive and into that mix there is a new device with a feature that almost no one cares about.
  • Furthermore, this device is so expensive that only users who care passionately about a physical keyboard are likely to buy it.
  • Consequently, I expect BlackBerry to miss its target of 5m units and to withdraw from the market in 2016 focusing instead on software.
  • Here, it has a credible proposition which is more than I can say for HTC which is in a very similar situation to BlackBerry but has no plan B.
  • I can still see downside in both of these companies but HTC most of all.
24 Nov 08:01

Metadata surveillance investigation

by Nathan Yau

Just Metadata

Metadata can tell you a lot, and most of us agree that it's not “just metadata” at this point. The Share Lab shows what one can find, just using everyday tools and relatively straightforward analysis.

Although our investigation primarily discovered relations, patterns and anomalies of someone’s work life, it still gave us an insight into that person’s habits that border with private life. In the end, metadata scans someone’s behaviors on a much deeper level than traditional surveillance practice related to content could ever do.

The graphic above shows how people in the sample dataset emailed with others. There's no email content, but the headers provide enough information to sniff out connections.
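
As a rough sketch of how little tooling this kind of analysis needs (this is not Share Lab's actual pipeline, the mailbox filename is a placeholder, and multi-recipient headers are handled only crudely), a contact graph can be built from From/To headers alone:

# Build a contact graph from email headers only -- no message bodies.
# Illustrative sketch; "sample.mbox" is a placeholder archive.
import mailbox
import networkx as nx

graph = nx.Graph()
for msg in mailbox.mbox("sample.mbox"):
    sender = msg.get("From")
    recipient = msg.get("To")  # crude: treats the To header as one contact
    if sender and recipient:
        if graph.has_edge(sender, recipient):
            graph[sender][recipient]["weight"] += 1  # repeated contact adds weight
        else:
            graph.add_edge(sender, recipient, weight=1)

# Even a simple centrality score hints at who sits at the center of the network.
print(sorted(nx.degree_centrality(graph).items(), key=lambda kv: -kv[1])[:5])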

See also: the search for Paul Revere with network analysis.

Tags: metadata, privacy

24 Nov 12:00

500px to help host Toronto’s first Accessible Photowalk

by Igor Bonifacic

500px, Henry’s and Access Toronto have teamed up to bring Toronto its first accessible photowalk.

Set to go down this Saturday between 3PM and 5PM (just as that sweet golden hour makes its appearance), the walk will start at Yonge-Dundas Square and work its way through the downtown core. It will be led by Evgeny Tchebotarev, one of the co-founders of 500px, as well as Maayan Ziv, founder of Access Now and a notable street and fashion photographer within the city, and Julian Stein, a photographer who writes about what it’s like photographing the world from a wheelchair.

“All of the event is designed to be accessible, and will probably proceed at a slightly slower pace, and the route will be slightly shorter than normal,” says Tchebotarev. You also don’t need to pack a professional grade DSLR to take part; Tchebotarev says the event is open to everyone, no matter what kind of camera one uses as a primary shooter.

If the concept of this photowalk doesn’t sound enticing enough, then there’s also the chance to walk away with some awesome swag. In particular, one photographer will win an X-T10 camera kit from Fujifilm. Participants will also be able to win $800 worth of gear from Henry’s, and five Awesome accounts from 500px.

For full details, check out the event’s Facebook page.

24 Nov 13:16

Giving Up On The iPad2

by Dan York


I finally gave up. After months of trying to continue to use my older iPad 2 with first iOS 8 and then iOS 9, as chronicled in several blog posts, I finally gave in and bought a new iPad Air 2. These two blog posts, and the many comments left both on the posts and on social media, show I am clearly NOT alone in wanting to continue using my iPad 2.

What finally did it for me is that after the iOS 9 upgrade, I was no longer able to use a specific application that I use all the time.

To explain a bit more, I coach a competitive girls Junior Curling team that my daughter is a member of. As part of that, I've been using an app called "iCurlStats" to track the actions and statistics in curling games so that we can go back over them afterward. When I tried to use it in a recent curling tournament (a "bonspiel"), it kept crashing all the time... and at terrible moments when I'd entered half of an "end" of a curling game.

It was so frustrating.

And unfortunately I discovered that the makers of that "iCurlStats" app seem to have gone out of business. The app is gone from the App Store and the developer's website is completely gone. (In the little bit of digging I did, it looks like the company may have been acquired by another company that then shut down different parts of the acquired business.)

So the chances of me getting an updated version of the app from the developer that would still work with an iPad 2 running iOS 9 were basically non-existent.

So I gave up. I gave in to the "planned obsolescence" and forked over more money to Apple for an iPad Air 2. This is the latest iPad in this size, and so one would hope that Apple will keep it around for a while. Because I have come to heavily use a number of apps that are only on iOS, I'm right now locked into Apple's shiny, pretty walled garden. And I'm reluctantly okay with that because the apps are useful and help me get things done.

But I will also now be VERY CAUTIOUS applying future iOS updates to this iPad.

Had I not "updated" the iPad 2 to iOS 8 and left it running iOS 7 it probably would still be quite workable. (At least until I was forced to upgrade to newer apps that only ran on iOS 9 or later.) Now the iPad 2 will become something I use for an extra web browser screen or for some of the music apps... at least while all of those continue to work.

So that's the end of the saga.

No more glacial slowness for me - the iPad Air 2 is a remarkable and fast tablet. I can chart my curling games extremely easily and it works great for all the other apps I use, too.

Hopefully I can get a good run of years out of this one.

An audio commentary on this topic is also available.

P.S. There's another part to the story, too. After getting all set up on the iPad Air 2 and having iCurlStats work great - and getting all set up for the curling bonspiel all this past weekend... I decided that I wasn't comfortable with using an app that was no longer supported at all. In my research I had stumbled upon Curl Coach, a newer iPad app for curling coaches, and wound up using it for this past weekend's bonspiel. It is an amazing application! It's not cheap ($40 USD), but it's well worth it for how well it helped me work with our team! I don't know if this would have run on the iPad 2 (removing the need to buy the iPad Air 2), but I'm sure it wouldn't have run as fast as it did... and that is key when you're in the midst of recording a game.