So true. I keep telling people analytics is a complement to good content and common sense, not a goal in and of itself…
All of a sudden, the scale of the importance of what was going on in my lifetime was brought home. It wasn’t just seeing it on the TV, it was reading about world changing events and people and REMEMBERING them happening. Seeing the broken wall, walking from West to East Berlin, and remembering watching it live, being taken apart.
And then reading about Mandela in prison, the protests, remembering our choice as a family to boycott companies that were investing in South Africa (I’ve still never even considered banking with Barclays). Reading the good and the bad – The Soweto riots in the late 70s, Special AKA’s Free Nelson Mandela, the killing of Stompie Moeketsi, the boycotts of SA goods, reports from Robben Island, the names of the main players in the SA political scene – Walter Sisulu, Chief Mangosuthu Buthelezi, Tokyo Sexwale, Oliver Tambo, Eugène Terre’Blanche, FW De Klerk, PW Botha – the emerging sense that the race riots in the UK in the late 70s/early 80s were part of a MUCH bigger global inequality, the artist boycott of Sun City and the dickheads that went to play there, Thatcher’s pathetic attempt to paint Mandela and the ANC as terrorists while cosying up to Pinochet. It was MY history, my lifetime, my memories, written down by Mandela.
I hope the book sells a million more copies in the wake of his passing, I hope more people my age get a handle on what we lived through, and that, as I said yesterday, we’re not Thatcher’s children, we’re Mandela’s. We grew up knowing that the greatest man of our age was in prison for standing up for what’s right, that he represented the fight for equality, and that when he was released he sought peace and reconciliation, not revenge. He stood alone amongst world leaders as a person of immense integrity, even when he got it wrong (see Desmond Tutu’s exquisitely written obituary). That’s our legacy. The inspiration to fight against oppression. That’s the spirit that opposes cuts, the spirit seen in the actions of Tariq Jahan during the riots here when his son was killed. It’s the spirit that defines the protest movement worldwide.
Gandhi, MLK, Mandela.
The struggle goes on, and those we fight against are today trying to claim Mandela as one of their own. Resist that at every turn. We don’t need to dig up stupid shit Cameron did while a student to reject his words now (Mandela invited his jailers to dinner), but we can see what the Tories are doing now as the kind of politics of inequality that Mandela stood against, as a protestor, activist, prisoner and politician. The struggle is ours, on behalf of those who can’t struggle.
Ubuntu – the spirit of life, the exaggerated humanity of someone living life to the full by pursuing life for everyone, a word I was introduced to not via the Linux operating system, but by another revolutionary South African leader, Bishop Desmond Tutu, who described Mandela as the living embodiment of it. Google it, read about it, own it as his legacy. Ubuntu
1. Don’t hesitate to delegate
If there’s someone out there who should (or simply can) deal with one of your emails while you’re traveling - by all means, forward it on. It’s helpful to use our SaneReminders feature to make sure the person you delegated to gets back to you by a certain time.
2. Mobile = Important, Desktop = Can Wait
Checking your email while traveling can be a pain, so make sure your mobile inbox is used exclusively for important stuff. Use SaneBox to get unimportant emails out of your Inbox into a separate folder. Then when you’re back at your computer, you can process them in bulk.
3. Forward itineraries to the day of your trip
If you use SaneBox, simply forward your travel itinerary or e-ticket to <time>@sanebox.com and we’ll automatically put it in your inbox on (you guessed it) that day.
4. Some emails shouldn’t interrupt your vacation
SaneBox customers can set up Custom Snooze Folders, which let you defer emails until you’re ready to deal with them. If you’re on vacation we recommend setting up a Snooze Folder for the day after you get back, effectively snoozing unimportant emails until your vacation is over. Brilliant, we know.
5. The “urgent” filter
When you’re particularly busy traveling, set an auto-reply that asks people to resend their email with the word “urgent” in the subject. Then create a filter that only processes emails with the word “urgent” in the subject line. Most people will respect the request, and won’t abuse it with non-urgent emails (courtesy of our good friend Jon Orlin).
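If you prefer to script this kind of triage yourself, here is a minimal sketch of the same idea using Python's standard imaplib module. The server address, login, and mailbox name are placeholders for illustration, not SaneBox settings.

import imaplib

# Minimal sketch: count the messages whose subject contains "urgent".
# The server, credentials and mailbox below are placeholders.
imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("user@example.com", "app-password")
imap.select("INBOX")

# IMAP's SUBJECT search does a case-insensitive substring match.
status, data = imap.search(None, '(SUBJECT "urgent")')
urgent_ids = data[0].split() if status == "OK" else []

print(f"{len(urgent_ids)} message(s) marked urgent")
imap.logout()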
I upgraded from MacOS 10.8.3 to 10.9. I didn't want to, but it was bound to be necessary eventually.
Pro tip: if you want to do a clean install, but don't want to spend 48 hours restoring your photos and music from backup afterward, you can now do it like so:
I've been using it for about a week, and here are the things that suck most about 10.9:
I was careful to download the iTunes 10.7 package before upgrading, and not let iTunes 11 touch my Music directory before deleting iTunes 11 and re-installing 10.7. However, it turns out that while you can run iTunes 10.7 on OSX 10.9, what you cannot do is sync anything. No local backups, no transfer of local MP3 files to the phone. Presumably no Xcode. So if your phone has no music files on it and you do your backups through iClod, I guess you can keep using 10.7. Otherwise, you're fucked.
There's a workaround for this, but it's a hassle:
But I have just discovered that you can go back to the old way by de-selecting "Displays have separate Spaces" in Mission Control preferences, but then you have to reboot. I didn't realize it was working.
As before, I have my "Dock unread count" and "New message notifications" prefs set to a smart mailbox that includes the various folders in which new messages appear. Notifications work, badges don't.
Oh, except then I rebooted and now Mail.app is permanently badged with "1" regardless of the number of unread messages. How very.
Oh, this appears to be because my "Biff Mailboxes" smart mailbox is permanently badged with 1 unread message. Though when I sort by unread, it shows me no unread messages in it. This seems to now be true of most of my smart mailboxes: they all have completely random and untrue unread counts.
Maybe blowing away Spotlight -- again -- will fix it. I'll know in a couple of days.
Possibly this fixes it?
defaults write -g NSDisableAutomaticTermination -bool yes
I kind of liked it that Preview and certain other apps auto-quit, so I wish I could turn it off just for Safari. That seems to be global.
This only happens on the boot screen, not once the machine is up and running. After that, it's fine. So I have to have a second keyboard around every time I reboot. Oh and it only happens most of the time.
Creative Commons has released the new versions of their licenses, now available for adoption. They write, "The 4.0 licenses — more than two years in the making — are the most global, legally robust licenses produced by CC to date." When I choose the non-commercial license I have always used, it throws up a big 'This is not a Free Culture License' warning. Which is absurd, because the whole point of using the non-commercial license is to ensure that the work remains free. I don't know why Creative Commons persists in lobbying against its own licenses, but there it is, and nothing I say will change that, I guess.
A dispute in Portland is bringing to light the age-old question of whether fare cuts or service increases are the better way to "improve" transit. Both options improve ridership.
The high-level answer is pretty simple.
Cutting fares is good for lower-income people, while increasing service is good for almost everyone, including many low-income people.
But it's not as good for some low-income people, and that's the interesting nuance in this particular story.
OPAL, an environmental justice organization that claims to focus on the needs of low-income people, is demanding that Portland's transit agency, Tri-Met, institute a fare cut. The cut is specifically in the form of extending the period for which a cash fare is valid from two to three hours, an interesting issue that the Oregonian's Joseph Rose explores in a good article today. (The headline is offensive, but reporters don't write headlines.)
At the same time, Portland has a thoroughly inadequate level of midday service, by almost any standard. In the context of cities of Portland's size and age, Tri-Met practically invented the high-frequency grid that enables easy anywhere-to-anywhere travel in the city, but in 2009 it destroyed that convenience by cutting service to 17-20 minute frequencies. At those frequencies, the connections on which the grid relies are simply too time-wasting. Those cuts correlated with substantial ridership losses at the time.
OPAL's demand for a fare cut costing $2.6 million (about 2% of the agency's revenue) is, mathematically, also a demand that Tri-Met should not restore frequent service. This money (about 80 vehicle-hours of service per day) is more than enough to restore frequent all-day service on several major lines.
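As a rough sanity check of that equivalence, assuming a fully loaded operating cost of about $90 per vehicle-hour (my ballpark assumption, not a figure from the article or from Tri-Met):

# Back-of-envelope check of the fare-cut vs. service trade-off above.
annual_cost = 2_600_000        # OPAL's proposed fare change, dollars per year
cost_per_vehicle_hour = 90     # assumed dollars per vehicle-hour
vehicle_hours_per_day = annual_cost / 365 / cost_per_vehicle_hour
print(round(vehicle_hours_per_day))  # ~79, in line with "about 80" above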
The rich irony of this proposal is that OPAL uses those service cuts to justify its proposed fare reduction. In Portland, the basic cash fare purchases a two-hour pass that enables the passenger to transfer one or two times. Because of the frequency cuts, transfers are now taking longer, and a few are taking too long for the two-hour pass. OPAL therefore wants the pass to be good for longer.
So OPAL's position is that because service has been cut, Tri-Met must mitigate the impact on low-income people instead of just fixing the problem.
In particular, OPAL wants a solution that benefits only people who are money-poor but time-rich, a category that tends to include the low-income retired, disabled, and underemployed. You must be both money-poor and time-rich to benefit from a system that reduces fares but wastes more and more of your time due to low frequencies and bad connections.
If, on the other hand, you are money-poor and time-poor -- working two jobs and taking a class and rushing to daycare -- you will benefit from a good network that saves you time as much as from one that saves you money. But that means you don't have time to go to meetings or be heard. We transit professionals see these busy low-income people on our systems and care about their needs, but we also know that we're not going to hear their voice as much from advocacy organizations, because they just don't have time to get involved.
The same is true, by the way, of the vast working middle class. In the transit business, we get lots of comments from the money-poor-but-time-rich, who have time to get involved, and from the wealthy, who can hire others to represent them. We don't hear as much from the middle class or from the money-poor-and-time-poor, even though those groups dominate ridership. But hey, we understand! They're just too busy.
According to a report by the Federal Trade Commission, a popular free flashlight app, Brightest Flashlight Free, has been selling Android users’ location data to advertisers right under our noses.
Despite an option within the app to disable the location sharing feature, which was promised to be anonymized and used only by the developer, Brightest Flashlight was locating users and, paired with a unique device ID, selling that data to advertisers.
The FTC report, which was issued only after the Commission settled the dispute with the developer, “alleges that the company deceived consumers by presenting them with an option to not share their information, even though it was shared automatically rendering the option meaningless.”
The report points out that consumers of apps must be aware of the permissions associated with each download, and make informed choices about which developers to trust. Android is a much less regulated environment than iOS, BlackBerry or Windows Phone, making it especially easy for developers to take advantage of the wealth of user data at their fingertips. Why a flashlight app needed to collect users' location data should have been at the top of every Android user's mind when downloading the app.
While this is not the first Android app to run afoul of Google’s terms of service, its seemingly innocuous nature exposes the potential risks of using an app that says one thing but does another. In this case, Brightest Flashlight Free does apparently live up to its claim of being an excellent flashlight app; it just turns out that we, the users, were the product, not the customers.
The app’s developers, Goldenshores Technologies, are prohibited from misrepresenting their apps in the future, but this is unlikely to be the last time we hear of a free app misusing people’s location data. Apple makes it much more difficult for developers to do the same thing, and has even banned the use of UDIDs for advertising purposes. It’s unlikely Goldenshores was selling users’ location data to advertisers for nefarious purposes, but the fact remains that we didn’t know, and were misled, and that’s what caused the FTC investigation.
There’s a crushing monotony to stories on how the National Security Agency has been bending and breaking every rule to crack open your mail. Each new revelation that hits the news — that the agency has tapped into data warehouses belonging to Google and Yahoo, systematically undermined commercial encryption with backdoors, surreptitiously engineered weaknesses in encryption standards – seems like another confirmation that the NSA is trying to batter down every technological barrier that might prevent it from reading your e-mails and listening in on your phone calls.
The steady drip-drip-drip of new violations obscures the most interesting — and saddest — part of the whole NSA story. The agency wasn’t always out to steal your secrets. Twenty years ago, the agency was trying to protect them from outsiders.
Sometime in the 1990s or early 2000s, most likely in the late Clinton administration, there began a quiet but dramatic shift in doctrine. Over the span of a few years, the NSA decided that American citizens’ computers would have to be targeted. And as targets, we citizens could not be trusted with strong encryption.
The NSA is trying so hard to undermine commercial encryption nowadays that it’s hard to imagine that the agency ever had a different attitude. Back when the Internet was new, however, forward-thinking NSA analysts realized that electronic commerce was coming, and without a strong, cryptographically secure infrastructure, banks and stores and other entities attempting to do business on the Internet would be vulnerable to pirates — or, worse yet, a foreign power — who could collect sensitive information or disrupt commerce. So the NSA tried its damnedest to build cryptographic defenses for Americans to use.
Case in point is the development of the Data Encryption Standard. In the 1970s, the US decided it needed a new, fast, secure, standardized algorithm for encrypting blocks of sensitive digital data. The National Bureau of Standards sent out a request for proposals, and eventually, an algorithm designed by cryptographers at IBM won out.
The innards of the algorithm resemble nothing more than a gigantic cuisinart, dicing up and scrambling large chunks of data and reassembling them in a new order. That scrambling and reassembling would be easily reversible, making the encryption useless save for a set of mathematical widgets in the middle of the algorithm known as S-boxes.
Each of the eight S-boxes contained a set of numbers that gave precise instructions on how to permute data in an irreversible way. Irreversible, that is, without the secret key. Underneath an enormous amount of juggling and jumbling of bits and bytes, the quality of IBM’s algorithm almost entirely hinged on whether those S-boxes were constructed properly. Unfortunately, S-box design was more an art than a science, especially in the 1970s.
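To make the idea concrete, here is a toy sketch of an S-box as nothing more than a fixed lookup table. Real DES S-boxes map 6 input bits to 4 output bits; the 4-bit table below is invented purely for illustration and is not one of the DES tables.

# A made-up 4-bit S-box: each input value 0..15 is replaced by the table
# entry at that position. (Not a DES S-box; for illustration only.)
SBOX = [0x6, 0xB, 0x0, 0xD, 0x9, 0xE, 0x3, 0xF,
        0x5, 0x1, 0xC, 0x7, 0xA, 0x4, 0x8, 0x2]

def substitute(nibble: int) -> int:
    """Substitute a 4-bit value through the toy S-box."""
    return SBOX[nibble & 0xF]

print([hex(substitute(x)) for x in range(16)])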
The National Bureau of Standards decided to ask the NSA to evaluate the security of IBM’s proposed algorithm. The NSA gave the green light, but not before making a subtle change: it tweaked the numbers in the algorithm’s S-boxes and refused to explain why. This caused a bit of consternation in the cryptographic community; some believed the agency had introduced a flaw in the S-boxes that would allow the NSA to decrypt messages, if need be. Nevertheless, the new algorithm, dubbed DES, was adopted in 1977.
It took more than a decade for outside cryptographers to figure out why the NSA had tweaked those S-boxes. In the late 1980s, Eli Biham and Adi Shamir (the S in RSA) figured out a new way of attacking cryptographic systems by feeding very similar — but not identical — blocks of data into the algorithm and comparing how the outputs differ. This technique became known as differential cryptanalysis. It turns out that the NSA-chosen S-boxes are particularly resistant. By tweaking the S-boxes to defend against an attack that hadn’t yet been discovered by outsiders, the NSA proved that it had been trying to strengthen domestic cryptography rather than weaken it.
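A rough sketch of what that comparison looks like in practice, using the same invented 4-bit S-box as above: pairs of inputs differing by a fixed XOR value are pushed through the box, and the resulting output differences are tallied. A heavily skewed tally is exactly the kind of weakness a differential attack exploits, and what well-chosen S-boxes avoid.

from collections import Counter

# Toy 4-bit S-box (same invented table as in the sketch above).
SBOX = [0x6, 0xB, 0x0, 0xD, 0x9, 0xE, 0x3, 0xF,
        0x5, 0x1, 0xC, 0x7, 0xA, 0x4, 0x8, 0x2]

input_diff = 0x5  # feed in pairs of inputs that differ by this XOR amount
output_diffs = Counter(SBOX[x] ^ SBOX[x ^ input_diff] for x in range(16))

# A very uneven distribution here would be a foothold for the attack.
print(output_diffs.most_common())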
That was the culture of the NSA, at least during the short time that I was there (from 1992 through 1993). The agency’s mission was not just to crack enemy codes, but to ensure that your own were secure. And your own didn’t just mean your mil-spec equipment, but the everyday codes that invisibly made Citibank and AmEx and NYSE — not to mention power plants, water works, and transport systems — function as securely as possible in the digital wilds.
Of course, the two halves of NSA’s mission were in constant tension. If you made your own cryptosystems stronger, adversaries could use your algorithms and techniques to protect their communications from agency eavesdroppers. Strengthening encryption necessarily made intelligence-gathering harder. And in the early 1990s, it became clear that the spread of fast, cheap, secure encryption algorithms would make NSA’s eavesdropping mission much more difficult. So, how could the agency ensure secure communications at home while denying them to potential adversaries?
The answer the NSA came up with was the infamous “clipper chip.” In the early 1990s, the Clinton administration floated the idea of a sealed microchip that would encrypt data securely using a (then-classified) algorithm known as SKIPJACK. Like DES, SKIPJACK was a secure block cipher, but a backdoor had been built into the chip itself, allowing the government to decrypt the communications.
The backdoor was supposed to be secure; it was a secret code that was known only to the manufacturer of the chip. So, even though the encryption was flawed, the flaw was small and (theoretically) could be activated only by authorized government personnel. And if the government could convince Americans to adopt the clipper chip and other similar law-enforcement-accessible encryption schemes, the two halves of the NSA’s mission would no longer be in conflict. It could promote strong (if government-accessible) encryption at home without any worries that the technology would be attractive to foreign adversaries.
The clipper chip sank like a silicon balloon. Rejected by privacy advocates and corporations alike, the administration retreated, withdrawing the proposal and eventually declassifying SKIPJACK in 1998. The NSA wouldn’t get a clean solution to its dilemma of fostering strong encryption at home while trying to crack foreign cryptosystems. Instead, the government relied on lame, ineffective export controls to try to keep American encryption algorithms from being used overseas. They didn’t work; encryption simply got stronger and stronger, and not just domestically.
I believe that within a few years after the clipper chip died — sometime in the late 1990s or early 2000s — the NSA finally cut the Gordian knot. The agency would stop trying to reconcile two fundamentally irreconcilable mission goals. Foreign intelligence, it decided, was much more important than strengthening American commercial and personal encryption. And so the agency began actively to undermine the latter to enable the former.
Flash forward to 2007. NIST, the agency formerly known as the National Bureau of Standards, published a set of algorithms known as pseudorandom number generators. Many computer programs rely upon such algorithms to generate a stream of random-looking numbers; if designed poorly — if an attacker can guess which numbers are going to be generated at a given time — they can undermine the security of a computer system or a whole network. (Imagine, for example, what you could do if you could figure out what Keno number was coming up next on the telescreen.)
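The danger is easiest to see with a deliberately weak generator. The sketch below uses a classic linear congruential generator (a toy stand-in, not Dual_EC or anything NIST published): once an attacker has observed a single output, every later value is fully determined.

# Toy predictable generator: a linear congruential generator whose output
# is its own internal state ("minstd" constants, used here only as a toy).
M, A = 2**31 - 1, 16807

def next_value(state: int) -> int:
    return (A * state) % M

# The victim draws "unguessable" values...
seed = 123456789
first = next_value(seed)
second = next_value(first)

# ...but anyone who sees `first` can predict `second` exactly.
attacker_prediction = next_value(first)
print(attacker_prediction == second)  # True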
One of these generators, known as Dual_EC_DRBG, relies upon the properties of some imperfectly-understood mathematical objects known as elliptic curves. Built into the algorithm is a set of numbers that defines the particular shape of the elliptic curves that generate the random numbers. Nobody knew for certain at the time — NIST didn’t make it explicit how those numbers were chosen — but it was assumed that the NSA helped NIST choose them.
We now know that, yes, the NSA was behind the chosen numbers. We know because mathematicians figured out that the person who generated the numbers had the opportunity to insert a backdoor into the algorithm. And now, thanks to the Edward Snowden revelations, we know that the NSA did, indeed, take that opportunity. The algorithm, published and certified by NIST to be secure, had a gaping hole put there by design. Three decades after NSA tweaked DES to make it more secure, it did precisely the opposite with Dual_EC_DRBG.
Those leaked memos show that the National Security Agency has been systematically finding flaws — and creating them — in the cryptographic protocols that all internet commerce relies upon: HTTPS, SSH, SSL, PPTP, and many many more. Nobody in the cryptographic community knows for sure what’s secure and what’s been subtly undermined by the agency. There’s even evidence that the NSA, or an attacker as sophisticated as the NSA, has tried to stick cleverly-camouflaged backdoors into open-source software.
The implications are clear. The NSA has abandoned half of its mission; it no longer feels obliged to help Americans keep their communications secure from outside attackers. Just the opposite. The NSA now feels that to fulfil its intelligence-gathering function, it must undermine the cryptographic security of American citizens and corporations. The NSA of the 1970s was trying to protect our digital infrastructure against exactly the kinds of attacks that the NSA of the 2000s is successfully carrying out again and again.
“Attack” is exactly the right word. It’s a term of art; it’s the word that cryptographers and cryptanalysts use to describe an attempt to undermine security. Another cryptographic term that fits right in: adversary.
Perhaps the most disturbing revelation of the Snowden leaks is that in the most literal sense, the National Security Agency now considers every American citizen and every American corporation to be an adversary.
Must we now return the favor?
Interesting article on the design of a course that has students act as instructors, drawing their presentations from dense and difficult texts, in an environment that is challenging and often results in failure. The course, taught by Concordia University professor Vivek Venkatesh, “was really about thinking on your feet,” says Tieja Thomas, a PhD candidate who took the course. “You had to come prepared . . . It really was a deeper form of learning.”
Google’s packaged Chrome apps could be making the move from browser to mobile as soon as next month. The Next Web team today stumbled across a GitHub repository called Mobile Chrome Apps. Led by Michal Mocny, a software dev at Google, the project hosts a toolkit for developers looking to produce Chrome apps specifically for mobile devices and for porting existing Chrome apps to Google Play and the iOS App Store.
The toolkit allows for modification of design, fixing of bugs, testing, and more through Apache’s Cordova. Documentation on GitHub suggests support for Android 4.x as well as possibly Android 2.2 and Android 2.3 at a later stage. iOS support is apparently in development but it doesn’t seem to be as far along as Android (it’s marked as ‘TBA’). Right now, rumors point to a January beta release.
Google declined to comment on whether or not it will be bringing packaged apps to mobile, but it wouldn’t exactly come as a surprise. Some apps, like Google Keep, should enjoy a pretty seamless transition to mobile, while others, like Google Docs and Sheets, might not translate as well to smaller screens.
Shut up and take my money.
It's hard to find two organizations that developed a community the same way.
Some begin with big top-down processes driven by the CEO. The CEO, or someone at the executive level, clears the path, provides the resources, and pushes the process through (often against skeptical employees).
Some begin bottom-up. A few employees begin a community with a small goal. The community is successful. They gradually expand their efforts. They bring more employees into the community. Over time they attract the attention of an exec.
Others begin at the middle management level and are a cross between the two.
In the 120+ organizations we've worked with, there are few common structures between them. Beyond having a full-time community manager, what works in one organization usually won't work in yours. The culture, complexity, personalities, and charisma of the community manager vary too greatly.
What does matter, more than official team structure, is having a ragtag band of believers. A group of people involved in developing the community (often unofficially) who help make things happen. A community manager, alone, usually can't get the job done. The challenge is to build that unofficial band of believers to keep pushing the project.
Even in CEO-directed projects, you need to build your own believers in the project to make it a success. Forget the internal structure. Reach out to the people likely to believe in the project. These will be people who either know about the power of communities, have a good relationship with you, or have the personality to try new things.
You couldn’t make it to the Mobile Meeting but you follow our blog regularly?
Starting this week we will be posting the Wednesday meeting notes on our blog as well so that it’s easier for you to follow.
Last week’s notes HERE.
Hope it helps!
We’ve been really busy here at the Sports Tracker office as you’ve probably noticed from the lack of frequent blog updates, sorry for that! But here’s what we’ve been up to and what’s happening next:
We’re super excited to announce that the new version 2.0 is now available in the Google Play Store! The app is a complete rebuild in order to make it more stable and enhance the performance. This also means that it will now be easier to add new features to it.
What’s new in version 2.0 for Android:
Next step is to concentrate on fixing any occurring bugs and at the same time we’re eagerly planning the next phase. More on that later.
The development of the WP8 app is progressing nicely and we’re now in alpha testing phase. The new app will support heart rate monitoring (BT2.0), so our Sports Tracker hrm1 & Sports Tracker hrm2 will be supported. We’ll keep you posted on the progress and inform when we’re ready to start beta testing. So keep an eye on the blog!
The new modern and sleek iOS 7 facelift version will be released shortly. We’ve done a lot of work behind the scenes to make the app even better. The new version will no longer support iOS 4.x but we’ve made sure that it works well with iOS 5.x and of course iOS 6.x & 7.x.
Our new backend has been rolling steadily since September, and the new servers have been cooperating really well with us!
New HTML5 website is also being worked on (after being interrupted by backend work). We should be ready to start beta testing fairly soon. More news on that to follow.
Other than that we’re pretty much waiting for some snowfall to start the ski season!
Team Sports Tracker
Part 5 of my Top 10 Ed-Tech Trends of 2013 Series
In memory of Jeffrey McManus, founder of CodeLesson and friend
As with all of the trends I’m covering in my year-end review, neither the “Learn to Code” nor the “Maker Movement” is new. I’ll say it again: read Seymour Papert’s Mindstorms, published in 1980.
Last year, I wrote about “Learning to Code” and “The Maker Movement” in two separate trends posts. This year, I’m combining the two. This decision shouldn’t be seen as an indication that interest in either has diminished. To the contrary.
Despite the proliferation of these learn-to-code efforts, computer science is still not taught in the vast majority of K–12 schools, making home, college, after-school programs, and/or libraries places where students are more likely to be first exposed to the field.
There are many barriers to expanding CS education, not least of which is that the curriculum is already pretty damn full. If we add more computer science, do we cut something else out? Or is CS simply another elective? To address this particular issue, the state of Washington did pass a bill this year that makes CS classes count as a math or science requirement towards high school graduation. Should computer science – specifically computer science – be required to graduate? In a Google Hangout in February, President Obama said that that “made sense.” In the UK, computing became part of the national curriculum.
While many argue that efforts to expand computer science instruction in schools have been insufficient, it’s worth noting that the number of students who took the AP exam in Computer Science did jump up 19% this year. (The NSF also gave the College Board a $5.2 million grant to develop a new Computer Science AP course and exam.)
There are numerous groups who’ve been long working to improve CS education in the US (such as CSTA), but it was with typical Silicon Valley fanfare that Code.org launched this year. Founded by brothers Hadi Partovi and Ali Partovi (the former was on the founding teams of iLike and Tellme), the non-profit promised to “help make computer programming accessible to everyone.” Code.org kicked off with a video designed to “go viral,” produced by Lesley Chilcott of Waiting for Superman fame and featuring tech entrepreneurs like Bill Gates and Mark Zuckerberg.
In addition to its video, Code.org has orchestrated a sweeping PR campaign about the need to teach programming. Hadi Partovi even got Ryan Seacrest to learn a little code in a Today Show segment. The organization is running an “Hour of Code” during Computer Science Education Week (December 9 - 15). This involves sending kits and promotional materials to interested schools and teachers, offering them hour-long lessons to help introduce computer science to students.
Code.org plans to compile a database of all sites that offer programming instruction and will also offer professional development and CS curriculum to teachers. To receive the latter, teachers and schools must sign a contract and commit to two years’ participation as well as hand over students’ achievement data. (“Handing over student data” is yet another ed-tech trend I’ll cover in a subsequent post.)
Also making the rounds encouraging “making” in schools, two of my favorite educators: Sylvia Martinez and Gary Stager. Their book, Invent to Learn: Making, Tinkering, and Engineering in the Classroom, was published this year and has received great reviews. (I believe I called it “the most important education book published this year.”) Bonus: no contract required to implement their ideas in your classroom.
* Some restrictions may apply
“Everyone needs to learn to code” – that assertion has spun out a genre, of sorts, of blog posts and articles arguing that (some) programming knowledge is mandatory. For example: this one or this one. Arguably there’s a genre too in the “not everyone needs to learn to code” responses. For example: this one or this one. The worst, THE WORST: “Finding the unjustly homeless and teaching them to code.”
But at the same time that we hear the “everyone should learn to code” chant, it remains clear that not everyone is welcome in tech. There are so many examples I could point to: bias and discrimination based on accent, race, gender, and so on. Violence and intimidation (the reason I took comments off my blog, for example.)
And probably not THE WORST, but pretty mind-blowingly awful: Titstare, an app demoed on stage at Techcrunch Disrupt, in front of Adria Richards (fired earlier this year after she tweeted about two men’s inappropriate comments during a tech conference), hackers from Black Girls Code, and other young girls who were on stage or in the audience.
The messages about who’s welcome in tech start early. They can start in the media or they can start in school. (Incidentally, according to a survey conducted by CSTA this year, the average computer science teacher is “a white male who has been teaching for more than 15 years and has been teaching Computer Science for about 13 years.”) The messages can be subtle or overt – like expelling a young black girl from school and charging her with a felony when a science experiment she conducts causes a minor explosion.
As Mimi Ito wrote in an op-ed in Wired,
Recruitment into the life of a coder happens well before kids walk into the classroom. The peer groups that young geeks form are as critical to their learning and development as tech experts. Kids become coders because they are friends with other coders or are born into coder families, which is why the networks can become exclusionary even when there is no explicit racism and sexism involved. It’s about cultural identity and social networks as much as it’s about school offerings or career opportunities. Kids need to play and tinker with computers, have friends who hack and code together, and tackle challenging and new problems that are part of their everyday lives and relationships.
We know that the more diverse the ecosystem of talent, the more innovative are the solutions that result. If we really care about the talent gap in high tech, innovation, and entrepreneurism, we need to do more than look overseas, or push classes and school requirements at kids. We need to build a sense of relevance and social connection into what it means to be a coder for a wide diversity of kids.
The “everyone needs to learn to code” narrative dovetails nicely with the “there’s a shortage of STEM workers” story: that is, we simply aren’t training people with the necessary high-tech skills to fill job vacancies — now or in the future. Both of these also work well with the “schools are broken” narrative.
In an August article in IEEE Spectrum, Robert Charette smashes “the myth of the STEM crisis,” arguing that there has long been a cycle of “alarm, boom, and bust” in claims about a shortage of scientists and engineers.
Clearly, powerful forces must be at work to perpetuate the cycle. One is obvious: the bottom line. Companies would rather not pay STEM professionals high salaries with lavish benefits, offer them training on the job, or guarantee them decades of stable employment. So having an oversupply of workers, whether domestically educated or imported, is to their benefit. It gives employers a larger pool from which they can pick the “best and the brightest,” and it helps keep wages in check. No less an authority than Alan Greenspan, former chairman of the Federal Reserve, said as much when in 2007 he advocated boosting the number of skilled immigrants entering the United States so as to “suppress” the wages of their U.S. counterparts, which he considered too high.
It's worth asking, I think, who benefits from this crisis rhetoric about a shortage of programmers and programming classes. (Clearly as I outline at the beginning of this post, the latter's a booming industry; and as such I don't think it's surprising that the learn-to-code movement has sort of subsumed the maker movement, which is not as maniacally focused on "jobs" and "skills" as much as it is on inquiry and creativity.)
Charette concludes that while there isn't a STEM worker shortage, there is a STEM knowledge shortage. “Rather than spending our scarce resources on ending a mythical STEM shortage,” he writes, “we should figure out how to make all children literate in the sciences, technology, and the arts to give them the best foundation to pursue a career and then transition to new ones. ”
Some ways to do that: more project-based learning, more hands-on experimentation, more tinkering, more “making” – things that are still too often pushed aside in the classroom for more lecturing and more standardized testing.
Some other ways to do that: cut the "it's a meritocracy" bullshit.
We’ve all had some form of this experience (sometimes, I need to flip the USB plug only once):
Apple’s solution to the problem was the Lightning connector, a beautiful and proprietary 8-pin connector that can be inserted with either side facing up, and for which they’d be very happy to charge a licensing fee for building devices that are compatible with it:
For people who like Apple devices and for whom good design is important, Lightning is worth the price. For those who thought that beige desktops and big grey laptops were just-fine-thank-you-very-much, their reaction’s more like this:
It looks as though good ol’ USB will finally follow suit. Brad Saunders, chair of the USB 3.0 Promoter Group, says that the proposed USB Type-C connector — an addition to the USB 3.1 spec — will be about the size of a Micro USB plug and be reversible. The spec for the connector is expected to be finalized “by the middle of 2014”, presumably in order to give manufacturers time to ramp up for the holiday season.
According to Intel’s Alex Peleg:
“[The USB Type-C connector] will enable an entirely new super-thin class of devices from phones to tablets, to 2-in-1s, to laptops to desktops. This new industry standards-based thin connector delivering data, power, and video is the only connector one will need across all devices.”
Let’s hope it’s prettier than the current Micro USB 3 connector. It would appear that it was “designed” by people who, to use an expression I heard ages ago, “have the visual sense that God gave oysters”:
While it remains to be seen how the connector standards will turn out, one thing is clear: I’ve got boxes and boxes of cables that are soon to become obsolete.
From the WSJ
China Mobile Ltd. has signed a long-awaited deal with Apple Inc. to offer iPhones on its network, a person familiar with the situation said, an arrangement that would give the U.S. technology giant a big boost in the world’s largest mobile market.
The rollout of iPhones on the world’s largest mobile carrier by users, with over 700 million subscribers, is expected to start later this month, around the time of a Dec. 18 China Mobile conference in Guangzhou, according to two people familiar with the carrier’s plans. China Mobile is one of the world’s last major carriers that doesn’t offer the iPhone.
At the Dec. 18 event, China Mobile plans to unveil a brand for its fourth-generation, or 4G, network. China Mobile executives have said they would only begin to sell the iPhone after introducing 4G services. China’s Ministry of Industry and Information Technology said Wednesday it gave licenses to China Mobile and its smaller rivals to operate the higher-speed mobile networks, clearing one of the last hurdles.
This was all but confirmed back in September when the iPhone got a TD-LTE/TD-SCDMA license, but now we at least know the date.
This is a very big deal. Feel free to ignore anyone making snarky comments about China’s average monthly wage being the same as the price of an iPhone 5C. The two pertinent facts about China are that its population is enormous and that, average wages notwithstanding, a meaningful fraction of that population can afford an iPhone.
So while many Western markets may have a greater percentage of the population that can afford an iPhone, the absolute number of Chinese who are potential customers is very high as well.
China Mobile covers 50% of those customers.1
The China Mobile iPhone will be a BIG upgrade – and it will only be available in China. There have been reports that there are as many as 45 million iPhones on China Mobile’s network, and every single one of those runs at EDGE speeds thanks to China Mobile’s use of the aforementioned TD-LTE/TD-SCDMA networking. It’s fair to wonder how much customers value LTE>3G, but LTE>Edge is a massive upgrade indeed; all of those 45 million customers are prime candidates for the China Mobile iPhone.
Moreover, calling it the “China Mobile iPhone” is not an accident. This is a third version of the iPhone that will only be available for sale in China. There will be no gray market undercutting iPhone sales as is the case for China Unicom and China Telecom. At the very least, this fact alone will provide a nice boost to Apple’s quarterly China numbers.
Update: As Naren Balaji pointed out on Twitter, this article at Forbes suggests that LTE Band 41 – which no iPhone currently supports, and which is necessary for China Mobile’s network – can be added in a firmware update. If so, that would negate the idea of a separate China Mobile iPhone. However, it may not necessarily negate the general premise of this paragraph, and has no impact on the rest of the article.
The low-hanging fruit is gone. For several years now any questions about AAPL’s growth prospects have had a simple answer: just wait until they add NTT Docomo and especially China Mobile. Well, they now have, and the only way forward for significant iPhone growth is the long slog of winning new customers. I certainly think Apple is up to it, but there are no more home runs.
Etienne on his way to fight crime on his "Bat"cycle
The Webmaker.org user experience team conducted user testing at Mozilla Toronto between August 9 and October 1 with 11 users. The goal: examine the navigation and site architecture, and whether users understood the concept of remix. Sessions lasted approximately 1 hour.
The navigation between WebMaker.org and the individual tools is unclear (e.g., no consistent way to return to the homepage)
The meaning of X-Ray Goggles, Thimble and Popcorn is not intuitive to users
Publishing and saving is an unclear process for users
Help is not easily available in Thimble or Popcorn
Search is confusing for some users. The filter option on the homepage is sometimes confused for search, and the magnifying glass icon is missed
Users had problems navigating between the tools and the Webmaker.org homepage.
Thimble: Confusion with save/publish
In Thimble the “publish” option appears only after sign in. On Popcorn there is a greyed out save. (Center interface)
Thimble: Lack of help for novice users or those new to the Webmaker.org interface
Thimble: “Show hints” doesn’t do anything for most users
Thimble: Lack of colour picker
Thimble: users expected “content upload” (e.g., upload image)
Thimble: Lorem ipsum text in the starter confuses some users
Popcorn: Lack of help for novice users and those new to the interface
Popcorn: the “event” tab is not noticed by users (i.e. they can’t easily add text, images or maps)
Popcorn: Zoom issue. Sometimes users zoom out too much and can’t find their project anymore.
Popcorn: Audio loop issue
Popcorn: Copyright concerned users
The YotaPhone launched today with a 720p colour screen on the front and a 4.3-inch e-ink display on the back. It is a unique combination and a welcome innovation in a market where form factors have stagnated around the generic, single-screen, monobloc design popularised by the iPhone in 2007.
The inclusion of the low power, high contrast e-ink panel enables several new experiences. For instance, users can explore a map using the interactive, colour touchscreen on the front and then transfer the relevant section to the e-ink display on the back. It will sit there using virtually no power, easily visible in high contrast black and white, and accessible without having to unlock the phone. It can also be used to quietly display incoming messages and system notifications, materialising them on the rear panel while the main colour screen remains off.
Of course, it also supports e-books, meaning the YotaPhone can provide the same ‘easy on the eyes’ reading experience of a Kindle, while still sporting a 720p touchscreen for watching videos, browsing the web and all the functions users expect from their smartphones.
Yota has added playful elements too. Fire up the camera and the e-ink screen displays a ‘Smile’ message on the back of the device.
The concept of designing for second (or third or fourth) screens has been an area of exploration in the MEX community for several years. It is a broad principle, ranging from digital experiences accessed simultaneously on a mobile phone and TV screen, to devices – like the YotaPhone – which have two or more display panels integrated into the same product.
Earlier this year we spoke with Lau Geckler, Yota’s Chief Operating Officer, who explained the user insights which inspired this approach, talking about customers affected by the constant anxiety of checking the flashing badges and notifications on their smartphones. He’s not alone in recognising this growing problem. Nokia’s former design lead Marko Ahtisaari made it a personal crusade to ‘give users back their heads’ after he observed couples at restaurants bowed over their phones rather than talking to each other. Sergey Brin of Google described this tendency as ‘emasculating’, in a perhaps ill-judged remark to create interest in Google’s direct-to-eye Glass interface, presumably expecting his legions of currently ‘emasculated’ Android users to migrate to his more virile robot eyewear?
Yota’s strategy of combining a familiar colour touchscreen with the very different aesthetic of a simpler e-ink display feels more closely aligned with the reality of customer lives. The quality of the experience is better understood when seen up close, but Yota has certainly achieved something unique.
The screen in the photo (kindly modelled for MEX by Yota’s Lau Geckler) shows how Yota handles message notifications. It displays on the rear case, so a user can glance at it while their device is lying face down on a table, and carry on with their conversation without the immersive interruption which often leads to users being unconsciously drawn into a lengthy engagement with their colour screen.
The product appears to have progressed significantly (Engadget has a good hands-on review of the commercial hardware) since the early prototypes I played with, but I still expect this unusual combination will require many compromises. It will cost about $675 to purchase outright – a high-end price tag – but some of its specifications lag similarly priced devices, including a 1.7 GHz dual-core Krait processor where you might expect a quad-core Snapdragon 800, Android Jelly Bean 4.2.2 (where others are already on 4.4 KitKat) and a relatively small 1800 mAh battery. However, more encouragingly, it will boast 2 GB of RAM (enough for Android to run quite smoothly), 32 GB of storage and a 13 megapixel camera.
I also have some question marks in my mind over the practicalities of the dual screen form factor. For instance, what will the colour screen look like after you’ve been holding it face down in your palm for half an hour to read a book on the rear e-ink display? Touchscreens are magnets for greasy fingerprints at the best of times and cradling a screen in your warm hand is likely to exacerbate this problem. I also have a concern over the extent to which users will be comfortable flipping the device around to use the different screens. I remember some user observations I did on devices with slide-out keyboards and how users were reluctant to expend even the small effort required to flip out the keyboard. These are questions which will be answered only during a prolonged period of real world usage.
For now, however, Yota should be commended for introducing something different. It should prompt device manufacturers to consider whether they can find their own form factor innovations to excite customers who are increasingly ambivalent about the smartphone industry’s undifferentiated selection of generic monobloc slates.
The practice of building any, uh, practice is challenging. Especially from the ground up or while merging multiple teams together. Verne Ho does a phenomenal job at explaining how his design practice evolved, and how he tries to keep his team engaged, focused and successful in his recent post A Framework for Building a Design Practice.
I’ve often described building a successful practice as one part mechanic, one part architect and one part magician. You end up creating a process that’s unique to your business’s needs (architect), then figuring out the minute details and gears that are out of sync and causing larger issues (small problems often lead to big impacts: mechanic), and then working constantly to make your team feel like a real team (which is often pure magic).
I tend to focus on three core things when building a new (or rebuilding an existing) practice:
Ultimately, you need to find the right mix for you as a leader, and for your organization and clients.
There have been thousands of reported security vulnerabilities in 2013 alone, often with language that leaves it unclear if you're affected. Heroku's job is to ensure you can focus on building your functionality; as part of that, we take responsibility for the security of your app as much as we're able. On Friday, November 22nd, a security vulnerability was disclosed in Ruby (MRI): CVE-2013-4164. Our team moved quickly to identify the risk to anyone using the Heroku platform and push out a fix.
The disclosed Ruby vulnerability contains a denial-of-service vector with the possibility of arbitrary code execution, as it involves a heap overflow. In a denial-of-service attack, a malicious individual sends your app requests that cause the system to lock up or become unresponsive. When multiple people or systems do this, it becomes a distributed denial-of-service attack (or DDoS).
A denial-of-service attack can vary in damage depending on how crucial uptime is to your app. For example if your app brings in a significant amount of money every hour, an attacker could cost you a large sum by bringing your service down.
In this case, the denial-of-service was particularly easy to execute, making this a potentially devastating attack to some users. In addition, because this attack triggers a heap overflow, there is also a slim theoretical possibility of a much more serious vulnerability, an arbitrary code execution. We could not rule out the possibility of arbitrary code execution even though there is no known way to achieve it through this vulnerability.
When the vulnerability was announced, patched versions of Ruby 1.9.3, 2.0.0, and 2.1.0 were made available from Ruby core. Heroku’s Ruby Task Force pulled in the latest versions, compiled them and after running tests to confirm their compatibility with the platform released updated versions of 1.9.3, 2.0.0, and 2.1.0. Any new pushes to the platform receive these patched versions.
Some have asked why we didn’t automatically force-update all Ruby apps. First, Heroku will not re-deploy a user’s app without their direct knowledge or action. Additionally, since some upstream dependencies may have changed since an app’s last deploy, we also want to maintain erosion resistance by only changing components on an explicit push.
In addition to building and deploying the fixed versions of Ruby, we took several further steps.
We also did something unprecedented in the history of Heroku: patched and released two unmaintained language versions, Ruby 1.8.7 and 1.9.2.
The Ruby 1.8.7 version has been at the End of Life for months, which means that the Ruby Core team will no longer issue bug or security fixes for the language. However, developers are still using it actively in production. Ruby 1.9.2 is currently unmaintained, though this status has not been formally announced.
While security patches were not made available for Ruby 1.8.7 and 1.9.2, Ruby engineer Terence Lee discovered that he could cleanly apply the security fix to both versions. The source for these is available on Heroku's fork of Ruby: 1.8.7p375 and 1.9.2p321. He then built, tested, and released these versions on Heroku’s Cedar stack. Terence recently got commit access to Ruby core and is currently working on pushing the changes upstream even though the two versions are technically unmaintained.
Terence’s actions give you a longer runway, but developers using Ruby 1.8.7 or 1.9.2 must upgrade as soon as possible. Heroku recommends upgrading to Ruby 2.0.0 or, at the very minimum, 1.9.3. Matz has stated that support for 1.9.3 will likely be dropped within a year. Heroku’s Bamboo stack runs Ruby Enterprise Edition, which has been at end of life since early 2012 and was not patched. Bamboo users should upgrade to the Cedar stack to stay secure.
Staying on a current version is crucial to being able to iterate quickly and respond to vulnerabilities. For example, the Chrome web browser auto updates in the background, and recently Apple software has moved their Mac and iOS app stores to this model. Using updated software provides benefits to a maintainer: less fragmentation and less time spent supporting legacy versions. This means more time for features, performance, and compatibility.
The Rails web framework only supports Ruby 1.9.3+ for their 4.0.0 release, and it is rumored that they will be supporting only 2.0.0+ for their 4.1.0 release. Without dropping support for older syntaxes, developers cannot utilize new ones. As a language user, staying current means you have access to the latest features, latest security updates, and most active language support. For all these reasons, we want to encourage Heroku users to regularly upgrade and stay up-to-date.
The end of support for Ruby 1.8.7 and 1.9.2 is an interesting event for Heroku: it marks the first time a technology used on the platform has become unmaintained by its core developers. While we have extended the period you are secure on these unmaintained versions, we have not made a commitment to maintain either Ruby 1.8.7 or 1.9.2 indefinitely. We know that these environments are still in use, and want to make sure customers have ample time to upgrade. We’re working to make Heroku’s Ruby version support commitments and timelines explicit, and will publish documentation to that effect.
For Heroku’s security team, communication is as much a concern as technical fixes. When communication breaks down, so does security. This year we’ve seen several large vulnerabilities that required notifications for languages, frameworks and tools including a Postgres patch and the Rails YAML vulnerability. We’re working on better ways to notify affected application owners. In this incident, we sent notifications to all Ruby application owners even if they were running JRuby or Rubinius, runtimes that were not affected. In the future we aim to be able to dial up our signal-to-noise ratio on security notifications, and even be able to provide app-specific information quickly and securely. If this sounds like fun, our Security team is hiring.
If you’re using Ruby on the platform, be sure to take advantage of the fix as soon as possible. If you’re depending on an older, unmaintained version of Ruby, upgrade as soon as possible. If you’re maintaining your own versions of Ruby, make sure you update and re-compile.
If you’re running on Heroku, sleep well knowing that we care about your security.
Magazine is an adjective, I promise.
Today Medium, which has been called a “magazine killer,” rolls out a new design and a bunch of new features that make it even more magazine-like. The blogging platform was designed to solve the combined shortcomings (or build on the combined strengths, depending on your point of view) of co-founder Evan Williams’ two prior companies, Twitter and Blogger. Medium offers the built-in audience of Twitter, with the highly malleable publishing capabilities of Blogger.
It has worked — Medium’s clean interface and recruiting of high-quality writers have attracted plenty of attention, pageviews and respect. (The company won’t disclose any metrics on sign-ups or uniques, so I’m basing this statement on personal experience.)
Today Medium’s publishing capabilities got even more robust. Now Medium’s “collections” will be edited solely by their creators, which adds a level of human curation to the stories and makes the reading experience more personalized. There’s a design update too: big beautiful covers for stories, new fonts, full-bleed photos, and more ways to edit and position art in the story. Lastly, writers will get more detailed stats and data around their stories and collections. It’s the kind of stuff a magazine editor would want. Williams called the overhaul “Medium 1.0.”
The addition of more magazine-like features only makes Medium’s inherent conflict murkier: Is it a platform or a publication?
Medium has the qualities of a magazine: It employs writers and editors, commissions stories and illustrations, and generally acts like a media organization. It has formed partnerships with groups like the longform journalism organization Epic. It acquired Matter, a magazine. As a publication, it must take responsibility for the ramifications of the content it runs.
On the other hand, Medium is an open platform like Twitter or Blogger. Previously only available by invite or approval, Medium opened itself up to allow anyone to publish in late October. So when the un-edited masses post hate speech, or false information, a platform like Twitter or Blogger can take a hands-off approach. It’s the only way for them to avoid getting sucked into controversy and accusations of censorship or bias. And when someone alters or deletes a controversial post? Again, “not our problem.”
It may seem like a question of semantics, but the answer to “platform vs. publication” has ramifications as publications become more like platforms. Forbes, BuzzFeed, Business Insider, Huffington Post, Gawker, and even the New York Times all allow outsiders to post on their sites with varying degrees of oversight and editing.
It’s important to know whether these outlets will take responsibility for the content their users post. As publications, they’ve built their brand around reader trust, and they will continue to grapple with the question of what to do when bad actors find a megaphone on their sites.
Williams’ stance on this has been clear: Medium is a platform, even as it looks increasingly like a publication. Fortunately for Medium, it started out as a platform, and that’s what readers expect, even as it raises the bar for user-generated content. Over time Medium needs readers to get comfortable with its dual-citizen status. It may have an easier time doing that than publications with a legacy of highly edited reportage, like Forbes or The Atlantic.
For now, Medium is pushing deeper in each direction.