Shared posts

27 Feb 04:58

Beer and Tell – February 2016

by Michael Kelly

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call "Beer and Tell".

There's a wiki page available with a list of the presenters, as well as links to their presentation materials. There's also a recording available courtesy of Air Mozilla.

Bruce Banner: Web Developer

shobson was up first with Bruce Banner: Web Developer, a small webcomic generator. It provides sweet relief from those workplace stressors via the violent justice of The Incredible Hulk. The idea came from willkg, the code from shobson, and the art from craigcook. Excelsior!

Dokku + Let's Encrypt

Next up was pmac, who showed off dokku, a small Heroku-like PaaS implementation that uses Docker containers. Not only is it convenient for running several apps on a single server, but there is also a plugin called dokku-letsencrypt that lets you automatically retrieve and install TLS certificates from letsencrypt.org. Easy peasy!

RPG Maker MV

Next was Osmose (that's me!) who talked about RPG Maker MV, the latest entry in the RPG Maker series of game-making tools. Interestingly, RPGMV uses HTML and JavaScript to implement the engine used to run games made with it. The application itself edits JSON files that are loaded by the web-based engine. The engine itself uses pixi.js for rendering, and can be extended via plugins written in JavaScript.
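
To make the plugin model concrete, here's a minimal sketch of the conventional RPGMV plugin pattern – my illustration, not code from the talk – assuming the engine's Game_Party.prototype.gainGold method and the usual alias-and-extend convention:

/*:
 * @plugindesc Doubles all gold gained (illustrative example only).
 */
(function() {
  // Alias the original method instead of replacing it outright, the
  // usual RPGMV convention, so other plugins patching it still work.
  var _Game_Party_gainGold = Game_Party.prototype.gainGold;
  Game_Party.prototype.gainGold = function(amount) {
    _Game_Party_gainGold.call(this, amount * 2);
  };
})();

Dropped into a game's plugins folder and enabled in the editor's Plugin Manager, a file like this patches the running engine directly.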

Battleshits

peterbe stopped by to share Battleshits, a mobile-friendly web app and a fairly gross version of the popular boardgame Battleship. The game connects you with other players via WebSockets and Fanout, and most of the interface is implemented using React.
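
The post doesn't include source, but the browser side of a WebSocket game exchange generally looks something like this sketch (the endpoint URL and message shapes here are hypothetical, not Battleshits' actual protocol):

// Hypothetical endpoint and message format, for illustration only.
var socket = new WebSocket("wss://example.com/battleship");

socket.onopen = function() {
  // Ask the server (which could sit behind a service like Fanout)
  // to pair us with another player.
  socket.send(JSON.stringify({ type: "join" }));
};

socket.onmessage = function(event) {
  var msg = JSON.parse(event.data);
  if (msg.type === "shot") {
    console.log("Opponent fired at", msg.x, msg.y);
  }
};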

Chava

Last up was jpetto with a small personal project memorializing his local coffeeshop, Chava, which closed earlier this year. The page uses Hammer.js for touch events and LazyLoad to lazily load the images, but the lightbox implementation is custom-made from scratch. Neato!


If you're interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

27 Feb 04:57

Bike helmets

by Don Marti

An adtech proponent once suggested this to me (slightly paraphrased).

You can keep track of households that have children and are interested in cycling. Then when the parents visit a news site, you can offer them a 25% discount on children's bike helmets!

Cool story bro.

But that kind of thing is the lowest-value part of advertising. Adtech people are trying to chase another $50 billion by rearranging database marketing and throwing more computers at the problem. But that's not where the money is.

Let's think about that bike helmet example a little more.

Very few parents, fortunately, have direct experience of the quality of a children's bike helmet. You buy the thing, keep it for a while, the kid outgrows it or loses it, and you're done with it.

Unless something really bad, that you don't want to think about, happens. Some event where the difference between a good bike helmet and a not-so-good bike helmet means the difference between remember that time I crashed my bike and we had to go buy a new helmet? and something else.

Bike helmets are really hard for shoppers to tell apart. It looks like they're all basically made out of the same stuff. Some hard plastic, some foam, some nylon straps. But some designs are better than others. Specific products change all the time, but some companies are good at some product categories. Some brands have a good reputation.

Reputation. Some company is known for doing well at engineering, testing, and manufacturing? And I can pay that company to feel better about the protection of my child's brain? Like the man says, shut up and take my money.

Where does that reputation come from? It doesn't come from stalking users with the digital version of windshield flyers. Building brand equity is complicated, and depends only partly on advertising. And the advertising piece of the puzzle depends on where the ad appears, and how consistently and expensively it appears.

Product ads and product-related editorial need each other. High-reputation news and reviews come from a high-reputation source, and a publication's reputation depends on having the budget to write independently. The budget depends on ad sales. And signaling connects it all up. A brand's ability and willingness to advertise in a high-reputation publication carries a powerful signal of its intent in the market. A high-reputation publication isn't afraid to write bad things about problem products—even the ones that come from its advertisers.

Publisher reputation is even more important in regulated product categories. Shoppers know about regulatory capture even if they don't know to call it that. Government standards need independent reviews just as much as the products do. Same problem, one level up, even harder to do. (Anyone got a good link to a story about cadmium, a toxic metal, in art materials sold for children's use? And what regulations cover it? Let me know, I'll be over here chewing crayons.)

In real life, customers aren't in a funnel or a distillery, working their way down the pipe to purchase. Customers are active participants in markets and in communities of practice. Humans are wired to constantly try to measure the reputation and honesty of others. Everyone is picking up on signals, all the time.

So reputable publications run obviously expensive, hard-to-repudiate ads, which pay for more and better journalism and cultural works, which build publisher reputation, and that reputation brings in more ad money, and around and around the engine of wealth creation goes. Tracking and targeting users doesn't just take a piece out of publishers' share—it breaks the cycle. (Am I always getting ads for crap because someone thinks I don't know any better? Or am I seeing that same thing that more knowledgeable shoppers are?) Signaling and the open web are a great match, as soon as we web people can fix up all those 1990s bugs that allow for cross-site tracking.

Let's make web ads work. The first step in growing web advertising from a targeted medium to a signal-carrying medium is to get more users protected from third-party tracking, so that signal-carrying ads will stand out. Take a tracking protection test and be part of the transformation. Firefox users, try the new pq add-on to turn on Firefox's Tracking Protection functionality. The social silos can growth-hack their funnels and distilleries of stalking, couponing, and crap, while the open web makes mad cash with signaling.

27 Feb 04:57

Computer Science is the Discipline of Reinvention

by Eugene Wallingford

The quote of the day comes courtesy of the inimitable Matthias Felleisen, on the Racket mailing list:

Computer science is the discipline of reinvention. Until everyone who knows how to write 10 lines of code has invented a programming language and solved the Halting Problem, nothing will be settled :-)

One of the great things about CS is that we can all invent whatever we want. One of the downsides is that we all do.

Sometimes, making something simply for the sake of making it is a wonderful thing, edifying and enjoyable. Other times, we should heed the advice carved above the entrance to the building that housed my first office as a young faculty member: Do not do what has already been done. Knowing when to follow each path is a sign of wisdom.

27 Feb 04:57

Microsoft buys Xamarin

by Rui Carmo

Well, that was unexpected. Probably inevitable, really, but kind of coming in from left field.


27 Feb 04:57

T. Greg Doucette on false arrest and police brutality

by Ethan

This post is not from me, but is a remarkable rant from T. Greg Doucette, an attorney in Durham, NC, who took to Twitter to share his experiences defending a young client from charges of reckless driving to endanger, a serious crime in North Carolina. (Greg, if you’re not okay with me collecting these here, let me know and I will take it down.)

I’m sharing it because, as the child of a legal aid defense attorney, I remember growing up with loads of stories like this, and having these stories shape my understanding about law enforcement, criminal justice and power. My father used to frequent courtrooms and offer to defend people facing charges without counsel precisely because stories like this are extremely common.

A couple of things. Greg mentions that this situation is wrong whether you’re Republican, Democrat or undecided, but you may be assuming that he’s a Dem. He’s not – he’s a Republican and a libertarian, and is running for state senate as a Republican.

You may also assume that he’s African American. He’s a white dude, who happened to go to a historically black law school and who runs a law firm with two female lawyers of color. And while he’s getting lots of Twitter love today, he points out that he’s been blogging about these issues for a long time – see this post on prosecuting abusive prosecutors where he features a friend he went to NC State with.

But while Greg’s an interesting figure, what’s important about his rant – IMHO – is that he doesn’t address this as a case of a rogue cop potentially ruining a young man’s life. He sees this as a systemic problem, and as a form of police brutality. Greg’s take may focus on this as a manifestation of a greedy and out-of-control state (he is a libertarian, after all), but he’s absolutely right to point out that when court systems are forced to become partially or entirely self-financing, there’s a strong pressure to increase prosecutions, even when those prosecutions are entirely bogus. Even if Greg’s rant ends up somehow leading to the arresting officer being sanctioned or otherwise punished, the problem he identifies is a systemic one – set up a system where courts need to prosecute people to survive and they will prosecute a lot of people.

I was especially struck by Greg’s identification of this arrest as a form of brutality. It’s a form that’s hard for most white people to see – this young man wasn’t beaten up, wasn’t imprisoned, wasn’t shot. But he was terrified. And his encounters with law enforcement going forward will be colored by the knowledge that power can and will be exercised arbitrarily based on his status as a young black male.

When we look at questions like whether predictive policing is fair and ethical, we need to understand that not all encounters between citizens and police are handled in the most ethical and professional manner we’d like to see. Populations that have grown up with a long tradition of being harassed and brutalized by police are understandably concerned about strategies that identify “hot spots” and promise additional “attention” to those areas, which often turn out to be communities of color.

In watching debates about policing after Ferguson, it’s hard not to be struck by the importance that imagery can have in disputes between police and citizens. Without his mother’s photographs, Greg’s client would likely have been convicted based on the officer’s testimony. Without Feidin Santana’s video, we would never have known that Walter Scott was murdered by officer Michael Slager. And so it makes sense that activists – and the President – would push for officer-worn body cameras.

But imagery alone doesn’t change flawed systems – the video of Eric Garner choking to death wasn’t enough to indict the officers who arrested him in Staten Island. Greg Doucette’s story points to the fact that problems with criminal justice in the US are problems of structural injustice and racism: a system where power is not held accountable will veer towards abuse, and financial incentives to prosecute crimes lead to unjustifiable prosecutions. Props to Greg for identifying this as a structural problem and looking for ways to fix it, and to all defense attorneys who work hard, with little recognition, to fight for the rights of their clients in a system that is often biased against them.

27 Feb 04:56

Class Support

We’ve just added class support to Tonic! JavaScript classes are shorthand that makes it easier to work with JavaScript’s existing prototypal inheritance. As usual, Tonic is smart about the way it supports this feature, using Babel in earlier versions (like Node 0.12), but giving you full access to native classes in Node 4 and 5, even when not in strict mode. Give it a try:

class AbstractMockSingletonFactoryFactory {
  injectDependency() {
    console.log("Subclasser Responsibility");
  }
}
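
As a rough sketch of what that shorthand expands to (approximate, not Babel's exact output – real class semantics also make methods non-enumerable and forbid calling the constructor without new), the class above could be written against the prototype directly:

function AbstractMockSingletonFactoryFactory() {}
AbstractMockSingletonFactoryFactory.prototype.injectDependency = function() {
  console.log("Subclasser Responsibility");
};

new AbstractMockSingletonFactoryFactory().injectDependency();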

This brings Tonic’s ES6 support to near completion, making it one of the best places to try the latest JavaScript features in every version of Node!

Don’t forget, we’re holding office hours on Thursday if you want to stop by and discuss this or any other JavaScript features.

27 Feb 04:56

Your most dangerous customer

by Eric Karjaluoto

My timing was good. We were already off Burnaby Mountain when my back wheel blew. Actually, my timing was great. There was a bike shop just a block away.

I was in this store a few weeks earlier, looking for a decent bike for my wife. The fellow there was pleasant, and showed me the new models due to arrive.

Upon fixing the tube, the mechanic noted some other problems with my bike. Specifically: my stretched chain. It needed replacement before it damaged the sprocket cassette (a more expensive fix).

I typically use another bike shop, but the guy won me over with how friendly he was. I rode off, with my now inflated tire—and returned a week later for the chain repair. They needed a few days to make the fix, so, I left the bike with them.

Getting to that shop isn’t convenient for me. But, I had a brief opportunity to pick up the bike—if I rushed. So, I drove fast, and snuck in the door just 5 minutes before closing. The shopkeep (someone I hadn’t met until then) was not pleased. He was in a rush, had cashed out, and didn’t want to deal with me.

I apologized. I didn’t know what his hurry was, but he was under some stress. I tried to problem-solve, “Listen—I don’t want to put you out, here. What if I give you my credit card number? You can run it tomorrow. That way you don’t need to re-open the till.” He rolled his eyes and said that couldn’t work. Instead, he scurried downstairs to get the bike, and came back in a huff.

The mechanic, who had since left, had written some notes on the service tag. The fellow processing the payment barked these at me. I nodded and agreed. This went on for a bit. Eventually, I didn’t feel as sympathetic.

Sure, he was in a rush, but it still wasn’t 6:00. If the sign on the door says the shop’s open until 6:00, I figure the staff should be prepared for that.

Although the process only took a few moments, each one dragged. This guy was so pissed off at me—and I just wanted out of the place. Once done, I looked at my watch. It was 6:00.

As I left the shop, I apologized once again for holding him up. For all the fuss, he must have had something awfully important to do. Although I hadn’t intended to, I must have screwed up his plans. Calmer, now, he responded, “I like to catch the early bus. This means I have to wait for the next one.”

All his frustration boiled down to waiting a few minutes for a bus. Wow.

I didn’t say anything negative. I didn’t complain about him making an issue out of so little. I didn’t call his manager to talk about the experience.

Instead, I nodded and smiled. I knew I’d never go back.

That was last November. Spring sprung early in Vancouver, and it’s now time to buy my wife’s new bike. That guy’s desire to catch an early bus cost his boss $2,000. (Not to mention bikes and helmets for our kids, occasional bits-and-bobs, and inevitable repairs.)

Customers who complain might seem like a nuisance. In actuality, they’re easy. Acknowledge their concern, show empathy, and find a way to resolve the problem. No big deal. If you do this well, you might even turn them into more loyal customers.

It’s those customers who don’t complain that you need to worry about. Sure, advertising can get new people through your front door. But, it takes a lot of ad dollars to make up for the loss of a customer you already had.

27 Feb 04:55

The Emotional Labor of Flea Extermination

by Lindsay Poirier


I want to share with you a personal story – an experience that I dealt with about four months ago that caused me a great deal of anxiety: I found a flea on my dog.  That’s right, a flea; not multiple fleas, a flea.  But I panicked.  I vacuumed everything – couches, throw pillows, mattresses, floors – twice a day, every day for at least three weeks.  I mopped every other day.  I washed everything in the house three times a week.  I bought some of that terrible chemical shampoo and washed my poor pup with it.  I flea combed her three times a day.  I set up flea traps in every room before bed.  I caught three more fleas.  I started having recurring dreams about fleas multiplying on my dog – growing in size as in an arcade game while I tried to knock them out one by one.  My language changed.  I started singing the Pokémon “Gotta Catch ’Em All” song while vacuuming and talking like Ted Cruz, using phrases like “we’ve got to obliterate…,” etc.  My partner was seriously concerned about my sanity.

Now this sort of anxiety is partly personal – an anxiety over microscopic things that have the potential to grow completely out of my control.  But I’m going to argue that there is more to it than that.  I’ve talked to pet owners who – upon spotting fleas – tore apart their houses, spent hundreds of dollars on flea products, set off chemicals in their homes that notably released the same poisonous gases that were instrumental in India’s Bhopal disaster.  We can’t all be this crazy.

I don’t think we are.

For one, new information systems – PetMD, e-wiki articles, and veterinary forums – instigate panic while setting impossible standards for care. While supposedly designed to help us diagnose and treat our own issues, these are the same links that warn that for every one flea, there are a hundred fleas.  The same links that show images of hundreds of tiny bugs weaving in between a dog’s fur.  The same links that break down the scientific version of the flea life cycle into colloquial terms so that you know how serious the issue is.  The same links that suggest that you throw out carpets and bedding and that you vacuum ferociously several times a day for weeks.  They state at the beginning “Don’t panic!  You can get through this.” and then proceed to list a series of chores that working families cannot possibly get through.  It’s dizzying to click through them.

Now, these arguments seem to substantiate claims that Google has provoked a sort of “cyberchondria” – where Googling to find information about our health problems, pet problems, and house problems leads people to overreact – to jump to irrational conclusions.  That the freckle on their finger must be cancer.  Or that the spot on their dog means they need to burn their houses down.  But I find this to be an extremely gendered argument – one that positions information seekers as emotionally-charged and unreasonable – characterized by conceptions of a feminine excessiveness as Luce Irigaray would say.  You see this gendering in the articles that advise against Googling health symptoms – in magazines like Glamour and Women’s Health – articles that disproportionately feature images of anxious women staring at computer screens.  Articles that include statistics such as “one in four British women have misdiagnosed themselves on the Internet.” Such depictions, I believe, stem from a very problematic assumption that women can’t think logically when exposed to provocative information.

New “anxieties of care” are induced not so much by the sensationalized information in online articles (though this is certainly part of it), nor by the rising costs of expert care advice from doctors, vets, and exterminators (though this is certainly part of it too).  Instead, families today constantly find themselves in information double binds – binds that make it impossible to at once meet new standards of care (house care, pet care, child care, etc.) and at the same time attend to the grounded knowledge they’ve developed in their experience being caregivers.  Impulsive cleaning, spending, and chemical spraying, in this sense, are not a result of mis-information or falling prey to “cyberchondria” but instead are anxious responses to being embedded in conflicting communication streams – streams mediated by algorithms, science, capital, and gendering.

Double bind theory, proposed by Gregory Bateson, suggests that anxiety can be provoked in situations where an individual must respond to two or more conflicting streams of communication.  For instance, imagine a mother tells her child, “I demand that you disobey me.”  There is no way for the child to attend to both requests.  Today’s parents face the same communicative contradictions when it comes to care.

Take the flea problem for example.  The #1 recommended flea product by veterinarians and many online sources is Advantix II, produced by Bayer, one of the largest pharmaceutical companies in the world.  Bayer is also acclaimed for a particularly long history of chemical and drug-related corporate injustices, which may not be of particular interest to caregivers until they come across other web articles describing what can happen if you “poison” your dog with toxic chemicals.  Or consider the bug bomb issue.  While many online sources describe it as the only way to ensure that you kill off the bugs in your home – to exterminate them at all stages of the flea life cycle – other articles remind us that such insecticides can cause serious long-term health effects, particularly for young children.  Caregivers know that any product that tells you to cover all of the surfaces in your house and stay away from it with the windows closed for several hours is not the safest choice for the kids and pets.  They see that companies they’ve learned to hate – such as Monsanto – sponsor the top medical/pet advice websites.  And that veterinarians make their living by recommending expensive pharmaceutical products.   And if they’re not going to purchase these products, they better be prepared to clean – every second of every day – for months, which is just not an option for most working families today.

In the midst of such information binds, I found myself searching, relentlessly, for anything rational, anything that didn’t require me to be super-mommy – to pull off the impossible.  But unfortunately, with information infrastructures so heavily mediated today, more information tends to equate to more expectations, while simultaneously exposing more contradictions.  It’s much like Ruth Cowan describes in her book More Work for Mother – how modern labor-saving technologies for the home, such as the vacuum and the washing machine, set new standards for cleanliness, thus creating more work for mothers, not less.  Information infrastructures, designed to help families deal with medical or home issues, have set impossible standards for care.  Digesting this information is an anxiety-ridden experience – one that pushes families to consider how much time they are willing to devote, how much money they are willing to spend, and what toxic chemicals they are willing to expose themselves to.  It’s a gendered experience; women, in particular, are held to higher expectations, while simultaneously being cast as emotionally charged and overreacting when they respond to them.

Don’t worry parents; you are not alone.  New information systems are making us all a little more anxious – a little more prone to panic.  And it’s not because we’re irrational or foolish.  It’s because we’re embedded in an information infrastructure that’s impossible to move within.   Key to double bind theory, though, is that working the double bind – resisting and weaving your way out of it – leads to emotional growth.  Recognizing how information sources compete – how they are so often shaped by macro-forces that are particularly antagonistic to women – can be the best medicine for dealing with anxieties of care.

Lindsay Poirier is a PhD Student in Science and Technology Studies at Rensselaer Polytechnic Institute.  She occasionally Tweets at @lindsaypoirier.  http://lindsaypoirier.com/

Image source credit: Kat Masback

27 Feb 04:55

Payment Processors Still Using Weak Crypto

by Richard Barnes

Part of how Mozilla protects the Web is by participating in the governance of the Web PKI, the system of security certificates that allows websites to authenticate themselves to browsers. Together with the other browsers and stakeholders in the Web, we agree on standards for how such certificates are issued.  We then require that these standards, plus a few additional ones specific to Mozilla, be applied to all certificates which are issued, directly or indirectly, by the “roots” that Firefox trusts.

We have been notified that some payment providers are using Web PKI certificates (i.e. certificates which chain up to roots trusted by Firefox) to secure the connection between central servers and payment terminals, for the purpose of transmitting payment data over the public Internet. Unfortunately, some of those non-browser users of the Web PKI have not kept up with the advances in security that the Web is achieving. The SHA-1 hash algorithm (used to validate the integrity of a certificate) has been declared obsolete in the Web PKI, but these providers have failed to upgrade these devices to support its replacement, SHA-2, despite the SHA-1 deadlines having been set years ago. As a result, many payment-related devices continue to require their servers to have certificates which use SHA-1 in order to be able to operate.
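
To make the difference concrete: a certificate's signature covers a hash of the certificate's contents, and SHA-1 is the weak link being retired. Here is a quick Node.js illustration of SHA-1 next to SHA-256, a member of the SHA-2 family (an illustration of the hash algorithms only, not Mozilla's certificate-validation code):

var crypto = require("crypto");

var tbsCertificate = "to-be-signed certificate data";

// SHA-1 yields a 160-bit digest and is no longer collision-resistant
// enough for signatures; SHA-256 is its common SHA-2 replacement.
console.log("sha1  :", crypto.createHash("sha1").update(tbsCertificate).digest("hex"));
console.log("sha256:", crypto.createHash("sha256").update(tbsCertificate).digest("hex"));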

In particular, Worldpay PLC approached Mozilla through their Certificate Authority, Symantec, to request authorization to issue, in violation of standard policy, a limited number of SHA-1 certificates needed to support a large number of outdated devices. They made this request less than two weeks before the authorization needed to be effective. To avoid disruption for users of these devices, after a discussion on the dev.security.policy mailing list, in this particular case we have decided to allow these certificates to be issued, but only under a set of conditions that ensure that the issuance of SHA-1 certificates is fully transparent and allowed only for purposes of transition to SHA-2.

This authorization means that Symantec can issue SHA-1 certificates that will enable Worldpay’s devices to keep operating a while longer, and that issuance will not be regarded by Mozilla as a defect. This decision only affects the Mozilla root program; other root programs may still consider the issuance of these certificates to be a mis-issuance.

We understand that there are payment processing organizations other than Worldpay that continue to have similar requirements for SHA-1 — either within the Web PKI or outside it. It is disappointing that these organizations are putting the public’s data at risk by using a weak, outdated security technology.  We encourage organizations with a continuing need for SHA-1 in the Web PKI to come forward as soon as possible and provide as much detail as possible about their plans for a transition to SHA-2.

27 Feb 04:55

The Lenses [Flickr]

by vanderwal

vanderwal posted a photo:

The Lenses

Opening slide for presentation on my Lenses

The photo is from: www.flickr.com/photos/39955793@N07/4505850838/in/photolis...

27 Feb 04:53

The moral vacuum within Donald Trump: a campaign speech by . . .

by Josh Bernoff

I challenge each of the remaining candidates for president. Do you have the courage to make this speech and save America from Donald Trump? My fellow Americans, today’s speech is not about me. It is about you. I speak to you today about the greatest threat to America’s future. It’s not ISIS. It’s not gun violence …


26 Feb 22:17

The Apple Case Will Grope Its Way Into Your Future

by Federico Viticci

Farhad Manjoo, writing for The New York Times:

Consider all the technologies we think we want — not just better and more useful phones, but cars that drive themselves, smart assistants you control through voice or household appliances that you can monitor and manage from afar. Many will have cameras, microphones and sensors gathering more data, and an ever more sophisticated mining effort to make sense of it all. Everyday devices will be recording and analyzing your every utterance and action.

This gets to why tech companies, not to mention we users, should fear the repercussions of the Apple case. Law enforcement officials and their supporters argue that when armed with a valid court order, the cops should never be locked out of any device that might be important in an investigation.

But if Apple is forced to break its own security to get inside a phone that it had promised users was inviolable, the supposed safety of the always-watching future starts to fall apart. If every device can monitor you, and if they can all be tapped by law enforcement officials under court order, can anyone ever have a truly private conversation? Are we building a world in which there’s no longer any room for keeping secrets?

26 Feb 22:17

Connected: We Didn’t Stream Live and There Was No Showbot

by Federico Viticci

The whole gang is back this week to discuss Stephen’s semi-smart watch, Federico’s annual iPad checkup and more.

Don't let the seemingly sad title of the latest Connected fool you: in the latest episode, we've talked about changes in the iOS 9.3 beta, Apple and the FBI, and the backstory of my iPad article from earlier this week. You can listen here.

Sponsored by:

  • PDFpen, from Smile: The Swiss Army Knife for working with PDFs.
  • Igloo: An intranet you’ll actually like, free for up to 10 people.
26 Feb 22:17

Canvas, Episode 4: Photo Editing

by Federico Viticci

In this show, Fraser and Federico look at four major applications for iOS: Photos itself, Pixelmator, Adobe Lightroom Mobile and Snapseed. All of these are very powerful applications for serious photo editing on iOS, each with their own particular strengths.

We also look at apps which provide Photo Editing Extensions. These are small bundles of features that can be accessed directly inside the editing view of the Photos app itself.

In episode 4 of Canvas, Fraser and I continued our Photos series with photo editing features and apps. You can listen here and check out the links below.

Featured Standalone Editing Apps

Featured Apps with Photo Editing Extensions

26 Feb 16:38

Before You Get Too Excited About That GitHub Study…

by Scott Alexander

Another day, another study purporting to find that Tech Is Sexist. Since it’s showing up here, you probably already guessed how this is going to end. Most of this analysis is not original to me – Hacker News had figured a lot of it out before I even woke up this morning – but I think it’ll at least be helpful to collect all the information in one easily linkable place.

The study is Gender Bias In Open Source: Pull Request Acceptance Of Women Vs. Men. It’s a pretty neat idea: “pull requests” are discrete units of contribution to an open source project which are either accepted or rejected by the community, so just check which ones are submitted by men vs. women and whether one gender gets a higher acceptance rate than the other. This is a little harder than it sounds – people on GitHub use nicks that don’t always give gender cues – but the researchers wrote a program to automatically link contributor emails to Google Plus pages so they could figure out users’ genders.

This alone can’t rule out that one gender is genuinely doing something differently than the other, so they had another neat trick: they wrote another program that automatically scored accounts on obvious gender cues: for example, somebody whose nickname was JaneSmith01, or somebody who had a photo of themselves on their profile. By comparing obviously gendered participants with non-obviously gendered participants whom the researchers had nevertheless been able to find the gender of, they should be able to tell whether there’s gender bias in request acceptances.
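
The cue-scoring program isn’t published with the preprint, but the idea is simple enough to sketch (a toy illustration of the kind of heuristic described, not the authors’ code):

// Toy heuristic: treat a profile as "obviously gendered" if the
// nickname embeds a name from a (hypothetical) gendered-name list.
var gendered = ["jane", "mary", "susan", "john", "david"];
function looksObviouslyGendered(nick) {
  var lower = nick.toLowerCase();
  return gendered.some(function(name) {
    return lower.indexOf(name) !== -1;
  });
}
console.log(looksObviouslyGendered("JaneSmith01")); // true
console.log(looksObviouslyGendered("133T_HAXX0R")); // false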

Because GitHub is big and their study is automated, they manage to get a really nice sample size – about 2.5 million pull requests by men and 150,000 by women.

They find that women get more (!) requests accepted than men for all of the top ten programming languages. They check some possible confounders – whether women make smaller changes (easier to get accepted) or whether their changes are more likely to serve an immediate project need (again, easier to get accepted) and in fact find the opposite – women’s changes are larger and less likely to serve project needs. That makes their better performance extra impressive.

So the big question is whether this changes based on obviousness of gender. The paper doesn’t give a lot of the analyses I want to see, and doesn’t make its data public, so we’ll have to go with the limited information they provide. They do not provide an analysis of the population as a whole (!) but they do give us a subgroup analysis by “insider status”, ie whether the person has contributed to that project before.

Among insiders, women do the same as men when gender is hidden, but better than men when gender is revealed. In other words, if you know somebody’s a woman, you’re more likely to approve her request than you would be on the merits alone. We can’t quantify exactly how much this is, because the paper doesn’t provide numbers, just graphs. Eyeballing the graph, it looks like being a woman gives you about a 1% advantage. The study and the media coverage ignore this result, even though it’s half the study, and as far as I can tell the more statistically significant half.

Among outsiders, women do the same as/better than men when gender is hidden, and the same as/worse than men when gender is revealed. I can’t be more specific than this because the study doesn’t give numbers and I’m trying to eyeball confidence intervals on graphs. The study itself says that women do worse than men when gender is revealed, and since the researchers presumably have access to their real data, that might mean the confidence intervals don’t overlap. From eyeballing the graph, it looks like the difference is 1% – ie, men get their requests approved 64% of the time, and women 63% of the time. Once again, it’s hard to tell by graph-eyeballing whether these two numbers are within each other’s confidence intervals.

The paper concludes that “for insiders…we see little evidence of bias…for outsiders, we see evidence of gender bias: women’s acceptance rates are 71.8% when they use gender neutral profiles, but drop to 62.5% when their gender is identifiable. There is a similar drop for men, but the effect is not as strong.”

In other words, they conclude there is gender bias among outsiders because obvious-women do worse than gender-anonymized-women. They admit that obvious-men also do worse than gender-anonymized men, but they ignore this effect because it’s smaller. They do not report doing a test of statistical significance on whether it is really smaller or not.

So:

1. Among insiders, women get more requests accepted than men.

2. Among insiders, people are biased towards women, that is, revealing genders gives women an advantage over men above and beyond the case where genders are hidden.

3. Among outsiders, women still get more requests accepted than men.

4. Among outsiders, revealing genders appears to show a bias against women. It’s not clear if this is statistically significant.

5. When all genders are revealed among outsiders, men appear to have their requests accepted at a rate of 64%, and women of 63%. The study does not provide enough information to determine whether this is statistically significant. Eyeballing it, it looks like it might be, just barely.

6. The study describes its main finding as being that women have fewer requests approved when their gender is known. It hides on page 16 that men also have fewer requests approved when their gender is known. It describes the effect for women as larger, but does not report the size of either effect, nor whether the difference is statistically significant. Eyeballing the graph, the male effect looks about 2/3 the size of the female effect.

7. The study has no hypothesis for why both sexes have fewer requests approved when their gender is known, without which it seems kind of hard to speculate about the significance of the phenomenon for one gender in particular. For example, suppose that the reason revealing gender decreases acceptance rates is because corporate contributors tend to use their (gendered) real names and non-corporate contributors tend to use handles like 133T_HAXX0R. And suppose that the best people of all genders go to work at corporations, but a bigger percent of women go there than men. Then being non-gendered would be a higher sign of quality in a man than in a woman. This is obviously a silly just-so story, but my point is that without knowing why all genders show a decline after unblinding, it’s premature to speculate about why their declines are of different magnitudes – and it doesn’t take much to get a difference of 1%.

8. There’s no study-wide analysis, and no description of how many different subgroup analyses the study tried before settling on Insiders vs. Outsiders (nor how many different definitions of Insider vs. Outsider they tried). Remember, for every subgroup you try, you need to do a Bonferroni correction. This study does not do any Bonferroni corrections; given its already ambiguous confidence intervals, a proper correction would almost certainly destroy the finding. (A quick sketch of the arithmetic follows this list.)

9. We still have that result from before that women’s changes are larger and less likely to serve immediate project needs, both of which make them less likely to be accepted. No attempt was made to control for this.
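
To make the Bonferroni point concrete, the correction is just division – with m subgroup comparisons, each individual test has to clear a threshold of alpha/m rather than alpha (the number of comparisons below is hypothetical, since the paper doesn’t report how many were tried):

var alpha = 0.05;
var comparisonsTried = 10; // hypothetical count of subgroup analyses

// Each individual test must now be significant at this stricter level.
var correctedAlpha = alpha / comparisonsTried;
console.log(correctedAlpha); // 0.005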

“Science” “journalism”, care to give a completely proportionate and reasonable response to this study?

Here’s Business Insider: Sexism Is Rampant Among Programmers On GitHub, Research Finds. “A new research report shows just how ridiculously tough it can be to be a woman programmer, especially in the very male-dominated world of open-source software….it also shows that women face a giant hurdle of “gender bias” when others assess their work. This research also helps explain the bigger problem: why so many women who do enter tech don’t stick around in it, and often move on to other industries within 10 years. Why bang your head against the wall for longer than a decade?” [EDIT: the title has since been changed]

Here’s Tech Times: Women Code Better Than Men But Only If They Hide Their Gender: “Interestingly enough, among users who were not well known in the coding community, coding suggestions from those whose profiles clearly stated that the users were women had a far lower acceptance rate than suggestions from those who did not make their gender known. What this means is that there is a bias against women in the coding world.” (Note the proportionate and reasonable use of the term “far lower acceptance rate” to refer to a female vs. male acceptance rate of, in the worst case, 63% vs. 64%.)

Here’s Vice.com: Women Are Better At Coding Than Men: “If feminism has taught us anything, it’s that almost all men are sexist. As this GitHub data shows, whether or not bros think that they view women as equals, women’s work is not being judged impartially. On the web, a vile male hive mind is running an assault mission against women in tech.”

This is normally the part at which I would ask how this study got through peer review, but luckily this time there is a very simple answer: it didn’t. If you read the study, you may notice the giant red “NOT PEER-REVIEWED” sign on the top of every page. The paper was uploaded to a pre-peer-review site asking for comments. The authors appear to be undergraduate students.

I don’t blame the authors for doing a neat study and uploading it to a website. I do blame the entire world media up to and including the BBC for swallowing it uncritically. Note that two of the three news sources above failed to report that it is not peer-reviewed.

Oh, one more thing. A commenter on the paper’s pre-print asked for a breakdown by approver gender, and the authors mentioned that “Our analysis (not in this paper — we’ve cut a lot out to keep it crisp) shows that women are harder on other women than they are on men. Men are harder on other men than they are on women.”

Depending on what this means – since it was cut out of the paper to “keep it crisp”, we can’t be sure – it sounds like the effect is mainly from women rejecting other women’s contributions, and men being pretty accepting of them. Given the way the media predictably spun this paper, it is hard for me to conceive of a level of crispness which justifies not providing this information.

So, let’s review. A non-peer-reviewed paper shows that women get more requests accepted than men. In one subgroup, unblinding gender gives women a bigger advantage; in another subgroup, unblinding gender gives men a bigger advantage. When gender is unblinded, both men and women do worse; it’s unclear if there are statistically significant differences in this regard. Only one of the study’s subgroups showed lower acceptance for women than men, and the size of the difference was 63% vs. 64%, which may or may not be statistically significant. This may or may not be related to the fact, demonstrated in the study, that women propose bigger and less immediately useful changes on average; no attempt was made to control for this. This tiny amount of discrimination against women seems to be mostly from other women, not from men.

The media uses this to conclude that “a vile male hive mind is running an assault mission against women in tech.”

Every time I say I’m nervous about the institutionalized social justice movement, people tell me that I’m crazy, that I must just be sexist and privileged, and that feminism is merely the belief that women are people, so any discomfort with it is totally beyond the pale. I would nevertheless like to re-emphasize my concerns at this point.

[EDIT: I don’t have much of a quarrel with the authors, who seem to have done an interesting study and are doing the correct thing by submitting it for peer review. I have a big quarrel with “science” “journalists” for the way they reported it. If any of the authors read this and want my peer review suggestions, I would recommend:

1. Report gender-unblinding results for the entire population before you get into the insiders-vs.-outsiders dichotomy.
2. Give all numbers represented on graphs as actual numbers too.
3. Declare how many different subgroup groupings you tried, and do appropriate Bonferroni corrections.
4. Report the magnitude of the male drop vs. the female drop after gender-unblinding, test if they’re different, and report the test results.
5. Add the part about men being harder on men and vice versa, give numbers, and do significance tests.
6. Try to find an explanation for why both groups’ rates dropped with gender-unblinding. If you can’t, at least say so in the Discussion and propose some possibilities.
7. Fix the way you present “Women’s acceptance rates are 71.8% when they use gender neutral profiles, but drop to 62.5% when their gender is identifiable”, at the very least by adding the comparable numbers about the similar drop for men in the same sentence. Otherwise this will be the heading for every single news article about the study and nobody will acknowledge that the drop for men exists at all. This will happen anyway no matter what you do, but at least it won’t be your fault.
8. If possible, control for your finding that women’s changes are larger and less-needed and see how that affects results. If this sounds complicated, I bet you could find people here who are willing to help you.
9. Please release an anonymized version of the data; it should be okay if you delete all identifiable information.]

26 Feb 16:38

According To Morgan Stanley This Is The Biggest Threat To Deutsche Bank's Survival

by Tyler Durden

Two weeks ago, on one of the slides in a Morgan Stanley presentation, we found something which we thought was quite disturbing. According to the bank's head of EMEA research Huw van Steenis, while in Davos, he sat "next to someone in policy circles who argued that we should move quickly to a cashless economy so that we could introduce negative rates well below -1% – as they were concerned that Larry Summers' secular stagnation was indeed playing out and we would be stuck with negative rates for a decade in Europe. They felt below -1.5% depositors would start to hoard notes, leading to yet further complexities for monetary policy."

As it turns out, just like Deutsche Bank - which first warned about the dire consequences of NIRP to Europe's banks - Morgan Stanley is likewise "concerned" and for good reason.

"concerned."

With the ECB set to unveil its next set of unconventional measures at its March 10 meeting – almost certainly including even more negative rates, for the simple reason that a vast amount of monetizable govt bonds are trading with a yield below the ECB's deposit rate floor and are ineligible for purchase – it may cut said rates by anywhere from 10bps to 20bps, or even more, thereby sending those same bond yields plunging ever further into negative territory.

Morgan Stanley warns that any substantial rate cut by the ECB will only make matters worse. As it says, "Beyond a 10-20bp ECB Deposit Rate Cut, We Believe Impacts on Earnings Could Be Exponential."
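
The first-order arithmetic behind that warning is simple: banks pay the deposit rate on excess reserves parked at the central bank and can't easily pass negative rates on to retail depositors, so cuts come straight out of earnings – and the squeeze worsens the deeper rates go. A hypothetical illustration (my numbers, not Morgan Stanley's):

// Hypothetical: a bank with EUR 100bn of excess reserves at the ECB.
var excessReserves = 100e9;

[-0.003, -0.004, -0.005].forEach(function(rate) {
  var annualCost = -excessReserves * rate; // negative rate = direct cost
  console.log((rate * 100).toFixed(1) + "% deposit rate: EUR " +
              (annualCost / 1e6).toFixed(0) + "m per year");
});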

Which brings us to the punchline: according to Morgan Stanley, a fellow bank, the biggest threat to its largest European competitor, Deutsche Bank, is not its unquantified commodity loan exposures, nor its just as opaque exposure to China, nor its massive derivatives book, not even its culture of rampant corruption and crime which has resulted in constant top management changes over the past several years, but the deflationary challenges to profitability - specifically, "Risks to Trading/Markets Revenues and Due to Negative Rates" imposed by none other than the European Central Bank!

In other words, according to Morgan Stanley the biggest threat to the profitability, viability and outright existence of the most leveraged commercial bank in the world is none other than ECB president Mario Draghi...

... who will almost certainly unveil even more negative rates in two weeks time, and in doing so will unleash another round of selling in European bank securities, which will further tighten financial conditions, which may force even more "desperate" ECB intervention and so forth in a feedback loop, for the simple reason that Draghi appears to not realize that just like Kuroda, he himself is the cause of asset volatility and European bank instability.

Which, incidentally, is precisely what Bundesbank president (and ECB member) Jens Weidmann warned against. As the WSJ reports, Weidmann expressed reservations Wednesday about further expansionary monetary policy to combat very low rates of inflation in the currency bloc.

In prepared remarks presenting the annual report of the Deutsche Bundesbank, which he heads, Mr. Weidmann said "it would be dangerous to simply ignore" the longer-term risks and side effects of loosening already highly accommodative policy.

As the WSJ writes, "the comments from perhaps the ECB's most outspoken critic of very accommodative policy provide further evidence that ECB head Mario Draghi may have a tough time garnering unanimity for any effort to further expand the central bank's accommodative monetary policy."

Still, Draghi may well get his wishes: after all, despite the ongoing conflict between the two central bankers, so far Draghi has gotten absolutely everything he has wanted, from QE to NIRP, over Weidmann's loud objections. The WSJ further adds that the ECB is expected to cut its deposit rate further into negative territory beyond the minus 0.3% where it sits now, as well as expanding its bond buying program beyond the EUR60 billion a month of mostly government bonds that it has purchased since last March.

The two are linked: for the ECB to expand QE - now that the NIRP genie has been released - it will have to cut rates, or else it will run into liquidity limitations and an inability to procure the desired bonds.

Worse, due to the central bank's voting rotation system, Mr. Weidmann won't have a vote at the March meeting.

On a separate topic, Weidmann also criticized efforts to abolish the EUR500 bank note and presented a 22-page report defending the use of cash, pushing back against proposals that German policy makers worry could be used to cut interest rates more deeply in the euro area.

Specifically, Weidmann warned that abolishing the EUR500 note could damage citizens' confidence in the single currency. "If we tell citizens the bank notes they currently hold are not valid, that would impact trust," Mr. Weidmann said.

Weidmann also ridiculed the "fantasy" that cash might be abolished altogether to make it easier for central banks to cut interest rates further. "In my view this would be the false, disproportionate answer to the challenges," he said.

Well, that's what many said about the ECB's QE, which many years ago was, correctly, seen as monetary financing and thus banned by Article 123. Everyone knows what happened next.

But going back to NIRP, the question then becomes: is Mario Draghi really so naive and confused as not to realize that the more negative, Net Interest Margin-crushing rates and conditions he unrolls, the worse Deutsche Bank's (and all other European banks') fate will be?

And then this question in turn is transformed into a more sinister one: since Draghi most certainly does understand the impact on bank profitability of excessively negative rates (as virtually every bank from DB to MS to BofA has warned in recent weeks), is he engaging in this escalation of negative rates with the intent of harming Deutsche Bank?

Because should Draghi cut rates on March 10 and send DB's stock price to a fresh record low, and its CDS to all-time highs, the comparisons to Lehman will get ever louder, at which point the self-fulfilling prophecy may become a reality.

And not just due to those two factors. Recall that in the bankruptcy of Lehman it was a former Goldmanite who ultimately decided to pull the plug on the bank - US Treasury Secretary Hank Paulson. By doing so, he unleashed the biggest bailout of the banking system in history, and what may have been the most lucrative period of all time for Goldman bankers as well as the greatest wealth transfer in human history.

That period is ending, so it may be time for another wholesale, global bailout. Just one thing is missing: the "next Lehman" sacrificial lamb whose failure will be the catalyst for the next mega bailout of everyone else who will survive. And since this bailout will involve the paradropping (both metaphorical and literal) of trillions in paper or digital money, central banks may just get the inflation they so desire.

As for the former Goldmanite pulling the switch this time, well we already know who it is: Mario Draghi.

26 Feb 16:37

3 Reasons Why All Learning is Personal



George Couros, Connected Principals, Feb 27, 2016


You remember that saying, "all politics is local?" Well, in the same way, all learning is personal. George Couros writes, "Here are three reasons that struck me upon reflection of this experience.

  1. Each individual has their own experiences and acquired knowledge. (Past)
  2. Each person creates their own connections to content based on the reason mentioned above. (Present)
  3. What interests each person biases what they are interested in learning moving forward. (Future)

Doesn’t this apply to all teaching and learning, whether it is from the curriculum, delivered in a workshop, or watched on a YouTube video?" Yes it does.

26 Feb 16:37

More On 2016 Bike Share

by Ken Ohrn

Dozens of media workers showed up today at City Hall to hear speakers and ask questions — Jerry Dobrovolny (CoV Engineering), Charles Gauthier (DVBIA), Josh Squire (CycleHop CEO).


Mr. Squire is clearly media-savvy, and in around 10 minutes of air time, mentioned “sponsorship opportunities” at least 6 times.  Staying on message, big time.


The questions revolved around these issues:  bike rental companies, helmets, costs.

On bike rental companies, I agree with many that bike share and day-rentals are different markets, with some overlap.  Bike rentals are typically a day or an afternoon long, and bike-share use is typically 30 minutes or less (it gets very expensive as an all-afternoon option).  Rentals are out-and-back; bike-share is pick-up and drop-off.  It’s similar to the difference between AVIS car rentals and car-share.

Anything that stimulates bike usage in Vancouver is good for bike sales, and may improve bike rentals to some extent.

The City has discussed this extensively with bike rental shop owners, and will not site bike-share corrals inside a yet-to-be-sized buffer zone around such shops.

Helmets.  A bike-share bike will come with a new type of helmet, designed for the purpose by Bell helmets.  CycleHop pledges to service these helmets daily and provide a liner for those worried about sanitation. My opinion is that this solution is a bit weak, but apparently we should not pin any hopes on a bike-share helmet exemption (or outright repeal) by the Provincial Gov’t.

The bikes are stable and heavy. I took one for a spin, and within 10 seconds had the “feel” of it.  Its geometry and handling are quite different from my lovely Brodie’s, but not a problem.  Bike features:  chain drive, 7-speed internal rear hub, step-through frame, hub brakes, dynamo-driven front & rear lights, chain guard, skirt shield on rear wheel, kick-stand, onboard electronics.

Overall, I’m glad the CoV took its time to get this right, to learn from other cities, and for the “smart bike” option to become available. I agree with others that, like bike lanes and the Burrard Bridge, initial fear and skepticism will fade, and the bike-share system will become just another transportation option in Vancouver.


26 Feb 16:36

“Too Many Good Things” – How can Price Tags improve its format?

by pricetags

An Item from Ian:

I was talking with Sam Sullivan at Michael Green’s party. And we were discussing how good things sometimes appear with such frequency on Price Tags that other good content gets pushed down and disappears without having enough time at the top to have impact.

Sam was mentioning he has switched to a digest version, which I didn’t know WordPress could do. Which would provide a bit of ‘one stop shopping’ possibility. But I wonder if a bit of that would be good for the site as well?  It’s hard not to shuffle all the previous posts down to Neverland, especially as they sometimes come fast and furious.

Have you thought about a change of template? Maybe picking up some newspaper type formatting to allow multiple headlines above the fold?

I know just enough about websites to know this isn’t a trivial or easy endeavour, just wanting to pose the thought.

.

PT: It’s certainly time for Price Tags to be refreshed.  I’ve been using this format for several years, not to mention the title image.  (If anyone has a high-density horizontal image suitably Vancouver-ish, send it along or provide a link).

I’m not sure how one does a digest version of a WordPress blog.  (Explanation welcome.)

But especially, if anyone experienced in WordPress has a format they would recommend, I’d certainly welcome a change. 

Plus any other thoughts regular readers have that could improve readability.  Add to Comments.


26 Feb 16:26

Six Months an Apprentice

by Reverend

Since late July or early August I have been focusing a lot more of my time on wrapping my head around the Reclaim Hosting server infrastructure, as well as providing support to folks using it. It’s been a welcome deviation from the career trajectory I’ve been on for the last ten years. You see, when you get into edtech in higher ed, you often have a starting position of something like instructional technologist, learning designer, or media specialist. You can then move on to something like coordinator, manager, assistant director, or director. In some instances, and with the right credentials, there may even be more—some kind of assistant VP or associate provost gig. But in the end, it’s all pretty much middle-management admin hell after instructional technologist—the only truly pure job. And, as you are pushed along this pre-defined career track, it’s easy to move further and further away from the actual work and deeper and deeper into a culture of meetings, administrative trivia, and managing others. Don’t get me wrong, some folks like this stuff and are good at it, but that’s only because they have been institutionally lobotomized :)

Anyway, I got off that carousel of denial in September, and one of the nicest things has been that I could focus more of my energy on the actual work. Not managing up, or managing expectations, or managing for success, or any of that nonsense. I avoided this for about as long as I could at UMW, but the writing was on the wall: you can’t remain outside of the expectations of institutional conformity forever if you want to be a house cat. I remember during the EDUPUNK insurgency folks called me out as a hypocrite for proclaiming some vision of “radical” edtech (why is shunning clunky corporate tools radical?) while working as part of an institution and being insulated by the freedom and security of higher ed. But for me any sense of security and freedom in higher ed increasingly vanished over my ten-year career. By leaving I was actually able to return to doing the work I love without feeling the psychic drain of being pulled in a million different directions.

So, six months later (and with fewer friends :) ) I’m in the process of re-focusing my work on how to build, scale, and support a web hosting infrastructure alongside Tim Owens, who has been the best teacher you can imagine. Neither of us has any experience in running a business, so we have been very practical in our approach. We decided early on to be completely free of investment money, loans, or debt of any kind, and we have been able to do that by running the business lean. Our main overhead is server costs, licensing fees for cPanel and WHMCS, and, over the last year, salaries. This year we have started building a reserve so we can avoid the misguided logic of constant growth and scale as the irrefutable recipe for success. We’re not trying to become hosting magnates; rather, we want to be a sustainable, independent edtech shop.  Indie EdTech #4life.

Getting to this point has been no mystery. We have a network of folks that supported us as we got started. There is no question they’ve been, and continue to be, crucial to our success. But that can only go so far when you’re providing a service folks depend on. So, we approached this practically as well: simply provide stellar service. And I think we have done that so far.

As yesterday can attest, everything isn’t always perfect in Reclaimland, and we have our issues time and again. But we communicate with our community regularly about any and all problems, as well as our ideas, aspirations, and principles. We understand the work we do as part of a broader conversation, which means it is not alienated and divorced from what people are trying to do conceptually.  I think it’s apparent when you have a technical issue that we’re going to solve it fast, but folks also recognize that every communication is part of a larger conversation around empowering the edtech community to reclaim the web for teaching and learning—something I remain passionate about even if I’m a bit burnt out on institutional culture.

All that said, I am very much an apprentice in this whole enterprise. I’ve been learning tons about cPanel server administration since August. I have set up numerous Domain of One’s Own packages; experimented with getting self-hosted versions of Sandstorm spun up; and migrated institutional servers over to Linode. It’s been really rewarding to have the time and energy to get schooled in what should be one of the core competencies of edtech: agile infrastructure. I’m not necessarily there yet, but I’m closer than I was six months ago, and I’m not letting up anytime soon.

26 Feb 16:26

The Underwater World of Networks

by Reverend

Image credit: TeleGeography’s Submarine Cable Map

Cross-posted at the CUNY Academic Commons News site.

While exploring CUNY’s Academic Commons, I came across an announcement on the CUNY Digital Humanities Initiative blog about an upcoming talk at the Grad Center:

At the Edge of the Network: A Talk with Nicole Starosielski


The post immediately caught my attention because I had seen announcements of Nicole Starosielski’s book The Undersea Network on Twitter, and her work inhabits one of the more fascinating fields of interdisciplinary inquiry, sometimes referred to as Media Archaeology. I had my own exposure to how cool this field can be when I visited the Media Archaeology Lab at CU Boulder last spring. So when I saw that the Grad Center was hosting a talk on Professor Starosielski’s research into the underwater infrastructure that undergirds the digital network we have come to take for granted, I was a bit jealous to be currently on the other side of the Atlantic. I searched out resources about the book, and came across this hour-long interview by Carla Nappi in which Starosielski discusses her work in some detail.

The interview does an excellent job of embedding the cultural and colonial significance of these cable networks within specific environments and ecologies. She explores the relationship between two ostensibly unrelated histories, telecommunications and fishing, not to mention the ways in which the transoceanic history of colonialism maps onto the 21st century network of underwater cables that drives the “cloud.”  The networks of power that can be traced through this material underwater network provide for a truly compelling and relevant media archaeology for the world we live in. The interview also pointed me to an interactive site, Surface.in, that the authors created as a companion to the book; or, as Starosielski suggests, the book is a companion to the website. The site provides an interactive map of this underwater network, giving the curious user various points of entry into this submarine network.

The Undersea Network‘s companion site: Surface.in

This is a fascinating topic, and I have ordered the book, because this kind of broader inquiry into the importance of the materiality of networks for edtech (something I had the good fortune of hearing Audrey Watters speak about brilliantly in Barcelona) is essential to a broader understanding of the work we do. I went searching for other talks or interviews about this topic and book on the web and came up with nothing else. So, I hope the folks at CUNY’s Digital Humanities Initiative consider getting and posting a recording of the talk and discussion, because they would have at least one viewer/listener in Italy. That’s an immediate global audience :)

26 Feb 16:25

The case for an embeddable Gecko

by Chris Lord

Strap yourself in; this is a long post. It should be easy to skim, but the history may be interesting to some. I would like to make the point that, for a web rendering engine, being embeddable is a huge opportunity; that Gecko not being easily embeddable has meant we’ve missed several opportunities over the last few years; and that it would still be advantageous to make Gecko embeddable.

What?

Embedding Gecko means making it easy to use Gecko as a rendering engine in an arbitrary 3rd party application on any supported platform, and maintaining that support. An embeddable Gecko should impose very few constraints on the embedding application and should not include unnecessary resources. (A purely hypothetical sketch of what such an API might look like follows the examples below.)

Examples

  • A 3rd party browser with a native UI
  • A game’s embedded user manual
  • OAuth authentication UI
  • A web application
  • ???
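
To make “embeddable” concrete, here is a sketch of the rough shape such an API could take, loosely in the style of engine-embedding APIs like CEF’s. To be clear, this is purely hypothetical pseudo-API: none of these types or functions exist in Gecko, which is rather the point of this post.

    // Hypothetical sketch of a minimal embedding API. Invented for
    // illustration only; nothing here exists in Gecko today.
    #include <memory>
    #include <string>

    namespace gecko_embed {  // invented namespace

    struct Settings {
        std::string profile_dir;       // where the engine may write its data
        bool enable_javascript = true;
    };

    class WebView {
    public:
        virtual ~WebView() = default;
        virtual void LoadURL(const std::string& url) = 0;
        virtual void Resize(int width, int height) = 0;
    };

    class Engine {
    public:
        // Initialise the engine once per embedding process.
        static std::unique_ptr<Engine> Create(const Settings& settings);
        virtual ~Engine() = default;
        // Bind a view to a native window handle supplied by the embedder
        // (HWND, NSView*, X11 Window, ...).
        virtual std::unique_ptr<WebView> CreateView(void* native_window) = 0;
    };

    }  // namespace gecko_embed

The key property is the inversion of control: the embedding application owns the windows and the event loop, and the engine renders into whatever it is handed, imposing as little as possible on the host.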

Why?

It’s hard to predict what the next technology trend will be, but there’s a strong likelihood it’ll involve the web, and a possibility it may not come from a company/group/individual with an existing web rendering engine or particular allegiance. It’s important for the health of the web and for Mozilla’s continued existence that there be multiple implementations of web standards, and that there be real competition and a balanced share of users of the various available engines.

Many technologies have emerged over the last decade or so that have incorporated web rendering or web technologies and that could have leveraged Gecko:

(2007) iPhone: Instead of using an existing engine, Apple forked KHTML in 2002 and eventually created WebKit. They did investigate Gecko as an alternative, but forking another engine with a cleaner code-base ended up being a more viable route. Several rival companies were also interested in and investing in embeddable Gecko (primarily Nokia and Intel). WebKit would go on to be one of the core pieces of the first iPhone release, which included a better mobile browser than had ever been seen previously.

(2008) Chrome: Google released a WebKit-based browser that would eventually go on to eat a large part of Firefox’s user base. Chrome was initially praised for its speed and lightness, but much of that was down to its multi-process architecture, something made possible by WebKit having a well thought-out embedding capability and API.

(2008) Android: Android used WebKit for its built-in browser and later for its built-in web view. In recent times, it has switched to Chromium, showing Google isn’t averse to switching the platform to a different or better technology, and that a better embedding story can benefit a platform (Android’s built-in web view can now be updated outside of the main OS, and this may well be partly thanks to Chromium’s embedding architecture). Given the quality of Android’s initial WebKit browser and WebView (which was, frankly, awful until later revisions of Android Honeycomb, and arguably remained awful until the switch to Chromium), it’s not much of a leap to think they may have considered Gecko were it easily available.

(2009) WebOS: Nothing came of this in the end, but it perhaps signalled the direction of things to come. WebOS survived and went on to be the core of LG’s Smart TV, one of the very few real competitors in that market. Perhaps if Gecko had been readily embeddable at this point, we would have had a large head start on FirefoxOS?

(2009) Samsung Smart TV: Also available in various other guises since 2007, Samsung’s Smart TV is certainly the most popular smart TV platform currently available. It appears Samsung built this from scratch in-house, but it includes many open-source projects. It’s highly likely that they would have considered a Gecko-based browser if it were possible and available.

(2011) PhantomJS: PhantomJS is a headless, scriptable browser, useful for testing site behaviour and performance. It’s used by several large companies, including Twitter, LinkedIn and Netflix. Had Gecko been more easily embeddable, such a product may well have been based on it, and many of the sites that use PhantomJS for testing might then have had better rendering and performance characteristics in Gecko-based browsers. The demand for a Gecko-based alternative is high enough that a similar project, SlimerJS, based on Gecko, was developed and released in 2013. Due to Gecko’s embedding deficiencies, though, SlimerJS is not truly headless.

(2011) WIMM One: The first truly capable smart-watch, which generated a large buzz when initially released. WIMM was based on a highly-customised version of Android, and ran software that was compatible with Android, iOS and BlackBerryOS. Although it never progressed past the development-kit stage, WIMM was bought by Google in 2012. It is highly likely that WIMM’s work forms the base of the Android Wear platform, released in 2014. Had something like WebOS been open, available and based on Gecko, it’s not outside the realm of possibility that this could have been Gecko-based.

(2013) Blink: Google decided to fork WebKit to better build for its own uses. Blink/Chromium quickly became the favoured rendering engine for embedding. Google were not afraid to introduce possible incompatibility with WebKit, but also realised that embedding is an important feature to maintain.

(2014) Android Wear: Android specialised to run on watch hardware. Smart watches have yet to take off, and possibly never will (though Pebble seem to be doing alright, and every major consumer tech product company has launched one), but this is yet another area where Gecko/Mozilla have no presence. FirefoxOS might have led us to an easy presence in this area, but has now been largely discontinued.

(2014) Atom/Electron: GitHub open-sourced Atom, its web-based text editor, built on a home-grown platform of Node.js and Chromium that it later called Electron. Since then, several large and very successful projects have been built on top of Electron, including Slack and Visual Studio Code. It’s highly likely that such diverse use of Chromium feeds back into its testing and development, making it a more robust and performant engine and, importantly, a more widely used one.

(2016) Brave: Former Mozilla co-founder and CTO Brendan Eich heads a company that makes a new browser with the selling point of blocking ads and tracking by default, and doing as much as possible to protect user privacy and agency without breaking the web. Said browser is based on Chromium and, on iOS, is a fork of Mozilla’s own WebKit-based Firefox browser. Brendan says they started with Gecko, but switched because it wasn’t capable of doing what they needed (due to an immature embedding API).

Current state of affairs

Chromium and V8 represent the state of the art in embeddable web rendering and JavaScript engines, and have wide and varied use across many platforms. This helps reinforce Chrome’s behaviour as the de facto standard and gradually eats away at the market share of competing engines.

WebKit is the only viable alternative for an embeddable web rendering engine and is still quite commonly used, but is generally viewed as a less up-to-date and less performant engine than Chromium/Blink.

SpiderMonkey is generally considered to be a very nice JavaScript engine with great support for new ECMAScript features and generally great performance, but due to a rapidly changing API/ABI, it doesn’t challenge V8 in terms of use in embedded environments. Node.js is likely the largest user of embeddable V8, and is favoured even by Mozilla employees for JavaScript-based systems development.

Gecko has limited embedding capability that is not well-documented, not well-maintained and not heavily invested in. I say this with the utmost respect for those who are working on it; this is an observation and a criticism of Mozilla’s priorities as an organisation. We have at various points in history had embedding APIs/capabilities, but we have either dropped them (gtkmozembed) or let them bit-rot (IPCLite). We do currently have an embedding widget for Android that is very limited in capability when compared to the default system WebView.

Plea

It’s not too late. It’s incredibly hard to predict where technology is going, year-to-year. It was hard to predict, prior to the iPhone, that Nokia would so spectacularly fall from the top of the market. It was hard to predict when Android was released that it would ever overtake iOS, or even more surprisingly, rival it in quality (hard, but not impossible). It was hard to predict that WebOS would form the basis of a major competing Smart TV several years later. I think the examples of our missed opportunities are also good evidence that opening yourself up to as much opportunity as possible is a good indicator of future success.

If we want to form the basis of the next big thing, it’s not enough to be experimenting in new areas. We need to enable other people to experiment in new areas using our technology. Even the largest of companies have difficulty predicting the future, or taking charge of it. This is why it’s important that we make easily-embeddable Gecko a reality, and I plead with the powers that be to make this a higher priority than it has been in the past.

26 Feb 16:22

Twitter Favorites: [drupal] Today, #Drupal6 has reached end-of-life. Find out what that could mean for you, and how to get long-term support: https://t.co/4LfKBNoSUd

Drupal @drupal
Today, #Drupal6 has reached end-of-life. Find out what that could mean for you, and how to get long-term support: ow.ly/YA7bA
26 Feb 16:22

Twitter Favorites: [OnePerfectShot] 26 years ago today at 11:30am, a federal agent arrived in a strange town to investigate a murder. #TwinPeaksDay https://t.co/PN7L6p67d1

One. Perfect. Shot. @OnePerfectShot
26 years ago today at 11:30am, a federal agent arrived in a strange town to investigate a murder. #TwinPeaksDay pic.twitter.com/PN7L6p67d1
26 Feb 16:20

Dell U3415W

34", 21:9, 3440x1440. Which is to say, pretty big and pretty sharp. Full name: Dell UltraSharp 34 Curved Monitor. It’s curved. I like it a lot.

Dell U3415W

That’s a 13" MacBook Pro off to the left. To avoid revealing AWS secrets, I filled the screen with the Ingress map and parked a browser window over it for scale, and another on the laptop.

What happened was, Amazon’s got an engineering hardware refresh rolling around, which included a choice between either this beast or dual 27" monitors. The guys who deal them out tell me the choice is breaking more or less fifty-fifty.

What’s hilarious is the blank incomprehension that prevails between the 2x27 and 1x34 camps. Neither side can understand why their colleagues, who normally seem pretty smart, would foolishly make the other choice. I’m part of it; anyone who would trade this sweeping expanse of tiny pixels for, well, anything else, is just Doing It Wrong, as far as I can tell.

You want to read a review?

Go visit PC Monitors or TFT Central or Digital Trends or Tek Everything (nice video) or Tom’s Hardware or TechRadar.

Those guys will tell you about brightness and sharpness and color fidelity and latency and that stuff. The TFT Central review in particular is awesomely detailed and includes lots of graphs and charts describing things that I don’t understand at all.

Software geek with a Mac?

This part’s for you. Key points:

  1. You can plug it into the Thunderbolt or the HDMI. Depends which side you want the wires on.

  2. I can get a browser and a Terminal and an Emacs and an IntelliJ all visible at the same time.

  3. It has lots of buttons and menus and options. I can testify that the on/off button works great, dunno about any of the rest.

  4. Its resolution is advertised as WQHD. If I get my nose up close to the screen and squint then I can see pixels, so I guess this isn’t what Apple would call “Retina”. But when my head’s in my actual working position, it has that same creamy ultra-smooth look as the 15" MBP I’m typing this on. (See the quick pixel-density arithmetic after this list.)

  5. It has a ton of connectors. I have a powered USB hub on the desk (visible in the picture above); maybe I can discard that.

  6. It has “dual 9-watt speakers”. ROFL. My computer is plugged into a Schiit Modi 2 Uber DAC, driving a nice old NAD integrated driving nice old Totem speakers.

  7. The pedestal setup is really extra-good; totally effortless getting it at just the right height, pointing just the right way.
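
A quick back-of-the-envelope check on point 4 (my arithmetic, assuming the 34" figure is the panel diagonal):

    \mathrm{PPI} = \frac{\sqrt{3440^2 + 1440^2}}{34} \approx \frac{3729}{34} \approx 110

Roughly 110 pixels per inch: about half the linear pixel density of a 15" Retina MBP (~220 ppi), which squares with pixels being visible nose-to-screen but invisible at normal working distance.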

I think we are now living in the era of the big-ass screen.

26 Feb 15:54

Vega-Lite for quick online charts

by Nathan Yau

A few years ago, Trifacta released Vega, a “visualization grammar” that lets you create charts with a JSON file. But you still have to declare a lot of things to make something standard like a bar chart. Vega-Lite, recently released by the University of Washington Interactive Data Lab, lets you take advantage of Vega with far fewer specifications.

As you might have guessed, Vega-Lite is built on top of Vega, a visualization grammar built using D3. Vega and D3 provide a lot of flexibility for custom visualization designs; however, that power comes with a cost. With Vega or D3, a basic bar chart requires dozens of lines of code and specification of low-level components such as scales and axes. In contrast, Vega-Lite is a higher-level language that simplifies the creation of common charts. In Vega-Lite, a bar chart is simply an encoding with two fields.

That sounds good to me. Gonna give it a shot.
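
To see just how terse that is, here is roughly what a complete Vega-Lite bar-chart spec looks like. This is a minimal sketch: the field names and data values are made up, and the exact schema keys may vary between Vega-Lite versions:

    {
      "data": {
        "values": [
          {"category": "A", "amount": 28},
          {"category": "B", "amount": 55},
          {"category": "C", "amount": 43}
        ]
      },
      "mark": "bar",
      "encoding": {
        "x": {"field": "category", "type": "ordinal"},
        "y": {"field": "amount", "type": "quantitative"}
      }
    }

Scales, axes, and layout are all inferred; the two encoding fields really are the whole chart definition.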

Tags: Trifacta

26 Feb 15:52

Huawei and Leica enter into a long-term partnership to reinvent smartphone photography

by Rajesh Pandey
Chinese smartphone maker Huawei has announced a partnership with Leica that will see both companies work together on reinventing smartphone photography. Leica is a very popular German camera brand known for its top-notch and expensive lenses and cameras. Continue reading →
26 Feb 15:46

Why the Samsung Galaxy S7 camera is better than Galaxy S6

by Rajesh Pandey
Ever since Samsung launched the Galaxy S7 and Galaxy S7 edge earlier this week, many people have been scratching their heads over how the 12MP camera on the handsets can actually be better than the 16MP shooter on the S6 and S6 edge. After all, how can a 16MP camera not be better than a 12MP shooter? Continue reading →
26 Feb 15:41

Cross-platform development studio Xamarin acquired by Microsoft

by Rob Attrell

Mobile apps are becoming increasingly important for businesses, and developer tools for building powerful software across platforms are vital to the modern app ecosystem. Back in 2013, Microsoft announced a partnership with Xamarin, a mobile development company helping its clients build cross-platform apps for iOS, Android and Windows.

This week, bringing this partnership to its logical conclusion, Microsoft agreed to acquire Xamarin, formalizing the companies’ work together with Visual Studio, Microsoft Azure, and Office 365. Xamarin already works with over 15,000 customers around the world, including 20 percent of Fortune 500 companies. Its clients include Coca-Cola Bottling, JetBlue, Honeywell, and Thermo Fisher, among many other high-profile businesses.

Both companies have posted information about the acquisition on their respective blogs, and promise the change will allow deeper development integration. Microsoft has also said it will provide more information about the acquisition at the Build conference at the end of March, while Xamarin has promised to discuss its future in more detail at the Xamarin Evolve conference at the end of April.

26 Feb 15:41

Twitter Canada’s new boss says country is top ten market, sees ‘tremendous growth potential’

by Igor Bonifacic

Last week, part of the MobileSyrup team got the opportunity to visit the offices of Twitter Canada to meet Rory Capern, the outpost’s new managing director.

Despite it being Capern’s second day on the job (his first day involved hosting Canada’s new Prime Minister), he did his best to answer the questions we had for him. The almost forty-minute interview saw us asking Capern about the future of Twitter, the company’s plans for making its platform safer, and why he left Google to join the company. Capern knows Twitter has a lot of work to do to make its investors, partners and, most importantly, users happy, but he’s also optimistic that the best is yet to come.

Can you talk a bit about your decision to join Twitter? You left what was for a short moment the most valuable company in the world to join, if we’re going to be honest here, a company that is failing to grow its user base.

What drove this decision for me were a few things. One, I was very involved with media partnerships [at Google Canada] over the last five years, and it became increasingly clear to me how important Twitter is to the overall media dynamic: for users, for media partners and, I would say loftily, for the world at large. The power of this tool resonated very deeply with me.

I also see a company that’s still answering important strategic questions. We’re still authoring our future. I’m a builder by nature and that was something that was very attractive to me.

The other factor was where we are in Canada right now. I’m the 37th person through the door here at Twitter Canada. We’re still small, and over the course of the last three to four years this team has been curated with some great people.

And we still have a lot of runway from a growth perspective. This was a chance for me to get back to a smaller environment. I think I was the 34th employee in the door at the Google Canada office in Toronto. Those were amazing days, and even though we’re almost 10 years in here, I’m now back in a very similar situation. We have the chance to build the story of Twitter as a whole and take it in a completely new direction.

Can you speak to how Canada factors into Twitter’s larger strategy? After your predecessor’s promotion, the position you hold now was left vacant for a significant amount of time, and, as an outsider looking in, it seems to be taking Twitter a long time to launch products like Twitter Moments in this market.

I would actually answer the question in a different way, which is that it was very clear to me through the interview process that Twitter was looking for fit. It was a very important cultural decision for them about who would take on the role from Kirstine (Stewart, Twitter’s current VP of media).

Let me take a second to say that Kirstine is an incredible person. She did amazing work here at Twitter Canada, and continues to do an incredible job as the head of North America media partnerships. I’m happy to be filling some very large shoes.

I’ll also tell you that Canada is a top ten market for Twitter, and that the company sees tremendous growth potential here. So I think the opposite is true: because Twitter Canada is so important to Twitter, they were very deliberate about filling the role.

I think the time gap is a testament to the team that’s already here. The team has had significant success, even without a managing director.

What kind of mark are you going to leave on this company?

It’s my second day, so that’s a hard question to answer.

I would say, however, that at the moment I’m focused on building my team’s dynamic. Canada has come a long way since Kirstine’s exit. There’s been some significant talent dropped into different parts of the business, and now there’s an opportunity to galvanize all these great people towards a common goal. That’s one of my objectives.

The other is to establish a really strong relationship with headquarters. This is not my first time in a role where I’m in the Canadian office of a large multi-national company.

If I’m going to talk about the impact that I’m going to have, it’s a commitment to the growth of Twitter Canada as a business, as well as a partner. I think we can continue to be a really strong partner and we can continue to strengthen both the scope of the partnerships we have and the scale of them in this market in a way that is hopefully unique and different.

How many Twitter users are there in Canada?

We connect with about 40 percent of online Canadians on a monthly basis, which translates to about 10 to 12 million users. Those are users who log in at least once a month. Also, 70 percent of our users engage with us through a mobile device.

How do you grow that number?

I think fundamentally a big part of it is letting people know the capabilities of the platform. I think Twitter users today, particularly people who have been on the platform for a number of years, know where to go if they want to find out what’s going on right now. That’s a story we need to tell to the rest of the market that doesn’t understand it.

In your opinion, what is it about Twitter that makes it inherently difficult for new users to understand?

There’s been no lack of vocal folks giving me all sorts of feedback on how to improve the platform.

I’ve bucketed the feedback into two main areas.

One, there’s a nomenclature to Twitter that’s just not intuitive. If you haven’t been on Twitter before how do you know what a hashtag is or what a handle is? It’s really not as obvious as it is to all of us who have been on the service since 2007. I have to remove myself from all the knowledge I’ve gained about the platform through my work.

The other part of it is this idea of exploring. When I sign up for Twitter, I have to ask myself all these fundamental questions about myself. What am I interested in? What do I want to follow?

We have to become great at providing cues and methods for helping people understand what they want to know in the world. It sounds silly, but it’s really true. You’re faced with this almost existential question when first signing in where the question becomes, okay, so now what do I do? And the answer is pretty much you can be connected with almost anyone and everyone in the world.

Beyond the safety council you announced recently, what is Twitter doing to make the platform safer for everyone, and in particular women? (Note: this interview was conducted just before BuzzFeed Canada writer Scaachi Koul was harassed off the platform.)

The first point I would raise, in advance of the council, is our policies. We have a very well-articulated set of rules about what’s allowed and not allowed on the platform. It’s enforcing those rules that is the challenge. However, it’s not as though Twitter is a Wild West where you can do whatever you want.

The council itself is a really interesting entity insofar as we’re getting great advice from experts all around the world. They’re giving us advice not just on product, but on process, policy, strategy and philosophy. That’s not something to be diminished. It’s starting from a place of trying to understand the entire corpus of this issue, and making sure that we’re not running blindly into the fire.

I can’t speak specifically to product feature changes that are coming, but I can tell you that we’ve been very vocal — Jack’s been very vocal — about how important this is and the type of investments we’re going to make. There are large teams and a lot of heat and light in San Francisco focused on improving the platform at a feature level and a strategic level, which is going to help with this issue.

There are going to be some announcements in this area coming in the next few months.

This interview has been edited for clarity and brevity.