Shared posts

24 Jul 22:18

Here’s what’s coming to Netflix Canada in August

by Dean Daley
Netflix

This month in Netflix Canada additions, we’re getting the classic movie Hook starring the late Robin Williams, season 3 of the Netflix Original Voltron: Legendary Defender, and the live-action film version of the highly rated anime series Death Note.

Additionally, there’s the Netflix show that many Marvel fans have been waiting for: the debut season of The Defenders, premiering August 18th.

Here’s a full list of all of the television shows and movies coming to Netflix Canada this August:

August 1st

August 2nd

  • Hook (available for download) 

August 4th

August 8th

August 10th

August 11th

August 14th

August 15th

August 16th

August 18th

August 22nd

August 23rd

August 24th

August 25th

August 29th

This is your last chance to watch

August 1st

August 2nd

August 4th 

August 10th 

August 11th

August 28th

August 31st

The post Here’s what’s coming to Netflix Canada in August appeared first on MobileSyrup.

24 Jul 22:18

Android chip demand decreases as Apple’s next iPhone launch nears

by Rose Behar
Apple

Apple competitors are adopting a “wait-and-see” approach to production ahead of the company’s upcoming iPhone launch, according to sources cited in a recent DigiTimes article.

The Taiwanese publication says its sources at back-end manufacturing houses in the semiconductor industry are reporting that chip suppliers in Apple’s supply chain have recently seen orders pick up, while chip demand from non-Apple manufacturers has been slow.

The sources don’t expect orders from those customers to rise substantially until the fourth quarter, warning of disappointing handset-chip shipments in the third quarter.

This extreme level of anticipation from Apple’s competitors is reportedly counter to industry expectations, which had Android chipset orders picking up in April and growing through August.

Fabless firms MediaTek and HiSilicon are two of the companies said to be slowing down the pace of their orders and Taiwan Semiconductor Manufacturing Company (TSMC) is seeing non-Apple customers express more interest in the foundry’s 12nm node manufacturing than the 10nm process technology that sources say Apple will use in its next iPhone.

DigiTimes’ sources report that Apple’s iPhone sales will likely sustain demand for TSMC’s 10nm mobile chips through the first quarter of 2018.

The upcoming devices — which will likely be branded the iPhone 8, iPhone 8 Plus and a special 10th anniversary edition — are expected to debut this September, though recent reports have suggested the launch may see delays.

Source: DigiTimes Via: 9to5Mac

The post Android chip demand decreases as Apple’s next iPhone launch nears appeared first on MobileSyrup.

24 Jul 22:18

Bike Spotting: What’s your favourite bike lane?

by dandy

On a glorious July afternoon we set up camp at St. George and Bloor to ask folks what their favourite bike lane is in the city.

Cassius

Adelaide. I like the separation they have with the planters on the west side of the city. It provides a lot of space where cyclists can ride side by side and pass each other if need be. Also drivers seem to respect no parking in the lane.


Cheryl

I’m quite partial to Harbord Street. It’s nice and winding.


Phil

I like Beverley. Even though it’s not a true bike lane (it’s sharrows), it’s nice and smooth. I like Sherbourne too.


Tenzin

I like the one on Christie (the one that starts at Dupont); it’s really safe.

Andrea

Wellesley and Harbord. Just the fact that it’s not super bumpy, it’s got the partial dividers, and it’s raised a little bit.


Vida

I find bike lanes depend on the time of day. For example, I love the Davenport lane really early in the morning or late at night; the rest of the time there's a lot of traffic, but when there isn't I can zoom down and get where I'm going really fast. Other than that I guess I really like Shaw Street. It's a major thoroughfare that connects me to a lot of things from where I live. Also it's a residential street, so it's always very quiet and pleasant to cycle down.

Thanks to Steam Whistle for supporting bike lanes and independent media outlets like dandyhorse!

Related Articles on dandyhorsemagazine.com

Bike Spotting: How do you feel riding on the Bloor bike lane?

Bike Spotting: Should the Bloor bike lane be made permanent?

http://dandyhorsemagazine.com/blog/2017/03/28/bike-spotting-on-bloor-do-you-shop-on-bloor/

24 Jul 22:18

Firefox Developer Edition 55 Beta 11 Testday Results

by Petruta Rasa

Hello!

As you may already know, last Friday – July 21st – we held a new Testday event for Firefox Developer Edition 55 Beta 11.

Thank you all for helping us make Mozilla a better place – Ilse Macias, Athira Appu, Iryna Thompson.

From the India team: Fahima Zulfath A, Nagarajan .R, AbiramiSD, Baranitharaan, Bharathvaj, Surentharan.R.A, R.Krithika Sowbarnika, M.ponmurugesh.

From the Bangladesh team: Maruf Rahman, Sajib Hawee, Towkir Ahmed, Iftekher Alam, Tanvir Rahman, Md. Raihan Ali, Sazzad Ehan, Tanvir Mazharul, Md Maruf Hasan Hridoy, Saheda Reza Antora, Anika Alam Raha, Taseenul Hoque Bappi.

Results:

– several test cases executed for the Screenshots, Simplify Page and Shutdown Video Decoder features;

– 7 new logged bugs: 1383397, 1383403, 1383410, 1383102, 1383021, #3196, #3177

– 3 bugs verified: 1061823, 1357915, 1381692

Thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

24 Jul 22:18

‘This is a human life’: Ranger medic recalls treating Omar Khadr

mkalus shared this story from The Globe and Mail - National.

For years the battle-hardened and decorated American veteran wrestled with his conscience, with whether he’d done the right thing in saving the life of Omar Khadr, seen by many as a terrorist who profited from his crimes.

Now, watching the furor over the government’s $10.5-million payout to Khadr from afar, Donnie Bumanglag wants to tell his story, offer a perspective born of bitter experience — one he admits may not be popular with many Canadians, or even some of his own former comrades in arms.

Bumanglag, 36, of Lompoc, Calif., has spent years coming to terms with his former life as an elite airborne medic supporting U.S. special forces during three missions to Afghanistan and Iraq. He’s been haunted by flashbacks, frequently thrown back to that time in the summer of 2002, when he spent hours in the back of a helicopter frantically working on Khadr, then 15 years old and at the very edge of death.


“This is a human life. This is war. This is something that most people can’t fathom, and they want to be real quick to give an opinion just because it makes them feel good about themselves,” Bumanglag said. “(But) there’s more to this story than just talking points.”

The following account is based on interviews Bumanglag gave to The Canadian Press, as well as on a recent podcast he co-hosts in which he talks about saving Khadr.

***

Doc Buma, as the 21-year-old Ranger medic was known, was looking forward to leaving the remote area of Afghanistan in which he had been operating for more than a month and heading to Bagram for a shower and some downtime before redeploying to Kandahar.

Instead, as they flew toward Bagram that day in July 2002, a distress call came in. The MH-53 helicopter veered toward Khost and an encounter that would stay with him for years.

Edmund Sealey, then the Rangers platoon sergeant, remembers the call coming in with orders to divert and pick up an “enemy fighter” who had been shot.

“I was on the aircraft. We picked up that casualty in a firefight,” Sealey, 47, now of Columbus, Ga., said from Afghanistan where he still works as a contractor. “With Buma being a Ranger medic, he’s going to assist as soon as you get on board, enemy or friendly, it doesn’t matter.”

With the chopper gunners providing covering fire, they landed in a field. Sealey led the way, Bumanglag behind him, as they threaded their way through a suspected minefield, down a road, and connected with a group of U.S. special forces soldiers.

On what appeared to be a wooden door lay the wounded enemy fighter, shot twice by one of the elite Delta forces. The soldiers had found the casualty barely alive in a compound the Americans had pounded to rubble during a massive assault. One of their own, Sgt. Chris Speer, had been fatally hit by a grenade, and another, Layne Morris, blinded in one eye. It was apparent to the incoming medic that the Delta soldiers were in “some pretty severe distress” over the loss of their comrade.

“There’s a look on somebody’s face when the whole world went to shit 10 minutes ago and it’s too much to process,” Bumanglag says.

As he recalls, the soldiers gave him bare-bones biographical data on the casualty: The fighter had killed Speer. He was a Canadian who had been Osama bin Laden’s “houseboy.” They also told him to keep the high-value detainee alive because he would be a vital source of information and passed him off.

Bumanglag was now charged with saving Khadr, son of a high-ranking member of al-Qaida. He didn’t know Khadr was 15 years old, but his youth struck him.

“I don’t know if I can call him a little kid but he sure looked little to me. He’s 80 pounds or something. He’s a little guy who’s on a door, basically,” Bumanglag says.

They moved the patient up the ramp and the chopper took off. The medic immediately began working to save the boy, who was covered in blood and sand.

“Omar, with gunshot wounds and flex cuffs like an animal had been shot, didn’t look human,” Bumanglag recalls. “But moving in closer and working on him as a patient and seeing the facial features and seeing the skin pigmentation, those images always stuck with me.”

Khadr, it turned out, bore a striking resemblance to one of Bumanglag’s cousins, which bothered the young medic then, and for years after.

“All I seen was a kid that looks like a kid that I knew.”

***

As the chopper bobbed and weaved toward Bagram, Doc Buma worked to stabilize his disoriented, barely conscious patient, who was writhing and moaning in pain. At the other soldiers’ insistence, Khadr’s hands remained handcuffed behind his back out of concern he might turn violent.

Bumanglag’s main task was to deal with Khadr’s two gaping bullet exit wounds on his chest. His head raced with thoughts about whether he should save the life of this “terrorist,” whether he’d have enough medical supplies for his own guys should something happen. He even pondered pushing the enemy fighter out of the chopper and being done with it.

“He’s rocking his body around everywhere,” he says. “I took it as aggression. You get this idea that everybody is jihad and they’re going to fight to the death.”

Then there was his ego, he admits: the notion that saving this captive would earn him praise, would show he had what it took. So he kept working, trying to staunch the bleeding.

“My mission, my job was just to save him, keep him alive. There was no politics in it then. I was a young Ranger and this was my chance,” Bumanglag says. “I worked on him for over two hours in the back of a helicopter as the sun went down. At the end, I’m working under finger light.”

He kept working, and Khadr kept living, not saying anything, just making noises.

“His body indicated that he was a pretty brave guy. He fought for his life just as much as we fought to save him,” Bumanglag says. “Some people have a will to live and some people don’t. He definitely did.”

They finally touched down at Bagram.

“We plugged all the holes and we tried to keep things viable,” he says. “I pass him off and I don’t know whether he’s going to live or die.”

What he did know was that Khadr hadn’t died on his watch and it was therefore mission accomplished — one for which he would later be commended by his superiors. It would take another year or so before Bumanglag learned that Khadr had survived.

***

Omar Khadr, born in September 1986 in Toronto, spent several months recovering from his wounds at Bagram, where, from the moment he was conscious and able to speak, he underwent what were, by most accounts, some of the harshest interrogations the Americans had devised in the War on Terror.

A few months later, in October 2002, he was transferred to Guantanamo Bay. He had just turned 16.

It was in his early days at the infamous U.S. military prison in Cuba that Canadian intelligence officers went down to interrogate him. The Americans made the interviews conditional on having the information he provided passed on to them. The Canadians also knew the teen had been subjected to the “frequent flyer program,” a brutal process of sleep deprivation designed to soften him up.

Video would surface years later of a weeping teen, now realizing the Canadian agents weren’t there to help him, whimpering for his mother.

Khadr ultimately pleaded guilty to five war crimes in 2010 before a widely discredited military commission. He later disavowed his confession to having killed Speer, saying it was the only way the Americans would return him to Canada, which happened in 2012.

The Supreme Court of Canada ruled that the federal government had violated Khadr’s rights. The ruling underpinned the recent settlement of his lawsuit in which Ottawa apologized to him and, sources said, paid him $10.5 million.

“If you say you’d go through what he went through for $10 million, you’re out of your mind, and that’s the truth,” Bumanglag says.

Khadr has said he no longer remembers the firefight and would not comment on Bumanglag’s account.

***

Doc Buma returned to his native California and left the military in 2003. He became a police officer, working anti-narcotics, for almost 10 years. Ultimately, the flashbacks and the post-traumatic stress bested him, and he retired as a cop about five years ago. He studied educational psychology, he said, as part of trying to sort himself out.

He took up co-hosting a podcast, Sick Call, in which he and a fellow vet talk about a variety of issues, including topics related to the military and law enforcement. In one recent episode, he talks about Khadr. It’s part educating others, part therapy for himself, he says.

The years since his days in the military, when he was ready to drop everything at a moment’s notice and heed the call of duty wherever it took him, he says, have afforded him time to grow up, to gain some perspective on war, on his life as a soldier, on demonizing people he has never met or with whom he has no personal quarrel.

“I’ve been on the worst combat missions. I bought into the ideology. Now it’s time for reflection,” he says.

Time and again, he is careful to make clear he intends no disrespect to Speer’s relatives or to Morris and empathizes with what they have lost.

“Omar lost his eye, too. I don’t know how much more symbolic that can be.”

At the same time, he is clear that Speer and Morris were grown men who had signed on the line to become elite professional soldiers, knowing the risks of their jobs.

On the other hand, Bumanglag also makes it clear he empathizes with the young Canadian who was taken by his father to another country and thrown into an ideologically motivated war over which he had no control.

As a married father of four, Bumanglag says it’s naive to believe Khadr could somehow have just walked away from the compound his father had sent him to. More to the point, he says, had he found himself as Khadr did that fateful day in July — under heavy bombardment, with the fighting men dead and the enemy closing in for the kill — he likely would not have hesitated to throw a grenade.

“What happens if the shoe is on the other foot? This is the scenario that I’ve played in my head,” Bumanglag says, his mind turning to those who are furious at the Canadian government’s settlement with Khadr.

“They can be upset but the reality is that they don’t understand the full story. I don’t think any of us do.”

Doc Buma says he no longer frets that he should have let Khadr die.

“Everybody may hate him but I’m glad I saved his life,” he says. “It just wasn’t his time then.”

24 Jul 22:17

A Privacy Choice

by rands

I departed Safari several years ago because of performance and stability issues. Too often I found myself in a situation where Safari was wedged or just plain slow. As the majority of my time is spent staring at a browser, this was unacceptable, so I moved to Chrome. Yeah, the typography rendering wasn’t as good, and my bookmark bar looked like a circus, but it launched quickly and remained fast.

No major complaints. Chrome is frequently updated, the keyboard support is just fine, the browser is fast, and I can’t remember the last time the application crashed. Both bookmarks and extensions are stored in the cloud, so my settings gracefully followed me between machines. No complaints…

… but a lingering worry.

Apple’s 2017 WWDC was dense with announcements, but one feature stuck in my head: disabling auto-play videos with audio. There is no better way to destroy flow than having a random video start playing audio as I’m digesting the state of the world via Feedly. Chrome, like Safari, has a rich set of developer-friendly extensions to augment browser functionality, so I found one that disabled autoplay of videos. The problem? The plug-in I selected was garbage. It did an excellent job of blocking auto-play videos, but there are videos you do want to auto-play, like Netflix or YouTube, and the process of whitelisting these “approved” sites was error-prone and laborious.

My ears perked up when Craig Federighi announced that Safari would block auto-playing videos with audio. I’ve yet to see the feature work, but with the feature engineered into the browser, I am expecting thoughtfulness, with low-friction affordances for allowing sites where I want autoplay.

Disabling autoplay was the impetus for considering a migration back to Safari, but the more I considered the situation, the more it became a required migration. To understand my reasoning, consider why autoplay videos with sound exist at all. Why does such a user-hostile feature exist? It starts with a difference of perspective.

There are legitimate and moral businesses who just love interrupting your flow with their message because that is how they earn money. They are rewarded as a function of how effectively they can interrupt you. They do not see the harm in this, and they do not call it interruption: they call it good business.

This is how it started. There was a meeting years ago at one of these businesses when Boris in customer acquisition suggested, “Hey, let’s just auto-play a video with sound and see what happens?” and every single other person in the meeting laughed loudly at him. They said what you and I know, “People are going to hate it. I’d hate it.”

Boris stood his ground and suggested, “Hold it, hold it. Let’s just test one page and one video for a month and see what the data says.” The rest of the team begrudgingly agreed because the company had this value painted on the walls that read, “Data wins arguments.”

Not surprisingly, the resulting data was incredible. Both awareness and other measures were way up. No one bothered to run an NPS survey, and no one bothered to look at the complaints to customer support because the data on hand were both blindingly delightful… and profitable.

It is this difference in perspective that has me back on Safari. It is not that I believe Google is evil (they aren’t) or that their browser is substandard (it’s exceptional). It’s that I’m certain there are Boris-like meetings going on all the time, and they are packed with intelligent, rational, and well-intentioned humans whose perspective is, “We need to run a healthy business, and our business is advertising.”

Google has a compelling answer to not just autoplay ads with audio, but also pop-up ads and interstitial ads that obscure the whole page. They’re planning on banning them via the recommendations of the Coalition for Better Ads, not only because people hate them, but because this hate is driving us to install ad-blocking software that doesn’t just block these heinous ads; it also blocks the tracking that allows third-party sites to track user behavior.1 The latter is an advertiser’s bread and butter.

Google’s business is a function of their ability to convince other businesses that they have the most efficient means of delivering relevant Ad X to Targeted Human Y so that they’ll purchase Product Z. I have zero issue with Google building a multi-bajillion business convincing the Planet Earth that they are best in class in ad delivery efficiency (they are), but I am not convinced that Google’s interests align with mine.

Google’s compelling answer to ban heinous ads does not include affordances to block tracking, and I wouldn’t expect them to because they are an advertising company.2 That’d be like requiring Apple to legally embrace shitty typography. No way. Apple’s business is design, and that means Apple will prioritize low-friction useful and approachable elegance no matter what.

Google’s privacy policy is vast and worth a read. There is an army of privacy-minded humans in the Google privacy organization, and I trust they are trying to do the right thing about privacy. It is not this team’s intent where I have concerns. It is not the meetings they are invited to that I care about; it’s the meetings where privacy is not represented.

There is no nefarious reason they weren’t invited to this hypothetical meeting. It’s just a quick strategy meeting on a topic seemingly unrelated to privacy, and it results in an inconsequential decision that creates a privacy crack. It’s a tiny little space that no one is going to notice for years, until someone somewhere else on the planet finds a clever way to use that crack for nefarious or semi-nefarious for-profit activities that also violate your privacy.

And I’m not even worried about this one meeting. I’m worried about all of the meetings and the collective compounding impact of all the small seemingly inconsequential decisions in a company where the business is selling advertising versus a company where the business is selling product.

I have great respect for the engineering teams building Safari and Chrome. They are building a window into the Internet, and the Internet is a hostile, for-profit, lying beast that actively fights the good intentions and well-debated choices of engineers and designers. Tough gig.3

But I get a choice.

I’m fine with advertising. It funds media and services I trust and depend upon. I appreciate ads that deliver value to me. I understand the more a company knows about me, the better they can deliver me ads I care about, but if that is their core business, I will forever question their motivations regarding the ethical use of my personal information.

I choose a business that began as a company building products to empower the individual. Apple has spent decades making approachable, functional, and aesthetically pleasing technology for the individual, and this bolsters their claim that they want to protect the privacy of the individual.

That’s my choice.4


  1. I’m continuing to use Adblock Plus, and as part of the research for this piece, I started using Disconnect.Me. It appears Disconnect.Me duplicates the tracking blocking in Adblock Plus, but I like the visual map that Disconnect.Me gives me regarding trackers. 
  2. Via Disconnect.Me, you will see four trackers on Rands: Google Analytics, Chartbeat, Twitter, and Gravatar. I’m planning on removing or replacing each of them. With the demise of Mint, I’m in the market for self-hosted analytics. Drop me a note. 
  3. I’m also evaluating the Brave browser. First impression: many rough edges. 
  4. HTTPS is coming for Rands. I’ve got the certificate all set up, but the process of converting the rest of the site is… laborious. 
24 Jul 22:17

State of Microservices: Are You Prepared to Adopt?

by olaf
Webcast replay

The allure of microservices is clear: shorter development time, continuous delivery, agility, and scalability are characteristics that all IT teams can appreciate. But microservices can increase complexity and require new infrastructure—in other words, they can lead teams into uncharted territory.

Join Gartner’s Anne Thomas and Google Cloud’s Ed Anuff as they present an in-depth look at the state of microservices.

They discuss:

  • what a microservice is and what it isn’t
  • trends in microservices architecture
  • the relationship between microservices, APIs, and SOA architecture
  • connecting, securing, managing, and monitoring microservices

Watch the webcast replay now.

24 Jul 22:17

Back to the Bridge and the “Rotting” Tunnel Vision

by Sandy James Planner

 

Rush hour traffic moving through the Massey Tunnel in Vancouver

You would hope that the Vancouver region could work toward a cohesive vision of accessibility and affordability, one that includes actively listening to the Mayors’ Council and Metro Vancouver and their long-term plan. But in Delta, with its population of more than 100,000 and its reliance on all things vehicle and Port-related, the analysis of the best approach to the Massey Tunnel crossing holds no compromise: they want their bridge.

Jennifer Saltman of the Vancouver Sun reports on the meeting Delta’s mayor and city manager held with the editorial board of the Vancouver Sun and Province newspapers. You wonder whether the editorial board was able to keep a straight face at the decidedly positional pronouncements from Delta’s top brass. They maintained that “replacing the George Massey Tunnel should be a priority for the new provincial government because it’s old, congested, dangerous to drivers and first responders — and will not withstand even a moderate earthquake.”



“This tunnel’s rotting. Are we just going to let it rot?” Delta Chief Administrative Officer George Harvie said. The Delta contingent trotted out the same rationale as previously reported in Price Tags: the tunnel is too old, a bridge can withstand a stronger earthquake, a new tunnel would disrupt farmland and be more expensive. Nothing new here; in fact, all the other mayors in the region opposed the Massey bridge project because of its impacts on regional livability, the lack of a transparent public process, and changing and insufficient access to background information. But never mind that: the Mayor of Delta believes the Mayors are not dealing with the proposed bridge because it is a provincial initiative.

Meanwhile, back in Delta, the lack of consultation with local residents over the Massey crossing has been further inflamed by Delta City Hall’s full-page ad in the Vancouver Sun advocating its position of “Bridge Good” and “Tunnel Bad”. As Nicholas Wong (who ran as an independent MLA candidate in Delta) notes: “Christy Clark announced the bridge in 2013, years before any inquiry was done to evaluate alternative options. Also remember, the real cost of the bridge was purposely withheld by the Liberals and redacted in the project’s public documents. Where is the due process? Despite this, Delta still thinks all necessary information is publicly available. Our rookie MLA (Ian Paton, who is strangely serving a dual role as an MLA AND a member of Delta Council) even went so far as to say this practice of redacting documents and withholding information, like the bridge proposal has, is “just how you do business.”

Delta can pay tens of thousands of our tax dollars to call out others for spreading rumours and misinformation, but turns around and uses statements from a report more than 28 years old as evidence for its position. There were supposed to be two phases of seismic upgrades to address those exact concerns.”

“This is by no means the extent to the unjustifiable information being put forth by those in favour of a bridge. They can continue to call this misinformation all they want, but all I did was take the time to read their own documents.

After years of research and extensively reading the documents presented on the bridge proposal, I understand how drastically any replacement option will impact our community. If anyone has any information that I do not have or questions about where or how I derive my facts, please get in touch.”



24 Jul 22:16

@tamaradraut

24 Jul 22:16

New proposed Canadian drone regulations increase pilot costs, do little for safety

by Simon Cohen
DJI Spark

After several months of interim orders governing the use of drones (otherwise known as UAVs), Transport Canada has published a full proposed overhaul of the Canadian Aviation Regulations.

The new rules, if approved, would dramatically reduce the paperwork burden on both Transport Canada and commercial drone operators, but they would also increase costs for all pilots while their impact on air safety remains uncertain. The new rules are slated to take effect on January 15, 2018, and the public has until October to provide feedback.

Emphasis on risk

The biggest change to the existing regulatory environment is a reframing of the risk posed by the drones themselves. Under the proposed rules, a drone’s weight and flight conditions determine how strictly its use will be controlled, not whether the aircraft is being flown for commercial or personal reasons.

Xiro Xplorer Mini drone

The rationale is that a 1 kg drone would pose the same risk to people and other aircraft regardless of a pilot’s level of commercial gain from the flight. The heavier the aircraft, and the riskier the flight conditions, the more onerous the regulations become.

New categories

Transport Canada proposes five new UAV categories:

  • Micro: These drones weigh 250 grams or less, and are basically unregulated.
  • Very Small: More than 250 grams to 1 kilogram.
  • Small [“Limited” flight conditions]: More than 1kg to 25kg.
  • Small [“Complex” flight conditions]: More than 1kg to 25kg.
  • Large/Beyond Visual Line-of-Sight (BVLOS): More than 25kg.

Though there are nuances, Transport Canada generally defines “limited” as flights that take place in rural settings, while “complex” refers to built-up areas, or more specifically, “a populated or developed area of a locality, including a city, a town, a village or a hamlet.”

Tiny drone, tiny rules

As before, if your drone weighs less than 250 grams (Micro), you are exempt from regulation. The rules caution that you must still respect any relevant privacy laws, and that you must fly your drone “so as not to endanger life or property of any person.”

Insurance for all

Under the proposed new rules, anyone flying a Very Small, Small, or Large drone must carry a minimum of $100,000 in liability insurance. Unfortunately, drones are considered aircraft and as such are not typically covered by homeowners insurance policies. Intact Insurance was one of the few companies we spoke to that does cover drones under both its homeowners comprehensive and liability policies.

Transport Canada estimates that the cost for this insurance — if you had to buy it separately — will be $15 per year; however, MobileSyrup was unable to find an insurance company offering drone liability policies for consumers in that price range.

Xiro Xplorer Mini drone

Spark Insurance offers hobbyist policies that cost $400 to $500 CAD a year for $100,000 in liability, covering damage to any people or property arising from your drone use, but only if you are flying in Canada and only if your flight adheres to Transport Canada’s regulations.

Membership in MAAC (Model Aeronautics Association of Canada) costs about $90 annually (prices vary by province) and includes $7.5 million in liability coverage — well in excess of the minimum requirement — making it the least expensive option we could find. Their group insurance is valid globally, but as with Spark Insurance, if you’re flying in Canada you must adhere to all of Transport Canada’s rules and regulations in order to be covered. There are plenty of insurance options for businesses that operate drones.

Rules of the sky

Another new requirement for all drone pilots: they must pass an aviation knowledge test, which Transport Canada claims will be similar to the current boating knowledge test. There will be four tests, one for each category of weight/flight condition, consisting of about 100 questions each.

Pilots must “demonstrate aeronautical knowledge in specific subject areas, such as airspace classification and structure, the effects of weather and other areas.” Transport Canada expects the cost to take one of these tests will be $35. Technically speaking, even a recreational pilot flying a drone as small as a DJI Spark in the middle of the countryside would have to pass this test.

Strangely, even though the new rules go into effect by January, Transport Canada says it will take 18 months for its staff to develop the test for the small-complex category (the primary category for non-recreational pilots), and it makes no mention of what pilots are expected to do during this interval.

Once you’ve met the rules above, your drone’s specific weight category and flight location bring further requirements.

Very small drones

For the very small drone category, e.g. a DJI Mavic Pro, the proposed flight regulations are the same for both rural and built-up areas, and they’re nearly identical to the current interim rules, but Transport Canada has now added a minimum operator age:

  • Be at least 14 years of age.
  • Clearly mark the UA with the name, address and telephone number of the operator.
  • Notify air traffic control if the UA inadvertently enters or is likely to enter controlled airspace.
  • Operate in a manner that is not reckless or negligent (one that could not endanger life or property).
  • Give right of way to manned aircraft.
  • Use another person as a visual observer if using a device that generates streaming video, also known as a first-person view (FPV) device.
  • Confirm that no radio interference could affect the flight of the UA.
  • Do not operate in clouds.
  • Operate at the following minimum distance from an aerodrome: 3 nautical miles (NM) [5.56 km] from the centre of the aerodrome. The required distance from heliports and/or aerodromes used exclusively by helicopters would be 1 NM (1.85 km).
  • Operate at least 100 feet (30.5 m) from a person. A distance of less than 100 feet laterally would be possible for operations if conditions such as a reduced maximum permitted speed of 10 knots (11.5 mph) and a minimum altitude of 100 feet are respected.
  • Operate at a maximum distance of 0.25 NM (0.46 km) from the pilot.
  • Operations over or within open-air assemblies of persons would not be allowed.
  • Operate below 300 feet.
  • Operate at less than 25 knots (29 mph).
  • Night operations would not be allowed.

In some ways, these rules are an expansion of the previous allowances — first-person view devices (goggles that let you see what the drone’s onboard camera sees) are currently banned for pilots under the interim rules, regardless of whether they have a visual observer.

There’s also an acknowledgement that slower speeds and lower altitudes could reduce the risk to people, and thus a smaller minimum lateral distance might be allowed as long as a minimum altitude is respected, but the wording stops short of defining what that lateral distance could be. In theory, this could allow the pilot of a ‘Very Small’ drone like a Mavic Pro to hover directly over — or very close to — a person, as long as they kept the drone 100 feet in the air.

Small drones

Once you’re into the small drone category, e.g. a DJI Phantom 4 series, the rules get stiffer and differ depending on where you fly. In a limited or rural context, the rules are essentially the same as for very small drones, but with a few modifications:

  • Perform a site survey prior to launch to identify any obstacles and keep maintenance and flight records.
  • Be at least 16 years of age.
  • Operate at the following minimum distance from an aerodrome: 3 NM (5.56 km) or greater, respecting the control zone; or 1 NM (1.85 km) if there is no control zone. The required distance from heliports and/or aerodromes used exclusively by helicopters would be 1 NM (1.85 km).
  • Operate at least 250 feet (76.20 m) from a person. A lateral distance of less than 250 feet would be possible for operations if conditions such as a maximum permitted speed of 10 knots (11.5 mph) and a minimum altitude of 250 feet are respected.
  • Operate at a minimum distance of 0.5 NM (0.93 km) from a built-up area.
  • Operate at a maximum distance of 0.5 NM (0.93 km) from the pilot.
  • Operate below 300 feet (91.44 m), or 100 feet (30.48 m) above a building or structure, with conditions.
  • Operate at less than 87 knots (100 mph).

For small drones in a complex, or built-up, context, the proposed rules affect both the pilot and the drone itself.

Transport Canada will require that the drone in question is “in compliance with a standard published by a standards organization accredited by a national or international standards accrediting body; have available the statement from the manufacturer that the UAS meets the standard; and do not modify the UAS.” While the rules don’t state which standard will be used, they do indicate that non-compliant drones purchased before these rules go into effect can still be used, though possibly with greater restrictions than compliant ones.

Pilots would then have to:

  • Register their drone with Transport Canada and carry that document with them whenever flying, at a cost of $110 per drone.
  • Obtain a pilot permit ($35) that would be valid for five years. The permit application to Transport Canada would include, for example, an attestation of piloting skills by another UA pilot and the successful completion of a comprehensive knowledge exam ($35 exam fee).

Permit holders could then:

  • Operate over or within open-air assemblies of persons at an altitude of greater than 300 feet but less than 400 feet, provided that, in the event of an emergency necessitating an immediate landing, it would be possible to land the aircraft without creating a hazard to persons or property on the surface.
  • Operate at a maximum of 400 feet (121.92 m), or 100 feet above a building or structure, with conditions.

  • Night operations would be allowed with conditions.

What’s interesting here is that Transport Canada has significantly increased a pilot’s ability to fly near people, at higher altitudes, and potentially at night, as long as the minimum requirements are met.

Goodbye SFOCs

Under the current rules, anyone flying their drone for commercial purposes is supposed to apply for and obtain a Special Flight Operations Certificate (SFOC) from Transport Canada before flying.

Xiro Xplorer Mini drone

Not only is this a significant administrative burden on individual and large drone operators alike, but the process has also been a challenge for the ministry to complete in a timely manner, leading to delays of 20 days or more before an SFOC could be issued. The new proposed rules essentially eliminate the SFOC requirement for all but the heaviest drones and riskiest flight conditions — a welcome move for commercial drone operators both large and small.

It means that an aerial cinematography or inspection company, as long as it is operating drones weighing less than 25 kg, could operate on its own schedule if its pilots held valid permits. It’s worth noting that the government expects to save taxpayers millions of dollars through the reduction in SFOC applications.

Is this safer?

A common thread in Transport Canada’s concerns around drone use is drones’ potential impact on people and other aircraft.

Transport Canada says, “Risks are largely due to lack of understanding and working knowledge of airspace, aviation regulations, manned aviation airspace users, and best practices.” However, it is still unclear how the institution of tougher educational requirements will translate into more responsible pilot behaviour.

Enforcement

Transport Canada’s enforcement provisions seem to be at odds with the number of UAVs it believes to be in operation in Canada. The new rules simply say that the RCMP will be empowered to fine non-compliant pilots, and that local law enforcement may be used for this as well, though it is unclear whether these groups have the resources to dedicate to any kind of ongoing surveillance.

Fines for individuals range from $1,000 to $5,000 depending on the severity of the infraction, while corporations can expect to pay anywhere from $5,000 to $25,000.

Reactions

Chinese drone manufacturer DJI reacted to the new draft rules in a statement. “We are disappointed that Transport Canada has taken an overly restrictive approach for its new proposed drone rules,” said Brendan Schulman, DJI Vice President of Policy and Legal Affairs.

“Strong restrictions placed on drones in built-up areas — essentially all locations where people live — overlook the benefits drones can provide to cities and will result in millions of Canadians not having the opportunity to realize the full potential of this emerging technology.”

Xiro Xplorer Mini drone

Non-recreational pilots seem most concerned with the new standards-compliance rule for Small drones. At the moment, none of the most popular pro-level drones made by DJI (e.g. the Phantom 3, Phantom 4, Inspire, or Inspire 2) are standards-compliant, leaving many commercial operators wondering how they’ll be able to meet the new regulations for built-up areas without a significant investment in new equipment. A fully loaded Inspire 2 with a top-of-the-line camera, for instance, costs almost $9,000.

Some pilots welcome the changes, especially those who operate a very small drone for commercial flights. Mayooran N, a member of the Special Flight Operations Certificate Canada Facebook group, uses his Mavic Pro to shoot real estate and wedding/engagement photos and videos.

“Compared to SFOC,” he said, “I can write my exam, get insurance and do my work following the [very small drone] rules and Controlled Airspace limitations in the GTA. I don’t have to build my work to get a standing SFOC, thus I can start my work legally.”

Photography by Patrick O’Rourke and Igor Bonifacic.

The post New proposed Canadian drone regulations increase pilot costs, do little for safety appeared first on MobileSyrup.

24 Jul 22:16

Quebecor closes sale of 7 spectrum licenses to Shaw’s Freedom Mobile

by Rose Behar

Quebecor’s sale of 700MHz and 2500MHz wireless spectrum to Shaw’s Freedom Mobile for $430 million CAD is now officially complete.

Before the sale closed, Quebecor had to obtain regulatory approval from Innovation, Science and Economic Development Canada (ISED) and the Competition Bureau in order to transfer the spectrum from its Quebec regional carrier Videotron to Shaw subsidiary Freedom Mobile, which launched its LTE network in November 2016.

“This is an important incremental step in our evolution as an enhanced connectivity provider. We are excited about improving our wireless capabilities and putting this spectrum to use for the benefit of Canadians,” said Brad Shaw, CEO of Shaw Communications.

“The addition of this spectrum enhances our ability to offer higher quality wireless experiences, and choice, to more Canadians.”

The package included three 700MHz licenses for Southern Ontario, Alberta and British Columbia and four 2500MHz licenses covering Toronto, Edmonton, Calgary and Vancouver — a holdover from when Videotron was expected to try its hand at national expansion.

Shaw notes that Freedom Mobile funded the transaction with a combination of cash on hand and Shaw’s existing credit facility.

Meanwhile, Quebecor states in its release that the asset sale will “enable Videotron to continue investing in the development of its network in Québec and Eastern Ontario.”

700MHz is highly valued spectrum that Rogers uses for LTE service, while the majority of the higher-frequency 2500MHz spectrum auctioned in 2015 went to Telus.

Source: Quebecor, Shaw

The post Quebecor closes sale of 7 spectrum licenses to Shaw’s Freedom Mobile appeared first on MobileSyrup.

24 Jul 22:16

Get Started With Serverless Computing On Kubernetes With Minikube And Kubeless


Bitnami, Jul 25, 2017



This is the sort of thing that could eat the rest of my vacation (or a lot longer, if you don't have a developer background). Kubeless allows you to manage a "serverless" architecture (it's not really 'serverless'; it's just that all of your applications and functions run on other people's servers), and you use software like Kubernetes to set up and coordinate them. This is bleeding edge and far from user-friendly. From Wikipedia: "Kubernetes (commonly referred to as "K8s") is an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and donated to the Cloud Native Computing Foundation." There's a webinar this Wednesday if you want to learn more.
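To make that concrete, here is a minimal sketch of what a Kubeless function can look like. Treat it as an assumption-laden illustration rather than something from the linked guide: the handler signature has varied across Kubeless releases, and the file and function names here are made up.

# hello.py - a minimal Kubeless-style function (hypothetical example).
# The (event, context) signature follows later Kubeless Python runtimes;
# older releases used a different one, so check the docs for your version.
def hello(event, context):
    # The request payload arrives under event['data'];
    # the return value becomes the HTTP response body.
    data = event.get('data') or {}
    name = data.get('name', 'world') if isinstance(data, dict) else 'world'
    return "Hello, {}!".format(name)

You would then deploy it with the kubeless CLI, with something along the lines of kubeless function deploy hello --runtime python2.7 --from-file hello.py --handler hello.hello, and Kubernetes takes care of scheduling and scaling it.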

[Link] [Comment]
24 Jul 22:16

Going Serverless with OpenWhisk

by Thejesh GN

I have been using Webhooks for a long time now. Webhooks are HTTP callbacks: they usually get called with an HTTP POST request when an event occurs. For example, system one makes a POST request to system two when something changes on system one. They are typically used by heterogeneous systems for plumbing.

In my case there would be a couple of notifications per day, but the receiving system was a standard web app running 24×7 even though that wasn’t necessary. So instead of running it on an independent machine, I started looking for hosted services. Hook.io was one of the services I used. It’s simple, stable, and open source; you can host it yourself if you ever want to. It really worked well.
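To make the plumbing concrete: a webhook receiver is nothing more than an HTTP endpoint that accepts a POST. Here is a minimal sketch using only the Python standard library; the /notify path and the JSON payload shape are hypothetical.

# A minimal webhook receiver: system one POSTs JSON here when an event occurs.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != '/notify':
            self.send_error(404)
            return
        # Read and decode the JSON body of the callback.
        length = int(self.headers.get('Content-Length', 0))
        event = json.loads(self.rfile.read(length) or b'{}')
        print("event received:", event)  # react to the event here
        self.send_response(204)          # acknowledge with "No Content"
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('', 8080), WebhookHandler).serve_forever()

It is exactly this kind of always-on-but-mostly-idle process that a hosted service, and later a serverless platform, can replace.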

Over time my requirements grew, and I also started working on an enterprise-level project that needed more than simple Webhooks. I wanted to run small functions synchronously, asynchronously, and sometimes on a schedule. One way was to build my own queues, scripts, and scheduler. The other was to look into serverless architecture.

Apache OpenWhisk

There are many hosted serverless systems today. But I was looking for a system that is open source and can be hosted on premises if required. Whether it’s for an enterprise or an indie project, vendor lock-in is bad. I prefer systems that allow one to export data, code, processes, etc., and to host everything oneself. Apache OpenWhisk was probably the best fit for all these requirements, along with the standard technical requirements of any serverless architecture.

I am quoting the definition of OpenWhisk from their website, as it’s precise:

OpenWhisk is an open source, distributed serverless computing platform able to execute application logic (Actions) in response to events (Triggers) from external sources (Feeds) or HTTP requests governed by conditional logic (Rules). It provides a programming environment supported by a REST API-based Command Line Interface (CLI) along with tooling to support packaging and catalog services.

OpenWhisk comes with quite a few integrations that can be used as triggers or destinations. For example, you can listen to a GitHub Webhook or CouchDB change events and then perform certain actions. You can also chain actions if you like, which makes building IFTTT-like services very easy.
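As a sketch of what chaining can look like, here is a hypothetical pair of actions; the names and the word-count task are mine, not from the OpenWhisk catalog. In a sequence, the dict returned by one action becomes the args of the next.

# Two chainable actions; in OpenWhisk each would be main() in its own file.
import requests

# Action 1: fetch a page and pass its text downstream.
def fetch(args):
    url = args.get('url', 'https://example.com')
    return {"body": requests.get(url).text}

# Action 2: a sequence feeds action 1's returned dict in as these args.
def count(args):
    return {"words": len(args.get('body', '').split())}

With both deployed, the wsk CLI can tie them together with something like wsk action create fetch-then-count --sequence fetch,count.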

OpenWhisk Integrations

IBM Bluemix OpenWhisk

IBM provides a hosted OpenWhisk service called IBM Bluemix OpenWhisk. It’s probably one of the earliest providers of hosted OpenWhisk. You need an IBM ID to use it. You don’t need a credit card to start using IBM Bluemix services in trial mode, but you do need one to continue using the services after the trial period. That said, OpenWhisk has a generous enough free-tier allowance that you don’t need to pay for experimentation.

Getting Started

There are two ways to start with OpenWhisk. You can use the UI if the use case or the code is simple, but if the scripts are complex then it’s better to use the CLI. To create a script (or Action, in OpenWhisk terminology) using the web UI, go to Manage under OpenWhisk and click on Create Action. All actions are put under a namespace; the UI will prompt you to create a namespace if there isn’t one, or you can select one you have already created.

Create Action using OpenWhisk Web UI

Give the action a unique name and select the runtime. I usually code in Python, hence this example is in Python; Swift and Node are the other two options. Also make sure to check Web Action, because we want to use it as a web service.

Creating Web Action

Since it’s a simple piece of code, you can test it inside the browser.

Testing inside the browser

It’s just three simple steps. Here is the working API endpoint URL that will return your public IP address.
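The post shows the echo action itself only in screenshots, so here is a guess at what a comparable Python web action could look like. The header lookup is an assumption: OpenWhisk hands web actions the request headers under __ow_headers, but whether the caller's address shows up in X-Forwarded-For depends on the deployment.

# Hypothetical reconstruction of the "what's my IP" web action.
def main(args):
    headers = args.get('__ow_headers', {})
    # Proxies commonly record the original client address in this header.
    ip = headers.get('x-forwarded-for', 'unknown')
    return {"ip": ip}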

Setting up CLI

Once echo is working, your setup is ready for bigger things. To try the CLI, let’s build a service that will “ping” the address of a given domain passed as a parameter. This is not a real ping; the idea here is to show how easy it is to package and deploy serverless applications.

# file name: __main__.py
import requests

def main(args):
    # For raw web actions, OpenWhisk passes the unparsed query string
    # in the __ow_query parameter.
    query_string = args.get('__ow_query', None)
    if query_string:
        # Parse "key1=value1&key2=value2" into a dict.
        parameters = dict(pair.split("=", 1) for pair in query_string.split("&") if "=" in pair)
        if "domain" in parameters:
            domain = parameters["domain"]
            try:
                # Not a real ICMP ping: an HTTP GET tells us if the domain responds.
                requests.get(domain)
                return {"ping": "yes", "domain": str(domain)}
            except Exception:
                return {"ping": "no", "domain": str(domain)}
    return {"error": "Error while pinging"}

To deploy, we will create a fabfile that essentially runs the wsk commands, which makes it easy to build and deploy in a single step. We use virtualenv to create an isolated environment, install dependencies from requirements.txt, and then package everything as a zip, including our main function in a file called __main__.py.

from fabric.api import local

action_name = 'ping'

def setup():
    # Create an isolated environment and install dependencies into it.
    local('virtualenv virtualenv')
    local('. virtualenv/bin/activate; pip install -r requirements.txt')


def package():
    # Remove any previous package, then zip the action code
    # together with its dependencies.
    local('rm -rf package.zip')
    local('zip -r package.zip __main__.py virtualenv/')


def update():
    cmd = 'wsk action update ' + action_name + ' --kind python:3 package.zip --web raw'
    local(cmd)


def create():
    cmd = 'wsk action create ' + action_name + ' --kind python:3 package.zip --web raw'
    local(cmd)


def delete():
    cmd = 'wsk action delete ' + action_name
    local(cmd)

The very first time you run this, you need to create the action, hence use create; from then on you can use update. You don’t need to run setup again unless the environment changes.

#first time
thej@uma:~/code/openwhisk-fabfile-deploy$ fab setup package create

#for updates
thej@uma:~/code/openwhisk-fabfile-deploy$ fab package update
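Once deployed, the raw web action can be exercised over plain HTTP. Here is a quick smoke test; the URL is a placeholder, since wsk action get ping --url prints the real endpoint for your namespace.

# Smoke test for the deployed action (placeholder URL).
import requests

url = "https://openwhisk.example.com/api/v1/web/my_namespace/default/ping"
resp = requests.get(url, params={"domain": "https://example.com"})
print(resp.json())  # e.g. {"ping": "yes", "domain": "https://example.com"}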

I have all the code in a GitHub repo if you want to fork it and use it.

Note: This post was written with inputs from IBM.
24 Jul 22:15

The right way to defend (or attack) the Congressional Budget Office

by Josh Bernoff

The Congressional Budget Office is in the crosshairs for its prediction that the Republican health care bills will cause 22 million people to lose health insurance. Eight of the organization’s former leaders wrote a letter to defend it. Who’s right, the CBO’s critics or its defenders? Congress created the CBO in 1974 as an independent, … Continued

The post The right way to defend (or attack) the Congressional Budget Office appeared first on without bullshit.

24 Jul 22:15

In Labor

by Eleanor Penny

In the limpid waters of an electrolyte solution, under the close watch of assembled scientists, lamb fetuses grow guts and brains. They sprout limbs and eyes. They begin to move.

These are among the first successful beneficiaries of “bio-bags”: plastic contraptions that mimic uteruses, providing the heat, nutrients, and liquid environment necessary for fetuses to grow. Researchers at the Children’s Hospital of Philadelphia, who described their efforts in a recent paper published in Nature Communications, hope that the bio-bag can be used to give premature babies a better chance at survival than earlier attempts to simulate the conditions of the womb. Previously, a team at the Center for Reproductive Medicine and Infertility at Cornell University had succeeded in growing a mouse embryo almost to full term by adding engineered endometrium tissue to a bioengineered extra-uterine “scaffold.”

Scientists have long dreamed of uncoupling the process of birth from the messiness of human biology. In 1924, J.B.S. Haldane foretold a future where, 150 years on, fewer than 30 percent of children would be “born of woman.” He christened this speculative technology ectogenesis. Imagining how a future college student would describe the phenomenon, Haldane wrote, “Had it not been for ectogenesis, … there can be little doubt that civilization would have collapsed within a measurable time owing to the greater fertility of the less desirable members of the population in almost all countries.”


In line with that blending of eugenics and utopia, the mere thought of ectogenesis has also reliably stoked dystopic fears of what artificial wombs might wreak. Aldous Huxley’s 1932 novel Brave New World used artificial wombs as an emblem of a world of carefully administered comfort, in which people live as they were born: assigned to a caste, cradled in a chemically sustained stupor, stripped of meaningful human contact even on an umbilical level. The “hatcheries” that house countless ranks of artificial wombs are the first thing we encounter in the novel, positing them as the basis of a world where “natural” familial relations are replaced by synthetic, state-organized relations optimized for complacency and for social control.

But to this dystopia, supporters of artificial wombs respond with another. Conventional child bearing results in the deaths of hundreds of thousands of people a year, with many more risking life-threatening complications. Such levels of misery inspired Shulamith Firestone to call birth “barbaric” and welcome the prospect of ciswomen being unshackled from their biological fate. In The Dialectic of Sex, she argued that synthetic reproduction could spell “the freeing of women from the tyranny of their biology by any means available, and the diffusion of the childbearing and childrearing role to the society as a whole.” It may be “unnatural,” but, Firestone argues, “the ‘natural’ is not necessarily a ‘human’ value. Humanity has begun to transcend nature: We can no longer justify the maintenance of a discriminatory sex class system on grounds of its origins in nature. Indeed, for pragmatic reasons alone it is beginning to look as if we must get rid of it.”

It’s early days yet. Alan Flake, one of the researchers at the Children’s Hospital of Philadelphia, cautions against such speculation about the end of human child bearing. “It’s complete science fiction to think that you can take an embryo and get it through the early developmental process and put it on our machine without the mother being the critical element there.” Even so, trials on human fetuses may be only a few years away.

But there is another sense in which Flake is right to call the prospect of artificial wombs “science fiction”: Not because they are necessarily far-fetched, but because the prospect of them adheres to science fiction’s particular ability to test the bounds of what we consider normal, natural, and human.

Not all women can give birth, and not all people who can give birth are women. But it remains inarguable that womanhood and birth are inextricably fused in our cultural imaginary. By disentangling birth from women’s bodies, artificial wombs threaten to weaken that association in which many people remain deeply invested. Social conservatives — such as some modern Catholic bioethicists — look on in dismay as artificial wombs present a technological assault on the sanctity of the family unit and the sentimental celebration of mothers for their supposedly selfless nurturing, effortless empathy, and gentle unambitious nature. As conservative doyenne Phyllis Schlafly put it in The Power of Positive Women (1977), “the Positive Woman looks upon her femaleness and her fertility as part of her purpose, her potential, and her power.”

Ectogenesis undermines the idea that women’s lives are — and indeed should be — ultimately invested in making and caring for babies. That biological fatalism comes in handy if you want to keep traditional gender roles largely the way they are — that is, to keep women having babies and performing domestic labor for scant recognition and even less pay. It mobilizes a heady mix of different ideologies: from cack-handed pseudoscience claiming compassion is hard-wired into the female brain, to the language of romantic love, deployed to guilt-trip women into thinking that refusing birth (or demanding a wage for their trouble) would be to cheapen their feminine capacity for selfless caring, even to deny their own womanhood. In Against Love: A Polemic, Laura Kipnis argues that “if modern love has power over us, domesticity is its enforcement wing: the iron dust mop in the velvet glove.” This essentially privatizes childbirth and all the domestic labor it involves, making it, in the words of feminist academic Camille Barbagallo, “a private matter for which individuals bear the costs and responsibility.”

These are the terms with which we’re used to talking about female life — still underwritten by the imperative to reproduce, reproduce, reproduce. Sure, you might be an artist or a politician on the side, but “motherhood is the most important job in the world,” as Ivanka Trump has declared. And it’s important to remember that this job stretches far beyond the basic biological processes of gestating a fetus and bearing it into the world. It also comprises the domestic labor, care work, and emotional labor necessary to turn a zygote into a human thing that talks and walks around and blows out its birthday candles and files its tax returns. The uncompensated provision of this work has been fundamental to the ways in which the capitalist economy functions.

The advent of private washing machines saved on labor, but that labor — once public, visible, collective, waged, and less explicitly gendered — became the responsibility of individual women, cloistered within the confines of the bourgeois home

In Marxism and the Oppression of Women, Lise Vogel argues that all capitalist production draws deep on the wellspring of free labor into which women are regularly cajoled — the reproductive labor that keeps capitalism stocked with a ready supply of fresh workers. But the technology of gender governs work performed with the hands and the mind as well as the uterus. In explaining the Marxist feminist theory of social reproduction, Tithi Bhattacharya points out that childbirth is just one part of this process, which also includes “activities that regenerate the worker outside the production process and allow her to return to it. These include, among a host of others, food, a bed to sleep in, but also care in psychical ways that keep a person whole.” Women also work to “maintain and regenerate non-workers outside the production process — i.e. those who are future or past workers, such as children, adults out of the workforce for whatever reason, be it old age, disability or unemployment.” In The Problem With Work, writer and theorist Kathi Weeks claims that this work is inseparable from our vision of what it is to be a woman: “Doing this job is part of what it means to do gender.”

What, then, do we make of the artificial womb — a technology that seemingly allows people to control such production without the complications of managing human sociality along gender lines? By potentially automating childbearing, it seems to offer us the chance to unpick the stitches that tie people with certain kinds of organs to certain ways of living. That is to say, it might clear the path toward the abolition of gender.

But a quick look at history reveals that it is not so simple. Aspects of the work of gender — the labor of social reproduction described above — have been automated before, with ambivalent results. The advent of washing machines is sometimes hailed as having liberated women from the burden of low-tech laundry, a task which could swallow up countless hours. At the same time, though, that technological shift brought with it a sea change in how domestic work was organized more generally. When the private washing machine came into being, U.K. cities were marked by social washhouses, where even the poorest could take their laundry. Though the labor was still back-breaking, the sociality of these spaces stood as testament to a way of organizing domestic labor alien to our modern mythologies of private domesticity — that it could be collective — even that it could be waged.

Then, outmoded and out-marketed, these washhouses began to shut up shop, and washing clothes — labor that was once public, visible, collective, waged, and less explicitly gendered — became for many households the responsibility of individual women, silent and invisible, cloistered within the confines of the bourgeois home. Washing machines saved on labor but also set the stage for the rise of the housewife.


The political ramifications of an invention depend not simply on what it does but on how it’s used, and by whom. It doesn’t take a huge leap of imagination to conjure a future in which the gender-revolutionary potential of artificial wombs is seamlessly metabolized by the machinery of capitalism and patriarchy: Artificial wombs leased only to straight, married couples, of the right religion and income bracket. Lower-class women tending the hatcheries, raising the kids in the first crucial stages of development. Ectogenesis used as an excuse for further roll-backs on tentative 20th century concessions toward the legalization of abortion.

So we find ourselves caught at a crux of different possible futures: Firestone’s daydream of transcending the barbarism of “natural” birth on one side; artificial wombs reinforcing the armory of patriarchy on the other. So we may look again for lessons in a sphere where prospective futures are explored and the different potential ramifications of technology are elaborated: science fiction. Margaret Atwood’s The Handmaid’s Tale may be as pertinent a guide to reproductive futures as Huxley’s dystopia. In Gilead, the fascist theocracy of the near-future that Atwood describes, birth and reproduction are highly regulated, but Brave New World’s depersonalized and disembodied ultra-tech hatcheries are replaced by live women, bound to a lifetime of sexual and reproductive servitude. Rather than replace women’s bodies with reproductive machines, the women tasked with childbirth are simply treated as machines, valued entirely in terms of that particular biological capacity.

With artificial wombs, the conspicuous labor of childbirth could be rendered invisible, mitigating one of the more overt inequities around which a broader insurrection could be organized

Like all successful dystopias, this caricature persuades not because it’s fully plausible in its details, but because we feel the shiver of recognition in its underlying logic. The war on women has always trained its crosshairs on women’s attempts to control their own reproductive futures. Reproductive freedom underwrites women’s economic independence and political participation; it’s vastly easier to hold down a job or participate in public life when not saddled with caring duties for which you and you alone are responsible.

Would artificial wombs, we might ask, do anything to unshackle the women of Gilead? Would the handmaid’s obsolescence also mean the handmaid’s freedom? In Atwood’s dystopia, being a handmaid is only one of many ways in which women are bound to servitude. The different tasks of social reproduction are doled out pin-factory style to different women: Female maids, cooks, and sex workers are similarly tasked with assuring that society is sustained, fed, cared for, and, yes, reproduced. Even the wives of Gilead, living in luxurious, privileged complicity, are tasked with floor-managing the whole affair, acting as moral overseer in exchange for exemption from the toil and humiliation reserved for other castes of women.

Automating childbirth, one part of gendered labor, wouldn’t amount to abolishing all forms of gendered labor. The obsolescence of the handmaid leaves untouched and unchallenged the lives of the sex worker and the household cook. In fact, with artificial wombs, the conspicuous labor of childbirth could be rendered invisible, mitigating one of the more overt inequities around which a broader insurrection could be organized.

Gender has shown an astonishing ability to reinvent itself according to the particular technological needs of capitalism. It has just proved too useful a tool for signposting and governing what kinds of work should be done by whom. This has even carried over to artificial intelligence. In a recent talk at London’s Institute of Contemporary Arts on “Automated Emotion,” Nora Khan noted that in customer feedback surveys, companies developing artificial-intelligence services have observed that customers respond more favorably to bots characterized as female. So accustomed are we to women being in roles of service and gentle facilitation that anything else apparently feels alien. Hence Siri, Alexa, and Cortana rather than Simon, Alexander, and Caleb.

Gender is thus used to assimilate new technologies into existing social relations and distributions of power. As academic and theorist Helen Hester explores in Technology Becomes Her, gender is coded into new technology of service to ensure that people keep on buying, keep on signing in.

Upheavals in the technology of gendered work don’t spell upheavals in the ever-adaptive technology of gender. Such advances are only revolutionary in the right circumstances. But at the same time, we should not lose sight of the opposite: that in the right circumstances, such advances are indeed revolutionary. 830 women a day die from pregnancy- or childbirth-related complications. Pregnancy means running a higher risk of intimate partner violence or pure-and-simple penury. To these social ills, artificial wombs would provide a simple medical solution. They underwrite a positive politics of choice around reproduction: giving people the ability not simply to refuse childbirth, but to embrace it if they’re, say, older, or trans, or queer, or simply struggling to get pregnant.

But in a yet more revolutionary move, such technology can help undo gender, as it allows us to reorganize the work of reproduction. It defies the central logic that Silvia Federici pinpoints as the source of restrictions on women’s reproductive freedom: that in order to reproduce itself, society needs to coerce women into squeezing out children — in the process, effectively shutting them out of public and economic life. Ectogenesis allows us to neatly outsource those concerns to the robots. Moreover, artificial wombs — hatcheries, perhaps — could be run collectively, tended by well-paid staff. The service could be delivered free at the point of need, available to people regardless of gender or family status. Biological limits tying birth to female bodies underwrite the arguments of those invested in keeping reproduction and childcare cloistered in the confines of the private, heterosexual domestic unit. Struck against the prospect of easy, widely available ectogenesis, such arguments ring hollow.

In such a world, we would have less need for the ideology of the ever-caring, endlessly self-sacrificing mother figure to regulate systems of work. No wonder then, that Huxley’s hatcheries proved so readily nightmarish for readers. They showcase technology that implies gender is not a biological fact but, as Helen Hester puts it, a workplace technology, regulating the labor of social reproduction. Allowing us to unsettle the way we organize this labor, artificial wombs lay the groundwork for that technology to become obsolete.

24 Jul 22:15

MIUI 9 Features Previewed Ahead of Official Unveiling: New Themes, Redesigned Lock Screen, More

by Rajesh Pandey
Ahead of the official unveiling of MIUI 9 on July 26th, Xiaomi has gone ahead and previewed some of the features that will be a part of it.  Continue reading →
24 Jul 22:15

Hmm, is this a clear case of "they're all ...

mkalus shared this story from Fefes Blog.

Hmm, ist das hier ein klarer Fall von "die sind alle so verstrahlt, die merken gar nicht, wie verstrahlt sie sind" oder von "burn, venture capital, burn"?
AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks
The best defense against malicious AI is AI.
Yeah, great!
24 Jul 22:15

If GoDaddy Can Turn the Corner on Sexism, Who Can’t? | Charles Duhigg

If GoDaddy Can Turn the Corner on Sexism, Who Can’t? | Charles Duhigg:

GoDaddy shows that acknowledging our inherent biases can be the keystone of moving past chauvinism and deep-seated workplace discrimination:

Today, as Silicon Valley sexism again draws attention, it’s worth studying those shifts at GoDaddy. There’s a regular procession of headlines about sexual harassment scandals at venture capital firms and large tech companies. But learning to address this problem requires studying where things have gotten better, as well. And GoDaddy has become, surprisingly, a lodestar among gender equity advocates — an example of how even regressive cultures can change.

So what did GoDaddy do right?

The answer is more complicated than just stamping out overt sexism. GoDaddy also focused on attacking the small, subtle biases that can influence everything from how executives evaluate employees to how they set salaries.

“The most important thing we did was normalize acknowledging that everyone has biases, whether they recognize them or not,” said Debra Weissman, a senior vice president at the company. “We had to make it O.K. for people to say, ‘I think I’m being unintentionally unfair.’”

Though GoDaddy still has work to do, the company is “evidence that things can change,” said Lori Mackenzie, executive director of the Clayman Institute for Gender Research at Stanford, which has worked with the firm. “Oftentimes, what keeps companies from shifting is believing the existing system is already fair. Blake is really committed to undermining that.”

[…]

Some of the problems applicants and workers faced were subtle. For years, for instance, GoDaddy’s job descriptions were needlessly aggressive, saying the company was looking for “rock stars,” “code ninjas,” engineers who could “knock it out of the park” or “wrestle problems to the ground.”

Moreover, when GoDaddy’s human resource department began reviewing how the company analyzed leadership capacities, it found that women systematically scored lower because they were more likely to emphasize past team accomplishments and use sentences like “we exceeded our goals.” Men, in contrast, were more likely to use the word “I” and stress individual performance.

“There’s a lot of little things people don’t usually notice,” said Katee Van Horn, GoDaddy’s vice president for engagement and inclusion. “But they add up. They reinforce these biases you might not even realize you have.”

GoDaddy began focusing on countering these biases, assessing the company’s hiring, employee evaluations and promotions. In particular, executives scrutinized employee reviews, which evaluated workers using questions similar to those found at many companies: Does this person reply to emails promptly? Have they sought leadership roles? Have they shown initiative?

“We realized a lot of those are invitations for subjectivity,” said Ms. Van Horn.

GoDaddy’s data indicated that women tended to systematically be scored lower than men on communication, in part because they were more likely to be a family’s primary parent, and so were more likely to be off email in the early evening during homework and bedtime hours.

“And the more important question isn’t whether someone responds to email right away,” said Ms. Van Horn. “It’s what they say, whether their responses have impact. We shouldn’t be judging people based on how fast they communicate. We should be looking at whether they achieved the goals set for them.”

Women also, on average, scored lower than men on evaluations of taking initiative, because most of GoDaddy’s midlevel managers were men, and the culture was top-down, which made it harder for female employees to participate in and get attention for prominent projects, employees say.

Changing how workers are measured for promotion and advancement changes everything, especially moving from subjective ‘leadership’ qualities to more objective measures of progress.

Today, almost a quarter of GoDaddy’s employees are women, including 21 percent of its technical staff. Half of new engineers hired last year were female, and women make up 26 percent of senior leadership. Female technologists, on average, earn slightly more than their male counterparts.

24 Jul 22:15

Siri Featured in Apple Ad Starring The Rock

by John Voorhees

Apple released an advertisement showcasing Siri starring former pro-wrestler turned film star, Dwayne (‘The Rock’) Johnson. Teased yesterday by Johnson on Twitter and Facebook, the video, posted to Apple’s YouTube channel, features Johnson accomplishing a long list of life goals with the help of Siri during a single day. The tongue-in-cheek spot highlights several Siri features such as:

  • reading Johnson’s schedule;
  • creating a reminder;
  • scheduling a Lyft ride;
  • getting the weather forecast;
  • reading email;
  • displaying photos;
  • texting someone;
  • converting measurements;
  • playing a playlist;
  • starting a FaceTime call; and
  • taking a selfie.

The Siri ad is a clever and entertaining way of explaining the breadth of tasks that can be accomplished with Siri, from the basics like weather forecasts to less well-known features like taking a selfie.


Support MacStories Directly

Club MacStories offers exclusive access to extra MacStories content, delivered every week; it’s also a way to support us directly.

Club MacStories will help you discover the best apps for your devices and get the most out of your iPhone, iPad, and Mac. Plus, it’s made in Italy.

Join Now
24 Jul 22:14

Reform of ICBC needed

by Stephen Rees

The front page of Saturday’s Vancouver Sun covered the need to raise insurance rates, identified by a leaked report that the BC Liberals asked for and then kept quiet about. Over the next 24 hours the tone of the Sun story changed online since, of course, the corporation (Postmedia) that publishes the Sun supports the BC Liberals. So the banner headline online now reads “NDP must come clean about plans for ICBC, Liberal Opposition demands” rather than “Huge ICBC rate hikes loom without reform: report”. The report comes from Ernst & Young and is critical of the policies of the BC Liberal government, which cross-subsidized mandatory basic rates from the profitable optional side.

The report was commissioned by ICBC’s board earlier this year, but was not made public. A copy was leaked to Postmedia News.

While ICBC premiums are among the highest in Canada, the report said, “they are not high enough to cover the true cost of paying claims.”

“More accidents are occurring on B.C.’s roads, and the number and average settlement of claims are increasing. Only recent government intervention has protected B.C. drivers from the currently required 15 per cent to 20 per cent price increases. This rate protection has eroded ICBC’s financial situation to a point where it is not sustainable.

“The average driver in B.C. may need to pay almost $2,000 in annual total premiums for auto insurance by 2019, an increase of 30 per cent over today’s rates,” the report said, adding that assumes that current trends persist, that ICBC is expected to cover its costs from its premiums and that significant reforms are not made.

There are a number of recommendations

The review suggested B.C. could follow the models of New Brunswick, Alberta and parts of Australia by capping payouts for pain and suffering on minor injuries from $4,000 to $9,000, while at the same time increasing accident wage and medical benefits.

It’s also possible to let drivers buy an optional “top-up” coverage that would, in effect, give the drivers back the right to sue to replace any reduced claim money they could have got through the courts.

Minor claims have soared in cost by 365 per cent since 2000 and are eating up 60 per cent of all total injury payouts, says the report. The size of cash settlements for minor injuries is also rising, as is the number of accidents on the road and the cost to fix technology inside modern vehicles.

Of course the Liberals are already accusing the NDP of wanting a “no fault” system – even before the new government has had time to get their feet under the table. The Liberals are also in full damage control mode since it was their decision to cancel photo radar that started the problem. Changing red light cameras to catch speeders would be a relatively easy thing to do, but the real speed problems are out on the open road. The intersection issues arise from contempt for other basic rules of the road, lack of common courtesy and patience, and an almost total absence of common sense.

Unfortunately there is no mention of the interval camera system. This uses existing technology, widely used in traffic surveys, to match number plates over a fixed distance. The owner of the vehicle gets a ticket when the car has covered the distance between two cameras in much less time than the posted speed limit allows. This system is more effective than the single-point measurement of the old photo radar, which was housed in a fairly distinctive vehicle and was thus fairly easy to avoid.
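
To make the arithmetic concrete, here is a minimal Python sketch of the average-speed check behind interval cameras; the 5 km segment, 80 km/h limit, and 150-second interval are illustrative numbers, not from the original post or any actual deployment.

def average_speed_kmh(distance_km, elapsed_seconds):
    # Average speed over the whole segment, which a driver cannot defeat
    # by braking briefly at a known camera location.
    return distance_km / (elapsed_seconds / 3600.0)

distance_km = 5.0   # distance between the two cameras
limit_kmh = 80.0    # posted speed limit on the segment
elapsed_s = 150.0   # same number plate matched at both cameras, 150 seconds apart

speed = average_speed_kmh(distance_km, elapsed_s)
if speed > limit_kmh:
    print("ticket: averaged %.0f km/h over %.1f km" % (speed, distance_km))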

I think another reform is not just capping the amount allowed for minor claims, but also banning the present practice of lawyers advertising for claimants and being paid a share of the payout. The incessant repetition of these ads during the CBC 6pm TV news means I now know them off by heart. And the message is that you can make ICBC increase the payments they offer if you sign on with the named lawyer. Of course, what it does not say is that the increase in the settlement goes to the lawyer and not the plaintiff. I find these practices offensive; they have only been permitted in recent years and should be reversed. It is simply wrong to expect to make a profit from the suffering of others — and I think these adverts get very close to encouraging people to exaggerate their claims.

The mismanagement of crown corporations under the previous government is going to take some time to correct. If I were advising the BC Liberals, I would tell them to tone down the attacks, when clearly the current government has to do what it can to sort out the mess the Liberals left behind. The current tone taken by my local MLA Andrew Wilkinson, Liberal MLA for Vancouver-Quilchena is not one that is going to win him much support. Except from Mr Toad who enjoys speeding and relishes crashes as exciting intervals in an otherwise dull existence.


Filed under: cars, Road safety, Transportation Tagged: BC Liberal Party, Ernst & Young, ICBC, photo radar, speed cameras
24 Jul 22:14

Most People Are Attracted To Success

by Richard Millington

Understand most people are attracted to success.

Almost nobody wants to be in the group at the beginning.

Why would they? There isn’t much there. Just a few discussions, no sense of community, and no track record of doing amazing things.

But success, wow, that’s attractive. Everyone wants to embrace the group identity when you’re successful.

Outside of customer support communities, the best way to build a community is to begin with the true believers. These are the people who closely share your vision for what the community could be and want it to be something new and terrific.

And the best way to find and nurture true believers is to invest the time beforehand to build up as many close relationships as possible. They will either be infected by your energy or reveal themselves through early interactions to be true believers.

It’s not the influencers you want to invite first; they’re almost never going to be great early members. It’s those with the most passion for the project. Begin there and you’ll find success follows.

24 Jul 22:14

Revamped Google Feed on Android Rollout Delayed Due to Deep Integration with System

by Rajesh Pandey
Last week, Google announced an AI-powered news feed for its Google app for Android and iOS. The new feed was supposed to be rolled out to Android users over the next few weeks. However, that rollout has hit a snag, and Google has delayed its rollout plans. Continue reading →
24 Jul 22:14

Week 127 of chemo: Cancer levels remain stable – Goodbye Dexamethasone!

by tyfn


I look at my chemo treatment as walking along a road, carrying rocks that represent my pomalyst and dexamethasone. However after 127 weeks, my time with dexamethasone has come to an end. It no longer weighs me down. I’m letting it go.

I’m moving forward with a smile on my face and a spring in my step.

My July blood test results show that my M protein (cancer levels) remain stable at 3.0 g/L.

The chemo continues to be effective and I’m feeling happy.

M protein (g/L)
July = 3.0
June = 3.2
May = value missing
Apr = 3.0
Mar = 3.0
Feb = 3.5
Jan = 3.3
Feb 2015 (pre-chemo) = 36.1

My Multiple Myeloma Specialist stopped my dexamethasone (dex) treatment. Tests ordered by my Glaucoma Specialist showed that the dex (steroid) had caused mild right eye damage due to increased eye pressure from cumulative use since 2015. Eye drops prescribed by my Glaucoma Specialist are keeping my eye pressure normal; however, I’m steroid sensitive to dex, meaning glaucoma is an ongoing concern.

I’m hopeful that my cancer levels will remain stable without major side effects from the Pomalyst chemo only. I’m also remaining optimistic that the symptoms from my cancer will remain manageable. I’m extremely happy to get off dex as the mental and physical side effects are pretty brutal.

I’m thankful that I am able to go online the next day to view my lab results. I’m focusing positive energy on keeping my cancer levels stable, and I try to remain calm and keep my stress levels low each day. I’m also focused on eating as healthily as possible. When I started in February 2015, my cancer levels were 36.1 and now they are 3.0.

My Hematology profile (how my body responds overall to being on treatment) looks good.

Hematology Profile

Date              WBC          Hemoglobin   Platelet Count   Neutrophils
Reference Range   4.0 – 11.0   135 – 170    150 – 400        2.0 – 8.0
Jul 2017          4.6          136          323              3.6
Jun 2017          5.2          131          312              4.3
May 2017          5.1          132          303              4.1
Apr 2017          6.6          127          294              3.7
Mar 2017          5.1          130          303              4.2
Feb 2017          4.8          132          324              3.6
Jan 2017          4.8          136          304              3.7
Dec 2016          6.7          128          303              3.4

To recap: On Sunday, July 16th, I completed Cycle 32 Week 3. I have Multiple Myeloma and anemia. Multiple myeloma is a rare cancer of the immune system: a cancer of the plasma cells, a type of immune cell found in the bone marrow that produces antibodies to fight infection. As a blood cancer, it is incurable, but treatable. Since February 9th, 2015, I have been on Pomalyst and dexamethasone chemo treatment (Pom/dex). On July 16th, my dexamethasone treatment ended due to eye damage from long-term use, reported by my Glaucoma Specialist.

Weekly chemo-inspired self-portraits can be viewed in my flickr album.

May 2014: Granville Island Sunset

The post Week 127 of chemo: Cancer levels remain stable – Goodbye Dexamethasone! appeared first on Fade to Play.

24 Jul 22:14

So You’re Going To Manage a Data Science Team

by Rui Carmo

I’ve seen things you people wouldn’t believe. Pivot tables on fire off a dashboard. R scripts glittering in the dark near the Hadoop cluster.

But (with apologies to Rutger Hauer for hijacking his amazing monologue) I’ve also seen a lot of data science being done technology-first without taking people or processes into account, and I thought I’d lay down some notions that stem from my experience steering customers and partners through these waters.

As a budding corporate anthropologist, (recovering) technical director and international cat herder, I am often amazed at how much emphasis is placed on technical skills and tooling rather than on actually building a team that works.

And as an engineer by training (albeit one with a distinctively quantitative bent), I am fascinated by the number of opinions out there on the kind of technology, skill sets and even the kind of data required to make a data science team successful, because there’s actually very little hard data on which of those are the critical factors.

So I’m going to take a step back from the tech and science involved and look at the way the process should work, and some of the things you should consider when running a data science team regardless of your background.

People, Processes, Technology

A few years back it got hammered into me (by a former CTO of mine) that excellence is a process, and the motto stuck with me because he meant “excellence” in the sense of both personal and team growth rather than riding the tech hype or getting aboard the Six Sigma train.

Mind you, tooling and technology are critical, but you have to look at the wider picture.

Take deep learning, for instance: TensorFlow might be the go-to library at the moment, but Keras will give you a nicer abstraction that also lets you leverage CNTK as a back-end and possibly get faster turnaround when iterating on a problem, so I’d argue it should be the higher-level tool that you (and your team) need to invest in.
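
To make that concrete, here is a minimal sketch of the backend swap, assuming the 2017-era multi-backend Keras package with CNTK installed; the toy model itself is illustrative and not from the original essay.

import os
os.environ["KERAS_BACKEND"] = "cntk"  # must be set before keras is first imported

from keras.models import Sequential
from keras.layers import Dense

# The model definition is backend-agnostic: swapping "cntk" for "tensorflow"
# or "theano" above changes the engine without touching any of this code.
model = Sequential([
    Dense(64, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])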

If you take the long view, running the gamut from purely statistical/regressive approaches to RNNs implies a deep commitment not just in terms of learning the science behind them, but also about understanding where they fit in in the range of challenges you have to address.

And believe me, choosing tools is not the one you need to tackle first - what you should tackle first is your team, and then the context in which it operates.

The First Mistake

The first mistake organizations (and managers) make is thinking that the data scientists reporting to you are your whole team.

No matter how much people go on about matrix management and the need for cross-functional teams, there is a natural human tendency to sort out people (and things) into nice, tidy bins, and when you have to motivate and drive people, there’s an added bias involved - after all, as a manager, your primary role is to make sure the team you’ve been assigned works cohesively, and in data science these days (especially in companies new to the field) there’s also a need to prove your worth.

And by that I mean the team’s worth - you might be a whiz in your own right, but your job is to make sure your team delivers, and that the goalposts and expectations are clearly defined both inside and outside your direct reports.

So your actual team comprises stakeholders of various kinds - product owners, management, and (just as importantly) everyone else in technical roles, because what you do (and the insights you obtain) inevitably impacts the rest of the business and how it’s built/implemented/deployed/etc. You don’t exist in a vacuum, but rather are the conduit between what data you have (or, more often, don’t) and what the business needs to improve (and I’m deliberately avoiding the reverse flow here, which is when you’re tasked by the business to improve something that’s already implemented).

I’ve seen a very similar thing happen before during the Big Data hype, and the way we tackled that successfully was by setting up “pods” of people to address each specific problem - each pod being composed of the usual triumvirate of a data scientist (who is usually a direct report to you), an implementer (who might or might not be) and a domain expert (who is usually a product owner or a business stakeholder).

I use the term “implementer” above because depending on the issue you’re tackling, the problem domain might require:

  • quick iteration on data conversion (in which case you’d fill that role with a developer or DBA)
  • putting together a data visualization (a front-end developer or a designer)
  • or figuring out how to deploy a model at scale (an architect or devops whiz)

Either way, the net effect is that in more formal organizations you will find yourself having to mix and match schedules with your management peers, so it helps if you can communicate clearly about what the overall goals are and what sort of skills you need to tackle a particular challenge.

Living up to the role, inside and out

You have plenty of hands-on experience, your team looks up to you, and you let yourself get involved in all sorts of discussions regarding architecture, feature engineering, algorithm selection and model evaluation - and that is fine and good, except that management has nothing to do with that.

Running a team requires you to go out of your comfort zone and juggle priorities, commit to deadlines, steer people’s careers and all the messy, unscientific trappings that come with leading people and delivering results in a business environment.

The key thing here is to avoid falling prey to impostor syndrome - remember, you got the job for a reason, right? And being a manager doesn’t mean you stop doing science work - in fact, you will likely be doing a lot more science work than you usually do (but at a higher level), simply because you need to understand what your extended team is doing, identify pitfalls or roadblocks, and steer people in the right direction.

And to do that, you need to learn how to communicate effectively - not just inside your team, but outside it, and quite often to people who don’t have the same kind of background (technical or otherwise).

Putting processes in place

With work coming in and all your pods in a row, your team starts building a pipeline of challenges to go through (usually with multiple sub-challenges as you start drilling down on things). So how do you farm those challenges out to your team while keeping everyone happy?

Well, before we tackle that, we need to take a step back and think about how people will most likely be spending their time.

There are essentially two constraints involved in scaling up a data science team, and they both boil down to time: time spent understanding the problem and working out a solution, and time spent implementing and rolling it out.

In practice, what generally happens is that:

  • 80% of your team’s time will be spent wringing meaning out of the data: hunting down datasets, doing ETL and doing initial feature selection. The first part is by far the least glamorous part of the work, but ties in well with the human mind’s need to get an intuitive feel for the data and problem domain, so you should step out of the way. By all means do daily stand-ups (if that’s your thing), sit in with the team and discuss what they’re doing, but try to only bring your own expertise to bear when asked/required - don’t micromanage people, but help them take the long view and turn what they’re doing into a repeatable process.
  • The remaining 20% of the time is usually spent figuring out how to make your data and models available to the rest of the company - and this is where most data science teams skimp.

As it happens, there is a lot more to delivering data science than churning out reports and dashboards. So you (as a manager) should expect to spend a lot of time (probably up to 80% of it, in the early days) going over those 20% above and working with your team to turn what they did into a repeatable, measurable process, either by leveraging continuous integration tools to compare models between iterations or by defining checks and balances: What features were added to the model? How does it perform against new datasets? How fast will it become stale given the rate at which your data (or your business process) is updated?
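
As a hedged illustration of such a yardstick, here is a minimal sketch of a CI-style model regression check using scikit-learn; the models, the AUC metric, and the 0.01 tolerance are illustrative choices, not prescriptions from this essay.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in data; in a real pipeline this would be a versioned validation set.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# The incumbent model sets the bar...
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline_auc = roc_auc_score(y_val, baseline.predict_proba(X_val)[:, 1])

# ...and a candidate must match it (within a tolerance) before it ships.
candidate = RandomForestClassifier(random_state=0).fit(X_train, y_train)
candidate_auc = roc_auc_score(y_val, candidate.predict_proba(X_val)[:, 1])

assert candidate_auc >= baseline_auc - 0.01, "candidate model regressed"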

A good manager knows there is a balance to be struck between the “fail fast” approach and not rushing into things - by all means have people go and experiment with new methods (that’s a big part of keeping people happy), but define the yardstick you’re going to measure the results with - do we get faster turnaround out of the new model or tools? Does it perform much better without impacting performance? Do we get side benefits like automatic removal of biases or preventing overfitting?

It might take considerably longer in the beginning depending on your company, resources and processes, but the idea here is that as you start delivering solutions you will be building a set of APIs, infrastructure or datasets that other people will consume, defining a roadmap for those, and iterating upon them - so the bridges you build with other teams will be invaluable here.

Growing your team, with science!

Soon enough, you’ll figure out who has the knack or expertise to tackle specific kinds of problems. Instead of assigning them to everything that looks like a nail, though, take those people and pair them with someone else who’s never done it before.

Have them broaden their horizons and, again, turn that into a repeatable process, but also present it to their peers. Do step in to steer the discussion, but remember that personal growth comes from learning new things and passing them on, and that your team will be more effective (and happy) if processes are clear to everyone and if they value those processes as an integral part of their work.

Don’t get caught up in processes as efficiency, or start tallying up KPIs for their own sake - rather, think of processes as adding structure (and thus meaning) to the work you’re all doing, and take a leaf (or two) from the Kaizen handbook.

Make Data Science part of your company culture

Once everything else is in place, the best way to make sure the organization understands your team’s role involves reaching outside it - which means leveraging your communications skills yet again to:

  • Foster a data-centric culture in other teams, making it plain that it is not really about storing heaps of raw data, but about making sure the datasets you have are clearly identified and easy to get at (with the usual caveats about personally identifiable information and proper data hygiene in that respect).
  • Agree on common data representations (or bridge formats) that can be exchanged with minimum development/integration overhead, and on APIs for other teams to access the trained models your team produces.
  • Address the really hard problems, like moving from batch-oriented processes to event streaming. Fraud detection, recommendation engines, and other staples businesses rely on require instant access to data, and (speaking from experience) there is nothing like dealing with streaming data, both technically and from a business perspective.
  • Understand what the business wants. Nothing that is really necessary is actually impossible (even if it seems hard given the hype around data science these days), and there will be a lot of patiently steering people back to the realm of feasibility, but remember, you were chosen as a manager because you are good at building bridges, in all respects.

Above all, don’t freak out - you’re still doing data science

Even if some of the above doesn’t come naturally to you at first, don’t worry. You’ll be fine, as long as you keep re-training your own mental model of what role your team (and you) have to play in the larger picture.

And rest assured that you will be able to spend a lot of time doing actual data science, if only because most of the business and team-related aspects outlined above evolve a lot more slowly than you’d expect - there’s no gradient descent algorithm for optimizing human organizations, and, all things considered, I think that’s a good thing.

This essay originally appeared on LinkedIn on July 2nd, and then a week later on Medium.

24 Jul 22:13

A Quiet Day At The Office

by Rui Carmo
Early morning soon after FY close
24 Jul 22:13

Last Futures: From Web 2.0 Utopia to Platform Capitalism

by Reverend

I am presenting at Deakin University in Melbourne, Australia in about two hours, and I have been working on my talk pretty diligently over the last 24 hours. It will work in a bunch of my favorite topics such as UMW Blogs, ds106, Domains, and more, but this will be a bit different (or maybe not) given I have been encouraged to speak specifically to what is possible with WordPress. I have been a fairly unapologetic fanboy of WordPress for the last 12 years, so that won’t be hard—plus it is a constant source of pride to have picked the winning horse early on 🙂

In fact, while thinking about the work that I’ve been a part of over the last 12 years (and stumbling around the internet) I found a pretty useful aesthetic analogy to help make the shift in edtech sensible for a community exploring the open web for teaching and learning at this moment. It started with wanting to use images from the 70s Sci-fi Art Tumblr to bookend my presentation. This blog is one of my constant inspirations these days, and it struck me how easy it was to map the current anxieties around edtech onto just random recent images, such as surveillance:

Virtual reality:

Control:

Sustainability:

Behaviorism:

Etc.

These awesome visuals mainly focused on a kind of dystopia or conflict, whereas the utopian visions of that era are a bit harder to find on the 70s Sci-fi Art Tumblr, which led me to search for the utopian aesthetic of that era, which resulted in the discovery of this book review in the Guardian by Andy Beckett of Doug Murphy’s Last Futures. The article tagline dragged me in and made the immediate analogy I was looking for about Web 2.0 and its aftermath:

How the shining architectural optimism of the 1960s and 70s has ultimately produced buildings such as supermarkets, open-plan offices and other spaces of control.

This immediately made me think about Audrey Watters’s ongoing crusade as Cassandra for us in ed-tech to understand the transition we are going through, as well as Chris Gilliard’s recent piece in the EDUCAUSE Review, “Pedagogy and the Logic of Platforms,” that spells so much of this out so beautifully. Brian Lamb’s recent reading of Gilliard’s piece (amongst others) puts it well:

Critics such as Gilliard, Cottom, Audrey Watters articulate a wider sense that the web has not only failed to achieve the breathless utopian ideals of a space in which traditional power relationships would be challenged, it is increasingly a mechanism for power to exert itself in ways that were unimaginable until recently. Higher education seems resigned to accepting the fundamental logic of surveillance capitalism as it stands, without asserting competing values or working to address its ill effects.

This is our moment in ed-tech, a far cry from the breathless utopia I felt possible in the early stages of Web 2.0, and it turns out the spaces we thought we were building for open, expansive teaching and learning were being re-engineered for “personalization,” which is an edtech code-word for mining personal data from students and faculty alike. And this subversion of utopian visions to serve consumerism is exactly what Murphy’s Last Futures seems to argue in the transition from the utopian dream of 60s and 70s architecture to the hijacking of that vision by the late logic of capital:

During the 80s, most of the utopian architectural schemes of the previous two decades were so quickly forgotten or derided – “Nothing dates faster than people’s fantasies about the future,” sneered the art critic Robert Hughes in 1980 – that it was almost as if they had never existed.

As an early Web 2.0 proponent, that hurts.

The flexible, socially responsive sort of building first conceived by progressive postwar architects lives on … but it has mutated into the supermarket, the open-plan office, the distribution warehouse – not usually spaces of liberation but of control.

So, this book review opened up an interesting historical analogy for how the utopian vision expressed through architecture in the 60s and 70s quickly became the model for capitalist platforms in the 80s and 90s. For me it’s a striking analogy that helps me understand our own moment on the web a bit more clearly: the realization that the utopian vision of open education and community spaces for learning would quickly become the sites of corporate control in the not too distant now.

Anyway, that is where my thinking is for this talk; we’ll see how it goes, but it is a good reminder of how crucial aesthetics are for me in making sense of any of this. Might be why Bryan Mathers’ recent NDGLE art is featured so prominently in my slides today 🙂

24 Jul 22:13

Tags and case

by Lauren Wood

For a while there XML.com didn’t handle tags on submitted news items very well. If a tag was included that was in a different case to an existing tag, the preview and publish would result in a 500 server error. Fortunately this was something that wasn’t visible to the outside world, but annoying nonetheless.

Wagtail allows case-insensitive tags, and I had already turned that on (it would be confusing to have searches for the tags “XSLT” and “xslt” return different results, for example). Articles and news items submitted using the standard interface behaved properly, it was just the news items submitted by people without logins on the system that didn’t.

It turns out that the problem lay in the way I called the get_or_create() method, which is used to look up the tags in the database and then create them if they don’t exist. In my code, that looked like this:

tag, create = Tag.objects.get_or_create(name=tag_name)

By default, this is a case-sensitive method (as it should be, for the general case). To make the lookup case-insensitive, you use name__iexact instead of name. The next problem I found was that no tags were being created if the tag didn’t already exist in the database. To create the tag, if you’re using name__iexact instead of name for the tag lookup, you also need to give the get_or_create() method a defaults parameter to use when creating the tag. Now that line looks like this:

tag, create = Tag.objects.get_or_create(defaults={'name': tag_name},
                                        name__iexact=tag_name)

and it all works the way it’s meant to.
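
For context, here is a self-contained sketch of the fixed lookup wrapped in a helper function, assuming django-taggit's Tag model (the tagging library Wagtail builds on); tag_name is a hypothetical string coming from the submission form.

from taggit.models import Tag

def get_or_create_tag(tag_name):
    # Match case-insensitively; if nothing matches, create the tag with the
    # exact casing the submitter used (supplied via the defaults parameter).
    tag, created = Tag.objects.get_or_create(
        defaults={'name': tag_name},
        name__iexact=tag_name,
    )
    return tag

# "XSLT", "xslt", and "Xslt" all resolve to the same Tag row.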

24 Jul 22:13

maui’s gorgeous coastline @sunshinehelicopters @hawaiimagazine @gohawaii

by Emily Chang


Photo Caption: maui’s gorgeous coastline @sunshinehelicopters @hawaiimagazine @gohawaii

Photo taken at: Maui

Instagram filter used: Clarendon

View in Instagram ⇒

24 Jul 21:29

An In-Depth Look at the Parity Multisig Bug

by Lorenz Breidenbach and Phil Daian and Ari Juels and Emin Gün Sirer

Multiple signatures are better than one.

This year’s IC3-Ethereum bootcamp brought together the Ethereum Foundation’s top developers, IC3 students and faculty, and dozens of others from industry and academia. We worked together over the course of a week on ten exciting, intensive development projects, and ended with a bang on the last day. The Ethereum wallet rescue group (with a little help from a couple of IC3ers) scrambled to respond when 153,037 Ether (worth $30+ million) was stolen from three large Ethereum multisig wallet contracts.

The MultisigExploit-Hacker (MEH), as he, she, or they are known, exploited a vulnerability in the Parity 1.5 client’s multisig wallet contract. A fairly straightforward attack allowed the hacker to take ownership of a victim’s wallet with a single transaction. The attacker could then drain the victim’s funds, as happened in these three transactions once the wallets were compromised. The victims were three ICO projects: Edgeless Casino, Swarm City, and æternity.

In the following we will give an in-depth technical explanation of the hack, describe the white-hat response, and draw some lessons about how such breaches might be prevented in the future.

How the attack worked

There are many reports that the vulnerability was due to the simple omission of an “internal” modifier that made it possible for anyone anywhere to take ownership of an existing wallet due to Solidity’s “default-public” policy. While it is true that the addition of the right modifiers would have prevented the attack, the attack is a little more clever than this would suggest.

The vulnerable MultiSig wallet was split into two contracts to reduce the size of each wallet and save gas: A library contract called “WalletLibrary” and an actual “Wallet” contract consuming the library. Here is a toy version of WalletLibrary:

contract WalletLibrary {
     address owner;

     // called by constructor
     function initWallet(address _owner) {
         owner = _owner;
         // ... more setup ...
     }

     function changeOwner(address _new_owner) external {
         if (msg.sender == owner) {
             owner = _new_owner;
         }
     }

     function () payable {
         // ... receive money, log events, ...
     }

     function withdraw(uint amount) external returns (bool success) {
         if (msg.sender == owner) {
             return owner.send(amount);
         } else {
             return false;
         }
     }
}

WalletLibrary looks pretty boring: Beyond some initialization code that will be called in the constructor of Wallet, WalletLibrary provides the basic functionality you would expect from a wallet: Anybody can deposit money into the wallet, but only the owner can withdraw her funds or change the owner of the wallet.

Here’s an example, simplified contract that could be using this WalletLibrary:

contract Wallet {
    // storage layout must line up with WalletLibrary: owner occupies slot 0
    // in both contracts, so delegatecalled writes land on the right variable
    address owner;
    address _walletLibrary;

    function Wallet(address _owner) {
        // replace the following line with “_walletLibrary = new WalletLibrary();”
        // if you want to try to exploit this contract in Remix.
        _walletLibrary = <address of pre-deployed WalletLibrary>;
        _walletLibrary.delegatecall(bytes4(sha3("initWallet(address)")), _owner);
    }

    function withdraw(uint amount) returns (bool success) {
        // note: the canonical ABI signature spells the type out as uint256
        return _walletLibrary.delegatecall(bytes4(sha3("withdraw(uint256)")), amount);
    }

    // fallback function gets called if no other function matches call
    function () payable {
        _walletLibrary.delegatecall(msg.data);
    }
}

This time, the code looks more complex. Notice the use of delegatecall throughout the contract. delegatecall is designed to enable the use of shared libraries, saving precious storage space on the blockchain otherwise wasted in duplicating widely used, standard code.

delegatecall works by executing the program code of a contract in the environment (and with the storage) of its calling client contract. This means that the library code will run, but will directly modify the data of the client calling the library. It essentially is as if the code of the library had been pasted into the client issuing the delegatecall. Any storage writes inside the delegatecall will be made to the storage of the client, not the storage of the library. delegatecall allows a client contract to delegate the responsibility of handling a call to another contract.

At the EVM level, a contract is just a single program that takes a variable-length binary blob of data as input and produces a variable-length binary blob of data as its output. A transaction in EVM provides an address and some data. If the address holds code, this data is used in a “jump table”-like structure in the beginning of the contract’s code, with some of the data (the “function selector”) indexing jumps to different parts of contract code using the standard encodings described in the Ethereum Contract ABI specification. Above, the function selector for calling a function called initWallet that takes an address as its argument is the mysterious looking bytes4(sha3("initWallet(address)")).
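
You can reproduce that selector off-chain with a few lines of Python, assuming the eth-utils package; note that Ethereum's sha3 is Keccak-256, not the finalized NIST SHA-3, so hashlib.sha3_256 would give a different digest.

from eth_utils import keccak

# The selector is the first four bytes of the Keccak-256 hash of the
# canonical function signature.
selector = keccak(text="initWallet(address)")[:4]
print(selector.hex())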

Now that we understand delegatecall and how function selectors work, we can read and understand the Wallet contract. To begin with, we have a simple constructor that delegates the initialization of the contract’s state to WalletLibrary. This is followed by a withdraw function which -- once again -- delegates its task to WalletLibrary.

Finally, we want our wallet to be able to receive funds. This is commonly handled with a Solidity construct called a fallback function. A fallback function is a default function that gets executed when a call’s data doesn’t match any function in the lookup table. Since we might want all sorts of logic to be triggered upon the receipt of funds (e.g. logging events), we again delegate this to WalletLibrary and pass along any data we might have received with the call. This data forwarding also exposes WalletLibrary’s changeOwner function, making it possible to change the Wallet’s owner.

Wallet is now completely vulnerable. Can you spot the vulnerability?

You might have noticed that the initWallet function of WalletLibrary changes the owner of the Wallet. Whoever owns the Wallet can then withdraw whatever funds are contained in the contract. But as you can see, initWallet is only executed in the constructor of Wallet (which can only run once, when Wallet is created), and Wallet doesn’t itself have an initWallet function. Any calls sending this function selector to Wallet in a transaction’s data won’t match.

So, the attacker cannot call Wallet.initWallet(attacker) to change the owner of the wallet and we are safe after all‽

As it turns out, the implementation of the fallback function means that we are not. As we already saw, a fallback function is a default function that gets executed when a call’s data doesn’t match any function in the lookup table. The intent is to allow contracts to respond to receipt of funds and/or unexpected data patterns; in our wallet, it both enables funds receipt and allows for functions to be called in the wallet library that are not explicitly specified in the wallet.

One use of such a “generic forward” feature could be upgradeability, a pattern perhaps even recommended as a security precaution in case functions not anticipated at release time are needed later: By allowing the wallet’s owner to change the address of the library contract, one could keep the wallet with its funds at the same address while changing the underlying implementation. In the Parity contract, it is likely that forwarding data was only done to save gas costs, as the contract is not upgradeable and all forwards could have been made explicit at compile time. Instead, this implicit default forwarding was used over explicit forwarding to expose certain functions like revoke.

So, when an attacker calls Wallet.initWallet(attacker), Wallet’s fallback function is triggered (Wallet doesn’t implement an initWallet function), and the jump table lookup fails. Wallet’s fallback function then delegatecalls WalletLibrary, forwarding all call data in the process. This call data consists of the function selector for the initWallet(address) function and the attacker’s address. When WalletLibrary receives the call data, it finds that its initWallet function matches the function selector and runs initWallet(attacker) in the context of Wallet, setting Wallet’s owner variable to attacker. BOOM! The attacker is now the wallet’s owner and can withdraw any funds at her leisure.
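
For the curious, here is a hedged sketch of how such call data is assembled off-chain for the toy contract above (the address below is a hypothetical placeholder, and eth-utils is again assumed): the ABI encoding is just the four-byte selector followed by the address argument left-padded to 32 bytes.

from eth_utils import keccak

attacker = "0x" + "ab" * 20  # hypothetical 20-byte address
selector = keccak(text="initWallet(address)")[:4]

# ABI-encode the single address argument: left-pad it to a 32-byte word.
arg = bytes.fromhex(attacker[2:]).rjust(32, b"\x00")
calldata = selector + arg
print(calldata.hex())  # 4 + 32 = 36 bytes of transaction data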

In reality, the initWallet function was more complicated and took more parameters, but the principle of the attack is exactly the one described above. You can see one of the initWallet calls of the attacker; the attacker then immediately withdrew the funds, earning her 26,793 ETH (or ~6.1 million USD). Not bad for two function calls and 60 cents in gas cost!

If initWallet had been marked as internal, there would have been no corresponding entry in the jump table, making it impossible to call initWallet from outside the contract. If initWallet had checked for double initialization, there would also have been no problem. Adding to the confusion is the fact that certain functions that are supposed to be callable from outside the contract are marked as external; one could easily wrongly assume that all functions not marked as external aren’t visible from the outside. In Parity’s patch to the vulnerable contract, a modifier was added to a helper function of the vulnerable wallet initialization process to throw an exception if the attacked initialization function was re-called.

The response

A white-hat recovery team (MEH-WH) identified and drained all remaining vulnerable wallets into this wallet. They recovered a total of $78 million worth of tokens (half of the value in BAT and ICONOMI) plus 377,105+ ETH (around $72 million). The funds will be returned to their owners, as noted on r/ethereum:

If you hold a multisig contract that was drained, please be patient. [The MEH-WH] will be creating another multisig for you that has the same settings as your old multisig but with the vulnerability removed and will return your funds to you there.

This is all well and good for the recovered funds, but the stolen funds are in all likelihood unrecoverable. The Ethereum community cannot, for instance, easily execute a hard fork as it did in the case of The DAO: The DAO had a built-in 34-day delay, during which the stolen funds were locked in the contract and subject to recovery. The MEH need only find compliant exchanges to cash out, or convert the loot to ZEC, to keep the stolen funds with full anonymity. The MEH has already cashed out small amounts, as in this roughly 50 ETH transaction at Changelly.

Unless the hacker trips up, the community will have to resign itself to the loss of the money (more than was taken in any U.S. bank robbery) and to an enduring mystery over the identity of the thief or thieves. In case you're wondering, none of our ten bootcamp projects involved stealing ETH from multisig wallets. :)

Lessons

Looking at the history of the vulnerable contract in Parity’s github repository, we find that the contract was first added as a complete blob of code on December 16 of last year in commit 63137b. The contract was edited extensively once, on March 7 in commit 4d08e7b and then wasn’t touched until the attack occurred. Since the contract was originally added as one big blob, it was likely copied from somewhere else, making its provenance in development unclear. Note that the first version of the contract already contained the vulnerable code.

It is hard to believe that such a large (and valuable!) vulnerability could have gone undiscovered for so long. In light of the contract's length, however, and of the complex interplay between Solidity's fallback functions, its default-public function visibility, delegatecall, and call data forwarding that enabled the attack, it seems less surprising. At least one other multisig contract had an analogous bug stemming from the lack of a function modifier.

We believe that there are multiple levels on which lessons should be drawn from this attack:

First of all, we recommend that Solidity adopt default-private visibility for contract functions. This change alone would likely have prevented this exploit and others like it. It may also be an opportunity to batch a number of other safety- and usability-related changes, much-needed additional types, and solutions to common gripes into Solidity. It is an opportune time, too, to think about versioning at the source-language level, so that new features can be introduced into the language without worrying about backwards compatibility.
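With default-private (or at least mandatory-explicit) visibility, the dangerous spelling would simply fail to compile, and the safe intent would have to be written out, as in this sketch of the simplified library:

    contract SaferWalletLibrary {
        address public owner;

        // internal: no jump table entry, unreachable via external call data,
        // including data forwarded by a fallback's delegatecall.
        // (Initialization then needs another path, e.g. the constructor of
        // a contract inheriting the library.)
        function initializeWallet(address _owner) internal {
            owner = _owner;
        }

        // external: deliberately part of the public interface.
        function withdraw(uint _amount) external returns (bool) {
            if (msg.sender != owner) throw;
            return owner.send(_amount);
        }
    }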

In a more general sense, we believe that this attack was the result of security's oldest enemy, complexity: the missing "internal" function modifier would likely have been discovered by the developers if Wallet had been a single contract instead of delegatecalling out to WalletLibrary. Even without the modifier, Wallet would not have been vulnerable had its fallback function not unconditionally forwarded all call data to WalletLibrary, exposing unexpected functions able to modify the data in Wallet.

Interestingly, this specific attack may not have been caught by testing as most developers practice it, since the vulnerability was caused not by a function behaving incorrectly, but by the unexpected exposure of a (correctly behaving) function to the public. We nevertheless, of course, strongly recommend that smart contract authors test their contracts thoroughly; a test policy that checked the visibility of every function would have exposed the issue.
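Such a check can even live on-chain as a Solidity contract, though test frameworks of the era usually drove it from JavaScript. This sketch, written against the simplified wallet above, replays the attacker's exact move and passes only if it has no effect:

    pragma solidity ^0.4.11;

    contract WalletInterface { function owner() constant returns (address); }

    contract ExposureTest {
        // Returns true iff an outsider cannot change the wallet's owner by
        // replaying the initialization call.
        function testCannotReinitialize(address wallet) returns (bool) {
            address ownerBefore = WalletInterface(wallet).owner();
            wallet.call(bytes4(sha3("initializeWallet(address)")), this);
            return WalletInterface(wallet).owner() == ownerBefore;
        }
    }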

The creation of a rigorously followed best-practices guide for testing and smart contract review, one that requires visibility assumptions to be made explicit and tested, is thus, we believe, one of the strong lessons of this attack. Today it is not uncommon to see large contracts deployed with only a handful of unit tests, little to no reasoning about interactions between contracts, and unit tests that do not even achieve full statement or decision coverage. Beyond these glaring omissions of basic software quality practice, it is apparent that work remains to be done on best practices for high-level tooling and language design for smart contracts.

The Parity project has released a post-mortem giving a high-level overview of the attack and discussing the steps Parity will take to ensure that such an attack cannot happen again. Many of their conclusions agree with those we have drawn here.

Acknowledgements

We would like to thank Everett Hildenbrandt of the KEVM project for his feedback and helpful suggestions on explaining the attack.

24 Jul 17:02

Microsoft is killing MS Paint after 32 years, sort of

by Patrick O'Rourke
MS Paint

It looks like Microsoft plans to kill its historic MS Paint program when the Windows 10 Fall Creators Update rolls out, an update that also adds a variety of new features to the operating system.

Paint, the app everyone used to doodle with when they were supposed to be doing school work, was originally introduced with Windows 1.0 in 1985. The program started its life as a 1-bit monochrome licensed version of ZSoft's PC Paintbrush, and it wasn't until Windows 98 that the iconic software could even save a JPEG file.

With the last Windows 10 Creators Update, Microsoft introduced a new version of the app called Paint 3D, which supports both 2D and 3D editing, so in a sense Paint will live on to some extent.

Alongside apps like Outlook Express, the Reader app, and Reading List, Microsoft Paint has been added to the list of "features that are removed or deprecated in Windows 10 Fall Creators Update."

While Paint was never a particularly capable app, for many it served as a childhood introduction to using a mouse, as well as to art programs in general. The most recent version of Paint, for Windows 7, added a variety of new features to the program, but its functionality still wasn't even up to par with free web-based paint apps, let alone Adobe's Photoshop platform.

Pour one out for Paint.

Source: Microsoft Via: The Guardian

The post Microsoft is killing MS Paint after 32 years, sort of appeared first on MobileSyrup.