Shared posts

14 Nov 14:33

S9:E5 - Why you should understand user interface and design (Mina Markham)

No matter how good an idea you have for a product, if the design isn't executed well and people don't like the interface, the product might as well not exist. To talk about the importance of good user interfaces and design, we brought in Mina Markham, senior engineer at Slack and creator of the Pantsuit user interface library for Hillary Clinton's 2016 campaign.

Show Links

Mina Markham

Mina is actively involved in the tech community, teaching for Black Girls Code and founding the Dallas chapter of Girl Develop It and DFW Sass. In addition, she has presented at various conferences, including Front-End Design Conference, Midwest.io and Distill. Lastly, she's co-organizer of Front Porch, a conference on front-end web technologies for developers, designers, entrepreneurs, and managers.

14 Nov 14:33

S9:E7 - How do you create visual recognition software ethically and responsibly? (Nashlie Sephus)

At the time of this recording, the New York Times released a report titled "As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias," which discussed issues in machine learning such as algorithmic bias and facial recognition software giving more false matches for black people than for white people. We chat with Nashlie Sephus, formerly CTO of Partpic (acquired by Amazon in 2016) and now an Applied Science Manager at Amazon Web Services, about her journey into machine learning, developing Partpic, and tackling some of the ethical issues in machine learning in her new role at Amazon.

Show Links

Nashlie Sephus

Dr. Nashlie Sephus is currently an Applied Science Manager at Amazon AI in Atlanta, where she was formerly CTO of the startup Partpic (acquired by Amazon in 2016). She focuses on computer vision, machine learning, and fairness and bias in AI, and is the founder of the Mississippi-based non-profit The Bean Path (https://thebeanpath.org). She received her B.S. in computer engineering from Mississippi State University (2007) and her Master's and Ph.D. from Georgia Tech (2014).

09 Aug 02:04

Let’s think more broadly about computing education research: Questions about alternative futures

by Mark Guzdial

At the Dagstuhl Seminar in July, we spent the last morning going broad.  We posed three questions for the participants.

Imagine that Gates funds a CS teacher in every secondary school in the world, but requires all new languages to be taught (not Java, not Python, not R, not even Racket). “They’re all cultural colonialism! We have to start over!” says Bill. We have five years to get ready for this. What should we do?

Imagine that Oracle has been found guilty of some heinous crime, that they stole some critical part of the JVM, whatever. The company goes bankrupt, and installation of Java on publicly-owned computers is outlawed in most countries. How do we recover CS Ed?

Five years from now, we’ll discover that Google has secretly been moving all of their infrastructure to Racket, Microsoft to Scala, and Amazon to Haskell (or swap those around). The CS Ed world is shocked — they have been preparing students for the wrong languages for these plum jobs! What do we do now? How do you redesign undergrad ed when it’s not about C++/C#/Java/Python?

We got some pushback. “That’s ridiculous. That’s not at all possible.” (I found amusing the description of us organizers as “Willy Wonka.”) Or, “Our goal should be to produce good programmers for industry — PERIOD!”

Those are reasonable positions, but they should be explicitly selected positions. The point of these questions is to consider our preconceptions, values, and goals. All computing education researchers (strike that: all researchers) should be thinking about alternative futures. What are we trying to change and why? In the end, our goal is to have impact. We have to think about what we are trying to preserve (and it’s okay for “producing industry programmers” to be a preserved goal) and what we are trying to change.

07 Aug 02:51

Tightening constraints

by Chris Corrigan

I live on a small island in the sea with a very complicated water supply. We have some community water systems, and a complex geology that means that many people are on wells, and nearly every well seems different. As our population increases, and as the moisture decreases, we are finding ourselves subject to more and more restrictions on what we can do with water. This is as it should be. We cannot live on our island beyond our limits, with a bigger water footprint than the water we have available to us. In the past, you were free to run taps as long as you wanted. Now we are metered, and in some neighbourhoods there are daily use restrictions. Signs at the entrance to these areas say “Conserve Water Or Have None.” It’s not an alarmist message. It is true.

One of the arguments I often hear people using against things like climate change mitigation is that it will somehow restrict their freedom. Libertarians, for whom all taxation is theft, protest against carbon pricing as a tax grab, even though it was always the preferred mechanism of free market economists. Oil companies and manufacturers complain about excessive regulation of fuel standards and emissions, and consumers object to high prices which limit what they are able to do.

Climate change requires a radical shift in the way economies and societies work, and it’s interesting to look at this from a complexity perspective. Ideally in a society you want to manage complex dynamics with complexity based policy solutions. In other words, instead of dictating behaviours, it’s better to influence behaviours by incentivizing things that lead in a positive direction and dis-incentivizing things that lead in a negative direction. This can be done with laws, regulations, pricing incentives, policies, and taxation. These attractors and boundaries create the conditions for behaviour change.

The free market is indeed a self-organizing mechanism, but it is also amoral. There is a reason why, even in the United States where gun ownership is a right, plenty of weapons and firearms are highly restricted or outright banned. There is a good reason why it is illegal to dispose of PCBs and dioxin in the atmosphere, despite the fact that for years companies used the fact that air wasn’t taxed to dump their waste products. So markets are regulated and behaviours change. That is a complexity-based approach to trouble.

In chaos, the only response is a massive imposition of constraints and restriction of people’s freedom. Think of a situation in which you might have required a first responder like a paramedic. If you are injured, you will accept a high degree of control over your life in order to stabilize the situation. First responders sometimes impose extreme levels of command and control to manage a situation. When things are more stable and heading out of chaos, the constraints relax and the complex task of healing or rebuilding or moving on can begin.

The argument I find myself making with folks who object to climate solutions is this: if you think that a simple carbon tax is an infringement on your freedom now, are you willing to trade it for much more brutal constraints on your freedom later? As climate emergencies continue to increase, it is very likely that people will be told where they can live and where they cannot, how they are allowed to travel, how much water they can use, and what they can do with their land. The level of control over people increases with the level of crisis and chaos. At a certain point you simply cannot live free beyond the capacity of your bio-physical system to maintain you. The system imposes the constraints, and you will have no choice but to be told what to do.

For libertarians and others who value personal choice, now is actually the time to get on board with the complexity tools that can help us make choices that limit our impact on the climate. If we fail to influence populations into positive choices now – and it may already be too late – we will find ourselves increasingly being subjected to highly controlled environments later. One way or the other, our freedom to do whatever we want needs to be curtailed. We have lived for decades in unmitigated commercial and economic freedom on the backs of future generations, and the planet is telling us that it’s over. Choose differently now, or be told what to do later.

07 Aug 02:51

Seeding Social Norms

by Richard Millington

You’re not seeding activity, you’re seeding social norms.

Is this a place where members come, ask a question, and leave when they get an answer?

Is this a place where members come when they’ve had a hard day and want to share their struggles and get support from someone else?

Is this a community where members can share their successes and others congratulate them?

Is this a community where members proactively welcome each other and build strong relationships that extend outside of the community? If one member is traveling through another member’s town, are they likely to have coffee?

Remember when you’re in the early stages of building a community that you’re not just seeding early activity, you’re seeding the social norms which will help make your community the indispensable asset it deserves to be.

06 Aug 22:07

How the Park Board Tolerates an Unsafe Space

by Gordon Price
mkalus shared this story from Price Tags:
The routing through the parking lot is always suspect, especially in summer. It’s also weird because there is an access road for park vehicles on the other side of the tennis courts; alas, they still funnel everybody through the lot.

If a work environment is reported to tolerate inappropriate and hostile interactions, in tone or vocabulary, it can be considered an unsafe space – and even debated in the national news.  But here it’s possible for an environment to be physically unsafe and, in the case of the Vancouver Park Board, be considered business as usual.

An example from Peter, an unaffiliated resident who cares about this kind of thing:

On May 30th of this year, Bikehub informed us that the Park Board had decided to implement a “quick fix” this summer to the Seaside Greenway that currently goes through the Kits Beach parking lot (an absolutely disgraceful and very dangerous section of what is otherwise fantastic bike infrastructure).
Apparently, this is the said “quick fix”:

“New stencils in the Kits Beach parking lot. Apparently we are to ride in both directions, right at oncoming traffic, too close to the parked cars. This doesn’t help, at all.”

Park Board Commissioners and staff should be deeply ashamed that this is considered a ‘quick fix’. This changes absolutely nothing for those of us who bike through this parking lot on a regular basis, and especially for all the tourists cycling through this part of Vancouver, who may not be familiar/comfortable with cycling through this chaotic section of the Seaside Greenway. In fact, based on research, these sharrows are likely to make the situation more dangerous for cyclists:

“A 13-year study of a dozen cities found that protected bike-lanes led to a drastic decline in fatalities for all users of the road. As for painted bike-lanes? No safety improvement at all. And for sharrows, it’s safer to NOT have them.”

I fail to understand how Vancouver’s Park Board can be so consistently and obviously anti-bike (as has been the case for well over a decade), in such sharp contrast to the City of Vancouver’s commendable pro-bike efforts.

Truly shameful! Citizens and tourists of Vancouver deserve much better!
06 Aug 22:06

Practical Concurrency: Some Rules

When Tinderbox started out, computers had a single processor. When it was time to update agents, Tinderbox interrupted your work for a moment, updated agents, and then allowed you to resume. We tried hard to guess when it would be convenient to do this, but of course that’s not always something your computer can anticipate.

Nowadays, your computer has somewhere between 4 and 24 processors. Starting with Tinderbox 6, agents no longer interrupt you: agents do their work on one processor while you do your work on another. The two tasks run concurrently.

Concurrent operations can be tricky to get right. For example, suppose one operation is writing “The truth will set you free” into a buffer, and another operation is writing “Donald Trump” into the same buffer. You might end up with “The tr Trump”, or “Donald will set you free”, or “Toearl…” or something else. If one processor reads while another is writing, it might see a partial result, and that might be complete nonsense. This means you need to take great care whenever processors share results.
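Tinderbox itself is a native Mac application, but the hazard is language-independent; purely as an illustration, here is a minimal Python sketch of two unsynchronized writers sharing one buffer (the interleaving varies from run to run, and may not appear at all on a given run):

```python
import threading

buffer = []  # a shared buffer with no lock protecting it

def write(text):
    # Writing happens one character at a time, so two concurrent
    # writers can interleave their characters arbitrarily.
    for ch in text:
        buffer.append(ch)

t1 = threading.Thread(target=write, args=("The truth will set you free",))
t2 = threading.Thread(target=write, args=("Donald Trump",))
t1.start(); t2.start()
t1.join(); t2.join()

# May print either string intact, or any character-level
# interleaving of the two, which is usually nonsense.
print("".join(buffer))
```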

Getting concurrency right by the book is one thing, and getting it right in the trenches is something else entirely. I’ve recently changed my approach to concurrency; here are my newly-revised rules.

  1. You can get away with murder. Going by the book, you’ve got to use extreme caution and you’ve always got to get it right. In practice, Tinderbox Six took all sorts of risks and accepted that it was Doing It Wrong in order to get stuff done. That worked remarkably well for a long time. Naive concurrency blows up when two operations step on each other’s toes, but a lot of the time they’ll just be lucky and will go for hours (or years) without getting in each other’s way.
  2. You can often get away without concurrency. (This is the Law of Brent Simmons, named for the developer of Vesper who pledged to do everything on the main thread.) Computers are fast: much of the time, just ask one processor to do everything and you’ll be fine. You can’t always do without concurrency: some things like network access do require it. But if you just do everything on the main thread, you’ll often find that everything is fast enough.
  3. The profiler is now good. It wasn’t always. In the Tinderbox 4 era, firing up the profiler meant recompiling the world, and that took 20 minutes. Then, you'd get slow and inconclusive results, and throw up your hands. Life today is better: recompiling the world only takes a minute or two. For Tinderbox, ruthless refactoring has eliminated lots of classes that had a zillion dependencies, and that means I seldom need to recompile the world anyway.
  4. Placekicks are easy. The placekick concurrency pattern is a way to offload a complex calculation or network access if you don't need the answer right away. In the placekick pattern, the “ball” is a bundle of work that needs to be done; you set it up, you kick it to another processor, and then you forget all about it. For example, Tinderbox notes may need to display miniature thumbnails of their text, and because those thumbnails might have lots of different fonts, styles, and pictures, they can be slow to build. So, when we add the note’s view, we fire up a new operation to build the thumbnail in the background, and leave a note behind to say that there's no thumbnail yet but it's under construction. If we need to draw the note before the thumbnail's ready, we simply skip the thumbnail; when it’s finally ready, we leave the thumbnail in the appropriate place and it’ll get drawn the next time we update the screen. Placekicks are hard to get wrong: you kick the ball, and you're done. When the operation has done what it was asked to do, it leaves the result in an agreed-upon place; it doesn't need to coordinate with anything else or ask permission. If you can, use placekicks and only placekicks (see the sketch after this list).
  5. The placekicker shouldn’t tackle. The concurrent operation has one responsibility: kick the ball. It does its task. It may also need to do something to let the system know that it’s finished its work, but that last thing should be trivial. Post a notification, or set a flag, or send one object a single message. Don’t mix responsibilities.
    1. Never put anything but a placekick on a global queue. Placekicks can’t deadlock. You know they can’t deadlock. Any idiot can see they can’t deadlock. If there’s any question, make a private queue instead.
  6. Queues are light and cheap. Operations are light and cheap, too. It takes a few microseconds to make a GCD dispatch queue, and scarcely longer to make an NSOperationQueue. Adding an operation to a queue is just as fast. It’s not necessary to be confident that all your tasks are big enough to demand concurrent processing: if some are, there's not much overhead to simply spinning everything off.
  7. Focused queues are easier to use. If a queue has one clear purpose, it’s easier to be confident it won’t deadlock. Dispatch queues are cheap. Don’t share queues, don’t reuse queues, don’t worry about making queues.
  8. Classes should encapsulate their queues. This is a big change: Tinderbox used to depend heavily on a bunch of queues that were public knowledge. That’s a bad idea. First, we were sharing disposable objects: I had no idea how disposable dispatch queues are, but there’s no reason to conserve them. Second, when lots of classes share a queue, any badly-behaved class can cause untold trouble for unrelated classes that share the work queue. Placekick concurrency is an implementation detail: no one needs to know that there’s a queue in use, and they certainly don't need the details or identity of the queue.
  9. Test the kick, not the queue. Unit testing concurrent systems is better than it used to be, but clever factoring makes it unnecessary to unit-test placekicks. Instead, make the task operation available as its own method or method object, and let the test system test that. You’ll also want to do some integration testing on the whole concurrent system, but that’s another story.
  10. Classes should clean their queues. Be sure that any objects and resources that your tasks require remain available until the tasks are done. Closing the document and quitting the application require special care that tasks be completed before we dispose of their tools.
    1. To clean a queue, cancel all pending operations, then wait for the current operation to finish. Do not suspend the queue, but do make sure no new tasks are added! It’s easy for a class to be sure that it doesn't add anything to its own private queue, but hard for a system to be confident that no one is adding tasks to a queue shared with lots of other objects. That’s a big advantage of private queues.
  11. Use queue hierarchies to consolidate bursts of work. When we open a new Tinderbox document, there's a bunch of low-level indexing that needs to be done. It’s not urgent, and typically we need only a few milliseconds per note, but some notes will take more work and there might be 10,000 notes. So, we placekick the indexing. The system sees that we want 10,000 things done! “Gadzooks!” it says to itself, “I’d better roll up my sleeves!” This can make the system spin up a bunch of worker threads. But we know better: it looks like a pile of work, but it’s not that much and it’s not urgent. So, we tell the system to route all 10,000 tasks to another queue with limited concurrency: now, the system says “I have 10,000 things to do, but I can only do two of them at a time: piece of cake!”
  12. Read sync, write async. When you read a shared object, you need a result. Read synchronously: that shows your intent and, if the wait is long, you'll see the problem at once as waiting for the read, rather than some mysterious lock or stall. Write asynchronously; it’s the classic placekick and there's no need to wait. The exception here is where we’re writing to say that the old value is dead, defunct, and not to be used; in that case, write needs to block all the readers, and asynchronous reading can get you back some time. Often, the easiest approach remains serial access managed by a single dedicated serial queue that can be used with the read sync/write async rule.
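These rules are framed in terms of GCD and NSOperationQueue on macOS; as a language-neutral illustration of the placekick pattern (rules 4, 5, 7, 8, and 10), here is a minimal Python sketch, with a stand-in thumbnail task:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def build_thumbnail(text):
    # Stand-in for the slow rendering work (fonts, styles, pictures).
    time.sleep(0.1)
    return text[:20] + "…"

class NoteView:
    def __init__(self, text):
        self.text = text
        self.thumbnail = None  # "no thumbnail yet, under construction"
        # A private, single-purpose queue (rules 7 and 8): nothing else
        # knows it exists, so nothing else can misbehave on it.
        self._queue = ThreadPoolExecutor(max_workers=1)
        # The placekick (rule 4): bundle the work, kick it, forget it.
        self._queue.submit(self._build_and_store)

    def _build_and_store(self):
        result = build_thumbnail(self.text)  # the self-contained task
        self.thumbnail = result              # one trivial finishing step (rule 5)

    def draw(self):
        # If the thumbnail isn't ready yet, simply skip it (as in the post).
        print(self.thumbnail if self.thumbnail else "[thumbnail pending]")

    def close(self):
        # Clean the queue before disposing of its tools (rule 10).
        self._queue.shutdown(wait=True)

view = NoteView("The truth will set you free")
view.draw()       # likely "[thumbnail pending]"
time.sleep(0.2)
view.draw()       # the finished thumbnail
view.close()
```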
05 Aug 23:57

Web Authentication in Firefox for Android

by J.C. Jones

Firefox for Android (Fennec) now supports the Web Authentication API as of version 68. WebAuthn blends public-key cryptography into web application logins, and is our best technical response to credential phishing. Applications leveraging WebAuthn gain new second-factor and “passwordless” biometric authentication capabilities. Now, Firefox for Android matches our support for Passwordless Logins using Windows Hello. As a result, even on mobile you can still obtain the highest level of anti-phishing account security.

Firefox for Android uses your device’s native capabilities: On certain devices, you can use built-in biometrics scanners for authentication. You can also use security keys that support Bluetooth, NFC, or can be plugged into the phone’s USB port.

The attached video shows the usage of Web Authentication with a built-in fingerprint scanner: The demo website enrolls a new security key in the account using the fingerprint, and then subsequently logs in using that fingerprint (and without requiring a password).

Adoption of Web Authentication by major websites is underway: Google, Microsoft, and Dropbox all support WebAuthn via their respective Account Security Settings’ “2-Step Verification” menu.

A few notes

For technical reasons, Firefox for Android does not support the older, backwards-compatible FIDO U2F Javascript API, which we enabled on Desktop earlier in 2019. For details as to why, see bug 1550625.

Currently Firefox Preview for Android does not support Web Authentication. As Preview matures, Web Authentication will be joining its feature set.

 


05 Aug 23:56

A Living Legacy~Pat Davis & Vancouver’s 100 Block of West 10th Avenue

by Sandy James Planner


John Davis Jr. with Pat Davis

It seems only fitting on this civic holiday, called “British Columbia Day” in this province, that we celebrate the remarkable Davis family and Pat Davis, who passed away last week. Over a period of five decades the Davis family stewarded and restored a group of Edwardian and Victorian houses on Mount Pleasant’s 100 block of West Tenth Avenue, just east of city hall. At the time, in the late 70’s and early 80’s, renovating old houses and fitting them with rental units was not the thing to do. The Davis family fought pressure to turn their street into a cash crop of three-story walk-ups, and proudly display a plaque indicating that their restoration work was done with no governmental assistance of any kind.

But more than maintaining a group of heritage houses that describe the rhythm and feel of an earlier Vancouver, the Davis family extended their interest and stewardship to the street. In the summer a painted bicycle leans on a tree near the sidewalk, its basket full of flowers~in season there is a wheelbarrow full of blooming plants to delight passersby. An Adirondack chair perches near the sidewalk. And every morning, one of the Davis family was out sweeping the sidewalk and ensuring that no garbage was on the boulevards or the street.

As author and artist Michael Kluckner notes, the Davis family’s stewardship profoundly altered the way city planning was managed in Mount Pleasant. As one of the oldest areas of the city with existing Victorian houses, zoning was developed to maintain the exterior form and add rental units within it. The first laneway houses in the city, called “carriage houses”, were designed for laneway access and to increase density on the lots. And when it came time for a transportation management plan, residents threw out the City engineer’s recommendations and designed their own. That plan is still in use today.

John Davis Senior passed away in the 1980’s but his wife Pat and his sons John and Geoff maintained the houses and managed the rentals. Michael Kluckner in an earlier Price Tags post described the Davis Family as being strongly in the tradition of social and community common sense.

They championed street lighting for Tenth Avenue, with the street’s residents choosing (and partially paying for) a heritage style of light standard. The City’s engineer at the time thought that the residents of Tenth Avenue would never pick a light standard that they would have to pay for. The City’s engineer was wrong.

Pat Davis also single-handedly changed the way that street trees were trimmed by B.C. Hydro. When I was working in the planning department I received a call from B.C. Hydro indicating that trimming work on the large Tenth Avenue street trees had to be halted due to an intervention from Mrs. Pat Davis. Pat was horrified that hydro crews were cutting street trees back to their joins (called “crotch dropping”) to ensure that hydro wiring was not compromised. A sprightly senior, Pat Davis had taken the car keys away from B.C. Hydro personnel and refused to give them back until the hydro crew agreed to leave.

A subsequent report to Council led to B.C. Hydro agreeing to raise the electrical wires passing through the street trees, so that the trees could maintain their natural form. That is now city policy.

You can read more about the Davis family and the Tenth Avenue houses in this article by CBC’s Rafferty Baker. You can also read Pat Davis’ obituary here. The YouTube video below features John Davis Jr., Pat’s son, talking about the houses and how the family managed to renovate them. The Davis family demonstrates the “varied talent” of good community that Jane Jacobs passionately describes. And if you look closely at the short video, you will see the chosen light standards on the street as well as that bicycle with the basket of flowers on the city’s boulevard.

And full disclosure~I wrote my planning thesis on the Davis family’s remarkable work. Pat Davis will be deeply missed, but her legacy will live on for future generations of Vancouverites.

 

 

 

05 Aug 23:55

Rebecca Solnit, Tyranny of the Minority

by Stowe Boyd

Solnit is part of a trend to avoid the word citizen in favor of denizen.

Continue reading on Medium »

05 Aug 23:55

“Name for the order?” “Duncan,” I answered ...

“Name for the order?”

“Duncan,” I answered.

“Bumpkin?”

Awkward silence… followed by wide eyes as understanding of what was just said registered. “Oh my gosh, I’m so sorry!”

“No worries. It’s OK. Just write Duncan.”

05 Aug 23:55

Code as Craft: Understanding the role of Style in e-commerce shopping

by Aakash Sabharwal

Aesthetic style is key to many purchasing decisions. When considering an item for purchase, buyers need to be aligned not only with the functional aspects of an item’s specification (e.g. description, category, ratings), but also with its aesthetic aspects (e.g. modern, classical, retro). Style is important at Etsy, where we have more than 60 million items, hundreds of thousands of which can differ by style and aesthetic. At Etsy, we strive to understand the style preferences of our buyers in order to surface content that best fits their tastes.

Our chosen approach to encoding the aesthetic aspects of an item is to label the item with one of a discrete set of “styles”, of which “rustic”, “farmhouse”, and “boho” are examples. As manually labeling millions of listings with a style class is not feasible, especially in an ever-changing marketplace, we wanted to implement a machine learning model that best predicts and captures listings’ styles. Furthermore, in order to serve style-inspired listings to our users, we leveraged the style predictor to develop a mechanism to forecast user style preferences.

Style Model Implementation

Merchandising experts identified style categories.

For this task, the style labels are one of the classes that have been identified by our merchandising experts. Our style model is a machine learning model which, when given a listing and its features (text and images), can output a style label. The style model was designed to output not only these discrete style labels but also a multidimensional vector representing the general style aspects of a listing. Unlike a discrete label (“naval”, “art-deco”, “inspirational”), which can only be one class, the style vector encodes how a listing can be represented by all these style classes in varying proportions. While the discrete style labels can be used in predictive tasks to recommend items to users from particular style classes (say, filtering recommended listings to just “art-deco”), the style vector serves as a machine learning signal for our other recommendation models. For example, on a listing page on Etsy, we recommend similar items. This model can now surface items that are not only functionally the same (a “couch” for another “couch”) but also items from the same style (a “mid-century couch” for a “mid-century dining table”).

The first step in building our listing style prediction model was preparing a training data set. For this, we worked with Etsy’s in-house merchandising experts to identify a list of 43 style classes. We further leveraged search visit logs to construct a “ground truth” dataset of items using these style classes. For example, listings that get a click, add to cart or purchase event for the search query “boho” are assigned the “boho” class label. This gave us a large enough labeled dataset to train a style predictor model.
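A toy sketch of that weak-labeling step, assuming a log of (query, listing_id, event) rows; the field names and the tiny style list are invented for illustration:

```python
STYLE_QUERIES = {"boho", "rustic", "farmhouse"}  # subset of the expert-chosen styles
ENGAGEMENT_EVENTS = {"click", "add_to_cart", "purchase"}

def label_from_search_logs(log_rows):
    """Assign a style label to listings engaged from style-named searches.

    log_rows: iterable of (query, listing_id, event) tuples.
    Returns a dict mapping listing_id -> style label.
    """
    labels = {}
    for query, listing_id, event in log_rows:
        if query in STYLE_QUERIES and event in ENGAGEMENT_EVENTS:
            labels[listing_id] = query  # e.g. a "boho" search click => "boho" label
    return labels
```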

Style Deep Neural Network

Once we had a ground truth dataset, our task was to build a listing style predictor model that could classify any listing into one of 43 styles (actually 42 named styles plus an “everything else” catch-all). For this task, we used a two-layer neural network to combine the image and text features in a non-linear fashion. The image features are extracted from the primary image of a listing using a retrained ResNet model. The text features are the TF-IDF values computed on the titles and tags of the items. The image and text vectors are then concatenated and fed as input into the neural network model. This neural network learns non-linear relationships between text and image features that best predict a listing’s style. The network was trained on a GPU machine on Google Cloud, and we experimented with the architecture and different learning parameters until we got the best validation/test accuracy.
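As a rough sketch of the architecture just described (not Etsy's production code; the feature dimensions and layer sizes are invented), the model might look like this in Keras:

```python
from tensorflow.keras import layers, Model

NUM_STYLES = 43      # 42 named styles plus the "everything else" catch-all
IMAGE_DIM = 2048     # e.g. pooled features from a retrained ResNet (assumed size)
TEXT_DIM = 5000      # TF-IDF vocabulary for titles and tags (assumed size)

# Precomputed per-listing inputs: an image feature vector and a TF-IDF vector.
image_in = layers.Input(shape=(IMAGE_DIM,), name="resnet_features")
text_in = layers.Input(shape=(TEXT_DIM,), name="tfidf_features")

# Concatenate image and text features, then learn a non-linear combination.
x = layers.Concatenate()([image_in, text_in])
x = layers.Dense(512, activation="relu")(x)
# The penultimate layer doubles as the listing's multidimensional style vector.
style_embedding = layers.Dense(64, activation="relu", name="style_embedding")(x)
# The softmax head yields the discrete style label.
style_probs = layers.Dense(NUM_STYLES, activation="softmax",
                           name="style_probs")(style_embedding)

model = Model(inputs=[image_in, text_in], outputs=style_probs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```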


By explicitly taking style into account, the nearest neighbors are more style-aligned

User Style

As described above, the style model helps us extract low-dimensional embedding vectors that capture the stylistic information of a listing, using the penultimate layer of the neural network. We computed the style embedding vector using the style model for all the listings in Etsy’s corpus.

Given these listing style embeddings, we wanted to understand users’ long-term style preferences and represent them as a weighted average of the 42 articulated style labels. For every user, subject to their privacy preferences, we first gathered the history of “purchased”, “favorited”, “clicked” and “add to cart” listings over the past three months. From all these listings that a user interacted with, we combined their corresponding style vectors (by averaging them) to come up with a final style representation for each user.
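A minimal sketch of that aggregation step, assuming the per-listing style vectors have already been computed by the style model (the names here are illustrative):

```python
import numpy as np

def user_style_vector(interacted_listing_ids, listing_embeddings):
    """Average the style embeddings of the listings a user has
    purchased, favorited, clicked, or added to cart.

    listing_embeddings: dict mapping listing id -> np.ndarray style vector.
    """
    vectors = [listing_embeddings[lid] for lid in interacted_listing_ids
               if lid in listing_embeddings]
    if not vectors:
        return None                      # no style signal for this user
    return np.mean(vectors, axis=0)      # the user's style representation
```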

Building Style-aware User Recommendations

There are different recommendation modules on Etsy, some of which are personalized for each user. We wanted to leverage user style embeddings in order to provide more personalized recommendations to our users. For recommendation modules, we have a two-stage system: we first generate a candidate set, which is a probable set of listings that are most relevant to a user. Then, we apply a personalized ranker to obtain a final personalized list of recommendations.  Recommendations may be provided at varying levels of personalization to a user based on a number of factors, including their privacy settings.

In this very first iteration of user style aware recommendations, we apply user style understanding to generate a candidate set based on user style embeddings and their latest interacted taxonomies. This candidate set is used for Our Picks For You module on the homepage. The idea is to combine the understanding of a user’s long time style preference with his/her recent interests in certain taxonomies.

This work can be broken down into four steps (sketched in code after the list):

  • For each user, obtain top three styles and three latest taxonomies.

Given user style embeddings, we take the top three styles with the highest probability as the “predicted user style”. Latest taxonomies are useful because they indicate users’ recent interests and shopping missions.

  • For each (taxonomy, style) pair, generate 100 listings.

Given a taxonomy, sort all the listings in this taxonomy by their prediction scores for the given style, high to low, and take the top 100 listings.

“Minimal” listings in “Home & Living”

“Floral” listings in “Home & Living”
  • For each user, remove invalid (taxonomy, style) pairs.  

Validating a (taxonomy, style) pair means checking whether the style makes sense for that taxonomy, e.g. Hygge is not a valid style for jewelry.

  • For each user, aggregate all listings generated by each valid (taxonomy, style) pair and take the top 200 listings with the highest average purchase and favorite rate.

These become the style based recommendations for a user.
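Putting the four steps together, a simplified sketch of the candidate-generation logic might look like this (the function signature, data shapes, and the `user` object are assumptions for illustration, not Etsy's actual pipeline):

```python
def style_candidates(user, style_scores, listings_by_taxonomy,
                     valid_styles_for, engagement_rate):
    """Generate style-aware candidate listings for one user.

    style_scores: dict listing_id -> dict style -> prediction score.
    listings_by_taxonomy: dict taxonomy -> list of listing ids.
    valid_styles_for: dict taxonomy -> set of styles valid for it.
    engagement_rate: dict listing_id -> average purchase + favorite rate.
    """
    # Step 1: the user's top three predicted styles and three latest taxonomies.
    top_styles = sorted(user.style_vector, key=user.style_vector.get,
                        reverse=True)[:3]
    taxonomies = user.latest_taxonomies[:3]

    candidates = []
    for taxonomy in taxonomies:
        for style in top_styles:
            # Step 3: skip pairs where the style makes no sense for
            # the taxonomy (e.g. Hygge jewelry).
            if style not in valid_styles_for.get(taxonomy, set()):
                continue
            # Step 2: top 100 listings in this taxonomy by style score.
            ranked = sorted(listings_by_taxonomy.get(taxonomy, []),
                            key=lambda lid: style_scores[lid][style],
                            reverse=True)
            candidates.extend(ranked[:100])

    # Step 4: keep the top 200 by average purchase and favorite rate.
    return sorted(set(candidates),
                  key=lambda lid: engagement_rate.get(lid, 0.0),
                  reverse=True)[:200]
```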

1-4: boho + bags_and_purses.backpacks
5-7: boho + weddings.clothing
8,13,16: minimal + bags_and_purses.backpacks

Style Analysis 

We were extremely interested to use our style model to answer questions about users’ sense of style. Our questions ranged from “How are style and taxonomy related? Do they have a lot in common?” and “Do users care about style while buying items?” to “How do style trends change across the year?”. Our style model enables us to answer at least some of these and helps us to better understand our users. In order to answer these questions and dig further, we leveraged our style model and the generated embeddings to perform analysis of transaction data.

Next, we looked at the seasonality effect behind shopping for different styles on Etsy. We began by looking at unit sales and purchase rates of different styles across the year. We observed that most of our styles are clearly influenced by seasonality. For example, the “Romantic” style peaks in February because of Valentine’s Day, and the “Inspirational” style peaks during graduation season. We ran a statistical stationarity test on the unit-sales time series of each style and found that the majority were non-stationary. This signifies that most styles show different shopping trends throughout the year rather than constant unit sales. This provided further evidence that users’ tastes show different trends across the year.
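The post does not name the test used; one common choice for such a check is the augmented Dickey-Fuller test, sketched here with statsmodels (the data layout is an assumption):

```python
from statsmodels.tsa.stattools import adfuller

def nonstationary_styles(unit_sales, alpha=0.05):
    """Flag styles whose unit-sales series look non-stationary.

    unit_sales: dict mapping style name -> sequence of (say, weekly)
    unit sales across the year.
    """
    flagged = []
    for style, series in unit_sales.items():
        adf_stat, p_value, *_ = adfuller(series)
        # The ADF null hypothesis is that the series has a unit root,
        # i.e. it is non-stationary; a large p-value fails to reject it.
        if p_value > alpha:
            flagged.append((style, p_value))
    return flagged
```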




Using the style embeddings to study user purchase patterns not only gave us strong evidence that users care about style, but also inspired us to further incorporate style into our machine learning products in the future.

Etsy is a marketplace for millions of unique and creative goods. Thus, our mission as machine learning practitioners is to build pathways that connect the curiosity of our buyers with the creativity of our sellers. Understanding both listing and user styles is another one of our novel building blocks to achieve this goal.

For further details into our work you can read our paper published in KDD 2019.

Authors: Aakash Sabharwal, Jingyuan (Julia) Zhou & Diane Hu


05 Aug 23:55

They hate the US government, and they're multiplying: the terrifying rise of 'sovereign citizens'

mkalus shared this story from The Guardian.

On 20 May 2010, a police officer pulled over a white Ohio minivan on Interstate 40, near West Memphis, Arkansas. Unbeknown to officer Bill Evans, the occupants of the car, Jerry Kane Jr, and his teenage son, Joseph Kane, were self-described “sovereign citizens”: members of a growing domestic extremist movement whose adherents reject the authority of federal, state and local law.

Kane, who traveled the country giving instructional seminars on debt evasion, had been posing as a pastor. Religious literature was laid out conspicuously for anyone who might peer into the van, and, when Evans ran the van’s plates, they came back registered to the House of God’s Prayer, an Ohio church. Also in the van, though Evans did not know it, were weapons Kane had bought at a Nevada gun show days earlier.

Kane had been in a series of run-ins with law enforcement. After the most recent incident, a month earlier, he had decided that the next time a law enforcement officer bothered him would be the last.

Another officer patrolling nearby, Sgt Brandon Paudert, began to wonder why Evans was taking so long on a routine traffic stop. When he pulled up at the scene, he saw Evans and Kane speaking on the side of the highway. Evans handed him some puzzling paperwork that Kane had provided when asked for identification – vaguely official-looking documents filled with cryptic language. He examined the papers while Evans prepared to frisk Kane.

Suddenly, Jerry Kane turned and tackled Evans, knocking him down into a ditch. The younger Kane vaulted from the passenger side of the minivan and opened fire with an AK-47. Evans, an experienced officer who also served on the Swat team, was fatally wounded before he even drew his weapon. Paudert was struck down moments later while returning fire.

As the two officers bled out on the side of the highway, the Kanes jumped back in their van and sped off. A FedEx trucker who witnessed the shooting called 911.

The Kanes’ ideological beliefs – which the Anti-Defamation League (ADL) believes are shared by “well into the tens of thousands” of Americans – put them under the broad umbrella of the “Patriot” movement, a spectrum of groups who believe the US government has become a totalitarian and repressive force.

Although the Trump administration is reportedly planning to restructure the Department of Homeland Security’s countering violent extremism (CVE) program to focus exclusively on radical Islam, a 2014 national survey of 175 law enforcement agencies ranked sovereign citizens, not Islamic terrorists, as the most pressing terrorist threat. The survey ranked Islamic terrorists a close second, with the following top three threats all domestic in origin and sometimes overlapping: the militia movement, racist skinheads, and the neo-Nazi movement.

Though the federal CVE program already devotes almost the entirety of its resources to organizations combatting jihadism, the White House feels that the current name is “needlessly ‘politically correct’”, an anonymous government source told CNN.

Paudert’s father – who also happened to be the West Memphis chief of police – was driving home with his wife when he heard chatter on the police scanner about an officer-down situation on the interstate.

He headed to the scene, assuming a state trooper had been attacked. He then saw a figure in uniform sprawled at the bottom of the embankment. It was Bill Evans, his gun still locked in its holster.

Paudert then saw another body lying on the asphalt behind the vehicles. One of his officers tried to block him from going further. “Please,” he pleaded, “don’t go around there.” Paudert shoved him aside. As he came around the corner he saw his son, Brandon. Part of his head had been blown off. His arm was outstretched and his pistol still clutched in his hand.

Images of his son as a child, growing up, flooded through his mind. Then he saw his wife, who had been waiting in the car, coming toward him. He moved to stop her. “Is it Brandon?” she asked. “Yes, it is,” he said. “Is he OK?” she asked. “No,” he said, and she broke down.

Paudert died at the scene. He had been shot 14 times, and Evans, who died at the hospital, had been hit 11 times, suggesting that Joseph Kane had shot them again after they were already on the ground, wounded.

Both officers had been wearing ballistic vests, which the rifle rounds from 16-year-old Joseph Kane’s AK-47 punched through as if they were cloth.

The Kanes’ minivan was spotted 90 minutes later in a Walmart parking lot. Officers from multiple law enforcement agencies closed in. A shootout erupted. The Kanes managed to wound two more law enforcement officers before they were killed.

Cop killers and rightwing extremism: an overlap

In 2009, Daryl Johnson, a career federal intelligence analyst, wrote a report predicting a resurgence of what he called “rightwing extremism”.

Republicans were enraged by what they saw as politically motivated alarmism conflating nonviolent conservative and libertarian groups with terrorists, and especially angry at the report’s prediction that Iraq and Afghanistan veterans would be targets for recruitment by extremist groups.

Johnson defended the report’s conclusions as the product of reasoned and nonpartisan analysis. An Eagle Scout and registered Republican raised in a conservative Mormon family, he says he was particularly perplexed by the accusation that he was guilty of an anti-conservative agenda. But the then secretary of homeland security, Janet Napolitano, bowing to political pressure, disclaimed the report and ordered Johnson’s team dissolved. Johnson left government and started a private consultancy.

The eight years since seem to have borne out Johnson’s prediction. The year he left government, 2010, there was a suicide plane attack on the IRS building in Austin; then came a series of other incidents including the 2012 shooting of the Sikh temple in Wisconsin and the 2015 shooting of the Emanuel AME black church in Charleston.

According to data from the Anti-Defamation League, at least 45 police officers have been killed by domestic extremists since 2001. Of these, 10 were killed by leftwing extremists, 34 by rightwing extremists, and one by homegrown Islamist extremists.

In 2009, a man with white supremacist and anti-government views shot five police officers in Pittsburgh, three fatally.

In 2012, self-described sovereign citizens shot four sheriff’s deputies, two fatally, in St John the Baptist, a Louisiana parish.

In 2014, two Las Vegas police officers eating lunch were killed by a husband-and-wife pair inspired by the Patriot movement; the couple were killed by police before following through on their plan to take over a courthouse to execute public officials.

The same year, survivalist Eric Frein ambushed a Pennsylvania state police barracks, assassinating one state trooper and wounding another, then led law enforcement on a 48-day manhunt.

In 2016, a marine veteran-turned-sovereign citizen killed three law enforcement officers in Baton Rouge and wounded three others.

Johnson and other terrorism experts worry that a generation of people who came of age in the shadow of 9/11 may not understand that historically, most terror attacks in the US have been domestic in origin.

In fact, a 2016 report by the US Government Accountability Office noted that “of the 85 violent extremist incidents that resulted in death since September 12, 2001, far-rightwing violent extremist groups were responsible for 62 (73%) while radical Islamist violent extremists were responsible for 23 (27%).” (The report counts the 15 Beltway sniper shootings in 2002 as radical Islamist attacks, though the perpetrators’ motives are debated.)

Johnson said: “There are a lot of people – millennials – who have no idea of Oklahoma City and what happened there in 1995.”

The Oklahoma City bombing, which killed 168 people, including 19 children, was widely assumed to be related to Middle Eastern terrorism, but the perpetrator turned out to be someone quintessentially middle American: a white Gulf war veteran, Timothy McVeigh, who used his military knowledge to build a huge truck bomb out of commercial fertilizer. He and his collaborator Terry Nichols – who described himself as a sovereign citizen – saw the attack as the opening gambit in an armed revolt against a dictatorial and globalist federal government.

More specifically, the bombing was conceived as payback for two federal law enforcement operations that had become cultural flashpoints for the American far right: the incidents at Ruby Ridge, Idaho, in 1992, where a fundamentalist, Vicki Weaver, was killed by an FBI sniper’s bullet while holding her baby, and at Waco, Texas, in 1993, where federal agents negotiated a 51-day standoff with the Branch Davidian cult that only ended when most of the Davidians died in a horrific fire.

An explosion in activity by far-right militias since the 1980s

Partly as a consequence of the 1980s farm crisis, which left American farmers with crippling levels of debt, the 1990s saw an explosion in activity by far-right militias and fringe political and religious groups.

Gary Noesner, a retired FBI agent who served as the chief hostage negotiator during Ruby Ridge and Waco, as well as an 81-day standoff with the sovereign citizen-influenced Montana Freemen in 1996 and the response to a barricade and kidnapping incident by the Republic of Texas militia group in 1997, sees numerous parallels between the political climate then and now.

“Many of [the people attracted to such movements] are guys my age, middle-aged white guys. They’re seeing profound change and seeing that they have been left behind by the economic success of others and they want to return to a never-existent idyllic age when everyone was happy and everyone was white and everyone was self-sufficient.”

Thanks to the standoff between the Bundy family and the federal government, as well as the headline-grabbing 2016 occupation of the Malheur wildlife preserve in Oregon, the previously dormant militia movement has recently exploded in popularity.

Militia members are not necessarily sovereign citizens, but their beliefs are intertwined. Today’s sovereign citizen movement can be traced in part to two popular Patriot ideologies: the Posse Comitatus movement, built around the theory that elected county sheriffs are the highest legitimate law officers, and the Freemen-on-the-Land movement, a fringe ideology whose adherents believe themselves subject only to their own convoluted, conspiratorial, and selective interpretation of common law.

There was significant overlap between the Patriot movement and white nationalism. One of the movement’s foundational texts was The Turner Diaries, a 1978 novel by the white supremacist William Luther Pierce that describes a near future in which a small group of patriots fighting the extinction of the white race work to bring about a race war and the eventual genocide of non-white peoples.

McVeigh, who considered the book a blueprint for the coming revolution, was carrying an excerpt when he was arrested, although he later said he did not agree with the book’s racial content.

At the time, the Oklahoma City bombing actually appeared to spell the end of the militia movement: it led to a law enforcement crackdown and an evaporation of public sympathy for the radical right. McVeigh, unrepentant to the end, was executed in 2001, three months to the day before 9/11 made domestic terrorism seem like a distant memory.

The rise of sovereign citizens is linked to home foreclosures

Today, the face of domestic terror looks different from in McVeigh’s day – sometimes literally. Some extremists – such as Jerry Kane, who was an unemployed truck driver – still fit roughly into the American popular image: blue-collar white men hiding in the woods and training for doomsday. But many do not. Not all, for example, are people on the economic margins. In 2012, Christopher Lacy, a software engineer with sovereign beliefs who had started a new job only a week earlier, shot a California state trooper in the head during a routine traffic stop.

Furthermore, not all sovereign citizens are white: Gavin Long, a black sovereign citizen, killed three law enforcement officers in Louisiana last year. An increasing number of black Americans are coming to the sovereign movement from the Moorish Science Temple, a black Muslim church that believes African Americans are the descendants of ancient Moors.

Experts believe white nationalism has waned in influence on some elements of the radical right, opening the movement to anyone enthusiastically anti-government and anti-law enforcement.

“This is no longer a white supremacist movement,” said JJ MacNab, an expert on sovereign citizens and militias and the author of the forthcoming book The Seditionists: Inside the Explosive World of Anti-Government Extremism in America.

“There is still racism and bigotry,” she said. “Some of this is situational. If there are two members of your 12-person militia who are black, who are conservatives, military veterans, whatever – they are your brothers. You would kill for them and you would die for them. But two black guys in Ferguson, on the other side of the political spectrum – if there is a hierarchy of hatred, they are as low as you can get, lower than animals.”

Bob Paudert, the former West Memphis police chief, said: “Their only agenda is they are anti-government.” Paudert believes that in some ways, sovereign citizens are better understood as an extreme left or anarchist movement than an extreme right movement.

Joanna Mendelson, senior investigative researcher and director of special projects at the ADL, said: “I call them rightwing anarchists ... So perhaps it is almost a full circle, if you have that continuum.”

MacNab said: “The sovereign citizens really got big in the late 2000s because people were losing their houses to foreclosure.” Many are house-squatters, either because of foreclosure or because they are preying on others who vacated their houses. Financial crime is rampant among sovereign citizens, who are also well-known for harassing their enemies with fraudulent liens. “There are a lot of people scamming each other.”

A generational change is taking place as the anti-government movement attracts younger people. Many come from a cluster of amorphous internet communities, MacNab noted, including far-right trolls, the hacking collective Anonymous, and Copwatch, whose supporters upload critical videos of police on YouTube.

Younger and older sovereigns get an overwhelming share of their news from Infowars, the media channel of the conspiracy theorist Alex Jones, and RT, the propaganda network known for pushing negative stories about the American government.

Repeating the cycle

To the knowledge of Daryl Johnson, the former Department of Homeland Security (DHS) intelligence analyst, there are no longer any DHS analysts monitoring domestic terrorism full time. (When asked about it, a DHS representative said: “This is a question for the FBI.”)

“The FBI is the only US government agency that still has full-time analysts assessing threats from the far right,” Johnson said, “and their analytical cadre could be measured in the dozens.”

The FBI declined to comment. An FBI press officer noted that holding extremist opinions was not a crime, and the FBI only investigated people suspected of breaking federal law.

In the meantime, renaming CVE to focus only on radical Islam will merely further “alienate Muslims – justify their fears, and reinforce them as well”, Johnson said.

Among some of the anti-government groups MacNab tracks, Trump has enjoyed something of a honeymoon since the election, she said. But she believes that it won’t last: when they realize Trump is not the panacea they thought he was, they will feel used, and turn against him.

Extremist sentiment follows certain historical patterns, according to MacNab; the last cycle moved through a series of specific manifestations – tax resistance, sovereign ideology, the militia era – before ending with Oklahoma City.

“We are now repeating that cycle,” MacNab said, and getting near the end.

  • Due to an error in the editing process, this article was amended on 15 May 2017 to reflect that the Kanes were killed by police.
05 Aug 23:54

I get seriously fed up of police being criticised for taking part in community activities. Repairing/rebuilding/strengthening public-police relations is an important and under-appreciated aspect of the criminal justice system.

by OliverNorgrove
mkalus shared this story from OliverNorgrove on Twitter.

I get seriously fed up of police being criticised for taking part in community activities. Repairing/rebuilding/strengthening public-police relations is an important and under-appreciated aspect of the criminal justice system.




58 likes, 6 retweets
05 Aug 23:54

Citationsy

Cenk Dominic Özbakır, Aug 05, 2019

This is a lovely product, and with the Firefox extension it works seamlessly in my workflow. The idea is, if you want to cite something from the web, you provide the URL (or click the extension button) and you get automatically created, properly formatted references. Like I said, lovely. Of course, the publishers will immediately try to ruin this for everyone.

05 Aug 23:53

The Economist, Ride-Hailing & Who is Kater Catering to?

by Sandy James Planner


Image: FT.com

Even The Economist is weighing in on the fact that Vancouver is the special child, the one big city in North America that still does not have common ride-hailing services like Uber and Lyft. There is the Kater service, which uses taxi licences and is part of the local taxi association, which takes a share of profits. Prices are similar to taxis’, but a few Kater cars are Karaoke cars. (We can’t make this stuff up.)

As The Economist observes, British Columbia’s requirement of Class Four commercial licences may be deterring part-time drivers from getting licensed for ride-hailing. But the Province’s cautious approach to ride-hailing is also being lauded:

The regulators have reason to proceed cautiously. In many cities where ride-hailing has taken off, congestion has worsened and use of public transport has dropped. In San Francisco, congestion, as measured by extra time required to complete a journey, increased by 60% from 2010 to 2016, according to Greg Erhardt, a professor at the University of Kentucky. More than half of the rise was caused by the growth of ride-hailing. Population and employment growth accounted for the rest. Ride-hailing led to a 12% drop in ridership on public transport in the city. San Francisco’s experience is a “cautionary tale for Vancouver”, says Joe Castiglione, who analyses data for its transport authority.

Without providing data, the Economist article calls Vancouver “one of North America’s most traffic-jammed cities, in part because its downtown is small.”

The article rightly notes that ride-hailing can worsen congestion, but also observes that Vancouver is one of the few places in North America with public transit use increasing.

TransLink’s head of policy Andrew Curran states that high gas prices and population and employment growth have helped boost transit use, as well as car sharing, where people book vehicles that they drive themselves. Andrew notes that deferring Uber and Lyft in the province has helped transit and car share. Vancouver has 3,000 vehicles in car share, double the number in San Francisco.

While the Province is now welcoming ride-hailing as a complement to ride share and transit services, Andrew Curran thinks there may be particular niches for ride-hailing, such as getting people to transit points and enhancing transportation options for people with disabilities.

“Currently, TransLink hires taxis to give door-to-door rides to some disabled people. The requirement for drivers to have commercial licences will contain the services’ growth and protect taxi-drivers, ride-hailing’s fiercest foes, or so the province hopes.”

The YouTube video below describes the current Kater service available in Vancouver. It’s worth having a quick read through the comments.

 

And the Economist article, which is behind a pay wall, is here:

BC Gives Uber a Cautious Go-Ahead

“If you look Chinese and speak Mandarin you can summon a ride in Vancouver by using an app, as long as it’s Chinese. The drivers normally call to confirm the order, says Daniel Merkin, who lives in the Canadian city. “Sometimes they’ll hang up on me when they realise I don’t speak Mandarin,” he says. But he keeps trying, because popular ride-hailing services, such as Uber and Lyft, are not available. Vancouver is the only big North American city where they do not operate. The Chinese service is not legal, but it is tolerated.

Mr Merkin hopes that his options will soon expand. In July the province of British Columbia, which licenses drivers, said it would allow the big ride-hailing services in. They could start operations by late September. But British Columbia has made their entry difficult by requiring drivers to hold commercial licences. That may deter part-timers who provide much of the services’ workforce. Lyft does not operate in places that require such licences.

The regulators have reason to proceed cautiously. In many cities where ride-hailing has taken off, congestion has worsened and use of public transport has dropped. In San Francisco, congestion, as measured by extra time required to complete a journey, increased by 60% from 2010 to 2016, according to Greg Erhardt, a professor at the University of Kentucky. More than half of the rise was caused by the growth of ride-hailing. Population and employment growth accounted for the rest. Ride-hailing led to a 12% drop in ridership on public transport in the city. San Francisco’s experience is a “cautionary tale for Vancouver”, says Joe Castiglione, who analyses data for its transport authority.

Even without Uber and Lyft, Vancouver is one of North America’s most traffic-jammed cities, in part because its downtown is small. Ride-hailing might worsen congestion. Its absence has made Vancouver one of the few North American cities where public transport is attracting more passengers. The number of journeys started on TransLink, the city’s public-transport system, rose by 7.1% to 437m in 2018, making it “another record-breaking year” for the network of buses and trains. From 2016 to 2018 the number rose by 18.4%. British Columbia’s higher petrol prices and growth in employment and population explain some of that rise. Not allowing Uber and Lyft helped, says Andrew Curran, TransLink’s head of policy. (It has also boosted car-sharing services, which let people book vehicles they drive themselves. Vancouver has 3,000 cars that can be hired for such services, double the number in San Francisco, which has more people.)

Vancouver was among the first cities Uber tried to enter, in 2012, and “the first city that Uber ever left”, in the same year, says Michael van Hemmen, who leads the company’s operations in western Canada. Forbidding rules, such as classifying it as a limousine service, which for some reason must charge a minimum of C$75 ($57) per trip, killed its business. British Columbia is now inviting it back to Vancouver (and other cities in the province) in hopes of complementing its public-transport system rather than undermining it. It will not be classified as a limousine service.

Mr Curran says ride-hailing could increase use of public transport by ferrying people from their houses to a bus or train stop. It could also improve transport for people with disabilities. Currently, TransLink hires taxis to give door-to-door rides to some disabled people. The requirement for drivers to have commercial licences will contain the services’ growth and protect taxi-drivers, ride-hailing’s fiercest foes, or so the province hopes.

But the commercial-licence requirement could have the opposite effect. Analysts think it will reduce the number of drivers available to pick up passengers in distant suburbs. Instead, they will cluster in the centre. Some of Uber’s future competitors say they are not worried. The commercial-licence rule will discourage most drivers, believes Chris Iuvancigh of Sharenow, which runs Car2go, one of Vancouver’s four car-sharing services. A driver who offers rides in his Mercedes SUV to people who hire him via WeChat, a Chinese app, thinks they will stay loyal. If ride-hailing does come to Vancouver, he predicts, it will just slow their journeys down.”

This article appeared in the The Americas section of the print edition under the headline “Stop and go”.
Print edition | The Americas | Aug 1st 2019 | Vancouver

 

05 Aug 23:52

Why You Need a Password Manager. Yes, You.

by Andrew Cunningham

You probably know that it’s not a good idea to use “password” as a password, or your pet’s name, or your birthday. But the worst thing you can do with your passwords—and something that more than 50 percent of people are doing, according to a recent Virginia Tech study—is to reuse the same ones across multiple sites. If even one of those accounts is compromised in a data breach, it doesn’t matter how strong your password is—hackers can easily use it to get into your other accounts.

05 Aug 23:52

FRA, Part Two

by Andrea

For those who hadn’t had enough after the first six episodes, there are now six more, namely episodes 7 through 12:

hr Fernsehen: Mittendrin – Flughafen Frankfurt – a series of 12 episodes, 45 minutes each (YouTube, link to the seventh episode in the playlist).

“Around 80,000 people work at Frankfurt Airport. 65 million passengers pass through the two terminals every year. Anyone flying out of Frankfurt can choose from 107 countries. 1,400 aircraft take off and land daily at Europe’s fourth-largest airport. An airport of world records: the largest company fire brigade, the largest airport clinic, the largest vehicle workshop, the largest aircraft tugs. Four video journalists from Hessischer Rundfunk were given exclusive permission to film at the airport for 50 days. The reporters followed 22 people with their cameras, up close in areas and situations that had never been shown on television before. Exciting, emotional stories about Hesse’s largest employer. ‘With spectacular camera angles and elaborate 3D animations, we want to bring this complex world closer to viewers and let them dive into the adventure of the airport,’ says editor and video journalist Andreas Graf. The result is twelve exciting films, each 45 minutes long.”

The end of that text is apparently no longer accurate, though: a further six episodes are due starting October 30.

05 Aug 23:50

Riding into Hamilton from Aldershot

by jnyyz

This past weekend I visited my parents who live on Hamilton mountain. For various reasons, the best plan was for me to take the GO train to Aldershot, and then to ride the rest of the way. The route that I chose was part of the HamBur loop. I rode a variation of this trail during Bike for Mike a couple of years ago.

Here’s my ride for the day.

The first part of the ride was down Waterdown Rd to North Shore Blvd, cutting through a cemetery, connecting on Spring Garden Rd, and then taking a trail up to York Blvd. This is the first tricky part of the trail, where you have to cross a high-speed off-ramp from the 403.

After you cross the high level bridge that you can see in the image above, you need to look to the left for the connection to the waterfront trail. This connection is circled on this map.

What is not clear on most mapping apps is that the connection involves 200 stairs. Here is a cyclist just having come up to York Blvd.

Here are the stairs.

The payoff is that instead of riding into town on busy York Blvd, you get to use the waterfront trail. The last time I came this way was at night, in the company of several hundred Hamilton Glowriders.

Crossing downtown is easy along Ferguson Street. You can take it south almost to the foot of the escarpment.

I did note on the way south that Cannon St. was being resurfaced, and as a result, the bidirectional bike lanes were out of action.

Here is the start of the trail up the escarpment. It follows a section of the Bruce Trail, and is extremely gradual.

Here the trail crosses Wentworth St.

From this point forward the surface is rough asphalt and some gravel, but no worse than some roads in Toronto I could mention. The trail takes you fairly far to the east by the time you crest the escarpment.

You are on fully separated bike trails all the way to Stone Church Rd. Here is the bridge across the Lincoln Alexander Expressway.

Here the trail ends at Stone Church Rd.

Stone Church has a bike lane and is a good way to get across the mountain in the east-west direction. There is signage here pointing to the Chippewa rail trail, but that is an adventure that has to wait for another day.

One side note about gear: my Brompton has a red Selle Anatomica saddle, and I noticed after my ride today that the colour is still bleeding a bit. Better to stick to dark pants for long rides on this saddle.

If you are interested in the HamBur loop, the Bike for Mike people still have their version up on Ride with GPS.

05 Aug 23:49

OnePlus 7 Pro gets Android Q Developer Preview 4

by Bradly Shankar

Roughly two weeks after seeding the Android Q Developer Preview 3 (DP3) for its OnePlus 7 Pro, OnePlus is now in the process of pushing out Android Q DP4 to the device.

Before we go further, it’s important to note that OnePlus did not publicly announce the release. Instead, numerous reports of the OnePlus 7 Pro getting the Android Q DP4 update surfaced in a thread on the official OnePlus forum.

Let’s get to the update itself. While the DP4 changelog remains vague about the changes, anecdotal reports from users who updated their OnePlus 7 Pro point to noticeable improvements in gestures and animations.

Furthermore, Android Police said that the DP4 build added Digital Wellbeing, a revised OnePlus camera app that has Focus Tracking and “super macro,” a more flexible Zen Mode, and at least a couple of UI tweaks. Additionally, the Android Auto app disappeared after the update.

Android Police also pointed out that users may sideload the new apps from the DP4 build to their stable OxygenOS versions to obtain new features before official releases. The outlet provided links for the new OnePlus camera app and the new Zen Mode app for people to try.

However, it seems that owners of the OnePlus 7, 6 and 6T will have to wait longer for Android Q DP4 to arrive.

Source: OnePlus forums Via: Android Police

The post OnePlus 7 Pro gets Android Q Developer Preview 4 appeared first on MobileSyrup.

05 Aug 23:49

August 2019 security patch rolling out to Google Pixel, Essential phones

by Bradly Shankar

Google has begun rolling out its August 2019 security patch to Pixel and Essential phones.

Overall, the update addresses a few dozen bugs and other vulnerabilities across most of the Pixel line, including:

  • Saved Wi-Fi network configuration improvements (all Pixel phones)
  • Wi-Fi CaptivePortalLogin stability improvements (all Pixel phones)
  • Sleep mode improvements (Pixel 3a and Pixel 3a XL)

However, as confirmed last month, Google is no longer rolling out patches to the Pixel C.

The full download and OTA links for the August security patch are below:

Meanwhile, it’s unclear exactly what fixes the patch brings to Essential phones. In a tweet, Essential simply mentioned that the update is now live for its customers.

Via: XDA Developers

The post August 2019 security patch rolling out to Google Pixel, Essential phones appeared first on MobileSyrup.

05 Aug 23:48

Exploring Jupytext – Creating Simple Python Modules Via a Notebook UI

by Tony Hirst

Although I spend a lot of my coding time in Jupyter notebooks, there are several practical problems associated with working in that environment.

One problem is that under version control, it can be hard to tell what’s changed. For one thing, the notebook .ipynb format, which saves as a serialised JSON object, is hard to read cleanly.

The .ipynb format also records changes to cell execution state, including cell execution count numbers and changes to cell outputs (which may take the form of large encoded strings when a cell output is an image or chart, for example).

Another issue arises when trying to write modules in a notebook that can be loaded into other notebooks.

One workaround for this is to use the notebook loading hack described in the official docs: Importing notebooks. This requires loading in a notebook loading module that then allows you to import other modules. Once the notebook loader module is installed, you can run things like:

  • `import mycode as mc` to load `mycode.ipynb`
  • `moc = __import__("My Other Code")` to load code in from `My Other Code.ipynb`

If you want to include code that can run in the notebook, but that is not executed when the notebook is loaded as a module, you can guard items in the notebook:
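
A minimal sketch of what such a guarded cell might look like (the `double` function here is invented for illustration):

```python
def double(x):
    # Code we want to be importable when the notebook is loaded as a module.
    return 2 * x

if __name__ == '__main__':
    # Demo / test code: runs when executed in the notebook UI,
    # but not when the notebook is imported as a module.
    print(double(21))
```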

In this case, the if __name__=='__main__': guard will run the code in the code cell when run in the notebook UI, but will not run it when the notebook is loaded as a module.

Guarding code can get very messy very quickly, so is there an easier way?

And is there an easier way of using notebooks more generally as an environment for creating code+documentation files that better meet the needs of a variety of users? For example, I note this quote from Daniele Procida recently shared by Simon Willison:

Documentation needs to include and be structured around its four different functions: tutorials, how-to guides, explanation and technical reference. Each of them requires a distinct mode of writing. People working with software need these four different kinds of documentation at different times, in different circumstances—so software usually needs them all.

This suggests a range of different documentation styles for different purposes, although I wonder if that is strictly necessary?

When I am hacking code together, I find that I start out by writing things a line at a time, checking the output for each line, then grouping lines in a single cell and checking the output, then wrapping things in a function (for an example of this in practice, see Programming in Jupyter Notebooks, via the Heavy Metal Umlaut). I also try to write markdown notes that set up what I intend to do (and why) in the following code cells. This means my development notebooks tell a story (of a sort) of the development of the functions that hopefully do what I actually want them to by the end of the notebook.

If truth be told, the notebooks often end up as an unholy mess, particularly if they are full of guard statements that try to separate out development and testing code from useful code blocks that I might want to import elsewhere.

Although I’ve been watching it for months, I’ve only started exploring how to use Jupytext in practice quite recently, and already it’s starting to change how I use notebooks.

If you install jupytext, you will find that if you click on a link to a markdown (.md) or Python (.py) file, or a whole range of other text document types (.py, .R, .r, .Rmd, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala), you will open the file in a notebook environment.

You can also open the file as a .py file from the notebook listing menu by selecting the notebook and then using the Edit button to open it, at which point you are presented with the “normal” text file editor.

One thing to note about the notebook editor view is that you can also include markdown cells, as you might in any other notebook, and run code cells to preview their output inline within the notebook view.

However, whilst the markdown code will be saved into the Python file (as commented out code), the code outputs will not be saved into the Python file.

If you do want to be able to save notebook views with any associated code output, you can configure Jupytext to “pair” .py and .ipynb files (and other combinations, such as .py, .ipynb and .md files) such that when you save an open .py or .ipynb file from the notebook editing environment, a “paired” .ipynb or .py version of the file is also saved at the same time.
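
For example, a minimal sketch of enabling pairing globally via the notebook server config (the option shown is the one documented for Jupytext versions of this era; check the docs for your version):

```python
# jupyter_notebook_config.py
# Ask Jupytext's contents manager to keep .ipynb and .py versions in sync.
c.ContentsManager.default_jupytext_formats = "ipynb,py"
```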

This means I could click to open my .py file in the notebook UI, run it, then when I save it, a “simple” .py file containing just code and commented out markdown is saved along with a notebook .ipynb file that also contains the code cell outputs.

You can configure Jupytext so that the pairing only works in particular directories. I’ve started trying to explore various settings in the branches of this repo: ouseful-template-repos/jupytext-md. You can also convert files on the command line; for example, `jupytext --to py Required\ Pace.ipynb` will convert a notebook file to a Python file.

The ability to edit Python / .py files, or code containing markdown / .md files in a notebook UI, is really handy, but there’s more…

Remember the guards?

If you tag a code cell using the notebook UI (from the notebook View menu, select Cell Toolbar and then Tags), you can give the cell a tag of the form active-ipynb:

See the Jupytext docs: importing Jupyter notebooks as modules for more…

The tags are saved as metadata in all document types. For example, in an .md version of the notebook, the metadata is passed in an attribute-value pair when defining the language type of a code block:

In a .py version of the notebook, however, the tagged code cell is not rendered as a code cell, it is commented out:
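
A rough sketch of how such a tagged cell might be written out in the .py file (the cell contents are invented, and the exact cell-marker syntax depends on the Jupytext version and format):

```python
# + tags=["active-ipynb"]
# df = load_test_data()  # development-only code (hypothetical)
# df.head()
# -
```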

What this means is that I can tag cells in the notebook editor to include them — or not — as executable code in particular document types.

For example, if I pair .ipynb and .py files, whenever I edit either an .ipynb or .py file in the notebook UI, it also gets saved as the paired document type. Within the notebook UI, I can execute all the code cells, but through using tagged cells, I can define some cells as executable in one saved document type (.ipynb for example) but not in another (a .py file, perhaps).

What that in turn means is that when I am hacking around with the document in the notebook UI I can create documents that include all manner of scraggy developmental test code, but only save certain cells as executable code into the associated .py module file.

The module workflow is now:

  • install Jupytext;
  • edit Python files in a notebook environment;
  • run all cells when running in the notebook UI;
  • mark development code as active-ipynb, which is to say, it is *not active* in a .py file;
  • load the .py file in as a module into other modules or notebooks, leaving out the commented-out development code; if I use `%load_ext autoreload` and `%autoreload 2` magic in the document that’s loading the modules, it will automatically reload them (https://stackoverflow.com/a/5399339/454773) when I call functions imported from them if I’ve made changes to the associated module file (see the sketch after this list);
  • optionally pair the .py file with an .ipynb file, in which case the .ipynb file will be saved: a) with *all* cells run; b) include cell outputs.
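
A minimal sketch of the module-loading step from the list above (the module and function names are hypothetical):

```python
# In a consuming notebook:
%load_ext autoreload
%autoreload 2

# Imports only the module-level code; active-ipynb cells stay commented out.
from my_module import clean_data

clean_data("data.csv")  # picks up the latest saved version of my_module.py
```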

Referring back to Daniele Procida’s insights about documentation, this ability to have code in a single document (for example, a .py file) that is executable in one environment (the notebook editing / development environment, for example) but not another (when loaded as a .py module) means we can start to write richer source code files.

I also wonder if this provides us with a way of bundling test code as part of the code development narrative? (I don’t use tests so don’t really know how the workflow goes…)

More general is the insight that we can use Jupytext to automatically generate distinct versions of a document from a single source document. The generated documents:

  • can include code outputs;
  • can *exclude* code outputs;
  • can have tagged code commented out in some document formats and not others.

I’m not sure if we can also use it in combination with other notebook extensions to hide particular cells, for example, when viewing documents in the notebook editor or generating export document formats from an executed notebook form of it. A good example to try out might be the hide_code extension, which provides a range of toolbar options that can be used to customise the display of a document in the notebook editor or in HTML / PDF documents generated from it.

It could also be useful to have a very simple extension that lets you click a toolbar button to set an active-* state tag and style or highlight that cell in the notebook UI to mark it out as having limited execution status. A simple fork of, or extension to, the freeze extension would probably do that. (I note that Jupytext responds to the “frozen” freeze setting, but that presumably locks out executing the cell in the notebook UI too?)

PS A few weeks ago, Jupytext creator Marc Wouts posted this handy recipe for *rewriting* notebook commits made to a git branch as commits against markdown formatted documents rather than the original ipynb change commits: `git filter-branch --tree-filter 'jupytext --to md */*.ipynb && rm -f */*.ipynb' HEAD`. This means that if you have a legacy project with commits made to notebook files, you can rewrite it as a series of changes made to markdown or Python document versions of the notebooks…

05 Aug 14:50

A new study reinforces the conclusion that autism is primarily genetic – Science-Based Medicine

mkalus shared this story from Science-Based Medicine.

The key belief of the modern antivaccine movement over the last two decades has been that vaccines either cause, contribute to, or predispose children to autism. This particular belief is generally thought to have originated in 1998 with the publication in The Lancet of disgraced physician and scientist Andrew Wakefield’s case series of 12 children, which reported an association between vaccination with the measles-mumps-rubella (MMR) vaccine and autism, but the idea predates Wakefield’s fraudulent publication. After all, a lawyer paid Wakefield handsomely to do “research” that he could use to sue vaccine manufacturers on behalf of parents of children with autism. Wakefield’s case series was ultimately retracted due to findings of scientific fraud and Wakefield’s medical license was taken away, but it didn’t matter. The idea that vaccines cause autism is the most prevalent antivaccine misinformation fueling the movement, and Wakefield still milks it, having produced with Del Bigtree an antivaccine propaganda movie disguised as a documentary, VAXXED.

The idea is not just limited to the MMR vaccine, either. Another strain of antivaccine belief attributes autism to the mercury in the thimerosal preservative that was used in many childhood vaccines until 2001, while others claim it’s “too many too soon,” and still others blame DNA fragments in vaccines crossing the blood-brain barrier and causing neuroinflammation. Many are the vaccines blamed for autism, and many are the bogus proposed biological mechanisms by which vaccines supposedly “cause autism”, but in the end it’s always about the vaccines. Always. Antivaxers deny that autism has a major genetic component, and, even when they do concede a genetic component, they try to divert to claiming that it’s a “genetic susceptibility” to “vaccine injury” that is responsible for autism, not primarily genetics.

All of this is why antivaxers generally lash out whenever a study comes out that reports a large genetic component to autism and autism spectrum disorders. This happened just last week, as the largest study of its kind, encompassing five countries and two million subjects, was published and concluded that autism spectrum disorders are 80% reliant on inherited genes. Before I get to the study itself, here’s how it was reported in WebMD:

The findings could open new doors to research into the genetic causes of autism, which the U.S. Centers for Disease Control and Prevention now says affects 1 in every 59 U.S. children.

It might also help ease fears that autism is caused by maternal factors — a mother’s weight, mode or timing of delivery, or nutrient intake, for example. The new study found the role of maternal factors to be “nonexistent or minimal.”

Instead, “the current study results provide the strongest evidence to our knowledge to date that the majority of risk for autism spectrum disorders is from genetic factors,” said a team led by Sven Sandin, an epidemiological researcher at the Karolinska Institute in Stockholm, Sweden.

The new study might help dampen public interest in supposed — but unproven — “environmental” causes of autism, such as vaccines. Long-discredited, fraudulent data linking childhood vaccination with autism is still widely cited by the “anti-vaxxer” movement.

I fear that last part is pretty much wishful thinking. Whenever a new study provides evidence for a genetic cause of autism, antivaxers tend to double down and attack the study. Will this study withstand the attacks? Scientifically, probably (as I’ll explain in a moment), but in the vaccine PR wars? Who knows?

Genes, not environment, are the primary driver of ASDs

The study under discussion, “Association of Genetic and Environmental Factors With Autism in a 5-Country Cohort“, was published online in JAMA Psychiatry on July 17 by a multinational group of researchers led by Sven Sandin, an epidemiologist at the Karolinska Institute in Stockholm, Sweden. The investigators set the stage in the introduction thusly:

Autism spectrum disorder has both genetic and environmental origins. Research into the genetic origins of ASD has consistently implicated common and rare inherited variation (heritability). However, evidence shows that there are other, noninherited, genetic influences that could be associated with variation in a trait.3 Given the prenatal origins of ASD, an important source of such genetic influences could be maternal effects.4 The term maternal effects is used to describe the association of a maternal phenotype with ASD in offspring (ie, the noninherited genetic influences originating from mothers beyond what is inherited by the offspring). Maternal effects have been associated with a substantial proportion of the variation in several traits associated with ASD, including preterm birth5 and intelligence quotient.6 Research on nongenetic origins has frequently pointed to a role for environmental exposures unique to different family members (nonshared environment), an example of which is cesarean delivery.7 In contrast, contribution from environmental exposures that make family members similar (ie, shared environment), has been uncertain.8

A meta-analysis of twin studies estimated heritability to be in the range of 64% to 91%,8 and 3 population-based studies from Sweden recently estimated the heritability of ASD to be 83%,9 80%,4 and 66%.10 Among those earlier heritability calculations from twin and family studies (eTable 1 in the Supplement), a single study has estimated maternal effects,4 reporting modest, if any, contribution to ASD. Estimates of the contribution of shared environment range from 7% to 35%,8 but multiple studies estimate the contribution to be zero.4,9,11,12 Thus, although the origin and development of ASD has been investigated for half a century, it remains controversial.

I can’t help but note here that, although there is controversy, most of the estimates cited fall within a fairly narrow range, which makes me think that how much genetics contributes to the development of ASDs is really not that controversial. Be that as it may, I do feel obligated to briefly discuss one point here before I dig into the meat of the study, and that’s to answer the question, “What do we mean by ‘genetic’?” The general public, given how these sorts of issues are reported, often tends to think that “genetic” means that there is a single gene (or a handful of genes) responsible for a trait, disease, or condition. That is, of course, very simplistic. While there are traits and diseases that can be determined by a single gene, most complex traits depend upon many genes and can be affected by many variants of those genes. What studies of this sort estimate is how much autism susceptibility is due to heritability. The other thing that you need to know is that, for purposes of this sort of model, maternal and environmental factors that impact autism risk are basically everything else that is non-heritable. We’ll come back to this point later.

So here’s what the investigators did. They examined the medical histories of more than two million children born in singleton births in Denmark, Finland, Sweden, Israel, and Western Australia between 1998 and 2012. For Sweden, Finland, and Western Australia, investigators included all births between January 1, 1998, and December 31, 2007, while for Israel they included all births between January 1, 2000 and December 31, 2011. All were tracked for a diagnosis of ASD from birth up to December 31, 2014, in Sweden; December 31, 2013, in Denmark; December 31, 2012, in Finland; December 31, 2014, in Israel; and July 1, 2011, in Western Australia. Children were followed until age 16. Denmark, Finland, Sweden, and Israel provided clinically ascertained diagnoses from national patient registers while the data from Western Australia was obtained from a government provided service and benefits register with clinically ascertained autism diagnoses. Of the cohort, 22,156 went on to be diagnosed with an autism spectrum disorder, for a prevalence in the group of 1.1%. Data were analyzed from September 23, 2016 through February 4, 2018.

As you might expect, the authors used Generalized Linear Mixed Effect Models (GLMM) to estimate genetic and environmental effects on the risk for ASD and autistic disorder (AD), using the three-generational data sources in the databases examined to construct families that vary by genetic relatedness and therefore are informative for genetic modeling. These included full siblings and cousins related through their mothers (maternal parallel cousins [mPCs]), or cousins of other relationships. To be honest, I’m not a statistician, which places the nitty-gritty of the statistics mostly beyond me. (If a statistician wants to chime in in the comments, I try never to be too proud to learn something. As Harry Callahan said in Magnum Force, “A man’s got to know his limitations.”)

The key findings of the study were as follows:

  • The median (95% confidence interval) ASD heritability for the whole cohort was estimated to be 80.8% (73.2%-85.5%).
  • For the Nordic countries combined, heritability estimates ranged from 81.2% (73.9%-85.3%) to 82.7% (79.1%-86.0%).
  • Country-specific heritability estimates ranged from 50.9% (25.1%-75.6%) (Finland) to 86.8% (69.8%-100.0%) (Israel).
  • Maternal effect was estimated to range from 0.4% to 1.6%, and in all models used the 95% CI interval included zero, meaning that the estimates for maternal effects were not distinguishable from zero, leading the authors to conclude that there was no support in their models for a significant contribution from maternal effects. (The term maternal effects is used to describe the association of a maternal phenotype with ASD in offspring; i.e., noninherited genetic influences originating from mothers beyond what is inherited by the offspring). The authors note that maternal effects have been associated with a substantial proportion of the variation in several traits associated with ASD, including preterm birth and intelligence quotient.)
  • Estimates of genetic, maternal, and environmental effects for autistic disorder were similar with ASD.

The authors did a variety of sensitivity analyses to determine if changing their inputs had a major effect on the estimates, and the estimates remained pretty solid. The authors note the following strengths in their study:

The major strength of this study is the use of multiple large population-based samples with individual-level data in 3-generation pedigrees. Our data were based on prospective follow-up and health systems with equal access. This approach, following all participants from birth using population registers, avoids bias owing to self-report and retrospective collection of data and reduces selection biases owing to disease status or factors such as parental education. In addition to providing exceptional statistical power, the study directly addresses the concern of lack of replication in research findings31,32 replicating results across 5 countries and health systems.

And limitations:

Our study has several limitations. Despite its large overall sample size, the effective sample size for individual countries was limited by the low prevalence of ASD. Misspecification is another potential limitation. The first potential misspecification arises from the possible violation of the assumption of independence between genetic and environment. If this correlation is not specifically included in the model, its components will mostly be incorporated into the estimate of genetic variance component, potentially biasing the heritability estimate. The direction of the bias will depend on the sign of the covariance between genetic and environmental factors.36 The second misspecification arises from plausible gene-environment interactions that were not modeled and could also bias the heritability estimate. The direction of bias will depend on whether the environmental component is familial and whether the trait is multifactorial.36

Of course, in the case of autism and ASDs, what those “plausible gene-environment interactions” might be is rather unclear. Of course, predictably, antivaxers latched on to that one issue, as you will see. Before I get to that, though, let’s look at the accompanying editorial, “The Architecture of Autism Spectrum Disorder Risk: What Do We Know, and Where Do We Go From Here?” by Amandeep Jutla, Hannah Reed, and Jeremy Veenstra-VanderWeele, which notes:

The study by Bai et al1 elegantly summarizes and confirms, using the largest data set to date, a key truth about ASD’s risk architecture: the disorder is strongly heritable, with environmental factors, although important, contributing relatively less to its variance than genetic factors. Where do we go from here?

One clear next step is to disentangle ASD’s heritability into components that can be identified in ever-growing molecular genetic data sets. The most robust data in ASD implicate de novo copy number variants and rare, de novo single-nucleotide variants in at least 99 genes,6 many of which are involved either in synaptic signaling or regulating expression of other genes, which suggests convergent pathways. Importantly, in a twin study, these noninherited (de novo) variants would be shared by a monozygotic twin pair but not dizygotic twins and would therefore contribute to an estimate of genetic risk. In a family study such as this one,1 these de novo variants are not captured in the estimate of genetic risk because they are not shared between nontwin family members. These rare mutations would instead land in the remaining variance not accounted for by either inherited genetic or shared environmental risk.

Of course, if it is true that de novo (new noninherited) gene variants play a significant role in the pathogenesis of autism and ASDs and, in this analysis, would show up in the nonshared environmental risk component, then it’s possible, even probable, that the genetic contribution to ASD was actually underestimated in this study. Jutla et al also note:

The contribution of the environment to ASD risk appears to be much smaller than the contribution of genetics, yet potential environmental risk factors often receive disproportionate attention from the public and the media, even when (as in the case of vaccine fears) they are debunked. Perhaps this is because environmental risk factors, at least in principle, are modifiable. Even with a smaller contribution to risk, it is worthwhile to enrich understanding of environmental risk factors, which remain relatively understudied.10 Some identified risk factors, such as preterm birth or birth complications,4 are already targets of public health efforts for other reasons. Others, such as a shortened interpregnancy interval or an infection during pregnancy,4 may also be modifiable if the underlying risk mechanisms can be better understood.

One of those potentially modifiable environmental factors is congenital infection with rubella, which greatly increases the risk of autism in the child and is, ironically enough, preventable with vaccination of the mother with MMR. In any event, here’s the problem with the assertions above. To antivaxers, the top “environmental risk factors for autism” = vaccines, vaccines, vaccines, vaccines, vaccines, and then everything else. (Actually, perhaps I should have repeated “vaccines” ten times instead of five.) Indeed, I’ve said in the past that one additional harm caused by the antivaccine movement, other than depressing vaccine uptake and thus making various populations susceptible to outbreaks of vaccine-preventable diseases, is that their fanatical belief that vaccines cause autism (1) has diverted funding and research effort to repetitive studies looking at correlations between vaccines and autism and (2) might have made researchers more reluctant to study potentially plausible environmental risk factors for autism because of the association with antivaccine pseudoscience and, of course, the inevitable attacks on them when they find potential environmental risk factors for autism that antivaxers don’t like (anything not vaccines).

A final issue noted in the editorial is that, although the population studied was geographically diverse, it was not as ethnically diverse as one would like. In addition, the smaller sizes of the Israeli and Australian populations studied hamper a clear understanding of whether contributions to ASD risk may differ geographically, leading as they did to higher variability in the estimate of the genetic contribution to ASDs and autism in those countries. I thus agree that it’s unlikely that the estimates for genetic risk will become much more precise with additional family studies, but I also agree that it “would be useful for the findings to be replicated in non-Western or more ethnically diverse countries”.

Antivaxers counterattack

Not surprisingly, antivaxers do not like this study. The reason they don’t like it is because it is practically gospel among them that autism and ASDs are not primarily genetic conditions, but rather caused mainly by environment, specifically vaccines, although they frequently obfuscate by attributing the conditions to vaccines and “other environmental influences” and “toxins”. Also not surprisingly, first out of the gate (that I could find, anyway) to attack the study was James Lyons-Weiler. We’ve met him before when he tried to attribute the death of a teenage boy to Gardasil. The sad thing is that Lyons-Weiler was once an actual reputable scientist (or at least not, to my knowledge, a disreputable one) who during his pre-antivaccine career directed two different bioinformatics cores, one at the University of Massachusetts in Amherst and the other at the University of Pittsburgh, the latter of which closed in 2014. Since then, he’s gone all-in on antivaccine pseudoscience, even going so far as to form an institute he dubbed the Institute for Pure and Applied Knowledge (IPAK), which is about as arrogant-sounding a name for an institute as I’ve ever heard. IPAK isn’t just into antivaccine “science”, but that does seem to be its primary focus compared to others. More recently, Lyons-Weiler’s been battling it out with Leslie Manookian for the title of Most Antivaccine, appearing on antivaccine panels with Del Bigtree, Gayle DeLong, Sherri Tenpenny, and Toni Bark, and, it appears, helping antivaccine pediatrician Paul Thomas carry out a “vaxxed vs. unvaxxed” study.

Lyons-Weiler thus knows a lot, and he really should know enough to know that the complaints he has about the study are mostly irrelevant, save one, which was explicitly and openly acknowledged by the study’s authors. However, something about going antivaccine leads once competent scientists to embrace analyses that in their past lives they would have recognized immediately as nonsense. None of that stopped Lyons-Weiler from entitling his blog post “Yet Another Highly Unethical and Socially Irresponsible “Genes-Only” Study Fails to Show that Autism is 80% “Genetic”” and running with it. Oddly enough, he harps mainly on the WebMD article that I discussed above, rather than the actual study:

The article skips over the fact that the newest, latest study, like the prior studies, fails to actually measure the contribution of a single environmental factor. While the article rails against “anti-vaxxers”, the study ignores the vaccination status of those involved in the study. The mantra of so many studies never showing association has be tempered with a mature, responsible and realstic interpretation in the context of how those studies were conducted: restricted to one vaccine (MMR), and then there is this:

I found it amusing that he repeated one of his criticisms of autism-gene studies twice. As for the Bonferroni correction, I’m not sure why he’s harping on that in the context of the Generalized Linear Mixed Effect Models. The Bonferroni correction is used for much simpler models than this. Actually, the whole thing is rather confused, a list of things that he thinks studies should do (Bonferroni correction) and that they do that he doesn’t like (analyze the data repeatedly until the effect “goes away”). Of course the latter of those two examples reveals shocking statistical ignorance, as quite frequently, raw data reveal an effect that turns out to have been spurious when adjustments are appropriately made for confounding factors. Apparently, like Brian Hooker, Lyons-Weiler prefers the simplicity of the raw, unadjusted analysis that doesn’t control for confounders and provides false positive associations between vaccines and autism.

Next up, Lyons-Weiler asks a question all scientists love: “Why didn’t you study and measure what I think you should have studied and measured?” Here’s what I mean:

Their entire methodology is based on familial correlations. In the current study under consideration, no exposure levels to pesticides, medical exposures in utero, smoking history, nothing environmental was measured. And yet somehow the study authors pretend they can estimate the % liability from environmental factors. How do they pretend to achieve such a feat?

This is just a diversion. Remember, if you accurately estimate the percentage of risk that is genetic, then whatever’s left over must include the environmental risk factors. You don’t have to measure exposure to each and every potential environmental risk factor.

Lyons-Weiler does make one valid criticism, but, as is his wont, he drives off the cliff of ridiculousness with it and fails to note that the authors themselves listed this as a limitation of their study, namely that there could be interactions between genetics and environment. Naturally, he assumes that these interactions would overwhelm the genetics component:

And if the interaction term “(Genetics + Environment)” is more highly significant than “Genetics” or “Environment“, a reasonable interpretation would be that we cannot interpret genetics in a vacuum, that the significance of many ADK risk alleles must be modified by environmental factors. If during model selection, G or E is significant, but then in the full model G x E is significant, we attribute liability to both G and E working together.

Of course, the problem is that we don’t know the size of this term, and it would have to be pretty darned large to push the genetics-only risk factor down to the second or third most important risk factor. We have no good evidence that this is likely, and, if you click on the review article cited, you’ll find that the examples cited by Lyons-Weiler are all supported by small studies, the largest of which had only 408 subjects. It’s wishful thinking to believe that these potential examples are going to overshadow the contribution of genetics to overall risk of autism and ASDs.

Lyons-Weiler then continues:

There are over 850 genes that have been determined to contribute to ASD risk – and not one of them explain >1% of ASD risk individually. Most of these are Common Variants – meaning they are ancient – as in, they pre-date both the ASD epidemic (and yes, there is an epidemic) and vaccination.

Here, he appears to be either falling prey to the simplistic idea that there must be “a” gene for autism or counting on the likelihood that antivaxers reading his screed won’t know that complex conditions and traits are often impacted by many genes, any one of which has little effect. Surely he knows this if he ran bioinformatics cores until 2014, and if he didn’t then he shouldn’t have been running those cores.

Now here’s the part that made me laugh out loud:

The study ignores the fact that environmental factors can impact genes, proteins and biological pathways in a manner that is identical to the effects of genetic variation. This is called Phenomimicry – a term so cool I wish I had invented it. Examples of Phenomimicry are known in science relevant to ASD.

Phenomimicry, if it exists, would indeed be a cool biological phenomenon, but basically here Lyons-Weiler is speculating wildly, pulling something out of his nether regions and using its lack in the study as a cudgel without telling us any plausible scientific rationale why the study authors should have even considered phenomimicry. It’s also a term that, as far as I can tell from PubMed and Google, is only used by one person, Dorothy V.M. Bishop, and is not a generally accepted concept. A PubMed search turns up only one article by Bishop using the term, and Googling turns up mainly articles referring to Bishop’s article.

Lyons-Weiler gives up the game not long after that:

It is highly unethical – and socially irresponsible – for “Genes-only” studies to be conducted that claim to rule out environmental factors. All “Yet Another Highly Unethical Genes-only Study”s – YAHUGS – should be replaced with fully and correctly specified models – that means measuring and studying both vaccination patterns and genetics.

See what I mean? To antivaxers “environmental risk factors” = vaccines, first and always. Whatever the strengths and weaknesses of the current study, the reason Lyons-Weiler doesn’t like it is because it doesn’t consider vaccines (why should it, given the mountains of epidemiological evidence that show no association between vaccination and autism risk?) and concentrated on genetics. Remember that.

The genetics of autism

None of this is to say that this and all the other studies examining the genetics of autism are without problems. Autism and ASDs represent a spectrum of neurodevelopmental disorders impacted by many genes. Teasing out which genes and combinations of genes are most important in determining autism risk is incredibly difficult, as is the case for any complex multigene condition. I’ll even concede that sometimes scientists go too far in touting gene association studies. However, this study was not a gene association study, but rather the largest study to date to estimate how much of the risk of autism is genetic. As such, it produced an estimate that is in line with previous estimates and strengthens the scientific conclusion that autism is mainly heritable, with an effect on risk due to environment that is much smaller and, however large it actually is, not due to vaccines.

05 Aug 14:49

Unpopular Opinions

mkalus shared this story from xkcd.com.

I wasn't a big fan of 3 or Salvation, so I'm trying to resist getting my hopes up too much for Dark Fate, but it's hard. I'm just a sucker for humans and robots traveling through time to try to drive trucks into each other, apparently.

05 Aug 14:47

What the Balmoral Hotel can teach us about private ownership of affordable housing

mkalus shared this story.

The Balmoral Hotel in Vancouver's Downtown Eastside has a well documented history of trouble.

For years, it was one of the most decrepit buildings in the neighbourhood and was well known for its mould, bugs and rat infestations. City inspectors finally shut it down in 2017, deeming it structurally unsafe and unfit for living in.

But in the midst of all the reports documenting the hotel's terrible conditions, I think we missed an opportunity for an interesting discussion: How can the private sector provide safe and affordable housing? If at all?

The Balmoral was a single-room occupancy (SRO) building, of which there are currently 156 in Vancouver. These SROs house some of the city's most vulnerable residents: people with disabilities, addictions and mental health issues as well as people on income assistance.

Many people end up in an SRO because the rooms are dirt cheap (between $375 and $800 per month) and because SRO landlords aren't as picky about who they rent to. The rooms might not be glamorous but without them, many of their inhabitants would be homeless.

What fascinates me, though, is many of these SROs are privately owned and exist in the housing market without public subsidies. Landlords are able to make a profit while offering some of the lowest rents in the city — and the reason behind that lies in the design of their rooms.

SROs are like adult dorms. Tenants have their own bedroom but share bathrooms and kitchens with other tenants. Some SROs opt out of having a kitchen entirely, instead providing a hot-plate and mini-fridge in each room. By reducing and sharing amenities, the rent can be made much cheaper.

This design appears to be gaining popularity globally. Several trendy startups have popped up in New York and the Bay Area, offering housing under a similar model.

Investor-backed startups like Common, Starcity and WeLive (yes, WeWork is now a housing developer) offer housing branded as "dorms for adults" or "co-living," where kitchens, living rooms and bathrooms are shared. Starcity has a permit to build an 800-unit "co-living" building in San José —  basically an SRO with better marketing.

The focus for these startups hasn't been about making rent cheaper but offering luxury amenities (like gyms, catered parties, furnished units and cleaning services) at reduced rates for tenants. Studios at WeLive's building in New York start at $3,091 US per month ($4,065 Cdn), slightly higher than the average rent for a studio in the same area.

But other co-housing initiatives do show potential for affordability. In New York, the local government recently launched ShareNYC, a pilot program that will pick private developers to build "shared housing" intended to be "substantially affordable". 

However, Vancouver's case demonstrates there is a limit to the extent that the private sector can provide affordable housing, especially when property values in the city have skyrocketed in recent years.

The Balmoral was assessed at $3.9 million in 2007. Ten years later, it was worth $10 million. For a landlord, that means higher property taxes and a pressure to raise rents or redevelop.

But at the same time, the city made it nearly impossible for landlords to redevelop SRO buildings. In 2015, a new bylaw charged landlords a fee of $125,000 per room to redevelop an SRO.

The intention was to protect SRO residents from eviction, but I think it's resulting in a stalemate. There's really no financial incentive for a landlord to upgrade or repair a building except to keep it from being shut down. So they'll do bare-minimum maintenance to keep their buildings open and not much else.

With that in mind, the city's strategy for the last couple of decades has been to buy up SROs from landlords and replace them with social housing. We had 7,830 units in privately owned SROs in 1994, and we have 4,379 today. Most of those lost units have been replaced by social housing.

I think that strategy makes sense for the low-income residents of the Downtown Eastside.

Relying on the private sector to provide housing for some of the most vulnerable people in our city was probably never a good idea. That being said, the rebirth of SROs elsewhere suggests there is still some merit to this model, so let's not throw the key away quite yet.

This column is part of CBC's Opinion section. For more information about this section, please read our FAQ.

05 Aug 14:45

H&M very much on Brexit trend with their font sizing pic.twitter.com/Ia9Eq6aafg

by ottocrat
mkalus shared this story from ottocrat on Twitter.

H&M very much on Brexit trend with their font sizing pic.twitter.com/Ia9Eq6aafg

05 Aug 14:44

What is a calculation?

by Eric Normand

Level 1 of functional thinking is to distinguish between actions, calculations, and data. But what is a calculation? In this episode, we go over what it is, how to recognize them, and how to implement them. By the end, you should understand why they are so important to functional programming.

Transcript

Eric Normand: What is a calculation? By the end of this episode, you will understand how to represent timeless calculations in your language. My name is Eric Normand and I help people thrive with functional programming.

This is an important topic. I have a three-level system, three-level scheme, for how you can progress in your functional thinking journey. In the first level, the most fundamental level, you have to make a distinction between actions, calculations, and data. This distinction is necessary for the next two levels.

I’ve already gone through actions in another episode. You should look that one up if you’re interested. In this one, we’re going to go over calculations. We’re going to go over what they are, how to identify them, their requirements and how to implement them. Let’s get started.

As a rule of thumb, I like to say calculations are runnable code. They’re computations that do not depend on when they are run or how many times they are run. That doesn’t explain much. It’s a rule of thumb. It’s a way of identifying them. We’ll come back to this a few times in this episode. They are the kind of mathematical function that you think about.

Another term that we hear a lot in functional programming is a pure function. It is a function from inputs to outputs. It’s a computation that takes some inputs and returns an output. It doesn’t do anything else. It doesn’t send an email. It doesn’t change any mutable values in your program. It simply does a calculation.

When you have one of these, because it doesn’t matter when it’s run or how many times it’s run, it’s always going to give that same answer no matter what. They’re easy to test. They’re easy to understand how they work. You don’t have to look at the whole program and the history of the program to understand how it works.

If you can say, “Hey, this one doesn’t depend on when it’s run or doesn’t matter how many times I run this. I’m always going to get the same answer.” Then you have a calculation. Now it can get tricky because calculations aren’t always functions. That’s why I don’t use the term “Pure function”.

Because in a lot of languages…Function already has a lot of connotation when you’re talking in the context of programming languages. More functional languages tend to use functions more, but languages like C, Java, JavaScript, and Python, most languages, don’t always use functions, even when it’s a mathematical function.

Easy example, addition. In JavaScript, addition is an arithmetic operator. It is not a function. You do not call it the same way as you call a function.

It is not a function. It’s a piece of syntax that the compiler recognizes and converts into machine code, or whatever, or interprets. It’s a different way of interpreting from interpreting functions. That is one reason why I call them calculations instead of functions: because function already has meaning to most programmers.

It’s unfortunate, it’s sad that it’s not the same meaning mathematicians have. It would be better to have a more accurate usage of these different terms but that’s the way it is. Pure function gets at it, but then again it’s function. What about plus? What about times? These aren’t functions in most languages.

When you have something like addition, in JavaScript, the plus operator. It’s pure, it’s a pure function right? Like in the abstract sense. Meaning, if you give it the same arguments, it will evaluate to the same answer.

One plus two is always going to be three, right? A plus B is always going to be C, if you have the same values of A and B. It does make it easy to reason about. The problem is, that operator is not first-class. It cannot be passed to another function.

JavaScript has first-class functions. You can pass a function as an argument to another function. You can save a function to a variable. You can return a function from another function. You can’t do any of those things with the Plus operator.

The reason you need to do that is because at level one you start doing what are called, “Higher order operations.” You start doing Map, Filter, Reduce. You start building abstractions that take functions and return functions. You have functions that operate on functions. You can’t do that with Plus.

What you need to do is find a way to take those things that are baked into your language but are not first-class, and represent them as first-class values. That is one of the challenges of doing functional programming in a non-functional language.

JavaScript makes it relatively easy. You can always define a function called plus, P-L-U-S, that takes two arguments, adds them, and returns the answer. That’s easy to write. With the new arrow (lambda) syntax, it’s so small you can do it inline; you don’t even have to name the function. It can be anonymous.
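
Something like this, as a sketch:

    function plus(a, b) {
      return a + b;
    }

    // Or, with arrow syntax, inline and anonymous:
    [1, 2, 3].reduce((a, b) => a + b, 0);  // 6

    // Named and first-class, addition can now be passed around:
    [1, 2, 3].reduce(plus, 0);             // 6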

When you do that, you now have a function that does addition, but it’s first-class. You can start doing level two things.

Level one, just to be clear, is making the distinction between actions, calculations, and data. Level two is higher-order thinking: data transformation pipelines, Map, Filter, Reduce, making higher-order actions. You need to be able to represent these things as first-class values, like you can in JavaScript.
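
Here’s what a small level two pipeline might look like in JavaScript (my illustration; the data and names are invented):

    const orders = [
      { item: "book", price: 15, shipped: true },
      { item: "lamp", price: 40, shipped: false },
      { item: "mug",  price: 8,  shipped: true },
    ];

    const shippedTotal = orders
      .filter((o) => o.shipped)          // keep only shipped orders
      .map((o) => o.price)               // transform each order to its price
      .reduce((sum, p) => sum + p, 0);   // combine into one total: 23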

In something like Java, it becomes a little more difficult. You could look at a method, let’s say a static method that does not access any mutable state; it just takes some values as arguments and returns a value. That is a calculation, but it’s not first-class.

You get all the benefits of it being a calculation: it’s easy to test, it’s easy to think about what it’s going to do, and you don’t have to worry about when it’s called or in what order. What it doesn’t have is the ability to be passed as an argument.

Maybe the new Java 8 features have changed this, but when I was learning Java, you could not pass a method into another method as an argument.

The only things you can pass in, besides primitive types like int, are objects, so you need to represent your function as an object. I think that with the lambda syntax in Java 8, you can start doing that: you can represent a single method as a lambda.

This episode is not about how to do it in Java. That might be a different episode, or you might just have to figure it out on your own. To get to level two, you need a representation that is first-class.

One thing I like to say about calculations, which I think might be clearer now, is that they are timeless. Actions are bound up in time by definition. When you look at any action, you can say one of two things about it: it depends on when it is run, or it depends on how many times it is run. “Depends on when it is run” means that, compared to other things that are running, there is going to be some ordering where it produces a different output.

For example, if my JavaScript function reads the value of a global variable to make a decision, returning something different depending on what’s in that global, and some other place in the program writes to that global, then I need to know about that other code to know what my function is going to do.
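
A minimal sketch of that situation (hypothetical names, mine):

    var discountRate = 0.1;               // global, written elsewhere

    function priceWithDiscount(price) {
      return price * (1 - discountRate);  // reads the global: this is an action
    }

    priceWithDiscount(100);               // 90
    discountRate = 0.5;                   // some other code mutates the global
    priceWithDiscount(100);               // 50: same argument, different answer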

Now, is that good or bad? I’m not judging. I’m just saying we put those in a different bucket because they require a different kind of attention. Those are actions. If it doesn’t read any global variables, doesn’t read from the network, doesn’t do anything like that, if all it does is compute the return value from its arguments, then I can say it’s timeless.

It doesn’t matter when it’s run. I could run it on a different machine. I could run it right now, or I could have run it two weeks ago, cached the value, and just kept it. It’s timeless. I can free myself from worrying about any issues of time. Can I run it 100 times? Yes, no problem.

I can’t say that about sending an email. A function that sends an email is not timeless. If I send the email today, that’s different from sending it tomorrow. If I run the function five times, that’s different from running it once, or not at all.

Calculations are the things that, because of the constraints we put on them, don’t care when you run them or how many times you run them. Then we can free our minds and say, “This bucket is easier to deal with. We can pay a lot less attention to it, and it’s easier to test, too.”

You can just treat them in a different way. It’s like having the difference between a wild animal and a tamed animal. The tamed animal will just sit there and listen to you. If you say, “Hey, shoo, shoo,” it’ll just go. Whereas a wild animal, if you say, “Shoo,” you don’t know, it might jump out at you. They’re in different buckets. Cool.

I’ve kind of belabored this point, but that’s what I mean by timeless calculations. They’re timeless because they’re not bound up in time; whenever you run them is fine. You don’t even have to think about it. This is what functional programmers do: they put more and more of their code into calculations.

They still have some actions. You need actions; you do need to send that email. But the more we can put into calculations, the more relaxed we can be, because calculations are much easier to work with. They’re an easier medium, these things that don’t care when you run them.
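
One way to picture that move (my sketch, not code from the episode) is to pull the computation out of an action, so the action stays as small as possible:

    // Before: the computation is tangled up with the effect.
    function sendInvoice(order) {
      var total = order.price * order.quantity;
      console.log("Invoicing $" + total);  // the effectful part
    }

    // After: the computation is a calculation you can test on its own.
    function invoiceTotal(order) {
      return order.price * order.quantity;
    }

    function sendInvoiceRefactored(order) {
      console.log("Invoicing $" + invoiceTotal(order));
    }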

One of the hardest things about software is getting things to run in the right order. If you have a whole bucket of things whose order doesn’t matter, everything gets easier. For one thing, you can cache them.

Another word for that is memoize; there’s a whole episode on that. It also means you can compute values at compile time, and you can do lazy evaluation: maybe you never need the value, so you just don’t calculate it until you really do.
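
Because a calculation always gives the same answer for the same arguments, caching it is safe. Here’s a bare-bones, single-argument memoize sketch of my own:

    function memoize(f) {
      const cache = new Map();
      return function (x) {
        if (!cache.has(x)) {
          cache.set(x, f(x));  // compute once
        }
        return cache.get(x);   // reuse thereafter
      };
    }

    const square = (n) => n * n;         // stands in for something expensive
    const fastSquare = memoize(square);
    fastSquare(9);  // computed: 81
    fastSquare(9);  // served from the cache: 81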

There’s all sorts of great stuff you can do once you’re free of time, once you free yourself from being bound up in it.

OK, let’s recap. Calculations are computations from inputs to outputs. They don’t do anything else. The rule of thumb: it’s a calculation if it doesn’t depend on when it is run or how many times it is run.

We also very often call these pure functions, but they’re not always functions in your language. Some operations are operators rather than functions, or they might be static methods.

What we need to learn to do in our language (and every language is going to do it differently) is to convert these things into first-class values. That’s so we can do level two stuff: Map, Filter, Reduce, that kind of thing.

I like to say that they’re timeless. They represent mathematical relationships, like addition. Addition is timeless; it’s never going to change. It’s been that way since who knows how long. The beginning of the universe, I want to say, but I wasn’t there.

No one knows if the laws have changed, but as long as people have been around, two apples plus two apples is equal to four apples. It’s just always going to be the same. Cool.

This has been my thought on functional programming. I’m Eric Normand. If you want to find the other episodes, the past episodes, you can find everything at lispcast.com/podcast. There, you’ll find audio, video, and text transcripts of all of the episodes. You’ll also find links to subscribe, whether you want to subscribe on YouTube, or the podcast, or via RSS if you like the whole blog thing.

Also, you’ll find links to find me on social media. Please get in touch with me. I’m looking for the others. All right. Thank you very much. This has been my thought on functional programming. Rock on.


05 Aug 14:42

Another Tragedy of the Commons

by Stowe Boyd

New Jersey’s largest lake is now its largest environmental mess


05 Aug 14:38

What Do you Mean You Write Code EVERY DAY?

by Tony Hirst

Every so often, I ask folk in the department when they last wrote any code; often, I get blank stares back. Write code? Why would they want to do that? Code is for the teaching of, and for big software engineering projects, and, and, not for using every day, surely?

I disagree.

I see code as a tool for making tools, often disposable ones.

Here’s an example…

I’m writing a blog post, and I want to list the file types recognised by Jupytext. I can’t find a list of the filetypes it recognises as a simple string that I can copy and paste into the post, but I do find the chunk of Python source that defines them (the _SCRIPT_EXTENSIONS text).

Copying out those suffixes by hand is a pain, so I just copy that text string, which in this case happens to play nicely with Python (because it is Python), and sprinkle a bit of code over it.

Here’s the resulting list of filetypes supported by Jupytext: .py, .R, .r, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala.

Note that it doesn’t have to be nice code, and there may be multiple ways of solving the problem. In the example, I used a hybrid “me + the computer” approach (get the code to do one thing, copy the output, paste it into the next cell, then hack code around that) as well as a “just the computer” approach. The first is perhaps more available to a novice, the second to someone who knows about .join().

So what?

I tend to use code without thinking anything special of it; it’s just a tool that’s to hand to fashion other tools from, and I think that colours my attitude towards the way in which we teach it.

First and foremost, if you come out of a coding course not thinking that you now have a skill you can use quite casually to help get stuff done, you’ve been mis-sold…

This blog post took much longer to write than it took me to copy the _SCRIPT_EXTENSIONS text and write the code to extract the list of suffixes… And it didn’t take long to write the post at all…

See also: Fragment – Programming Privilege.

05 Aug 14:25

Google Canada cuts Pixel 3 and Pixel 3 XL by $400

by Ian Hardy

A new day has arrived and that means Google Canada has once again dropped the price of the Pixel 3 and Pixel 3 XL.

This time around, though, it has set a new benchmark for the lowest Canadian price by taking a very respectable $400 off the smartphones. Google Canada is offering the following prices:

  • 64GB Pixel 3: $599 CAD (down from $999)
  • 128GB Pixel 3: $729 (down from $1,129)
  • 64GB Pixel 3 XL: $729 (down from $1,129)
  • 128GB Pixel 3 XL: $859 (down from $1,259)

The fine print notes, “Save $400 on Pixel 3/3 XL. Promotion starts August 4, 2019 at 12am PT and ends September 28, 2019 at 11:59pm PT, while supplies last and subject to availability. Offer available only to Canadian residents aged 18 years or older with Canadian shipping addresses. Purchase must be made on Google Store Canada.”

The Pixel 3 has a 5.5-inch display with a 1,080 x 2,160-pixel resolution, 4GB of RAM, dual front-firing stereo speakers, a Snapdragon 845 processor and comes in ‘Clearly White,’ ‘Just Black’ and ‘Not Pink.’ The larger 6.3-inch Pixel 3 XL features a 1,440 x 2,960-pixel resolution display, a Snapdragon 845 processor and 4GB of RAM.

Source: Google Canada
