Shared posts

14 May 03:32

A World Redesigned: What will be our most challenging urban transportation issue?

by Gordon Price

For urban centres, it will be this: (Click on title for answer.)


From the interview on Slate Money with Richard Florida:

Felix Salmon, host: The most efficient form of transportation ever invented … the elevator – that one seems to me to be the real bottleneck.  … I’ve been trying to do the math on elevators, and it just doesn’t work …

Richard Florida: The elevator problem and the transit problem are very similar …  I’ve talked to a lot of commercial real-estate developers … they’ve got a big problem on their hands …

One thing people say is that you’re going to have big queues down the street, how do you deal with that?  You’re going to have more remote work …  (Google has just said ‘more remote work until late 2021’. … About 40 percent of us can do more remote work.)  …

The second thing people are talking about is staggered commutes.  On one hand, staggered days: you come in every other day or every third day.  On the other, staggered times: people will come in at 7, 7:30, 8.  And that’s not because of transit … that’s to take the queues out of the elevator. …

Another thing people are beginning to talk about is: Do you have a short-term rebirth of the suburban office park? … to work in an office closer to where they live part of the time as well as coming to the headquarters or central office part of the time …

It is transit and elevators that are the bottlenecks.

Segment on elevators starts at 16:20.

 

Further thoughts from PT: the economics of floor-location will change.  The first three floors up from the lobby will command a premium over the previously prestigious locations at the top with the views, because you can just walk up without a wait or a reservation.

How about making a reservation for an elevator?  For a parking space?  For a time to drive on the HOV lanes? For a particular work-day?  For a desk?  For a gym workout?  For a grocery-store shopping time or pick-up?  And oh yeah, for a restaurant.  

Welcome to the new world of reservations.

14 May 03:32

We raised $500,000!

by Dries

I'm excited to announce that the Drupal Association has reached its 60-day fundraising goal of $500,000. We also reached it in record time: in just over 30 days instead of the planned 60!

It has been really inspiring to see how the community rallied to help. With this behind us, we can look forward to the planned launch of Drupal 9 on June 3rd and our first virtual DrupalCon in July.

I'd like to thank all of the individuals and organizations who contributed to the #DrupalCares fundraising campaign. The Drupal community is stronger than ever! Thank you!

14 May 03:32

My little escapism

by Volker Weber
The easing of restrictions that has been decided doesn't mean the pandemic is over. It only means that right now there is a bed in intensive care available for you.
— Volker Weber (@vowe) May 3, 2020

Around my birthday I made myself a bit invisible for a while. Given the swarm stupidity fanned by Facebook, I need some distance from the net every now and then. In the meantime this tweet blew up to 300,000 impressions, probably because many people feel exactly the same way.

The virus is as deadly as ever; we still have no vaccine and no medicine. And it takes no interest in loudly voiced opinions. Keep your distance, stay home as much as you can, and if you do meet other people, wear a mask. Stay healthy.

14 May 03:32

Covid-19 kills the open floorplan office

by Volker Weber
Indoor spaces, with limited air exchange or recycled air and lots of people, are concerning from a transmission standpoint. We know that 60 people in a volleyball court-sized room (choir) results in massive infections. Same situation with the restaurant and the call center. Social distancing guidelines don't hold in indoor spaces where you spend a lot of time, as people on the opposite side of the room were infected.

Read the whole thing to understand why our current plan to reopen offices is not going to work.

More >

14 May 03:00

Twitter Favorites: [mironov] Just published a 2000-word blog post on how to read articles online https://t.co/INqTL6xL0y

Anton Mironov @mironov
Just published a 2000-word blog post on how to read articles online blog.mironov.live/how-to-build-y…
14 May 02:59

What the heck happened with .org?

by Eric Rescorla

If you are following the tech news, you might have seen the announcement that ICANN withheld consent for the change of control of the Public Interest Registry and that this had some implications for .org.  However, unless you follow a lot of DNS inside baseball, it might not be that clear what all this means. This post is intended to give a high level overview of the background here and what happened with .org. In addition, Mozilla has been actively engaged in the public discussion on this topic; see here for a good starting point.

The Structure and History of Internet Naming

As you’ve probably noticed, Web sites have names like “mozilla.org”, “google.com”, etc. These are called “domain names.” The way this all works is that there are a number of “top-level domains” (.org, .com, .io, …) and then people can get names within those domains (i.e., that end in one of those). Top level domains (TLDs) come in two main flavors:

  • Country-code top-level domains (ccTLDs), which represent some country or region, like .us (United States) or .uk (United Kingdom)

  • Generic top-level domains (gTLDs), which are everything else (.com, .org, .net, …)
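
As a toy illustration (just for intuition; this is string-splitting, not a real resolver), the hierarchy is visible in the name itself:

    # Toy decomposition of a domain name into its labels. The last label is the
    # TLD; the registry for that TLD keeps the records one level below it.
    name = "mozilla.org"
    labels = name.split(".")
    tld = labels[-1]   # "org" -- operated by a registry (PIR, in this story)
    sld = labels[-2]   # "mozilla" -- registered through a registrar
    print(f"{sld!r} is registered under the {tld!r} registry")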

Back at the beginning of the Internet, there were five gTLDs which were intended to roughly reflect the type of entity registering the name:

  • .com: for “commercial-related domains”
  • .edu: for educational institutions
  • .gov: for government entities (really, US government entities)
  • .mil: for the US Military (remember, the Internet came out of US government research)
  • .org: for organizations (“any other domains”)

It’s important to remember that until the 90s, much of the Internet ran under an Acceptable Use Policy which discouraged or forbade commercial use, and so these distinctions were inherently somewhat fuzzy, but nevertheless people had the rough understanding that .org was for non-profits and the like and .com was for companies.

During this period the actual name registrations were handled by a series of government contractors (first SRI and then Network Solutions) but the creation and assignment of the top-level domains was under the control of the Internet Assigned Numbers Authority (IANA), which in practice, mostly meant the decisions of its Director, Jon Postel. However, as the Internet became bigger, this became increasingly untenable, especially as IANA was run under a contract to the US government. Through a long and somewhat complicated series of events, in 1998 this responsibility was handed off to the Internet Corporation for Assigned Names and Numbers (ICANN), which administers the overall system, including setting the overall rules and determining which gTLDs will exist (which ccTLDs exist is determined by ISO 3166-1 country codes, as described in RFC 1591). ICANN has created a pile of new gTLDs, such as .dev, .biz, and .wtf (you may be wondering whether the world really needed .wtf, but there it is). As an aside, many of the newer names you see registered are not actually under gTLDs, but rather ccTLDs that happen to correspond to countries lucky enough to have cool-sounding country codes. For instance, .io is actually the British Indian Ocean Territory’s TLD and .tv belongs to Tuvalu.

One of the other things that ICANN does is determine who gets to run each TLD. The way this all works is that ICANN determines who gets to be the registry, i.e., who keeps the records of who has which name as well as some of the technical data needed to actually route name lookups. The actual work of registering domain names is done by a registrar, who engages with the customer. Importantly, while registrars compete for business at some level (i.e., multiple people can sell you a domain in .com), there is only one registry for a given TLD and so they don’t have any price competition within that TLD; if you want a .com domain, VeriSign gets to set the price floor. Moreover, ICANN doesn’t really try to keep prices down; in fact, they recently removed the cap on the price of .org domains (bringing it in line with most other TLDs). One interesting fact about these contracts is that they are effectively perpetual: the contracts themselves are for quite long terms and registry agreements typically provide for automatic renewal except under cases of significant misbehavior by the registry. In other words, this is a more or less permanent claim on the revenues for a given TLD.

The bottom line here is that this is all quite lucrative. For example, in FY19 VeriSign’s revenue was over $1.2 billion. ICANN itself makes money in two main ways. First, it takes a cut of the revenue from each domain registration and second it auctions off the contracts for new gTLDs if more than one entity wants to register them. In the fiscal year ending in June 2018, ICANN made $136 million in total revenues (it was $302 million the previous year due to a large amount of revenue from gTLD auctions).

ISOC and .org

This brings us to the story of ISOC and .org. Until 2003, VeriSign operated .com, .net, and .org, but ICANN and VeriSign agreed to give up running .org (while retaining the far more profitable .com). As stated in their proposal:

As a general matter, it will largely eliminate the vestiges of special or unique treatment of VeriSign based on its legacy activities before the formation of ICANN, and generally place VeriSign in the same relationship with ICANN as all other generic TLD registry operators. In addition, it will return the .org registry to its original purpose, separate the contract expiration dates for the .com and .net registries, and generally commit VeriSign to paying its fair share of the costs of ICANN without any artificial or special limits on that responsibility.

The Internet Society (ISOC) is a nonprofit organization with the mission to support and promote “the development of the Internet as a global technical infrastructure, a resource to enrich people’s lives, and a force for good in society”. In 2002, they submitted one of 11 proposals to take over as the registry for .org and ICANN ultimately selected them. ICANN had a list of 11 criteria for the selection and the board minutes are pretty vague on the reason for selecting ISOC, but at the time this was widely understood as ICANN using the .org contract to provide a subsidy for ISOC and ISOC’s work. In any case, it ended up being quite a large subsidy: in 2018, PIR’s revenue from .org was over $92 million.

The actual mechanics here are somewhat complicated: it’s not like ISOC runs the registry itself. Instead they created a new non-profit subsidiary, the Public Interest Registry (PIR), to hold the contract with ICANN to manage .org. PIR in turn contracts the actual operations to Afilias, which is also the registry for a pile of other domains in their own right. [This isn’t an uncommon structure. For instance, VeriSign is the registry for .com, but they also run .tv for Tuvalu.] This will become relevant to our story shortly. Additionally, in the summer of 2019, PIR’s ten year agreement with ICANN renewed, but under new terms: looser contractual conditions to mirror those for the new gTLDs (yes, including .wtf), including the removal of a price cap and certain other provisions.

The PIR Sale

So, by 2018, ISOC was sitting on a pretty large ongoing revenue stream in the form of .org registration fees. However, ISOC management felt that having essentially all of their funding dependent on one revenue source was unwise and that actually running .org was a mismatch with ISOC’s main mission. Instead, they entered into a deal to sell PIR (and hence the .org contract) to a private equity firm called Ethos Capital, which is where things get interesting.

Ordinarily, this would be a straightforward-seeming transaction, but under the terms of the .org Registry Agreement, ISOC had to get approval from ICANN for the sale (or at least for PIR to retain the contract):

7.5 Change of Control; Assignment and Subcontracting. Except as set forth in this Section 7.5, neither party may assign any of its rights and obligations under this Agreement without the prior written approval of the other party, which approval will not be unreasonably withheld. For purposes of this Section 7.5, a direct or indirect change of control of Registry Operator or any subcontracting arrangement that relates to any Critical Function (as identified in Section 6 of Specification 10) for the TLD (a “Material Subcontracting Arrangement”) shall be deemed an assignment.

Soon after the proposed transaction was announced, a number of organizations (especially Access Now and EFF) started to surface concerns about the transaction. You can find a detailed writeup of those concerns here but I think a fair summary of the argument is that .org was special (and in particular that a lot of NGOs relied on it) and that Ethos could not be trusted to manage it responsibly. A number of concerns were raised, including that Ethos might aggressively raise prices in order to maximize their profit or that they could be more susceptible to governmental pressure to remove the domain names of NGOs that were critical of them. You can find Mozilla’s comments on the proposed sale here. The California Attorney General’s Office also weighed in opposing the sale in a letter that implied it might take independent action to stop it, saying:

This office will continue to evaluate this matter, and will take whatever action necessary to protect Californians and the nonprofit community.

In turn, Ethos and ISOC mounted a fairly aggressive PR campaign of their own, including creating a number of new commitments intended to alleviate concerns that had been raised, such as a new “Stewardship Council” with some level of input into privacy and policy decisions, an amendment to the operating agreement with ICANN to provide for additional potential oversight going forward, and a promise not to raise prices by more than 10%/year for 8 years. At the end of the day these efforts did not succeed: ICANN announced on April 30 that they would withhold consent for the deal (see here for their reasoning).

What Now?

As far as I can tell, this decision merely returns the situation to the status quo ante (see this post by Milton Mueller for some more detailed analysis). In particular, ISOC will continue to operate PIR and be able to benefit from the automatic renewal (and the agreement runs through 2029 in any case). To the extent to which you trusted PIR to manage .org responsibly a month ago, there’s no reason to think that has changed (of course, people’s opinions may have changed because of the proposed sale). However, as Mueller points out, none of the commitments that Ethos made in order to make the deal more palatable apply here, and in particular, thanks to the new contract in 2019, ISOC is free to raise prices without being bound by the 10% annual commitment that Ethos had offered.

It’s worth noting that “Save dot Org” at least doesn’t seem happy to leave .org in the hands of ISOC and in particular has called for ICANN to rebid the contract. Here’s what they say:

This is not the final step needed for protecting the .Org domain. ICANN must now open a public process for bids to find a new home for the .Org domain. ICANN has established processes and criteria that outline how to hold a reassignment process. We look forward to seeing a competitive process and are eager to support the participation in that process by the global nonprofit community.

For ICANN to actually try to take .org away from ISOC seems like it would be incredibly contentious, and ICANN hasn’t given any real signals about what they intend to do here. It’s possible they will try to rebid the contract (though it’s not clear to me whether the contract terms really permit this) or that they’ll just be content to leave things as they are, with ISOC running .org through 2029.

Regardless of what the Internet Society and ICANN choose to do here, I think that this has revealed the extent to which the current domain name ecosystem depends on informal understandings of what the various actors are going to do, as opposed to formal commitments to do them. For instance, many opposed to the sale seem to have expected that ISOC would continue to manage .org in the public interest and felt that the Ethos sale threatened that. However, as a practical matter the registry agreement doesn’t include any such obligation and in particular nothing really stops them from raising prices much higher in order to maximize profit as opponents argued Ethos might do (although ISOC’s nonprofit status means they can’t divest those profits directly). Similarly, those who were against the sale and those who were in favor of it seem to have had rather radically different expectations about what ICANN was supposed to do (actively ensure that .org be managed in a specific way versus just keep the system running with a light touch) and at the end of the day were relying on ICANN’s discretion to act one way or the other. It remains to be seen whether this is an isolated incident or whether this is a sign of a deeper disconnect that will cause increasing friction going forward.

The post What the heck happened with .org? appeared first on The Mozilla Blog.

14 May 02:59

More on the Default Feeds Issue

Here’s the latest on the story from yesterday.

NetNewsWire 5.0.1 for iOS was approved for the App Store this morning, and I assumed that was the end of it. I figured this whole thing was just an error.

But later today I heard from Apple that, while this latest version has been approved, the app is now under further review for this issue.

This isn’t quite over yet — but at least we could ship 5.0.1, so that’s cool, and I’m glad.

More Details

The issue really is about the default feeds. They’re added by default on the first run of the app.

Apple suggested some options — things I could do if, after further review, they decide that I need to bring the app into legal compliance:

  • Provide documentation (to Apple) expressing permission to use those feeds as defaults
  • Have the user, on first run, pick from a list of these feeds
  • For these feeds, show only a title, and then link out to Safari

For now I’m not doing any of those things, since Apple’s review is ongoing. I’ll wait for the review to complete.

If the review completes and I do need to do something, I’ll take the first option: I’ll get the necessary documentation.

(Yes, I could change the UX instead. But I don’t want to — the app works the way I think is best. You could debate whether I’m right or wrong on that point, but there’s no debating that this is the UX I want.)

I’m Not Actually Against Getting Permission

As I wrote on the NetNewsWire FAQ about the default feeds:

We change the feeds from time to time. We don’t have any arrangements with the feed owners, though we usually ask permission — unless it’s something like Daring Fireball or Six Colors where it would obviously be no problem.

The authors of Daring Fireball, Six Colors, and a few other sites are friends, and I don’t need to bug them to ask permission. There are other default feeds where I know the people less well (or not at all), and I have asked permission from people — not because I was worried legally but because it seemed like basic courtesy. I don’t think anyone’s ever said no, but I did want to give them the chance.

But can I find all these conversations, and can I turn those conversations over to Apple without asking the other parties?

I don’t think so. So this would mean going through and getting explicit permission from a dozen-ish different people and turning copies over to Apple.

Which is fine. I can do that if Apple decides they need that documentation. It’s not onerous.

What Still Rubs Me the Wrong Way

I’m trying to figure out what bothers me. I think there are two things.

One is just that the App Store has always seemed rather arbitrary. The guidelines don’t even have to change for unseen policies to change, and it’s impossible to know in advance if a thing you’re doing will be okay and stay okay. (Recall that NetNewsWire has been doing the same thing with default feeds for 18 years.)

This gets really tiring, because every time we submit an app — even just a bug-fix release, like 5.0.1 is — I have to deal with the anxiety as I wonder what’s going to happen this time.

The other issue is a little harder to explain, but it goes like this:

If a site provides a public feed, it’s reasonable to assume that RSS readers might include that feed in some kind of discovery mechanism — they might even include it as a default. This is the public, open web, after all.

Now, if NetNewsWire were presenting itself as the official app version of Daring Fireball, for instance, then that would be dishonest. But it’s not, and that’s quite clear.

To nevertheless require documentation here is for Apple to use overly-fussy legal concerns in order to infantilize an app developer who can, and does, and rather would, take care of these things himself.

In other words: lay off, I want to say. I’m an adult with good judgment and I’ve already dealt with this issue, and it’s mine to deal with.

14 May 02:59

On Talking with the Duet Folks

One of the teams I talked to during my job hunt — and that didn’t put me through a scary tech interview :) — was the folks at Duet.

They make an app where you can use an iPad as a second screen for your Mac. They also support Android and Windows. (You should check out their apps.)

I thoroughly enjoyed talking with the team, and I believe I would have been very happy working there.

They’re looking for a Mac developer. Maybe you? Get in touch via their contact page.

14 May 02:59

Mozilla research shows some machine voices score higher than humans

Jofish Kaye, The Mozilla Blog, May 12, 2020

I still can't stand listening to machine voices (and the samples here haven't changed my response) but they're getting better and, as noted in this study reported by Mozilla, some machine-generated voices are scoring more highly than some human voices. I would imagine machine voices will soon become ubiquitous, which means the selection of voice matters. "What happens when computers are more pleasant to listen to than our own voices? What happens when our children might prefer to listen to our computer reading a story than ourselves?... What happens when we can increase the number of people who believe something simply by changing the voice that it is read in?" Related: Google Duo uses AI to insert missing packets in online audio.

14 May 02:59

Summer2020 Blogging Fest

Laura Gibbs, OU Digital Teaching, May 12, 2020

This is a great initiative. "I will be taking some time this summer to share some more detailed information about blogging," writes Laura Gibbs, "both as a strategy for fully online courses but also as a strategy for any course, including classroom-based courses."

14 May 02:59

Last weekend we already enjoyed some cake to ce...

by Ton Zijlstra

Last weekend we already enjoyed some cake to celebrate my 50th birthday today. It’s odd having my birthday during the pandemic lockdown, but at the same time I don’t mind not celebrating it (I often don’t). I remember how my dad absolutely hated it when he turned 50, and to evade being home took me on a walking trip in the Alps, despite his fear of heights. I don’t have any particular feeling about it, except maybe that my continued sense that I’m just starting out is a bit more out of place now.


14 May 02:59

Surviving a pandemic

by Chris Corrigan

Sunday was one of these magnificent days we get on the south coast of British Columbia at this time of year, where the deep summer makes a preview appearance with warm temperatures, bright sunshine, and the scent of berry blossoms and new grass wafting on the air. It’s just humid enough that it is warm in the shade, and there is no bite in the breeze by the water. So it was a perfect day to launch the kayaks.

Our first paddle of the year took us out of Galbraith Bay on the northwest side of Bowen Island and had us exploring the shoreline around to Grafton Bay and back. The tide was low and there was lots to see, but most remarkable of all was the number of sea stars.

Back in 2013, sea stars began dying by the millions. Over 40 species on our coast fell victim to what became known as Sea Star Wasting Disease, which causes them to dissolve and die. Our once abundant sea star populations crashed. Our coast was once covered in deep purple ochre sea stars and the bottoms were patrolled by magnificent huge twenty-legged sunflower stars, with another few dozen species thrown in the mix. Overnight, these iconic creatures disappeared and some years it was nearly impossible to find even one.

The cause of the collapse of sea stars was attributed to warmer seawater – an effect of climate change – and the spread of a virus against which the sea stars were helpless. They had their pandemic, and it looked like entire local populations would be completely wiped out.

Last year, whilst paddling in the Gulf Islands, we saw some ochre sea stars in small patches on some of the more isolated islands, like the Secretary Islands in the Trincomali Channel. This was encouraging.

But yesterday, paddling at low low tide, we rounded a point and saw entire shorelines covered with young ochre sea stars. There were hundreds of them crammed into cracks and nooks and crannies. The vast majority of them were young, and clumping together protected them from hungry gulls that like to wolf them down whole. In a year these stars will be big enough to feast on the sea urchins that have cleaned out our local kelp beds, and thereby restore the balance of plants and animals in that little cycle. More kelp means more small fish, and that is good for the seals and sea lions and the salmon and the orcas. And so goes the chain of interrelationships and interdependencies that make up the marine ecosystem of my local fjord.

Seeing these huge groups of sea star survivors was moving, because I’ve missed them for these past seven years. And in a time of the global pandemic, where we are helpless against our own viral threat, it was good to be reminded of cycles, and resilience.

I hope our sea stars make it, and that they adapt to the ocean conditions and that they have somehow developed an immunity to their virus. I’m hoping they are different and no longer as vulnerable to climate change or disease. The threat of a second spike is always just around the corner, as it is with our virus at the moment. Witnessing a mass local extinction of an iconic species is sobering, and the reminder that survival is possible under changed conditions is encouraging.

I have no idea how OUR pandemic will play itself out: I imagine we will survive. But I’m interested in how we will be different too. This virus detests the things that will defeat it: collaboration, sharing of information, care and protection of the vulnerable, gentleness, compassion. Absent these things, the novel coronavirus COVID-19 will thrive and continue to run through us all, taking the vulnerable and randomly selecting from the rest. If we beat it, it will only be with the capacities and capabilities that make us the best of who we can be.

14 May 02:59

Making stupid Excel bar charts

by Nathan Yau

I’m just gonna put this right here, from @_daviant: “Another day another stupid Excel chart”.

Tags: Excel, humor

14 May 02:59

New Comments-Only Feed

by Ton Zijlstra

In response to Peter’s earlier request I have created a new RSS feed that contains only comments on postings, not other types of reactions such as likes, mentions, and ping- or trackbacks. It was a bit of a puzzle to get it all working, having me dive down the rabbit hole leading to the maze that is the WordPress documentation. With some suggestions from Jan Boddez, I now have a result. The new feed is listed on the right hand side. Subscribe to it if you care to follow conversations on this blog. The feed with all interactions, so including likes etc., is also available.

I documented how I created the feed over in the wiki.

14 May 02:58

Dithering and Open Versus Free

by Ben Thompson

The big news, in case you haven’t yet heard: John Gruber and I have launched a new podcast called Dithering:


Dithering costs $5/month or $50/year. If you’re a Stratechery subscriber, it costs $3/month or $30/year to add it as a bundle.1

Dithering covers some of the same topics as Stratechery — the Dithering web page has a descriptive list of topics — but in the conversational style that many of you have enjoyed on my appearances on Gruber’s The Talk Show podcast, or in the Daily Update Interview I did with Gruber last Thursday.2 Expect less in-depth analysis than a typical Stratechery post, and more back-and-forth, with the occasional foray into non-tech topics. All in fifteen minutes, exactly. It’s perfect for your dishwashing commute!

That time limit is certainly a challenge (that is why we recorded 20 episodes before we launched — the entire back catalog is available to subscribers), but we really wanted to experiment with what a podcast might be. We purposely don’t have show notes or much of a web page, and we have created evocative cover art embedded in each episode’s MP3, because the canonical version of Dithering is in your podcast player. This is as pure a podcast as can be — and that means open, even if it isn’t free.

Open != Free

I’m used to dealing with the seeming contradiction between open and free: back in 2014 I started selling an email I called the Daily Update. There was no special app required, and while Daily Updates were archived on the web, I took care to not shove a paywall in your face; if you wanted more content from me you could pay for more, and I would send you an email over the open SMTP protocol, that landed in the email client you already used.

This combination of open and for-pay turned out to be extraordinarily powerful: even as closed but free feeds like Facebook were turning into pay-to-play for publishers, email remained the only feed that everyone checked every day that didn’t have a gatekeeper, which made it the best possible means of delivering the value proposition I was charging for — a proposition I most clearly defined in 2017’s The Local News Business Model:

It is very important to clearly define what a subscriptions means. First, it’s not a donation: it is asking a customer to pay money for a product. What, then, is the product? It is not, in fact, any one article (a point that is missed by the misguided focus on micro-transactions). Rather, a subscriber is paying for the regular delivery of well-defined value.

The importance of this distinction stems directly from the economics involved: the marginal cost of any one Stratechery article is $0. After all, it is simply text on a screen, a few bits flipped in a costless arrangement. It makes about as much sense to sell those bit-flipping configurations as it does to sell, say, an MP3, costlessly copied.

So you need to sell something different.

In the case of MP3s, what the music industry finally learned — after years of kicking and screaming about how terribly unfair it was that people “stole” their music, which didn’t actually make sense because digital goods are non-rivalrous — is that they should sell convenience. If streaming music is free on a marginal cost basis, why not deliver all of the music to all of the customers for a monthly fee?

This is the same idea behind nearly every large consumer-facing web service: Netflix, YouTube, Facebook, Google, etc. are all predicated on the idea that content is free to deliver, and consumers should have access to as much as possible. Of course how they monetize that convenience differs: Netflix has subscriptions, while Google, YouTube, and Facebook deliver ads (the latter two also leverage the fact that content is free to create). None of them, though, sells discrete digital goods. It just doesn’t make sense.

Aggregators Versus Publishers

This model is pretty good for consumers: they get access to an abundance of content for a set price. It’s great for the Aggregators: because they have so many consumers, the suppliers of content are forced to accede to the Aggregator’s terms, even as Aggregators are best placed to serve advertisers. That is another way of saying that it is the individual content maker that is getting the short end of the stick:

  • On Spotify, individual artists make fractions of a cent per play, and their payout is based on their share of all Spotify plays; if you have a super-fan that listens to nothing but your songs, you still only get a few pennies.
  • On Netflix, show creators are getting bigger payments up front, but in return they are giving up residuals and international rights; Netflix owns all of the upside.
  • YouTube is actually one of the more creator-friendly Aggregators: what you earn is pretty closely tied to how many views you achieve. That, though, means a hamster wheel lifestyle of constantly churning out content and begging for subscribers, even as it requires ever more views to achieve the same amount of money. And, of course, YouTube could de-monetize you at any time, for any reason.
  • Google helps consumers find content, but because (all of the) consumers start with search, so do advertisers; Facebook lets consumers make content, and then favors it over professionally produced links.

It is important to note that, the constant griping of traditional gatekeepers notwithstanding, Aggregators are by definition good for most content creators; after all, everyone is now a content creator, whereas previously publishing was reserved for those who had access to physical assets like printing presses, recording studios, or broadcast towers. That means most people are publishing for the first time (with effects both good and bad).

It also means that traditional publishers face more competition for attention, and, as long as they rely on Aggregators, an inherently unstable source of income: one big song, show, video, or article can make some money, but without an ongoing connection and commitment from the consumer to the content creator, it is increasingly impossible to make a living.

Subscriptions and Open Protocols

This is why subscriptions — “paying for the regular delivery of well-defined value” — are so important. I defined every part of that phrase:

  • Paying: A subscription is an ongoing commitment to the production of content, not a one-off payment for one piece of content that catches the eye.
  • Regular Delivery: A subscriber does not need to depend on the random discovery of content; said content can be delivered to the subscriber directly, whether that be email, a bookmark, or an app.
  • Well-defined Value: A subscriber needs to know what they are paying for, and it needs to be worth it.

This runs in the opposite direction of a Spotify-type model, even as it takes advantage of the same foundation of zero marginal costs. If an email is an artifact of hard work creating something people are interested in, the open ecosystem of HTTP and SMTP drives the costs of delivering that artifact to zero. There is no massive streaming infrastructure to build, nor endless data centers in the cloud — this can all be rented for not much money at all — which means that the cost structure of an independent creator can be dramatically lower than any traditional publisher, even as their addressable market is the same size.

HTTP and SMTP, though, are not the only open protocols available to publishers: RSS is another, and it is the foundation of the podcast ecosystem. Most don’t understand that podcasts are not hosted by Apple, but rather that iTunes is a directory of RSS feeds hosted on servers all over the Internet. When you add a podcast to your podcast player, you are simply adding an RSS feed that includes information about the show, and a link for where to download new episodes.
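
To make that concrete, here is a minimal sketch of what any podcast player does with such a feed: fetch it, parse it, and read out each episode's enclosure (the pointer to the audio file). The feed URL below is a made-up placeholder:

    # Minimal podcast-feed reader; the URL is an invented example.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/podcast/feed.xml"

    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # RSS 2.0 layout: <rss><channel><item>...; <enclosure> holds the audio URL.
    for item in tree.iterfind("./channel/item"):
        title = item.findtext("title")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            print(title, "->", enclosure.get("url"))

A paid, per-subscriber feed like Dithering's is the same structure, just served at a URL unique to each subscriber.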

This, if you squint, looks a lot like email: create something that listeners find valuable on an ongoing basis, and deliver it into a feed they already check, i.e. their existing podcast player. That is Dithering: while you have to pay to get a feed customized to you, that feed can be put in your favorite podcast app, which means Dithering fits in with the existing open ecosystem, instead of trying to supplant it.

Well, almost all podcast apps: Spotify is an exception.3

Spotify’s Facebook Play

Podcasting, as I wrote last year, looks a lot like the early web; iTunes is the Yahoo directory, and advertising is punching the monkey:

The current state of podcast advertising is a situation not so different from the early web: how many people remember this?

The old "punch the monkey" display ad

These ads were elaborate affiliate marketing schemes; you really could get a free iPod if you signed up for several credit cards, a Netflix account, subscription video courses, you get the idea. What all of these marketers had in common was an anticipation that new customers would have large lifetime values, justifying large payouts to whatever dodgy companies managed to sign them up.

The parallels to podcasting should be obvious: why is Squarespace on seemingly every podcast? Because customers paying monthly for a website have huge lifetime values. Sure, they may only set up the website once, but they are likely to maintain it for a very long time, particularly if they grabbed a “free” domain along the way. This makes the hassle of coordinating ad reads and sponsorship codes across a plethora of podcasts worth the trouble; it’s the same story with other prominent podcast sponsors like ZipRecruiter or SimpliSafe.

Some are content with this state of affairs, and I understand the sentiment: the early web, annoying banner ads notwithstanding, was in many respects a nicer place as well, and some folks even made money. The problem, though, is it didn’t last: once Google and Facebook figured out that the best way to advertise was to aggregate users and deliver targeted ads, the open web withered; it is only in the last few years that, thanks to email, independent publishing is making a return.

I strongly believe that podcasting is approaching a similar precipice; I wrote a year ago that Spotify wants to be the podcast Aggregator:

What I think Spotify senses, though, is that while podcasts, at least in theory, solve many of their business model problems, Spotify is also uniquely positioned to solve the problems of many podcasters/suppliers. To wit:

  • Increasing advertising revenue for the entire industry requires a centralized player that can leverage a large userbase. Spotify is still a distant second to Apple in podcasts, but they are growing fast. Just as importantly, Spotify already has a strongly growing advertising business — again, larger than the entire podcast market — that it can extend to podcasts.
  • The open nature of podcasts means it is very difficult to monetize users directly; Spotify, though, has already built an entire infrastructure around monetizing users directly. Podcasts exclusive to Spotify can likely make meaningful money from Spotify subscribers that still gives Spotify far higher margin than music.

Spotify CEO Daniel Ek made clear that this was the goal after the company acquired The Ringer; from the Q1 2020 earnings call:

When we look at the overall opportunity, it is pretty clear that we haven’t added Internet-level monetization yet to audio. So, all the things that you’ve come to expect in video and display in terms of measurability, in terms of just targeting, a lot of that is lacking in podcasts today. And you’ve seen it time and time again. As you add those capabilities, you generally can raise CPMs across the board, because advertisers feel more certain about the results that they’re getting. And if we do that, that’s going to be a tremendous benefit for all the podcasting creators, but it’s also going to be a tremendous benefit for Spotify.

This is why Spotify adjusted its accounting last quarter to recognize the cost of its owned-and-operated podcast production as an expense for its advertising business; the company isn’t primarily focused on acquiring paying subscribers, although that is a nice side effect of increased engagement on Spotify. The real goal is to intermediate podcasters and listeners and take over podcast advertising just like Facebook and Google took over web advertising.

That, though, is bad for openness — indeed, Spotify isn’t open at all. You can’t simply add an RSS feed to Spotify, as you can most other podcast players. Rather, podcasters have to submit their feeds to Spotify and agree to the service’s terms of service, which can be changed at any time at Spotify’s sole discretion. Sure, the terms are relatively benign today; they could include the right to insert advertising tomorrow. Even if that doesn’t happen, though, Spotify still is not open: they can take down your content or choose not to play it, just as Facebook could not show your page unless you were willing to pay-to-play.

This is where, as I noted when I launched the Daily Update podcast, it is important to distinguish between my role as analyst, podcaster, and publisher:

Analyst Ben says it is a good idea for Spotify to try and be the Facebook of podcasting…Writer/Podcaster Ben certainly sees the allure: having my podcast available to Spotify’s 271 million monthly active users would be great; for that matter, having this Daily Update read by everyone I could reach on Facebook would be great as well. I’ve already put in the work, why not reach everyone? Indeed, were I supported by advertising, that would be the imperative.

Publisher Ben, though, remembers that my business model is predicated on a higher average revenue per user (thanks to subscriptions), not a higher number of users; that means making tradeoffs, and foregoing wide reach is one of them. That, by extension, means not agreeing to Spotify’s terms for Exponent, and accepting that leveraging RSS to have per-subscriber feeds makes having the Daily Update Podcast on Spotify literally impossible. More broadly, owning my own destiny as a publisher means avoiding Aggregators and connecting directly with customers.

Dithering is another effort driven by Publishers Ben and John; if we are to maintain a thriving podcast ecosystem that is open, we must figure out monetization, and from my perspective, that means subscriptions. The fact that Spotify won’t even allow Dithering to be played on their app only increases the urgency: if the choice is free and closed versus for-pay and open I will always push for the latter — three times a week, 15 minutes per episode.


Some additional notes on Dithering and the service powering it:

  • Yes, there are several companies building out paid podcasting services. None of them had all of the features I wanted, including full control over hosting, the ability to create bundles, and customized feeds based on subscription status; to paraphrase Alan Kay, I’m serious about figuring out how paid podcasts might be a sustainable business not only for me but for the entire ecosystem, and that meant building my own software to maximize experimentation.
  • My good friends at Model Rocket did most of the actual work on the podcast service; they are, if I may use the term, rock stars, and you haven’t heard the last of our collaboration. Brad Ellis (who designed the Stratechery logo) created Dithering’s look-and-feel in collaboration with Gruber.

  • The Stratechery + Dithering bundle originally launched with both podcasts on the same feed; I think there is tremendous potential in this approach, but a positive user experience would require podcast players to adopt a new standard we proposed. That seems unlikely given that…

  • It is frustrating the degree to which many players don’t abide by the current podcast standard. Few apps, for example, respect the <itunes:image> tag, which shows per-episode artwork in the feed (and Apple’s own Podcasts app doesn’t even show MP3 artwork).

  • To that end, last night we pushed a new update that splits Stratechery and Dithering into two feeds, even if you subscribe to the bundle. Visit the podcast management page to add the independent Dithering feed.

Finally, Stratechery and the Daily Update are not going anywhere. Indeed, I am more inspired than ever — building something is nice in that way. Also, don’t miss Gruber’s post about Dithering, and note that both Exponent and The Talk Show remain as free podcasts. Free is fine! — but we should not forget that open is more important.

  1. Unfortunately due to the limitations of our membership software, we can’t offer monthly subscriptions to annual subscribers; if you are an annual subscriber and add on Dithering, and realize you don’t like it, we will refund you your remaining 11 months
  2. That interview is free-to-listen-to even if you aren’t a Daily Update subscriber; create an account and add a Stratechery feed here.
  3. And, to be fair, Google Podcasts and Stitcher; this analysis applies to them as well
14 May 02:57

All 315 Wirecutter Budget Picks

by Annam Swanson

At Wirecutter, we have several different recommendation tiers. At the top of the heap is “our pick”—the one thing that, after rigorous testing, we think is the best choice for most people. Then there are upgrade picks (usually boasting extra or high-end features, benefits, and conveniences for those willing to splurge), runners-up (a very strong option if you can’t get your hands on our top pick), also-greats (models we really like that just missed earning a spot on the podium), and budget picks (recommended, serviceable choices that make trade-offs—in aesthetics, durability, or functionality—in exchange for a lower price).

14 May 02:56

What happened to train travel after the 1918 Flu?

by Gordon Price

Michael Gordon writes …

We’re coming across a lot of predictions on how life will change after the current pandemic – such as “The Harsh Future of American Cities: How the pandemic will alter our urban centers, now and maybe forever”.

However, when I read sweeping statements about history, I do like to see some statistical foundation for the statement.  So I thought, let’s have a look at the passenger rail statistics (which admittedly do not account for ‘how people felt about being on a train.’)

By 1920, passenger numbers on Canadian Pacific Railway trains had increased from 14.4 million to 16.9 million*.

South of the border, the number of rail passengers increased from 1.1 billion in 1918 to 1.27 billion in 1920**.

My grandparents, who were in their mid-20s in 1918, never mentioned the Spanish Flu epidemic or how it changed things. I do recall lots of mentions of having their first car and learning how to drive in the mid-1920s. But they still traveled and took the train when they were heading back to Ontario from Kamloops.

Travelling on a CPR passenger train in the 1920’s

 

Cleaning a CPR Passenger Train

 

*Harold Innis, 1923, A History of the Canadian Pacific Railway, p. 198.

**US government document (1958), Historical Statistics of the United States, Colonial Times to 1957, p. 430.

14 May 02:56

Bread Scheduler

by Nathan Yau

Is bread-making still a thing, or is that so two weeks ago? If you’re late to the sourdough train or still working out the feeding schedule, the Bread Scheduler by Stuart Thompson makes the timing more obvious. Just enter when you want to start or finish, and the scheduler works out the details.

I haven’t found the patience yet for sourdough. For less time-sensitive breads, try out no-knead bread or this easy focaccia.

Tags: bread, schedule, Stuart A. Thompson

14 May 02:56

Twitter not returning to the office any time soon

by Volker Weber
Twitter CEO Jack Dorsey emailed employees on Tuesday telling them that they’d be allowed to work from home permanently, even after the coronavirus pandemic lockdown passes. Some jobs that require physical presence, such as maintaining servers, will still require employees to come in. ...

In his email, Dorsey said it’s unlikely Twitter would open its offices before September, and that business travel would be canceled until then as well, with very few exceptions. The company will also cancel all in-person events for the rest of the year, and reassess its plan for 2021 later this year. Finally, Twitter upped its allowance for work from home supplies to $1,000 for all employees.

Only one question: $1,000/month or $1,000 once? Allowance implies a monthly payment and gives you an idea of how expensive your desk at work is.

More >

14 May 02:56

My New Job

As of this morning the ink is all dry, and I can happily report that my new job is at Audible. I’ll be an architect on the mobile team.

I’m very excited for this job! It’s perfect for me in so many ways — not least that it’s about books.

My plan is for this to be my last job — I plan to work at Audible until I retire. I start Monday. 🐣🐥🕶

14 May 02:56

21st century democracy requires an open web

Ben Werdmuller, May 12, 2020

Back in 2012 Sebastian Thrun made the ridiculous statement that in fifty years, there would be only 10 institutions in the whole world delivering higher education. But what if it wasn't so ridiculous? No, not in the way we think, where there are only 10 universities. But more like what has happened to the news industry: "the entire news industry has consolidated down to two points of distribution... Facebook and Google have outsized supplier power over the entire news industry." In other media - movies, say - other technology companies are building distribution monopolies. Why couldn't this happen to online education? What can be done to stop it?

14 May 02:50

How about hyperlocal pandemic forecasting

I’m a big fan of weather forecasts. It’s an incredible feat to describe the ever-changing multi-variable fluid-dynamical state of the freaking atmosphere in such simple terms that we can

  • act – imagine how insanely complex it must be just to say “it’s going to rain today,” yet the entire forecast can be summarised in that line and I know whether to wear a coat.
  • communicate – I can hear or read a weather forecast and share it with another person, using speech only, face to face or just over the phone. I can’t even figure out how to share a URL to a Facebook post half the time.
  • reason – seeing a warm front on a map lets me predict for myself the next 24 hours, despite me having no idea about the actual numbers in the underlying atmospherics equations.

Amazing.

I’ve just been looking at some stats. Next-day forecasts are 80% accurate, up from 66% a decade ago. The UK Met Office’s 4-day forecast today is as accurate as its 1-day forecast was in 1980.

Barometers: especially good. With their dial running stormy/rain/change/fair/etc. See some antique ones here. It’s everything you need to know in a single instrument: e.g. it was originally raining and it is improving quickly. A vector not a point. Well done barometers!

Sometimes I wish I had a weather app that did the same. Open it, and the screen would just say “wear the same as you wore yesterday,” or maybe also wetter/drier/windier.

The app Dark Sky comes closest to that magical feeling, although in a different way: its hyperlocal, to-the-minute forecasts aren’t always accurate, but when it says “rain stopping in 12 minutes” and then, in 12 minutes, the SKIES CLEAR and you can go outside without bothering to take a hat… it’s a superpower.

I read recently that weather forecasts are suffering because flights have been cancelled, and aircraft are responsible for a large amount of the data that goes to feed the simulations:

The European Centre for Medium-Range Weather Forecasts reports an 80% drop in meteorological readings due to cancellations of commercial flights. According to their study, removing all aircraft data from weather models reduces accuracy by 15%.

(You might guess that I took the atmospherics module at uni, and meteorology was my favourite part of geography at school.)


ANYWAY SO HERE’S MY QUESTION:

How about hyperlocal, to-the-minute pandemic forecasts?


The UK govt announced its COVID Alert Levels which run from 1-5. Here they are [pdf]. e.g.:

  • Level 2: COVID-19 is present in the UK, but the number of cases and transmission is low
  • Level 4: A COVID-19 epidemic is in general circulation; transmission is high or rising exponentially

Like a barometer, the level takes into account situation and direction of change and rate of change.

The government also released an equation which has been roundly mocked. Here it is:

COVID Alert Level = Rate of infection + Number of infections

Which is… fine? I don’t get the mocking. I mean, it communicates exactly the right information. The alert level goes up if either of the other two numbers goes up. What were they supposed to write? It’s impressive wordsmithing to convey that entire mental model so concisely.
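
In that spirit, here is a toy version (the weights and the 1-5 scale below are invented for illustration, not the government's actual method) of how a current state plus a direction of change collapses into one number:

    # Invented weighting, purely to illustrate the "barometer" idea: more
    # infections raises the level, and a growing epidemic (R > 1) raises it further.
    def alert_level(infections_per_100k: float, r: float) -> float:
        level = 1 + min(infections_per_100k / 25, 3)  # current state
        level += max(r - 1.0, 0)                      # direction of change
        return round(min(level, 5), 1)

    print(alert_level(infections_per_100k=40, r=1.2))  # -> 2.8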

But what caught my attention was that if the alert level rises, the lockdown would be once again tightened… and they said this could happen locally.

WHICH LED ME TO THINK:

What would it mean to have a giant pandemic simulation running on those impressive Met Office computers?

AND FURTHERMORE:

Could this pandemic mirror world be used for forecasting?

How many million data points would it need to be fed to forecast

  • how many contagious infections there are in a particular neighbourhood, at a particular time,
  • the effective rate of infection, given crowd levels etc, and whether crowds are likely to form because of the weather and so on,
  • how to compare levels of exposure, just for you, given your route.

What sensors would be required to feed such a simulation?

How fine could the resolution become?

Could some kind of future Dark Sky meets Citymapper meets contact tracing app say things like…

“well you’re near London Bridge and the general number of infections is low, but there’s been an infection wavefront moving up slowly from the south plus, huh, Tuesday morning it usually gets kinda busy, so between 8-10am in that area we’re forecasting a COVID Alert Level of 4.3.

“BUT the surrounding neighbourhood we’re looking at a local alert level of 3.6 and falling,

“AND SO,

“if you get off the train one stop early and walk the rest of the way to work, sure you’ll be out in public for 30 minutes longer, but your exposure overall will still be lower, so that’s your recommended route this morning.”

???

Communicating this might end up looking a bit like rads, the unit of absorbed radiation dose.

Maybe your phone could track your location and give you a live exposure number over the day, like a badge? It’s 2pm and you’re at 40 co-rads today. We recommend you leave before rush hour and take this 20 co-rad route home, also WASH YOUR HANDS.
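
Sketching that hypothetical badge (everything here is made up: the unit, the route, and the local levels):

    # Hypothetical "co-rads": local alert level multiplied by hours of exposure,
    # summed over the legs of a day's movements. All numbers are invented.
    route = [
        # (place, local alert level, minutes spent there)
        ("train, one stop early", 3.6, 10),
        ("walk the rest of the way", 2.1, 30),
        ("office, morning", 4.3, 120),
    ]

    def co_rads(legs):
        return sum(level * minutes / 60 for _, level, minutes in legs)

    print(f"{co_rads(route):.1f} co-rads so far today")  # -> 10.2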

14 May 02:20

What happened to train travel after the 1918 Flu?

by Gordon Price
mkalus shared this story from Price Tags.

Michael Gordon writes …

We’re coming across a lot of predictions on how life will change after the current pandemic – such as “The Harsh Future of American Cities: How the pandemic will alter our urban centers, now and maybe forever”.

However, when I read sweeping statements about history, I do like to see some statistical foundation of the statement.  So I thought, let’s have a look at the passenger rail statistics (which admittedly do not account for ‘how people felt about being on a train.’)

In 1920, passenger numbers increased on the Canadian Pacific Railway passenger trains from 14.4 million to 16.9 million*. 

South of the border, the number of rail passengers increased from 1.1 billion in 1918 to 1.27 billion in 1920**.

My grandparents who were in their mid-20’s in 1918 never mentioned the Spanish Flu epidemic or how it changed things. I do recall lots of mentions of having their first car and learning how to drive in the mid-1920’s. But they still traveled and took the train when they were heading back to Ontario from Kamloops.

Travelling on a CPR passenger train in the 1920’s

 

Cleaning a CPR Passenger Train

 

*Harold Innes, 1923, History of the CPR, p. 198.

**US government document (1958), Historical Statistics of the United States, Colonial Times to 1957, p. 430.

14 May 02:14

Thoughts on where tools fit into a workflow

by Brett Cannon

I am going to admit upfront that this is a thought piece, a brain dump, me thinking out loud. Do not assume there is a lesson here, nor some goal I have in mind. No, this blog post is providing me a place to write out what tools I use in my ideal development workflow (and yes, this will have a bias towards the Python extension for VS Code 😁).

While actively coding

The code-test-fix loop

Typically when I am coding I think about what problem I'm trying to solve, what the API should look like, and then what it would take to test it. I then start to code up that solution, writing tests as I go. That means I have a virtual environment set up with the latest version of Python and all relevant required and testing-related dependencies installed into it. I am also regularly running the test I am currently working on, plus the related tests I already have, to prevent any regressions. But the key point is a tight development loop where I'm focusing on the code I'm actively working on.

The tools I'm using the most during this time are:

  1. pytest
  2. venv (although since virtualenv 20 now uses venv I should look into using virtualenv)
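
For illustration, a minimal sketch of bootstrapping that loop with nothing but the standard library (Unix-style paths assumed; in practice I'd just type the equivalent shell commands):

    # Illustrative sketch of the venv + pytest loop (Unix-style paths assumed).
    import subprocess
    import venv

    venv.create(".venv", with_pip=True)  # create the virtual environment
    subprocess.run([".venv/bin/pip", "install", "-e", ".", "pytest"], check=True)
    subprocess.run([".venv/bin/python", "-m", "pytest"], check=True)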

Making sure I didn't mess up

Once code starts to reach a steady state and the design seems "done", that's when I start to run linters and to expand the testing to other versions of Python. I also start to care about test coverage. I put this off until the code is "stable" to minimize churn and the overhead of running a wider set of tools and awaiting their results, which slows down the development process.

Now, I should clarify that for me, linters are tools you run to check your code for something that does not require running under a different version of Python. If you have to run something under every version of Python that you support, then that's a test to me, not a lint. This allows me to group linters together and run them only once instead of under every version of Python with the tests, cutting the execution time down.

The tools that I am using during this time are:

  1. coverage.py
  2. Black
  3. mypy
  4. I should probably start using Pyflakes (or flake8 --ignore=C,E,W)

Running these tools all the time can be a bit time-consuming. I have to remember to do it, and they don't necessarily run quickly. Luckily I can amortize the cost of running linters thanks to support in the Python extension for VS Code. If I set up the linters to run when I save, I can have them running regularly in the background and not have to do the work they would otherwise ask me to do later. Since the results show up as I work, without having to wait for a manual run, it becomes much cheaper to run linters. Same goes for setting up formatters (which also act as linters when you're enforcing style).

The problem is not everyone uses VS Code. To handle the issue of not remembering what to run, people often set up tox or nox, which also have the benefit of making it easier to run tests against other versions of Python. Another option is to set up pre-commit so as to not forget, and to get the benefit of linting for other things like trailing whitespace, well-formed JSON, etc. So there's overlap between tox/nox and pre-commit, but also differentiators. This leads some people to set up tox/nox to execute pre-commit for linting, to get the most that they can out of all the tools; a minimal sketch of that combined setup follows the list below.

So tools people use to run linters:

  1. tox or nox
  2. pre-commit
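
As promised above, here is a minimal noxfile.py sketch of that combined setup (the Python versions and the dev-requirements.txt file name are illustrative assumptions, not anything nox or pre-commit prescribe):

    # noxfile.py -- a minimal sketch; versions and file names are illustrative.
    import nox

    @nox.session(python=["3.7", "3.8"])  # tests run under every supported Python
    def tests(session):
        session.install("-r", "dev-requirements.txt")  # pinned dev dependencies
        session.install(".")
        session.run("pytest")

    @nox.session(python="3.8")  # linters only need to run once
    def lint(session):
        session.install("pre-commit")
        # pre-commit runs Black, mypy, etc. as configured in .pre-commit-config.yaml
        session.run("pre-commit", "run", "--all-files")

Running nox -s lint locally is then the same command CI runs, which keeps local runs and CI in sync.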

But then there is also the situation where people have their own editors that they want to set up to use these linters. This is where build tools like poetry and flit come in with their concept of development dependencies. That way everyone working on the project gets the same tools installed and can set them up however they want to fit their workflow.

Proposing a change

When getting ready to create a pull request, I want the tests and linters to run against all supported versions of Python and OSs via continuous integration. To make things easier to debug when CI flags a problem, I want my CI to be nothing more than running something I could run locally if I had the appropriate setup. I am also of the opinion that people proposing PRs should do as much testing locally as possible, which requires being able to replicate CI runs locally (I hold this view because people very often don't pay attention to whether CI for their PR goes green, and making the maintainer message you to say your PR is failing CI adds delays and takes up time).

There is one decision to make about tooling updates. Obviously tools like the linters that you rely on will make new releases, and chances are you want to use them (improved error detection, bugfixes, etc.). There are two ways of handling this.

One is to leave the development dependencies unpinned. Unfortunately that can lead to an unsuspecting contributor having CI fail on their PR simply because a development dependency changed. To avoid that I can run a CI cron job at some frequency to try and pick up those sorts of failures early on.

The other option is to pin my development dependencies (and I truly mean pin; I have had micro releases break CI because a project added a warning while a flag was set to treat warnings as errors). This has the side effect that in order to get those bugfixes and improvements from the tools I will need to occasionally check for updates. It's possible to use tools like Dependabot to update pinned dependencies in an automated fashion to alleviate the burden.

Tools for CI:

  1. GitHub Actions
  2. Dependabot

Preparing for a release

I want to make sure CI tests against the wheel that you would be uploading to PyPI (setuptools users will know why this is important thanks to MANIFEST.in). I want the same OS test coverage as when testing a PR. For Python versions, I will test against all supported versions plus the in-development version of Python, where I allow for failures (see my blog post on why this is helpful and how to do it on Travis).

With testing and linting being clean, that leaves release-only prep work. I have to update the version if I haven't been doing that continuously. The changelog will also require updating if I haven't been doing it after every commit. With all of this in place I should be ready to build the sdist and wheel(s) and upload them to PyPI. Finally, the release needs to be tagged in git.
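
A sketch of what automating those steps could look like as one more nox session; the build-plus-twine flow and the placeholder tag are my assumptions, not necessarily the author's actual setup:

    # Illustrative release session; the version tag is a placeholder.
    import glob

    import nox

    @nox.session(python="3.8")
    def release(session):
        session.install("build", "twine")
        session.run("python", "-m", "build")  # builds the sdist and wheel into dist/
        dists = glob.glob("dist/*")
        session.run("twine", "check", *dists)   # sanity-check the metadata
        session.run("twine", "upload", *dists)  # upload to PyPI
        session.run("git", "tag", "v1.2.3", external=True)  # placeholder version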

Conclusion (?)

Let's take setting up Black for formatting. That would mean:

  1. List Black as a development dependency
  2. Set up VS Code to run Black
  3. Set up pre-commit to enforce Black
  4. Set up tox or nox to run pre-commit
  5. Set up GitHub Actions to lint using tox or nox

What about mypy?

  1. List mypy as a development dependency
  2. Set up VS Code to run mypy
  3. Set up pre-commit to enforce mypy

Repeat as necessary for other linters. There's a bit of repetition, especially considering how I set up Black will probably be the same across all of my projects and very similar to what other people do. And if there is an update to a linter?

  1. Update pre-commit
  2. Potentially update development dependency pin

There's also another form of repetition when you add support for a new version of Python:

  1. Update your Python requirement for build back-end
  2. Update your trove classifiers
  3. Update tox or nox
  4. Update GitHub Actions

Once again, how I do this is very likely the same for all of my projects and lots of other people.

So if I'm doing the same basic thing for the same tools, how can I cut down on this repetition? I could use Cookiecutter to stamp out new repositories with all of this already set up. That does have the drawback of not being able to update things later. Feels like I want a Dependabot for linters and new Python versions.

I also need to automate my release workflow. I've taken a stab at it, but it's not working quite yet. If I ditched SemVer for all my projects it would greatly simplify everything. 🤔

14 May 02:12

The Best Cold-Brew Coffee Maker

by Nick Guy, Kevin Purdy, and Daniel Varghese

If you’re anything like us, you’re willing to risk it all for a good iced coffee. And even though strutting into work late with your plastic cup and green straw is a total mood, we have some solutions for getting a good cold brew before you leave the house. Since 2016, we’ve looked at 15 cold-brew coffee makers, analyzed dozens of at-home brewing methods and recipes, made concentrate for more than 300 cups of coffee, and served samples to a tasting panel that included expert baristas. After all our testing, we found that the OXO Good Grips Cold Brew Coffee Maker offers the best way to make smooth, delicious iced coffee at home. It’s easy to use and well designed, and in our tests it made cold coffee with balanced acidity, a stronger aroma, and a cleaner finish.

14 May 01:22

Google to finally kill Play Music, migrate users to YouTube Music

by Brad Bennett

Google is finally starting to transition Google Play Music subscribers over to YouTube Music, its new default streaming service.

This signals the end for Google Play Music. Google plans to shutter the ageing music app later this year, but it doesn’t have a set date yet.

The service was Google’s primary music streaming option from 2011 to 2018. Google Play Music first launched in Canada back in 2014. Over the past two years, Google has been honing YouTube Music’s user experience to bring it up to par with Google Play Music’s feature set.

This included a recent visual revamp and the addition of being able to upload your own music to the service’s cloud servers.

Overall, YouTube Music is likely better than Google Play Music now, so most users should be happy when they switch over.

To make moving to YouTube Music as easy as possible, Google is releasing a feature that lets users transfer their taste preferences, playlists, saved music libraries and more to YouTube Music.

To transfer, you need to download or open the YouTube Music app and log in with your Google account. Once you do, the app presents a popup that asks you to transfer your music and settings over. Hit ‘Transfer,’ and you should be good to go.

Google has been hinting at the switch for a long time, so this comes as no surprise. However, it will be interesting to see if Google Play Music users take this opportunity to try Spotify, Apple Music or Tidal as their main service goes extinct.

This transfer tool should make it easy to stay within Google’s ecosystem. That said, if you don’t have an expansive cloud library set up, it’s easy to jump to a competing service.

If you do have an extensive cloud library and you’ve been looking for a place to move it to as Google shutters Play Music, Plex is a pretty feature-packed option.

The post Google to finally kill Play Music, migrate users to YouTube Music appeared first on MobileSyrup.

14 May 01:21

Hackers are posing as Zoom, Google Meet and Microsoft Teams to steal information

by Aisha Malik

Hackers are impersonating popular video conferencing apps like Zoom, Google Meet and Microsoft Teams in a new form of phishing scams.

A new report from Check Point Research outlines that since lots of people are using these services during the COVID-19 pandemic, hackers have registered domains to pose as the services and get users to download malware or allow unauthorized access to sensitive information.

The research firm has found 2,449 domains related to Zoom registered in the past three weeks. It concluded that 32 of those domains are malicious and 320 of them are considered suspicious.

Further, users have fallen prey to a phishing campaign that poses as Microsoft Teams. For instance, hackers are sending out emails titled “You have been added to a team in Microsoft Teams.” The emails contain a malicious link, and victims end up downloading malware when they click on the fake “Open Microsoft Teams” icon.

There have also been several fake Google Meet links that led users to suspicious websites in attempts to retrieve personal information.

Check Point Research notes that there are also several phishing email campaigns that pose as the World Health Organization. Some of the emails include files that download malware onto devices, while others ask for donations for the WHO and UN to be sent to bitcoin wallets.

The firm outlines that hackers are registering virus-related domains to represent different stages of the pandemic around the world. The start of the pandemic brought several fake live map domains, and towards the end of March there were relief-based scams related to stimulus payments and government aid.

“Since several countries have started easing restrictions, and begun planning the return to normal life, domains related to life after the coronavirus have become more common, as well as domains about a possible second wave of the virus,” the report states.

So far, there have been 90,284 new domains registered globally in relation to COVID-19.

Source: Check Point Research Via: Engadget 

The post Hackers are posing as Zoom, Google Meet and Microsoft Teams to steal information appeared first on MobileSyrup.

14 May 01:21

The Polestar 2 high-end electric car costs $69,900 in Canada

by Brad Bennett

Polestar, the electric Volvo offshoot, has announced that its impressive Polestar 2 EV is going to start at $69,900 CAD in Canada.

The company plans to begin delivering its car to reservation holders by late summer of 2020, meaning that if you pre-ordered your vehicle, you should get it pretty soon. If you want to order a car today, you can do so on Polestar.com. 

The company’s press release describes the car as “an all-wheel drive, five-door electric performance fastback with an output of 408 hp and 487 lb.-ft. (660 Nm) of torque. It is also the first car in the world with an infotainment system powered by Android, with Google Assistant, Google Maps and the Google Play Store built-in.”

Notably, the car’s Android-powered infotainment system provides users with a lot of handy tools and voice controls. This makes interacting with your car simpler in theory, while also giving you access to Google apps like Maps.

You can learn more about the car and its infotainment system in our prior reporting.

In terms of pricing, the car starts at $69,900, but there are some pricey add-ons. You can find them all below:

  • Performance Pack: $6,000
  • Nappa Leather Interior: $5,000
  • 20-Inch Alloy Wheels: $1,200
  • Metallic Paint Colors: $1,200

If you want to see one of these cars in person, the company is planning to open Polestar Spaces, retail showrooms in Montreal, Toronto and Vancouver.

The post The Polestar 2 high-end electric car costs $69,900 in Canada appeared first on MobileSyrup.

14 May 01:21

Google makes Meet available for free for everyone

by Jonathan Lamont

Google Meet will now be free to use and available for anyone with a Google account.

It’s been a little over a month since Google rebranded its Hangouts Meet platform to Google Meet, and in that time it has grown extensively. Originally, Meet was built for secure business meetings and was only available to G Suite users. Google made the service free for G Suite and G Suite for Education users in March and saw daily usage grow 30 times.

Further, Google says Meet now hosts 3 billion minutes of video meetings every day and last month the service added roughly 3 million new users every day.

Near the end of April, Google said it planned to make Meet available for free to everyone. Now it has. Anyone can go to ‘meet.google.com‘ to get started. It’s also available on iOS and Android. Those with an existing Google Account can sign in and start using Google Meet.

Those without a Google Account will have to create one. Google says it requires an email address for security purposes. That email doesn’t have to be a Gmail address, however.

Popular Meet features will be available as well. That includes the ability to create and share meetings or join meetings using meeting codes. Additionally, users can plan video meetings and invite others directly from Google Calendar. Plus, Meet supports AI-powered automatic live captioning, which can help people with hearing loss participate in video meetings.

Finally, Google says it will make Meet accessible right from Gmail for quick access to meetings.

You can learn more about what Google is doing with Meet here.

Source: Google

The post Google makes Meet available for free for everyone appeared first on MobileSyrup.

14 May 01:20

Amazon brings new Fire HD 8 tablet to Canada

by Brad Bennett

Amazon has announced the new Fire HD 8 and Fire HD 8 Kids Edition.

Both of these devices are spectacularly low-priced, with the Fire HD 8 costing $109.99 in Canada. The Kids Edition is a little pricier with its $179.99 price tag, but it comes with some extra features. You can pre-order them today.

This new version of the Fire HD 8 is 30 percent faster than the previous version, thanks to a new quad-core processor and 2GB of RAM.

When you buy the tablet, it comes with either 32GB or 64GB of storage, but you can use a microSD card to expand it up to an additional 1TB if you want.

Amazon also says that the tablet should get about 12 hours of battery life with mixed usage, which is pretty decent for such a low-cost tablet.

It’s also got USB-C charging this time around, which should make it easier to charge with your smartphone or laptop charger, depending on what devices you have.

Alongside the regular tablet, there’s a Kids version, as I mentioned above. This is basically the same tablet, but with some added hardware and software tweaks.

For instance, it comes in a thick ‘Kid-proof’ case. These cases remind me of Nerf material and should withstand most drops a kid puts a tablet through.

It also comes with a year of ‘Amazon FreeTime Unlimited.’ This is a kid-friendly service that gives parents more control over their child’s tablet experience and comes pre-loaded with hundreds of books, apps and videos that are all appropriate for kids.

After the first year of FreeTime Unlimited, if you want to keep using it, it costs $4 per month if you’re a Prime subscriber and $6 if you’re not.

Generally, these tablets have been really good for kids since their low cost and tough cases make them very child-friendly. Beyond that, FreeTime Unlimited is nice for younger children too.

You can check out both tablets on Amazon Canada’s website. 

The post Amazon brings new Fire HD 8 tablet to Canada appeared first on MobileSyrup.