Shared posts

08 May 07:34

week ending 2022-04-28 General

by Ducky

Vaccines

This article says that Moderna has formally filed a request that the US FDA approve their vax for children between 6 months and 5 years, yay! Moderna says the two-dose regimen is 51% effective against infection. That doesn’t sound like much, but it was measured against unvaccinated kids (who don’t get sick as much as adults do anyway) and against more recent strains. Moderna is also studying a third dose.

Yes, yes, I know that’s USA and not Canada, but vaxmakers usually file in the US first. This article says that Moderna is preparing a submission for Canada.


This article says that Moderna has agreed to open a vax plant in Montreal. (Yay!)

Long COVID

This article reports on a study suggesting that Long COVID might be caused by an underactive immune system, and that the monoclonal antibody leronlimab helps with many symptoms. It was a small study, but the press release from the leronlimab manufacturer said that clinically meaningful improvements of leronlimab over placebo were observed for cough, stuffy/runny nose, shortness of breath, tightness of chest, feeling of fast heartbeat, fatigue, muscle aches/cramps, muscle weakness, joint pain/swelling, chills/shivering, feeling hot or feverish, difficulty in concentration, sleep disturbance/insomnia, headache, dizziness, tingling/numbness, sense of taste, and sense of smell. (Unfortunately, it didn’t help sore throat, exertional malaise, anxiety, nausea, and vomiting.)


This paper from the UK found that of people who had been hospitalized with COVID-19, only 25.5% had made a full recovery at five months, and only 28.9% at the one-year mark.

Mitigation Measures

This article reports that the Government of Canada has relaxed some border rules (for air/land/sea borders):

  • Unvaccinated kids under 12 accompanied by a fully vaxxed adult are treated the same as vaccinated adults.
  • Fully-vaxxed people do not need to provide a quarantine plan.
  • The requirements for what travellers have to do after they enter are significantly reduced: no mask requirement, no keeping track of contacts, no quarantine if someone in the travel party gets sick.

This paper from the UK says that mental health got worse during lockdown, did not get better when lockdown ended, and continued to get worse through March 2021 (when the study ended). In other words, it’s the pandemic, not the lockdown, that is causing distress.

The study also found that things were worst for people 25-34, next worst for people 35-44, with no difference by race or country (i.e. Wales/Scotland/England/N. Ireland).


This paper says that brainstorming doesn’t work as well in virtual meetings. On the other hand, it says that selecting which idea to pursue works just as well (possibly better!) in a vchat.


This article from the USA reports that workers at unionized locations were less likely to get COVID-19 than workers at non-unionized locations.


Remember that Sunwing party flight where people flouted all kinds of rules, including masking rules? This article reports that people in the group have been fined a total of $59K.


This article reports that the Canadian Armed Forces rejected around three-quarters of requests for vaccine exemptions.

Transmission

This study says that 23% of Canadian blood donors have had COVID-19 (based on antibodies to a piece of the virus that is not in the vaccine). It also said that 99.6% of blood donors have antibodies to the spike protein, with most people getting the antibodies from vaccination. 33.5% of racialized blood donors had had COVID-19, while only 21.2% of white blood donors had.

This article also reports on the above study, plus an unpublished BC serosurvey which says that 40% of BCians, and two-thirds of kids under 10, have gotten COVID-19.


This article says that BC Ferries had to cancel all ferries to/from Haida Gwaii due to crew illnesses. The article reports they were in the process of hiring helicopters to take passengers.

Treatments

This blog posting says that there are cases where patients get better after starting Paxlovid, but then get worse after stopping it. So it’s really really good, but not a silver bullet. (Maybe it needs to be a longer treatment?)

This paper has a case study of one patient who started Paxlovid and then got sick again when he stopped.


Too bad: This study says that Evusheld doesn’t reduce hospital stay time if given after the patients are already admitted. The good news is that it appeared to be safe and did reduce mortality by 30%.

Pathology

As I mentioned last week, a few hundred young children around the world have gotten acute hepatitis recently. Last week, I pointed out that there might be a link to COVID-19; recent chatter (like this) seems to think that’s less likely. The leading candidate (though they are really unsure) seems to be adenovirus #41.

Variants

Covariants.org has nice plots of variant cases over time, e.g.:

Recommended Reading

This article wonders what story we will tell about the pandemic when it is over. This article says that we will forget many things about this pandemic (perhaps similarly to how the Spanish Flu pandemic left little cultural footprint?), and that’s not a bad thing.


This article talks about how climate change is going to make it more likely that viruses will jump to humans.

08 May 07:33

week ending 2022-04-28 BC

by Ducky

Okay, folks, I am now willing to say that we are in the middle of a big wave that nobody is talking about. There are currently 570 people in hospital with COVID-19, and the pre-Omicron maximum was 515 hospitalizations (on 28 April 2021). This is bad.

Yes, yes, some of them are there “with” COVID-19 and not “for” COVID-19, but hospitalizations are still increasing, which means cases must be increasing. Furthermore, hospitalizations are a lagging indicator, and they are rising faster than reported cases, which is more evidence that the case counts are badly, badly, badly undercounted.

The BC COVID-19 Modelling Group’s latest report estimates that our true case count is ~5K cases / day:

The BC COVID-19 modelling group also reported on a survey done through Facebook which asks if you know people who are sick:

Meanwhile today’s BC CDC Situation Report shows that COVID-19 in wastewater is still rising:

And the BC CDC Dashboard shows that the positivity rate is climbing:

All in all, I believe this says that we are not about to get hit by a wave, we are already in a wave.

Vaccines

I don’t understand the Novavax/J&J info on the BC CDC dashboard. They are being lazy and combining Novavax and J&J. I thought that we had basically zero distribution of J&J in the province — I didn’t think that the federal government ever released their stock. My memory is that there were a very small number of shots that were J&J (probably from people who got the shot in the US and then came home), but I didn’t track that.

Well, now the dashboard says that there have been 12,313 J&J and Novavax doses administered, but the dashboard also says that there have only been 457 doses since last week, and I’m pretty sure there were almost no doses last week. Arg. Gotta love the BC data pipeline, not.

Bottom line: I can’t tell you if there has been a great response to Novavax or not.

Statistics

From the BC CDC Weekly Data Report and the BC CDC dashboard: +2,276 cases (+325.1 per day); +355 hospital admissions; +42 all-cause deaths (+6.0 per day).

Currently 570 in hospital / 47 in ICU.

In the past week, roughly +306 first doses, +636 second doses, +3,909 other doses (which I suspect are mostly second boosters).

Charts

From the BC CDC Weekly BC Data Report:


From the BC CDC Situation Report:

08 May 07:33

Six years of failure

by Chris Grey
It is now just over six years since the start of the official campaign for the 2016 referendum, years which have transformed and polarized British politics, economics and culture. What wasted years these have been. For whilst Brexit is mainly discussed, including by Brexiters, in terms of whether or not it has been as damaging as the most dire predictions, it has also had a huge, if incalculable, opportunity cost.

Perhaps these years would have been squandered in some different way, but in principle so much might have been achieved but for all the energy and resources Brexit has soaked up. And it is surely the case that, but for hard Brexit fealty being the sole criterion for appointment, many current ministers would never have got anywhere near holding government office. For example, does anyone really think that, without Brexit, we would have Jacob Rees-Mogg lounging affectedly on the front benches and in a position to write his spiteful little notes in the name of ‘government efficiency’? Would we be saddled with a government which, judged overall, a mere 18% of people now think is ‘competent’?

They have certainly been years of failure, and whilst that failure may not all be down to Brexit it is inseparable from it so that, as I’ve argued before, we need to think in terms of the ‘post-Brexit condition’. That’s not because nothing else has happened or will ever happen apart from Brexit, but because Brexit marked a decisive, historic break in national strategy (according to Brexiters, especially) for the better (according to Brexiters, uniquely) and so it is legitimate to define and judge this period, which we are only at the beginning of, in those terms.

In this (even) longer than usual post I’ll look at some aspects of where these years have now led us. It’s a good time to do so, partly because the contrast is so great with all the pre-referendum leave campaign claims, and partly because the last week or so has seen some really quite significant developments in both the economics and politics of Brexit. Anyway, it’s a Bank Holiday weekend ….

A failing national strategy

I wrote recently of the many ways in which the country the Brexiters are creating is going rotten, with so many basic services not working properly, and each week brings fresh reports of what is now in danger of becoming a national calamity. Agriculture faces a growing crisis, with implications for the availability and price of basic foodstuffs, and there is a widespread shortage of both basic and vital medicines leading to a spate of abusive behaviour towards pharmacists. In both cases Brexit is explicitly reported as one significant factor. It’s the aggregate effect of these multiple Brexit harms that defines the post-Brexit condition. To get a sense of their scope, it is as always worth checking the regularly updated Yorkshire Bylines’ Davis Downside Dossier.

At the more macro level, consumer confidence is at a 50-year low, not least because of inflation and the cost-of-living crisis, whilst the Office for Budget Responsibility expects this year to see the biggest fall in living standards since records began in the 1950s. Brexit is certainly in the mix of this, too, because, from the moment it happened, the vote to leave had an impact on inflation, largely because of the drop in the value of sterling it caused. So, by June 2018, the vote to leave had already raised consumer prices by 2.9%, costing the average household £870, with a related sharp decline in real incomes (figure 2 of link). Once Brexit actually happened, it introduced further inflationary pressures in terms of labour shortages and higher costs of trading with the EU. The eminent economist Adam Posen this week estimated that 80% of UK inflation is attributable to Brexit.

This week also saw the publication of a major study showing that Brexit has caused a 6% increase in food prices, and of another study which confirms a dramatic fall in the number of trade, especially export, relationships with the EU (what this means in practice is that large numbers of small firms have dropped out). As Posen pungently expressed it, the UK is “running a natural experiment in what happens when you run a trade war on yourself”. The results so far, his data also suggest, show just how damaging it is.

Overall, the latest IMF World Economic Outlook published this month has the UK set to be the slowest-growing G7 economy in 2023 at 1.2% (compared with 2.4% average for advanced economies and 2.3% average for Euro area) and to have higher inflation, at an average of 6.3% over the next two years, than Germany (4.2%), France (2.9%) and Italy (3.9%) as well as the non-EU G7, and higher than the advanced economies average (4.1%) and the Euro area average (3.8%).* In other words, it’s not just Covid, Ukraine, and global energy and supply chain factors, which have affected all countries. Something particular has happened to the UK and it has a name: Brexit. Indeed the IMF’s 2022 country report for the UK identifies Brexit, along with the pandemic, as having “magnified structural challenges” facing the economy. This is why, as other major economies ‘bounce back’ from Covid, the UK does so more slowly.

Brexiters like, notoriously, Michael Gove poured scorn on the IMF and similar bodies during the referendum. However, it’s reasonable to make use of their figures if only because, for months now, Boris Johnson has been trumpeting (and cherry-picking) OECD data to claim that the UK was the fastest-growing G7 economy last year. And whilst I don’t know whether he has explicitly tied this to Brexit (though I wouldn’t be surprised if he has at some point), he most certainly has linked it to the speed of the Covid vaccine roll-out, which he and other ministers have repeatedly, and entirely dishonestly, attributed to Brexit. So if such global economic comparisons are to be made then, with more justification than the government, it’s fair to claim that post-Brexit Britain is failing to deliver its promises.

The Brexiters have no new ideas

What’s most striking about that claim is that it isn’t ‘remainer moaning’. It is pretty much what every Brexiter, from Nigel Farage to David Frost to Iain Duncan Smith has been saying for several months now. They, inevitably, ascribe it to a lack of deregulatory zeal and to the need to press ahead with ambitious trade deals. The problem, to their minds, is not Brexit but that Brexit hasn’t been done properly. But the scope for de-regulation remains elusive.

In some cases, like Solvency II reforms or gene editing regulation, it is complex, time-consuming and, actually, likely to end up in a similar place as would have been the case without Brexit. In many areas, like conformity assessment, regional aid and, most obviously, trade, Brexit actually means whole new swathes of regulation and red tape, sometimes duplicating that of the EU, sometimes restoring that which EU membership had abolished. All of this reflects Brexiters’ near-complete ignorance of how regulation actually works and why it is necessary. Bluntly, they simply had no idea what they were doing.

In still other cases, like the dismantling of employment rights which the Thatcherite Brexiters undoubtedly hunger for, it may be that Brexit makes deregulation possible, but there is no political mandate for it and little political appeal in it. In that respect, such Brexiters are reaping the consequences of having sold Brexit and won the election with the aid of nativist and, literally, conservative votes. Nor, since it will cost some of them their lives, is it likely to prove popular with voters if the government decides to diverge from new EU standards on road vehicle safety in order to prove an ideological point about ‘freedom from Brussels’.

As for trade deals, they also encounter opposition from the public and, anyway, even the dullest-minded Brexiter (a title for which there is considerable competition, even if the field were restricted solely to those whose surnames begin with ‘B’) must be starting to grasp that they offer no economic salvation.

Thus, faced with a burgeoning economic crisis, this post-Brexit government is bereft of workable ideas. Its flagship policy has proved an economic dud, but it is inherent in the government’s very formation to be unable to admit that, or to produce any policies that might ameliorate it. Having smashed up the old order, all they can do is stare in slack-jawed bemusement at the rubble around them, like a convention of peculiarly vandalistic village idiots who accidentally got control of a wrecking-ball.

To the extent they have any ideas of how to proceed, these go in two contradictory directions. One is just to not implement Brexit so far as possible. This has happened with much of the Northern Ireland Protocol and, most strikingly, in the confirmation this week of the long-trailed fourth postponement of import controls. It’s a remarkable and explicit admission that Britain simply can’t afford the Brexit trade deal that Johnson pronounced a triumph, albeit one carrying its own costs and risks, as I’ve discussed at length in previous posts (e.g. here and here).

Their other idea is to do the same thing all over again but ‘this time properly’, the latest manifestation being suggestions this week that the government is considering unilaterally abolishing several import tariffs, perhaps especially on food. It’s an idea that goes back to Patrick Minford’s extreme version of ‘true’ Brexit, with horrendous consequences for UK farmers and manufacturers, whilst also giving away one of the main bargaining chips for striking free trade agreements.

And it's no good Brexiters saying that leaving the EU was never about economics. First, both during the referendum and since they repeatedly made claims that it would be economically beneficial, and at the very least not harmful, hence all the effort put into the Project Fear rebuttal line. Second, although at one level describable (and dismissible) as ‘just economics’, when people can’t feed their families and are fighting to get medicines it is more than that.

Dangerous political trickery

Meanwhile, the astonishingly dangerous political trick the government used to ‘get Brexit done’ has blown up in its face. That, of course, was agreeing to the Northern Ireland Protocol (NIP) which it is clear the government never intended to honour, and which it sold to its MPs as a supposedly temporary measure. Yet, at the very same time, it was sold to the electorate as part of the ‘oven-ready deal’ which would put an end to all the boring Brexit wrangling. Worse still, it was signed as an international treaty with the EU, which certainly didn’t regard it as temporary, any more than does the US.

This was done quite knowingly and entirely cynically, and it’s very difficult to think of any equivalent trickery in modern British political history in terms of that combination of national and international dishonesty. Not only was it dishonest, it was actually – I don’t use this word casually – wicked in that the patsy in this trick was, and is, the people of Northern Ireland and the fragile politics of their hard-won peace. This makes David Frost’s handwringing about that fragility in a repellently self-serving and disgracefully misleading speech about the Protocol this week all the more nauseating (alas, no space here to pull it apart, but see some thoughts from me and from Gavin Barwell, formerly Theresa May’s Chief of Staff, and the highly revealing and unusual comments from a former senior civil servant which clearly contradict some of Frost’s key claims).

What Frost and Johnson so wickedly did in 2019 created a carbuncle that has suppurated ever since. If there has so far been little domestic political price to pay for this, it is only because relatively few British voters outside Northern Ireland really understand or care about it. That’s especially so of English voters, who are the Tory Party’s main concern (£). It has also created less international drama than it might have done because the EU, perhaps partly because it is less careless than Johnson and the Brexiters about Northern Ireland’s peace, perhaps from uncertainty about how to proceed with its new ‘neighbour from hell’, has trodden a very soft path so far. Just how soft can be seen via the thought experiment of imagining the outrage with which the UK, and especially Brexiters, would have reacted if the EU had announced immediately after signing the Withdrawal Agreement that it had never had any intention of being bound by one of its key provisions.

Thus of the three audiences for this confidence trick – apathetic voters, a cautious EU, and Brexiter Tory MPs – it is the latter who have been most vocal in insisting that the government keep to its promise to them, that of treating the NIP as temporary. Northern Irish unionist politicians have also done so, of course, but, unlike Tory MPs, they never pretended to support the deal. Despite his large majority, Johnson has been vulnerable from the outset to ERG obduracy, and as his hold on office gets ever more shaky that increases. And under any conceivable replacement their power will persist. “So,” as RTE’s Tony Connelly concludes his review of the current situation, “nearly six years after the Brexit referendum, the EU and Northern Ireland remain hostage to Tory Party machinations”.

Northern Ireland: a litany of failure

The essence of this current, or at least emerging, situation is, as Connelly explains, the widespread report that the British government is devising a new way to renege on the NIP, or at least the threat of doing so in order to blackmail the EU into allowing it to renege. Rather than continue with the repeated threats to ‘invoke Article 16’, this new approach would amend UK law so as to disapply the NIP. In particular, it is reported by Connelly and others that the government is considering repealing Section 7a of the EU Withdrawal Act, the legislation that enshrines the Protocol into domestic law, a course of action which is also being urged by the Brexit Ultra commentariat (£).

The roots of this go all the way back to the total failure of Brexiters to understand or accept the implications for Northern Ireland of, at least, hard Brexit. I’ve written about that many times, and summarized some of the main issues on a standalone page on this blog. But its more proximate root – and this is an important point – is that the new approach demonstrates the abject failure of the old one. That is to say, all the threats of using Article 16, which started in January 2021, only weeks after the NIP came into operation (and before the aborted EU threat to do so, since used as a justification), have now been exposed as nonsensical. Having treated Article 16 as if it were some kind of route to disapplying or unilaterally rewriting the Protocol, something all credible experts agreed was untrue, the government seems to finally have understood this basic fact.

Along the lines of my previous post, this could be called a ‘we told you so’ moment although, also in line with that post, the government is not learning from its mistakes but compounding them by proposing an even more absurd policy. In fact, in its essence, this new approach relies upon the idea which also informed the eventually aborted illegal clauses in the Internal Market Bill as well as one of the suggestions about how Article 16 could be used for the specific purpose (£) of ending ECJ involvement in the Protocol.

As discussed at that time at length by Mark Elliott, Professor of Public Law at Cambridge University, this underlying idea is the “facile” one, endorsed by the Attorney General and arch-Brexiter Suella Braverman, that the sovereignty of the UK parliament means that it can pass laws that somehow trump international law and treaty obligations. (Elliott also explained this at the time of the Internal Market Bill proposals, in an elegant essay which eviscerates the Brexiters’ entire concept of sovereignty.)

It’s not necessary to be an eminent lawyer, or even a lawyer, to see that this is nonsense: if it were true, no international agreement would have any legal standing at all. However, it seems clear that it is what informs the government’s thinking because when the new approach was first hinted at, by Jacob Rees-Mogg at the EU Scrutiny Committee a couple of weeks ago, he made exactly the point that the UK had the “sovereign right” to override the Protocol (£). (In passing, it is shocking that literally none of the Labour members of the committee turned up to this meeting, nor did the official SNP member, which is part of the wider story, for which I’ve no space here, of Labour’s near silence about the damage of Brexit.)

In any case, apart from its legal fatuity, it’s politically naïve. Domestically, there is bound to be substantial opposition from the House of Lords, and perhaps some Tory MPs, as there was to the Internal Market Bill clauses. Even more importantly, the international repercussions will be huge. It’s not only a matter of the non-trivial damage to the UK’s reputation. There is also the potential, at least eventually, to end up with a trade war with the EU (£) and diplomatic rupture with the US. Already the UK’s conduct over the NIP is costing it dear in terms of the continuing refusal of the EU to ratify UK membership of the Horizon Europe science funding programme in retaliation.

It is especially irresponsible in the context of the Ukraine War, playing into Putin’s hands by undermining the Western alliance against him. The government’s thinking seems to be that the war will actually make the EU more likely to yield to this new threat. It’s a shabby idea in itself, relying on others to put up with our behaviour because, unlike us, they are too responsible to give succour to Putin. And it may not be realistic, anyway. It seems to rely on the government’s illusion that it is somehow the leader of the Western alliance, whereas from an EU perspective the UK is an important, but secondary, player to the primary EU-US-NATO. Keeping the UK sweet by indulging a new tantrum probably won’t be a priority, especially after all the years of British antagonism and dishonesty. Moreover, it’s entirely inconsistent with Johnson’s reported desire (£) to “re-set” relations with France following Macron’s election victory this week.

Trapped in lies and denial

It remains to be seen whether the government is going to push ahead with this new approach – the consensus of knowledgeable commentators seems to be that it will, but that nothing will happen until after the Northern Ireland Assembly elections. But, as with the latest postponement of import controls and the possible continuation of accepting the CE conformity assessment mark (ludicrously described in the Express this week as a ‘Brexit masterstroke’), the continuing ructions over implementing the NIP show the depths of the folly of Brexit in general, and the Brexit the government chose to agree to in particular.

What all three, and much else in the Brexit saga, share is a bull-headed denial of what hard Brexit means for customs and regulatory borders, and what they in turn mean for Northern Ireland, allied to a bone-headed concept of sovereignty. And so it goes on, year after year after wretched year. All the fantasies, lies and denials that permeated the dreadful referendum campaign six long years ago are still running into the rock of the realities of international trade and international relations. The government has no solutions because it has never been a government in any real sense of the term, just a vehicle for precisely the fantasies and denials of that campaign.

In consequence, its responses to the multiple and growing crises it has created veer between the ludicrous and the contemptible, dragging a bitterly divided country, the clear majority of which thinks Brexit was a mistake, ever deeper into poverty, decline, misery and disrepute.

 
 

*Barely, if at all, reported, perhaps because balance of payments scarcely features in UK political discourse any more, are the IMF’s latest projections for the current account balance (Table A10, p.153). Expressed as a percentage of GDP, the UK deficit in 2021 was -2.6%, with a forecast of -5.5% in 2022, then -4.8% in 2023 (advanced economy averages -0.7%, -0.1% and 0% respectively). It’s worth recalling the furore caused before the referendum when the then Governor of the Bank of England, Mark Carney, warned of his concern that Brexit could test the UK’s reliance on “the kindness of strangers” to fund its current account deficit. At that time, the UK deficit was understood to be -3.7% (now adjusted to -3.6%), which was considered high by international standards (the advanced economy average was a surplus of +1%).

08 May 07:32

Privacy Washing: Do As I Say, Not As I Do

by Kyle Rankin

People care about their privacy. Some have doubted this in the past, pointing to the amount of personal information people willingly shared, often in exchange for free software or services. Yet I’ve long thought that many people simply were not aware of the privacy implications of sharing their data and how it could be misused […]


08 May 07:32

Apps are too complex so maybe features should be ownable and tradable

Software is too complicated. User interfaces have too many commands. Perhaps the answer is an in-app free market economy.


I subscribe to the excellent Hardcore Software newsletter which narrates the evolution of the PC and desktop software. It’s by Steven Sinofsky, who was at Microsoft from 1989, oversaw development of multiple versions of Microsoft Office as it was created and scaled, and ended up as president of the Windows division.

With Office 2003, Microsoft was able to see the actual commands used for the first time.

  • There were 4,000 commands across all of Office 2003 (Word, Excel, etc)
  • 80% of users only used two commands: copy and paste

Nobody used all the features, but everyone used a different set.

At a deeper level, most in a company might not use a feature such as Track Changes (or Redlining) in Word. But their lawyer would. And contracts or legal letters might arrive via email for review. Rarely used features became part of the work of others. This network of usage was a key advantage of Office.

– Hardcore Software, 077. What Is Software Bloat, Really? [paywall]

What to do?


I’m into Adaptive Menus as an approach.

The first new mechanism, called “Adaptive Menus” or, later, “Personalized Menus” were an attempt to make the top-level menus appear shorter by showing the most popular items first. After a few seconds (or after pushing a chevron at the bottom of the menu) the menu expanded to show the full contents. As you used the menus, items you used often were promoted to the “short” menu and items you never used were demoted to the “long” menu.

– Jensen Harris: An Office UI Blog, Combating the Perception of Bloat (Why the UI, Part 3) (2006)

It’s like frequently-used light switches in your house magically getting bigger.
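Here’s a minimal sketch of that promotion logic in Python; the class and menu sizes are my invention, not Office’s actual implementation:

from collections import Counter

class AdaptiveMenu:
    """Toy adaptive menu: frequently used commands float into the short menu."""

    def __init__(self, commands, short_size=5):
        self.commands = list(commands)   # full menu, in canonical order
        self.usage = Counter()
        self.short_size = short_size

    def use(self, command):
        self.usage[command] += 1

    def short_menu(self):
        # Most-used first; sorted() is stable, so unused commands keep order
        ranked = sorted(self.commands, key=lambda c: -self.usage[c])
        return ranked[: self.short_size]

    def full_menu(self):
        # What you get after the delay, or after clicking the chevron
        return self.commands

menu = AdaptiveMenu(["cut", "copy", "paste", "find", "styles", "macros"], short_size=3)
for _ in range(10):
    menu.use("paste")
menu.use("styles")
print(menu.short_menu())   # ['paste', 'styles', 'cut']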

But it didn’t help with Microsoft’s core problem, which was discoverability. Users kept on requesting features which were already in the product.

(The eventual solution was to replace menus with visible commands and icons, making it easier to explore: the Ribbon.)


Kai’s Power Tools, in the mid 1990s, was known for providing awesome Photoshop effects and also for wild experiments with the UI.

Here’s a deep dive into the interface of KPT.

Magic lenses! Single-purpose rooms! A dedicated tool "meant to create collections of special looking orbs."

Also, “Unfolding Functionality.”

This deep dive describes this philosophy as fading out commands when not needed.

But the KPT Wikipedia page is more specific:

The program interface features a reward-based function in which a bonus function is revealed as the user moves towards more complex aspects of the tool.

This is more how I remember it.

I think what would happen is that you would use a particular feature or filter or parameter, and as you used it you would accumulate stars. At a certain number of stars, more advanced features would unlock.

Which is another way to deal with clutter, right? It’s progressive disclosure: the user and interface grow in sophistication together.
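A sketch of that unlock mechanic, with made-up feature names and star thresholds (not KPT’s actual rules):

# Progressive disclosure via accumulated "stars": hypothetical thresholds
UNLOCK_THRESHOLDS = {
    "basic-blur": 0,     # available from the start
    "motion-blur": 5,    # appears once you have earned 5 stars
    "orb-room": 20,      # the advanced stuff comes last
}

def available_features(stars):
    """Return the features visible at this level of accumulated use."""
    return [f for f, needed in UNLOCK_THRESHOLDS.items() if stars >= needed]

print(available_features(0))   # ['basic-blur']
print(available_features(7))   # ['basic-blur', 'motion-blur']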


Neither Office’s Adaptive Menus nor KPT’s Unfolding Functionality is quite right. (Discovering new features is hard. A feature, when you want to go back to it, might be in a different place.)

BUT what they both do is they atomise functionality.

Once functions and commands exist as atoms, the user interface can be displayed according to some kind of logic - unfolded with use; reorganised by context, etc.


Feature flags are a super common engineering pattern for turning feature “atoms” on and off dynamically.

For example: you have an app with a button that you only want to show to people inside the company, because the feature is still being tested. So you set a flag for that feature for all the users who you want to see the button, and they see it, and for everyone else it’s like the feature isn’t even there.
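In code, a feature flag is often nothing more than a lookup before the UI renders. A minimal sketch, with invented flag names and users; real systems add targeting rules, percentage rollouts, and remote configuration:

# Hypothetical in-memory flag store; real ones live in a service or database
FLAGS = {
    "new-export-button": {"enabled_for": {"alice@corp.example", "bob@corp.example"}},
}

def is_enabled(flag_name, user_id):
    """True if this user should see the feature."""
    flag = FLAGS.get(flag_name)
    return flag is not None and user_id in flag["enabled_for"]

def render_toolbar(user_id):
    buttons = ["open", "save", "copy", "paste"]
    if is_enabled("new-export-button", user_id):
        buttons.append("export")   # everyone else never sees that it exists
    return buttons

print(render_toolbar("alice@corp.example"))    # includes 'export'
print(render_toolbar("visitor@example.com"))   # no 'export'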

Unpacking this… Feature Toggles (aka Feature Flags) (Pete Hodgson) gives four categories of feature flags:

  • Release toggles – allows half-finished code to be shipped so select people can try it out live
  • Experiment toggles – features can be activated dynamically for one group or another, to gather data in an A/B test
  • Ops toggles – allows compute-intense features to be easily deactivated when the service is under load
  • Permissioning toggles – premium features are available to the premium group of users; beta features are available only to users who can test the system, and so on.

Kai’s Power Tools, through the lens of feature flags, makes up a fifth category: adaptive toggles.

I’m calling them adaptive in the spirit of the lost Adaptive Design movement from the early 2000s: software is “adaptive” if it is co-created by design and user, and conforms to individual user behaviour.

A sixth category might be pick-your-own toggles: feature flags where the user is in control of what features are off and what features are on.


BUILD #1: Pick-your-own feature flags

This is 50% of an idea.

Imagine Microsoft Word but it comes as a plain text editor. No bold/italic/etc. The only commands are open, save, copy, and paste.

You get used to it. Then one day you decide you’d like to style some text… or, better, you receive a doc by email that uses big text, small text, bold text, underlined text, the lot.

What the hey? you say.

There’s a notification at the top of the document. It says: Get the Styles palette to edit styled text here, and create your own doc with styles.

You tap on the notification and it takes you to the Pick-Your-Own Feature Flag Store (name TBC). You pick the “Styles palette” feature and toggle it ON.

So far this is pretty much like the browser flags in Chrome – experimental features in the web browser are hidden behind toggles which are user opt-in.

BUT the difference is that the features aren’t experimental. They are fully-rounded, user-facing feature “packages.”

So the user builds up the capabilities of the app as they go.

The downside? It’s still really hard to discover features. How do you know that text styles (or drawing, or collaboration, or any other feature “package”) is available, unless you go hunting? And why would you go hunting if you don’t already know the feature exists?

That’s why it’s only half an idea.


BUILD #2: Multiplayer, purchasable, tradable, giftable feature flags

The thing is, we’re not in Microsoft Word, we’re in Google Docs – and it’s multiplayer.

I’ve been tracking the emerging multiplayer web for a while. The fact that our day-to-day work apps, like Slack, Notion, Figma, and Google Docs all have a sense of live presence of colleagues is a big deal. Pretty soon we’ll be able to take it for granted in any app. Presence and collaboration will also be part of any future-VR-based operating system – I’m convinced of that after recent VR headset mucking around.

So what if you’re collaborating with your lawyer in Google Docs, and you can see from their avatar that they have the “Track Changes” feature flag activated?

Because you’re in the same doc, you can use it together.

And maybe if you want to use it again, they can just… gift it to you?

Could app feature flags be tradable and giftable? That would answer the discovery problem and the “store” problem.

What we’re talking about is feature flag ownership: a user owns their feature flags, and they carry features with them in a multiplayer space, and can use them together with other people.

Which… kinda parallels the physical world, right?

Like: if you’re having a workshop in a meeting room, then it’s generally one person who brings the post-its and the pens. It’s part of their job.

It should be the same on a Zoom call. You shouldn’t sit on a call waiting for a host. You should sit on the call waiting for the person who has the screen share feature flag, and the annotate screen feature flag, and so on.

What I’m talking about here is a marketplace: maybe a bunch of features in Zoom are free, but you pay $10 for the screen share feature flag, or $100 if you want the “make my webcam look pretty” filter. You can gift it to another user later if you want.
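As a speculative sketch of that ownership model (all names invented, and just an in-memory toy: no marketplace or blockchain yet), giftable flags are simply a transfer of ownership between users:

class User:
    """Toy model: a user owns flags and carries them into shared spaces."""

    def __init__(self, name):
        self.name = name
        self.flags = set()

    def can_use(self, flag):
        return flag in self.flags

    def gift(self, flag, recipient):
        # Like handing over the post-its: the flag moves, it is not copied
        if flag not in self.flags:
            raise ValueError(f"{self.name} does not own {flag!r}")
        self.flags.remove(flag)
        recipient.flags.add(flag)

lawyer = User("lawyer")
lawyer.flags.add("track-changes")

me = User("me")
lawyer.gift("track-changes", me)
print(me.can_use("track-changes"))   # True: I now carry it into any shared doc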


BUILD #3: ok yeah NFTs

I continue to keep a close eye on web3, as I said in January:

Here and there are glimpses of new ways of storing files, new ways of owning and providing access to data, new ways of asserting identity, new forms of payments …

I keep a personal running list of what I find interesting in the Web3 gold rush, in the hope of spotting something useful in its fundamentals that has immediate applicability.

And maybe here’s one?

NFTs (non-fungible tokens) are basically database primary keys that can be bought and sold, outside the originating platform.

What’s key about a NFT is that it is owned by a user.

Primary keys can point to anything, and mostly right now they point to jpegs of cartoon apes, pixellated portraits, blobs of text, and some very cool art made by excellent artists (also some terrible art). There are a ton of scams in this space, so you have to squint a little to see through.

There is also an idea called Functional NFTs, which is when the primary key is meaningful to a particular app or service, and it unlocks a feature.

Look, what I’m saying is: why not NFT-backed tradable feature flags?

With NFTs you get a whole ecosystem of ownership, marketplaces, dynamic pricing, for-free and for-pay trading, and so on.

If you want to build ownable, tradable feature flags, then it’s actually a relatively sane architecture decision to make use of this chunk of the emerging web3 tech stack to provide it. You might (as the rest of the tech stack comes into play) end up actually having to write less code?

Maybe the ownership experience of NFT-backed feature flags would actually be greater than non-NFT-backed feature flags, and you would be able to charge more? Expensive-to-provide features (like ones that consume a lot of bandwidth) could even cost more. Applications could end up with a business model that feels more like game DLC?

…but with some fascinating behaviour around users optimising their own apps around different roles (a viewer, a host, a facilitator, an editor, a teacher, etc) to represent the roles they have in their teams, and, in this multiplayer world, mutually learning from one another about how to adapt and co-create their own user interfaces.

3rd party marketplaces to provide and trade feature flags would arise.

And then there should be some fun features too. It’s not all whiteboards. What would it mean to have a rare and therefore somehow valuable feature flag? What would it feel like to be gifted one?

For me, this is maybe something to draw out of web3 – either just as inspiration or actually as some real tech.


Anyway. Adaptive user interfaces, avoiding clutter, adding social discovery, NFT-backed feature flags. Apps would start really simple and then grow in complexity around you as you discover features by meeting others. Add in a business model and it sounds like a real-world economy, right? Lots of user experience and design work to figure it out. Can you imagine Microsoft Office 2026 working like this? Something worth sketching I think.


Thanks to Sofi Lee-Henson, Pearl Pospiech, and others at Sparkle for the work and imagination in developing these thoughts together. Standard disclaimer: this is super speculative. Posting now to generate conversation and get my thoughts lined up.

08 May 07:32

Counting My RSS Feed Subscribers

by Thejesh GN

How many subscribers does my blog have? It's a difficult question to answer. A while back, I moved from Feedburner to hosting my own feed, so I can do some estimation. It's based on the following assumptions.

  1. Some of the hosted multi-user feedreaders report the subscribers. We can extract that and use it.
  2. Self-hosted cloud-based ones usually don't. But I consider them as "1" based on the IP address.
  3. Folks using clients on their phone/PC without any cloud component are considered "1" per IP address.
  4. Some cloud-hosted multi-user feedreaders don't report subscribers. Currently, I consider them as one subscriber. I need a better way to figure this out.

It's not easy to answer that question

These are the feeds that run through my script that replaced Feedburner. Some folks use the blog's built-in feed, which I have not counted yet. I need to figure that out.

import hashlib
import json
import re
import urllib.request
from datetime import datetime

# Matches the subscriber count that hosted readers embed in their user
# agent, e.g. "Feedly/1.0 (...; 101 subscribers; ...)"
subscribers_re = re.compile(r"(\d+) subscribers")

DB_URL = "https://couchdb.example.com/subscribers"  # placeholder CouchDB endpoint
AUTH_KEY = "Basic ..."  # placeholder Authorization header value


def subscribers_log(event):
    event_data = {}
    event_data["event"] = "subscriber"
    event_data["datetime"] = datetime.utcnow().isoformat()
    event_data["date"] = datetime.utcnow().isoformat()[:10]
    event_data["count"] = 1  # default when the reader reports no count

    if "requestContext" in event:
        requestContext = event["requestContext"]
        if "path" in requestContext:
            event_data["feed"] = requestContext["path"]
        if "identity" in requestContext:
            identity = requestContext["identity"]
            if "sourceIp" in identity:
                event_data["source_ip"] = identity["sourceIp"]
            if "userAgent" in identity:
                user_agent = identity["userAgent"]
                simplified_user_agent = subscribers_re.sub("X subscribers", user_agent)
                event_data["user_agent"] = user_agent
                event_data["simplified_user_agent"] = simplified_user_agent
                match = subscribers_re.search(user_agent)
                if match:
                    event_data["count"] = int(match.group(1))

        # Insert only if we have all the fields that make up the key
        if not all(f in event_data for f in ("source_ip", "simplified_user_agent", "feed")):
            return
        k = (
            event_data["source_ip"]
            + event_data["simplified_user_agent"]
            + event_data["feed"]
        ).encode("utf8")
        if event_data["count"] > 1:
            # Multi-user readers report a total, so key on agent + feed only
            k = (event_data["simplified_user_agent"] + event_data["feed"]).encode(
                "utf8"
            )

        # Create a unique key: duplicate _ids are rejected by CouchDB, so
        # the same reader is only counted once per day
        h = hashlib.md5(k).hexdigest()
        event_data["_id"] = event_data["date"] + "_" + str(h)
        print(event_data)
        try:
            req = urllib.request.Request(DB_URL)
            req.add_header("Authorization", AUTH_KEY)
            req.add_header("Content-Type", "application/json; charset=utf-8")
            jsondataasbytes = json.dumps(event_data).encode("utf8")
            req.add_header("Content-Length", len(jsondataasbytes))
            urllib.request.urlopen(req, jsondataasbytes)
        except Exception:
            print("Error posting data")

Above is the script I use to extract the subscriber counts and add them to CouchDB. It borrows from Simon Willison's script. Since _id is the primary key in CouchDB, duplicate inserts are ignored.

{
  "_id": "2022-04-29_04e730996daaa95df14a71e01c9ae326",
  "_rev": "1-d1680d2ec633dffd17c17b70adde0b35",
  "event": "subscriber",
  "datetime": "2022-04-29T03:46:30.710509",
  "date": "2022-04-29",
  "feed": "/thejeshgn",
  "source_ip": "8.29.198.27",
  "user_agent": "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 101 subscribers; like FeedFetcher-Google)",
  "simplified_user_agent": "Feedly/1.0 (+http://www.feedly.com/fetcher.html; X subscribers; like FeedFetcher-Google)",
  "count": 101
}

Then once a day, I pull the previous day's data and do aggregation. I also pull data from WordPress and add it to the aggregate JSON document.

{
    "_id": "2022-04-29_subscriber_count",
    "event": "subscriber_count",
    "date": "2022-04-29",
    "feed": "/thejeshgn",
    "data":
    [
        {
            "provider": "feedly",
            "count": 101,
            "type": "rss"
        },
        {
            "provider": "theoldreader",
            "count": 23,
            "type": "rss"
        },
        {
            "provider": "bloglovin",
            "count": 3,
            "type": "rss"
        },
        {
            "provider": "inoreader",
            "count": 2,
            "type": "rss"
        },
        {
            "provider": "independent",
            "count": 70,
            "type": "rss"
        },
        {
            "provider": "wordpress",
            "count": 1015,
            "type": "follow"
        },
        {
            "provider": "wordpress",
            "count": 1132,
            "type": "email"
        }
    ]
}
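A minimal sketch of how that daily roll-up could work, assuming the day's event documents have already been fetched from CouchDB; the provider detection here (first token of the user agent) is a placeholder, not necessarily the real mapping:

from collections import defaultdict

def aggregate_day(event_docs, date, feed="/thejeshgn"):
    """Roll up one day's subscriber events into a single count document."""
    per_provider = defaultdict(int)
    for doc in event_docs:
        agent = doc.get("simplified_user_agent", "")
        # Placeholder provider detection: "Feedly/1.0 (...)" -> "feedly"
        provider = agent.split("/")[0].lower() if agent else "independent"
        per_provider[provider] += doc.get("count", 1)
    return {
        "_id": date + "_subscriber_count",
        "event": "subscriber_count",
        "date": date,
        "feed": feed,
        "data": [
            {"provider": p, "count": c, "type": "rss"}
            for p, c in sorted(per_provider.items())
        ],
    }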

The next step is to observe the script for a couple of days and weed out any bugs, and then update my subscription page and widget to reflect these numbers. My final goal is to beat my Twitter follower count, and I know it's not easy.

You can read this blog using RSS Feed. But if you are the person who loves getting emails, then you can join my readers by signing up.


08 May 07:32

How to blog

by Chris Corrigan

I’m single-handedly trying to lift a near-dead art form up from a seven-year slumber. It seems like everyone stopped blogging around 2015. In the intervening years folks would post “I really should get back to this” blog entries, but then would find themselves deep in Facebook world, where their writing was hard to find and search and sometimes limited only to friends. Or they would post on twitter, where the link sharing would happen but without the added reflection, and sometimes you’d have to battle bots and trolls to participate. Sure, twitter threads are okay. But why not just blog?

(And when I say “folks” I mean me. Projection is a specialty of mine)

I get why folks don’t blog and would rather post on Facebook. It seems like it needs too much work, seems too polished. Requires a regular schedule. So I want to make it easier with a few things that might help you get blogging. (Again, even)

Get a free platform with an RSS feed. If you don’t know what that means, just sign up at Blogger or WordPress. Those sites have good mobile interfaces so you can write from your phone (like I’m doing right now). They come with great templates. They are upgradeable and transferable to your own domain and they can export your posts. An RSS feed is how we can subscribe to your writing via a newsreader like Inoreader.

Don’t be perfect. It’s a blog, not the front page of the Globe and Mail. Think out loud, make typos (typos drive engagement, lol), put half formed ideas out there. Post whenever you want. Whatever you want.

Don’t worry about your brand. I think this one hamstrung me once I had a professional redesign my site in 2015. My brand IS learning and curiosity and half thought out ideas that folks are interested in. I support innovation and learning. That’s messy and edgy sometimes. Also I’m human. It’s nice to read words written by a human. But I don’t blog to sell my brand.

Give stuff away. If you make things, give some of them away. Blogging is a gifting culture. We up lift one another. My site here is full of stuff that I have made and others have made that has been released into the wild. Generosity is beautiful. Having said that, let us know how we can hire you or buy your art, because that’s how you make a living and it’s nice to give back.

Share links and quote people. Sometimes a blog is a place for your opinions or personal thoughts. Also take time to share good things on the web. The etymology of “blog” is a contraction of the word “weblog”, which comes from the idea that we log cool things that we find on the web. You want to know who is REALLY good at that? Dave Pollard, especially his periodic link collections. Incredible things to read and think about.

Comment on stuff you read. Facebook has done a marvellous job of colonizing conversation. I have seen some amazing threads there with all kinds of useful content shared and explored. Same on twitter. But, can we find them again? Are they indexed and easily accessible? Nope. They are fleeting. Facebook and Twitter are happy they happened because it improves their semantic learning, but they aren’t interested in your community or your colleagues or you. So go directly to people’s blogs and share your thoughts. I am interested in you.

Basically I’m encouraging you to blog with just as much care and attention as you do with a Facebook post or a tweet. But by blogging you are doing it outside of those places, in the wild where everyone can see it and participate. You don’t need to battle trolls or get drawn down algorithmically generated attractor basins because of what you write. You will be free.

What other tips do you have?

08 May 07:32

Four Books I'm Not Writing (Plus One)

I have bits and pieces of several overlapping technical books right now, but can’t decide which if any to complete:

  1. Building Software Together is advice for undergraduates working on their first team project (either in school or in industry). It’s the furthest along, but it’s been a few years since I was working with undergrads in large numbers so I suspect some of my advice is out of date and out of touch.

  2. Data Science for Software Engineers is an introduction to data analysis that (a) spends as much time on the messy business of getting, cleaning, and presenting data as it does on statistics and (b) uses software engineering data for all its examples. This one is more interesting to me personally, but it would be a lot of work to complete, and based on the polite disinterest I’ve received every time I’ve tried to get support for it, I suspect the audience is smaller than I’d like it to be.

  3. Designing Research Software is simultaneously a continuation of Research Software Engineering with Python, a book-length expansion of “Twelve quick tips for software design”, and a re-imagining of Software Design by Example (formerly Software Tools in JavaScript). Each chapter designs, constructs, and critiques a small version of something a research software engineer might actually build: a discrete particle simulator that illustrates the principles of object-oriented design, a file manager for large datasets that does error handling properly, a pipeline framework that records provenance in a reproducible way, and so on. Again, it’s interesting to me personally, but I suspect the audience is small.

  4. Software Engineering: A Compassionate, Evidence-Based Approach is the undergraduate software engineering textbook I think our profession needs today. Its starting points are (a) students should be introduced to the scientific study of programs, programmers, and programming and (b) now that we know how much harm software can do, we should teach prevention and remediation up front, just like the best civil and chemical engineering departments do.

The “which” part of my quandary is just my usual dithering; the “if any” goes a little deeper. Sales of technical books have been dropping steadily for twenty years: even ones as good as The Programmer’s Brain or Crafting Interpreters struggle to get through the noise. Textbooks have fared even worse, and not just because of publishers’ jam-today starve-tomorrow pricing models. I don’t think any book I could write today will ever reach as many people as Beautiful Code did 15 years ago. Maybe that shouldn’t matter to me, but it does.

And going even deeper, the two books I’m proudest of are two you’ve never read: Three Sensible Adventures and Bottle of Light. I wrote the stories in the first for my niece; it’s now out of print, but I have framed copies of the artwork up on the wall in my office and they’ve gotten me through some pretty bleak moments. The second, a middle-grade story about a world without light, is only available through one of Scholastic’s in-school programs for reluctant readers: I’ve tried periodically to buy back the rights, but so far the answer has always been “no”.

I really enjoyed writing both of them. If I could do anything with the years I have left I’d write more stories like the ones I loved growing up. I’d like to introduce you all to a cloudherd named Noxy (short for “Noxious Aftertaste”: the children in her village are given unpleasant names in order to discourage dragons from nibbling on them). I’d like you to meet a bookster’s apprentice named Erileine, a giant robotic dinosaur who may or may not be the original Santa Claus, and an orphaned clone with an odd knack for mechanical things trying to make ends meet in a post-Crunch Antarctica.

But all I’ve done in the last ten years is accumulate rejections, which makes it hard to keep writing. (And yes, I know about authors who got some-large-number of rejections before making a breakthrough sale, but I also know people who’ve won the lottery…) I know I should pick one of the above and get it over the line, but on a colder-than-spring-should-be Saturday morning, it’s all too easy to spend half an hour writing a blog post.

Family’s awake; time to make tea. Be well.

Later: the book I’d actually like to write is Sex and Drugs and Guns and Code, but I still don’t know enough and I’m not a good enough writer. Building Software Together and the software engineering textbook in the #4 spot above are partly attempts to smuggle some of those ideas past the defenses that people like my younger self have erected to protect themselves from feeling uncomfortable about themselves; one of the reasons I set BST aside was the realization that I’m not stealthy enough to pull it off.

08 May 07:31

Polyhierarchy in Taxonomies

by Heather Hedden

A defining characteristic of taxonomies is that terms/concepts are arranged in broader-narrower hierarchies, which may resemble tree structures. A limited number of top concepts each have narrower concepts, which in turn may have narrower concepts, etc., and the narrowest concepts at the bottom of the hierarchy are sometimes referred to as leaf nodes, as “leaf” extends the metaphor of “tree.” The tree model has its limits, though, because taxonomies may also have occasional cases of “polyhierarchy,” whereby a concept may have two or more broader concepts, instead of just one.

 

People who are new to taxonomies, however, might not consider polyhierarchies, because they tend to think of taxonomies as classification systems. Hierarchical information taxonomies have their origin in classification systems, such as the Linnean taxonomy of organisms, library classification systems, and industry classification systems. Classification systems, however, do not allow polyhierarchy within the system. Originally, classification systems were for physical things, such as books, which can belong in only one place, so there could be no polyhierarchy. Standard classification systems, such as industry classification systems, were developed by governmental, international, or nongovernmental organizations with a primary purpose of gathering and organizing statistical data about classes, and thus polyhierarchy is not permitted, as it would lead to double-counting of members of a class.

 

The primary purpose of hierarchy in a taxonomy is to provide guided browsing of topics to end-users, who may start out looking at broad categories and then drill down to find the narrowest concept of interest. Thus, polyhierarchy serves the same purpose. The idea is that different people will start at different points at the top of the hierarchy to arrive at the same concept of interest, which is tagged to the same content set. A polyhierarchy should be implemented if the concept’s relationship is correctly and inherently hierarchical in both of its cases. An example of a polyhierarchy is Educational software, which has both Software and Educational products as broader concepts. Educational software is a kind of software, fully included within Software, and Educational software is a kind of educational product, fully included within Educational products.
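In SKOS terms, that example is simply a concept with two skos:broader assertions. A minimal sketch using Python's rdflib, with invented URIs:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.com/taxonomy/")
g = Graph()

g.add((EX.EducationalSoftware, SKOS.prefLabel, Literal("Educational software", lang="en")))
# Polyhierarchy: two broader concepts, both genuinely hierarchical
g.add((EX.EducationalSoftware, SKOS.broader, EX.Software))
g.add((EX.EducationalSoftware, SKOS.broader, EX.EducationalProducts))

print(g.serialize(format="turtle"))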

 



 

Taxonomy standards and polyhierarchy issues


Taxonomy/thesaurus standards (ANSI/NISO Z39.19 and ISO 25964) describe three kinds of hierarchical relationships (generic-specific, generic-instance, and whole-part), and polyhierarchy may exist within any of these types. Polyhierarchy that combines different hierarchical types, however, can be problematic, so it is best to avoid mixing hierarchical relationship types. For example, the following polyhierarchy mixes different types:

Washington, DC

Broader: United States (whole-part)

Broader: Capital cities (generic-instance)


The reason to avoid creating a mixed-type polyhierarchy is simply that the browsable hierarchy can become compromised and potentially confusing to users: extensive hierarchies with large numbers of narrower-concept relationships would result. A hierarchical taxonomy tree should be designed with a single dominant hierarchy type. An exception is a thesaurus, which is designed not so much for top-down browsing as for browsing from term to term; mixing hierarchical types within a thesaurus is thus acceptable.

It is also recommended to avoid creating hierarchical relationships across different facets in a faceted taxonomy. This is because facets are designed to be mutually exclusive, so that concepts from multiple facets can be used in combination to limit/filter/refine a search. As such, facets are designed to be distinct aspects. An occasional exception of polyhierarchy across facets can exist, but more than 2-3 such polyhierarchies across an entire faceted taxonomy should be a cause for review.

With the wider adoption of the SKOS (Simple Knowledge Organization System) model for taxonomies and in taxonomy management systems, taxonomies are more commonly organized into concept schemes. A concept scheme can be represented as a facet in a faceted taxonomy, but it is not limited to use as a facet. Utilizing concept schemes, it makes sense to have separate concept schemes for different hierarchical types: some for generic-specific (types, categories, topics), one or more for whole-part (geography, organizational structures), and some containing lists of instances (named entities). In this model, Washington, DC, would be narrower only to the United States in the whole-part concept scheme for geographic places. It could also be linked to Capital cities, which is in a different concept scheme for place types, with a different kind of relationship (“related” or perhaps a semantic relationship from an ontology).

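A rough rdflib sketch of that modeling choice, again with invented identifiers: the hierarchical link stays inside the geographic places scheme, and the cross-scheme link is associative rather than hierarchical:

from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/taxonomy/")  # hypothetical namespace
g = Graph()

# Washington, DC lives in a whole-part scheme for geographic places...
g.add((EX.washingtonDC, SKOS.inScheme, EX.geographicPlaces))
g.add((EX.washingtonDC, SKOS.broader, EX.unitedStates))

# ...while Capital cities lives in a separate scheme for place types,
# linked with skos:related instead of skos:broader.
g.add((EX.capitalCities, SKOS.inScheme, EX.placeTypes))
g.add((EX.washingtonDC, SKOS.related, EX.capitalCities))
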
Although SKOS permits hierarchical relationships across different concept schemes, best practice is to confine hierarchical relationships and polyhierarchies within a single concept scheme, just as it is recommended not to have polyhierarchy across facets.

Additional polyhierarchy considerations

Polyhierarchy concerns concepts in the taxonomy, and it is not about objects, items, or assets that get tagged with taxonomy concepts, such as an individual publication, document, image, product record, etc. Each of these may get tagged with multiple taxonomy concepts, and as such may have multiple “classifications” and thus can appear as if they are in a polyhierarchy, if a frontend application displays tagged items as if they are leaf nodes in a taxonomy.

A polyhierarchy usually involves only two broader concepts, not more. Having more than two broader concepts is extremely rare. If you find yourself creating polyhierarchies of three or more multiple times in a taxonomy, check to make sure you are not doing something wrong with the hierarchy design.

Some content management systems, which have built-in taxonomy management and tagging features, do not support polyhierarchy. The best known is SharePoint, with taxonomies managed in its Term Store feature. Taxonomy terms may be “reused” across Term Sets, but polyhierarchy is not permitted within a Term Set, where it would be most suitable. See my past post, Polyhierarchy in the SharePoint Term Store, for more details.

02 May 14:00

Funders Convening on Climate Justice and Digital Rights

by thornet

In October 2021, Mozilla, the Ford Foundation and Ariadne kicked off a new research project exploring how digital rights and climate justice intersect.

In April 2022, we convened 30 digital rights funders interested in applying a climate lens to their work. Together in Lille, France we sought to better equip digital rights funders to craft grantmaking strategies on these issues and to build bridges across our movements. We also discussed the results of a landscape analysis led by The Engine Room and issue briefs prepared by Association for Progressive Communications, BSR and the Open Environmental Data Project.

The fuller proceedings from the event will be published on the project’s wiki page, along with the full research later in June.

Below are my opening remarks from the event.

Move Slowly, Quickly

Welcome to this convening of digital rights funders learning about climate justice.

We made it. We are here, in person, in Lille, France.

We have been living through the pandemic. It has been hard, and it has been a lot, and it is ongoing.

And we’re living through other crises.

The UN Secretary General just said in a tweet: We have 36 weeks—not 36 years, not 36 months—but 36 weeks, to dramatically reduce emissions from the world’s largest polluters to avert a climate catastrophe. Just 36 weeks.

Right now, there is a heatwave in India. With temperatures of over 45C for days on end, there are millions of people in danger and crops are failing. The people who are least responsible for the fossil fuel emissions causing this heatwave are suffering the most.

Russia’s war on Ukraine is fueled in part by reliance on fossil gas. A water emergency was just declared in California. And these are simply headlines from this week.

The climate crisis is not a single issue. It is an era. We are living in it now. And we’re going to be living in it for the rest of our lives.

The science is clear. We need to stop all new fossil fuel extraction. We have to transition away from fossil fuels as fast as possible. And we have to make rapid and unprecedented change to a more just and sustainable society.

This is not about individual blame and shame. It’s about systems change.

Today, as we gather here, let us see this crisis as urgent. And while we hold this, let us also acknowledge that the solutions may be slow.

I invite us to move slowly, quickly.

In Europe right now, there is a conversation about the twin transitions. This describes a digital transformation and a shift to renewable energy.

Within these agendas, technology is sold as a solution. It is the smart city. The surveilled borders. The automated everything.

The digital rights movement knows that these are false solutions. We have been fighting this fight for a long time. We know the issues around privacy, digital security, misinformation, and internet access.

And there are new perspectives we are learning about:

  • Big Tech’s very lucrative business selling machine learning services to Big Oil to speed up fossil fuel extraction
  • the historic and ongoing practices of colonization and environmental depletion taking on accelerated forms through technology, and
  • how the internet itself is a significant contributor to climate change, responsible for an estimated 2-3% of global carbon emissions, and has become the world’s largest coal-powered machine.

The twin transition is an opportunity.

Let us divest from Big Tech and Big Oil.
Let us invest in sustainable and just alternatives.

Let us, as digital rights organizations, partner with climate groups and grow solidarity with other movements working towards social justice and sustainable, equitable knowledge commons.

There is incredible work already underway. What it needs now is more support and more coalition. Together we can meet this moment.

Let’s move slowly, quickly.

02 May 04:27

Learning Synths - Playground

Ableton, Apr 29, 2022

One of my rules for OLDaily is that if something eats an hour of my time, I post it here so it'll eat an hour of your time too. Today's instance of that rule in action is this website, which took me step by step through a number of common synthesizer elements. You can simply play with the synth tool, or you can (as I did) work your way through the explanations, starting here. No, you won't know how to play the synthesizer after an hour - but you will know what people are talking about when they talk about the 'brightness' of a sound. I really liked the interactive aspects of this tutorial: you manipulate controls and both see and hear what they're talking about (days like this I wish I had another lifetime just to explore and create electronic music). Related: @aaronbatzdorff, SynthTalk, one of my faves on TikTok. Via Grant Potter.

Web: [Direct Link] [This Post]
02 May 04:26

Doing it for the likes?

by Chris Corrigan

Euan Semple was the first person I ever linked to on my blog. Today he posts a little reflection on his blogging practice:

…I’ve always said, my blog posts are mostly memos to self. They are for me to react to the world around me and to see those reactions placed before me for inspection. Yes inspection by others but mostly by me. Being concerned about whether or not people like what I have written affects how I write.

I guess this process mirrors our struggles to identify our true selves in the rest of our lives. The draw of relationship becomes pressure to conform.

Can we know ourselves without relationship? Can we truly be ourselves if it becomes too important?

In Jonathan Haidt’s latest essay in The Atlantic, “Why the Past Ten Years of American Life Have Been Uniquely Stupid,” he writes about how the “like” and “share/retweet” functions came into social media. They changed everything, mostly by letting likes and shares game the algorithm and by speeding up the uncritical consumption of information.

Before 2009, Facebook had given users a simple timeline: a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms.

By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous” for a few days. If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.

This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment, and their prediction of how others would react to each new action. One of the engineers at Twitter who had worked on the “Retweet” button later revealed that he regretted his contribution because it had made Twitter a nastier place. As he watched Twitter mobs forming through the use of the new tool, he thought to himself, “We might have just handed a 4-year-old a loaded weapon.”

As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.

These two functions definitely changed the way I write when I moved most of my writing to social media from the blog. Likes and shares are both powerful attractors but the most powerful of all is the comment. Because that one fosters reflective community and relationship.

02 May 02:32

Do We Know Ourselves Only Through Our Relationships?

by Dave Pollard


Aaron Williamson’s Model of Identity and Community, 2013

Most of us define ourselves in terms of our relationships — who we work with, who we live with, and who we consider to be in our ‘circles’. When we write a bio, or when we chat with people, we relate, mostly, our interactions with other people and what they have led to.

Euan Semple’s latest post, describing the very different nature of blogging (small, relatively engaged and enduring networks) versus social network posting (huge, anonymous, unengaged and transient networks), ruminates about to what extent we write to be liked.

No matter what we may tell ourselves, and others, the truth is that attention and appreciation and reassurance are inevitably a part of what motivates us to write in public ‘spaces’. For the large majority of former bloggers — those who quit blogging over the past fifteen years — social media simply offered more and easier attention, appreciation and reassurance than blogs. I would be lying if I said I didn’t sometimes wax nostalgic for the early days when I was a “Canadian Top Ten” blogger with 2000+ readers a day. I quit using Twitter, and Facebook (except for occasional visits for a specific purpose) long ago, when I realized they were hazardous to my mental health.

There has recently been a small resurgence of blogs, mostly on platforms like Medium (eg Umair Haque), Substack (eg Caitlin Johnstone), and Ghost (eg Indrajit Samarajiva) that are essentially newsletter content creation tools, since most people rely on email notifications to read blog content, rather than going to a blog’s site or using an “aggregator”. I suspect that the 550 subscribers to my blog using the follow.it free newsletter subscription service, represent a significant proportion of my total readership. Medium and Substack “monetize” blogs by nagging you to pay money to subscribe to “their” blogs (they share some of that money with content providers) and may block you, like the MSM do, if you try to read too much without subscribing.

If you are not already a celebrity and believe that you can become even modestly rich, or an “influencer”, by blogging on one of these platforms, I have a bridge I’d like to sell you.

If you’re not a member of the 1% of 1% that can earn real money on these platforms (anyone remember Clay Shirky’s Power Law of blogs?) the question remains then: Why do you write in a public ‘space’? Is it all about getting attention, appreciation and reassurance? Are you looking for ‘friends’ in all the wrong places?

Or are you seeking, as Euan suggests, to know yourself better through your relationships with others? And, he asks, Can we truly be ourselves if it becomes too important to us that we be liked, listened to, appreciated and reassured we are right?

These are brilliant and important questions, and we can only answer them for ourselves (and only then if we know ourselves well enough to be honest with ourselves).

So this is just my personal answer to these questions:

As I get older, I become less and less concerned with what I know, including what I know about myself. Instead, I’m seeking to be a little more self-aware of why I think what I think, believe what I believe, and do what I do. If you’ve read my other work, you probably know that I believe we do what we do purely out of conditioning, and then rationalize it as being ‘our’ choice afterwards. Some of that conditioning is genetic, visceral. Much of it is cultural, and this is where relationships come in. Who and what we pay attention to, I believe, determines the sources of that conditioning.

But of course, who and what we pay attention to depends (if you buy my ‘conditioning’ logic) on our previous conditioning. That’s why, as someone who has recently and uncomfortably given up believing in free will, I don’t think we have any choice whatsoever about our actions, or about the thinking and belief systems that rationalize (rather than determine) them.

The best we can hope for, I believe, is to be a little more self-aware of how we are being conditioned and hence what underlies our belief systems. The people who have most influenced me, throughout my life, have always challenged their own beliefs, and mine, in search (perhaps in vain) for what is really true. So I’ve been encouraged to challenge everything I believe. That is, I’ve been conditioned to be less concerned with being ‘liked’, appreciated and reassured than about knowing what’s actually true. And I’ve been conditioned to be open (even enthusiastic) to ideas and evidence that completely undermine what I had believed to be true. And I’ve been conditioned (more biologically and genetically than culturally) to trust my instincts, which have told me, all my life: Life shouldn’t have to be this hard. Look at how effortlessly and equanimously wild creatures live. We must have it all wrong.

I’ve been blogging for just about 20 years now, and over that time my beliefs about almost everything have radically changed. I’m much less sure about what I believe than I used to be. And my beliefs are now wildly different from those of most of the people in my ‘circles’ — notably on matters of veganism and eating well, on the inevitability of global collapse, on my belief that everyone is doing their best and no one is ‘to blame’, on free will and the nature of ‘reality’, and, most recently, on the causes and resolution of the Ukraine war.

And I don’t really care that those I have relationships with don’t agree with or even understand what I believe. I mean, it would be nice if everyone agreed with me about these things, but I appreciate that what they all believe is not in anyone’s control (including mine). I will not convince a single soul on any of these matters unless and until they are ready to hear the message. If I do manage to convince someone, then that would have happened anyway.

So, like Euan, I now write my blog basically to think things through for myself, to “see my reactions placed before me for inspection”. When someone agrees with me, it is reassuring, but at the same time a bit troubling: Have I oversold this argument? Am I really sure I believe it myself?

As I get older, there are fewer people in my circles than there used to be. That’s been a mutual decision, as I tend to not burn bridges, nor assert myself to keep a relationship going unilaterally. Perhaps that’s why I feel I ‘know myself’ less well than I did when I was younger and more sure of everything. I am content not knowing myself, and content believing tentatively that one cannot know oneself. I am even dubious that there is a real ‘oneself’ to know.

So why do I have relationships? Not to be liked, or appreciated, or reassured, or to know myself better. It’s simply for the pleasure of their company. Like when they share a novel insight, a new idea, an enjoyable story, a warm hug, or a visit to a beautiful place or interesting event. Enjoying another’s company is an animal thing for me, not a means for self-exploration. Increasingly, I prefer to not even talk much; when I find people whose company I enjoy, I’m content to listen, or even with silence.

So, I don’t know who I am, or even if I am. And therefore I depend not a whit on relationships to try to figure it out, or express it. When asked how I self-identify, in any context, I simply reply that I do not.

Have there been times when I have self-censored or misrepresented my beliefs (which some would simply call lying) in the company of others? Of course. We are conditioned to conform, suppress, compromise. Without that conditioning, our overcrowded, collapsing, overstressed world would be, I think, an unimaginably violent place, and collapse would have occurred long ago. We are not meant to live this way. 

But my concealing my feelings or beliefs is something I (at least now) do consciously, rather than because I want to be liked, or listened to, or appreciated, or reassured. The cognitive dissonance can be massive, but it’s bearable if you don’t take yourself too seriously! If I wanted to be popular, I certainly wouldn’t be writing about radical non-duality, collapse, our lack of free will, or the vulnerability of people across the political spectrum to propaganda, cognitive bias, simplistic thinking, and groupthink.

It is true that I don’t write much about the health benefits of a balanced, minimally-processed, entirely plant-based diet, or the dangers of other diets. This is the only subject in my 20 years of blogging that has attracted death threats. I don’t write about it, not out of fear, or not being liked, or losing readers, but because it’s simply not worth debating. When people are ready to listen to this message, they will hear it, probably from experts in nutrition. I’m not being inauthentic, to them or to myself, to consciously avoid wasting time and energy talking about it.

So, I can’t speak for others, but my sense of self-worth is not tied up with others’ opinions of me. I may be deliberately cautious about what I say to people, and sometimes (I think tactfully) even leave people with the impression I believe something when I don’t. Sometimes it makes sense to pick your battles, or wriggle out of them entirely. There’s no shame in that. But that doesn’t change what I think, believe, or feel. It doesn’t change ‘me’ (if there even is a ‘me’).

Thank you to Euan for posing these fascinating questions. Here are a couple more questions (shades of blogging’s notorious Friday Five!) that I have been thinking about of late, somewhat along the same lines:

  1. If you had to make a map of the 5, 15, and 150 people in your innermost ‘circles’, what questions would you ask yourself, and what criteria would you use, to come up with the list?
  2. On what (non-trivial) subjects have you recently, radically changed your mind, and what caused you to change it?

No answers are required. These are just, perhaps like some of the best questions, provocations to think about things that might help us know ourselves a little better.

02 May 02:30

Twitter and evidence-based software engineering

by Derek Jones

This year’s quest for software engineering data has led me to sign up to Twitter (all the software people I know, or know-of, have been contacted, and discovery through articles found on the Internet is a very slow process).

@evidenceSE is my Twitter handle. If you get into a discussion and want some evidence-based input, feel free to get me involved. Be warned that the most likely response, to many kinds of questions, is that there is no data.

My main reason for joining is to try and obtain software engineering data. Other reasons include trying to introduce an evidence-based approach to software engineering discussions and finding new (to me) problems that people want answers to (that are capable of being answered by analysing data).

The approach I’m taking is to find software engineering tweets discussing a topic for which some data is available, and to jump in with a response relating to this data. Appropriate tweets are found using the search pattern: (agile OR software OR "story points" OR "story point" OR "function points") (estimate OR estimates OR estimating OR estimation OR estimated OR #noestimates OR "evidence based" OR empirical OR evolution OR ecosystems OR cognitive). Suggestions for other keywords or patterns welcome.
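
For anyone wanting to try the same thing, here is a minimal Python sketch that runs this query against the Twitter v2 recent-search endpoint; it assumes a bearer token in a TWITTER_BEARER environment variable, and omits pagination and rate-limit handling:

import os
import requests

# The author's search pattern, verbatim
QUERY = ('(agile OR software OR "story points" OR "story point" OR "function points") '
         '(estimate OR estimates OR estimating OR estimation OR estimated '
         'OR #noestimates OR "evidence based" OR empirical OR evolution '
         'OR ecosystems OR cognitive)')

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {os.environ['TWITTER_BEARER']}"},
    params={"query": QUERY, "max_results": 100},
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])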

My experience is that the only effective way to interact with developers is via meaningful discussion, i.e., cold-calling with a tweet is likely to be unproductive. Also, people with data often don’t think that anybody else would be interested in it; they have to be convinced that it can provide valuable insight.

You never know who has data to share. At a minimum, I aim to have a brief tweet discussion with everybody on Twitter involved in software engineering. At a minute per tweet (when I get a lot more proficient than I am now, and have workable templates in place), I could spend two hours per day to reach 100 people, which is 35,000 per year; say 20K by the end of this year. Over the last three days I have managed around 10 per day, and obviously need to improve a lot.

How many developers are on Twitter? Waving arms wildly, say 50 million developers and 1 in 1,000 have a Twitter account, giving 50K developers (of which an unknown percentage are active). A lower bound estimate is the number of followers of popular software related Twitter accounts: CompSciFact has 238K, Unix tool tips has 87K; perhaps 1 in 200 developers have a Twitter account, or some developers have multiple accounts, or there are lots of bots out there.
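
Spelling out that arm-waving in a few lines (the numbers are the guesses above, not data):

developers = 50_000_000
accounts_low = developers // 1_000          # 1 in 1,000 -> 50,000 accounts
compscifact_followers = 238_000
implied_rate = developers / compscifact_followers
print(accounts_low)                          # 50000
print(round(implied_rate))                   # ~210, i.e. roughly 1 in 200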

I need some tools to improve the search process and help track progress and responses. Twitter has an API and a developer program. No need to worry about them blocking me or taking over my business; my usage is small fry, and I’m not building a business to take over. I was at Twitter’s London developer meetup this week (the first in-person event since Covid), and the youngsters present looked a lot younger than usual. I suspect this is because the slightly older youngsters remember how Twitter cut developers off at the knees a few years ago by shutting down some useful API services.

The Twitter version-2 API looks interesting, the Twitter developer evangelists are keen to attract developers (having ‘wiped out’ many existing API users), and I’m happy to jump in. There is a Twitter API sandbox for trying things out, and there are lots of example projects on GitHub. Pointers to interesting tools welcome.

02 May 02:27

Twitter Favorites: [skinnylatte] Folded many dumplings for lunch. Panfrying them. Will soon be eating all of them. https://t.co/v4ziwV0abg

Adrianna Tan @skinnylatte
Folded many dumplings for lunch. Panfrying them. Will soon be eating all of them. pic.twitter.com/v4ziwV0abg
02 May 02:27

Twitter Favorites: [skinnylatte] Crispy crispy pot stickers https://t.co/iGTtxc9CIQ

Adrianna Tan @skinnylatte
Crispy crispy pot stickers pic.twitter.com/iGTtxc9CIQ
02 May 02:13

How we drive positive health outcomes with Facebook and Instagram

by darren

2021 was rough for nonprofits trying to reach people with COVID-19 vaccination and safety information. Social media was – and continues to be – a repository for ever-changing health mandates, research flip flops and outright conflicting COVID information. We wouldn’t have blamed communications managers had they backed away from delivering COVID safety content on social platforms. But, that’s not what happened.

In the second half of 2021, global health organizations – from Senegal to Singapore and Slovenia to South Africa – banded together to run social media campaigns promoting COVID-19 prevention and vaccine confidence. Capulet supported global health organizations in using Meta social platforms to persuade people to follow COVID-19 public health guidance and to move them closer to getting a COVID vaccine. Here’s what we learned.

Spoiler alert . . . social media can contribute to knowledge, attitude and behavior change

Social media as a tool for health social behavior change is a relatively new concept. More than half of the health partners’ 113 campaigns showed a statistically significant uptick in knowledge, attitudes and self-reported behaviors – three cornerstones of social behavior change. 

These results were measured by Meta’s Brand Lift Study (BLS) tool. The tool surveys two audiences – one that’s exposed to campaign content and one that’s not – and measures the difference in knowledge, attitudes and behaviors between the two audiences. That difference represents the campaign lift.
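
As a toy illustration of what a lift number means here (the figures below are invented, chosen to echo the +4.9 percentage-point result mentioned later):

# Share of each surveyed audience giving the favorable answer
exposed_yes, exposed_n = 649, 1000   # saw the campaign
control_yes, control_n = 600, 1000   # did not see the campaign

lift = (exposed_yes / exposed_n - control_yes / control_n) * 100
print(f"campaign lift: {lift:+.1f} percentage points")  # +4.9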

“Our Facebook campaign ran in India when the country was going through the second wave of the pandemic,” said Stephen Maina, Social Media Optimization Manager at Population Services International. “Our messaging resulted in an increase in the number of self-reported people wearing masks.”

Black woman standing in a room with a camera and lights

CARE US reached young, vaccine-hesitant men with personal stories from people who chose to get the COVID vaccine. The audience exposed to those stories showed increased acceptance of friends and family getting the shot compared to those who didn’t see the ads.

CARE included a “social norming” question in its Brand Lift Study to learn whether personal stories can help normalize getting the vaccine. After seeing the ads, CARE’s target audience was asked, “When you think of most people whose opinion you value, how much would they approve of people getting a COVID-19 vaccine?” This question showed the greatest overall lift for the campaign – the largest lift of +4.9 percentage points with males ages 45-54 and a +3.2 percentage point lift with men ages 25-34.

FHI 360 Indonesia measured its campaign impact by tracking link clicks from Facebook ads to a national vaccine appointment booking tool. The ads pushed more than 300,000 people to the booking tool. “I was impressed,” said Benjamin Eveslage, the Technical Associate Director at FHI 360.

These results are encouraging when it comes to using social media for health SBCC, but the struggle with online SBC measurement is real, in part because a disconnect often exists between what happens online and on-the-ground health programs. Plus, tracking uptake on health services from a Facebook or Instagram campaign isn’t a closed loop. How many of those who booked a vaccine appointment from an online ad showed up for the shot? 

Still, Catholic Relief Services says the campaign results are convincing enough to open up a discussion of field teams incorporating social into on-the-ground work.

Programs + Comms = SBCC impact ❤

If you’ve worked inside a health organization, or at any NGO, you know program and communications teams often work in silos. We hypothesized that if country teams could provide local context and inform priority goals and messaging, and health technical staff could layer in social behavior change best practices, then communications teams would be well equipped to design SBCC campaigns. Fortunately, investigating social media as a tool for health social behavior change was a juicy enough topic to get all these teams to the table.

Digital and Social Media Manager, Jim Stipe, observed cross-functional teamwork emerge at Catholic Relief Services. “A side benefit of participating in the program was that our social media and SBC staff truly worked together for the first time, hopefully kicking off a long-term relationship for future collaboration and impact.”

Experimentation for the win

We all had a lot to learn as part of this pilot. More than a hundred nonprofit campaigns used Brand Lift Study data for the first time to measure campaign results, and many of us took our first deep dive into Social Behavior Change Communications. But, perhaps the most important learning was . . . keep learning! 

Some country teams learned the value of A/B testing for the first time and benefited from iterating and sharpening ad content before launching paid campaigns. Senior Communications Officer Sarah McKee at Management Sciences for Health, learned the power of flexibility and iteration. She tossed out her carefully-designed campaign plan to respond to on-the-ground reports that Guatemalans needed help locating where to get second doses of the vaccine. “Seeing the number of people who actively interacted, not just saw the ad and scrolled past, but actually clicked the link and went to the Ministry of Health website blew my mind. It was pretty amazing.” The campaign results showed a +3.7 percentage point lift in the number of Guatemalan women who were confident they could get a COVID-19 vaccine if they wanted to.

Learning from the A/B testing, Save the Children Bangladesh recorded pro-vaccine messages from doctors and public figures, including a prominent influencer food blogger. When the food blogger became a target of hurtful comments, the team quickly pivoted to an alternative strategy. They turned off those ads and launched new public health influencer videos featuring medical experts like Dr. Firdausi Quadri, a renowned scientist who specializes in immunology and infectious disease research and who was awarded the Ramon Magsaysay Award in 2021.

Deep social listening, flexibility, and the choice to showcase a medical expert like Dr. Quadri resulted in the video receiving over 640K link clicks. The videos by Bangladesh medical experts garnered the highest link clicks both globally and at the country level. Overall, the campaign resulted in more than three million visits to a Bangladesh national website sharing vaccine information.

From selfie videos in Laos to Messenger chatbots in India and “click-to-call” creative in the US, having resources to iterate powered creative learning and output.

At a time when anti-vaccine groups are actively targeting health organizations online, spreading misinformation and launching personal attacks on doctors and citizens, collaborations between trusted health NGOs are needed to spread accurate information on social platforms and counter dangerous misinformation. While each country team ran a campaign specific to their COVID context, the collective impact certainly made its mark and moved many towards choosing to get the COVID vaccine.

The post How we drive positive health outcomes with Facebook and Instagram appeared first on Capulet Communications.

02 May 02:09

What to stream in Canada after Apple TV+’s Severance

by Bradly Shankar
Adam Scott, Zach Cherry, John Turturro and Britt Lower watch on with concern in Severance

One of the surprise TV hits of the year is Severance.

The Apple TV+ sci-fi thriller series has received rave reviews for its gripping mystery-filled story about people who undergo a procedure to separate their work and personal memories for a shady corporation.

Here's the trailer:

https://www.youtube.com/watch?v=xEQP4VVuyrY

 

And here's the incredible, haunting opening title sequence:

https://www.youtube.com/watch?v=NmS3m0OG-Ug

The series, created by Dan Erickson and co-directed by Ben Stiller, stars Adam Scott, Zach Cherry, Britt Lower, John Turturro, Tramell Tillman, Patricia Arquette and Christopher Walken.

If you're like me and have devoured the show since its finale earlier this month, you might be looking for something to fill the void until the now-confirmed second season. With that in mind, we've rounded up 10 shows and movies that are similar to Severance, be it through story, themes or general vibes. As you'll notice, many of these feature sci-fi concepts and/or people rebelling against morally corrupt companies, just like Severance.

Without further ado, here are eight shows (and two movies) to watch after Severance.


Black Mirror

https://www.youtube.com/watch?v=di6emt8_ie8

When people talk about "cautionary media tales about technology," Black Mirror is pretty much the go-to modern example. Outside of a few exceptions, the Charlie Brooker-created series has quite the bleak outlook on both current and hypothetical sinister uses of technology. On the whole, it's darker than Severance, although it does explore different genres through a sci-fi lens, including romance ("San Junipero"), horror ("Playtest") and black comedy ("The National Anthem"). Best of all, each episode is standalone, so you can dip into whichever one sounds most appealing.

Runtime: 22 episodes (41 minutes to 1 hour, 29 minutes each)
Genre: Sci-fi anthology

Stream Black Mirror on Netflix. It's worth noting that a standalone film, Black Mirror: Bandersnatch, is also available on Netflix. Unlike the Black Mirror episodes, though, it's an interactive "choose-your-own-adventure" experience.

Brazil

https://www.youtube.com/watch?v=ZKPFC8DA9_8

While this list is mostly focused on shows, we're including a couple of movies based on cited influences by Severance creator Dan Erickson.

Set in a dystopian society, Terry Gilliam's Brazil follows a bureaucrat who becomes an enemy of the state when he pursues the woman of his dreams. Jonathan Pryce, Kim Greist, Robert De Niro, Katherine Helmond and Ian Holm star.

Runtime: 2 hours, 12 minutes
Genre: Dystopian, black comedy

Unfortunately, Brazil isn't currently on a streaming service. You can, however, rent it on premium video on demand (PVOD) platforms like iTunes and Google Play for $4.99 CAD.

Devs

https://www.youtube.com/watch?v=yJF2cB4hHv4

This show is probably the closest 1:1 comparison to Severance, in that it's a conspiracy-ridden sci-fi thriller that follows a software engineer investigating the death of her boyfriend at a shady tech company.

The miniseries was created by acclaimed Ex Machina filmmaker Alex Garland and stars Sonoya Mizuno, Nick Offerman, Jin Ha, Zach Grenier and Toronto's own Alison Pill.

Runtime: Eight episodes (43 to 57 minutes each)
Genre: Sci-fi thriller

Stream Devs on Disney+.

Homecoming

https://www.youtube.com/watch?v=9WJSdpE-sJQ

Another show, another mysterious company. This anthology series follows an unconventional wellness company and its 'Homecoming Initiative,' which helps soldiers re-transition into civilian life.

Based on Gimlet Media's podcast of the same name, Homecoming was created by Eli Horowitz and Micah Bloomberg and tells a different story in each of its two seasons. The first season features Julia Roberts, Bobby Cannavale, Shea Wigham and Toronto's own Stephan James, while Season 2 stars Janelle Monáe, Joan Cusack, Chris Cooper and James.

Runtime: Two seasons (17 episodes at 24 to 37 minutes each)
Genre: Psychological thriller

Stream Homecoming here.

Made For Love

https://www.youtube.com/watch?v=lvWgNSLIULw

Hazel escapes from a toxic marriage with tech billionaire Byron Gogol, only to discover that her husband is tracking her through a chip he implanted in her head.

While that's an undeniably creepy premise, the show is a dark comedy overall, featuring fun performances from Cristin Milioti (Hazel), Billy Magnussen (Byron) and, especially, Ray Romano as Hazel's sex doll-loving dad.

Runtime: Eight episodes (first season)
Genre: Sci-fi, dark comedy

Stream Made For Love (Season 1) on Amazon Prime Video. The second season of the show is premiering in the U.S. on HBO Max on April 28th, but Prime Video Canada isn't getting it until May 20th.

Maniac

https://www.youtube.com/watch?v=L6cDDmk-O5A

Two strangers, Annie (Emma Stone) and Owen (Jonah Hill), connect while undertaking a mind-bending pharmaceutical trial in a retro-future New York City.

Notably, Patrick Somerville, who was the showrunner of Made For Love, created this miniseries, which was itself based on one from Norway. No Time to Die director Cary Joji Fukunaga also helmed every episode of Maniac.

Runtime: Ten episodes (26 to 47 minutes each)
Genre: Psychological, dark-comedy

Stream Maniac on Netflix.

Mr. Robot

https://www.youtube.com/watch?v=xIBiJ_SzJTA

A mentally ill cybersecurity engineer joins a hacktivist group targeting the largest conglomerate in the world (See a pattern here?).

The Sam Esmail-created series stars Rami Malek, Christian Slater, Carly Chaikin, Portia Doubleday and Michael Cristofer.

Runtime: Four seasons (45 episodes at 40 to 65 minutes each)
Genre: Drama, thriller

Stream Mr. Robot on Amazon Prime Video.

The Truman Show

https://www.youtube.com/watch?v=dlnmQbPGuls

Here's another film that Erickson has said influenced Severance, and it's easy to see how. The Peter Weir-directed film follows a man (Newmarket, Ontario's own Jim Carrey) who discovers that his seemingly ordinary life was actually a reality TV show.

Runtime: 1 hour, 43 minutes
Genre: Psychological, comedy-drama

Unfortunately, like Brazil, The Truman Show isn't actually on a streaming service at the moment. Instead, you can rent it on PVOD platforms like Google Play ($3.99) and iTunes ($4.99).

Upload

https://www.youtube.com/watch?v=0ZfZj2bn_xg

After dying prematurely, a computer engineering grad gets "uploaded" into a virtual afterlife and must adjust to the pros and cons of his new existence.

The Office's Greg Daniels created the series, which gives you an idea of the tone (i.e. it's not dark like Severance). Notably, Upload stars Toronto's own Robbie Amell and was also filmed in Vancouver.

Runtime: Two seasons (17 episodes at 24 to 46 minutes each)
Genre: Sci-fi, comedy-drama

Stream Upload on Amazon Prime Video.

Westworld

https://www.youtube.com/watch?v=WgdVR3BimVQ

High-paying guests visit a technologically-advanced Wild West-themed amusement park, which is run by android "hosts."

Based on Michael Crichton's 1973 film of the same name, Westworld was created by Jonathan Nolan and Lisa Joy and features an ensemble cast that includes Evan Rachel Wood, Thandiwe Newton, Jeffrey Wright, James Marsden, Anthony Hopkins, Tessa Thompson and Aaron Paul.

Runtime: Three seasons (28 episodes at 57 to 91 minutes each)
Genre: Sci-fi, dystopian

Stream Westworld on Crave.


Honourable mentions: This list is focused on streaming, but we'll also quickly shout out some video games. Erickson has publicly noted that The Stanley Parable (a first-person narrative PC game about a worker in an office building, which is getting a new release on consoles) influenced Severance. As well, Remedy's third-person shooter Control (a mind-bending adventure set in an eerie bureau) has some strong visual and thematic ties to Apple's show.

Are you also obsessed with Severance? Which of these shows and movies do you like or, alternatively, are considering watching for the first time? Let us know in the comments.

Image credit: Apple

29 Apr 02:25

A tiny CI system

Christian Ştefănescu shares a recipe for building a tiny self-hosted CI system using Git and Redis. A post-receive hook runs when a commit is pushed to the repo and uses redis-cli to push jobs to a list. Then a separate bash script runs a loop with a blocking "redis-cli blpop jobs" operation which waits for new jobs and then executes the CI job as a shell script.
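
The worker half of that recipe is easy to sketch. Here is a rough Python equivalent of the bash loop, using redis-py; the "jobs" list name comes from the post, while the job payload format and run-ci.sh script are assumptions:

import subprocess
import redis

r = redis.Redis()
while True:
    # Block until the post-receive hook pushes a job onto the "jobs" list
    _key, job = r.blpop("jobs")
    repo, commit = job.decode().split()            # assumed payload: "<repo> <commit>"
    subprocess.run(["./run-ci.sh", repo, commit])  # hypothetical CI script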

Via @stchris_

29 Apr 02:16

Convenience stores on every corner

by Aya Tanikawa

Hi, this is Aya from the support team at Datawrapper. You can usually find me answering your questions at support@datawrapper.de.

It’s four in the morning here in Tokyo and I’m struggling to settle on a topic for my Weekly Chart. In a weird hour like this, my favorite place to grab a midnight snack is a convenience store just around the corner.

In Berlin, there are Spätis. It’s tabacs in Paris. There’s a corner store in every country, and in Japan we call them “konbini.” But a konbini is not just a place where you can get food and drinks. It’s where I used to hang out with friends or print out school assignments when I was younger. I use their bathrooms, ATMs, garbage boxes, and electrical plugs on a daily basis.

Here is a list of many other things you can do in a typical konbini:

  • Shop from approximately 3000 products — 100 new products every week
  • Get hot food, coffee, and bento boxes
  • Print, scan, fax, and copy documents
  • Print out local government documents, like residence and tax certificates
  • Print photographs and create business cards
  • Buy tickets to concerts and events
  • Subscribe to insurance services (e.g., car insurance, health insurance)
  • Deliver and receive packages
  • Use dry cleaning services
  • Use bike rental services

    (source: 7-Eleven website; this book on the trends in the convenience store industry)
A sketch of the 7-Eleven closest to home. The products are placed in a specific way to guide customers around the store. Drinks are always at the back to pull customers in. The counter is never directly in front of the entrance, to avoid awkward eye contact with the cashier and to make the store more inviting.

The biggest convenience for me is that they’re open 24/7 and they’re everywhere. The first convenience store in Japan opened sometime in the 1970s, and now there are more than 56,000 around the country. Just in Tokyo, there are close to 8000.

In an attempt to quantify and visualize this ubiquity, I decided to see how many stores I could find within a one-kilometer radius in central Tokyo.

A photograph of the Scramble Crossing in Shibuya, central Tokyo, Japan. Photo by Timo Volz on Unsplash

Within one kilometer of the famous Shibuya Scramble Crossing, I found 114 convenience stores. Sometimes you’ll even find multiple stores from the same chain on the same block! And this isn’t even the densest area of Tokyo.

Obviously, the density and the types of convenience stores you find will vary across the country, which is visualized beautifully in this Twitter thread. I’m currently staying in the outskirts of Tokyo, but even from my quick 10-minute walk I passed more than eight stores and managed to take photographs of the three top chains in the country: 7-Eleven, Family Mart, and Lawson.

To show you what they look like, I went for a quick stroll to find all three within a few minutes' walk of each other.

To create the map above, I used Yahoo!’s Local Search API to search for convenience stores within a kilometer of the Scramble Crossing. Since I had around 100 markers, I uploaded them as a CSV file to my locator map. Then I used Hans’ tool to draw the 1 km circle and geojson.io to draw the dashed radius line, and imported both as custom markers. Datawrapper locator maps have an option to turn on the 3D buildings, and you can also tilt the perspective slightly by pressing ctrl + drag.
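
For anyone wanting to reproduce the lookup, here is a rough Python sketch of the API step. The endpoint, parameters, and response shape are my reading of Yahoo! Japan's Local Search API docs and may have changed; the app ID and coordinates are placeholders:

import csv
import requests

resp = requests.get(
    "https://map.yahooapis.jp/search/local/V1/localSearch",
    params={
        "appid": "YOUR_APP_ID",   # hypothetical credential
        "query": "コンビニ",       # "convenience store"
        "lat": 35.6595,            # roughly the Scramble Crossing
        "lon": 139.7005,
        "dist": 1,                 # search radius in km
        "results": 100,
        "output": "json",
    },
)
features = resp.json().get("Feature", [])
with open("konbini.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "lon", "lat"])
    for feat in features:
        lon, lat = feat["Geometry"]["Coordinates"].split(",")
        writer.writerow([feat["Name"], lon, lat])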

These stores are convenient, but that convenience comes with its own issues. With so many stores operating 24/7, there’s a constant labor shortage. Food waste is another: with an abundant choice of hot food and lunch boxes and a constant supply of new limited editions, a lot of products have to be discarded at the end of the day. But... who can say no to more than thirty options of ice cream at 4 AM?

My midnight snack: Matcha Shaved Ice, one of the hundred new products to be shelved this week. 149 yen (= $1.14)

It’s definitely a fun place to visit if you’re in Japan. It’s a microcosm of people’s necessities and what’s trending in the country. Is there a convenience store where you live? What do you mostly use it for?


That’s it for this week! If you’ve read this far, thank you. Next week, Eddie will be back with her Weekly Chart. See you then!

28 Apr 14:33

Seeing files opened by a process using opensnoop

by Simon Willison

I decided to try out atuin, a shell extension that writes your history to a SQLite database.

It's really neat. I wanted to see where the SQLite database lived on disk so I could poke around inside it with Datasette.

The documentation didn't mention the location of the database file, so I decided to figure that out myself.

I worked out a recipe using opensnoop, which comes pre-installed on macOS.

In one terminal window, run this:

sudo opensnoop 2>/dev/null | grep atuin

Then run the atuin history command in another terminal - and the files that it accesses will be dumped out by opensnoop:

  501  51725 atuin          4 /dev/dtracehelper    
  501  51725 atuin         -1 /etc/.mdns_debug     
  501  51725 atuin          4 /usr/local/Cellar/atuin/0.9.1/bin 
  501  51725 atuin         -1 /usr/local/Cellar/atuin/0.9.1/bin/Info.plist 
  501  51725 atuin          4 /dev/autofs_nowait   
  501  51725 atuin          5 /Users/simon/.CFUserTextEncoding 
  501  51725 atuin          4 /dev/autofs_nowait   
  501  51725 atuin          5 /Users/simon/.CFUserTextEncoding 
  501  51725 atuin         10 .                    
  501  51725 atuin         10 /Users/simon/.config/atuin/config.toml 
  501  51725 atuin         10 /Users/simon/.local/share/atuin/history.db 
  501  51725 atuin         11 /Users/simon/.local/share/atuin/history.db-wal 
  501  51725 atuin         12 /Users/simon/.local/share/atuin/history.db-shm 

Then I ran open /Users/simon/.local/share/atuin/history.db (because I have Datasette Desktop installed) and could start exploring that database:

Screenshot of Datasette showing the history table from the atuin database

The 2>/dev/null bit redirects standard error for opensnoop to /dev/null - without this it spews out a noisy volume of dtrace: error ... warnings.

Alternative solutions

My Twitter thread asking about this resulted in a bunch of leads that I've not fully investigated yet, including:

  • FileMonitor
  • FSMonitor
  • iosnoop
  • fs_usage
  • sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'
  • strace (Linux, not macOS)
28 Apr 14:32

Dave Gets Censored by the CBC

by Dave Pollard

I almost never comment on online news stories, but I was absolutely incensed by Trudeau Jr’s new law legalizing the theft of Russian citizens’ bank accounts and other property without a trial or right of appeal.

So I wrote this comment on the CBC’s story on the new law, in response to another commenter who called the new law what it is: theft. This was my comment:

Yep. We didn’t like the First Nations people so we stole their property. We didn’t like Canadians who came from countries we fought in WW2, so we stole their property and put them in camps. We didn’t like the Taliban so we sat idle as the US stole billions of dollars from Afghani bank accounts, some of it humanitarian aid funds desperately needed by their starving citizens, and gave it out to Americans. Now because we don’t like Russia we’ve legalized the theft of property from Russian citizens. Wonder what will happen if the government decides it doesn’t like us?

Since it was my first comment in three years on an online news article, I was curious to see if the majority of readers would agree or disagree, or whether there would be any response at all. Since I received no notifications, I figured the latter, which is fine, but when I clicked on my name at the top of the CBC site (I had to register to comment), I found out why:

My comment was censored. Right after I posted it. No explanation. Hundreds of inflammatory anti-Russian war- and hate-mongering comments were approved, but apparently mine was unacceptable.

Foolish naive Canadian that I am, I guess I shouldn’t have been surprised, but I was.

28 Apr 14:31

Map and Nested Lists

On StackOverflow, a questioner with a bunch of data frames (already existing as objects in their environment) wanted to split each of them into two based on whether some threshold was met on a specific column. Every one of the data frames had this column in it. Their thought was that they’d write a loop, or use lapply after putting the data frames in a list, and write a function that split the data frames, named each one, and wrote them out as separate objects in the environment.

Here’s a tidyverse solution that avoids the need to explicitly write loops, using map instead of lapply. (I have no particular dislike of lapply et fam, I’ll just be working with the tidyverse equivalents.)

I should say at the outset that you could probably do the whole thing using grouped or nested data frames, thus avoiding the need to create the objects in the first place. But if you really do want to create the objects, or perhaps the starting data frames are a little less amenable to living in a single data frame because they have different numbers of columns or something, then working with a list of data frames will do very well. You could do something like the following.

First, let’s do some setup to make the example reproducible.

library(tidyverse)
set.seed(42722)

## Names of the example data frames we'll create
## are df_1 ... df_5
df_names <- paste0("df_", 1:5) %>% 
  set_names()

## We'll make the new dfs by sampling from mtcars
base_df <- as_tibble(mtcars, rownames = "model") %>% 
  select(model, cyl, hp)

## Create 5 new data frame objects in our environment.
## Each is a sample of ten rows of three columns from mtcars
df_names %>% 
  walk(~ assign(x = .x,         # each element of df_names in turn
                value = sample_n(base_df, 10), 
                envir = .GlobalEnv))

## Now we have, e.g.
df_1
#> # A tibble: 10 × 3
#>    model               cyl    hp
#>    <chr>             <dbl> <dbl>
#>  1 Chrysler Imperial     8   230
#>  2 Mazda RX4 Wag         6   110
#>  3 Merc 450SE            8   180
#>  4 Porsche 914-2         4    91
#>  5 Toyota Corona         4    97
#>  6 Ford Pantera L        8   264
#>  7 Toyota Corolla        4    65
#>  8 Merc 280C             6   123
#>  9 Duster 360            8   245
#> 10 Merc 230              4    95

This is a handy use of walk(), by the way, which works just the same as map() except it’s for when you are interested in producing side-effects like a bunch of graphs or output files or, as here, new objects, rather than further manipulating a table or tables of data.

Next, get these five data frames and put them in a list, which is where the question starts from.

df_list <- map(df_names, get)

Here we use get(), which returns the value of a named object (i.e., for our purposes, the object). We have map() feed the vector of df names to it, and because map() always returns a list we get the data frames we created, now bundled into a list.

Because I’m working with Tidyverse functions, I wrote map(df_names, get) out of habit. But there’s also a Base R function that does the same thing, for the case of getting named objects into a named list. It’s mget(). I could have written

df_list <- mget(df_names) 

to get the same result. It looks like this:

df_list

#> $df_1
#> # A tibble: 10 × 3
#>    model               cyl    hp
#>    <chr>             <dbl> <dbl>
#>  1 Chrysler Imperial     8   230
#>  2 Mazda RX4 Wag         6   110
#>  3 Merc 450SE            8   180
#>  4 Porsche 914-2         4    91
#>  5 Toyota Corona         4    97
#>  6 Ford Pantera L        8   264
#>  7 Toyota Corolla        4    65
#>  8 Merc 280C             6   123
#>  9 Duster 360            8   245
#> 10 Merc 230              4    95
#> 
#> $df_2
#> # A tibble: 10 × 3
#>    model                 cyl    hp
#>    <chr>               <dbl> <dbl>
#>  1 Toyota Corolla          4    65
#>  2 Pontiac Firebird        8   175
#>  3 Merc 280                6   123
#>  4 Merc 450SE              8   180
#>  5 Toyota Corona           4    97
#>  6 Lincoln Continental     8   215
#>  7 Ferrari Dino            6   175
#>  8 Merc 450SLC             8   180
#>  9 Dodge Challenger        8   150
#> 10 Lotus Europa            4   113
#> 
#> $df_3
#> # A tibble: 10 × 3
#>    model                cyl    hp
#>    <chr>              <dbl> <dbl>
#>  1 Fiat 128               4    66
#>  2 Hornet 4 Drive         6   110
#>  3 Fiat X1-9              4    66
#>  4 Hornet Sportabout      8   175
#>  5 Maserati Bora          8   335
#>  6 Merc 230               4    95
#>  7 Valiant                6   105
#>  8 Mazda RX4 Wag          6   110
#>  9 Toyota Corona          4    97
#> 10 Cadillac Fleetwood     8   205
#> 
#> $df_4
#> # A tibble: 10 × 3
#>    model               cyl    hp
#>    <chr>             <dbl> <dbl>
#>  1 Fiat X1-9             4    66
#>  2 AMC Javelin           8   150
#>  3 Chrysler Imperial     8   230
#>  4 Valiant               6   105
#>  5 Hornet Sportabout     8   175
#>  6 Merc 240D             4    62
#>  7 Merc 280              6   123
#>  8 Mazda RX4 Wag         6   110
#>  9 Lotus Europa          4   113
#> 10 Hornet 4 Drive        6   110
#> 
#> $df_5
#> # A tibble: 10 × 3
#>    model                cyl    hp
#>    <chr>              <dbl> <dbl>
#>  1 Chrysler Imperial      8   230
#>  2 Porsche 914-2          4    91
#>  3 Camaro Z28             8   245
#>  4 Merc 450SL             8   180
#>  5 Toyota Corona          4    97
#>  6 Hornet 4 Drive         6   110
#>  7 Cadillac Fleetwood     8   205
#>  8 Merc 280C              6   123
#>  9 Toyota Corolla         4    65
#> 10 Pontiac Firebird       8   175

Now, working with this list of data frames, we can split each one into the over/under. If the split criteria were more complex we could write a more involved function to do it. But here we use if_else to create a new column in each data frame based on a threshold value of cyl.

## - a. Create an over_under column in each df in the list, 
##      based on whether `cyl` in that particular df is < 5 or not
## - b. Split on this new column.
## - c. Put all the results into a new list called `split_list`

split_list <- df_list %>% 
  map(~ mutate(., 
               over_under = if_else(.$cyl > 5, "over", "under"))) %>% 
    map(~ split(., as.factor(.$over_under))) 

The . inside the mutate() and split() calls is a pronoun standing for "the thing we're referring to/computing on right now". In this case, that's "the current data frame as we iterate through df_list". Now we have a nested list: each of df_1 to df_5 is split into an "over" and an "under" table. The whole thing looks like this:

```r
split_list
#> $df_1
#> $df_1$over
#> # A tibble: 6 × 4
#>   model               cyl    hp over_under
#>   <chr>             <dbl> <dbl> <chr>     
#> 1 Chrysler Imperial     8   230 over      
#> 2 Mazda RX4 Wag         6   110 over      
#> 3 Merc 450SE            8   180 over      
#> 4 Ford Pantera L        8   264 over      
#> 5 Merc 280C             6   123 over      
#> 6 Duster 360            8   245 over      
#> 
#> $df_1$under
#> # A tibble: 4 × 4
#>   model            cyl    hp over_under
#>   <chr>          <dbl> <dbl> <chr>     
#> 1 Porsche 914-2      4    91 under     
#> 2 Toyota Corona      4    97 under     
#> 3 Toyota Corolla     4    65 under     
#> 4 Merc 230           4    95 under     
#> 
#> 
#> $df_2
#> $df_2$over
#> # A tibble: 7 × 4
#>   model                 cyl    hp over_under
#>   <chr>               <dbl> <dbl> <chr>     
#> 1 Pontiac Firebird        8   175 over      
#> 2 Merc 280                6   123 over      
#> 3 Merc 450SE              8   180 over      
#> 4 Lincoln Continental     8   215 over      
#> 5 Ferrari Dino            6   175 over      
#> 6 Merc 450SLC             8   180 over      
#> 7 Dodge Challenger        8   150 over      
#> 
#> $df_2$under
#> # A tibble: 3 × 4
#>   model            cyl    hp over_under
#>   <chr>          <dbl> <dbl> <chr>     
#> 1 Toyota Corolla     4    65 under     
#> 2 Toyota Corona      4    97 under     
#> 3 Lotus Europa       4   113 under     
#> 
#> 
#> $df_3
#> $df_3$over
#> # A tibble: 6 × 4
#>   model                cyl    hp over_under
#>   <chr>              <dbl> <dbl> <chr>     
#> 1 Hornet 4 Drive         6   110 over      
#> 2 Hornet Sportabout      8   175 over      
#> 3 Maserati Bora          8   335 over      
#> 4 Valiant                6   105 over      
#> 5 Mazda RX4 Wag          6   110 over      
#> 6 Cadillac Fleetwood     8   205 over      
#> 
#> $df_3$under
#> # A tibble: 4 × 4
#>   model           cyl    hp over_under
#>   <chr>         <dbl> <dbl> <chr>     
#> 1 Fiat 128          4    66 under     
#> 2 Fiat X1-9         4    66 under     
#> 3 Merc 230          4    95 under     
#> 4 Toyota Corona     4    97 under     
#> 
#> 
#> $df_4
#> $df_4$over
#> # A tibble: 7 × 4
#>   model               cyl    hp over_under
#>   <chr>             <dbl> <dbl> <chr>     
#> 1 AMC Javelin           8   150 over      
#> 2 Chrysler Imperial     8   230 over      
#> 3 Valiant               6   105 over      
#> 4 Hornet Sportabout     8   175 over      
#> 5 Merc 280              6   123 over      
#> 6 Mazda RX4 Wag         6   110 over      
#> 7 Hornet 4 Drive        6   110 over      
#> 
#> $df_4$under
#> # A tibble: 3 × 4
#>   model          cyl    hp over_under
#>   <chr>        <dbl> <dbl> <chr>     
#> 1 Fiat X1-9        4    66 under     
#> 2 Merc 240D        4    62 under     
#> 3 Lotus Europa     4   113 under     
#> 
#> 
#> $df_5
#> $df_5$over
#> # A tibble: 7 × 4
#>   model                cyl    hp over_under
#>   <chr>              <dbl> <dbl> <chr>     
#> 1 Chrysler Imperial      8   230 over      
#> 2 Camaro Z28             8   245 over      
#> 3 Merc 450SL             8   180 over      
#> 4 Hornet 4 Drive         6   110 over      
#> 5 Cadillac Fleetwood     8   205 over      
#> 6 Merc 280C              6   123 over      
#> 7 Pontiac Firebird       8   175 over      
#> 
#> $df_5$under
#> # A tibble: 3 × 4
#>   model            cyl    hp over_under
#>   <chr>          <dbl> <dbl> <chr>     
#> 1 Porsche 914-2      4    91 under     
#> 2 Toyota Corona      4    97 under     
#> 3 Toyota Corolla     4    65 under
```

We can look at particular pieces of this by indexing into the nested list, e.g.

```r
split_list$df_3$under

#> # A tibble: 4 × 4
#>   model           cyl    hp over_under
#>   <chr>         <dbl> <dbl> <chr>     
#> 1 Fiat 128          4    66 under     
#> 2 Fiat X1-9         4    66 under     
#> 3 Merc 230          4    95 under     
#> 4 Toyota Corona     4    97 under
```

This is handy because we can use tab completion in our IDE to investigate the tables in the list.
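
As an aside not in the original post, we can also index programmatically; purrr's pluck() and Base R's [[ do the same job:

```r
## Three equivalent ways to pull out the same table
split_list$df_3$under
split_list[["df_3"]][["under"]]
pluck(split_list, "df_3", "under")
```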

We could just work with the list like this. Or we could bind them by row into a big data frame, assuming they all have the same columns. But the original questioner wanted them as separate data frame objects with the suffix _over or _under as appropriate. To extract all the "over" data frames from our split_list object and make them objects with names of the form df_1_over etc., we can do

```r
split_list %>%
  map("over") %>%                              # subset to the "over" dfs only
  set_names(nm = ~ paste0(.x, "_over")) %>%    # name each element, adding the _over suffix
  walk2(.x = names(.),                         # write out each df with its name
        .y = .,
        .f = ~ assign(x = .x,
                      value = as_tibble(.y),
                      envir = .GlobalEnv))
```

The line map("over") works just like the pluck() function, which in fact it uses behind the scenes, to retrieve the nested list elements named "over". This is equivalent to something like

```r
lapply(split_list, `[[`, "over")
```

in Base R, which applies the [[ selector to each element of the list, pulling out the piece named "over".

We use walk2() rather than walk() because the function we're applying needs two arguments to work: a vector of names for the objects, and the tibbles to be assigned to those names.
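
For comparison (a sketch, not from the original post), the same assignment step can be done without purrr using Base R's list2env():

```r
## Base R version of the same idea: name the "over" tables,
## then copy them into the global environment as separate objects
overs <- lapply(split_list, `[[`, "over")
names(overs) <- paste0(names(overs), "_over")
list2env(overs, envir = .GlobalEnv)
```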

Now in our environment we have e.g.

```r
df_5_over

#> # A tibble: 7 × 4
#>   model                cyl    hp over_under
#>   <chr>              <dbl> <dbl> <chr>     
#> 1 Chrysler Imperial      8   230 over      
#> 2 Camaro Z28             8   245 over      
#> 3 Merc 450SL             8   180 over      
#> 4 Hornet 4 Drive         6   110 over      
#> 5 Cadillac Fleetwood     8   205 over      
#> 6 Merc 280C              6   123 over      
#> 7 Pontiac Firebird       8   175 over
```

We can get the “under” data frames as objects in the same way.
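
Concretely, that's the same pipeline with "under" swapped in:

```r
split_list %>%
  map("under") %>%
  set_names(nm = ~ paste0(.x, "_under")) %>%
  walk2(.x = names(.),
        .y = .,
        .f = ~ assign(x = .x,
                      value = as_tibble(.y),
                      envir = .GlobalEnv))
```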

As I said above, depending on what was needed it might (in fact, it probably would) make more sense to do the whole thing from start to finish using a single tibble, grouping or perhaps nesting the data as needed. Or, if for whatever reason we did start with different objects, but we knew they all had the same columnar layout, we could bind them by row into a big data frame indexed by their names, like this:

```r
df_all <- bind_rows(df_list, .id = "id")

df_all

#> # A tibble: 50 × 4
#>    id    model               cyl    hp
#>    <chr> <chr>             <dbl> <dbl>
#>  1 df_1  Chrysler Imperial     8   230
#>  2 df_1  Mazda RX4 Wag         6   110
#>  3 df_1  Merc 450SE            8   180
#>  4 df_1  Porsche 914-2         4    91
#>  5 df_1  Toyota Corona         4    97
#>  6 df_1  Ford Pantera L        8   264
#>  7 df_1  Toyota Corolla        4    65
#>  8 df_1  Merc 280C             6   123
#>  9 df_1  Duster 360            8   245
#> 10 df_1  Merc 230              4    95
#> # … with 40 more rows
```

From there you could group the big data frame by id and then make the over/under measures and whatever else you wanted. But if you do want to mess around with lists (something that becomes curiously more tempting the more time you spend with either Base R lists or purrr), this is how you might begin.
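
As a rough sketch of that single-data-frame route (not from the original post; mean_hp is just a hypothetical stand-in for whatever measure you actually want):

```r
## Rebuild the over/under flag on the combined data frame,
## then summarize within each original df and group
df_all %>%
  mutate(over_under = if_else(cyl > 5, "over", "under")) %>%
  group_by(id, over_under) %>%
  summarize(mean_hp = mean(hp), .groups = "drop")
```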

28 Apr 00:52

These Weeks In Firefox: Issue 114

by Doug Thayer

Highlights

  • The devtools console is now significantly faster! If you are a developer who heavily uses the console, this should be a substantial quality of life improvement. This was a great collaboration between the devtools team and the performance team, and further performance improvements are inbound. – Bug 1753177
  • Thank you to Max, who added several video wrappers for Picture-in-Picture subtitle support
  • Font size of Picture-in-Picture subtitles can be adjusted using the preference media.videocontrols.picture-in-picture.display-text-tracks.size. Options are small, medium, and large. – Bug 1757219
  • Starting in Firefox >= 101, there will be a change to our WebExtension storage API. Each storage API storageArea (storage.local, storage.sync, etc.) will provide its own onChanged event (e.g. browser.storage.local.onChanged and browser.storage.sync.onChanged), in addition to the browser.storage.onChanged API event – Bug 1758475
  • Daisuke has fixed an issue in the URL bar where the caret position would move on pages with long load times

Friends of the Firefox team

Introductions/Shout-Outs

  • Thanks to everyone that has helped mentor and guide Outreachy applicants so far, and a huge shout-out to the applicants!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • av
  • irwp
  • Janvi Bajoria [:janvi01]
  • Max
  • Oriol Brufau [:Oriol]
  • sayuree
  • serge-sans-paille
  • Shane Hughes [:aminomancer]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As part of the ongoing ManifestVersion 3 work:
    • Menu API support for event pages – Bug 1748558 / Bug 1762394 / Bug 1761814
    • Prevent event pages from being terminated on idle while the Addon Debugging devtools toolbox is attached to the extension – Bug 1748530
    • Deprecated tabs APIs have been hidden (for add-ons with manifest_version explicitly set to 3) – Bug 1764566
    • Relax validation of the manifest properties unsupported for manifest_version 3 add-ons – Bug 1723156
      • Firefox will be showing warnings in “about:debugging” for the manifest properties deprecated in manifest_version 3, but the add-on will install successfully.
      • The deprecated manifest properties will be set as undefined in the normalized manifest data.
  • Thanks to Jon Coppeard's work in Bug 1761938, a separate module loader is being used for WebExtensions content scripts' dynamic module imports in Firefox >= 101
  • Thanks to Emilio's work in Bug 1762298, WebExtensions popups and sidebars will use the preferred color scheme inherited from the one set for the browser chrome
    • Quoting from Bug 1762298 comment 1: “The prefers-color-scheme of a page (and all its subframes) loaded in a <browser> element depends on the used color-scheme property value of that <browser> element”.

Developer Tools

  • Toolbox
    • Fixed pin-to-bottom issue with tall messages in the Console panel (e.g. an error with a large stacktrace) (bug)
    • Fixed CSS Grid/Flexbox highlighter in Browser Toolbox (bug)
      • The flex structure of a flexbox is now highlighted correctly.
  • WebDriver BiDi
    • Started working on a new command, browsingContext.create, which is used to open new tabs and windows. This command is important both for end users and for our own tests, to remove another dependency on WebDriver HTTP.
    • We landed a first implementation of the browsingContext.navigate command. Users can rely on this command to navigate tabs or frames, with various wait conditions.
    • On April 11th we released geckodriver 0.31.0, our proxy that lets W3C WebDriver-compatible clients interact with Gecko-based browsers. The release includes some fixes and improvements in our WebDriver BiDi support and a new command for retrieving a DOM node's ShadowRoot.

Form Autofill

Lint, Docs and Workflow

  • There are various mentored patches in work/landing to fix ESLint no-unused-vars issues in xpcshell-tests. Thank you to the following who have landed fixes so far:
    • Roy Christo
    • Karnik Kanojia
  • Patches have been posted for upgrading to the ESLint v8 series.
    • One thing of note is that ESLint will now catch cases of typeof foo == undefined. These always return false, since typeof yields a string; the comparison should be against the string "undefined", i.e. typeof foo == "undefined".

Password Manager

Picture-in-Picture

Search and Navigation

James added additional flexibility to generating locale and region names based on the user's locale/region in search-config.json

28 Apr 00:52

✚ How to Make Bubble Clusters in R

by Nathan Yau

Clusters of bubbles might not be the most visually precise way to show counts, but the individual elements can lend weight to the people and things that the aggregates represent as a whole. The one-to-one ratio between element and count feels less abstract than a bar or a line.

In this tutorial you learn how to create this one-to-one ratio.

Become a member for access to this — plus tutorials, courses, and guides.
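
The tutorial itself is members-only, but as a rough sketch of the one-element-per-count idea (using the packcircles package, with made-up counts):

```r
library(packcircles)
library(ggplot2)

## Hypothetical counts: draw one circle per unit, not one per category
counts <- c(apples = 5, pears = 3, plums = 8)
units  <- rep(names(counts), counts)

## Lay out one equal-area circle per unit, then get polygon vertices
layout <- circleProgressiveLayout(rep(1, length(units)), sizetype = "area")
verts  <- circleLayoutVertices(layout, npoints = 30)
verts$category <- units[verts$id]

ggplot(verts, aes(x, y, group = id, fill = category)) +
  geom_polygon(colour = "white") +
  coord_equal() +
  theme_void()
```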

28 Apr 00:52

Poly Sync 10: USB speakerphone

by Volker Weber
Poly Sync 20 with USB and Bluetooth

Poly is adding a new model without Bluetooth to the Sync family. I still recommend the Sync 20, because it makes you more flexible. It is already available for less than 100 euros, and I like it much better than the Jabra Speak 750.

The Sync 10 is for companies that don't want Bluetooth in the office. I can understand that, but I don't want it. 🙂

28 Apr 00:48

Masked

by peter@rukavina.net (Peter Rukavina)

Back in mid-early COVID, when Receiver Coffee had reopened in Victoria Row, young Joel turned me on to face-masks from Medium Rare. I bought one. I’ve been wearing their masks ever since, replacing them once every 9 months or so when the nose wire fatigues and breaks.

Yesterday, coincident with the announcement that masks will soon no longer be required in most places on Prince Edward Island, a package of three I ordered arrived in the mail (Cook’s Edge, the local chef supply shop, has stopped carrying them).

I don’t despair at this: I’ll keep wearing a mask after the May 6–“masks will be highly recommended in most indoor settings” says Public Health—and it seems that masks may be with us, in certain situations, in perpetuity.

28 Apr 00:42

The ADAC recommends that its members ride their bikes

by Ronny
mkalus shared this story from Das Kraftfuttermischwerk.

In light of rising energy prices, Germany's largest motorists' lobby recommends that its members leave the car at home for once and ride "to the bakery by bicycle instead of by SUV," according to ADAC president Christian Reinicke. And further: "For many short trips, driving the car makes no sense. For other trips, you can also use public transit now and then." Maybe hell is freezing over.

28 Apr 00:41

Elon Musk Doesn’t Want Free Speech on Twitter . . . and Neither Do You

by Josh Bernoff

Elon Musk’s bid to take over Twitter starts with platitudes about free speech. He says he’s a “free speech absolutist.” “Free speech is the bedrock of a functioning democracy,” he says in the news release about his takeover of the company, “and Twitter is the digital town square where matters vital to the future of … Continued

The post Elon Musk Doesn’t Want Free Speech on Twitter . . . and Neither Do You appeared first on without bullshit.

28 Apr 00:40

Elon Musk to buy Twitter for $44B US

Pete Evans, CBC News, Apr 26, 2022

The big news today is that Elon Musk has reached an apparent agreement with the Twitter board to buy the company for around $US 44 billion. He would then presumably take it private and run it as a fiefdom. This is of course always a possibility with proprietary platforms (while researching today I found that SlideShare is completely blocking me from viewing people's slides, which is what I feared when I left the platform after it was purchased by Scribd). So people shouldn't be surprised. If you want a public square, it has to be publicly owned. The best take on this is probably deAdder's (behind a subscription wall on Jeff Bezos's Washington Post website).

Unsurprisingly, as soon as the announcement hit, the federated social networking site Mastodon was hit with a surge of interest (and a subsequent slowdown, at least on mastodon.social). People interested in this open source alternative to Twitter are directed to joinmastodon.org, which will offer them a selection of instances to choose from (the whole idea of Mastodon is that we don't all end up on a single platform owned by a billionaire). Note that you'll still be depending on billionaire owners of telecom platforms, so this isn't an issue that goes away simply by #leavingtwitter.
