Shared posts

20 May 16:47

Canada bans Huawei and ZTE from 5G network

by Nida Zafar

The federal government has banned Huawei and ZTE from Canada's 5G network.

Public Safety Minister Marco Mendicino and François-Philippe Champagne, the Minister of Innovation, Science and Industry of Canada, made the announcement Thursday.

"Today we're announcing our intention to prohibit the inclusion of Huawei and ZTE products and services in Canada's telecommunication systems," Champagne said.

The move sees Canada following in the footsteps of its allies in the Five Eyes intelligence network. The alliance includes Australia, New Zealand, the U.K., and the U.S. Each of these countries has restricted the use of Huawei equipment.

Mendicino said the government will introduce new legislation that will establish a framework to help the government protect national security and respond to cyber threats.

The decision has been in the works for years, prompting some companies to move ahead with their 5G networks without Huawei. For example, Telus and Bell are working with Nokia and Ericsson to build their 5G networks. Rogers is working with Ericsson.

Asked about the time it took to make this decision, Champagne said "this was never a race."

"This is about making the right decision. This is about providing a framework to protect our infrastructure."

Champagne said providers who have Huawei equipment installed as part of their systems will be required to remove it. The companies won't be compensated.

Telus and Bell have previously asked the federal government to compensate them for removing Huawei equipment used in their 4G networks.

A detailed timeline explaining how Canada got to this point is available here.

Image credit: Shutterstock

20 May 16:47

No Wheels Allowed

by peter@rukavina.net (Peter Rukavina)

They really really (really) don’t want (any) wheeled things at the beach in West River.

20 May 16:47

Starship – A Roadmap to Mars

by Brian Hurley
Amazing infographic created by @alex_adedge
20 May 16:46

Apple Watch Series 6 is $130 off on Amazon

by Karandeep Oberoi

The 2020-released Apple Watch Series 6 is currently on sale at Amazon.

Regularly priced at $529.99, the 40mm GPS variant of the smartwatch is currently available for $399.99, marking a $130, or 25 percent, discount.

It's worth noting that only the 40mm GPS variant in the Product(RED) colourway is currently discounted.

The 2020-released smartwatch runs on Apple's S6 chipset and features a 1.78-inch Always-On Retina LTPO OLED display with a 448 x 368 pixel resolution and all the features we've grown to love, including SpO2 detection, ECG, sleep data, fitness tracking and more.

In his review of the Apple Watch Series 6, MobileSyrup managing editor Patrick O'Rourke gave the smartwatch an 8/10 rating.

Follow the link below to purchase the 40mm GPS variant of the Apple Watch Series 6 in the Product(RED) colourway from Amazon for $399.99.

Source: Amazon

20 May 16:46

Parkdale High Park Bikes meeting

by jnyyz

This group met in response to an alert about an upcoming discussion of active transportation options around High Park that will come before the Infrastructure and Environment Committee next week. The city posted an interim report on May 11. Interestingly enough, although the title of the report is “Interim Report for the High Park Movement Strategy”, the only detailed proposals that were put forward had to do with different roadway configurations to improve safety along Parkside Drive.

There are nine options being put forward. They are discussed in detail in a blog post by Rob Z. I won’t repeat his discussion here other than to note that our group also strongly supports his conclusions that options #6 and #9 were the only viable ones that also included protected bike lanes, and that there was a further preference for option #6 since option #9 included a bidirectional bike lane on the wide side of the street.

The interim report was also fairly light on plans about transportation within High Park itself. I recall a proposal for closing off the main loop to cars permanently, and allowing limited motor vehicle access between the High Park Blvd entrance and the Grenadier parking lot, but this didn’t appear in the report. There was also some discussion about simply removing car traffic from the park and providing access during the summer with bus service from High Park station.

There was a brief update on the Humber Trail Gap closure meeting that was held this past Tuesday.

There was a reminder that Lakeshore West will be closed to cars this coming Monday (Victoria Day) as part of ActiveTO.

We will also be sending a letter to Gord Perks’ office to get an update on some of the bike infrastructure improvements promised for the ward, including intersection improvements at Jane and Annette, configuration changes on Seaforth to allow through access between Brock and Lansdowne, and an update on the status of the construction blockages on Bloor under the two bridges between Lansdowne and Dundas West. Once we get those updates, we can have another audit ride to look things over.

Finally, it was noted that longtime bike advocates (and former leaders of the Ward14 group) Rob and Helen will be moving out of the ward at the end of this month. Fortunately they are not going far, and we wish them well.

20 May 16:45

week ending 2022-05-19 General

by Ducky

Testing

This paper found that dogs could be trained to recognize COVID-19 patients by smelling skin swabs with an accuracy of over 90%.


This report says that:

  • The percentage of all Canadian blood donors who had been infected with COVID-19 was about 28.7%, rising to 44.3% among 17-24-year-olds.
  • 99.57% of Canadian donors had evidence of spike antibodies, mostly from vaccination.
  • In BC and Vancouver, the rates of donors who had been infected were 32.09% and 35.72%, respectively.

This study from a polling company says that 30% of Canadians caught COVID-19 during the Omicron wave.


This paper reports that they have developed a highly accurate pee test for COVID-19.

Variants

This preprint says that a past BA.1 infection doesn’t protect you against BA.2.12.1 / BA.4 / BA.5 as well as a past Delta infection does. This appears to be because of the L452 mutation, which Delta and BA.2.12.1 / BA.4 / BA.5 have but which BA.1 does not. This was true whether or not the infected had been vaccinated. (In other words, don’t count on your past Omicron Classic infection to protect you.)

This article from 23 Feb 2021 says that the B.1.427 and B.1.429 variants which slammed Southern California also had the L452 mutation.

Long COVID

This article says that an unpublished study estimated how many COVID-19 patients had Long COVID symptoms after six months:

  • for mild cases, around 5% of women and 2% of men,
  • for hospitalized cases, 26% of women and 15% of men, and
  • for ICU cases, 42% of women and 27% of men.

The article says the same unpublished results indicate that the average disability score was equivalent to complete hearing loss or severe traumatic brain injury.


(I hesitated about whether this belonged under Long COVID or not, because it only looks at 90 days past infection.) This paper says that the 90-day chance of getting blood clots is as follows for people who had COVID-19:

  • unhospitalized patients: 0.2-0.8% in veins, 0.1-1.0% in arteries
  • hospitalized patients: up to 4.5% in veins, up to 3.1% in arteries

Pathology

This correspondence (which I think means something which hasn’t gone through peer review but which the editors think is important enough to print) proposes a theory for the juvenile hepatitis cases which have been happening:

  1. Infection with COVID-19.
  2. Formation of superantigens.
  3. Child “recovers” from COVID-19, but with continuing reservoir hiding in the gut.
  4. Child is infected with adenovirus 41F.
  5. The adenovirus activates the superantigens and all hell breaks loose.

This paper looked at autoantibodies to two different types of interferon, IFN-α2 and IFN-ω. They found increased relative risk of death (RRD) and infection fatality rate (IFR) with autoantibodies to either or both:

  • RRD, under 70 y/o: 17.0 (one autoantibody), 188.3 (both)
  • RRD, over 70 y/o: 5.8 (one autoantibody), 7.2 (both)
  • IFR, under 40 y/o: 0.17% (one autoantibody), 0.84% (both)
  • IFR, over 80 y/o: 26.7% (one autoantibody), 40.5% (both)

(Note that the RRD is lower for the over-70s than for the under-70s because the under-70s’ absolute risk starts out much lower, so the same autoantibodies produce a bigger relative jump.)

Why does this matter? Because checking for autoantibodies can help tell who needs more care.


This paper says that COVID-19 can be found in donor eyes, meaning that you could catch COVID-19 from a cornea transplant.


This paper says that exposure to particulate pollution increases the risk of developing COVID-19. NOx exposure did not seem to change COVID-19 risk.


This paper says that 11% of all patients hospitalized in Ontario or Alberta with COVID-19 who were discharged either were readmitted or died within the next 30 days.


This article says that the US CDC determined that the incidence of heart issues after getting a shot of Pfizer was lower in 5-11 year old boys than in teens and young adults. (Me, I think it’s related to testosterone levels.)

Vaccines

From this report from the UK, the effectiveness of two doses of Pfizer plus an mRNA booster keeps waning, although it’s kind of strange — you are worse off with three shots of Pfizer than with two???

Two doses of Moderna plus an mRNA booster looks better, but it’s still not fantastic.

Recommended Reading

This article talks about an interesting new way to treat COVID-19: with anti-aging drugs.


This article is a broad overview of new variants (and why they are problematic).


This article talks about efforts to quantify the disease burden of COVID-19, including of disability.


If you have not been keeping track of the benefits of nasal vaccines, this article will catch you up.


This story is about burnout in health care workers in the US. Reading it will either make you feel good that you don’t live in the US or make you very nervous that you don’t know exactly what is going on in the hospitals near you.


This might not be COVID-19, but it might be. This article discusses what we know and don’t know about the recent outbreaks of pediatric hepatitis.

20 May 16:45

week ending 2022-05-19 BC

by Ducky

I want to make a cautionary note about the stats. For about six weeks now, the over-70s and the clinically extremely vulnerable have been eligible for second boosters. Note that those are also the only groups which are eligible for walk-in PCR tests, so they are the only ones (aside from hospital admits) who show up in the official case count numbers.

This means that we could be seeing something here where there is a drop in cases in the over-70s and CEVs due to the boosters, while there is NOT a drop in the under-70 non-CEVs.

Statistics

The BC COVID-19 Modelling Group has issued another report, and it says… uh… lots of things, some of which I agree with and some of which I do not.

They say that wastewater data is not inconsistent with falling cases, but pffft, the wastewater data is too noisy to really tell.

(Note that the y-axis is not labelled!)

Here’s the wastewater data for the last 60 days from my buddy Jeff’s site:

Yeah, it looks like it is going down in Fraser and Vancouver? But it’s so noisy that it’s hard to be sure.

One of the most important things they do is estimate the under-70 case rate based on the over-70 case rate. Their rationale is that the over-70 rate did not change when the testing regime changed, and that the under-70 case curve was parallel to the over-70 case curve before the testing regime changed, so it should be parallel now. In general, I think that’s a clever and fine approach.

HOWEVER, they are using a recent dip in rates for the over-70s to justify a dip in the under-70s rate, WITHOUT acknowledging that the over-70s just got their second boosters (well, starting 5 April).

Another interesting set of data comes from the UMD Global COVID-19 Trends and Impacts Survey (done via Facebook). That isn’t totally scientific, but its past projections look reasonable, and it doesn’t show a decline AT ALL.

They also estimate the percentage which is BA.1.1 and how much is BA.2. From that, they made this chart, but I don’t understand how they went from “the percentage which is this even more transmissible variant is going up” to “total case counts are going to go down”. I guess they are basing this on the case count in over-70s going down very slightly?

One thing I definitely agree with them on is that the data the province gives us is crap. (Uh, tbf, they didn’t actually say that quite so plainly…)


Today’s BC CDC weekly report says that the week ending on 14 May there were: +334 hospital admissions, +1,645 cases, +59 all-cause deaths.

Today’s weekly report says that last week (ending on 7 May) there were: +391 hospital admissions, +1,985 cases, +84 all-cause deaths.

Last week’s weekly report said that last week (ending on 7 May) there were: +331 hospital admissions, +1,987 cases, +59 all-cause deaths.

Charts

From the BC CDC situation report:

From the BC CDC weekly report:


20 May 16:44

my oven has wifi but it still forgets the time if the power goes out so you have to manually push this button in the app to grab it from the internet who even makes this stuff 🤦‍♀️ pic.twitter.com/jhYfvMBxF5

by Internet of Shit (internetofshit)
mkalus shared this story from internetofshit on Twitter.

my oven has wifi but it still forgets the time if the power goes out so you have to manually push this button in the app to grab it from the internet

who even makes this stuff 🤦‍♀️ pic.twitter.com/jhYfvMBxF5

226 likes, 30 retweets
20 May 16:44

Britain is being choked by the knotweed of Brexit lies

by Chris Grey
It is difficult to make sense of what Johnson’s Brexit government is doing, or trying to do, as regards the Northern Ireland Protocol (NIP). I discussed the background in last week’s post, much of which remains relevant, but since then there have been daily, almost hourly, contradictory signals and reports.

What has been confirmed is the recently trailed shift from threatening to invoke Article 16 of the NIP to the much more confrontational threat to pass UK legislation to supposedly unilaterally and permanently override much of the Protocol. This, in itself, is an implicit admission of the failure of the Article 16 threat tactic that has dominated the UK’s approach for well over a year. In truth, it was never viable because, whatever some Brexiters persuaded themselves to believe, it couldn’t do what they thought it would. Another delusion bites the dust, though it continues to be mentioned.

On the ‘legislation’ plan, having failed to act on the rumoured intention to include it in last week’s Queen’s Speech, it was then set to be announced this week by Liz Truss. This duly happened, on Tuesday, marking the end of any possibility that Truss would ‘re-set’ relations with the EU. But, again contrary to some other previous rumours, the legislation was not tabled and may not be until the summer. So there’s been some slight softening of stance over just a few days. And talks will continue with the EU, although it has previously rejected the substance of Truss’s outline proposals, and the atmosphere will be even more sour as a result of this new threat.

As presaged in my last post, this is a significant escalation but not a decisive moment. Next October is now being spoken of as the deadline for a resolution, its significance being that that is when new elections would have to be held in Northern Ireland if no government has been formed there. But we have seen such deadlines come and go before.

Meanwhile, on Monday, to coincide with a visit to Belfast, Boris Johnson published an article that was more serious and somewhat more emollient than anything he has said before, and he has generally seemed to downplay the significance of the proposed legislation, for example by referring to it as being concerned only with “some relatively minor barriers to trade”. So this seems like a ‘softer’ approach than Truss’s.

The government's tactics are as unclear as ever

Thus, beyond continuing the game of Tom Tiddler’s Ground that has been dragging on for months, it remains unclear what the government’s tactics are. Is the idea of the new threat to satisfy the DUP sufficiently for them to join the power-sharing executive in Northern Ireland? If so, it seems already to have failed since their leader, Jeffrey Donaldson, has suggested that only with the passing of the legislation will that happen.

Is the idea, as discussed in my previous post, to try to garner US support by tying this new approach so closely to upholding the Good Friday (Belfast) Agreement (GFA)? Certainly Truss made this central to her announcement, although border expert Professor Katy Hayward, writing in this morning’s Irish Times, argues strongly that the government’s present approach is actually “giving succour to those who want to destroy" the GFA. Moreover, Suella Braverman, the “stooge” Attorney General, has apparently made the “primordial significance” of the GFA key to her advice that the legislation would be legal, although several leading experts say this term is legally meaningless and that there is no hierarchy of treaties whereby the GFA would 'trump' the NIP. 

This marks a shift in tactics from early attempts to justify the UK position primarily in terms of the economic effects on Northern Ireland (presumably in part because Northern Ireland is actually doing better than the rest of the post-Brexit UK as a result of the NIP). It is also a shift from framing the legal justification in terms of the supposed priority of parliamentary sovereignty over international law in the way that Braverman absurdly advised over the Internal Market Bill (IMB).

Another possible reason for this focus on the GFA, apart from the US dimension, is illustrated by the revealing comment in an otherwise predictably fatuous article by David Davis this week (£) that it gives the UK “the moral high ground”, which I suppose is an implicit acknowledgment of the evident moral low ground of seeking to renege on an agreement the UK negotiated and signed only a couple of years ago. It is hardly compelling, though, since, like the government, it signally fails to answer the question of why the NIP is a threat to the GFA now, whereas at the time of signature Johnson declared the two agreements to be fully compatible. Nor does it explain why a consent mechanism was created, despite the opposition of unionists already being known, that didn’t require cross-community agreement which is now deemed inadequate because of … unionist opposition.

Is the idea to play ‘mind games’ with the EU to try to get maximum concessions, the kind of ‘madman’ negotiating approach which was attempted throughout both the trade negotiations and last year’s NIP negotiations? If so, the EU may have noticed that, ultimately, the government backed down on ‘no (trade) deal’, on the illegal IMB clauses, and on Article 16. But, in the nature of that approach, this time might be different. Or is the idea to present impossible demands to the EU knowing they won’t be met but then enabling the UK to scrap the NIP, blaming EU intransigence? To put it another way, is the UK using madman tactics, or is it actually mad?

Do the oscillations in the ‘hardness’ of tone reflect differences within the government between, especially, Johnson and Truss, as some reports have it (£)? Or is it Johnson’s own habitual dithering? Or does it (also) reflect blowing in the winds of, on the one hand, diplomatic pressure from the US and feared economic pressures from the EU, and, on the other, of contradictory political pressures from different groups of Tory MPs? Certainly, as so often throughout these Brexit years, negotiations with the EU seem almost secondary to internal negotiations within the Tory Party.

Within that, what is the significance of David Frost’s now constant background chorus of bellicose, and often frankly silly, rhetoric both in the UK (£) and in the US? Of course it may well reflect his ambitions for new political office, despite the failures, abundantly obvious to all but himself, of his previous ministerial career. But is he, as a result, also now taking a similar role to that played over the years by Nigel Farage (now rather silent about Brexit though, tellingly, he and Frost are now cosying up), pressuring the government from the sidelines to stay true to ‘pure Brexit’?

The government’s strategy is as unclear as ever

Along with, and plainly related to, these tactical questions are the bigger, strategic questions of what the government actually wants. Sometimes, Johnson and other ministers have talked as if the entirety of the NIP, and certainly its core provision of an Irish Sea border, is unacceptable in principle, and many Brexiters have called for it to be entirely scrapped because they have never accepted the need for any border at all. Yet at other times they talk as if what is at stake is only operational reforms to the practical application of the agreed deal, albeit going beyond what the EU has offered so far. In Truss’s proposed changes, this question is fudged: she says she does not want to scrap the Protocol, whilst seeking to change the operations so drastically that, in effect, it would amount to doing so.

One problem with this lack of strategic clarity is that, if the real aim is to gain fresh concessions, it reduces the incentive for the EU to offer them: why do so, if it will just lead to the demand for even more, or if the UK isn’t really seeking concessions but an excuse to collapse the whole agreement? But if the real aim is to collapse the agreement, then for how long can it be credible to keep making threats without carrying them out?

Perhaps an even bigger problem is that, despite what Brexiters think, the EU really isn’t pre-occupied with Brexit in the way that, at times, it once was. So even if there is some cunning British negotiating strategy designed to wrongfoot Brussels, it’s more likely to irritate and exasperate than produce sleepless nights. There’s certainly no longer any great interest in accommodating UK domestic politics. That largely ended when the Withdrawal Agreement was signed. Ironically, having left the EU, Brexiters and the Brexit government spend far more time and energy thinking and talking about the EU than the EU spends on Brexit or Britain. That lack of interest, quite as much as border checks, is part and parcel of what it means to be ‘a third country’.

There can be no clarity because there is no honesty

These and related issues can be endlessly debated. But I think the key to making sense of all of them is to recognize that the government can’t be clear about what it is doing because it can’t be honest about how it got to where it is now, and can’t be honest about where it wants to get to in the future. As was always likely, the skein of contradictory lies told over the last six years is thickening and spreading so as to overwhelm the entire post-Brexit polity. So, like knotweed, it is now choking the very Brexit it created and, with that, British politics more generally. That Northern Ireland should be the most visible manifestation of this is not surprising because, certainly since the announcement of ‘hard Brexit’ by Theresa May in her January 2017 Lancaster House speech, it has been at the epicentre of these contradictory lies.

First and foremost are all the lies of the referendum campaign and since, lies about what Brexit would (or would not) mean for Northern Ireland but, more broadly, about how it would be possible to have hard Brexit and yet have frictionless trade, or have ‘the exact same benefits’ as the single market and customs union membership yet without belonging to either. Perhaps most fundamentally, the lie was the idea that the UK could leave the EU and yet, in some ways, still be treated as if it had not. The implication was that Brexit would be a fundamental change, and yet many things would remain exactly the same, or at least could do, if only the EU did not want to ‘punish’ Britain, or was not ‘sulking’ about Brexit, accusations that have become articles of faith to Brexiters (as much as, once, it was an article of faith that ‘we hold all the cards’).

Johnson was the front man for all this and his “cakeism” precisely encapsulated its central ‘out and yet keep the benefits’ dishonesty. But it’s important to understand that he was not the architect of the lies, nor by any means their sole spokesperson. These were the lies of Brexit itself, and they are why the entirety of Brexit, and not just Brexit in Northern Ireland, is failing.

Johnson’s serial dishonesty

That said, it would be politically astute, and not unfair, for the Labour Party in particular to denounce what has happened as Johnson’s Brexit. To do so would certainly be more realistic and reasonable than the present approach of barely talking about it all. For, indeed, much of the current situation does bear the imprint of Johnson’s trademark dishonesty. For it partly arises from his invariable attempts to avoid hard realities and difficult decisions by lying, which have now caught up with him, badly.

Thus the reason he can’t be clear if the strategic aim is to scrap the NIP as unacceptable in principle is that he lied to the electorate in 2019 when he told them he had negotiated a great oven-ready deal. As a result, this week, under robust questioning, he had to say that he had signed the deal but had not anticipated that the EU would implement it in the way it did. However, because he can’t be clear if his aim is to scrap the NIP, he is unable to satisfy the DUP (and other unionist parties) or the ERG because he lied to them in promising that he would do so and, at least as regards the ERG, secured their support for his deal on that basis. For them, in principle and not just in practice, the NIP is unacceptable. So anything the EU might conceivably agree to will not satisfy the ERG, if only because they have become so extreme that the very fact of the EU agreeing it would be enough to damn it in their eyes. As Rafael Behr put it in a superb article this week, “there is no concession big enough, no deal good enough” for them.

Yet if Johnson could give these hardliners what they want, and somehow bamboozle voters into forgetting his electoral promises about his 2019 deal, he would face opposition from what’s left of the centrist or traditional right amongst Tory MPs. It’s clear that Theresa May – who no doubt also reflects on how Johnson’s disloyalty and dishonesty undermined her – and some other Tory MPs will oppose a move to break international law, as will many Tory Peers, just as they did the illegal clauses of the IMB. Some, it seems, may tolerate the threat (but not the passing) of legislation as a ‘negotiating ploy’ with the EU to obtain further flexibilities within the NIP, but that just brings back the same questions: is it such a ploy, or is there a real intention to unilaterally disapply the NIP? Johnson can’t tell them the truth of that, either.

The realities of power

Beyond these domestic considerations, if Johnson does satisfy the hardliners then he faces the formidable problem of EU economic power and US power full stop. Neither the old Brexit lie that ‘they need us more than we need them’, nor the wider Brexit fantasy of untrammelled national sovereignty, survive contact with the realities of those powers. As I said in my previous post, a full-on trade war is not in immediate prospect, not least as the whole process of passing, let alone using, legislation to disapply the NIP will take many months.

But there will be at least some EU retaliations if the hardline path continues, and ultimately it will become impossible to avoid the issues which in essence go right back to 2016-17 and the arguments about ‘sequencing’: the prior condition for a trade agreement with the EU was and is the Withdrawal Agreement, with its three planks of the financial settlement, citizens’ rights, and Northern Ireland. Hardline Brexiters conned themselves that there was no need to accept that, and still blame May for doing so. But the reality is that she did, and she did so because in reality she had to.

Thus, in the very final analysis, if the UK completely reneges on the NIP then the EU will, justifiably, regard the trade agreement as void, as already more than hinted at by Maros Sefcovic. Because all the Brexit lies of 2016 are still lies now. If anything, ‘German car makers’ are even less likely to care than they did in 2016, and the UK is even less well placed to survive ‘on WTO terms’ than it was when ‘no deal Brexit’ was in prospect. But, again, Johnson and his fellow Brexiters are not able to admit those truths, either to themselves or the electorate.

So although this will all drag on for a long time yet, what is happening is that Johnson’s reported psychological desire to be liked by everyone, his political modus operandi of telling different lies to please different audiences, his predilection to defer making decisions, and his basic ‘cakeist’ refusal to accept that decisions really need to be made are all, finally, catching up with him. You can’t fool all of the people all of the time. Now, no one believes him, and for this lack of trust, at least, Labour do seem willing to criticize Johnson’s Brexit policy. However, to re-iterate, Johnson’s dishonesty and untrustworthiness, whilst important, have exacerbated rather than created the dishonesty inherent in Brexit.

The most fundamental problem: Brexit itself

One fundamental part of that dishonesty is the avoidance of the paradoxical question: how can you have a border without having a border? That question is central to the running sore of the NIP, but is also implicated in the knots the government is tied in over EU-GB import controls, over conformity assessment marking, and over regulatory duplication/divergence (£)*. Of course one might say that resolving irreconcilable, or even just bitterly contested, issues is the stuff of politics. The peace process and GFA actually provide a good example. However, the politics of Brexit never even attempted the kind of process in which irreconcilabilities and bitter oppositions could be fudged into some kind of workable, durable consensus.

That would have entailed acknowledging the closeness of the vote, the variety of views amongst Brexiters about how it should be done, and the fact that two out of the four nations had voted to remain, as well as the implications for the GFA. This may seem like pointless jobbing back, but it’s crucial to understand that it lies at the heart of the rolling political crisis we have been in since 2016. And the reason is precisely because the realities and the various trade-offs were never honestly admitted, and, worse, that even to try to do so was dishonestly dismissed as undemocratic. That honesty still eludes the British polity, whilst the dishonesty still haunts it.

It has recently become fashionable to say that May’s deal did, if belatedly, face up to the realities and trade-offs but this isn’t really true, or at best it’s only partially true. It is the case that her backstop did so, but the deal was still dishonest because it pretended that this backstop might never need to be used if the subsequent trade agreement was sufficiently deep, or if ‘alternative arrangements’ that would enable a fully open border were to be developed. However, since leaving both the single market and customs union were already red lines, there was no prospect at all of the future trade agreement avoiding the backstop whilst – as realists always said, and has been seen subsequently to be true – there are no ‘alternative arrangements’ sufficient to have replaced the backstop.

If May’s deal is considered honest, it is only by comparison with the infinitely more dishonest deal that Johnson did. But both were dishonest to a degree. May’s by agreeing to what was ostensibly a temporary ‘backstop’ but was actually a permanent ‘frontstop’; Johnson’s by agreeing to what was actually a permanent ‘frontstop’ whilst intending to treat it as a temporary bridge to a much more minimal, or non-existent, border.

A polity choked by lies

The reality is that the only way Brexit could fully be squared with the Northern Ireland situation (and also the only way it could be done without huge economic costs) was through single market membership (perhaps via EFTA) along with a UK-EU customs treaty. Alternatively, within hard Brexit, a deeper trade agreement would – and still could – have allowed a thinner Irish Sea border (and a thinner GB-EU border generally), as would an agreement on dynamic alignment of Sanitary and Phyto-sanitary regulations. Instead of accepting these realities, Brexiters have spent six years lying that there can be a border without having a border and, unsurprisingly, failing to achieve that outcome. In the latest developments, the government is again trying to enact that lie, and it is still failing. And it will inevitably go on failing until reality is accepted (or, perhaps, until Northern Ireland leaves the UK).

The swirl of the current, sometimes confusing, events only postpones facing up to reality, and leaves the country in the limbo which, in one form or another, we have been in since 2016. And it is not cost-free. The general economic damage of hard Brexit is now self-evident except to those who will always deny it. This latest NIP row is also economically damaging, especially to investment (a country on course to a possible trade war with its biggest trade partner isn’t very attractive), as well as being damaging to the fabric of Northern Irish society. It is also damaging to the UK’s international reputation even to be making, yet again, these threats to international law and it is obviously damaging to the UK’s strategic interests in strong, harmonious relationships with the EU and US, especially given the Ukraine war.

But our politics is stuck. It can neither admit reality nor entirely deny it. The consequences of Brexit, including for Northern Ireland, are undeniable. Yet their causes, namely the lies inherent in Brexit, are barely discussable, at least in England. The Tories are too invested in Brexit to be honest about it, and Labour are too scared by Brexit to be vocal about it. So we stagger on, choked by the same old lies, and daily adding new ones, a spreading knotweed first suffocating all other plants in the garden and then undermining the very foundations of the country that used to be our home.

  

*These are all ultimately border questions because hard Brexit has created both a regulatory and a customs border with the EU, so even if not necessarily about what happens ‘at the border’ they all relate to the territory over or within which something (e.g. regulation, conformity assessment, data sharing, tariffs, origin of goods components, recognition of qualifications, validity of passports) applies or happens.

Due to other commitments, I don’t expect to have time to post again until Friday 10 June.

19 May 20:16

Optimizing PNGs in GitHub Actions using Oxipng

by Simon Willison

My datasette-screenshots repository generates screenshots of Datasette using my shot-scraper tool, for people who need them for articles or similar.

Jacob Weisz suggested optimizing these images as they were quite big. I want them to be as high quality as possible (I even take them using --retina mode), but that didn't mean I couldn't use lossless compression on them.

I often use squoosh.app to run a version of Oxipng compiled to WebAssembly in my browser. I decided to figure out how to run that same program in GitHub Actions.

Installing Rust apps using Cargo

Surprisingly there isn't yet a packaged version of Oxipng for Ubuntu - so I needed another way of installing it.

The project README suggests installing it using cargo install oxipng.

I used the tmate trick to try that out in a GitHub Actions worker - the cargo command is available by default but it took over a minute to fetch and compile all of the dependencies.

I didn't want to do this on every run, so I looked into ways to cache the built program. Thankfully the actions/cache action documents how to use it with Rust.

The full recipe for installing Oxipng in GitHub Actions looks like this:

    - name: Cache Oxipng
      uses: actions/cache@v3
      with:
        path: ~/.cargo/
        key: ${{ runner.os }}-cargo
    - name: Install Oxipng
      run: |
        cargo install oxipng

The first time the action runs it does a full compile of Oxipng - but on subsequent runs this gets output instead:

    Updating crates.io index
     Ignored package `oxipng v5.0.1` is already installed, use --force to override

Running Oxipng in an Action

All of the PNGs that I wanted to optimize were in the root of my checkout, so I added this step:

    - name: Optimize PNGs
      run: |-
        oxipng -o 4 -i 0 --strip safe *.png

The -o 4 is the highest recommended level of optimization.

-i 0 causes it to remove interlacing - "Interlacing can add 25-50% to the size of an optimized image" according to the README.

--strip safe strips out any image metadata that is guaranteed not to affect how the image is rendered.

Oxipng updates the specified images in place, hence the *.png at the end.

Testing this in a branch first

I tested this all in a branch first so that I could see if it was working correctly.

Since my workflow usually pushes any changed files back to the same GitHub repository, I added a check to that step which caused it to only run on pushes to the main branch:

    - name: Commit and push
      if: github.ref == 'refs/heads/main'
      run: |-
        git config user.name "Automated"
        ...

But I wanted to preview the generated images - so I added this step in the branch to save them to an artifact zip file that I could then inspect:

    - name: Artifacts
      uses: actions/upload-artifact@v3
      with:
        name: screenshots
        path: "*.png"

Once I got it working, I squash-merged this pull request back into main.

The result

Oxipng worked really well!

It reduced the size of all three of my screenshots.

This commit shows the difference and lets you compare both images.

19 May 20:15

Container Vulnerabilities

by Martin

A lot of today’s services that run on servers do so in containers, either in small setups that use Docker, for example, or in Kubernetes clusters for larger deployments. By design, containers encapsulate an application, so threads in a container can’t modify anything on the host computer that is not specifically attached to the container. Also, threads running in containers can’t see what’s going on outside or what is going on in other containers. So how can programs break out of containers? The answer: If they are able to gain root rights.

The big question, then, is how programs running inside a container can obtain root rights. I hadn’t paid a lot of attention to this until the “Dirty Pipe” kernel bug was disclosed. When reading about the issue, it became immediately apparent to me how it could be used to break out of a container. Have a look at this link that I googled later for some details.

So what could an attack vector that uses such a vulnerability look like? Let’s say there’s a web server running in the container and a PHP application that can be tricked into opening a reverse shell to the attacker. There seem to be plenty of vulnerabilities around that achieve this on unpatched systems. By itself, getting a reverse shell would ‘only’ compromise a single container and the data that is accessible to it. Bad enough. An attacker could then perhaps try to move laterally, since containers usually talk to other containers, and try to exploit security issues in the software running in other containers from the inside.

But with a reverse shell and a “Dirty Pipe” chain, the attacker could gain root and break out of the container environment into the host. So long story short: When running containers, have a close look at kernel vulnerabilities, and patch the host system immediately.

19 May 19:38

Felienne Hermans on Naming Things

Felienne Hermans: "How patterns in variable names can make code easier to read." Felienne is an Associate Professor at Leiden University in the Netherlands.

Felienne Hermans

slides | transcript (English) | traducción (Español)

19 May 19:34

The banks collapsed in 2008 – and our food system is about to do the same | George Monbiot

mkalus shared this story from The Guardian.

For the past few years, scientists have been frantically sounding an alarm that governments refuse to hear: the global food system is beginning to look like the global financial system in the run-up to 2008.

While financial collapse would have been devastating to human welfare, food system collapse doesn’t bear thinking about. Yet the evidence that something is going badly wrong has been escalating rapidly. The current surge in food prices looks like the latest sign of systemic instability.

Many people assume that the food crisis was caused by a combination of the pandemic and the invasion of Ukraine. While these are important factors, they aggravate an underlying problem. For years, it looked as if hunger was heading for extinction. The number of undernourished people fell from 811 million in 2005 to 607 million in 2014. But in 2015, the trend began to turn. Hunger has been rising ever since: to 650 million in 2019, and back to 811 million in 2020. This year is likely to be much worse.

Now brace yourself for the really bad news: this has happened at a time of great abundance. Global food production has been rising steadily for more than half a century, comfortably beating population growth. Last year, the global wheat harvest was bigger than ever. Astoundingly, the number of undernourished people began to rise just as world food prices began to fall. In 2014, when fewer people were hungry than at any time since, the global food price index stood at 115 points. In 2015, it fell to 93, and remained below 100 until 2021.

Only in the past two years has it surged. The rise in food prices is now a major driver of inflation, which reached 9% in the UK last month. Food is becoming unaffordable even to many people in rich nations. The impact in poorer countries is much worse.

So what has been going on? Well, global food, like global finance, is a complex system that develops spontaneously from billions of interactions. Complex systems have counterintuitive properties. They are resilient under certain conditions, as their self-organising properties stabilise them. But as stress escalates, these same properties start transmitting shocks through the network. Beyond a certain point, a small disturbance can tip the entire system over its critical threshold, whereupon it collapses, suddenly and unstoppably.

We now know enough about systems to predict whether they might be resilient or fragile. Scientists represent complex systems as a mesh of nodes and links. The nodes are like the knots in an old-fashioned net; the links are the strings that connect them. In the food system, the nodes include the corporations trading grain, seed and farm chemicals, the major exporters and importers and the ports through which food passes. The links are their commercial and institutional relationships.

If the nodes behave in a variety of ways, and their links to each other are weak, the system is likely to be resilient. If certain nodes become dominant, start to behave in similar ways and are strongly connected, the system is likely to be fragile. In the approach to the 2008 crisis, the big banks developed similar strategies and similar ways of managing risk, as they pursued the same sources of profit. They became strongly linked to each other in ways that regulators scarcely understood. When Lehman Brothers failed, it threatened to pull everyone down.

So here’s what sends cold fear through those who study the global food system. In recent years, just as in finance during the 2000s, key nodes in the food system have swollen, their links have become stronger, business strategies have converged and synchronised, and the features that might impede systemic collapse (“redundancy”, “modularity”, “circuit breakers” and “backup systems”) have been stripped away, exposing the system to “globally contagious” shocks.

On one estimate, just four corporations control 90% of the global grain trade. The same corporations have been buying into seed, chemicals, processing, packing, distribution and retail. In the course of 18 years, the number of trade connections between the exporters and importers of wheat and rice doubled. Nations are now polarising into super-importers and super-exporters. Much of this trade passes through vulnerable chokepoints, such as the Turkish Straits (now obstructed by Russia’s invasion of Ukraine), the Suez and Panama canals and the Straits of Hormuz, Bab-el-Mandeb and Malacca.

One of the fastest cultural shifts in human history is the convergence towards a “Global Standard Diet”. While our food has become locally more diverse, globally it has become less diverse. Just four crops – wheat, rice, maize and soy – account for almost 60% of the calories grown by farmers. Their production is now highly concentrated in a handful of nations, including Russia and Ukraine. The Global Standard Diet is grown by the Global Standard Farm, supplied by the same corporations with the same packages of seed, chemicals and machinery, and vulnerable to the same environmental shocks.

The food industry is becoming tightly coupled to the financial sector, increasing what scientists call the “network density” of the system, making it more susceptible to cascading failure. Around the world, trade barriers have come down and roads and ports upgraded, streamlining the global network. You might imagine that this smooth system would enhance food security. But it has allowed companies to shed the costs of warehousing and inventories, switching from stocks to flows. Mostly, this just-in-time strategy works. But if deliveries are interrupted or there’s a rapid surge in demand, shelves can suddenly empty.

A paper in Nature Sustainability reports that in the food system, “shock frequency has increased through time on land and sea at a global scale”. In researching my book Regenesis, I came to realise that it’s this escalating series of contagious shocks, exacerbated by financial speculation, that has been driving global hunger.

Now the global food system must survive not only its internal frailties, but also environmental and political disruptions that might interact with each other. To give a current example, in mid-April, the Indian government suggested that it could make up the shortfall in global food exports caused by Russia’s invasion of Ukraine. Just a month later, it banned exports of wheat, after crops shrivelled in a devastating heatwave.

We urgently need to diversify global food production, both geographically and in terms of crops and farming techniques. We need to break the grip of massive corporations and financial speculators. We need to create backup systems, producing food by entirely different means. We need to introduce spare capacity into a system threatened by its own efficiencies.

If so many can go hungry at a time of unprecedented bounty, the consequences of the major crop failure that environmental breakdown could cause defy imagination. The system has to change.

  • George Monbiot is a Guardian columnist

  • George Monbiot will discuss his new book, Regenesis, at a Guardian Live event on Monday 30 May. Book tickets in-person or online here

19 May 19:34

(via Opinion | The Covid Pandemic Still Isn’t Over. So What Now?...

19 May 19:22

Rust: A Critical Retrospective

by bunnie

Since I was unable to travel for a couple of years during the pandemic, I decided to take my new-found time and really lean into Rust. After writing over 100k lines of Rust code, I think I am starting to get a feel for the language, and like every cranky engineer I have developed opinions and, because this is the Internet, I’m going to share them.

The reason I learned Rust was to flesh out parts of the Xous OS written by Xobs. Xous is a microkernel message-passing OS written in pure Rust. Its closest relative is probably QNX. Xous is written for lightweight (IoT/embedded scale) security-first platforms like Precursor that support an MMU for hardware-enforced, page-level memory protection.

In the past year, we’ve managed to add a lot of features to the OS: networking (TCP/UDP/DNS), middleware graphics abstractions for modals and multi-lingual text, storage (in the form of an encrypted, plausibly deniable database called the PDDB), trusted boot, and a key management library with self-provisioning and sealing properties.

One of the reasons why we decided to write our own OS instead of using an existing implementation such as SeL4, Tock, QNX, or Linux, was that we wanted to really understand what every line of code was doing in our device. For Linux in particular, its source code base is so huge and so dynamic that even though it is open source, you can’t possibly audit every line in the kernel. Code changes are happening at a pace faster than any individual can audit. Thus, in addition to being home-grown, Xous is also very narrowly scoped to support just our platform, to keep as much unnecessary complexity out of the kernel as possible.

Being narrowly scoped means we could also take full advantage of having our CPU run in an FPGA. Thus, Xous targets an unusual RV32-IMAC configuration: one with an MMU + AES extensions. It’s 2022 after all, and transistors are cheap: why don’t all our microcontrollers feature page-level memory protection like their desktop counterparts? Being an FPGA also means we have the ability to fix API bugs at the hardware level, leaving the kernel more streamlined and simplified. This was especially relevant in working through abstraction-busting processes like suspend and resume from RAM. But that’s all for another post: this one is about Rust itself, and how it served as a systems programming language for Xous.

Rust: What Was Sold To Me

Back when we started Xous, we had a look at a broad number of systems programming languages and Rust stood out. Even though its `no-std` support was then-nascent, it was a strongly-typed, memory-safe language with good tooling and a burgeoning ecosystem. I’m personally a huge fan of strongly typed languages, and memory safety is good not just for systems programming, it enables optimizers to do a better job of generating code, plus it makes concurrency less scary. I actually wished for Precursor to have a CPU that had hardware support for tagged pointers and memory capabilities, similar to what was done on CHERI, but after some discussions with the team doing CHERI it was apparent they were very focused on making C better and didn’t have the bandwidth to support Rust (although that may be changing). In the grand scheme of things, C needed CHERI much more than Rust needed CHERI, so that’s a fair prioritization of resources. However, I’m a fan of belt-and-suspenders for security, so I’m still hopeful that someday hardware-enforced fat pointers will make their way into Rust.

That being said, I wasn’t going to go back to the C camp simply to kick the tires on a hardware retrofit that backfills just one poor aspect of C. The glossy brochure for Rust also advertised its ability to prevent bugs before they happened through its strict “borrow checker”. Furthermore, its release philosophy is supposed to avoid what I call “the problem with Python”: your code stops working if you don’t actively keep up with the latest version of the language. Also unlike Python, Rust is not inherently unhygienic, in that the advertised way to install packages is not also the wrong way to install packages. Contrast to Python, where the official docs on packages lead you to add them to system environment, only to be scolded by Python elders with a “but of course you should be using a venv/virtualenv/conda/pipenv/…, everyone knows that”. My experience with Python would have been so much better if this detail was not relegated to Chapter 12 of 16 in the official tutorial. Rust is also supposed to be better than e.g. Node at avoiding the “oops I deleted the Internet” problem when someone unpublishes a popular package, at least if you use fully specified semantic versions for your packages.

In the long term, the philosophy behind Xous is that eventually it should “get good enough”, at which point we should stop futzing with it. I believe it is the mission of engineers to eventually engineer themselves out of a job: systems should get stable and solid enough that it “just works”, with no caveats. Any additional engineering beyond that point only adds bugs or bloat. Rust’s philosophy of “stable is forever” and promising to never break backward-compatibility is very well-aligned from the point of view of getting Xous so polished that I’m no longer needed as an engineer, thus enabling me to spend more of my time and focus supporting users and their applications.

The Rough Edges of Rust

There’s already a plethora of love letters to Rust on the Internet, so I’m going to start by enumerating some of the shortcomings I’ve encountered.

“Line Noise” Syntax

This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read, like trying to read the output of a UART with line noise:

Trying::to_read::<&'a heavy>(syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;

In more plain terms, the line above does something like invoke a method called “to_read” on the object (actually `struct`) “Trying” with a type annotation of “&heavy” and a lifetime of ‘a with the parameters of “syntax” and a closure taking a generic argument of “like” calling the can_be() method on another instance of a structure named “this” with the parameter “maddening” with any non-error return values mapped to the Rust unit type “()” and errors unwrapped and kicked back up to the caller’s scope.

Deep breath. Surely, I got some of this wrong, but you get the idea of how dense this syntax can be.

And then on top of that you can layer macros and directives which don’t have to follow other Rust syntax rules. For example, if you want to have conditionally compiled code, you use a directive like

#[cfg(all(not(baremetal), any(feature = "hazmat", feature = "debug_print")))]

Which says: if either the “hazmat” or “debug_print” feature is enabled and you’re not running on bare metal, use the block of code below (and I surely got this wrong too). The most confusing part about this syntax to me is the use of a single “=” to denote equivalence and not assignment, because stuff in config directives isn’t Rust code. It’s like a whole separate meta-language with a dictionary of key/value pairs that you query.
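
To make the directive concrete, here is a minimal sketch with an invented function name (the feature names are the ones from the directive above, and "baremetal" is assumed to be a custom cfg flag set by the build). The first definition is compiled only when you are not on bare metal and at least one of the two features is enabled; the second, with the condition inverted, keeps callers compiling in every other configuration.

    #[cfg(all(not(baremetal), any(feature = "hazmat", feature = "debug_print")))]
    fn dump_debug_state() {
        // hosted debug build: actually print something
        println!("hosted debug build: dumping state");
    }

    #[cfg(not(all(not(baremetal), any(feature = "hazmat", feature = "debug_print"))))]
    fn dump_debug_state() {
        // every other configuration: compiled down to a no-op
    }

    fn main() {
        dump_debug_state();
    }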

I’m not even going to get into the unreadability of Rust macros – even after having written a few Rust macros myself, I have to admit that I feel like they “just barely work” and probably thar be dragons somewhere in them. This isn’t how you’re supposed to feel in a language that bills itself to be reliable. Yes, it is my fault for not being smart enough to parse the language’s syntax, but also, I do have other things to do with my life, like build hardware.

Anyways, this is a superficial complaint. As time passed I eventually got over the learning curve and became more comfortable with it, but it was a hard, steep curve to climb. This is in part because all the Rust documentation is either written in eli5 style (good luck figuring out “feature”s from that example), or you’re greeted with a formal syntax definition (technically, everything you need to know to define a “feature” is in there, but nowhere is it summarized in plain English), and nothing in between.

To be clear, I have a lot of sympathy for how hard it is to write good documentation, so this is not a dig at the people who worked so hard to write so much excellent documentation on the language. I genuinely appreciate the general quality and fecundity of the documentation ecosystem.

Rust just has a steep learning curve in terms of syntax (at least for me).

Rust Is Powerful, but It Is Not Simple

Rust is powerful. I appreciate that it has a standard library which features HashMaps, Vecs, and Threads. These data structures are delicious and addictive. Once we got `std` support in Xous, there was no going back. Coming from a background of C and assembly, Rust’s standard library feels rich and usable — I have read some criticisms that it lacks features, but for my purposes it really hits a sweet spot.

That being said, my addiction to the Rust `std` library has not done any favors in terms of building an auditable code base. One of the criticisms I used to level at Linux was along the lines of “holy cow, the kernel source includes things like an implementation of red-black trees, how is anyone going to audit that?”

Now, having written an OS, I have a deep appreciation for how essential these rich, dynamic data structures are. However, the fact that Xous doesn’t include an implementation of HashMap within its repository doesn’t mean that we are any simpler than Linux: indeed, we have just swept a huge pile of code under the rug; just the `collections` portion of the standard library represents about 10k+ SLOC of very high complexity.

So, while Rust’s `std` library allows the Xous code base to focus on being a kernel and not also be its own standard library, from the standpoint of building a minimum attack-surface, “fully-auditable by one human” codebase, I think our reliance on Rust’s `std` library means we fail on that objective, especially so long as we continue to track the latest release of Rust (and I’ll get into why we have to in the next section).

Ideally, at some point, things “settle down” enough that we can stick a fork in it and call it done by, well, forking the Rust repo and saying “this is our attack surface, and we’re not going to change it”. Even then, the Rust `std` repo dwarfs the Xous repo by several multiples in size, and that’s not counting the complexity of the compiler itself.

Rust Isn’t Finished

This next point dovetails into why Rust is not yet suitable for a fully auditable kernel: the language isn’t finished. For example, while we were coding Xous, a thing called `const generics` was introduced. Before this, Rust had no native ability to deal with arrays bigger than 32 elements! This limitation is a bit maddening, and even today there are shortcomings such as the `Default` trait being unable to initialize arrays larger than 32 elements. This friction led us to put limits on many things at 32 elements: for example, when we pass the results of an SSID scan between processes, the structure only reserves space for up to 32 results, because the friction of going to a larger, more generic structure just isn’t worth it. That’s a language-level limitation directly driving a user-facing feature.
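To make the shift concrete, here’s a minimal sketch of the kind of thing const generics unlock – this is illustrative only, not Xous code, and the function name is made up. A single definition now covers an array of any compile-time length, where previously lengths beyond 32 forced either macros or fixed-size workarounds:

// One definition covers arrays of any length N, known at compile time.
fn checksum<const N: usize>(data: [u8; N]) -> u8 {
    data.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
}

fn main() {
    let small = [0xAAu8; 16];
    let large = [0x55u8; 64]; // lengths beyond 32 are no longer a special case
    println!("{:02x} {:02x}", checksum(small), checksum(large));
}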

Also over the course of writing Xous, things like in-line assembly and workspaces finally reached maturity, which means we need to go back and revisit some unholy things we did to get those critical few lines of initial boot code, written in assembly, integrated into our build system.

I often ask myself “when is the point we’ll get off the Rust release train”, and the answer I think is when they finally make “alloc” no longer a nightly API. At the moment, `no-std` targets have no access to the heap, unless they hop on the “nightly” train, in which case you’re back into the Python-esque nightmare of your code routinely breaking with language releases.

We definitely gave writing an OS in `no-std` + stable a fair shake. The first year of Xous development was all done using `no-std`, at a cost in memory space and complexity. It’s possible to write an OS with nothing but pre-allocated, statically sized data structures, but we had to accommodate the worst-case number of elements in all situations, leading to bloat. Plus, we had to roll a lot of our own core data structures.

About a year ago, that all changed when Xobs ported Rust’s `std` library to Xous. This means we are able to access the heap in stable Rust, but it comes at a price: now Xous is tied to a particular version of Rust, because each version of Rust has its own unique version of `std` packaged with it. This version tie is for a good reason: `std` is where the sausage gets made of turning fundamentally `unsafe` hardware constructions such as memory allocation and thread creation into “safe” Rust structures. (Also fun fact I recently learned: Rust doesn’t have a native allocator for most targets – it simply punts to the native libc `malloc()` and `free()` functions!) In other words, Rust is able to make a strong guarantee about the stable release train not breaking old features in part because of all the loose ends swept into `std`.
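As a contrived sketch of what I mean by the sausage-making (this is not how `std` is actually written, just an illustration): the raw, `unsafe` allocation at the top of this example is roughly the kind of work that a safe `Box::new()` hides behind the library boundary.

use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // What the standard library has to deal with: raw allocation is unsafe,
    // can fail, and must be manually freed with the matching layout.
    let layout = Layout::new::<u64>();
    unsafe {
        let p = alloc(layout) as *mut u64;
        assert!(!p.is_null(), "allocation failed");
        p.write(42);
        println!("raw: {}", p.read());
        dealloc(p as *mut u8, layout);
    }

    // What the rest of us get to write: the same allocation, wrapped in a
    // safe API that cleans up after itself when it goes out of scope.
    let boxed = Box::new(42u64);
    println!("boxed: {}", boxed);
}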

I have to keep reminding myself that having `std` doesn’t eliminate the risk of severe security bugs in critical code – it merely shuffles a lot of critical code out of sight, into a standard library. Yes, it is maintained by a talented group of dedicated programmers who are smarter than me, but in the end, we are all only human, and we are all fair targets for software supply chain exploits.

Rust has a clockwork release schedule – every six weeks, it pushes a new version. And because our fork of `std` is tied to a particular version of Rust, it means every six weeks, Xobs has the thankless task of updating our fork and building a new `std` release for it (we’re not a first-class platform in Rust, which means we have to maintain our own `std` library). This means we likewise force all Xous developers to run `rustup update` on their toolchains so we can retain compatibility with the language.

This probably isn’t sustainable. Eventually, we need to lock down the code base, but I don’t have a clear exit strategy for this. Maybe the next point at which we can consider going back to `no-std` is when we can get the stable `alloc` feature, which allows us to have access to the heap again. We could then decouple Xous from the Rust release train, but we’d still need to backfill features such as Vec, HashMap, Thread, and Arc/Mutex/Rc/RefCell/Box constructs that enable Xous to be efficiently coded.

Unfortunately, the `alloc` crate is very hard, and has been in development for many years now. That being said, I really appreciate the transparency of Rust behind the development of this feature, and the hard work and thoughtfulness that is being put into stabilizing this feature.

Rust Has A Limited View of Supply Chain Security

I think this position is summarized well by the installation method recommended by the rustup.rs installation page:

`curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`

“Hi, run this shell script from a random server on your machine.”

To be fair, you can download the script and inspect it before you run it, which is much better than e.g. the Windows .MSI installers for vscode. However, this practice pervades the entire build ecosystem: a stub of code called `build.rs` is potentially compiled and executed whenever you pull in a new crate from crates.io. This, along with “loose” version pinning (you can specify a version to be, for example, simply “2”, which means you’ll grab whatever the latest published version with a major rev of 2 happens to be), makes me uneasy about the possibility of software supply chain attacks launched through the crates.io ecosystem.
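To make the concern concrete, here’s a deliberately bland sketch of a `build.rs` – hypothetical, not taken from any real crate. Cargo compiles and runs this program on your machine, with your user’s privileges, before the crate itself is built; a malicious version could just as easily read credentials or phone home. (OUT_DIR is the environment variable Cargo provides to build scripts.)

// build.rs
use std::{env, fs, path::Path};

fn main() {
    // Build scripts can read the environment...
    let out_dir = env::var("OUT_DIR").expect("cargo sets OUT_DIR for build scripts");
    // ...run arbitrary code, and write files that get compiled into the crate.
    let dest = Path::new(&out_dir).join("generated.rs");
    fs::write(&dest, "pub const GENERATED_AT_BUILD_TIME: bool = true;\n")
        .expect("failed to write generated source");
}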

Crates.io is also subject to a kind of typo-squatting, where it’s hard to determine which crates are “good” or “bad”; some crates that are named exactly what you want turn out to just be old or abandoned early attempts at giving you the functionality you wanted, and the more popular, actively-maintained crates have to take on less intuitive names, sometimes differing by just a character or two from others (to be fair, this is not a problem unique to Rust’s package management system).

There’s also the fact that dependencies are chained – when you pull in one thing from crates.io, you also pull in all of that crate’s subordinate dependencies, along with all their build.rs scripts that will eventually get run on your machine. Thus, it is not sufficient to simply audit the crates explicitly specified within your Cargo.toml file — you must also audit all of the dependent crates for potential supply chain attacks as well.

Fortunately, Rust does allow you to pin a crate at a particular version using the `Cargo.lock` file, and you can fully specify a dependent crate down to the minor revision. We try to mitigate this in Xous by having a policy of publishing our Cargo.lock file and specifying all of our first-order dependent crates to the minor revision. We have also vendored in or forked certain crates that would otherwise grow our dependency tree without much benefit.

That being said, much of our debug and test framework relies on some rather fancy and complicated crates that pull in a huge number of dependencies, and much to my chagrin even when I try to run a build just for our target hardware, the dependent crates for running simulations on the host computer are still pulled in and the build.rs scripts are at least built, if not run.

In response to this, I wrote a small tool called `crate-scraper` which downloads the source package for every source specified in our Cargo.toml file, and stores them locally so we can have a snapshot of the code used to build a Xous release. It also runs a quick “analysis” in that it searches for files called build.rs and collates them into a single file so I can more quickly grep through to look for obvious problems. Of course, manual review isn’t a practical way to detect cleverly disguised malware embedded within the build.rs files, but it at least gives me a sense of the scale of the attack surface we’re dealing with — and it is breathtaking, about 5700 lines of code from various third parties that manipulates files, directories, and environment variables, and runs other programs on my machine every time I do a build.
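For the curious, the collation step is conceptually as simple as the sketch below. This is not the actual crate-scraper code – the vendor/ directory and the output filename are made up – it just illustrates the idea of sweeping every build.rs into one reviewable file, assuming the crate sources have already been downloaded locally.

use std::{fs, io::Write, path::Path};

// Walk a directory tree of downloaded crate sources and append every build.rs
// found into a single file, so the whole attack surface can be skimmed at once.
fn collect(dir: &Path, out: &mut fs::File) -> std::io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect(&path, out)?;
        } else if path.file_name().and_then(|n| n.to_str()) == Some("build.rs") {
            writeln!(out, "// ===== {} =====", path.display())?;
            out.write_all(&fs::read(&path)?)?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let mut out = fs::File::create("builds.rs.collated")?;
    collect(Path::new("vendor"), &mut out)
}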

I’m not sure if there is even a good solution to this problem, but, if you are super-paranoid and your goal is to be able to build trustable firmware, be wary of Rust’s expansive software supply chain attack surface!

You Can’t Reproduce Someone Else’s Rust Build

A final nit I have about Rust is that builds are not reproducible between different computers (they are at least reproducible between builds on the same machine if we disable the embedded timestamp that I put into Xous for $reasons).

I think this is primarily because Rust pulls in the full path to the source code as part of the panic and debug strings that are built into the binary. This has led to uncomfortable situations where we have had builds that worked on Windows, but failed under Linux, because our path names are very different lengths on the two platforms, which would cause some memory objects to be shifted around in target memory. To be fair, those failures were all due to bugs we had in Xous, which have since been fixed. But, it just doesn’t feel good to know that we’re eventually going to have users who report bugs to us that we can’t reproduce because they have a different path on their build system compared to ours. It’s also a problem for users who want to audit our releases by building their own version and comparing the hashes against ours.

There are some bugs open with the Rust maintainers to address reproducible builds, but given the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix to this would be to re-configure our build system to run in some sort of a chroot environment or a virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say “almost anyone else” because this fix would be OS-dependent, so we’d be able to get reproducible builds under, for example, Linux, but it would not help Windows users where chroot environments are not a thing.

Where Rust Exceeded Expectations

Despite all the gripes laid out here, I think if I had to do it all over again, Rust would still be a very strong contender for the language I’d use for Xous. I’ve done major projects in C, Python, and Java, and all of them eventually suffer from “creeping technical debt” (there’s probably a software engineer term for this, I just don’t know it). The problem often starts with some data structure that I couldn’t quite get right on the first pass, because I didn’t yet know how the system would come together; so in order to figure out how the system comes together, I’d cobble together some code using a half-baked data structure.

Thus begins the descent into chaos: once I get an idea of how things work, I go back and revise the data structure, but now something breaks elsewhere that was unsuspected and subtle. Maybe it’s an off-by-one problem, or the polarity of a sign seems reversed. Maybe it’s a slight race condition that’s hard to tease out. Nevermind, I can patch over this by changing a <= to a <, or fixing the sign, or adding a lock: I’m still fleshing out the system and getting an idea of the entire structure. Eventually, these little hacks tend to metastasize into a cancer that reaches into every dependent module because the whole reason things even worked was because of the “cheat”; when I go back to excise the hack, I eventually conclude it’s not worth the effort and so the next best option is to burn the whole thing down and rewrite it…but unfortunately, we’re already behind schedule and over budget so the re-write never happens, and the hack lives on.

Rust is a difficult language for authoring code because it makes these “cheats” hard – as long as you have the discipline of not using “unsafe” constructions to make cheats easy. However, really hard does not mean impossible – there were definitely some cheats that got swept under the rug during the construction of Xous.

This is where Rust really exceeded expectations for me. The language’s structure and tooling was very good at hunting down these cheats and refactoring the code base, thus curing the cancer without killing the patient, so to speak. This is the point at which Rust’s very strict typing and borrow checker converts from a productivity liability into a productivity asset.

I liken it to replacing a cable in a complicated bundle of cables that runs across a building. In Rust, it’s guaranteed that every strand of wire in a cable chase, no matter how complicated and awful the bundle becomes, is separable and clearly labeled on both ends. Thus, you can always “pull on one end” and see where the other ends are by changing the type of an element in a structure, or the return type of a method. In less strictly typed languages, you don’t get this property; the cables are allowed to merge and affect each other somewhere inside the cable chase, so you’re left “buzzing out” each cable with manual tests after making a change. Even then, you’re never quite sure if the thing you replaced is going to lead to the coffee maker switching off when someone turns on the bathroom lights.

Here’s a direct example of Rust’s refactoring abilities in action in the context of Xous. I had a problem in the way trust levels are handled inside our graphics subsystem, which I call the GAM (Graphical Abstraction Manager). Each Canvas in the system gets a `u8` assigned to it that is a trust level. When I started writing the GAM, I just knew that I wanted some notion of trustability of a Canvas, so I added the variable, but wasn’t quite sure exactly how it would be used. Months later, the system grew the notion of Contexts with Layouts, which are multi-Canvas constructions that define a particular type of interaction. Now, you can have multiple trust levels associated with a single Context, but I had forgotten about the trust variable I had previously put in the Canvas structure – and added another trust level number to the Context structure as well. You can see where this is going: everything kind of worked as long as I had simple test cases, but as we started to get modals popping up over applications and then menus on top of modals and so forth, crazy behavior started manifesting, because I had confused myself over where the trust values were being stored. Sometimes I was updating the value in the Context, sometimes I was updating the one in the Canvas. It would manifest itself sometimes as an off-by-one bug, other times as a concurrency error.

This was always a skeleton in the closet that bothered me while the GAM grew into a 5k-line monstrosity of code with many moving parts. Finally, I decided something had to be done about it, and I was really not looking forward to it. I was assuming that I messed up something terribly, and this investigation was going to conclude with a rewrite of the whole module.

Fortunately, Rust left me a tiny string to pull on. Clippy, the cheerfully named “linter” built into Rust, was throwing a warning that the trust level variable was not being used at a point where I thought it should be – I was storing it in the Context after it was created, but nobody ever referred to it after that. That’s strange – it should be necessary for every redraw of the Context! So, I started by removing the variable, and seeing what broke. This rapidly led me to recall that I was also storing the trust level inside the Canvases within the Context when they were being created, which is why I had this dangling reference. Once I had that clue, I was able to refactor the trust computations to refer only to that one source of ground truth. This also led me to discover other bugs that had been lurking because in fact I was never exercising some code paths that I thought I was using on a routine basis. After just a couple hours of poking around, I had a clear-headed view of how this was all working, and I had refactored the trust computation system with tidy APIs that were simpler and easier to understand, without having to toss the entire code base.
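The warning in question is the sort of thing even a toy example trips – this sketch uses hypothetical names that merely echo the story above, and the message in the comment is approximate (Clippy surfaces rustc’s built-in dead_code lint alongside its own checks):

struct Context {
    // rustc/Clippy: field `trust_level` is never read (dead_code)
    trust_level: u8,
}

fn main() {
    // The value is stored at creation time but never consulted afterwards,
    // which is exactly the kind of dangling reference the lint flags.
    let _ctx = Context { trust_level: 3 };
}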

This is just one of many positive experiences I’ve had with Rust in maintaining the Xous code base. It’s one of the first times I’ve walked into a big release with my head up and a positive attitude, because for the first time ever, I feel like maybe I have a chance of being able to deal with hard bugs in an honest fashion. I’m spending less time making excuses in my head to justify why things were done this way and why we can’t take that pull request, and more time thinking about all the ways things can get better, because I know Clippy has my back.

Caveat Coder

Anyways, that’s a lot of ranting about software for a hardware guy. Software people are quick to remind me that first and foremost, I make circuits and aluminum cases, not code, therefore I have no place ranting about software. They’re right – I actually have no “formal” training to write code “the right way”. When I was in college, I learned Maxwell’s equations, not algorithms. I could never be a professional programmer, because I couldn’t pass even the simplest coding interview. Don’t ask me to write a linked list: I already know that I don’t know how to do it correctly; you don’t need to prove that to me. This is because whenever I find myself writing a linked list (or any other foundational data structure for that matter), I immediately stop myself and question all the life choices that brought me to that point: isn’t this what libraries are for? Do I really need to be re-inventing the wheel? If there is any correlation between doing well in a coding interview and actual coding ability, then you should definitely take my opinions with a grain of salt.

Still, after spending a couple years in the foxhole with Rust and reading countless glowing articles about the language, I felt like maybe a post that shared some critical perspectives about the language would be a refreshing change of pace.

19 May 19:09

OnePlus’ Nord Buds are the best $49 earbuds I’ve ever tested

by Brad Bennett

OnePlus has impressed me for years with its reasonably priced earbuds, and the new Nord Buds are no different -- except this time, the price is even lower.

I've been using the new buds for over a week, and while they didn't capture my heart like the Nothing Ear 1s, they still stood up to regular use and surprised me with how good they are for $49.

Sound quality

OnePlus actually managed to jam a 12.4mm speaker driver into each earbud. Even if it's not the biggest earbud speaker around, it's still loud enough and larger than the 11mm option inside the Nothing Ear 1s.

The playback isn't perfect, and the buds do show their limitations when the volume is maxed out, but in most situations, they're fine, and I really haven't felt the need to push the volume that loud unless I was biking with only a single bud in. Since these buds are so light, they work well for this.

The tuning is balanced, and it's not tinny or distorted. There's even an equalizer in the 'HeyMelody' app, so anyone that's not satisfied with the Nord Buds can try and tune them on their own. Generally, I keep the buds on the 'Balanced' preset, but the 'Bold' option is pretty fun and offers a little more depth and sharper pitch tuning.

The user experience

As I alluded to above, one of my favourite things about the Nord Buds is how light and comfortable they are. They have a small stemmed design that looks pretty cool and keeps the earbud's centre of gravity close to my head, so they always feel secure.

The buds themselves offer seven hours of listening and a total of 30 hours with the case. I have yet to kill the buds in a single listening session, and the case has lasted more than a week off its initial charge. That being said, at the time of writing, it's getting low and is at 30 percent. Of course, OnePlus also added fast charging, so five minutes will give the case 10 hours of battery life and the buds three.

Overall, battery life wasn't really an issue with the OnePlus Nord Buds.

The OnePlus Nord Buds also feature an IP55 rating, so they can repel a little bit of water, which is reassuring if you want to wear the buds in a bit of rain. They also offer Bluetooth 5.2 and utilize Android Fast Pair technology.

What I didn't like about them was the fact that when I was pulling them out of my ears or my pockets, the ear tip would often fall off too. It's not attached as securely as other buds, so this happens a lot. I haven't lost a tip yet, but it seems like only a matter of time. Regardless, the tips fit into my ears nicely and blocked outside sound well for earbuds without noise-cancelling.

The other negative is that the touch controls are a bit laggy, so whenever I would tap once, I'd often tap again, thinking it didn't register me, but it was just being slow.


Good budget buds

I've only been able to use these earbuds for just over a week at the time of writing, and while I got an hour or more of listening in every day, the only thing that worries me about the buds is how long they'll last over time. There's nothing to suggest that they'll die over time, but they feel too good to be true.

I don't think they'll replace my AirPods or even the OnePlus Buds Pros for me, but without a doubt, I feel comfortable recommending them to people looking to get a solid pair of wireless earbuds on a budget. In a previous review, I said that wireless earbuds typically fall into three categories: decent, terrible and surprisingly good.

When stacked up against other buds in the $100+ range, the Nord Buds are 'decent,' but when you factor in that they're half the price of the nearest competition, they feel 'surprisingly good.'

The OnePlus Nord Buds are available to pre-order now for $49 from OnePlus and come in Black.

19 May 03:14

Comby


Describes itself as "Structural search and replace for any language". Lets you execute search and replace patterns that look a little bit like simplified regular expressions, but with some deep OCaml-powered magic that makes them aware of comment, string and nested parenthesis rules for different languages. This means you can use it to construct scripts that automate common refactoring or code upgrade tasks.

Via Hacker News

19 May 03:11

Grief and Distraction

by peter@rukavina.net (Peter Rukavina)

Sarah Kennedy writes about grief and distraction for Film Stories:

Grief makes you more vulnerable and that’s frightening, but in that there’s a strength; something I couldn’t even begin to comprehend a decade ago. I can empathise with my favourite characters better now, and enjoy their stories with a whole new, sometimes painful, perspective. That means I can empathise with you better too. And hopefully treat you with the kindness you deserve. Let’s sit here together. We don’t have to talk. What do you want to watch?

19 May 03:10

Turing Pi 2 Kickstarter

by Rui Carmo

I want to like this, seeing as it would make a lot of sense (such as things are) for me to get one, but with compute modules being sold out everywhere, ready access to vastly more powerful cloud (and local) resources and little time to tinker beyond a couple of hours a week, I just can’t justify it–especially since I gave away my Raspberry Pi 2 cluster last year, and just haven’t missed it.

But it would be really nice (budget permitting) to fill one with NVIDIA Jetsons since those would let me tinker with PyTorch in earnest (even if more slowly than a consumer-grade GPU).

Guess I need to become a YouTuber or something.


19 May 03:09

The history of the internet is repeatedly reduced to the story of the singular Arpanet, but BBSs were just as important, if not more

Danie van der Merwe, GadgeteerZA, May 18, 2022

This shorter article summarizes a longer article in Wired, which may or may not be paywalled for you, and that in turn excerpts from a book, because that's how the publicity machine works. It's a story that has been told many times before, but is worth hearing if it's new to you, and that's the story of what the internet was like before there was an internet. It's the world of bulletin board services (BBS) and communication networks like FidoNet (not the phone company). I was a part of that world, hosting my own Athabaska BBS (on MaximusBBS) and visiting other services as much as I could. There were also commercial online information services (AOL, GEnie, Prodigy, Compuserv, Delphi) that were swallowed by the internet when it became popular. But those were for people who could pay $6 per hour (plus long distance).

19 May 03:09

Bike: An Elegant Outliner For Mac-Focused Workflows

by John Voorhees

Bike is a brand new Mac-only outlining app from Hog Bay Software that executes the fundamentals of outlining flawlessly. The outline creation and editing workflows are polished, and the keyboard-focused navigation makes moving around a large outline effortless.

The app’s feature set is limited by design. That focus is part of what makes Bike such a good outliner. The care and attention that has gone into building a solid outlining foundation are immediately evident.

However, that focus comes with a downside. Bike is a simple app that won’t meet the needs of users looking for iPhone or iPad support, formatting options, Shortcuts support, or other features.

Overall, I like the approach Bike has taken a lot, but I think it has gone too far, limiting the app’s utility more than is necessary to maintain its simplicity. Let me explain what I mean.

Bike reminds me of Drafts 1.0. As originally conceived, Drafts was a fast way to input text on the iPhone that was destined for another app. The focus was on speed and simplicity.

Bike 1.0’s strength lies in its simplicity too. The app features unadorned text organized in a foldable, hierarchical structure. That’s it – text and a tab-based outline structure. There are no formatting options, fonts to pick, text color, highlighting, or any of the other customization choices offered by other outliners. There’s an elegance to Bike’s limitations that insist that you focus on organizing your ideas instead of procrastinating by decorating your outline, which I love.

Bike’s approach serves its feature set well, too. A lot of outlining apps are just glorified text editors. That works to a point, but the structured nature of an outline demands an approach tailored to organizing ideas.

Every major feature of Bike has a keyboard shortcut.

One of the most important features to get right for an outlining app is indenting and outdenting content. Bike gets this right, letting users indent and outdent using the Tab and Shift+Tab anywhere on a row to change its level, which is something that not all outliners support. Indenting and outdenting can also be accomplished with ⌘ + ] and ⌘ + [ or ⌘ + ⌃ + → and ⌘ + ⌃ + ←, which I appreciate because it accommodates a variety of personal preferences. Indenting and outdenting work when multiple rows are selected too.

Keyboard navigation and editing are at the core of what makes Bike such a powerful outliner. Multiple rows can be selected and a new parent row added above them with a single keystroke. Rows can be moved up and down through an outline’s hierarchy individually or as a group when multiple rows are selected, sections can be collapsed and expanded individually and all at once, and you can focus on one section of an outline at a time, hiding the others.

Bike’s outline mode selects entire rows, making it simple to move them around.

Bike also has a separate outline mode that’s toggled on and off using the Escape key on your Mac. When enabled, outlining mode highlights entire rows at once, allowing you to treat each as a unit. Because so many of the keyboard shortcuts for navigating an outline are also available in editing mode, I haven’t found myself turning to outline mode often, but it does allow you to use the left and right arrow keys to expand and collapse sections, whereas, in editing mode, those keys simply move the cursor. In editing mode, expanding and collapsing sections is accomplished with ⌘ + 9 and ⌘ + 0.

Bike outlines are HTML files under the hood.

Outlines can be saved in three different formats. By default, Bike saves to a proprietary .bike file that is really just HTML. That means you can right-click on a .bike file, open it in Safari or an editor like BBEdit if you’d like, or manipulate it as HTML in Shortcuts, for example. Bike’s outlines can also be saved as OPML files, which is a common outliner format used by other apps like OmniOutliner, or as plain text, which structures your outline using tabs and new line characters.

The file format for your outline is only available in the initial Save dialog.

Bike also allows users to generate links to outlines using a URL scheme in the following format:

bike://JtnNiH_I#it

The part of the URL before the pound symbol is the outline’s identifier that opens the file you want. The part after the pound symbol is optional and specifies a row; when it’s included, the outline opens in the app’s focus mode, so you’ll only see that section of the outline.

Bike’s URL scheme is a useful way to link to outlines in other apps, so you can get back to them or a particular section quickly. However, there’s no UI affordance to let you know that you’re in Bike’s focus mode, so opening directly into a section of an outline can be disorienting. I’d like to see some sort of focus mode indicator added to Bike to let users know when there’s more to an outline than what’s currently onscreen.

Bike includes a find and replace panel.

Bike can also check your spelling and grammar.

Other notable Bike features include find and replace and spelling and grammar panels that can be activated from the Edit menu or with a keyboard shortcut. Bike also supports floating windows above all others and an extensive AppleScript dictionary of commands for accessing the components of an outline down to the level of individual rows.

Bike includes extensive AppleScript support.

Apart from the couple of items mentioned above, there are a few big-ticket items that I’d like to see added to Bike in the future. The first is an iOS and iPadOS app. The multiple file formats offered by Bike mean you can save to a format that can be opened on an iPhone or iPad for editing in a different app. However, that means abandoning Bike’s excellent editing and navigation features on those devices, which is a shame because outlines are the sort of content that is useful on any device. However, if you work primarily on a Mac, the fact that Bike is Mac-only likely won’t be an issue for you.

I wish Bike supported bulleted and numbered lists.

Second, I’m not a fan of the decision to not offer bulleted and numbered outline options. That would undoubtedly increase Bike’s complexity, but both options are so common in Markdown and Rich Text environments that their absence makes Bike outlines less useful in other apps.

Finally, I’d like to see Bike’s powerful AppleScript actions adapted for use in Shortcuts. The lack of bulleted and numbered lists in Bike immediately made me wonder if I could automate adding them. That should be possible with AppleScript, but the lack of Shortcuts support unnecessarily limits Bike’s automation capabilities to AppleScript users.


Bike is the most elegantly designed outlining app I’ve tried on the Mac in a long time. Working with outlines is fast and effortless regardless of their size. As a result, if you like organizing your ideas in a structured outline format and work primarily on a Mac, you can’t go wrong with Bike. Its focused approach to outlining is excellent.

However, the choice is less clear for Mac users who want to outline on an iPhone or iPad too. On the one hand, Bike supports OPML and plain text file formats, which are both highly portable. There are plenty of iOS and iPadOS apps that you could use to fill the gap. On the other hand, though, if you’d prefer a unified approach across devices, you can’t get that with Bike.

Personally, I’ll continue to experiment with Bike when I’m on my Mac. The app’s limitations mean that I’ll stick with Obsidian for most of my outlining needs. However, Bike’s exceptional keyboard-driven approach is one I’ll be keeping an eye on and looking for ways to fit into my workflows.

Bike is available directly from Hog Bay Software for $29.99.


19 May 03:09

Lightbulbs were so startup

Edison invented the first practical incandescent electric light in 1879. (That is, the filament lasted more than a few hours.)

But it couldn’t be brought to consumers because there was no commercially-available electricity.

AND SO:

In 1882, the Edison Illuminating Company opened the first commercial power plant in the United States: Pearl Street Station (Wikipedia) in Lower Manhattan.

Starting small… "it started generating electricity on September 4, 1882, serving an initial load of 400 lamps at 82 customers. By 1884, Pearl Street Station was serving 508 customers with 10,164 lamps."

They had to invent a new kind of dynamo to generate the electricity. Ahead of the power station, Edison ran a number of pop-ups as prototypes.

HOWEVER: power distribution.

… perhaps the greatest challenge was building the elaborate network of wires and underground tubes (called “conduits”) needed to deliver energy to customers. New York City politicians were initially skeptical and rejected Edison’s proposal to dig up the streets of lower Manhattan to install the needed 100,000 feet of wiring. Eventually, however, Edison was able to convince the mayor of the city otherwise. The conduit installation proved to be one of the most expensive parts of the whole project.

– Engineering and Technology History Wiki, Pearl Street Station

Not only a distribution problem, but a backchannel conversation between people with power to get around the rules. Capitalists gonna capitalise.

AND THERE’S MORE: the business model.

Since the early 1800s there had been special instruments to detect the flow of a current and indicate how much of it was flowing, but there was not an instrument to record that flow over time. Not until the spring of 1882 was a successful design for an electric meter available. However, Edison did not send bills to his customers until the whole system was running reliably, which took some more time. The first electric bill was sent to the Ansonia brass and copper company on 18 January 1883 and was for $50.44.

(That quote from the same ETHW article linked above.)

In addition, light bulbs cost $1 each.

The setup cost (including real estate) was $300,000 – a lot to return, 50 bucks at a time.

The company ran at a loss until 1884. Pearl Street Station burnt down in 1890, was rebuilt, then in 1895 decommissioned when it was made obsolete by newer, larger power stations.


A single value-creating innovation requiring a vast hinterland of enabling technologies in order to connect the product to its market.

It’s so startup it hurts.

Questions I have:

  • How can we compare the cost of “inventing the lightbulb” vs digging the streets, building the power station, getting to profitability, etc? Say, in terms of hours of human effort? Was it 50:50, 80:20, 1:99?
  • Were other models other than metered electricity considered? Like, light-as-a-service with bundled bulbs? Or somehow connecting the cost of the service to the value for the customer – could customer businesses be charged a fraction of revenue, say? My guess: it comes down to what’s easy to measure. Businesses nowadays will grab a % (the “take”) from their customers simply because it’s possible to do so.
  • Were other applications for electricity actively considered, or was this a pure lighting play? The fractional horse-power motor scaled down factory automation to the home, and the electrification boom brought with it vacuum cleaning and washing machines… yet the enabling tech wasn’t invented till 1888. But was it visible on the horizon? Or was it only lighting that justified the huge investment in Pearl Street Station and the 100,000 feet of conduits?
  • Why go for this very broad market first? Why not build smaller generators in the basement of department stores, lighting up just single buildings or single streets? Isn’t the value in (say) adding an extra hour of brightly-lit shopping every evening more obvious than trying to replace gas and oil for home customers?

Did Edison have a team of 1880s MBA-equivalents, crunching the numbers to figure out what to do?

What was the mood around electric lighting back then? Did it feel like a hype train? Was the social media of the time full of wild speculation about the social changes that would be unleashed and the fortunes that would be made?

Electricity, generally, had that aura of excitement. A few years back I read every issue of Electrical Review from the 1880s and 1890s which covered the rollout of the telegraph and then lighting, simultaneous with figuring out the science of how electricity behaved (I wrote up my observations here) – but I think I need to go back and read the preceding decade too.

19 May 03:07

Opposing a new LNG port in Átl’ka7tsem

by Chris Corrigan

I live at the open end of a fjord called Átl’ka7tsem/Howe Sound, on the south coast of British Columbia. It is a broad-mouthed inlet that narrows as you head 45 kilometers up towards Squamish. It is home to a small archipelago of islands and some small villages and towns. The inlet has been recovering from massive industrial abuse for most of the last 100 years, mostly from horrendous mining and logging practices, and now we have herring, sea lions, seals, whales, dolphins and porpoises, and even more important sea life, like extremely rare glass sponge reefs and healthy plankton blooms, showing up in ever-increasing numbers. You can read more about this amazing place and its citizen-led recovery at the Howe Sound Marine Guide on the Átl’ka7tsem/Howe Sound Marine Stewardship Initiative website. This place is so special that last year the inlet was named Canada’s 19th UNESCO Biosphere Reserve.

The inlet forms most of the southern half of Skwxwú7mesh-ulh Temíxw and the Skwxwú7mesh Úxwumixw (Squamish Nation Government) is playing an increasingly important role in the jurisdiction and stewardship of this place, as is right. The Nation is the only government whose jurisdiction maps most precisely on the whole of the ecosystem, from mountain tops to the ocean floor from the source to the Strait of Georgia, and they are the government with by far the longest tenure in this place, dating back tens of thousands of years, into time immemorial. The deepest stories about this place extend into the Squamish period of history that was dominated by the Transformer brothers Xaay Xaays and the supernatural beings that formed and transformed the earth.

Next week, the proponents of Woodfibre LNG will be presenting to our Council on Bowen Island. I’m not sure what they will say, but I do know that it is important to be on the record opposing the project. This blog post will be my submission to Council.

I am opposed to any new fossil fuel infrastructure development. Anything that helps add to the amount of fossil fuels being burned is a contribution towards the increasingly likely potential that we will propagate an extinction-level event on our home planet.

The Skwxwú7mesh Úxwumixw has entered into a benefits agreement with Woodfibre LNG and the Skwxwú7mesh Úxwumixw environmental review process has approved the construction of the project. The company has worked with the Nation to mitigate the impact of the development at Swiyát, which is an especially significant place for herring spawning. I want to go on record as saying that I don’t blame the Skwxwú7mesh Úxwumixw one bit for this decision. They have been clear from the beginning that development in the territories needs to meet their standards, and this development has done that. They have been transparent about their process and they have made decisions in the best interests of the Nation.

Since European contact, the Skwxwú7mesh Úxwumixw and its constituent communities and leaders have been systemically and deliberately denied the opportunity to benefit from economic activity within the territory. The fact that they have asserted this right and signed an impact agreement worth more than $1 billion is good. In fact, it is surprising and shocking that ANY economic activity at all happens within Squamish Nation territories without some benefit accruing to the Nation.

Skwxwú7mesh Úxwumixw is entirely within its rights to review and approve the project from the perspective of its environmental and economic interests. This is a key part of the principle of free, prior and informed consent recognized under the UN Declaration on the Rights of Indigenous Peoples, and it stands as an example to all of us who operate in these territories. If you aren’t already contributing something to the Nation as a person who “lives, works and plays” here, then it might be time to consider how you too can share your benefits with the traditional and historic owners of this territory.

The major objection I have to Woodfibre LNG is the fact that it introduces new fossil fuels into the earth’s atmosphere, at a time when we are confronting an existential crisis on this planet. Woodfibre LNG will tell you that this is a clean project because it uses hydroelectricity for its operations. However, it fails to take any responsibility for the amount of LNG being shipped through the facility and burned in the world. This is like a bomb factory claiming to be benign because there has never been a fatality inside the factory. It fails to take into account the cumulative effect of the burning of new amounts of liquid natural gas over the lifetime of the project. I have asked the company what the estimates are for the amount of carbon added to the atmosphere from the gas shipped through Woodfibre, and if they reply I will update this post to reflect that. At the very least, the facility is intended to ship 2.7 million tons of LNG a year which, when burned, will produce about 2.76 times that amount, or 7,452,000 tons of CO2, without taking into consideration the supply chain emissions or, more importantly, direct leaks and emissions of methane into the atmosphere. That Woodfibre is run on electricity is merely one dent in an overall supply chain that uses and emits the gas that it mines.

We should not be building new fossil fuel infrastructure at all at this point in time. We have long since passed the time when we should have stopped. All of us now need to stand in the face of our descendants and the future impacts of life on the planet and admit that at the very least, we didn’t do enough in a timely manner to address this issue. But some of us will need to say more. That even when we knew what negative impact we could expect from the short-term gain we championed, we did it anyway.

Sorry won’t pay for this grief.

19 May 03:04

Inception

by peter@rukavina.net (Peter Rukavina)

Turning an iPhone on its side and taking a panoramic photo up-down sometimes produces interesting results. Like this Inception-like photo of dusk on Prince Street.

19 May 03:02

Class Disrupted S3 E18: Revisiting the Promise and Potential of Charter Schools 30 Years Later

Michael B. Horn, Diane Tavenner, The 74, May 18, 2022

Once you get past the two authors gushing with praise for each other this is actually an interesting and in-depth discussion of the 30-year old history of charter schools. It's an innovation that has frankly not lived up to the promises that were made, especially with regard to student success, and which in many ways was co-opted to serve other agendas, but which offered room for innovation and exploration - albeit at the cost of damaging the public school system by draining resources and expertise from it.

19 May 03:01

Canada: trial of white men who killed two Indigenous hunters in 2020 begins | Canada

mkalus shared this story from The Guardian.

Two white Canadian men followed and then shot dead two Indigenous hunters because they believed they were thieves, prosecutors have told a court at the start of a murder trial in Alberta.

Roger Bilodeau, 58, and his son Anthony Bilodeau, 33, have both pleaded not guilty to second-degree murder over the deaths of Jacob Sansom and his uncle, Maurice Cardinal in March 2020.

The bodies of Sansom, 39, and Cardinal, 57, were found early on 28 March beside Sansom’s pickup truck on a country road near Glendon, a farming town 160 miles north-east of Edmonton.

Sansom had recently lost his job as a mechanic and worked as a volunteer firefighter. Cardinal was a keen hunter and outdoorsman. Both were Métis – a distinct group that traces lineage to both Indigenous nations and European settlers – and had permission to hunt the area out of season.

The killing of the men, who were returning from a successful moose hunting trip to help provide food for family members, shocked the region.

Prosecutors told an Edmonton jury on Monday that the two Bilodeaus followed the two hunters, assuming the men were thieves. Roger Bilodeau believed the hunters’ truck resembled a vehicle that had been on his property earlier that day.

As he followed the truck, Bilodeau called his son and asked him to follow behind and to bring a gun, said the Crown.

Roger Bilodeau and the hunters stopped their trucks along a country road near Glendon.

Anthony Bilodeau arrived moments later and prosecutors say he shot Sansom, then Cardinal. A postmortem concluded that Sansom was shot once in the chest and Cardinal was shot three times in his shoulder.

The Bilodeaus then drove away without notifying police or paramedics. The bodies of the two men – Sansom lying in the middle of the road and Cardinal in a ditch – were discovered early the next morning by a motorist.

“These were in no way justified killings,” said prosecutor Jordan Kerr, adding that the younger Bilodeau “freely made the decision to arm himself” and pursue the two men. Roger Bilodeau “clearly anticipated having a confrontation” and so “recruited” his son into bringing a weapon, Kerr said.

But a lawyer for the Bilodeaus says the men acted in self-defence amid concerns over property crime in the area.

Lawyer Shawn Gerstel told the jury that the encounter on the rural road that night quickly escalated and that Sansom had smashed a window of Roger’s truck and punched him multiple times.

“[Roger] asked for a gun for protection because he didn’t know who he was dealing with,” said Gerstel. The defence said the collar of Roger’s shirt was torn half off and Sansom’s blood was found on three areas of Bilodeau’s shirt.

The defence also alleges the hunters were drunk, and a medical examiner is expected to testify that Sansom’s blood alcohol level was almost triple the legal driving limit. Cardinal’s blood alcohol level was nearly double the legal limit for driving, the defence says.

On Monday, Sansom’s brother James told the court that Jacob was trained as a martial artist and had the ability to de-escalate tense situations.

The trial continues.

19 May 03:01

Reality check: the Northern Ireland protocol isn’t the problem, Brexit is | Rafael Behr

mkalus shared this story from The Guardian.

The Conservative party was happy with Brexit, but not for long. A deal that was great in 2019 is now not great. What could fix it? What change would bring enduring satisfaction? The answer is obvious to anyone familiar with the patterns of English Euroscepticism – nothing. There is no concession big enough, no deal good enough, just as no single fix can end the cravings of a drug addict. The long-term solution is to get sober.

That is not on Liz Truss’s agenda. On Tuesday, the foreign secretary informed parliament of a government plan to assert its own version of the Northern Ireland protocol. That is a threat designed to prod the EU into renegotiating the 2019 withdrawal agreement, which was itself the outcome of a renegotiation made necessary because Theresa May had done a deal that Conservative MPs also didn’t like.

One reason continental leaders don’t want to talk about changes amounting to a new treaty is their certain knowledge that the Tories would be dissatisfied again soon enough. Another reason is that a revised deal would involve trusting Boris Johnson, which EU governments have done before and which no one does twice.

Truss’s account of the problem in Northern Ireland elides frustration with border checks across the Irish sea and a wider complaint about the residue of EU jurisdiction in Northern Ireland that Brexit hardliners see as a stain on UK sovereignty. She is egged on by Tory backbenchers who are convinced that the protocol was foisted on Britain; that it amounts to a regulatory land-grab and that its provisions are applied with pernickety spite as punishment by Brussels of an ex-colony that had the temerity to break free.

Believing that version of events requires two psychological traits that come easily to the fervent Eurosceptic. One is a capacity to forget that every problem currently associated with Brexit, including the specific danger in Northern Ireland, was signalled by remainers and dismissed with contempt as scaremongering by leavers. The other is a need to still feel victimised by Brussels even after leaving the EU, since ending that ordeal removes any excuse for Brexit not delivering its promised bounties.

That is the addiction – the sadomasochistic compulsion to be oppressed by foreigners for fear of taking responsibility for the consequences of liberation.

It is true that customs checks in the Irish Sea are a symbolic injury to unionist feeling in Northern Ireland. But it is also true that Johnson knowingly inflicted that injury, denied he had done it, then whipped the grievance up when he should have been hosing it down. A constitutional crisis at Stormont was not prefigured in the letter of the protocol, but it was made likely by the prime minister’s irresponsible and negligent handling of the politics of the protocol from the day he signed it.

Meanwhile, if Tory backbenchers had not found all the resentment they needed in Northern Ireland, they would have gone hunting for reasons to be dissatisfied with Brexit in England instead.

One of Johnson’s complaints about an Irish Sea border, as expressed in an interview earlier this week, was that regulatory checks create “extra barriers to trade and burdens on business.” That generates “a great deal of faff and botheration”, which increases living costs. Those barriers are uniquely upsetting to Northern Ireland unionists on the level of national identity, but the faff and botheration incur costs also at Dover, Grimsby, Felixstowe; any place where goods move between Britain and the EU.

In other words, the prime minister’s economic rationale for wanting to fix the Northern Ireland protocol contains a complaint about conditions that are intrinsic to the Brexit model he chose.

That is yet another reason why no one in Brussels wants to reopen the 2019 deal. The negotiation would founder on first principles. Brussels says that if Britain is no longer automatically applying EU rules, it must prove that its exports comply. The Brexit ultras think that Brussels is only imposing that requirement out of petty vindictiveness and that the very Britishness of British standards should be sufficient guarantee of quality. That has been the impasse in every chilly phone call and deadlocked meeting between the two sides since 2016.

The Tories cannot budge on that point because doing so would involve accepting two indisputable facts about Brexit. First, exiting the single market was bad for UK businesses (and the losses are not made up by free-trade deals with other countries). Second, Britain had the levers to steer EU policy as a member state and surrendered that power when it left.

No minister serving in the current cabinet can admit those truths. Until that changes, UK policy towards the EU will amount to little more than rattling the cage of delusion that Brexit imposes on its believers. Some Eurosceptics find perverse pleasure in captivity, but that is their fetish and not something anyone else needs to indulge.

When policies fail on such a Titanic scale, it is usual to have some debate about a change of direction. That isn’t happening, because the opposition has no alternative destination in mind, or none that it advertises in public.

Keir Starmer is mindful that his support for a second referendum back in the day is still a vulnerability in constituencies where the Tories want to drive Brexit ever deeper as a wedge between Labour and its estranged core voters. One function of Truss’s bill overriding the Northern Ireland protocol is that anyone opposing it can be cast as an unrepentant remainer.

Labour’s absence from the conversation is not only metaphorical. Two opposition seats on the European scrutiny committee, which notionally holds the government to account on EU matters, are effectively vacant because the Labour MPs who sat there have moved on to frontbench jobs, and haven’t been replaced.

Labour strategists take the view that sanity in EU policy only becomes available by winning an election fought on other issues – things voters actually care about – and not by dancing to a drum that Johnson beats to distract from all his other failures. That is probably true. But it means the parameters of Brexit debate are set by marginal differences between maniacs and hardliners over the optimal pace for fleeing reality.

It is a formula for perpetual crisis. The constitutional mess that Johnson has made of Northern Ireland is so far the gravest episode, but unlikely to be the last. The problem isn’t that the protocol cannot be made to work as written, but that it was written to enact a Brexit that doesn’t work.

  • Rafael Behr is a Guardian columnist

19 May 03:00

Toronto Ride of Silence 2022

by jnyyz

Today was the first in-person Ride of Silence in Toronto for three years. Thirty-eight cyclists rode to remember those who have been killed while cycling on these mean streets of the GTA.

both photos by Joey Schwartz

Last year was a bad year, with seven ghost bikes installed in the GTA. The trend that has become apparent is that most deaths are now outside the downtown core; in fact, only two of the seven in 2021 were in the city itself. The outer suburbs are also hazardous for cyclists.

The updated list of cyclists killed in 2021 who were remembered tonight is as follows:

Sept 17: Ignacio Viana (81), Lower Base Line West and 6th Line, Milton

Sept 11: Male cyclist, Eglinton and Leslie

Sept 1: Nikita Victoria Belykh (11), Thornhill

Aug 18: Miguel Joshua Escanan (18), Avenue Rd and Bloor

June 17: Male cyclist (60s), Queen St E and Hwy 50, Brampton

June 10: Boy (11), Hwy 407 and Warden, Markham

May 20: Darren Williams (51), Muskoka

May 4: Rayyan Ali (5), Hurontario and Evans

Thanks to everyone who rode tonight. Thanks to Joey and Geoffrey for leading tonight’s ride.

Perry has posted some photos and a video of Joey reading the names of departed cyclists at the end of the ride.

https://www.photodorks.com/2022/05/ride-of-silence-toronto-2022.html

Geoffrey’s video

19 May 02:58

New Anker 563 USB-C docking station supports triple-displays on M1 Macs

by Steve Vegvari

Anker is launching a new USB-C docking station with a wide variety of ports and device support. More importantly, the Anker 563 USB-C dock features two HDMI ports and a DisplayPort for driving multiple displays.

When the M1 MacBook line was introduced, Apple’s laptops only supported a single external display. Eager users and third-party developers found ways around that limitation, but for many the process was convoluted. This is where the 10-in-1 Anker 563 dock comes in.

https://www.youtube.com/watch?v=XoKvJLfXTbI&feature=emb_title

With the dock, M1 Mac users can run up to three additional displays. A single 4K display connected via HDMI renders at 30Hz, while a second display over HDMI and/or DisplayPort is restricted to 2K at 50Hz or 60Hz.

So although the Anker 563 does provide reliable options for external displays, users should set their expectations accordingly. Many of the limitations stem from the station running over a single USB-C cable.

On top of its display outputs, the Anker 563 features a USB-C Power Delivery port that supplies up to 100 watts to the host computer. It also includes two 5Gbps USB data ports, one USB-A and one USB-C, plus two standard USB-A ports. Finally, the station includes a 3.5mm headphone jack, Ethernet, and a power supply input.

The Anker 563 is compact enough to suit a number of setups. It’s small enough to fit in a bag and be taken to the office, or to be kept at your home desk.

It is currently available on Anker’s website for $249 USD (roughly $319 CAD).

Image credit: Anker

Via: MacRumors

19 May 02:57

Google’s Subsidiary in Russia Could File for Bankruptcy Due to Heavy Penalties

by Chandraveer Mathur

Google recently complied with US government sanctions against Russia because of the ongoing war with Ukraine. The Russian authorities have retaliated with actions that could force Google’s Russian subsidiary to file for bankruptcy. A Reuters report says the Kremlin has stopped just short of blocking access to the search giant’s services.