Shared posts

20 Dec 01:54

Formal Methods Only Solve Half My Problems

by marcbrooker@gmail.com (Marc Brooker)

At most half my problems. I have a lot of problems.

The following is a one-page summary I wrote as a submission to HPTS’22. Hopefully it’s of broader interest.

Formal methods, like TLA+ and P, have proven to be extremely valuable to the builders of large-scale distributed systems [1], and to researchers working on distributed protocols. In industry, these tools typically aren’t used for full verification. Instead, effort is focused on interactions and protocols that engineers expect to be particularly tricky or error-prone. Formal specifications play multiple roles in this setting, from bug finding in final designs, to accelerating exploration of the design space, to serving as precise documentation of the implemented protocol. Typically, verification or model checking of these specifications is focused on safety and liveness. This makes sense: safety violations cause issues like data corruption and loss, which are correctly considered to be among the most serious issues with distributed systems. But safety and liveness are only a small part of a larger overall picture. Many of the questions that designers face can’t be adequately tackled with these methods, because they lie outside the realm of safety, liveness, and related properties.

What latency can customers expect, on average and in outlier cases? What will it cost us to run this service? How do those costs scale with different usage patterns, and dimensions of load (data size, throughput, transaction rates, etc)? What type of hardware do we need for this service, and how much? How sensitive is the design to network latency or packet loss? How do availability and durability scale with the number of replicas? How will the system behave under overload?

We address these questions with prototyping, closed-form modelling, and with simulation. Prototyping, and benchmarking those prototypes, is clearly valuable but too expensive and slow to be used at the exploration stage. Developing prototypes is time-consuming, and prototypes tend to conflate core design decisions with less-critical implementation decisions. Closed-form modelling is useful, but becomes difficult when systems become complex. Dealing with that complexity sometimes requires assumptions that reduce the validity of the results. Simulations, generally Monte Carlo and Markov Chain Monte Carlo simulations, are among the most useful tools. Like prototypes, good simulations require a lot of development effort, and there’s a lack of widely-applicable tools for simulating system properties in distributed systems. Simulation results also tend to be sensitive to modelling assumptions, in ways that require additional effort to explore. Despite these challenges, simulations are widely used, and have proven very useful. Systems and database research approaches are similar: prototyping (sometimes with frameworks that make prototyping easier), some symbolic models, and some modelling and simulation work [2].
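As a sketch of the kind of Monte Carlo simulation this paragraph describes, the following estimates the median and 99th-percentile latency of a quorum read over three replicas. The latency distribution (a 1 ms base plus an exponential tail) is invented for illustration, not drawn from any real system:

```python
import random

def request_latency(n_replicas=3, quorum=2, samples=100_000):
    """Estimate end-to-end latency of a quorum read: send the request
    to all replicas and wait for the quorum-th fastest response."""
    results = []
    for _ in range(samples):
        # Assumed per-replica latency: 1 ms base plus a long-tailed
        # component (exponential, mean 2 ms). Replace with real data.
        latencies = sorted(random.expovariate(1 / 2.0) + 1.0
                           for _ in range(n_replicas))
        # The quorum-th fastest replica determines request latency.
        results.append(latencies[quorum - 1])
    results.sort()
    return {
        "p50": results[len(results) // 2],
        "p99": results[int(len(results) * 0.99)],
    }

stats = request_latency()
print(f"median {stats['p50']:.2f} ms, p99 {stats['p99']:.2f} ms")
```

A few dozen lines like this already answer questions that safety-focused model checking cannot, such as how much the tail degrades as the quorum size grows.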

What I want is tools that do both: tools that allow development of formal models in a language like Pluscal or P, model checking of critical parameters, and then allow us to ask those models questions about design performance. Ideally, those tools would allow real-world data on network performance, packet loss, and user workloads to be used, alongside parametric models. The ideal tool would focus on sensitivity analyses, that show how various system properties vary with changing inputs, and with changing modelling assumptions. These types of analyses are useful both in guiding investments in infrastructure (“how much would halving network latency reduce customer perceived end-to-end latency?”), and in identifying risks of designs (like finding workloads that perform surprisingly poorly).
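A minimal sensitivity analysis of that kind might look like the sketch below. The workload model is entirely assumed (two sequential network hops plus an exponentially distributed service time), but it shows the shape of the question "how much would halving network latency reduce customer-perceived end-to-end latency?":

```python
import random

def end_to_end(net_ms, samples=50_000):
    """Mean end-to-end latency of a toy request: two sequential
    network hops plus a long-tailed service time. All parameters
    are illustrative, not measurements."""
    total = 0.0
    for _ in range(samples):
        service = random.expovariate(1 / 5.0)   # assumed mean 5 ms service time
        total += 2 * net_ms + service           # two network hops per request
    return total / samples

base = end_to_end(net_ms=2.0)
halved = end_to_end(net_ms=1.0)
print(f"halving network latency cuts mean end-to-end latency by "
      f"{100 * (base - halved) / base:.0f}%")
```

With these assumed parameters, the network accounts for under half of end-to-end latency, so halving it yields well under a 50% improvement; the value of the exercise is making that dependence explicit before investing in faster infrastructure.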

This is an opportunity for the formal methods community and systems and database communities to work together. Tools that help us explore the design space of systems and databases, and provide precise quantitative predictions of design performance, would be tremendously useful to both researchers and industry practitioners.

Later commentary

This gap is one small part of a larger gap in the way that we, as practitioners, design and build distributed systems. While we have some in-the-small quantitative approaches (e.g. reasoning about device and network speeds and feeds), some widely-used modelling approaches (e.g. Markov modelling of storage and erasure code durability), most of our engineering approach is based on experience and opinion. Or, worse, à la mode best-practices or “that’s how it was in the 70s” curmudgeonliness. Formal tools have, in the teams around me, made a lot of the strict correctness arguments into quantitative arguments. Mental models like CAP, PACELC, and CALM have provided ways for people to reason semi-formally about tradeoffs. But I haven’t seen a similar transition for other properties, like latency and scalability, and it seems overdue.

Quantitative design has three benefits: it gives us a higher chance of finding designs that work, it forces us to think through requirements very crisply, and it allows us to explore the design space nimbly. We’ve very successfully applied techniques like prototyping and ad hoc simulation to create a partially quantitative design approach, but it seems like it’s time for broadly applicable tools.

Footnotes

  1. See, for example, Using Lightweight Formal Methods to Validate a Key-Value Storage Node in Amazon S3, and How Amazon Web Services Uses Formal Methods
  2. E.g. the classic Concurrency control performance modeling: alternatives and implications, from 1987.
14 Jul 22:54

Marketing our privacy products while preserving privacy 

by Jenifer Boscacci

When we launched Mozilla VPN, a fast and easy-to-use VPN, it was in a market crowded with companies making promises about privacy and security, and we believed our reputation for building products that help you keep your information safe would make our product stand out. To date, tens of thousands of people have subscribed to Mozilla VPN, which provides encrypted, device-level protection of your connection and information whenever you are on the web.

As we continued to look for new ways to grow our audience, we saw that many of our competitors used affiliate marketing to get people to buy their service. The challenge is that affiliate marketing is a space rife with data collection practices. At Mozilla, online privacy has always been one of our top priorities. We knew that in order to pursue affiliate marketing, we would have to do it transparently and with as little data as possible, to give people the best privacy we could.

Prioritizing privacy right from the start with affiliate marketing

Say you’re in the market for a new phone: you start by doing research on review sites, then pick one of a site’s top choices and click the link to buy. This is affiliate marketing at work. Essentially, a trusted media site gives its thumbs-up, and once you click the link to buy, it gets paid for that referral. Companies use affiliate marketing to leverage well-known publishers to raise awareness of their products and drive people to buy.

Data collection happens when people’s information is shared once they click on a link, either through a publisher’s site or using an influencer’s code. An attribution tag is attached to that link or code so that the publisher or influencer gets credit for sending that user to the company’s site. During that transaction, information like an IP address, or any other additional data about the person, is passed through a third party: an affiliate network partner.

At Mozilla, where we prioritize people’s privacy, choosing the right affiliate network partner was our first step. We selected Commission Junction because they have experience in the VPN category and were willing to collaborate with us to build a privacy-forward solution for attribution.

Together with our engineering team we integrated affiliate marketing in a way that was designed to limit data transfer and collection. We only collect and pass the data necessary to credit an affiliate network partner with the subscription they helped us get. We use a server-to-server integration to prevent leaking user IP addresses. Our code is open source to ensure transparency around this data. 
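As an illustration of what a server-to-server integration can look like, here is a minimal sketch. The endpoint and field names are hypothetical (nothing here is Mozilla’s or Commission Junction’s actual API); the point is that the merchant’s backend, not the user’s browser, reports the conversion, so no IP address ever reaches the affiliate network:

```python
import urllib.parse

# Hypothetical server-to-server attribution postback. The URL and
# parameter names below are placeholders for illustration only.
AFFILIATE_ENDPOINT = "https://network.example.com/postback"

def build_postback(click_id: str, order_id: str, amount: str) -> str:
    """Build a postback URL carrying only the fields needed to credit
    the referral: the click's attribution token, an opaque order id,
    and the amount. No IP address, user agent, or other user data."""
    params = urllib.parse.urlencode({
        "click_id": click_id,   # attribution token from the referral link
        "order_id": order_id,   # opaque id, not linkable to the person
        "amount": amount,
    })
    return f"{AFFILIATE_ENDPOINT}?{params}"

# The merchant's server would request this URL itself after the sale,
# so the user's browser never contacts the affiliate network.
print(build_postback("abc123", "order-42", "4.99"))
```

Because the postback is assembled and sent server-side, the only user-linked value that leaves the merchant is the opaque click token from the original referral link.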

We inform people about this upon accessing our website and people who do not wish to see this data collected can opt out.

This approach is in line with, and a reflection of, our lean data practices and our privacy promise.

Since the launch of our affiliate marketing program in April, we have continued to evaluate companies that are aligned with our values and represent the best of the web.

Now available with a seven-day risk-free trial

Summer is just around the corner, and as you begin planning your vacation, whether you’re staying in a hotel or renting a home, traveling between airports or working from coffee shops, you can have peace of mind using Mozilla VPN wherever you are. Mozilla VPN encrypts your connection, protecting your privacy and concealing your location. You can feel safe knowing that Mozilla VPN does not log or sell your data. Starting this month we’re offering a seven-day free trial for mobile devices (Android and iOS) so you can get protection on the go wherever you browse the web.

Building privacy products at Mozilla

We know that it’s more important than ever for you to be safe, and for you to know that what you do online is your own business. Mozilla VPN is developed by Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet, and we are committed to innovating and bringing new features to it. By subscribing to Mozilla VPN, users support both Mozilla’s product development and our mission to build a better web for all.

Check out the Mozilla VPN and subscribe today from our website.

Device-level encryption from a name you can trust

Try Mozilla VPN

For more on Mozilla’s privacy products:

The post Marketing our privacy products while preserving privacy  appeared first on The Mozilla Blog.

13 Jun 02:06

ESOP Fables

by peter@rukavina.net (Peter Rukavina)

From earlier this spring, Signs of a magnetic pole flip in company ownership by Matt Web:

What if the dominant model of company ownership inverts? What if we’re at the end of an era of companies being owned by external stockholders, and at the beginning of bottom-up ownership by the people who do the work – the employees? 

The signature example of employee-ownership Matt cites is ustwo, a company—er, fampany—that I have some familiarity with, as my friend Jonas worked from their Malmö branch for some years.

Yankee Publishing, my client for more than 25 years, made the transition from 84 years of family ownership to becoming an ESOP (employee stock ownership plan) in 2019:

ESOPs are a way for family-owned businesses to keep their business intact. Yankee, which has been family owned for 84 years, now has 17 owners including the fourth generation, with no clear successor, Jamie Trowbridge, president and CEO — and a third-generation owner himself — explained to NH Business Review.

“The family members were all in favor of this,” said Trowbridge, who has been working on the plan to transition the company to an ESOP for the last 2 1/2 years. “The alternative would probably have been to sell the assets off to different buyers, breaking up the company, and perhaps moving pieces elsewhere. That was unacceptable to us. We didn’t like the idea of good paying jobs leaving New Hampshire.”

I’m not a part of the ESOP—I’m a vendor, not an employee—but the benefits of the ESOP to me, inasmuch as they’re allowing the company to remain intact and right-scaled, are clear.

02 Jun 03:15

Research may reveal why people can suddenly become frail in their 70s | Ageing

mkalus shared this story from The Guardian.

A groundbreaking theory of ageing that explains why people can suddenly become frail after reaching their 70s has raised the prospect of new therapies for the decline and diseases of old age.

Researchers in Cambridge discovered a process that drives a “catastrophic” change in the composition of blood in older age, increasing the risk of blood cancers and anaemia, and impairing the effectiveness of white blood cells to fight infection.

The scientists believe similar changes occur in organs throughout the body, from the skin to the brain, potentially underpinning why people often age healthily for decades before experiencing a more rapid decline in their 70s and 80s.

“What’s exciting about this work is there may be a common set of processes at work,” said Dr Peter Campbell, a senior author on the study and head of the cancer, ageing and somatic mutation programme at the Sanger Institute in Cambridge. “Ultimately the goal would be slowing or intervening in the ageing process, but at the very least we see an option to use this to measure biological age.”

Ageing is a complex process, but many scientists have suspected that the gradual buildup of mutations in cells degrades the body’s ability to function properly. The latest research suggests that thinking is wrong, or at best incomplete, and places the blame instead on “selfish” cells that rise to dominance in old age.

Working with scientists at the Wellcome-MRC Cambridge Stem Cell Institute, Campbell and his colleagues studied blood cells across the age range from newborns to people in their 70s and 80s. They found that adults under 65 had a wide range of red and white blood cells produced by a diverse population of 20,000 to 200,000 different types of stem cells in their bone marrow.

In the over-65s, the picture was radically different. About half of their blood cells came from a measly 10 or 20 distinct stem cells, dramatically reducing the diversity of the person’s blood cells, with consequences for their health.

Writing in the journal Nature, the researchers explain that while stem cells involved in making blood gather mutations over time, most of these changes are harmless. But problems arise when rare “driver” mutations make stem cells grow faster, often producing lower-quality blood cells as a trade-off. When a person is in their 30s and 40s, the growth advantage of the aberrant stem cells makes little difference, but at 70 and over these fast-growing cells come to dominate blood cell production.
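The exponential-growth argument can be made concrete with a toy model. The numbers below (20,000 clones, 20 driver clones growing 20% per year) are invented for illustration and are not taken from the paper, but they reproduce the qualitative pattern: driver clones are negligible for decades, then abruptly dominate:

```python
def stem_cell_diversity(n_clones=20_000, n_driver=20,
                        driver_growth=0.20, years=50):
    """Toy model of clonal haematopoiesis: most stem-cell clones keep
    a constant output, while a handful carrying 'driver' mutations
    grow exponentially. Returns the share of blood production coming
    from the driver clones at each decade. All rates are invented."""
    normal = float(n_clones - n_driver)   # one unit of output per clone
    driver = float(n_driver)
    shares = {}
    for year in range(years + 1):
        if year % 10 == 0:
            shares[year] = driver / (driver + normal)
        driver *= 1 + driver_growth       # exponential clonal expansion
    return shares

for year, share in stem_cell_diversity().items():
    print(f"age {30 + year}: driver clones produce {share:.1%} of blood cells")
```

Even with constant growth rates, the driver share stays tiny through the simulated 30s and 40s and then climbs steeply, which is the sudden change in blood composition the researchers describe.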

“The exponential growth explains why there is such a sudden change in frailty after the age of 70, why ageing hits at that sort of age,” said Campbell. Faster-growing blood stem cells are linked to blood cancers and anaemia, but also make people less resilient to infections and medical treatments such as chemotherapy.

“What we know about other organ systems is that many of the same observations apply,” Campbell added. The researchers now intend to look for the same process in skin to understand why ageing leads to wrinkles and slower wound healing.

Dr Elisa Laurenti, an assistant professor at the Wellcome-MRC Cambridge Stem Cell Institute and joint senior researcher on the study, said chronic inflammation, smoking, infection and chemotherapy could all produce stem cells with cancer-causing mutations.

“We predict that these factors also bring forward the decline in blood stem cell diversity associated with ageing,” she said. “It is possible that there are factors that might slow this process down, too. We now have the exciting task of figuring out how these newly discovered mutations affect blood function in the elderly, so we can learn how to minimise disease risk and promote healthy ageing.”

02 Jun 00:44

There’s no such thing as data

by Benedict Evans

Technology is full of narratives, but one of the loudest is around something called ‘data’. AI is the future, and it’s all about data, and data is the future, and we should own it and maybe be paid for it, and countries need data strategies and data sovereignty. Data is the new oil!

This is mostly nonsense. There is no such thing as ‘data’, it isn’t worth anything, and it doesn’t really belong to you anyway.

Most obviously, ‘data’ is not one thing, but innumerable different collections of information, each of them specific to a particular application, that aren’t interchangeable. Siemens has wind turbine telemetry and Transport for London has ticket swipes, and you can’t use the turbine telemetry to plan a new bus route. If you gave both sets of data to Google or Tencent, that wouldn’t help them build a better image recognition system.

This might seem trivial put so bluntly, but it points to the uselessness of very common assertions, especially from people outside tech, along the lines of ‘China has more data’ or ‘America will have more data’ - more of what data? Meituan delivers 50m restaurant orders a day, and that lets it build a more efficient routing algorithm, but you can’t use that for a missile guidance system. You might not even be able to use it to build restaurant delivery in London. ‘Data’ does not exist as one, single, unified thing, where you can add every row and table of every different kind to one giant pool and get more and more insight. Creating a ‘national data strategy’ is like demanding a ‘national spreadsheet strategy’ or ‘national SQL strategy’.

Of course, when people talk about ‘data’ they mostly really mean your data - your personal information and the things that you do on the internet, some of which is sifted, aggregated and deployed by technology companies. We want more privacy controls, but we also think we should have ownership of that data, wherever it is.

The trouble is, most of the meaning and hence the value in most of ‘your’ data is not in you but in all of the intersections with other people. What you post on Instagram means very little: the signal is in who liked your posts and what else they liked, in what you liked and who else liked it, and in who follows you, who else they follow and who follows them, and so on outwards in a mesh of interactions between a billion people. If I like your picture, that is not ‘my’ data or ‘your’ data alone, and it’s not worth much without the context of all the other likes and follows. You can’t take that with you, because it’s a lot of other people’s data (and privacy!) as well, and even if you did you probably couldn’t plug it into TikTok, because TikTok has a different mesh and the users don’t overlap.

That is, for many of these systems the value isn’t in the ‘data’ at all but in the flow of activity around it - the meaning is not in the picture or video you post but in how the network reacts to it, and how the product creates and captures that reaction. You could see Instagram, TikTok or PageRank as vast mechanical Turks - we do not (yet) have AI that can understand what every page, picture or video is in itself, and so we need humans - all of us - in the loop somewhere, at the right point of leverage, liking, linking, clicking and watching (and, of course, creating). These are systems, not data, and the value is in the flow.

All of this prompted Tim O'Reilly to say that ‘data isn't oil - it's sand’ - data is valuable only in the aggregate of millions. Indeed, this can be true even on a simple cashflow basis - in Q1 2022 Meta made just 99 cents of free cashflow per daily active user per month.

This also applies even for ‘personal’ data where you can meaningfully say that it’s ‘yours’. Your electricity usage is not about other people, but it’s not valuable by itself, only in the aggregate of all domestic electricity usage in south London or Brooklyn. And DeepMind’s researchers might be able to uncover some new and clinically important correlation from a million chest x-rays - but yours, by itself, doesn’t get them anything, and they didn’t feed those x-rays into AlphaGo. Again, data isn’t one thing.

We’ve been here before: today’s discussions around AI and around data look a lot like discussions around databases in the 1980s. These technologies transform what we can do with information, what questions we can ask, and how organisations can function. When databases were new, we worried, and some of those worries were real, but no-one today asks if America has more SQL, or if it matters that SAP is German. No-one at Davos talks about ‘SQL colonialism’. These technologies are not national strategic assets - anyone can have them, but what for? Databases enabled just-in-time supply chains, and Walmart, and let Apple make iPhones in China - those are the strategic questions. The same for AI, and ‘data’ - it’s not the new oil, just more software, so what do you build with it?

A version of this essay appeared in the Financial Times this weekend.
30 May 16:20

Online responses to Blue Origin infographics attacking SpaceX

by Brian Hurley
Stumbled across these infographics, which are hilarious.
30 May 16:19

Denae Ford Robinson on Online Community Safety in Software Engineering

Denae Ford Robinson: "Online community and safety in software engineering." Denae is a Research Scientist at Microsoft Research in the United States.

Denae Ford Robinson

slides | transcript (English) | transcripción (Español)

30 May 16:08

Lenovo ThinkPad X1 Yoga Gen 7

by Volker Weber

The Lenovo ThinkPad X1 Yoga has long been one of the best laptops you can buy, in my view. A 360-degree hinge, a touchscreen, a pen in its own silo, an excellent keyboard with no lazy compromises, a trackpad plus TrackPoint. I have now switched from a sixth-generation device to a seventh-generation one.

The Intel Core i7 is now a bit newer and faster. That shows up in application benchmarks as a gain of 15 to 20 percent, but it is barely noticeable; both are amply sufficient for my workloads. The trackpad has become a little wider, but the old one was already fine. Stack the two devices on top of each other and, with their identical ports, they are like two peas in a pod.

The essential difference is barely visible: at the top edge of the screen there is a small bump. It houses a new webcam with a resolution of 1080p instead of 720p. That gives a larger field of view and, thanks to the f/2.0 optics, a sharper and brighter image. The difference is clearly visible, especially in poor lighting. To the left of the camera is the light for Windows Hello. Both ThinkPads also have a fingerprint sensor in the power button, and at the top edge there is a small privacy shutter that physically covers the cameras.

ThinkPads come in all sorts of configurations. While my Gen 6 had the maximum configuration, with a glossy 4K display, 32 GB of RAM and a 2 TB SSD, the Gen 7 is specced much more sensibly: a matte Full HD display for longer battery life, 16 GB of RAM and a 512 GB SSD. That matches the memory and storage of my Surface Pro 8.

Lenovo also pointed me to another feature that I had overlooked on the Gen 6: a microphone array with four microphones at the top edge of the screen. As a headset user, I had never noticed it. The Surface Laptop Studio has one too, but the Surface Pro 8 does not. The recording quality differs markedly. Put on a headset so you can hear the difference:

ThinkPad X1 Yoga Gen 7
Surface Pro 8

With the Surface Pro 8 you hear the room’s reverberation much more clearly than with the ThinkPad. The difference becomes even more pronounced when you are surrounded by background noise.

With background noise: ThinkPad X1 Yoga Gen 7
With background noise: Surface Pro 8

These improvements to the webcam and microphones mean you can actually use such a device without an external camera and without a speakerphone or headset, provided you don’t disturb other people in the same room.

That won’t stop me from continuing to rely on the Jabra Panacast 20 and Jabra Evolve2 75. 😉

30 May 16:07

What’s up with SUMO – May 2022

by Rizki Kelimutu

Hi everybody,

Q2 is a busy quarter, with so many exciting projects in the pipeline. The onboarding project implementation is ongoing, the mobile support project is also going smoothly so far (we have even started to scale it to support the Apple App Store), and we also managed to audit our localization process (with the help of our amazing contributors!). Let’s dive into it without further ado.

Welcome note and shout-outs

  • Welcome to the Social Support program, Magno Reis and Samuel. Both are long-time forum contributors who are now spreading their wings into Social Support.
  • Welcome to the world of the SUMO forum to Dropa, YongHan, jeyson1099, simhk, and zianshi17.
  • Welcome to the KB world to kaie, alineee, and rodrigo.bucheli.
  • Welcome to the KB localization to Agehrg4 (ru), YongHan (zh-tw), ibrahimakgedik3 (tr), gabriele.siukstaite (t), apokvietyte (lt), Anokhi (ms), erinxwmeow (ms), and dvyarajan7 (ms). Welcome to the SUMO family!
  • Thanks to the localization contributors who helped me understand their workflows and pain points in the localization process. You shared so much insightful feedback and surfaced things we would not have understood without your input. I can’t thank everybody enough!
  • Huge shout-out to Kaio Duarte Costa for stepping up as a Social Support moderator. He’s been an amazing contributor to the program since 2020, and I believe he’ll be a great role model for the community. Thank you and congratulations!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • I highly recommend checking out the KB call from May. We talked about many interesting topics, from the KB review queue to a group exercise on writing content for localization.
  • It’s been 2 months since we onboarded Dayana as a Community Support Advocate (read the intro blog post here) and we can’t wait to share more about our learnings and accomplishments!
  • For those of you experiencing trouble with uploading images to Mozilla Support, please read this forum thread and this bug report.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in April! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • There’s also the KB call; this one is the recording for the month of May. Find out more about the KB call on this wiki page.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Apr 2022 7,407,129 -1.26%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Mar 2022 pageviews (*) Localization progress (as of Apr 11) (**)
de 7.99% 98%
zh-CN 7.42% 100%
fr 6.27% 87%
es 6.07% 30%
pt-BR 4.94% 54%
ru 4.60% 82%
ja 3.80% 48%
it 2.36% 100%
pl 2.06% 87%
ca 1.67% 0%

* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

-TBD-

 

Top 5 forum contributors in the last 90 days: 

Social Support

Month Total incoming messages Conversations interacted Resolution rate
Apr 2022 504 316 75.00%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah Koshy
  2. Christophe Villeneuve
  3. Magno Reis
  4. Md Monirul Alom
  5. Felipe Koji

Play Store Support

Channel (Apr 2022) Total priority reviews Priority reviews replied Total reviews replied
Firefox for Android 1226 234 291
Firefox Focus for Android 109 0 4
Firefox Klar Android 1 0 0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Tim Maks
  • Selim Şumlu
  • Bithiah Koshy

Product updates

Firefox desktop

Firefox mobile

  • TBD

Other products / Experiments

  • TBD

Useful links:

30 May 15:56

(via Opinion | No One Can Hide From This Weapon in the War in...

29 May 17:16

Monkeypox Is Here and COVID Truthers Are Losing It

mkalus shared this story from The Daily Beast Latest Articles.

29 May 17:16

This is Savannah. She is officially the first dog to circumnavigate the globe. She went from being abandoned as a puppy, to experiencing the entire world with her human, Tom. Earlier this week they were joined by fans as they completed their 7-year journey. 14/10 historic as h*ck pic.twitter.com/7wqWAmDcAG

by WeRateDogs® (dog_rates)
mkalus shared this story from dog_rates on Twitter.

29 May 17:16

Bookmarked Commuting is Morally Bankrupt (by St...

by Ton Zijlstra

Bookmarked Commuting is Morally Bankrupt (by Stowe Boyd)

Commuting is unavoidable if physical presence is needed. Moving around big cities always takes heaps of time (Paolo always says ‘about 50 minutes’ when you ask him how much time it will take to get to some place in London, for Paris my rule of thumb is 30+mins each movement, in Amsterdam I assume 15mins because I cycle there and it’s much smaller)

When I still had an office to go to (1997-2004) it was a 10 minute cycle, later a 5 min walk, each way (same office, moved house). Then, as a consultant visiting client offices (2004-2008), it would regularly be 5 hours each day, but it wasn’t a commute and it was different each week. I would work from home on days I wasn’t visiting client offices, as my employer did not have offices. From 2008 onwards working at home was the default, while aiming for at most 3 days per week visiting client offices. Since moving to the middle of the Netherlands, travel times are below an hour each way to most of the rest of the country, for the one or two days per week I don’t work from home. Despite travel, I haven’t ‘commuted’ for 18 years.

My company has offices in Utrecht (in the walkable city center across from a public transport hub), and we have them both as a meeting space and because our more recent employees want a place to work. We provide the tools and means of course to be able to collaborate and communicate asynchronously and remotely.

We’re 8 people. Four live within cycling distance of the office. The other four within 30-50 minutes each way (by public transport or car). None of us are expected to show up in the office each day. Some are there 3-4 days, others once each week. I am usually there once every other week. We don’t provide lease cars, we do provide public transport cards (which include cycle and car rental/parking when needed for the ‘last mile’ from the nearest train station). Travel time to client offices generally is counted as work time, not a commute.

Commute times are, I think, the result of balancing three building blocks: your own work location and the nature of your job, the place you want to or can afford to live, and the work location and nature of your partner’s job. Usually those three don’t shift simultaneously, meaning their arrangement is almost by definition suboptimal. Generally it seems people optimise for a commute below an hour.

Is commuting really morally bankrupt? Not by definition. Place of residence and the jobs you and your partner want are individual choices (though most certainly not free of constraints).
The moral choices of my role as employer concern less the commute and more the demands we make of people to be at our office, the expectations we have w.r.t. how many days per week a team member works on client projects and at their offices, and what expectations of clients we do or don’t cater for. E.g. we don’t take on projects where the client expects us to be there 5 days per week, and mostly aim for 3 days of client work per week as a healthy balance with other tasks. Office presence mostly is the result of wanting to be there. Commute time is the result of those choices and their morality. Because considerations other than the employer’s play a role in commute time, it’s mostly not even a good proxy for the employer’s morality. Unless the employer’s choices are the dominant factor determining the commute, whether by deliberate choice or as a consequence of inconsiderateness. That’s when an employer doesn’t fulfil its duty of care for its employees.

29 May 17:15

B.C. company opens Canada's largest licensed psychedelic mushroom growing facility

B.C.-based company Optimi Health has harvested its first cultivation of psilocybin mushrooms at its Health Canada-licensed facilities in Princeton, B.C., positioning itself as a major player in the burgeoning psychedelic sector.

The $14-million venture consists of two 10,000-square-foot facilities with a combined total of 10 growing rooms that can produce approximately 2,000 kilograms of dried psilocybin mushrooms a month, according to Optimi's head cultivator, Todd Henderson.

"[It's] a phenomenal scale ... There is nobody else in the world doing what [we're] doing here right now," Henderson said.

"Thousands of years ago the Chinese and the Indigenous people were using these in solving all kinds of issues on their own and here it is thousands of years later, we are reverting back to it."

Psilocybin mushrooms, commonly called magic mushrooms, are a controlled substance in Canada, making it illegal to grow, possess or sell unless authorized by Health Canada.

'Astounding to see what the research is showing'

This month the agency granted Optimi a licence to produce the mushrooms as well as a research exemption to extract the psychedelic components of psilocybin and psilocin for use in clinical trials, according to the company. 

Optimi says its facilities are built to meet Good Manufacturing Practice (GMP) requirements, a quality assurance standard required by Health Canada to produce psilocybin for clinical research.

"We are the only GMP organic facility in the world that can supply what we do. We have contacts all over the world for people who would like to do research with psilocybin," said Leigh Grant, director of operations for Optimi Health.

The idea to build the facility came out of a desire to explore the medicinal benefits of natural products, said Bryan Safarik, chief operating officer.

"It really is astounding to see what the research is showing as far as medicinal benefits [of psilocybin mushrooms] and where this could all go," Safarik said.

Canadian entrepreneur Chip Wilson, who founded Lululemon, is an adviser to the company, and his son, J.J. Wilson, is Optimi's board chair.

The company is also growing non-regulated varieties of fungi including lion's mane and chaga mushrooms, which are commonly found in health food stores. 

Therapeutic benefits

The main product remains psychedelic mushrooms, and Optimi hopes to position itself as a major supplier on the global market for medical-grade psilocybin.

In recent years, scientists have been studying the therapeutic benefits of psychedelic mushrooms to treat everything from addiction to easing end-of-life anxiety for terminally ill patients. 

University of British Columbia psychology professor Zach Walsh has researched psilocybin for the past decade, including a recent study on microdosing the compound where people took repeated small dosages of mushrooms to treat depression and anxiety. 

"There's some growing evidence that psilocybin can help resolve treatment-resistant depression in a way that's as effective or perhaps more effective even than traditional anti-depressants," Walsh said.

"People have mystical experiences on psilocybin ... Depression can be a loss of meaning in life and a loss of sense of purpose and so having those kind of profound experiences can really revitalize folks."

In January, Health Canada restored its "Special Access Program" — abolished under former prime minister Stephen Harper in 2013 — allowing physicians to request access to restricted drugs like psilocybin to treat patients with mental health conditions such as post-traumatic stress disorder, depression and anxiety.

29 May 17:15

Love, Death and Robots - Season 3

by Michael Kalus
Love, Death and Robots - Season 3

The current state of Science Fiction on “TV” is pretty… miserable. What a change from a decade ago when there was at least an attempt to make decent SciFi, if at times corny. So it is nice to see that Netflix manages to put out an anthology series that is not only entertaining and, mostly, well written but also shows us what Computer Graphics are capable of these days.

So let’s dig into Season 3 of “Love, Death and Robots”.

Episode 1: Three Robots Exit Strategies

Love, Death and Robots - Season 3

This is probably the weakest of all the new episodes. We follow a bunch of robots around as they commiserate over the way humanity met its end. It’s well animated and the robots are “likeable”, but the writing is very much on the nose. This shouldn’t really be a surprise: it’s written by John Scalzi, whom you either like or you don’t.

Rating: 5/10

Episode 2: Bad Travelling

Love, Death and Robots - Season 3

Now here is a high point of what the show can do and what good writers can deliver. Bad Travelling was written by Neal Asher and for all the graphic violence it is also an engaging story about the hard choices you sometimes have to make.

Rating: 9/10

Episode 3: The very pulse of the Machine

Love, Death and Robots - Season 3

This was not what I expected. I got some Heavy Metal vibes there, though it stays away from that craziness. The story is engaging and the ending is not quite what I anticipated. It is based on a short story by Michael Swanwick, whom, I have to admit, I don’t know and have never read.

Rating: 7/10

Episode 4: Night of the Mini Dead

Love, Death and Robots - Season 3

This is just fun. There is utter violence and chaos, but because of the way it is presented it is also incredibly funny. It is short and sweet, and was written by Jeff Fowler and Tim Miller specifically for the show.

Rating: 7.5/10

Episode 5: Kill Team Kill

Love, Death and Robots - Season 3

Watching this, I initially thought it was probably written by Garth Ennis, though it is “too comical” to really be his writing, and it wasn’t. It was written by Justin Coates, who by the looks of it has written quite a bit of horror. It is a bit Rambo meets crazy science. Entertaining for sure.

Rating: 6/10

Episode 6: Swarm

Love, Death and Robots - Season 3

Another highlight, with extremely good animation and an interesting question: are we really top dog? The ending had me wanting more. I actually do want to know how it ends / continues. At this point I feel teased and really hope they continue the story.

It was written by Bruce Sterling and it shows in the themes and set-up. More please.

Rating: 8.5/10

Episode 7: Mason’s Rats

Love, Death and Robots - Season 3

Wallace and Gromit meets Adult Swim is probably the best way to describe it. A farmer has a rat problem and tries all kinds of technology to get rid of the pests. Until….

Funny, with a heart. The story was, like “Bad Travelling” written by Neal Asher, but totally different in tone.

Rating: 7/10

Episode 8: In Vaulted Halls Entombed

Love, Death and Robots - Season 3

A Lovecraftian story, with animation that has you wondering a lot of the time whether it is actually CGI or live action. The ending is also true to Lovecraft. The whole episode was surprising in a few ways. It is based on a short story by Alan Baxter.

Rating: 7/10

Episode 9: Jibaro

Love, Death and Robots - Season 3

Jibaro, together with “Bad Travelling”, is the highlight of this series for me: the story itself, the visuals, and the overall presentation. As with “In Vaulted Halls Entombed”, the visuals really have you wondering whether it is real or not.

Jibaro was written by Alberto Mielgo who also wrote “The Witness” in the first season. Both episodes are at the top end in regards to story and execution.

Rating: 9/10

Conclusion

Season 3 of Love, Death and Robots is definitely worth a watch. There are no bad episodes in this season. My dislike for Scalzi’s writing probably has the most to do with me not finding a lot of joy in the first episode. But otherwise this is a highlight in the SciFi desert we currently live in.

I do hope that Netflix gives it another series, but we’re talking Netflix here and I have my doubts.

29 May 17:14

2022-05-27 BC tiny

by Ducky

Testing

Here’s this week’s wastewater charts, from my buddy Jeff’s spreadsheet:

Fraser is now clearly going down. Vancouver might be going up, but that might just be noise. North Shore, Richmond, and Langley are looking stable.

Note that while the levels are declining, they still aren’t good. They are still two or three times what they were during Delta:

Vaccines

The vaccines by age over time graph did not come out this week.

29 May 17:13

Field-Value Automata

When I introduced Quamina, I described the core trick: You prepare an arbitrary JSON blob for automaton-based matching by “Flattening” it into a list of name/value pairs, then sorting them in name order. Today, a closer look at how you work with the flattened data.

This is one of a series of essays on programming topics motivated by my work on eventing services at AWS and, since then, on Quamina, a content-filtering library implemented in Go (GitHub).

By example

Consider the following Pattern:

{"x": ["a"], "y": [1, 2]}

It should match any JSON blob containing a top-level x field whose value is the string "a" and a top-level y field whose value is 1 or 2.
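To make the flattening step concrete, here’s a minimal Go sketch of turning a one-level JSON object into name/value pairs sorted by name. The `pair` type and `flatten` function are hypothetical illustrations, not Quamina’s actual code, which also has to deal with nesting, arrays, and typed values:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// pair is one flattened name/value entry (illustrative only).
type pair struct{ name, value string }

// flatten unmarshals a one-level JSON object and returns its
// fields as strings, sorted by field name.
func flatten(blob []byte) ([]pair, error) {
	var m map[string]any
	if err := json.Unmarshal(blob, &m); err != nil {
		return nil, err
	}
	pairs := make([]pair, 0, len(m))
	for k, v := range m {
		pairs = append(pairs, pair{k, fmt.Sprintf("%v", v)})
	}
	sort.Slice(pairs, func(i, j int) bool { return pairs[i].name < pairs[j].name })
	return pairs, nil
}

func main() {
	// An event that should match the pattern above, plus a field
	// the pattern doesn't mention.
	pairs, _ := flatten([]byte(`{"y": 2, "x": "a", "z": true}`))
	for _, p := range pairs {
		fmt.Println(p.name, p.value)
	}
}
```

Running this prints the fields in name order (`x`, then `y`, then `z`), which is the form the automaton consumes.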

Since we sort the fields of incoming events in order of name, we know how to build the automaton. There are two things that aren’t obvious:

  1. Matching each field is a two-step process: First, see if the field name matches, then the value.

  2. This pattern doesn’t care about any fields except for the top-level x and y. So the automaton has to bypass all the others.

Given that, here’s the little automaton that could:

simple finite automaton

Notes

There are two kinds of things in the picture: field matchers and value matchers, labeled in the diagram with names beginning fm and vm. In Go code, they’re implemented by types named fieldMatcher and valueMatcher.

The matchers transition straightforwardly, looking for an x field with value "a" and then a y field with value 1 or 2. But each of these matchers has a * representing “anything else” that loops back to the field-matcher state. This is how fields that don’t appear in the pattern are ignored. The automaton loops in fm0 until it sees an x field, then moves along to fm1 if the field’s value is "a", otherwise looping back to fm0. If it does get to fm1, it’ll loop there forever until it sees a y field with value 1 or 2, transitioning to fm2 or fm3 respectively and bypassing any other fields we don’t care about.

The big red !T on fm2 and fm3 says reaching either state means you’ve matched the pattern T that was added in the code sample above.
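As a sketch of those two kinds of matchers and the two-step hop between them, with illustrative fields rather than Quamina’s real internals:

```go
package main

import "fmt"

// Stripped-down sketches of the two matcher kinds; the real
// Quamina types carry much more machinery than this.
type valueMatcher struct {
	transitions map[string]*fieldMatcher // value -> next field-matcher state
}

type fieldMatcher struct {
	transitions map[string]*valueMatcher // field name -> that field's value matcher
	matches     []string                 // patterns matched by reaching this state
}

func main() {
	// Hand-build the automaton for {"x": ["a"], "y": [1, 2]}.
	fm2 := &fieldMatcher{matches: []string{"T"}}
	vm1 := &valueMatcher{transitions: map[string]*fieldMatcher{"1": fm2, "2": fm2}}
	fm1 := &fieldMatcher{transitions: map[string]*valueMatcher{"y": vm1}}
	vm0 := &valueMatcher{transitions: map[string]*fieldMatcher{"a": fm1}}
	fm0 := &fieldMatcher{transitions: map[string]*valueMatcher{"x": vm0}}

	// Matching one field is the two-step process: name first, then value.
	next := fm0.transitions["x"].transitions["a"]
	fmt.Println(next == fm1, fm2.matches) // true [T]
}
```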

Problem

I’d used finite automata more than once before this, but on every other occasion I was parsing a programming language or Internet data format, which have none of this “skip any fields you’re not looking for” crap. Which meant that the state machine I drew on the whiteboard in my AWS office looked like the one just above, but without those loopbacks labeled *. Yeah, even though they’re implicit in the design. What can I say, I was in a hurry.

So when I sat down to write the code to traverse the automaton, I just couldn’t figure out how to make it work. I felt, as I often feel, that I’d gotten in over my head.

Since I wasn’t smart enough to write down the correct automaton, I had to figure out how to code around the one I had. Sitting on the living-room couch in my Mom’s over-full house at Christmas 2014, I came up with this. Go check it out if you like reading code, but let me try to explain it…

In English

You have an automaton that looks like the picture above, only without the * back-links. It has a start state. You don’t know which fields in the event (if any) are going to match the pattern. You do know that any match has to begin with the start state.

So, you make a little type called a Proposal, which says, state S might match field F. And you have a pool of Proposals to work on. To start with, that pool contains one proposal for each field, suggesting that the start state might match it.

Then you turn a loop loose that runs as long as there are any proposals in the pool. It reads a proposal and tries matching its state to its field. If it works, which means the field name/value combo transitions to another state, you toss proposals for that state and all the following fields back into the pool. Let’s work a quick example.

Suppose you have 3 fields (F0, F1, F2), so you load the pool with proposals for F0/Start-State, F1/Start-State, and F2/Start-State. Let’s say that neither F1 nor F2 matches the Start-State, but F0 does, transitioning to State-X. So you toss proposals for F1/State-X and F2/State-X into the pool. F1 doesn’t match State-X but F2 does, transitioning to State-Y. State-Y has an annotation that you’ve matched a pattern, so you have something to return to your caller. The pool is now empty and there are no more fields after F2 to build new proposals, so you’re done.

The fact that the fields are sorted by name really matters here; as you work your way through the automaton, you never have to worry about transferring back to a previous state.
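Here’s a toy Go version of that proposal-pool loop. To keep it short it collapses the field-matcher/value-matcher pair into a single state keyed by name/value combos; the names (`proposal`, `match`, and so on) are illustrative, not Quamina’s actual identifiers:

```go
package main

import "fmt"

type field struct{ name, value string }

// state is a node in a simplified automaton: each name/value
// combo may transition to another state, and a state may carry
// the names of patterns matched on arrival.
type state struct {
	transitions map[field]*state
	matches     []string
}

// proposal says: state s might match the field at fieldIndex.
type proposal struct {
	s          *state
	fieldIndex int
}

// match seeds one proposal per field against the start state,
// then drains the pool: whenever a proposal transitions, it
// proposes the new state against every later field.
func match(start *state, fields []field) []string {
	var found []string
	pool := make([]proposal, 0, len(fields))
	for i := range fields {
		pool = append(pool, proposal{start, i})
	}
	for len(pool) > 0 {
		p := pool[len(pool)-1]
		pool = pool[:len(pool)-1]
		next, ok := p.s.transitions[fields[p.fieldIndex]]
		if !ok {
			continue // this proposal didn't pan out
		}
		found = append(found, next.matches...)
		for i := p.fieldIndex + 1; i < len(fields); i++ {
			pool = append(pool, proposal{next, i})
		}
	}
	return found
}

func main() {
	// The automaton for {"x": ["a"], "y": [1, 2]}, name/value combined.
	fm2 := &state{matches: []string{"T"}}
	fm1 := &state{transitions: map[field]*state{{"y", "1"}: fm2, {"y", "2"}: fm2}}
	fm0 := &state{transitions: map[field]*state{{"x", "a"}: fm1}}
	// Event fields, already sorted by name; "w" is skipped naturally.
	fmt.Println(match(fm0, []field{{"w", "5"}, {"x", "a"}, {"y", "2"}})) // [T]
}
```

Note that the `*` loopbacks from the diagram never need to be represented: a proposal that fails to transition is simply dropped, and the later fields were already proposed against the earlier state.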

It’s all in less than fifty lines of code (once again, starting here), which I’m not going to try to squeeze into this skinny blog column. If you’re looking at that code, please ignore (for the moment) the static about “exists:false matches” and “Array conflict”; but both are maybe interesting enough to get a write-up later in this series.

I don’t know if this approach to traversing automata (a) has been investigated, (b) has a name, or (c) is any good. I do know that it worked really well in practice, handling many millions of events per second in multiple AWS services.

I’ve done a bit of pen-and-paper analysis and don’t think the amount of work is meaningfully different from a conventional traversal. But it did occur to me that in principle this approach could be made multithreaded; you could process multiple proposals in parallel on different cores. But anyhow, the profiler says this part of traversing the automaton is hardly visible as part of the total compute. So I left it this way in Quamina for sentimental reasons.

Tables?

In the first cut, the field matcher was just map[string]*valueMatcher and the value-matcher was map[string]*fieldMatcher. It worked OK and the fieldMatcher is still like that but, for reasons I’ll write about later, that’s a bad choice for the valueMatcher.

Smarter than me

At some level, Quamina is. One symptom is places like this in the code, distinguished by extended verbose comments. These are where I got stuck and bashed my head against the wall until I got something that worked, and knew I’d have no chance of understanding it later (nor would any subsequent visitor) unless I could squeeze out a coherent English explanation. As Prof. Feynman said, if you can’t explain something in simple language, you don’t really understand it. The observation that computer programmers can build executable abstractions that work but that they then have trouble understanding is not new and not surprising. Lots of our code is smarter than we are.

The code is also smart because at AWS I had extremely talented collaborators who added things that I didn’t think were possible until they worked. The proportion of the useful ideas in here that are actually mine is probably less than 50% now.

Finally, we had the insane luxury of running this in production against millions-per-second event flows and watching what broke. And of hearing from other teams using it about what they had managed to break. I guarantee: Nobody is smart enough to predict the behavior of software under this kind of stress without experiencing it.

So if you are face-to-face with a piece of production software that’s new to you, and find yourself feeling stupid, don’t worry about it. That software represents a collection of flashes of peak-energy inspiration from people more or less as smart as you, and there were probably quite a few of those people, and it’s been enhanced with the wisdom that comes only from being used at length for real work.

Of course it’s smarter than you. That’s OK, you can still improve it.

News

As of late May, Quamina has picked up a couple of collaborators with way more GitHub expertise than me, and its repo is growing all sorts of bells and whistles, mostly on the CI/CD front. Which means that I hope to do a release next month and see if anyone actually wants to use this. Also, more stuff to write about!

29 May 17:11

Photo



29 May 17:11

FujiFilm’s Instax Mini Evo is fun regardless of its specs

by Brad Bennett

Fujifilm's Instax line of cameras has taken the world by storm over the past decade, and the Instax Mini Evo brings several of the company's best ideas together in one compact device.

It's a ton of fun to use, and it's one of the only cameras I've brought out with my friends that they asked questions about and used. It has some drawbacks since it only features a tiny smartphone-sized sensor, but if you're looking for the best Instax around, this is it.

What makes the Mini Evo so special?

This camera is both cute and cool at the same time. It bridges classic Fuji vintage styling with the small size and large (if kind of fake) lens of an Instax.

Beyond that, this is one of the Instax models that also features a screen on the back and a MicroSD card slot so it can be used as a standard digital camera. The low-res screen isn't anything special, and photos shot with the Mini Evo look better when you offload them to a phone or computer. They're not nearly as good as my iPhone 13 Pro or any other modern smartphone, but they're close in some regards.

You can, of course, also load a standard Instax Mini film cartridge and print out little pictures on the go. This is a lot of fun, and unlike a standard Instax, the digital nature of the Mini Evo means you can take multiple pics and only print off the best ones. Anyone that's ever printed off a blurry photo and wasted a print (which costs about $1) knows how helpful this is.

My printer started to make a pretty annoying mechanical whine whenever it prints, which is sad to see from an Instax this expensive, but it hasn't stopped working yet. I'll also mention that the first few times I loaded the film, I needed to put it in, close the case, and then squeeze the camera to actually get it to click into place. Nothing broke during this process, but it's disappointing nonetheless.

Still, regardless of all of this, the Mini Evo is a blast to use and feels nicer in your hand than you'd expect. The software could use some work, and a larger sensor and faster processor will go a long way toward improving the inevitable 2nd-gen version of this camera in a few years.

The Instax Evo app

There's also a connected app for this Instax, which means you can send photos from your phone to be printed on it and vice versa. However, there's a weird catch that only allows you to send images from the camera to your phone if you've printed them. It costs roughly $1 to get images from your camera to your phone, which is ridiculous. If you're using a MicroSD card, you can take it out and transfer pictures the old-fashioned way, but it's pretty annoying that Fujifilm has the tech to make this easy, but is essentially pay-walling it.

I'll admit that this wasn't a feature I used often. But, the one time that I wanted to edit a photo on my phone, having to print it off before I could edit it felt like I was being taken advantage of -- especially since I then needed to print it off again post edit.

You can also use the app for remote shooting, which is fun and works well enough to line up shots.

A camera worth your time?

If you're a fan of the Instax brand or someone looking to get into instant photography, this is a great device. However, its price tag reflects that.

The Mini Evo clocks in at $250, a little more than buying a standard Instax Mini and a separate Instax printer to connect to my phone, making it a tough sell in a sense.

I'd still recommend most people go for the Mini Evo over two separate devices due to convenience, but it would have been nice to see Fuji price this a little more competitively with its other products. For instance, I think it would be a must-buy at $200.

I still think I'm going to carry my point and shoot 35mm film cameras with me more than the Instax Mini Evo since I like the look of real film more than instant prints, but there's no denying that this is the best Instax yet.

You can buy Fujifilm Mini Evo for $250.

MobileSyrup utilizes affiliate partnerships. These partnerships do not influence our editorial content, though we may earn a commission on purchases made via these links that helps fund the journalism provided free on our website.

28 May 02:27

The apps I use to read and write for this blog

Because I’ve been asked a couple times recently:

I keep up with 345 websites and newsletters using an app called NetNewsWire. It’s free and I have it on my Mac, iPhone, and iPad. It’s a type of app called a newsreader.

When I find a blog or website I want to follow, I look for an RSS feed (sometimes just called a “feed”) and subscribe. This is also free. NetNewsWire grabs the feeds periodically, and presents the articles so that I can read them without ads or design.

(There are many newsreader apps out there. NetNewsWire is my favourite because it’s clean, easy, and fast.)

(Learn the basics about using RSS at AboutFeeds.com. The feed for this blog is here.)

How does NetNewsWire keep my subscriptions in sync between my various devices? When you run the app for the first time, it asks you to set up an account with one of various providers. It’s a bit like the way your email app will ask you who hosts your email. One free option for syncing is iCloud. I didn’t do that. Instead I first created an account with Feedbin for which I pay $5/month. NetNewsWire uses Feedbin to sync my devices.

I pay for Feedbin for one big reason: it gives me a secret email address that I can forward anything into. Here’s how I use it:

I read a lot of email newsletters, and email newsletters don’t have RSS. But my email client is a terrible place to read long articles. My inbox is full of distractions and I often miss things. So when I subscribe to a newsletter, I also set up an auto-forward rule from Gmail to my secret Feedbin email (and auto-archive the original email). Now newsletters appear in Feedbin, and therefore I get to read them in NetNewsWire.

Here’s my subscription list (you’ll find a bunch of blogs there you can also subscribe to).

How I use NetNewsWire:

I don’t read everything.

NetNewsWire has a “smart feed” called Today which only shows articles that have been published today. I look at that multiple times daily, then occasionally at particular favourite blogs to see if I’ve missed anything. I have about 6,000 unread articles. That’s fine.


I do almost all my writing in an app called Ulysses. It’s on my Mac, iPhone, and iPad and keeps in sync with iCloud.

It keeps short text notes in an overall library, organised by folder. I’ve used it for years. It’s well-designed, simple but not over-simple, and reliable.

I have top-level folders for

  • Work
  • Projects (organised by year then project)
  • This blog
  • And miscellaneous others, such as cooking, writing fiction, talks, and so on.

(I don’t keep track of to-dos much, but when I need to I use an app called Things which I have on all my devices. It’s also well-designed, structured without enforcing too much structure, and simple without being too simple. I organise tasks by project, and tag them by person and by whether I’m expecting to hear from them or they’re expecting to hear from me. This keeps general to-dos out of my notes.)

Inside the top-level folder for this blog the main two folders are:

  • Links
  • Posts, broken into Drafts and Posted.

My writing process is as follows.

Whenever I see an interesting link, on the web or reading subscriptions, I use the share icon and save it to my Links folder in Ulysses. I copy and paste a little context, or add a few words to make sure I can find it later. I don’t sweat it with tags or detail. I save maybe a dozen links a week.

(My long history of keeping these links is also how I put together talks, or do invention work when I’m in client work mode. I don’t have to be smart about a topic, I just have to have been keeping notes for longer than most people would think reasonable.)

Whenever I have an idea for a blog post, I make a quick note in Drafts. This might be a single line or it might be a paragraph. Drafts tend to start with an observation, a question, me trying to explain something to myself, or a connection between two ideas. Ideas for posts appear suddenly and disappear just as fast – so I’m diligent about writing them down immediately even if I’m half awake or walking down the street. I write down maybe 3 or 4 ideas a week.

When I’m in the mood to write, I browse through my collected links and my drafts and wait for something to catch my eye. This is rarely at the same point as capturing an idea.

Often what happens is that two drafts rhyme with one another, so I bring the two together.

Another frequent occurrence is that I can’t think of anything to write, so I start by trying to explain in plain language why I find something interesting. I might get a few paragraphs along before I feel like the post isn’t going anywhere, so then I stop but I leave the expanded text in the draft.

I do this probably daily, 20 or 30 minutes expanding notes, trying bits of narrative, connecting ideas, and generally reminding myself of what’s in my notes.

Two or three times a week a post gets all the way to the end. Sketching and thinking my way through an idea is a different process to actually writing. I often surprise myself in this process – I usually don’t write with an endpoint in mind. It’s more like improv, and my opinion sometimes turns 180 through the process. I barely copy edit when done. Once over quickly, that’s it.

Then I title it and it’s done.


My blog is a homegrown setup and it doesn’t include an editor, web-based or otherwise. Posts are in Markdown (for the last decade; the first decade they were in XML). It’s all templated and I wrote my own server-side app for rendering.

When I write a post I save the text file in a directory with today’s date and add it to the code repository using git. On my Mac that means using Terminal. On my phone and iPad I use an excellent app called Working Copy which is a git client. The code gets pushed to Github. Then I connect to my server over SSH and run a script which deploys the latest code, including the new post.
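That workflow can be sketched in code. This is a hypothetical illustration only — the directory layout, remote host, and deploy script name are assumptions, not my actual setup:

```python
# Hypothetical sketch of the publish flow: save the post in a dated
# directory, commit and push with git, then deploy over SSH. Paths,
# the host "blog.example.com", and "deploy-latest.sh" are made-up names.
import datetime
import pathlib
import subprocess

def dated_post_dir(repo: pathlib.Path, when: datetime.date) -> pathlib.Path:
    """Directory for a post, e.g. <repo>/posts/2022-05-27."""
    return repo / "posts" / when.isoformat()

def publish(repo: pathlib.Path, post: pathlib.Path) -> None:
    # Copy the Markdown file into today's dated directory.
    target = dated_post_dir(repo, datetime.date.today())
    target.mkdir(parents=True, exist_ok=True)
    (target / post.name).write_text(post.read_text())
    # Commit and push the new post.
    subprocess.run(["git", "-C", str(repo), "add", "posts"], check=True)
    subprocess.run(["git", "-C", str(repo), "commit", "-m", f"Post {post.name}"], check=True)
    subprocess.run(["git", "-C", str(repo), "push"], check=True)
    # Deploy: run the server-side script that pulls and renders the new post.
    subprocess.run(["ssh", "blog.example.com", "./deploy-latest.sh"], check=True)
```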

This is a pretty baroque process. I wouldn’t recommend it to anyone. But I like controlling my own code and having the ability to tweak the way my blog works, so it’s good for me.


I’m very often not happy with what I write. Sometimes I’m super excited. However I also know (from experience) that my feelings about a particular post do not correlate well with how it will be received. So out into the world it goes, either way.

27 May 21:11

Architecture Notes: Datasette

I was interviewed for the first edition of Architecture Notes - a new publication (website and newsletter) about software architecture created by Mahdi Yusuf. We covered a bunch of topics in detail: ASGI, SQLite and asyncio, Baked Data, plugin hook design, Python in WebAssembly, Python in an Electron app and more. Mahdi also turned my scrappy diagrams into beautiful illustrations for the piece.

Via @arcnotes

27 May 21:10

Solana loses track of time

The Solana blockchain clock drifted about 30 minutes behind real-world time on May 26, as a result of slower-than-usual slot times. Solana's status page read that "this has no impact on performance or network operations", though The Block noted that this time drift could result in smaller staking payouts.

Blockchain timekeeping is also a selling point of Solana, which talks up its "proof of history" algorithm in a blog post where Solana Labs co-founder Anatoly Yakovenko says, "our clocks never drift".
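As a back-of-the-envelope illustration of how slow slots turn into clock drift: Solana's published target slot time is 400 ms, and the on-chain clock advances one target slot per slot, so when slots actually take longer, the chain clock falls behind wall-clock time. The 550 ms "actual" slot time below is an assumed illustrative figure, not a measured one:

```python
# Illustrative arithmetic only. TARGET_SLOT_S is Solana's published 400 ms
# target slot time; the 550 ms actual slot time used below is an assumption.

TARGET_SLOT_S = 0.4  # Solana's target slot time: 400 ms

def drift_seconds(elapsed_wall_s: float, actual_slot_s: float) -> float:
    """Seconds the on-chain clock falls behind wall-clock time.

    The chain clock advances TARGET_SLOT_S per slot, while each slot
    actually takes actual_slot_s of wall time.
    """
    slots = elapsed_wall_s / actual_slot_s
    return elapsed_wall_s - slots * TARGET_SLOT_S

# With 550 ms slots (assumed), the clock loses about 0.27 s per wall-clock
# second, so a 30-minute (1800 s) drift accumulates in under two hours:
print(drift_seconds(6600, 0.55))  # roughly 1800 seconds behind after 6600 s
```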

27 May 15:13

well-placed periods and commas

by peter@rukavina.net (Peter Rukavina)

Sarah Miller:

Badger told me about a book he read with no punctuation. I said fuck those no punctuation people. I read him a paragraph from a P.G. Wodehouse story which wasn’t even that good, but it did have well-placed periods and commas. I told the Badger how I was hoping I was truly done eating animals because I saw a sad photo of two cows. “I really don’t ever want to eat another cow again,” I said. “They just look so much like my Ruthie, with their sweet eyes.” I tried not to cry and I succeeded. My Fake Son was parking their truck and we waved and they came over and told us about all the people they know who have Covid. We all went for a walk around the block and talked about times we’d projectile vomited. My Fake Son said they’d done it once looking into their friend’s eyes, their story was the best.

I simply love everything about that paragraph.

27 May 14:54

week ending 2022-05-26 BC

by Ducky

Statistics

Reminder: people over 70 and clinically extremely vulnerable people are overrepresented in cases (because those are the ones who are allowed to get PCR tests), and hospitalizations/deaths (because they are most vulnerable). They are also the exact demographic which has been getting boosters, so it is possible that that cohort is doing better now while everybody else is doing worse. We just don’t know. 🙁

The good news is that the positivity rate is going down in most age groups (see charts below), so maybe the case rate is going down.

Anecdotally, there’s a lot of COVID-19 out there. This article gathers some anecdotes about businesses having trouble with people being out sick.

Statistics

Today’s BC CDC weekly report says that in the week ending on 21 May there were: +282 hospital admissions, +1,358 cases, +42 all-cause deaths.

Today’s weekly report says that last week (ending on 14 May) there were: +388 hospital admissions, +1,644 cases, +86 all-cause deaths.

Last week’s weekly report said that last week (ending on 14 May) there were: +334 hospital admissions, +1,645 cases, +59 all-cause deaths.

Charts

From the BC CDC weekly report:


From the BC CDC Situation Report:


27 May 14:54

Videogrep

by jwz
mkalus shared this story from jwz.

How to Make Automatic Supercuts.

The author also teaches a course in "Scrapism" at the School for Poetic Computation:

"Scrapism" is the practice of web scraping for artistic, emotional, and critical ends. By combining aspects of data journalism, conceptual art, and hoarding, it offers a methodology to make sense of a world in which everything we do is mediated by internet companies. [...]

In this class participants will learn how to scrape massive quantities of material from the web with Python, and then use this source material in projects that probe the politics and poetics of the internet. We will cover multiple web scraping techniques, as well as different techniques for manipulating and presenting textual content.
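The basic technique the class describes — pulling structured material out of web pages with Python — can be sketched with nothing but the standard library. A real scraper would fetch and crawl pages (with urllib or requests); here the HTML and its URLs are made-up and inlined so the example is self-contained:

```python
# Minimal link-extraction sketch using Python's stdlib HTML parser.
# The page content and URLs below are invented for illustration.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag it sees."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = (
    '<p>A <a href="https://example.com/supercuts">supercut guide</a> and '
    'a <a href="https://example.com/scrapism">class syllabus</a>.</p>'
)
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # every href found on the page, in document order
```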

Previously, previously, previously, previously, previously, previously, previously.

27 May 14:53

Ginni and Clarence Thomas are telling us exactly how they'll steal the 2024 election.

mkalus shared this story.

On Friday, the Washington Post broke the news that Ginni Thomas sent emails to Arizona elections officials asking them to set aside the will of the voters and submit a slate of fake electors who would support Donald Trump, even after he demonstrably lost the 2020 presidential election. The news barely caused a ripple because there is seemingly nothing to be done about Justice Clarence Thomas’ refusal to recuse in cases that materially affect his spouse, even as he has already decided several matters surrounding the 2020 election … and also because that same spouse had written far more inflammatory, QAnon-style texts to Trump’s chief of staff urging him to set aside the 2020 contest, and nothing was done about that either.

In reality, of course, there is plenty to be done about Supreme Court justices who decline to be bound by federal recusal statutes and judicial ethics canons. But unless and until there is a ravenous public appetite for reforms to the court, we will continue to watch this play out in mute horror, as though it’s a Netflix special about the Tudors, and the only recourse we have is to return to our mutton farming. Reforms aside, there is another more crucial lesson from the Thomases’ tag-team efforts to seat a president who lost an election: What’s past is prologue, and what was done sloppily in 2020 is being mapped out by experts for 2024.

It’s easy to dismiss the demented texts and emails from a sitting justice’s spouse to public officials who have long-standing professional connections to that justice as zany conspiracy theorizing. Ginni Thomas can be lumped into the QAnon weirdos bucket with Cleta Mitchell, Sidney Powell, Rudy Giuliani, and Mike Lindell—hapless insurrection enthusiasts who were unable to marshal a single winning argument in an actual court of law after the 2020 election. But the other way to look at the texts and emails that were pinging around the highest echelons of power and influence in the weeks after November 2020 is as a warning and road map for what is already being put into place for the next presidential contest. But next time, the lawyers won’t be sweating brown makeup or referencing crackpot theories of Italian election meddling.

What Thomas was emailing was a prefabbed piece of legal advocacy that urged Arizona state officials to “Please stand strong in the face of political and media pressure. Please reflect on the awesome authority granted to you by our constitution. And then please take action to ensure that a clean slate of electors is chosen for our state.” That isn’t just words. It’s actually a theory underlying the subversion of an entire presidential election. It’s also a theory her husband has endorsed as a matter of constitutional law. It didn’t work in 2020 because the legal and political structures to support it weren’t in place at the time. Those pieces are being put into place as we type this.

Recall, for instance, that back in November of 2020, it wasn’t clear there were five votes at the Supreme Court to support the proposition that state legislatures could simply set aside election results they deemed tainted by impropriety. Recall that when lawyer/insurrectionists John Eastman (a Thomas clerk) and Jeffrey Bossert Clark floated that notion at the White House and elsewhere, serious DOJ attorneys told them in no uncertain terms to go away. Recall finally that one of the lawmakers in Arizona, Shawnna Bolick, is married to a state Supreme Court justice and is parent to Clarence Thomas’ godchild. Bolick, as Jane Mayer of the New Yorker reported in 2021, later introduced legislation that “would enable a majority of the legislature to override the popular vote … and dictate the state’s electoral college votes itself.” In other words, what Bolick couldn’t lawfully do in 2020 is a thing she hopes to do under color of law in future. Oh, and Bolick is now running for secretary of state, the office that oversees state elections.

The New York Times reported this weekend on the proliferation of Bolick’s fellow travelers: election-deniers seeking or holding office in states that will decide the winner of the 2024 presidential race. According to their tally, at least 357 sitting Republican legislators in swing states “have used the power of their office to discredit or try to overturn the results of the 2020 presidential election.” That number amounts to “44 percent of the Republican legislators in the nine states where the presidential race was most narrowly decided.” Moreover, election deniers around the country are running for secretary of state and attorney general—vying to be swing states’ top election officer and top cop, respectively. If successful, they can use this power to aggressively investigate bogus claims of voter fraud, attempt to nullify Democratic ballots, refuse to certify the true results, and even try to approve an “alternative” slate of electors for the loser. This is what Ginni Thomas was pushing two years ago, and what her husband has already deemed constitutionally permissible.

Will any of this work? The Thomases clearly think it will. At the same time Ginni was lobbying state legislators to overturn the results, Clarence was developing and promoting a constitutional theory that would lend legitimacy to just such a brazen coup. The justice has become an avid fan of the “independent state legislature doctrine,” a verifiably false, pseudo-originalist theory that allows state legislatures to ignore the real results and rig elections for Republicans. Thomas repeatedly deployed this theory during the 2020 race in an effort to void mail ballots in battleground states (which disproportionately favored Democrats). He later peddled a somewhat sanitized version of the Big Lie, falsely asserting that mail ballots—specifically, those used to elect Joe Biden in 2020—are highly susceptible to voter fraud. Ever since, he has continued to champion the theory whenever lower courts’ election law rulings happen to help Democrats. As our colleague Richard Hasen has pointed out, it looks increasingly likely that the Supreme Court will decide this issue by 2024.

The symmetry between Ginni and Clarence Thomas’ work has never been more obvious. While Clarence fought to give state legislatures the constitutional authority to reject election results, Ginni lobbied state legislators to do exactly that. A casual observer might assume they were working in tandem, with Clarence handling the law and Ginni working on the political side. They aren’t particularly subtle about it. You need only read Ginni’s emails and Clarence’s opinions to see exactly how the 2024 coup attempt will go down because it’s identical to the 2020 coup attempt: If a Democrat prevails, red state officials will question the legitimacy of the results, giving state legislatures an opportunity to throw them out and declare the Republican to be the real winner. This has nothing to do with liberals squelching Ginni Thomas’ feminist drive to have a separate and distinct political life apart from that of her spouse. More power to her. This has everything to do with two public actors working together to ensure that red state legislatures decide future elections in lieu of the voters.

The question remaining isn’t whether it’ll happen; the question is whether it’ll succeed. The Thomases tried this approach in 2020, but like Trump’s other allies, they developed their strategy a bit too late and didn’t buff out the crackpot edges until recently. But this time they’re putting in the work with plenty of time to spare. If Republicans succeed in pulling off a much more respectable coup in 2024, Americans will have every right to be furious and appalled. But no one will have any excuse to be surprised, because Clarence, Ginni, Eastman, Clark, and their many powerful friends are currently laying out the game plan right before our eyes.

For more on Ginni Thomas’ role in efforts to overturn the 2020 election, listen to this episode of What Next.

27 May 14:53

Shane McIntosh on Mining Software Build Systems

Shane McIntosh: "The unintended consequences of mining software build systems." Shane is an Associate Professor at the University of Waterloo in Canada.

Shane McIntosh

slides | transcript (English) | transcripción (Español)

27 May 14:53

Brittany Johnson-Matthews on Causal Testing

Brittany Johnson-Matthews: "Causal testing: understanding the root causes of defects." Brittany is an Assistant Professor at George Mason University in the United States.

Brittany Johnson-Matthews

slides | transcript (English) | transcripción (Español)

27 May 14:53

Kelly Blincoe on Destructive Criticism

Kelly Blincoe: "The effects of destructive criticism in code review." Kelly is a Senior Lecturer at the University of Auckland in New Zealand.

Kelly Blincoe

slides | transcript (English) | transcripción (Español)