Shared posts

13 Nov 02:43

What to blog about

You should start a blog. Having your own little corner of the internet is good for the soul!

But what should you write about?

It's easy to get hung up on this. I've definitely felt the self-imposed pressure to only write something if it's new, and unique, and feels like it's never been said before. This is a mental trap that does nothing but hold you back.

Here are two types of content that I guarantee you can produce and feel great about producing: TILs, and writing descriptions of your projects.

Today I Learned

A TIL - Today I Learned - is the most liberating form of content I know of.

Did you just learn how to do something? Write about that.

Call it a TIL - that way you're not promising anyone a revelation or an in-depth tutorial. You're saying "I just figured this out: here are my notes, you may find them useful too".

I also like the humility of this kind of content. Part of the reason I publish them is to emphasize that even with 25 years of professional experience you should still celebrate learning even the most basic of things.

I learned the "interact" command in pdb the other day! Here's my TIL.
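For anyone curious what that looks like, here is a minimal sketch (the function and values are invented for illustration; `interact` itself is a real pdb command):

```python
import pdb

def total_price(items):
    # A made-up function to debug.
    total = 0
    for price in items:
        total += price
        # pdb.set_trace()  # uncomment to drop into the debugger here
    return total

# At the (Pdb) prompt, typing `interact` launches a full Python REPL
# with access to the current frame's locals (total, price, items);
# press Ctrl-D to return to the debugger. It's implemented as the
# built-in Pdb.do_interact command.
result = total_price([2, 3])
```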

I started publishing TILs in April 2020. I'm up to 346 now, and most of them took less than 10 minutes to write. It's such a great format for quick and satisfying online writing.

My collection lives at https://til.simonwillison.net - which publishes content from my simonw/til GitHub repository.

Write about your projects

If you do a project, you should write about it.

I recommend adding "write about it" to your definition of "done" for anything that you build or create.

Like with TILs, this takes away the pressure to be unique. It doesn't matter if your project overlaps with thousands of others: the experience of building it is unique to you. You deserve to have a few paragraphs and a screenshot out there explaining (and quietly celebrating) what you made.

The screenshot is particularly important. Will your project still exist and work in a decade? I hope so, but we all know how quickly things succumb to bit-rot.

Even better than a screenshot: an animated GIF screenshot! I capture these with LICEcap. And a video is even better than that, but those take a lot more effort to produce.

It's incredibly tempting to skip the step where you write about a project. But any time you do that you're leaving a huge amount of uncaptured value from that project on the table.

These days I make myself do it: I tell myself that writing about something is the cost I have to pay for building it. And I always end up feeling that the effort was more than worthwhile.

Check out my projects tag for examples of this kind of content.

So that's my advice for blogging: write about things you've learned, and write about things you've built!

13 Nov 02:42

Contract Day

by downes

With all the delicacy of a sledgehammer, Doug Ford’s conservative government in Ontario has crafted an elegant solution to negotiating with public sector unions:

Don’t.

Instead, as a government, simply define what you want the contract to be, override any constitutional protections the union may have by using the ‘notwithstanding’ clause, and impose it on public sector employees. There’s no need to bargain at all, no need for arbitrators, and any protests can be met with severe legal consequences.

Contract Day. All the uncertainty around public sector bargaining reduced to a single announcement where the government tells you how much you’ll make, how long the current contract lasts, and the penalties for disagreeing with it.

This is why conservatives distrust government so much. They know what they’re willing to do if they get their hands on it.

Never mind what the merits of the Education Workers’ arguments may be. They wish to be paid a living wage; nothing new in that. Any fair-minded arbitrator would give them close to what they’re asking for, which is why the Ford government can’t risk leaving anything to chance.

Also, never mind why the Ford government is doing this. There are some quite plausible arguments being made to the effect that by underfunding education, Ford is setting the stage for its privatization. I would also argue that capitalists look at public sector pension funds with envy and avarice in their hearts.

What surprises me most right now is that the entire Ontario public sector isn’t on strike. After all, if Ford’s plan works here, it works everywhere. It eliminates all labour rights at once, with the stroke of a pen. Maybe the unions are waiting to hear from the Labour Relations Board. I don’t know.

There are wider reasons why Canadians should be concerned about this move. It’s no coincidence that this tactic is being tried against the weakest and most poorly paid public sector employees. The conservatives want it to work, before they move against harder targets. But this isn’t just about labour rights – they’re just the lowest hanging fruit.

Think of our other rights. Freedom of religion, say. What would prevent a government from preemptively using the notwithstanding clause to prohibit the public practice of certain religions? We’ve seen partial bans of religious apparel in the past with prohibitions against such things as turbans or headscarves. Now displays of non-Christian religion could be blanket banned.

Or even the rights we have before our criminal justice system. The current Ford legislation provides for draconian penalties far out of proportion to the putative offense. $4,000 a day for not showing up to work? It's clear in this case that employees need protection from the law.

There are good reasons to ensure the supremacy of Parliament in a democratic society; there are too many examples worldwide of judges preventing elected representatives from doing their jobs. But there are also good reasons why fundamental human rights, derived and drafted through extensive constitutional negotiations among elected representatives from all levels of government, ought also to prevail.

The Ford government simply disregards that. It does away with human rights altogether. Its actions should not be allowed to stand.

13 Nov 02:41

Letter to a boomer (Lettre à un boomer)

by Tristan

It happened a few days ago, on LinkedIn. The "chance" of the recommendation algorithm served me an angry post by a former employee of Total Energies and BNP Paribas. He was ranting, not very coherently: "here we go again with the gloating of all those eco-hypocrites who will cackle and bash our national champions once more!" He was presumably referring to the fact that his two previous employers had just been called out for their lack of action on climate. I'm not linking to his post; it has little value in itself, and I don't want to encourage people to pile on him.

Instead, I wanted to write him a reply, not so much for him as for all those — and they are many — who are still in denial about the climate. Also for all those who have worked for large companies that have mortgaged the future of humans on this planet, contributed to the destruction of biodiversity, and emitted vast quantities of greenhouse gases without trying to change their practices. In other words, almost all professionals.

So here is my reply to this gentleman:


Sir, I understand that, as a former employee of BNP Paribas and Total Energies, you find it hard to accept having contributed to companies that are killing the planet and ruining humanity's future.

I also understand how hard it is to admit that the game that is your career, a game you succeeded at, has become one that young people no longer want to play, because they have understood the consequences of their actions and would rather have a future than a career.

But spitting on those who have understood the problem dishonors you. Do not add that dishonor to the pain of having lost your dreams of infinite growth in a finite world.

In this context, anger is inevitable. Your denial is all too common. You need to grieve your boomer dreams. I know; I had the same ones.

You will have to move on to acceptance.

Unplug from LinkedIn, go get some fresh air, take a step back from your career, and face reality: tomorrow's world cannot rest on the shoulders and the methods of people who want to carry on as before. It's time to change, to roll up our sleeves and do better, differently. You won't regret the effort, believe me.

13 Nov 02:41

A photo stream for my site

by Dries
An underground subway tunnel with an incoming train.

It's no secret that I don't like using Facebook, Instagram or Twitter as my primary platform for sharing photos and status updates. I don't trust these platforms with my data, and I don't like that they track my friends and family each time I post something.

For those reasons, I set a personal challenge in 2018 to take back control over my social media, photos and more. As a result, I quit Facebook and stopped using Twitter for personal status updates. I still use Twitter for Drupal-related updates.

To this date, I still occasionally post on Instagram. The main reason I still post on Instagram is that it's simply the easiest way for friends and family to follow me. But every time I post a photo on Instagram, I cringe. I don't like that I cause friends and family to be tracked.

My workflow is to upload photos to my website first; from there, I occasionally POSSE a photo to Instagram. I have used this workflow for many years. As a result, I have over 10,000 photos on my website, and only 300 photos on Instagram. By any measure, my website is my primary platform for sharing photos.

I decided it was time to make it a bit easier for people to follow my photography and adventures from my website. Last month I silently added a photo stream, complete with an RSS feed. Now there is a page that shows my newest photos and an easy way to subscribe to them.

While an RSS feed doesn't have the same convenience factor as Instagram, it is better than Instagram in the following ways: more photos, no tracking, no algorithmic filtering, no account required, and no advertising.

My photo stream and photo galleries aspire to the privacy of a printed photo album.

I encourage you to subscribe to my photos RSS feed and to unfollow me on Instagram.

Step by step, I will continue to build my audience here, on my blog, on the edge of the Open Web, on my own terms.

From a technical point of view, my photo stream uses responsive images that are lazy loaded — it should be pretty fast. And it uses the Open Graph Protocol for improved link sharing.

PS: The photo at the top is the subway in Boston. Taken on my way home from work.

Also posted on IndieNews.

13 Nov 02:41

Understanding carbon

by russell davies
13 Nov 02:40

HATETRIS

Code » Play Hate Tetris. This is bad Tetris. It's hateful Tetris. It's Tetris according to the evil AI from "I Have No Mouth And I Must Scream". I have to be honest, this is not an entirely original idea. However, RRRR's implementation of the concept of a game of Tetris which always gives you the worst possible pieces leaves much to be desired:

  • the keyboard interface frequently doesn't work
  • the conditions for failure are ambiguous and inconsistent
  • the playing field is only 8 blocks wide as compared to the standard 10
  • the AI is either overly generous, or stupid, and frequently does NOT provide you with the worst possible piece

(UPDATE: there is also Bastet, which can be played online here, but is also far too forgiving.) HATETRIS was an experiment to rectify those flaws. Also, every coder has to build Tetris at least once in their life. In addition, I also have a passionate hatred of JavaScript, but I felt that that hatred was borne out of ignorance, so this was an attempt to r...
13 Nov 02:39

Evidence-based Software Engineering book: two years later

by Derek Jones

Two years ago, my book Evidence-based Software Engineering: based on the publicly available data was released. The first two weeks saw 0.25 million downloads, and 0.5 million after six months. The paperback version on Amazon has sold perhaps 20 copies.

How have the book contents fared, and how well has my claim to have discussed all the publicly available software engineering data stood up?

The contents have survived almost completely unscathed. This is primarily because reader feedback has been almost non-existent, and I have hardly spent any time rereading it.

In the last two years I have discovered maybe a dozen software engineering datasets that would have been included, had I known about them, and maybe another dozen non-software related datasets that could have been included in the Human behavior/Cognitive capitalism/Ecosystems/Reliability chapters. About half of these have been the subject of blog posts (links below), with the others waiting to be covered.

Each dataset provides a sliver of insight into the much larger picture that is software engineering; joining the appropriate dots, by analyzing multiple datasets, can provide a larger sliver of insight into the bigger picture. I have not spent much time attempting to join dots, but have joined a few tiny ones, and a few that are not so small, e.g., Estimating using a granular sequence of values and Task backlog waiting times are power laws.

I spent the first year, after the book came out, working through the backlog of tasks that had built up during the 10-years of writing. The second year was mostly dedicated to trying to find software project data (including joining Twitter), and reading papers at a much reduced rate.

The plot below shows the number of monthly downloads of the A4 and mobile friendly pdfs, along with the average kbytes per download (code+data):

Downloads of A4 and mobile pdf over 2-years.

The monthly averages for 2022 are around 6K A4 and 700 mobile friendly pdfs.

I have been averaging one in-person meetup per week in London. Nearly everybody I tell about the book has not previously heard of it.

The following is a list of blog posts either analyzing existing data or discussing/analyzing new data.

Introduction
analysis: Software effort estimation is mostly fake research
analysis: Moore’s law was a socially constructed project

Human behavior
data (reasoning): The impact of believability on reasoning performance
data: The Approximate Number System and software estimating
data (social conformance): How large an impact does social conformity have on estimates?
data (anchoring): Estimating quantities from several hundred to several thousand
data: Cognitive effort, whatever it might be

Ecosystems
data: Growth in number of packages for widely used languages
data: Analysis of a subset of the Linux Counter data
data: Overview of broad US data on IT job hiring/firing and quitting

Projects
analysis: Delphi and group estimation
analysis: The CESAW dataset: a brief introduction
analysis: Parkinson’s law, striving to meet a deadline, or happenstance?
analysis: Evaluating estimation performance
analysis: Complex software makes economic sense
analysis: Cost-effectiveness decision for fixing a known coding mistake
analysis: Optimal sizing of a product backlog
analysis: Evolution of the DORA metrics
analysis: Two failed software development projects in the High Court

data: Pomodoros worked during a day: an analysis of Alex’s data
data: Multi-state survival modeling of a Jira issues snapshot
data: Over/under estimation factor for ‘most estimates’
data: Estimation accuracy in the (building|road) construction industry
data: Rounding and heaping in non-software estimates
data: Patterns in the LSST:DM Sprint/Story-point/Story ‘done’ issues
data: Shopper estimates of the total value of items in their basket

Reliability
analysis: Most percentages are more than half

Statistical techniques
Fitting discontinuous data from disparate sources
Testing rounded data for a circular uniform distribution

Post 2020 data
Pomodoros worked during a day: an analysis of Alex’s data
Impact of number of files on number of review comments
Finding patterns in construction project drawing creation dates

12 Nov 21:52

AI-based image generation ethics

by Nathan Yau

AI-based image generation is having a moment. Type some text and you can get a piece of art that resembles the style of your favorite artist. However, there's an ethical dilemma with the source material. Andy Baio talked to Hollie Mengert, whose artwork was used to create a model for Stable Diffusion:

“For me, personally, it feels like someone’s taking work that I’ve done, you know, things that I’ve learned — I’ve been a working artist since I graduated art school in 2011 — and is using it to create art that that I didn’t consent to and didn’t give permission for,” she said. “I think the biggest thing for me is just that my name is attached to it. Because it’s one thing to be like, this is a stylized image creator. Then if people make something weird with it, something that doesn’t look like me, then I have some distance from it. But to have my name on it is ultimately very uncomfortable and invasive for me.”

AI-generated charts are only tangentially a thing so far. We humans still have a leg up in the context and meaning part of understanding data.

Tags: AI, Andy Baio, art, ethics, Hollie Mengert, Stable Diffusion

12 Nov 21:52

The Foundation of Product

by Marty Cagan

Recently I wrote an article about some product ops anti-patterns, but several people pointed out to me that I had buried the lede in that article, and that the most important part of that article applied far beyond the relatively minor topic of product ops. So in this article, I’d like to focus on this...

The post The Foundation of Product appeared first on Silicon Valley Product Group.

12 Nov 21:52

MoviePass Was Not a Good Business

by Matt Levine
Also Binance vs. FTX and some more Elon Musk Twitter chaos.
12 Nov 21:52

Blessed.rs Crate List

Blessed.rs Crate List

Rust doesn't have a very large standard library, so part of learning Rust is figuring out which of the third-party crates are the best for tackling common problems. This here is an opinionated guide to crates, which looks like it could be really useful.

Via Hacker News

12 Nov 21:46

Self hosting Mastodon & supporting collectives & code

I was reminded that I wrote about running your own Mastodon (on Heroku!) back in 2017. Heroku is on its way out, so I wouldn’t recommend it today.1

Interesting to look back at that post and see how the beginnings of what would become Fission are there.

Digital Ocean also has one click installs & @RangerMauve reports that you’ll need the $12/month droplet size.

This is fine for highly technical people who like to experiment. @walkah is running his own Pleroma instance — I think even at home in his lab?2

When I rejoined Mastodon more seriously 2 years ago, I joined Social.Coop. I didn’t want to worry about the care and feeding of a server, database, backups, and so on. I did want to be part of a group that supported such an activity.

As you pick a Mastodon server to be part of, do some research. Who are the admins? Where is it hosted? Do they have a place to accept donations and support? Can you get involved in governance of the server? Which instances do they block? Or not block?

At this point my main recommendation is to look for a donation and support link. As a “regular” user, $5/month or $50/year seems about the right amount to be donating.

If you do a search on Open Collective you’ll see several pages of results of server admins having set up collectives.

This also includes the “core” Mastodon project.

If you’re going to be running a server instance for yourself or for an organization, don’t forget to also donate to your upstream. The way Open Collective works, you can use your collective to fund other collectives, so you can make this part of your standard way of operating. Supporting open source software development in this way should be a standard expense.3

And anyone who says that’s “too hard for everyone” — sure, those of us who are more tech and media literate may need to go first. But if we’re not willing to support something better, who is?


  1. I haven’t had serious time to look at new alternatives to Heroku. I’m still allergic to Docker, and Digital Ocean works for a lot of classic apps. Railway is the one I’ve got my eye on but haven’t had the right project to try with it yet. [return]
  2. Pleroma is an ActivityPub server written in Elixir that is compatible with Mastodon. Or rather, Mastodon also runs ActivityPub, the open protocol that people can implement compatible servers for. [return]
  3. yes, you could also contribute in kind with code, answering support questions, documentation, etc. Please do, it’s likely even more valuable than cash donations! [return]
12 Nov 21:44

Ministry for the future / jackpot vibes

by russell davies
12 Nov 21:44

Enable Raspbian Images to Boot on Libre Computers Boards

by James A. Chambers
Enable Raspbian Images to Boot on Libre Computers Boards Guide

Libre Computers is a company making single board computers that are much more open-sourced than the Raspberry Pi (especially when it comes to hardware). They are offering a USB 2.0 model (the "Le Potato") for $40 and a USB 3.0 model (the "Renegade") for $50. Those are not theoretical MSRP prices that are impossible to find either. Those are the listed prices available today! When I first covered these boards the Libre reddit account let me know about a utility they had available that could enable most Raspberry Pi images to boot on Libre Computers boards such as the "Le Potato" and "Renegade". I tried out the tool and it worked great! In this guide I will show you where to get the tool and how to use it. Let's begin!

Source

12 Nov 21:27

Logitech Brio 505: More 505 than Brio

by Volker Weber

I waited quite a while for a review unit of the Brio 505 from Logitech. Since last week I've had it in use. The Brio 505 is a webcam with 1080p resolution, i.e. 1920×1080 pixels. That matches what the webcams in the newest laptops can do, and is no comparison to the 720p webcams that were standard until 2021.

I like the clever Logi design. The screen mount is magnetic; its counterpart is a metal screw in the tripod thread on the underside of the camera. Since the mount can be stuck to the monitor, you can quickly grab the webcam at any time when you want to show something.

Two microphones, a privacy shutter, and an LED

The inner part of the mount can be tilted, and an orientation sensor lets the camera detect whether it is currently pointing forward or downward. If you activate the so-called Show Mode, the camera image is flipped 180 degrees.

With Show Mode
Without Show Mode

Since the Brio 505 has autofocus with face detection, it keeps the image sharp in both normal and flipped mode. If you want to show something, you can briefly hold it up to the camera. I find Show Mode particularly useful because it gives viewers the feeling of looking over your shoulder. I used it, for example, to demonstrate the correct setup of this mixing console.

The Brio 505 does not support Windows Hello, so signing in via face recognition is not possible. It has a grippy ring on its right end with which you can close the camera when not using it. An LED warns when the camera is active, and it has two microphones, which are useful when paired with a desktop PC.

Logitech Brio 505

The image of the Brio 505 is not quite as bright and razor-sharp as that of the 4K Brio, but it is sufficient for the small thumbnails in Teams, Zoom & co. Depending on your distance from the camera, you can switch between 90, 78, and 65 degree fields of view; to do so, you have to install the LogiTune software. The camera keeps its settings when disconnected from power, so it can also be configured on a private PC if you work in a restrictive IT environment.

For comparison: the Logitech Brio

I reviewed the Brio 505, which is the business version of the Brio 500. It can be centrally managed and updated, which makes it the better choice for corporate use. If you want to buy one for private use, you can simply go for the Brio 500.

12 Nov 21:25

Taking a leave from Twitter

by Volker Weber

The wrecking ball has been taken to Twitter, and much in the same way that I have taken a leave from Facebook, I am now taking a leave from Twitter.

It pays off that I never depended on any commercial network. Xing, Twitter, Facebook, Instagram, LinkedIn, they have always been playgrounds. This circus has now moved to Mastodon, on the chaos.social instance. My address is @vowe@chaos.social and it resides at chaos.social/@vowe. If you do not intend to join Mastodon, my page also has a public RSS feed: chaos.social/@vowe.rss.

12 Nov 21:02

Mastodon is just blogs

And that's great. It's also the return of Google Reader!

Mastodon is really confusing for newcomers. There are memes about it.

If you're an internet user of a certain age, you may find an analogy that's been working for me really useful:

Mastodon is just blogs.

Every Mastodon account is a little blog. Mine is at https://fedi.simonwillison.net/@simon.

You can post text and images to it. You can link to things. It's a blog.

You can also subscribe to other people's blogs - either by "following" them (a subscribe in disguise) or - fun trick - you can add .rss to the end of their profile URL and subscribe in a regular news reader (here's my feed).
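As a sketch of that .rss trick in code (the profile URL is the one from this post; the sample feed body is invented, since this sketch doesn't touch the network):

```python
import xml.etree.ElementTree as ET

def mastodon_feed_url(profile_url):
    """A Mastodon profile page becomes an RSS feed by appending .rss."""
    return profile_url.rstrip("/") + ".rss"

feed_url = mastodon_feed_url("https://fedi.simonwillison.net/@simon")
# feed_url == "https://fedi.simonwillison.net/@simon.rss"

# A stand-in for a downloaded feed body:
SAMPLE = """<rss version="2.0"><channel>
<title>Simon Willison</title>
<item><title>Hello fediverse</title><link>https://example.com/1</link></item>
</channel></rss>"""

# Pull out the post titles, as any feed reader would.
root = ET.fromstring(SAMPLE)
titles = [item.findtext("title") for item in root.iter("item")]
```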

A Mastodon server (often called an instance) is just a shared blog host. Kind of like putting your personal blog in a folder on a domain on shared hosting with some of your friends.

Want to go it alone? You can do that: run your own dedicated Mastodon instance on your own domain (or pay someone to do that for you - I'm using masto.host).

Feeling really nerdy? You can build your own instance from scratch, by implementing the ActivityPub specification and a few others, plus matching some Mastodon conventions.

Differences from regular blogs

Mastodon (actually mostly ActivityPub - Mastodon is just the most popular open source implementation) does add some extra features that you won't get with a regular blog:

  • Follows: you can follow other blogs, and see who you are following and who is following you
  • Likes: you can like a post - people will see that you liked it
  • Retweets: these are called "boosts". They duplicate someone's post on your blog too, promoting it to your followers
  • Replies: you can reply to other people's posts with your own
  • Privacy levels: you can make a post public, visible only to your followers, or visible only to specific people (effectively a group direct message)

These features are what makes it interesting, and also what makes it significantly more complicated - both to understand and to operate.

Add all of these features to a blog and you get a blog that's lightly disguised as a Twitter account. It's still a blog though!

It doesn't have to be a shared host

This shared hosting aspect is the root of many of the common complaints about Mastodon: "The server admins can read your private messages! They can ban you for no reason! They can delete your account! If they lose interest the entire server could go away one day!"

All of this is true.

This is why I like the shared blog hosting analogy: the same is true there too.

In both cases, the ultimate solution is to host it yourself. Mastodon has more moving pieces than a regular static blog, so this is harder - but it's not impossibly hard.

I'm paying to host my own server for exactly this reason.

It's also a shared feed reader

This is where things get a little bit more complicated.

Do you still miss Google Reader, almost a decade after it was shut down? It's back!

A Mastodon server is a feed reader, shared by everyone who uses that server.

Users on one server can follow users on any other server - and see their posts in their feed in near-enough real time.

This works because each Mastodon server implements a flurry of background activity. My personal server, serving just me, already tells me it has processed 586,934 Sidekiq jobs since I started using it.

Blogs and feed readers work by polling for changes every few hours. ActivityPub is more ambitious: any time you post something, your server actively sends your new post out to every server that your followers are on.

Every time someone followed by you (or any other user on your server) posts, your server receives that post, stores a copy and adds it to your feed.
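A toy model of that push-based fan-out, in Python (this illustrates the idea only; it is not the actual ActivityPub protocol or Mastodon's implementation, and all the names are invented):

```python
from collections import defaultdict

class Server:
    """A toy Mastodon-like server: posts are pushed to followers'
    servers at publish time, instead of being polled for later."""

    def __init__(self, name):
        self.name = name
        self.inboxes = defaultdict(list)    # local user -> received posts
        self.followers = defaultdict(list)  # local user -> [(server, follower)]

    def follow(self, follower_server, follower, target):
        # Record that `follower` (on `follower_server`) follows `target` here.
        self.followers[target].append((follower_server, follower))

    def publish(self, author, post):
        # Push the new post out to every follower's server immediately.
        for server, follower in self.followers[author]:
            server.inboxes[follower].append((author, post))

a = Server("a.social")
b = Server("b.social")
a.follow(b, "alice@b.social", "simon")
a.publish("simon", "Mastodon is just blogs")
# alice's server now holds a copy of the post in her inbox.
```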

Servers offer a "federated" timeline. That's effectively a combined feed of all of the public posts from every account on Mastodon that's followed by at least one user on your server.

It's like you're running a little standalone copy of the Google Reader server application and sharing it with a few dozen/hundred/thousand of your friends.

May a thousand servers bloom

If you're reading this with a web engineering background, you may be thinking that this sounds pretty alarming! Half a million Sidekiq jobs to support a single user? Huge amounts of webhooks firing every time someone posts?

Somehow it seems to work. But can it scale?

The key to scaling Mastodon is spreading the cost of all of that background activity across a large number of servers.

And unlike something like Twitter, where you need to host all of those yourself, Mastodon scales by encouraging people to run their own servers.

On November 2nd Mastodon founder Eugen Rochko posted the following:

199,430 is the number of new users across different Mastodon servers since October 27, along with 437 new servers. This brings last day's total to 608,837 active users, which is without precedent the highest it's ever been for Mastodon and the fediverse.

That's 457 new users for each new server.

Any time anyone builds something decentralized like this, the natural pressure is to centralize it again.

In Mastodon's case though, decentralization is key to getting it to scale. And the organization behind mastodon.social, the largest server, is a German non-profit with an incentive to encourage new servers to help spread the load.

Will it break? I don't think so. Regular blogs never had to worry about scaling, because that's like worrying that the internet will run out of space for new content.

Mastodon servers are a lot chattier and expensive to run, but they don't need to talk to everything else on the network - they only have to cover the social graph of the people using them.

It may prove unsustainable to run a single Mastodon server with a million users - but if you split that up into ten servers covering 100,000 users each I feel like it should probably work.

Running on multiple, independently governed servers is also Mastodon's answer to the incredibly hard problem of scaling moderation. There's a lot more to be said about this and I'm not going to try and do it justice here, but I recommend reading this Time interview with Mastodon founder Eugen for a good introduction.

How does this all get paid for?

One of the really refreshing things about Mastodon is the business model. There are no ads. There's no VC investment, burning early money to grow market share for later.

There are just servers, and people paying to run them and volunteering their time to maintain them.

Elon did us all a favour here by setting $8/month as the intended price for Twitter Blue. That's now my benchmark for how much I should be contributing to my Mastodon server. If everyone who can afford to do so does that, I think we'll be OK.

And it's very clear what you're getting for the money. How much each server costs to run can be a matter of public record.

The oldest cliche about online business models is "if you're not paying for the product, you are the product being sold".

Mastodon is our chance to show that we've learned that lesson and we're finally ready to pay up!

Is it actually going to work?

Mastodon has been around for six years now - and the various standards it is built on have, I believe, been in development since 2008.

A whole generation of early adopters have been kicking the tyres on this thing for years. It is not a new, untested piece of software. A lot of smart people have put a lot of work into this for a long time.

No-one could have predicted that Elon would drive it into hockey-stick growth mode in under a week. Despite the fact that it's run by volunteers with no profit motive anywhere to be found, it's holding together impressively well.

My hunch is that this is going to work out just fine.

Don't judge a website by its mobile app

Just like blogs, Mastodon is very much a creature of the Web.

There's an official Mastodon app, and it's decent, but it suffers the classic problem of so many mobile apps in that it doesn't quite keep up with the web version in terms of features.

More importantly, its onboarding process for creating a new account is pretty confusing!

I'm seeing a lot of people get frustrated and write off Mastodon as completely impenetrable. I have a hunch that many of these are people whose only experience has come from downloading the official app.

So don't judge a federated web ecosystem exclusively by its mobile app! If you begin your initial Mastodon exploration on a regular computer you may find it easier to get started.

Other apps exist - in fact the official app is a relatively recent addition to the scene, just over a year old. I'm personally a fan of Toot! for iOS, which includes some delightful elephant animations.

The expanded analogy

Here's my expanded version of that initial analogy:

Mastodon is just blogs and Google Reader, skinned to look like Twitter.

12 Nov 21:01

Mastodon Does Microformats, ActivityPub Does Check-Ins and Travel Plans

by Ton Zijlstra

I hadn’t really looked, but it turns out that Mastodon has incorporated microformats. It has h-feed and h-card, h-entry (a status), and h-cite (a boost). Plain text properties (p-), e-content, and link properties (u-) are implemented. Indeed, they all surface when looking at a profile’s HTML source. This is what makes it possible to e.g. follow Mastodon feeds as h-feed, next to the existing RSS output and ActivityPub, and that e.g. Brid.gy can do its work to carry over any interaction on a Mastodon post to a blogpost here.
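
For readers who haven't seen microformats2 before, here is a simplified, hypothetical sketch of how those class names appear in HTML. This is illustrative only, not Mastodon's exact markup; the account URL and content are made up:

```html
<!-- h-entry marks a status; p-, u-, e- and dt- prefixes mark
     plain-text, URL, embedded-content and datetime properties. -->
<article class="h-entry">
  <a class="p-author h-card" href="https://example.social/@ton">Ton</a>
  <div class="e-content"><p>Hello, fediverse!</p></div>
  <a class="u-url" href="https://example.social/@ton/109">
    <time class="dt-published" datetime="2022-11-12T21:01:00Z">12 Nov</time>
  </a>
</article>
```

A parser that understands h-feed/h-entry can consume a page of such entries as a feed, which is what makes "follow Mastodon as h-feed" possible.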

What I haven’t found was what I was looking for.
The ActivityPub protocol in its specs has several so-called Activity Types that drew my attention:

In short ActivityPub supports FourSquare and Dopplr like check-ins and travel plans. I’ve recently added that to my site in terms of microformats and was still wondering how to create a useful stream for it. I’ve been thinking about an OPML outline with schema.org attributes, or a dedicated RSS feed or h-feed. An ActivityPub stream might be of interest too, or even more. There’s a PHP implementation of ActivityPub that includes these Activity Types as well, meaning there’s potential to experiment for me.
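
The Activity Types in question include Arrive and Travel from the ActivityStreams 2.0 vocabulary. A minimal check-in activity might look something like this (an illustrative sketch with made-up URLs, not output from any real implementation):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Arrive",
  "actor": "https://example.com/users/ton",
  "location": {
    "type": "Place",
    "name": "Amsterdam",
    "latitude": 52.37,
    "longitude": 4.89
  },
  "summary": "Ton arrived in Amsterdam"
}
```

A Travel activity is similar but adds origin and target Places, which is what makes travel plans (à la Dopplr) expressible.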

I wonder, are there any actual implementations of these ActivityPub types currently?

12 Nov 21:01

Dashboard Design Patterns

by Nathan Yau

Dashboards aren’t really my thing, but we’ve seen, especially over the past few years, that a quick view of data that is checked regularly for a current status can be useful in some contexts. Dashboard Design Patterns offers a collection of research, guidance, and cheatsheets for your dashboard designing needs.

Tags: dashboard

12 Nov 21:00

Home invasion

by Hugh Rundle

Nobody goes there anymore. It’s too crowded. Yogi Berra, et al

Well, it looks like it's finally happened. As news sites began reporting that Elon Musk had finalised his purchase of Twitter, the fediverse's Eternal September — hoped for and feared in equal numbers amongst its existing user base — began.

We've had waves of new people before — most recently earlier this year when Musk first announced his purchase offer — but what's been happening for the last week is different in both scale and nature. It's clear that a sizeable portion of Twitter's users are choosing to deplatform themselves en masse, and many have been directed to Mastodon, the most famous and populated software on the fediverse.

The fediverse is a group of web publishing systems linked together via the ActivityPub, Diaspora and OStatus protocols.

Two kinds of party

In Hobart in the late 1990s, there were basically three nightclubs. They were all sleazy to different degrees, loud to varying levels, and people went to them because that's where other people were — to have fun with friends, to attract attention, to assert their social status, and so on. This is Twitter.

I had a friend who lived in a sharehouse around the corner from one of these popular clubs. He hosted house parties on weekends. Small, just friends and a few friends of friends. This is the fediverse.

Murmuration

For those of us who have been using Mastodon for a while (I started my own Mastodon server 4 years ago), this week has been overwhelming. I've been thinking of metaphors to try to understand why I've found it so upsetting. This is supposed to be what we wanted, right? Yet it feels like something else. Like when you're sitting in a quiet carriage softly chatting with a couple of friends and then an entire platform of football fans get on at Jolimont Station after their team lost. They don't usually catch trains and don't know the protocol. They assume everyone on the train was at the game or at least follows football. They crowd the doors and complain about the seat configuration.

It's not entirely the Twitter people's fault. They've been taught to behave in certain ways. To chase likes and retweets/boosts. To promote themselves. To perform. All of that sort of thing is anathema to most of the people who were on Mastodon a week ago. It was part of the reason many moved to Mastodon in the first place. This means there's been a jarring culture clash all week as a huge murmuration of tweeters descended onto Mastodon in ever increasing waves each day. To the Twitter people it feels like a confusing new world, whilst they mourn their old life on Twitter. They call themselves "refugees", but to the Mastodon locals it feels like a busload of Kontiki tourists just arrived, blundering around yelling at each other and complaining that they don't know how to order room service. We also mourn the world we're losing.

Viral

On Saturday evening I published a post explaining a couple of things about Mastodon's history of dealing with toxic nodes on the network. Then everything went bananas. By 10pm I had locked my account to require followers to be approved, and muted the entire thread I had myself created. Before November 2022 Mastodon users used to joke that you'd "gone viral" if you got more than 5 boosts or likes on a post. In an average week, perhaps one or two people might follow my account. Often nobody did. My post was now getting hundreds of interactions. Thousands. I've had over 250 follow requests since then — so many I can't bear to look at them, and I have no criteria by which to judge who to accept or reject. Early this week, I realised that some people had cross-posted my Mastodon post into Twitter. Someone else had posted a screenshot of it on Twitter.

Nobody thought to ask if I wanted that.

To users of corporate apps like Twitter or Instagram this may sound like boasting. Isn't "going viral" and getting big follower counts what it's all about? But to me it was something else. I struggled to understand what I was feeling, or the word to describe it. I finally realised on Monday that the word I was looking for was "traumatic". In October I would have interacted regularly with perhaps a dozen people a week on Mastodon, across about 4 or 5 different servers. Suddenly having hundreds of people asking (or not) to join those conversations without having acclimatised themselves to the social norms felt like a violation, an assault. I know I'm not the only one who felt like this.

It probably didn't help that every Mastodon server administrator I know, including myself, was suddenly dealing with a deluge of new registrants, requests to join (if they didn't have open registration), and then the inevitable server overloads. Aus.social buckled under the strain, going offline for several hours as the admin desperately tried to reconfigure things and upgrade hardware. Chinwag closed registrations temporarily. Even the "flagship instance" mastodon.social was publishing posts hours after they'd been written, with messages being created faster than they could be sent. I was nervously watching the file storage creep up on the ausglam.space wondering if I'd make it to the end of the weekend before the hard drive filled up, and starting to draft new Rules and Terms of Use for the server to make explicit things that previously "everybody knew" implicitly because we could acculturate people one by one.

Consent

I hadn't fully understood — really appreciated — how much corporate publishing systems steer people's behaviour until this week. Twitter encourages a very extractive attitude from everyone it touches. The people re-publishing my Mastodon posts on Twitter didn't think to ask whether I was ok with them doing that. The librarians wondering loudly about how this "new" social media environment could be systematically archived didn't ask anyone whether they want their fediverse posts to be captured and stored by government institutions. The academics excitedly considering how to replicate their Twitter research projects on a new corpus of "Mastodon" posts didn't seem to wonder whether we wanted to be studied by them. The people creating, publishing, and requesting public lists of Mastodon usernames for certain categories of person (journalists, academics in a particular field, climate activists...) didn't appear to have checked whether any of those people felt safe to be on a public list. They didn't appear to have considered that there are names for the sort of person who makes lists of people so others can monitor their communications. They're not nice names.

The tools, protocols and culture of the fediverse were built by trans and queer feminists. Those people had already started to feel sidelined from their own project when people like me started turning up a few years ago. This isn't the first time fediverse users have had to deal with a significant state change and feeling of loss. Nevertheless, the basic principles have mostly held up to now: the culture and technical systems were deliberately designed on principles of consent, agency, and community safety. Whilst there are definitely improvements that could be made to Mastodon in terms of moderation tools and more fine-grained control over posting, in general these are significantly superior to the Twitter experience. It's hardly surprising that the sorts of people who have been targets for harassment by fascist trolls for most of their lives built in protections against unwanted attention when they created a new social media toolchain. It is the very tools and settings that provide so much more agency to users that pundits claim make Mastodon "too complicated".

If the people who built the fediverse generally sought to protect users, corporate platforms like Twitter seek to control their users. Twitter claims jurisdiction over all "content" on the platform. The loudest complaints about this come from people who want to publish horrible things and are sad when the Twitter bureaucracy eventually, sometimes, tells them they aren't allowed to. The real problem with this arrangement, however, is what it does to how people think about consent and control over our own voices. Academics and advertisers who want to study the utterances, social graphs, and demographics of Twitter users merely need to ask Twitter Corporation for permission. They can claim that legally Twitter has the right to do whatever it wants with this data, and ethically users gave permission for this data to be used in any way when they ticked "I agree" to the Terms of Service. This is complete bullshit of course (the ToS are impenetrable, change on a whim, and the power imbalance is enormous), but it's convenient. So researchers convince themselves they believe it, or they simply don't care.

This attitude has moved with the new influx: loud proclamations that content warnings are censorship, that functionality deliberately left unimplemented due to community safety concerns is "missing" or "broken", and that volunteer-run servers maintaining control over who they allow and under what conditions are "exclusionary". No consideration is given to why the norms and affordances of Mastodon and the broader fediverse exist, and whether the actor they are designed to protect against might be you. The Twitter people believe in the same fantasy of a "public square" as the person they are allegedly fleeing. Like fourteenth-century Europeans, they bring the contagion with them as they flee.

Anarchism

The irony of it all is that my "viral toot thread" was largely about the fediverse's anarchist consent-based nature. Many of the newcomers saw very quickly that their server admins were struggling heroically to keep things running, and donated money or signed up to a Patreon account to ensure the servers could keep running or be upgraded to deal with the load. Admins were sending private and public messages of support to each other, sharing advice and feelings of solidarity. Old hands shared #FediTips to help guide behaviour in a positive direction. This is, of course, mutual aid.

Mutual Aid is the reciprocal exchange of resources and services for mutual benefit. The term "mutual aid" was popularised by the anarchist philosopher Peter Kropotkin.

It's very exciting to see so many people experiencing anarchic online social tools. The clever people who build ActivityPub and other fediverse protocols and tools have designed it in ways that seek to elude monopolistic capture. The software is universally Free and Open Source, but the protocols and standards are also both open and extensible. Whilst many will be happy to try replicating what they know from Twitter — a kind of combination of LinkedIn and Instagram, with the 4chan and #auspol people always lurking menacingly — others will explore new ways to communicate and collaborate. We are, after all, social creatures. I am surprised to find I have become a regular contributor (as in, code contributor 😲) to Bookwyrm, a social reading tool (think GoodReads) built on the ActivityPub protocol used by Mastodon. This is just one of many applications and ideas in the broader fediverse. More will come, that will no longer simply be "X for Fedi" but rather brand new ideas. Whilst there are already some commercial services running ActivityPub-based systems, a great many of the new applications are likely to be built and operated on the same mutual aid, volunteerist basis that currently characterises the vast majority of the fediverse.

Grief

Many people were excited about what happened this week. Newcomers saw the possibilities of federated social software. Old hands saw the possibilities of critical mass. But it's important that this isn't the only story told about early November 2022. Mastodon and the rest of the fediverse may be very new to those who arrived this week, but some people have been working on and playing in the fediverse for over a decade. There were already communities on the fediverse, and they've suddenly changed forever.

I was a reasonably early user of Twitter, just as I was a reasonably early user of Mastodon. I've met some of my firmest friends through Twitter, and it helped to shape my career opportunities. So I understand and empathise with those who have been mourning the experience they've had on Twitter — a life they know is now over. But Twitter has slowly been rotting for years — I went through that grieving process myself a couple of years ago and frankly don't really understand what's so different now compared to two weeks ago.

There's another, smaller group of people mourning a social media experience that was destroyed this week — the people who were active on Mastodon and the broader fediverse prior to November 2022. The nightclub has a new brash owner, and the dancefloor has emptied. People are pouring into the quiet houseparty around the corner, cocktails still in hand, demanding that the music be turned up, walking mud into the carpet, and yelling over the top of the quiet conversation.

All of us lost something this week. It's ok to mourn it.


12 Nov 21:00

Getting Closer

I don’t have hard copies yet, but we’re getting closer:

Cover of 'Software Design by Example'
12 Nov 17:22

Thunderbird Supernova Preview: The New Calendar Design

by Jason Evangelho

Thunderbird 115 Calendar Mockup: Monthly View

In 2023, Thunderbird will reinvent itself with the “Supernova” release, featuring a modernized interface and brand new features like Firefox Sync. One of the major improvements you can look forward to is an overhaul to our calendar UI (user interface). Today we’re excited to give you a preview of what it looks like!

Since this is a work-in-progress, bear with us for a few disclaimers. The most important one is that these screenshots are mock-ups which guide the direction of the new calendar interface. Here are a few other things to consider:

  • We’ve intentionally made this calendar pretty busy to demonstrate how the cleaner UI makes the calendar more visually digestible, even when dealing with many events.
  • Dialogs, popups, tool-tips, and all the companion calendar elements are also being redesigned.
  • Many of the visual changes will be user-customizable.
  • Any inconsistent font sizes you see are only present in the mock-up.
  • Right now we’re showing Light Mode. Dark and High Contrast mode will both be designed and shared in the near future.
  • These current mock-ups were done with the “Relaxed” Density setting in mind, but of course a tighter interface with scalable font-size will be possible.

Thunderbird Supernova Calendar: Monthly, Weekly, Daily Views

Thunderbird 115 Calendar Mockup: Monthly View
Thunderbird Supernova Calendar: Monthly View

The first thing you may notice is that Saturday and Sunday are only partially visible. You can choose to visually collapse the weekends to save space.

But wait, we don’t all work Monday through Friday! That’s why you’ll be able to define what your weekend is, and collapse those days instead.

And do you see that empty toolbar at the top? Don’t worry, all the calendar actions will be reachable in context, and the toolbar will be customizable. Flexibility and customization is what you’ve come to expect from Thunderbird, and we’ll continue to provide that.

Thunderbird Supernova 115 Calendar Weekly View
Thunderbird Supernova Calendar: Weekly View

Speaking of customization, visual customization options for the calendar will be available via a menu popup. Some (but not all) of the options you’ll see here are:

  • Hide calendar color
  • Hide calendar icons
  • Swap calendar color with category color
  • Collapse weekends
  • Completely remove your weekend days
Thunderbird Supernova 115 Calendar Daily View
Thunderbird Supernova Calendar: Daily View

You’ll also see some new hotkey hints in the Search boxes (top middle, top right).

Speaking of Search, we’re moving the “Find Events” area into the side pane. A drop-down will allow choosing which information (such as title, location, and date) you want each event to show.

Thunderbird Supernova Calendar: Event View

Thunderbird 115 Calendar: Event View
Thunderbird Supernova Calendar: Event View

The Event view also gets a decidedly modernized look. The important details have a lot more breathing room, yet subheadings like Location, Organizer and Attendees are easier to spot at a glance. Plus, you’ll be able to easily sort and identify the list of attendees by their current RSVP status.

By default, getting to this event preview screen requires only 1 click. And it’s 2 clicks to open the edit view (which you can do either in a new tab or a separate floating window). Because you love customization, you can control the click behavior. Do you want to skip the event preview screen and open the edit screen with just 1 click? We’ll have an option for that in preferences.

Feedback? Questions?

Life gets busy, so we want our new calendar design to look and feel comfortable. It will help you more efficiently sift, sort, and digest all the crucial details of your day.

Do you have questions or feedback about the new calendar in Thunderbird Supernova? We have a public mailing list specifically for User Interface and User Experience in Thunderbird, and it’s very easy to join.

Just head over to this link on TopicBox and click the “Join The Conversation” button!


The post Thunderbird Supernova Preview: The New Calendar Design appeared first on The Thunderbird Blog.

12 Nov 05:15

On Twitter 2.0

by Doc Searls

So far the experience of using Twitter under Musk is pretty much unchanged. Same goes for Facebook.

Yes, there is a lot of hand-wringing, and the stock market hates Meta (the corporate parent to which Facebook gave birth); but so far the experience of using both is pretty much unchanged.

This is aside from the fact that the two services are run by feudal overlords with crazy obsessions and not much feel for roads they both pave and ride.

As for Meta (and its Reality Labs division), virtual and augmented realities (VR and AR) via headgear are today where “Ginger” was before she became the Segway: promising a vast horizontal market that won’t materialize because its utilities are too narrow.

VR/AR will, like the Segway, find some niche uses. For Segway, it was warehouses, cops, and tourism. For VR/AR headgear it will be gaming, medicine, and hookups in meta-space. The porn possibilities are beyond immense.

As for business, both Twitter and Facebook will continue to be hit by a decline in personalized advertising and possibly a return to the old-fashioned non-tracking-based kind, which the industry has mostly forgotten how to do. But it will press on.

Not much discussed, but a real possibility is that advertising overall will at least partially collapse. This has been coming for a long time. (I’ve been predicting it at least since 2008.) First, there is near-zero (and widespread negative) demand for advertising on the receiving end. Second, Apple is doing a good job of working for its customers by providing ways to turn off or thwart the tracking that aims most ads online. And Apple, while not a monopoly, is pretty damn huge.

It may also help to remember that trees don’t grow to the sky. There is a life cycle for companies just as there is for living things.

11 Nov 19:46

Just Don’t

Sometimes it’s wrong to begin a phrase with the word “just”. I offer as evidence two such situations. I think there’s a common thread to be drawn.

Stuck

People with mental-health issues can get stuck. For example, when some combination of depression and anxiety means they can’t get out of bed all day, and can’t say why. Or when they really need to get dressed or packed or organized for some imminent un-reschedulable event, and can’t get started.

It would be easy to — sorry, it is easy, I know this because I have — say something like “Just stand up and look out the window, it’s sunny.” Or “Just grab some random underwear and drop them in the suitcase, then you’ll be started”. Or “Just get the binder out of your knapsack and look at the first page.”

This. Will. Not. Help.

Broken

Suppose a colleague at work who takes care of an important high-volume Web Service is dealing with a horrible problem: Spiking latencies, or an actual outage. They’re not making good progress, and someone’s asked you if you can help. You look at some graphs and error messages. It’s easy (once again, I speak from experience) to say something like “Could you just cache the hot partition keys?” or “So, just scan the logs for the high-latency signals and frequency-sort them.”

This. Will. Not. Help.

Philology

Pardon this sidebar; I’ve got a history with dictionaries. And, we have lots of ’em within pretty easy reach of where I’m sitting to write this. I was curious about this usage of “just” in a sense which sort of means “merely”, diminishing the difficulty of whatever action is being proposed.

So I grabbed the Shorter OED (4th ed.), which is a lot easier on the wrists and eyes than the full OED and which, cognoscenti will tell you, is maybe a little stronger on etymology than its larger sibling. Disappointingly, I turned up nothing. So I went and took out the Compact Oxford English Dictionary. Note: This is not the teeny little Compact Oxford Dictionary, it’s a 1989 printing of the then-new full Second Edition of the OED, photographically reduced with nine pages on each of its (very large) leaves. Which means it strains the wrist and you need a magnifying glass to read the text.

Wondering why I care? I helped produce these artifacts. Must write about that someday… anyhow.

In the big OED I struck gold. Which surprised me since I have always thought that the Shorter is sufficient for anyone who’s not trying to figure out what Chaucer meant or when Indian vocabulary began to infuse into English. In fact, this is the first time in decades that I’ve actually needed to go to the big OED.

Below I reproduce the relevant page. Warning: If you enlarge this you’ll get the full-resolution picture, and it’s not small. The page starts halfway through the adverbial senses of “just”.

The OED on the adverbial form of “just”

There, we learn (see sense 5) of the usage which means “No more than; merely; barely”. The first supporting quotation is from Robert Hooke in 1665, but my favorite is from Macaulay in 1849: “Men who … seemed to think they had given an illustrious proof of loyalty by just stopping short of regicide.”

But let’s not ignore sense 5.c: “Used to extenuate the action expressed by a verb, and so to represent it as a small thing.” The first known usage is from Walter Scott in 1815. The most recent quotation is from 1898 and resonates with me: “Mother! Do just get in with me for a few minutes till the train starts.” I wonder why Mother wouldn’t get in, and I’m pretty sure that “just” didn’t help.

Take-away

Do not, to quote the OED, “represent as a small thing” the difficulty of something you’re asking someone else to do, when you’re not inside their head and don’t understand what they see and feel. The word “just” is a signal that you’re not taking their problem seriously.

So, don’t do that.

I’d like to say “Just don’t” but obviously shouldn’t.

11 Nov 03:49

Designing a write API for Datasette

Building out Datasette Cloud has made one thing clear to me: Datasette needs a write API for ingesting new data into its attached SQLite databases.

I had originally thought that this could be left entirely to plugins: my datasette-insert plugin already provides a JSON API for inserting data, and other plugins like datasette-upload-csvs also implement data import functionality.

But some things deserve to live in core. An API for manipulating data is one of them, because it can hopefully open up a floodgate of opportunities for other plugins and external applications to build on top of it.

I've been working on this over the past two weeks, in between getting distracted by Mastodon (it's just blogs!).

Designing the API

You can follow my progress in this tracking issue: Write API in Datasette core #1850. I'm building the new functionality in a branch (called 1.0-dev, because this is going to be one of the defining features of Datasette 1.0 - and will be previewed in alphas of that release).

Here's the functionality I'm aiming for in the first alpha:

  • API for writing new records (singular or plural) to a table
  • API for updating an existing record
  • API for deleting an existing record
  • API for creating a new table - either with an explicit schema or by inferring it from a set of provided rows
  • API for dropping a table

I have a bunch of things I plan to add later, but I think the above represents a powerful, coherent set of initial functionality.

In terms of building this, I have a secret weapon: sqlite-utils. It already has both a Python client library and a comprehensive CLI interface for inserting data and creating tables. I've evolved the design of those over multiple major versions, and I'm confident that they're solid. Datasette's write API will mostly implement the same patterns I've eventually settled on for sqlite-utils.

I still need to design the higher level aspects of the API though - the endpoint URLs and the JSON format that will be used.

This is still in flux, but my current design looks like this.

To insert records:

POST /database/table/-/insert
{
    "rows": [
        {"id": 1, "name": "Simon"},
        {"id": 2, "name": "Cleo"}
    ]
}

Or use "row": {...} to insert a single row.
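
As a sketch of what calling this endpoint might look like from Python (the base URL and token here are placeholders, and the API itself is still in flux):

```python
import json
import urllib.request

# Placeholders: substitute your own Datasette base URL and API token.
BASE_URL = "https://example.datasette.cloud"
API_TOKEN = "your-signed-token"

def insert_request(database, table, rows):
    """Build a POST request for the (in-progress) insert endpoint."""
    body = json.dumps({"rows": rows}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/{database}/{table}/-/insert",
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = insert_request("database", "table", [
    {"id": 1, "name": "Simon"},
    {"id": 2, "name": "Cleo"},
])
# urllib.request.urlopen(req) would actually send it.
```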

To create a new table with an explicit schema:

POST /database/-/create
{
    "name": "people",
    "columns": [
        {
            "name": "id",
            "type": "integer"
        },
        {
            "name": "title",
            "type": "text"
        }
    ],
    "pk": "id"
}

To create a new table with a schema automatically derived from some initial rows:

POST /database/-/create
{
    "name": "my new table",
    "rows": [
        {"id": 1, "name": "Simon"},
        {"id": 2, "name": "Cleo"}
    ],
    "pk": "id"
}
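
The schema-inference behaviour can be sketched roughly like this. It's a simplified, hypothetical version of the kind of type detection sqlite-utils performs, not the actual implementation:

```python
def infer_schema(rows):
    """Guess SQLite column types from example rows.

    Simplified sketch: the real inference handles more cases
    (mixed types across rows, dates, and so on).
    """
    type_map = {int: "integer", float: "float", str: "text", bytes: "blob"}
    columns = {}
    for row in rows:
        for key, value in row.items():
            if key not in columns and value is not None:
                columns[key] = type_map.get(type(value), "text")
    return [{"name": name, "type": type_} for name, type_ in columns.items()]

infer_schema([{"id": 1, "name": "Simon"}, {"id": 2, "name": "Cleo"}])
# [{'name': 'id', 'type': 'integer'}, {'name': 'name', 'type': 'text'}]
```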

To update a record:

POST /database/table/134/-/update
{
    "update": {
        "name": "New name"
    }
}

Where 134 in the URL is the primary key of the record. Datasette supports compound primary keys too, so this could be /database/docs/article,242/-/update for a table with a compound primary key.
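
Here's a small, hypothetical helper showing how such URLs could be assembled client-side. The exact escaping rules for primary key values are one of the details still being worked out, so treat this as illustrative:

```python
from urllib.parse import quote

def update_url(database, table, pk_values):
    # Compound primary keys are joined with commas, as in
    # /database/docs/article,242/-/update
    pk_part = ",".join(quote(str(v), safe="") for v in pk_values)
    return f"/{database}/{table}/{pk_part}/-/update"

update_url("database", "docs", ["article", 242])
# '/database/docs/article,242/-/update'
```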

I'm using an "update" nested object here rather than having everything at the root of the document because that frees me up to add extra future fields that control the update - "alter": true to specify that the table schema should be updated to add new columns, for example.

To delete a record:

POST /database/table/134/-/delete

I thought about using the HTTP DELETE verb here and I'm ready to be convinced that it's a good idea, but thinking back over my career I can't recall a single time when DELETE offered a concrete benefit over just sticking with POST for this kind of thing.

This isn't going to be a pure REST API, and I'm OK with that.

So many details

There are so many interesting details to consider here - especially given that Datasette is designed to support ANY schema that's possible in SQLite.

  • Should you be allowed to update the primary key of an existing record?
  • What happens if you try to insert a record that violates a foreign key constraint?
  • What happens if you try to insert a record that violates a unique constraint?
  • How should inserting binary data work, given that JSON doesn't have a binary type?
  • What permissions should the different API endpoints require? (I'm looking to add a bunch of new ones.)
  • How should compound primary keys be treated?
  • Should the API return a copy of the records that were just inserted? Initially I thought yes, but it turns out to be a big impact on insert speeds, at least in SQLite versions before the RETURNING clause was added in SQLite 3.35.0 (in March 2021, so not necessarily widely available yet).
  • How should the interactive API explorer work? I've been building that in this issue.
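
On the binary question, sqlite-utils already represents BLOB values in JSON using a small wrapper object, and accepting that same shape is one plausible answer for the write API. A sketch:

```python
import base64

# sqlite-utils renders SQLite BLOB values in JSON as a wrapper object;
# the write API could plausibly accept the same format on input.
def encode_blob(data):
    return {"$base64": True, "encoded": base64.b64encode(data).decode("ascii")}

def decode_blob(obj):
    if obj.get("$base64") is not True:
        raise ValueError("not a base64 wrapper object")
    return base64.b64decode(obj["encoded"])

encode_blob(b"\x00\x01")
# {'$base64': True, 'encoded': 'AAE='}
```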

I'm working through these questions in the various issues attached to my tracking issue. If you have opinions to share you're welcome to join me there!

Token authentication

This is another area that I've previously left to plugins. datasette-auth-tokens adds Authorization: Bearer xxx authentication to Datasette, but if there's a write API in core there really needs to be a default token authentication mechanism too.

I've implemented a default mechanism based around generating signed tokens, described in issue #1852 and in this in-progress documentation.

The basic idea is to support tokens that are signed JSON objects (similar to JWT but not JWT, because JWT is a flawed standard - I rolled my own using itsdangerous).

The signed content of a token looks like this:

{
    "a": "user_id",
    "t": 1668022423,
    "d": 3600
}

The "a" field captures the ID of the user who created that token. The token can then inherit the permissions of that user.

The "t" field shows when the token was initially created.

The "d" field is optional, and indicates the duration in seconds after which the token should expire. This allows for the creation of time-limited tokens.
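To make the mechanics concrete, here's a minimal sketch of signing and verifying a token with this shape. It deliberately uses Python's standard-library hmac rather than itsdangerous (which is what Datasette actually uses), so the wire format here is illustrative only:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # stand-in for the application's signing secret

def _b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_token(actor_id, duration=None):
    """Sign a payload shaped like the JSON object above:
    "a" = actor ID, "t" = creation time, "d" = optional duration."""
    payload = {"a": actor_id, "t": int(time.time())}
    if duration is not None:
        payload["d"] = duration
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    return f"{body}.{sig}"

def verify_token(token):
    """Return the payload if the signature checks out and the
    token has not expired, otherwise None."""
    body, _, sig = token.partition(".")
    expected = _b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = body + "=" * (-len(body) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if "d" in payload and time.time() > payload["t"] + payload["d"]:
        return None  # the token's duration has elapsed
    return payload
```

Because everything lives in the signed payload, no database lookup is needed to verify a token - which is also why individual revocation requires a different, database-backed design.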

Tokens can be created using the new /-/create-token page or the new datasette create-token CLI command.

It's important to note that this is not intended to be the only way tokens work in Datasette. There are plenty of applications where database-backed tokens make more sense, since they allow tokens to be revoked individually without rotating secrets and revoking every issued token at once. I plan to implement this pattern myself for Datasette Cloud.

But I think this is a reasonable default scheme to include in Datasette core. It can even be turned off entirely using the new --setting allow_signed_tokens off option.

I'm also planning a variant of these tokens that can apply additional restrictions. Let's say you want to issue a token that acts as your user but is only allowed to insert rows into the docs table in the primary database. You'll be able to create a token that looks like this:

{
    "a": "simonw",
    "t": 1668022423,
    "r": {
        "t": {
            "primary": {
                "docs": ["ir"]
            }
        }
    }
}

"r" means restrictions. The "t" key indicates per-table restrictions, and the "ir" is an acronym for the insert-row permission.
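Here's how checking such a restricted token might work - a sketch based purely on the structure above, not on Datasette's eventual implementation:

```python
def token_allows(token, action, database, table):
    """Check a restricted token against the scheme sketched above:
    "r" holds restrictions, its "t" key holds per-table rules, and
    the values are lists of permission acronyms such as "ir" for
    insert-row. A token with no "r" key falls back to the actor's
    own permissions (represented here as True for simplicity)."""
    restrictions = token.get("r")
    if restrictions is None:
        return True  # unrestricted: defer to the actor's permissions
    tables = restrictions.get("t", {})
    return action in tables.get(database, {}).get(table, [])

token = {
    "a": "simonw",
    "t": 1668022423,
    "r": {"t": {"primary": {"docs": ["ir"]}}},
}
token_allows(token, "ir", "primary", "docs")   # True
token_allows(token, "ir", "primary", "other")  # False
```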

I'm still fleshing out how this will work, but it feels like an important feature of any permissions system. I find it frustrating any time I'm working with a system that doesn't allow me to create scoped-down tokens.

Releases this week

TIL this week

11 Nov 03:45

PyScript Updates: Bytecode Alliance, Pyodide, and MicroPython


Absolutely huge news about Python on the Web tucked into this announcement: Anaconda have managed to get a version of MicroPython compiled to WebAssembly running in the browser. Pyodide weighs in at around 6.5MB compressed, but the MicroPython build is just 303KB - the size of a large image. This makes Python in the web browser applicable to so many more potential areas.

11 Nov 03:45

Beck Tench and Zen of Zooming

by Nancy White
I was so, so, so sure I had blogged about this wonderful online practice shared in 2020 by the amazing Beck Tench. But when I went to find it to reshare, I could not find it in my archives. So today I am getting this on the blog! On Zoom we have all these little …


11 Nov 03:45

a survivability onion for privacy tools?

This is intended to be part 1 of a series of notes on figuring out how to apply Integrated Survivability Assessment, or something similar, to personal privacy protection.

Starting with some good news. There are several versions of the Survivability Onion, but most appear to be US government work and so not copyrighted. I'm going to borrow it because it looks like a good starting point for setting priorities when designing a privacy tools and services stack. Yes, in the long run, the real impact of individual privacy measures will be not so much in how you’re protected as an individual, but in how you help drive future investments away from surveillance and toward more constructive projects.

It would be good to get more privacy people leveled up:

  • Level 1: a mix of effective and ineffective actions

  • Level 2: effective actions, but applied haphazardly (this is about where I am now)

  • Level 3: effective actions, efficiently selected and applied

If you want privacy, prepare for surveillance? All right, onion time.

[Figure: Integrated SoS Survivability Onion chart, showing layers: pre-emptive encounter, pre-emptive kill, avoid/prevent encounter/exposure, avoid detection, avoid targeting, avoid engagement, avoid hit/application, avoid kill]

A survivability onion is a way to visualize layers of protection. From Integrated Survivability Assessment:

The separate and independent “layers” of functions, which the threat has to “penetrate” to kill the system in a typical engagement, are most often represented mathematically by independent probabilities; thus, the overall probability of survival is the product of the independent component probabilities.

Since you have limited resources when designing an armored vehicle or whatever, you can apply your limited weight and money budgets to the most effective combination of layers. The objective is to maximize the probability of survival - equivalently, to minimize the product of the probabilities of the attack getting through each layer.
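Since the layers are independent, the arithmetic from the quoted passage is just a product. Here's a small illustration with invented numbers (none of these probabilities come from the source):

```python
import math

# Invented per-layer probabilities that an attack penetrates
# each layer of the onion.
penetration = {
    "avoid detection": 0.5,
    "avoid targeting": 0.4,
    "avoid engagement": 0.3,
    "avoid hit": 0.6,
}

# The threat has to get through every independent layer to succeed,
# so the kill probability is the product of the per-layer values,
# and survival is its complement.
p_kill = math.prod(penetration.values())
p_survive = 1 - p_kill
```

Even layers that are individually leaky multiply together into a low overall penetration probability, which is the whole argument for stacking defenses.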

And hey, that sounds familiar. We have a limited amount of time, money, and political juice for privacy stuff too. I think we can visualize the protection options in a similar way. Here's a first attempt at a survivability onion for a personal privacy stack, with some examples of what fits into what layer.

  • Don't do a trackable activity (delete a surveillance app, don't visit a surveilled location, boycott a vendor)

  • Don't send tracking info (block tracking traffic, either by using a tool like Disconnect to keep a tracking script from loading, or using a network filter like Pi-hole to prevent tracking SDKs from communicating with their hosts)

  • Send tracking info that is hard to link to your real info (use an auto-generated email address system like Firefox Relay, churn tracking cookies with Cookie AutoDelete)

  • Object or opt out when doing a tracked activity (Global Privacy Control)

  • Object, exercise the right to delete, or opt out later, after data has been collected but before you are targeted (CCPA Authorized Agents, RtD automation tools like Mine)

So that's step one—define the layers of the onion.

Next step: assessing threats. (Will add a link here soon.)

Bonus links

Devs: It’s Time to Consider IPFS as an Alternative to HTTP

The Remote Control Killers Behind Russia’s Cruise Missile Strikes on Ukraine

Keep your family’s internet private with Total Cookie Protection on Firefox

Collections: Strategic Airpower 101

Google Ads has become a massive dark money operation

Rent Going Up? One Company’s Algorithm Could Be Why.

11 Nov 03:44

With #twittermigration still continuing, althou...

by Ton Zijlstra

With #twittermigration still continuing, although it seems at a lower speed by now, I am wondering about my multiple Twitter accounts that I’ve used over the years, next to my two personal ones, and that I infrequently still use.

My personal accounts and my company account will stay for a while yet, depending on the path Twitter will take in the coming months. Those are all in ‘broadcast’ mode anyway. I have closed down the Things Network Enschede account (no activity since 2017 since I moved away from Enschede, other than my own occasional RT, and there’s no active group there anymore). I have migrated the IndieWebNL Twitter account to indieweb.social. Not sure yet about some of the others, some of which are nominally group accounts, and some of which have a long history, like the EUdata account I’ve used since 2009.



This is a RSS only posting for regular readers. Not secret, just unlisted. Comments / webmention / pingback all ok.
Read more about RSS Club
11 Nov 03:44

NaBloPoMo2022 : Snow!

by Ms. Jen
The last two days of rain in SoCal has produced some snow on the local LA area mountains and a good deal of snow on California’s North-South spine, the Sierra Nevada Mountains. Today I drove from SoCal to the Eastern Sierras and here is...