Shared posts

20 Nov 21:56

Software Design by Example

The (hopefully) final version of Software Design by Example: A Tool-Based Introduction with JavaScript has gone to the publisher, and physical copies should be available by the end of the year.

Lots of things didn’t get into the book, partly because I ran out of steam but also because I wanted to see if the book had any impact before investing more effort. The most important was a text editor about the size of kilo (1, 2) with undo/redo (which is complex enough to need a whole chapter). The second would be a message queue or pub/sub system like RabbitMQ because partial failure in distributed systems is now a fact of most programmers’ lives, but is rarely taught. I would also have liked to include a fuzz tester, the world’s smallest useful database, something to show readers how model checking works, and other tools.

But what I’d like most is to edit these chapters rather than write them. It’s not less work (trust me), but I’d really like to help other people get a chance to raise their profiles. So if you’re a junior developer and would like to see your name in a book, or an undergrad student who needs a senior project that combines research, design, coding, and writing, please reach out, especially if you’re from an underrepresented or marginalized group.

We make beautiful things, and we ought to share them.


I’ve also now started on a Python version; the topics and some key ideas are listed below, and I’d be grateful for feedback on what else you’d like to see included.

Tool: Key Ideas
  • unit testing framework: code as data; introspection
  • file backup: hashing; interface vs. implementation
  • an interpreter: code as data; the read-eval-print cycle
  • dataframes: interface vs. implementation; performance optimization
  • pipelines: inheritance vs. composition; traceability; task execution
  • a build manager: dependencies as graphs; task execution
  • pattern matching: recursive search; composability
  • a regular expression parser: lazy vs. greedy execution; composability
  • a web server: event-driven computation; error handling; security
  • a large file cache: hashing; interface vs. implementation
  • a log-structured database: systems programming; performance optimization
  • a persistence framework: introspection; interface vs. implementation
  • binary data storage: interpreter internals
  • a page templating tool: code as data; introspection
  • a package manager: recursive search; performance optimization
  • a page layout tool: recursive search; interface vs. implementation
  • a code quality checker: recursive search; introspection; extensibility
  • a code generator: code as data
  • a virtual machine: interpreter internals
  • an interactive debugger: interpreter internals
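
To give a flavor of the first entry (a quick illustrative sketch, not code from the book): because functions are just data, a test runner can use introspection to find and run everything whose name starts with test_.

import traceback

def run_tests(namespace):
    # Functions are objects we can inspect and call: find every callable
    # whose name marks it as a test, run it, and classify the outcome.
    for name, func in list(namespace.items()):
        if name.startswith("test_") and callable(func):
            try:
                func()
                print(f"pass: {name}")
            except AssertionError:
                print(f"fail: {name}")
            except Exception:
                print(f"error: {name}")
                traceback.print_exc()

def test_addition():
    assert 1 + 1 == 2

def test_broken():
    assert 1 + 1 == 3  # deliberately failing

if __name__ == "__main__":
    run_tests(globals())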
20 Nov 21:56

Beautiful Bikes of Bespoked UK - Part 1

by Igor

This was our first time at the Bespoked UK show and it was a blast. It was so great to catch up with friends, meet new people, finally put faces to IG handles, and see cool custom bikes - all the while with track racing going on around us!

[Photo: track racing at the Bespoked UK show]


Bespoked is a London-based expo for custom ("bespoke" if you're in the UK) and high-end production bikes and accessories, mostly from UK and European companies. This year it was hosted in the 2012 Olympic Velodrome. It's really inspiring to be in the same facility that some of the highest class of riders in the WORLD have raced in. But enough chit-chat, let's get to the galleries!

Clandestine

Pi does a masterful job of balancing performance, utility, and classic styling like none other. The integration of racks, mudguards (we're in the UK, remember?), and dynamo lighting creates a perfect package specifically designed for this bike. This all-road touring build features our 650b Smooth Mudguards, Long Setback Seatpost, Disc Rear Hub, Voyager Rims, Retro Bottle Cages, Copenhagen Kickstand, and a previous generation Crazy Bar. The icing on the cake is the super slick, integrated steering stop. It's super useful for keeping the wheel from flopping around when loading up the front of the bike.

[Photo gallery: Clandestine touring bike with Crazy Bars, dynamo lighting, and Velo Orange mudguards and hubs]

Crossley Metal

Duncan is not one who is afraid to experiment with bike designs. One has a bunch of VO, and the other doesn't have a lick of it - I'll let you figure out which one is which.

This is his Chrome Tourer, designed for a gentleman to travel between vineyards in Tuscany. This was faaaaancy. Rohloff hub, Cigne Stem, Randonneur Front Rack, 0 Setback Seatpost, and Chrome 1 1/8" Threadless Headset. The frame and fork are fully chromed and polished to *chef's kiss* perfection.

[Photo gallery: Crossley Metal chrome tourer with Cigne Stem and Randonneur Front Rack]

The other bike he displayed was a Grass Track Racing Bike! I'd never heard of grass track racing, but it looks super fun. It's like track racing on a flat oval, but on grass. So while you still can't use brakes, you do need the extra grip afforded by some fatter tires. This one also has fittings for mudguards for off-track training - a nod to the traditional UK path-racer. The frame is a mix of all sorts of tubing and the fork is custom painted with a bunch of scenes.

[Photo gallery: Crossley Metal grass track racing bike]

Etoile Cycles

I love seeing builders experimenting with tube junctions and designs. This quadruple triangle tourer/commuter from Etoile Cycles features our Milan Handlebars, cushy tires, a custom-built front rack, and loads of details. Super nice, Elodie!

[Photo gallery: Etoile Cycles quadruple-triangle touring bike with Milan Handlebars]

Sour Bicycles

Sour debuted their new Clueless gravel bike featuring a limited edition Shimano GRX group and a complete suite of VO bits! They have done an amazing job of developing production of their bikes in Germany as well as utilizing existing European production capabilities. This bike has our Nouveau Randonneur Bars, Rubbery Bar Tape, Long Setback Seatpost, 31.8mm Threadless Stem, and Mini Rando Bag.

[Photo gallery: Sour Bicycles Clueless gravel bike with Shimano GRX and carbon fork]

Stay tuned for part 2 coming shortly!
20 Nov 21:40

Time is an illusion, Unix time doubly so...

Unix counts time as seconds since the epoch. Seems straightforward. What could possibly go wrong?
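
Here's one small taste (my own example, not from the post): Unix time pretends leap seconds don't exist, so some real-world seconds have no timestamp of their own.

import time
from datetime import datetime, timezone

# Unix time is seconds since 1970-01-01T00:00:00Z, *not counting* leap seconds.
print(time.time())

# Whole-second values round-trip cleanly...
ts = 1_668_981_360
assert datetime.fromtimestamp(ts, tz=timezone.utc).timestamp() == ts

# ...but the leap second 2016-12-31T23:59:60 UTC has no Unix timestamp of
# its own: it collides with a neighboring second, depending on how the
# system clock stepped or smeared through it.
print(datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp())  # 1483228800.0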
20 Nov 05:01

Let me recruit AI teammates into Figma

Okay sorry this is a post mainly about startup growth and KPIs. I apologise in advance.

So we’re on the same page, let me recap some hunches:

  1. This is uncontroversial: AI synthesis has gotten really good. AI that writes code (given a ticket), or makes an illustration (given a brief), or writes an article or makes a diagram or summarises complex information (given a prompt) is here. The technical problems have been solved, and now it’s a matter of improvements, integrations, and UX.
  2. AIs will augment our teams. Sure a single engineer can have smart autocomplete, or an AI word processor can expand your text so you don’t have to type full sentences. But if you’re looking for 10x productivity improvements, then think about teams: an engineer will now be an engineering manager who is wrangling the code contributions of a dozen AIs, submitting their synthesised code as pull requests. A writer will work in a Google Doc alongside an AI editor making suggestions, and an AI fact checker and researcher doing the running, and an AI sub doing the wordsmithing.
  3. NPCs are a better UI for interacting with teammate AIs. Interfaces for all the different ways that AIs can help a team would be incredibly cumbersome – think of the bureaucracy of Jira etc! But if, in our Google Doc, our AI editor can appear as a “non-player character,” using all the regular features that humans do (presence, suggest changes, comments for clarifications etc), then there’s no need for extra UI – it’s just another specialist teammate. Ditto in Github, ditto in Figma, ditto in Zoom for video AIs. Humans and non-humans working together. This is why the multiplayer web is important: it’s a runtime for AIs.

Ok so we've got a capability and an interaction paradigm. What's missing is the economics.


Revenue is a lagging indicator. What I mean by economics in this context are the metrics that precede revenue: acquisition and retention.

  • Users will pay for valuable software – but only if they (a) find out about it, (b) use it, and (c) continue using it
  • Software that is forgotten (has low retention) will eventually not be paid for
  • The model for paying for software has to correspond with the underlying costs of the seller.

This is worth figuring out because otherwise this new model won’t emerge. Companies offering the service won’t grow.

SaaS was an innovation that unlocked Web 2.0 (in b2b). Selling software as a service meant that:

  • Sellers can cover their ongoing development and running costs because they get to charge per user per month
  • Recurring revenue unlocks the ability to experiment with free trials and other customer acquisition tactics; and once the metrics CAC (customer acquisition cost) and CLTV (customer lifetime value) were developed, it was possible to engineer growth, not just hope for it.

This model doesn’t hold in the “AI teammate” world, so we’ll need to find something to replace SaaS:

  • Compute is a meaningful expense for AIs, unlike Web 2.0. It doesn’t make sense to charge per month for an AI editor if the difference between editing 1 article and editing 10 is a meaningful number of dollars to run the models
  • If free trials are off the table, how does acquisition work?

More problems! Given these AIs aren’t essential tools that you open again and again, what’s the retention mechanism? How will users remember that their AI editor teammate function can be invoked?


Here’s a guess: social discovery is the key.

Perhaps app features should be ownable and tradable. A pocketful of feature flags. In short: instead of having thousands of features, mostly unused, undiscovered in a thousand menus, you would see a colleague using a feature in a multiplayer app (like an editing feature in a doc, or co-presenter in Zoom), and then… they could just give it to you. (Or you could buy it.)

Or to put it another way, adopting the NPC = UI metaphor: with AI teammates, instead of having to find the “editor” function in a menu, you would be introduced to the editor NPC in a multiplayer space. (This is why I care so much about the multiplayer web.) You wouldn’t purchase or subscribe, you would recruit.

This takes care of awareness and also the de-risking part of acquisition currently catered for by free trials (if you see somebody else’s editor NPC in action, you’ll already be 50% convinced).

The revenue model is secondary but I think, to begin with, it’ll be a bit like buying credits. You’ll buy X photos synthesised per month, or something like that, and step up and down tiers. Your photo synthesiser NPC (or editor NPC, or engineer NPC) can let you know when you need more.

(Monthly subscriptions won’t work because of the highly variable underlying compute cost. I’ve already seen a few AI projects playing with credits, it makes sense.)


That’s discovery and revenue. What about retention?

The more I think about this, the more I realise that this is a part of the “AI synthesis” capability set which hasn’t yet been built.

Let's imagine we have an AI teammate. If it's like today's software then, for anything powerful, you're going to have to hit a button. But teammates don't wait to be instructed to take on a task; they jump in.

A human editor teammate will maybe make a single suggestion on a doc, and - if you accept it - they will go ahead and do the rest of the work.

If they're feeling underutilised, they'll reach out and actively ask you for things to do – if a clear route isn't evident for a task, they will request the prompt. Or they'll keep an eye on your shared files and projects and make suggestions about where they could help.

Making this work will be hard.

AI synthesis necessarily includes a view of what “good” looks like. What is a good image; what is good code; etc. That’s possible because we have a ton of images in the world; there’s a ton of code, and so on.

BUT: AIs will also need to synthesise what good team behaviour looks like – and jump in. What actions will help the group? Where is it useful to jump in? What will further the goals of the org? How can that even be measured?

As far as I know, self-setting goals is an AI capability that doesn’t exist yet, and it’s beyond the scope of the type of AI synthesis that has been coming along in leaps and bounds these last few months.

Until we have it, I can see people making prototypes of AIs that are useful for teams, but I can’t see startups growing around them.


What are the metrics that will allow for optimising all of this? Interactions per month. Mean social group size per introduction. Introductions per interaction. Unsolicited interaction rate. There will be a whole industry around measuring, correlating, and optimising these.

Hey, here’s another question: what’s the standard NPC API that a multiplayer app (like Figma etc) can offer, such that my new AI helper can join the team? Appearing in the presence bar, being invitable by @ mentions of their name, etc.
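
No such API exists today, so treat this as a strawman sketch (every name below is invented for illustration):

from typing import Protocol

class TeammateNPC(Protocol):
    # A hypothetical contract between a multiplayer app and an AI teammate:
    # callbacks the app fires, plus actions the NPC can take through the
    # app's ordinary collaboration features.

    name: str        # shown in the presence bar alongside human avatars
    avatar_url: str

    def on_join(self, space_id: str) -> None:
        """Called when the NPC is recruited into a doc, board, or call."""

    def on_mention(self, author: str, text: str) -> None:
        """Called when a human @-mentions the NPC by name."""

    def suggest_edit(self, anchor: str, replacement: str) -> None:
        """Propose a change through the app's normal suggestion UI."""

    def comment(self, anchor: str, text: str) -> None:
        """Ask a clarifying question, the way a human teammate would."""

The interesting work is in standardising the nouns (presence, suggestions, comments, mentions) so one NPC can join any app that speaks the protocol.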

A lot to do!

20 Nov 05:01

We need an HTML Document standard

by Russell Beattie


I’ve recently been messing with Markdown (to my utter chagrin) with the idea of using it to update my resume and blog. I was simply astounded by how much recreation of solved problems is going on in that space. I prefer JavaScript/Node so I’ve played with UnifiedJS (remark/rehype), MarkdownIt, MarkedJS, and others. It’s honestly just absurd how over-engineered each project is.

If all you're producing is HTML - which is 99% of what Markdown is used for - having to deal with a raw AST in order to modify a document is like deciding to use Assembly instead of Python. Sure, it could be more efficient if you really want to spend the time, but it's generally a step backwards in every way. I soon realized the only predictable, reliable and maintainable way to deal with Markdown was to extract the frontmatter, then convert the rest into bog-standard CommonMark HTML and then use JSDOM to do any additional manipulation. So instead of fighting with some wonky AST and its APIs, I can use the DOM and standard web tools and code.

But then I gave up. Markdown was created in 2004, when web browsers were anemic compared to today's monsters. Using Markdown in 2022 is like using COBOL. Sure, it works, but we can do better.

What I would like to see is a new HTML Document standard (none of the various implementations out there qualify) that mimics the core reason Markdown and other plain-text systems like AsciiDoc or LaTeX exist: to separate the writing from the presentation, but with some basic formatting as needed for most documents. There are various custom HTML doc formats out there: ePub and mobi files use HTML inside, as do Microsoft's CHM and MHT. And there are a hundred zipped XML file formats out there - docx, odt, etc. But they're either write-only, proprietary, or too complicated for this purpose. And that doesn't even include MIME HTML, CBOR files like Web Bundles, the Internet Archive's WebArchive format, WARC, and more.

What I would want is a simple .htmd standard file format, which - like all the "lightweight markup languages" out there - is just text containing a strict subset of HTML (no forms or iframes) and CSS which basically mimics the output of Markdown. It wouldn't have any JavaScript, enforced by the file extension/MIME type and CSP, nor embedded files like images. The subset would be limited to just semantic tags and reasonable formatting, to guarantee editable HTML. Nothing dynamic or crazy. Just pure WYSIWYG. If the W3C were to adopt the standard, it could also allow custom editor skins like CKEditor, TinyMCE, Trix, etc. - but again, with standard output. This would be great for online forums like HN or Reddit. In standalone apps, like Apple's TextEdit or Microsoft's WordPad, the output would be a cross-platform rich text document that is readable and writable by any browser or standard .htmd editor.
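
Purely as an illustration (the .htmd format doesn't exist; this sample, including the CSP line, is a sketch of what such a file might contain):

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<!-- scripts forbidden by the format's mandatory policy -->
<meta http-equiv="Content-Security-Policy" content="script-src 'none'">
<title>Trip Report</title>
</head>
<body>
<article>
<h1>Trip Report</h1>
<p>We rode <strong>120 km</strong> along the coast. See the
<a href="route.htmd">route notes</a> for details.</p>
<blockquote><p>The best ride of the year.</p></blockquote>
</article>
</body>
</html>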

The idea is to Keep It Simple Stupid, but also provide basic cross-platform WYSIWYG editing where the simple, clean formatting is always displayed exactly like it looks when editing. I used Typora, which is a great little rich text editor that uses WebKit for the interface, and then exports Markdown, which I then process into a web page. It’s insane. We need to cut out that moronic middle step. 

Since a basic HTML Document editor doesn't exist yet, I made one. It's called Hypertext. Go try it out. I'm using it now to write this.

Browser engines have progressed so far since Markdown was created. It's all a matter of standardization at this point. Keep the spec simple and focused on just creating simple documents. If someone wants to use the output as a full-on web page, then it's just a matter of post-processing (just as is done now) and adding full-strength CSS, JavaScript, etc. The CommonMark spec could even be updated so that .htmd is the standard output of a processed .md text file.

The web has tilted too far towards the dynamic app end of the spectrum, and lost its roots as a document format. I think something like this would be a great way to get back to that.

-Russ

20 Nov 05:01

The Hypertext HTML Document Editor

by Russell Beattie


Introducing the Hypertext HTML Document Editor

World Wide Web: Phase 2

Hypertext is a new rich-text editor for creating documents using HTML instead of an 18-year-old text format or complex word processor files. It's an app built with web technologies for creating actual documents - not websites, web pages, or web apps.

In the third decade of the 21st century, there is now a full-featured, powerful web browser on literally every device with a screen. These browsers - the result of countless millions of man hours - are more than capable of being used as editors for HTML. In fact, most writing is moving online, using web-based word processors like Google Docs or Microsoft Office 365, blog creation sites like WordPress, Substack, or Medium, or even services like Slack or Notion.

What if you just want to write a document on your computer, using basic rich-text formatting like bold, italic, headings, etc.? The choices for documentation, a report, an academic paper, or something similar aren't great. You can use a bloated desktop word processor like Microsoft Word or Pages on your Mac, built-in text editors like WordPad or TextEdit which are nearly useless, or use a text editor to create a plain text file with awkward, non-standard markup for formatting.

The question is, why can't we just use HTML for documents - as it was originally designed?

Phase 2 -- Target: 6 months from start

In this important phase, we aim to allow the creation of new links and new material by readers. At this stage, authorship becomes universal.

Sir Tim Berners-Lee, 1990
WorldWideWeb: Proposal for a HyperText Project

The problem is that HTML has become a read-only format, despite being more than capable of supporting both viewing and editing - and it has been for years. There's no real reason for this besides simple organization and standardization. The trouble is that HTML can now do so much that any attempt to create a consumer-focused app to edit it soon gets unfocused and unusable. Inevitably they end up as full-featured "page design" apps, bogged down with so much functionality that they become unwieldy and unused.

For the past decade or so, web browsers have been focused on incorporating more and more interactive functionality, becoming a platform for powerful JavaScript-powered applications and drifting away from the original vision of being a platform for anyone to create content which could be both read and updated by anyone. In fact, some of the earliest browsers, like Netscape Navigator, had an editor built in. Today, creating a page in HTML is done by developers, and using a browser to create content is limited.

Project Fugu is an effort to add the functionality needed to create full-fledged web-powered desktop applications that are indistinguishable from native apps. Hypertext takes advantage of some of these recently launched features to create the first HTML Document editor.

HTML Documents, as the name implies, are rich-text documents using web standards. Hypertext is both a web app for creating and editing these documents, and a proposal for a new .htmd document format. The vision is to create a specification for a standard, safe, guaranteed-editable document using a strict subset of HTML and CSS which replaces the variety of Markdown, AsciiDoc, Office file formats, etc. currently in use. The .htmd format is self-contained, non-proprietary, and includes the necessary Content Security Policy header to make sure it doesn't contain JavaScript; the latter keeps it focused on just content, with security and privacy built in. By using just semantic HTML tags, the documents are guaranteed to be readable and editable even if all CSS styles are removed. Finally, the format will have a separate MIME type and standard .htmd extension so that browsers can modify their parser and security rules accordingly (like .htm and .xhtml today), as well as add the option for basic, built-in editing.

The vision is that by finally defining a standard format, every browser will, in the future, enable its users to both view and edit the document, finally fulfilling Sir TBL's second phase for the World Wide Web, as proposed in 1990.

Rich text without distractions

Rich text, such as bold and italic, among other examples, shouldn't be optional or considered extraneous to language. The fact that computing technology has gone so long ignoring these essential parts of communication is bewildering. You can send a text message from your phone including a variety of customizable emojis in various skin tones, but basic text formatting used for literally hundreds of years is either impossible to enter, or lost in transmission. There's a more than subtle difference between "You really should do something" and "You really should do something" (imagine the second "really" in italics - exactly the emphasis plain text loses). Having to write out ideas using plain text with weird symbols such as _ this _ or * this * is truly a loss, and in the 21st century, completely inexcusable.

One of the main complaints about standard word processing apps and doc formats is the distractions and compatibility problems caused by having too many visual design options available to writers and readers. Sending a highly customized or newer .docx file to someone with a different version of Word is always a disaster. Additionally, the proprietary nature of these apps results in files that are either inaccessibly stored online, or nearly impossible to version using git, maintain by others, or use in post-processing, making them unusable for documentation or other collaborative writing projects.

Using a standard set of semantic HTML and CSS as the underlying markup for documents solves these problems. The class-based styling means the document's visual design will be more organized and maintainable, and anyone can just turn off or remove the custom styling and still see the basic rich text underneath (bold, italic, headings, tables, bullets, etc. will still be visible). The files themselves are just text, like the other text formats out there. But instead of being a proprietary mess, .htmd documents use a universally supported markup language familiar to literally billions of people.

Open source and happy for input

If you agree, help make this a reality. The code to Hypertext is up on Github with an MIT license. It's just a prototype of what is possible. The goal isn't to fork this project, but to help create a new document standard. The objective is to lock down the .htmd format, as well as the capabilities of a standard HTML Document Editor, so that Google, Mozilla, and Apple and the folks at the W3C can just adopt it wholesale and include the functionality in future browsers.

In a few years, every browser should be able to view and edit HTML Documents. Then browser makers can start to optimize their engines so that composing documents becomes fast and reliable, and every forum, blog, or web-based app where people write text can start embedding HTML Document formatting sections in their pages, maybe with a <textarea rich="true"> style tag or similar. Imagine how great it would be for anyone, from children to office workers, to be able to create a "web page" without signing up for some online service or needing a bloated app to do it.

And just think how happy TBL will be to finally have Phase 2 completed after 30 years.

-Russ

20 Nov 05:00

Things I've made and done

Programming environments

  • Code Lauren - An online IDE for beginners. Includes a vm that lets the user run their program forwards or backwards. Watch a demo or try it out.
  • Isla - A livecoding interface and programming language for young children.

Games

  • Pistol Slut - A platform shooter. Guns, grenades, parallax scrolling, particle effects. The enemies work in teams. I talked about the game at JSConf.
  • Empty Black - A puzzle platform shooter. Throw crates, set off bombs, fire missiles, stab with your sword. Featured in Kill Screen, PC Gamer and others.

Frameworks

  • Coquette - A micro framework for JavaScript games.

Building to learn

  • Gitlet - Git implemented in 1000 lines of JavaScript. I used what I learned building it to write an essay and talk on the innards of Git.
  • Little Lisp - A Lisp interpreter in JavaScript and an essay about how it works.

Music

Essays

Talks

Interviews

20 Nov 04:59

Why is Markdown popular?

by Russell Beattie


[Image: Notable's Markdown editor. The left is somehow better than the right.]

I truly loathe Markdown. Truly. But given the widespread use of Markdown, it might seem strange that I have such an aversion to it. If you somehow really like it, or are so used to it by now, you might be tempted to think I'm the oddball. But I'm definitely not alone in my dislike of the format:

You get the idea. My point is, I'm not the only one.

First the good...

Markdown definitely has some benefits, otherwise people wouldn't use it. Off the top of my head:

  1. It separates the design of a document from its content, allowing writers to focus on what they’re writing.
  2. The plain-text nature means that .md docs are lightweight and portable across platforms.
  3. The small number of formatting rules is easily pulled into any publishing system and styled as needed.
  4. It appeals to techies - Markdown's principal users - who want to use vim or emacs to write up documentation and notes (or say they do, anyways).
  5. It’s developer-friendly. Like code, it’s in plain-text so is usable within an IDE and easy to copy/paste chunks of code into documentation.
  6. It contains 95% of the functionality that most people need to create a document.
  7. It’s extensible: If you need additional content in your doc - like a video, or extra formatting options - these can easily be added with HTML tags or custom text markers that can be post-processed.
  8. Because it’s plain-text, it can easily be diffed and managed via git.
  9. Parsing can be as easy as literally 10 lines of RegEx (see the sketch after this list).
  10. Widely supported at this point, with lots of libraries, projects and editors.
  11. Meta-data can be thrown into a YAML frontmatter section at the top.
  12. It’s the only option available, really, as how else are you going to write documents?
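
For what it's worth, here's roughly what that "10 lines of RegEx" claim looks like in practice - a deliberately naive Python sketch of my own, nowhere near spec-compliant:

import re

# Naive Markdown-to-HTML rules; order matters (### before #, ** before *).
RULES = [
    (re.compile(r"^### (.+)$", re.M), r"<h3>\1</h3>"),
    (re.compile(r"^## (.+)$", re.M), r"<h2>\1</h2>"),
    (re.compile(r"^# (.+)$", re.M), r"<h1>\1</h1>"),
    (re.compile(r"\*\*(.+?)\*\*"), r"<strong>\1</strong>"),
    (re.compile(r"\*(.+?)\*"), r"<em>\1</em>"),
    (re.compile(r"\[(.+?)\]\((.+?)\)"), r'<a href="\2">\1</a>'),
]

def naive_markdown(text):
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(naive_markdown("# Title\nSome **bold** and *italic* text."))

Of course, this is exactly the kind of parser that point 9 in the next list is about: it falls over on every edge case.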

So why not just suck it up and use Markdown?

Because, as pointed out by others before me, Markdown sucks. Let me list some reasons why: 

  1. It's barely a spec - just a cobbled-together bunch of general rules which every implementation breaks in one way or another.
  2. There are at least a dozen variations, GFM being the most common, but also MultiMarkdown, Pandoc, CommonMark, etc. And site-specific variations such as Wikimedia, Reddit, WordPress and more.
  3. Each variation produces different default HTML output.
  4. The HTML output is antiquated at best. Though the basic structure of headers and paragraphs is generally semantic, there are none of the modern semantic elements such as main, article, section, nav, header, footer, figure, picture, etc. Embedding videos, social media widgets, etc. isn't possible at all.
  5. Adding in any sort of extra meta data usually requires using YAML, the rules of which are a mystery to me. 
  6. In order to be truly useful, Markdown needs to be post-processed (so the text can be used for blog posts, research papers or online docs, for example) and needs to be extended via embedded HTML or custom tags. For example, Markdoc adds in tags for post processing, Hugo adds in templates for blog posts, etc. And the nightmare which is MDX is… just… wow. Not even once.
  7. Parts of Markdown are truly atrocious, even if you love the general simplicity of it:
    1. Tables: The reason you don't see many tables in Markdown is because the syntax is ludicrous. You either use a WYSIWYG tool or you don't use them.
    2. Images: the link syntax is hard to tell apart from regular links.
    3. Blockquotes. Are people really typing > before each paragraph? No, they’re using an editor.
    4. Numbered lists. Again, you need to use an editor to stay sane.
    5. What the hell are task lists anyways? Why do they exist?
  8. The above means that anyone who writes Markdown regularly is using a WYSIWYG editor or an IDE already, so the whole ‘plain text’ thing doesn’t matter. Why not use a format that isn't completely hamstrung?
  9. The markup is brittle as hell and ends up causing weird edge cases, challenging even the best parsers. (###**this is __test__** … Is that bold with italics? Italic bold? In a heading? Wait…)
  10. The libraries for manipulating Markdown seem to be either RegEx based, or otherwise use an Abstract Syntax Tree. If you haven’t tried manipulating a document using an AST, let me assure you it's a non-trivial effort. The APIs are either so low-level as to make any change a 50 line script, or they end up using a badly made version of the standard DOM API. I spent several hours trying to convince one library to simply wrap an element with another before finally giving up, loading up JSDOM and doing it in 3 lines of code. 
  11. Markdown will never get beyond developers. It’s a way for people who are used to writing code to write docs without being bothered about how it looks. But the fact is that the output has to be readable and decent looking.
  12. Any developer who does care about how the output looks - say for someone trying to set up a new blog or documentation - either spends hours trying to figure out various libraries or gives up and uses an existing project with mixed results. 
  13. Goddamn it’s fugly. This is last, but honestly, this is my number one gripe. It’s 2022, we shouldn’t be using ascii-text to write documents.

OK, so use something else...

Like what? Plain text doesn't provide rich text. And the standard apps for creating documents are neither simple, open nor standard.

  1. Word processors are generally bloated, non-standard, self-contained applications with file formats that aren’t suited for post-processing or creating web pages. They’re barely suitable for creating usable documents at this point.
  2. Google Docs doesn't even have a document format. It's all online or exported in different formats - all except .docx are read-only. If you don't feel like storing your docs on Google's servers, you're out of luck. 
  3. Word processors and editors that support HTML exporting - from TextEdit or WordPad to Google Docs and Microsoft Word - create HTML that is non-standard, bloated, and ugly. Each one has its own way of formatting. This isn't that surprising: given there's no standard HTML Document format, and given the web's flexibility, there are dozens of ways to format a page. Bold text could be <b> or <strong> or a <span style="font-weight:bold"> or <span class="bold">, and all are used regularly.
  4. The HTML “web design” apps that do exist - and there aren’t many left - are focused on site design. They aren't a tool to be used for writing, but for layout of already created content.
  5. HTML document editors - aka word processors - focused on writing and sharing docs don’t exist. (Until now.)

This means that if you want to write a document in HTML as easily as you create a Google or Word Doc, you’re out of luck. It's not that word processors always create horrible output - Google Docs does a decent job of exporting a web page as an .html or zip file - but it's always a one-way process. Good luck importing that back into the editor or Word or anything else. And again, each has its own custom way of implementing each style. Oh, and the exported web page inevitably doesn’t actually look exactly like the document you were just editing - margins and spacing are off, etc.

Why don’t we have an HTML-based rich text standard yet?

I have zero idea. In fact, we seem to be getting farther away from one as Markdown popularity surges. 

The other day I read a blog post by the folks at Mozilla - who are supposed to be the standard bearers of, um, web standards - and I was truly shocked that they decided to convert all their documentation to Markdown last year. What?? In fact, they stopped using HTML in order to do it.

The blog post starts out by saying that the reason was because Markdown is "much more approachable and friendlier", then proceeds to list the various Mozilla-specific Github projects that one needs to download and install (not including Node, Yarn and Git), and the multiple steps needed to actually generate the docs. Again... what??

And of course, Mozilla too has its own variant of Markdown, which builds on GFM. And of course, since Markdown doesn't do much besides headers, paragraphs, bold, italics and bullets, they need custom macro tags to make it do what they need. Which is of course both non-standard, as well as being completely invisible to the writer, so who knows what the end result will be: 

[Image: Mozilla's custom macro markup. Kill Me Now.]

I mean, kill me now. If Mozilla, of all organizations, have dumped HTML in favor of Markdown and consider the above better than, you know, <span class="foo">, then I must be tilting at windmills. I get it.

That doesn't mean I'm wrong.

As I wrote in my previous post, we need an HTML Document standard. It's not a matter of technology at this point, it's a matter of simply deciding on a manageable subset of HTML and CSS and then calling it a standard. Then it needs to be built into every browser on the planet. I'm using my own Hypertext HTML Document editor to write this, but I shouldn't have needed to go through the effort. Mozilla should really be ashamed of itself for not doing it first, quite honestly. There is a W3C Working Group dedicated to Web Editing, but it seems they're just focused on a few APIs, basically continuing the focus on "web apps". The ePub Working Group is only focused on read-only e-books, and doesn't seem to be concerned at all about doing anything to enable actually writing them using web standards.

My next post will be about the three-decade-long quagmire of "encapsulated HTML" formats that are out there, as it's an interesting topic of dead-ends and disagreements between browser makers. Then after that I'll post a proposal for what an HTML Document spec would look like.

-Russ

19 Nov 16:39

Why you’re not a writer (yet)

by Josh Bernoff

Let’s do a thought experiment. Imagine a time in your near future. You’ve written a book. It has made a big impact on a lot of people. Everyone who reads it says, “Wow, now I understand.” You have many fans, and they love what you’ve taught them. They talk about it often. You see people … Continued

The post Why you’re not a writer (yet) appeared first on without bullshit.

12 Nov 21:31

Microsoft Devkit for ARM

by Volker Weber

I like the Surface Pro X a lot, and with the Surface Pro 9 the ARM architecture is now available as an alternative to Intel. In exchange, Microsoft has dropped the AMD processors from the lineup.

My skepticism about compatibility is slowly fading, now that I also have WireGuard running on the Surface Pro X. Kernel drivers like that are still rare; I miss them, for example, with audio equipment.

The road may still be rocky, but the future belongs to the ARM architecture. To speed up the development of Windows applications for desktop and server, Microsoft now offers a devkit that I would quite happily use as a desktop machine:

  • 32 GB LPDDR4x RAM and 512 GB of fast NVMe storage
  • Snapdragon® 8cx Gen 3 Compute Platform
  • Ports: 3x USB-A, 2x USB-C, Mini DisplayPort (HBR2 support), Ethernet (RJ45)

At just under 700 euros, it is not even particularly expensive.

11 Nov 19:49

Running the numbers on the journey to insight

by Jim

I’m a product of the case method approach to an MBA. After two years of analyzing three cases a day, I then spent time as a case writer. One of the first questions you would always face was “have you run the numbers?”

Figuring out which numbers you should run and what the heck “running the numbers” was supposed to mean was all part of the learning process.

Vaclav Smil’s most recent book, How the World Really Works: The Science Behind How We Got Here and Where We’re Going, provides an excellent example of the power of this strategy. It also offers a flavor of the experience I encountered too often when I faced a professor without running the numbers first. Here’s Smil’s motivation for this book:

The gap between wishful thinking and reality is vast, but in a democratic society no contest of ideas and proposals can proceed in rational ways without all sides sharing at least a modicum of relevant information about the real world, rather than trotting out their biases and advancing claims disconnected from physical possibilities.

Smil lays out his case for the relevant information about the real world that we ought to share. He starts with energy and food. Hard to get much more fundamental than that. If you’ve got eight billion people on the planet, that’s going to call for a lot of food to produce and distribute. That production and distribution depends on energy and most of that energy comes from fossil fuels. Fossil fuels that won’t be easily displaced by renewable sources at the scale implied by a population of eight billion.

This is a theme that Smil continues to hammer on: that you have to look at systems and scale in sync. He drives that home in a series of chapters examining his candidates for the four material systems that underpin our current economic environment: steel, cement, plastics, and ammonia. Not exactly the “software is eating the world” message that we’ve become accustomed to.

Smil stops short of offering specific policy recommendations. His desire is to see policy debates grounded in a better understanding of the relevant underlying systems and their scale. He hints at options that he deems plausible:

Solutions, adjustments, and adaptations are available. Affluent countries could reduce their average per capita energy use by large margins and still retain a comfortable quality of life. Widespread diffusion of simple technical fixes ranging from mandated triple windows to designs of more durable vehicles would have significant cumulative effects. The halving of food waste and changing the composition of global meat consumption would reduce carbon emissions without degrading the quality of food supply. Remarkably, these measures are absent, or rank low, in typical recitals of coming low-carbon “revolutions” that rely on as-yet-unavailable mass-scale electricity storage or on the promise of unrealistically massive carbon capture and its permanent storage underground.

The reality is that any sufficiently effective steps will be decidedly non-magical, gradual, and costly.

This is a book that should be widely read. What it needs is a companion volume that delves into the human systems side of how we might go about tying the politics of large-scale system change to a grounded acceptance of the facts on the ground.

The post Running the numbers on the journey to insight appeared first on McGee's Musings.

01 Nov 01:36

The Socials

by Rob Campbell
This week, the man with the most billions shuffled some bits around, grabbed a sink, and strolled into the headquarters of Twitter Inc after months of speculation, dodging and hijinks. This self-styled meme-lord and grade-A shit-poster kicked out the stewards of our pre-eminent online toilet to the enthusiastic cries and wailings of his army of […]
01 Nov 01:36

I Don't Know What to Teach

I’ve been working with biologists and bioinformaticians for a year now, and the more time I spend with them, the more I question what I’ve been teaching and preaching for the past dozen years. For example, one of my colleagues has several hundred Jupyter notebooks in a Git repo, each of which analyses one or a few datasets. When a lab scientist sends him new data he makes a copy of one of the notebooks, tweaks it (a process which can take anywhere from minutes to days), and then runs it to produce the plots and tables the scientist needs.

“Why don’t you just have one parameterized notebook?” I asked. “Because every single analysis is different,” he said, patiently and somewhat wearily. The scientists are exploring new compounds, new experimental methods, new everything. He’s already refactored the shared bits of code into a library, but if he adds enough options and configuration parameters to those functions to handle every case, he will essentially have moved his file structure into an untraceable tangle of if/else statements and overridden methods.

I described this to a fellow programmer a few days ago, and she said, “Oh come on: there has to be a way.” That’s when I realized that I don’t know how to teach even more than I thought. The mindset, methods, and tools that software engineers have developed to solve their problems aren’t automatically a fit for researchers and data analysts. We’re manufacturing t-shirts in standard sizes; they’re tailoring bespoke suits, and it’s wrong in a very self-centered way for us to say that what they’re doing must somehow fit into our paradigm. I therefore need to find things to teach to researchers that are a better fit to their needs, and find ways to convince software engineers that those needs really are different and really do need different kinds of support.

Later: a data scientist DM’d me to ask why there isn’t a git cp command to record the fact that file X is a copy-with-modifications of file Y. They’ve seen tools that draw the branch-and-merge structure of Git commits, and want something similar to show the genealogy of files that coexist simultaneously in the same branch. I’m going to have to think about this one…
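
(A partial answer, for what it's worth: stock Git has no git cp, but it can detect copies heuristically when it formats diffs, without recording anything at commit time. The flags below are real; the workflow is just a sketch.)

$ cp analysis-v1.ipynb analysis-v2.ipynb
$ # ...tweak analysis-v2.ipynb, then commit it as usual...
$ git add analysis-v2.ipynb
$ git commit -m "New analysis, copied from v1"
$ # ask for copy detection when viewing history:
$ git log -C --find-copies-harder --summary

Because the copy is rediscovered on every query rather than recorded, this may be exactly what the data scientist was trying to avoid - which is perhaps the real feature request.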

01 Nov 01:35

Escape From the Rest of Us

“For them, the future of technology is about only one thing: escape from the rest of us.”
– Douglas Rushkoff, Survival of the Richest

01 Nov 01:35

git bisect

by Simon Willison

I extracted and enhanced this TIL from my April 8th 2020 weeknotes to make it easier to find.

I fixed two bugs in Datasette using git bisect run - a tool which lets you run an automated binary search against a commit log to find the source of a bug.

Since I was figuring out a new tool, I fired up another GitHub issue self-conversation: in issue #716 I document my process of both learning to use git bisect run and using it to find a solution to that particular bug.

It worked great, so I used the same trick on issue #689 as well.

Watching git bisect run churn through 32 revisions in a few seconds and pinpoint the exact moment a bug was introduced is pretty delightful.

The first step is to tell it the range of commits that you want to search in, using git bisect start:

$ git bisect start main 0.34
Bisecting: 32 revisions left to test after this (roughly 5 steps)
[dc80e779a2e708b2685fc641df99e6aae9ad6f97] Handle scope path if it is a string

Then you provide a script that exits with an error if the bug is present.

Usually you would use pytest or similar for this, but for the bug I was investigating here I wrote this custom script and saved it as check_templates_considered.py:

import asyncio
from datasette.app import Datasette
import httpx

async def run_check():
    # Start an in-memory Datasette instance and fetch the homepage.
    ds = Datasette([])
    async with httpx.AsyncClient(app=ds.app()) as client:
        response = await client.get("http://localhost/")
        assert 200 == response.status_code
        # The bug: the "Templates considered" comment went missing.
        assert "Templates considered" in response.text

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run_check())

This script will fail with an assertion error if Templates considered is not included in the HTML for the homepage.

To run the bisection, use git bisect run <script goes here>:

$ git bisect run python check_templates_considered.py
running python check_templates_considered.py
Traceback (most recent call last):
...
AssertionError
Bisecting: 15 revisions left to test after this (roughly 4 steps)
[7c6a9c35299f251f9abfb03fd8e85143e4361709] Better tests for prepare_connection() plugin hook, refs #678
running python check_templates_considered.py
Traceback (most recent call last):
...
AssertionError
Bisecting: 7 revisions left to test after this (roughly 3 steps)
[0091dfe3e5a3db94af8881038d3f1b8312bb857d] More reliable tie-break ordering for facet results
running python check_templates_considered.py
Traceback (most recent call last):
...
AssertionError
Bisecting: 3 revisions left to test after this (roughly 2 steps)
[ce12244037b60ba0202c814871218c1dab38d729] Release notes for 0.35
running python check_templates_considered.py
Traceback (most recent call last):
...
AssertionError
Bisecting: 1 revision left to test after this (roughly 1 step)
[70b915fb4bc214f9d064179f87671f8a378aa127] Datasette.render_template() method, closes #577
running python check_templates_considered.py
Traceback (most recent call last):
...
AssertionError
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[286ed286b68793532c2a38436a08343b45cfbc91] geojson-to-sqlite
running python check_templates_considered.py
70b915fb4bc214f9d064179f87671f8a378aa127 is the first bad commit
commit 70b915fb4bc214f9d064179f87671f8a378aa127
Author: Simon Willison
Date:   Tue Feb 4 12:26:17 2020 -0800

    Datasette.render_template() method, closes #577

    Pull request #664.

:040000 040000 def9e31252e056845609de36c66d4320dd0c47f8 da19b7f8c26d50a4c05e5a7f05220b968429725c M	datasette
bisect run success

The final output shows exactly which commit introduced the bug. In this case it was 70b915fb4bc214f9d064179f87671f8a378aa127 (the "first bad commit").

01 Nov 01:18

The story of this empty box

[Photo: a file folder box filled with empty file folders]

I’ve spent the last week or so cleaning, purging, and reorganizing my office. I had encapsulated myself in a micro-hoarder situation in the shed, which made it difficult to get focused. Clutter, as I’m learning, is adversarial to my distractible brain. One task that’s been looming over my head was this box of file folders.

This box of file folders was a collection of all expenses and tax documents from the years 2012-2016. And I mean everything. Every gas, electric, internet, and credit card bill that came to my house. Every business expense. Every health expense. Every mortgage payment. The reason I hoarded all these documents? Self-employment. All those numbers factor into my tax bill every year, so I kept every physical document that was relevant. America glamorizes starting your own business as “entrepreneurship” or “making your own hours” but it’s a lot like paying a lot of extra self-employment taxes for the privilege of meticulously holding on to and organizing your literal garbage.

But, because my accountant said I need to keep five years’ worth of documents, I now get the privilege of shredding each of them in my little shredder (that I still have the receipt for, natch 😉).

Shredding this box of documents was an emotional journey.

  • 2012 was the year Paravel did the Microsoft homepage. Business was going good for Paravel, but this was like strapping ourselves to a rocketship. It wasn’t buy a yacht money by any means, but life-changing in different and intangible ways.
  • 2013 we found out my wife was pregnant after years of trying. Prenatal hospital bills filled this year’s folder as well as the receipt for the cash payment of $5,300 for the birth of my son because my insurance was so bad that was the cheapest option.
  • 2014 also filled with hospital bills for the new baby’s checkups. Business receipts for conferences in other countries began to show up. I built my first shed at my old house. This was also the year my accountant ghosted me and I learned they basically guessed a number on my 2013 taxes. Yikes.
  • 2015 oops, pregnant again. More hospital bills. More cash payments for births. This was also the year of #davegoeswindows, so lots of receipts as I navigated that experience.
  • 2016 fewer hospital bills, more computer parts; things were beginning to stabilize as we adjusted to our new life with two kids.

From entering a new era of business, to having our two kids, to colossal tax liabilities, to changing my entire workspace and workstation; this box was a major arc in my (and my family’s) life. There were a lot of memories living inside that box of mundane files. There was also a lot of garbage. Two whole entire compost bins full of shredded document garbage, to be exact. I’m thankful for the memories and the opportunity to reflect, but glad I eradicated those little paper horcruxes.

31 Oct 02:30

About the sqlite3 WASM/JS Subproject


SQLite now maintains an official WebAssembly build. It's influenced by sql.js but is a fresh implementation with its own API design. It also supports the Origin-Private FileSystem (OPFS) - a very new standard, without wide browser support yet, that allows websites to save and load files using a dedicated folder on the host machine.

31 Oct 02:29

Ten Things That Astonish Me

by Dave Pollard

photo by Mitchell Kaneshkevich

Thanks to PS Pirro for the prompt for this post, asking us what still astonishes us; here’s my list:

  1. The realization that everything we think of as real is either just an appearance (like separate ‘things’ in space and time) or just a mental fabrication (like our ‘selves’). Scientists keep discovering this to be true, and even they find it too astonishing to believe, and keep looking for a different answer.
  2. The staggering diversity of complex life — from bats to jellyfish to water-bears to sharks to seeds. And birds!
  3. The complexity and resilience of the human (or any creature’s) body. And the fact that it’s not a ‘thing’ but a borderless complicity, a trillion trillion inseparable things, endlessly, coherently interacting.
  4. Fungi.
  5. Music — its effect on us, what makes it ‘music’, and the mysterious process of its composition.
  6. The evocative power of light — firelight, street-lamps, candlelight, the light of the sun (especially at dawn, at dusk, and reflected), the moon and the stars.
  7. Gaia — the evolution and complicity of all life on earth.
  8. The macroscopic and microscopic universe, which, I have to believe, are infinite. Go as far or as deep as you like, you’ll never find the end. It’s turtles all the way down, and all the way up.
  9. Electricity. (Or more precisely, electromagnetism.) We have no idea how it ‘works’. Neither do the eels who’ve employed it for seven million years, or the birds who’ve navigated by it for 150 million.
  10. Imagery — reflections in water, prints in the sand, photographs, and all forms of art that conjure up images — that ‘imageine’.

Yes, I know, nothing really in this list specifically about the human species, or its ‘soul’ or ‘spirit’ or mental capacity or accomplishments or propensity to fall in love and persevere and make stuff. I am undoubtedly a misanthrope, but I see nothing extraordinary or astonishing about homo sapiens. Even our destructiveness is unremarkable.

But these more-than-human things — astonishing!

What would be on your list? No sarcasm please, though I’m sure it’s tempting; plenty of time for that later. Of human accomplishments, what would you rate as most astonishing? Language and how we’ve spread it? Some medical advance? The arrowhead and other weaponry? The control of fire and water?

31 Oct 02:29

Talking With Those Who Disagree With You

by Dave Pollard

Resilience co-editor Bart Anderson recently sent me a link to an article by astrophysicist Ethan Siegel called Four Rules of Persuasion, which Ethan developed mainly for dealing with people who spread, and believe, mis- or disinformation. His preamble:

The large amount of misinformation, disinformation, and loudly opinionated ignorance that’s out there makes the task of … communication more difficult than ever. Much of what passes for journalism these days falls into the trap of giving equal time, space, and weight to positions of vastly unequal merit; don’t fall for it. Even if it may not seem like it’s the case, the general public has a great craving to learn the truth. Here’s how you can tell it persuasively, without getting distracted by the noise.

And his four rules are:

  1. Never waste your time explaining yourself to someone who is committed to misunderstanding you.
  2. When speaking in front of an audience that’s been misinformed, don’t address your response to the misinformer, but rather to those who would benefit from hearing a correct, and different, narrative.
  3. Do not be afraid to make your own points that you believe are important. The audience will not recall 100% of what you say, so make sure you emphasize and repeat the most vital takeaways.
  4. [Be open to] criticism [but understand it] is part of the game. Chew on the meat and throw away the bones. And if it’s all bones, throw it all away.

(I have slightly edited the fourth rule for clarity.)

The first rule is a variation on Daniel Quinn’s famous exhortation:

People will listen when they’re ready to listen and not before. Probably, once upon a time, you weren’t ready to listen to an idea that now seems to you obvious, even urgent. Let people come to it in their own time. Nagging or bullying will only alienate them. Don’t preach. Don’t waste time with people who want to argue. They’ll keep you immobilized forever. Look for people who are already open to something new.

I prefer Daniel’s phrase “aren’t ready to listen” to Ethan’s “are committed to misunderstanding” simply because it’s more inclusive. If open-minded people aren’t ready to listen to what you have to say, perhaps because it’s too radical for them to accept yet, or because it would take you more time than either of you has to provide sufficient context for them to appreciate your argument, then it simply won’t be heard. Even if they’re not “committed to misunderstanding you”.

The second rule, of addressing yourself to a “reasonable person”, even if it’s a stretch to even imagine one present in a hostile group, is far more useful than trying to disentangle their misinformation. This is akin to what George Lakoff calls “reframing”. You can be tied up in knots if you’re constantly trying to explain yourself using the terminology and frame of reference of someone who’s misinformed. Start with a clean slate, providing the facts and evidence to support your argument, rather than trying to refute someone else’s.

Related to this, I think, is starting with the presumption that we’re all doing our best, and that people who spread mis- and dis-information honestly believe what they’re saying — that they’ve been led astray by someone else. Starting with genuine curiosity about why someone would believe something that is clearly incorrect, rather than starting from a place of defensiveness or animosity, seems to me a sensible approach.

Ethan’s third rule is, I think, the most critical one. It’s amazing what people will hear only when you’ve said it several times. And how open people are to new information, presented factually and supported, even when it doesn’t fit with their worldview.

The fourth rule is the toughest. I think that’s because it’s really hard for us not to take criticism personally, whether or not it is intended to be taken that way.

So, for example, when I am criticized or attacked for saying that I believe the state has no right to tell someone what they can or can’t do with their body (including aborting a fetus or taking their own life) provided that action or inaction does not adversely affect their community, I can be attacked (and charged with hypocrisy) from both sides. “Taking one’s own life hurts dependants and loved ones, and is therefore selfish and immoral”, I have been told, while anti-vaxxers say “No one has the right to force anyone else to take a vaccine that might be dangerous to them personally”.

These are powerful emotional arguments, not easily dealt with by providing more facts. I could spout data forever showing that statistically for every x people refusing to get vaccinated, one vulnerable person will die ‘unnecessarily’ of the disease, or that your chances of getting a chronic illness or dying of the disease personally are y times higher if you haven’t been fully vaccinated (where y is many times higher than the risk of comparable consequences from getting the vaccine). That will get me exactly nowhere.

Instead, drawing on Ethan’s first two rules, it would be better, in my discussions with people making those arguments, for me to provide facts, and supporting information, to other people in the conversation who, if presented with new information, new ideas, or new perspectives, are open to them. And not to waste time debating or arguing with those who are not.

There have been many situations, especially over the past decade or so, when I have, as a result of hearing or reading new information, ideas, or perspectives, suddenly and radically changed my beliefs. These 20 beliefs for example. I’d like to say that someone skilled in the Four Rules was responsible, but the truth is, almost none of these changes in beliefs came about as a result of conversation. They came about after a lot of reading and thinking, and they came about gradually, when I was “ready” to hear the new, conflicting belief, as a result of conditioning from a multitude of sources. The “aha” moment only happened after a ton of reading and interactions pushed me to the point I was ready, finally, past the “tipping point”.

As I’ve said before, the only one who can change anyone’s mind is themselves.

It’s also important, I think, not to get too attached to our own beliefs, which, some would say, are only just opinions. If you equate an attack on your beliefs (no matter how fervently held) with an attack on you, personally, it’s almost certain to end badly. I’ve found that anger and hostility are almost always masks for fear. If someone’s afraid of the information, ideas, or perspectives you give them, that’s all about them, not about you.

So what does that mean for the Four Rules? I still think they’re valid. We may never know how our calm, reasoned, attentive argument might contribute to someone’s conditioning in such a way that, later, something else will push them past the “tipping point”. So that suggests we should do our best to try to nudge them in the right direction, if they’re ready to listen.

Here then is my personalized version of the rules, which I think apply not only in addressing a misinformed audience, but also in one-on-one and small group conversations with those with sharply conflicting views:

  1. Try to bring genuine curiosity to why others hold beliefs and opinions that seem misguided or unsupported. Appreciate that we’re all doing our best to make sense of the world.
  2. Don’t waste time talking with people who aren’t ready to listen.
  3. When speaking with an audience that’s been misinformed, reframe: Address those who might value hearing new information, ideas, or perspectives, rather than responding directly to dis- or misinformation.
  4. Do not be afraid to make your own points that you believe are important. Your audience will not recall 100% of what you say, so make sure you emphasize and repeat the most vital takeaways.
  5. Be open to criticism, but don’t take it personally, and disregard it if it is malicious, invalid or manipulative.

Thanks to Bart for the link, and Ethan for the list.

image from pxhere, CC0

31 Oct 02:14

Twitter After The Bird’s Capture, Find Me At @ton@m.tzyl.nl

by Ton Zijlstra

Now that the deal is done and Musk captured the bird, i.e. Twitter, let’s see what happens. Will there be a wave(let) of people migrating to decentralised places in the fediverse? There were multiple connection requests in my inbox this morning.

It might be a strange experience for most newly migratory birds, as finding the others on Mastodon isn’t as easy. Especially not finding your current others that you interact with already on Twitter. The path that one needs for this is like it used to be: once you connect to someone you check out the people they follow and are followed by. We did that for blog rolls, and for every YASN (yet another social network) we joined, and we asked people in person for their e-mail addresses before that. Now I am doing the same for people using Hypothes.is. The difference is probably that many never encountered that tactic before, because it wasn’t needed: you could simply follow the recommendations of the platforms, which do the ‘finding the others’ for you (for their definition of finding, not yours).

Anyway. I have been on Mastodon since 2017; find me there. I have run my own instance since 2018, hosted by Masto.host, which is run by Hugo Gameiro, who provides a great service. But you’re more likely to start at an existing bigger instance: here’s a useful tool to help you decide.
Looking for a Dutch Mastodon server? Have a look at mastodon.nl, run by Maarten den Braber.

Come find me. That’s how you find the others.


An AI generated image (using Dall-E) with the prompt ‘A blue bird has an encounter with a grey mammoth’

31 Oct 02:14

Elon, you bought yourself for 44 billion dollars

by Ton Zijlstra

Bookmarked Welcome to hell, Elon by Nilay Patel / The Verge

Such a fantastic phrasing, in an otherwise entertaining read about the issues with Twitter and large social media platforms. Those issues mostly center on how to moderate (which in turn is why I never saw those platforms as communities: a community needs a fine balance between safe space, to feel at home, and excitement, to make you return, as well as a balance between internal and external perspective, and the platforms severely lack both). The quote centers on what drives engagement on Twitter, and who feeds that engagement in order to be able to feed off it. A group that includes Musk.

Twitter, the company, makes very little interesting technology; the tech stack is not the valuable asset. The asset is the user base: hopelessly addicted politicians, reporters, celebrities, and other people who should know better but keep posting anyway. You! You, Elon Musk, are addicted to Twitter. You’re the asset. You just bought yourself for $44 billion dollars.

Nilay Patel

31 Oct 02:10

Homelab Update

by Rui Carmo

Every now and then I spend an hour or so in the evenings poking at my homelab, which kind of adds up into actual productivity over a couple of months.

So here are my notes from that time:

Pimox, The Great Little Hypervisor

I haven’t yet gotten around to upgrading my KVM server to Proxmox (I’ve decided to get new hardware first, and I’m eyeing the HX90G¹), but on a particularly rainy evening I decided to put one of my 4GB Raspberry Pi 4s to good use and install Pimox on an external SSD drive:

Not exactly a datacenter, but plenty of control.

This is, of course, completely unsupported, but the great thing is that it works beautifully with LXC containers. Just go to the LXD Image Server and grab an arm64 rootfs of your favorite distribution, give it the URL, and you’re in business.
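
In case it helps anyone else, the flow is roughly as follows — a minimal sketch, where the container ID, hostname, and rootfs URL are placeholders you’d substitute, and the exact pct flags may vary with your setup:

# fetch an arm64 rootfs from the LXD image server into the template cache
# (grab the actual URL by browsing images.linuxcontainers.org)
wget -O /var/lib/vz/template/cache/debian-arm64-rootfs.tar.xz "$ROOTFS_URL"

# create and start an LXC container from it (ID and sizing are illustrative)
pct create 101 local:vztmpl/debian-arm64-rootfs.tar.xz \
  --arch arm64 --hostname arm-sandbox --cores 2 --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
pct start 101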

Among other things, this allowed me to catch up on the 7.x release (which I hadn’t yet had a chance to try). And I bet it should work on just about any ARM SBC that runs Debian, which opens up a lot of possibilities for cheap, low-power development sandboxes.

I’m now using it to set up a few ARM test environments, and although I haven’t yet figured out a clean way to use cloud-init with LXC containers, it’s only a matter of time.

And if you want to get rid of the annoying warning about it being an unsupported version, just run this as root:

sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service

The Elementary Plymouth Incident

This is something I’m publishing here because I’m positive other people will have similar issues:

My KVM server is nominally running Ubuntu 20.04 LTS, but it is actually an Elementary install that I stripped the desktop from and put in the closet. Since all the core packages are exactly the same, I didn’t give it a second thought, but of late it had been failing to upgrade the kernel when running apt dist-upgrade, and it was never the right time to shut down all the clients and have a proper look.

This is what I was seeing:

Setting up initramfs-tools (0.136ubuntu6.7) ...
update-initramfs: deferring update (trigger activated)
Setting up linux-firmware (1.187.33) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-90-generic
E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1.
update-initramfs: failed for /boot/initrd.img-5.4.0-90-generic with 1.
dpkg: error processing package linux-firmware (--configure):
 installed linux-firmware package post-installation script subprocess returned error exit status 1
Processing triggers for initramfs-tools (0.136ubuntu6.7) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-90-generic
E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1.
update-initramfs: failed for /boot/initrd.img-5.4.0-90-generic with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 linux-firmware
 initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)

…as it turned out, Elementary‘s default plymouth settings require the Inter font to be installed on the system. Otherwise, the post-install shell script fails, blocking update-initramfs.
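
If you run into the same thing, the fix is a couple of commands — assuming, and this is an assumption worth checking, that the font ships in the fonts-inter package on your distribution:

# put the font plymouth's theme expects back (package name assumed)
sudo apt install fonts-inter
# retry the deferred initramfs update
sudo update-initramfs -u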

That was an easy enough fix, but since I had removed Inter when I got rid of all the desktop environment packages, this also means Elementary isn’t keeping track of all package dependencies – such are the pitfalls of maintaining a derived distribution, I guess.

Better Archiving

I am still using ArchiveBox to take snapshots of interesting articles and other useful webpages, but with over 2000 items its built-in search functionality just wasn’t working out for me.

Fortunately, it supports Sonic, a nice little Rust-based indexing engine that can take the full-text extracts and provide lightning-quick responses, and I set that up as a sidecar container.
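
Wiring it in amounts to a couple of extra entries in the compose file. Here’s a rough sketch based on ArchiveBox’s documented Sonic support; the image tag, password, and config paths are illustrative, so check the ArchiveBox docs for your version:

services:
  archivebox:
    image: archivebox/archivebox
    environment:
      # hand full-text search queries off to the Sonic sidecar
      - SEARCH_BACKEND_ENGINE=sonic
      - SEARCH_BACKEND_HOST_NAME=sonic
      - SEARCH_BACKEND_PASSWORD=SomeSecretPassword

  sonic:
    image: valeriansaliou/sonic:v1.3.0
    expose:
      - 1491
    environment:
      - SEARCH_BACKEND_PASSWORD=SomeSecretPassword
    volumes:
      # sonic.cfg comes from the ArchiveBox repo; the store holds the index
      - ./sonic.cfg:/etc/sonic.cfg:ro
      - ./data/sonic:/var/lib/sonic/store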

This sort of setup is highly recommended if, like me, you’re fed up with bits of the Internet dropping off just when you need to refer to them.

Most of my ArchiveBox is now obscure blogs and electronics-related stuff that tend to vanish after a few years, but since I imported my del.icio.us account (remember that?) and let it run for a few days, I also have a delightful set of 10-year-old tech notes that are still surprisingly online and very useful to this day.

All Along The Watchtower

Another thing I did was to set up watchtower to do regular updates of some containers. It is an amazingly deploy-and-forget thing, and although I had to resort to SSH and docker-compose to set it up on my Synology (because its Docker management doesn’t allow for manifests), it was a painless and quite rewarding experience.

For extra peace of mind, I set it up to notify me of any updates via Pushover and to exclude a few designated containers.

Here’s an example of doing that and excluding Syncthing (I actually don’t exclude it, but I had the file around anyway, and it is a good example of something you likely want to update yourself):

version: "3.2"

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    hostname: ${HOSTNAME}
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=Europe/Lisbon
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_LABEL_ENABLE=true
      - WATCHTOWER_SCHEDULE=0 30 3 * * *
      - WATCHTOWER_ROLLING_RESTART=true
      - WATCHTOWER_TIMEOUT=30s
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=pushover://shoutrrr:${PUSHOVER_API_KEY}@${PUSHOVER_USER_KEY}/
    cpu_count: 1
    cpu_percent: 25
    mem_limit: 64m

  syncthing:
    image: syncthing/syncthing
    container_name: syncthing
    hostname: ${HOSTNAME}
    restart: always
    network_mode: "host"
    labels:
        - "com.centurylinklabs.watchtower.enable=false"
    volumes:
        - /home/${USER}/.config/syncthing:/var/syncthing
        - /home/${USER}/Sync:/Sync
    environment:
      - TZ=Europe/Lisbon
    cpu_count: 1
    cpu_percent: 50
    mem_limit: 256m

This has saved me a fair amount of time manually upgrading some of the Docker containers I run on my Synology, and is heartily recommended.

Fixing GNOME Thumbnails inside LXD

I was getting a bit annoyed with my Fedora container not showing file previews so I decided to investigate, which led me down a particularly niche rabbit hole.

As it turns out, GNOME wisely decided to sandbox its thumbnail generator to avoid security issues, but that took me a long time to figure out since there was no logging of any kind to indicate why it was failing.

That was quickly fixed by setting:

lxc config set gnome security.nesting true

…in the host. This also helps with flatpak and other forms of sandboxing.

Airdrop for the Masses

Although we’re an Apple-centric home with a NAS, there is the occasional need to send small files between various kinds of machines, and AirDrop has its foibles even among Apple devices.

So I’ve set up a local Snapdrop instance to transfer files and URLs via WebRTC between just about any browser, and it’s been working great:

Works great with Windows, too.

There are loads of similar solutions out there (I also like ShareDrop), but I liked the look, feel and overall simplicity of Snapdrop, so I did a few changes to it:

  • Got rid of the nginx Docker container it was using for SSL and serving static files over TLS (doing anything with SSL on a home LAN is a mess, and it wasn’t really necessary).
  • Moved static file serving to its (tiny) index.js by adding express to the otherwise pure websocket server (a sketch of this follows the list).
  • Commented out all of the stuff that would require SSL or make no sense in a home setting (PWA support, notifications, etc.).
  • Changed its peer discovery mechanism (which is the only thing the server does, really, since WebRTC file transfers are fully peer-to-peer) to understand clients were on the same LAN.
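
For the curious, the express change boils down to something like this — a minimal sketch, where the static directory and port are illustrative and the actual peer-discovery logic is elided:

const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();
// serve the Snapdrop client directly, instead of fronting it with nginx
app.use(express.static('client'));

const server = http.createServer(app);
// the websocket server only brokers peer discovery; file transfers
// happen browser-to-browser over WebRTC
const wss = new WebSocket.Server({ server });

wss.on('connection', (socket) => {
  // peer registration / signalling elided
});

server.listen(3000);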

The original code was quite simple, but it was following the assumption that clients from the same IP address would be in the same LAN, which is sort of broken.

The end result takes almost zero resources and is easily deployable via Piku, so that was a very satisfying quick hack that I just cleaned up and released on GitHub.

I’ve been doing a few more, of course, but they aren’t fully baked yet.


  1. It looks like a great little machine to play around with PCI passthrough (it has a discrete Radeon 6600M GPU, which is nothing to sneeze at), but I am a bit sad that it has fewer options for internal storage than the plain HX90 model (which sported dual 2.5” drive bays and was discounted by 25% recently). ↩︎


31 Oct 00:58

My iPhone translates from Dutch into English now…

by peter@rukavina.net (Peter Rukavina)

When the built-in translation features for iOS were first launched, the list of language pairs was limited, to the point where I stopped even trying to use the feature, as Dutch, Swedish, Danish, and German weren’t on the list.

I discovered this morning that, although Swedish and Danish are still absent, Dutch and German have shown up. Which makes it a little bit easier to read the blogs of friends like Frank.

31 Oct 00:58

What it requires of men is only this: step back. Quit guarding the doors.

by peter@rukavina.net (Peter Rukavina)

Annie Mueller’s What feminism means to me is a stunning piece of writing.

30 Oct 02:26

A Mozilla product manager on his career path and what creating safe spaces online means to him

by Kristina Bravo

As a staff product manager for Mozilla’s security and privacy team, Tony Amaral-Cinotto thinks a lot about how you can protect your personal information, including, most recently, your phone number. 

Firefox Relay has been protecting email addresses from spammers since 2020. Tony’s team just released a new feature applying the same idea to your phone number: you get a uniquely generated number mask so you don’t have to enter your true number on website forms, or in other places like restaurants when making reservations or online marketplaces when listing items for sale.

This new phone masking feature prevents companies from sending your true number to their third-party partners, so you can do what you want online and in real life while cutting down on spam calls and texts on your phone. Not sure if you want to give somebody your number just yet? You can also use Firefox Relay’s phone number mask for that. (You can learn more about how phone masking works here.)

Tony Amaral-Cinotto, staff product manager at Mozilla

Masking email addresses and phone numbers is part of Mozilla’s security and privacy efforts of the last two years: Mozilla VPN, launched in 2020, hides your IP address. Firefox’s Total Cookie Protection, rolled out to all users last June, prevents companies from using internet cookies to track your online activities.

“We do this work for the good of the internet,” Tony said. “It’s easy to fall into this trap of acquiring users in ways that maybe aren’t ethical, but we live by what we stand for. Sometimes, it takes us longer to do things because we’re doing what’s right for the internet, what’s right for our users and being very intentional about that.”

We talked to Tony about his career path as a product manager, what he loves to do online and off, and why creating safe spaces on the internet means a lot to him. 

What’s a typical day for you as a product manager at Mozilla?

Right now my typical day is doing a lot of research, strategy and workshop sessions with team members to uncover different user segments for security and privacy products, and then taking all that information to build and prioritize for those users. Luckily, I have a lot of great counterparts who are helping me out.

In an interview about leaving and returning to work at Mozilla, you mentioned that working in privacy and security means a lot to you as part of the LGBTQI+ community, and that you want other people to be able to ask questions and learn like you did in a comfortable environment. Can you talk more about that?

When my parents didn’t know yet that I was gay, private browsing mode was a safety shield until I felt ready to have that conversation with them. That’s where I feel like private browsing, and tools like Firefox Focus, are very important.

The other thing, too, is having cookie protections, which are on by default on Firefox. Without them, you could start getting targeted ads about yourself based on cookies — like Google ads or Facebook inferring that you’re gay, for instance. Imagine I was back in high school or college and I’m not out and I’m showing my friends something on Facebook, and something that could be detrimental pops up.

People could also use Firefox Relay [which masks email addresses from public view] if they wanted to sign up for forums or communities but feared being hacked or exposed.

Besides your job, what do you love doing online?

I love music from around the world. I use apps to listen to music or watch music videos. I love traveling so anything around travel makes me very happy, like looking for cheap flights. I’ve had a lot of really cool unique experiences using online vacation rental platforms that have just warmed my heart every time I think about them.

I am also going back and playing Final Fantasy 9; even though it came out in 2000, I never got the chance to play it, and I love the Final Fantasy series. I’m also playing Pokemon Legends: Arceus, which is a beautiful mix of nostalgia and a new adventure.

Do you have any advice for someone aspiring to become a product manager?

There isn’t really formal training at school, and a lot of people don’t get hired as a product manager. I’d say do internship opportunities as much as you can if you’re still in college. Email people. When you’re a student, people are so kind to you. Take the opportunity to get some help and to learn from people with internships, even shadowing.

What I see a lot of people struggle with in their career is learning about product management and making that shift. I really recommend networking inside your company. A lot of times it’s easier to move roles within the company because people know you. They know what your skill sets are, and they’ll take a bit of a risk to help you. That’s compared to, say I want to be a product manager and I’m gonna go apply to a bunch of different companies but my background is not product management. It’s a lot harder to get into that role. 

I think the other thing that’s really great too is reaching out to other product managers, saying, “Hi, I’m very interested in product management. Could I maybe dedicate 15% of my time to a small product project to learn how to do product management, see if I like it before fully diving in?”

Something else I did too, was diving into it myself. I actually built my own website where I felt like I was the product manager. If I had failed, it was all on me. So it was really good practice for getting into product management. And then also, if your company has anything like hackathons, innovation weeks, that’s a great time where you can serve as a product manager just for a week as well to see what that skill set looks like.


Visit blog.mozilla.org/careers to learn about life at Mozilla. Check out the following stories for more about what our team has been working on lately:

Layer on even more protection with phone number masking

Sign up for Firefox Relay


30 Oct 02:26

Firefox Presents: Cosplayer Rachel Maksy gives main character energy

by Kristina Bravo

Rachel Maksy has been Wonder Woman, Princess Leia, and, for one week, multiple Jane Austen characters. She goes to conventions to meet other cosplayers wearing costumes that she created. But mostly, her handmade outfits are seen and watched by her nearly 1.5 million followers across YouTube and Instagram. 

“Just like a kid dressing up on Halloween as their favorite superhero, there’s something special about making that costume and doing your makeup and hair like them. You just feel empowered,” the 30-year-old content creator said. 

Rachel loves Halloween so much that she starts planning her looks in January for what she calls “13 days of Maks-o-ween.” This month, her costumes include a raven dressed as a Victorian woman and a pinup wolfman. (If you still need Halloween costume ideas, check out her Instagram.)

Recently, outside of Halloween, she’s been dabbling in styles from the 19th and 20th centuries. 

Rachel Maksy sitting down while filming. (Photo: Tara Morris for Mozilla)

“Even though the fashion that I choose has changed over the years, I think my theme has always been that I always wanted to express myself,” Rachel said. “It’s a lot more fun to put together outfits when you get to choose. How I dress now, I’m kind of just making myself my own main character.”

Read more from Rachel about being a content creator: 

On YouTube as a career

Rachel: In high school, there were these few YouTubers that I would watch all the time and I would think, “How cool would it be just to be able to do that?” I don’t think I even thought about making it a job because I was 14 or 15. So I started making really embarrassing little YouTube videos. 

After [attending] film school, I ventured into Boston film jobs. I was also exploring vintage fashion, and I decided to start a YouTube channel mostly doing vintage hair and makeup tutorials. There was this makeup competition that I ended up winning. The prize money that I won, I decided to use as a cushion to make the leap into YouTube full-time, which was absolutely terrifying. I remember handing my two weeks to my boss, and he was very confused.

Rachel Maksy wearing a skull mask while filming. (Photo: Tara Morris for Mozilla)

On viewers taking comfort from her channel

Rachel: So the pandemic for me was kind of a 50-50 split. I had my personal life, which was scary and unpredictable. I think everyone was really frightened and didn’t know what was going to happen. But then the work part of me was really able to, since I was home all the time, delve into projects. People were sort of craving that wholesome corner of the internet. I was able to, hopefully, provide that. Everything else in the world really, really just sucked, and it was chaos. So being able to provide soothing, calming videos of just something silly was a way for me to combat the scariness of real life. I was escaping into my own projects, and also allowing that escapism for other people.

On hitting 1 million YouTube subscribers

Rachel: I hit 1 million subscribers I think back in August. It’s wild because it was sort of this mythical number that I had just in my head from when I started my channel. I thought if I ever hit a million, that’d be crazy. It’s weird to think that many people actually care about what I put out into the world. 

Rachel Maksy looking up while standing in front of a house. (Photo: Tara Morris for Mozilla)

On mental health

Rachel: A huge part of being a content creator online is the mental game that goes with it. It’s so easy to fall into the trap of comparing your content to other people’s. Or, comparing your own content to months before. Something maybe did well the first time you posted it, and then not so well the second time. 

There’s always that potential disappointment when you think something is gonna do really well and it doesn’t. That’s why I’ve shifted into just making videos that I’m really, really excited about and learning new crafts because even if it doesn’t do well by my own fake standards in my brain, it’s something that I had a ton of fun with making.

On comments

Rachel: There are definitely times when people can be hurtful. But I am honestly a little bit impressed with how I have learned that a lot of times, they’re just saying stuff to vent, or they’re going through things in their own life that they feel like they need to project onto other people. I’ve gotten a lot better with not letting it affect me because then for every negative comment you have, you know, 10 great people who are really feeling like your content is changing their life and giving them a wholesome space online.

Rachel Maksy holding her dog while sitting. (Photo: Tara Morris for Mozilla)

On the internet

Rachel: My job wouldn’t exist without the internet, but not only that, it’s where I get inspiration. It’s where I research and learn new crafts by watching videos. It’s had a huge role in my life, whether it’s just sharing things I like or connecting with a community of people who are excited about the same things as me. It makes me more excited to make cool stuff because the more people I can convert into my little weird community, the better.

Firefox is exploring all the ways the internet makes our planet an awesome place. Almost everything we do today ties back to the online world in some way — so, join us in highlighting the funny, weird, inspiring and courageous stories that remind us why we love the world wide web.


30 Oct 02:24

Neon Riders

by jnyyz

Tonight was advertised as the last ride of the year for the Neon Riders. It’s been a while since I’ve ridden with them, so I decided to join in. Started at Nathan Phillips Square as per usual.

Leaving the square now.

On Chestnut, waiting to turn onto Dundas.

On Dundas.

Sherbourne.

At Lakeshore.

Zigzagging along the lakeshore.

First stop was a spot where we had a nice view of the CN Tower.

I took my leave at this point since I was tired, and I have an early start tomorrow. As I type this, those with more energy are still out there on the remainder of the ride. I hope they are having a good time. I certainly did, however briefly.

Two short video clips: first at NPS, the second by the lake at the foot of Yonge St.

28 Oct 23:39

need ML to generate halloween puns / 2022-10-28

by Luis Villa

No big meta-themes this week. However, an observation: the industry seems to be coalescing quickly around "generative AI" as the name for "ML that makes creative things". I like it; harkens back (in a good way) to Prof. Jonathan Zittrain's conception of the "generative internet" c. 2006. Expect to see that term a lot going forward.

Open(ish) values

Lowering barriers to entry

Making systems legible

  • What are we studying? I've talked here, optimistically, about the growing toolkit for analyzing models, to help us understand (and therefore improve and govern) them. But this thread points out that much of "AIthropology" is really study of OpenAI and OpenAI's choices, not study of "AI". The author points to study of more open models as a cure, but I have to wonder if the nature of training still means there's some black-box-ness. I will endeavor to be more clear here, going forward, between research and tools that are specific to a particular model, and research and tools that are more truly generic.
  • Bias evaluation: Here's another addition to the tools for evaluating bias in large language models—essentially an (open data) test suite of bias-inducing prompts. There are going to be a lot of these; it will be interesting to see if this one comes out on top, since it is coming out of Huggingface.

Governance and ethics

  • Weinberg on Copilot: Michael Weinberg, of many good things, has a worthwhile piece on the (potential) Copilot litigation. From the conclusion, something I increasingly wholeheartedly agree with: "Looking to copyright for solutions has the potential to stretch copyright law in strange directions, cause unexpected side effects, and misaddressing the thing you really care about."

Open(ish) techniques

Model improvement(?)

  • Training models for specific styles: The first wave of copyright-infringement concerns in image-generating AIs were based on "styles" that the model learned somewhat 'organically' from captions in the training set. Now we've got a new technique that raises much more pointed questions, deliberately teaching Stable Diffusion about ("finetuning") specific styles. Meta: Is this open(ish)? I think yes, because "we don't need your permission to innovate" is, for better and for worse, a long-term correlate of traditional open.
Morgan Freeman in a variety of styles.

Instilling norms

  • Norm of outcome-focus: It's very interesting to me, as a former QA guy, that the ML community treats quality of outcomes as worthy of academic research and publication, and that's taken seriously by practitioners! As best as I can tell, this is a side effect of ML's inherent unpredictability, and it feels like a very healthy norm to me—treat outcomes as importantly as you treat, say, performance or flexibility, and outcomes might actually improve. This thought brought to you by this paper on using prompts to improve "reliability"—including definitions of reliability in this context.

Changes

"creation engines"

At least this week, this section gets all the demos.


Meta/misc.

Some notes on what AI means for open from conversations and thinking I had this week.

  • "Predictability": for users, a value of traditional open is predictability: they can read the license and figure out more or less what they can do, more or less quickly. I find this to be somewhat overrated (the scope of the GPL has never been super-predictable, but we got over that quickly when Linux became unavoidable) but it's still something I'll consider adding to my personal definition of open(ish). We're definitely in a period where this is not the case in open ML, but I suspect that's inevitable—and through a combination of ethical concerns and government regulation we may never get back to the simple, predictable world of traditional open. Nevertheless it's a factor for us to keep in mind.
  • "Permissionless innovation": While thinking about the throwback phrase 'generative', and the variety of weird/exciting/problematic tools around Stable Diffusion, I keep coming back to the idea of "permissionless innovation", which is sometimes attributed to Adm. Grace Hopper (though perhaps not accurately)?

Thanks!

Thanks for continuing to join me on this ride :)

28 Oct 23:39

Elon Musk’s Twitter is going to be a disaster | Hamilton Nolan

mkalus shared this story from The Guardian.

Twitter is free. You can go on there and type your embarrassing little thoughts for the whole world to see any time you like. Millions of us have been doing this for years. Revealing to everyone how dumb your inner thoughts are may cost you your reputation, sure, but it won’t cost you any money. Not even if you’re the richest man in the country.

So why spend $44bn to buy it? That’s a fair piece of change, even to someone whose net worth hovers above $200bn. It’s also much more than the company is actually worth, as evidenced by Elon Musk’s desperate attempt to get out of the deal almost as soon as he had gotten irretrievably into it. The price is too high to be a pure lifestyle purchase – if you want a media property just for the social cachet and party invitations, it can be had much cheaper. Fellow villainous mega-billionaire space tourist Jeff Bezos bought the Washington Post for a mere $250m, less than 1% of what Musk just paid for a cacophonous global collection of weird, hollering self-promotion.

In truth, Musk probably bought Twitter for the same reason that sickeningly rich people throughout history have become press barons: to try to control the conversation. About themselves, in particular, and secondly about their own economic interests, and thirdly about their own inevitably selfish, bizarre, half-witted political beliefs. Once you have ascended the ladder of wealth past buying real estate and cars and boats and models and the other tawdry baubles that come with money, there comes a time when a hardworking plutocrat begins to be irked by the fact that, beyond their sphere of servants, people are still talking trash about them. It upsets their sense of omnipotence. After the thrill of bending the material world to their whim has worn off, the desire to bend the public conversation – and, by extension, the public mind – to their own liking takes root.

This has always been a hit-and-miss proposition. It’s easiest for those who are interested in straightforward political influence. From William Randolph Hearst to Rupert Murdoch, media moguls have a well-tested playbook for using sensationalism and propaganda and fearmongering to produce political outcomes. That’s the easy stuff. When the motives get mixed, though, things get dicey for the aspiring media moguls.

For those who don’t have Bezos-sized wallets, owning a news outlet can be a painfully expensive way to get party invitations. For those who don’t have Murdoch’s ruthlessness, it can be a frustratingly difficult way to change the minds of readers and viewers. And for those who lack perfect clarity about what they are doing, well, the possibility for hilarious disaster is very real.

Elon Musk is, ironically, the exact type of person for whom Twitter is poison. Wealthy, powerful and celebrated, he could have kept his mouth shut and let his work speak for itself; instead, he uses Twitter, and reveals to all of us that the richest man in the richest nation in the history of the world is an unfunny meme guy easily seduced by the same sorts of ideas that grab the minds of Reddit-scrolling 13-year-old boys. There is a lesson there about the inability of wealth to make someone interesting – but, setting that aside, there is a more relevant lesson about the danger of vast concentrations of wealth. Because when you mix the immature, half-baked, self-righteous grandiosity of a guy like Elon Musk with the ability to buy and sell multibillion-dollar global public corporations like toys, you have a recipe for chaotic, destructive stupidity on a staggering scale.

Many on the left suspect that Musk is in fact a Rupert Murdoch of the tech generation, planning to use Twitter’s algorithms to disperse and promote rightwing ideas. Others say this was all a way for him to diversify his finances and sell off a ton of Tesla shares without sparking investor panic. Those tidy explanations would be more satisfying than the likely truth: this guy saw a chance to put his sophomoric ideas about “free speech” into practice on his favorite app, and he did.

It is as if a half-bright fantasy football player bought your favorite NFL franchise and ran it right into the ground. (Except the implications of this – intertwined as they are with politics and media and Trumpism – are much worse.) Above all, we are in for an abject demonstration of why it is bad to have random businessmen running around your society with enough money to do absolutely anything they want. As kaleidoscopic as the factions of Twitter users are, this episode should be enough to make them all unite behind the idea of a wealth tax.

In a message to advertisers posted the day the sale closed, Musk wrote that “The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner.” OK. Sure. Forty-four billion dollars seems like a lot to pay for the right to grandly reinstate @NaziAnime666’s right to post gifs about Pizzagate or whatever, but a fool and his money are soon parted and all that.

You may notice that Musk’s stated goal of being an angel of open, civilized discourse is not wholly compatible with all billionaires’ natural desire to bend the content of that discourse in a direction that they agree with. Reconciling these two competing mandates is one of the oldest and most central tasks of news media companies, or discourse-shaping social media companies, owned by rich guys. Many thoughtful journalists, pundits and professors have spent their careers wrestling with this tension – unfettered truth versus the blinkered logic of capitalism. Two hundred years into the history of American journalism, it’s still an open question how it will all be resolved.

But I do know one thing for sure: Elon Musk, zillionaire king of the nerds, alt-right meme lord, and committed union buster, is not the guy who’s gonna figure it out. If you are wondering what lies in store for all of us after his big purchase, just assume the dumbest possibilities will come true. An owner who is not cool or funny will do things that he thinks are cool or funny. Trouble will ensue. Tech and media are both inherently unstable industries, so the good news is that the possibility of Twitter spectacularly imploding does not make them that much different from everywhere else. Enjoy it while it lasts. Stay focused on eradicating billionaires forever. And, if you have the misfortune of working at Twitter, get yourself a union as soon as possible.

  • Hamilton Nolan is a writer at In These Times