Shared posts

19 Nov 17:05

Gradually, Then Suddenly

by Tristan Louis

This story initially appeared on TNL.net and was written by Tristan Louis. Accept no substitute for the original.

The rise of Mastodon and ActivityPub (collectively known as the Fediverse) represents a new age of social media.

The post Gradually, Then Suddenly appeared first on TNL.net.

19 Nov 16:54

The Inventor of Assembly Language

by Eugene Wallingford

This weekend, I learned that Kathleen Booth, a British mathematician and computer scientist, invented assembly language. An October 29 obituary reported that Booth died on September 29 at the age of 100. By 1950, when she received her PhD in applied mathematics from the University of London, she had already collaborated on building at least two early digital computers. But her contributions weren't limited to hardware:

As well as building the hardware for the first machines, she wrote all the software for the ARC2 and SEC machines, in the process inventing what she called "Contracted Notation" and would later be known as assembly language.

Her 1958 book, Programming for an Automatic Digital Calculator, may have been the first one on programming written by a woman.

I love the phrase "Contracted Notation".
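To get a feel for what any assembly notation buys you, here is a toy assembler sketch: it does the essential job Booth's "Contracted Notation" performed, translating mnemonics into numeric machine words. The instruction set and encoding below are invented for illustration and have nothing to do with the actual ARC2.

```python
# Toy assembler: turn mnemonic "contracted notation" into machine words.
# The opcodes and 8-bit word format here are invented for illustration;
# they are not the ARC2's real instruction set.

OPCODES = {"HALT": 0x0, "LOAD": 0x1, "ADD": 0x2, "STORE": 0x3}

def assemble(lines):
    """Encode 'MNEMONIC operand' lines as 8-bit words:
    high nibble = opcode, low nibble = operand."""
    words = []
    for line in lines:
        mnemonic, *rest = line.split()
        operand = int(rest[0]) if rest else 0
        words.append((OPCODES[mnemonic] << 4) | operand)
    return words

program = ["LOAD 3", "ADD 4", "STORE 5", "HALT"]
print([f"{w:02x}" for w in assemble(program)])  # ['13', '24', '35', '00']
```

Even in this toy form, the win is obvious: the programmer writes names and decimal operands instead of hand-packing bit patterns, which is exactly the step up from raw machine code that Booth's notation provided.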

Thanks to several people in my Twitter feed for sharing this link. Here's hoping that Twitter doesn't become uninhabitable, or that a viable alternative arises; otherwise, I'm going to miss out on a whole lotta learning.

19 Nov 16:54

A translation from a while back

by Liz

Every once in a while I think of this poem by Nicanor Parra, and want to find my translation again. So here it is! I think it is weirdly compelling and it also makes me laugh even if it is a somewhat bitter or wry laugh. There’s a lot in there.

Frases

No nos echemos tierra a los ojos
El automóvil es una silla de ruedas
El león está hecho de corderos
Los poetas no tienen biografía
La muerte es un hábito colectivo
Los niños nacen para ser felices
La realidad tiende a desaparecer
Fornicar es un acto diabólico
Dios es un buen amigo de los pobres.

– Nicanor Parra, 1962

Sentences

Let’s not throw dust in our own eyes
The car is a wheelchair
The lion is made out of lambs
Poets don’t have life stories
Death is a collective habit
Kids are born to be happy
Reality tends to disappear
Fucking is a diabolical act
God is a good friend to the poor.

19 Nov 16:52

Transition Pains with USB-C

by Volker Weber

When the first (Apple) laptops appeared that only had USB-C ports, I bought a few adapters that let me plug old USB-A devices, such as memory sticks, into the new machines. By now I have the opposite problem. Newer devices often come with USB-C cables, some even permanently attached. My hubs and monitors, however, have more USB-A than USB-C ports. So now I need the reverse: USB-C socket to USB-A plug. Fortunately those aren't expensive, but you have to have them.

And when Apple is finally forced to give up Lightning after more than 10 years, the whole circus will start all over again.

19 Nov 16:51

Week Notes 22-45

by Ton Zijlstra

It feels like we’re already rushing headlong to the end of the year. I’ve had conversations this week that made me realise how few weeks of work are left in this year, while the amount of stuff to get done hasn’t shortened in response. On top of that I’ve spent more time than I should have on playing with and conversing on Mastodon, with the new influx of people. And, on top of that, it’s the time of year I get a bit more reflective so I feel a strong urge to make lots of notes.

This week I

  • Had a meeting with my business partners, talking about what we see ahead in the coming 6 to 9 months.
  • Participated in a client meeting preparing for the official launch of an interprovincial digital ethics commission by the Minister for Digitisation.
  • Had the weekly client meetings
  • Did a session with a few provinces discussing the incoming EU data legislation
  • Had a meeting about the implementation process for the EU High Value Data list with the responsible Digital Society team at the Ministry for the Interior.
  • Prepared a meeting of the Dutch tactical council for EU data policy, which takes place next week
  • Had a session with the consortium that is doing the preparatory work for the Green Deal data space for the European Commission
  • Did some invoicing
  • Had a board meeting for the Open Nederland association, in preparation for the general assembly end of the month
  • Picked up the work again for the Dutch national flora and fauna database, which is being restructured both technically and governance-wise, as well as gaining a new role in the overall Dutch data infrastructure. It’s a database that contains data from both public and private sources and is used for public tasks, research, and commercial work (e.g. building sites). Those different angles create interesting tensions w.r.t. organisation, open data, and governance.
  • Walked the neighbourhood with Y for St. Maarten, resulting in an impressive stash of sweets and candy for Y.
  • Installed a new VPS with Yunohost, as a sandbox for various indie and ActivityPub related things
  • Had a conversation with M, who maintains a very fast-growing Dutch Mastodon instance, about how to structure its growth, create a governance layer, and ensure its sustainability while also creating a broader base for federated tools in general. To be continued in the next few weeks.


This is an RSS-only posting for regular readers. Not secret, just unlisted. Comments / webmention / pingback all ok.
Read more about RSS Club
19 Nov 16:51

Litra Beam: A Slim Video Light

by Volker Weber

Since the end of October, meetings have been running into the dark evening hours. So it was time to set up some better light for the camera. Ideally that looks like a photo studio, with big softboxes and lots of light. But it can also be done in a slimmer way, for example with the Logitech Litra Beam, which is reminiscent of a modern desk lamp.

What matters is that this is not a point light source but a long light bar. That avoids harsh shadows on the face. The lamp has three tripod threads, one at the end and two in the middle, on the bottom and the back, which allows for very different setups. Lamp, stand, and USB cable are included, and the cable routing is exemplary.

I simply plugged the USB cable into my Surface Dock. From there the light draws power and communicates with the Surface Pro. Alternatively, you can use a regular power adapter and connect to the PC via Bluetooth.

What I don't like as much: the Litra Beam is only supported by Logitech G Hub, not by LogiTune (for headsets) or Logi Options+ (for keyboard and mouse). That makes it the third piece of Logitech software I need. G Hub immediately offered a firmware update; beyond that, you don't really have to use it. The light has a push button for switching it on and off, plus two rocker switches for brightness and color. Once everything is configured correctly, simply switching it on and off is enough. And that is the function I would like to map to a hotkey via Options+.

I find the Litra Beam remarkably elegant. It appeals to me more than the ubiquitous Elgato Key Lights.

19 Nov 16:50

Keeping track of Twitter drama

by Volker Weber

New Twitter owner Elon Musk is clearly out of his depth running Twitter, a network of over 200 million active users. “Twitter is going great” is keeping track of the drama. If the title sounds familiar, that's because of “Web3 is going great” by the wonderful Molly White, who follows the crypto scams. The Twitter site is also built with her Static Timeline Generator.

19 Nov 16:49

Hey Google, Stop!

by peter@rukavina.net (Peter Rukavina)

Why is it that my Google Home speaker can understand “Meh Groovle” and “Fey Rupal” from 100 feet away, and spring instantly to life, but its reaction to “Hey Google, stop,” when I want it to stop playing something, is to ignore me completely, time after time?

19 Nov 16:48

Mastodon Special Issue: Moving day

Wendy M. Grossman, net.wars, Nov 14, 2022

Posts on the experience of moving from Twitter to Mastodon: Wendy Grossman observes, "the huge influx of new users doesn't bring revenue or staff to help manage it. This will be a big, unplanned test of the system's resilience." Mastodon feels like the old internet, writes Alex Hern. "Mastodon isn't one site. Instead, it's a protocol, a system of rules for spinning up your own social network." Miguel Guhlin writes, Farewell to #Twitter: A Tsunami of #Twexits. Some fun: I do not want a Mastodon, an Oliver Darkshire One Page RPG. Wired: what you should know about switching to Mastodon. ZDNet: Mastodon isn't Twitter but it's Glorious.
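Hern's "system of rules" is concrete enough to sketch: the first step of finding a Fediverse account is a WebFinger lookup (RFC 7033). Below is a minimal, illustrative sketch of turning a handle into that lookup URL; a real client would then fetch the URL and follow the returned `self` link to the account's ActivityPub actor document.

```python
# Sketch of step one of Fediverse account discovery: map a handle like
# "@alice@mastodon.social" to its WebFinger lookup URL (RFC 7033).
# Illustrative only; error handling is minimal.

def webfinger_url(handle: str) -> str:
    """Build the WebFinger URL for a user@domain handle."""
    user, _, domain = handle.lstrip("@").partition("@")
    if not user or not domain:
        raise ValueError(f"not a user@domain handle: {handle!r}")
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{user}@{domain}")

print(webfinger_url("@alice@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct:alice@mastodon.social
```

Because every server speaks this same discovery protocol, "spinning up your own social network" still leaves your users reachable from any other instance, which is the point Hern is making.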

19 Nov 16:48

Text Snippet kgn

by Volker Weber

kgn stands for “kann gerade nicht” (“can’t right now”), and when I type those three letters, the iPhone writes: “I can’t listen to this right now; I’ll do it first thing tomorrow morning. If it’s something important, just text me briefly.”

That is my reply to anyone who sends me a voice recording, which forces me to interrupt whatever I'm doing, instead of a text message I could read quickly. It's completely pointless, too, because both Android and iPhone can type as fast as you can dictate.

What happens next? If it's important, I get a text. Otherwise, typing is tedious enough that it isn't worth writing, and then it isn't worth listening to either.

How do you set this up? Settings, General, Keyboard, Text Replacement.

19 Nov 16:47

Narratives

by Ben Thompson

What Elon Musk got wrong about Twitter, journalists and VCs got wrong about FTX, and Peter Thiel got wrong about crypto and AI — and why I made many of the same mistakes along the way.


Listen to this Update in your podcast player


Two pieces of news dominated the tech industry last week: Elon Musk and Twitter, and Sam Bankman-Fried and FTX. Both showed how narratives can lead people astray. Another piece of news, though, flew under the radar: yet another development in AI, which is a reminder that the only narratives that last are rooted in product.

Twitter and the Wrong Narrative

I did give Elon Musk the benefit of the doubt.

Back in 2016 I wrote It’s a Tesla, marveling at the way Musk had built a brand that extended far beyond a mere car company; what was remarkable about Musk’s approach is that said brand was a prerequisite to Tesla, in contrast to a company like Apple, the obvious analog as far as customer devotion goes. From 2021’s Mistakes and Memes:

This comparison works as far as it goes, but it doesn’t tell the entire story: after all, Apple’s brand was derived from decades building products, which had made it the most profitable company in the world. Tesla, meanwhile, always seemed to be weeks from going bankrupt, at least until it issued ever more stock, strengthening the conviction of Tesla skeptics and shorts. That, though, was the crazy thing: you would think that issuing stock would lead to Tesla’s stock price slumping; after all, existing shares were being diluted. Time after time, though, Tesla announcements about stock issuances would lead to the stock going up. It didn’t make any sense, at least if you thought about the stock as representing a company.

It turned out, though, that TSLA was itself a meme, one about a car company, but also sustainability, and most of all, about Elon Musk himself. Issuing more stock was not diluting existing shareholders; it was extending the opportunity to propagate the TSLA meme to that many more people, and while Musk’s haters multiplied, so did his fans. The Internet, after all, is about abundance, not scarcity. The end result is that instead of infrastructure leading to a movement, a movement, via the stock market, funded the building out of infrastructure.

TSLA is not at the level it was during the heights of the bull market, but Tesla is a real company, with real cars, and real profits; last quarter the electric car company made more money than Toyota (thanks in part to a special charge for Toyota; Toyota’s operating profit was still greater). SpaceX is a real company, with real rockets that land on real rafts, and while the company is not yet profitable, there is certainly a viable path to making money; the company’s impact on both humanity’s long-term potential and the U.S.’s national security is already profound.

Twitter, meanwhile, is a real product that has largely failed as a company; I wrote earlier this year when Musk first made a bid:

Twitter has, over 19 different funding rounds (including pre-IPO, IPO, and post-IPO), raised $4.4 billion in funding; meanwhile the company has lost a cumulative $861 million in its lifetime as a public company (i.e. excluding pre-IPO losses). During that time the company has held 33 earnings calls; the company reported a profit in only 14 of them.

Given this financial performance it is kind of amazing that the company was valued at $30 billion the day before Musk’s investment was revealed; such is the value of Twitter’s social graph and its cultural impact: despite there being no evidence that Twitter can even be sustainably profitable, much less return billions of dollars to shareholders, hope springs eternal that the company is on the verge of unlocking its potential. At the same time, these three factors — Twitter’s financials, its social graph, and its cultural impact — get at why Musk’s offer to take Twitter private is so intriguing.

Stop right there: can you see where I opened the door for an error of omission as far as my analysis is concerned? Yes, Musk has successfully built two companies, and yes, Twitter is not a successful company; what followed in that Article, though, was my own vision of what Twitter might become. I should have taken the time to think more critically about Musk’s vision…which doesn’t appear to exist.

Oh sure, Musk and his coterie of advisors have narratives: bots are bad and blue checks are about status. And, to be fair, both are true as far as they go. The problem with bots is self-explanatory, while those who actually need blue checks — brands, celebrities, and reliable news breakers — likely care about them the least; the rest of us were happy to get our checkmarks, despite there being no real risk of anyone impersonating us in any damaging way, simply because they made us feel special (speaking for myself anyway: I don’t much care about it now, but I was pretty delighted when I got mine back in 2014 or so).

Of course Musk felt these problems more acutely than most: his high profile, active usage of Twitter, and popularity in crypto communities meant Musk tweets were the most likely place to encounter bots on the service; meanwhile Musk’s own grievances with journalists generally could, one imagines, engender a certain antipathy for “Bluechecks”, given that the easiest way to get one was to work for a media organization. The problem, though, is that Musk’s Twitter experience — thought to be an asset, including by yours truly — isn’t really relevant to the actual day-to-day reality of the site as experienced by Twitter’s actual users.

And so we got last week’s verified disaster, where Musk could have his revenge on bluechecks by selling them to everyone, with the most enthusiastic buyers being those eager to impersonate brands, celebrities, and Musk himself. It was certainly funny, and I believe Musk that Twitter usage was off the charts, but it wasn’t a particularly prudent move for a company reliant on brand advertising in the middle of an economic slowdown.

This is not, to be clear, to criticize Musk for acting, or even for acting quickly: Twitter needed a kick in the pants (and, even had the company not been sold, was almost certainly in line for significant layoffs), and it’s understandable that mistakes will be made; the point of rapid iteration is to learn more quickly, which is to say that Twitter has, for years, not been learning very much at all. Rather, what was concerning about this mistake in particular is the degree to which it was so clearly rooted in Musk’s personal grievances, which (1) were knowable before he acted and (2) were not the biggest problems facing Twitter. That was knowable by me as an analyst, and I regret not pointing it out.

Indeed, these aren’t the only Musk narratives that have bothered me; here is his letter to advertisers posted on his first day on the job:

I wanted to reach out personally to share my motivation in acquiring Twitter. There has been much speculation about why I bought Twitter and what I think about advertising. Most of it has been wrong.

The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.

In the relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in the money, but, in doing so, the opportunity for dialogue is lost.

This is why I bought Twitter. I didn’t do it because it would be easy. I didn’t do it to make more money. I did it to try to help humanity, whom I love. And I do so with humility, recognizing that failure in pursuing this goal, despite our best efforts, is a very real possibility.

That said, Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all, where you can choose your desired experience according to your preferences, just as you can choose, for example, to see movies or play video games ranging from all ages to mature.

I also very much believe that advertising, when done right, can delight, entertain and inform you; it can show you a service or product or medical treatment that you never knew existed, but is right for you. For this to be true, it is essential to show Twitter users advertising that is as relevant as possible to their needs. Low relevancy ads are spam, but highly relevant ads are actually content!

Fundamentally, Twitter aspires to be the most respected advertising platform in the world that strengthens your brand and grows your enterprise. To everyone who has partnered with us, I thank you. Let us build something extraordinary together.

All of this sounds good, yet on closer examination it is mostly wrong. Obviously relevant ads are better, but Twitter’s problem is not just poor execution in terms of its ad product but also that it’s a terrible place for ads. I do agree that giving users more control is a better approach to content moderation, but the obsession with the doing-it-for-the-clicks narrative ignores the nichification of the media. And, when it comes to the good of humanity, I think the biggest learning from Twitter is that putting together people who disagree with each other is actually a terrible idea; yes, it is why Twitter will never be replicated, but also why it has likely been a net negative for society. The digital town square is the Internet broadly; Twitter is more akin to a digital cage match, perhaps best monetized on a pay-per-view basis.

In short, it seems clear that Musk has the wrong narrative, and that’s going to mean more mistakes. And, for my part, I should have noted that sooner.

FTX and the Diversionary Narrative

Eric Newcomer wrote on Twitter with regards to the FTX blow-up:

There are a few different ways to interpret Sam Bankman-Fried’s political activism:

  • That he believed in the causes he supported sincerely and made a mistake with his business.
  • That he supported the causes cynically as a way to curry favor and hide his fraud.
  • That he believed he was some sort of savior gripped with an ends-justify-the-means mindset that led him to believe fraud was actually the right course of action.

In the end, whichever explanation is true doesn’t really matter: the real world impact was that customers lost around $10 billion in assets, and counting. What is interesting is that all of the explanations are an outgrowth of the view that business ought to be about more than business: to simply want to make money is somehow wrong; business is only good insofar as it is dedicated to furthering goals that don’t have anything to do with the business in question.

To put it another way, there tends to be cynicism about the idea of changing the world by building a business; entrepreneurs are judged by whether their intentions beyond business are sufficiently large and politically correct. That, though, is precisely why Bankman-Fried was viewed with such credulousness: he had the “right” ambitions and the “right” politics, so of course he was running the “right” business; he wasn’t one of those “true believers” who simply wanted to get rich off of blockchains.

In the end, though, the person who arguably comes out of this disaster looking the best is Changpeng Zhao (CZ), the founder and CEO of Binance, and the person whose tweet started the run that revealed FTX’s insolvency. No one, as far as I know, holds up CZ as any sort of activist or political actor for anything outside of crypto; isn’t that better? Perhaps had Bankman-Fried done nothing but run Alameda Research and FTX there would have been more focus on his actual business; too many folks, though, including journalists and venture capitalists, were too busy looking at things everyone claims are important but which were, in the end, a diversion from massive fraud.

Crypto and the Theory Narrative

I never wrote about Bankman-Fried; for what it’s worth, I always found his narrative suspect (this isn’t a brag, as I will explain in a moment). More broadly, I never wrote much about crypto-currency based financial applications either, beyond this article, which was mostly about Bitcoin, and this article that argued that digital currencies didn’t make sense in the physical world but had utility in a virtual one. This was mostly a matter of uncertainty: yes, many of the financial instruments on exchanges like FTX were modeled on products that were first created on Wall Street, but at the end of the day Wall Street is undergirded by actual companies building actual products (and even then, things can quite obviously go sideways). Crypto-currency financial applications were undergirded by electricity and collective belief, and nothing more, and yet so many smart people seemed on board.

What I did write about was Technological Revolutions and the possibility that crypto was the birth of something new; why Aggregation Theory would apply to crypto not despite, but because, of its decentralization; and Internet 3.0 and the theory that political considerations would drive decentralization. That last Article was explicitly not about cryptocurrencies, but it certainly fit the general crypto narrative of decentralization being an important response to the increased centralization of the Web 2.0 era.

What was weird in retrospect is that the Internet 3.0 Article was written a week after the article about Aggregation Theory and OpenSea, where I wrote:

One of the reasons that crypto is so interesting, at least in a theoretical sense, is that it seems like a natural antidote to Aggregators; I’ve suggested as such. After all, Aggregators are a product of abundance; scarcity is the opposite. The OpenSea example, though, is a reminder that I have forgotten one of my own arguments about Aggregators: demand matters more than supply…What is striking is that the primary way that most users interact with Web 3 is via centralized companies like Coinbase and FTX on the exchange side, Discord for communication and community, and OpenSea for NFTs. It is also not a surprise: centralized companies deliver a better user experience, which encompasses everything from UI to security to at least knocking down the value of your stolen assets on your behalf; a better user experience leads to more users, which increases power over supply, further enhancing the user experience, in the virtuous cycle described by Aggregation Theory.

That Aggregation Theory applies to Web 3 is not some sort of condemnation of the idea; it is, perhaps, a challenge to the insistence that crypto is something fundamentally different than the web. That’s fine — as I wrote before the break, the Internet is already pretty great, and its full value is only just starting to be exploited. And, as I argued in The Great Bifurcation, the most likely outcome is that crypto provides a useful layer on what already exists, as opposed to replacing it.

Of the three Articles I listed, this one seems to be the most correct, and I think the reason is obvious: that was the only Article written about an actual product — OpenSea — while the other ones were about theory and narrative. When that narrative was likely wrong — that crypto is the foundation of a new technological revolution, for example — then the output that resulted was wrong, not unlike Musk’s wrong narrative leading to major mistakes at Twitter.

What I regret more, though, was keeping quiet about my uncertainty about what exactly all of these folks were creating these complex financial products out of: here I suffered from my own diversionary narrative, paying too much heed to the reputation and viewpoint of people certain that there was a there there, instead of being honest that while I could see the utility of a blockchain as a distributed-but-very-slow database, all of these financial instruments seemed to be based on, well, nothing.

The FTX case is not, technically speaking, about cryptocurrency utility; it is a pretty straightforward case of fraud. Moreover, it was, as I noted in passing in that OpenSea article, a problem of centralization, as opposed to true DeFi. Such disclaimers do, though, have a whiff of “communism just hasn’t been done properly”: I already made the case that centralization is an inevitability at scale, and in terms of utility, that’s the entire problem. An entire financial ecosystem with a void in terms of underlying assets may not be fraud in a legal sense, but it sure seems fraudulent in terms of intrinsic value. I am disappointed in myself for not saying so before.

AI and the Product Narrative

Peter Thiel said in a 2018 debate with Reid Hoffman:

One axis that I am struck by is the centralization versus decentralization axis…for example, two of the areas of tech that people are very excited about in Silicon Valley today are crypto on the one hand and AI on the other. Even though I think these things are under-determined, I do think these two map in a way politically very tightly on this centralization-decentralization thing. Crypto is decentralizing, AI is centralizing, or if you want to frame it more ideologically, you could say crypto is libertarian, and AI is communist…

AI is communist in the sense it’s about big data, it’s about big governments controlling all the data, knowing more about you than you know about yourself, so a bureaucrat in Moscow could in fact set the prices of potatoes in Leningrad and hold the whole system together. If you look at the Chinese Communist Party, it loves AI and it hates crypto, so it actually fits pretty closely on that level, and I think that’s a purely technological version of this debate. There probably are ways that AI could be libertarian and there are ways that crypto could be communist, but I think that’s harder to do.

This is a narrative that makes all kinds of sense in theory; I just noted, though, that my crypto Article that holds up the best is based on a realized product, and my takeaway was the opposite: crypto in practice and at scale tends towards centralization. What has been an even bigger surprise, though, is the degree to which it is AI that appears to have the potential for far more decentralization than anyone thought. I wrote earlier this fall in The AI Unbundling:

This, by extension, hints at an even more surprising takeaway: the widespread assumption — including by yours truly — that AI is fundamentally centralizing may be mistaken. If not just data but clean data was presumed to be a prerequisite, then it seemed obvious that massively centralized platforms with the resources to both harvest and clean data — Google, Facebook, etc. — would have a big advantage. This, I would admit, was also a conclusion I was particularly susceptible to, given my focus on Aggregation Theory and its description of how the Internet, contrary to initial assumptions, leads to centralization.

The initial roll-out of large language models seemed to confirm this point of view: the two most prominent large language models have come from OpenAI and Google; while both describe how their text (GPT and GLaM, respectively) and image (DALL-E and Imagen, respectively) generation models work, you either access them through OpenAI’s controlled API, or in the case of Google don’t access them at all. But then came this summer’s unveiling of the aforementioned Midjourney, which is free to anyone via its Discord bot. An even bigger surprise was the release of Stable Diffusion, which is not only free, but also open source — and the resultant models can be run on your own computer…

What is important to note, though, is the direction of each project’s path, not where they are in the journey. To the extent that large language models (and I should note that while I’m focusing on image generation, there are a whole host of companies working on text output as well) are dependent not on carefully curated data, but rather on the Internet itself, is the extent to which AI will be democratized, for better or worse.

Just as the theory of crypto was decentralization but the product manifestation tended towards centralization, the theory of AI was centralization but a huge amount of the product excitement over the last few months has been decentralized and open source. This does, in retrospect, make sense: the malleability of software, combined with the free corpus of data that is the Internet, is much more accessible and flexible than blockchains that require network effects to be valuable, and where a single coding error results in the loss of money.

The relevance to this Article and introspection, though, is that this realization about AI is rooted in a product-based narrative, not theory. To that end, the third piece of news that happened last week was the release of Midjourney V4; the jump in quality and coherence is remarkable, even if the Midjourney aesthetic that was a hallmark of V3 is less distinct. Here is the image I used in The AI Unbundling, and a new version made with V4:

"Paperboy on a bike" with Midjourney V3 and V4

One of the things I found striking about my interview with Midjourney founder and CEO David Holz was how Midjourney came out of a process of exploration and uncertainty:

I had this goal, which was we needed to somehow create a more imaginative world. I mean, one of the biggest risks in the world I think is a collapse in belief, a belief in ourselves, a belief in the future. And part of that I think comes from a lack of imagination, a lack of imagination of what we can be, lack of imagination of what the future can be. And so this imagination thing I think is an important pillar of something that we need in the world. And I was thinking about this and I saw this, I’m like, “I can turn this into a force that can expand the imagination of the human species.” It was what we put on our company thing now. And that felt realistic. So that was really exciting.

Well, your prompt is, “/Imagine”, which is perfect.

So that was kind of the vision. But I mean, there is a lot of stuff we didn’t know. We didn’t know, how do people interact with this? What do they actually want out of it? What is the social thing? What is that? And there’s a lot of things. What are the mechanisms? What are the interfaces? What are the components that you build these experiences through? And so we kind of just have to go into that without too many opinions and just try things. And I kind of used a lot of lessons from Leap here, which was that instead of trying to go in and design a whole experience out of nothing, presupposing that you can somehow see 10 steps into the future, just make a bunch of things and see what’s cool and what people like. And then take a few of those and put them together.

It’s amazing how you try 10 things and you find the three coolest pieces, and you put them together, it feels like a lot more than three things. It kind of multiplies out in complexity and detail and it feels like it has depth, even though it doesn’t seem like a lot. And so yeah, there’s something magic about finding three cool things and then starting to build a product out of that.

In the end, the best way of knowing is starting by consciously not-knowing. Narratives are tempting but too often they are wrong, a diversion, or based on theory without any tether to reality. Narratives that are right, on the other hand, follow from products, which means that if you want to control the narrative in the long run, you have to build the product first, whether that be a software product, a publication, or a company.

That does leave open the question of Musk, and the way he seemed to meme Tesla into existence, while building a rocket ship on the side. I suspect the distinction is that both companies are rooted in the physical world: physics has a wonderful grounding effect on the most fantastical of narratives. Digital services like Twitter, though, built as they are on infinitely malleable software, are ultimately about people and how they interact with each other. The paradox is that this makes narratives that much more alluring, even — especially! — if they are wrong.


  1. Beyond the conspiracy theories that he was actually some sort of secret agent sent to destroy crypto, a close cousin of the conspiracy theory that Musk’s goal is to actually destroy Twitter; I mean, you can make a case for both! 

  2. Past performance is no guarantee of future results! 

  3. Given Bitcoin’s performance in a high inflationary environment the argument that it is a legitimate store of value looks quite poor 

  4. TBD 



19 Nov 16:43

Pluralsight Developer Success Survey

The Developer Success Lab at Pluralsight Flow (a dedicated team of developer experience researchers) has developed a short survey to help learn more about developer satisfaction and how to increase it. They’d love to hear about your successes, struggles, and lessons learned. If you are interested in participating in this survey, please forward this to your internal email lists, or share the link below.

Findings from this research will be shared directly with you and your team. And in gratitude for your participation, Pluralsight will also donate to an OSS organization of your choice.

19 Nov 16:42

Writing tests with Copilot

by Simon Willison

I needed to write a relatively repetitive collection of tests, for a number of different possible error states.

The code I was testing looks like this:

columns = data.get("columns")
rows = data.get("rows")
row = data.get("row")
if not columns and not rows and not row:
    return _error(["columns, rows or row is required"])

if rows and row:
    return _error(["Cannot specify both rows and row"])

if columns:
    if rows or row:
        return _error(["Cannot specify columns with rows or row"])
    if not isinstance(columns, list):
        return _error(["columns must be a list"])
    for column in columns:
        if not isinstance(column, dict):
            return _error(["columns must be a list of objects"])
        if not column.get("name") or not isinstance(column.get("name"), str):
            return _error(["Column name is required"])
        if not column.get("type"):
            column["type"] = "text"
        if column["type"] not in self._supported_column_types:
            return _error(
                ["Unsupported column type: {}".format(column["type"])]
            )
    # No duplicate column names
    dupes = {c["name"] for c in columns if columns.count(c) > 1}
    if dupes:
        return _error(["Duplicate column name: {}".format(", ".join(dupes))])

I wanted to write tests for each of the error cases. I'd already constructed the start of a parameterized pytest test for these.

I got Copilot/GPT-3 to write most of the tests for me.

First I used VS Code to select all of the _error(...) lines. I pasted those into a new document and turned them into a sequence of comments, like this:

# Error: columns must be a list
# Error: columns must be a list of objects
# Error: Column name is required
# Error: Unsupported column type
# Error: Duplicate column name
# Error: rows must be a list
# Error: rows must be a list of objects
# Error: pk must be a string

I pasted those comments into my test file inside the existing list of parameterized tests, then wrote each test by adding a newline beneath a comment and hitting tab until Copilot had written the test for me.

It correctly guessed both the error assertion and the desired invalid input for each one!

Here's an animated screenshot:

In this animation I start typing below the comment that says rows must be a list of objects - Copilot correctly deduces that I need an example with a rows item that is a list, and that the expected status code is 400, and that the returned error should match the text in the comment.

The finished tests are here.
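For readers who want to try this pattern, here is a self-contained sketch (my reconstruction, not Simon's actual test file): `validate()` is a hypothetical stand-in that mirrors a slice of the validation logic quoted above, so the parameterized test can run without the real endpoint, and each comment is the kind of prompt Copilot completed.

```python
import pytest

# Hypothetical stand-in for the supported types in the code under test.
SUPPORTED_COLUMN_TYPES = {"integer", "float", "text"}


def validate(data):
    """Return the error message for invalid column input, or None if valid."""
    columns = data.get("columns")
    if not isinstance(columns, list):
        return "columns must be a list"
    for column in columns:
        if not isinstance(column, dict):
            return "columns must be a list of objects"
        if not column.get("name") or not isinstance(column.get("name"), str):
            return "Column name is required"
        if column.get("type", "text") not in SUPPORTED_COLUMN_TYPES:
            return "Unsupported column type: {}".format(column["type"])
    return None


@pytest.mark.parametrize(
    "data,expected_error",
    (
        # Error: columns must be a list
        ({"columns": "id, name"}, "columns must be a list"),
        # Error: columns must be a list of objects
        ({"columns": ["id"]}, "columns must be a list of objects"),
        # Error: Column name is required
        ({"columns": [{"type": "text"}]}, "Column name is required"),
        # Error: Unsupported column type
        (
            {"columns": [{"name": "id", "type": "blob2"}]},
            "Unsupported column type: blob2",
        ),
    ),
)
def test_invalid_columns(data, expected_error):
    assert validate(data) == expected_error
```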

19 Nov 16:41

HTML datalist

by Simon Willison

A Datasette feature suggestion concerning autocomplete against a list of known values inspired me to learn how to use the HTML <datalist> element (see MDN).

It's really easy to use! It allows you to attach a list of suggested values to any input text box, and browsers will then suggest those while the user is typing - while also allowing them to type something entirely custom.

Here's a basic example:

<input type="text" list="party" name="political_party">

<datalist id="party">
<option value="Anti-Administration">
<option value="Pro-Administration">
<option value="Republican">
<option value="Federalist">
<option value="Democratic Republican">
<option value="Pro-administration">
<option value="Anti-administration">
<option value="Unknown">
<option value="Adams">
<option value="Jackson">
<option value="Jackson Republican">
<option value="Crawford Republican">
<option value="Whig">
<option value="Jacksonian Republican">
<option value="Jacksonian">
<option value="Anti-Jacksonian">
<option value="Adams Democrat">
<option value="Nullifier">
<option value="Anti Mason">
<option value="Anti Masonic">
<option value="Anti Jacksonian">
<option value="Democrat">
<option value="Anti Jackson">
<option value="Union Democrat">
<option value="Conservative">
<option value="Ind. Democrat">
<option value="Independent">
<option value="Law and Order">
<option value="American">
<option value="Liberty">
<option value="Free Soil">
<option value="Ind. Republican-Democrat">
<option value="Ind. Whig">
<option value="Unionist">
<option value="States Rights">
<option value="Anti-Lecompton Democrat">
<option value="Constitutional Unionist">
<option value="Independent Democrat">
<option value="Unconditional Unionist">
<option value="Conservative Republican">
<option value="Ind. Republican">
<option value="Liberal Republican">
<option value="National Greenbacker">
<option value="Readjuster Democrat">
<option value="Readjuster">
<option value="Union">
<option value="Union Labor">
<option value="Populist">
<option value="Silver Republican">
<option value="Free Silver">
<option value="Silver">
<option value="Democratic and Union Labor">
<option value="Progressive Republican">
<option value="Progressive">
<option value="Prohibitionist">
<option value="Socialist">
<option value="Farmer-Labor">
<option value="American Labor">
<option value="Nonpartisan">
<option value="Coalitionist">
<option value="Popular Democrat">
<option value="Liberal">
<option value="New Progressive">
<option value="Republican-Conservative">
<option value="Democrat-Liberal">
<option value="AL">
<option value="Libertarian">
</datalist>

And here's what that looks like in Firefox:

Animation showing autocomplete against that list

Creating them in JavaScript

You can also create these in JavaScript using the DOM, for example:

var parties = [
  "Anti-Administration",
  "Pro-Administration",
  "Republican",
  "Federalist",
  "Democratic Republican",
  "Pro-administration",
  "Anti-administration",
  "Unknown",
  "Adams",
  "Jackson"
];
var datalist = document.createElement("datalist");
datalist.id = "parties";
parties.forEach(function (party) {
    var option = document.createElement("option");
    option.value = party;
    datalist.appendChild(option);
});
document.body.appendChild(datalist);

Then use input.setAttribute('list', 'parties') on any input element you want to attach that new datalist to - the input's list attribute must match the datalist's id.

19 Nov 16:41

The internet of good things

by russell davies

Kettle Companion

I was very excited about the internet of things. And ambient intimacy. All that stuff. That was probably naive of me.

Lots of it failed because of bad ideas.

But some of it failed because execution was really hard back then. Developments with phones and bluetooth and everything have made lots of IOT stuff easier.

And the simple, clever ideas are starting to get built again. Properly. Functionally.

Here's one:

It's called Kettle Companion. Someone, let's say an older relative of yours, gets a little plug adaptor thing that plugs between their kettle lead and the wall. Doesn't get in the way, is unobtrusive, doesn't need switching on and off. It connects to their wifi. But then they don't have to do anything.

Someone else, let's say it's you, gets a little kettle shaped device that plugs into your wall and connects to your wifi. It takes a little bit of setting up but that's it. You can just leave it.

The kettle glows blue at midnight. Then if/when your elderly relative switches their kettle on to make a cuppa it glows green. If they don't boil the kettle before 10AM it goes red and you should probably give them a call and make sure they're OK. And, of course, if it goes green you know someone is up and about. It's a good time to call.

There's some additional nuance but that's basically it.

It's obviously only suitable for someone with a stereotypically British relationship with tea. And it could be seen as surveillance. And it shouldn't be a substitute for an actual relationship, you should still call occasionally anyway. BUT it's nice, it's gentle, it's clever. It works because it's very limited, it sends a very simple signal. Someone in that house has recently boiled a kettle. That's it. No one has to open an app, no one has to decide how they're feeling, no one has to declare their status. 

 

19 Nov 16:41

busy weeks / 2022-11-14

by Luis Villa
busy weeks / 2022-11-14

The newsletter has been quiet, but the world has not. Some thoughts after a couple of very busy weeks.

Talks and takeaways

  • Speaking: I spoke twice last week to Linux Foundation groups (first to lawyers, virtually, and then to the Member Summit, in person, with Justin Colannino of Microsoft). Slides for the longer talk are available here. Nothing too surprising in there for readers of the newsletter, but the feedback was terrific.
  • Over-focus on generative: One key takeaway for me from the hallway track was that I have been overfocused on generative AI, when many other tasks are also showing lots of promise. Three of the four most downloaded models on Hugging Face, for example, are image classifiers — in other words, instead of generating new images, they tell you what is in an existing image. This is not nearly as fun to play with, but can be very useful in enterprise applications. I'll try to pay more attention to this evaluative/non-generative ML going forward.
  • Modifications: Another key discussion theme was modification. I may have been overly focused on data as the key here, despite an increasing number of examples of "tuning" of already-trained models. Besides the relative efficiency of additional training or tuning, that approach may offer privacy gains as well. So I'll be looking more explicitly to understand and explain that here in coming weeks.

Observations

Couple of things, mostly about gut feel, that I want to put a stake in the ground on and would welcome feedback on.

  • What role will the legal industry play? I found myself repeatedly talking about this (slightly oversimplified) history of the legal industry's response to GPL v2 and AGPL v3. In the late 1990s, IBM found the industry uncomfortable about GPL v2 and responded in part by funding legal education, because it wanted its customers comfortable buying Linux. The FSF, similarly, invested a lot of time in improving its documentation on the meaning of the GPL, and wrote the LGPL to give additional certainty for certain use cases. In stark contrast, in the late 2000s, the industry was uncomfortable about the Affero GPL and no one stepped up to make it more palatable: the FSF clarified virtually nothing in its FAQ; no one wrote a "lesser" AGPL; and the industry worked hard to actively FUD it. I made my case to the assembled worthies that our path with the various AI licensing (and regulatory) initiatives should be a lot closer to GPL v2 than to AGPL, but I'm not yet optimistic.
  • Transparency: Related to the previous point: I think there's a role to play, contours still undefined, for a meeting between licensing and regulability/explainability, akin to LGPL's requirements around modification of libraries, GPL v3's requirements around reinstallation and DRM, or CAL's data requirements. This may be a more fruitful approach to ethical AI, by empowering regulators (and end-users!) rather than attempting to re-create entire legal and ethical systems inside a copyright license.

Not me

  • GitHub Copilot: I still am pretty grumpy about how this lawsuit attempts to end-run fair use by focusing on Section 1202 of the DMCA, so I have not written much about it. In the meantime, read what my friend Kate has written on it.
  • Burnout and resourcing: This article on burnout amongst AI ethics practitioners is quite good. It took on an extra tinge a few days after publication when the head of the Twitter ethical AI team, quoted extensively in the piece, and her entire team, were laid off by Twitter.
  • "Fine-tuning sprint" hackathon: I'm fascinated by this variation on an online hackathon, which recruits people to hack on tuning models for specific languages. We're going to see new forms of collaboration tied to this new technology—is this one? I'm very curious to drop by and observe if I can.
  • ML might make bias actively worse: I suppose no huge surprise, but this research suggests that models may not just mirror biases from an underlying data set—it may well amplify those biases. This will likely become a go-to paper for when people ask me "but why is AI more ethically challenging than traditional software".
  • Deviant Art tries hard, maybe fails: Deviant Art is trying to put together a model allowing the artists on its platform to use, opt out of, and benefit from generative ML models. It will not be a surprise that Deviant Art users are not thrilled. The most cogent and challenging critique, I think, is that if Deviant Art's ML is trained based on a model that already includes Deviant Art posts, the "opt out" is somewhat specious. But not clear how to avoid that if open(ish) models are what we start to train from, rather than everyone training their own large models from (very carbon- and dollar-intensive) scratch. Related: here's a deeper dive with a cartoon artist who was (used? targeted? honored?) via 'invasive diffusion'.
  • Carbon: One reason not to retrain everything from scratch is that training these big models is very carbon intensive. The final training run (not counting aborted runs, etc., etc.) of the large-language BLOOM model was on the order of a few dozen trans-atlantic flights.

A lighter note

I enjoyed this fun little thought experiment on "re-impressionism", an imagined (but possible) art trend of the year... 2023:

19 Nov 16:39

Responding to a draft? Put it in writing.

by Josh Bernoff

If I’m writing a document for you, you probably want to tell me what you thought of my draft. You could edit it with markup software. You could write me a note about it. You could forget to turn on the markup and just edit — don’t worry, I have tools that allow me to … Continued

The post Responding to a draft? Put it in writing. appeared first on without bullshit.

19 Nov 16:38

iPhone emergency calls via satellite in Germany too

by Volker Weber

I wasn't expecting this. From Apple's press release:

iPhone 14 users can now connect to emergency services even when neither cellular coverage nor Wi-Fi is available; in December the service will expand to Germany, France, the United Kingdom, and Ireland. … The service is included free for two years, starting from the activation of a new iPhone 14, iPhone 14 Plus, iPhone 14 Pro, or iPhone 14 Pro Max.

I had assumed this would initially only be usable in the US. Germany, too, has areas without cellular coverage. That is cheeky and regrettable, but at least help can reach you there. Interestingly, the iPhone first asks you questions about your emergency and then transmits all the information to the emergency call center via satellite in one go. While doing so, the iPhone shows which part of the sky to point it at.

DC Rainmaker has already tested it in North America.

14 Nov 09:25

Datasette is 5 today: a call for birthday presents

Five years ago today I published the first release of Datasette, in Datasette: instantly create and publish an API for your SQLite databases.

Five years, 117 releases, 69 contributors, 2,123 commits and 102 plugins later I'm still finding new things to get excited about with the project every single day. I fully expect to be working on this for the next decade-plus.

Datasette is the ideal project for me because it can be applied to pretty much everything that interests me - and I'm interested in a lot of things!

I can use it to experiment with GIS, explore machine learning data, catalog cryptozoological creatures and collect tiny museums. It can power blogs and analyze genomes and figure out my dog's favourite coffee shop.

The official Datasette website calls it "an open source multi-tool for exploring and publishing data". This definitely fits how I think about the project today, but I don't know that it really captures my vision for its future.

In "x for y" terms I've started thinking of it as Wordpress for Data.

Wordpress powers 39.5% of the web because its thousands of plugins let it solve any publishing problem you can think of.

I want Datasette to be able to do the same thing for any data analysis, visualization, exploration or publishing problem.

There's still so much more work to do!

Call for birthday presents

To celebrate this open source project's birthday, I've decided to try something new: I'm going to ask for birthday presents.

An aspect of Datasette's marketing that I've so far neglected is social proof. I think it's time to change that: I know people are using the software to do cool things, but this often happens behind closed doors.

For Datasette's birthday, I'm looking for endorsements and case studies and just general demonstrations that show how people are using it to do cool stuff.

So: if you've used Datasette to solve a problem, and you're willing to publicize it, please give us the gift of your endorsement!

How far you want to go is up to you:

  • Not ready or able to go public? Drop me an email. I'll keep it confidential but just knowing that you're using these tools will give me a big boost, especially if you can help me understand what I can do to make Datasette more useful to you
  • Add a comment to this issue thread describing what you're doing. Just a few sentences is fine - though a screenshot or even a link to a live instance would be even better
  • Best of all: a case study - a few paragraphs describing your problem and how you're solving it, plus permission to list your logo as an organization that uses Datasette. The most visible social proof of all!

I thrive on talking to people who are using Datasette, so if you want to have an in-person conversation you can sign up for a Zoom office hours conversation on a Friday.

I'm also happy to accept endorsements in replies to these posts on Mastodon or on Twitter.

Here's to the next five years of Datasette. I'm excited to see where it goes next!

14 Nov 09:25

The Notebook Not Taken

I had the pleasure of seeing Alison Hill talk about computational notebooks last week. Jupyter, Quarto, Wolfram Notebooks, and the like are now many scientists’ preferred way to think in code. Having used several, I can’t help but wonder if there’s a universe out there in which we took a different path. Instead of starting with Markdown and slowly edging toward a full-featured editor, did someone on Earth-978 write a plugin for LibreOffice to run code and insert its output into the document? It’s technically feasible: there’s no reason something like the Jupyter protocol couldn’t have been invented with a WYSIWYG editor as the first front end instead of a browser.

Earth-977’s history took yet another path. There, Microsoft included what they called a “computational bridge” in Office 2007. It was designed to help people create automated reports, but scientists almost immediately adopted it as well: most of them were already using Word and Excel, and found that learning a bit of VB.NET to push around their dataframes was a lot easier than shifting to an entirely new suite of tools. By the time Google Docs appeared, active code blocks were as normal (and as expected) as drawings and tables. A few holdouts continued to use Python and R in text-only editors and IDEs, but once a consortium sponsored by Microsoft, Google, and Apple implemented diff and merge for office documents, the battle was effectively over.

I suspect our universe unfolded differently for two reasons:

  1. Most computational scientists don’t know Java or .NET.

  2. Most programmers look down on WYSIWYG editors. As a result, auxiliary tools from grep to Git can’t handle things that aren’t backward-compatible with punchcards.

I believe the LibreOffice path is still viable in our universe. The unseen 99% of scientists (data or otherwise) don’t yet use computational notebooks of any kind. With the distinction between desktop and cloud growing ever blurrier, and with so many of the pieces needed for this approach already available, I think a startup could make a compelling case that accountants, marketing executives, and others would prefer something evolutionary over a browser-based dashboarding tool or something as alien as today’s notebooks.

Note: this post was inspired in part by my move from Twitter to Mastodon. The differences between the two have got me thinking about how chancy and evitable our technologies are, and about how many alternatives we have yet to explore.

14 Nov 09:23

MastodonConf

Is there a MastodonConf yet? It would be near the top of my must-do list: the combination of tech talks about scaling federated services with process talks on encouraging or enforcing social rules for distributed content moderation and social scientists analyzing how people are adapting legacy (Twitter-era) practices to a similar-but-different ecosystem would be fascinating.

14 Nov 09:23

Generating OpenAPI specifications using GPT-3

by Simon Willison

I wanted to start playing around with OpenAPI. I decided to see if I could get GPT-3 to generate the first version of a specification for me.

I used this prompt:

Write an OpenAPI JSON specification for the following APIs:

GET /data/docs.json

Returns:

{
  "database": "data",
  "table": "docs",
  "rows": [
    {
      "id": 1,
      "title": "A title"
    },
    {
      "id": 2,
      "title": "Another doc"
    }
  ]
}

POST /data/-/create
{
  "table": "my_new_table",
  "columns": [
    {
      "name": "id",
      "type": "integer"
    },
    {
      "name": "age",
      "type": "float"
    },
    {
      "name": "title",
      "type": "text"
    }
  ],
  "pk": "id"
}

Output:

{
  "ok": true,
  "database": "data",
  "table": "my_new_table",
  "table_url": "http://127.0.0.1:8001/data/my_new_table"
}

I had to hit the button in the Playground a few times due to the length, but the eventual output looked like this:

{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": "Data API"
  },
  "paths": {
    "/data/docs.json": {
      "get": {
        "tags": [
          "data"
        ],
        "summary": "Get data from docs.json",
        "operationId": "getData",
        "produces": [
          "application/json"
        ],
        "parameters": [],
        "responses": {
          "200": {
            "description": "Successful response",
            "schema": {
              "type": "object",
              "properties": {
                "database": {
                  "type": "string"
                },
                "table": {
                  "type": "string"
                },
                "rows": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "id": {
                        "type": "integer"
                      },
                      "title": {
                        "type": "string"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    },
    "/data/{id}": {
      "post": {
        "tags": [
          "data"
        ],
        "summary": "Create a new table",
        "operationId": "createTable",
        "produces": [
          "application/json"
        ],
        "parameters": [
          {
            "name": "table",
            "in": "path",
            "required": true,
            "type": "string"
          },
          {
            "name": "columns",
            "in": "body",
            "required": true,
            "schema": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "name": {
                    "type": "string"
                  },
                  "type": {
                    "type": "string"
                  }
                }
              }
            }
          },
          {
            "name": "pk",
            "in": "body",
            "required": true,
            "type": "string"
          }
        ],
        "responses": {
          "200": {
            "description": "Successful response",
            "schema": {
              "type": "object",
              "properties": {
                "ok": {
                  "type": "boolean"
                },
                "database": {
                  "type": "string"
                },
                "table": {
                  "type": "string"
                },
                "table_url": {
                  "type": "string"
                }
              }
            }
          }
        }
      }
    }
  }
}
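One quick way to sanity-check generated specs like this (my addition, not from the post) is to load the JSON and enumerate the operations it declares; the trimmed spec below keeps just enough of GPT-3's output to exercise the check.

```python
import json

# A trimmed copy of the generated spec, just enough to exercise the check.
spec = json.loads('''
{
  "swagger": "2.0",
  "info": {"version": "1.0.0", "title": "Data API"},
  "paths": {
    "/data/docs.json": {"get": {"operationId": "getData"}},
    "/data/{id}": {"post": {"operationId": "createTable"}}
  }
}''')


def list_operations(spec):
    """Yield (method, path, operationId) for every operation in the spec."""
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            yield method.upper(), path, op.get("operationId")


for op in list_operations(spec):
    print(op)
```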
14 Nov 09:22

Some human biases in conditional reasoning

by Derek Jones

Tracking down coding mistakes is a common developer activity (for which training is rarely provided).

Debugging code involves reasoning about differences between the actual and expected output produced by particular program input. The goal is to figure out the coding mistake, or at least narrow down the portion of code likely to contain the mistake.

Interest in human reasoning dates back to at least ancient Greece, e.g., Aristotle and his syllogisms. The study of the psychology of reasoning is very recent; the field was essentially kick-started in 1966 by the surprising results of the Wason selection task.

Debugging involves a form of deductive reasoning known as conditional reasoning. The simplest form of conditional reasoning involves an input that can take one of two states, along with an output that can take one of two states. Using coding notation, this might be written as:

    if (p) then q       if (p) then !q
    if (!p) then q      if (!p) then !q

The notation used by the researchers who run these studies is a 2×2 contingency table (or conditional matrix):

          OUTPUT
          1    0
   
      1   A    B
INPUT
      0   C    D

where: A, B, C, and D are the number of occurrences of each case; in code notation, p is the input and q the output.

The fertilizer-plant problem is an example of the kind of scenario subjects answer questions about in studies. Subjects are told that a horticultural laboratory is testing the effectiveness of 31 fertilizers on the flowering of plants; they are told the number of plants that flowered when given fertilizer (A), the number that did not flower when given fertilizer (B), the number that flowered when not given fertilizer (C), and the number that did not flower when not given any fertilizer (D). They are then asked to evaluate the effectiveness of the fertilizer on plant flowering. After the experiment, subjects are asked about any strategies they used to make judgments.

Needless to say, subjects do not make use of the available information in a way that researchers consider to be optimal, e.g., Allan's Δp index: Δp = P(q|p) - P(q|¬p) = A/(A+B) - C/(C+D).
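In code, Allan's Δp is a one-liner over the four cell counts; the fertilizer-plant counts below are hypothetical, purely to illustrate the calculation:

```python
def delta_p(A, B, C, D):
    """Allan's Delta-p index: P(output | input) - P(output | no input)."""
    return A / (A + B) - C / (C + D)


# Hypothetical counts: 15 flowered with fertilizer (A), 5 did not (B);
# 5 flowered without fertilizer (C), 5 did not (D).
print(delta_p(15, 5, 5, 5))  # 0.75 - 0.5 = 0.25
```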

What do we know after 40+ years of active research into this basic form of conditional reasoning?

The results consistently find, for this and other problems, that the information A is given more weight than B, which is given more weight than C, which is given more weight than D.

That information provided by A and B is given more weight than C and D is an example of a positive test strategy, a well-known human characteristic.

Various models have been proposed to ‘explain’ the relative ordering of information weighting, w(A) > w(B) > w(C) > w(D), e.g., that subjects have a bias towards sufficiency information compared to necessary information.

Subjects do not always analyse separate contingency tables in isolation. The term blocking is given to the situation where the predictive strength of one input is influenced by the predictive strength of another input (this process is sometimes known as the cue competition effect). Debugging is an evolutionary process, often involving multiple test inputs. I’m sure readers will be familiar with the situation where the output behaviour from one input motivates a misinterpretation of the behaviour produced by a different input.

The use of logical inference is a commonly used approach to the debugging process (my suggestions that a statistical approach may at times be more effective tend to attract odd looks). Early studies of contingency reasoning were dominated by statistical models, with inferential models appearing later.

Debugging also involves causal reasoning, i.e., searching for the coding mistake that is causing the current output to be different from that expected. False beliefs about causal relationships can be a huge waste of developer time, and research on the illusion of causality investigates, among other things, how human interpretation of the information contained in contingency tables can be ‘de-biased’.

The apparently simple problem of human conditional reasoning over two variables, each having two states, has proven to be surprisingly difficult to model. It is tempting to think that the performance of professional software developers would be closer to the ideal, compared to the typical experimental subject (e.g., psychology undergraduates or Mturk workers), but I’m not sure whether I would put money on it.

14 Nov 09:22

Instapaper Liked: We Need to Talk About the "Good for Her" Genre

Being very online means witnessing the very birth of trends and tropes, and getting to watch as they dominate popular culture. The term “good for her” was…
14 Nov 09:22

Pluralistic: 13 Nov 2022 The Framework is the most exciting laptop I've ever broken

by Cory Doctorow
mkalus shared this story from Pluralistic: Daily links from Cory Doctorow.


Today's links



A disassembled Framework laptop; a man's hand reaches into the shot with a replacement screen.

The Framework is the most exciting laptop I've ever broken (permalink)

From the moment I started using computers, I wanted to help other people use them. I was everyone's tech support for years, which prepared me for the decade or so when I was a CIO-for-hire. In the early days of the internet, I spent endless hours helping my BBS friends find their way onto the net.

Helping other people use technology requires humility: you have to want to help them realize their goals, which may be totally unlike your own. You have to listen carefully and take care not to make assumptions about how they "should" use tech. You may be a tech expert, but they are experts on themselves.

This is a balancing act, because it's possible to be too deferential to someone else's needs. As much as other people know about how they want technology to work, if you're their guide, you have to help them understand how technology will fail.

For example, using the same memorable, short password for all your services works well, but it fails horribly. When one of those passwords leaks, identity thieves can take over all of your friend's accounts. They may think, "Oh, no one would bother with my account, I've got nothing of value," so you have to help them understand how opportunistic attacks work.

Yes, they might never be individually targeted, but they might be targeted collectively, say, to have their social media accounts hijacked to spread malware to their contacts.

Paying attention to how things work without thinking about how they fail is a recipe for disaster. It's the reasoning that has people plow their savings into speculative assets that are going up and up, without any theory of when that bubble might pop and leave them ruined.

It's hard to learn about failure without experiencing it, so those of us who have lived through failures have a duty to help the people we care about understand those calamities without living through them themselves.

That's why, for two decades, I've always bought my hardware with an eye to how it fails every bit as much as how it works. Back when I was a Mac user – and supporting hundreds of other Mac users – I bought two Powerbooks at a time.

I knew from hard experience that Applecare service depots were completely unpredictable and that once you mailed off your computer for service, it might disappear into the organization's bowels for weeks or even (in one memorable case), months.

I knew that I would eventually break my laptop, and so I kept a second one in sync with it through regular system-to-system transfers. When my primary system died, I'd wipe it (if I could!) and return it to Apple and switch to the backup and hope the main system came back to me before I broke the backup system.

This wasn't just expensive – it was very technologically challenging. The proliferation of DRM and other "anti-piracy" measures on the Mac increasingly caused key processes to fail if you simply copied a dead system's drive into a good one.

Then, in 2006, I switched operating systems to Ubuntu, a user-centric, easy-to-use flavor of GNU/Linux. Ubuntu was originally developed with the idea that its users would include Sub-Saharan African classrooms, where network access was spotty and where technical experts might be far from users.

To fulfill this design requirement, the Ubuntu team focused themselves on working well, but also failing gracefully, with the idea that users might have to troubleshoot their own technological problems.

One advantage of Ubuntu: it would run on lots of different hardware, including IBM's Thinkpads. The Thinkpads were legendarily rugged, but even more importantly, Thinkpad owners could opt into a far more reliable service regime than Applecare.

For about $150/year, IBM offered a next-day, on-site, worldwide hardware replacement warranty. That meant that if your laptop broke, IBM would dispatch a technician with parts to wherever you were, anywhere in the world, and fix your computer, within a day or so.

This was a remnant of the IBM Global Services business, created to supply tech support to people who bought million-dollar mainframes, and laptop users could ride on its coattails. It worked beautifully – I'll never forget the day an IBM technician showed up at my Mumbai hotel while I was there researching a novel and fixed my laptop on the hotel-room desk.

This service was made possible in part by the Thinkpad's hardware design. Unlike the Powerbook, Thinkpads were easy to take apart. Early on in my Thinkpad years, I realized I could save a lot of money by buying my own hard-drives and RAM separately and installing them myself, which took one screwdriver and about five minutes.

The keyboards were also beautifully simple to replace, which was great because I'm a thumpy typist and I would inevitably wear out at least one keyboard. The first Thinkpad keyboard swap I did took less than a minute, and I performed it one-handed, while holding my infant daughter in my other hand, and didn't even need to read the documentation!

But then IBM sold the business to Lenovo and it started to go downhill. Keyboard replacements got harder, the hardware itself became far less reliable, and they started to move proprietary blobs onto their motherboards that made installing Ubuntu into a major technical challenge.

Then, in 2021, I heard about a new kind of computer: the Framework, which was designed to be maintained by its users, even if they weren't very technical.

https://frame.work/

The Framework was small and light – about the same size as a Macbook – and very powerful, but you could field-strip it in 15 minutes with a single screwdriver, which shipped with the laptop.

I pre-ordered a Framework as soon as I heard about it, and got mine as part of the first batch of systems. I ordered mine as a kit – disassembled, requiring that I install the drive, RAM and wifi card, as well as the amazing, snap-fit modular expansion ports. It was a breeze to set up, even if I did struggle a little with the wifi card antenna connectors (they subsequently posted a video that made this step a lot easier):

https://twitter.com/frameworkputer/status/1433320060429373440

The Framework works beautifully, but it fails even better. Not long after I got my Framework, I had a hip replacement; as if in sympathy, my Framework's hinges also needed replacing (a hazard of buying the first batch of a new system is that you get to help the manufacturer spot problems in their parts).

My Framework "failed" – it needed a new hinge – but it failed so well. Framework shipped me a new part, and I swapped my computer's hinges, one day after my hip replacement. I couldn't sit up more than 40 degrees, I was high af on painkillers, and I managed the swap in under 15 minutes. That's graceful failure.

https://guides.frame.work/Guide/Hinge+Replacement+Guide/104

After a few weeks' use, I was convinced. I published my review, calling the Framework "the most exciting laptop I've ever used."

https://pluralistic.net/2021/09/21/monica-byrne/#think-different

That was more than a year ago. In the intervening time, I've got to discover just how much punishment my Framework can take (I've been back out on the road with various book publicity events and speaking engagements) and also where its limits are. I've replaced the screen and the keyboard, and I've even upgraded the processor:

https://guides.frame.work/Guide/Mainboard+Replacement+Guide/79

I'm loving this computer so. damn. much. But as of this morning, I love it even more. On Thursday, I was in Edinburgh for the UK launch of "Chokepoint Capitalism," my latest book, which I co-authored with Rebecca Giblin.

As I was getting out of a cab for a launch-day podcast appearance, I dropped my Framework from a height of five feet, right onto the pavement. I had been working on the laptop right until the moment the cab arrived because touring is nuts. I've got about 150% more commitments than I normally do, and I basically start working every day at 5AM and keep going until I drop at midnight, every single day.

As rugged as my Framework is, that drop did for it. It got an ugly dent in the input cover assembly and – far, far worse – I cracked my screen. The whole left third of my screen was black, and the rest of it was crazed with artefacts and lines.

This is a catastrophe. I don't have any time for downtime. Just today, I've got two columns due, a conference appearance and a radio interview, which all require my laptop. I got in touch with Framework and explained my dire straits and they helpfully expedited shipping of a new $179 screen.

Yesterday, my laptop screen stopped working altogether. I was in Oxford all day, and finished my last book event at about 9PM. I got back to my hotel in London at 11:30, and my display was waiting for me at the front desk. I staggered bleary-eyed to my room, sat down at the desk, and, in about fifteen minutes flat, I swapped out the old screen and put in the new one.

https://guides.frame.work/Guide/Display+Replacement+Guide/86

That is a fucking astoundingly graceful failure mode.

Entropy is an unavoidable fact of life. "Just don't drop your laptop" is great advice, but it's easier said than done, especially when you're racing from one commitment to the next without a spare moment in between.

Framework has designed a small, powerful, lightweight machine – it works well. But they've also designed a computer that, when you drop it, you can fix yourself. That attention to graceful failure saved my ass.

If you hear me today on CBC Sunday Magazine, or tune into my Aaron Swartz Day talk, or read my columns at Medium and Locus, that's all down to this graceful failure mode. Framework's computers aren't just the most exciting laptops I've ever used – they're the most exciting laptops I've ever broken.


Hey look at this (permalink)



This day in history (permalink)

#20yrsago Argentina: stranger than fiction https://infinitematrix.net/columns/sterling/sterling54.html

#20yrsago Reliable TCP’s weird symbiosis with unreliable IP https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/

#15yrsago John Scalzi’s snarky science fiction tour of the Creation Museum https://whatever.scalzi.com/2007/11/12/your-creation-museum-report/

#15yrsago Geek Mafia: Mile Zero: nerdy caper novel, the sequel https://memex.craphound.com/2007/11/12/geek-mafia-mile-zero-nerdy-caper-novel-the-sequel/

#15yrsago A dusty epidemic of cremains at Disneyland https://web.archive.org/web/20071213025122/http://www.miceage.com/allutz/al111307d.htm

#15yrsago JK Rowling sues to stop publication of Potter reference book https://web.archive.org/web/20071115040824/http://machinist.salon.com/blog/2007/11/13/harry_potter/

#15yrsago Gitmo operating manual leak https://web.archive.org/web/20071216093109/https://www.wikileaks.org/wiki/Gitmo-sop.pdf

#15yrsago Magic and Showmanship: Classic book about conjuring has many lessons for writers https://memex.craphound.com/2007/11/13/magic-and-showmanship-classic-book-about-conjuring-has-many-lessons-for-writers/

#10yrsago UPS to Scouts: no more money until you drop anti-gay policy https://www.bizjournals.com/atlanta/news/2012/11/12/ups-cuts-funding-to-boy-scouts-over.html

#10yrsago Wall Street is not made up of “numbers guys” https://scienceblogs.com/principles/2012/11/12/financiers-still-arent-rocket-scientists

#10yrsago Tune: Derek Kirk Kim’s alien abduction romcom https://memex.craphound.com/2012/11/13/tune-derek-kirk-kims-alien-abduction-romcom/

#10yrsago Ken Macleod on socialism, Singularity, and the rapture of the nerds https://web.archive.org/web/20130121093234/http://www.aeonmagazine.com/world-views/ken-macleod-socialism-and-transhumanism/

#10yrsago Super Scratch Programming Adventure! an excellent way to get started in Scratch https://memex.craphound.com/2012/11/12/super-scratch-programming-adventure-an-excellent-way-to-get-started-in-scratch/

#5yrsago Equifax’s total bill for leaking 145.5 million US records to date: $87.5 million https://www.bleepingcomputer.com/news/business/hack-cost-equifax-only-87-5-million-for-now/

#5yrsago One week after release, iPhone X’s Face ID reportedly defeated by a $150 mask https://www.theverge.com/2017/11/13/16642690/bkav-iphone-x-faceid-mask

#5yrsago The secretive wealthy family behind the opioid epidemic are using the same tactics to kill public education https://www.salon.com/2017/11/15/the-super-wealthy-oxycontin-family-supports-school-privatization_partner/

#5yrsago Bernie Sanders: to fix the Democratic Party, curb superdelegates, make it easier to vote in primaries, and account for funds https://www.politico.com/magazine/story/2017/11/10/bernie-sanders-how-to-fix-democratic-party-215813/

#5yrsago Watson for Oncology isn’t an AI that fights cancer, it’s an unproven mechanical turk that represents the guesses of a small group of doctors https://www.statnews.com/2017/09/05/watson-ibm-cancer/

#5yrsago Roy Moore’s scandal is just the tip of American evangelical Christianity’s child bride problem https://web.archive.org/web/20171119043741/http://www.latimes.com/opinion/op-ed/la-oe-brightbill-roy-moore-evangelical-culture-20171110-story.html

#1yrago How to be safe(r) online https://pluralistic.net/2021/11/13/opsec-soup-to-nuts/#secured

#1yrago American corporate criminals in the crosshairs (finally) https://pluralistic.net/2021/11/12/with-a-fountain-pen/#recidivism



Colophon (permalink)

Currently writing:

  • The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. Friday's progress: 526 words (61019 words total)
  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. (92849 words total) – ON PAUSE

  • A Little Brother short story about DIY insulin PLANNING

  • The Internet Con: How to Seize the Means of Computation, a nonfiction book about interoperability for Verso. FIRST DRAFT COMPLETE, WAITING FOR EDITORIAL REVIEW

  • Vigilant, Little Brother short story about remote invigilation. FIRST DRAFT COMPLETE, WAITING FOR EXPERT REVIEW

  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION

  • Spill, a Little Brother short story about pipeline protests. FINAL DRAFT COMPLETE

  • A post-GND utopian novel, "The Lost Cause." FINISHED

  • A cyberpunk noir thriller novel, "Red Team Blues." FINISHED

Currently reading: Analogia by George Dyson.

Latest podcast: Sound Money https://craphound.com/news/2022/09/11/sound-money/

Upcoming appearances:

Recent appearances:

Latest books:

Upcoming books:

  • Red Team Blues: "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, April 2023

This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/web/accounts/303320

Medium (no ads, paywalled):

https://doctorow.medium.com/

(Latest Medium column: "The End of the Road to Serfdom" https://doctorow.medium.com/the-end-of-the-road-to-serfdom-bfad6f3b35a9)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

14 Nov 09:20

After Twitter

For writers, artists, podcasters, journalists, and people who make things in public, Twitter was the one social networking site we all had to use.

It’s as if Twitter has been stretched out across the map of the internet, and whatever parts of the map it didn’t cover, it could still reach. Everything happened there. Even things that started on some blog, podcast, or in real life still really happened on Twitter.

It was that way because we agreed to it. We may not have liked it all the time, but by our actions we were part of that consensus.

This has just changed, and now Twitter is only one of many nations on that map, no longer the indispensable one. And, worse, it’s showing clear signs of failing.

It Was Always a Bad Idea

The internet’s town square should never have been one specific website with its own specific rules and incentives. It should have been, and should be, the web itself.

Having one entity own and police that square could only deform the worldwide conversation, to disastrous ends, even with the smartest and most humane people at work.

Twitter’s new owner is certainly not one of those people. But it doesn’t matter: he unintentionally brought the change that needed to happen, the break in the consensus.

The Building Slowdown

I resented Twitter because it had become mandatory, because it had become the one place. And I resented it because it seemed to subdue the creative energy of the web.

You built a thing? How nice. Well, there’s Twitter, so we don’t need it. Or: how’s this going to replace Twitter? It’s not, so save your energy.

Everything you might build that had to do with communication, reading and writing and otherwise, was compared to Twitter or somehow in relation to Twitter, even when unasked for, and Twitter was the enormous factor in that equation.

A New Hope

With the fall of the Twitter consensus I am energized. I remember what it was like in the 2000s; I remember the liveliness and sparkle of those days on the web.

I don’t suggest dialing the clock back to old tech, but I hope we use the good parts from those days to help us build new and amazing things. There is so much to do, and so many grand problems to solve.

The web is wide open again, for the first time in what feels like forever.

DVD Extras

PS I outright stole the “stretched out across the map” thing from Franz Kafka’s “Letter to His Father.” It was just right for this. Come at me, Kafka.

PPS An earlier draft of this article had the line “Twitter was the island in the middle of the kitchen where we hung out, and now it’s a junk drawer of brands and nazis,” which I preserve because it’s true.

13 Nov 02:47

Springsteen’s Covers Playlist

by bob
https://spoti.fi/3Uzjbkq 1. “Hey, Western Union Man” – Bruce Springsteen 2. “Hey, Western Union Man” – Jerry Butler 3. “Hey, Western Union Man” – Al Kooper 4. “The Sun Ain’t Gonna Shine Anymore” – Bruce Springsteen 5. “The Sun Ain’t Gonna Shine (Anymore)” – Frankie Valli 6. “The Sun Ain’t Gonna Shine Anymore” – Walker Brothers […]
13 Nov 02:34

Birthday Cake

by peter@rukavina.net (Peter Rukavina)

Ann Cakes hit it out of the park for Olivia’s second birthday party tonight: their tiramisu cake was delicious and expertly-crafted.

13 Nov 02:34

The Other Side of ‘No Me’

by Dave Pollard

For seven years now, I have been watching/listening to radical non-duality speakers. Although I write about it a lot, I’m really not trying to convince anyone of anything — I’m just trying to sort it out in my own mind. I want to understand how a message that is so useless and so counter to everything I’ve been taught about the nature of reality, can be at the same time so resonant, so compelling.

There are quite a few scientists and science buffs who are starting to write about how this strange message — that there is no free will, no self, no time or space, no thing and no one ‘separate’, and nothing ‘really’ happening — is surprisingly aligned with the newest discoveries and theories in astrophysics, quantum science, and neuroscience.

But while the veracity of this message may be scientifically supportable, and somehow intuited, and even ‘glimpsed’ in ‘moments’ when the self seemingly ‘disappears’, there is no path to actually ‘seeing’ it. There is no way for this creature’s concoction I call my self, to realize the non-existence of itself. The self cannot get out of its own way.

Still, once you’ve been bitten by this message it can be impossible to shake. Somehow, I know it’s ‘right’. So although it’s futile, I keep on trying to make sense of everything that ‘I’ have always thought was true, all over again, on the premise that this message is correct. It’s a frustrating and exhausting process.

The first and easiest step in this process, I think, is the acknowledgement that there is no such thing as free will, and that all our (bodies’) beliefs and behaviours are biologically or culturally conditioned given the circumstances of the moment. We certainly seem to have free will, but there is lots of empirical and scientific evidence that we do not. I did not come to accept this easily, but when I did, it started to seem pretty obvious to me, and to explain a lot of previously seemingly paradoxical and incoherent human behaviours.

Accepting that was rather revolutionary in itself. Our systems of punishment and incarceration, for example, suddenly seemed ludicrous to me. Blaming people for their behaviour suddenly seemed unfair and even cruel. An enormous weight of self-imposed responsibility for my own actions was lifted from my shoulders (after the usual Doubting Thomas skepticism that I might just be using it as an excuse for ‘shirking’ my responsibilities and not taking responsibility for what this body had done).

The next and much more difficult step was acknowledging that there is no such thing as time. Scientifically this isn’t hard to justify — most scientists have acknowledged that time isn’t ‘fundamental’ to the nature of reality and many have conceded that it’s just a mental shorthand, the brain’s way of categorizing its observations. Many scientists are also willing to acknowledge that what we see as a continuity of ‘happenings’  ‘over’ time is also just a convincing illusion — that time is just an order that our brains place our perceptions into, to try to make sense of what appears to be happening. And that our entire sense of our lives, memories and plans has to be reinvented and re-placed in ‘time’ order in the brain, every time we awaken.

But the non-existence of time is, for me at least, much harder to rationalize than our lack of free will. Some very basic concepts, like the ‘big bang’, evolution, causality, memory, and our sense of, and preoccupation with, the trajectory of our lives, and our death, all have to be re-thought if we acknowledge the un-reality of time.

My way of doing this is to suggest that perhaps all of these apparent happenings, if they are not ‘real’, are so apparently so that perhaps their ‘actual’ reality is not worth quibbling about. We don’t argue that a film showing us pictures at 50fps is fraudulent because the appearance of movement in it is essentially an optical illusion. When I talk with radical non-duality speakers they don’t seem to care very much whether time is an ‘appearance’ or an ‘illusion’; they just assert that it’s not real. So I kind of put my reservations about this on hold, and go onto the next step.

The third step is a doozy — it’s the acknowledgement that there is no ‘self’, no ‘you’ no ‘one’, no ‘thing’ apart. Everything is nothing appearing as everything. Not only is there no free will, there is no ‘one’ to have free will. Not only is there no time, there is nothing really happening.

Neuroscientists have largely given up looking for evidence of a ‘self’ anywhere(s) in the brain or body; there is simply no physical evidence one exists. And they have also demonstrated that the body kinetically acts on what are its apparent decisions before the ‘decision’ shows up in the brain’s neural pathways, suggesting that the brain is rationalizing the decision as being ‘its’ decision after the fact, rather than making decisions at all. Just ‘making sense of it’ for ‘next time’.

Whether or not you believe neuroscience (which is still mostly quackery, as studies debunking fMRI theories continue to demonstrate), the most compelling evidence for the non-existence of the self is when it apparently completely disappears, with no apparent consequences on the body that that self presumed to occupy.

There are quite a few apparent individuals with no axe to grind who give quite compelling accounts of how their sense of self vanished (for no reason) and how that has not affected their capacity to function at all. They still do the same things and have the same preferences. They just now see that ‘obviously’ there is no self there, or anywhere else, doing anything. It’s all just appearances.

And there are many more, including me, who have had so-called ‘glimpses‘ when the self simply was not there, when an astonishingly different perception of the nature of reality was apparent… until the ‘self’ ‘came back’ and tried to make sense of it. It was absolutely obvious ‘during’ the glimpse that what was ‘seen’ then was how everything really was, and that what ‘I’ had always thought was real was not.

It’s been 6 1/2 years since the last ‘glimpse’. I’m not disheartened by that; as I wrote at the time:

There was no temptation to grasp onto it lest it be quickly lost again. It was clearly always here, everywhere, not ‘going’ anywhere, accessible always. My ‘self’ would have been anxious not to lose it, but my self was, in that moment, not present.

I am, somehow, absolutely confident that ‘this’ is just ‘waiting’ for ‘me’ to get out of the way, so it can be seen.

Still, science is of little help here. It does suggest that ‘self-consciousness’ is completely unnecessary to a full and vibrant (apparent) ‘life’, and that the more-than-human world ‘lives’ in a full, intense way that humans, living through the veil of self, cannot. Most living creatures clearly feel pain and pleasure, fear and anger, for example — you don’t have to have a ‘self’ and a sense of self-consciousness and separation to be sentient — but it seems only humans also feel ‘self-generated’ emotions like hatred, guilt and shame.

Yesterday, I was exercising on a treadmill in our apartment’s gym, listening to a recording of Tim Cliss on YouTube speaking at one of his meetings in Copenhagen as I did so. There were four other people in the gym at the time, all of us preoccupied with what we were doing, though the gym is surrounded by mirrors so you could always see who else was there.

Tim was talking about loneliness, and about the question he often gets about whether ‘without a self’ you feel more or less lonely. The point he made, which stunned me to the point I almost fell off the treadmill, was that the realization that there is ‘obviously’ no ‘you’ wasn’t what was important — in fact the absence of the ‘you’ is not even noticed until and unless there are memories of a time there was seemingly a ‘you’ and all of a sudden the absence of a ‘you’ is noticed.

What was important, however, was that, in addition to there being no ‘you’, it becomes obvious that there is no one else either. That kind of ‘loneliness’ must be shattering. The realization of utter aloneness.

That is not to say that there aren’t conversations and camaraderie (Tim remains somewhat obsessed with golf.) But these are simply conditioned behaviours and responses. There are conversations, laughter, anger perhaps, but they are seen as not being ‘anyone’s’ doing. They are just appearances, what the apparent conditioned body seems inclined to do, without any purpose or reason or meaning. And they are apparent happenings of no one. They entail no relationship with any ‘one’ else. Could anything be lonelier than that?

Tim has two young sons that he adores, but he describes his attentive behaviours with them as if he were describing an adult fox looking after its kits. Instinctive, fully alive, devoted and passionate, yet there is no ‘one’ there.

So now I look around the gym at the other apparent people working out. And I think: I have no problem with there being no ‘me’. But what am I to make of these other intense, sweating bodies not being real either? This gym is actually empty — there is no one here.

There is no one anywhere. There never has been anyone. This was obvious during the ‘glimpses’. I know there is no path to actually seeing this. It makes no sense — how can anything be seen with no seer? But for a brief moment in that gym it was obvious — the room was empty. There were only appearances, reflections in the mirrors.

I know of people who have had a similar moment of frisson — quite often they tell me (as my friend Djô did most recently) that it happens when they are looking in a mirror and suddenly see there is no one looking back — at no one. So I’m standing on the treadmill looking into a wall of mirrors that reflects back multiple times everyone and everything in the room — and for a brief second it is obvious there is no one in the room. No ‘me’, which is perfectly fine (at least so I tell myself), but no ‘one’ else, either. No, no, that can’t be.

And then they were back. And so was I.

12 Nov 21:43

Last TBN ride of the year (Georgetown to Rockwood)

by jnyyz

All of the weekly ride programs that TBN runs have wound up for the year. However, it was still possible to post rides under program X, and there was one such ride today out of Trafalgar Sport Park in Stewarttown. Given that the forecast is for snow this coming week, more than likely this was the last TBN ride for the year.

There were ten riders today. Rick was our ride leader.

One of the other riders took this picture so that I could appear. The starting temp was 8°C with not a huge amount of wind, so my knickers were just fine.

Off we go.

Cars were waiting for a freight train to pass, but by the time we got to the crossing it was no longer blocked.

Today’s route was 75 km, but there was a possible short cut to make it 68 km. Here I am taking the short cut by continuing straight on 15 Side Rd, but everyone else turned left on Fourth Line.

This route covered some ground that was new to me. It went through Eden Mills, and then Rockwood. I was informed that bikes can’t enter the Rockwood Conservation Area for free, so I elected to have a brief rest stop at the Eramosa Cafe. No butter tarts were in evidence, but I enjoyed this oatmeal raisin cookie.

One of the charming things about Rockwood was these very polite speed sensor signs. It didn’t come out in this picture, but it is flashing “thank you” since I was well under the speed limit.

Thanks to Rick for organizing today’s ride, and everyone else for good company at the start, and the first 6 km of the ride. Hope that you all stayed warm, dry and safe. See you on the roads next year.


Update: this just in from Dave Mader

Hi Rick and Jun,

How about an article in the TBN newsletter about today’s ride.

  • The ominous weather
  • The 10 hardy souls
  • The bike-eating railway crossing

And then, with 13 km left to go, my rear derailleur got jammed into the spokes. Kevin was with me. A pickup driver stopped to see if we needed help. 

Well yes, my rear wheel was so jammed it had jumped out of the rear fork dropouts.

The driver was Sergio. He took me and my bike back to Georgetown. He talked the whole time about looking after each other, especially in the countryside. So here is one pickup driver who did a kindness for a cyclist.

Cheers,

Dave

Further update: I hear that Sandra had double pinch flats at a railway crossing on Side Road 10, on the section that was cut off by the short cut. This would be the spot.

and a note has been added to the route map.