Shared posts

08 May 07:14

The dimensions of hybrid equity

by Alex

Those of us who enjoy the professional and personal benefits of remote work are a large and lucky minority—but yes, a minority. Most estimates show that about 60% of North American jobs just have to be done in person.

Not incidentally, these are jobs that are more likely to be done by people of color, by less educated workers, by younger workers, and in many countries, by women. One of the major reasons that Covid disproportionately affected communities of color is because people of color had less opportunity to work from the safe isolation of their own homes.

Reckoning with this divide—the gap in experience and opportunity between those of us who can work remotely, at least part of the time, and those who must do all of their work in an actual physical workplace—is the next big challenge for organizations and for society as a whole. That’s why I highlighted the issue of hybrid equity in my forecast for Microsoft’s Future of Work report, published today.

While many organizations are giving a lot of thought to their hybrid transition, most of the attention is on the employees who’ve been remote throughout the pandemic: How do we get them back to the office? How much of the time will they spend on site? How much control will they have over their specific schedules?

These are crucial questions, but they obscure other foundational problems: How can we build a sense of common culture and mission if our organization is split between on-site and hybrid employees? How can we ensure pathways to mentorship and advancement if junior, on-site workers barely interact with mid-level hybrid employees, or have only limited opportunities to learn the online collaboration skills that are increasingly essential to hybrid work? How can we develop a model of work-life balance and personal wellbeing that is available to on-site employees, as well as those with the flexibility to work remotely?

Creating a hybrid equity strategy

The way organizations address these questions—or fail to address them—will determine the shape of organizational culture and organizational success for years to come. I’ve mapped the key dimensions of this choice: It comes down to whether some or all of your employees have access to remote-work flexibility, and whether they work remotely some or all of the time.

diagram showing dimensions of hybrid equity

In the best-case scenario, organizations will opt for hybrid inclusion, finding ways to offer at least some measure of flexibility to literally every employee in the organization. That may sound like a radical idea, but even a half-day each month of “flex time” helps accommodate medical appointments, kid emergencies or just a little bit of peace and quiet while you take care of remote-feasible work like filling in timesheets. While the cost of that flex time could be substantial, in many organizations it will still be significantly less than the cost of motivating or replacing on-site employees who accumulate resentment towards hybrid colleagues.

What matters most, at this moment, is to put the issue of hybrid equity on the C-suite’s agenda. It’s time to pay attention to who has access to remote-work flexibility, how that flexibility affects advancement opportunities, and how we are building relationships between on-site and hybrid teams. The most successful organizations—the ones that will deliver great customer experiences, and attract and retain great talent—will be those that develop strategies, tools and rituals that connect on-site and hybrid employees, and build a common experience of hybrid inclusion.

Because bringing people back to the workplace can only go so far if we fail to think about those who have been there all along.

08 May 07:13

Look Who is Blogging Again

by Nancy White

Here comes a wander. Be warned. There are some bulbs along our driveway that were here when we bought our house in 1984. In the Spring, they put up a bunch of large green, strappy leaves which dry and fade away as the Summer heat comes on. Then, come Autumn, large pink crocus-like flowers emerge. …

08 May 07:10

In Passing, Noting Python Apps, and Datasette, Running Purely in the Browser

by Tony Hirst

With pyodide now pretty much established, and from what I can tell, possibly with better optimised/lighter builds on the roadmap, I started wondering again about running Python apps purely in the browser.

One way of doing this is to create ipywidget-powered applications in a JupyterLite context (although I don’t think you can “appify” these yet, Voila style?) but that ties you into the overhead of running JupyterLite.

The React in Python with Pyodide post on the official pyodide blog looked like it might provide a way in to this, but over the weekend I noticed an announcement from Anaconda regarding Python in the Browser using a new framework called PyScript (examples). This framework provides a (growing?) set of custom HTML components that appear to simplify the process of building pyodide Python powered web apps that run purely in the browser.

I also noticed over the weekend that sqlite_utils and datasette now run in a pyodide context, the latter providing the SQL API run against an in-memory database (datasette test example).
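
By way of illustration, something along these lines should work in a Pyodide or JupyterLite Python session (a sketch, assuming sqlite-utils installs cleanly via micropip; the top-level await is a Pyodide convenience):

import micropip
await micropip.install("sqlite-utils")

import sqlite_utils

# Create an in-memory database, insert a row, and query it back
db = sqlite_utils.Database(memory=True)
db["demo"].insert({"id": 1, "name": "example"})
print(list(db.query("select * from demo")))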

The client will also return the datasette HTML, so now I wonder: what would be required to be able to run a datasette app in the JupyterLite/JupyterLab context? The datasette server must be intercepting the local URL calls somehow, but I imagine that the Jupyter server is ignorant of them. So how could datasette “proxy” its URL calls via JupyterLite so that the local links in the datasette app can be resolved? (We surely wouldn’t want to have to make all the links handled button elements?)

UPDATE 5/5/22: It didn’t take Simon long… Datasette now runs as a full web app in the browser under pyodide. Announcement post here: Datasette Lite: a server-side Python web application running in a browser.

So now I’m wondering again… is there a way to “proxy” a Python app so that it can power a web app, running purely in the browser, via Pyodide?

08 May 07:09

How Pushing Past Fear Leads to Growth

Two weeks ago our latest group for the Psychological Safety Workshop kicked off, and as always, humans continue to leave me feeling floored and inspired.

The first week of this course is always interesting because everyone is working on letting their guard down a little bit in a group full of strangers, all while learning new things and experimenting with how much they feel comfortable sharing. Mondays are about digging into new content, and by the end of class all of us have specific behavioral focuses that we journal about over the next few days. Thursdays are applied learning sessions, where we reflect on our experiences from the week.

This past Thursday when we asked folks about what sort of experiences they’d had throughout the week a few individuals shared some positive experiences. Then someone spoke up and said “I’d like to share a very negative experience I had from taking this class.”

In our kickoff session we tell students that when they start thinking about psychological safety, starting with ourselves is vital. As we call it in class, The Self.

Cartoon person sitting down, hugging a sad face

Image cred: Jan Buchczak

We as humans all have our own unique history: stories, experiences, biases, heartbreak and joy that make us who we are and how we respond to those we interact with. And the more we get to know ourselves, the more we can know how we show up for ourselves and others.

With bated breath we were all silent, gave them the floor, and listened.

This student said that asking them to look within was tough. Really tough. While it was amazing to look back at how far they’ve come in their career, it also asked them to take a look in the mirror at the toxic people and situations they’d made it through. There were still scars, and they said it felt like some of them reopened a little bit.

Then they said, “I almost didn’t come back to class. It was so much to process and so incredibly painful I didn’t know if I could keep digging. But that’s why I know this is exactly what I need to show up wholly in the world. Thank you for the support; this course is exactly what I need right now for personal and professional growth.”

This class of 13 complete strangers showed up and held space for this person. And that’s the best thing about teaching this class, really. It isn’t anything about us as teachers, it’s all about how people show up as themselves, ready to get uncomfortable for the sake of themselves and others around them. That’s how we evolve. Not alone, but with one another.

Experiences and feelings aren’t fluff. When we pay attention to what we bring to the table we’re capable of producing healthier relationships and incredible work. I can’t wait to see what’s next for this group.

Join us for The Psychological Safety Workshop! Learn more here.

08 May 07:09

Datasette Lite: a server-side Python web application running in a browser

Datasette Lite is a new way to run Datasette: entirely in a browser, taking advantage of the incredible Pyodide project which provides Python compiled to WebAssembly plus a whole suite of useful extras.

You can try it out here:

https://lite.datasette.io/

A screenshot of the pypi_packages database table running in Google Chrome in a page with the URL of lite.datasette.io/#/content/pypi_packages?_facet=author

The initial example loads two databases - the classic fixtures.db used by the Datasette test suite, and the content.db database that powers the official datasette.io website (described in some detail in my post about Baked Data).

You can instead use the "Load database by URL to a SQLite DB" button to paste in a URL to your own database. That file will need to be served with CORS headers that allow it to be fetched by the website (see README).
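
If you want to try a file you're hosting yourself, a minimal local server along these lines can add that header (a sketch for local experimentation, not part of Datasette Lite; the port is arbitrary):

from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Allow any origin (including lite.datasette.io) to fetch files
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

# Serves the current directory on http://localhost:8000/
ThreadingHTTPServer(("", 8000), CORSRequestHandler).serve_forever()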

Try this URL, for example:

https://congress-legislators.datasettes.com/legislators.db

You can follow this link to open that database in Datasette Lite.

Datasette Lite supports almost all of Datasette's regular functionality: you can view tables, apply facets, run your own custom SQL queries and export the results as CSV or JSON.

It's basically the full Datasette experience, except it's running entirely in your browser with no server (other than the static file hosting provided here by GitHub Pages) required.

I’m pretty stunned that this is possible now.

I had to make some small changes to Datasette to get this to work, detailed below, but really nothing extravagant - the demo is running the exact same Python code as the regular server-side Datasette application, just inside a web worker process in a browser rather than on a server.

The implementation is pretty small - around 300 lines of JavaScript. You can see the code in the simonw/datasette-lite repository - in two files, index.html and webworker.js

Why build this?

I built this because I want as many people as possible to be able to use my software.

I've invested a ton of effort in reducing the friction to getting started with Datasette. I've documented the install process, I've packaged it for Homebrew, I've written guides to running it on Glitch, I've built tools to help deploy it to Heroku, Cloud Run, Vercel and Fly.io. I even taught myself Electron and built a macOS Datasette Desktop application, so people could install it without having to think about their Python environment.

Datasette Lite is my latest attempt at this. Anyone with a browser that can run WebAssembly can now run Datasette in it - if they can afford the 10MB load (which in many places with metered internet access is way too much).

I also built this because I'm fascinated by WebAssembly and I've been looking for an opportunity to really try it out.

And, I find this project deeply amusing. Running a Python server-side web application in a browser still feels like an absurd thing to do. I love that it works.

I'm deeply inspired by JupyterLite. Datasette Lite's name is a tribute to that project.

How it works: Python in a Web Worker

Datasette Lite does most of its work in a Web Worker - a separate process that can run expensive CPU operations (like an entire Python interpreter) without blocking the browser's main UI thread.

The worker starts running when you load the page. It loads a WebAssembly compiled Python interpreter from a CDN, then installs Datasette and its dependencies into that interpreter using micropip.

It also downloads the specified SQLite database files using the browser's HTTP fetching mechanism and writes them to a virtual in-memory filesystem managed by Pyodide.

Once everything is installed, it imports datasette and creates a Datasette() object called ds. This object stays resident in the web worker.

To render pages, the index.html page sends a message to the web worker specifying which Datasette path has been requested - / for the homepage, /fixtures for the database index page, /fixtures/facetable for a table page and so on.

The web worker then simulates an HTTP GET against that path within Datasette using the following code:

response = await ds.client.get(path, follow_redirects=True)

This takes advantage of a really useful internal Datasette API: datasette.client is an HTTPX client object that can be used to execute HTTP requests against Datasette internally, without doing a round-trip across the network.

I initially added datasette.client with the goal of making any JSON APIs that Datasette provides available for internal calls by plugins as well, and to make it easier to write automated tests. It turns out to have other interesting applications too!
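
As a rough illustration of the pattern (the in-memory database and the /-/versions.json path here are just convenient examples, not taken from the post):

import asyncio
from datasette.app import Datasette

async def main():
    ds = Datasette(memory=True)
    # Simulates a GET request internally, with no network round-trip
    response = await ds.client.get("/-/versions.json")
    print(response.status_code, response.json())

asyncio.run(main())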

The web worker sends a message back to index.html with the status code, content type and content retrieved from Datasette. JavaScript in index.html then injects that HTML into the page using .innerHTML.

To get internal links working, Datasette Lite uses a trick I originally learned from jQuery: it applies a capturing event listener to the area of the page displaying the content, such that any link clicks or form submissions will be intercepted by a JavaScript function. That JavaScript can then turn them into new messages to the web worker rather than navigating to another page.

Some annotated code

Here are annotated versions of the most important pieces of code. In index.html this code manages the worker and updates the page when it receives messages from it:

// Load the worker script
const datasetteWorker = new Worker("webworker.js");

// Extract the ?url= from the current page's URL
const initialUrl = new URLSearchParams(location.search).get('url');

// Message that to the worker: {type: 'startup', initialUrl: url}
datasetteWorker.postMessage({type: 'startup', initialUrl});

// This function does most of the work - it responds to messages sent
// back from the worker to the index page:
datasetteWorker.onmessage = (event) => {
  // {type: log, line: ...} messages are appended to a log textarea:
  var ta = document.getElementById('loading-logs');
  if (event.data.type == 'log') {
    loadingLogs.push(event.data.line);
    ta.value = loadingLogs.join("\n");
    ta.scrollTop = ta.scrollHeight;
    return;
  }
  let html = '';
  // If it's an {error: ...} message show it in a <pre> in a <div>
  if (event.data.error) {
    html = `<div style="padding: 0.5em"><h3>Error</h3><pre>${escapeHtml(event.data.error)}</pre></div>`;
  // If contentType is text/html, show it as straight HTML
  } else if (/^text\/html/.exec(event.data.contentType)) {
    html = event.data.text;
  // For contentType of application/json parse and pretty-print it
  } else if (/^application\/json/.exec(event.data.contentType)) {
    html = `<pre style="padding: 0.5em">${escapeHtml(JSON.stringify(JSON.parse(event.data.text), null, 4))}</pre>`;
  // Anything else (likely CSV data) escape it and show in a <pre>
  } else {
    html = `<pre style="padding: 0.5em">${escapeHtml(event.data.text)}</pre>`;
  }
  // Add the result to <div id="output"> using innerHTML
  document.getElementById("output").innerHTML = html;
  // Update the document.title if a <title> element is present
  let title = document.getElementById("output").querySelector("title");
  if (title) {
    document.title = title.innerText;
  }
  // Scroll to the top of the page after each new page is loaded
  window.scrollTo({top: 0, left: 0});
  // If we're showing the initial loading indicator, hide it
  document.getElementById('loading-indicator').style.display = 'none';
};

The webworker.js script is where the real magic happens:

// Load Pyodide from the CDN
importScripts("https://cdn.jsdelivr.net/pyodide/dev/full/pyodide.js");

// Deliver log messages back to the index.html page
function log(line) {
  self.postMessage({type: 'log', line: line});
}

// This function initializes Pyodide and installs Datasette
async function startDatasette(initialUrl) {
  // Mechanism for downloading and saving specified DB files
  let toLoad = [];
  if (initialUrl) {
    let name = initialUrl.split('.db')[0].split('/').slice(-1)[0];
    toLoad.push([name, initialUrl]);
  } else {
    // If no ?url= provided, loads these two demo databases instead:
    toLoad.push(["fixtures.db", "https://latest.datasette.io/fixtures.db"]);
    toLoad.push(["content.db", "https://datasette.io/content.db"]);
  }
  // This does a LOT of work - it pulls down the WASM blob and starts it running
  self.pyodide = await loadPyodide({
    indexURL: "https://cdn.jsdelivr.net/pyodide/dev/full/"
  });
  // We need these packages for the next bit of code to work
  await pyodide.loadPackage('micropip', log);
  await pyodide.loadPackage('ssl', log);
  await pyodide.loadPackage('setuptools', log); // For pkg_resources
  try {
    // Now we switch to Python code
    await self.pyodide.runPythonAsync(`
    # Here's where we download and save those .db files - they are saved
    # to a virtual in-memory filesystem provided by Pyodide

    # pyfetch is a wrapper around the JS fetch() function - calls using
    # it are handled by the browser's regular HTTP fetching mechanism
    from pyodide.http import pyfetch
    names = []
    for name, url in ${JSON.stringify(toLoad)}:
        response = await pyfetch(url)
        with open(name, "wb") as fp:
            fp.write(await response.bytes())
        names.append(name)

    import micropip
    # Workaround for Requested 'h11<0.13,>=0.11', but h11==0.13.0 is already installed
    await micropip.install("h11==0.12.0")
    # Install Datasette itself!
    await micropip.install("datasette==0.62a0")
    # Now we can create a Datasette() object that can respond to fake requests
    from datasette.app import Datasette
    ds = Datasette(names, settings={
        "num_sql_threads": 0,
    }, metadata = {
        # This metadata is displayed in Datasette's footer
        "about": "Datasette Lite",
        "about_url": "https://github.com/simonw/datasette-lite"
    })
    `);
    datasetteLiteReady();
  } catch (error) {
    self.postMessage({error: error.message});
  }
}

// Outside promise pattern
// https://github.com/simonw/datasette-lite/issues/25#issuecomment-1116948381
let datasetteLiteReady;
let readyPromise = new Promise(function(resolve) {
  datasetteLiteReady = resolve;
});

// This function handles messages sent from index.html to webworker.js
self.onmessage = async (event) => {
  // The first message should be that startup message, carrying the URL
  if (event.data.type == 'startup') {
    await startDatasette(event.data.initialUrl);
    return;
  }
  // This promise trick ensures that we don't run the next block until we
  // are certain that startDatasette() has finished and the ds.client
  // Python object is ready to use
  await readyPromise;
  // Run the request in Python to get a status code, content type and text
  try {
    let [status, contentType, text] = await self.pyodide.runPythonAsync(
      `
      import json
      # ds.client.get(path) simulates running a request through Datasette
      response = await ds.client.get(
          # Using json here is a quick way to generate a quoted string
          ${JSON.stringify(event.data.path)},
          # If Datasette redirects to another page we want to follow that
          follow_redirects=True
      )
      [response.status_code, response.headers.get("content-type"), response.text]
      `
    );
    // Message the results back to index.html
    self.postMessage({status, contentType, text});
  } catch (error) {
    // If an error occurred, send that back as a {error: ...} message
    self.postMessage({error: error.message});
  }
};

One last bit of code: here's the JavaScript in index.html which intercepts clicks on links and turns them into messages to the worker:

let output = document.getElementById('output');
// This captures any click on any element within <div id="output">
output.addEventListener('click', (ev => {
  // .closest("a") traverses up the DOM to find if this is an a
  // or an element nested in an a. We ignore other clicks.
  var link = ev.srcElement.closest("a");
  if (link && link.href) {
    // It was a click on a <a href="..."> link! Cancel the event:
    ev.stopPropagation();
    ev.preventDefault();
    // I want #fragment links to still work, using scrollIntoView()
    if (isFragmentLink(link.href)) {
      // Jump them to that element, but don't update the URL bar
      // since we use # in the URL to mean something else
      let fragment = new URL(link.href).hash.replace("#", "");
      if (fragment) {
        let el = document.getElementById(fragment);
        el.scrollIntoView();
      }
      return;
    }
    let href = link.getAttribute("href");
    // Links to external sites should open in a new window
    if (isExternal(href)) {
      window.open(href);
      return;
    }
    // It's an internal link navigation - send it to the worker
    loadPath(href);
  }
}), true);

function loadPath(path) {
  // We don't want anything after #, and we only want the /path
  path = path.split("#")[0].replace("http://localhost", "");
  // Update the URL with the new # location
  history.pushState({path: path}, path, "#" + path);
  // Plausible analytics, see:
  // https://github.com/simonw/datasette-lite/issues/22
  useAnalytics && plausible('pageview', {u: location.href.replace('?url=', '').replace('#', '/')});
  // Send a {path: "/path"} message to the worker
  datasetteWorker.postMessage({path});
}

Getting Datasette to work in Pyodide

Pyodide is the secret sauce that makes this all possible. That project provides several key components:

  • A custom WebAssembly build of the core Python interpreter, bundling the standard library (including a compiled WASM version of SQLite)
  • micropip - a package that can install additional Python dependencies by downloading them from PyPI
  • A comprehensive JavaScript to Python bridge, including mechanisms for translating Python objects to JavaScript and vice-versa
  • A JavaScript API for launching and then managing a Python interpreter process

I found the documentation on Using Pyodide in a web worker particularly helpful.

I had to make a few changes to Datasette to get it working with Pyodide. My tracking issue for that has the full details, but the short version is:

  • Ensure each of Datasette's dependencies had a wheel package on PyPI (as opposed to just a .tar.gz) - micropip only works with wheels. I ended up removing python-baseconv as a dependency and replacing click-default-group with my own click-default-group-wheel forked package (repo here). I got sqlite-utils working in Pyodide with this change too, see the 3.26.1 release notes.
  • Work around an error caused by importing uvicorn. Since Datasette Lite doesn't actually run its own web server that dependency wasn't necessary, so I changed my code to catch the ImportError in the right place (a minimal sketch of that pattern follows this list).
  • The biggest change: WebAssembly can't run threads, which means Python can't run threads, which means any attempts to start a thread in Python cause an error. Datasette only uses threads in one place: to execute SQL queries in a thread pool where they won't block the event loop. I added a new --setting num_sql_threads 0 feature for disabling threading entirely, see issue 1735.
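
A minimal sketch of that uvicorn ImportError pattern (an illustration of the idea, not necessarily Datasette's exact code):

try:
    import uvicorn  # not importable under Pyodide
except ImportError:
    # Fine for Datasette Lite, which never starts its own web server
    uvicorn = None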

Having made those changes I shipped them in a Datasette 0.62a0 release. It's this release that Datasette Lite installs from PyPI.

Fragment hashes for navigation

You may have noticed that as you navigate through Datasette Lite the URL bar updates with URLs that look like the following:

https://lite.datasette.io/#/content/pypi_packages?_facet=author

I'm using the # here to separate out the path within the virtual Datasette instance from the URL to the Datasette Lite application itself.

Maintaining the state in the URL like this means that the Back and Forward browser buttons work, and also means that users can bookmark pages within the application and share links to them.

I usually like to avoid # URLs - the HTML history API makes it possible to use "real" URLs these days, even for JavaScript applications. But in the case of Datasette Lite those URLs wouldn't actually work - if someone attempted to refresh the page or navigate to a link GitHub Pages wouldn't know what file to serve.

I could run this on my own domain with a catch-all page handler that serves the Datasette Lite HTML and JavaScript no matter what path is requested, but I wanted to keep this as pure and simple as possible.

This also means I can reserve Datasette Lite's own query string for things like specifying the database to load, and potentially other options in the future.

Web Workers or Service Workers?

My initial idea for this project was to build it with Service Workers.

Service Workers are some deep, deep browser magic: they let you install a process that can intercept browser traffic to a specific domain (or path within that domain) and run custom code to return a result. Effectively they let you run your own server-side code in the browser itself.

They're mainly designed for building offline applications, but my hope was that I could use them to offer a full simulation of a server-side application instead.

Here's my TIL on Intercepting fetch in a service worker that came out of my initial research.

I managed to get a server-side JavaScript "hello world" demo working, but when I tried to add Pyodide I ran into some unavoidable roadblocks. It turns out Service Workers are very restricted in which APIs they provide - in particular, they don't allow XMLHttpRequest calls. Pyodide apparently depends on XMLHttpRequest, so it was unable to run in a Service Worker at all. I filed an issue about it with the Pyodide project.

Initially I thought this would block the whole project, but eventually I figured out a way to achieve the same goals using Web Workers instead.

Is this an SPA or an MPA?

SPAs are Single Page Applications. MPAs are Multi Page Applications. Datasette Lite is a weird hybrid of the two.

This amuses me greatly.

Datasette itself is very deliberately architected as a multi page application.

I think SPAs, as developed over the last decade, have mostly been a mistake. In my experience they take longer to build, have more bugs and provide worse performance than a server-side, multi-page alternative.

Obviously if you are building Figma or VS Code then SPAs are the right way to go. But most web applications are not Figma, and don't need to be!

(I used to think Gmail was a shining example of an SPA, but it's so sludgy and slow loading these days that I now see it as more of an argument against the paradigm.)

Datasette Lite is an SPA wrapper around an MPA. It literally simulates the existing MPA by running it in a web worker.

It's very heavy - it loads 11MB of assets before it can show you anything. But it also inherits many of the benefits of the underlying MPA: it has obvious distinctions between pages, a deeply interlinked interface, working back and forward buttons, it's bookmarkable and it's easy to maintain and add new features.

I'm not sure what my conclusion here is. I'm skeptical of SPAs, and now I've built a particularly weird one. Is this even a good idea? I'm looking forward to finding that out for myself.

Coming soon: JavaScript!

Another amusing detail about Datasette Lite is that the one part of Datasette that doesn't work yet is Datasette's existing JavaScript features!

Datasette currently makes very sparing use of JavaScript in the UI: it's used to add some drop-down interactive menus (including the handy "cog" menu on column headings) and for a CodeMirror-enhanced SQL editing interface.

JavaScript is used much more extensively by several popular Datasette plugins, including datasette-cluster-map and datasette-vega.

Unfortunately none of this works in Datasette Lite at the moment - because I don't yet have a good way to turn <script src="..."> links into things that can load content from the Web Worker.

This is one of the reasons I was initially hopeful about Service Workers.

Thankfully, since Datasette is built on the principles of progressive enhancement this doesn't matter: the application remains usable even if none of the JavaScript enhancements are applied.

I have an open issue for this. I welcome suggestions as to how I can get all of Datasette's existing JavaScript working in the new environment with as little effort as possible.

Bonus: Testing it with shot-scraper

In building Datasette Lite, I've committed to making Pyodide a supported runtime environment for Datasette. How can I ensure that future changes I make to Datasette - accidentally introducing a new dependency that doesn't work there for example - don't break in Pyodide without me noticing?

This felt like a great opportunity to exercise my shot-scraper CLI tool, in particular its ability to run some JavaScript against a page and pass or fail a CI job depending on if that JavaScript throws an error.

Pyodide needs you to run it from a real web server, not just an HTML file saved to disk - so I put together a very scrappy shell script which builds a Datasette wheel package, starts a localhost file server (using python3 -m http.server), then uses shot-scraper javascript to execute a test against it that installs Datasette from the wheel using micropip and confirms that it can execute a simple SQL query via the JSON API.

Here's the script in full, with extra comments:

#!/bin/bash
set -e
# I always forget to do this in my bash scripts - without it, any
# commands that fail in the script won't result in the script itself
# returning a non-zero exit code. I need it for running tests in CI.

# Build the wheel - this generates a file with a name similar to
# dist/datasette-0.62a0-py3-none-any.whl
python3 -m build

# Find the name of that wheel file, strip off the dist/
wheel=$(basename $(ls dist/*.whl))
# $wheel is now datasette-0.62a0-py3-none-any.whl

# Create a blank index page that loads Pyodide
echo '
<script src="https://cdn.jsdelivr.net/pyodide/v0.20.0/full/pyodide.js"></script>
' > dist/index.html

# Run a localhost web server for that dist/ folder, in the background
# so we can do more stuff in this script
cd dist
python3 -m http.server 8529 &
cd ..

# Now we use shot-scraper to run a block of JavaScript against our
# temporary web server. This will execute in the context of that
# index.html page we created earlier, which has loaded Pyodide
shot-scraper javascript http://localhost:8529/ "
async () => {
  // Load Pyodide and all of its necessary assets
  let pyodide = await loadPyodide();
  // We also need these packages for Datasette to work
  await pyodide.loadPackage(['micropip', 'ssl', 'setuptools']);
  // We need to escape the backticks because of Bash escaping rules
  let output = await pyodide.runPythonAsync(\`
    import micropip
    # This is needed to avoid a dependency conflict error
    await micropip.install('h11==0.12.0')
    # Here we install the Datasette wheel package we created earlier
    await micropip.install('http://localhost:8529/$wheel')
    # These imports avoid Pyodide errors importing datasette itself
    import ssl
    import setuptools
    from datasette.app import Datasette
    # num_sql_threads=0 is essential or Datasette will crash, since
    # Pyodide and WebAssembly cannot start threads
    ds = Datasette(memory=True, settings={'num_sql_threads': 0})
    # Simulate a hit to execute 'select 55 as itworks' and return the text
    (await ds.client.get(
      '/_memory.json?sql=select+55+as+itworks&_shape=array'
    )).text
  \`);
  // The last expression in the runPythonAsync block is returned, here
  // that's the text returned by the simulated HTTP response to the JSON API
  if (JSON.parse(output)[0].itworks != 55) {
    // This throws if the JSON API did not return the expected result
    // shot-scraper turns that into a non-zero exit code for the script
    // which will cause the CI task to fail
    throw 'Got ' + output + ', expected itworks: 55';
  }
  // This gets displayed on the console, with a 0 exit code for a pass
  return 'Test passed!';
}
"

# Shut down the server we started earlier, by searching for and killing
# a process that's running on the port we selected
pkill -f 'http.server 8529'

08 May 07:06

Case Study: How many colors are too many colors for Windows Terminal?

by lhecker

A group of users were trying to implement a simple, terminal-based video game and found the performance under Windows Terminal to be entirely unsuitable for such a task. The performance issue could be replicated by repeatedly drawing a “rainbow” and measuring how many frames per second (FPS) we can achieve. The one below has 20 distinct colors and could be drawn at around 30 FPS on my Surface Book with an Intel i7-6700HQ CPU. However, if we draw the same rainbow with 21 or more distinct colors it would drop down to less than 10 FPS. This drop is consistent and doesn’t get worse even with thousands of distinct colors.

Screenshot of a rainbow drawn in Windows Terminal

Initial investigation with Windows Performance Analyzer

Initially the culprit wasn’t immediately obvious of course. Does the performance drop because we’re misusing Direct2D or DirectWrite? Does our virtual terminal (VT) sequence parser have any issues with quickly processing colors? We usually begin any performance investigations with Windows Performance Analyzer (WPA). It requires us to create an “.etl” trace file, which can be done using Windows Performance Recorder (WPR).

The “Flame by Process” view inside WPA is my personal favorite. In a flame graph each horizontal bar represents a specific function call. The widths of the bars correspond to the total CPU time spent within that function, including time spent in all functions it calls recursively. This makes it trivial to visually spot changes between two flame graphs of the same application, or to find outliers which are easily visible as overly wide bars.

Flamegraph of Windows Terminal showing CPU usage when drawing 20 and 21 colors respectively

To replicate this investigation, you’ll need to install Windows Terminal 1.12 as well as a tool called rainbowbench. After compiling rainbowbench with cmake and your favorite compiler, you should run the commands rainbowbench 20 and rainbowbench 21 for at least 10 seconds each inside Windows Terminal. Make sure to have Windows Performance Recorder (WPR) running and recording a performance trace for you during that time. Afterwards you can open the .etl file with Windows Performance Analyzer (WPA). Within the menu bar you can tell WPA to “Load Symbols” for you.

On the left side in the image above we can see the CPU usage of our text rendering thread when it’s continuously redrawing the same 20 distinct colors, and on the right side if we draw 21 colors instead. Thanks to the flame graph we can immediately spot that a drastic change in behavior inside Direct2D must have taken place and that the likely culprit is a function called TextLookupTableAtlas::Fill6x5ContrastRow inside Direct2D. An “atlas” in a graphical application usually refers to a texture atlas, and considering how Direct2D uses the GPU for rendering by default, this is likely code dealing with a texture atlas on the GPU. Luckily several tools already exist with which we can easily debug applications running on the GPU.

PIX and RenderDoc – Debug graphical performance issues with ease

PIX is an application similar to the venerable open-source project RenderDoc, both of which are tremendously helpful to debug and understand graphical performance issues like this one.

While PIX offers support for packaged applications like Windows Terminal (which PIX refers to as “UWP”) and a large number of helpful metrics out of the box, I found it easier to generate the following visualizations using RenderDoc. Both applications work almost identically however, which makes it easy to switch between them.

Windows Terminal ships with a modern version of conhost.exe, called OpenConsole.exe, featuring several enhancements not present in conhost.exe, one of which is a set of alternative rendering engines. You can find and run OpenConsole.exe from inside Windows Terminal’s application package, or from one of Terminal’s release archives. Afterwards you can create a DWORD value at HKEY_CURRENT_USER\Console\UseDx and assign it the value 0 to get the classic GDI text renderer, 1 for the standard Direct2D renderer and 2 for the new Direct3D engine which solves this issue. This trick is helpful for RenderDoc, which doesn’t support packaged applications like Windows Terminal.
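
If you’d rather script that registry change, here is a sketch using Python’s standard winreg module (assuming the value 2, the new Direct3D engine described above, is what you want):

import winreg

# Creates HKEY_CURRENT_USER\Console if needed and sets the UseDx value to 2
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Console") as key:
    winreg.SetValueEx(key, "UseDx", 0, winreg.REG_DWORD, 2)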

Simply drag and drop an executable onto RenderDoc and “Launch” it. Afterwards snapshots can be “captured” and retroactively analyzed and debugged.

Screenshot of the Launch Application dialog in RenderDoc Screenshot of the captured snapshots in RenderDoc

Opening a capture will show the draw commands Direct2D executed on the GPU on the left side. Selecting the “Texture Viewer” will initially yield nothing, but as it turns out certain events in the “Output” tab, like DrawIndexedInstanced, will seemingly present us with the state of our renderer in the middle of execution. Furthermore the “Input” tab contains a texture called “D2D Internal: Grayscale Lookup Table”:

Screenshot of the Texture Viewer in RenderDoc

The existence of such a “lookup table” seems highly relevant to our finding that anything over 20 colors slows down our application dramatically and seems related to our problematic TextLookupTableAtlas::Fill6x5ContrastRow function we found using WPA. What if the table’s size is limited? Simply scrolling through all events already confirms our suspicion. The table gets filled with new colors hundreds of times every frame, because it can’t fit 21 colors into a table that only fits 20:

Video of RenderDoc

If we limit our test application to 20 distinct colors, the table’s contents stay constant:

Video of RenderDoc

So, as it turns out our terminal is running into a corner case for Direct2D: It’s only optimized to handle up to 20 distinct colors in the general case (as of April 2022). Direct2D’s approach isn’t a coincidence either, as the use of a fixed lookup table for colorization reduces its computational complexity and power draw, especially on the older hardware it was originally written for. Additionally, most applications, websites, etc. stay below this limit and if they do happen to exceed it, more often than not, the text is static and doesn’t need to be redrawn 60 times a second. In comparison, it’s not uncommon to see a terminal-based application doing exactly that.

Solving the issue with more aggressive caching

The solution sounds trivial: We simply create our own, much larger color lookup table! Unfortunately we can’t just tell Direct2D to use a custom cache. In fact, relying on its rendering logic at all would be problematic here, since the maximum number of performant colors would always remain finite. As such, we’ll have to write our own custom text renderer after all.

Update May 9, 2022: This article was originally published without giving proper credit where it is due. We would like to thank Joe Wilm of Alacritty for establishing modern GPU terminal rendering, Christian Parpart of Contour for the continued support and advice, and Tom Szilagyi for describing the idea previously. Special thanks to Casey Muratori for suggesting this approach and Mārtiņš Možeiko for providing a reference HLSL shader. I deeply apologize to everyone mentioned.

Turning fonts and the glyphs they contain into actual rasterized images is generally very expensive, which is why implementing a “glyph cache” of sorts will be critical for performance. A primitive way to cache a glyph would be to draw it into a tiny texture when we first encounter it. Whenever we encounter it again, we can reference the cached texture instead. Just like Direct2D with its lookup table atlas for colorization we can use our own texture atlas for caching glyphs. Instead of drawing 1000 glyphs into 1000 tiny textures, we’ll just allocate one huge texture and subdivide it into a grid of 1000 glyph cells.
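
In Python-flavored pseudocode, the caching scheme looks roughly like this (a toy model of the idea, not the Terminal’s actual C++; the _upload method is a stand-in):

class GlyphAtlas:
    def __init__(self, cells_per_row=32):
        self.cells_per_row = cells_per_row
        self.cells = {}  # glyph -> cell index within the one big texture

    def cell_for(self, glyph, rasterize):
        # Rasterizing is expensive, so it happens once per distinct glyph;
        # every later occurrence is just a dictionary lookup.
        if glyph not in self.cells:
            index = len(self.cells)
            bitmap = rasterize(glyph)
            # Compute the cell's position inside the single large texture
            row, col = divmod(index, self.cells_per_row)
            self._upload(row, col, bitmap)
            self.cells[glyph] = index
        return self.cells[glyph]

    def _upload(self, row, col, bitmap):
        pass  # stand-in for copying the bitmap into the atlas texture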

Now let’s say we have a tiny terminal of just 6 by 2 cells and want to draw some colored “Hello, World!” text. We already know the first step is to build up a texture atlas of our glyphs:

Flow chart for glyph extraction and caching on the CPU

After replacing the characters and their glyphs in our terminal with references into our texture atlas, we’re left with a “metadata buffer” that is the same size as the terminal and still stores the color information. The texture atlas contains only the deduplicated and uncolored rasterized glyph textures. But wait a minute… Can’t we just flip this around to get back the original input? And that’s exactly how our GPU shader works:

Flow chart for composition on the GPU

By writing a primitive pixel shader, we can copy our glyphs from the atlas texture to the display output directly on the GPU. If we ignore more advanced topics like gamma-correctness or ClearType, colorizing the glyphs is just a matter of multiplying our glyph’s alpha mask by any color we want. And our metadata buffer stores both the index of the glyph to copy for each grid cell and the color it’s supposed to be.
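
As a rough illustration of that multiply step, with NumPy standing in for the HLSL shader (and ignoring gamma correction and ClearType, as noted above):

import numpy as np

def colorize(alpha_mask, color):
    # alpha_mask: (H, W) glyph coverage in 0..1; color: an RGB triple.
    # Broadcasting multiplies each pixel's coverage by the requested color,
    # which is the per-cell operation the pixel shader performs.
    return alpha_mask[:, :, None] * np.asarray(color)[None, None, :]

# Example: tint a 2x2 alpha mask orange
mask = np.array([[0.0, 1.0], [0.5, 0.25]])
print(colorize(mask, (1.0, 0.5, 0.0)))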

Result

The exact performance benefit of this approach depends heavily on the hardware it runs on. Generally, however, we found it to be at least on par with our previous Direct2D-based renderer while avoiding any limitations regarding glyph colorization.

We measured the performance with the following hardware:

  • CPU: AMD Ryzen 9 5950X
  • GPU: NVIDIA RTX 3080 FE
  • RAM: 64GB 3200 MHz CL16
  • Display: 3840×2160 @ 60 Hz

We’ve measured CPU and GPU usage based on the values shown in the Task Manager, as that’s what users will most likely check first when encountering performance issues. Additionally, the total GPU power draw was measured as it’s the best indicator for potential power savings, independent of frequency scaling, etc.

Engine       Scenario         CPU (%)   GPU (%)   GPU (Watt)   FPS
DxEngine     Cursor blinking  0.0       0.1       17
DxEngine     ≤ 20 colors      1.5       7.0       24           60
DxEngine     ≥ 21 colors      5.5       27        27           30
AtlasEngine  Cursor blinking  0.0       0.0       17
AtlasEngine  ≤ 20 colors      0.6       0.3       21           ≥60
AtlasEngine  ≥ 21 colors      0.6       0.3       21           ≥60

“DxEngine” is the internal name for our previous, Direct2D-based text renderer and “AtlasEngine” for the new renderer. According to these measurements the new renderer not only reduces CPU and GPU usage in general, but also makes it independent of the content that’s being drawn.

Conclusion

Direct2D implements text rendering using a built-in texture atlas to cache rasterized glyphs and a lookup table for coloring those glyphs. The latter exists because it reduces the computational cost of coloring glyphs, but unfortunately, as a trade-off, it imposes an upper limit on the number of distinct colors it can hold at a time. Once you exceed this limit by drawing highly colored text, Direct2D is forced to evict colors to make space for new ones, which can lead to an excessive amount of time being spent on updating the lookup table, causing a steep drop in performance.

This issue isn’t much of a problem for most applications, since text is usually rather static, or doesn’t exceed the upper limit in the first place, but for terminals we routinely see applications coloring their entire background in block characters, animating text at >60 FPS, etc., where this starts to be a problem.

Our new renderer is specifically written with modern hardware in mind and exclusively supports drawing monospace text in a rectangular grid. The former allows us to take advantage of today’s GPUs, with their fast calculations, good support for conditionals and branches, and relatively large memories. That way we can safely increase performance by caching more data and can perform glyph colorization without lookup tables, despite the added computational cost. And by only supporting rectangular grids of monospace text, we’re able to dramatically simplify the implementation, offsetting the added computational cost and matching or even exceeding the performance and efficiency of our previous Direct2D-based renderer.

The initial implementation can be seen in pull request #11623. The pull request is quite complex, but the most relevant parts can be found in the renderer/atlas subdirectory. The “parser”, or “CPU-side” part of the engine, can be found inside AtlasEngine.cpp as AtlasEngine::_flushBufferLine and the pixel shader, or “GPU-side”, in shader_ps.hlsl.

Since then, several improvements have been added. The current state at the time of writing can be found here. It includes a gamma-correct implementation of Direct2D’s and DirectWrite’s text blending algorithm inside the 3 files named “dwrite”, as well as an implementation for ClearType blending as a GPU shader. An independent demonstration for this can be seen in the dwrite-hlsl demo project.


08 May 07:05

Kids’ Stuff

by Richard Woodall

In 2010, venture capitalist Chris Dixon wrote a blog post that sought to explain why the multimillion-dollar businesses that once dominated the consumer web at the turn of the millennium — companies like AOL or AltaVista — had been displaced in a single decade by the likes of Facebook and Google. The answer, he argued, was not that the older companies were complacent or stupid. Rather, “the reason big new things sneak by incumbents is that the next big thing always starts out being dismissed as a ‘toy.’”

By this he meant that when disruptive technologies first emerge, vendors and consumers often overlook their novel utilities, since their assumptions about use and value are still framed by old paradigms. Compared with the existing tools that seem to sufficiently address consumer needs, the new tech seems trivial, redundant — a toy. In time, however, the new invention disrupts our understanding of utility itself: It creates new kinds of demand, new use cases and new satisfactions, cornering new markets and eventually turning the tables so that the old systems now seem like kids’ stuff.

The “commoditoy” refigures consumption as a continuous loop

As a theory of how particular technologies come to dominate the market, Dixon’s idea is not especially illuminating. Facebook and Google did not succeed because their competitors misunderstood what they were offering; rather, they emerged victorious from a commercial struggle with a range of companies offering similar services. But it does work as a piece of rhetorical ju-jitsu, seeming to transform any criticism into a source of strength. Every snarky takedown further proves that the target is a misunderstood visionary, being punished for peeking a little further over the horizon than their short-sighted peers. In this sense, it perhaps offers some insight into the behavior of certain Silicon Valley founders — recall Mark Zuckerberg attending a meeting with Sequoia Capital in his pajamas, for instance. In a culture where innovation is fetishized, creating the impression of being written off by the square world helps confirm your disruptive bona fides.

We are 12 years on from Dixon’s blog post. Its author is now general partner at Andreessen Horowitz, the VC firm funding some of the most hyped startups in the rapidly expanding crypto/metaverse/web3 space. His prolific posting has turned him into something of a guru, and among his followers, “the next big thing will start out looking like a toy” is one of his most popular mantras. This in itself is no surprise. Many crypto enterprises depend on mobilizing a critical mass of investors around a project with extremely basic functionality. If their confidence starts to get shaky, then you’ll need some good slogans to stave off the “FUD.”

However, a look at some of the biggest projects in the crypto space suggests that “the future will look like a toy” is not just rhetoric but a core element of their design philosophy. I don’t mean that they offer hidden utilities that transcend consumers’ blinkered understanding of their own desires. I mean that they seem like they’re made for children. Take a look at the collections at the top of NFT marketplaces, or “play-to-earn” games like Axie Infinity, or any of the multitude of metaverses currently being peddled, and it appears that Dixon’s prediction has been turned inside out: Start out with a toy, and it will become the next big thing.

Across the crypto space in general, there are a wide variety of aesthetics, as one would expect given that visual artists have been at the vanguard of experiments with NFTs, DAOs, and other cryptoeconomic structures. But at the top of the market, things are far more homogenized. Hordes of profile-pic-style (PFP) NFT projects hew to the same stylistic templates, typically either Cryptopunk-esque pixel art or post–Bored Ape cartoons. The most prominent metaverses, from Meta and Microsoft to Decentraland, are cut from the same cloth: simple lines and shapes; blocks of bright color, primary, pastel, and neon; cartoonish cutesiness, sometimes blended with mild puerility. Elements of visual style are borrowed from media directed at or favored by children, whether it’s the blockiness of Minecraft or the pitiless black-eyed stare of Funko Pops. Mark Zuckerberg’s metaverse seems to cop its look almost entirely from the Nintendo Wii, with its egg-headed avatars and sterile laminate finish.

Why have these purportedly revolutionary technical innovations adopted such a juvenile aesthetic? Some of this might be attributed to technical teething problems. Profile-pic NFTs are produced in batches of thousands, often through procedural generation, so simplicity is essential. Meanwhile, an interactive 3-D world that can support thousands of simultaneous users remains a daunting programming challenge, which can be mitigated by adopting a low-res, cartoony finish.

Then there’s the problem of mass adoption. Crypto is a notoriously opaque and unforgiving milieu, so a toyish visual style can be seen as an attempt to help outsiders over the intimidating technical and demographic hurdles. It’s familiar, and it seems to promise innocent fun, unlike the anhedonic Skinner boxes of current social media. Framing crypto assets as toys also compensates for their apparent lack of utility or fundamental value, encouraging buyers to imbue them with sentimental worth just as children invest inanimate objects with imaginary characteristics.

But the toylike aesthetics are also a manifestation of an older trend, in which designs and styles hitherto targeted at teens permeate mass culture. Today’s biggest movie franchises are based on intellectual property that a few decades ago might have been seen as the preserve of teenagers. Video-gaming has likewise transitioned from being widely seen as an adolescent pastime to a more or less universal pursuit. Collectible dolls and trading cards are now marketed to all ages. If we want to understand why crypto looks the way it does, it may help to situate it within this broader process.


When we think of toys, we typically imagine objects which have been designed purposefully for children to play with. But as anyone who has spent much time around kids will observe, this is only half the story. The word toy is both a noun and a verb: It’s not just a type of object with particular qualities but a way of relating to any object, regardless of its design. The space between these two uses of toy is where the qualities of creativity and imagination often associated with childhood come into their own. Kids don’t just imbue the toys they are given with magical powers and lifelike agency; they also appropriate otherwise mundane objects (pieces of luggage, twigs, dangerous kitchen implements) into their universe of make-believe. In this way, children are prodigious inventors and discoverers of new meanings, utilities, and values in the inert world of objectivity.

For some writers, this inventive capacity helps make children powerful instinctual critics of the absurdities and hypocrisies of the grownup world. In his essay “Toys and Play” Walter Benjamin observed that “toys are a site of conflict” where the adult’s desire to impose aspirations and behavioral standards clashes with the child’s inclination to experimentation, discovery, and free play. Children’s relationship to their toys lets them take issue with the value systems being handed down to them.

Likewise, Benjamin’s friend Theodor Adorno argued in Minima Moralia that child’s play has a utopian dimension. “In his purposeless activity,” Adorno wrote, “the child, by a subterfuge, sides with use value against exchange value … The little trucks travel nowhere and the tiny barrels on them are empty; yet they remain true to their destiny by not performing, not participating in the process of abstraction that levels down their destiny, but instead abide as allegories of what they are specifically for.” To the child’s mind, there is no such thing as profit; the truck completes its imaginary logistical adventures for their own sake, not for capital accumulation. Here, toys represent a kind of pure use value, unpolluted by the contrived equivalence of the commodity form.

Some of the biggest projects in crypto seem like they’re made for children

But the idea that childhood represents a realm of imaginative freedom outside economic processes is historically contingent. Modern institutions insulate children from the direct pressures of the capitalist marketplace only to discipline them in readiness for it. “A child,” wrote Benjamin, “is no Robinson Crusoe … they belong to the nation and the class they come from.” To the extent that childhood is a “precapitalist” space outside the market, it inevitably becomes the target of capital’s search for new sources of accumulation. Hence what some academics have called the “children’s culture industry”: a global network of media and toy conglomerates whose business is to find new ways to commodify kids’ creativity and sell it back to them.

The emergence of this culture industry in the late 1970s and early ’80s heralded a shift in how children’s toys were produced and sold. Big toy manufacturers began to move away from generic play objects like blocks, dolls, and train sets and toward what sociologist Beryl Langer calls “commoditoys”: branded items from an extended universe of products and content. Franchises create a space in which the object of the child’s fantasy is immediately available to them as an item to be purchased and possessed.

The Star Wars series is perhaps the paradigmatic instance of this new phenomenon, with its 1970s action figures and novelizations presaging the constant deluge of derivative content seen today. For increasingly consolidated entertainment corporations, this strategy is a bonanza, allowing them to republish the same intellectual property across multiple formats. After the Reagan administration loosened FCC regulation of commercials during children’s programming, there was a huge increase in shows where the protagonists were themselves toys, available for purchase at all good retailers. This gave us Transformers, G.I. Joe, He-Man, and a cornucopia of other IPs that will now presumably be constantly recycled until the sun explodes. Between 1982 and 1994, Hasbro produced more than 750 G.I. Joe toys, many hundreds of which featured in the accompanying animated serial.

With commoditoys, Langer writes, “each act of consumption is a beginning rather than an end, the first or next step in an endless series for which each particular toy is an advertisement … the moment of possession is the beginning of desire.” The commoditoy refigures consumption as a continuous loop, leveraging the child’s propensity for imaginative association to build dense fictional worlds in which each purchase refers them to another. The child’s impulse to “toyify” — that is, to discover new utilities and values in a thing by treating it as a toy — is met by a universe of readymade meanings, characters, stories and emotions, each of which simultaneously constitutes a potential commercial transaction. The process of “bringing to life” that occurs so often in child’s play is reified as the possibility of buying and possessing the object of your fantasy in toy form. An imaginative leap leads directly into an act of consumption, a process which can be extended across a potentially infinite series. Play becomes a form of collecting.


In succeeding decades, the logic of the commoditoy has been absorbed into the heart of the culture industry’s global business model. Blockbuster films are invariably contained within a broader “cinematic universe,” replete with opportunities for merchandising and cross-promotion, while old franchises are constantly rebooted or scavenged for saleable parts. Following the acquisition of a popular license, derivative content blossoms across all channels — sequels, prequels, spinoffs, animated series, videogames, Lego, Lego video games. In an industry terrified of commercial and aesthetic risk and clinging to its remaining clutch of valuable properties, what was once a marketing strategy directed explicitly at children has become a defining rationale.

The fusion of entertainment and tech has created a host of new opportunities for this kind of serialized consumption, while also speeding up the tempo with which play shuttles into consumption. Fortnite has pioneered the integration of culture industry IP with a microtransactional digital economy, creating a space in which all our favorite toys are available to us quite literally as “consumables.” Roblox, meanwhile, has made great strides closing the gap between play and child labor. These 3-D sandbox experiences are one of the main models for the metaverse-style experiences that tech companies are hoping to mainstream in the coming years. In this sense, just as early adopters of social media platforms like Twitter or Facebook were test subjects for the gamified engagement mechanisms that define those platforms, so have children been recruited as pioneers at the next frontier of cultural consumption, one based around the perpetual consumption of virtual toys. To quote Mark Fisher, “Blitzed with capitalist hyperstimulus … children occupy the frontier-zones of capitalism, operating as probe-heads in what, for adults, is the future.”

An imaginative leap leads directly into an act of consumption

The toylike aesthetic that predominates at the commercial end of crypto applications is one ramification of this. It’s become conventional for PFP NFT projects to describe their ultimate goal as creating entertainment franchises in the mold of Star Wars or Marvel. Bored Apes have their own cartoon, their own “band,” an upcoming film, and, most recently, a virtual land register for their planned metaverse, which has so far absorbed around $300 million in investment — all aiming to at least give the impression that owners of their NFTs will one day hold a stake in a sprawling multimedia empire.

It’s easy to pour scorn on such a wide disparity between means (cartoon animal drawings) and ends (global media empire), but these projects are essentially attempts to build a commoditoy from the bottom up, beginning with the structure of serialized financial transactions and then, supposedly, creating the stories, characters, and worlds that are the pretext for all that consumption afterward. The toylike aesthetic is both an advertisement for the universe of “content” that will one day validate the NFT as an investment vehicle and a compensation for its current lack of meaning or utility, encouraging the buyer to form an affective attachment to what is right now nothing more than a few lines of code in a distributed ledger. “Crypto-toys” collapse the distinction between a financial and an emotional investment — a necessary maneuver for a digital asset class ultimately backed by nothing more than flows of sentiment.

Of course, the pot of gold at the end of all those NFT project roadmaps is not a cross-media IP franchise in the traditional sense but a metaverse. Again, these ambitions are pure expressions of commoditoy logic, imagining a condition in which not just individual toys but the entire “extended universe” of IP come to life as a continuous whole, erasing the distinction between content, advertising, and commerce once and for all. In this vision, the metaverse is a closed loop of serialized consumption, where each user’s desire expresses itself as an ever-expanding collection of toys.

It would be easy to conclude that if aspiring tech disrupters are making products that look like toys, it’s because they want their users to act like children. However, the issue here is not so much the prospect of universal infantilization as the model of childhood being offered. Commoditoys seek to exploit the creative and affective receptivity of kids, their inclination to experiment with value and meaning, while suppressing their imaginative freedom and autonomy. Crypto-toys mobilize the same structures in an effort to recruit a mass audience to the crypto world’s gamified financial systems.

In the process, they disseminate a version of childhood in which toys and play have been accommodated almost entirely to consumption. But this vision does not exhaust the possibilities of any of these things. One of children’s most admirable qualities is that they never accept the world as it’s given to them. A playing child measures the limits of her circumstances against the extent of her imagination; her toys, at once dumb things and animate beings, help her see that the nature of objecthood isn’t necessarily as objective as it might appear. These are the kind of attitudes worth cultivating at any age. Faced with attempts to reconfigure childhood as an infinite cycle of commodification, the best response may not be to tell everyone to grow up, but instead to decide what kind of kids we want to be.

08 May 06:46

Where to Buy a Cargo Bike in Vancouver

by Lisa Corriveau
Cargo bike party outside Velo Star Cafe!
If you're looking to buy a cargo bike in Vancouver, the options are getting better & better. More shops are starting to stock cargobikes & it's getting easier to test ride them, though you may end up going to a few different shops to test ride a range. Quite a few brands of cargo bike, like our Bakfiets, aren't even sold here in brick & mortar stores--your only option is buying online.

A note about buying a bike online without a local dealer: you may save some money this way, or be able to get a specific bike that you want, but if you ever have warranty issues, it can be a major pain to get them addressed without a local shop that's invested in helping you out. Depending on the bike--& more so with lower end electric assist bikes--you may also have difficulty finding a shop that will work on the bike or can get parts in for you. I highly recommend buying a bike from a shop located near home so that you can easily bring it in for service.

Until we get a great family & cargo biking shop (like Seattle's G&O Cyclery) here in Vancouver, you'll have to pound the pavement a bit. 

In the interests of making that easier for you, I've updated my list of bike shops that sell cargo bikes here in Vancouver. There are increasing numbers of cargo bike retailers now, which is fantastic--so many more companies are offering different styles & price points as well. Here's what I have found & what brands they sell, alphabetically.

Cit-E-Cycles 3466 West Broadway Vancouver 604-734-2717
Riese & Mueller Load & Packster; Tern GSD & HSD; Cube Cargo; Moustache Lundi.
Cit-E has multiple locations in the Lower Mainland, so call if you're looking for a specific model--they may have it in stock at another shop.

Dream Cycle 1010 Commercial Drive Vancouver 604-253-3737
Soma Tradesman (cycle truck, not really a kid hauler); Surly Big Dummy (wait list til 2023)
Dream focuses on building custom bikes from the frame up. They have a Soma Tradesman in the shop as of May 4 2022 to test ride, but call ahead if you want a test ride in case it's sold.

JV Bike 929 Expo Boulevard 604-694-2453
Tern GSD
JV is an all-round shop that sells a variety of bikes & ebikes, with just the Tern GSD cargobike. They generally have a Tern GSD available for a test ride, but best to call ahead to check.

Mac Talla Cycles 2626 East Hastings Vancouver 604-707-0822
Bullitt; Kona Ute
Mac Talla tries to have floor models available for test rides, but with supply chain issues they can't always keep them in stock. Call ahead to ask!

Rad Power Bikes 3296 E 29th Avenue Vancouver 1-800-939-0310
Radwagon
Note that you can test ride bikes at this location, but they are currently not offering ebike purchases from the Vancouver retail store. All ebike purchases must be made online.

Reckless Bike Stores 110 Davie Street Vancouver 604-648-2600
Babboe City; Benno eBoost; Tern GSD; Urban Arrow
Reckless has two locations; they generally have one of each available for test rides, but it's best to call ahead so they can schedule some time for your questions.

Sidesaddle Bikes 3469 Commercial Street 604-428-2453
Yuba Boda Boda; Bike Friday Haul-A-Day & Ever-E-Day; Bombtrack Munro Cargo (cycle truck, not really a kid hauler)
Andrea opened Sidesaddle specifically as a women-friendly shop & I posted about the shop not long after they opened. You can drop in to look at the bikes, but if you're looking at test riding & buying a cargo bike, it's best to call ahead so they can set aside time to answer all your questions. 

Velo Lifestyle 1350 Nanaimo Street 604-216-0111
Muli; Creme; Veloe; Triobike; Black Iron Horse.
This shop is the latest addition to my list, having opened their Vancouver location only about a year ago. They specialize in European brands, mainly front loaders & several models of trikes.

Velo Star Cafe 3195 Heather Street Vancouver 604-376-8223
Due to supply chain issues, it's been hard for Clint to get cargobikes in, but he occasionally has or knows of used ones on sale.
VeloStar is where we go for service on all our bikes--the mechanics here really know & love cargo bikes & we've always had great service there.


That about wraps up my updates to the list. Please let me know if you know of other cargo bike selling shops in the city that I've missed!

Disclaimer: I was not compensated in any way to list these bike shops & have no affiliation with them, other than being friends with Clint & Kat from VeloStar. The above information is correct as of the writing of this post & opinions above are my own, as always.



08 May 04:06

Change the face of policy and governance

by Chris Corrigan

There is no really easy way to write this, so perhaps it’s just best to be polemical about it.

I am no longer going to be supporting cishet white men who are running for office. Basically guys that look like me. We’ve had our run, we have propagated genocide, mass destruction and murder, war, criminal economic inequality and the destruction of the life support systems of the planet we live on and now I think it is time to stop. Of course folks will “not all…” me on this, but just stop. Our role now is to support different people than us. Because what happens when we feel the MEREST slipping away of power and influence is that we do ridiculous things like driving hundreds of trucks into the middle of Ottawa and demanding that the unelected Senate assist us in the overthrow of the government. Or worse. Much worse.

We do shit like this:

Just read the replies on that thread. I’m not going to tell you how bad it is.

Policy making matters. The people who make policy matter. Our job now is to use our power, money and influence to get behind different decision makers and support their election to office, or their appointment to the judiciary, because we need different decisions and we need to change the face and experience base of those making those decisions.

Three years ago the Canadian inquiry into Missing and Murdered Indigenous Women and Girls concluded – quite rightly – that what has happened and continues to happen to Indigenous people in Canada constitutes genocide. And what continues to happen to women, non-binary, and trans folks is a good indicator of a country’s character and perspective. In Louisiana, if this law goes through, any woman who terminates a pregnancy because it is ectopic and life threatening is a murderer. A woman who has an unimplanted fertilized egg that flows out with her period is technically a murderer. And a judge that seeks to stay the charges is to be automatically impeached.

Let us stop being outraged and surprised at this continued pursuit of genocidal policies and fascist radical Christian extremism, for none of this is new. Let us instead change the game by changing the people with their hands on power. Make laws not blog posts.

08 May 04:06

I’m not a goblin, I just play one in Google Docs

Once upon a time, when I was a young teen, I went with my friends to a place called Cheddar Gorge which is a cave system in the west of England (yes, near where the cheese comes from) and there, in the underground tunnels, we ran around in the dark and pretended to be goblins and hit people pretending to be adventurers with rubber swords.

Larping is what this is called. (LARP = Live Action Role-Play)

ANYWAY. I just looked at my LinkedIn newsfeed.

I went through a period of my life where I was retrospectively ashamed and never talked about my early teens one-off experience running around in dark tunnels, goblins, rubber swords etc. HOWEVER now I believe it was pretty cool actually.

Larping is improv, right? But whereas participating in your improv theatre group is “highbrow” and “culture” and gets talked about in the Sunday supplements, larping is maybe not seen that way. Maybe it is now! I hope so.

My overwhelming feeling, peering in at LinkedIn, was a sense that I was watching everyone performing an elaborate dance.

I know these people! I barely recognise them!

There are common steps like product launches and hiring and life lessons and being blessed by luck, and these incredible matador flourishes of the cloak like pointing credit at someone else to gather some yourself or a delicate humblebrag that can never quite be called out. And the supporting comments! An art in themselves.

It’s not ungenuine, not insincere. I feel energised and encouraged and amplified just reading LinkedIn. I love it.

I feel like people on LinkedIn are accessing parts of their own potential that perhaps can’t be accessed any other way? Like, LinkedIn is a collaborative machine to summon… something? It’s good. It’s weird. It’s good.


Back in 2009, Phil Gyford started an email list called Pretend Office. It was for a bunch of freelancers to experience the camaraderie of being workmates in an office.

BUT THEN:

And a weird thing happened.

With no planning, we all started acting as if we were people in a real office. Almost immediately we began to adopt characters and send officious announcements. Soon we were referring to characters in the office who didn’t exist in real life. Meeting rooms were booked, couriers arrived, servers went down, timesheets were requested, and embarrassing emails were accidentally sent to everyone in the company.

I can’t remember the last time I laughed at email so much. It was, and is, the most fun I’ve had on email for a long, long time.

– Phil Gyford’s website, Pretend Office (2009)

Larping office work.

You can read the archives! Here’s the first email: "If you’re reading this then the boffins in IT have got the emailing list working!!!"

And here’s a representative month – click on a few emails and start reading. It’s… baffling? Hilarious? Mundane?

Everyone knows what to do.


I spend most of my social media time on Twitter. What on earth are we larping there. Good grief. What a performance.

But again, it doesn’t feel like a performance.

And actually, because I’m closer to it than LinkedIn, to me it doesn’t feel like a performance at all. But I bet it looks like one from the outside.

So, I guess, two thoughts:

  • the gap between my “real me” and the “social media performer me” is more extreme than I had credited – and everyone has this really huge difference between their “selves”
  • we fall into performing different selves so easily and so NATURALLY that this can only be part of the fundamental human condition.

These aren’t performances; there’s no pretence going on.

Being able to become multiple divergent selves is just what we are as humans.

It’s nice to acknowledge that.

Maybe we wouldn’t all get so angry on Twitter if there were psychological cues to remind us that, yes, fundamentally it’s all role-play. It’s real AND it’s pretend both at once.


I wonder whether work (job work or creative work or whatever) would be easier if we leant into the larping aspect.

What if Google Docs, Figma, Slack, and all the other apps of the modern workplace were built around the idea that we were adopting a character and doing improv? Like, we have roles at old-school work, and I think that helps? Maybe we should have characters in software too?

Maybe locking ourselves into a single identity that remains fixed for all our time with a particular team and a particular app is a kind of mental straitjacket somehow.

I’m reminded of the way that the four ghosts in Pac-Man embody four different algorithms – they chase around the maze using: pursue; ambush; fake-outs; idling. You need them all!

What if, when I opened an app, I swiped in a different direction to consciously adopt a different character – a different personality algorithm? How would I collaborate on a doc as a healer versus a knight, or write email as a wizard versus a goblin?

What if “character” is a top-level entity in the database, as important as “user”?

What’s the minimum viable feature I need to see myself and for others to see me differently, to allow larping-instinct to kick in?

Does my user profile pic need a hat to show which character I’m playing today?
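If “character” really were a top-level entity, here’s a purely speculative sketch of what I mean – all names invented, just a thought experiment, not a real schema:

```python
# Speculative sketch: "character" as a top-level entity alongside "user".
# Everything here is made up; the point is the shape, not the details.
from dataclasses import dataclass

@dataclass
class User:
    id: str
    name: str                # the human

@dataclass
class Character:
    id: str
    user_id: str             # the human behind the mask
    name: str                # "goblin", "healer", "wizard"...
    hat: str                 # the minimum viable costume cue on the profile pic
    style: str               # a "personality algorithm": pursue, ambush, idle...

@dataclass
class Session:
    app: str
    user_id: str
    character_id: str        # which self you swiped into today

me = User(id="u1", name="Matt")
goblin = Character(id="c1", user_id=me.id, name="goblin", hat="pointy", style="ambush")
wizard = Character(id="c2", user_id=me.id, name="wizard", hat="wide brim", style="idle")

# Same human, different selves in different apps:
docs = Session(app="Google Docs", user_id=me.id, character_id=goblin.id)
email = Session(app="Email", user_id=me.id, character_id=wizard.id)
```

The point of making it top-level: presence, permissions, and history could all hang off the character rather than the user.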


I’m on a discord where, in most of the channels, people are discussing art or activism or potential technical protocols for speculative platforms or socialising about where they live, and in one channel (genuinely) they are running around and very seriously getting excited about goblins.

Goblins again.

08 May 04:01

Why We Love the Yoozon Selfie Stick

by Signe Brewster

The inexpensive Yoozon Selfie Stick is a tripod, selfie stick, and camera remote all in one. A few years ago, I might have told you that selfie sticks deserved to stay in 2014 with the Ice Bucket Challenge and the series finale of How I Met Your Mother. But then I came across the Yoozon Selfie Stick while I was researching Wirecutter’s guide to tripods for smartphones, and I had to admit how wrong I was.

06 May 01:39

Recently

by Tom MacWright

The last thirteen days, after I opened the doors and let Placemark into the world, have been a blur. Mostly the good kind!

It came down to a question of whether the things I thought should be priorities were actually the priorities. I had a backlog that could easily have stretched a few more weeks into the future, and by the time I was done with those tasks I probably would have found some more that seemed like a priority. But there’s a real danger of going too far down the wrong road with these things.

What I learned was a great mix of confirmations - that what I’m building is useful - and surprises, about what’s important and what troubles people run into. Reacting and improving in response to those has been exciting. For quite a few emails, I was able to ship the feature or the fix before I wrote a response. My focus now is on improvements, then changelogs, then another changelog, then more documentation to round it all out.

Placemark repo screenshot

Of course, it’s also a harrowing experience. Knowing that there are bugs in software in the wild. That it might not be what people want, that it doesn’t live up to expectations. I’m very thankful to have a lot of people who believe in me and support me, and at least a big chunk of that is tied to the idea that I make good things, and more than anything I want to deliver on that promise.

Placemark is still a small bootstrapped company, but the intrepid folks who signed up are helping it become a little more sustainable. I am going to make it better until it’s a no-brainer for anyone who wants to make maps. Every morning I wake up and push it along.


Music

I heard this song by The Lijadu Sisters at a bar. There are a few versions floating around the internet, some pitched-down, some pitched-up, probably all digitized from an old vinyl.

BUNNY by Mister Goblin

I sort of knew Sam and his band back in DC, and I’m jealous that he’s still at it.

Nothing’s set in stone yet, but we might be pressing some new Teen Mom vinyl.

Reading

This talk about Perl from its creator is wacky and enjoyable. My friend Dave launched a new newsletter about tech. Wildbit got acquired, like Mailchimp, but gave their employees a chunk of the profits, unlike Mailchimp.

The wildest Odd Lots episode so far, with Sam Bankman-Fried explaining staking as a Ponzi scheme.


I have a few things I’m going to write about, but what would you want me to write about? Let me know - tom@macwright.com or @tmcw.

06 May 01:38

UI Browser for macOS to Be Retired in October 2022

by Federico Viticci

Longtime MacStories readers may be familiar with UI Browser, an incredible scripting tool for macOS created by Bill Cheeseman. UI Browser lets you discover the AppleScript structure of an app’s menu system, taking advantage of Apple’s Accessibility APIs to make it easier to script UI, which is not – how do I put this – normally “fun”, per se. Cheeseman, having turned 79 years old, has decided it is now time to “bring this good work to a conclusion”, and the app will be retired in October.

Here’s what John Gruber wrote about UI Browser last week:

Long story as short as possible: “Regular” AppleScript scripting is accomplished using the programming syntax terms defined in scriptable apps’ scripting dictionaries. If you ever merely tinkered with writing or tweaking AppleScript scripts, this is almost certainly what you know. But as an expansion of accessibility features under Mac OS X, Apple added UI scripting — a way to automate apps that either don’t support AppleScript properly at all, or to accomplish something unscriptable in an otherwise scriptable app. UI scripting is, basically, a way to expose everything accessible to the Accessibility APIs to anyone writing an AppleScript script. They’re not APIs per se but just ways to automate the things you — a human — can do on screen.

A great idea. The only downside: scripting the user interface this way is tedious (very verbose) at best, and inscrutable at worst. Cheeseman’s UI Browser makes it easy. Arguably — but I’ll argue this side — “regular” AppleScript scripting is easier than “UI” AppleScript scripting, but “UI” AppleScript scripting with UI Browser is easier than anything else. UI Browser is both incredibly well-designed and well-named: it lets you browse the user interface of an app and copy the scripting syntax to automate elements of it.

I first covered UI Browser in 2019, when I published a story on how I could control my Mac mini from the iPad Pro using Luna Display and some AppleScript, which I was able to learn thanks to UI Browser. I then mentioned UI Browser twice last month for Automation April: it was thanks to the app that I managed to create shortcuts to toggle the Lyrics and Up Next sidebars in the Music app for Monterey. Maybe it’s silly, but I think there’s something beautiful in the fact that the last thing I did with UI Browser was bridging the old world of AppleScript with the modern reality of Shortcuts.
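For a flavour of what those scripts look like, here’s a minimal sketch driven from Python via osascript. The Music menu path below is an assumption on my part – discovering the real hierarchy is exactly the job UI Browser does – and the calling app needs Accessibility permission:

```python
# Minimal sketch of "UI" AppleScript scripting via System Events, run from
# Python with osascript. The menu path is an assumed example for Music on
# Monterey; UI Browser is the tool that shows you the real one.
import subprocess

script = '''
tell application "System Events"
    tell process "Music"
        click menu item "Show Lyrics" of menu "View" of menu bar item "View" of menu bar 1
    end tell
end tell
'''
subprocess.run(["osascript", "-e", script], check=True)
```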

Gruber argued that Apple should acquire UI Browser and make it part of their built-in scripting tools for macOS; while I don’t disagree, I think it’s more realistic to hope another indie developer or studio picks up UI Browser and continues developing it for as long as possible. There’s nothing else like it on the market, and I’d like to thank Bill Cheeseman for his amazing work on this application over the years. It’ll be missed.

→ Source: pfiddlesoft.com

05 May 18:02

Ride of the Rebel Pilots

by jnyyz

In honour of May the fourth, a group was organized to do some crafting of X wing pilot costumes, and today was the day to show them off on a bike ride. I did my bit by decorating a spare helmet.

I also dressed up my bike with some election sign material, pool noodles and zip ties.

The ride started at Christie Pits, but I arrived late, and I didn’t see anyone, so I took off along the posted route to try to catch the group. Here you can see by the shadow that my S-Foils are locked in attack position while I rode down Rosedale Valley Rd. I also noticed that the cars were giving me a wider berth than usual.

I got all the way down to Corktown Commons where I met Chris, but still no rebel pilots.

As it turned out, they left Christie Pits around this time, and I ended up doing another circuit from Bloor and St. George, and down to Corktown Commons once again before I finally caught up with the group. Here are some familiar faces from the bike team.

Thanks to Bill for taking this picture of me and my X-Wing.

Off we go.

David Pecaut Square

Along Wellington.

Event organizer and Red Leader Natalie.

Across Garrison Crossing.

Thanks to Natalie, Gerry and everyone else for the fun evening. I imagine that much of the same crew will be riding again tomorrow for the Neon Rider kickoff.

Kevin’s video of the event.

05 May 18:01

Comparing electric cars, combustion cars, and hybrids in terms of climate impact

by Andrea

Terra X Lesch & Co: Wie klimafreundlich sind E-Autos wirklich? [“How climate-friendly are e-cars really?”] (YouTube, 26:49min)

“We answer them: the most pressing and urgent questions about fully electric cars! Should your next purchase be an electric car? How much CO2 does an e-car emit in use with today’s electricity mix in Germany – which is still far from being 100% renewable? How does the climate footprint of manufacturing compare with that of a car with a combustion engine? After all, battery production ‘eats up’ a lot of energy. What happens when the battery is old and worn out? And: is our energy system even prepared for 15 million e-cars?”

The studies cited are linked in the show notes:

05 May 18:01

Drupal is for ambitious site builders

by Dries

With Drupal 10 around the corner, it's time to start laying out Drupal 11's development roadmap.

It's important we begin that work by reflecting on Drupal's purpose. Drupal's purpose has evolved over the years. In the past, the goal might have been to build the world's most powerful CMS. Today, I believe Drupal has become much bigger than a CMS alone.

Drupal enables everyone to participate in an Open Web. The web is one of the most important public resources. As a result, the Drupal community's shared purpose is to make that resource open, safe, and accessible to all. With 1 in 30 websites running on Drupal, we have a lot of influence on building the future of the web we want to see. In fact, we have an opportunity to help build a digital future that is better than the one we have today.

Drupal enables everyone to participate in an Open Web, a decentralized, public resource that is open, safe and accessible to all

To align with that purpose, and to drive the most impact, our vision also has to evolve. Five years ago, I declared that Drupal is for ambitious digital experiences. I'd argue that we have achieved that vision by investing in headless Drupal, Media, Layout Builder, and additional features that help enable the creation of ambitious digital experiences.

That is why I propose evolving our vision statement to "Drupal is for ambitious site builders".

Drupal is for ambitious site builders

Attracting more Drupal site builders will increase Drupal's potential user base, and in turn create a more open, accessible and inclusive web for all.

This shift also brings us back to our roots, which I've talked about in several of my previous DrupalCon keynotes.

What is an ambitious site builder?

An ambitious site builder sits in between the developer hand-coding everything using a framework, and the content author using a SaaS solution. There is a gap between developers and content authors that Drupal fills really well.

Drupal's unique strength is the Ambitious Site Builder

An ambitious site builder can get a lot of things done by installing and configuring modules, and using Drupal through the UI. But when needed, they can use custom code to make their site exactly how they want it to be. Ambitious site builders are the reason why Drupal became so successful in the first place.

I'm excited to see this vision come to life through the key initiatives for Drupal 11, which I'll talk about in my next blog post.

05 May 14:54

Map of clinics at risk of closure

by Nathan Yau

If Roe v. Wade is overturned, over 200 clinics would potentially have to close. Bloomberg mapped it, along with charts showing that more than half of child-bearing people in the United States would face new restrictions.

Tags: Bloomberg, clinics, Roe v. Wade

05 May 14:54

Number of abortions in each state, by restriction status

by Nathan Yau

The Washington Post has a set of charts showing the current status of abortion in the United States. The treemap above shows counts by state in 2017, based on estimates from the Guttmacher Institute.

Twelve percent took place in states that have trigger bans, laws passed that would immediately outlaw most abortions in the first and second trimesters if Roe were overturned. (Those states are already some of the most restrictive.) And 27 percent occurred in states that plan to enact other new restrictions.

Tags: abortion, Roe v. Wade, Washington Post

05 May 14:50

Twitter Favorites: [skinnylatte] It's a good restaurant, but unless there are very old women yelling at me in Cantonese it definitely won't have the… https://t.co/Th3rPwdWnW

Adrianna Tan @skinnylatte
It's a good restaurant, but unless there are very old women yelling at me in Cantonese it definitely won't have the… twitter.com/i/web/status/1…
05 May 14:50

Twitter Favorites: [BlueJays] Michael gave Derek a home run ball last night. Today, they reunited 🤗 Baseball is the best 💙 https://t.co/Qca6U7UD09

Toronto Blue Jays @BlueJays
Michael gave Derek a home run ball last night. Today, they reunited 🤗 Baseball is the best 💙 pic.twitter.com/Qca6U7UD09
05 May 14:49

Twitter’s decentralized, open-source offshoot just released its first code

Adi Robertson, The Verge, May 05, 2022

Source code for Twitter's decentralized open-source 'Bluesky' social network has been released and is available on GitHub. It's a "data protocol which we've termed ADX: the Authenticated Data eXperiment. The 'X' stands for 'experiment' while the project lives in an early exploratory state" and developers are warned "please do not try to build anything with this!" The system is based on decentralized identifiers (DID), which the codebase says is "the canonical, unchanging identifier for a user." One of the recent changes removes a reference to "user IDs at a low cost to users", leaving open the question of the nature and makeup of "the (currently-unnamed) DID Consortium." To me, the real question is how DID will be integrated with ActivityPub, a concept explored here.
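For reference, W3C DIDs have the shape did:&lt;method&gt;:&lt;method-specific-id&gt;. A minimal sketch of pulling one apart (the example value is invented, not from the ADX codebase):

```python
# Minimal sketch: the W3C DID shape is "did:<method>:<method-specific-id>".
# The example value below is invented for illustration.
def parse_did(did: str) -> tuple[str, str]:
    scheme, method, identifier = did.split(":", 2)
    if scheme != "did":
        raise ValueError("not a DID")
    return method, identifier

print(parse_did("did:example:alice"))  # -> ('example', 'alice')
```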

Web: [Direct Link] [This Post]
05 May 14:48

Roe v Wade v Sanity

by Charlie Stross
mkalus shared this story from Charlie's Diary.

Supreme court voted to overturn Roe v Wade abortion law, leaked draft opinion reportedly shows.

Here is the leaked draft opinion by Justice Alito. (Format: PDF.)

I am not a lawyer.

The opinion apparently overturns Roe v. Wade by junking the implied constitutional right to privacy that it created. However, a bunch of other US legal precedents rely on the right to privacy. Notably:

  • Lawrence v. Texas (2003) determined that it's unconstitutional to punish people for committing "Sodomy" (any sex act other than missionary-position penis-in-vagina between a married man and woman)

  • Griswold v. Connecticut (1965) protects the ability of married couples to buy contraceptives without government interference

  • Loving v. Virginia (1967): right to privacy was used to overturn laws banning interracial marriage

  • Stanley v. Georgia (1969): right to privacy protects personal possession of pornography

  • Obergefell v. Hodges (2015): right to privacy and equal protection clause were used to argue for legality of same sex marriage

  • Meyer v. Nebraska (1923): ruling allows families to decide for themselves if they want their children to learn a language other than English (overturning the right to privacy could open the door for racist states to outlaw parents teaching their children their natal language)

  • Skinner v. Oklahoma (1942): this ruling found it unconstitutional to forcibly sterilize people (it violated the Equal Protection clause)

I am going to note that the US congressional mid-term elections take place in about six months' time.

Wider point: if Alito's leaked ruling represents current USSC opinion, then it appears that the USSC is intent on turning back the clock all the way to the 19th century.

Another point: it is unwise to underestimate the degree to which extreme white supremacism in the USA is enmeshed with a panic about "white" people being "out-bred" by other races: this also meshes in with extreme authoritarian patriarchal values, the weird folk religion that names itself "Christianity" and takes pride in its guns and hatred of others, homophobia, transphobia, an unhealthy obsession with eugenics (and a low-key desire to eliminate the disabled which plays into COVID19 denialism, anti-vaxx, and anti-mask sentiment), misogyny, incel culture, QAnon, classic anti-semitic Blood Libel, and Christian Dominionism (which latter holds that the USA is a Christian nation—and by Christian they mean that aforementioned weird folk religion derived from protestantism I mentioned earlier—and their religious beliefs must be enshrined in law).

Okay, so, it's open season in the comments here. (Meanwhile discussion of RvW on other blog post comment threads is officially forbidden.)

PS: There are no indications they're going to use this ruling as an opening shot for bringing back slavery. Why would they? Slavery never went away. (The 13th Amendment has a gigantic loophole permitting enslavement as punishment, and the prison-industrial sector in the USA clearly enforces chattel slavery—only under government/corporate management rather than as personal property.)

05 May 14:09

Eero’s new Home Mesh Wi-Fi systems are on sale at Best Buy

by Karandeep Oberoi

Amazon-owned Eero's latest Home Mesh Wi-Fi systems -- the Eero Pro 6E and Eero 6+ -- are currently discounted at Best Buy.

The 6+ and Pro 6E’s predecessors only supported Wi-Fi 6, limiting them to the 2.4GHz and 5GHz radio bands, whereas the new Eero Pro 6E mesh Wi-Fi systems can also operate on the 6GHz band, allowing multiple devices to use the internet at once without congestion – great for dense network environments like offices.

The new Eero Pro 6E can support over 100 devices simultaneously, with speeds up to 2.3Gbps, which is great for gaming, streaming 8K videos or live streaming. Additionally, since the device has two ethernet ports (2.5 GbE and 1.0 GbE), the Eero Pro 6E can support multigigabit internet plans.

Regularly available for $999.99, the Eero Pro 6E's 3-pack is currently available at Best Buy for $848.99, marking a $151 discount on the recently-released product.

Similarly, the Eero 6+ can support over 75 devices at once, though it does not support Wi-Fi 6E. It is ideal for those with a gigabit internet connection, with a single unit covering up to 140 square metres and a 3-pack covering about 420 square metres.

Regularly available for $429.99, the Eero 6+ 3-pack is currently available at Best Buy for $364.99, marking a $65 discount.

MobileSyrup utilizes affiliate partnerships. These partnerships do not influence our editorial content, though we may earn a commission on purchases made via these links that helps fund the journalism provided free on our website.

Via: RedFlagDeals

05 May 14:08

New book explains why Jony Ive quit Apple

by Patrick O'Rourke
Tim Cook and Jony Ive

Since Jony Ive's departure from Apple, several of the tech giant's products have changed drastically.

There's the very capable/chunky, but port-filled MacBook Pro (2021), the death of the Butterfly keyboard and most recently, the impressive Mac Studio and slightly less impressive Studio Display. These new products and revisions to existing devices probably wouldn't have happened if the tech giant's long-serving chief designer was still around, given Ive's design-forward focus and fondness for thin devices.

Since Ive left Apple in 2019, it's been unclear why he opted to move on after almost three decades of working at the company and shaping its design legacy. Now, a New York Times article focused on an excerpt from Tripp Mickle's upcoming book, After Steve: How Apple Became a Trillion Dollar Company and Lost Its Soul, sheds light on Ive's decision to leave.

According to the book, Ive left Apple following years of frustration as the company shifted from a focus on design to one that's more utilitarian. Mickle's book delves into Ive's relationship with Apple's former CEO and co-founder Steve Jobs during the development of the iMac. Following Jobs' death, the book describes how Ive's role at the company shifted and outlines how his relationship with Tim Cook, Apple's current CEO, wasn't as close.

As the Apple Watch shifted from focusing on fashion to fitness, Ive spoke to Cook about stepping back from the business side of Apple. The executive was reportedly frustrated with managing hundreds of staff instead of a smaller design team. This resulted in Cook giving Ive the chief design officer position and Ive switching to reviewing product progress on a weekly cadence instead of daily.

Mickle's book also delves into Ive being late for meetings and how he became slow to approve designs. In an amusing twist, Ive invited his design team to watch the movie 'Yesterday' as a "two-hour exploration of the eternal conflict between art and commerce."

Ive received an exit package valued at more than $100 million, according to Mickle.

After Steve: How Apple Became a Trillion Dollar Company and Lost Its Soul releases on May 3rd.

04 May 04:23

Presence in VR should show tiny people, not user avatars

Since picking up a virtual reality headset a couple weeks ago, I’ve been asking myself: how should the future operating system for apps work?

Like: how do you write docs? How do you collaborate on a deck? How do you launch your messaging app? Games are easy because they get to be weird. But for apps you need standard behaviours.

So I’m trying to think through this from first principles and see what comes to mind…

ALSO keep in mind that I have become obsessed with the overview mode in Walkabout Mini Golf. It’s incredible to have the “Godzilla’s eye view” (as I called it in that post at the top): gazing over the course with all its trees and ponds, a mountain halfway up my chest. And then being able to literally kneel on the floor and stick my head into a cave, examining closely all the intricate objects and furnishing in the rooms inside.


For me, the key difference between a screen-based user interface and a VR-based UI comes down to this:

If there is a small icon on my laptop screen, no amount of me moving closer will magically add resolution. But if there is a small icon in VR, leaning in will resolve more detail.

Quick maths: let’s say an icon (like a user profile pic) is 1cm across and apparently 20cm away, and I lean in to halve that distance to 10cm. The number of pixels dedicated to that icon increases 4x.

You can pack a ton more information into 4x pixels!
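Here’s a tiny sketch checking that arithmetic – angular size first, then pixels on target as roughly its square (my numbers, matching the example above):

```python
# Checking the quick maths: a 1 cm icon seen from 20 cm, then from 10 cm.
# Pixels dedicated to the icon scale roughly with angular size squared.
import math

def angular_size(size_cm, distance_cm):
    return 2 * math.atan(size_cm / (2 * distance_cm))

far = angular_size(1, 20)   # before leaning in
near = angular_size(1, 10)  # after halving the distance
print((near / far) ** 2)    # ~3.99, i.e. about 4x the pixels
```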

This is crazy fundamental. On our phones, we “see” with our fingers - panning, swiping - but although you can pinch-zoom on a photo, there’s nothing you can do that lets you peer closer at an interface element and get more data than is already there. You can in VR. And compared to tapping or pointing with an input device, moving your head a small amount is zero cost.

Like, imagine looking at the tiny wi-fi icon in the top bar on your home screen. Simply lean your head towards it a little (unconsciously) and you are able to read the name of the current wi-fi network; buttons for commands appear.


Seeing is an active process. We move our eyes, heads, and bodies to see the whole time.

This is the topic of J. J. Gibson’s incredible 1979 book The Ecological Approach to Visual Perception (Amazon), which counters then-conventional psychological research in perception which strapped the subject’s head in place. Instead: "natural vision depends on the eyes in the head on a body supported by the ground, the brain being only the central organ of a complete visual system. When no constraints are put on the visual system, people look around, walk up to something interesting and move around it so as to see it from all sides, and go from one vista to another."

Here’s a quote I noted down last time I read it (2004):

This is why to perceive something is also to perceive how to approach it and what to do about it. Information in a medium is not propagated as signals are propagated but is contained. Wherever one goes, one can see, hear, and smell. Hence, perception in the medium accompanies location in the medium.

(Designers may like to know that Gibson also coined the term “affordances,” used heavily in HCI and interaction design thanks to Don Norman, and this book is the underpinning for why visual perception and action are intimately connected.)


SOME RELATED LINKS:

I tried controlling my desktop cursor with head tracking last year and it was tantalising: "It is so close to being something I would use in preference to a mouse… or rather, alongside one."

So that’s why I’m a believer that computer interaction can be way more embodied than it is today.

Another reference point is the zooming user interface (Wikipedia) which has been a concept for decades. I’ve built prototypes in the past, and it’s actually kinda neat to (for example) zoom in from a document icon to edit the document itself. BUT also frustrating: cognitively it feels like you end up with too much stuff in your head. Your brain misses the necessary cues to deallocate information stored from old screens. You need the attentional ergonomics of the Doorway Effect (as previously discussed) to help with focusing.

My point is that the possibilities of the future user interface are wiiiiiide open. Building on these kind of ideas is what I have in mind.


If “natural zooming” is a UI primitive for our future OS then another fundamental is presence.

VR and multiplayer are intrinsically linked. Virtual reality is a medium that has emerged in the networked age. Of course apps will be multiplayer and collaborative. Why would they be otherwise?

The metaverse, right?

A couple years ago, I was speculating on how to retrofit “multiplayer” into today’s desktop or phone operating system. One idea was noisy icons:

Imagine seeing ripples around the Google Docs app as if there were some deep, distant activity. Open it… and there’s a particular document humming with comments. You listen at the door, you can tell who’s active, and the frequency of the interactions, but not what they’re saying precisely… a ping as your name is mentioned (the notification of which wouldn’t have bubbled all the way up to your home screen as it’s not important enough, but since you’re here) - so you enter and join your colleagues.

– Interconnected, Multiplayer docs, webcam fashion, noisy icons: three ideas (2020)

…and this is kinda hard to imagine actually being implemented today, right?

But with VR, where the OS is being built from scratch, maybe this is the kind of paradigm that can be there from the get-go.

Long story short: you should be able to see which of your friends are “in” an app before you launch it, and see who is “around” in the app while you’re using it. Presence.

Today “presence” means showing a green I’m Online! marker next to a floating avatar or, at best, Figma-style cursors charging around on screen. It works but it’s oh so abstract.


How it works today:

Meta Quest 2 has a button that brings up a universal menu. It hovers in space. It doesn’t add more detail if I lean closer (it just looks sharper). It should!

You can see the menu in this review. (Search for “The best place to see the changes in the hardware’s displays is currently in the menus.”)

The app launcher gives me a grid of images, hanging in space on the same virtual screen.

How could it look?

I want to see apps. I want to see if an app is busy or rather: occupied (to keep the spatial metaphor). If I peer closer, I want to pick out my friends. Let’s use that zooming UI possibility and let me discern both crowds and individuals, both at once.

This doesn’t have to look like a virtual screen hanging in space. Nor does it have to look like a 3D rendered virtual office (or whatever). That’s skeuomorphically cargo-culting the real world. We use abstractions because they’re more efficient for certain kinds of thinking.

In my head the app launcher looks like Peter Molyneux’s 1989 video game Populous (Wikipedia) which invented the “god game” genre.

In Populous: You look down at an isometric landscape on your desk. You see people scurrying about. If a building is busy, there are lots of people there. (This Populous review embeds a gameplay video. Check it out!)

So that’s what my imagined home screen looks like: a landscape of apps, presence shown by people near the apps. But not mini profile pics. We only have those because of the limitations of desktop screens. Let’s have teeny little people instead. When you lean closer, more pixels are dedicated, and you can recognise the faces of your friends.


Ok this isn’t all you need for a VR-based multiplayer operating system, not by a long chalk. But I’m into the fact that there are a ton of new interaction design primitives to use on old problems.

And perhaps this is a neat starting point to open up the design space. With PCs you have the desktop metaphor. With VR, how about the landscape metaphor?

04 May 04:21

Book Review “Blowout” by Rachel Maddow

by Stephen Rees

Published by Crown 2019

ISBN 978-0-525-57547-4

Ebook ISBN 978-0-525-57549-8

I am very fortunate to have a neighbour who likes to buy hardback books and then rather than keep them looks for someone who might like to read them. Even though $40 Canadian is, I suppose, not out of reach, it is still a delight to get my hands on an almost new book, for free. In this case, the history of the oil and gas industry the book covers is mostly familiar territory, although there is quite a lot here that I seem to have managed to miss at the time, or had perhaps just forgotten. And just because it is three years old does not mean it is out of date, since nothing much has changed since it was published.

For any kind of life to continue on earth, the oil and gas industry must, as a matter of urgency, be brought under control. Its trajectory is still to expand the production of the fossil fuels that have now produced the unprecedented threat of the climate crisis.

“The oil and gas industry, as ever, is wholly incapable of any real self-examination, or of policing or reforming itself. Might as well ask the lions to take up a plant based diet. If we want the most powerful and consequential industry on our planet to operate safely, and rationally, and with actual accountability, well, make it. It’s not mission-to-Mars complicated either, but it is work”.

Maddow’s book is mainly concerned with the United States, of course. Not that matters in Canada are any different. We too pour subsidies at both the federal and provincial level into oil companies whose profits have been growing exponentially. We used to get considerable revenues from the royalties levied on these companies. Now that is next to nothing and, at the same time, the favorable tax treatments and supports are in the billions of dollars. Yes, billions with a B. Maddow does not mention how Norway has been treating the oil and gas industry – it is not even listed in the index – but that might have been a welcome sign that reform is possible. But probably not very likely as long as Republicans still dominate Congress. Though there was one shining moment that she does mention, when both parties and both houses got together to ensure that Trump could not unilaterally cancel sanctions on Russia. Which was a very definite objective of Putin’s campaign to get him elected.

There is much detail of the recent activities of the industry, including of course the Deepwater Horizon – which got so much coverage at the time – as well as a second Gulf drilling rig leak which went on for much longer and was even worse but got hardly any attention. The Taylor oil spill started in 2004 when Hurricane Ivan struck. It remained a secret until 2010, and by 2018 was still leaking seven hundred barrels of oil into the Gulf every single day. The industry still has little more than paper towels and dish liquid to clean up spills and very little oversight to ensure that spills don’t happen. “For every 1,000 wells in state and federal waters, there’s an average of 20 uncontrolled releases – or blowouts – every year.” (US Bureau of Safety and Environmental Enforcement)

Then there is the tale of fracking and the damage to water resources, homes, farms and businesses from a vast earthquake “swarm”. Again Maddow has plenty on this but misses the way that the industry has been very much aware that it loses vast amounts of methane (a far worse greenhouse gas than CO2) but simply regards that as a cost of doing business – and not something that it highlights, as in many cases the methane gas they do manage to capture is simply flared, liquid fuel for motor vehicles being by far the greatest source of demand for the industry’s output. Outright lying, and obfuscation, is naturally the industry’s preferred method of dealing with this issue. Though they do have a commitment to increase the use of methane – “natural gas” – which is claimed to be the cleanest fuel when in reality it is anything but. It is only recently that I have seen mainstream media picking up the story that gas appliances in the home – mostly stoves – make indoor air quality worse than anything that would be permitted industrially. And in this region Terasen (which used to be BC Gas) is proposing a large LNG export terminal in the Fraser estuary at Tilbury. There is already a smaller terminal there, and in the US – where ports get more oversight from local authorities than in Canada – such a project would be very unlikely to be permitted due to the proximity of many other businesses and even residential development. LNG production and transportation in general are also bedevilled by methane leaks that are underreported and difficult to control.

Maddow has a very engaging style and the book reads very easily. There is a substantial (nearly 20 pages) section of Notes on Sources, with, of course, copious links to information available online. And there is also a very careful analysis of the mindset and ambitions of Russian dictator Putin, including exactly why he has such a vast and successful social media presence, which has done so much damage to democracy and public discourse. It is well worth the read. Both the book and the audiobook are currently available at the Vancouver Public Library but there is a short wait list for the ebook.

04 May 04:20

Thoughts on Office-Bound Work

by Rui Carmo

An open letter from Apple employees regarding returning to the office.

Two passages that immediately caught my eye, for distinct reasons:

This one is a very stark contrast to what I experience at Microsoft, where we dogfood Teams and Office collaboration on a daily basis:

We tell all of our customers how great our products are for remote work, yet, we ourselves, cannot use them to work remotely? How can we expect our customers to take that seriously? How can we understand what problems of remote work need solving in our products, if we don’t live it?

…and this one is something I’ve had an interesting run-in with myself:

How can we expect to convince the best people to come work with us, if we reject anyone who needs the smallest bit of flexibility?

There is a lot more to unpack in the open letter (including procedural and work-life balance aspects that feel foreign to me these days), but in an age when even very conservative, non-technical companies are allowing full remote or flexible hybrid work, the scenario it portrays is… out of touch, at the very least.


04 May 04:20

Remembering history

by Chris Corrigan

May Day came and went, a day to celebrate both the beginning of Celtic summer, lighting the fires of Beltaine to burn away the previous year, and a day to remember the international struggle for workers’ rights.

My friend and neighbour here on Nexwlélexwm (Bowen Island) Meribeth Deen wrote a beautiful and thoughtful article about the bloody labour history of Vancouver Island and the story of Ginger Goodwin. (Meribeth is a beautiful writer, by the way, and you should hire her for things.) Goodwin was an organizer of coal mine workers who was killed in the bush by a police officer in 1918, prompting Canada’s first General Strike.

The coal fields of British Columbia were the sites of some of Canada’s most fierce union activity, largely because the men who owned the coal mines were, to put not too fine a point on it, complete bastards. I admit that the story of Ginger Goodwin was not familiar to me, but certainly the names of Dunsmuir and Bowser are. Dunsmuir, because his name adorns a major street in downtown Vancouver, and Bowser, because there is a town named for him on Vancouver Island. But despite Bowser’s name, I never knew that he was a xenophobic racist who mass imprisoned migrant workers from eastern Europe, including Ukraine, because he considered them a threat to Canada while the First World War was raging 8000 kms away.

Last year when statues were being toppled and things renamed (like Ryerson University), one of the intellectually lazy objections to these actions was that we would forget history if these names were removed, that these people did incredible things, and they should be honoured. But reading Meribeth’s piece reminded me that in naming streets and places and statues after these folks what we are actually doing is forgetting history, erasing it. We wash it clean, assuming that everyone with a statue or a road or a town named after them was a good person. In fact, with people of well-known names, we have to deeply research the history of these people to really know them and, unsurprisingly for a country that was founded on genocide, the exploitation of workers and the ruthless pursuit of profit and wealth, these places are often named for people who more than likely pursued one of these strategies.

A key part of colonization is erasing the knowledge of what is here in favour of a more comfortable and familiar set of names. The parts of Ontario I grew up in were named by settlers from Ireland and Scotland for places that were meaningful to them. It reminded them of home. And it erased the Anishinaabe and Onkwehón:we names that were already on the landscape and that encoded a much deeper story of home and belonging.

Here in Skwxwúmesh-ulh Temíxw where I now live, a famous example is the naming of a pair of distinct mountain peaks called “The Lions.” Towering over Vancouver, these twin peaks got named by settlers after the totemic animal of the British empire – the lion. The lion has been a feature of British heraldry for nearly 1000 years and so it was pretty much the ultimate naming. Boom. Lions. Putting the British in British Columbia; and because you can see these peaks from everywhere, you’ll never forget it.

But 1000 years is a mere blip in time when you consider that from time immemorial those two peaks have been called Ch’ich’iyúy and Elxwí?n and are the embodiment of two sisters who brought a fierce peace to the coast. From the Squamish Atlas:

Ch’ich’iyúy is one of two names used for the mountains known as The Lions. The other name is Elxwí?n. While the meaning of the name “Elxwí?n” is not known, “Ch’ich’iyúy” means “twins”. These mountains have the name for “twins” because they are said to be two Squamish sisters. There are different stories about these two sisters, but the most famous is a story about peace: When a girl becomes a woman, the Squamish tradition is to celebrate with a big feast. A great chief had two daughters that came of age in the same spring, and he prepared to host the biggest feast the Coast had ever seen, inviting all the neighbouring peoples to come for several days of eating, dancing, and celebration! A few days before the feast, the daughters went to their father to ask a favour – they asked if he would also invite a tribe from the north which the Squamish people had been at war with since ancient times. They wanted peace for their peoples, and all the peoples of the region. Their father agreed and the northern tribe came to the feast, welcoming in a new era of peace. When the Great Spirit saw what the two sisters had done, he decided to make them immortal by turning them into the two mountains, Ch’ich’iyúy, so that they could be a symbol of peace in the region forever

Story as told to Pauline Johnson and recorded in The Two Sisters.

Almost all of the historical and Indigenous place names in this territory refer to the physical characteristics of a place, its traditional use, or to events contained in an ancient story that encodes a teaching like this. There are no place names named for people; on the contrary, many people carry the names of places.

History is not an objective set of facts. It is a whole series of contested and different stories and experiences, and is as subject to the whims and dynamics of power as branding, marketing, and narrative manipulation today. When we choose to name a place, we bring a projection on to it. Perhaps Bowser didn’t know much about the town that was named after him. But what does it say about the people that DID name that town? What were they thinking? Encoding his name on the landscape reveals the intentions of settlers – much in the same way that the erection of Confederate statues long after the end of the Civil War was a message that Jim Crow laws were in effect in this place. The photo on this blog post is the inscription on a Confederate soldier statue that still stands in the town square of Denton, Texas, taken in 2019.

I have no trouble removing or changing the names of places or removing the statues of racists. I’m not totally in favour of naming things after individuals anyway. But if you feel that something is being lost by changing names, consider what was intended by the naming in the first place and ask yourself if it’s time for a different story.

04 May 04:18

Instapaper Liked: How to write longform Git commits for better software development

I wrote up an article on the @meedan blog about longform git commits and the various ways they can make your life as a software developer easier. Ban "-m" from…
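As a hedged illustration of the "ban -m" idea (my sketch, not from the linked article): pass a full, multi-paragraph message to git commit instead of a one-liner, here via stdin. Interactively you'd just run `git commit` with no flags and write the body in your editor.

```python
# Sketch: a longform commit message fed to `git commit -F -` over stdin,
# rather than squeezed into a one-line `-m`. The message content is invented.
import subprocess

message = """Fix pagination overflow on the reports page

The list rendered an extra empty page whenever the row count was an exact
multiple of the page size, because the page count was computed from n + 1.

Clamping the offset instead of the count keeps bookmarked page URLs stable.
"""
subprocess.run(["git", "commit", "-F", "-"], input=message, text=True, check=True)
```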
04 May 01:08

Microsoft Edge now the second most popular desktop browser, overtaking Safari

by Steve Vegvari

Microsoft Edge is slowly climbing the ladder and has now stolen the second-place spot among the most popular desktop browsers from Safari. What used to belong to Apple now belongs to Microsoft, as Edge inched its way past Safari in market share.

According to data shared by Statcounter, Microsoft Edge has seen a steady bump in usage since March 2021. Safari has now been dethroned as Microsoft Edge holds 10.07 percent of the market share. Safari is not trailing too far behind in third with 9.61 percent. However, both pale in comparison to Google Chrome, which commands more than 66 percent of the user base.

Firefox and “Other” browsers fall short of the top three spots on the list. Firefox has roughly 7.8 percent of the market share, “Other” encompasses roughly 2.5 percent. Rounding out the list at the bottom is Opera with 2.44 percent.

There’s a fair bit to glean from this information. Microsoft has a larger desktop mindshare than Apple, which likely translates to more users jumping to Microsoft Edge. Additionally, one can speculate about whether Mac users are spending more of their time in Microsoft Edge than in Safari.

Another factor potentially playing into the shift could be Apple’s new design of Safari. Late last year, Safari received a fairly substantial coat of paint, adding a colour-changing top bar, smaller tabs, and more. The redesign was met with critical user feedback. These changes could have been a catalyst for users to migrate over to Chrome and Microsoft Edge.

Image credit: Statcounter

Source: Statcounter Via: 9to5Mac