Shared posts

08 May 07:39

Role Clarity Deficiencies Can Wreck Agile Teams

I cannot do a better job of summarizing this paper on agile teams than the authors do themselves:

  1. Self-organization becomes very hard when there is incomplete understanding of the roles in the team;
  2. this happens surprisingly often;
  3. coach-facilitated reflection on roles can presumably help;
  4. achieving sufficiently complete understanding and acceptance is difficult even then;
  5. once achieved, good role understanding and acceptance helps a team to be effective and work together well;
  6. the process of gaining role clarity is difficult and can be emotionally challenging;
  7. technical skills can be less important than being self-reflective.

Reading this paper has made me reflect yet again on how resistant most programmers (including my younger self) are to the idea of treating teamwork as a first-class skill. I played rugby in high school and ultimate frisbee in my thirties; in both cases, I expected and accepted that my teammates and I would do drills and practice together in order to learn how to play together. When was the last time you and the developers you work with created, reviewed, merged, and deployed a practice pull request while someone observed and offered advice on how you could do it better? And yes, lots of companies have or bring in agile coaches, but in most of the cases I've seen, this is only done for a short transition period ("We're going agile this week…"). As Barke2019 says, "Team reflection about roles appears to be a near-continuous need."

Barke2019 Helena Barke and Lutz Prechelt. Role clarity deficiencies can wreck agile teams. PeerJ Computer Science, 5:e241, 2019, doi:10.7717/peerj-cs.241.

Background: One of the twelve agile principles is to build projects around motivated individuals and trust them to get the job done. Such agile teams must self-organize, but this involves conflict, making self-organization difficult. One area of difficulty is agreeing on everybody's role. What dynamics arise in a self-organizing team from the negotiation of everybody's role? Method: We conceptualize observations from five agile teams (work observations, interviews) by Charmazian Grounded Theory Methodology. Results: We define role as something transient and implicit, not fixed and named. The roles are characterized by the responsibilities and expectations of each team member. Every team member must understand and accept their own roles (Local role clarity) and everybody else's roles (Team-wide role clarity). Role clarity allows a team to work smoothly and effectively and to develop its members' skills fast. Lack of role clarity creates friction that not only hampers the day-to-day work, but also appears to lead to high employee turnover. Agile coaches are critical to create and maintain role clarity. Conclusions: Agile teams should pay close attention to the levels of Local role clarity of each member and Team-wide role clarity overall, because role clarity deficits are highly detrimental.

08 May 07:38

Format code examples in documentation with blacken-docs

by Simon Willison

I decided to enforce that all code examples in the Datasette documentation be formatted using Black. Here's issue 1718 where I researched the options for doing this.

I found the blacken-docs tool. Here's how to run it against a folder full of reStructuredText files:

pip install blacken-docs
blacken-docs docs/*.rst

This modifies the files in place.

Setting a different line length

I read most documentation on my phone, so when I'm writing code examples I tend to try to keep the line lengths a little bit shorter to avoid having to scroll sideways when reading.

blacken-docs has a -l option for changing the line length (Black defaults to 88 characters), which can be used like this:

blacken-docs -l 60 docs/*.rst

Missing function bodies with ...

I was getting errors with some of my code examples that looked like this:

@pytest.fixture
def datasette(tmp_path_factory):
    # This fixture will be executed repeatedly for every test

This is because of the missing function body. It turns out adding ... (which looks prettier than pass) fixes this issue:

@pytest.fixture
def datasette(tmp_path_factory):
    # This fixture will be executed repeatedly for every test
    ...
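
Why this works: ... is Python's built-in Ellipsis object, and a bare expression is a legal statement, so it can stand in as a do-nothing function body exactly like pass. A quick sketch (the function name here is just for illustration):

```python
# ... evaluates to the built-in Ellipsis singleton, and an expression on
# its own line is a valid statement - so it works as a no-op function
# body, just like pass.
def fixture_stub():
    # A comment alone is not a statement, so without the ... below this
    # function would be a syntax error.
    ...


print(fixture_stub())   # prints None - the body does nothing
print(... is Ellipsis)  # prints True
```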

Running this in CI

The blacken-docs command outputs errors if it finds any Python examples it cannot parse. I actually found a couple of bugs in my examples using this, so it's a handy feature.

This also causes the tool to exit with a status code of 1:

% blacken-docs -l 60 docs/*.rst                        
docs/authentication.rst: Rewriting...
docs/internals.rst:196: code block parse error Cannot parse: 14:0: <line number missing in source>
docs/json_api.rst:449: code block parse error Cannot parse: 1:0: <link rel="alternate"
docs/plugin_hooks.rst:250: code block parse error Cannot parse: 6:4:     ]
docs/plugin_hooks.rst:311: code block parse error Cannot parse: 38:0: <line number missing in source>
% echo $?
1

I also wanted my CI to fail if the author had forgotten to run blacken-docs against the repository before pushing the commit.

I filed a feature request asking for an equivalent of the black . --check option, but it turns out that feature isn't necessary - blacken-docs returns a non-zero exit code if it makes any changes. So just running the following in CI works for checking if it should have been applied:

    - name: Check if blacken-docs needs to be run
      run: |
        blacken-docs -l 60 docs/*.rst

08 May 07:38

Useful tricks with pip install URL and GitHub

The pip install command can accept a URL to a zip file or tarball. GitHub provides URLs that can create a zip file of any branch, tag or commit in any repository. Combining these is a really useful trick for maintaining Python packages.

pip install URL

The most common way of using pip is with package names from PyPI:

pip install datasette

But the pip install command has a bunch of other abilities - it can install local files, pull from various version control systems, and, most importantly, it can install packages from a URL.

I sometimes use this to distribute ad-hoc packages that I don’t want to upload to PyPI. Here’s a quick and simple Datasette plugin I built a while ago that I install using this option:

pip install 'https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl'

(Source code here)

You can also list URLs like this directly in your requirements.txt file, one per line.

datasette install

Datasette has a datasette install command which wraps pip install. It exists purely so that people can install Datasette plugins easily without first having to figure out the location of Datasette's Python virtual environment.

This works with URLs too, so you can install that plugin like so:

datasette install https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl

The datasette publish commands have an --install option for installing plugins, which works with URLs too:

datasette publish cloudrun mydatabase.db \
  --service=plugins-demo \
  --install datasette-vega \
  --install https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl \
  --install datasette-graphql

Installing branches, tags and commits

Any reference in a GitHub repository can be downloaded as a zip file or tarball - that means branches, tags and commits are all available.

If your repository contains a Python package with a setup.py file, those URLs will be compatible with pip install.

This means you can use URLs to install tags, branches and even exact commits!

Some examples:

  • pip install https://github.com/simonw/datasette/archive/refs/heads/main.zip installs the latest main branch from the simonw/datasette repository.
  • pip install https://github.com/simonw/datasette/archive/refs/tags/0.61.1.zip installs version 0.61.1 of Datasette, via this tag.
  • pip install https://github.com/simonw/datasette/archive/refs/heads/0.60.x.zip installs the latest head from my 0.60.x branch.
  • pip install https://github.com/simonw/datasette/archive/e64d14e4.zip installs the package from the snapshot at commit e64d14e413a955a10df88e106a8b5f1572ec8613 - note that you can use just the first few characters of the commit hash in the URL rather than the full hash.

That last option, installing a specific commit hash, is particularly useful in requirements.txt files since, unlike branches or tags, you can be certain that the content will not change in the future.
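
As a hypothetical example, a requirements.txt mixing a normal PyPI pin with a commit-pinned GitHub URL might look like this (the sqlite-utils pin is made up for illustration):

```
sqlite-utils==3.26
# Pinned to an exact Datasette commit, so the contents can never change:
https://github.com/simonw/datasette/archive/e64d14e4.zip
```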

As you can see, the URLs are all predictable - GitHub has really good URL design. But if you don't want to remember or look them up you can instead find them using the Code -> Download ZIP menu item for any view onto the repository:
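
Since the layout is so regular, you can construct these URLs in code; here's a tiny sketch (github_archive_url is my own illustrative helper, not an existing library function):

```python
def github_archive_url(repo, ref, kind="heads"):
    # Build a GitHub zip-archive URL for a branch (kind="heads"), a tag
    # (kind="tags"), or a bare commit hash (kind=None).
    base = "https://github.com/{}/archive".format(repo)
    if kind is None:
        return "{}/{}.zip".format(base, ref)
    return "{}/refs/{}/{}.zip".format(base, kind, ref)


print(github_archive_url("simonw/datasette", "main"))
# https://github.com/simonw/datasette/archive/refs/heads/main.zip
print(github_archive_url("simonw/datasette", "0.61.1", kind="tags"))
# https://github.com/simonw/datasette/archive/refs/tags/0.61.1.zip
print(github_archive_url("simonw/datasette", "e64d14e4", kind=None))
# https://github.com/simonw/datasette/archive/e64d14e4.zip
```

Any of those strings can be passed straight to pip install or dropped into a requirements.txt file.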

Screenshot of the GitHub web interface - click on the green Code button, then right click on Download ZIP and select Copy Link

Installing from a fork

I sometimes use this trick when I find a bug in an open source Python library and need to apply my fix before it has been accepted by upstream.

I create a fork on GitHub, apply my fix and send a pull request to the project.

Then in my requirements.txt file I drop in a URL to the fix in my own repository - with a comment reminding me to switch back to the official package as soon as they've applied the bug fix.

Installing pull requests

This is a new trick I discovered this morning: there's a hard-to-find URL that lets you do the same thing for code in pull requests.

Consider PR #1717 against Datasette, by Tim Sherratt, adding a --timeout option to the datasette publish cloudrun command.

I can install that in a fresh environment on my machine using:

pip install https://api.github.com/repos/simonw/datasette/zipball/pull/1717/head

This isn't as useful as checking out the code directly, since it's harder to review the code in a text editor - but it's useful knowing it's possible.

Installing gists

GitHub Gists also get URLs to zip files. This means it's possible to create and host a full Python package just using a Gist, by packaging together a setup.py file and one or more Python modules.

Here's an example Gist containing my datasette-expose-some-environment-variables plugin.

You can right click and copy link on the "Download ZIP" button to get this URL:

https://gist.github.com/simonw/b6dbb230d755c33490087581821d7082/archive/872818f6b928d9393737eee541c3c76d6aa4b1ba.zip

Then pass that to pip install or datasette install to install it.

That Gist has two files - a setup.py file containing the following:

from setuptools import setup

VERSION = "0.1"

setup(
    name="datasette-expose-some-environment-variables",
    description="Expose environment variables in Datasette at /-/env",
    author="Simon Willison",
    license="Apache License, Version 2.0",
    version=VERSION,
    py_modules=["datasette_expose_some_environment_variables"],
    entry_points={
        "datasette": [
            "expose_some_environment_variables = datasette_expose_some_environment_variables"
        ]
    },
    install_requires=["datasette"],
)

And a datasette_expose_some_environment_variables.py file containing the actual plugin:

from datasette import hookimpl
from datasette.utils.asgi import Response
import os

REDACT = {"GPG_KEY"}


async def env(request):
    output = []
    for key, value in os.environ.items():
        if key not in REDACT:
            output.append("{}={}".format(key, value))
    return Response.text("\n".join(output))


@hookimpl
def register_routes():
    return [
        (r"^/-/env$", env)
    ]

08 May 07:38

If You Care About Eco-Collapse, What Do You Do Now?

by Dave Pollard

Alberta Tar Sands, soon to cover an area larger than NY State; its toxic sludge ponds alone are large enough to be visible from space. Photo by Dru Oja Jay, Howl Arts Collective, for The Dominion CC-BY-2.0

A large and powerful bully — the iron-fisted boss of this one-industry company town — has his knee on the neck of an even more powerful but seriously sick woman — she’s ruthless, and owns all the agricultural land in the area, on which the residents depend for their food. The bully plans to seize her land and use it to put up more mega-polluting factories. A large crowd of people stand around, debating what should be done. One person, a paramedic, notes the horrific condition of the victim, and shouts “Unless someone acts, she’ll be dead in eight seconds! There’s still time to save her!”, though he’s too far from the scene to intervene personally, and he is being restrained by the bully’s henchmen.

Some of the observers say “That can’t be right, it’s just a tussle, there’s no real danger”, and a few are even egging the bully on. Others insist that it’s not their job to intervene, and that by shouting at the bully to urge him to ease up, they’re doing all they can. Still others say that they can’t intervene because it would mess up their clothes, and they’re on their way to a very important event. Others, watching from a nearby balcony, mutter that they depend on the bully for their livelihood, and are not ready to risk that relationship, and besides, they’re too far away to make a difference now anyway. But some of them shout “Someone needs to do something drastic to stop this right now!”

Both sides have powerful supporters, and an all-out war between them seems inevitable, especially if/when the woman dies.

It’s an imperfect metaphor, of course. But the bully is the capitalist industrial growth economy, and the victim is our beleaguered planet. The observers are the world’s rich and powerful — governments, corporations and institutions. The balcony-watchers are we, the citizens of the world. The paramedic is the IPCC, and the eight seconds are the eight years the IPCC says we have left to prevent ecological collapse. The important events are the political and economic priorities that, insanely, outrank the survival of a healthy planet.

The balcony-watchers are correct in lamenting their lack of power. It is not their job to tackle the bully, even though they feel they are, in a small way, complicit in the murder. They’re paralyzed into inaction.

We can’t care about an event we deny is happening. And we don’t dare care much about an event that we believe is not our fight, or about which we can do nothing, or about which taking direct action may produce immediate, negative personal consequences for us.

The situation seems, and is, hopeless. But each of us has a couple of very unsatisfactory options:

  1. Option 1: We can do nothing. We can convince ourselves, with some justification, that there’s nothing we can do. So we can just enjoy our final moments of relative peace and prosperity before it’s gone. Perhaps we’ll learn to grow some of our own food, and perhaps we’ll get used to the endless ecological disasters, the haze of smoke, the desperate precarity, the migration of billions of climate refugees, which we may be part of. In the meantime we can get together with others and learn to manage our grief, our shame, our anger, over the planet’s death, the perilous future, and the hopelessness of the situation. There is a grieving process that we can learn, if we think it will help. We can prepare ourselves to face the inevitable.
  2. Option 2: We can take direct action, which means working to smash the capitalist industrial growth economy, with all the commensurate risks that entails. We will lose, but in the end it won’t matter because there will be no winners in this war. It’s a war of principle, not a war with hope for victory. Not even a Pyrrhic one.

We cannot do both — we have to choose. As collapse worsens, especially in areas of the world we never hear about where collapse is already in full swing, there will be a propensity for more and more of us to choose the second option.

The message of the first option is stark and simple:

We’re fucked. We did our best. All we can do now is face what’s to come as best we can.

There have been several documentary and sci-fi/cli-fi films of late that have introduced — usually subtly, usually conveyed in the voice of an indigenous elder, or a deep green activist, or a young female protester — a different, astonishing message:

The planet’s life is more important than any individual’s life.

I am spending time every day, now, just sitting and thinking about this second message, and what it means. It sits, printed out, below the keyboard of my laptop. Wild creatures, I think, understand this at a profound, intuitive level that is no longer accessible to us disconnected humans. We have forgotten.

The grim paradox we face in comparing the two messages above is that they’re both right, and that neither helps us make a decision between the two options that follow from them. What they mean to each of us will, more than anything else, determine what we will do and what will happen to us in the decade ahead, and beyond.

There is no third option. There is no way we can, like Wile E Coyote reversing course and streaking back onto land after running off the edge of the cliff, turn the ship of industrial civilization around in eight years to prevent climate collapse. If you still believe the absurd claims to the contrary, I’m sorry, you’re just not paying attention. Read between the lines of the IPCC report to see what they’re really saying, that they’re unwilling or afraid or not permitted to say overtly — yet. There are only two options.

The option we choose doesn’t really matter — it won’t change anything. But it matters to us. To us, now, it’s the only thing that really matters.

08 May 07:36

Beyond Aggregation: Amazon as a Service

by Ben Thompson

Five months and $134 billion in market cap ago (before the stock slipped by 68%), Bloomberg Businessweek purported to explain How Shopify Outfoxed Amazon to Become the Everywhere Store. One of the key parts of the story was how Shopify pulled one over on Amazon seven years ago:

An even more critical event came a few months after the IPO. Amazon also operated a service that let independent merchants run their websites, called Webstore. Bang & Olufsen, Fruit of the Loom, and Lacoste were among the 80,000 or so companies that used it to run their online shops. If he wanted to, Bezos surely had the resources and engineering prowess to crush Shopify and steal its momentum.

But Amazon execs from that time admit that the Webstore service wasn’t very good, and its sales were dwarfed by all the rich opportunities the company was seeing in its global marketplace, where customers shop on Amazon.com, not on merchant websites…In late 2015, in one of Bezos’ periodic purges of underachieving businesses, he agreed to close Webstore. Then, in a rare strategic mistake that’s likely to go down in the annals of corporate blunders, Amazon sent its customers to Shopify and proclaimed publicly that the Canadian company was its preferred partner for the Webstore diaspora. In exchange, Shopify agreed to offer Amazon Pay to its merchants and let them easily list their products on Amazon’s marketplace. Shopify also paid Amazon $1 million—a financial arrangement that’s never been previously reported.

Bezos and his colleagues believed that supporting small retailers and their online shops was never going to be a large, profitable business. They were wrong—small online retailers generated about $153 billion in sales in 2020, according to AMI Partners. “Shopify made us look like fools,” says the former Amazon executive.

If only we could all make such excellent mistakes; Amazon’s move looks like a strategic masterstroke.

Shopify’s Revenue Streams

Three major things have changed, will change, or should change about Shopify’s business in the years since the company made that deal with Amazon.

What has changed is the composition of Shopify’s business. While the company started out with a SaaS model, the business has transformed into a commission-based one:

Shopify's two revenue streams

“Subscription Solutions” are Shopify’s platform fees, including the cost to use the platform (or upgrade to the company’s Pro offering), commissions from the sales of themes and apps, and domain name registration. “Merchant Solutions”, meanwhile, are all of the fees that are generated from ongoing sales; the largest part of this are payment processing fees from Shopify Payments, but other fees include advertising revenue, referral fees, Shopify Shipping, etc.

It’s the shipping part that is in line for big changes: while Shopify first announced the Shopify Fulfillment Network back in 2019, it is only recently that the company has committed to actually building out important pieces of said network on its own, the better to compete with Amazon’s full-scale offering.

As for what should change, I argued back in February that Shopify needed to build out an advertising network; this recommendation is more pertinent than ever, in large part because the second item on this list might be in big trouble.

Buy With Prime

From the Wall Street Journal:

Amazon.com Inc. is extending some of the offerings of its popular Prime membership program to merchants off its platform with a new service that embeds the online retailing giant’s payment and fulfillment options onto third-party sites. Called Buy with Prime, the service will allow merchants to show the Prime logo and offer Amazon’s speedy delivery options on products listed on their own websites…

The company said the Buy with Prime offer will be rolled out by invitation only through 2022 for those who already sell on Amazon and use the company’s fulfillment services. Later, Amazon plans to extend Buy with Prime to other merchants, including those that don’t sell on its platform. Participating merchants will use the Prime logo and display expected delivery dates on eligible products. Checkout will go through Amazon Pay and the company’s fulfillment network. Amazon will also manage free returns for eligible orders.

This is a move that you could see coming for a long time; back in 2016 I wrote an article called The Amazon Tax that explained that the best way to understand Amazon as a whole was to understand Amazon Web Services (AWS):

The “primitives” model modularized Amazon’s infrastructure, effectively transforming raw data center components into storage, computing, databases, etc. which could be used on an ad-hoc basis not only by Amazon’s internal teams but also outside developers:

A drawing of The AWS Layer

This AWS layer in the middle has several key characteristics:

  • AWS has massive fixed costs but benefits tremendously from economies of scale
  • The cost to build AWS was justified because the first and best customer is Amazon’s e-commerce business
  • AWS’s focus on “primitives” meant it could be sold as-is to developers beyond Amazon, increasing the returns to scale and, by extension, deepening AWS’ moat

This last point was a win-win: developers would have access to enterprise-level computing resources with zero up-front investment; Amazon, meanwhile, would get that much more scale for a set of products for which they would be the first and best customer.

As I noted in that article, the AWS model was being increasingly applied to e-commerce as Amazon shifted from being a retailer to being a services provider:

Prime is a super experience with superior prices and superior selection, and it too feeds into a scale play. The result is a business that looks like this:

A drawing of The Transformation of Amazon’s E-Commerce Business

That is, of course, the same structure as AWS — and it shares similar characteristics:

  • E-commerce distribution has massive fixed costs but benefits tremendously from economies of scale
  • The cost to build-out Amazon’s fulfillment centers was justified because the first and best customer is Amazon’s e-commerce business

That last bullet point may seem odd, but in fact 40% of Amazon’s sales (on a unit basis) are sold by 3rd-party merchants; most of these merchants leverage Fulfilled-by-Amazon, which means their goods are stored in Amazon’s fulfillment centers and covered by Prime. This increases the return to scale for Amazon’s fulfillment centers, increases the value of Prime, and deepens Amazon’s moat.

My prediction in that Article was that Amazon’s burgeoning logistics business would eventually follow the same path:

It seems increasingly clear that Amazon intends to repeat the model when it comes to logistics…how might this play out? Well, start with the fact that Amazon itself would be this logistics network’s first-and-best customer, just as was the case with AWS. This justifies the massive expenditure necessary to build out a logistics network that competes with UPS, Fedex, et al, and most outlets are framing these moves as a way for Amazon to rein in shipping costs and improve reliability, especially around the holidays.

However, I think it is a mistake to think that Amazon will stop there: just as they have with AWS and e-commerce distribution, I expect the company to offer its logistics network to third parties, which will increase the returns to scale, and, by extension, deepen Amazon’s eventual moat.

Today Amazon’s logistics is massive and fully integrated from the fulfillment center to the doorstep, even though it only serves Amazon; the obvious next step is opening it up to non-Amazon retailers, and that is exactly what is happening.

Beyond Aggregation

At first glance, this might seem like a bit of a surprise; after all, Stratechery is well known for describing Aggregation Theory, which is predicated on controlling demand. In the case of Amazon that has meant controlling the website where customers order goods — Amazon.com — even if those goods were sold by 3rd-party merchants. Why would Amazon give that up?

The reasoning is straightforward: while Amazon has had Aggregator characteristics, the company’s business model and differentiation have always been rooted in the real world, which, by extension, means it is not an Aggregator at all. I noted in 2017’s Defining Aggregators that Aggregators benefit from zero marginal costs, which only describes a certain set of digital businesses like Google and Facebook:

Companies traditionally have had to incur (up to) three types of marginal costs when it comes to serving users/customers directly.

  • The cost of goods sold (COGS), that is, the cost of producing an item or providing a service
  • Distribution costs, that is the cost of getting an item to the customer (usually via retail) or facilitating the provision of a service (usually via real estate)
  • Transaction costs, that is the cost of executing a transaction for a good or service, providing customer service, etc.

Aggregators incur none of these costs:

  • The goods “sold” by an Aggregator are digital and thus have zero marginal costs (they may, of course, have significant fixed costs)
  • These digital goods are delivered via the Internet, which results in zero distribution costs
  • Transactions are handled automatically through automatic account management, credit card payments, etc.

This characteristic means that businesses like Apple hardware and Amazon’s traditional retail operations are not Aggregators; both bear significant costs in serving the marginal customer (and, in the case of Amazon in particular, have achieved such scale that the service’s relative cost of distribution is actually a moat).

Amazon’s control of demand has been — and will continue to be — a tremendous advantage; Amazon not only has power over its suppliers, but it also gets all of the relevant data from consumers, which it can feed into a self-contained ad platform that is untouched by regulation from either governments or Apple.

At the same time, limiting a business to customer touchpoints that you control means limiting your overall addressable market. This may not matter in markets where there are network effects (which means you appeal to everyone) and you are an Aggregator dealing with zero marginal costs (and thus can realistically cover every consumer); in the case of e-commerce, though, Amazon will never be the only option, particularly given The Anti-Amazon Alliance working hard to reach consumers.

A core part of the Anti-Amazon Alliance are companies that have invested in brands that attract customers on their own; these companies don’t need to be on Amazon fighting to be the answer to generic search terms, but can rather drive customers to their own Shopify-powered websites both organically and via paid advertising (Facebook is a huge player in the Anti-Amazon Alliance). Still, every customer that visits these websites has an Amazon-driven expectation in terms of shipping; Shippo CEO Laura Behrens Wu told me in a Stratechery interview:

Consumers have those expectations from Amazon that shipping should be free, it should be two days, and whatever those are expectations are, returns should be free. That is still carried over when I’m buying on this branded website. If the expectations are not met, consumers decide to buy somewhere else…Merchants are constantly trying to play catch up, whatever Amazon is doing they need to follow suit.

Now Amazon has — or soon will have, in the case of Shopify-only merchants — a solution: the best way to get an Amazon-like shipping experience is to ship via Amazon. And, in contrast to the crappy Webstore product, you can keep using Shopify and its ecosystem for your website. Amazon may have given away business to Shopify in 2015, but that doesn’t much matter if said business ends up being a commoditized complement to Amazon’s true differentiation in logistics. That business, thanks to the sheer expense necessary to build it out, has a nearly impregnable moat that is not only attractive to all of the businesses competing to be consumer touchpoints — thus increasing Amazon’s addressable market — but is also one that sees its moat deepen the larger it becomes.

Shopify’s Predicament and Amazon’s Opportunity

The reason this announcement is so damaging to Shopify goes back to the transformation in the company’s revenue I charted above: Amazon’s shipping solution requires the customer to pay with Amazon; I love Shop Pay’s checkout process, but it’s not as if I don’t have an Amazon account with all of my relevant details already included, and I absolutely trust Amazon’s shipping more than I do whatever option some Shopify merchant offers me. If I had a choice, I’d take the Amazon option every time.

Granted, I may not be representative; I’m a bit of a special case given where I live. What is worth noting, though, is that every transaction that Amazon processes is one not processed by Shopify, which again, is the company’s primary revenue driver. Moreover, the more volume that Amazon processes, the more difficult it will be for Shopify to get their own shipping solution to scale. This endangers the company’s current major initiative.

That is why I think the company needs to think more deeply than ever about advertising: the implication of Amazon being willing to be a services provider and not necessarily an Aggregator is that Amazon is surrendering some number of customer touchpoints to competitors; to put it another way, Amazon is actually making competitive customer touchpoints better — touchpoints that Shopify controls. Shopify ought to leverage that control.

That noted, there is a potential wrench in this plan: if Shopify doesn’t own payment processing, will they have sufficient data to build a competitive conversion-data-driven advertising product? Amazon — and Apple — would likely argue that that data is Amazon’s, but the merchant will obviously know what is going on (by the same token, will Amazon be able to tie this off-platform conversion data back to its own advertising product? It’s not clear what would stop them).

Shopify has two additional saving graces:

  • First, the fact that Amazon will be able to collect data is a big reason for many merchants not to use Amazon’s new offering. Shopify’s offering will always be differentiated in this regard.
  • Second, as the announcement noted, this is going to take Amazon a year or two to fully roll out, and lots of stuff can change in the meantime.

Moreover, while AWS has always been excellent at serving external customers — which it did before Amazon.com ever actually moved over — Amazon’s off-Amazon.com merchant offerings have never been particularly successful (including the aforementioned Webstores).

With that in mind, I think it’s meaningful that “Buy With Prime” is the first major initiative of new CEO Andy Jassy’s regime; I don’t think it’s an accident that it is so clearly inspired by AWS. AWS’s strength is its focus on infrastructure at scale; successfully moving e-commerce beyond aggregation to the same type of service business model would put his stamp on the company in a meaningful way, and, contra that quote in Bloomberg Businessweek, mark Jassy as nobody’s fool.

08 May 07:35

Apple is mainstreaming the inventions of Alan Kay. Maybe the metaverse is next

I have this semi-stupid semi-serious theory about Apple which is that they are, one by one, mainstreaming the inventions of computer scientist Alan Kay, and that is their raison d’être. It’s a theory with predictive power because you can speculate where they’re going next.

Dr Alan Kay: a pioneering computer scientist. Here’s a bio and list of inventions (though I’ll get to those). He started at the University of Utah with ARPA funding in the 1960s, then went to Xerox PARC.

Apple:

Ok so first there was the Macintosh, which was the first mainstream personal computer, and as we know it was inspired by two tours Steve Jobs and Apple employees took round Xerox PARC in 1979.

They saw the Xerox Alto, which had been developed in 1972: the first attempt at bringing to market the PC as invented by Doug Engelbart and team in 1968 (as previously discussed).

The story of that visit:

That day Jobs and his engineers sat in awe as Goldberg and the Alto team demoed the intricacies of the mouse-driven GUI, Smalltalk, as well as the Ethernet technology they had developed to network their Altos together. Jobs was so blown away by the details of the GUI, the rest passed over him like a fog. “It was one of those apocalyptic moments,” Jobs said. “I remember within ten minutes of seeing the graphical user interface stuff, just knowing that every computer would work this way someday. It was obvious.”

– Living Computers, What Really Happened: Steve Jobs @ Xerox PARC ‘79 (2020)

(GUI = graphical user interface.)

BUT

Three years ago, in an answer on Quora, Alan Kay himself reflected on what it was like at Xerox PARC during the Jobs visit: "it was the work of my group and myself that Steve saw, yet the Quora question is the first time that anyone has asked me what happened."

A second important fact about the 1979 demo to Steve, was that he missed most of what we showed him. More than 15 years later he admits this in this interview: How Steve Jobs got the ideas of GUI from XEROX [YouTube] where he says that we showed him three things but he was so blinded by the first one (the GUI) that he missed both networking and real object-oriented systems programming.

Kay didn’t invent the personal computer, Apple’s first breakthrough mainstream product. That pre-existed his involvement.

But Kay was involved in the Xerox Alto as a networked workstation (that’s what it’s called on his Wikipedia page). Un-networked computers and networked computers take you to completely different ways of working.

Jobs made good on the first miss, networking, with the iMac – the beginning of Apple’s rebirth under Jobs: "The ‘i’ stands for ‘Internet,’" he said.

Jobs made good on the second miss, object-oriented systems programming, with his second computer company, NeXT.

The NeXT operating system was famously, heavily object-oriented – which made it insanely easy to develop for. (Using a NeXTstation was what got me into coding.) NeXT was acquired by Apple, Jobs returned (and launched the iMac), and the OS became the underpinnings of first Mac OS and then iOS. The reason iPhones ignited such an incredible app ecosystem is down to the NeXT-derived operating system.

(If you don’t believe me, then you never tried to develop for Symbian, the operating system for the Nokia Series 60 line, far and away the most popular smartphone line when iPhone came out. Symbian was so incredibly difficult to develop for that I remember seeing a flashlight app on Nokia’s app store selling for like $20. The flashlight was literally the tutorial app you would build when you set up the developer environment. Developer friction was that high.)

Anyway: Alan Kay invented object-oriented programming. That was his thing.


Alan Kay also invented the Dynabook.

The KiddiComp concept, envisioned by Alan Kay in 1968 while a PhD candidate, and later developed and described as the Dynabook in his 1972 proposal “A personal computer for children of all ages”, outlines the requirements for a conceptual portable educational device that would offer similar functionality to that now supplied via a laptop computer or (in some of its other incarnations) a tablet or slate computer with the exception of the requirement for any Dynabook device offering near eternal battery life. Adults could also use a Dynabook, but the target audience was children.

– Wikipedia, Dynabook

The illustration of the Dynabook concept (which is on that Wikipedia page) is basically an iPad with a keyboard. Kay didn’t originate that form, but he made it believable.

For Steve Jobs, the lineage was clear. He invited Kay to the bombshell launch of the iPhone.

I think he invited me to the 2007 iPhone unveiling partly because it was kind of a tiny “Dynabook” – and he had always wanted to do one – and partly because he was going to use a quote of mine that he had always taken to heart “People who are really serious about software should make their own hardware”.

The photo of us chatting was taken right after the event. He brought the iPhone to me, put it in my hands, and asked: “Alan, is this good enough to be criticized?”. My reply was to make a shape with my hands the size of an iPad: “Steve, make it this size and you’ll rule the world”.

When the iPhone had been revealed a few minutes earlier I realized that they must already have done an iPad/Dynabook-like machine (easier) and that the “iPhone first” must have been a marketing/timing decision.

(From an article which excerpts another of Kay’s Quora answers: Alan Kay Talks What he and Steve Jobs Talked About at the Original iPhone Keynote in 2007.)

Although Jobs saw the “slate” form factor as key to the Dynabook, Alan Kay believes that Apple has missed the point: in another Quora answer, Kay goes into the goals of the Dynabook concept and points out that it’s for children and it’s programmable. But Apple didn’t allow this: "users (even children) were forbidden to make actively programmable things on the iPad and share them on the Internet."

Hey but… Swift Playgrounds, Apple’s game-like environment for learning programming and creating apps on iPad, which is mysteriously getting more and more powerful each year? I always wonder why so much effort goes into it. Something going on there.


So, beyond personal computers and the Macintosh, we’ve got…

  • networked workstations: the iMac
  • object-oriented programming: NeXT, Mac OS, and iOS
  • Dynabook, slate computers: iPad
  • Dynabook, kids and programming: Swift Playgrounds

What’s next?

WELL.


I’m not saying that pursuing Alan Kay’s inventions is exclusively what Apple does. In bringing something to market, it never looks exactly like the original vision. It’s slower maybe, or the step-by-step strategy reveals something else along the way that becomes a preoccupation for a decade. But Apple trends back towards the North Star, which is Alan Kay.

And am I genuinely saying that this is what Apple consciously does? That there is someone in corp strat combing over Kay’s work and figuring out what to mine? Probably not. Not really.

(Maybe Steve Jobs wrote it into a giant Seldon’s Plan for the future of the fruit company, maintained and monitored to this day in an inner sanctum by a secret cadre of mini Tim Cooks.)

“Apple is channelling Alan Kay” is not really a theory, strictly. It’s a heuristic metaphor, imaginative scaffolding. I carry around a whole bundle of these in my head, as do we all I suppose. I’m not bothered about the truth value of a heuristic metaphor, mainly that it’s crudely on the money enough such that I can (a) reason more rapidly and also (b) stretch it to arrive at new possibilities.

So what happens if we stretch this one?


Reading Alan Kay’s Wikipedia bio, after forays into Atari, Apple itself, and the One Laptop Per Child project, the great as-yet-not-mainstreamed invention that really stands out to me is Croquet.

Croquet is "a software development kit (SDK) for use in developing collaborative virtual world applications."

It’s a programming environment to create multiplayer 3D worlds… which are themselves programmable.

Look: it’s a 3D persistent world that multiple people can be in at the same time.

Sounds… metaverse-y?

Like, the whole emerging strategy for the 2020s of Facebook (now Meta) and the tech startup ecosystem?

Only Croquet is way more than that.

Croquet allows users to edit the source code of the 3D world from within the world, and immediately see the result, while the world is still running. The running program does not need to be ended, and there is no compile-link-run-debug development loop. Any part of the program may be edited, down to the VM and OpenGL calls.

– Wikipedia, Croquet Project

I have seen Kay present Croquet on stage and it is WILD. Almost two decades ago.

Back in 2003:

Alan Kay was giving a talk about old ideas from the history of computing. Two big projector screens, one at each end of the stage.

It starts as a slideshow.

Then turns into interactive, object-oriented programming. Kay right clicks on an object in the deck and rewrites the code: the slide becomes a dynamic simulation of a car driving down a road. We thought it was a slide but actually it’s live code.

AND THEN: the screens show different views. The screen on the right pulls back, and it turns out we weren’t looking at a slide, or even a screen-based coding environment. We were duped. We were zoomed in on a flat television in a fully realised 3D virtual world.

The screen on the left pulls back too, then travels through a portal, looks to the side, and sees the avatar that represents the screen on the right.

It was mind blowing.

I still have my notes…

3d environment on the left, then the 2d screen on the right zooms out and suddenly we’re in a 3d environment with panels all over it. all 3d OO, an avatar. each panel is a portal. the left is doing things from a first person POV, replication architecture doing realtime transactions over the internet. late binding protocol – message based system. all objects have their own objects.

every object is different, and can communicate with its peers. so you pull down the objects to fill in the environments, massively parallel realtime transaction stuff. then messages are synced up.

left screen manipulated a portal into another space. he enters it, and he’s in a new 3d space. left turns round, and sees alan on right screen enter the portal into the 3d view of the surface of mars.

– Notes from ETech 2003, Daddy Are We There Yet (Alan Kay keynote)

(This is from back when conference talks were given sufficient time and were super dense.)

I sound pretty hilariously amazed, breathless even in plain text. I still am tbh.

What Kay really cracked, with Croquet, is more general than in-world programming. It’s the overall user experience paradigm to interact with objects in 3D space in a way that makes intuitive sense, without dropping out to a separate console.

Update 26 Apr: It turns out that the whole keynote is on YouTube. It looks pretty old school now, but you can see past that – it’s remarkable. And you just ask… what if we’d spent the last 20 years iterating on this vision? The path not taken. But it’s still possible to pick up the ideas. Watch it here: Alan Kay: Croquet Demo (2003) (58 mins).


Multiplayer, persistent virtual reality worlds, programmable from the inside – that’s the Alan Kay invention that Apple hasn’t mainstreamed yet.

Then there are all those rumours about Apple and virtual reality smart glasses… all the jigsaw pieces that Apple has dropped like spatial audio, and augmented reality toolkits, and ultra-wideband low latency high precision positioning chips, and insanely high-powered low-energy silicon, which isn’t really being used anywhere… and all the work they’ve done around iCloud, and identity, and in-app app development…

And I was thinking about VR headsets the other day, noticing that the missing piece is the operating system… For all of Apple’s work towards the technology of smart glasses, we don’t know what “reality OS” is going to be like to use. Is it going to be immersive and 3D? Is it going to be a shared social space? Will the user experience build on the lessons of “object oriented” direct manipulation, like the original desktop metaphor but updated for virtual environments?

I can’t help but wish that what Apple is working their way towards, slowly, over many years, is Alan Kay’s vision of the metaverse.

02 May 01:52

Future of Education Speaker Series Episode 1 – Students thinking about future skills

by dave

As I’ve mentioned various times on this blog, I have had the good fortune of working with about 70 Co-op students throughout the pandemic. They were students who would mostly have gone out to their engineering or kinesiology placements, but could not due to the pandemic. They’ve been wonderful to work with. The students I’ve worked with in the past had mostly self-selected into the work that we were doing. These students had not. They were doing their best (like so many of us) to make the best of a Covid situation. I learned a lot from them…

One of the things we’ve worked on extensively… is about what it means to be ‘prepared’ for the world that we have in front of us.

This Friday at noon EST (April 29th, 2022) my students will be presenting the results of their Futures thinking activity. They have been tasked to consider what skills they might need to succeed in the future. They started with the trends that are part of the SSHRC future challenges and built from there. It is the capstone presentation at the end of their four-month work term in the Office of Open Learning at UWindsor. Join us. We’d love to hear from you.

Why are we doing this?

I do my best to have my student employees do meaningful work. I also try and do the kind of training that might make the experience worthwhile to them. One of the key focuses of that training in the last two years has been the gap between their expectations of what it means to work and what I am actually asking them to do. I have found that I need to spend significant time making students believe that I am actually looking for their ‘opinion’ and not ‘the right answer’. They struggle (at first) confronting things that are uncertain. They want a clear question with a clear answer.

This experience led me to take a closer look at the ‘future preparation of students’ conversation. Whether we are talking about 21st century skills, or preparing people for future jobs or whatever… what are we preparing them for? Is confronting uncertainty a 21st century skill? Are other people in my field seeing the same things from students? What other things should I be trying to prepare them for that haven’t occurred to me? How many of those are the results of my own embodiment and privilege? What, eventually, does this mean we should be doing to change what and how we teach?

I don’t know. I know that I care about the work I’ve done with my students and I believe that there is some kind of disconnect. Over the next ten months I’m going to be hosting a series of conversations to talk about it. I’m planning an open course for the fall. I’m also currently applying for funding for a conference in February of 2023. Stay tuned.

A few thoughts going forward.

Uncertainty

You might believe that uncertainty is the product of our current times (pandemic, war in Europe, housing and oil prices, climate change etc…). You could see the next 20 years as a time of potentially unprecedented uncertainty. You might also believe that the abundance of access to voices and information has unveiled the uncertainty that has always lain underneath the veneer of the post-WWII ‘clear objectives’ global north west. Either way, I feel pretty comfortable suggesting that many of the new challenges our OOL students will be facing in the next 20 years don’t have ‘answers’ and, frankly, the ones we’re handing on (e.g. poverty) don’t have ‘answers’ either.

I think that preparing people for uncertainty is different than preparing them for certainty. I was talking about uncertainty a few weeks ago and was told that we need to ‘teach the basics’ so that people can even enter the conversation. We need to teach the certainties before we teach the uncertainties. I hear it. We were making the same criticisms of whole language learning in the 80s. I just have this feeling that this conversation about uncertainty and what we can do for it is important.

Futures thinking

Futures thinking is a method of examining current trends through the lens of the future. It is NOT prediction. Take your time machine back 5 years and make some guesses… your predictions were probably wrong. If they were right… no one listened to you.

Futures thinking is about creating ‘possible futures’ that give us a chance to discuss our current trends outside of the current disagreements we may have about them. We combine trends and think about what would happen if they became a dominant trend in our culture. What if, 20 years from now, housing prices went up by 500%? What if advances in cyborginess gave us all unlimited mental storage?

The possibilities are endless, but the future is not the thing. The real advantage of taking a futures approach is the chance to think about the trend. The outcome is a better understanding of what we should be doing in our world right now.

What do I hope to get from this?

Well. I have this conversation that I want to be in. I can’t find it… so I’m hoping to start one version of it and find the others that are already ongoing.

I’m also hoping to pull together the wisdom we come across. We’ll see how things develop, how many voices decide to join. It’d be great if that October open course actually became a MOOC. I’d like that. We’ll see.

Interested?

For sure come to the presentation on Friday. For now drop a comment on this post if you’re interested. I haven’t quite settled on the platform for communications (what with the recent unpleasantness in the social mediasphere).

02 May 01:52

An unplanned open redesign

Howdy! If you frequent this website often, you’ll notice some changes. This weekend I started on my planned redesign, but as I started nudging my About section, I found myself in a fight with my typography. After some hemming and hawing, I made the drastic decision to nuke my stylesheets and start over.

~81 file changes later, I made another decision to yeet my changes live and do an open redesign. Not my original plan, but it’ll be good blog fodder. One thing I want to avoid is killing blogging momentum by getting sunk in a redesign branch.

Without further ado, as with any redesign, let’s see what we’re dealing with first…

Website feature inventory

Here’s a list of all the features on my little website that I maintain:

  • 330+ blog posts
  • Art-directed posts with custom CSS, JavaScript, and SVGs
  • Open Graph images (handmade)
  • sitemap.xml
  • RSS feed
  • ███ ████
  • Code samples
  • Codepen, YouTube, and Vimeo embeds
  • About Timeline (YAML-powered)
  • Bookshelf (YAML-powered)
  • Likes (RSS-powered)
  • /uses page
  • Service worker + offline page

My checks are all manual right now, but not rocket science. I created a “secret” page to roll up all my art-directed posts and I’m happy to report just a few are super fucked up. One day I’ll add proper visual integration tests, but today is not that day.

Which leads me to the next step, let’s define the goals and non-goals for this project so we can finish on time.

Goals and non-goals

The goals I have outlined for myself are pretty simple:

  • Redo my typography setup - After years and years of subtle tweaks, I ruined it. Time to start over, more fluid this time.
  • New homepage - This is the lowest priority, but nothing sings “redesign” like a new homepage.
  • Remove Ads - Ads make me ~$20/mo. Originally a motivator to blog consistently, I’ve got a flow now. Ads were also an intentional third-party constraint I used to inform client work. Since I’m moving away from client work, that experiment can end.
  • ✅ Remove Fathom Analytics - Fathom is great, but I’m paying $5/mo to self-host and I check it zero times a year. I have Netlify’s server-side analytics running on this as well. There’s a 26× disparity between those two analytics services, enough to make me think there’s no truth to metrics at all. Jim Nielsen covers this phenomenon in his post Comparing Data in Google and Netlify Analytics.
  • ✅ Expose tags - I want to do a better job at tagging and categorization of posts for my own sake and I think elevating that will help.
  • ✅ Annual stats - Another feature for me, I like to keep track of my progress over time.
  • ✅ Syntax Highlighting - I had this once with client-side Prism.js, then I took it off, now I want it back. I’m using server-side Rouge + Commonmark in Jekyll.
  • Support page - How can you give me $10? If I’m removing ads, might as well sell my own products and services.

My list of non-goals would include:

  • Do not replatform the blog to Eleventy (or Astro) or whatever. I’d love to not be on Jekyll, but that’s a separate project. Tooling is a lot better on a Node stack.
  • Do not go down a testing rabbit hole. Spot checks or automated webperf/accessibility tests would be beneficial, but it’d be easier on a Node stack.
  • Do not automate opengraph images - Auto-generated opengraph images would be rad, but again, easier on a Node stack.

With those parameters defined, let’s do a status check…

Status check!

What you’re looking at now is the lightly styled bones of my site. No Sass, no JavaScript, nothing but Jekyll static site compilation. I’m pleased with how the website is shaping up with basic browser defaults. I’ve got ~2.37kb of global CSS (926b gzipped) and I don’t see the need for much beyond that.

RIP Newshammer, long live Newshammer: Astute readers will notice Newshammer is not on the site anymore. That’s not a permanent decision, but I am leaving opportunity/room to do something different.

Typography: I’m embracing browser defaults for type right now, with two or three exceptions. I’m soft-committing to system-ui for now because I had some recent trouble rendering headings in other languages. The article H1 is the only heading I see that feels a little too small, but I’d be happy if that’s all the typesetting I need to do.
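
Since the goal is a more fluid typography setup, one way to nudge that H1 up without abandoning browser defaults is a single clamp() rule – this is a hypothetical sketch, not what the site actually ships; the selectors and scale values are illustrative:

```css
/* Hypothetical fluid type sketch: the H1 scales with the viewport
   between a fixed floor and ceiling, no media queries needed.
   Everything else stays at browser defaults. */
html {
  font-family: system-ui, sans-serif;
}

article h1 {
  /* ~1.75rem on narrow viewports, growing to ~2.75rem on wide ones */
  font-size: clamp(1.75rem, 1.25rem + 2.5vw, 2.75rem);
}
```

The nice thing about clamp() here is that it is one declaration per heading level, so the "all the typesetting I need to do" budget stays tiny.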

Spacing: I have seven margin declarations and most of those are resets. Liberal usage of Grid and Flexbox using gap to manage consistent spacing. Super happy about this. Even the posts are one giant grid. I may abstract my grids into some layout utility classes…

.layout-grid { --gap: 1.5rem; display: grid; gap: var(--gap); }
.layout-grid > * { min-width: 0; } /* https://daverupert.com/2017/09/breaking-the-grid/ */
.layout-grid[data-vibe="compact"]     { --gap: 0.75rem }
.layout-grid[data-vibe="comfortable"] { --gap: 3rem }

This would probably remove half my existing CSS.

Charts n’ graphs: I followed Josh Collinsworth’s post on CSS Grid bar charts for my Archive page. Happy with this. It needs some work to be fully accessible (context-less labels) but it’s presentational at its core.
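
The gist of that grid-bar-chart technique is a grid with bars aligned to the bottom edge, each sized by a custom property – a stripped-down, hypothetical version (class names and the --size property are illustrative, not the actual Archive page code):

```css
/* Hypothetical CSS Grid bar chart: one auto-flowing column per bar,
   bar height driven by a --size custom property between 0 and 1. */
.chart {
  display: grid;
  grid-auto-flow: column;
  grid-auto-columns: 1fr;
  gap: 0.5rem;
  align-items: end; /* bars grow up from the baseline */
  height: 10rem;
}

.chart > .bar {
  height: calc(var(--size, 0) * 100%);
  background: currentColor;
}
```

Each bar is then just markup like `<div class="bar" style="--size: 0.6"></div>`, which is why the approach stays presentational at its core.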

The flagpole: One stylized component I have right now is the “flagpole” look for my Post Archives and About Timeline. I’m not attached to the flagpole treatment, but I do think visually grouping content by years is super helpful.

Beyond that, I’ve already been able to land some accessibility fixes around my document heading structure which I’m pretty glad about. I noticed my HTML is also “over-classed” with a lot of half-baked .hentry microformats that I brought over from WordPress ten years ago, I’m looking forward to mowing those down.

That’s all I have for now. Tune in next week for more changes and CSS riffin’!

02 May 01:52

Extracting Rationale for Open Source Development Decisions

I've never tried to measure how much of my time I spend staring at a screen full of code and wondering, "Why the hell is it doing that?" Even small teams with little turnover can struggle to keep track of the reasons for particular decisions, and newcomers frequently have to recapitulate their predecessors' journeys of discovery in order to figure out why alpha has to be re-initialized at that particular moment.

Sharma2021 explores how well decision rationales can be extracted after the fact from Python's email archives. The authors combined term patterns such as Proposal-State-Reason, proximity-based heuristics, role-based heuristics, and the presence of special terms like "BDFL pronouncement" to identify promising messages, then scored how well parameterized combinations of those heuristics did compared to a manually-curated set of messages. They found that, "If we consider the top 10 ranked results, [aggregating at the message level] captures 74% and 86% of rationale sentences, respectively; and top 15 increases this further to 82% and 91%, respectively." That's much better than I would have expected, and I hope the authors will make their tool available as a specialized search engine for practitioners to try out.

Sharma2021 Pankajeshwara Nand Sharma, Bastin Tony Roy Savarimuthu, and Nigel Stanger. Extracting rationale for open source software development decisions—a study of Python email archives. Proc. ICSE 2021, doi:10.1109/icse43902.2021.00095.

A sound Decision-Making (DM) process is key to the successful governance of software projects. In many Open Source Software Development (OSSD) communities, DM processes lie buried amongst vast amounts of publicly available data. Hidden within this data lie the rationale for decisions that led to the evolution and maintenance of software products. While there have been some efforts to extract DM processes from publicly available data, the rationale behind 'how' the decisions are made have seldom been explored. Extracting the rationale for these decisions can facilitate transparency (by making them known), and also promote accountability on the part of decision-makers. This work bridges this gap by means of a large-scale study that unearths the rationale behind decisions from Python development email archives comprising about 1.5 million emails. This paper makes two main contributions. First, it makes a knowledge contribution by unearthing and presenting the rationale behind decisions made. Second, it makes a methodological contribution by presenting a heuristics-based rationale extraction system called Rationale Miner that employs multiple heuristics, and follows a data-driven, bottom-up approach to infer the rationale behind specific decisions (e.g., whether a new module is implemented based on core developer consensus or benevolent dictator's pronouncement). Our approach can be applied to extract rationale in other OSSD communities that have similar governance structures.

02 May 01:50

Programmable Notes

An essay about the utility of active agents in note-taking systems, by Maggie Appleton.

01 May 19:44

On The Origins Of Hypertext In The Disasters Of The Short 20th Century

My paper for the special History track of the 2022 Web Conference, “On The Origins Of Hypertext In The Disasters Of The Short 20th Century”, is available in the ACM Digital Library, and here (pdf).

The development of hypertext and the World Wide Web is most frequently explained by reference to changes in underlying technologies — Moore’s Law giving rise to faster computers, more ample memory, increased bandwidth, inexpensive color displays. That story is true, but it is not complete: hypertext and the Web are also built on a foundation of ideas. Specifically, I believe the Web we know arose from ideas rooted in the disasters of the short twentieth century, 1914–1989. The experience of these disasters differed in the Americas and in Eurasia, and this distinction helps explain many long-standing tensions in research and practice alike.
01 May 19:44

Telcos should focus on "connected data"​ not just "edge computing"​

by Dean Bubley

Note: A version of this article first appeared as a guest blog post written for Cloudera, linked to a webinar presentation on May 4, 2022. See the sign-up link in the comments. This version has minor changes to fit the tone & audience of this newsletter, and tie in with previous themes. This version is also published on my LinkedIn newsletter with a comments thread (here).

Telcos and other CSPs are rethinking their approach to enterprise services in the era of advanced wireless connectivity - including their 5G, fibre and Software-Defined Wide Area Network (SD-WAN) portfolios. 

Many consumer-centric operators are developing propositions for “verticals”, often combining on-site or campus mobile networks with edge computing, plus deeper solutions for specific industries or horizontal applications. Part of this involves helping enterprises deal with their data and overall cloud connectivity as well as local networks. (The original MNO vision of delivering enterprise networks as "5G network slices" partitioned from their national infrastructure has taken a back seat. There is more interest currently in the creation of dedicated on-premise private 5G networks, via telcos' enterprise or integrator units).


At the same time, telecom operators are also becoming more data- and cloud-centric themselves. They are using disaggregated systems such as Open RAN and cloud-native 5G cores, plus distributed compute and data, for their own requirements. This is aimed at running their networks more efficiently, and dealing with customers and operations more flexibly. There are both public and private cloud approaches to this, with hyperscalers like Amazon and disruptors such as Rakuten Symphony and Totogi promising revolutions in future.

As I've said for some time, “The first industry that 5G will transform is the telecom industry itself.”

This poses both opportunities and challenges. Telcos’ internal data and cloud needs may not mirror their corporate customers’ strategies and timing perfectly, especially given the diverse connectivity landscape.

If operators truly want to blend their own transformation journey with that of their customers, what is needed is a much broader view of the “networked cloud” and "distributed data", not just the “telco cloud” or "telco edge" that many like to discuss.

Networked data and cloud are not just “edge computing”

Telecom operators’ discussions around edge/cloud have gone in two separate directions in recent years:

  • External edge computing: The desire by MNOs to deploy in-network edge nodes for end-user applications such as V2X, IoT control, smart city functions, low-latency cloud gaming, or enterprise private networks. Often called “MEC” (mobile edge computing), this spans both in-house edge solutions and a variety of collaborations with hyperscalers such as Azure, Google Cloud Platform, and Amazon Web Services.
  • Internal: The use of cloud platforms for telcos’ own infrastructure and systems, especially for cloud-native cores, flexible billing, and operational support systems (BSS/OSS), plus new open and virtualised RAN technology for disaggregated 4G/5G deployments. Some functions need to be deployed at the edge of the network (such as 5G DUs and UPF cores), while others can be more centralised.

Of these two trends, the latter has seen more real-world utilisation. It is linked to solving clear and immediate problems for the CSPs themselves.

Many operators are working with public and private clouds for their operational needs—running networks, managing subscriber data and experience, and enabling more automation and control. While there are raging debates about “openness” vs. outsourcing to hyperscalers, the underlying story—cloudification of telcos’ networks and IT estates—is consistent and accelerating. The timing constraints of radio signal processing in Open RAN, and the desire to manage ultra-low latency 5G “slices” in future 3GPP releases are examples that need edge compute. There may also be roles for edge billing/charging, and various security functions.

In contrast, telcos' customer-facing cloud, edge and data offers have been much slower to emerge. The hype around MEC has meant operators’ emphasis has been on deploying “mini data centres” deep in their networks—at cell towers or aggregation sites, or fixed operators’ existing central office locations. Discussion has centred on “low latency” applications as the key differentiator for CSP-enabled 5G edge, and on compute rather than data storage and analysis. Few telcos have given much consideration to "data at rest" as opposed to "data in motion" - but both are important for developers.

This has meant a disconnect between the original MEC concept and the real needs of enterprises and developers. In reality, enterprises need their data and compute to occur in multiple locations, and to be used across multiple time frames—from real time closed-loop actions, to analysis of long-term archived data. It may also span multiple clouds—as well as on-premise and on-device capabilities beyond the network itself.

What is needed is a more holistic sense of “networked cloud” to tie these diverse data storage and processing needs together, along with documentation of connectivity and the physical source and path of data transmission.

Potentially there are some real sources of telco differentiation here - as opposed to some of the more fanciful MEC visions, which are more realistically MNOs just acting as channel partners for AWS Outposts and Azure's equivalent Private MEC.

An example of the “networked cloud”

Consider an example: video cameras for a smart city. There are numerous applications, ranging from public transit and congestion control, to security and law enforcement, identification of free parking spots, road toll enforcement, or analysing footfall trends for retailers and urban planners. In some places, cameras have been used to monitor social-distancing or mask-wearing during the pandemic. The applications vary widely in terms of immediacy, privacy issues, use of historical data, or the need for correlation between multiple cameras. 

CSPs have numerous potential roles here, both for underlying connectivity and the higher-value services and applications.

But there may be a large gap between where and when “compute” occurs and where and when data is collected and stored. Short-term image data storage and real-time analysis might be performed on the cameras themselves, on an in-network MEC node, or at a large data centre, perhaps with external AI resources or combined with other data sets. Longer-term data for trend analysis or historic access to event footage could be archived either in a city-specific facility or in hyperscale sites.

(I wrote a long article about Edge AI and analytics last year - see here)

For some applications, there will need to be strong proofs of security and data custody, especially if there are evidentiary requirements for law enforcement. That may extend to knowing (and controlling) the specific paths across which data transits, how it is stored, and the privacy and tamper-resistance compliance mechanisms employed.

Similar situations—with both opportunities and challenges—exist in verticals from vehicle-to-everything to healthcare to education to financial services and manufacturing. CSPs could become involved in the “networked cloud” and data-management across these areas—but they need to look beyond narrow views of edge-compute. Telcos are far from being the only contenders to run these types of services, but some operators are taking it seriously - Singtel offers video analytics for retail stores, for instance.

Location-specific data

As a result, the next couple of years may see something of a shift in telcos’ discussions and ambitions around enterprise data. There will be huge opportunities emerging around enterprise data’s chain-of-custody and audit trails—not only defining where processing takes place, but also where and how data is stored, when it is transmitted, and the paths it takes across the network(s) and cloud(s).

(A theme for another newsletter article or LI post is on enterprises' growing compliance headaches for data transit - especially for international networks. There may be cybersecurity risks or sanctions restrictions on transit through some countries or intermediary networks, for instance. Some corporations are even getting direct access into Internet exchanges and peering-points for greater control).

In some cases, CSPs will take a lead role here, especially where they own and control the endpoints and applications involved. Then they can better coordinate the compute and data-storage resources. In other cases, they will play supporting roles to others that have true end-to-end visibility. There will need to be bi-directional APIs—essentially, telcos become both importers and exporters of data and connectivity. This is especially true in the mobile and 5G domain, where there will inevitably be connectivity “borders” that data will need to transit. (A recent post on the need for telcos to take on both lead and support roles is here)

There may be particular advantages for location-specific data collected or managed by operators. For example, weather sensors co-located with mobile towers could provide useful situational awareness both for the telco’s own operational purposes as well as to enterprise or public-sector customers, such as smart city authorities or agricultural groups. 

Telcos also have a variety of end-device fleets that they directly own, or could offer as a managed service—for instance their own vehicles, or city-wide security cameras. These can leverage the operator’s own connectivity (typically 5G) as well as anchor some of the data origination and consumption.

Conclusion

Telecom operators should shift their enterprise focus from mobile edge computing (MEC) to a wider approach built around "networked data". Much of the enterprise edge will reside beyond the network and telco control, in devices or on-premise gateways and servers. Essentially no enterprise IT/IoT systems will be wholly run "in" the 5G or fixed telco network, as virtual functions in a 3GPP or ORAN stack.

They instead should look for involvement in end-point devices, where data is generated, where and when it is stored and processed—and also the paths through the network it takes. This would align their propositions with connectivity (between objects or applications) as well as property (the physical location of edge data centres or network assets).

There are multiple stages to get to this new proposition of “networked cloud”, and not all operators will be willing or able to fulfil the whole vision. They will likely need to partner with the cloud players, as well as think carefully about treatment of network and regulatory boundaries.

Nevertheless, the broadening of scope from “edge compute” to “networked cloud” seems inevitable. The role of telcos as pure-play "edge" specialists makes little sense and may even be a distraction from the real opportunities emerging at higher levels of abstraction.

The original version of this article is at https://blog.cloudera.com/telco-5g-returns-will-come-from-enterprise-data-solutions/

I'll be speaking on an upcoming webinar with @cloudera about "Enterprise data in the #5G era" on May 4, 2022 - https://register.gotowebinar.com/register/3531625172953644816

#cloud #edgecomputing #5G #telecoms #latency #IoT #smartcities #mobile #telcos

01 May 19:42

Solving Tetris in C

Exactly one of the following statements is true. In Tetris, you can always get at least one line before losing. In Tetris, a sufficiently evil AI can always force you to lose without getting a single line, by providing bad pieces. Which one? Prove it. The setup I created HATETRIS in 2010 because I wanted Tetris players to suffer. I created it for my own amusement, as a programming challenge, and to investigate the use of JavaScript as a gaming platform. HATETRIS is a version of Tetris which always supplies the worst possible piece. HATETRIS explores only to a depth of 1. It looks at the immediate future and chooses the single worst piece for the present situation, ignoring the possibility of the player using the current piece to set up a situation where the second piece forces a line. After the first iteration of the game, I started using a bit mask to express each row of the playing field ("well"), which made testing the insertion and removal of possible p...
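The bit-mask idea mentioned in the excerpt can be sketched as follows. This is an illustrative Python sketch only, not the actual HATETRIS code (which is JavaScript); the well width and function names are assumptions.

```python
# Each row of the well is one integer, with bit i set when column i is occupied.
WELL_WIDTH = 10
FULL_ROW = (1 << WELL_WIDTH) - 1  # 0b1111111111: every column filled

def can_place(well, piece_rows, top):
    """A piece fits if none of its row masks overlap existing cells (bitwise AND)."""
    return all(
        (well[top + i] & mask) == 0
        for i, mask in enumerate(piece_rows)
    )

def place(well, piece_rows, top):
    """Merge a piece into the well with bitwise OR; returns a new well."""
    well = list(well)
    for i, mask in enumerate(piece_rows):
        well[top + i] |= mask
    return well

def clear_lines(well):
    """Remove full rows and pad empty rows (0) back on top."""
    remaining = [row for row in well if row != FULL_ROW]
    cleared = len(well) - len(remaining)
    return [0] * cleared + remaining, cleared

# A 4-row well whose bottom row is missing only column 0:
well = [0, 0, 0, FULL_ROW & ~1]
well = place(well, [0b1], top=3)   # drop a single block into column 0
well, n = clear_lines(well)        # the bottom row is now full and clears
```

Representing rows as integers makes collision tests, insertion, and line removal single bitwise operations, which is what makes exhaustive search over piece placements cheap.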
01 May 19:42

Words and meaning are two different things

by Doug Belshaw

Sometimes, things that seem unproblematic and obvious don’t bear much scrutiny. Let’s take a popular case in point. Elon Musk, you may have heard, is buying Twitter. He tweeted the following of his intentions for the platform:

Elon Musk tweeting the following unattributed screenshot: "Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated," said Mr. Musk. "I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential - I look forward to working with the company and the community of users to unlock it."

There is so much to unpack here. I almost want to go line by line. But instead, let’s zoom out a moment and just think about what’s happening here. The world’s richest man is buying a platform which has been known to have had an effect on democratic elections in at least two major western democracies. So he is absolutely right when he calls it a “digital town square where matters vital to the future of humanity are debated”. More than that, though, it’s a giant memetic influence machine which will no doubt help decide who will become next President of the USA.

Twitter isn’t “the” town square, however, in two senses. First, town squares aren’t usually privately-owned. They’re public spaces, owned by the people. Second, although it has a lot of users, Twitter doesn’t even have as many as Pinterest, never mind Instagram or Facebook. Also, worldwide, people use an average of more than seven different social platforms.

But, hang on, it’s a good thing that Musk is aiming to make Twitter better by “making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans,” right? Well, perhaps not. Although I appreciate the sentiment, just because something can be inspected doesn’t mean it can be understood.

However, my main issue is with the idea that everything would be OK if we got rid of spam bots and ensured that everyone was in some way verified. Although it seems like common sense that the use of real names leads to civility, this has proved to be incorrect. That matches my experience on the internet as well: I’ve had some of my best interactions with people whose real names and locations I don’t know.

Someone sent me a link to counter.social which is a reductio ad absurdum of what Twitter could turn into. It claims: “No Trolls. No Abuse. No Ads. No Fake News. No Foreign Influence Ops.” Good luck with that. Again, although it sounds like something that social media users would want, it’s a thinly-veiled signpost to authoritarianism.

The answer to the problem of how to do online democracy is citizen education together with strengthening democratic institutions. It’s not allowing a billionaire who’s had some success buying companies before to parachute himself in to ‘fix’ things. Thank goodness I’ve spent the last five years on the Fediverse and deleted my Twitter account around six months ago…

The post Words and meaning are two different things first appeared on Open Thinkering.
29 Apr 02:26

Brendan Smialowski, Buying Twitter, Elon Musk Will Face Reality of His Free-Speech Talk


Elon Musk will be constrained by laws about content – like the European Union’s Digital Services Act – rather than being able to do whatever he wants with his shiny new toy, Twitter. Likewise, he has raised a bunch of debt financing, so he can’t let the company fall into the shitter. If anything, his goal should be to increase the value of Twitter by creating more opportunities for product innovation, and to compete with Facebook, Instagram, etc.

But will he unravel the minimal protections Twitter has managed to put in place to try for what Isaiah Berlin referred to as ‘positive free speech’? As Anand Giridharadas puts it,

The constitutional protection of speech does not, on its own, engender a society in which the chance to be heard is truly abundant and free and equitably distributed.

“Freedom for the wolves has often meant death to the sheep,” Mr. Berlin once said.

We’ll have to see if Musk can walk the line.

Personally, I am looking forward to the edit button and the end of the 280-character limit, which are stupid constraints. I am not sure that anyone – especially Musk or the oligarchs – can solve our free speech problems. But I don’t look to him to do it; I look to democracy.

If we don’t figure out how to allow people to talk in the public square without being threatened or lied to by clowns with megaphones, it won’t matter who ‘owns’ the megaphones.

24 Apr 02:10

Epoch Times writers mass-mail unsolicited "newspaper" promoting crypto

Photograph of the front page of a newspaper, titled "Wall Street Today" and with the headlines "Why Investors Are Making a Killing with Cryptocurrency" and "Slashing Bitcoin Costs by Up to 75%"

Bob Byrne and Tim Collins, two prolific contributors to the far-right Epoch Times, have expanded their grift to crypto. A twenty-page-long "newspaper" titled Wall Street Today appeared in many mailboxes, featuring misleading charts and a multi-page-long advertisement for a Bitcoin mining company—evidently hoping that its recipients might invest in crypto or in the penny stock for the mining firm. A small-print disclosure on page 17 revealed that the firm, Creek Road Miners, paid $1.9 million for the glowing "review".

Byrne and Collins published the paper via their co-founded company Streetlight Equity. The firm has also published ostensibly economic-focused articles that include conspiracy theories about how U.S. sanctions on Russia are all a part of a plan to "force the left's green agenda", and rail against pandemic lockdowns.

This is not the first unsolicited newspaper from the Epoch Times or its associates; the Falun Gong-associated and strongly anti-Chinese Communist Party publication previously distributed an unsolicited "special edition" which described COVID-19 as the "CCP virus". That led to pushback from a Canadian postal union, which urged the Canadian government to ban its distribution as hate speech it feared would endanger Asian Canadians. The Epoch Times has also spread QAnon and anti-vaccine conspiracy theories, promoted false claims of fraud in the 2020 United States presidential election, and boosted far-right politicians in Europe.

24 Apr 02:09

Apple to delist completed games that haven’t been updated ‘in a significant amount of time’

by Steve Vegvari

Indie developers have begun receiving emails from Apple notifying them that the company is delisting their app from the App Store. The notification states that due to insufficient updates, Apple will remove the game or app in 30 days.

Match-three game Motivoto faces delisting; a developer of the game took to Twitter to share Apple's email. In it, the company says the app has “not been updated in a significant amount of time,” and that it is therefore scheduled to be removed from the App Store in 30 days.

https://twitter.com/protopop/status/1517701619374338050

Apple continues, saying that “no action is required for the app to remain available to users who have already downloaded the app.” However, developer Protopop Games must submit an update for review within 30 days; otherwise, Motivoto will be removed from sale.

Clarifying the last time Motivoto was updated, Protopop Games states the game hasn’t received an update since March 2019. However, the developer claims Motivoto is “fully functioning” and has been for the last three years.

Similarly, another indie developer took to Twitter stating Apple is delisting “a few” older games for the same reason. As with Motivoto, Twitter user @lazerwalker claims that the games in question “aren’t suitable for updates” and “they’re finished artworks from years ago.”

https://twitter.com/lazerwalker/status/1517849201148932096

Apple’s policy to remove apps without regular updates is a part of the company’s App Store Improvements measure. The policy aims to ensure all “apps available on the App Store are functional and up-to-date.” However, doing so raises a pain point for preservation. Users who have purchased the game or app experience no interruptions. However, delisting prevents new players and users from accessing the app.

To avoid being delisted, Apple suggests regular updates “to fix bugs, offer new content, provide additional services, or make other improvements.” However, as both developers point out, that’s not always sustainable.

Source: @protopop

24 Apr 02:07

Tokyo’s Manuscript Writing Cafe won’t let you leave until you finish your novel. ‹ Literary Hub

mkalus shared this story from Literary Hub.

Cafés can be lovely places to grab a coffee and a snack while you noodle around with your writing project, but the thing they’ve long been missing is cold, hard accountability. Well, no more, thanks to The Manuscript Writing Cafe!

The Tokyo café was designed to help writers trying to hit deadlines—so much so that you aren’t allowed in unless you have one, and you can’t leave unless you meet it. A sign in the café explains: “The Manuscript Writing Cafe only allows in people who have a writing deadline to face! It’s in order to maintain a level of focus and tense atmosphere at the cafe!”

For people who thrive with external pressure and constant oversight (hi, yes, sorry to my agent), this place sounds like a godsend. The rules are as follows:

1. Upon entering the store, write down at the reception desk how many words and by what time you are going to write your manuscript.

2. The manager asks you every hour how your manuscript is coming along.

3. You are not allowed to leave the store until you have finished writing your manuscript or writing project.

The café charges by the hour (so the whole “you can’t leave till you’re finished” is actually a pretty savvy business decision). To me, though, the “just circling back” manager is the most brilliant touch—nothing will get me writing faster than trying to avoid explaining my stunningly low productivity to another human.

Petition to franchise these in the US!

[via Grape]

23 Apr 19:54

Who made my clothes?

by Margaux Bourdon

Hi there! I’m Margaux, from the support team, and since I’m on Weekly Chart duty today, we’re gonna talk about fashion.

Despite its glamorous image, fashion is one of the most polluting industries in the world — think water waste and toxic chemicals released into rivers and landfills full of old H&M sweaters. It’s also riddled with a host of social and workers’ rights issues, and even disasters like the collapse of the Rana Plaza garment factory building in Dhaka, Bangladesh in 2013. This week is Fashion Revolution Week, an annual campaign to memorialize those workers and raise awareness about the industry’s impact on the people and the planet.

In light of that event, I thought it would be interesting to take a look at my own closet. Where were my clothes manufactured? How long have I owned them? Do I really own as much secondhand as I think I do? So I set out to sort through every. single. item. in my wardrobe, looking at the tags to find their origins, and deep in my head to remember when and how I’d acquired them (which I freakishly remembered for every item, give or take a few months).

One thing I quickly learned is that Europe does not require manufacturers to indicate the country of origin of your garments, although most do provide it. Combine this with damaged or cut tags, and I’ve got 46 items, or 31% of my wardrobe, from unknown origins. Among those that do have such information, 29 items were manufactured in China, which comes as no surprise as Asia is the biggest manufacturer when it comes to clothing. Another major player in the field is Turkey, with 13 items (9% of my closet), followed by Bangladesh at 9 items, or 6%. 

One thing most countries in this chart have in common is the fashion industry’s lack of transparency when it comes to suppliers and the working conditions in their factories. The truth is, I might know which countries my clothes come from, but I have no idea of the conditions in which they were produced. In addition to the Rana Plaza collapse in 2013, other scandals and tragedies surrounding the fashion industry occur on a regular basis. To cite one of the most horrifying examples, in the past two years information has emerged about big brands benefiting from the forced labor of the Uyghurs, a Muslim minority suffering massive repression by the Chinese government. (This list of brands that are involved is the most up-to-date one I can find.) There are also numerous reports about child labor in Turkey and in other countries, as listed by the U.S. Department of Labor in this report. The Fashion Revolution movement’s motto is “Who made my clothes?”, but that’s not so easy to find out.


Along with social and workers’ rights issues, the clothing industry is one of the most polluting in the world. Fast fashion never stops accelerating, garments have a very short turnaround in people’s closets, and literal tons of items are dumped in landfill every year. (See for example these surreal pictures of the Atacama desert, where 39,000 tons of clothing lie in the sun.) The waste is driven by fast-fashion retailers like the giant Shein, which according to Business of Fashion adds an average of 314,877 styles to its website every day, and has grown so much in the U.S. in the past three years that it now tops Swedish retailer H&M.

Even if the problem sits mostly on the big corporations’ side, as consumers we have a role to play — buying less, buying secondhand, and keeping clothes for longer are three easy steps that can reduce your impact. I’ve been shopping secondhand and swapping clothes for a long time now, but this is the first time I’ve taken a precise look at how much I shop, and how much of it is actually “new.”

Above, you can see the evolution of my clothes consumption and the split between new and secondhand. I’ve also been knitting since 2016, so since then a tiny share of my clothes has been handmade. I’m quite happy to see a big chunk of my closet is on the more sustainable side, even if there’s a notable increase in items bought new since 2020. (Online shopping as a coping mechanism for pandemic blues, anyone?) On average I acquire just over 15 items a year, which, believe it or not, is far below the average in some countries, like the USA (68 items) or the Netherlands (43.6 items, according to a 2014 study).

If we look at it by category there are big differences: all of my coats are secondhand. Oversized fits work well there, and vintage has more interesting pieces. But only 41% of my tops were secondhand, probably because good basics are hard to thrift. All in all, 52% of my closet is secondhand, which was a bit disappointing to me, but it’s important to remember that how long you keep an item is almost as important as how you buy it. I guess some items I’ve bought new but owned for more than ten years come close to qualifying as secondhand after all!
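A tally like the one described in this piece can be sketched in a few lines of Python. The records below are entirely hypothetical, not Margaux's actual wardrobe data:

```python
from collections import Counter

# Hypothetical wardrobe records: (category, origin country or None for a cut tag, how acquired)
wardrobe = [
    ("coat", "Turkey", "secondhand"),
    ("coat", None, "secondhand"),
    ("top", "China", "new"),
    ("top", "Bangladesh", "secondhand"),
    ("top", "China", "new"),
    ("jeans", "Turkey", "new"),
]

# Country-of-origin counts, grouping missing or damaged tags as "unknown"
origins = Counter(country or "unknown" for _, country, _ in wardrobe)

def secondhand_share(items):
    """Fraction of items acquired secondhand."""
    return sum(1 for _, _, how in items if how == "secondhand") / len(items)

overall = secondhand_share(wardrobe)
coats = secondhand_share([it for it in wardrobe if it[0] == "coat"])
```

The same per-category breakdown (coats vs. tops) falls out of filtering the list before computing the share.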


And you, what’s in your closet? Who made your clothes? Do you still have that one pair of jeans you’ve owned for fifteen years because there’s just none better? You can tell me all about it at margaux@datawrapper.de — and if you have ideas on which household items I should list and analyze next, feel free to suggest. Next week you’ll hear from Aya, one of my teammates from support.

23 Apr 19:54

The humanities are facing a credibility crisis

Aaron Hanlon, Washington Post, Apr 21, 2022

Hopefully this post is not blocked for you (it's not blocked for me). The argument here is that researchers in the humanities are undermining their position by being advocates for political change rather than neutral researchers and commentators. "The idea that humanities scholars are activists first and only then scholars leaves much of the public skeptical of the work we do," writes Aaron Hanlon. "It's clear there's a fork in the road. Down one path is understanding the humanities foremost as knowledge work and therefore requiring institutional and civic credibility to function and thrive... down the other path is understanding the humanities as a kind of pure activism committed to rejecting the values that govern institutional and civic credibility." To be honest, I'm not interested in either path. If the choice is to either capitulate or to agitate, I choose neither.

23 Apr 19:54

Insomniac Technologies

by Sierra Komar

They look like something you might buy from one of those minimalist, millennial jewelry brands, or an up-and-coming yoga and fitness studio. Understated titanium rings; silicone bracelets with natural stone settings; teardrop-shaped pendants of composite wood on simple, stainless-steel chains. The rings are neutral and unisex — the bracelets and necklaces more conventionally feminine — yet all are relatively casual, intended for everyday wear. To the untrained eye, it is difficult to tell that they have anything to do with sleep.

Sleep wearables promote rest reconfigured as productivity

Beneath their nondescript exteriors, however, each of these pieces is studded with a range of sensors capable of detecting body temperature, heart rate, motion, and more. As they sit discreetly on the pulse points, each amasses mountains of this biological data, processing it via complex algorithms into simplified, wellness-oriented reports that are made accessible to the wearer via an app. These reports provide information on things like activity level and overall health — some even claim to be able to detect pre-symptomatic cases of Covid-19. Most importantly, they offer insight into that most mysterious and inaccessible of all bodily functions: sleep. Do you want to clarify the contours of your nebulous eight hours and see just how much of it was spent in deep sleep, light sleep, and REM sleep? Just ask your bracelet. Do you want personalized recommendations about the best time to close your eyes to achieve your most optimal amount of rest? Just ask your ring. Light, compact, and virtually indistinguishable from their purely ornamental counterparts, today’s sleep-tracking jewelry pieces are the culmination of years of advancements in sensor design and computing power. And yet, they are but the latest in a long line of rest-related accessories and accoutrements: objects whose shifting material configurations express — and produce — equally diverse ideologies of sleep itself. 


In 1930, Edward and Elsie Hemphill of San Francisco, California filed a patent for a product called the “sleep eye shade”: an invention designed to protect its sleeping wearer from disruption by light. Consisting of a piece of soft, opaque fabric that fastened snugly over the eyes, the “sleep eye shade,” or sleeping mask, was one way of promoting rest amongst 20th-century city dwellers reckoning with a sudden rise in ambient electric lighting. Decades later, a host of artists and fashion designers would offer their own variations on this theme, producing an array of voluminous, textile-based items that aimed to bring the bed to the body. These include the down-filled dresses with built-in pillows designed by Italian postmodernist Cinzia Ruggeri (re-interpreted in 2005 by Viktor and Rolf), Norma Kamali’s sleeping bag jackets, and Martin Margiela’s iconic duvet coats. More playful than practical, these garments experimented with earlier, more pragmatic down-based construction techniques developed by workwear designer Eddie Bauer for his 1936 “Skyliner” jacket, the first puffer jacket to be patented in America. Intervening at the surface of the body, they approach sleep as an external phenomenon: one that can be encouraged through the regulation of environmental variables such as warmth and softness. 

In 1959, Hugh Hefner’s famous circular bed was installed in the Playboy mansion. Designed by architect R. Donald Jaye, the bed not only vibrated and rotated, but also came equipped with a variety of state-of-the-art media and communications technologies, including a telephone, an intercom, and a television. The bed of architect Richard Neutra, located at his VDL research house in Silver Lake, Los Angeles, boasted a similar array of hi-tech devices, including two phones, three intercoms, and a gramophone that could be operated via controls embedded in the headboard. A few years later, in 1969, Italian architect and designer Joe Colombo would unveil his own take on the tech-bed: a piece of furniture he called the “Cabriolet Bed.” Part of his futuristic “Living Machines” series, the bed merged a sleeping surface with a cigarette lighter, a radio, a television, and an electric fan.

These mid-century tech beds display a curious ambivalence — if not outright hostility — towards the biological function of sleep itself. Re-imagining rest as something to be combined with other, more industrious activities — reading, writing, corresponding with friends and colleagues — they appear far more concerned with staving it off than they are with encouraging it. As architectural theorist Beatriz Colomina has pointed out, they contain a premonitory glimpse of the state of constant connectivity that would be facilitated by the rise of the smartphone and other personal computing devices. They also represent an early, design-based manifestation of what Jonathan Crary has diagnosed as the widespread, institutionalized insomnia of our late-capitalist present. 

For Crary, contemporary life increasingly takes place in the confines of the “24/7 universe,” defined by its emphasis on constant productivity and its utter disregard for the natural rhythms of human life. Demanding from its subjects the same capacity for continuous, uninterrupted operation that is exhibited (or at least simulated) by inert systems like global markets and wireless networks, this universe has no room for sleep and the “incalculable losses it causes in production time, circulation, and consumption.” An act (or, more accurately, a mode of inaction) that brings both labor and the movements of accumulation — shopping, scrolling, downloading — to a forcible pause, sleep is anathema to the 24/7 universe. Correspondingly, it is also the target of increasingly aggressive efforts to lessen its frequency, inevitability, and overall hold over human life. Though caffeine has been a part of human life for centuries, the energy drinks of the past three decades encourage its consumption in volumes that far exceed those that naturally occur in coffee and tea, and that frequently verge on the dangerous. Twenty-first century amphetamines like Modafinil take this chemical stimulation to a new level, promising users the ability to stay awake for up to 48 hours. While both of these examples already make the hi-tech beds of Jaye, Neutra, and Colombo seem positively relaxing, recent experiments conducted by the U.S. Department of Defense best capture the ultimate goal of the 24/7 universe with regards to human rest. Their focus? The biological mechanisms that enable the white-crowned sparrow to remain awake for seven consecutive days. 


Rigid rather than soft, sleek rather than pillowy: the sleep wearables of today are the material opposite of the “analogue” wearables of the past. Eschewing more comfort-based approaches, they locate the key to a good rest inside the individual body itself, treating sleep as an internal, purely physiological phenomenon. Examples include Sleepon’s Go2Sleep ring (which recently added blood-oxygen monitoring to its roster of metrics), as well as a ring by the Australian company Thim, which incorporates aspects of sleep training as well as sleep tracking. For those interested in adorning other parts of their body, bracelets and necklaces from Bellabeat also offer significant sleep-related capabilities, along with other, more generalized wellness functions. By far the best-known sleep wearable is the eponymous ring made by Finnish company Oura, which as of March 2022 had sold over one million units. 

These mid-century tech beds display an ambivalence or outright hostility toward sleep itself

Although personal biometric tracking devices from Fitbit, Nike, and Jawbone had been available for at least a decade when the Oura ring was released, Oura’s was the first to claim sleep — rather than activity — as its raison d’être. First made available in 2015 under the slogan “Improve Sleep. Perform better,” it represented a significant mutation in the tech wearable insofar as it targeted the sleeping body, rather than the waking one. While earlier, activity-focused wearables could only provide users with rudimentary estimates of their total sleep time (if they incorporated any sleep functionality at all), the Oura ring offered calculations of the precise time their wearer had spent in particular sleep stages. It also established a feedback loop between sleep and performance, mobilizing activity data in service of sleep (through features called “actionable insights”) and quantifying, as a “readiness score,” the effects of sleep on waking activity. For example: If, according to the Oura ring’s evaluation, you had a night of poor sleep, you would be likely to receive a poor “readiness score” the following day. This score would likely be accompanied by words of encouragement to “train carefully” or “go easy” until your rest levels were restored. On the other hand, if you had accrued what the ring judged to be a good night’s rest, you would be likely to receive the opposite advice: a prompt to “go strong.” Although it now emphasizes both sleep tracking and more general wellness-related functions, sleep tracking remains the Oura ring’s best-known role. As a 2021 New York Times review of the device put it: “The Oura Ring Is a $300 Sleep Tracker That Provides Tons of Data.”

In the midst of our sleepless present, do sleep wearables not represent a glimmer of hope? Does their pursuit of the perfect rest not signal a valuable moment of respite in the ongoing war against respite itself? On the surface — and especially when positioned next to some of the more categorically anti-sleep technologies described above — the answer appears to be “yes.” Instead of attempting to outrun it through various chemical, pharmaceutical, or other yet-to-be-imagined means, sleep wearables underscore, in various quantified ways, the negative effects that sleep deprivation can have on waking life. Though detrimental for some, these quantifications — a low sleep score, a notification that encourages one to “take it easy” — are for others a necessary encouragement to slow down, stop moving, and set aside some time for rest. 

They also accomplish something else. By linking sleep with activity, the Oura ring promises significant improvements to both realms. As the company’s original Kickstarter put it: “By understanding how well you slept and recharged, [the Oura ring] can determine your readiness to perform and help you adjust the intensity and duration of your day’s activities. It can also uncover actionable insights for changes to your daily activities that can help you sleep better.” While this project is one of optimization — one that seeks to uncover an “ideal” sleep perfectly balanced to the needs of a particular individual — it is also one that effects a rhetorical transformation of sleep, from passive event to active process. Though the Oura app provides users with a “raw” sleep score, users are equally, if not more, likely to consult their “readiness score,” which presents sleep not as a valuable or necessary experience in itself, but as a mere substrate of performance: an invisible background factor in a user’s ability to “take on the day.” The ring’s “actionable insights” — “action” being the key word — have a similar effect, embellishing the fundamental passivity of sleep under a layer of activities and practices so that it too, in the end, resembles something that can be consciously undertaken. Although sleep wearables seem to be promoting rest, what they actually promote is rest reconfigured as productivity.

If you had what the Oura ring judged to be a good night’s rest, you would be prompted to “go strong”

A basic premise of the sleep wearable — whether it takes the form of a ring, a bracelet, a necklace, or a brooch — is that it lasts continuously through the night, something reflected in the long battery life of even the most inexpensive devices. A properly charged wearable is thus always “awake,” functioning as a kind of surrogate, low-level awareness that persists even while the user is forced to abdicate their own. More significantly, these devices do not just simply stay awake: they record. No longer content to let sleep slip away like a fog with the light of morning, sleep wearables disturb the fundamental psychic balance implied by Freud’s “mystic writing pad” which, though it preserves every trace, stores most of them in a place inaccessible to the conscious mind. Unlike this “mystic writing pad,” the digital inscriptions of the sleep wearable are always at hand: accessible with the push of a button.

Not only do these devices make sleep optimizable; they also gesture strangely toward transforming it from an unconscious act into a conscious one. Underneath their professed “sleep-positivity,” sleep wearables are insomniac technologies. 


It could be argued that the shifts detailed above have little to no effect on the biological reality of sleep itself. Though they may re-sculpt it in the figurative domain, that could never be enough to change sleep’s physical inevitability, nor its constitutive qualities of passivity and unproductivity. And yet, it is worth asking if their effect on our cultural imaginary around rest could not have the additional, more insidious effect of paving the way for future incursions into this already threatened domain. As we become accustomed to the increasing convergence of sleep and action — and increasingly unaccustomed to experiencing sleep outside frameworks of production and activity — who knows what further reductions, adulterations, and debasements to our repose we might accept? There is also the simple, non-rhetorical fact that, as they wrest strings of manipulable data from our most base-level physiological functions, sleep wearables quite literally transform rest into an extractive resource. Under their endless watch, sleeping bodies become productive ones, a process that mounts a disturbing challenge to Crary’s optimistic assertion that “in spite of all the scientific research…[sleep] frustrates and confounds any strategies to exploit or reshape it.”

In a 1961 essay entitled “From Gemstones to Jewelry,” Roland Barthes makes the polemical assertion that modern jewelry has lost its “infernal aspect.” This is, he argues, because it no longer makes substantial use of gemstones — the ultimate relic of the underworld — but rather understated, earthly materials such as glass, wood, and inexpensive metal. For Barthes, today’s jewelry is just another object: something banal, unremarkable, and definitively un-supernatural. “Having originated in the ancestral world of the damned,” he writes, “the piece of jewelry has in one word become secularized.” With their neutral palettes, composite wood, and inoffensively minimalist design, the sleep wearables of the past decade initially seem to confirm this thesis. And yet: There is something damned, or at least paranormal, about rings that transubstantiate the vapors of sleep into visible, instrumental data. Bracelets whose invisible eyes are always open, even while yours are closed.

23 Apr 19:53

Elon Got His Money

by Matt Levine
Also high-water marks, arbitration and debt-for-nature swaps.
23 Apr 19:53

Web Components as Progressive Enhancement


I think this is a key aspect of Web Components I had been missing: since they default to rendering their contents, you can use them as a wrapper around regular HTML elements that can then be progressively enhanced once the JavaScript has loaded.

Via Alex Russell
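
A minimal sketch of the pattern (the `<copy-button>` element name and its behaviour are invented here for illustration, not taken from the linked post): the wrapped markup renders as ordinary HTML immediately, and the script upgrades it only once JavaScript is available.

```html
<!-- Works without JavaScript: the inner HTML renders as-is. -->
<copy-button>
  <pre><code>npm install some-package</code></pre>
</copy-button>

<script>
  // Progressive enhancement: this only runs if the script loads,
  // at which point the element is upgraded with a Copy button.
  customElements.define('copy-button', class extends HTMLElement {
    connectedCallback() {
      const button = document.createElement('button');
      button.textContent = 'Copy';
      button.addEventListener('click', () => {
        navigator.clipboard.writeText(this.querySelector('pre').innerText);
      });
      this.append(button);
    }
  });
</script>
```

If the script never loads, readers still see (and can select) the code block; the unknown custom element simply renders its children.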

23 Apr 19:53

Small Home Tour: Rosa, Husband and Baby in 850 square feet

by Alison Mazurek

I love peeking into others’ apartments and seeing a moment of their lives, but more importantly I’m looking at their layout, light fixtures, and art, always gathering ideas. I find dusk the perfect time to go for an evening walk and peek in at apartments. Often people haven’t drawn their curtains or blinds yet so you can see in. We’ve taken to walking to our local library one evening each week, and Trevor often teases me as I’m looking into apartments and homes. Anyway, I think that’s why I love these Home Tours so much. We get a glimpse into someone’s home without being a creep! They invited us!

Today we have Rosa in Vancouver with this lovely 850 square foot apartment that is a portion of a home. I truly wish more single family homes were zoned for multi-family residences. I think Vancouver is moving in that direction but it’s not fast enough for me. I just think of all the environmental and social benefits. Anyway I’ll leave you with Rosa and her lovely, calm space.

Intro

 

Hi, I’m Rosa. I grew up on Vancouver Island but I’ve lived in Vancouver for over 15 years so this city really feels like home. You can find me on Instagram @rosameyer.

 

How big is your home and what is the layout?

 

We live on the top floor of a house and our living area is about 850 square feet. We have two teeny-tiny bedrooms, a small bathroom, and a good-sized open-plan kitchen, dining room and living area. The open floor plan, combined with the high ceilings, makes our space feel a lot bigger than it is.

Dining Room

Kitchen

 

Who lives there?

 

My husband and I along with our 1-year-old daughter Jane and our beagle mix Arthur.

 

Tell me about your choice to live small. Was it a conscious decision or did it just evolve?

 

As with many of us in cities, it’s been a bit of necessity mixed with my personal desire to keep our space and belongings simple and manageable.

I feel like I’m on quite a journey with living small. I’m very sensitive to clutter and disorganization in my environment. I try to be such a good minimalist but, honestly, it’s hard! Especially with a baby. I feel like I spend so much time going through our belongings (particularly baby items) organizing, storing, editing and re-organizing. It’s exhausting, and I feel like I’m probably doing it wrong, because isn’t that the whole point? Isn’t staying organized supposed to feel effortless now because I’ve already put energy into paring everything down and making it simple and aesthetically pleasing?!

Anyway, it’s a goal I keep working towards because I actually just can’t imagine another way. I’m very drawn to a pared-down space and lifestyle and, when I do manage to achieve that (even in a small way), I feel a real sense of peace and well-being around it.

All this to say that I do feel proud of my home. It’s a cozy and (generally) peaceful space for me and my family. However, it still feels like quite a bit of work to keep it feeling/looking like that.

Main Bedroom

 

How would you describe your home style (e.g. modern, minimal, bohemian, vintage)?

 I am a lover of vintage (particularly modernist pieces) and I also love minimalism so it’s a constant give-and-take between the two.  It’s hard to appreciate vintage and not become something of a collector (hoarder?). That said, buying used or buying vintage has made it possible for me to buy much better quality in a lot of cases than I would have gotten purchasing new. I think this works to my advantage as I get pieces that I love and cherish for years rather than more disposable-feeling pieces that I’ll want to replace much sooner.

 

Is there a piece of furniture or accessory that you couldn't live without that makes living in your space easier?

 Living small with a baby would be much harder without a white noise machine! I also really love the little Ikea wardrobe that I put into Jane’s room. We left the legs off of it to keep it at a lower height. It works as a change table and will be great when she’s just a bit older and can start to pick her own things out at her level. I also love that it's not "kid furniture" per se. I can use it as extra storage in any future bedroom (kid's or adult's) or I could even see it in an office down the road. 

Kid Room

What is something you love about living small?

 

I love that having a limited footprint makes it that much more attainable to live only with the things that we really love. We don’t need to fill up a big space, in fact, we need to get really honest about what is important enough to take up real estate in our home and in our lives. I love that.

Kid Room

What is something you hate?

 

My husband would say that he hates having our laundry in the main living area. This surprisingly doesn’t really bother me. That said, we stayed in an Airbnb recently with a little laundry room off the kitchen and I have to say it was quite a delight closing the door and not having laundry to look at. 

 

Honestly, in spite of my complaining about having to organize all the baby stuff, I don’t think this would be better if I had more space. I would probably just hold onto more and feel more overwhelmed by all of it. I  actually think that our current setup is the perfect amount of space for our family as it exists right now. I do see challenges down the road as we would love to have another baby. It’s certainly doable in our current home but the size of the bedrooms will make it more interesting. There is also zero privacy but that isn’t really a thing with toddlers anyway.

Living Room

Open Floorplan Living and Dining

 

I think Small Space-ers need to stick together and share all their best tricks. Do you have any storage or organizational tips you want to share? 

 

My best “trick” by far has been to find a color palette that works for us and to stick to it pretty religiously. This is obviously true for our entire home (as you can see in the photos) but I apply it to my closet and even to my daughter’s clothing and belongings. More than anything else I’ve done, sticking to a tonal color scheme just keeps things feeling calm for me. It truly brings me peace and sparks joy (haha). Even when things are a bit messy, I find it’s more of a “beautiful mess” when there aren’t a lot of clashing colors.

It should be said that my commitment to raising my daughter in a “colorless world” has become fodder for more than a few jokes from my family and friends. I also just discovered the Instagram @sadbeigetoys and it's basically a satire account directed at my life choices. It’s hilarious. I love it. I have chosen a pretty neutral scheme that certainly isn’t for everyone. For what it’s worth, I do think the same principle would absolutely work with more vivid colors.

Rosa and baby

Patio (ed. note: I love that we have the same lattice on our patio, but yours is painted white and mine is black!)

Thanks again Rosa, for opening up your beautiful home to us. I think you will be comfortable there for longer than you might think ;).  

Photos by @pineconecamp and @russelldalby

 

23 Apr 19:52

Sometimes, 'we told you so' is all that can be said

by Chris Grey
I don’t have time this week for my normal long post, but in any case last Friday’s post largely continues to cover the current political situation, dominated as it is by Boris Johnson’s lawbreaking, his lying, and his refusal to resign over them. It is a scandal described by the eminent, and generally measured, political historian Professor Lord Hennessy as “the most severe constitutional crisis involving a Prime Minister” that he can remember, with Johnson “the great debaser in modern times of decency in public and political life”. Yet neither the Tory MPs and party members who made him their leader, nor the voters who elected him, can say that they were not told what he would be like.

It is, as ever, worth recalling that Johnson and Brexit are as inseparable as a dog and its vomit. Yet even if the Prime Minister ends up being toppled by Partygate that will only remove the dog from the metaphor. That aside, this week’s Brexit news has been a case of fresh reports of the ongoing slow puncture effect so long warned of. Yet another firm departing Britain for the EU, this one the largest employer in leave-voting Newark, with the loss of 110 jobs. The realisation that EU regional development funds won’t be fully replaced, this time in leave-voting Wales. The realisation, reported in the leave-supporting Telegraph, of the high cost of post-Brexit pet passports (£). A Kent MP bemoaning the weeks of traffic chaos in her county, as if it had nothing to do with the Brexit trade agreement she voted for.

Warnings become facts

It’s a gradual accumulation of harms, each contributing a little more to the ‘rot’ discussed in my previous post (the rot motif also featured prominently in the media last weekend [e.g. here and here], suggesting I’m not alone in detecting the stench). As that accumulation proceeds we are seeing the warnings of Project Fear turn remorselessly into established fact.

One story this week provides a particular illustration of that process. It concerns the number of British firms setting up distribution hubs in the Netherlands, especially, in order to service the EU market (£). This enables them to avoid delays and other costs, such as multiple VAT registrations, but of course also impacts negatively on employment located and taxes paid within Britain. The reason this story is interesting is not because it is news, so much as because it isn’t. That’s to say, as the report makes clear, this shift has been under way since 2017 and was in part a consequence of all of the uncertainty about what the future trading relationship would be, as well as, later, a response to what it actually turned out to be.

The point is that it’s only now that it’s possible to see the scale of what has happened. In 2017 and subsequently commentators, including me in my own small way, were talking of how there would be such business decisions going on, unreported and unknown outside of the companies themselves, which would, indeed, be slowly damaging the UK economy. But that could be dismissed as alarmism or speculation in the absence of publicly available information, though even without such information it was obvious to anyone with a cursory understanding that it must be happening. Now we can see that it was the case.

Control of our borders: trouble ahead

Versions of this are going to go on happening for years as more and more of the effects of Brexit come to light though, just as the warnings were dismissed as speculation, when they come true it will be dismissed as ancient history. This is likely to be true of the heavily-trailed decision, reported with growing certainty this week, to yet again postpone the introduction of UK import controls, which now seems to be imminent.

Whilst being, preposterously, talked of by Jacob Rees-Mogg as if it is a benefit of Brexit to be able to ‘decide our own control system’, the reality is that the government has been forced to realise that the country simply cannot afford the consequences of the form of Brexit it chose, negotiated and celebrated. So, at least on those aspects of its implementation that the government controls, it seeks to avoid those costs without actually admitting to its errors or acknowledging that the warnings about them were true.

However, as I explained in a recent post, all this does is to create a new set of costs and risks, including those of a human or animal health scandal. As with the distribution hubs issue, at the moment that can be dismissed as speculation and scaremongering. If and when it happens it will be too late. So whilst ‘we told you so’ is one of the least appetising of personal or political reactions, it is one the Brexiters have forced upon those of us who warned, over and over again, what the consequences of all their decisions would be. What else are we supposed to say? What else can we say? It’s not – for me at least – said with any relish, just with resignation.

Control of our borders: contradictory promises

In a somewhat similar way, the news that there has been a huge rise in post-Brexit immigration from non-EU countries (£) is hardly likely to be welcome to at least one significant swathe of leave voters. Certainly, despite the Express’s headlining of the story as being one of ‘Brexit Britain showing its power’, its readers’ comments evince little enthusiasm for this development. But those paying attention during the referendum campaign would have noticed that voters were being given contradictory messages, which couldn’t all be true.

The loudest one, no doubt, was that Brexit would reduce or even end immigration. In a notorious vox pop, one voter said he voted leave “to stop Muslims coming into the country”. But ethnic minorities from non-EU countries were being promised easier rules for their relatives and ‘globalist’ Brexiters were saying, if not very loudly until the day after the referendum, that, overall, Brexit might very well mean higher net migration.

Personally, I think the news on immigration is welcome and necessary, in and of itself. However, it is crucial to recognize that ‘migration’ and ‘freedom of movement of people’ are very different things, with the latter, which has been lost, very much preferable. It is preferable economically, because it makes it easier and far less bureaucratic to employ workers from overseas. It is preferable socially, in enabling an easy, flexible and genuine intermingling of families and cultures compared with a system based on time-limited work visas and points. And, much forgotten in the whole discussion, freedom of movement was a right that British people also had and have now lost, and with no compensatory easing of their ability to move to non-EU countries.

So, once again, as the realities of Brexit gradually emerge, so too does it emerge that pretty much no-one – leave or remain voters alike – is getting what they wanted or were promised. Whose fault is that? Not really those who voted to leave, and obviously not those who voted to remain. Certainly not those who warned as strenuously as possible of what would happen if we left. That only leaves those who made the false promises, those who insisted that Brexit must be done in the form it has been, and those who negotiated that Brexit. There are plenty of names on that list, but Johnson’s is at the top.

Still lying, still not learning

That is why the present debate, although it is about his refusal to take responsibility for breaking Covid rules and his lies about having done so, stands as a proxy for a much bigger one. When will he and his many adjutants take responsibility for their lies about Brexit? If ever they do, there will be no need and little force in saying ‘we told you so’. Brexit might instead become a shared problem to be solved, a national error to be mitigated and, even, corrected. Until they do, Brexit remains their responsibility, their mess, their guilt, their shame, and their legacy.

Certainly that is the situation now, as they continue to double down on the dishonesty. This week, Rees-Mogg, in his trademark tones of contemptuous yet ill-informed arrogance, again suggested that the government is planning to break the terms of the Northern Ireland Protocol, something seemingly confirmed by subsequent reports (£). Whilst noting the widespread comment that the government had been told that signing up to it meant being bound by it – in other words, that ‘we told you so’ – he shrugged this off as “absolute nonsense”.

Instead, he advanced the bogus rationale that the government had only signed it in the expectation that it “would” be changed. That would be an absurd enough thing to say of an international treaty in itself, but in the very same breath Rees-Mogg spoke, correctly, of the fact that the Protocol has provisions such that it “could” be changed. There’s obviously all the difference in the world between ‘would be’ and ‘could be’, and what makes his dishonesty, like Johnson’s, all the more grotesque is that it is so brazen.

Equally grotesque is the refusal to learn from past mistakes. In this case, these include the profound damage to the UK’s reputation done by the previous flagrant threat to illegally renege on the Protocol in the Internal Market Bill in 2020, and by the probably illegal unilateral extension of the Protocol grace periods in 2021. The Brexiters were warned of the damage, pressed ahead anyway, and the damage was done. To continue on the same path now is all the more irresponsible given that the Ukraine War makes international solidarity and co-operation so crucial.

All that can be done is warn them, yet again, of their impending folly. Sir Jonathan Jones, who was Head of the Government Legal Service until he resigned over the illegal clauses of the Internal Market Bill, has done just that over Rees-Mogg’s latest threat (£). Will such warnings make a difference? Very likely not. In which case, once the next bit of damage is done, there will be nothing else to say except, once again, ‘we told you so’.

Brexiters hold the key

It’s true that ‘we told you so’ gets us nowhere but, until the Brexiters – by which I mean, as always, the leading advocates rather than ordinary leave voters - admit that ‘you told us so’, it’s all that can be said. If they ever make such an admission then, as with individual leave voters who regret their choice, there can and must be generosity of spirit. Then, there can be soothing words about how Brexiters were well-intentioned, and meant for the best. But until then there’s no point being mealy-mouthed: the lies of Brexiters have been totally discredited.

As a country, we can’t and won’t ‘move on’ until there’s been an honest reckoning of where we are and how we got here. Only Brexiters can get us to that point, and if an admission of their failure is too much to realistically expect then at least a withdrawal into silence and out of public life might suffice. But, as with Johnson, there is no sign whatsoever of them doing so, perhaps because, as with Johnson, they are bereft of both honour and shame.

23 Apr 19:43

We are globally burnt out and we need a global reset. How to create a global system of care?

by Raul Pacheco-Vega

I have a hypothesis about what we are seeing right now: students, faculty, everyone on the planet is exhausted by this pandemic. This absolute exhaustion is yielding poor outcomes on everything: low attendance, missed work, missed deadlines.

We are all burnt out, folks.

And herein lies the rub: we can’t individually self-care out of this rut. Somehow we need to engage in global collective action to create conditions that make our lives slower and gentler so we can recover from the global collective grief and loss that we’ve experienced.

We can’t “return to normal”. We need to create a new normal where risk assessment and policy choices are not individualized but the result of collective genuine care for one another.

We literally need a global transformation. We aren’t “returning to normal” because there isn’t one to return to.

The old normal was leading us down a spiral of collective burnout. This pandemic highlighted how poorly prepared our societies were and are for shocks of this magnitude. The old normal left us with many stressed individuals trying to cope with dwindling resources.

Terrible.

This new normal needs to create systemic conditions of care that value human lives across multiple groups. Otherwise we will just try to return to a system where we try to recreate the contexts that left us in this rut in the first place.

Time for a global transformation.

23 Apr 15:38

Look before you leap.

by swardley

The different types of situational awareness.

I’m a bit of a stuck record when it comes to supply chains, situational awareness and mapping. In order to get myself off the subject, I thought I’d have a bit of a rant about this bugbear of a topic. I’m going to do this with the aid of multi-domain defence.


Setting the scene.

Why do we need defence? Let us start with Government.

Government needs a society i.e. something to govern. This doesn’t mean society needs government, however that’s a conversation for another time. A society also needs citizens otherwise you’re governing the society of yourself.

Beyond a society to govern, it also needs legitimacy in governing. That legitimacy can take many forms but if some other Government can exert its perceived legitimacy more than yours then you won’t be governing — they will.

Legitimacy implies something to be sovereign over such as a physical territory. Sovereignty implies some sort of theatre to act within. For example, land or sea or air. To operate in that theatre assumes there is actually a landscape to operate within, that you have the capability to do so and some way of operating (an operating model). Finally, a theatre does imply that you are aware of the existence of the landscape. This will be our basic scene for discussing defence and I’ve drawn it up in a map.

Step 1 — the basic scene.


Belonging and success.

Societies have norms of behaviour — a combination of principles of operating (or heuristics) with the beliefs (or values) of that society. But having those norms is not enough, a society must be successful in spreading or at least maintaining its norms. Citizens also require a sense of belonging which in turn requires some element of trust and an appearance of success. A key thing to note, appearance of success is not the same as actual success i.e. it’s perfectly possible to convince a population that we’ve “won this war” when the opposite is happening.


Trust

On the map we have several pipelines (those rectangular blocks). Whilst most of a map represents statements of logical AND (this needs this AND that), a pipeline represents a non-exclusive OR. For example, trust consists of many components which share a meaning of “trust”. This includes integrity (I trust you to do what you say), benevolence (I trust you to work for my benefit) and competence (I trust you have the skill). You can choose to be competent but lack benevolence or integrity, you can choose to have all these components, or you can choose to be weak at all of them. The level of trust will depend upon how much of each you invoke.

Our understanding of what these components mean is also evolving. It should be noted that all components evolve, the arrow in the pipeline doesn’t mean one evolves to the other but that they are all evolving independently.

Naturally, the components of trust are connected to other components — competence is tied to success, norms of behaviour are tied to integrity, benevolence is a key part of belonging. Benevolence doesn’t mean it won’t cause you harm, abusive arrangements can be based upon a “greater” threat i.e. you sacrifice rights to be protected against a perceived “greater” threat.

Sovereignty

Moving down the map, we can now flesh out the concept of sovereignty. Typically we think of this as territorial sovereignty (i.e. protecting our land), but there are many elements to sovereignty — protecting our culture, our political systems, our economy and even our technology (i.e. digital). For a Government to have legitimacy it requires control over the political system rather than being appointed by someone else (i.e. a puppet government of others). If it is a puppet then those others are exerting their legitimacy to determine how the political system works.

Success is often tied to protecting “our” land along with our economic and technological progress. Integrity and benevolence are often tied to our cultural systems and the stories we tell — the myths of great deeds or tales on the balance of conflict, i.e. robbing the rich to help the poor (Robin Hood) versus robbing the poor to pay the rich (Sheriff of Nottingham). The “appearance” of success and integrity is also tied to our political systems. The society might be failing, but we can still build belonging if our political system has the social capital to tell a few fibs.

One thing to note is that all maps are imperfect representations of a space. What I’m showing you are my assumptions and my biases on the subject. Hence, all maps are open to challenge and improvement.

Theatre

Sovereignty applies to a theatre. For example, physical sovereignty applies to land, sea, air and space. Digital sovereignty applies to the area of cyber, including technology and social media. Cultural sovereignty applies to the theatre of art. These theatres are where we compete with others. There are many forms of competition, including conflict (fighting others), co-operation (helping others) and collaboration (working with others).

Let us take the theatre of art. Hollywood was historically used to spread Western culture through the medium of film. Today, the most significant theatre of art is immersive video games, hence organisations like Hezbollah produce video games. These might seem unusual theatres of competition, but we shouldn’t limit ourselves to land and sea. Areas such as social media can be used to enhance or undermine (through radicalisation) our political systems. The economy itself is a theatre of war, from sanctions to exclusions such as the recent use of the SWIFT protocol.

Critical national infrastructure (CNI)

Another theatre is CNI. This covers many areas, from food to water to energy to telecoms. I’m going to expand this one out because it is tied to the appearance of success. It’s hard to use political capital to convince the people of the society you govern that things are going well when they are starving or lack access to fresh water or heating for their homes.

Other theatres are also tied to elements of CNI. You can’t manage the theatre of cyber if you’ve lost control of telecoms. It’s hard to have an economy if you’ve lost control of finance.

Landscapes (Territory)

These theatres apply to landscapes. The one we are most familiar with concerns physical sovereignty, i.e. our land, our borders, our territory. Of course, territory itself varies a lot, i.e. France is not the UK, which is not Germany. Understanding your landscape and the elements within it (situational awareness of the physical landscape) is critically important. If you don’t know where your resources are, where an attacker is, or the type of land, then you will be at a severe disadvantage. Sending tanks to defend against an invader might sound sensible, but it quickly degenerates into farce when you discover that the land you’ve sent the tanks to is a swamp, the invader has significantly more numbers, and they’re not even at the swamp you’ve sent the tanks to.

Fortunately, we’ve learned these lessons over time and are pretty good at physical awareness. We have built a host of systems to enable this, from radar to accurate maps.

Landscapes (Supply chain)

Whilst some of our theatres operate over landscapes we are familiar with, not all of them do. For example, economic and cyber theatres, and elements of CNI, operate over supply chains. Even our capabilities are built on supply chains.

Supply chains themselves are more uniform than territory — there are only so many ways to build a bullet or to make a window. Unfortunately, our awareness of those supply chains is in general … atrocious. The UK has been through a number of shocks, from Brexit to Covid, in which a better understanding of our supply chains would have helped. Today, we face conflict related to Ukraine and shocks such as climate change. The problem with acting in theatres (such as the use of sanctions) is that if you don’t understand the supply chain landscape then … well, it’s a bit like dropping a bomb on a physical landscape without looking at where you are dropping it. Targeted approaches do require you to look at the space.

A note to the future

Across these theatres, our ability to defend ourselves and others (whether from shocks or conflict) and our ability to effectively co-operate and collaborate with others depends upon our awareness of these landscapes. Supply chain attacks across software, across physical goods, across CNI, across our political systems (in a future of AI generated video content when you can’t tell whether the person speaking to you is real or not), across our cultural systems (the use of radicalisation, the alteration of history) will become more common over time. Kinetic warfare (throwing deadly stuff at others) is expensive and opponents will look for asymmetric advantages.

For the last decade, I have repeatedly watched the executive functions of governments and corporations ignore these issues and be outplayed by others. Let me be clear: our weakness is our lack of situational awareness over our supply chains, from digital to physical. These spaces are dominated by stories and graphs, with rarely a map to be seen.

I’m hoping that in the future we embrace a concept of defence far beyond land, sea, air, space and cyber, one built upon situational awareness not just of territory but of the landscape of supply chains.

Feel free to take the map and modify it yourself. I’d like to see someone improve the map. As I said, it comes complete with my own biases, inertia and assumptions and should be challenged — https://github.com/swardley/Research2022/tree/main/defence/additions

Rant over.

23 Apr 15:37

Internet spring cleaning: How to delete Instagram, Facebook and other accounts

by Kristina Bravo

So you’ve washed your sheets and vacuumed under the couch in the name of spring cleaning. But what about your online clutter?

Apps that delight, inform and keep you connected deserve room in your digital space. But if you haven’t used your Facebook account in years, or you’re looking to permanently take back some time from doomscrolling, you might consider getting rid of some accounts once and for all. Of course, like other services that profit off of you – in this case, off your user information – social media and other online platforms can make it hard to cut ties. Here’s a guide to that elusive delete button for when you’ve made up your mind.

How to delete Facebook 

On Firefox, you can limit the social media giant tracking your online activities through the Facebook Container extension without deleting your Facebook account. Though, *wildly gestures at news headlines over the past couple of years*, we don’t blame you if you want to stop using the app for good. You may have even deactivated your Facebook account before and found that you can log right back in. Here’s how to actually delete Facebook through the mobile app:

  • Find the menu icon on the bottom right.
  • Scroll all the way down.
  • From there, go to settings & privacy > personal and account information > account ownership and control > deactivation and deletion.
  • Select delete account, then continue to account deletion > continue to account deletion > delete account.
  • Enter your password, hit continue and click on delete account. Facebook will start deleting your account in 30 days; just make sure not to log back in before then.

Here’s how to delete Facebook through your browser:

  • Find the account icon (it looks like an upside-down triangle in the top right corner).
  • From there, go to settings & privacy > settings > your Facebook information > deactivation and deletion.
  • Select delete account, then continue to account deletion > delete account > confirm deletion.

It may take up to 90 days to delete everything you’ve posted, according to Facebook. Also note that the company says after that period, it may keep copies of your data in “backup storage” to recover in some instances such as software error and legal issues. 

More information from Facebook

How to delete Instagram 

So you’ve decided to shut down your Instagram account. Maybe you want to cleanse your digital space from its parent company, Meta. Perhaps you’re tired of deleting the app only to reinstall it later. Whatever your reason, here’s how to delete your Instagram:

  • Visit https://instagram.com/accounts/remove/request/permanent and log in, if you aren’t already logged in.
  • You’ll see a question about why you want to delete your account. Pick an option from the dropdown menu. 
  • Re-enter your password.
  • Click on delete [username].
  • When prompted, confirm that you want to delete your account. 
  • You’ll see a page saying your account will be deleted after a month. You’ll be able to log in before then if you choose to keep your account.

More information from Instagram

How to delete Snapchat

Whether you’ve migrated to another similar social media platform, or have simply outgrown it, you may be tempted to just delete the Snapchat app from your phone and get on with it. But you’ll want to scrub your data, too. Here’s how to delete your Snapchat account from an iOS app:

  • Click on the profile icon on the top left, then the settings icon on the top right.
  • Scroll all the way down and hit delete account.
  • Enter your password then continue. Your account will be deleted in 30 days.

Here’s how to delete your Snapchat account from your browser:

More information from Snapchat

How to delete Twitter

Twitter can be a trove of information. It can also enable endless doomscrolling. If you’d rather get your news and debate people on the latest hot take elsewhere, here’s how to delete your Twitter account from a browser:

  • Once you’re logged in, click on more on the left-hand side of your Twitter homepage. 
  • Click on settings & privacy > your account > deactivate your account > deactivate.
  • Enter your password and hit confirm.

Here’s how to delete your Twitter account from the app:

  • Click on the profile icon, then go to settings and privacy > your account > deactivate your account > deactivate.
  • Enter your password.

More information from Twitter

How to delete Google

Google’s share of the global search market stands at about 85%. While the tech giant will likely continue to loom large over our lives, from search to email to our calendars, we can delete inactive or unnecessary Google accounts. Here’s how to do that:

More information from Google

How to delete Amazon

Amazon has had its fair share of controversies, particularly about data collection and how the retail giant treats its workers. If you’ve decided that easy access and quick deliveries aren’t worth the price anymore, here’s how to delete your Amazon account:

  • Go to https://www.amazon.com/privacy/data-deletion.
  • Make sure to read which Amazon services you won’t have access to after you delete your account. 
  • Check “Yes, I want to permanently close my Amazon Account and delete my data.”
  • Hit close my account.
  • Check your text messages or emails for a notification from Amazon.
  • Click on the confirm account closure link. 
  • Enter your password. 

More information from Amazon

How to delete Venmo

Payment app Venmo has made it easier to split bills and pay for things without cash. But if you’ve decided to use other ways to do that, you’ll want to delete your account, along with the bank information attached to it. You’ll first have to transfer any funds in your Venmo account to your bank account. Another option: return the funds to the sender. If you have any pending transactions, you’ll need to address them before you can close your account. Once you’re set, here’s how to delete your Venmo account on your browser:

Here’s how to close your Venmo account in the app:

  • On the bottom right, click on the person icon.
  • On the top right, go to settings by clicking on the gear icon.
  • Click on account > close Venmo account > continue > confirm.

More information from Venmo

How to delete TikTok

TikTok has exploded in popularity, surpassing Twitter and Snapchat’s combined ad revenue in February. If you’ve tried the app and decided it’s not for you, here’s how to delete your TikTok account: 

  • In the app, click the profile icon on the bottom right.
  • Click the three-line icon on the top right.
  • Click on settings and privacy > manage account > delete account.
  • Follow the prompts.

More information from TikTok

How to delete Spotify 

Whether you want to follow in Neil Young’s footsteps or are already streaming music and podcasts through another service, deleting your stagnant Spotify account is a good idea. If you have a subscription, you’ll need to cancel that first. Once you’re ready, here’s how to delete your Spotify account. 

More information from Spotify

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. While you’re off to a good start with our one-stop shop for deleting online accounts, it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to. 


The post Internet spring cleaning: How to delete Instagram, Facebook and other accounts appeared first on The Mozilla Blog.

23 Apr 15:37

2022-04-22 BC (tiny)

by Ducky

There are several charts which come out on Fridays, so I will make this tiny little stub of a blog post just for those charts.

Charts

From the Google Sheet of my buddy Jeff (who says that yes, he collects the data manually, but it’s cut and paste, so it’s unlikely to be numerically incorrect):

See also the Lower Mainland Wastewater Twitter bot if you don’t like Jeff’s charts. Or check Justin McElroy’s Twitter feed also — he usually posts a wastewater chart some time on Fridays.


The vax charts I published yesterday were six days old. Here are the ones published today. From the federal vax page, vax uptake by age over time for British Columbia, doses 1, 2, and 3:

23 Apr 15:22

Introducing Dayana Galeano

by Rizki Kelimutu

Hi everybody, 

I’m excited to welcome Dayana Galeano, our new Community Support Advocate, to the Customer Experience team.

Here’s a short introduction from Dayana: 

Hi everyone! My name is Dayana and I’ll be helping out with mobile support for Firefox. I’ll be pitching in to help respond to app reviews and identifying trends to help track feedback. I’m excited to join this community and work alongside all of you!

Since the Community Support Advocate role is new for Mozilla Support, we’d like to take a moment to describe the role and how it will enhance our current support efforts. 

Open-source culture has been at the center of Mozilla’s identity since the beginning, and this has been our guide for how we support our products. Our “peer to peer” support model, powered by the SUMO community, has enabled us to support Firefox and other products through periods of rapid growth and change, and it’s been a crucial strategy to our success. 

With the recent launches of premium products like Mozilla VPN and Firefox Relay, we’ve adapted our support strategy to meet the needs and expectations of subscribers. We’ve set up processes to effectively categorize and identify issues and trends, enabling us to pull meaningful insights out of each support interaction. In turn, this has strengthened our relationships with product teams and improved our influence when it comes to improving customer experience. With this new role, we hope to apply some of these processes to our peer to peer support efforts as well.

To be clear about our intentions, this is not a step away from peer to peer support at Mozilla. Instead, we are optimistic that this will deepen the impact our peer to peer support strategy will have on the product teams, enabling us to better segment our support data, share more insightful reports on releases, and showcase the hard work that our community is putting into SUMO each and every day. This can then pave the way for additional investment into resources, training, and more effective onboarding for new members of the community.

Dayana’s primary focus will be supporting the mobile ecosystem, including Firefox for Android, Firefox for iOS, Firefox Focus (Android and iOS), as well as Firefox Klar. The role will initially emphasize support question moderation, including tagging and categorizing our inbound questions, and the primary support channels will be app reviews on iOS and Android. This will evolve over time, and we will be sure to communicate about these changes.

And with that, please join me in giving a warm welcome to Dayana!