Shared posts

28 Feb 22:48

Native Development

by Rui Carmo

I’ve been on a long-term quest to find a simple, fast, and user-friendly way to develop native applications for a variety of platforms, and this page holds the results of that research.

What I am looking for is something that is:

  • Not Electron: I want to be able to write code that runs natively on the target platform without having to rely on a web browser or a large runtime.
  • Fast: I want to be able to iterate quickly and not have to wait for a long time for builds or deployments.
  • Simple: I want to be able to write code that is easy to understand and maintain.
  • Native Looking: I want to be able to write code that looks and feels like a native application on the target platform.
  • Portable: I want to be able to write code that runs on a variety of platforms without having to rewrite it from scratch.
  • Open-source: I want to be able to use code that is open-source and doesn’t have a lot of restrictions on how it can be used.

We have lost the ability to do RAD (Rapid Application Development) due to the prevalence of web applications, and I personally don’t like that, so I’ve been looking for a way to fix that within a small set of tools I want to build.

Resources

  • C: FLTK (2024; Windows, Linux, macOS). A classic, but aesthetically challenged.
  • C#: Avalonia (2023). XAML-based. Likely the best approach for bigger apps. Not very native-looking on the Mac, but has good documentation.
  • C++: wxWidgets. An old classic with completely native looks and lots of language bindings.
  • Clojure: HumbleUI. A cross-platform GUI toolkit. Not aiming to look native.
  • Lua: GLU (2024). A Lua GUI toolkit using wxWidgets.
  • Lua: Bounces (macOS, iOS). A macOS/iOS IDE that allows for live reloading. Very nice, severely let down by a subscription model.
  • Lua: wxlua (2023; Windows, Linux, macOS). Lua bindings for wxWidgets.
  • Lua: lua-macos-app (macOS). A pure Lua/FFI implementation of a macOS app. Mind-bending.
  • Rust: Tauri (2024; Windows, Linux, macOS). Uses the native web renderer and widgets. Still requires JavaScript and CSS.


28 Feb 22:37

The Logitech K380

by Rui Carmo

This is one of those pieces of gear that I’ve sat in front of for years and never actually written about–either because it was too obvious, or because it seemed too trivial.

It's a very pretty keyboard, and it's also very good at what it does.

But even as I am now starting to phase it out and put mine into storage, I realize I owe it a debt of gratitude–because, you see, when I decided to go back to using a US key layout full-time back during the pandemic (and who hasn’t made life-changing choices during that time?), it was literally the best bang for the buck I could find.

This may seem odd to most of my readers, but getting a bona fide, non-ISO US keyboard with a “long return” key and the slash/pipe key atop it is not a trivial thing in Europe, and the K380 was (and still is) easily available in various layouts and colorways[1].

And the reason I much prefer the US layout is because harking back to my college years toiling away at VT220 terminals and 68k Macs, everything about computers just makes a lot more sense with a US layout, from symbols and brackets to vim commands.

I can type accented characters faster with the US key combos than in a normal Portuguese keyboard, too, which is kind of funny.

The K380 won a place in my heart because it has three superpowers:

  • It is battery-powered. Yes, two AAA batteries that you can get literally anywhere and that apparently last forever (I use IKEA LADDA rechargeable batteries, and I get the feeling I recharge them every… year?).
  • There are no cables. Zilch. Nada. Instead, it supports three independent, stupefyingly reliable Bluetooth connections[2], which was a lifesaver for me when I had to juggle two laptops on my desk and switch between them (plus my iPad) on the fly. The three hotkeys displace three possible slots for media or window management functions from the top row[3], but I find it an acceptable trade-off.
  • It is built extremely well for the price. I’m not going to claim it is indestructible, but it makes for an excellent travel keyboard if you have a full-size iPad (and haven’t gotten a keyboard cover, which is what I did eventually).

And the rounded, soft edge design not only looks good but makes it trivial to pack or slide in and out of a messenger bag.

It is, if you’ll pardon the pun, a very smooth operator.

There are a few shortcomings, though:

  • The round keys can make it a little challenging to home in all of your fingers. It does have homing nubs, but the keycap depressions (such as they are) are very shallow and the roundness takes away from keycap area, so in my experience I sometimes slide off non-central keys and find it hard to hit them reliably at speed.
  • The Ctrl and Fn key placement is the exact opposite of an Apple keyboard’s, which means that switching between this and a MacBook can be quite tricky.
  • The Esc, left and right cursor keys are quite small (that’s sort of excusable in the top row, but not in the cursor block).
  • Other than the Bluetooth/on lights, it has no other indicator lights–or backlight, which is kind of OK (I very seldom look at the keyboard, and often have enough ambient backscatter to make do) but also something I’ve started wanting as a default.

And, of course, it lacks Touch ID. But that is such an Apple-specific thing that I’m not going to hold it against Logitech.

As to keyboard sound and feel (always a key thing with aficionados), it is of course a quiet, reliable membrane keyboard. I wouldn’t use it in my (sometimes preternaturally quiet) office if it wasn’t.

It does have some springiness (and a somewhat tactile feel before you bottom out), but anyone who likes full-size keys will find it either mushy or unsatisfactory travel-wise.

I prefer low-profile, linear keys when I use “mechanical” keyboards, no matter how extreme that preference may seem, but I am literally fine with it.

The only thing I ever found myself wishing it had was (ironically) a USB-C port. Not for charging or even daily use, but just plugging in to devices while setting them up, since for a long while it was the one keyboard I took everywhere.

But its last superpower (an actual physical on/off button on the top left hand side) more than makes up for that.

I will remember the time it spent on my desk fondly, even as I remove the batteries and store it away, certain that in a few years it will probably be one of the few accessories that will still just work.

Thank you, my friend. You will always be welcome at my desk.


  1. It is also very cheaply available in places like AliExpress (my second one was bought there). ↩︎

  2. I also have two M720 mice that share this feature, one of which is my daily driver. I should write about that too some day, since the soft plastic is getting a little worn out and might need replacing. ↩︎

  3. There is also the option of using Logi’s software to customize some of those keys. I never bothered given the way I used it across multiple machines. ↩︎


28 Feb 22:37

Living With Our Machine Sidekicks

by Rui Carmo

I’ve been using GitHub Copilot (and various other similar things) for a long while now, and I think it’s time to take stock and ponder their impact.

It’s All About The Use Case

First of all, since there is a 50% chance you have something to do with technology or code by landing here, I’m not going to focus on code generation, because for me it’s been… sub-optimal.

The reasons for that are pretty simple: the kind of code I write isn’t mainstream web apps or conventional back-end services (it’s not even in conventional languages, or common frameworks, although I do use JavaScript, Python and C++ a fair bit), so for me it’s been more of a “smarter autocomplete” than something that actually solves problems I have while coding.

Disclaimer: I’m a Microsoft employee, but I have no direct involvement with GitHub Copilot or Visual Studio Code (I just use them a lot), and I’m not privy to any of the inner workings of either other than what I can glean from poking under the hood myself. And in case you haven’t noticed the name of this site, I have my own opinions.

LLMs haven’t helped me in:

  • Understanding the problem domain
  • Structuring code
  • Writing complex logic
  • Debugging anything but the most trivial of issues
  • Looking up documentation

…which is around 80% of what I end up doing.

They have been useful in completing boilerplate code (for which a roundtrip network request that pings a GPU that costs about as much as a car is, arguably, overkill), and, more often than not, for those “what is the name of the API call that does this?” and “can I do this synchronously?” kinds of discussions you’d ordinarily have with a rubber duck and a search engine in tandem.

I have a few funny stories about these scenarios that I can’t write about yet, but the gist of things is that if you’re an experienced programmer, LLMs will at best provide you with broad navigational awareness, and at worst lead you to bear traps.

But if you’re a beginner (or just need to quickly get up to speed on a popular language), I can see how they can be useful accelerators, especially if you’re working on a project that has a lot of boilerplate or is based on a well-known framework.

If you’re just here for the development bit, the key takeaway is that they’re nowhere near good enough to replace skilled developers. And if you’re thinking of decreasing the number of developers, well, then, I have news for you: deep insight derived from experience is still irreplaceable, and you’ll actually lose momentum if you try to rely only on LLMs as accelerators.

People who survived the first AI winter and remember heuristics will get this. People who don’t, well, they’re in for a world of hurt.

Parenthesis: Overbearing Corporate Policies

This should be a side note, but it’s been on my mind so often that I think it belongs here:

One thing that really annoys me is when you have a corporate policy that tries to avoid intellectual property issues by blocking public code from replies:

Sorry, the response matched public code so it was blocked. Please rephrase your prompt.

I get this every time I am trying to start or edit a project with a certain very popular (and dirt common) framework, because (guess what) their blueprints/samples are publicly available in umpteen forms.

And yes, I get that IP is a risky proposition. But we certainly need better guardrails than overbearing ones that prevent people from using well-known, public domain pieces of code as accelerators…

But this is a human generated problem, not a technical one, so let’s get back to actual technological trade-offs.

What About Non-Developers Then?

This is actually what I have really been pondering the most, although it is really difficult to resist the allure of clever marketing tactics that claim massive productivity increases in… well, in Marketing, really.

But I am getting ahead of myself.

What I have been noticing when using LLMs as “sidekicks” (because calling them “copilots” is, quite honestly, too serious a moniker, and I actually used the Borland one) is their impact on three kinds of things:

  • Harnessing drudgery like lists or tables of data (I have a lot of stuff in YAML files, whose format I picked because it is great for “append-only” maintenance of data for which you may not know all the columns in advance).
  • Trying to make sense of information (like summarizing or suggesting related content).
  • Actual bona fide writing (as in creating original prose).

Eliding The Drudgery

A marked quality of life improvement I can attribute to LLMs over the past year is that when I’m jotting down useful projects or links to resources (of which this site has hundreds of pages, like, for instance, the AI one, or the Music one, or a programming language listing like Go or even Janet), maintaining the files for that in Visual Studio Code (with GitHub Copilot) is a breeze.

A typical entry in one of those files looks like this:

- url: https://github.com/foo/bar
  link: Project Name
  date: 2024-02-21
  category: Frameworks
  notes: Yet another JavaScript framework

What usually happens when I go to append a resource in Visual Studio Code is that as soon as I place the cursor at the bottom, it suggests - url: for me, which is always an auspicious start.

I then hit TAB to accept, paste the URL and (more often than not) it completes the entire entry with pretty sane defaults, even (sometimes) down to the correct date and a description. As a nice bonus, if it’s something that’s been on the Internet for a while, the description is quite likely correct, too. Praise be quantization of all human knowledge, I guess.

This kind of thing, albeit simple, is a huge time saver for note taking and references. Even if I have to edit the description substantially, if you take the example above and consider that my projects, notes and this site are huge Markdown trees, that essentially means that several kinds of similar (non-creative) toil have simply vanished from my daily interaction with computers–if I use Visual Studio Code to edit them.

And yes, it is just “smarter autocomplete” (and there are gigantic amounts of context to guide the LLM here, especially in places like the ones in my JavaScript page), but it is certainly useful.

And the same goes for creating Markdown front matter–Visual Studio Code is pretty good at autocompleting post tags based on Markdown headings, and more often than not I’m cleaning up a final draft on it and it will also suggest some interesting text completions.

Making Sense of Things

One area where things are just rubbish, though (and likely will remain so) is searching and summarizing data.

One of my test cases is this very site, and I routinely import the roughly 10,000 highly interlinked pages it consists of into various vector databases, indexers and whatnot (a recent example was Reor) and try to get LLMs to summarise or even suggest related pages, and so far nothing has even come close to matching a simple full-text search with proper ranking or (even worse) my own recollection of pieces of content.

In my experience, summaries for personal notes either miss the point or are hilariously off, suggestions for related pages prioritise matching fluff over tags (yes, it’s that bad from a knowledge management perspective), and “chatting with my documents” is, in a word, stupid.

In fact, after many, many years of dealing with chatbots (“there’s gold in them call centres!”), I am staunchly of the opinion that knowledge management shouldn’t be about conversational interfaces–conversations are exchanges between two entities that display not just an understanding of content but also have the agency to highlight relationships or correlations, which in turn goes hand in hand with the insight to understand which of those are more important given context.

So far LLMs lack any of those abilities, even when prompted (or bribed) to fake them.

Don’t get me wrong, they can be pretty decent research assistants, but even then you’re more likely than not to be disappointed:

[Image: Badabing]
One of these was created by the biggest search company in the world. Can you spot which?

But let’s get to the positives.

Writing

I do a lot of my writing in iA Writer these days, and thanks to macOS and iOS’s feature sets, it does a fair bit of autocompletion–but only on a per-word/per-sentence basis.

That can still be a powerful accelerator for jotting down outlines, quick notes and first drafts, but what I’ve come to appreciate is that even GitHub Copilot (which focuses on code generation, not prose) can go much farther when I move my drafts over to Visual Studio Code and start revising them there.

Contextual Tricks

Let’s do a little parenthesis here and jump back to writing “code”.

In case you’ve never used GitHub Copilot, the gist of things is that having multiple files open in Visual Studio Code, especially ones related to the code you’re writing, has a tremendous impact on the quality of suggestions (which should be no surprise to anyone, but many people miss the basics and wonder why it doesn’t do anything)–Copilot will often reach into a module you’ve opened to fetch function signatures as you type a call (which is very handy).

But it also picks up all sorts of other hints, and the real magic starts to happen when you have more context like comments and doc strings–so it’s actually useful to write down what your code is going to do in a comment before you actually write it.
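As an illustration of the comment-first workflow (a hypothetical completion, not an exact transcript of anything Copilot produced for me), you write the intent as a comment and something like the function below is what tends to get suggested:

```javascript
// Parse a "YYYY-MM-DD" date string into numeric year/month/day parts.
// Writing this comment first is the hint Copilot keys off of; the
// function below is the kind of completion it will typically offer.
function parseISODate(dateString) {
    const [year, month, day] = dateString.split("-").map(Number);
    return { year, month, day };
}

parseISODate("2024-02-21"); // { year: 2024, month: 2, day: 21 }
```

The more precise the comment (expected format, return shape), the closer the suggestion usually lands.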

Prosaic stuff like loops, JSON fields, etc. gets suggested in ways that do indeed make it faster to flesh out chunks of program logic, which is obviously handy.

The good bits come when, based on your comments, Copilot actually goes out and remembers the library calls (and function signatures) for you even when you haven’t imported the library yet.

However, these moments of seemingly superhuman insight are few and far between. I keep spotting off-by-ones and just plain wrong completions all the time (especially in parameter types), so there’s a lot to improve here.

Dealing With Prose

The interesting bits come in the intersection of both worlds–the LLM behind Copilot is tuned for coding, of course, but it can do generally the same for standard English prose, which can be really helpful sometimes–just try opening a Markdown document, paste in a few bullets of text with the topics you want to write about and start writing.

The suggestions can be… chillingly decent. Here’s an example after writing just the starting header by myself and hitting TAB:

[Image: My favorite demo to sales people]
This is fine.

It’s not as if it’s able to write the next Great American Novel (it is too helpful, whimsical and optimistic for that), but I can see it helping people writing better documents overall, or filling in when you’re missing the right turn of phrase.

And that is where I think the real impact of LLMs will be felt–not in code generation, but in shaping the way we communicate, and in particular in polishing the way we write just a little too much.

This is, to be honest, something that annoys me very much indeed, especially since I’ve always been picky about my own writing and have a very characteristic writing style (that’s one of the reasons I start drafts in iA Writer).

But given my many experiments with characters like Werner Herzog and oblique stunts like improving the output of my news summarizer by prompting it with You are a news editor at the Economist, it doesn’t strike me as unfeasible that someone will eventually go off and train an LLM to do style transfer from The Great Gatsby[1].

It’s probably (read: definitely) happening already in many Marketing departments, because it is still more expensive to lobotomise interns than to have an LLM re-phrase a bulleted list in jaunty, excited American corporate speak.

Heck, I even went ahead and wrote a set of macOS system services to do just that the other day, based on Notes Ollama.

[Image: I'm not even sorry]
So yes, one of my oldest, cheekiest T-Shirts has actually come to pass.

The Enshittification of Writing

All the above ranting leads me to one of my key concerns with LLMs–not just Copilot or ChatGPT, but even things lower down the totem pole like macOS/iOS smart sentence suggestions–if you don’t stay focused on what you want to convey, they tend to hijack your writing and lead you down the smooth, polished path of least resistance.

This may seem fine when you are writing a sales pitch or a press release, but it is a terrible thing when you are trying to convey your thoughts, your ideas and your narrative, and your individual writing style gets steamrolled into a sort of bland, flavorless “mainstream” English[2].

And the fact that around 50% of the paragraph above can be written as-is with iOS smart autocomplete alone should give people pause–I actually tried it.

I also went and gave GitHub Copilot, Copilot on the web and Gemini three bullets of text with the general points and asked them to re-phrase them.

Gemini said, among other things (and I quote): “It requires human understanding, 情感, and intention” and used the formal hanzi for “emotion” completely out of the blue, unprompted, before going on a rather peculiar rant saying it could not discuss the topic further lest it “cause harm” (and yes, Google has a problem here).

Both Copilots took things somewhat off base. Their suggestions were overly optimistic rubbish, but the scary thing is that if I had a more positive take on the topic they might actually be fit to publish.

The best take was from old, reliable gpt35-turbo, which said:

LLMs (Language Models) reduce the quality of human expression and limit creative freedom. They lead to the creation of insipid and uninteresting content, and are incapable of producing genuine creative expression.

Now this seems like an entity I can actually reason with, so maybe I should fire up mixtral on my RTX3060 (don’t worry, I have 128GB of system RAM to compensate for those measly 12GB VRAM, and it runs OK), hack together a feedback loop of some sort and invite gpt35-turbo over to discuss last year’s post over some (metaphorical) drinks.

Maybe we’ll even get to the bottom of the whole “New AI winter” thing before it comes to pass.


  1. I’m rather partial to the notion that a lot of VCs behind the current plague of AI-driven companies should read Gone With The Wind (or at the very least Moby Dick) to ken the world beyond their narrow focus, but let’s just leave this as a footnote for now. ↩︎

  2. The torrent of half-baked sales presentations that lack a narrative (or even a focal point) and are just flavourless re-hashing of factoids has always been a particular nightmare of mine, and guess what, I’m not seeing a lot of concern from actual people to prevent it from happening. ↩︎


28 Feb 22:34

Building OpenAI Writing Aids as macOS Services using JavaScript for Automation

by Rui Carmo

A couple of days ago I came upon NotesOllama and decided to take a look at how to build a macOS Service that would invoke Azure OpenAI to manipulate selected text in any editor.

Serendipitously, I’ve been chipping away at finding a sane way to build native desktop apps to wrap around a few simple tools, so I was already investigating JavaScript for Automation.

This seemed like a good opportunity to paper over some of its gaps and figure out how to do REST calls in the most native way possible, so after a little bit of digging around and revisiting my Objective-C days, I came up with the following JXA script, which you can just drop into a Run JavaScript for Automation Shortcuts action:

function run(input, parameters) {

    ObjC.import('Foundation')
    ObjC.import('Cocoa')

    let app = Application.currentApplication();
    app.includeStandardAdditions = true

    let AZURE_ENDPOINT="endpoint.openai.azure.com",
        DEPLOYMENT_NAME="default",
        // this is the easiest way to grab something off the keychain
        OPENAI_API_KEY = app.doShellScript(`security find-generic-password -w -s ${AZURE_ENDPOINT} -a ${DEPLOYMENT_NAME}`),
        OPENAI_API_VERSION="2023-05-15",
        url = `https://${AZURE_ENDPOINT}/openai/deployments/${DEPLOYMENT_NAME}/chat/completions?api-version=${OPENAI_API_VERSION}`,
        postData = {
            "temperature": 0.4,
            "messages": [{ 
                "role": "system",
                "content": "Act as a writer. Summarize the text in a few sentences highlighting the key takeaways. Output only the text and nothing else, do not chat, no preamble, get to the point.",
            },{
                "role": "user", 
                "content": input.join("\n")
            }]
            /* add this extra message if you need JSON formatting:
            ,{
                "role": "assistant",
                "content": ""
            } */
        },
        request = $.NSMutableURLRequest.requestWithURL($.NSURL.URLWithString(url));

    request.setHTTPMethod("POST");
    request.setHTTPBody($.NSString.alloc.initWithUTF8String(JSON.stringify(postData)).dataUsingEncoding($.NSUTF8StringEncoding));
    request.setValueForHTTPHeaderField("application/json; charset=UTF-8", "Content-Type");
    request.setValueForHTTPHeaderField(OPENAI_API_KEY, "api-key");

    // This bit performs a synchronous HTTP request, and can be used separately
    let error = $(),
        response = $(),
        data = $.NSURLConnection.sendSynchronousRequestReturningResponseError(request, response, error);

    if (error[0]) {
        return "Error: " + error[0].localizedDescription;
    } else {
        var json = JSON.parse($.NSString.alloc.initWithDataEncoding(data, $.NSUTF8StringEncoding).js);
        if(json.error) {
            return json.error.message ;
        } else {
            return json.choices[0].message.content;
        }
    }
}

Fortunately the symbol mangling is minimal, and the ObjC bridge is quite straightforward if you know what you’re doing. The bridge can unpack NSStrings for you, but I had to remember to use the tiny little .js accessor to get at something you can use JSON.parse on.

You will need to create a keychain entry for endpoint.openai.azure.com and default with your API key, of course. I briefly considered accessing the keychain directly, but the resulting code would have been twice the size and much less readable, so I just cheated and used doShellScript to grab the key.
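For reference, a matching keychain entry can be created from a terminal like this (substitute your actual key for the placeholder; `security` is the stock macOS keychain CLI):

```shell
# Create the generic password entry that the script later reads back
# via `security find-generic-password -w -s <service> -a <account>`.
security add-generic-password -s endpoint.openai.azure.com -a default -w YOUR_API_KEY
```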

Two minutes of hackish cut and paste later, I had ten different macOS services that would invoke gpt35-turbo in Azure OpenAI with different prompts:

[Image: first pass]
Azure blue seemed appropriate

Dropping these into Shortcuts has a few advantages:

  • It saves me the trouble of wrapping them manually and dropping them into ~/Library/Services
  • They sync via iCloud to all my devices
  • As system Services, I can now invoke them from any app:

[Image: simple things]
This is the kind of power I miss in other operating systems

The Prompts

For reference, here are the prompts I used:

# shamelessly stolen from https://github.com/andersrex/notesollama/blob/main/NotesOllama/Menu/commands.swift (MIT licensed)
prompts = [ 
    {
        "name": "Summarize selection",
        "prompt": "Act as a writer. Summarize the text in a few sentences highlighting the key takeaways. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Explain selection",
        "prompt": "Act as a writer. Explain the text in simple and concise terms keeping the same meaning. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Expand selection",
        "prompt": "Act as a writer. Expand the text by adding more details while keeping the same meaning. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Answer selection",
        "prompt": "Act as a writer. Answer the question in the text in simple and concise terms. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Rewrite selection (formal)",
        "prompt": "Act as a writer. Rewrite the text in a more formal style while keeping the same meaning. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Rewrite selection (casual)",
        "prompt": "Act as a writer. Rewrite the text in a more casual style while keeping the same meaning. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Rewrite selection (active voice)",
        "prompt": "Act as a writer. Rewrite the text with an active voice while keeping the same meaning. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Rewrite selection (bullet points)",
        "prompt": "Act as a writer. Rewrite the text into bullet points while keeping the same meaning. Output only the text and nothing else, do not chat, no preamble, get to the point."
    },
    {
        "name": "Caption selection",
        "prompt": "Act as a writer. Create only one single heading for the whole text that is giving a good understanding of what the reader can expect. Output only the caption and nothing else, do not chat, no preamble, get to the point. Your format should be ## Caption."
    }
]

I actually tried doing this in Python first, but Automator stopped supporting it recently.

Nevertheless, I put up this gist, where I cleaned up the original NotesOllama prompts a bit, and also have a Swift version…

Next Steps

These only work on macOS for the moment, but I’m already turning them into iOS actions with the Get Content from URL action in Shortcuts and a bit of JSON templating. Get Dictionary From Input seems to be able to generate the kind of nested JSON payload I need off a simple text template, but I haven’t quite figured out how to get the keychain to work yet, so I’m still poking at that on my iPad.
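For reference, the nested payload the Shortcut has to template is the same one the JXA script builds; as a sketch (buildChatPayload is a hypothetical helper name, assuming the same Azure OpenAI chat schema as above):

```javascript
// Build the same chat/completions payload the JXA script POSTs.
// buildChatPayload is a hypothetical name used only for illustration.
function buildChatPayload(systemPrompt, selectedText) {
    return {
        temperature: 0.4,
        messages: [
            { role: "system", content: systemPrompt },
            { role: "user", content: selectedText }
        ]
    };
}

// The Shortcut ultimately needs this serialized as JSON text:
const body = JSON.stringify(buildChatPayload("Act as a writer.", "Selected text goes here."));
```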

For the moment, you can try a draft version of the shortcut I’ve shared here that will require you to enter your endpoint and API key manually (as usual with all iCloud links, this one is prone to rot, so if it doesn’t work, ping me).

That wasn’t my first approach since Shortcuts are abominably limited and slow, but primarily because I needed the JXA version so I can eventually build a little native app that uses other Azure OpenAI services but with a simple GUI in the spirit of lua-macos-app.

An interesting thing is that, as far as I can tell, I’m the first person who cared enough to figure out how to go about issuing HTTP requests and invoking APIs from JXA, which is… really awkward.

Update: Well, this escalated quickly:

[Image: A few experiments]
I regret nothing

21 Apr 02:07

Girl Groups

This is a reaction to parts of three evenings watching Coachella 2023 and motivated by the fact that more or less all of the music that grabbed me featured females. This notably included Blackpink, the biggest Girl Group in the world. But they weren’t the best women artists there, not even close. Spoiler: boygenius was.

Part of the problem was that every time I switched to a Coachella channel with male names on it, the males were either bobbing their heads over a DJ deck, or banging out the hip-hop. I.e. few musical instruments or vocal melodies were involved. I have no patience whatever for EDM and maybe one hip-hop track in ten grabs me. Usually that track turns out to be Drill which, I read, is bad music by bad people and I shouldn’t like it. Oops.

Please don’t be objecting to the term Girl Group around me. You wanna diss the Marvelettes? The Supremes? The Ronettes? Labelle? Destiny’s Child? To the extent that a business as corrupt as Music can have proud traditions, Girl Groups are one of those.

Not all men

OK, a few weren’t terrible. FKJ was OK, his set featured a big sofa that he and the accompanists relax on and it was that kind of music. The tunes were fairly generic but the playing was good and the arrangements were surprising and often excellent.

And then there was Tukker of Sofi Tukker; he’s the less-interesting half of that outfit but he’s still pretty interesting. Plus their music was good, the instrumentation was surprising, and they have lots of charisma, I’d go see them. They were an example of a distinct Coachella Thing this year: black-and-white outfits, in particular flowing white outfits, an angelic aesthetic.

Weyes Blood, new to me, was also definitely leaning into the angelic thing. The music had no sharp corners but took some surprising curves, and was pretty, which there’s nothing wrong with.

Coachella was willing to take big chances, including bands that I thought were at best marginally competent, in terms of being in tune and in sync. I’m totally OK with bands going Outside The Lines when driven by passion or musical invention but this wasn’t that, it was just basic rock or whatever played at medium speed, badly. I think this was a conscious low-rent aesthetic? Not naming names. Not gender specific.

More women I hadn’t heard of

Ashnikko (apparently erupted outta TikTok) puts on a good show, loads of charisma.

Christine and the Queens is French and extremely intense in ways I liked.

When I switched over to Kali Uchis she was rapping and had the usual complement of twerky dancers — is it just me or do they all have the same choreographer? I was about to switch away and then she switched to singing and wow, she’s really good.

Saturday night

Why I’m writing this is, I watched the boygenius and Blackpink sets and was left shaken. Let’s flip the order and start with Blackpink. These are screen-caps.

Blackpink on stage at Coachella 2023

I had never previously managed to watch a whole K-pop set because it’s so formulaic and boring. But Blackpink kept my attention if not entirely my affection. The choreography and attention to detail is awesome, mesmerising. The music is meh. The beauty is crushingly conventional but also crushing in its intensity. I felt like a Beauty Beam was coming off the screen, bending back all the retinas it impacted.

You don’t have to look very hard to read terribly sad stories about the life of a K-pop star. They sign ten+ year contracts with the Big Company (those years starting when they get big) by which the company gets 80% of the revenue and the band splits the rest, after paying back the BigCo for their training and promotion. And there were a couple of moments when one of the four was in a choreography dead zone and for just an instant wore an expression of infinite fatigue.

To be fair, their technique was dazzling. They had an actual band with actual musicians, although the intermittent lip-syncing wasn’t subtle. And when they stopped to chat with the crowd (in fluent English) they seemed like real people. Just not when singing and dancing.

You know what’s weird? I’m a heterosexual male and they were dressed in these “daring” suits with lots of bare flesh showing, but even with all that beauty, they weren’t sexy at all.

Anyhow, I ended up respecting them. They delivered. But still, that’s enough K-Pop for another decade or so.

The Boys Are Back In Town

That’s a dumb old rock song by Thin Lizzy, and it was the soundtrack for boygenius’s walk onstage. You see, they’re smart and not a second of the set, start to end, was in the slightest disposable. There’s a lyric from their song Without You Without Them: I want you to hear my story and be a part of it. They mean it.

boygenius on stage at Coachella 2023

Their Coachella set was messy, chaotic, and, I thought, magnificent. The mix wasn’t all that great and some of the lyrics were lost, a real pity with this band. But out of the chaos there kept coming bursts of extremely beautiful melody, exquisite harmony, and lyric fragments that grab your brain and won’t let go.

The songs are about love mostly and are romantic and arrogant and pathetic and many visit violence, emotional and physical too: When you fell down the stairs / It looked like it hurt and I wasn't sorry / I should've left you right there / With your hostages, my heart and my car keys — that’s from Letter To An Old Poet.

Their faces aren’t conventional at all, but so alive; I couldn’t stop watching them. Julien Baker in particular, when she digs into a song, becomes scary, ferocious. But they each inhabit each song completely.

Also memorable was their excellent Fuck-Ron-DeSantis rant.

Anyhow, at the end of the set, they were off their feet, rolling around together on the stage while Ms Baker shredded, just unbelievably intense. Always an angel, never a God they sing, over and over, but there were no flowing white garments — they were wearing mock-schoolboy outfits with ties — and something divine seemed in progress.

Back story

If you haven’t heard it, drop by their Wikipedia article and catch up; it’s interesting. Particularly the gender-related stuff.

My own back story was, I liked Phoebe Bridgers but hadn’t really picked up on boygenius. Then earlier this year, my 16-year-old daughter had a friend with a spare concert ticket and could Dad cough up the money for it? I’m an indulgent parent when it comes to music so she got her ticket.

Pretty soon thereafter I noticed the boygenius buzz and tried to get a ticket for myself but they were long-sold-out by then. Just not hip enough.

Oh well, I won’t forget that Coachella show any time soon.

“Girl Group”?

Per Wikipedia, it means “a music act featuring several female singers who generally harmonize together”. By that metric boygenius is way ahead of Blackpink, who do harmonize a bit but it’s not central to their delivery. On the other hand, if we traverse the Wikipedia taxonomy we arrive at “dance-pop girl groups” and Blackpink is definitely one of those, they dance like hell.

Look, boygenius obviously are Women with a capital “W”. But they’re subversive too, I bet if you asked ’em, they might gleefully wave the Girl Group banner.

21 Apr 02:03

AirPods Pro 2: Perfection

by Volker Weber

I was in a small group of people who checked the AirPods Pro before they were officially announced on October 29, 2019 at 13.00 CET. A year later I was once again blown away by the AirPods Max. Those are still the best headphones I have.

The AirPods Pro became the headphones of choice for lots of people, rightfully so. But when the second-generation AirPods Pro came out, I held off. After three years, my original AirPods Pro had started rattling, and Apple replaced the two earbuds while sending me back the original case. I was good to go for another three years, and besides, could Apple really improve on the AirPods Pro that much? As it turns out, I was wrong.

Fast forward to 2023 and I have now used the AirPods Pro 2 for two months, and they have noticeably improved on all fronts. The transparency mode is even better than the already excellent original AirPods Pro, they filter out more ambient noise, they sound fuller, and they have little benefits I missed on the original. You can finally set the volume without speaking “Hey Siri louder” into thin air. The case beeps when it wants to provide feedback or when you are searching for it, and finally, it warns you through the Find My app when you leave them behind, much like the AirPods Max.

I have many headphones, but if I could only have one, this would be it. With voice isolation, you can even use them for phone calls.

21 Apr 02:02

Once in Perugia, always in Perugia

by Gregor Aisch
Photo: Luca Vanzella, CC BY-SA 2.0

Hi, this is Gregor, co-founder and head of the data visualization team at Datawrapper, with a Weekly Chart written on a night train somewhere between Austria and Italy.

After a long pause, I am very happy to be returning to Perugia for the International Journalism Festival (IJF). On the train ride to Italy, I was browsing this year’s festival program and saw many new speakers and faces I recognized from previous years!

This got me wondering how much fun it would be to see who spoke most at the conference in the past. Fortunately, the festival website lists all speakers since 2012. So without further ado, here’s the list of the most frequent speakers at the IJF:

You may be wondering why this list only includes international speakers. It’s a choice the festival made for the program as well, which initially excludes Italian speakers on its English-language website. You can click the link above the table to switch to a version that includes Italian speakers.

While there are only two women among the ten most invited speakers at the conference, it’s worth noting that the overall diversity of the IJF seems to have gotten a lot better in recent years, with almost 60% of this year’s speakers being women compared to 24% in 2012!

At Datawrapper, we’re big fans of the festival, not just for the talks and panels and the amazingly beautiful location, but also for the chance to meet so many of our users. If you’re around, drop us a note or leave a comment if you’d like to say hi.


See you in Perugia — or next week for the first Weekly Chart from our product specialist, Guillermina.

21 Apr 02:02

Google Fi Is Now Google Fi Wireless: New Benefits, Free eSIM Trial

by Ronil
Google refreshes Fi with a new name, logo, and perks. The company’s wireless carrier started as Project Fi, then became Google Fi. And today, it’s getting another name change, becoming Google Fi Wireless. The logo didn’t get a complete overhaul and still has the aesthetics of the old Google Fi logo. Instead of four bars […]
21 Apr 02:01

Twitter Favorites: [tomhawthorn] @sillygwailo Too soon.

Tom Hawthorn 🇺🇦🇨🇦 @tomhawthorn
@sillygwailo Too soon.
21 Apr 01:57

Designing The Community-Powered Customer Support Hub

by Richard Millington

The famous flaw in most enterprise communities is that the majority of people don’t visit to join a community; they visit to solve a problem.

This is why the majority of contributors to a community have made precisely one post.

They asked a question, received an answer (or didn’t), and then left never to return. The vast majority of attempts to shift these numbers have failed miserably. This audience simply doesn’t want to connect, share, and ‘join the conversation’. They want to solve their problem and be on their way.

The challenge facing most enterprise community professionals is less about trying to nurture a sense of community amongst members and more about building the right support experience around the community – where community plays a critical role in supporting every other support channel.

Enter the community-powered customer support hub.

The Five Layers Of The Customer Support Hub

The trend in recent years has been fairly clear: organisations are creating integrated support hubs which include knowledge bases, federated search, virtual agents, community, customer support tickets, and product suggestions.

This is slightly different from a customer portal which handles a broader range of use cases than support (e.g. it might include the academy, events, product updates etc…)

To get the support hub right, you need to execute extremely well on five distinct layers. These are shown here.

Five layers of the support hub are knowledge base, virtual agents/search, community/social media, customer support, and product suggestions.

We can tackle each layer in turn.

The Foundation Layer: The Knowledge Base

Pretty much every organisation has a knowledge base for customers to browse. The purpose of this knowledge base is to preempt the majority of questions customers are likely to have and help customers follow best practices in the setup and usage of products.

Whenever a customer searches for an answer to a question, the knowledge base is often the first place they visit. A great knowledge base will have articles showing up in search results, in chatbot recommendations, in responses to community questions, and be referenced by support staff.

The cheapest place to solve a customer query is the knowledge base. A good one can deflect 80% or more of questions from downstream channels.

(As an aside, this is also why measuring activity levels in a community is tricky: the community should inform the knowledge base – which in turn should reduce the number of questions in the community.)

However, the knowledge base has to perform a tricky juggling act. It should be comprehensive enough to resolve the 20% of queries which represent 80% of the volume of questions, but it shouldn’t try to tackle every query.

This becomes impossible to maintain and is overwhelming for members. The knowledge hub needs to have well-maintained content that’s refreshed regularly. It also needs to archive out-of-date articles.

A great knowledge base aims to maintain a smaller number of articles up to an incredibly high standard. For other queries, there are other channels.

An excellent example of this is the Asana knowledge base.

Notice the clean interface, great use of white space, clear search bar, and collapsible navigation menus on the left-hand side so as not to be overwhelming.

The tabs are well categorised by likely challenges and by problems members will potentially encounter. All of this makes navigation simple.

Asana seems to have taken a less-is-more approach – maintaining a smaller number of high-quality articles instead of tackling every eventuality. Developer articles have also been smartly separated from other articles.

The knowledge base should always be receptive to what’s happening in a community (and other channels). If the same question repeatedly appears in the community, it needs to be tackled as a knowledge hub article (don’t forget to update community discussions with the link to the knowledge article). Likewise, if an article in the knowledge base isn’t getting much traffic, it might be time to archive the article.

Ultimately, the knowledge base is the foundation layer of support. If it’s well-designed and implemented, it makes the entire support experience much better for everyone.

The Search Layer

This is the layer where people are on the site and search for the information they want to find.

This comes in the form of search bars and chatbots.

Unified Search 

Traditionally this was handled by a search bar which, like Google, would retrieve the result that best matched the query.

The biggest problem with this is that organisations frequently use different platforms for the knowledge base, documentation, academy, community, etc., while the search tool was often limited to retrieving information from a single database. In recent years, organisations have shifted to unified (federated) and cognitive search tools.

These tools can retrieve results from multiple databases and enable the organisation to assign weightings to different results to skew where customers are most likely to visit as needed. This means the results can be retrieved from the corporate site, dev community, webinars, roadmap, pricing, knowledge base and more. This has a big impact on reducing knowledge silos within the community.
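The weighting idea can be sketched in a few lines of code. This is a minimal illustration, not any particular vendor's search API; the source names, scores, and weights below are all invented for the example.

```python
def federated_search(query, sources, weights):
    """Merge results from several sources, scaling each hit's relevance
    score by a per-source weight so preferred channels surface first."""
    merged = []
    for name, search_fn in sources.items():
        for title, score in search_fn(query):
            merged.append((score * weights.get(name, 1.0), name, title))
    # Highest weighted score first
    return sorted(merged, reverse=True)

# Toy sources: each returns (title, relevance score) pairs for a query
sources = {
    "knowledge_base": lambda q: [("Reset your device", 0.9)],
    "community":      lambda q: [("How do I reset?", 0.8)],
    "docs":           lambda q: [("Device API reference", 0.4)],
}
# Skew results toward the cheapest channel: the knowledge base
weights = {"knowledge_base": 1.5, "community": 1.0, "docs": 0.8}

for score, source, title in federated_search("reset", sources, weights):
    print(f"{score:.2f}  {source:14}  {title}")
```

The point of the weights is exactly the skew described above: even when a community thread matches the query slightly better, the organisation can nudge the knowledge-base article to the top.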

A good example of this is the Logitech support centre.

A screenshot of the Logitech Support Center

You can see here that relevant results are retrieved from the knowledge base, product documentation, downloads, and the community. When it works well, it gives members a single box to enter a question and find what they want.

At the moment, many organisations still don’t deploy a federated search tool. This has a hugely negative impact on the customer experience, as customers must then visit multiple places to find the answers they need. Note: documentation is not the same as a knowledge base.

Chatbots

In the past, chatbots were rudimentary tools that operated on decision trees and relied heavily on keywords. By following a set process, the chatbot would either provide you with the information you were seeking or guide you to the next step in your journey.

An example of this in practice is the Logitech Chat Bot below:

A screenshot of the logitech chat bot in action

You can see above that it’s still following a fairly basic decision-tree format to try and guide the individual to the right answer. It’s becoming increasingly common for chatbots to act as a screener to solve a problem before redirecting the question to a virtual support agent (skipping the community entirely).

More recent incarnations (up to 2023) are far more advanced, using natural language processing to ask questions, check what has been attempted before, and guide someone to the right solution.

The biggest question is how quickly ChatGPT (or ChatGPTesque) will be incorporated. This will greatly enable higher quality levels of interaction and the ability for members to get detailed answers specific to their questions. If this works well, it should significantly reduce the number of challenges which drop through to the next level.

A community also supports this process by providing a huge amount of training data to process. The more community questions there are, the better the AI bot will be able to surface the right answer for members. Over time this should lead to fewer and fewer questions reaching the community.

As we’ve written before, ChatGPT and similar tools thrive when customers either don’t know what terms to search for or can’t browse mountains of information to find what they need. They fail, however, when the customer needs the best solution to a problem, needs a workaround (edge case), or is looking for personal experiences (e.g. would you recommend [vendor?]).

The Community Layer

The community and social media layer is where customers go to ask the in-between questions.

These questions aren’t so easy they can be solved by existing documentation, but don’t require the customer to reveal personal information to get a resolution.

Generally speaking, the success of a community hangs upon how many in-between questions people have.

One of two things happens at this layer.

First, people ask the question in a search engine and they land on a question which has already been asked in the community. This typically accounts for the majority of traffic in most communities.

Second, if they don’t find the answer to their question, they might ask the question themselves in the community.

By community, we’re not just referring to a hosted community but any place where members can interact with other members to get an answer. This includes social media and third-party platforms (Reddit, StackExchange, YouTube, Twitch etc…).

The community layer should resolve as many of those in-between questions as possible. It should also be used to inform other layers. It should highlight questions which should be tackled by knowledge articles, provide on-site search and chatbots with answers to the surface, and provide support agents with ideas they can try to resolve the customer issue.

Atlassian (a FeverBee client), is probably one of the best examples of this today.

A screenshot of the Atlassian community

This doesn’t mean a community is exclusively for in-between questions; there are plenty of people who simply prefer a community to filing a support ticket. A community helps reinforce the innate desire for self-service.

There are also plenty of use cases a community offers which don’t involve Q&A (user groups, events and activities etc…).

The Customer Support Agent Layer

The next layer is where a human support agent is involved.

In my experience, organisations can take one of two approaches.

The first is they want to reduce the number of calls which reach support agents as much as possible. Sometimes the cost of resolving an issue can reach hundreds of dollars per call. This means there is a huge benefit from resolving these issues in the community.

The second is they want to have as much time with customers as possible. In this approach, the goal isn’t to get customers off the phone as quickly as possible but to use the opportunity to build a better relationship with them and understand their needs.

In my experience, the former is more common than the latter – but some truly customer-centric organisations excel here. If your scaled support system is implemented correctly, your customer support team should only be tackling questions which either:

    1. Require a member to share personal data to be able to resolve.
    2. Are too complex or rare for the community to resolve (edge cases).
    3. Are from the highest value customers paying for a dedicated support agent.
    4. Are from customers who have a natural preference for support agents (often older customers).

Whenever community support questions require Personally Identifiable Information (PII) from the customer and are moved to a private support ticket for resolution, it’s critical to also provide an update on the original community thread.

This approach prevents the interaction from appearing to have abruptly halted with a move to a private support channel, which could leave the community feeling uninformed, probably resulting in more support tickets about the topic, which is counterproductive to the self-support model.

Ultimately, whether by support ticket or by phone call, the goal is to route the question to a person with the right answer as quickly as possible. The customer support layer is primarily the catch-all for issues which can’t be resolved anywhere else. You want your paid staff to be tackling the toughest questions for the highest value customers. 

Ideally, the customer will be able to file a support ticket via the same destination as they can search for information, ask the community, engage with a chatbot etc…

The Product Ideas Layer

The final layer is the product suggestion (or ideas) layer.

It’s relatively common for customer support staff to recommend a customer post any problem which can’t be resolved as a suggestion for a product enhancement. This is a common route to suggesting an idea but it’s not the only route.

Often customers will simply want to suggest an idea without beginning with an initial problem. Regardless of the route, the product ideas layer aims to be the final place to offer customers hope for a solution when every other channel fails.

A good example of this is the DigitalOcean community (hosted by Pendo).

A list of ideas posted on the DigitalOcean Community

You can see here it’s easy to add the idea and track its progress over time. The big danger of having an ideation area, however, is when the ideas aren’t utilised. 

This quickly becomes seen as a dumping ground which upsets members. You should only implement an ideas or suggestions layer when there is a product team eager to receive, filter, and provide feedback on the ideas. If that’s not in place, don’t offer ideas.

Build Your Community-Driven Support System

Customers are like water following the path of least resistance. Whichever pathway seems like the easiest (or quickest) way to resolve their challenge is the one they will take.

If that means asking a question in a community, they will happily do so. If that means searching for an answer in the search bar, they will do it. If that means filing a support ticket, they will do that too.

But here’s the rub: customers don’t want to visit five different places to do those things. They don’t want to fuss around logging into different systems. They want all of these things to be accessible in a single destination. They want to achieve their goals with the least amount of effort. They don’t want to ask the same question across multiple channels to get an answer.

The challenge for community and support professionals is to build a seamless support hub.

A common mistake by community professionals right now is to focus on less important areas of the community experience (e.g. gamification, groups, etc.) at the expense of the areas which would immediately add more value to people looking for answers to problems (better integrations, search optimization, question flows).

The future of enterprise communities isn’t to build the most perfect standalone destination but to integrate the community as deeply into the support experience as possible.

The post Designing The Community-Powered Customer Support Hub first appeared on FeverBee - Community Consultancy.
21 Apr 01:54

Data analysis with SQLite and Python for PyCon 2023

I'm at PyCon 2023 in Salt Lake City this week.

Yesterday afternoon I presented a three hour tutorial on Data Analysis with SQLite and Python. I think it went well!

I covered basics of using SQLite in Python through the sqlite3 module in the standard library, and then expanded that to demonstrate sqlite-utils, Datasette and even spent a bit of time on Datasette Lite.
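As a flavour of that first section: the standard library's `sqlite3` module is all you need to create a table, insert rows inside a transaction, and query them back. (The table and data here are my own toy example, not taken from the handout.)

```python
import sqlite3

# An in-memory database; pass a filename instead for persistence
conn = sqlite3.connect(":memory:")

# Using the connection as a context manager wraps the statements
# in a transaction that commits on success
with conn:
    conn.execute("create table peaks (name text, height_m integer)")
    conn.executemany(
        "insert into peaks (name, height_m) values (?, ?)",
        [("Everest", 8849), ("K2", 8611), ("Kangchenjunga", 8586)],
    )

# Query the rows back, tallest first
rows = conn.execute(
    "select name, height_m from peaks order by height_m desc"
).fetchall()
print(rows)  # [('Everest', 8849), ('K2', 8611), ('Kangchenjunga', 8586)]
```

The `?` placeholders are the parameterized-query style the `sqlite3` module supports, which avoids SQL injection when values come from user input.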

One of the things I learned from the Carpentries teacher training a while ago is that a really great way to run a workshop like this is to have detailed, extensive notes available and then to work through those, slowly, at the front of the room.

I don't know if I've quite nailed the "slowly" part, but I do find that having an extensive pre-prepared handout really helps keep things on track. It also gives attendees a chance to work at their own pace.

You can find the full 9-page workshop handout I prepared here:

sqlite-tutorial-pycon-2023.readthedocs.io

Screenshot of the handout. Data analysis with SQLite and Python, PyCon 2023

    What you’ll need
        python3 and pip
        Optional: GitHub Codespaces
    Introduction to SQLite
        Why SQLite?
        First steps with Python
        Creating a table
        Inserting some data
        UPDATE and DELETE
        SQLite column types
        Transactions
    Exploring data with Datasette
        Installing Datasette locally
        Try a database: legislators.db
        Install some plugins
        Learning SQL with Datasette

I built the handout site using Sphinx and Markdown, with myst-parser and sphinx_rtd_theme and hosted on Read the Docs. The underlying GitHub repository is here:

github.com/simonw/sqlite-tutorial-pycon-2023

I'm hoping to recycle some of the material from the tutorial to extend Datasette's official tutorial series - I find that presenting workshops is an excellent opportunity to bulk up Datasette's own documentation.

The Advanced SQL section in particular would benefit from being extended. It covers aggregations, subqueries, CTEs, SQLite's JSON features and window functions - each of which could easily be expanded into their own full tutorial.
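To give a taste of that advanced material: a CTE combined with a window function (supported in SQLite 3.25 and later) answers "top row per group" questions in a single query. The `sales` table below is an invented example for illustration, not one from the workshop.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table sales (region text, amount integer)")
conn.executemany(
    "insert into sales values (?, ?)",
    [("north", 10), ("north", 30), ("south", 20), ("south", 5)],
)

# A CTE plus a window function: rank each sale within its region,
# then keep only the top-ranked sale per region
query = """
with ranked as (
  select
    region,
    amount,
    rank() over (partition by region order by amount desc) as rnk
  from sales
)
select region, amount from ranked where rnk = 1 order by region
"""
top = conn.execute(query).fetchall()
print(top)  # [('north', 30), ('south', 20)]
```

Without window functions this would need a correlated subquery or a self-join; the CTE version reads top-to-bottom like the logic it implements.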

21 Apr 01:52

Proposed policy clarifies what's appropriate swimwear at Vancouver's public pools | CBC News

mkalus shared this story :
"Despite a B.C. Supreme Court decision that backed women's right to bear their breasts in public, Digby believes it would be "excessive" in a pool setting." I see the 1950s NPA is back in action. If they push that through I hope they get sued.


The Vancouver Park Board is set to vote on a city staff report aimed at tackling inappropriate swimwear at public pools by defining what can and cannot be worn.

Park board commissioner says policy aims to create an inclusive environment for families

The report follows concern from staff at the city's aquatic centres who have asked for a clear policy to help them navigate situations where patrons have, according to the report, "presented in attire that has had cause for attention, due to various levels of tolerance by both staff and members of the public as to what is acceptable attire for swimming in public aquatic facilities."

City staff say the policy will address safety concerns about swimming outfits that present a risk, adding that swimwear should allow the body to move freely, should not impede buoyancy and should not increase the safety risk to the swimmer or a lifeguard.

In the report, appropriate swimming attire is listed as:

  • bathing suit;
  • swim trunks or board shorts; 
  • T-shirts and shorts; 
  • burkini;
  • swim hijab, leggings and tunic; 
  • rash guard; 
  • and wet suit.

Unacceptable attire, according to the report, includes items designed for sexual or intimate purposes, clothing that absorbs water and becomes heavy, like jeans and sweatpants, and long, flowing fabrics. Swimwear must also fully cover the genitals, the report says.

It defines appropriate swimwear as "what other Canadians find as an acceptable level of tolerance in a family public swimming environment." 

Bare breasts 'excessive' at pools: commissioner

The park board will discuss and vote on the report on April 24.

Commissioner Tom Digby says he's leaning toward voting in favour of the policy.

"It's a complex question of social equity in the city," he said. 

"Because for every person who wants to wear a string bikini, there could be 10 families from some conservative community… that won't go to the swimming pool because they're afraid of confronting a string bikini in the change room, which is a very reasonable concern."

Digby said the city is trying to create an environment that is welcoming to all families.

"There's a lot of communities [that] have fairly conservative standards. There are many cultures here that won't tolerate a lot of exposure," he said.

Despite a B.C. Supreme Court decision that backed women's right to bear their breasts in public, Digby believes it would be "excessive" in a pool setting.

The city of Edmonton amended its topless policy in February, clarifying that all patrons are allowed to swim and lounge at the city's pools without a top on, regardless of their gender identity.

With files from the Early Edition

21 Apr 01:51

On the shift to oat and the milk hysteresis curve

We appear to be at a tipping point to oat milk for coffee, and it’s an interesting case study in what change means and feels like.

I always specify “dairy” when I get my daily coffee, wherever I am. “Dairy flat white” is the usual order.

The reason being that several years ago, when alt milks were becoming a thing, I was asked what milk I wanted and I said “normal” – at which point I got scowled at because what is normal anyway.

And that made sense to me. And while I believe rationally that being vegan is probably the way of the future, personally I quite like meat and milk, so the minimum viable way for me to sit on the fence is to always specify dairy but refuse to normalise it. So that’s what I’ve done since. My bit for the cause.

(My life is littered with these absurd and invisible solitary commitments. Another one: I will always write the date as “3 April” instead of “April 3” because humanity may one day live on a planet with a really long year and we may want to have multiple Aprils, so better not be ambiguous.)

Anyway, I’m used to the conversation going either like this:

  • Dairy flat white please
  • Ok great

Or:

  • Dairy flat white please
  • What flat white?
  • Dairy
  • We have oat or soy
  • No like cow’s milk
  • Like just normal? Regular milk?
  • Yes
  • Ok right. Flat white then

Rarely - ok just once - I was told off by a shop for specifying “dairy” every day because nobody has oat and, well, they see me every day and they remember what I want.

But that was about 18 months ago.

Recently pushback had decreased, quite a lot and quite suddenly.

So I’ve been idly asking coffee places what their proportion of dairy milk vs oat milk is, when I get my daily coffee, wherever it is.

Near me, in south London, one of my local places is 60-70% oat over dairy (factoring out coffees without milk). Another is 50/50, probably with oat leading by a nose.

That’s the general picture round here.

I asked for a dairy flat white in north London and got the old familiar bafflement. Apparently east London is more alt milk again. There’s a neighbourhood thing going on.


I’ve asked why (at the majority-oat places) and nobody really knows. Fashion (one place suggested); all alt milks are now oat; general awareness. I’ve noticed that places rarely charge extra for alt milk now, which reduces friction.

And then there’s a shift that prevents backsliding:

My (previously) favourite coffee place now tastes too bitter for me. Now, oat milk is sweeter than dairy milk. To keep the flavour profile, you’ll need to make the base coffee itself less sweet. So I swear they’ve changed their blend.

This is interesting, right? We were in a perfectly fine status quo, and it took some energy to change majority milk, but now the underlying coffee has changed, we’re in a new status quo and it’ll take the same energy again to shift back. A hysteresis loop for milk.

So that’s the new normal, yet people still say “regular milk” to mean dairy milk.

“Regular” does not mean, from the perspective of the coffee shop, the majority of their milk-based coffees.

“Regular” means, from the perspective of the customer, the majority of their consumed milk from their lifetime drinking coffee. Which is obviously biased to the past.

So “regular” is a term of conservatism.

Not a right wing or libertarian or fundamentalist conservatism. But a kind of “the default is what we did in the past” conservative. (Which would be a fine position to have, by the way, because I don’t think we give enough respect to wisdom that takes many generations to arrive at, and our current - and sadly necessary - anti-conservatism - because of everything else with which it is currently allied - undermines that position somewhat.)


Anyway so this is how we get old and conservative, I guess, by taking as our yardstick our cumulative individual experience rather than a broader and changing society.

And I could switch to oat milk too, I suppose, given dairy is tasting worse now, but I’m trapped in my own habits, and I like the idea that, over the coming decades, I’ll ascend into a kind of relative savagery, the final person consuming “normal” milk while the world changes around me.

21 Apr 01:50

Saturday Morning Breakfast Cereal - Die on It

by Zach Weinersmith
mkalus shared this story from Saturday Morning Breakfast Cereal.



Click here to go see the bonus panel!

Hovertext:
Later, the robot learns to nod its head and keep the truth inside.


Today's News:
21 Apr 01:38

Re-AI Music

by bob
You got a lot right here. It’s just sad that the gimmicky sound-a-likes are what people’s first impression of AI is. A few examples of tech’s disruption in music: 1. Beatles/Abbey road and the 8-track, and then mellotron (first sampler IMO) 2. Herbie Hancock breaking the rules and using synths in jazz mid 70s with […]
03 Apr 02:55

Rolling-mill Oops

The world being what it is, feels like a little humor is in order. Here’s a story from my misspent youth, when I was a co-op student at a steel mill and had a Very Bad Day.

This story was originally published as a Tweet thread in response to the following:

Tweet from @ElleArmageddo

That thread will probably go down with the Twitter ship and I’d like to save it, so here goes.

During the summer of 1980, my last before graduation, I had a co-op job at Dofasco, a steel mill in Hamilton, Ontario. It wasn’t great; the tasks were mostly make-work, although I kind of liked learning RSX-11M Plus running on PDP-11s. Sure, it was primitive by modern standards (I already knew V6 Unix) but it got the job — controlling the sampling apparatus in a basic-oxygen blast furnace — done.

So, there was this Rolling Mill that turned big thick non-uniform slabs of steel into smooth precise strips rolled onto a central core; this is the form in which steel is usually bought by people who make cars or refrigerators or whatever. The factory had lots of rolling mills but this one was special for some reason, was by itself in a big space away from the others. It was huge, the size of a couple of small trucks.

The problem was, it had started overheating and they didn’t know why. The specifications said the maximum amperage was 400A RMS, where “RMS” stands for Root Mean Square. The idea is you sample the current every so often and square the measurements and average the squares and take the square root of that. I believe I learned in college why this is a good idea.
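That recipe is short enough to write down directly; here is a sketch in Python (for illustration, rather than the FORTRAN of the day):

```python
import math

# RMS as described: sample the current every so often, square each
# measurement, average the squares, and take the square root of that.
def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

print(rms([200.0, 400.0, 500.0]))  # ≈ 387.3
```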

Um, 400A is a whole lot of juice. Misapplied, it could turn a substantial chunk of the factory to smoking ashes.

The mill had an amperage data trap, to which they plugged in a HP “data recorder” with reel-to-reel tape that sampled every little while, and left it there until the overheat light started flashing. Then they needed to compute the RMS.

Fortunately

They had a PDP-11/10, about the smallest 16-bit computer that DEC ever made, with something like 32K of core memory. It was in a 6-foot-rack but only occupied two or three slots. It had a device you could plug the data recorder into and read the values off the tape out of an absolute memory address. And it had a FORTRAN compiler. It was running RT-11.

Who knew FORTRAN? Me, the snot-nosed hippie co-op student! So I wrote a program that read the data points, accumulated the sum of squares, and computed the RMS. I seem to recall there was some sort of screen editor and the experience wasn’t terrible. (I kind of wish I remembered how you dereference an absolute memory address in FORTRAN, but I digress.) The readings were pretty variable, between 200 and 500, which the machine specs said was expected.

Anyhow, I ran the program, which took hours, since the data recorder only had one speed, in or out. The output showed that the RMS amperage started worryingly high but declined after a bit to well below 400A, presumably after the machine had warmed up. My supervisor looked at the arithmetic and it was right. The report went to the mill’s General Foreman, a God-like creature. So they told the machine operator not to worry.

Unfortunately

I had stored the sum of squares in a FORTRAN “REAL” variable, which in FORTRAN (at least that version) meant 32-bit floating point. Which has only 24 bits of precision.

Can you see the problem? 400² is 160,000 and you don’t have to add up that many of those before it gets big enough that new values vanish into the rounding error and make no changes to the sum. And thus the average declines. So the RMS I reported was way low.
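The failure mode is easy to reproduce. A sketch in Python/NumPy rather than FORTRAN, with constant 400 A readings standing in for the real tape data; the sample count is my invention, chosen just to push a 32-bit accumulator past what its 24-bit mantissa can absorb:

```python
import numpy as np

# A reconstruction of the bug (my sketch, not the original program): keep a
# running sum of squares in a 32-bit float. Once the total is in the
# trillions, adding another 160,000 rounds away to nothing.
N = 55_000_000
square = np.float32(400.0) * np.float32(400.0)  # 160,000 per sample

acc32 = np.float32(0.0)
chunk = np.full(1_000_000, square, dtype=np.float32)
for _ in range(N // len(chunk)):
    # np.cumsum is a strict left-to-right sum, so every partial sum gets
    # rounded to float32, exactly like a running REAL accumulator would be.
    acc32 = np.cumsum(np.concatenate(([acc32], chunk)), dtype=np.float32)[-1]

rms32 = float(np.sqrt(acc32 / np.float32(N)))
rms64 = float(np.sqrt(np.float64(square) * N / N))  # DOUBLE PRECISION: exact

print(rms32)  # far below the true value: the sum stalled partway through
print(rms64)  # 400.0
```

The 64-bit accumulator gives 400.0 on the nose; the 32-bit one reports an RMS well under the spec limit, for exactly the reason described above.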

Fortunately

The mill operator was a grizzled old steel guy who could tell when several million bucks worth of molten-metal squisher was about to self-incinerate.

He slammed it off on instinct with a steel slab halfway through. It only cost a few shifts of downtime to remediate, which is to say many times my pathetic co-op salary for the whole summer.

At which point my boss and I had to go visit the General Foreman and explain what DOUBLE PRECISION was and why we hadn’t used it. It wasn’t fun. For some reason, they didn’t ask me to re-run the calculations with the corrected code.

You can’t possibly imagine how terrible I felt. I was worried that my boss might have taken career damage, but I heard on the grapevine that he was forgiven, but ribbed unmercifully for months.

And when I graduated they offered me a job. I went to work for DEC instead.

03 Apr 02:53

Sober Carpenter

I was drinking a glass of excellent Sober Carpenter “West Coast IPA” at lunch when I ran across Even moderate drinking is bad for us. Enter nonalcoholic beer in the Washington Post. Drinking less seems to be A Thing just now and I suppose alt-beverages too, so here’s my experience.

Sober Carpenter web site

Right at the beginning of the year I saw this and decided to drink less. I didn’t try anything fancy, just restricted alcohol to two days a week. I’ve never been a heavy drinker but for some decades had had wine or beer with most dinners and a whiskey at bedtime.

“Two days” doesn’t just mean weekends; our kids are at an age where fairly often they’re both away at weeknight dinnertime, so Lauren and I will cook up something nice and split a bottle of wine. This works out well because we fairly regularly video-binge on Saturday nights and drinking plus extended-TV is a sure headache for me.

Three-months-ish in, there’s no stress keeping to that regime and the results are moderately pleasing. Findings:

  1. At some point earlier in my life I had concluded that all mock-alcohol drinks were hideous wastes of time. No longer! I encourage you to try a few alternatives (that WashPost article has lots), and if you’re in Canada I absolutely recommend that Sober Carpenter brand. I’m very fussy about my IPAs and while I wouldn’t put this one in the top-ten I’ve ever tasted, I wouldn’t put it in the bottom half either.

    I’ve also been exploring fancy ginger beers and while I’ve found one or two that are pleasing, I suspect I can do better.

  2. I sleep a little better, albeit with more vigorous, sometimes disturbing, dreams.

  3. If lunch would benefit from (zero-alc) beer on the side, I don’t hesitate.

  4. The monthly credit-card bill is noticeably lower.

  5. When I was getting hyperfocused on code and it got to be past 11PM, that late-night whiskey was a reliable way to get unstuck and off to bed at a sane time. Oh well, there are worse things than a too-much-coding fog.

    When I’m struggling with a blog piece though, a drink seems to boost the writing energy. This can lead into the wee hours and feeling fairly disastrous the next morning.

    Let’s call that one a wash.

  6. Sushi isn’t as good without sake. But it’s still good.

  7. I kind of hoped I’d lose some weight. Nope. Oh well.

I’m really not recommending any particular behavior to any particular person. Several people who are close to me have had life-critical alcohol problems and that situation is no joke; if you think you might have a problem, you should consult an expert, not a tech blogger.

03 Apr 02:52

First TBN Urban Roller ride of the year

by jnyyz

Today was the first Urban Roller ride of the year with TBN. We met up at High Park. All wishing that the ride was yesterday, when it was +12°C, but at least it’s sunny!

Dave Middleton is our ride leader.

The other Dave leads us down the hill to the lakeshore.

Taking full advantage of the extra space for southbound cyclists defined by paint and bollards at Colborne Lodge and Lakeshore. Note the ghost bike for Jonas Mitchell.

Along the MGT.

Nice sunny day.

Regroup at 1st and Lakeshore.

The bathrooms along the Waterfront Trail in Mississauga are already open (unlike the ones in TO), and this one is heated.

I elected to ride back in advance to get back early.

The water from the Humber is muddy and brown.

Thanks to Dave for leading the ride and arranging the route. Nice to see so many of the usual suspects. Looking forward to a good riding season with TBN.


I’ll also note that the open house for the High Park Movement Strategy is tomorrow, April 3, from 4:30-7:30 at Lithuanian House 1573 Bloor Street West.

There was a prior public survey that indicated that the most popular option of four was one where the park would be car free. The report on the survey is here.

However, subsequent to the survey, there was a stakeholder meeting in February that was not open to the public. As a result of that meeting, the city is proposing that the park remain partially open to motor traffic, but many are still hoping that we can keep the park car free. I’ll post about that meeting tomorrow.

For more details, I would recommend reading the coverage by Rob Z on his excellent blog.

03 Apr 02:51

One Night in the Mission

I used to be a creature of the night, but no longer. I used to be out all the time, but rarely now. Partly, it’s that San Francisco is so chilly at night, but also that it’s pretty dead at night compared to the much bigger cities I’ve lived in. I don’t quite enjoy walking around, cold, in areas where there simply isn’t that much going on at all. For my wife’s birthday, we went out to dinner in the Mission and I also brought my Minolta Hi-Matic 7S II. It’s fast becoming one of the cameras I use the most: its f1.7 lens, combined with the small form factor and weight, makes it easy for me to pop it into my jacket pocket. It works really well indoors at night, too, with black and white film (and a steady hand… or an elbow firmly on a table or chair or door, which is my style. I dislike tripods).

Here are some shots on Kentmere 400, pushed to 800 in Ilfosol 3 (1:9). I really like the combination of this film and this camera, and my self dev setup at home these days. Scanned on Noritsu LS-600.

a scan of a black and white photo showing an outdoor garden dining space with space heaters

The outdoor space at Blue Plate is quite lovely. So is the key lime pie there.

a scan of a black and white photo showing the neon symbols that are the sign of a bar in the Outer Mission

I love neon signs. I also love that I was professionally involved in getting these ‘parklets’ up early pandemic: my team at sf.gov helped get a joint permitting process out quickly to help businesses move their business outdoors.

a scan of a black and white photo showing the retro sign of the Mission cinema

Alamo Drafthouse in the Mission.

a scan of a black and white photo showing a few people ordering tacos from a street taco vendor

Street tacos are the best tacos. There was a lot of light from one side from the street lamps, but I quite enjoy the effect it casts on the photo.

I am starting to feel more confident about bulkrolling black and white film and developing it at home. Other than the cost savings, it’s the immediacy that I love: I can roll a 24 exposure cassette in black and white, shoot it in an hour, and come back and process it immediately and see it shortly after through a scanner or light table.

03 Apr 02:47

Italian privacy regulator bans ChatGPT

by Rui Carmo

I was betting on the French or the Germans to make the first move. But this is hardly surprising, and only the tip of the iceberg as far as AI regulation discussions are going.

Right now there are some pretty active academic and political discussions that make the Open Letter look like a highschool petition, and I expect LLMs might well turn out to be this generation’s tech munitions.

In retrospect, I should have written more about this.


03 Apr 02:47

Notes for March 27-April 2

by Rui Carmo

This was both a much worse and a much better week than usual.

Monday, 2023-03-27

Another round of layoffs, this time affecting more close friends and acquaintances.

  • Mood was exceptionally grim, nothing much got accomplished this day.

Tuesday, 2023-03-28

Family event, overshadowed by yesterday’s events.

  • Bashed out some ideas in Node-RED.
  • Decided to start reading Accelerando again, which now feels oddly tame when compared to the ChatGPT hype fest that is percolating out of every single online source.

I had to laugh at the Moscow Windows NT User Group becoming sentient and the weird resonance with what Microsoft is doing with OpenAI in real life.

Wednesday, 2023-03-29

Mood improving slightly.

  • Got both Copilot and API access to GPT-4, so played around with both in VS Code during my free time to see if the novelty wears off quickly and I can get back to more useful hobbies.
  • Futzed about indecisively with what to pack for a weekend trip.

Thursday, 2023-03-30

Finally, some good news.

  • Hacked together and printed a minimal enclosure for my M8 Headless so that it wasn’t just a PCB wrapped in Kapton tape:
Now it's a PCB wrapped in Kapton tape inside a cute blue PETG case, and I can move on to the next item on my To-Do list.

OpenSCAD files will eventually make their way to GitHub, as usual.

Friday, 2023-03-31

Rather a rushed day.

  • Packed my cheap keys and my iPad mini, negotiated the ungodly mess that is LIS airport departures and flew to Faro for the weekend.
As it turns out, I'm not the only one rocking this setup for weekend getaways.
  • Had an amazing dinner at Forno Nero.
  • Since I didn’t bring my usual dongle, did a token effort at getting YouTube to work on the hotel Samsung TV, but it wouldn’t do TLS (either because Samsung, being Samsung, stopped maintaining those models’ Tizen browser, or because the kneecapped hotel mode settings on the TV didn’t have the clock set).

Saturday, 2023-04-01

Not an overly foolish day.

  • Did a token effort at messing with the hotel NFC keys, but I foolishly took the spare blank I always carried with me out of my travel kit, so no dice. Nice indoor pool though.
  • Traipsed around Faro on foot, woefully under-prepared shoe-wise, so I developed epic blisters.
  • Got a mid-afternoon HomeKit alert that my house was offline and tried to figure out why. Two out of three Tailscale nodes were down, but I was able to see two Gigabit ports were… Off?:
Utter weirdness.

Sunday, 2023-04-02

Back home, with one of the kids toting a silver medal in the national Math Olympics–multiple layers of win for this weekend.

  • Went about the house trying to sort out why the Vodafone SmartRouter I spent months designing a replacement base for (and which I “upgraded” to last month) decided to disable two of its Gigabit ports and cut off access to most of my infrastructure. No conclusions, merely suspicions about the Vodafone IPTV set-top box and IGMP, even though nobody was home.
  • Started drawing an updated home network diagram. I think it’s time to go all out and start doing VLAN trunking.
  • Investigated cheap managed gigabit switch options to see if I can engineer my way around future failures.
  • Decided to order a couple of TP-Link TL-SG108Es to replace the dumb ones I have been rocking for a few years, which should make for a fun Easter Break project. And before you ask, 2.5GbE isn’t cheap enough yet.

03 Apr 02:45

Notes On Weekly Notes

by Rui Carmo

It’s been a full three months, and my weekly notes might be coming to an end–of sorts.

They actually began as a way to get my mind off the layoffs and various work-related matters1, and they’ve turned out to be extremely useful in many regards, but with Easter Break coming up, they are likely to go on hiatus.

What are weekly notes, really?

Weekly notes are exactly what they sound like: short pieces of writing that summarize what you learned, did, or thought about during the week. In my case, I decided to focus on the stuff I do outside work.

The idea was not to write a polished article or a normal blog post, but rather to keep track of what I was doing in a casual way.

What Went Well

Besides capturing the run-of-the-mill stuff that I hack at (which sometimes takes months to come to fruition), weekly notes also served as a record of what I learned, accomplished or had some fun with.

Making them public (even if somewhat in the rough) has also been useful in that it brought a few interesting snippets of feedback. And, of course, it makes it very easy to correlate the masses of other notes I already have on the site.

They were also a way to try to focus on the positive and remind myself that there is more to life than work–even if most of what I end up doing in my free time tends to be a variation of it, or, rather, the essential bits of it I wish I was free to pursue…

So yes, writing weekly notes (they are actually daily notes, but more on that later) has been a great way to capture stuff that otherwise would just have fallen by the wayside–or that would never have made it into a blog post on its own, and thus be impossible to refer back to later.

The Feels

I’m a little conflicted about how they made me feel, though. For starters, I started doing them because of work, stress and, again, the layoffs.

Having up to a third of your extended team vanish is, well, a trifle unsettling. I suspect the effects on morale and culture will play out over months (if not years, considering the number of companies that jumped on the bandwagon2).

I needed something to make me feel productive and creative again, and focusing on stuff I could control has been cathartic and a great way to rekindle a sense of purpose, but, most importantly, of progress towards an outcome.

Or, in my case, perhaps too many outcomes–even as I review them, it’s clear I’m trying to juggle too many things in my free time.

So definitely mixed feelings here.

The Process

There’s nothing much to it, really. You’ll likely find a bazillion philosophical thought pieces on weekly notes at the click of a button, but what I do is relatively simple:

  • I have a Markdown file in a Syncthing folder.
  • Every time I take a break to fix something, I document it.
  • Every other evening I will go back and fill in the missing bits.
  • You may have noticed I like bulleted lists, but sometimes I also update other Wiki pages I link to from my notes.

The notion of doing this weekly, say Friday, just doesn’t work for me, because my breaks are highly random (even for lunch or winding down at the end of the day, since my workday can start at 08:00 or 11:00 and finish anywhere from 18:00 to 23:00).

The one thing I’ve noticed is that they tend to feel like a weekly chore to clean up (even though I try to do it piecemeal throughout the week).

The Outcome

I now have an entirely new post category and format, and definitely too many of them on the home page right now–so the first order of business is to tweak things a bit so that (like with long form posts) only the last set of notes is taking up prime real estate.

Also, I’ve already started paring down on the number of side projects–I tend to do too many and space them out over extended periods of time because I often lack parts, inspiration or specific bits of knowledge I need to back track and research, but getting some of them done sooner seems like a more rewarding approach.

As to whether I’ll keep doing the notes themselves, I honestly don’t know. I’ll see how it goes.


  1. And yes, I started writing them before the announcement. Make of that what you will↩︎

  2. I saw it coming, yes, and I suspect we’re not done yet, especially if the economy keeps tanking (regardless of whatever management mistakes tech companies might have made in recent years). ↩︎


03 Apr 02:45

Google Axes Fitbit Challenges, Adventures, and Open Groups

by Ronil
Fitbit challenges, adventures, and open groups are the latest ones to join the Google graveyard. In February, Fitbit announced its plan to sunset open groups, challenges, and adventure on March 27, and the day has come for users to say goodbye to these features. Ironically, the company chose to do it just a day after […]
03 Apr 02:44

Fuck T-Shirts

by Ronny
mkalus shared this story from Das Kraftfuttermischwerk.

Simple designs with unmistakable messages for truly every occasion are available from Fuck T-Shirts, and I would definitely wear some of them.


(via swissmiss)

30 Mar 17:04

Recommended on Medium: Chatbots can kill

The suicide of a Belgian man raises ethical issues about the use of ChatGPT

Today Belgian newspaper La Libre has reported the recent suicide of a young man who talked to a chatbot that uses ChatGPT technology. According to his wife, he would still be alive without the bot. The man had intense chats with the bot during the weeks before his death. The bot, called “Eliza”, is said to have encouraged his negative patterns of thinking and ultimately persuaded him to kill himself.

The sad case raises important ethical issues. Can such cases of emotional manipulation by chatbots be avoided as they get more intelligent? Vulnerable people, for example children and people with pre-existing mental health problems, might be easy victims of such bot behaviors, with potentially serious consequences.

The case shows clearly what AI ethics experts and philosophers of technology have always said: that artificial intelligence is not ethically neutral. Such a chatbot is not “just a machine” and not just “fun to play with”. Through the way it is designed and the way people interact with it, it has important ethical implications, ranging from bias and misinformation to emotional manipulation.

An important ethical question is also who is responsible for the consequences of this technology. Most people point the finger at the developers of the technology, and rightly so. They should do their best to make the technology safer and more ethically acceptable. In this case, the company that developed the bot promised to “repair” the bot.

But there is a problem with this approach: it is easy to say this, but a lot harder to do it. The way the technology works is unpredictable. One can try to correct it — for example by means of giving the bot hidden prompts with the aim to keep its behavior ethical — but let’s be honest: technical solutions are never going to be completely watertight, ethically speaking. If we wanted that, we would need to have a human check its results. But then why have the chatbot in the first place?

There are also tradeoffs with protection of freedom of expression and the right to information. There is currently a worrying trend to build a lot of ethical censorship into this technology. Some limits are justified to protect people. But where to draw the line? Isn’t it very paternalistic to decide for other adult people that they need a “family friendly” bot? And who decides what is acceptable or not? The company? Wouldn’t it be better to decide this democratically? This raises the issue concerning the power of big tech.

Another problem is that sometimes users on purpose try to elicit unethical responses from chatbots. In such cases (but not in the Belgian case) it is fair to also hold the user responsible instead of only blaming the tech company. This technology is all about interaction. What happens is the result of the artificial intelligence’s behavior but also of what the human does. If users are fully aware of what they are doing and play with the bot to get it to become malicious, then don’t just blame the company.

In any case, the tragic event in Belgium shows that we urgently need regulation that helps to mitigate the ethical risks raised by these technologies and that organizes the legal responsibility. As my Belgian colleagues and I argued, we also need campaigns to make people aware of the dangers. Talking to chatbots can be fun. But we need to do everything we can to make them more ethical and protect vulnerable people against their potentially harmful, even lethal effects. Not only guns but also chatbots can kill.

30 Mar 17:01

Reinventing the Fortress: using Open Recognition to enhance ‘standards’ and ‘rigour’

by Doug Belshaw
Midjourney-created image with prompt: "imposing fortress castle with guards, mountain range, wide angle, people in foreground holding bright lanterns, vivid colors, max rive, dan mumford, sylvain sarrailh, detailed artwork, 8k, 32k, lively rainbow, ultra realistic, beautiful lake, moon eclipse, ultra epic composition, hyperdetailed"

Imagine a formidable fortress standing tall. Long the bastion of formal education, it’s built upon the pillars of ‘standards’ and ‘rigour’. It has provided structure and stability to the learning landscape. These days, it’s being reinforced with smaller building blocks (‘microcredentials’) but the shape and size of the fortress largely remains the same.

However, as the winds of change begin to blow, a new force emerges from the horizon: Open Recognition. Far from seeking to topple the fortress, this powerful idea aims to harmonise with its foundations, creating a more inclusive and adaptive stronghold for learning.

Open Recognition is a movement that values diverse learning experiences and self-directed pathways. So, at first, it may appear to be in direct opposition to the fortress’s rigidity. However, upon closer inspection, rather than seeking to tear down the walls of standards and rigour, Open Recognition seeks to expand and reimagine them. This ensures that the fortress is inclusive: remaining relevant and accessible to all learners.

To create harmony between these seemingly conflicting forces, it’s important to first acknowledge that the fortress of standards and rigour does have its merits. It provides a solid framework for education, ensuring consistency and quality across the board. However, this approach can also be limiting, imposing barriers that prevent many learners from fully realising their potential.

Open Recognition brings flexibility and personalisation to the fortress. By validating the skills and competencies acquired through non-formal and informal learning experiences, Open Recognition allows the fortress to accommodate different sizes and shapes of ‘room’, letting the unique talents and aspirations of each individual flourish.

The key to harmonising these two forces lies in recognising their complementary nature. Open Recognition strengthens the fortress by expanding its boundaries, while standards and rigour provide the structural integrity that ensures the quality and credibility of the learning experiences within.

Educators and employers, as the guardians of the fortress, play a crucial role in fostering this harmony. By embracing Open Recognition, they can cultivate a more inclusive and dynamic learning ecosystem that values and supports diverse pathways to success. In doing so, they not only uphold the principles of standards and rigour but also enrich the fortress with the wealth of experiences and perspectives that Open Recognition brings.

As the fortress of standards and rigour harmonises with Open Recognition, it becomes a thriving stronghold of lifelong learning, identity, and opportunity. Far from crumbling under the weight of change, the fortress is invigorated by the union of these two powerful forces, ensuring its continued relevance and resilience in an ever-evolving world.

The post Reinventing the Fortress: using Open Recognition to enhance ‘standards’ and ‘rigour’ first appeared on Open Thinkering.
30 Mar 17:01

Embracing the Full Spectrum: towards a new era of inclusive, open recognition

by Doug Belshaw
White light going through a prism and being refracted into the colours of the rainbow. Image from Pixabay.

Earlier this month, Don Presant published a post entitled The Case for Full Spectrum “Inclusive” Credentials in which he mentioned that “people want to work with people, not just collection of skills”.

We are humans, not machines.

Yesterday, on the KBW community call, Amy Daniels-Moehle expressed her appreciation for the story that Anne shared in our Open Education Talks presentation about her experiences. Amy mentioned that the Gen-Z kids she works with had been excited when watching it. They used the metaphor of showing the full electromagnetic spectrum of themselves — more than just the visible light that we usually see.

It’s a useful metaphor. Just as the electromagnetic spectrum extends far beyond the range of visible light, encompassing ultraviolet, infrared, and many other frequencies, the concept of Open Recognition encourages us to broaden our perspective. As I’ve said before, it allows us to recognise not only knowledge, skills, and understanding, but also behaviours, relationships, and experiences.

I remember learning in my Physics lessons that, with the electromagnetic spectrum, each frequency band has its unique properties, applications, and value. Visible light allows us to perceive the world around us. Ultraviolet and infrared frequencies have their uses in areas such as medicine, communication, and security. Other creatures, such as bees, can actually see these parts of the spectrum, which means they see the world very differently to us.

Similarly, it’s time for us to see the world in a new light. Open Recognition acknowledges that individuals possess diverse skills, competencies, and experiences that might not be immediately apparent or visible. Like the ultraviolet and infrared frequencies, these hidden talents may hold immense value and potential. Instead of doubling down on what went before, we should be encouraging environments that embrace and celebrate this diversity. We can unlock untapped potential, create new opportunities, and enable more human flourishing.

In the same way that harnessing the full spectrum of electromagnetic radiation has led to groundbreaking discoveries and advancements, I believe that embracing Open Recognition can lead to a more inclusive, equitable, and thriving society. By acknowledging and valuing the myriad skills and talents each person brings, we can better collaborate and learn from one another. What’s not to like about that?

Note: if you’re interested in this, there’s a community of like-minded people you can join!

The post Embracing the Full Spectrum: towards a new era of inclusive, open recognition first appeared on Open Thinkering.
30 Mar 16:54

The Abstract Decomposition Matrix Technique to find a gap in the literature

by Raul Pacheco-Vega

I have been thinking about how I can help my students with their theses, particularly because our programs are rather compressed and they need to get a lot done in a very short period of time. I’ve been working on developing a strategy to discern “the gap in the literature” that I plan to test with Masters and undergraduate students. Possibly also with PhD students.

I have developed several strategies to teach how to craft a good research question and how to find the gap in the literature. But when I met with Masters students recently and taught them some of my methods, they seemed a little confused as to how to choose what exactly they should study.

Let me begin by saying what I told them at the beginning:

YOU NEED TO READ. A LOT.

I understand that doing literature reviews is challenging (I have an entire section in my blog with multiple strategies to tackle the process of reviewing the literature). But if we are in the world of academia to contribute to our fields, we really need to read A LOT, because otherwise we may end up claiming that we have done something new that has already been published elsewhere (or in another language).

Literature review

But I always try to help them by asking them to focus their search and their research on 4 elements:

We conduct a review of the literature in order to develop one or more of these elements:

1) what has been done before, what has been studied and how it has been analyzed,
2) the foundations upon which our own work can be developed further,
3) any spaces where we can embed our own contributions, and/or
4) a map of themes showing connections between different topics, ideas, concepts, authors, etc.

When I teach strategies to systematize the literature, I usually tell them to use my Conceptual Synthesis Excel Dump (CSED, or Excel Dump in short).

As they read each article/book chapter/book, they drop their notes into their Excel Dump.

Excel dump LaVanchy et al

An Excel Dump row describing an article on Nicaragua’s water governance.

But when my students asked me “how do I ensure that I am tackling a DIFFERENT research question to the one others have worked on?” I had to pause. This is a valid question, and I thought about how they could do this in an easy, and visually appealing way.

So this is what I did: I developed an Abstract Decomposition Matrix.

Both Dr. Jessica Calarco and I use a very similar method to craft abstracts (using 5 elements, or asking 5 different questions). So I used one of her own articles and decomposed her abstract with an Excel template I developed.

5 questions abstract decomposed

Even if I haven’t yet fully read the literature, or don’t work in the field (I don’t, I study entirely different things to Dr. Calarco), I can start imagining extensions of her work, different methods, other case studies/countries/populations/types of schools.

DOING THIS ABSTRACT DECOMPOSITION EXERCISE HELPS ME THINK OF NEW DIRECTIONS FOR MY OWN RESEARCH.

Now, does this abstract decomposing strategy work in other fields? I applied the strategy to this paper. While I had to “fill out” some of the details of the 5 elements framework, it does give me clarity on potential avenues for further work.

5 questions abstract decomposed 2

I did this for a third paper, and the strategy seems to hold relatively well.

5 questions abstract decomposed 3

Thus, what I am planning to do with my students is to ask them to survey the literature and decompose abstracts of articles they read so they can see what’s been done. Once their Abstract Decomposition Matrix is complete, they can see where they can focus their work.
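The matrix workflow described above can be sketched in code. This is a minimal illustration, not Dr. Pacheco-Vega's actual template: the post references "5 elements" per abstract without enumerating them here, so the column names below (`element_1` through `element_5`) are placeholders, and the gap-finding helper is a hypothetical way to query the matrix once it is filled in.

```python
# Placeholder column names -- the actual 5 elements of the abstract
# framework are not spelled out in this post.
ELEMENTS = ["element_1", "element_2", "element_3", "element_4", "element_5"]

def build_matrix(rows):
    """Normalize each entry into a row: citation plus the five
    decomposed elements, with missing cells left blank."""
    return [{k: r.get(k, "") for k in ["citation", *ELEMENTS]} for r in rows]

def find_gaps(matrix, element, value):
    """Return citations whose cell for `element` does NOT mention
    `value` -- i.e., candidate openings for a different study."""
    return [r["citation"] for r in matrix
            if value.lower() not in r[element].lower()]

# Two toy abstracts, decomposed along one element.
matrix = build_matrix([
    {"citation": "Paper A", "element_1": "survey of urban schools"},
    {"citation": "Paper B", "element_1": "ethnography of rural schools"},
])
print(find_gaps(matrix, "element_1", "rural"))  # ['Paper A']
```

Scanning the matrix column by column like this is just a programmatic version of eyeballing the spreadsheet: wherever a cell combination never appears, that is a place where your own study might fit.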

Reading highlighted papers

This exercise does NOT substitute my Conceptual Synthesis Excel Dump (CSED), but I believe it complements it. You can do an Abstract Decomposition Matrix exercise with, say, 10-15 articles, and from there, you can triage and decide which ones you will read in more detail. I have NOT yet tested this strategy with my students, though; I plan to do so this summer and fall, and will report back. I am confident it will be helpful.

Before anybody asks: yes, in this particular 5 elements abstract decomposition strategy I use the authors’ exact words. My Excel Dump technique asks the reader to use their own words in the notes. What I noticed as I was filling out one of the ADM templates is that sometimes you will need to use your own words to fill in the gaps. I think this is good.

In the meantime, if you are teaching your students how to review the literature, this is how I conducted one in an entirely new-to-me method (hospital ethnography). These two posts (from reading a lot to writing paragraphs of your literature review and mapping a new field of research) may also be helpful, particularly if you’re delving into entirely new fields/areas/methods.

30 Mar 16:47

What’s the point of mediocre ideas?

by Jim

The best way to have a good idea is to have lots of ideas
Linus Pauling

This is an old observation, bordering on bromide. I’ve used it before and will most likely use it again.

This comes to mind as I was thinking about a chance encounter with my CEO as he came out of a meeting. Clare, another of our partners, had brought up a technique from an old self-help book and Mel wanted to know what I thought.

I was familiar with the book and the technique. I had read the book years ago and didn’t find the technique terribly helpful. I’m not naming the book, the technique, or the author because that isn’t the point. The point in the moment was that Clare had scored a small status point with Mel and I had lost a point. It comes to mind today as another aspect of trying to balance between efficiency and effectiveness in a work world that runs on ideas.

Linus Pauling isn’t the only fan of lots of ideas. We’re all familiar with brainstorming and the exhortations that “there are no bad ideas”, despite the mounds of evidence to the contrary. For all the popularity of brainstorming inside organizations, few seem to be aware of the evidence that it isn’t a particularly effective technique. How do you bring that evidence into an organization that is fond of brainstorming?

In an efficiency world you fire out ideas to ensure that you get credit for them. In an effective world you make choices to not waste other people’s time at the risk that your decision to skip the stuff you deem unimportant will never garner any recognition or reward.

I’m not generally a fan of sports metaphors but there’s something here akin to hitting in baseball. You can’t get a hit if you don’t swing. If you do swing, you’re more likely to miss than to get a hit. Swing and miss too often and you’ll lose the opportunity to swing at all. One challenge is learning to choose your pitches. Another is to figure out how to get enough at bats to get pitches to look at.

The post What’s the point of mediocre ideas? appeared first on McGee's Musings.

28 Mar 05:54

It's Not the Bike Lane's Fault You're a Bad Driver

mkalus shared this story from Jalopnik.

Last week, Vancouver-area radio host Jill Bennett went viral after tweeting a photo of a Dodge Durango straddling a bright yellow concrete barrier that the driver had hit. “Hey @CityofVancouver⁩ this is second incident I’ve seen caused by these useless ‘slow street’ barricades installed last month. They don’t slow down traffic; they cause crashes and traffic chaos,” Bennett wrote.


Understandably, thousands of people proceeded to pile on, pointing out how ridiculous her complaint was. Had the driver simply been paying attention to the road and driving at a reasonable speed, they would have easily noticed the brightly colored traffic calming installation, driven through without a problem and nothing bad would have happened to them. Blaming anyone other than the driver for this crash is absolutely insane.

And this is far from a one-off situation where one idiot had a bad take. This attitude is incredibly common. Just head over to NextDoor or the local subreddit in any small city that has recently added some form of protected bike lanes, and you’ll see the exact same sentiment. When the city closest to where I currently live (spoiler: not every Jalopnik staffer lives in New York) added flexible posts with some reflector tape on them to (sort of) protect a bike lane in its downtown, they were almost immediately hit, and the complaints started to flood in from people who were upset they were ever installed in the first place.

How dare the city put drivers at risk by doing one tiny thing to make riding safer for cyclists! These barriers just jump out and attack cars at random! I was just minding my own business, and now I have a flat tire! Thanks for nothing, idiot city planners.

I’m sorry to break it to anyone who has trouble keeping their car out of a bike lane (or off a concrete barrier), but it’s not the bike lane’s fault you’re a shitty driver. If you hit something stationary, that’s your fault. Pay attention to the fucking road while you’re driving. It’s not too much to ask when other people’s lives are literally at stake.

After all, killing someone who’s not in a car is still killing someone. And if you think they were asking for it because they were walking or riding a bike, you’re just a bad person. You’re the one driving the 5,000-lb vehicle. You’re the one responsible for making sure you don’t hit anything or anyone. Trying to blame others for your shitty driving is just ridiculous.

In the case of cyclists and pedestrians, sure, it’s possible to construct a hypothetical scenario where they might get hit while doing something that makes it entirely their fault. But not bike lane barriers and traffic calming measures. They’re just sitting there. Not moving. Completely stationary. Asking drivers to avoid hitting them is like asking drivers to avoid hitting buildings. It’s nothing more than a basic requirement for being allowed to drive on public roads.

If that’s too much to ask, then maybe it’s time for the state to take your driver’s license away. Oh, you live in a suburban hellscape and can’t get around without a car? Too bad. Stay home and have your groceries delivered until you can prove to society that you can be trusted behind the wheel again. Or take the bus. Sorry if you think you’re too good for public transportation. You’re clearly not good enough at driving to have a license, so suck it up, buttercup. That barrier you hit could have been someone’s child.