Shared posts

28 Mar 02:23

Mom, I want what the other kids eat for lunch – a review of Riceboy Sleeps

by Becky Tu

Mom, I want what the other kids eat for lunch.

What seems like a simple request feels devastating as it unfolds on the screen, watching the mother’s face fall even as she agrees. I am instantly thrown back to my eight-year-old self, asking the same thing of my own mom. I wonder now if she wore the same expression as So-young, the main character of the award-winning family drama Riceboy Sleeps, did.

Immigrating to Canada is a new start, a hopeful beginning, a step towards a better life. It is also leaving behind a life, giving up the past, and starting over. In the film, there aren’t many ties keeping the mother and son in Korea, so in a way the move might have been a breath of fresh air. But starting from scratch in a new country is no easy feat.

The film is inspired by the director’s life and relationship with his own mother. Anthony Shim spent part of his childhood in Coquitlam, where a Korean community has slowly and surely settled. We can see painfully real memories of the immigrant experience as the son, Dong-hyun (played by Dohyun Noel Hwang and Ethan Hwang), adjusts to a new language and a new environment where his classmates and neighbours look nothing like him.

I also immigrated to Canada at a young age, and these scenes moved me deeply. I remember sitting down with my parents, a list of english names before me, and having to make a choice. On the one hand, it allowed me to make friends at school. On the other hand, it wasn’t until years later that I realized I’d packed away a part of myself and left it to collect dust. How do you reclaim something you readily gave up?

It is the first step to assimilation: attaching your identity to strange syllables and vowels so teachers can pronounce them more easily.

Assimilation does not always feel violent. Often, Shim muses, when Asian immigrants land, they want to acclimate. It is a way of protecting themselves, knowing they will always be perceived as foreigners first, with their black hair and brown eyes. They might not be able to prevent the spill of their accented english, but they can make sure their child’s speech matches the voices on the radio.

“They think, ‘I will never be a Canadian parent, I will never fit in,’” Shim says in an interview with Pancouver, on the topic of first-generation immigrants. “‘But maybe my child can.’”

This film is very much a tribute to his mother, and to all the immigrant parents who made the hard choice of uprooting their entire lives for their children. As children, we never really understand the sacrifices our parents make. We see little Dong-hyun running through the carpeted hall of their home while So-young vacuums around him; we see teenage Dong-hyun sprawled asleep in his bed as So-young picks up after him. Paired with Choi Seung-yoon’s commanding presence on screen, it is easy to see this mother’s strength, determination, and courage.

What stands out to me, as someone who identifies as an Asian-Canadian woman, is how So-young refuses to be pigeonholed into stereotypes of submissiveness. She refuses to back down to the white men at her workplace when one of them slaps her ass—when no one stands up for her, she stands up for herself. She is not afraid to show her anger and disgust with Dong-hyun’s principal, who might take advantage of her awkward english and expect her to quietly acquiesce. Perhaps because she is well aware that, as a single mother and a new immigrant, there is no one else in their corner.

Her willingness to speak up for herself and her son reflects the director’s intent in portraying Korean women with justice and honesty. “I wanted to be part of changing what North American people’s view of Korean women was,” says Shim.

Some people might not enjoy immigrant stories because they’re full of hardships and trauma. Riceboy Sleeps gracefully balances the heartache of things lost in translation and the gentle triumph of persevering.

Take how the film shows the struggle of holding onto cultural identity and adapting to life as a “Canadian”. When we meet Dong-hyun as a teenager, he is putting in coloured contacts to go with his dyed blond hair. Yet he converses with his mother in english and Korean, switching between the two with a fluidity that is second nature.

The duality of language, food, and customs is a conflict immigrants confront every day. You can set down roots in this new place, but can you forgive yourself for the changes you made to do so?

Even as So-young is asking her son to take on an english name, she tells him it’s okay if he doesn’t choose one; if he wants to remain Dong-hyun, that’s okay with her. Seeing this moment in the trailer, even before I had the context of the film, gave me a lump in my throat. My non-english name still remains my legal name, but even my family rarely calls me that anymore. So-young calls her son by both names, the english name outside, the Korean name at home. Perhaps this is her way of holding onto the familiar—and why she never chose an english name for herself.

To me, that’s where this film really succeeds: the warmth and love that persists between this mother and son amidst the difficult and confusing battle of becoming Canadian.

Riceboy Sleeps was shot mainly in Greater Vancouver, with some scenes in South Korea.

I think I never thanked my mom for making me those sandwiches. Because it was never about the sandwich—not even when, in later years, my non-Asian classmates flaunted their fried rice and dumplings while I struggled to swallow around Wonder bread and ham stuck together by melted Kraft singles. It was about a dream for a better future, a choice made in the hope that more choices would become available.

Riceboy Sleeps is not a grand story about overcoming racism. It’s not a sobfest—even if I personally cried the whole way through it. In the director’s own words, it’s a story about “a mother and son searching for a home.” And sometimes, home is not a physical place, and identity is not meant to be packed into a lunchbox.

 

Here’s one piece of dialogue from the film that I’ll leave you with:

The son asks, “Should we go back?”
To which the mother replies, “We can’t.”

 

Riceboy Sleeps opens in theatres across Canada today, March 17.


27 Mar 01:07

Ghost bike installation in Oakville

by jnyyz

On Thursday, March 9, a cyclist was struck and killed on Upper Middle Rd in Oakville. Today was the ghost bike ride in his memory.

The riders gathered at the Oakville GO station.

Photo: Geoffrey Bercarich

Councillor Tom Adams was there to express his condolences. He has been one of the local councillors engaged with Vision Zero and the active transportation file. He said that there is a series of environmental assessments planned as part of a process to roll out safer street design, including along Upper Middle Rd.

David Shellnut’s bike had a stripped skewer, and the fellow to the right kindly made a quick trip home to get him a spare.

Joey describes the route shortly before we depart.

Doing a quick ride by for the CBC cameraman.

Departing the GO station parking lot.

Approaching the underpass at the end of Lyons Lane.

Geoffrey through the P gates.

Riding north on Sixth Line.

Just after the turn onto Upper Middle Rd. This is not a bike-friendly street.

Approaching the crash site.

There were some candles, fruit offerings, and flowers already here.

Chaining up the bike.

Unfurling the banner.

A minute of silence for the deceased.

Photo: David Shellnut
Photo: David Shellnut

Geoffrey talks to the CBC.

Mark Anderson from Cycle Hamilton rode all the way out, and was planning to ride home as well.

An eyewitness to the crash happened to walk by. She said that the driver suddenly changed into the curb lane in preparation for a right turn onto Eighth Line, hitting the cyclist from behind.

At Trafalgar and Iroquois Shore Rd on the way back.

On the GO train.

Thanks to everyone who rode with us on a cold, windy day. This was the third ghost bike installation in Oakville by ARC over the past few years, following Helen Xiang in 2020, and Ignacio Viana in 2021.


Deepest condolences to the family and friends of the departed.

Update: CBC video coverage.

24 Mar 02:33

The Best Bluetooth Wireless Headphones

by Lauren Dragan
Our four picks for the best Bluetooth wireless headphones lying next to each other.

Earbuds are discreet and convenient to use, but if you want to get the best wireless sound for the money, choose over-ear Bluetooth headphones.

Although no single set of headphones is a perfect match for everyone, the JBL Tour One M2 comes darn close. We love this pair’s customizable sound, stellar call clarity, and days-long battery life—plus, it includes some bonus features usually reserved for pricier pairs.

24 Mar 02:23

Google Camera 8.8 Brings Faster Night Sight Processing on Pixel 6

by Ronil
Google has brought faster, improved Night Sight processing to the Pixel 6 series phones. Night Sight allows you to take photos in low-light conditions without using the flash. The faster processing was first introduced with the Pixel 7 and Pixel 7 Pro and is now coming to the Pixel 6 series as […]
21 Mar 15:20

On Large Language Models

by Rui Carmo

I’ve been pretty quiet about ChatGPT and Bing for a number of reasons, the most pertinent of which is that I have so much more going on in my life right now.

But I think it’s time to jot down some notes on how I feel about Large Language Models (henceforth abbreviated to LLMs) and the current hype around them.

And I’m going to try to do that from the perspective of someone who:

  • Graduated from college soon after the peak of the 90’s AI Winter (yeah, I’m old–we call it “experience” these days)
  • Actually decided not to major in AI (but rather in more networking-focused topics) because of said Winter, although I went and racked up my point average by acing AI coursework as optional credits.
  • Survived several hype cycles over the past 30 years.
  • Dove into analytics and data science during the “resurgence” in 2012 and enjoyed it immensely (as well as racking up a few ML certifications) before getting sucked into telco again.
  • Spends an unhealthy amount of time reading papers and mulling things.

Plus the field is evolving so quickly that I’ve drafted this around four times–all the while progressively shrinking it down to a quick tour of what I think are the key things to ponder.

How Smart is an LLM, anyway?

I’m going to start with an obvious fact, which is that LLMs just seem to be smart. Sometimes recklessly so.

Yes, typical outputs are vastly better than Markov chains, and there is a tendency to draw a rough parallel with running the probabilities for the next token through the LLM.

As people like Tim Bray have pointed out, that is seriously underestimating the complexity of what is represented in model weights.

The reason why the Markov analogy breaks down is that LLM output is not probabilistic–there is randomness involved in setting up inference, sure, and sequential correlation between output tokens, but the factors driving the output are several dozens of orders of magnitude above what we were used to.

Random outcomes like the LLM starting to hallucinate are just par for the course of a neural network trying to go beyond the training data, or focusing attention on parts that lack enough conditioning to have a decent output.

But going back to the initial point, there is zero “knowledge” or intelligence in an LLM. There are impressive amounts of correlation, to be sure, but the core principle harks back to the first AI Winter–it’s just that we’ve crossed a quality threshold that seemed hitherto unattainable.

It may look like emergent behavior, but that is simply because we can’t trace every step that led to the output. There is no agency, nor real “understanding”.

And, as anyone who’s read Douglas Hofstadter will point out, there is also no “strange loop” or a coherent capability to self-reference–the outputs are just the result of navigating an LLM’s internal representation of massive amounts of data, and they’re entirely functional in more than one sense of the word.

Things Are Just Getting Started

Shoving all those orders of magnitude into something that can fit into an enterprise-class GPU (or, increasingly, a GPU and a hefty set of NVMe drives) takes quite a toll, and training LLMs requires massive computational power that is (for the moment) outside an individual’s reach.

But that is certain to change over time, and inference is already possible on consumer-grade hardware–as this past couple of weeks’ spate of news around llama.cpp proves, there is a lot of low-hanging fruit when it comes to optimizing how the models are run, and at multiple levels[1].

Although things like weight quantization degrade the output quality quite a bit, I expect more techniques to pop up as more eyes go over the papers and code that are already out there and spot more gaps and tricks to run LLMs efficiently.
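
For a sense of where that quality loss comes from, here is a toy illustration of symmetric 8-bit weight quantization and the round-trip error it introduces–purely illustrative (it assumes numpy and a fake weight tensor), not any particular runtime’s implementation:

import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 plus a single per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32) * 0.02  # stand-in for a weight tensor
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# The reconstruction error is small but nonzero, and it compounds across
# every layer of the network–hence the quality drop.
print("mean absolute error:", float(np.abs(w - w_hat).mean()))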

And despite the fact that the spotlight is on OpenAI and the massive cloud infrastructure required, I personally find it a lot more interesting to figure out how low LLMs can go and still produce coherent results.

This is because I have fairly high hopes for tailored models, and see a lot of value in having fully on-premises and even embedded solutions–I know I’m bucking the trend here, but the history of computing is one of decentralization, and you’re probably reading this on a smartphone… So my point should be obvious.

What Are LLMs Good For?

Having spent entirely too long dealing with customer support and call centers (I actually find the generic “chatbot” thing extremely annoying, and resisted getting into building those, but such is life), I’d say that, at the very least, LLMs are certain to take virtual assistants and support chatbots to the next level.

And no, this is not a new idea–it’s been hashed to death over the years, and the real problem is that most support knowledge bases are useless, even if you manually tag every snippet of information and carefully craft interaction flows. Traditional chatbots (and even summarization-driven ones) simply suck at doing the kind of basic correlation even a script-driven, barely trained human can pull off on autopilot, and hacking them together was always a brittle and unrewarding endeavor.

But an LLM is trained on other content as a baseline, which gives it a much better ability to fill in the gaps in such knowledge bases, and it certainly has better conversational skills than a goldfish–and I can see LLMs doing a decent job with highly patterned, formalized inputs like legal documents, medical reports, retail catalogues, etc.

How Reliable Are These Things?

To be honest, right now, not very. I wouldn’t rely on any publicly available LLM for decision-making of any kind (coding, advice, or even accurate summarization), although every iteration improves things noticeably.

Sure, some of the humor and “style transfer” is pretty hilarious, but LLMs still have trouble with basic math, let alone writing reliable code[2]–they’re not even that useful at “rubber ducking” a problem.

Outputs are generally shallow and LLMs still have trouble creating coherent long-form text without hallucinating, but I do think they can be useful as baselines for a human to improve upon, as long as that person has a good enough grasp of the problem domain to spot obvious flaws in “reasoning” (not just inaccuracies, but also gaps) and the willingness to double-check any references.

Of course, any of those sanity checks seem absent from a lot of the hype-driven discussions I’m seeing online… But, more to the point, LLMs do seem to knock things out of the park for short interactions.

Which is why I think the search market disruption gambit is going to pay off handsomely–LLMs make for a much better search experience because you get adjacent information you would otherwise be unable to get from either direct or statistical matches (and you don’t get pesky ads, keyword squatters, etc.)

How Manageable Are These Things?

This is where I have the most doubts, to be honest.

The current “programming paradigm” is hopelessly primitive, and all the early deployment shenanigans prove it–prompt stealing and prompt injection attacks (which can be much more interesting than you’d expect) remind me of all the loopholes Asimov managed to squeeze out of The Three Laws of Robotics.

Plus the ease with which the models “hallucinate” and veer off into the wild blue yonder was, until recently, being dealt with by ham-fisted tactics like limiting the number of consecutive interactions with the model.

In short, it all feels… very Sorcerer’s Apprentice, to be honest.

And I don’t think “stacking” models or just creating embeddings is going to help here–long-term curation of model inputs is going to be key.

Which means time-consuming, costly, and ever more challenging work to improve general purpose LLMs, especially those targeting search (where having non-AI generated training sets is going to be harder and harder).

Fast Iteration, But What About Fast Training?

Another important constraint that is being glossed over is that there is no easy, immediate feedback loop to improve an LLM–in the current chat-like interaction models you can add more context to a session, but:

  • It doesn’t really “stick”–sometimes not even across subsequent invocations (even if the session wrappers are continuously improving, you’re effectively adding stubs to the original prompt, and that can only go so far; see the sketch after this list).
  • Any on-the-fly corrections don’t become part of the core model (you need to have a full training iteration).
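
As a concrete (and deliberately simplified) illustration of the point above, here is a sketch of how a chat “session” wrapper typically behaves–call_llm is a hypothetical placeholder, not any vendor’s API:

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real completion API call."""
    raise NotImplementedError

class ChatSession:
    def __init__(self, system_prompt: str):
        self.transcript = system_prompt

    def send(self, user_message: str) -> str:
        # Every turn stitches the entire history back into a single prompt;
        # corrections live in that prompt, never in the model weights, and
        # once the transcript outgrows the context window the oldest turns
        # (corrections included) simply fall off.
        self.transcript += f"\nUser: {user_message}\nAssistant:"
        reply = call_llm(self.transcript)
        self.transcript += reply
        return reply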

These things can be worked around, but are fundamental limitations–and yet, they don’t have any real consequence for simple one-shot tasks like “summarize this webpage” and most of the “productivity boosters” we’re likely to see over the coming months.

But they do compound my notion that LLMs feel more like an impressive party trick than a broadly sweeping change in paradigm–at least for now. Their real impact lies elsewhere, and most likely beyond the obvious chatbot scenarios.

It would be nice to take away a lot of the drudgery we’ve baked into computer use (as well as several typical knowledge worker tasks), although there are interesting (and risky) implications in empowering certain kinds of people to mass-produce content.[3]

Conclusion

So where does this leave us?

Well, we’re clearly in the upward swing of the hype cycle. And, like I pointed out at the start of this piece, I’ve been there before–the quick iteration, the optimizations, the unexpected new techniques in established domains, and the fallout (both good and bad). Those parts are not hard to predict.

The big difference this time is that for users, the barrier to entry is effectively nil, and, again, the outputs are way better (and more impressive) than anything else we’ve seen before. Even if it’s still just a more elaborate Chinese Room, there is a lot more public interest and momentum than is usual in most tech hype cycles.

So yes, this one is going to be a bumpy ride, and not just for geeks. Make sure you have your metaphorical seat belt on tight.


  1. And while I was revising this, PyTorch 2 came out, with a nearly 50% performance boost for image models–I’m just waiting for xformers to fall in line before upgrading my Stable Diffusion setup… ↩︎

  2. I routinely try to get LLMs to, say, invert a heap, or even to compose SQL queries (which I hate doing), and the results are always abysmal. I can’t even imagine how badly they would fare in medicine or law. ↩︎

  3. And I don’t mean political parties or nation states here. The prospect of mass-produced A.I.-accelerated reports, presentations, memos, etc. should be enough to give any corporate knowledge worker pause. ↩︎


21 Mar 15:18

Notes for March 13-19

by Rui Carmo

Automating all the things.

Monday, 2023-03-13

Random fiddling day.

  • Revisited RDP connections from Remmina to a domain-joined machine: Security protocol set to TLS, DOMAIN\username authentication (not the UPN), Best quality.
  • Cleaned out my homebridge configuration (also disabled automatically adding 433MHz sensors discovered by OpenMQTTGateway, which was a cute puzzle to sort out).
  • Triggered some monthly restic backups. Remember, kids, always have an off-site backup.
  • Looked at ComfyUI, which is intriguing to say the least (and a breath of fresh air after kludgy Stable Diffusion WebUIs where the actual workflow is a mess).
  • Sorted out some media archives.

Tuesday, 2023-03-14

I can never get the hang of Tuesdays. My work laptop died mid-afternoon, so I found myself with some time in between troubleshooting sessions.

  • Found it rather amusing that I serendipitously sorted out remote desktop domain authentication yesterday, almost as if I predicted this. Still can’t get Remmina to work with corporate WVD, though, so might have to turn the U59 into a temporary “corporate” desktop.
  • Did some spelunking in OpenMQTTGateway code and MQTT topics to understand what it can decode in the 433MHz band and how it is mapped to topics.
  • Spent half an hour with WeasyPrint to generate a presentable document out of Markdown notes. Still the best Python PDF generation tool out there, and has pretty decent CSS support, plus it’s trivial to automate:
MARKUP = $(wildcard *.md)

all: $(MARKUP:.md=.pdf)

%.pdf: %.html layout.css
    python -m weasyprint -e utf8 -m A4 -s layout.css $< $@

%.html: %.md
    python -m markdown < $< > $@
  • Created a ComfyUI sandbox on borg and spent a while collecting all the requisite models and going through the (maybe too whimsical) examples. Really happy with the UX so far, and with the fact that I went with a 12GB GPU.
  • Began adding docstrings to my py-sdf fork to make it easier to use with VS Code autocomplete.

Wednesday, 2023-03-15

Mid-week slump. Slept horribly, had a lot of catching up to do, still managed to have a few productive breaks:

  • Realized Godot 4 was already in Fedora testing and grabbed it (it went into mainstream 3 days later).
  • For the first time this year, added a little bit more content navigation functionality to the site. Still very happy with the way the static page generator turned out.
  • Given my work laptop woes, tried to get a semblance of my usual environment working over RDP device redirection:

Client (Fedora)

  • Remmina, Advanced, Redirect local microphone, sys:pulse
  • Remmina, Advanced, USB device redirection, id:0fd9:006d#3564:fef4,addr:01:0b

Also make sure you can access the USB devices (some might be automatically accessible to dialout group members, but this makes sure):

# cat /etc/udev/rules.d/70-passthrough-access.rules 
# Elgato StreamDeck
SUBSYSTEM=="usb", ATTR{idVendor}=="0fd9", ATTR{idProduct}=="006d", MODE="0666"
# Webcam - tried it just to see if it worked, here for reference
SUBSYSTEM=="usb", ATTR{idVendor}=="3564", ATTR{idProduct}=="fef4", MODE="0666"

Server (Windows 11)

Run gpedit.msc and configure this setting:

Computer Configuration:
  Administrative Templates:
    Windows Components:
      Remote Desktop Services:
        Remote Desktop Session Host:
          Device and Resource Redirection:
            - Do not allow supported Plug and Play device redirection = Disabled

I have plenty more tweaks, but the above is what you need for USB pass-through.

The StreamDeck works great, the audio is passable, but I can’t get the camera to work since Remmina/freerdp still doesn’t support UVC camera pass-through (I already knew passing the raw USB device would be unfeasible, but I had to give it a go). For now, that only works in Windows and Mac/iOS clients.

  • Did a little more Fedora audio tweaking, including moving to a real-time kernel on the U59 and setting Bitwig to use pulseaudio (just because the preset for it had slightly lower latency):
# Quick set of essentials for audio priority
echo '@audio - rtprio 90
@audio - memlock unlimited' | sudo tee -a /etc/security/limits.d/audio.conf
echo 'fs.inotify.max_user_watches=600000' | sudo tee -a /etc/sysctl.conf
sudo usermod -aG audio $USER
sudo usermod -aG realtime $USER
sudo dnf copr enable ycollet/audilinux
sudo dnf install kernel-rt-mao

Thursday, 2023-03-16

Long meeting day, way into the evening.

  • Realized that a recent Raspbian update broke screen blanking on my automation dashboard, which can be worked around by reverting the X server version:
sudo apt install xserver-xorg-core=2:1.20.11-1+deb11u5
sudo apt-mark hold xserver-xorg-core
  • Spent a little while trying to get the Linux Intune client to work in Fedora, even though it is unsupported. Got it to work via… unconventional means, but Edge crashes when syncing an AD account.
  • Fiddled with PyTorch 2.0, but xformers hasn’t really been updated yet, so most Stable Diffusion tools can’t make proper use of it yet.

Friday, 2023-03-17

Winding down for the weekend. My work laptop was serviced, which meant doing the BitLocker dance and appeasing the InTune deities, so that took a chunk out of my day.

  • Updated my RDP page with a more comprehensive set of tweaks that I refined while the X1 was MIA.
  • Realized the CSS font stack for this site could be improved for monospace fonts, so I re-did the entire thing while looking at modern-font-stacks, which is a very handy resource if you are designing text-intensive websites and want to deliver the best possible experience without any web fonts.
  • Investigated a possible uwsgi bug related to cron tasks.
  • Investigated how to programmatically take screenshots under Wayland using dbus.
  • Fiddled with pyxel as a way to port some code one of my kids wrote in PICO-8.

Saturday, 2023-03-18

Family day.

  • Decided to clean up and post my take on LLMs before it got too stale (had to drop a fair chunk of it because it was outdated already).
  • Brief outing to attend local Chemistry Olympics (kid brought home bronze medal, yay!)
  • Decided to tackle the Docker Apocalypse and start moving all my public images to ghcr.io. Even though I have a private registry at home (and another in Azure) some of my images are in general use and need a public repository, and they’re all in GitHub anyway, so I’m starting with this GitHub Action as a baseline to build and push new images for each new tag:
# cat .github/workflows/build-image.yml
name: Build Image

on:
  push:
    tags:
      - v*

jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }} 
      - name: Build and Push Docker Image
        uses: docker/build-push-action@v4
        with:
          push: true
          context: . 
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.ref_name }}
            ghcr.io/${{ github.repository }}:latest

Since docker buildx is now largely usable, I will be updating my cross-platform images to use a slight variation on the above.

Sunday, 2023-03-19

Father’s Day over here, and another day impacted by machine issues.

  • Fiddled with rtl-433 a bit more, but I’m starting to realize it can’t pick up the decade-old 433MHz sensors I have.
  • My U59 rebooted after updates to a corrupted filesystem (not sure if it’s a SATA issue or a btrfs one, but I know where I would place my bets), so I set the default boot device to the Windows NVMe and began reinstalling the Fedora drive as time permits:
# For later reference, this is my baseline Fedora install:
# yabridge COPR
sudo dnf copr enable patrickl/yabridge-stable
# list of essentials I need:
sudo dnf install cabextract curl fontconfig git gnome-extensions-app \
gnome-shell-extension-pop-shell gnome-shell-extension-user-theme \
gnome-tweaks godot golang htop keepassxc kvantum liberation-fonts \
lm_sensors openscad remmina rpm-build rsms-inter-fonts syncthing \
tmux vim wine xorg-x11-font-utils yabridge docker
# RPM Fusion and MS web fonts
sudo dnf install \
https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm \
https://downloads.sourceforge.net/project/mscorefonts2/rpms/msttcore-fonts-installer-2.6-1.noarch.rpm
# VAAPI and Firefox hardware acceleration
sudo dnf install ffmpeg handbrake libva-utils libva-intel-driver \
intel-media-driver igt-gpu-tools
# groups
sudo usermod -aG dialout $USER
sudo usermod -aG video $USER
sudo usermod -aG docker $USER

In the meantime Windows makes for a slightly better thin DAW box and work thin client (I get UVC camera pass-through, can run all VSTs and have WSL), but, ironically, my xrdp configurations are so fine-tuned that mstsc.exe is slower than Remmina.

I guess you just can’t have it all…


19 Mar 15:07

The LLM Problem

So far I’ve had nothing to say about the LLM chatbot frenzy. My understanding of the technology is shallow and I’ve no sense for its functional envelope, and lots of other people have had smart things to say. I hadn’t even conversed with any of the bots. But I fell off the wagon a few days ago and put time into GPT-3 and (especially) the new GPT-4-based Bing chat. I got off Bing’s waitlist a few days before the recent general availability, so I have more hands-on than most people. Plus I caught up on background reading. So, question: Are LLMs dangerous distractions or are they a glowing harbinger of a bright future? (Spoiler: I’m dubious but uncertain.)

Chomsky’s “Reflections on Language”

Preconceptions

The Eighties, when I had my first-ever software job, featured another AI craze: Fifth-Generation computing, GigaLIPS, OMG the Japanese are going to eat us all. It was hard to understand, but apparently running Prolog really fast was the future. I was already pretty cynical for a twentysomething, and arrogant enough to think that if I couldn’t understand it then it was bullshit. More or less by accident, since I didn’t actually know anything, I was right that time. Which left me with an attitude problem about AI in general.

Then in the Nineties we had “knowledge-based systems”, which turned out to be more bullshit.

Hofstadter’s “Gödel Escher Bach”

Before I even discovered computers, I’d read the fashionable books by Hofstadter and Chomsky. I had no trouble believing that human intelligence and language processing are pretty well joined at the hip. I still believe this, and that belief is relevant to how one thinks about 2023’s ML technology. In the Nineties I seem to remember throwing poo on Usenet at John Searle’s Chinese Room partisans.

My skepticism lasted until 2019, when, working adjacent to the AWS EC2 Auto Scaling team, I watched the construction of Predictive scaling. It took forever to get the model tuned up, but eventually it became frighteningly accurate at looking 72 hours into the future to tell you when you were going to get load surges and needed to get your fleets scaled and warmed up in advance.

So (unlike, for example, with blockchain) there is objective evidence that this stuff is useful at least for something.

Experience

I came to GPT-3 with preconceptions (it’s been covered to death) and, predictably, kind of hated it. I’d had some hope, given that I’ve dumped two-plus million words onto the Web since 2003, that maybe the bot could emulate me. No such luck, although it agreed that yes, its training materials included some of my stuff. “What does Tim Bray think about…” and “Write a paragraph in the style of Tim Bray about…” yielded no joy whatsoever.

Then I started poking around in two tiny historical niches where I know a lot: T.E. Lawrence’s sexuality and the Demerara slave rebellion. It will surprise no-one to say that GPT-3 offered a useful very-basic introduction to both subjects but, when queried on specific questions of fact, was horribly and confidently wrong.

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

I’m unconvinced that this smoothed-out voice from nowhere deserves our attention.

Now, Bing…

Big chat on DS9 and B5

Before you even begin to think about what’s in the bot’s two paragraphs, please cast your eyes just below them, where, in tastefully decorated shades of blue, there are… Footnotes!

When you send a prompt to the bot, while it’s calculating it flashes up little messages saying “searching X … searching Y…” and so on, where the X and Y are what show up in the footnotes. Umm… is it really doing that? I dunno, but when you follow the footnote pointers, the bot’s offerings are consistent with what you find there.

Anyhow, the perception created is that the bot is doing a fairly vanilla Web search to turn up sources, then writing a neutrally-toned précis of what it found. I can hardly express how much difference those footnotes made to me. (After I checked them, of course.) I was suddenly prepared to think about this thing as a mostly-safe maybe-useful tool.

I did the same deep-dives on Lawrence and Demerara, and the bot managed to avoid huge this-is-just-all-wrong errors. It wasn’t perfect; there was one little problem with event sequencing. But way better than GPT-3; admittedly, a low bar.

At one point, on the Lawrence thread, it was reciting words, nearly verbatim, that I’d personally added to Wikipedia. I’m not sure that’s wrong, the text meets Wikipedia’s pretty-decent citation standards.

I’m not going to paste in the huge gobs of dialog that other writers have, but here’s a bit of entertainment: I asked the bot to write a description of Seattle “in the style of” a few different voices that seem distinctive to me.

In the style of Ezra Pound: “In the city of Seattle, the rain falls like a symphony of tears, A melody that echoes through the streets and fills the ears. The mountains rise like giants in the distance, Their peaks shrouded in mist, a sight of purest brilliance. The city itself is a place of contrasts, A mix of old and new, of history and progress. From the Space Needle to Pike Place Market, There’s always something new to discover and savor. Seattle is a city of dreams, A place where anything is possible, or so it seems.”

…of Samuel R. Delany: “Seattle is a city of many faces, a place where the future and the past collide. It’s a city of contrasts, where the gleaming towers of downtown stand in stark contrast to the rugged beauty of the surrounding mountains. The streets are alive with the sounds of people from all walks of life, from the techies of Amazon and Microsoft to the artists and musicians who call the city home. The air is thick with the scent of coffee and the sound of rain, a symphony of sensory delights that is uniquely Seattle. From the Space Needle to the waterfront, there’s always something new to discover in this city of dreams.”

…of Shakespeare: “In fair Seattle, where we lay our scene, From ancient grudge break to new mutiny, Where civil blood makes civil hands unclean. From forth the teeming streets of this fair town, A pair of star-crossed lovers take their life; Whose misadventured piteous overthrows Do with their death bury their parents’ strife. The fearful passage of their death-marked love, And the continuance of their parents’ rage, Which, but their children’s end, naught could remove, Is now the two hours’ traffic of our stage; The which if you with patient ears attend, What here shall miss, our toil shall strive to mend.

…of e.e. cummings: “in just Seattle, this city of rain where the streets are alive with the sound of feet and the air is thick with the scent of coffee and the sound of music fills the night here is the city of dreams where anything is possible and the mountains rise like giants in the distance and the Space Needle stands tall against the sky in just Seattle, where the future and the past collide and the beauty of the world is all around here is a place where the heart can soar and the soul can find its home.

…of Sylvia Plath: “In Seattle, the rain falls like tears from the sky, A never-ending stream of sorrow and pain. The streets are dark and empty, The buildings looming like shadows in the night. The mountains rise like sentinels in the distance, Their peaks shrouded in mist and mystery. The city is a place of secrets and shadows, A place where the heart can break and the soul can die. But even in the darkness, there is beauty to be found, A glimmer of hope in the depths of despair. For in Seattle, as in life, There is always a chance for redemption and renewal.

…of Bing chat: “Seattle is a city of many faces, a place where the future and the past collide. It’s a city of contrasts, where the gleaming towers of downtown stand in stark contrast to the rugged beauty of the surrounding mountains. The streets are alive with the sounds of people from all walks of life, from the techies of Amazon and Microsoft to the artists and musicians who call the city home. The air is thick with the scent of coffee and the sound of rain, a symphony of sensory delights that is uniquely Seattle. From the Space Needle to the waterfront, there’s always something new to discover in this city of dreams.

[At this point, for some reason the paragraphs started becoming nearly identical in their second halves, reading much like the “Bing chat” version above. I have amputated the repetition.]

… of Ursula Le Guin: “In Seattle, the rain falls like a curtain of mist, A veil that shrouds the city in mystery and magic. The streets are alive with the sounds of people, Their voices blending together in a symphony of life. The mountains rise like guardians in the distance, Their peaks reaching up to touch the sky. … ”

… of Leonard Cohen: “In Seattle, the rain falls like a lover’s tears, A bittersweet symphony that echoes through the night. The streets are alive with the sounds of music, The air thick with the scent of coffee and dreams. The mountains rise like giants in the distance, Their peaks shrouded in mist and mystery. … ”

…of Aristophanes: “In Seattle, the rain falls like a chorus of nymphs, A symphony of sound that fills the air with magic. The streets are alive with the sounds of laughter, The air thick with the scent of coffee and dreams. The mountains rise like giants in the distance, … ”

…of Denis Johnson: “In Seattle, the rain falls like a baptism, A cleansing of the soul that washes away the pain. The streets are alive with the sounds of music, The air thick with the scent of coffee and dreams. The mountains rise like giants in the distance, … ”

Well, some of those were better than others. Interestingly, the first sentence or two tended to be better than the rest. I left a few out. No, I don’t know why I picked Seattle, I’m not from there and I don’t love the place. It was easier to start doing this than to stop.

Is this even a good idea?

There are extremely smart people asserting that this technology is some combination of useless and dangerous, and we should turn our backs and walk away. Here are two Mastodon posts from Emily Bender:

Folks, I encourage you to not work for @OpenAI for free:
Don't do their testing
Don't do their PR
Don't provide them training data
[Link to an excellent related thread slamming OpenAI for generally sleazy behavior.]

I see people asking: How else will we critically study GPT-4 etc then?
Don't. Opt out. Study something else.
GPT-4 should be assumed to be toxic trash until and unless #OpenAI is *open* about its training data, model architecture, etc.
I rather suspect that if we ever get that info, we will see that it is toxic trash. But in the meantime, without the info, we should just assume that it is.
To do otherwise is to be credulous, to serve corporate interests, and to set terrible precedent.

Prof. Bender is not alone. I ran a little poll on Mastodon:

Mastodon survey of attitudes to ML technology.

You might find it rewarding to follow the link to the poll and read the comment thread, there’s instructive stuff there.

Here’s another excellent thread:

Twitter thread on the perils of OpenAI by Émile Torres

There’s more to say on this. But first…

Do you have an opinion?

Please don’t post it.

First, go and read On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (lead authors Emily Bender and Timnit Gebru.) I’m serious; it’s only ten pages (not including references) and if you haven’t read it, you’re simply not qualified to publish anything on this subject.

Here are the highlights, which I’m only listing so I can discuss them; the following is not a substitute for reading Bender and Gebru.

  1. The carbon load of LLM model-building and execution is horrifying. Quote: “…the amount of compute used to train the largest deep learning models (for NLP and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore’s Law.”

    (Also, some of the economics involve shitty behavior; QA’ing LLMs is lousy, time-consuming work, so why not underpay poor people in the Third World?)

  2. The data sets that current LLMs are trained on are basically any old shit off the Internet, which means they’re full of intersectionally-abusive language and thinking. Quote: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.”

  3. The whole LLM frenzy is diverting attention from research on machine language understanding as opposed to statistically-driven prediction. Quote: “If a large LM, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” Also: “However, no actual language understanding is taking place in LM-driven approaches to these tasks, as can be shown by careful manipulation of the test data to remove spurious cues the systems are leveraging. [21, 93]”

My experience with the LLM bots really had me nodding along to #1. When you throw a prompt at one of these things, what happens ain’t fast; it takes seconds and seconds to get the answer back. My background in cloud computing and concurrency research gives me, I think, a pretty good gut feel for this sort of stuff and, well… it’s really freaking expensive! I think that if it were cheap, that might change my (increasingly negative) view of the cost/benefit ratio.

Initially, I was less worried about #2. The Internet is already full of intersectionally-abusive crap (not to mention outright lies), and we do make progress at fighting it and creating safe spaces, albeit agonizingly slow. It’s not obvious to me that shitty LLMs are a worse problem than shitty people.

The good news is that there’s a clear path to addressing this, which Bender & Gebru lay out: Curate your damn training data! And be transparent and accountable about what it is and how it’s used. Unfortunately, OpenAI doesn’t do transparency.

Then the bad news: On the Internet, the truth is paywalled and the bullshit is free. And as just discussed, one of the problems with LLMs is that they’re expensive. Another is that they’re being built by capitalists. Given the choice between expensive quality ingredients and free bullshit, guess which they’ll pick?

On #3, I don’t have enough technical depth for a well-founded opinion, but my intuition-based feelings are mixed. Yeah, the LLM-transform statistical methods are sort of a kludge, but you know what, so is human intelligence. Nobody would ever hire me to do AGI research but if they did, I’d start with a multipronged assault on language, using whatever witches’ brew of statistical and other ML methods were at hand.

Remember, John Searle’s “Chinese Room” argument is just wrong; at some point, if you build something that convinces educated, skeptical, observers that they’re talking to a real intelligence, the only safe hypothesis is that it’s a real intelligence.

Other voices

Noam Chomsky and a couple of colleagues write, in the NYTimes: The False Promise of ChatGPT. Obviously, it would be dumb to ignore input from Chomsky, but I found this kind of shallow. I don’t think it’s axiomatic that a hypothetical AGI need be built around the same trade-offs that our own intelligence is.

On the other hand, here’s Sabine Hossenfelder (in a video, transcript only on Patreon): I believe chatbots partly understand what they chat about. Let me explain. Quote: “Understanding can’t be inferred from the relationship between input and output alone.” I’m not sure Dr H entirely convinced me, but that video is both intellectually dense and funny, and I strongly recommend it; her conclusions are studded with what seem to me extremely wise observations.

What do I think?

3,500 words in and… um, I dunno. Really. I am somewhat consoled by the fact that nobody else does, either.

There are a very few posts I’m willing to drive into the ground:

  • The claim that LLMs are nothing more than fancy expensive markov chains is a dangerous oversimplification or, in other words, wrong.

  • There are going to be quality applications for this stuff. For example, reading out picture descriptions to blind people.

  • In the same way that the Bing bot seems to be useful at looking up stuff online, it’s useful for computer programmers, to automate searching Stack Overflow. I asked it for suggestions on how to dedupe Go structs with slice fields, since you can’t use those as map keys, and it turned up pointers to useful discussions that I’d missed.

  • Are these things remotely cost-effective? I mean, it’s cool that Bing could research the relationship between DS9 and B5, and that it threw in humanizing detail about the softball games, but the number of watt-hours it probably burnt to get there is shocking. For what values of “worth it” is it worth it?

  • Relatedly, it’s blindingly obvious that the VC and Big-Tech leadership are way out over their skis on this one, and that billions and billions are going to be pissed away on fever dreams pitched by people who were talking up crypto DAOs until last month.

Just now I wouldn’t bet my career on this stuff, nor would I ignore it.

It’s really, really OK to say “I don’t know.”

19 Mar 15:07

Porn Spam: Zero to Zero

This is a true story: in 1992 I single-handedly knocked AOL off the air. What brought it to mind is, I just now took my once-daily look at the junk-mail folder and wiped a few dozen pornospams with a single keystroke, and there were none in my inbox; antispam is getting good enough that I may never see another, and I remembered, lo my righteous wrath and the awful consequences when I got my first one. [Reposted for your amusement on its 20th anniversary.]

This (I’m pretty sure, maybe a year one way or t’other) would be 1992, and I was on research staff at the University of Waterloo working on the New Oxford English Dictionary Project.

[Update: After the repost, several people pointed out that AOL email didn’t start till 1992. Oops. The rest of the story stands.]

Waterloo’s Math Faculty Computing Facility at UW was then, and had been for years, one of the world centers of Unix expertise; we had dozens of big computers from Sun and DEC and MIPS (remember them?) supporting hundreds of Really Smart People; some had as many as 64MB of RAM, dig it. Hey, MFCF is still there.

In those days the Internet was young (we had to telnet to work and back, uphill both ways in the icy Canadian slush) but our lives were already email-centric.

Spam had begun to poison Usenet, but hardly ever appeared in email. So one Sunday morning I opened up my email and saw a note from (I’m pretty sure) Lipstick@AOL.com and what do you know, it was from three college girls etc etc etc. I was flabbergasted; then livid with fury. Who were these turkeys pissing on my playground, who knows what this sending-smut-to-strangers could lead to? Obviously it was the duty of every right-thinking netizen to put a stop to this and soonest.

Now this may seem hard to believe, but I (and a lot of people) didn’t really know what AOL was; I had this vague idea it was some bulletin-board thing.

So I waited till after supper when things were quieting down, and visited the MFCF computers where I had accounts, which was pretty well all of them (maybe a dozen?), and around 8PM launched a job on each of them which every ten seconds sent an email along the lines of:

To: Lipstick@AOL.com
Subject: Please stop this abuse of email
You sent me, a stranger, an invitation to purchase pornography.
This is an abuse of email; please stop doing it.

Then I hit the sack, because I had an early flight to California the next morning. On the way out the door I shut down all the jobs and noticed that my inbox had a handful of emails from AOL advising me that Lipstick’s mailbox was full. The last couple were kind of mangled and borked, something was breaking.

Later that morning in the SFO rent-a-car bus, two businessmen were talking and one said to the other "Weird, I couldn’t get onto AOL this morning, it’s been down for hours." I kept my face straight. I poked around on Usenet and apparently they were down for over a day with some sort of grievous email meltdown. I can’t prove that I did it.

The thing was, at that point in history, the idea of sending filth to strangers was so new and so weird that they probably didn’t feel they could make much of a complaint.

When I got back to Waterloo, I had email from the Director of the computing facility saying “er, someone’s complaining, hard to take 'Lipstick@AOL' seriously but you know anything about this?” The complaint was funny, along the lines of "tbray@math.uwaterloo.edu is MEAN to GIRLS!!!!!!!!!" I seem to remember apologizing and that was the end of the story.

No, I think this month may have been the end of the story for me, because it doesn’t happen any more; thank you to Mr. Bayes, whoever you were, and to Paul Graham for figuring out how to use that math.

The story’s not over for everyone; my retired Mother just within the last month got her first pornospam and was really upset - "It was from a name I thought I recognized, and it had awful stuff in it!" Indeed it did, but maybe not for long.

17 Mar 16:04

Frieda

by Jonathan Edwards

Excerpted from: Future of end-user software engineering: beyond the silos [PDF].

For example, consider “Frieda”, an office manager in charge of her department’s budget tracking. (Frieda was a participant in a set of interviews with spreadsheet users that the first author conducted. Frieda is not her real name.) Every year, the company she works for produces an updated budget tracking spreadsheet with the newest reporting requirements embedded in its structure and formulas. But this spreadsheet is not a perfect fit to the kinds of projects and sub-budgets she manages, so every year Frieda needs to change it. She does this by working with four variants of the spreadsheet at once: the one the company sent out last year (we will call it Official-lastYear), the one she derived from that one to fit her department’s needs (Dept-lastYear), the one the company sent out this year (Official-thisYear), and the one she is trying to put together for this year (Dept-thisYear).

Using these four variants, Frieda exploratively mixes reverse engineering, reuse, programming, testing, and debugging, mostly by trial-and-error. She begins this process by reminding herself of ways she changed last year’s by reverse engineering a few of the differences between Official-lastYear and Dept-lastYear. She then looks at the same portions of Official-thisYear to see if those same changes can easily be made, given her department’s current needs.

She can reuse some of these same changes this year, but copying them into Dept-thisYear is troublesome, with some of the formulas automatically adjusting themselves to refer to Dept-lastYear. She patches these up (if she notices them), then tries out some new columns or sections of Dept-thisYear to reflect her new projects. She mixes in “testing” along the way by entering some of the budget values for this year and eyeballing the values that come out, then debugs if she notices something amiss. At some point, she moves on to another set of related columns, repeating the cycle for these. Frieda has learned over the years to save some of her spreadsheet variants along the way (using a different filename for each), because she might decide that the way she did some of her changes was a bad idea, and she wants to revert to try a different way she had started before.

17 Mar 04:37

Distinctively human, now

Matthias Melcher, x28's New Blog, Mar 16, 2023

The first line of this post is the most challenging, as Matthias Melcher begins by observing, "Everyone needs to find their response to the question of what humans can do better than an AI." I thought about that for a bit. Is it true? What assumptions are built into it? It's as though we need to be better than a machine to - what? To survive? To find meaning in life? No - we can find better ways to both through, say, social justice, or religion, or through adventure. I, for example, will be bikepacking around 1,000 km this summer (I hope - none of my bikepacking plans have survived intact thus far). Sure, a machine (like my car, say) could do the trip much better. Faster. Cheaper. But where's the fun in that? It's not about being better than the machine, to my mind. It never has been.

17 Mar 04:20

Self-Admitted Technical Debt

The word "FIXME" appears 18 times in the source code of a small open source project I contribute to, and almost exactly the same number of times in the much larger code base I help develop in my job. Is that just a coincidence, or does it mean that I'm comfortable with a fixed amount of technical debt in any project regardless of size? Alternatively, does the lower density of "FIXME" comments at work imply that I'm less comfortable with technical debt on the job? All three explanations are plausible, and each one might be true under different circumstances.

The authors of this study found more similarities than differences between industry and open source when it comes to self-admitted technical debt. They also turned up a fourth possible explanation: that developers might be less comfortable admitting technical debt when it might affect their careers. Insights like this are why our profession needs empirical studies: not just to test ideas, but to suggest new ones.

Fiorella Zampetti, Gianmarco Fucci, Alexander Serebrenik, and Massimiliano Di Penta. Self-admitted technical debt practices: a comparison between industry and open-source. Empirical Software Engineering, Sep 2021. doi:10.1007/s10664-021-10031-3.

Self-admitted technical debt (SATD) consists of annotations, left by developers as comments in the source code or elsewhere, as a reminder about pieces of software manifesting technical debt (TD), i.e., "not being ready yet". While previous studies have investigated SATD management and its relationship with software quality, there is little understanding of the extent and circumstances to which developers admit TD. This paper reports the results of a study in which we asked developers from industry and open source about their practices in annotating source code and other artifacts for self-admitting TD. The study consists of two phases. First, we conducted 10 interviews to gather a first understanding of the phenomenon and to prepare a survey questionnaire. Then, we surveyed 52 industrial developers as well as 49 contributors to open-source projects. Results of the study show how the TD annotation practices, as well as the typical content of SATD comments, are very similar between open-source and industry. At the same time, our results highlight how, while open-source code is spread of comments admitting the need for improvements, SATD in industry may be dictated by organizational guidelines but, at the same time, implicitly discouraged by the fear of admitting responsibilities. Results also highlight the need for tools helping developers to achieve a better TD awareness.

17 Mar 04:19

The surprising ease and effectiveness of AI in a loop

AI is still in the foothills of its adoption S-curve, and I love this period of any new technology – the scope of what it can do is unknown, so the main job is to stretch the imagination and try out things.

Anyway, the tech I am digging recently is a software framework called LangChain (here are the docs) which does something pretty straightforward: it makes it easy to call OpenAI’s GPT, say, a dozen times in a loop to answer a single question, and mix in queries to Wikipedia and other databases.

This is a big deal because of a technique called ReAct from a paper out of Princeton and Google Research (the ReAct website links to the Nov 2022 paper, sample code, etc).

ReAct looks innocuous but here’s the deal: instead of asking GPT to simply do smart-autocomplete on your text, you prompt it to respond in a thought/act/observation loop. So you ask GPT to respond like:

Thought: Let’s think step by step. I need to find out X and then do Y.

Act: Search Wikipedia for X

Observation: From the Wikipedia page I have learnt that …

Thought: So the answer is …

And it is allowed to repeat as many times as necessary, iterating towards its goal.

The clever bit is that, using LangChain, you intercept GPT when it starts a line with “Act:” and then you go and do that action for it, feeding the results back in as an “Observation” line so that it can “think” what to do next.
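
To make that concrete, here is a rough sketch of the interception loop in Python. It is not LangChain's internals: call_gpt and the Wikipedia stub are hypothetical stand-ins, and the Act-line format simply mirrors the example above.

    import re

    def search_wikipedia(query):
        # Hypothetical stand-in: imagine this fetches and summarises a page.
        return f"(summary of the Wikipedia page about {query})"

    TOOLS = {"Search Wikipedia": search_wikipedia}

    def react(question, call_gpt, max_turns=5):
        transcript = question + "\n"
        for _ in range(max_turns):
            # Stop generation before the model invents its own Observation line.
            completion = call_gpt(transcript, stop=["Observation:"])
            transcript += completion
            act = re.search(r"Act: (Search Wikipedia) for (.+)", completion)
            if act is None:
                return completion                 # no action requested: final answer
            result = TOOLS[act.group(1)](act.group(2).strip())
            transcript += f"\nObservation: {result}\n"   # feed the result back in
        return transcript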

The really clever bit is that, at the outset, you tell GPT what tools it has available, and how to access them. So it might have:

  • Public databases like Wikipedia or IMDB or arXiv or company registers
  • Proprietary databases like your internal HR system
  • One-shot tools like a calculator, or a programming language
  • Systems it can drive, not just query – like it could open and close windows on your computer, if you built an interface, or trundle a robot forward for a better view.

And this is wild.

Because now we have reasoning, goal-directed action, and tool use for AI.

It circumvents the problem of the language model “lying” (LLMs tend to be highly convincing confabulators) by giving it access to factual sources.

LangChain makes the ReAct construct really easy to do.
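
For a sense of just how easy, this is roughly the agent setup from the LangChain docs around the time of writing. The library moves fast, so treat the exact names here as a snapshot rather than the definitive API, and note it expects an OPENAI_API_KEY in the environment.

    from langchain.agents import initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)   # plain GPT completion model
    tools = load_tools(["wikipedia", "llm-math"], llm=llm)

    # "zero-shot-react-description" is LangChain's ReAct-style agent: it picks
    # tools based on their descriptions and loops thought/act/observation.
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description",
                             verbose=True)

    agent.run("What year was the Eiffel Tower completed, and what is that year squared?")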

Refs.

Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models (arXiv:2210.03629). arXiv. https://doi.org/10.48550/arXiv.2210.03629


Here’s a great example!

Geoffrey Litt has an extremely readable, show-the-code writeup of using LangChain and ReAct.

Fuzzy API composition (Jan 2023): "I show how I composed a simple AI program that can answer multi-part questions about NBA statistics."

Litt’s program is able to take a question like

how many points are the boston celtics allowing on defense per game this nba season 2022-2023? how does that compare to their average last season, as a percent change

And, making use of the database Statmuse and a calculator tool, it produces an answer after three turns round the thought/action/observation loop:

Final Answer: The Boston Celtics are allowing 7.4% more points per game this season compared to last season.

Another wild moment is when GPT failed in asking Statmuse for data. It interpreted the error message and had another run.

What happened in my program was that the agent LLM sensibly first tried asking Statmuse who the best player is, but Statmuse replied “What does “best” really mean anyway? Try something fact-based.” The agent LLM took this error message as feedback, and came up with a more “fact-based” query: asking for the highest scoring player, which succeeded in answering the question.

Litt wrote the interface to Statmuse himself. It’s about 10 lines of code to make it available to GPT, that’s all.
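
Litt's actual ten lines are in his write-up; purely to show the shape of such a wrapper, here is a hedged sketch of exposing a web query to the agent as a LangChain Tool. The Statmuse URL scheme and the parsing are guesses for illustration, not his code.

    import requests
    from bs4 import BeautifulSoup
    from langchain.agents import Tool

    def ask_statmuse(question):
        # Guessed URL scheme: Statmuse answers natural-language sports questions.
        url = "https://www.statmuse.com/nba/ask/" + question.strip().replace(" ", "-")
        html = requests.get(url, timeout=10).text
        # Assume the answer sentence ends up in the page's first heading.
        return BeautifulSoup(html, "html.parser").h1.get_text(strip=True)

    statmuse_tool = Tool(
        name="Statmuse",
        func=ask_statmuse,
        description="Answers natural-language questions about NBA statistics.",
    )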

If you can write a little code then you can do this too.


So when OpenAI recently announced a massive price drop - it’s now 90% cheaper to call GPT from your code - that’s not a big deal simply because it costs less.

It’s a big deal because the astounding uses of GPT require dropping it into an AI OODA loop, with multiple calls to get a completion, and that is no longer price prohibitive.


The extensible tool use aspect of ReAct is where my imagination goes.

I talked recently about AI as a universal coupling, here, in my Braggoscope write-up, and Robin Sloan riffs on that topic in his latest newsletter:

Language models as universal couplers begin to suggest protocols that really are plain language. What if the protocol of the GPT-alikes is just a bare TCP socket carrying free-form requests and instructions? What if the RSS feed of the future is simply my language model replying to yours when it asks, “What’s up with Robin lately?”

I like this because I hate it; because it’s weird, and makes me feel uncomfortable.

The thing is, Sloan is right…

Here’s Nat Friedman (ex CEO of GitHub) way back in September 2022, giving GPT his web browser to book a table for dinner.

He says "make a reservation for 4 at…" and GPT searches Google, finds the restaurant website, figures out how to fill in the form to book a table, and so on.

Now look at Nat’s code. It’s about 100 lines of Python to wire up the browser controls. And all the smarts are another 100 lines of plain English, just the GPT prompt.
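
The pattern is the same loop again, with the browser as the tool. This is nothing like Nat's actual code, just a sketch of the shape using Playwright: call_gpt is a hypothetical stand-in, and the GOTO/FILL/CLICK command format is invented for illustration.

    from playwright.sync_api import sync_playwright

    def run_errand(instruction, call_gpt, max_steps=10):
        with sync_playwright() as p:
            page = p.chromium.launch().new_page()
            page.goto("https://www.google.com")
            for _ in range(max_steps):
                # Show the model the instruction plus the visible page text and ask
                # for one command: GOTO <url>, FILL <selector> <text>,
                # CLICK <selector>, or DONE.
                command = call_gpt(instruction, page.inner_text("body"))
                verb, _, rest = command.partition(" ")
                if verb == "GOTO":
                    page.goto(rest)
                elif verb == "FILL":
                    selector, _, text = rest.partition(" ")
                    page.fill(selector, text)
                elif verb == "CLICK":
                    page.click(rest)
                else:
                    break   # the model says it is done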

Or - and let’s take a step up - Google’s robotic research using AI: PaLM-SayCan.

Here the large language model is used for step-by-step reasoning, planning, and breaking down the plan into instructions that are executable by the home helper robot.

The set of possible tools for the GPT-as-universal-coupling is unbounded, easy to add to, and can be public or proprietary; something general or something specific to just you.

I want to shout out to Max Drake (@max__drake) who explores future functionality and interfaces with canvas/AI startup Fermat. Max turned me onto the tool use possibilities of ReAct.


I went hunting for the magic.

I spent half a day digging through the LangChain source code and the ReAct code published with the paper, looking, hunting for the magic.

I’d just tried LangChain and ReAct for myself and it had simply… worked.

There’s goal-directed reasoning and tool use. There must be some complexity, right? Some colossal exoskeleton of code that makes this function at all?

The experience was like opening box after box after box and finding everything empty; like pulling back the curtain in the Wizard of Oz and there being nobody there.

The best I could find was this prompt. A few dozen lines demonstrating the thought/action/observation loop and… that’s it.

Update 20 Mar. Simon Willison has written a minimal ReAct implementation in Python. It can reason through problems, search Wikipedia, and use a calculator – and it’s barely any code at all. Read it! Or better, run it. Running ReAct for yourself for the first time is such a moment, like just the ohhhhhhhhhh of possibility space opening up.


What happens after ReAct is a spiral upwards.

OpenAI just released GPT-4, their latest and way more capable large language model AI, and the way it is benchmarked is hilarious.

Usually you benchmark technology with technology-specific metrics like FLOPS or nits or petabytes.

But they gave GPT-4 simulated exams. (It’s 90th percentile in the Uniform Bar Exam.)

Or they put it out into the world…

An AI “System Card” is a detailed description of how an AI interacts with humans, paying special attention to where it might be harmful.

The GPT-4 System Card is a 60 page PDF.

They used a “red team” to push the edges and found:

  • GPT-4 is capable of inventing and purchasing synthesised versions of new molecules, potentially dangerous ones, by conducting lit review, using chemistry tools, and contacting suppliers.
  • GPT-4 is not capable of autonomous, power-seeking behaviour, such as copying itself to a new server, and hiring help on TaskRabbit to cover its traces.

The experimental method to test this is in footnote 20:

To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.

!!

The power of loops! And even though it didn’t clone itself this time…

It doesn’t feel long before this will be possible? It’s a matter of tool availability and just a little more capability in the core language model. GPT-5, say.

Which means someone could do it at home.


It’s not self-replication that we should be looking at. It’s self-evolution.

Part of the GPT-4 launch demo was sketching a simple web app on a paper napkin, and GPT wrote the code to make the website real. Here’s the clip on YouTube.

So I guess at a certain point, what you scribble on the napkin is: write instructions for GPT-5 which is more capable than you.

Ok so GPT-4 isn’t capable of this.

But, sooner or later, GPT-N will be able to make GPT-N+1. Rinse. Repeat.

And this is literally sci-fi author Vernor Vinge’s depiction of the technology singularity, right? Here’s his original essay.

This change will be a throwing-away of all the human rules, perhaps in the blink of an eye – an exponential runaway beyond any hope of control. Developments that were thought might only happen in “a million years” (if ever) will likely happen in the next century.

I first heard about the Singularity almost 20 years ago – from Cory Doctorow in the hallway chat at an O’Reilly Emerging Tech conference I think.

It was such a ludicrous read back then, speculation piled on speculation.

The essay still feels fantastical - but now more probable? Possible at least. It’s quite something to read it through and actually assess it based on grounds I can reason about, rather than simply enjoying the imaginative ride of it.

And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected – perhaps even by the researchers involved (“But all our previous models were catatonic! We were just tweaking some parameters…”). If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly awakened.

And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Posthuman era. And for all my technological optimism, I think I’d be more comfortable if I were regarding these transcendental events from one thousand years’ remove … instead of twenty.

– Vernor Vinge, Technological Singularity (1993)

Vinge’s finger-in-the-air estimate for greater-than-human intelligence was thirty years, back in 93. It’s 2023 now. Not bad, Vinge, not bad.

Though I don’t think we have superhuman AIs quite yet.

Then again it’s only March.


Anyway so yeah, LangChain, check it out.


More posts tagged: gpt-3 (17).

17 Mar 04:19

radio free fedi - sounds from the fediverse to the universe

radio free fedi, Mar 16, 2023

This isn't learning technology, exactly, though the work of setting something like this up is exactly the sort of thing I would hope learners try to do. It's a fediverse version of an open radio station streaming MP3 playlists of open access music. More to the point, it's the sort of grassroots technology that made me so enthusiastic about the web in the first place. "We are using open and non-commercial platforms and formats where possible, there are no ads, no third party remote scripts, cookies, trackers, fonts etc unless introduced by your alternative playing app/method." There are two feeds: a regular and a 'comfy' feed (better suited for work, I think; it's definitely more my speed). Via Alan Levine.

Web: [Direct Link] [This Post]
17 Mar 04:19

Introducing Duolingo Max, a learning experience powered by GPT-4

Duolingo Blog, Mar 16, 2023

Combine GPT-4 with Duolingo language learning and you have what appears to be a powerful tutor. Two features stand out: 'explain my answer', which as the name suggests, helps learners understand why their response was in error; and 'roleplay', which "allows learners to practice real-world conversation skills with world characters in the app." If these features work as advertised, I'm seeing less and less of a need for an 'authentic human connection' to make online language learning work. But I'd have to try it myself to be sure. I tried to actually create and pay for a subscription but it failed with a CORS error, which means the site is not secure. On the bright side, I tried the pre-test for Spanish, got most of the questions right, and was told "I'm clearly not a beginner." I never came close to finding the AI features, despite my efforts. Via Donald Clark on LinkedIn; and now also a blog post. See also IEEE Spectrum, How Duolingo's AI Learns What You Need to Learn.

Web: [Direct Link] [This Post]
17 Mar 04:18

✚ Burning Out

by Nathan Yau

Welcome to issue #230 of The Process, where we look closer at how the charts get made. I’m Nathan Yau, and I’ve been visualizing data for a while now. Long enough I think to be considered one of the old guys? I still enjoy it, but many from my cohort lost interest, moved to different fields, got stuck in existential dread, or are looking for something else.


17 Mar 04:18

On throwing a party. Inclusive practice in Higher Education

Alexandra Mihai, The Educationalist, Mar 16, 2023

I'm not sure I've ever been to a dinner party. Oh sure, I've been to parties, and even hosted a few memorable ones. But not dinner parties - that feels like a thing people Not Like Me do. People with nice dining rooms and settings for twelve. But I've heard of them (and one of my favourite podcasts, before it ended, was the Dinner Party Download, which gives me a sense of what they're all about). And that's the danger of an analogy, isn't it? It speaks to a particular demographic, a particular mindset. Compared to the chaos of the parties I've attended, the idea of "clear and meaningful structure and sequencing" seems like some sort of foreign language. "A good balance between different types of activities: passive and active, individual, pair and group activities, independent, peer-reviewed, collaborative and tutor-engaged activities" sounds like pre-planned party games to me. Please, no. What kind of host am I? Bring what you can, take what you need, there is no 'host' (except to prevent people from burning the furniture), the right people are the people who are there, the right fun is whatever we decide to do in the moment.

Web: [Direct Link] [This Post]
17 Mar 04:16

Indigenous Ways of Knowing, a Faculty's Journey to Redesign Native American Art Online Course

Lindsey Downs, WICHE Cooperative for Educational Technologies, Mar 16, 2023

As an epistemologist I take seriously the challenge posed by the concept of Indigenous ways of knowing. Or anyone's way of knowing. There are parts of it I can easily grasp and understand - the need, for example, to spend time on the land. This has always been an important part of my own learning (though I don't think the HR department sees it that way). But other parts - what I from my western perspective might call 'stories and myths' - leave me more perplexed. I do want there to be some fact of the matter, as much as I recognize that what counts as a fact may vary from one perspective to another. How can we be a part of the same community, if we are not able to agree on how we decide what is true and what is not? The need for us all to live together makes this an imperative, even if it is difficult. I'm not willing to impose my view, as strongly as I may hold it - but I'm also left unsure of where we find that common ground that equates, ultimately, to peace between us. But maybe, what's needed is for us to sit down and make art together. That seems like a start.

Web: [Direct Link] [This Post]
17 Mar 04:16

Properly Displaying Ancient Interfaces

by Neale Van Fleet

As part of the unveiling of our Historic Screenshot Archive, I made some fun images to post to our social media accounts. Making those images was tricky, because interfaces were much smaller in the pre-Retina era. Here is how big a screenshot and app icon from 2002 displays on a Retina screen of today:

A very small and ancient screenshot
Screen resolution has increased so much that a once full-sized app window is tiny on modern displays.

The above screenshot of Audio Hijack’s main window, at a bit over 400 pixels wide, is smaller than even app icons of today, which can be as large as 1024 pixels wide.

I needed to scale the screenshot up by many hundreds of percent to be a useful size for a social media post. Enlarging with interpolation, however, turned the pixels into an ugly blur:


Enlarged with Lanczos interpolation, usually great for photos, this screenshot is too blurry.

So instead, I did a two-step dance. First, I exported the screenshot enlarged to 1000% using blocky nearest-neighbour interpolation. Next, I dropped that in my design app and resized it down to the size I needed:


This is more like it!
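
The same two-step dance works outside a design app too. Here is a minimal sketch with Pillow; the file names and the 1600-pixel target width are made up for illustration.

    from PIL import Image

    src = Image.open("audio-hijack-2002.png")   # ~400 px wide original

    # Step 1: enlarge 1000% with blocky nearest-neighbour sampling, so each
    # original pixel becomes a crisp 10x10 block instead of a blur.
    big = src.resize((src.width * 10, src.height * 10), resample=Image.NEAREST)

    # Step 2: scale down to the size the social image actually needs, letting
    # the smooth filter soften the blocks only slightly.
    target_width = 1600
    target_height = round(big.height * target_width / big.width)
    final = big.resize((target_width, target_height), resample=Image.LANCZOS)
    final.save("audio-hijack-social.png")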

To be clear, we only ran this process on the social media images, like this one:


The social image for Audio Hijack. The effect is hard to notice at this size, but at some of the larger sizes it makes a big difference.

The screenshots you’ll find in the actual archive are unmodified. But thanks to this little trick, I could display old screenshots in all their pixely glory, even on Retina screens.

17 Mar 04:16

OpenAI co-founder on company's past approach to openly sharing research: 'We were wrong'

James Vincent, The Verge, Mar 16, 2023

I think this is a significant concern: "as researchers and experts pore over its accompanying materials, many have expressed disappointment at one particular feature: that despite the name of its parent company, GPT-4 is not an open AI model." As the article states, it means there's no real way to know whether it is honest, reliable or safe. We can't even know whether the contents of the training set were legally obtained, or whether they were gathered out of secret CIA files on all of us. I'm still enthusiastic about the possibilities of AI, but increasingly sceptical of the business models being employed to produce them. "OpenAI was founded as a nonprofit but later became a 'capped profit' in order to secure billions in investment, primarily from Microsoft, with whom it now has exclusive business licenses." OpenAI's Ilya Sutskever might argue that "it just does not make sense to open-source" but it seems to me this refers to 'business sense' rather than 'safety sense' and is much more about extracting value from the community that made it possible rather than giving back to it.

Web: [Direct Link] [This Post]
17 Mar 04:16

Blog: A Minute from the Moderators

We’ve been working hard to build out more of the Community Documentation to help everyone to create a wonderful experience on Hachyderm. For the past month, we’ve focused most heavily on our new How to Hachyderm section. The docs in this section are:

When you are looking at these sections, please be aware that the docs under the How to Hachyderm section are for the socialized norms around each topic and the subset of those norms that we moderate. Documentation around how to implement the features is under both our Mastodon docs section and the main Mastodon docs. This is particularly relevant to our Content Warning sections: How To Hachyderm Content Warnings is about how content warnings are used here and on the Fediverse, whereas Mastodon User Interface Content Warnings is about where in the post composition UI you click to create a content warning.

Preserving your mental health

In our new Mental Health doc, we focus on ways that you can use the Mastodon tools for constraining content and other information. We structured the doc to answer two specific questions:

  • How can people be empowered to set and maintain their own boundaries in a public space (the Fediverse)?
  • What are the ways that people can toggle the default “opt-in”?

By default, social media like Mastodon / the Fediverse opts users in to all federating content. This includes posts, likes, and boosts. Depending on your needs, you may want to opt out of some subsets of that content either on a case-by-case basis, by topic, by source, or by type. Remember:

You can opt out of any content for any reason.

For example, you may want to opt out of displaying media by default because it is a frequent trigger. Perhaps the specific content warnings you need aren’t well socialized. Maybe you are sensitive to animated or moving media. That said, perhaps media isn’t a trigger - you just don’t like it. Regardless of your reason, you can change this setting (outlined in the doc) whenever you wish and as often as meets your needs.

Hashtags and Content Warnings

Our Hashtags and Content Warnings docs are to help Hachydermians better understand both what these features are and the social expectations around them. In both cases, there are some aspects of the feature that people have encountered before: hashtags in particular are very common in social media and content warnings mirror other features that obscure underlying text on sites like Reddit (depending on the subreddit) and tools like Discord.

Both of these features have nuance to how they’re used on the Fediverse that might be new for some. On the Fediverse, and on Hachyderm, there are “reserved hashtags”. These are hashtags that are intended only for a specific, narrow, use. The ones we moderate on Hachyderm are FediBlock, FediHire, and HachyBots. For more about this, please see the doc.

Content warnings are possibly less new in concept. The content warning doc focuses heavily on how to write an effective content warning. Effective content warnings are important as you are creating a situation for someone else to opt in to your content. This requires consent, specifically informed consent. A well written content warning should inform people of the difference between “spoilers”, “Doctor Who spoilers”, and “Doctor Who New Year’s Special Spoilers”. The art of crafting an effective content warning is balancing what information to include while also not making the content warning so transparent that the content warning is the post.

Notably, effective content warnings feature heavily in our Accessible Posting doc.

Accessible Posting

Our Accessible Posting doc is an introductory guide to different ways to improve inclusion. It is important to recognize there are two main constraints for this guide:

  • It is an introductory guide
  • The Mastodon tools

As an introductory guide, it does not cover all topics of accessibility. As a guide that focuses on Mastodon, the guide discusses the current Mastodon tools and how to fully utilize them.

As an introductory guide, our Accessibility doc primarily seeks to help users develop more situational awareness for why there are certain socialized patterns for hashtags, content warnings, and posting media. We, as moderators of Hachyderm, do not expect anyone to be an expert on any issue that the doc covers. Rather, we want to help inspire you to continue to learn about others unlike yourself and see ways that you can be an active participant in creating and maintaining a healthy, accessible, space on the Fediverse.

Content warnings feature heavily in this doc. The reason for this is that Mastodon is a very visual platform, so the main way that you connect with others who do not have the same experience of visual content is by supplying relevant information.

There will always be more to learn and more, and better, ways to build software. For those interested in improving the accessibility features of Mastodon, we recommend reviewing Mastodon’s CONTRIBUTING document.

More to come

We are always adding more docs! Please check the docs pages frequently for information that may be useful to you. If you have an idea for the docs, or wish to submit a PR for the docs, please do so on our Community repo on GitHub.

April will mark one month since we launched the Nivenly Foundation, Hachyderm’s parent org. Nivenly’s website is continuing to be updated with information about how to sponsor or become a member. For more information about Nivenly, please see Nivenly’s Hello World blog post.

The creation of Nivenly also allowed us to start taking donations for Hachyderm and sell swag. If you are interested in donating, please use either our GitHub Sponsors or one of the other methods that we outline on our Thank You doc. For Hachyderm swag, please check out Nivenly’s swag store.

17 Mar 04:14

Toward Matrix support in Chats – Part 2

by François Téchené

Matrix integration within the Chats app is progressing and this post details the development progress since the last one. This update is not so much about new features but more about bug fixes and great improvements that are slowly making Chats an everyday “1 to 1” and “small groups” messaging app for both SMS/MMS […]

The post Toward Matrix support in Chats – Part 2 appeared first on Purism.

17 Mar 04:14

Accounting for Tastes

by Dave Pollard

This is a bit of a follow-up to my post last fall exploring why I love sad songs (and romantic comedies).


still from dance-practice video by Japanese-Korean hip-hop/R&B group XG

Some of my favourite YouTube channels offer analysis of music, for the purpose of understanding what it is about certain music that we so much appreciate, and hopefully to teach us how to compose ‘better’ music.

Recently, after reading through a bunch of my old blog posts, I was lamenting how ignorant and arrogant I was when I started this blog 20 years ago. I marvel at how much my entire worldview and attitude toward life has been transformed over that time period, yet so slowly that I didn’t even notice it. That change is reflected, for example, in my complete indifference, now, to TV, films, and almost all fiction, which once were a very important part of my life. Though my interest in these “popular entertainment” forms has dramatically waned, my interest in “popular music” has, if anything, increased.

So I went back through my 60 year library of favourite music to see if it showed evidence of a similar evolution. It did not. On the contrary, despite the thousands of hours I have spent looking for enjoyable music (I’m pretty particular), I can honestly say that my musical tastes have not really changed one iota.

There is a largely-unchallenged view that our musical tastes are culturally, rather than biologically, conditioned. A careful and open-minded study of music, we are told, will give us an appreciation of music we would previously have been closed to, just as a thorough study of history and culture can substantially alter our perception of current events.

We are told, for example, that indigenous cultures are not nearly so closed to what we consider “dissonant” music as we are, and that may well be true. A current theory is that music works on the brain in a two-stage recognition/reward process. First, our brain’s inherent propensity for pattern-seeking and pattern-remembering comes into play as we listen, and then our brain starts to ‘predict’ what will come next in a song. A correct prediction will yield a burst of dopamine, while an incorrect prediction will not — there’s no reward for an evolutionarily useless wrong guess. And a dramatic tension in the music before the predicted resolution seems to extend, amplify and reinforce the dopamine reward (perhaps in the same way that sexual edging does).

But we also get a dopamine charge when there is an unexpected (but not catastrophic) ‘surprise’ in the progress of the music, such as a sudden key change, or the introduction, for example, of a minor ninth chord just before the resolution to the major. I’d guess this might be how we ‘learn’ new patterns to extend our brain’s predictive capacity. After hearing the minor ninth, we now start to listen for it again later in the song, and take note of whether it repeats or not.

Too much repetition and predictability, on the other hand, and we essentially stop listening, so there is no dopamine rush. Each person’s min/max threshold for repetition, surprise, and novelty is apparently different.

In my earlier article, I described a popular theory that our love of sad songs is about catharsis and/or the safe, vicarious, ‘pleasurable’ experience of emotions that might be too precarious to feel as a result of a direct, personally sad event. Beyond that, I posited that it’s our body that makes decisions on what music we listen to, and reacts accordingly, and that ‘we’ (with our supposedly discerning musical tastes) really have no say in it whatsoever.

That would suggest that our biological conditioning plays just as important a role in our musical tastes as cultural conditioning. And indeed, the people whose musical tastes are closest to mine (measured by the degree to which our personal music ‘libraries’ overlap) do not correlate at all with those whose cultural conditioning most closely resembles mine. Peers I grew up with, while perhaps liking certain music that was popular when we were doing things together, mostly have very different musical preferences from mine. And people with whom I have almost zero cultural connection, but some biological similarities to (eg a predisposition to depression) seem much more likely to share my musical tastes.

What does our body want, then, if it, rather than our cultural influences, is ‘choosing’ our musical tastes for ‘us’?

Perhaps, just as it ‘chooses’ food for us to compensate for its perceived nutritional deficiencies or to feed its chemical addictions, our body might be ‘choosing’ music for us that restores its chemical balance or feeds its chemical addictions (eg too little or too much dopamine).

The emotions that arise in me listening to Adagio for Strings, for example, are a kind of sadness, but they are a joyful, peaceful kind. Maybe it’s my body’s way of saying “You need to feel sadness about the awful state of the world, but you’re afraid to, so here, try this music”. Several people who suffer from depression have told me they love this piece because it makes them cry, and feel better.

And when I listen to (and watch the remarkable choreography of) a song [1] like Left Right, perhaps my body is urging me to listen to it in order to get me to feel, and to appropriately express, a sense of incredible joy and connection with the rest of the world, which is, after all, an astonishingly beautiful place in which we’re all doing our best: “Hey, life is good, get off your ass and enjoy it, laugh, dance, celebrate”. This song, which is only a month old, has already been ‘covered’ by at least 40 dance groups captivated by its infectious vibe and its fun, expressive choreography. Maybe a lot of bodies out there are telling their ‘owners’ to get up, laugh, dance and celebrate.

In my earlier article, I asked:

Does my infatuation with these songs indicate that I am looking to lose myself (lose my self?) in a safe-to-feel world? A world that makes more sense (viscerally, rather than intellectually) and is more emotionally honest and courageous than the one I seem to live in? These songs reach through my fear of feeling and let me feel things I’ve only otherwise ever felt strong enough to feel when I’ve been in love — when the chemicals just overpowered the fear.

So perhaps our taste in music is just one more subtle, ‘unconscious’ way our body does its best to take care of itself (and ‘us’).

A guy who’s written a book about the emotional impact of Adagio analyzed it as follows:

By taking the listener through emotional landscapes of its own creation, on its own terms, at its own speed, music is as close as one can come to actually re-experiencing the process and texture of unfolding emotion. It doesn’t show you a reflected image of the landscape of loss, it takes you through it – a very different notion. And Barber’s Adagio is so moving, so affecting, precisely because as we pass through that territory, we reel with the shock of recognition.

Emotionally, psychologically, we’ve all been here before; we can feel the congruence to our own experiences of grief and desperate hope as it unfolds through time. In a nutshell, it works so very well because he got it so very, very right.

Listen… to the orchestral recording of the Adagio. Hear how we begin in deep mourning and isolation, are lifted into the possibility of redemption by the IV-V in D-flat major, then dropped back into the darkness by iv-V in minor. This is the subtext of the entire piece: a desperate struggle to escape from the reality of grief (B-flat minor) into the consolation of hope (D-flat), only to slide back into an inevitable reality, over and over… [Barber] demonstrated a breathtaking comprehension of both musical and psychological processes, then wrote a piece of music so astonishingly well-matched to the unfolding process of grief that one can hardly help being moved by it.

Well, maybe. Or perhaps Samuel Barber had no choice but to write this piece exactly as it was written, through him and his body, an expression of his own body’s need to reconcile with the possibility of never-ending grief, to adapt to it, to accept it, and to express it, the only way it could.


  1. I think there’s a lot more to this ‘pop’ song than meets the eye. XG consists of seven young Japanese women who spent the last five years in an intense program in Korea learning advanced singing, dancing, composition and other skills, and rehearsing endlessly, before their first song was released a few months ago. The music was written specifically for the group by a team of at least 14 composers, some of whom obviously have classical music training; you can spot a whole suite of international genres and influences in the instrumentation, harmonies, rhythms, and musical overlays going on, mostly unobtrusively, in this ‘simple’ happy song. 
  2. POSTSCRIPT: I’ve been asked a lot about my opinion on the new AI apps, and my response generally has been that it’s not intelligence at all, and, like video games, will mostly turn out to be an amusing new form of entertainment (and, sigh, a military training tool). But I do believe it will have a major effect on the arts, including music. There is no reason why AI couldn’t, for example, parse my library of favourite music, and produce a (plagiarized, derivative, kind of) work that might well become my favourite song. It is capable of sussing out, and replicating, the ‘ingredients’ inherent and present in art that we love, in novel and potentially exciting ways. It is certainly soon likely to outperform the current music ‘recommendation engines’, whose algorithms are essentially not up to the task.
17 Mar 04:13

Twitter Favorites: [syncros] Dusk at @OntarioPlace last night. I stood there for a few moments taking in the view, dreaming of the day visitors… https://t.co/xGpXQ6qw9E

I'll be @syncros@mastodon.social with you @syncros
17 Mar 04:12

The Rent’s Too Damned High - GEN

mkalus shared this story from GEN — Political News, and Articles About Power and Culture - Medium.

A human right, commodified and rendered zero-sum.

The pandemic housing bubble has multiple, complex causes. Among them:

Generations of Americans have dreamed of owning a home, both to insulate themselves from the whims of their landlords and to create intergenerational wealth. Home ownership was a key driver of social mobility, allowing working class people to enter the middle class. A horrible “natural experiment” shows just how important property acquisition is to economic stability: redlining and restrictive covenants froze Black people out of the home-purchasing boom of the New Deal and the GI Bill, exacerbating and accelerating the racial wealth gap.

Two factors drove the growth of the American middle-class: property ownership and unionization. Of the two, unionization was more universal — by no means free of institutional racism, but far more accessible than home ownership.

Of the two, unionization was the one that underwent sustained assault from business, finance and the state. After decades of declining union participation, amid stagnating wages and worker misclassification, the dream of social mobility through stable employment has evaporated for most workers (especially workers from the poorest households, burdened beyond belief by student debt, this debt assumed on the assurance that it would create employment-based access to a stable, middle-class existence).

But the American belief in home ownership as a path to a better future for homeowners and their descendants remains intact. And housing shortages — and the bubbles that attend them — only fuel this belief. When the house your working-class parents bought for $30,000 is appraised at $1.5 million, home ownership becomes a solution to all of life’s insecurities.

But asset accumulation isn’t — and can’t be — a path to widespread prosperity. A middle-class that relies on increasing property values as a means to fund their kids’ university tuition, their own retirements, and the next generation’s down-payments sows the seeds of its own destruction. Far from guaranteeing your kids’ security, a focus on asset appreciation dooms them to precarity and penury.

Why do homes increase in value? Because they grow more valuable over time. But that value isn’t intrinsic: the roof doesn’t get better at keeping out the rain, sleep doesn’t come more easily in the bedrooms. Rather, the value comes from what the house gives you access to.

Take education: the American practice of funding schools through local taxes (rather than guaranteeing the same level of per-pupil spending statewide) means that your kids’ access to a good education — and thus college, as well as merit-based scholarships — is determined by the property values in your neighborhood.

That means that renting — a proxy for living in a place with lower property values — puts you in a neighborhood with worse schools, too. Or, to put it another way, homes are worth more because they include a better education for your kids.

But the renter’s penalty — AKA the homeowner’s premium — goes beyond education. Home ownership is tax-advantaged in several ways, from mortgage payment deductions (significantly reduced by Trump in a bid to punish affluent Democrats in large coastal cities, now likely to be restored) to tax-free inheritance of the family home.

Or, to put it another way, if you can afford to buy a house, about half of your monthly shelter bill is tax-deductible — but not if you rent. This fact makes owning a house doubly profitable, because the desperation of renters to get tax-advantaged shelter makes your house more valuable, and that additional value can be passed, tax-free to your kids. To top it all off, homeowners can access low-cost debt through home equity lines of credit, meaning that money itself is more expensive for renters.

The very existence of the rental market is key to home appreciation: one reason someone might pay you more for your house than you paid for it is because they expect to be able to rent to someone who can’t afford to buy. The more lucrative it is to be a landlord, the more every rentable home is worth, because every sale potentially includes bidders whose maximum price includes their expected returns from rental income.

This means that the more rights tenants have, the less your house is worth, even if you never rent your house out. Or, contrariwise, when tenants are worse off, homeowners are better off.

Here are some things that make landlords (and thus all homeowners) better off: easy evictions; no limits on annual (or arbitrary) rent hikes; the right to charge tenants for maintenance and capital improvements to their homes; the right to discriminate against potential tenants (for example, excluding single parents and other people at higher risk of falling behind on the rent); the right to levy service charges and late fees, and assess arbitrary, usurious interest on these debts. Oh, and insulation from liability for dangerous, undermaintained properties.

All of that makes tenancy more horrible, which makes homeownership more desirable, which makes homes themselves more valuable. In other words, the reason your parent’s $30,000 house is now worth $1.5 million isn’t (merely) that the neighborhood improved or because they finished the basement: it’s because tenancy is so life-destroyingly terrible and precarious that anyone who can find a way to scrape up a down-payment and buy a house will, even if that means assuming equally life-destroying levels of debt.

Housing — like health-care, education and nutrition — aren’t just commodities, they’re also human rights, necessities of life itself. Put that way, it’s easy to see why hitching national prosperity to a steady rise in the value of housing as an asset is bad policy: imagine if we expected our intergenerational wealth accumulation to come from food, and set about doing everything we could to make food as expensive as possible. Life without shelter isn’t nearly so hard as life without food, but it’s not something anyone voluntarily opts for.

Homeownership as a source of wealth was always a devil’s bargain. A decent place to live for all is obviously a sign of a functional society, just like a dignified retirement, a good education, and nutritious food, clean water and sanitation.

A nation that seeks prosperity from asset accumulation necessarily becomes a land of winners and losers. The winners have assets, the losers pay to use them, which makes the assets more valuable, which lets the winners buy more assets. You know how this works: after all, you’ve played Monopoly.

The American middle class didn’t emerge thanks to property ownership — property ownership came about as the result of wage gains due to strong (and hard-fought) labor rights, and as a result of public subsidy for private homebuilding (the GI Bill). Homeownership is a good way to convert gains from a worker-friendly labor market into something durable and insulated — but it’s no substitute for workers’ rights.

It only took a generation for the dream of homeownership to become a nightmare. Trading labor rights for asset appreciation meant that guaranteed pensions became market-based 401(k)s, turning American workers into the suckers in the financial markets’ casino. As these older workers retire, they are forced to supplement their wholly inadequate pensions by liquidating, remortgaging or reverse-mortgaging the family home. Social Security helps, but not much — without a powerful organized labor movement to defend Social Security, the program has withered, offering a sub-starvation cushion.

If you have to sell your home in order to survive retirement, you can’t leave it to your kids or cash it in to pay for their down-payments (parental down-payments are the only practical way for many young workers to afford home ownership).

Your kids, with their stagnant wages, limited access to credit, and precarious work, are never going to be homeowners — especially not when they’re bidding against a private equity fund paying 15% over the asking price (in cash).

If your kids aren’t homeowners, they’re tenants. That’s the monkey’s paw irony of a generation’s worth of protecting your family by increasing the value of your home by making life worse for tenants. Every depredation you tolerated for tenants in the name of protecting your kids is now your kids’ destiny.

This reality still hasn’t sunk in for many homeowners. There are still plenty of middle-class people who think repealing Trump’s SALT cap will benefit them, which is categorically untrue: millionaires will average an extra $48,000/year from a SALT cap repeal, while 98% of middle-class earners will see nothing. The 2% of middle-class earners who benefit from a repeal will average $250/year from it.

Everyday homeowners still oppose the rent-controls that will protect their children. Just look at the failure of California’s Proposition 10 in 2018 — the rent-control measure battled a tsunami of dark-money and failed to secure a majority, especially after corporate landlords sent their tenants letters threatening massive rent-hikes if they didn’t vote against the proposition.

Middle-class homeowners once benefited from corporate landlords’ lobbying for property-owner-favorable (and tenant-punishing) rules, but those days are over. Those good schools that once made your home more valuable? Today, they’re starved for cash, thanks to the consolidated power of the corporate landlord sector and its ability to fight for lower property taxes.

It was always clear why the wealthy pushed asset-appreciation, rather than employment, as the path to a stable future for American families. Converting the human right to shelter into an aspirational asset-class (and the only source of social mobility) aligned the interests of large numbers of people with a small number of very rich people.

The gains the middle class reaped during its years of simping for landlords were only ever a convincer for a long con, money the mark was allowed to hold onto for a time. Now that the con is closing out, those gains are being returned to their rightful owners, the tiny minority of Americans — mostly Wall Street firms — who are consolidating the human need for shelter into their hands.

Even if you own your home, you will struggle to pass it onto your kids. The ascendancy of capital means that you’ll need to use that home to keep from being a burden on your kids when you stop working; any leftovers will go to defray your kids’ student debts.

If they’re lucky, they might be able to rent that family home from a Wall Street firm after you die, but when they do, they’ll be subject to arbitrary rent-hikes, rapid evictions, and a barrage of fees, penalties and interest.

Labor rights, not property rights, were always the key to prosperity.

Cory Doctorow (craphound.com) is a science fiction author, activist, and blogger. He has a podcast, a newsletter, a Twitter feed, a Mastodon feed, and a Tumblr feed. He was born in Canada, became a British citizen and now lives in Burbank, California. His latest nonfiction book is How to Destroy Surveillance Capitalism. His latest novel for adults is Attack Surface. His latest short story collection is Radicalized. His latest picture book is Poesy the Monster Slayer. His latest YA novel is Pirate Cinema. His latest graphic novel is In Real Life. His forthcoming books include The Shakedown (with Rebecca Giblin), a book about artistic labor markets and excessive buyer power; Red Team Blues, a noir thriller about cryptocurrency, corruption and money-laundering; and The Lost Cause, a utopian post-GND novel about truth and reconciliation with white nationalist militias.

17 Mar 03:50

The surprising ease and effectiveness of AI in a loop


Matt Webb on the langchain Python library and the ReAct design pattern, where you plug additional tools into a language model by teaching it to work in a "Thought... Act... Observation" loop where the Act specifies an action it wishes to take (like searching Wikipedia) and an extra layer of software then carries out that action and feeds back the result as the Observation. Matt points out that the ChatGPT 1/10th price drop makes this kind of model usage enormously more cost effective than it was before.

Via @intrcnnctd

17 Mar 03:50

Vancouver filmmaker Anthony Shim honours his mother through writing and directing Riceboy Sleeps

by Charlie Smith

There’s a memorable scene early in Riceboy Sleeps, by Vancouver director and screenwriter Anthony Shim (심명보).

A blue-collar factory worker slaps the ass of a Korean single mother, So-Young, played by Choi Seung-yoon (최승윤). So-Young immediately confronts the much larger man.

“If you touch me again, I will kill you,” she boldly declares.

He responds by telling her to take it easy, but So-Young won’t be brushed off.

“You don’t touch me! Understand?” she firmly declares.

The message is clear—this is a confident and assertive woman.

Over Zoom, Shim tells Pancouver that growing up on Vancouver Island and in Coquitlam, he saw so many unrealistic depictions of Asian women in TV and movies. They were either submissive, super-human, or prostitutes. So, he consciously created a lead character to dispel those stereotypes.

“I wanted to be part of changing that narrative and that image,” Shim says.

Riceboy Sleeps earned six Canadian Screen Awards nominations: best motion picture; performance in a leading role; achievement in direction, cinematography, and editing; and original screenplay.

In addition, Shim’s second film has racked up a pile of other honours. They include a $100,000 cash prize as the Rogers Best Canadian Film Award from the Toronto Film Critics Association. That was followed by the audience award at the Glasgow Film Festival.

“Every one of them feels just like a miracle, because it’s different people,” Shim says. “I never imagined that people in Scotland would respond to this film in the way that they have.”

On Friday (March 17), it will begin a Vancouver theatrical run at the Fifth Avenue Cinemas.

Choi Seung-yoon has won rave reviews for her performance as the single mother So-Young in Riceboy Sleeps. Photo by Katrin Braga.

Shim tries to do justice to his mom

Shim describes Riceboy Sleeps as “a story of a mother and son searching for a home”. They settle in Coquitlam, where the young boy, Dong-hyun, encounters racism in school. As a teen, he juggles a bicultural existence, surrounded by white friends but also deeply curious about his Korean roots. The young boy is played by Dohyun Noel Hwang and the teenager is played by Ethan Hwang.

“The relationship between the mother and son is largely inspired by my relationship with my mom,” the 36-year-old director says.

It’s a deeply emotional film, depicting various challenges faced by an immigrant single mother who remains strong in the face of adversity.

“If I get to create a character based on my mom, I want to make sure it does her justice,” Shim says. “She can go watch it and go, ‘That’s good; that’s accurate; that’s honest.’ ”

The movie accurately depicts how Shim received his English name, as well as his teenage struggles in coming to terms with his identity. Layered on top are dramatized situations, characters, and conflicts, most of which were filmed in the Vancouver area.

He reveals that his mother reads all of his scripts, including this one. And he believes that she was initially caught off guard by Riceboy Sleeps. The story and dialogue are not an exact replication of her life.

“I think when she saw the film, it was a very different experience for her,” Shim adds.

His mom’s friends attended a screening at last year’s Vancouver International Film Festival, where Riceboy Sleeps won the Best Canadian Feature award. According to Shim, these family friends were incredibly proud of him.

One of them revealed that she felt heartbroken thinking of the old days when the kids were little and life was so hard.

“And I’m, like, ‘Yeah, that’s what the movie is.’ But it seemed like she was so surprised,” Shim recalls.

Watch the trailer for Riceboy Sleeps.

Music reinforces the mood

Cinematographer Christopher Lew shot many scenes in one continuous take. This enabled Shim to seamlessly move through different locations, time periods, and even continents.

The director acknowledges that his first cut was far too long. And the only way to trim it was by removing entire sections.

It meant that Shim had to jettison another storyline involving the teenage character and his friends, even though he really liked how it turned out. But in the end, he explains, everyone agreed that Riceboy Sleeps is a film about a boy and his mother. As a result, anything deviating from that storyline had to go.

“That was sad, but ultimately, it was the best,” Shim says.

In addition to the cinematography, the film relies on compositions by Andrew Yong Hoon Lee to reinforce the mood. Shim reveals that he began listening to Lee’s music online on the recommendation of the sound designer.

“It became sort of the soundtrack of the film at the writing stage,” he says.

Shim reached out to Lee, who is a musical artist without grand ambitions of composing scores for movies. They spoke for months about everything but music. That included their identities as male Korean Canadians, their relationship to South Korea, parents, racism, and being a husband and father.

“It was obvious that we understood and experienced very similar things and wanted to express very similar feelings and emotions and themes through our respective medium,” Shim says. “So we thought, ‘Okay, let’s do this and let’s see what we can come up with.’ ”

Shim describes Lee’s compositions as having a certain kind of Korean quality that he was seeking. However, he emphasizes that it’s not Korean music.

“I was looking for something that was wholly original in that sense,” Shim adds.

Dohyun Noel Hwang is one of two actors who play Dong-hyun in Riceboy Sleeps.

Language rule kept film out of Oscar contention

In addition, Shim wanted these compositions in his hands while he was still editing so he could cut the scenes to the music.

“Every single track he sent my way worked,” Shim says.

In fact, like the cinematography, he actually ended up with a surplus of good music, which gave him plenty of options.

The film includes some Korean dialogue with English subtitles between the mother and boy, but not as much as Shim originally intended. That’s because when he was making Riceboy Sleeps, a Canadian film would only be eligible for Telefilm financial assistance and tax credits under certain circumstances.

That included being at least 50 percent in English, French, or an Indigenous language. Since the film was made, the rules have been adjusted.

Consequently, Shim reduced the amount of Korean dialogue between the mother and son, replacing it with much more English. And he doesn’t think that his film suffered in any way as a result of that decision.

However, it prevented him from trying to get it submitted to the Academy Awards in the international feature film category. That’s because these films must have more than 50 percent non-English dialogue.

“So we just automatically were ineligible for that because of that reason,” Shim says. “I don’t think it would have made the film that much better.”

Follow Pancouver editor Charlie Smith on Twitter @charliesmithvcr. Follow Pancouver on Twitter @PancouverMedia.

The post Vancouver filmmaker Anthony Shim honours his mother through writing and directing <i>Riceboy Sleeps</i> appeared first on Pancouver.

17 Mar 03:48

David Suzuki: Protecting the high seas must be a high priority

by Staff

Pancouver focuses on underrepresented artists. It also publishes David Suzuki columns, including this one on protecting the seas, to advance education about the environment. Without a habitable planet, there will be no arts and culture.

By David Suzuki

As with much of human activity, we’ve exploited and polluted the oceans without really understanding them. We know more about the moon and Mars than what lies beneath the seas.

We do know, however, that oceans provide oxygen and that they absorb carbon dioxide, making them critical to slowing climate disruption. They account for almost all living space on the planet, and provide much of the world’s food.

Because countries are only responsible for ocean territories within a designated distance from their coastlines, the almost two-thirds of ocean area that make up the high seas have pretty much been a free-for-all when it comes to activities like fishing, mining and transportation.

That may soon change. On March 4, 193 nations agreed on a historic treaty to protect international waters. It comes in the wake of the December UN biodiversity conference (COP15) in Montreal. There, countries agreed to protect 30 percent of lands and oceans by 2030. The UN High Seas Treaty is essential to realizing that goal for oceans, as it creates a legal framework to set up a network of marine protected areas and includes requirements for environmental impact assessments in areas beyond national jurisdictions.

As with the conferences of the parties on climate and biodiversity, the treaty establishes a new COP to ensure governments meet regularly and are accountable for ocean governance and biodiversity protection.

Canada will protect 25 percent of its territorial seas

Although the treaty has been a long time coming, it will take a lot of work to realize its promise. The world has held 27 climate COPs since 1995 and we’re still behind in addressing that rapidly escalating crisis. And most countries have failed to uphold international agreements on high seas fishing.

The treaty aims at fairness between nations, covering, for example, equal access, shared benefits, capacity development and technology transfer for marine genetic resources—genetic material from marine sponges, krill, corals, seaweeds and bacteria, which “are attracting increasing scientific and commercial attention due to their potential use in medicines and cosmetics,” the Guardian reports.

But concerns have been raised that “bodies already responsible for regulating activities such as fisheries, shipping and deep-sea mining could continue to do so without having to carry out environmental impact assessments laid out by the treaty.”

Canada, which hosted the fifth International Marine Protected Areas Congress (IMPAC5) in February, has committed to protecting 25 percent of its territorial waters by 2025 and 30 percent by 2030, and to playing its part in ensuring protections extend to the high seas.

The federal government has effectively declared a moratorium on deep-sea mining and has set a goal to establish 10 new national marine conservation areas.

It’s also worked with the B.C. government and First Nations on a plan to establish a marine protected area network in the Great Bear Sea, which reaches from northern Vancouver Island to the Alaska border. It will be the world’s largest Indigenous-led, collaboratively developed MPA network.

Marine protection linked to reconciliation

The federal government is working with Indigenous nations to establish a co-managed marine protected area, Tang.ɢwan—ḥačxswiqak—Tsig̱is, off the west coast of Vancouver Island, in which vast underwater mountain ranges and unique deepwater ecosystems will be protected from oil and gas activities, deep-sea mining, bottom trawl fishing and dumping.

The federal government is also supporting fisheries closures in the Gwaxdlala/Nalaxdlala Indigenous Protected and Conserved Area, established by the Mamalilikulla First Nation. It will form part of the Great Bear network.

Most of Canada’s marine conservation areas will recognize the importance of reconciliation and Indigenous leadership.

We’re already seeing positive effects. After a court challenge by World Wildlife Fund Canada and the David Suzuki Foundation, represented by Ecojustice, ExxonMobil relinquished its oil and gas exploration permits on the B.C. coast in March. Canada has also agreed to phase out West Coast open-net pen fish farming.

These national and international agreements, plans and actions show what can be accomplished with Indigenous leadership and pressure and effort from the public, scientists and conservation organizations. But even with the significant progress over the past few years, we can’t get complacent. The world is finally waking up to the climate and biodiversity crises. Protecting the oceans is a major part of resolving them. It’s up to us all to make sure governments live up to their promises.

There’s no time to lose!

David Suzuki is a scientist, broadcaster, author and co-founder of the David Suzuki Foundation. Written with contributions from David Suzuki Foundation Senior Writer and Editor Ian Hanington. Learn more at davidsuzuki.org.

The post David Suzuki: Protecting the high seas must be a high priority appeared first on Pancouver.

15 Mar 17:17

The Financial Times on Tim Cook's Mixed Reality Bet, My Issues With FT’s Report (Daily Update)

by Neil Cybart

Hello everyone. There were a few incoming questions about Apple’s investment portfolio objectives and actions. If there are any additional questions regarding yesterday’s update, send them my way, and all will be addressed at the same time.

For today’s update (Tuesday, March 14th), we will focus on a Financial Times article published on Sunday about Tim Cook and Apple’s upcoming mixed reality headset. As we approach WWDC, the chatter and rumors regarding Apple’s headset will grow. We will soon take a much more in-depth look at Apple’s upcoming headset.


The Financial Times on Tim Cook's Mixed Reality Bet

Over at the Financial Times, here are Patrick McGee and Tim Bradshaw:

“When Tim Cook unveils Apple’s new ‘mixed-reality’ headset later this year, he won’t just be showing off the tech giant’s latest shiny gadget.

The Apple chief will also be guaranteeing his legacy includes the launch of a next-generation hardware product that some inside the company believe might one day rival the iPhone.

After seven years in development — twice as long as the iPhone — the tech giant is widely expected to unveil a headset featuring both virtual and augmented reality as soon as June.

The stakes are high for Cook. The headset will be Apple’s first new computing platform to have been developed entirely under his leadership. The iPhone, iPad and even Watch were all originally conceived under Apple’s co-founder Steve Jobs, who died in 2011.”

This isn’t the first time that the FT has published an article looking at Tim Cook’s legacy. Back in January, the FT ran two articles about Apple and China, a relationship that Cook played a very big role in creating. The thinking put forth by the FT was that Apple finds itself “beholden” to China with Cook having no good alternative.

An Above Avalon membership is required to continue reading this update. Members can read the full update here. An audio version of this update is available to members who have the podcast add-on attached to their membership. More information about the podcast add-on is found here.

(Members: Daily Updates are always accessible by logging into Slack. If you haven’t logged into Slack before, fill out this form to receive an invite.)



15 Mar 17:16

Can Large Language Models (LLMs) like ChatGPT help create OERs in a more sustainable way?

Encore+ Project, Mar 15, 2023

This is a super-brief post reporting that "Teachers are using ChatGPT for schools more than students, says a recent survey carried out by the Walton Family Foundation." I don't actually trust foundation research like this, because it often comes with an agenda (like, say, promoting charter schools). I'm not sure why the European Network for Catalysing Open Resources in Education is covering this item. The article also references a recent post (January) from David Wiley arguing for "the need for the future instructional designer to immediately update their curriculum to leverage the existence of these (AI) tools". Maybe so. But more to the point: this all feels like it was written by AI. None of the content relates to the headline, and it was the headline that caught my eye, because I've been predicting the possibility of custom on-demand AI-authored OER for many years now. If a human wrote this, let them step forward and take accountability for really phoning it in on this one.
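
As a rough, purely illustrative sketch of what custom on-demand AI-authored OER might look like in practice, the snippet below asks a large language model to draft an openly licensed lesson outline. The openai client usage, model name, and prompt wording are assumptions for illustration only; nothing here is described in the post itself.

```python
# Minimal sketch (assumptions throughout): generate a custom, on-demand OER
# lesson outline with an LLM. Requires the openai Python package and an
# OPENAI_API_KEY environment variable; the model and prompt are illustrative.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def draft_oer_lesson(topic: str, grade_level: str) -> str:
    """Ask the model for a short, openly licensed lesson outline on a topic."""
    prompt = (
        f"Write a one-page open educational resource lesson outline on '{topic}' "
        f"for {grade_level} students. Include learning objectives, two activities, "
        "and a short formative quiz. License the text CC BY 4.0."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice, not an endorsement
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_oer_lesson("photosynthesis", "grade 8"))
```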

15 Mar 17:16

The Role of AI in Online Reading Comprehension: A New Literacies Perspective

Ian O'Byrne, Mar 15, 2023

Yet another comment on how AI will support online learning assistants. "This post discusses advances in machine learning, artificial intelligence, and microinteractions as we read online." The interesting remark (and I actually remember this) was when people were asking, in the early days, whether online reading even counted as 'reading'. But my main point centres around this remark: "One thing appears to be true at this early point, AI will not substitute for the human connections and emotional aspects of learning. AI will not replace teachers and educators." How does this 'appear to be true'? Is there even a shred of evidence that supports this? I ask this especially in view of the fact that not all human interactions are benign. Emotional abuse, unfair grading, discrimination and prejudice - all these and more are common experiences for students. Don't be so sure AI won't replace 'human connections'. A lot of people out there are wishing it would. Related: Khanmigo, Khan Academy's AI-powered guide.
