Last week, Vancouver-area radio host Jill Bennett went viral after tweeting a photo of a Dodge Durango straddling a bright yellow concrete barrier that the driver had hit. “Hey @CityofVancouver this is second incident I’ve seen caused by these useless ‘slow street’ barricades installed last month. They don’t slow down traffic; they cause crashes and traffic chaos,” Bennett wrote.
Understandably, thousands of people proceeded to pile on, pointing out how ridiculous her complaint was. Had the driver simply been paying attention to the road and driving at a reasonable speed, they would have easily noticed the brightly colored traffic calming installation, driven through without a problem and nothing bad would have happened to them. Blaming anyone other than the driver for this crash is absolutely insane.
And this is far from a one-off situation where one idiot had a bad take. This attitude is incredibly common. Just head over to NextDoor or the local subreddit in any small city that has recently added some form of protected bike lanes, and you’ll see the exact same sentiment. When the city closest to where I currently live (spoiler: not every Jalopnik staffer lives in New York) added flexible posts with some reflector tape on them to (sort of) protect a bike lane in its downtown, they were almost immediately hit, and the complaints started to flood in from people who were upset they were ever installed in the first place.
How dare the city put drivers at risk by doing one tiny thing to make riding safer for cyclists! These barriers just jump out and attack cars at random! I was just minding my own business, and now I have a flat tire! Thanks for nothing, idiot city planners.
I’m sorry to break it to anyone who has trouble keeping their car out of a bike lane (or off a concrete barrier), but it’s not the bike lane’s fault you’re a shitty driver. If you hit something stationary, that’s your fault. Pay attention to the fucking road while you’re driving. It’s not too much to ask when other people’s lives are literally at stake.
After all, killing someone who’s not in a car is still killing someone. And if you think they were asking for it because they were walking or riding a bike, you’re just a bad person. You’re the one driving the 5,000-lb vehicle. You’re the one responsible for making sure you don’t hit anything or anyone. Trying to blame others for your shitty driving is just ridiculous.
In the case of cyclists and pedestrians, sure, it’s possible to construct a hypothetical scenario where they might get hit while doing something that makes it entirely their fault. But not bike lane barriers and traffic calming measures. They’re just sitting there. Not moving. Completely stationary. Asking drivers to avoid hitting them is like asking drivers to avoid hitting buildings. It’s nothing more than a basic requirement for being allowed to drive on public roads.
If that’s too much to ask, then maybe it’s time for the state to take your driver’s license away. Oh, you live in a suburban hellscape and can’t get around without a car? Too bad. Stay home and have your groceries delivered until you can prove to society that you can be trusted behind the wheel again. Or take the bus. Sorry if you think you’re too good for public transportation. You’re clearly not good enough at driving to have a license, so suck it up, buttercup. That barrier you hit could have been someone’s child.
Operating behind enemy lines, one soldier fighting for Ukraine knows the Russians will hunt for him the second he sets up his portable Starlink internet dish.
He and his team set up the device only in urgent situations where they need to communicate with their headquarters. The Russians “will find you,” the soldier said, who goes by the call sign Boris. “You need to do it fast, then get out of there.”
The soldier, an ex-French Foreign Legionnaire who now operates as part of a reconnaissance-and-sabotage unit, is just one of Ukraine’s many soldiers for whom the Starlink service is a double-edged sword. Like other soldiers interviewed for this article, Boris asked to be referred to by his call sign for security reasons.
On the one hand, Ukrainian soldiers say the device is key to their operations, notably its ability to help coordinate devastating artillery strikes. On the other, they report a variety of ways in which the Russians can locate, jam, and degrade the devices, which were never intended for battlefield use.
The end result is a MacGyver-esque arms race, as Ukraine rushes to innovate and Russia moves to overcome these innovations.
In Boris’s case, Russian signals-intelligence equipment is likely pinpointing the devices by scanning for suspect transmissions, said Todd Humphreys, a professor at the University of Texas at Austin who has studied Starlink devices.
One Ukrainian drone operator with the call sign of “Professor” also reported prolonged jamming that prevented his team from using his Starlink unit.
Professor said the jamming began two to three months ago, and that its intensity varied from place to place. “In one place everything’s fine, and in another—it doesn’t work,” Professor said.
At times the jamming would continue all day. “It’s really powerful,” Professor said.
When the Starlink has no signal, the drone operator Professor sometimes tries a novel solution: he places the terminal in a hole. Sometimes, the signal then returns.
That should help keep the Starlink up through Russian GPS jamming, said Bryan Clark, a senior fellow at the Hudson Institute and an expert in electronic warfare.
Starlinks are particularly vulnerable to such jamming. Each terminal uses a GPS unit to determine which passing satellite should provide an internet connection.
Fortunately for Ukraine, GPS jammer signals are low power. This means that dirt or concrete can block the jammer signal. As long as a Starlink device has a barrier between it and the Russian jamming signal, it can continue to function, according to Clark.
A drone pilot with the call sign of Morgenshtern says he’s seen similar problems with other gear that uses GPS. “I think they introduced some more advanced equipment, or just their number increased,” he said.
Clark added that Ukraine can’t put its drones in a hole to protect them from jamming by Russia’s own Orlan-10 drones, but there may be other ways.
As Starlink allows users to manually enter their GPS locations, users could simply place a cheap GPS-receiver device outside jamming range and then enter its location into their Starlink terminal, offset by their distance to the GPS receiver.
One drone unit commander near the Ukrainian city of Bakhmut said his problems were unrelated to GPS jamming. Sometime in January, the commander said, Starlink uplink had been degraded to the point that his units often couldn’t make audio calls. Instead, the device could only send and receive text messages. The Starlink terminal also took longer to find satellites.
Clark said these problems were likely due to advanced jamming systems that attack the uplink of information to a satellite. The Russian military typically keeps these systems in reserve to defend Russian territory itself. They are theoretically vulnerable to Ukrainian strikes as they must be deployed within dozens of kilometers from their target and are not highly mobile.
That Russia might move such valuable systems to Bakhmut aligns with reports that more professional forces are deploying to take the city, which Russia has been attempting to capture for seven months and now partly encircles. When visited by this reporter in Bakhmut on Feb. 14, a drone soldier with the call-sign Lebed reported that Russia was sending more professional soldiers to attack Ukrainian positions.
On March 10, Ukrainian Presidential Advisor Mykhailo Podolyak told the Italian newspaper La Stampa that Russia has “converged on Bakhmut with a large part of its trained military personnel.”
Clark said Russian satellite jamming is also defeatable with adjustments to Starlink’s software. In March 2022, Starlink engineers quickly pushed through a code update in response to Russian jamming attempts, a U.S. official said that April.
For now, Ukraine is stuck with Starlink and its problems, Clark said. Other satellite internet companies, such as Astranis, lack the infrastructure to provide continuous coverage, while satellite phone systems have too little bandwidth for Ukraine’s needs, he said.
It isn’t all bad news for Ukraine, though.
Two officers responsible for drone operations reported no issues with jamming. Similarly, neither Professor nor Morgenshtern said they were currently seeing Starlink jamming.
It’s unclear why jamming would have subsided for these units.
Russian forces could be rotating jamming operations across the front, focusing on high-priority areas. Ukraine may also be targeting Russian electronic-warfare units. Ukraine regularly shoots down Russia’s Orlan drones, for example, which can carry electronic-warfare payloads.
In keeping with Ukraine’s often innovative approach to the war, some drone operators are even reselling crashed parts of these Orlan drones. On one website for Ukrainian drone operators, one user posted an image of the Orlan camera, offered for sale.
iOS 16.4 brings new emoji, push notifications for web apps on the Home Screen, Mastodon link previews, and more.
Today, Apple is releasing iOS and iPadOS 16.4, the fourth major updates to the OSes that introduced support for the customizable Lock Screen and Stage Manager last year, respectively.
Ahead of the debut of Apple Music Classical tomorrow and just a few months before a WWDC that’s rumored to be focused on the company’s upcoming headset and a relatively small iOS 17 update, 16.4 is comprised of two big additions to iOS and iPadOS (new emoji and push notifications for web apps on the Home Screen) alongside a variety of smaller, but notable improvements such as some new Shortcuts actions, Mastodon link previews in iMessage, some tweaks to Podcasts and Music, and more.
Let’s take a look.
21 New Emoji
Ever since our usual guessing game on the Connected podcast, I haven’t been able to stop thinking about the ginger and goose emoji in iOS 16.4. Those are just two of the new emoji introduced with today’s update, with other notable additions including the likes of moose, new colored hearts, and a donkey.
Some of the new emoji in iOS 16.4.
That’s some realistic ginger.
While I’m partial to the goose, I’m also happy about the addition of a pink heart (finally) and a proper wireless symbol, which I look forward to using in some of my shortcuts that display emoji in menus and alerts.
Push Notifications for Web Apps on the Home Screen, with Focus Integration
In what is likely part of a pre-emptive strategy ahead of the requirement to allow third-party browser engines on iOS and iPadOS later this year, Apple shipped a series of useful additions for web apps and the existing, WebKit-based alternative browsers in iOS 16.4. Regardless of the underlying motivation behind these additions just a couple of months before WWDC, these are solid enhancements to the web experience for iPhone and iPad, with one particular feature that I plan to explore more in depth later this week for Club MacStories members.
For the first time since the iPhone’s introduction in 2007, web apps added to the Home Screen now support push notifications and badges. What I like about Apple’s implementation of this feature is that notifications from web apps are managed just like the ones from any other native app: you’ll be prompted to grant notification permissions to a web app on the Home Screen with the usual system dialog; you can manage the web app’s notification options from Settings; and since these are “regular” push notifications, you can manage them from Notification Center as well as tie them to specific Focus modes.
As far as the notifications themselves go, iOS 16.4 doesn’t make any distinction between those originating from a native app compared to those coming from web apps previously saved to the Home Screen. The technology behind all this is the same Web Push API that Apple added to Safari 16.1 in macOS Ventura last year.
I was able to test notifications for web apps added to the Home Screen using Alerty.dev, a web service I recently discovered whose sole purpose is to let users program their own notifications to deliver via an API to all their devices. Alerty is similar to Pushcut and Pushover, but instead of requiring a native app to be installed on the user’s device, it can just deliver real-time push notifications via a web browser (on desktop) or a web app on the Home Screen in iOS and iPadOS 16.4. This was a perfect opportunity to sign up for the service and try it out with some of my shortcuts.
When in Safari, you cannot enable push notifications for web apps. You’ll have to add them to the Home Screen first.
After creating an Alerty account, I saved the web app to my Home Screen from Safari. I opened the web app, I was prompted to log in again (more on this below), and only at that point was I asked to give Alerty permission to display push notifications. This is an important technical detail: while I was in Safari, Alerty couldn’t ask me for notification access since iOS and iPadOS do not support notifications in Safari; it was only when I saved Alerty as a web app on the Home Screen that it could.
Once added to the Home Screen, I was able to grant Alerty the ability to send me push notifications.
Once I gave Alerty access to notifications, I could see those changes reflected in Settings, and of course I was also able to pick the app for one of my Focus modes. I put together a sample shortcut to send an instant alert via the Alerty API, ran it, and a second later I saw a regular push notification from Alerty appear on both my iPhone and iPad. It looked like another notification from any other app, but it was actually coming from a web app.
Later this week for Club members, I plan to share my shortcut for interacting with the Alerty API as well as some strategies for integrating this service with HomeKit and other types of automation.
Notifications for web apps have the same notification settings as native apps on iOS 16.4.
Notifications aren’t the only change coming to web apps in iOS and iPadOS 16.4. Also for the first time, third-party web browsers can add web apps to the Home Screen, which will reopen them directly in the browser that created them when tapped. As you can see in the example below, I was able to add MacStories to the Home Screen as a Microsoft Edge bookmark.
Adding a web app from Microsoft Edge to the Home Screen.
While it’s good to see Apple progressively give more functionality and system integrations to third-party browsers, their usefulness is still largely limited by the fact that these browsers are reskins of the Safari web engine. If you delete the browser that created one of these web apps on the Home Screen, then try to reopen the web app, it’ll fall back to Safari instead.
One of the highly anticipated changes of iOS 17 is the possibility of Apple having to relax its stance on disallowing alternative browser engines on iOS, and third-party browser makers are getting ready for that potential future. Once that happens, I’m sure that the ability to add full-on PWAs to the Home Screen will prove more useful than creating a saved bookmark for a glorified Safari shell. We’ll see.
The last thing I’ll point out about web apps on the Home Screen is that users can now add multiple instances of each and rename them, which makes sense in the context of multiple Focus modes and creating different versions of the same web app, perhaps logged into different accounts. While I haven’t found a use case for this feature myself, I think it’s the right approach.
New Shortcuts Actions and Focus Filters
Continuing the trend from last year, there are some new actions in Shortcuts for iOS and iPadOS 16.4. Unfortunately, rather than moving the app forward in meaningful ways for power users, these actions mostly revolve around exposing app settings and various toggles to Shortcuts. I’m not saying these are not welcome additions, because they are; I’m only arguing that Shortcuts hasn’t been substantially improved for its most loyal and dedicated users in a while now.
In any case, the new actions in Shortcuts are:
Auto-Answer Calls
Intercom (requires a HomePod; cannot be run on a Mac)
Lock Screen
Set AirDrop Receiving
Set Always-On Display
Set Announce Notifications
Set Night Shift
Set Stage Manager
Set True Tone
Set VPN
Shut Down (includes option for Restart)
Silence Unknown Callers
Like I said, I’m a bit disappointed that the new actions added to Shortcuts in the past year mostly involve the ability to control on/off settings with no deeper controls. These are nice actions to have, but I was hoping for more controls made available to advanced users, especially on iPad.
Case in point: the Stage Manager action in Shortcuts only allows you to either turn Stage Manager on or off with two toggles for choosing whether you want to see the dock and recent apps or not. These are the same settings you can find in Control Center for Stage Manager. As I argued last year, if Apple cared at all about making Stage Manager more palatable for power users, one of the (many) things they should do is bring support for the Mac’s ‘Find Windows’ and ‘Resize Windows’ Shortcuts actions to iPadOS. Instead, while Mac users can leverage Shortcuts to fine-tune their workspaces with two excellent Shortcuts actions, in iPadOS land all we can do with Shortcuts is turn Stage Manager on or off.
There is one great change I want to point out in Shortcuts for iOS 16.4, however: the Ask for Input action now lets you enter multi-line text instead of one line at a time only. As an advanced user of the app, I’m glad I can now – checks notes – enter multi-line text in a dialog.
I’ll also note that the ‘Set Always-On Display’ action is also available as a brand new system Focus Filter in iOS 16.4. As I explained last year, Focus Filters are based on the same intent technology that powers Shortcuts actions, which makes it possible for developers to expose the same functionality in both Settings and the Shortcuts app. I’ve long argued that users should be able to set their Always-On display preferences depending on Focus modes, so I’m happy to see this option supported in both Shortcuts and Settings now.
The ability to control the Always-On Display is both a Shortcuts action and system Focus Filter in iOS 16.4.
In practical terms, this change means you can now disable the Always-On display when you’re at work, or if you’re out and having dinner with friends, or at the movie theater. Whatever your use case may be, this is a good option to have and it can be accessed both from Focus in Control Center as well as with Shortcuts automations.
Other Changes in iOS 16.4
Here is a list of all the other changes in iOS and iPadOS 16.4 worth mentioning.
The page-turn animation is back in the Books app. In a flip-flop that would make Stephen Hackett proud, the page-turn animation – which had previously been removed in iOS 16 – is returning in iOS 16.4. The first time you open the Books app in 16.4, you’ll see an alert inside the reader view that tells you about the new options you can find in Books’ somewhat-hidden Themes & Options menu. One of the new options is ‘Curl’ for page turns, which restores Books’ glorious, real-time 3D effect for turning pages.
Welcome back, buddy.
As someone who thought the removal of the page-turn animation was a mistake, I’m very happy to see this feature return. Props to whoever inside Apple convinced their manager that this feature was worth restoring.
The Home Screen wallpaper is no longer blurred in Stage Manager. Of all the improvements and features that Stage Manager for iPadOS potentially needs, Apple chose to ship one in iPadOS 16.4: when you’re using Stage Manager, your Home Screen wallpaper is no longer blurred behind. That’s it, that’s the feature. Let’s move on.
There are Mastodon link previews in Messages, Mail, and Notes. Ever since I decided to embrace Mastodon and leave Twitter behind months ago, I’ve missed the ability to easily share and preview links to posts on iMessage. That’s changing with iOS 16.4, which comes with native support for Mastodon link previews inside the Messages app as well as Notes and Mail.
Mastodon link previews in Messages, Notes, and Mail.
In iMessage, Mastodon links will be automatically converted to a rich snippet with support for images and video attachments when sent in a conversation. They look just like Twitter rich links, but a) they have a gray background and b) thanks to the superior Mastodon API, these rich links tell you how many media attachments are included in a post. In Notes and Mail, you can get Mastodon rich links by saving them via the Notes share sheet extension or pasting them in the message composer and using the new link conversion option of iOS 16, respectively.
It’s great to see Apple ship support for Mastodon previews so quickly, and I’m glad I no longer have to take screenshots of posts if I want to share them easily with my friends on iMessage.
Interface tweaks for Apple Music and Podcasts. In iOS 16.4, Apple brought a series of small, and relatively unimportant, changes to the Music app.
Your profile picture (which you can use to open your Apple Music profile) is now displayed at the top of the Library page too; artwork in the Playlists page is smaller, making for a denser view of your playlists; there is a new and less obtrusive design for in-app alerts such as songs added to the library or queued in your Up Next. The latter is the most interesting addition in my opinion: these are new compact “bubbles” that are temporarily displayed at the bottom of the screen rather than in the middle of it. I wonder if Apple will make this style of alert an official API for developers in the future.
The new in-app alerts for Music (left).
Changes to the Podcasts app are more substantial and useful than the ones seen in Music. Channels, such as the MacStories one in Apple Podcasts, will now appear in the Library tab if you’re subscribed to them; when you open a channel, you’ll see all the shows from it that you’re already following at the top of the page. Additionally, the Up Next queue in the app now includes episodes saved to the library as well as episodes played from shows you’re not following (a nice feature that’s been available in third-party podcast apps for a while).
I continue to be intrigued by Apple’s Podcasts app, particularly because of its clean design, integration with the Apple Watch, and performance in refreshing podcast feeds. However, until Apple adds the equivalent of a ‘trim silence’ feature to save me some time when listening to podcasts, I can’t switch to it as my podcast player.
Voice Isolation for cellular phone calls. Following in the footsteps of FaceTime and VoIP apps, you can now enable Voice Isolation for cellular phone calls in iOS 16.4. This audio effect, which you can activate from Control Center, will prioritize your voice and block ambient noise around you.
Enabling voice isolation for a cellular call.
I tested this feature with my mom, who told me I “sounded good but metallic”. Your mileage may vary.
iOS and iPadOS 16.4 aren’t huge updates, yet most people will likely rush to install them because of the new emoji included in these releases. The nerdier among us will probably do the same to get native Mastodon link previews in iMessage, which are very nicely done. I continue to be let down by the poor execution and limitations of Stage Manager, and, at this point, I’m fully prepared to see iPadOS 17 go by without any major changes to iPadOS multitasking, which would be concerning.
iOS and iPadOS 16.4 are likely the last major updates before Apple’s attention turns to WWDC, the headset, and whatever may be in store for iOS 17. Worst case scenario, even if we won’t be getting any more iOS 16 updates and if iOS 17 turns out to be a smaller release this year, know this:
I enjoyed this explanation by The Verge’s Tom Warren on how Microsoft’s Phone Link app – which has long allowed Android users to connect their smartphones to a Windows PC – has been updated to support iOS notifications and sending texts via iMessage. From the story:
The setup process between iPhone and PC is simple. Phone Link prompts you to scan a QR code from your iPhone to link it to Windows, which automatically opens a lightweight App Clip version of Phone Link on iOS to complete the Bluetooth pairing. Once paired, you have to take some important steps to enable contact sharing over Bluetooth, enable “show notifications,” and allow system notifications to be shared to your PC over Bluetooth. These settings are all available in the Bluetooth options for the device you paired to your iPhone.
And:
Microsoft’s Phone Link works by sending messages over Bluetooth to contacts. Apple’s iOS then intercepts these messages and forces them to be sent over iMessage, much like how it will always automatically detect when you’re sending a message to an iPhone and immediately switch it to blue bubbles and not the green ones sent via regular SMS. Phone Link intercepts the messages you receive through Bluetooth notifications and then shows these in the client on Windows.
I got access to the updated version of Phone Link on my PC today, and this integration is pretty wild and it actually works, albeit with several limitations.
First, the setup process is entirely based on an App Clip by Microsoft, which is the first time I’ve seen and used an App Clip in real life. Essentially, my understanding is that this works similarly to how an iPhone can pair with an old-school Bluetooth car system: the iPhone and PC pair via Bluetooth, and you can then provide the PC with access to your notifications and contacts from iOS’ Bluetooth settings. This is the same UI I have for my KIA Sportage’s system, which uses regular Bluetooth to pair with my iPhone and can also display contacts and missed calls.
The setup process based on an App Clip.
The difference between my car and Phone Link, of course, is that with Phone Link you can type text messages from a PC and they will be sent as iMessages on iOS. This bit of dark magic comes with a lot of trade-offs (check out Warren’s full story for the details on this), but it works for individual contacts. I’ve been able to start a conversation with John, reply to his messages from Windows notifications, and even send him URLs1, and they were all correctly “intercepted” by iOS and sent over as iMessages. I’ve also been impressed by the ability to clear notifications from a PC and have them go away on iOS’ Lock Screen immediately.
The Phone Link app paired with my iPhone.
This was then sent as an iMessage.
The limitations of Phone Link for iPhone users mean you’ll always have to fall back to the actual iOS device for something – whether it’s posting in an iMessage group or sending a photo or acting on notifications – but for quick messages, glancing at notifications, and clearing them, I think this integration is more than good enough.
Fun fact: raw URLs sent from Windows are delivered as rich links from iMessage, but the card’s preview doesn’t load by default on the recipient’s device. ↩︎
What happens when AI image generation becomes powerful enough not to replace artists (or true imagination!), but to credibly remix photographs and movies in a way that we can no longer tell if they’re true or not?
Well, that didn’t take long: The latest update to Midjourney—which can now generate photorealistic faces—has spawned a flurry of images that show, among other things, Donald Trump getting arrested or the pope in an arresting outfit. Celebrities in fantastical situations are only the most obvious use, though: Some people are also generating events that never happened.
Something wild is happening on the Midjourney subreddit.
People are telling stories and sharing photos of historic events - like the “Great Cascadia” earthquake that devastated Oregon in 2001.
What’s surprising about this wholly unsurprising development is how the AI nails the visual style of the early 2000s: Although this earthquake never happened, the purported footage of it looks entirely credible to me, someone who spent a lot of time consuming media from that era. The colors veer towards grey and brown, the outfits are correct, and the lo-fi American television resolution feels completely appropriate for the time.
At the pace the technology is moving, we’re very quickly approaching a future where past events can be entirely constructed and pass the smell test—enhancing all the worst mechanisms of the post-truth age we live in.
Every wave of technological innovation has been unleashed by something costly becoming cheap enough to waste. Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt. This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.
It was my first "real" Cocoa project. I was learning Objective-C pretty much on the fly back then and still working a day job. I had no idea it would launch my full time career into being an indie Mac programmer. Happy birthday little guy, you're still the best wiki-style notebook around. One more year and you can legally drink!
VoodooPad is in different hands these days, but I'm happy to see it still living and still getting love from many of its users (whom I still get emails from every now and again).
Safari 16.4 is out (upgrade to iOS 16.4 to get it) and the biggest feature for me is mobile support for Web Push notifications. This little demo tool was the first I found that successfully sent a notification to my phone: frustratingly you have to add it to your home screen first in order to enable the feature. The site also provides a curl command for sending push notifications through the Apple push server once a token has been registered, which is the crucial step to figuring out how to build applications that can send out notifications to users who have registered to receive them.
Image generated by the DALL-E text-to-image generator based on the prompt, “A photograph of a humanoid robot who is using a pencil to draw a line chart on a piece of paper.”
As pretty much everyone and their robot dog is now aware, there are jaw-dropping breakthroughs happening in artificial intelligence (AI) on an almost daily basis. To those of us in the data visualization field, this raises the obvious question: Will AIs be able to create expert-level charts without any human intervention, and, if so, when might that happen?
That’s hard to say, of course, but what seems almost certain at this point is that the process of creating a chart is going to change dramatically in the very near future. Already, AI users can describe a chart using simple, plain-language prompts, and get an image of that chart in seconds without having to use the graphical user interfaces (GUIs) of data visualization products like Tableau Desktop or Microsoft Excel. How good are the resulting charts? Well, in my opinion, they’re currently pretty hit-or-miss and often require corrections or enhancements by a human with data visualization expertise before being shown to an audience. Given how quickly AI is advancing, though, how long might that remain the case?
I think writing computer code provides a potentially informative model here since current AIs are much more advanced when it comes to generating code compared with generating charts. The GPT-4 AI was released about a week ago as I write this, and it can produce astonishingly good code based on plain-language prompts. Does this mean that people no longer need to know how to code? Well, that doesn’t seem to be the case, at this point anyway. For now, people with coding expertise are still needed for a few reasons:
A human coder still needs to decide what code is needed in a given situation, and what that code should do. AIs can decide what code is needed for common applications with similar examples in their training data, such as simple games or content management systems, but they have trouble with more complex, novel, or unique applications, such as custom enterprise software applications. As far as I can tell, someone without any coding expertise would struggle to formulate prompts that would result in usable code for anything but simple, common applications.
AIs often make mistakes that must be identified and corrected by expert coders before the code can run without throwing errors or introducing problems like security vulnerabilities or unintended application behaviors.
This means that, for now anyway, humans with coding expertise are still needed to guide and supervise coding AIs. Those coders will be a lot more productive (and so potentially less numerous), but still necessary. A similar consensus seems to have emerged in recent months around car-driving AIs: During the last decade or so, many people assumed that humans would no longer need to know how to drive because car-driving AIs would exceed human driving abilities in all situations. In recent months, however, it’s started to look more like humans will still need to know how to drive since car-driving AIs are unlikely to perform reliably in a wide variety of situations for the foreseeable future. Yes, drivers will be more productive since they can rely on AI for simpler tasks like highway driving in good conditions, but they’ll still need to know how to drive so that they can correct or take over for the AI in more unusual or complex situations.
Data visualization might follow a similar path. It seems almost certain at this point that human chart-makers will become a lot more productive, because they’ll be able to simply describe a chart in plain language and get that chart within seconds. In many cases (but not all), this will be faster than using the GUI of a data visualization software product to create a chart, and learning how to use an AI to create charts will be a lot quicker and easier than learning how to use data visualization software.
Even if they’re using an AI, however, chart makers still need data visualization expertise to decide what charts are needed in a given situation, and to supervise the AI by correcting any data visualization, reasoning, or perceptual mistakes that it might make. A human with data visualization expertise might also need to prompt the AI to make design choices that the AI might have trouble making on its own, such as deciding to visually highlight part of a chart, adding callouts with key insights, or bringing in comparison/reference values from an external source.
If this is how things play out, it would mean that people will still need data visualization skills, but the way in which they’ll use those skills will change drastically. Instead of using those skills to make charts “by hand” using the GUI of a data visualization software application, they’ll use those skills to guide and supervise chart-making AIs, just as human coders use their coding expertise to guide and supervise coding AIs.
Now, a chart-making AI might not offer enough control or flexibility for some users, particularly those who create highly customized charts such as scientific charts, data art, specialized business dashboards, or novel chart types. Those users will likely still need to use data visualization GUIs or code libraries such as ggplot or D3.js, but they represent only a small minority of chart creators. I suspect that a good chart-making AI will meet the needs of most people who create charts.
I’m probably over-estimating its importance, but my upcoming Practical Charts book might accelerate the transition from using GUIs to using AIs to make charts. The book contains chart design guidelines that are more concrete and specific than other books, which is exactly the kind of training data that would help a chart-making AI become more competent. On the one hand, it’s frustrating to think that I might have spent the last several years writing training data for AIs (and also to not be able to block AIs from including it in their training data—unless legislation or policies change). On the other hand, I recognize that AIs that include my book in their training data may allow millions of people to make better charts. This is already happening with AI-generated computer code, which often contains expert-level techniques and practices that were distilled from code in the AI’s training data that was written by expert coders, and that many coders who use the AI wouldn’t think of on their own. It’s also happening with car-driving AIs, which can allow human drivers to perform better by, for example, slamming on the brakes to avoid a frontal collision faster than a human ever could.
Now, this situation could change, of course. Between the late 90s and early 2010s, for example, the best chess players in the world were “centaur” or “hybrid” teams consisting of a human grandmaster using a chess-playing AI to assist them. Such teams could easily beat the best AI-only players. That changed, however, when chess engines like AlphaZero came out a few years ago, which were so good that pairing a human grandmaster with them made them worse, not better. The question, then, is whether data visualization is more like chess, or more like car-driving? Only time will tell, but it feels more like car-driving to me at the moment, i.e., like something that will require expert human supervision for the foreseeable future.
Take all of this with a boulder of salt, of course, since this is pure speculation based on the information that’s available at the moment. Some of the challenges that I’ve described could turn out to be much easier or much harder than expected for AIs to overcome, and things could be very different a few years from now. Or next Tuesday.
Agree? Disagree? Awesome! Let me know on LinkedIn or Twitter, or in the comments below.
A good chunk of my career has involved running, analyzing and writing about A/B tests (here’s a quick Bayesian overview of A/B testing if you aren’t familiar). Often A/B tests are considered to be the opposite, statistical, end of the data science spectrum from AI and machine learning. However, stepping back a bit, an A/B test just tells you the probability of an outcome (whether variant A is better than variant B) which is not that different than a deep neural network used for classification telling you the probability of a label.
With the rapid advances in Natural Language Processing (NLP) including easy access to pre-trained models from Hugging Face and the quite impressive results coming out of OpenAI’s Large Language Models (LLMs) like GPT4, I was curious whether or not you could replace a subject line A/B test with a model built using GPT. It turns out that using GPT-3’s text embeddings, and a very simple classification model, we can create a tool that correctly predicts the winner of an A/B test 87% of the time.
The Problem: Picking the Best Headline
The vast majority of the content on the web today exists to drive traffic to websites, ultimately to increase the probability that users will complete some conversion event. Conversion events can be everything from simply clicking on a link to making the purchase of a product associated with the content. Even here at Count Bayesie, I ideally want people to at least read this content even if I have nothing to sell.
In this post we’ll explore picking an article headline that generates the highest click rate for the post. For example suppose we had these two ideas for headlines (which come from our data set):
A: When NASA Crunched The Numbers, They Uncovered An Incredible Story No One Could See.
B: This Stunning NASA Video Totally Changed What I Think About That Sky Up There.
Which of these two headlines do you think is better? A/B tests are designed to answer this question in a statistical manner: both headlines will be displayed randomly to users for a short time, and then we use statistics to determine which headline is more likely the better one.
While many consider a randomized controlled experiment (i.e. an A/B test) to be the gold standard for answering the question “which headline is better?”, there are a range of drawbacks to running them. The biggest one I’ve found professionally is that marketers hate waiting for results! In addition, they don’t want to have to expend some of the eyeballs that view the content on the experiment itself. If one variant is much worse, but it took you thousands of users to realize that, then you’ve wasted a potentially large amount of valuable conversions.
This is why it would be cool if AI could predict the winner of an A/B test so that we don’t have to run them!
The Data
The biggest challenge with any machine learning problem is getting the data! While many, many companies run A/B tests frequently, very few publish the results of this (or even revisit their own data internally).
32,487 experiments might not seem like all that much, but each experiment is not just a comparison between two headlines but often among many. Here is an example of the data from a single experiment:
A single experiment involves the comparison of multiple variants.
We want to transform this dataset into rows where each row represents a comparison between a single pair of A and B variants. Using the combinations function from the itertools package in Python makes this very easy to calculate. Here’s an example of using this function to create all possible comparisons among 4 variants:
> from itertools import combinations
> for pair in combinations(["A","B","C","D"], 2):
>     print(pair)
('A', 'B')
('A', 'C')
('A', 'D')
('B', 'C')
('B', 'D')
('C', 'D')
After this transformation (plus some clean up) we have 104,604 examples in our train set and 10,196 examples in our test set out of the original 32,487 experiments.
This leads us to a very interesting problem regarding this data: what exactly are our labels and how do we split the data up into train and test?
The tricky part of labels and train/test split
It’s worth recalling that the entire reason we run an A/B test in the first place is that we don’t know which variant is better. The reason statistics is so important in this process is that we are never (or at least rarely) 100% certain of the result. At best, if we view these experiments in a Bayesian way, we only end up with the probability that A is better than B.
In most classification problems we assume binary labels, but when we transform our data the labels look like this:
If we represent our labels honestly, they are probabilistic labels, which makes our problem a bit different than usual.
As we’ll see in a bit, there’s a very simple way we can use logistic regression to learn from uncertain labels in training.
For our test set, however, I really want to get a sense of how this model might perform on clear-cut cases. After all, if the difference between two headlines is negligible, neither this model nor an A/B test would help us choose. What I do care about is that if an A/B test can detect a difference, then our model does as well.
To ensure that our test set could be labeled accurately, we only chose pairs where the result had a high degree of certainty (i.e. the probability was very close to 0 or very close to 1). To make sure there was no data leakage, all of the titles that appear in the test set were removed from the training set.
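To give a sense of what that filtering might look like, here is a small sketch; the exact cutoffs and the pairs_df/test_df names are my own placeholders, since the post only says the probabilities were very close to 0 or 1:

# pairs_df: one row per (headline_a, headline_b) comparison with its P(A >= B)
confident = (pairs_df["p_a_gte_b"] >= 0.99) | (pairs_df["p_a_gte_b"] <= 0.01)
test_df = pairs_df[confident]

# Avoid leakage: drop any training pair that shares a headline with the test set
test_titles = set(test_df["headline_a"]) | set(test_df["headline_b"])
train_df = pairs_df[~confident]
train_df = train_df[~(train_df["headline_a"].isin(test_titles) |
                      train_df["headline_b"].isin(test_titles))]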
The Model
Modeling this problem is both relatively simple and quite different from most other classification models, which is a major reason I was so fascinated by this project. The simplicity of this model is intentional, even though it’s not hard to imagine modifications that could lead to major improvements. The reason for the simplicity is that I’m primarily interested in testing the effectiveness of the language models rather than of this problem-specific model.
Here is a diagram of the basic structure of our model.
The basic flow of our model; the key insight is computing the difference of the vector representations.
We’ll walk through each section of the model process touching on quite a few interesting things going on in what is otherwise a pretty simple model to understand.
The heart of this model is embeddings: how we’re going to transform our text into a vector of numeric values so that we can represent headlines mathematically for our model. We are going to use three different approaches, and the choice of embedding will be the only way each model differs. Our approaches are:
Traditional Bag-of-Words (typically not considered an “embedding”)
DistilBERT embeddings from Hugging Face’s 🤗 Transformers library
GPT-3 embeddings from OpenAI’s API
The choice among these techniques is the only difference between the three A/B test prediction models we’re going to build. Let’s step through the basic construction of each:
Bag-of-Words
A “bag of words” vector representation treats each headline merely as a collection of words drawn from the vocabulary of the training set. Each word (or, in this case, also each two-word sequence called a bi-gram) present in the headline will result in a value of 1 in the vector representing the headline; every other term in the vocabulary not present in the headline will be a 0. Our text can easily be transformed this way with SKLearn as follows:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def build_bow_vectorizer(df):
    # Fit the vocabulary on all headlines, both A and B variants
    corpus = np.concatenate([df['headline_a'].values,
                             df['headline_b'].values])
    vectorizer = CountVectorizer(ngram_range=(1, 2),  # unigrams and bi-grams
                                 binary=True,         # 1 if present, 0 if absent
                                 max_df=0.6,
                                 min_df=0.005)
    vectorizer.fit(corpus)
    return vectorizer
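Once fitted, the vectorizer can be applied to both headline columns to get the 0/1 vectors that feed the rest of the pipeline. A quick usage sketch, assuming train_df is the dataframe of training pairs:

vectorizer = build_bow_vectorizer(train_df)
bow_a = vectorizer.transform(train_df["headline_a"])  # sparse 0/1 matrices,
bow_b = vectorizer.transform(train_df["headline_b"])  # one row per headline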
Typically when we refer to an “embedding” we aren’t considering just any vector representation, but specifically one that is the output of the last layer of a neural network trained to model language (often for some other task). Technically our BoW model would not be considered a true embedding.
🤗 Transformers library and Distilbert
Hugging Face’s Transformers library is a powerful tool that allows us to use pre-trained language models to create word embeddings. This is very important because it allows us to leverage the power of large language models trained on an enormous corpus of text to make our headline representations very information rich. Using an existing model to build a task-specific model is referred to as transfer learning and is a major revolution in what is possible with machine learning.
What Hugging Face allows us to do is run our text through an existing neural network (specifically a Transformer) and retrieve the activations of the last hidden state in the model, then use these for our embeddings. The process is a bit more involved than our BoW encoding, but here is an example function for extracting the hidden state (adapted from Natural Language Processing with Transformers):
def extract_hidden_states(batch):
# Place Model inputs on the GPU
inputs_a = {k:v.to(device) for k, v in batch.items()
if k in ['input_ids_a', 'attention_mask_a']}
inputs_a['input_ids'] = inputs_a.pop('input_ids_a')
inputs_a['attention_mask'] = inputs_a.pop('attention_mask_a')
inputs_b = {k:v.to(device) for k, v in batch.items()
if k in ['input_ids_b', 'attention_mask_b']}
inputs_b['input_ids'] = inputs_b.pop('input_ids_b')
inputs_b['attention_mask'] = inputs_b.pop('attention_mask_b')
# Extract last hidden states
with torch.no_grad():
last_hidden_state_a = model(**inputs_a).last_hidden_state
last_hidden_state_b = model(**inputs_b).last_hidden_state
return {"hidden_state_a": last_hidden_state_a[:,0].cpu().numpy(),
"hidden_state_b": last_hidden_state_b[:,0].cpu().numpy()
}
The specific model we’re using is a version of the Distilbert transformer, which is a very powerful language model, but not nearly as large and powerful as GPT-3.
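For completeness, here is a rough sketch of how that extractor might be wired up with a Hugging Face Dataset built from the training dataframe. The checkpoint name, batch size, and tokenize helper are my assumptions, not taken from the original code:

import torch
from datasets import Dataset
from transformers import AutoTokenizer, AutoModel

model_ckpt = "distilbert-base-uncased"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModel.from_pretrained(model_ckpt).to(device)

def tokenize(batch):
    # Tokenize both headlines separately, keeping the _a/_b suffixes
    # that extract_hidden_states() expects
    tok_a = tokenizer(batch["headline_a"], padding=True, truncation=True)
    tok_b = tokenizer(batch["headline_b"], padding=True, truncation=True)
    return {"input_ids_a": tok_a["input_ids"],
            "attention_mask_a": tok_a["attention_mask"],
            "input_ids_b": tok_b["input_ids"],
            "attention_mask_b": tok_b["attention_mask"]}

ds = Dataset.from_pandas(train_df)
ds = ds.map(tokenize, batched=True, batch_size=None)  # pad the whole set at once
ds.set_format("torch", columns=["input_ids_a", "attention_mask_a",
                                "input_ids_b", "attention_mask_b"])
ds = ds.map(extract_hidden_states, batched=True, batch_size=64)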
GPT-3 using OpenAI’s API
Our last set of embeddings comes from OpenAI’s GPT-3 using their API to get the embeddings. GPT-3 is a remarkably powerful transformer that has been in the news so much it's hard to imagine one has not already heard too much about it! Not only is the model powerful, but the API is remarkably simple to use in Python. Here is an example of some code fetching embeddings for two headlines:
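Something along these lines will do it; this is a minimal sketch assuming the pre-1.0 openai Python package and the text-embedding-ada-002 embeddings endpoint, since the exact embedding model isn’t specified here:

import openai

openai.api_key = "sk-..."  # your OpenAI API key

def gpt3_embedding(text, model="text-embedding-ada-002"):
    # Fetch a single embedding vector for a piece of text
    response = openai.Embedding.create(input=[text], model=model)
    return response["data"][0]["embedding"]

emb_a = gpt3_embedding("When NASA Crunched The Numbers, They Uncovered "
                       "An Incredible Story No One Could See.")
emb_b = gpt3_embedding("This Stunning NASA Video Totally Changed What "
                       "I Think About That Sky Up There.")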
The catch, of course, for all this power and ease of use is that it’s not free. However, my total bill for running this model and some other experiments ended up being under a dollar! Making sure that I was caching and saving my results to avoid being billed twice for the same task did add a bit to the code complexity, but of the three embedding solutions this was still the easiest to implement.
It is worth pointing out that we’re not prompting GPT-3 with questions about our headlines but using embeddings that are derived from it. This is an important use case for these powerful models that I currently haven’t seen discussed too much in the vast floods of articles on the topic.
Modeling the difference between two headlines
Now that we have a way to represent all of our headlines as vectors we have a new modeling problem: How are we going to represent the difference between these two headlines?
We could concatenate them and let the model worry about this, but my goal here is to understand the impact of the embeddings alone, not to worry about a more sophisticated model. Instead we can solve this the way that many models handle comparisons: use the literal difference between the two vectors.
By subtracting the vector representing headline B from the vector representing headline A, we get a new vector representing how these headlines differ, and we use that as the final vector representation for our model.
To understand how this works consider this simplified example:
Here we have headlines 0 and 1 represented by a very simple vector consisting of three features: the word count, whether or not the headline contains emojis and whether or not the headline ends with an exclamation mark. Now let’s see what the result of subtracting these vectors is:
headline 0 does not have emojis and headline 1 does
headline 0 and headline 1 either both don’t or both do end in exclamation marks.
In this case a model might learn that emojis are good, so headline 0 would be penalized because it does not have emojis.
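In code, the differencing step for this toy example would look something like the following (the numbers are made up to match the description above):

import numpy as np

# Hypothetical 3-feature vectors: [word count, has emoji, ends with "!"]
headline_0 = np.array([9, 0, 1])
headline_1 = np.array([9, 1, 1])

diff = headline_0 - headline_1
print(diff)  # [ 0 -1  0]: only the emoji feature differs, and headline 0 lacks it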
Of course our representations are much more complex, however the intuition behind modeling the difference remains the same.
Our classifier: Logistic regression as regression
Despite some fairly noteworthy educators making the claim that “logistic regression is a model for classification, not regression”, the model we’ll end up using demonstrates both that logistic regression quite literally is regression and that the distinction between “classification” and “regression” is fairly arbitrary.
Thinking about our problem, it seems perfectly well suited for logistic regression; after all, we just want to predict the probability of a binary outcome. However, if we try this in SKLearn we run into an interesting problem:
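Fitting scikit-learn’s LogisticRegression directly on our probabilistic labels fails with something like this (a quick sketch; the exact wording of the error varies by version):

from sklearn.linear_model import LogisticRegression

# y_train holds probabilities such as 0.73, not hard 0/1 class labels
LogisticRegression().fit(X_train, y_train)
# ValueError: Unknown label type: 'continuous'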
SKLearn shares the same erroneous assumption that many others in the machine learning community have: that somehow logistic regression can only predict binary outcomes. There is, however, a reason for this restriction. When we explored logistic regression on this blog, we discussed how logistic regression can be viewed as a mapping of Bayes’ Theorem onto the standard linear model. We focused on the model as this:
$$O(H|D) = \frac{P(D|H)}{P(D|\bar{H})}O(H)$$
Which can be understood in terms of a linear model and the logit function as:
$$\text{logit}(y) = x\beta_1 + \beta_0$$
However this does not work in practice most of the time precisely because we are regressing on values of exactly 1.0 and 0.0, for which the logit function is undefined. So instead we use a formula based on an alternate (and much more common) view of logistic regression:
$$y = \text{logistic}(x\beta_1 + \beta_0)$$
By understanding the nature of Logistic regression as regression we can very easily implement a variation of Logistic regression that does work for our data using logit and LinearRegression:
from scipy.special import logit
from sklearn.linear_model import LinearRegression

y_train = train_df["p_a_gte_b"].values

# We are going to perform Linear regression
# on a y value transformed with the logit
target_train = logit(y_train)
base_model = LinearRegression().fit(X_train, target_train)
We just have to remember that our model will be outputting responses in terms of log-odds so we’ll need to transform them back to probabilities manually using the logistic function.
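In code, that back-transformation is just a call to the logistic function (scipy calls it expit); X_test here is assumed to hold the embedding-difference vectors for the held-out pairs:

from scipy.special import expit  # the logistic function

# The linear model predicts log-odds; expit maps them back to P(A beats B)
test_probs = expit(base_model.predict(X_test))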
Results
Finally we can see how each of these different models performed! What’s interesting about this case is that our clever use of linear regression as logistic regression combined with the way we split up our test and train sets means we’ll have different ways to measure model performance depending on the data set used.
Model Performance
We transformed our probabilities in the training set into log odds and then ran them through a standard linear regression model. Because of this we can just use Mean Squared Error to compare model performance on the training set.
Mean Square Error on Train dataset
Smaller is better
While we can see a clear improvement for each progressively more powerful model, it is very difficult to have any meaningful interpretation of these results. We can’t look at common classification metrics such as accuracy and ROC AUC since we don’t know the true labels for the train data set.
For the test set we can look at these scores since the test set only consists of examples where we are highly confident in the results of the experiment. We’ll start by looking at the ROC AUC, which allows us to view the strength of our model without having to pick a particular cutoff for choosing one class or another. For those unfamiliar a 0.5 score is performance on par with random guessing and a score of 1.0 represents perfect classification.
ROC AUC - Test set
Higher is better
Here we can start seeing that these models are surprisingly good. Even the simplest model, the Bag of Words, has an ROC AUC of around 0.8, which means it is fairly good at predicting which headline will win an A/B test.
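For reference, these test-set scores boil down to something like the following sketch, reusing test_probs from above and treating “A was the winner” as the positive class (the y_test construction is my assumption):

from sklearn.metrics import accuracy_score, roc_auc_score

# Confident test labels: 1 if variant A was (almost certainly) the winner
y_test = (test_df["p_a_gte_b"] > 0.5).astype(int)

print("ROC AUC :", roc_auc_score(y_test, test_probs))
print("Accuracy:", accuracy_score(y_test, test_probs > 0.5))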
This brings up a point that I have found myself making repeatedly throughout my career in data science and machine learning: If a simple model cannot do well at solving a problem, it is extremely unlikely that a more complex model will magically perform much better.
There is a mistaken belief that if a simple model does poorly, the solution must be to add complexity. In modeling, complexity should be considered a penalty, and only pursued if simple models show some promise. As an example, many people believe that a simple model like logistic regression could not possibly do well on an image-recognition problem like MNIST, when in fact a simple logistic model will score around 90% accuracy on the MNIST dataset.
When I first saw the BoW model doing well, I was already optimistic for GPT-3, which turns out to be frankly fantastic in terms of ROC AUC. But now let’s look at what really matters in practice: accuracy!
Accuracy on Test set
These results are quite remarkable! It is worth noting that our test set essentially represents the easiest cases in which to determine the winning variant (since we’re most certain about the label when two variants are clearly different); however, it’s still impressive that our GPT-3 model is able to correctly predict the winner of an A/B test in 87% of these cases.
While impressive, it’s also important to consider that when we run an A/B test, “statistical significance” generally means being 95% sure that the difference isn’t zero (or, if we approach this as Bayesians, 95% sure one variant is superior), and these specific A/B tests were much more certain of their results than the model is on average.
Our best model still seems quite useful. Another way to explore this is to see how well calibrated our model’s probabilities are.
Probability calibrations
My diehard Bayesian readers might be a bit offended by my next question about these models, but I do want to know: "If you say you're 80% confident, are you correct about 80% of the time?" While this is a particularly Frequentist interpretation of the model's output probabilities, it does have a practical application. It's quite possible for a model to have high accuracy but have all its predictions very close to 0.5, which makes it hard for us to know whether it's more sure about any particular prediction.
To answer this question I’ve plotted out the average accuracy for intervals of 0.05 probability. Here’s the result we get for our GPT3 model:
We want to see a "V" shape in our results because a 0.1 probability of winning reflects the same confidence as 0.9
Notice that the ideal pattern here is a "V" shape. That's because being 10% sure that A is the winner is the same as being 90% sure that B is the winner. Our maximum state of uncertainty is 0.5.
As we can see, our GPT model is a bit under-confident in its claims. That is, when the model is roughly 80% sure that A will win, it turns out to be correct in calling A the winner closer to 95% of the time.
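The underlying calculation is simple; a sketch of it (again with the hypothetical p_hat and y_test from earlier) looks like this:

import numpy as np

# Bucket predictions into 0.05-wide probability bins and compute the accuracy
# of the model's calls within each bin; a well-calibrated model traces a "V".
bins = np.arange(0.0, 1.0001, 0.05)
bin_ids = np.digitize(p_hat, bins) - 1
calls = (p_hat > 0.5).astype(int)

for b in range(len(bins) - 1):
    mask = bin_ids == b
    if mask.any():
        acc = (calls[mask] == np.asarray(y_test)[mask]).mean()
        print(f"{bins[b]:.2f}-{bins[b + 1]:.2f}: accuracy {acc:.2f} (n={mask.sum()})")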
Demo: Choosing the headline for this post
While I’m not quite sure I’m ready to recommend using LLMs instead of running proper A/B tests, there are plenty of cases where one might want to run an A/B test but realistically cannot. This post is a great example! I don’t really have the resources (or the interest) to run a proper A/B test on the titles of this post… so I figured I would give my model a shot!
My original plan for this post’s title was “Can GPT make A/B Testing Obsolete?”. I thought this sounded maybe a bit “click-baity”, so I compared it with the current title. Here’s the basic code for running an A/B test with the model:
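In outline, it looks something like the sketch below; the embedding model, the difference-of-embeddings feature, and the reuse of the fitted linear model are my assumptions rather than the exact code (this uses the pre-1.0 openai client syntax):

import numpy as np
import openai

def embed(text):
    # Fetch a GPT-3 embedding for a single headline (assumed embedding model)
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def ab_test(headline_a, headline_b):
    # Estimate P(headline_a beats headline_b) using the fitted linear model
    features = (embed(headline_a) - embed(headline_b)).reshape(1, -1)
    return logistic(model.predict(features))  # log-odds -> probability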
And the result of running this comparison turned out not so great for my original title:
> ab_test("Can GPT make A/B Testing Obsolete?",
> "Replacing an A/B test with GPT")
array([0.2086494])
While not “statistically significant”, I also know that this model tends to underestimate its confidence, so I went with the headline you see above.
Interestingly enough, when I fed the first part of this article to GPT-3 itself and told it to come up with its own headline, I got a remarkably similar one: "Replacing A/B Testing with AI: Can GPT-4 Predict the Best Headline?"
Running this through the model it seemed not to have a preference:
> ab_test("Replacing an A/B test with GPT",
> "Replacing A/B Testing with AI: Can GPT-4 Predict the Best Headline?")
array([0.50322519])
Maybe you don’t really want to replace all of your A/B testing with LLMs, but, at least for this case, it was a good substitute!
Conclusion: Is an A/B test different than a Model?
I wouldn’t be surprised if many people who have spent much of their careers running A/B tests would read this headline and immediately think it was click-bait nonsense. This experiment, for me at least, does raise an interesting philosophical (and practical) question: what is the difference between a model that tells you there’s a 95% chance A is greater than B and an A/B test that tells you there’s a 95% chance A is greater than B? Especially if the former takes only milliseconds to run and the latter anywhere from hours to days. If your model is historically correct 95% of the time when it says 95%, how is this different from an A/B test making the same claim based on observed information?
Even though I’m very skeptical of big claims around true “AI” in these models, there’s no doubt that they do represent an unbelievable amount of information about the way we use language on the web. It’s not absurd to consider that GPT-3 (and beyond) has a valid understanding of how to represent these headlines in high-dimensional space, such that a linear model is able to accurately predict how well they will perform on real humans.
The really fascinating proposition to me is that if we treat probabilities from a model the same as probabilities from an experiment, and the model takes only milliseconds to run, it dramatically changes the space where A/B testing is possible. A generative model like GPT-4 could iterate on thousands of headlines, while a model like ours could run massive simulated “experiments” to find the best of the best.
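To make that concrete, a toy sketch of such a simulated “experiment” (built on the hypothetical ab_test above) might look like this:

def pick_best(headlines):
    # Round-robin every candidate against every other and keep the one with
    # the highest average predicted win probability.
    scores = []
    for a in headlines:
        wins = [ab_test(a, b).item() for b in headlines if b != a]
        scores.append((sum(wins) / len(wins), a))
    return max(scores)

candidates = [
    "Replacing an A/B test with GPT",
    "Can GPT make A/B Testing Obsolete?",
    "Replacing A/B Testing with AI: Can GPT-4 Predict the Best Headline?",
]
print(pick_best(candidates))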
While this may sound amazing to marketers and data scientists, it’s worth considering the effect this would have on the content we consume. Even if this did work, do you want to live in a world where every piece of content you consume is perfectly optimized to encourage you to consume it?
Former B.C. premier John Horgan says his party consciously tried to attract candidates “who looked like the mainstream” in their constituencies. And he suggested in a March 19 speech at the Surrey Arts Centre that these efforts by the B.C. NDP resulted in the legislature and cabinet better reflecting the population at large than ever before.
As examples, Horgan pointed to people of South Asian ancestry serving as speaker of the legislature, education minister, and attorney general. The former premier also mentioned that there are three Indigenous MLAs in the legislature. And he noted that half the NDP caucus is female.
“That is British Columbia,” Horgan said. “Now, when British Columbians look at their elected institutions, they see themselves reflected back. Not perfectly. Not completely. But better than it has ever been in our history.”
The former premier made these remarks after Spice Radio named him as its ninth annual Hands Against Racism Award winner. Nature’s Path Foods founders Ratana and Arran Stephens presented the award.
In her speech, Ratana Stephens highlighted that the Horgan-led government was the first in Canada to enshrine the UN Declaration on the Rights of Indigenous Peoples in legislation. In addition, the NDP government led by Horgan passed the Anti-Racism Data Act, created a parliamentary secretary for anti-racism initiatives, and restored the B.C. Human Rights Commission.
Horgan cites importance of education
When Horgan reached the podium, he said that over the past 25 years, the stories of British Columbia have become the stories of the world.
“My wife Ellie is with me—the daughter of Dutch immigrants who fled Europe, from Nazi tyranny, following the world war,” Horgan said.
He acknowledged that he heard their stories about the hardships that they faced coming to a new land. The former premier then noted that he’s been to gurdwaras and heard these stories from Sikhs. Horgan said that some of them fought in the British army to protect the empire and king. Yet they later witnessed hockey broadcaster Don Cherry declaring on television that they should “go home.”
“These inconsistencies speak directly to the challenges we have in our education system—and why it’s critically important,” Horgan said.
In his speech, the former premier mentioned that his mother instilled in him the importance of standing up to bullies. Then he said that he thinks racism is actually rooted in bullying.
In addition, Horgan talked about Canadian multiculturalism, which he was fiercely proud of as a young man and father. But after becoming an elected representative in 2005, he developed a more nuanced view.
“I got to know with my friends—Raj Chouhan, Harry Bains, Jinny Sims, Jagrup Brar, and many, many others—that the communities that they came from and the communities that they represented were not like the community that I represented,” Horgan said. “They were not like Peace River North. They were not like the Kootenays.”
Spice Radio broadcaster Gurpreet Singh photographed John Horgan joining in a Bollywood-style dance after receiving his award.
Make B.C. the “beacon on the hill”
Then he shared a story about visiting a school in 2006 with Chouhan in the Edmonds area of Burnaby. There, Horgan recalled, 130 languages were spoken.
“That is fantastic—and a daunting challenge for educators,” Horgan said. “But what a gift to the people of British Columbia to have that representation of diversity from every corner of the globe.”
He also talked about Holi, a.k.a. the Festival of Colours, celebrated by Hindus every spring. The Hands Against Racism Award event coincides with Holi because this holiday promotes togetherness. On Holi, people joyfully sprinkle coloured water on friends and strangers and set aside old grudges.
“When you’re covered in colour and flourish and excitement, what is underneath that is just humanity,” Horgan commented. “And that humanity may speak a different language. It may practise a different faith.”
Then he spoke about other faiths. He stated that the best way to learn about the impacts of Kristallnacht is to visit a synagogue. The ex-premier added that visiting a Nowruz celebration on the North Shore can shed light on challenges faced by Muslims. He also mentioned that he spent time with his in-laws in church.
He pointed out that the United States once had a reputation as the place people around the world looked to for hope before some of its governments were taken over by “white supremacists and fascists”. Horgan even referred to some of the words on the Statue of Liberty, which carries the phrase “Give me your tired, your poor. Your huddled masses yearning to breathe free.”
“As British Columbians—five million souls—we have an opportunity to make this the beacon on the hill,” Horgan stated.
Horgan listened to Indigenous leaders
He also demonstrated modesty in connection with advancing the rights of First Nations people in B.C. The former premier gave credit to their leaders.
“The Declaration on the Rights of Indigenous Peoples was not something that came out of my head,” Horgan emphasized. “It was something that I learned from Grand Chief Stewart Phillip and Ed John and Terry Teegee and Sophie Pierre and a legion of Indigenous people that I had the good fortune of coming upon as a local elected representative. And after 12 miserable and interminable years in Opposition, I got an opportunity to do something as premier.”
Similarly, Horgan praised NDP MLAs such as Chouhan, Bains, and Brar for keeping up the pressure for restoring the B.C. Human Rights Commission. The ex-premier acknowledged, however, that it “has its failings”, though he didn’t elaborate on that comment in his speech.
In the 11 days prior to Horgan’s speech, Pancouver published two articles raising serious concerns about Human Rights Commissioner Kasari Govender’s recent report on hate during the pandemic. In this 478-page document, the commissioner did not include any serious criticism of mainstream Canadian media for any role it may have played in promoting anti-Asian sentiments from 2015 to 2020.
Meanwhile, Horgan said that many acts of overt racism were recorded by cellphones during the pandemic. But he also stated that he shudders to think of how many countless other acts of discrimination and violence were not captured in this way.
“The good news,” Horgan stated, “is that when we saw these images on Global Television and on other media outlets, almost to a person, British Columbians stood together and said, ‘Not here. Not now. Not ever again.’ ”
Vancouver author Alan Twigg hopes to change that. The former publisher of BC Bookworld spent a year creating a website, RudolfVrba.com. It provides a comprehensive examination of the former UBC pharmacology professor’s immense contribution to humanity.
Twigg calls Vrba the “greatest whistleblower of the 20th century”.
Vrba died on March 27, 2006, after contracting cancer. He was 81 years old.
On April 7, 1944, Vrba escaped from the Auschwitz-Birkenau death camp with fellow inmate Alfréd Wetzler. They described what they had seen in the Vrba-Wetzler report.
According to the website, this led the Allies to bomb Budapest. As a result, Hungary’s leaders halted mass deportations of Jews.
Many years later, British historian Sir Martin Gilbert maintained that Vrba’s revelations had saved more than 100,000 lives.
In 1985, Vrba told his story in French filmmaker Claude Lanzmann’s Shoah documentary.
The whistleblower was born in Czechoslovakia as Walter Rosenberg. In 1942, he was arrested while fleeing the country’s crackdown on Jews.
On June 30 of that year, Vrba arrived at Auschwitz before being transferred to Birkenau six months later.
At Auschwitz, new arrivals were divided into two groups. A minority became slave labourers whereas the others were sent to Birkenau, a.k.a. Auschwitz II. There, the Nazis gassed them to death. At least 1.1 million people died in the camp.
This video tells the story of Rudolf Vrba and Alfréd Wetzler’s escape.
Vrba felt duty to tell the world
According to a paper by Israeli academic Ruth Linn, Vrba was ordered to collect valuables from inmates gassed to death.
“From this vantage point Vrba was able to assess how little the deportees knew about Auschwitz when they entered the camp,” Linn wrote. “Their luggage contained clothing for all seasons and basic utensils, a clear sign of their naive preparation for a new life in the ‘resettlement’ area in the east.”
In 1943, Vrba became registrar for the quarantine camp for men.
“In January 1944, I got information that the biggest extermination action was being planned,” Vrba told Lanzmann. “I started making plans to escape.”
The documentary maker then asked how he knew Hungarian Jews were being targeted.
“I was stationed near the main gate of the camp,” Vrba replied. “I noticed several chaps with tripods. There was a lot of work being done in three shifts. The SS who came to collect money from us dropped word that Hungarian salami was coming, along with other good things.”
Vrba noticed that the Nazis had done a great deal of work to prepare for the arrival of a million people.
“I did not believe that Hungary would permit this kind of deportation until an SS man left a newspaper for me to read, in exchange for $100 I supposedly found and gave to him,” he continued. “The paper said that the Hungarian government was toppled on March 19, 1944. (Miklos) Horthy was out and (Ferenc) Szalazi and another radical fascist replaced him. I realized I had to get out of there and tell the world.”
Former BC Bookworld publisher Alan Twigg describes Rudolf Vrba as the most significant author in B.C. history.
Website includes several sections
In addition, Vrba discussed his wartime experiences in his memoir, I Escaped From Auschwitz. Throughout the rest of his life, he harshly criticized certain Jews in Hungary for not alerting the community to the reality of Auschwitz.
Twigg highlighted this in his 2022 book, Out of Hiding: Holocaust Literature of British Columbia. Furthermore, Twigg insisted that this explains why Vrba never received a proper memorial in Yad Vashem in Jerusalem.
Meanwhile, the RudolfVrba.com website includes extensive sections entitled “context”, “America & Hitler”, “Auschwitz”, “escapes”, “the report”, and “interviews”.
Twigg included another category simply entitled “Ruth Linn”. Linn, a scholar at the University of Haifa, interviewed Vrba several times leading up to his death. Moreover, she repeatedly tried to highlight his heroism to fellow Israelis.
“I read a lot about the Holocaust but I never, ever, read about Vrba in Israeli textbooks in the Hebrew language,” Linn told Pat Johnson of the Jewish Independent in 2006. “Am I the only Israeli who fell asleep in class when we studied this in the Holocaust? Or maybe we never studied it.”
Vrba was one of only five Jews who escaped Auschwitz-Birkenau. Many years later, his accomplishments earned high praise from Twigg.
“The most significant author of British Columbia is not Pauline Johnson, Douglas Coupland, William Gibson, David Suzuki or Alice Munro,” Twigg wrote on the BC Bookworld website. ”It’s Prisoner #44070, aka Rudolf Vrba, one of the most significant chroniclers of the Holocaust.”
This is probably the best thing OpenAI could have done to make sure ChatGPT had access to a treasure trove of curated scientific data.
Might be overkill for fixing its issues with simple addition and multiplication, but will definitely limit factual hallucination and make many kinds of answers more trustworthy.
As It Happens: LGBTQ rights activist fears for her fellow Ugandans but won't be silenced
Kasha Jacqueline Nabagesera says Uganda's harsh new anti-homosexuality law wields an even more perilous threat to her fellow members of the LGBTQ community than existing penalties because it targets an individual's very existence, along with their actions.
"The most dangerous is that even identity has been criminalized," the longtime LGBTQ rights activist told As It Happens host Nil Köksal. But Nabagesera refuses to deny her lesbian identity, including in the East African country.
"Some of us are on record, on national TV … and there's nothing we can change about that because we are proud of who we are."
Uganda's parliament passed the bill on Tuesday with a near-unanimous majority, making it a crime to identify as LGBTQ, and handing authorities broad powers to target gay Ugandans who already face legal discrimination and mob violence.
It includes steep sentences of life in prison for having same-sex relations, and the death sentence for "aggravated homosexuality," which is described in the law as same-sex relations with people under the age of 18 or when the individual is HIV positive.
Nabagesera, who is currently in Worcester, Mass., receiving medical care, says people in her home country are panicking.
"Especially the young ones who are already on buses crossing the border because they're very worried, because they're even telling parents to report their own children. They're telling landlords to stop renting their houses to people perceived to be LGBT," she said.
'Organized crime'
Nabagesera founded Freedom and Roam Uganda 20 years ago, one of the main organizations for lesbian, bisexual and transgender women rights in the country.
She has won international awards for her activism, works for the Kuchu Times Media Group and publishes Bombastic magazine, an LGBTQ-focused publication she says showcases the "lived realities" of people in her community — and aims to change the mindset of Ugandans.
But after the bill passed, she tweeted that it appeared to be "organized crime" by the politicians, whom she says are trying to distract Ugandans from "ongoing problems" the country is facing by talking about risks to their children.
"The parliament was so full that even some members were standing. And that has never happened," she said.
"It's like they all organized themselves to come and disrupt the country, because right now no one is talking about all the problems the country is facing. Everyone is talking about homosexuality."
Same-sex relations were already illegal in Uganda, but supporters of the new law said it is needed to punish a broader array of LGBTQ activities that they say threaten traditional values in the conservative and religious nation.
During debate on the proposed legislation, lawmaker David Bahati told MPs: "Our creator God is happy [about] what is happening ... I support the bill to protect the future of our children."
All but two of the government's 389 members of parliament voted in favour of the bill.
Criminalizing intent
Nabagesera says another troubling aspect of the legislation concerns the issue of intent.
"The mere fact that the bill also talks about the intent — intention to commit a crime — this is going to be abused by so many people," she said. The wording is so vague that it could, for example, mean a woman risks being targeted for simply appearing to show interest in another woman, she added.
"I could be actually criminalized for that, especially if I start writing love letters to this person expressing my attraction."
She also worries that some will use the law to falsely accuse others of being gay.
"This is just the beginning," said Nabagesera. "Unfortunately, these members of parliament forget that this bill is not only about LGBT people.… This bill talks about reporting people suspected of being homosexual."
'We shall get through this'
Watching it all unfold when she is thousands of kilometres away has been difficult for Nabagesera.
"I feel terrible not being down on [the] ground with my community because I've inspired so many members of the community to stand out and be proud," she said. "Many have joined the movement because of the inspiration I've given them."
Nabagesera says she has been the target of online hate, attacked in public back home and received death threats. She worries about the people she loves.
"Many people say that if they cannot get to me, they will go after after my loved ones," she said. "Over the years, I've learned how to protect myself, but I can't protect all my loved ones, so I worry more about them than myself."
But she is going back to Uganda.
"The movement needs to go on. We have to devise means on how we can continue to operate, continue to provide services to the community in a safer way," she said. "We are stronger when we are together. So I have to go back home to continue the fight that I started."
She does still believe the fight to change minds can be won, though likely not any time soon. She says anti-gay groups are given a wide platform to promote their beliefs in Uganda, while LGBTQ rights activists have to create their own means to promote awareness.
She hopes other countries will help in the fight, too, by putting pressure on Ugandan president Yoweri Museveni not to sign the bill into law. But if he does, she will still fight.
"What is the use of me starting something and I stop halfway? So I'll go back and be with my community and we shall get through this. We've been here before. And so there's no reason why we shouldn't continue to fight."
A study from the University of Alberta and Stanford University has found oil and gas activity likely induced one of the province's largest documented earthquakes that took place last November.
Alberta Energy Regulator's initial investigation found natural tectonic activity
The Peace River region experienced a series of three earthquakes that took place on Nov. 30. Scientists determined one of the earthquakes had a magnitude of 5.6, which is considered a moderate event but is among the largest ever recorded in Alberta.
The study, which was released on Thursday, stands opposed to the Alberta Energy Regulator's own initial findings which indicated natural causes.
The study took data relating to seismicity in the region dating back to 1985 and looked at how the earthquake occurred in a region of in situ bitumen recovery.
The process enables the recovery of oil that is buried too deep to mine and can only be reached by drilling wells to extract an extra-heavy type of oil called bitumen, according to the AER's website.
When bitumen cannot flow to the well, heat is added or fluids injected in order to reduce its viscosity to make it easier to recover.
The study found 3.4 centimetres of ground deformation caused by a reverse fault slip of approximately 29 centimetres, possibly related to Peace River Arch faulting.
A fault is a fracture or zone of fractures between two blocks of rock.
"The fault slip is largely within the crystalline basement, with a small portion extending into basal sediments," the study said.
"Nearby injection operations dispose petroleum-related wastewater in these basal sediments."
The result of these operations likely induced the earthquake because of the pressure applied by injection, according to the study.
Study implications
In a news release last November, the AER said its investigation's "initial findings point to natural tectonic activity."
The basis for this was a lack of hydraulic fracturing activity, lack of nearby fluid disposal, and the depth of the earthquake.
The work was conducted by the Alberta Geological Survey, a branch of the AER made up of geoscientists.
"Scientists at the AGS use a network of approximately 50 seismic stations to measure and research seismic activity across Alberta," the AER's release said.
"We utilize this information to form an accurate picture of earthquake locations, magnitudes and discern the nature of these events."
The study acknowledges that the seismic history of the region lacks the "location resolution" needed to precisely define fault structures. However, the study cites recent records, which define three separate areas of clustered earthquakes, two of which coincide with ongoing in situ bitumen recovery.
"The assessment of this earthquake as induced will likely have implications for future energy development, management, and regulation — including carbon capture and blue hydrogen," the study said.
The study's scientists said the Peace River case should provoke greater action when it comes to CO2 development.
"Long-term operations [including subsurface injection] have the potential to induce earthquakes — often with significant lag times for seismic response. Second, the importance of high sensitivity measurement both before and during the lifetime of the project: here, the lack of precise and low-magnitude seismic data hampered the resolvability of induced events and their properties."
This is "curated list of prompts, tools, and resources regarding the GPT-4 language model," including open source examples and community demos, and product integrations. Related: Bryan Alexander shares a conversation with Ruben Puentedura to explore the implications of large language model artificial intelligence; he adds some other interesting items, including authoring a 300-page text in one day with chatGPT, Microsoft's introduction to Copilot, and the Socratic Tutor system. I also ran across a Marcus Aurelius AI, which is a neat concept. Finally, the usual suspects from the music industry form a coalition to make sure publishers' copyrights aren't violated (but be careful - if new rules are created that apply to computers, including limits to fair use, they will definitely be extended to humans - imagine being told you can't record because your voice sounds too similar to someone else's).
I like this interactive visualization because it shows the advantage of working with connected data rather than raw lists or quantified data. Wordle is a game where you guess a five-letter word; you get six guesses, and it tells you when you've found correct letters. It's always a question: where do you start, when you have no information about the word? Where next, when you have one or two letters? Suggestions abound, usually based on how frequently a letter appears. A graph analysis enables a better suggestion. But playing with the graph, you come to see you don't want to select letters that leave too many possibilities (which is what you get if you're vowel-heavy) or too few (which is what you get if you select infrequently used consonants). Anyhow, have fun with it.
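If you want to play with the frequency-based heuristic mentioned above, a tiny sketch (the word list here is just a placeholder) might look like this:

from collections import Counter

# Placeholder word list; a real solver would load the full Wordle dictionary.
words = ["slate", "crane", "adieu", "stone", "raise", "pious"]

# Count each letter once per word so duplicated letters aren't over-rewarded.
letter_counts = Counter(ch for w in words for ch in set(w))

def score(word):
    return sum(letter_counts[ch] for ch in set(word))

print(sorted(words, key=score, reverse=True))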
On the surface, Vancouver writer-director Sophie Jarvis’s film Until Branches Bend is not about Indigenous issues. To a casual observer, it revolves around a pregnant white cannery worker named Robin, played by Grace Glowicki, who’s looking after her sister.
Early on, a controversy erupts in the unnamed fruit-growing region when Robin discovers what appears to be an invasive species inside a peach. The town is called Montague and it relies on the orchards for its economic livelihood.
But there is a deeper Indigenous undercurrent seeping through the storyline. In a Zoom call with Pancouver, Jarvis says that the original title was Invasions because this theme runs through her cinematic psychological drama in three key respects.
“In one way, of course, it’s obvious with the bug,” Jarvis says. “Another way is more personal, like with Robin’s unwanted pregnancy. And the other way is with the colonial history of Canada and North America.”
Her script weaves these threads together, sometimes in subtle ways. Musqueam actor Quelemia Sparrow plays Isabelle, the wife of Robin’s boss, Dennis (Lochlyn Munro). Dennis is reluctant to shut down operations over a single bug just as peaches are being harvested.
“Everyone is actually trying to do what they think is best,” Jarvis explains. “Dennis’s job is to protect the growers and to protect the season, which is short because peaches are really only in season for a few weeks every summer. So, anything that throws that off has really bad effects.”
Meanwhile, Robin is coping with the pregnancy and taking care of her sister, Laney (Alexandra Roberts), while trying to be a responsible citizen.
Writer-director Sophie Jarvis says that her script for Until Branches Bend refers to three “invasions”.
Until Branches Bend earns CSA nominations
All of this is happening in the midst of a furor over an insect.
“We worked with a bug called the darkling beetle, which is fairly common,” the writer-director reveals. “But we also had a concept artist design the markings we put on the shell. Then our visual-effects team would place those on the bug itself in post [production].”
Jarvis points out that the visual-effects team took tremendous care in this area. According to her, this bug needed to be memorable because it was a critically important component of the film.
In this regard, their efforts paid off. Landon Bootsma, Dexter Davey, Ashley Hampton, Milton Muller, and Dmitry Vinnik were all nominated for a 2023 Canadian Screen Award for achievement in visual effects.
Sisters Laney (Alexandra Roberts) and Robin (Grace Glowicki) survey their front yard in Until Branches Bend.
In addition, Jarvis scored Until Branches Bend a second Canadian Screen Award nomination for her original screenplay. The winners will be announced on CBC TV on April 16.
The Swiss-Canadian co-production also won the award for best B.C. film at the 2022 Vancouver International Film Festival. Until Branches Bend will be screened at the VIFF Centre in Vancouver, starting on Friday (March 24). Jarvis will speak at screenings on the first two days of its run.
This is Jarvis’s first feature film as a director, but she’s no newcomer to the industry. She also worked as a production designer on The Body Remembers When the World Broke Open, which addressed urban Indigenous issues.
Set in Vancouver, the multiple-award-winning film was co-directed and co-written by Elle-Máijá Tailfeathers and Kathleen Hepburn.
They returned as story editors on Until Branches Bend, which was produced by The Body Remembers producer Tyler Hagan. He’s of Michif and Canadian ancestry.
Watch a clip from Until Branches Bend.
Indigenizing the script
For The Body Remembers, Fort Nelson and Saulteau First Nations in Treaty 8 territory member Sarah Robinson offered insights into Indigenous issues to the cast and crew.
“She did a wonderful, amazing job,” Hagan says. “It was tailored to the project and tailored to the subject matter.”
Sadly, Robinson died at the age of 35 in 2021 after a battle with cancer.
For Until Branches Bend, Hagan and Jarvis initially sought input from IndigenEYEZ program director Kelly Terbasket, who grew up on Similkameen territory.
“We got to a certain point with the script where it’s a story about this woman, Robin, and her sister at its core,” Hagan says. “But we knew that this world doesn’t exist without Indigenous people. And to just ignore that element of the community in the story would have been such a huge erasure.”
So they set about incorporating Indigenous aspects, adding another element of tension to the story.
“We were lucky to have Elle-Máijá Tailfeathers and Kathleen Hepburn both as story editors on the script,” Hagan emphasizes.
Isabelle (Quelemia Sparrow) and her son Zach (Cole Sparrow-Crawford) inject Indigenous perspectives.
Jarvis praises Terbasket for offering valuable perspective on the impact of monoculture-based agricultural practices.
“A lot of my questions for her were the same as they were for everyone: what would be the impact on you or your community if this were to actually happen?” Jarvis recalls. “And the one thing that Kelly said that really stuck with me is the impact wouldn’t be too huge.”
That’s because she felt that an invasion of the bug, as described in the script, might actually give the land a chance to return to its original state before the settlers arrived.
“My understanding, from what Kelly told me, is there’s not a lot of people in her community who actually benefit directly from the industry,” Jarvis adds.
Producer Tyler Hagan advocates for educating cast and crews about Indigenous cultures.
Investing in the cast and crew
Hagan echoes that point, declaring that Terbasket was quite blunt about how little her people received from monoculture-based agriculture in the region.
After seeking Terbasket’s advice, the filmmakers then invited Skayu Louis from the Syilx Okanagan Nation to speak to the cast and crew. Louis was joined by his uncle, Cewelna, to share their perspectives on the impact of monoculture.
Hagan acknowledges that there’s a cost to doing workshops like this just as a film is about to be made. But he adds that there are also tremendous benefits.
“One of the ideas behind doing this kind of stuff is trying to position ourselves and the work we do in the film industry as being less extractive,” he states.
In addition, Hagan says, this is an investment in the crew and cast’s general knowledge. And that can pay dividends down the road on future film projects.
“The most impressive thing with doing these [workshops] was that the crew and the cast—and the people that are in attendance—engage in it,” Hagan says. “It’s not just like a ‘Sit down and eat your vegetables’ thing. Everybody is asking questions.”
Until Branches Bend will be screened at the VIFF Centre in Vancouver, starting on Friday (March 24). For more information and to buy tickets, visit the website.
Radio-Canada has learned that the Trudeau government has reached a deal with the United States on irregular migration which will allow Ottawa to close the Roxham Road irregular crossing at the Canada-U.S. border.
Deal would see Canada accept 15,000 migrants from Western Hemisphere
The deal would see Canada announce openings for 15,000 migrants from the Western Hemisphere to apply to enter the country legally, a senior source with knowledge of the agreement told CBC News.
Progress on a new border deal between the two countries accelerated in the run-up to U.S. President Joe Biden's first official visit to Canada, the source added. Biden arrives in Ottawa Thursday and departs late on Friday.
The Safe Third Country Agreement between Canada and the United States, which came into force in 2004, effectively prevents Canadian law enforcement from turning back asylum seekers who enter Canada irregularly from the United States.
The status of the agreement became a lingering source of tension between Ottawa and Washington because of an influx of asylum seekers entering Canada through Roxham Road, which is on the Quebec-New York border about 50 km south of Montreal.
The Safe Third Country Agreement prevents people from claiming asylum in Canada if they enter Canada from the U.S. at an official land border crossing. The idea is that asylum seekers should make their claims in the first safe country they can reach.
Asylum seekers can still have their appeals heard in Canada if they enter at an unofficial crossing, such as Roxham.
"I think it's good news. I know you'd like to know more. You will be knowing more quite soon from my colleagues and the prime minister," Health Minister Jean-Yves Duclos told reporters Thursday.
Opposition parties and the Quebec government have pressured the Trudeau government on Roxham Road. Both Conservative Leader Pierre Poilievre and Quebec Premier François Legault have called for the irregular border crossing's closure following a spike in asylum seekers this year. Legault said the number of asylum seekers has put a strain on his province's social services.
Nearly two-thirds of asylum claims in Canada in 2022 were made in Quebec, according to government data. Almost 40,000 asylum seekers crossed the border from Roxham Road that year. The migrants were primarily from Haiti, Turkey, Colombia, Chile, Pakistan and Venezuela.
Trudeau said last month that the only way to shut down Roxham is to renegotiate the Safe Third Country Agreement. But United States Ambassador David Cohen said that would do little to address irregular migration.
Sources told Radio-Canada that Foreign Affairs Minister Mélanie Joly and Immigration, Refugees and Citizenship Minister Sean Fraser have worked behind the scenes with their American counterparts in recent weeks to reach a deal.
New York City has paid for bus tickets to send asylum seekers through to Plattsburgh, New York, which is close to Roxham Road.
Speaking to reporters on Thursday, NDP Leader Jagmeet Singh said he'd still like to see the agreement suspended. He said he doesn't know the details of the Roxham deal.
"If the solution solves the problem, it's something we're open to," he said. "Our preferred option is still to suspend the agreement, but we're open to other solutions."
Corrections
This story has been updated from a previous version which said New York state paid for bus tickets for asylum seekers. In fact, it was New York City.
Mar 23, 2023 2:17 PM ET
With files from Alex Panetta, Chris Rands, Christian Paas-Lang and Darren Major
ChatGPT is getting a plugins mechanism, which will allow developers to provide extra capabilities to ChatGPT, like looking up restaurants on OpenTable or fetching data from APIs. This feels like the kind of feature that could obsolete - or launch - a thousand startups. It also makes ChatGPT much more interesting as a general purpose tool, as opposed to something that only works as an interface to a language model.
"The ChatGPT Retrieval Plugin repository provides a flexible solution for semantic search and retrieval of personal or organizational documents using natural language queries." How many existing startups were building this I wonder?
I was talking with someone today who reflected that Open Badges effectively lost its theoretical underpinnings when Mozilla handed over stewardship of the standard in 2017. I think this is true, which is why Open Recognition is a much more interesting space to be now than the monoculture than is microcredentialing.
This post outlines some of what I think has been lost in terms of the extremely fertile period of time from 2011 until 2016. For those not aware, I was involved in the Mozilla community around badges from mid-2011, went to work on the Mozilla Open Badges team, became their Web Literacy Lead, and then have consulted on badge-related projects since leaving Mozilla in 2015.
Here’s my list of how microcredentialing has taken us away from the original vision, especially compared to the Open Badges white paper and subsequent work by Mozilla, HASTAC, and the Connected Learning Alliance:
Centralisation — the Open Badges ecosystem was designed to be a decentralised system based on ‘backpacks’. A zeal for control has led to centralised control over the issuing, validation, and management of badges. This has had a negative impact on the diversity of issuers and issuing platforms.
Limited interoperability — despite interoperability being baked into the Open Badges standard, some of the more corporate and large-scale badge-issuing platforms have gone out of their way to reduce the value of this feature.
Narrow focus on job skills — Open Badges were supposed to recognise that learning happens everywhere, particularly outside traditional formal education settings. However, microcredentials are earned almost exclusively for skills which may be useful in the world of work, and issued by institutions and companies. This undervalues the importance of informal learning experiences and overlooks other important aspects of personal and professional growth.
Commercialisation — some organisations have taken a profit-driven approach to microcredentials, emphasising ‘brand value’ and revenue generation over accessibility and openness. This not only limits the availability of free or low-cost learning opportunities, but undermines the original intent of the Open Badges system.
Barrier to entry — the original vision was that anyone could create, issue, and share badges. However, some microcredential platforms have established barriers to entry, such as fees or partnership requirements, which can make it difficult for smaller organisations or individual educators to participate in the ecosystem.
The people remaining loyal to the original, revolutionary vision of badges are all talking about Open Recognition these days. Microcredentials are ‘dead metaphors’ which lack power in terms of human agency, and of individuals and communities being able to tell their own stories.
I’m looking forward to continuing to fight the good fight.
Joseph Planta@Planta
I had about 200 tabs open on my phone’s browser. I hadn’t closed any for about a year. I finally closed them all a… twitter.com/i/web/status/1…
You can quickly respond to text messages you receive with a brief reply using Google’s Smart Reply feature in the Messages app. While it’s convenient to reply with “Yes” or “Sounds good” to some messages, what if you want to send a text that requires a proper response? The big G might be working on […]