Shared posts

28 Mar 05:59

Introducing Mozilla.ai: Investing in trustworthy AI

by Mark Surman

We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007. 

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives. 

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

Mozilla has been asking these questions about AI for a while now — sketching out a vision for trustworthy AI, mobilizing our community to document what’s broken and investing in startups that are trying to create more responsible AI. 

We’ve learned that this coming wave of AI (and also the last one) has tremendous potential to enrich people’s lives. But it will only do so if we design the technology very differently — if we put human agency and the interests of users at the core, and if we prioritize transparency and accountability. The AI inflection point that we’re in right now offers a real opportunity to build technology with different values, new incentives and a better ownership model. 

The good news: We’ve met literally thousands of founders, engineers, scientists, designers, artists and activists who are taking this approach to AI. Smart, dedicated people are building open-source AI technology, testing out new approaches to auditing and figuring out how to build ‘trust’ into AI in the real world. 

The less good news: We don’t see this happening amongst the big tech and cloud companies with the most power and influence. Meanwhile, these incumbents continue to consolidate their control over the market.

In short: Some people are starting to do things differently, but the most significant work (and investment) is happening the same old way. We want to change this. 

So, today we are announcing Mozilla.ai: A startup — and a community — that will build a trustworthy and independent open-source AI ecosystem. Mozilla will make an initial $30M investment in the company.

The vision for Mozilla.ai is to make it easy to develop trustworthy AI products. We will build things and hire / collaborate with people that share our vision: AI that has agency, accountability, transparency and openness at its core. Mozilla.ai will be a space outside big tech and academia for like-minded founders, developers, scientists, product managers and builders to gather. We believe that this group of people, working collectively, can turn the tide to create an independent, decentralized and trustworthy AI ecosystem — a real counterweight to the status quo. 

Mozilla.ai’s initial focus? Tools that make generative AI safer and more transparent. And, people-centric recommendation systems that don’t misinform or undermine our well-being. We’ll share more on these — and what we’re building — in the coming months. 

This new company will be led by Managing Director Moez Draief. Moez has spent over a decade working on the practical applications of cutting-edge AI as an academic at Imperial College and LSE, and as a chief scientist in industry. Harvard's Karim Lakhani, Credo's Navrina Singh and I will serve as the initial Board of Mozilla.ai.

Later this year, we will announce additional initiatives, partners and events where people can get involved. If you are interested in collaborating, reach out at hello@mozilla.ai.


28 Mar 02:23

Mom, I want what the other kids eat for lunch – a review of Riceboy Sleeps

by Becky Tu

Mom, I want what the other kids eat for lunch.

What seems like a simple request feels devastating as it unfolds on the screen, watching the mother’s face fall even as she agrees. I am instantly thrown back to my eight-year-old self, asking the same thing of my own mom. I wonder now if she wore the same expression as So-young, the main character of the award-winning family drama Riceboy Sleeps, did.

Immigrating to Canada is a new start, a hopeful beginning, a step towards a better life. It is also leaving behind a life, giving up the past, and starting over. In the film, there aren’t many ties keeping the mother and son in Korea, so in a way the move might have been a breath of fresh air. But starting from scratch in a new country is no easy feat.

The film is inspired by the director’s life and relationship with his own mother. Anthony Shim spent part of his childhood in Coquitlam, where a Korean community has slowly and surely settled. We can see painfully real memories of the immigrant experience as the son, Dong-hyun (played by Dohyun Noel Hwang and Ethan Hwang), adjusts to a new language and a new environment where his classmates and neighbours look nothing like him.

I also immigrated to Canada at a young age, and these scenes moved me deeply. I remember sitting down with my parents, a list of english names before me, and having to make a choice. On the one hand, it allowed me to make friends at school. On the other hand, it wasn't until years later that I realized I'd packed away a part of myself and left it to collect dust. How do you reclaim something you readily gave up?

It is the first step to assimilation: attaching your identity to strange syllables and vowels so teachers can pronounce them more easily.

Assimilation does not always feel violent. Often, Shim muses, when Asian immigrants land, they want to acclimate. It is a way of protecting themselves, knowing they will always be perceived as foreigners first, with their black hair and brown eyes. They might not be able to prevent the spill of their accented english, but they can make sure their child’s speech matches the voices on the radio.

“They think, ‘I will never be a Canadian parent, I will never fit in,’” Shim says in an interview with Pancouver, on the topic of first-generation immigrants. “‘But maybe my child can.’”

This film is very much a tribute to his mother, and to all the immigrant parents who made the hard choice of uprooting their entire lives for their children. As children, we never really understand the sacrifices our parents make. We see little Dong-hyun running through the carpeted hall of their home while So-young vacuums around him; we see teenage Dong-hyun sprawled asleep in his bed as So-young picks up after him. Paired with Choi Seung-yoon's commanding presence on screen, it is easy to see this mother's strength, determination, and courage.

What stands out to me, as someone who identifies as an Asian-Canadian woman, is how So-young refuses to be pigeonholed into stereotypes of submissiveness. She refuses to back down to the white men at her workplace when one of them slaps her ass—when no one stands up for her, she stands up for herself. She is not afraid to show her anger and disgust with Dong-hyun’s principal, who might take advantage of her awkward english and expect her to quietly acquiesce. Perhaps because she is well aware that, as a single mother and a new immigrant, there is no one else in their corner.

Her willingness to speak up for herself and her son reflects the director’s intent in portraying Korean women with justice and honesty. “I wanted to be part of changing what North American people’s view of Korean women was,” says Shim.

Some people might not enjoy immigrant stories because they’re full of hardships and trauma. Riceboy Sleeps gracefully balances the heartache of things lost in translation and the gentle triumph of persevering.

Take how the film shows the struggle of holding onto cultural identity and adapting to life as a “Canadian”. When we meet Dong-hyun as a teenager, he is putting in coloured contacts to go with his dyed blond hair. Yet he converses with his mother in english and Korean, switching between the two with a fluidity that is second nature.

The duality of language, food, and customs is a conflict immigrants confront every day. You can set down roots in this new place, but can you forgive yourself for the changes you made to do so?

Even as So-young is asking her son to take on an english name, she tells him it’s okay if he doesn’t choose one; if he wants to remain Dong-hyun, that’s okay with her. Seeing this moment in the trailer, even before I had the context of the film, gave me a lump in my throat. My non-english name still remains my legal name, but even my family rarely calls me that anymore. So-young calls her son by both names, the english name outside, the Korean name at home. Perhaps this is her way of holding onto the familiar—and why she never chose an english name for herself.

To me, that’s where this film really succeeds: the warmth and love that persists between this mother and son amidst the difficult and confusing battle of becoming Canadian.

Riceboy Sleeps was shot mainly in Greater Vancouver, with some scenes in South Korea.

I think I never thanked my mom for making me those sandwiches. Because it was never about the sandwich—not even when, in later years, my non-Asian classmates flaunted their fried rice and dumplings while I struggled to swallow around Wonder bread and ham stuck together by melted Kraft singles. It was about a dream for a better future, a choice made in hopes of more choices to be made available.

Riceboy Sleeps is not a grand story about overcoming racism. It’s not a sobfest—even if I personally cried the whole way through it. In the director’s own words, it’s a story about “a mother and son searching for a home.” And sometimes, home is not a physical place, and identity is not meant to be packed into a lunchbox.

 

Here’s one piece of dialogue from the film that I’ll leave you with:

The son asks, “Should we go back?”
To which the mother replies, “We can’t.”

 

Riceboy Sleeps opens in theatres across Canada today, March 17.


28 Mar 02:23

Anoushka Ratnarajah becomes lead curator, Devyani Saltzman and Jarrett Martineau named guest curators at Indian Summer fest

by Charlie Smith

There was some sadness around Vancouver last year when Indian Summer Festival’s founders announced their looming departure. The husband-and-wife team of Sirish Rao and Laura Byspalko oversaw the event for 12 years, leaving a profound imprint on the city’s cultural life.

That’s because in partnership with Simon Fraser University they created a festival that aimed high. Rao and Byspalko maintained this standard even during the pandemic years.

Under their leadership, Indian Summer presented a veritable banquet of visual arts, music, literature, theatre, and current affairs. This was often accomplished through imaginative collaborations.

Well, fans of the festival can now take comfort in the new faces at the helm. The Indian Summer Arts Society has appointed former Out on Screen artistic director Anoushka Ratnarajah as lead curator.

From 2017 to 2022, Ratnarajah distinguished herself with her daring curation of the Vancouver Queer Film Festival. She shattered stereotypes in a range of areas. Examples include featuring a series of transmasculine films and offering brave movies about the real lives of sex workers.

“I’m looking forward to contributing to the unexpected collaborations, challenging conversations, and magical connections the Indian Summer Festival is known for,” Ratnarajah said in a news release announcing her appointment.

Ratnarajah grew up in Ladner in the traditional territory of the Tsawwassen people. She’s the daughter of a Tamil father born in Malaysia and an English mother.

Writer Devyani Saltzman has extensive curatorial experience. Photo by Liat Aharoni, courtesy of OJA.

Veteran curator joins Indian Summer for 2023

She’ll be joined by guest curator Devyani Saltzman for the 2023 festival. The Oxford-educated Saltzman was previously director of public programming at the Art Gallery of Ontario and director of literary arts at the Banff Centre for Arts and Creativity.

Saltzman is the daughter of film directors Paul Saltzman and Deepa Mehta. In addition, Saltzman is the author of Shooting Water: A Memoir of Second Chances, Family and Filmmaking.

Subsequent to the news release announcing the appointments of Ratnarajah and Saltzman, the society revealed that Indigenous media and cultural leader Jarrett Martineau will also be a guest curator for this year’s event.

As host and producer of Reclaimed on CBC Radio, Jarrett Martineau has provided a platform to Indigenous musicians from across Canada. Photo by CBC.

Familiar face returns

Martineau is the host, producer, and creator of Reclaimed, which is CBC Radio’s first program devoted to Indigenous music. In addition, he’s the curator in residence at UBC’s Chan Centre for the Performing Arts.

In the past, Martineau has curated culturally diverse music events at the Indian Summer Festival.

Martineau is nêhiyaw (Plains Cree) and Denesųłiné from Frog Lake Cree Nation in Alberta. He has a PhD in Indigenous governance from the University of Victoria.

Meanwhile, the society has announced that two longstanding employees will share the leadership role. Pawan Deol is executive director of cultural programming; Laura June Albert is executive managing director.

The society’s newsletter describes Deol as the “curator of curators”.

“Our beloved festival will continue to feature some of the finest artists and visionaries across Canada, South Asia, and beyond,” Deol said. “And I’m so pleased to be co-leading as we continue our vision of presenting South Asian arts and ideas.”

Pawan Deol and Laura June Albert will share responsibility for leading the Indian Summer Arts Society. Photo by The Portrait Sessions Photography.

From Bollywood stars to a Booker Prize winner

Over the years, the Indian Summer Arts Society has featured Bollywood stars such as Tabu, Shabana Azmi, Anupam Kher, and Sharmila Tagore. In addition, the society has introduced Vancouverites to some amazing writers, including Booker Prize winner Arundhati Roy, two-time Giller Prize winner M.G. Vassanji, Jnanpith Award winner Amitav Ghosh, and Life of Pi author Yann Martel.

Social activist Vandana Shiva, Grammy-winning singer-songwriter Arooj Aftab, and Little Mosque on the Prairie creator Zarqa Nawaz are among many others who've been part of an Indian Summer Festival.

“We built a home twelve years ago and called it Indian Summer,” Rao said. “Nothing makes us happier than to know the home we have loved so dearly will be cared for by those that have helped to build it. Our thanks to our dedicated board of directors, who have demonstrated an exceptional level of care in this time of transition, and to our brilliant colleagues who take on the task of building into the future.”

Follow Pancouver editor Charlie Smith on Twitter @charliesmithvcr. Follow Pancouver on Twitter @PancouverMedia.


27 Mar 01:07

Ghost bike installation in Oakville

by jnyyz

On Thursday, March 9, a cyclist was struck and killed on Upper Middle Rd in Oakville. Today was the ghost bike ride in his memory.

The riders gathered at the Oakville GO station.

Photo: Geoffrey Bercarich

Councillor Tom Adams was there to express his condolences. He has been one of the local councillors engaged with Vision Zero and the active transportation file. He said that a series of environmental assessments are planned as part of a process to roll out safer street designs, including along Upper Middle Rd.

David Shellnut’s bike had a stripped skewer, and the fellow to the right kindly made a quick trip home to get him a spare.

Joey describes the route shortly before we depart.

Doing a quick ride by for the CBC cameraman.

Departing the GO station parking lot.

Approaching the underpass at the end of Lyons Lane.

Geoffrey through the P gates.

Riding north on Sixth Line.

Just after the turn onto Upper Middle Rd. This is not a bike-friendly street.

Approaching the crash site.

There were some candles, fruit offerings, and flowers already here.

Chaining up the bike.

Unfurling the banner.

A minute of silence for the deceased.

Photo: David Shellnut

Geoffrey talks to the CBC.

Mark Anderson from Cycle Hamilton rode all the way out, and was planning to ride home as well.

An eyewitness to the crash happened to walk by. She said that the driver suddenly moved into the curb lane in preparation for a right turn onto Eighth Line, hitting the cyclist from behind.

At Trafalgar and Iroquois Shore Rd on the way back.

On the GO train.

Thanks to everyone who rode with us on a cold, windy day. This was the third ghost bike installation in Oakville by ARC over the past few years, following Helen Xiang in 2020, and Ignacio Viana in 2021.


Deepest condolences to the family and friends of the departed.

Update: CBC video coverage.

24 Mar 02:37

Apple Music in the Windows Store

by Volker Weber

For months I have been waiting for this release, which until now was not available in the German store. Today I stumbled across it by chance.

24 Mar 02:37

Pluralistic: Mass tech worker layoffs and the soft landing (21 Mar 2023)

by Cory Doctorow
mkalus shared this story from Pluralistic: Daily links from Cory Doctorow.


Today's links



A group of firefighters holding a safety net under a building from which a man is falling; he is supine and has his hands behind his head. The sky has a faint, greyscale version of the 'Matrix Waterfall' effect. The building bears a Google logo.

Mass tech worker layoffs and the soft landing (permalink)

As tech giants reach terminal enshittification, hollowed out to the point where they are barely able to keep their end-users or business customers locked in, the capital classes are ready for the final rug-pull, where all the value is transferred from people who make things for a living to people who own things for a living.

"Activist investors" have triggered massive waves of tech layoffs, firing so many tech workers so quickly that it's hard to even come up with an accurate count. The total is somewhere around 280,000 workers:

https://layoffs.fyi/

These layoffs have nothing to do with "trimming the fat" or correcting the hiring excesses of the lockdown. They're a project to transfer value from workers, customers and users to shareholders. Google's layoff of 12,000 workers followed fast on the heels of a gargantuan stock buyback where the company pissed away enough money to pay those 12,000 salaries…for the next 27 years.
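As a back-of-the-envelope check on that claim (a sketch, not from the post itself: the $70B figure is Alphabet's 2022 buyback authorization, and the average fully-loaded cost per worker is an assumption picked for illustration):

    # Rough check: how many years of payroll did the buyback represent?
    buyback = 70_000_000_000   # Alphabet's 2022 buyback authorization (assumed figure)
    workers = 12_000           # size of the layoff
    avg_comp = 215_000         # assumed average fully-loaded cost per worker, in dollars
    years = buyback / (workers * avg_comp)
    print(f"about {years:.0f} years of payroll")  # prints: about 27 years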

The equation is simple: the more companies invest in maintenance, research, development, moderation, anti-fraud, customer service and all the other essential functions of the business, the less money there is to remit to people who do nothing and own everything.

The tech sector has grown and grown since the first days of the PC – which were also the first days of neoliberalism (literally: the Apple ][+ went on sale the same year Ronald Reagan hit the campaign trail). But despite a long-run tight labor market for tech workers, there have been two other periods of mass layoffs – the 2001 dotcom collapse and the Great Financial Crisis of 2008.

Both of those were mass extinction events for startups and the workers who depended on them. The mass dislocations of those times were traumatic, and each one had its own aftermath. The dotcom collapse freed up tons of workers, servers, offices and furniture, and a massive surge in useful, user-centric technologies. The Great Financial Crisis created the gig economy and a series of exploitative, scammy "bro" startups, from cryptocurrency grifts to services like Airbnb, bent on converting the world's housing stock into unlicensed hotel rooms filled with hidden cameras.

Likewise, the post-lockdown layoffs have their own character: as Eira May writes on StackOverflow, many in the vast cohort of laid-off tech workers are finding it relatively easy to find new tech jobs, outside of the tech sector:

https://stackoverflow.blog/2023/03/19/whats-different-about-these-layoffs/

May cites a ZipRecruiter analysis that claims that 80% of laid-off tech workers found tech jobs within 3 months, and that there are 375,000 open tech roles in American firms today (and that figure is growing):

https://www.ziprecruiter.com/blog/laid-off-tech-workers/

There are plenty of tech jobs – just not in tech companies. They're in "energy and climate technology, healthcare, retail, finance, agriculture, and more" – firms with intensely technical needs and no technical staff. Historically, many of these firms would have outsourced their technological back-ends to the Big Tech firms that just destroyed so many jobs to further enrich the richest people on Earth. Now, those companies are hiring ex-Big Tech employees to run their own services.

The Big Tech firms are locked in a race to see who can eat their seed corn the fastest. Spreading tech expertise out of the tech firms is a good thing, on balance. Big Tech's vast profits come from smaller businesses in the real economy who couldn't outbid the tech giants for tech talent – until now.

These mass layoffs speak volumes about the ethos of Silicon Valley. The same investors who rend their garments demanding a bailout for Silicon Valley Bank to "help the everyday workers" are also the loudest voices for mass layoffs and transfers to shareholders. The self-styled "angel investor" who spent the weekend of SVB's collapse all-caps tweeting dire warnings about the impact on "the middle class" and "Main Street" also gleefully DM'ed Elon Musk in the runup to his takeover of Twitter:

Day zero

Sharpen your blades boys 🔪

2 day a week Office requirement = 20% voluntary departures.

https://newsletter.mollywhite.net/p/the-venture-capitalists-dilemma

For many technologists, the allure of digital tools is the possibility of emancipation, a world where we can collaborate to make things without bosses or masters. But for the bosses and masters, automation's allure is the possibility of getting rid of workers, shattering their power, and replacing them with meeker, cheaper, more easily replaced labor.

That means that workers who go from tech firms to firms in the real economy might be getting lucky – escaping the grasp of bosses who dream of a world where technology lets them pit workers against each other in a race to the bottom on wages, benefits and working conditions, and landing with employers who are glad to have them as partners in their drive to escape Big Tech's grasp.


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Talk radio is the province of lying demagogues, bigots and fools https://web.archive.org/web/20030402131244/http://www.boston.com/dailyglobe2/079/oped/Outrageous_times_for_talk_radio+.shtml

#10yrsago Bruce Sterling’s closing SXSW keynote: disruption and destruction https://soundcloud.com/officialsxsw/bruce-sterling-closing-remarks

#10yrsago How the Digital Millennium Copyright Act punishes people with disabilities https://slate.com/technology/2013/03/dmca-copyright-reform-u-s-law-makes-digital-media-inaccessible.html

#10yrsago Supercut of all the alternate endings to the Animaniacs theme https://www.youtube.com/watch?v=HCM0jBrEfPw

#10yrsago EFF blasts plans to build DRM into HTML5 https://www.eff.org/deeplinks/2013/03/defend-open-web-keep-drm-out-w3c-standards

#5yrsago You know who does creepier stuff with your data than Cambridge Analytica? Your ISP https://www.techdirt.com/2018/03/21/if-youre-pissed-about-facebooks-privacy-abuses-you-should-be-four-times-as-angry-broadband-industry/

#5yrsago Police say the Austin bomber killed himself as they closed in on him https://web.archive.org/web/20180321085839/https://www.statesman.com/news/breaking-austin-bombing-suspect-dies-police-close-official-says/KZmUAGvKlNazDr31EzeUzI/

#5yrsago To save the Earth, stack humans in green cities and leave the wilderness for other animals https://www.theguardian.com/cities/2018/mar/20/save-the-planet-half-earth-kim-stanley-robinson

#5yrsago Teen’s devastating bug-report on a “tamper-proof” cryptocurrency wallet shows why companies can’t be left in charge of bad news about their products https://saleemrashid.com/2018/03/20/breaking-ledger-security-model/

#5yrsago In Chicago primaries, a string of defeats for the Democratic establishment at the hands of progressive Democrats https://theintercept.com/2018/03/20/fritz-kaegi-chicago-the-democratic-machine-is-defeated-by-an-insurgent-candidate-battling-municipal-inequality/

#1yrago Brazil's "Remuneration Right" will strengthen Big Tech and Big Media https://pluralistic.net/2022/03/21/jeffersonian-v-hamiltonian/#dinheiro-fala

#1yrago To make Big Tech better, make it smaller https://pluralistic.net/2022/03/21/jeffersonian-v-hamiltonian/#fits-in-a-bathtub



Colophon (permalink)

Today's top sources: Slashdot (https://slashdot.org/).

Currently writing:

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. Yesterday's progress: 776 words (118237 words total). FIRST DRAFT COMPLETE
  • The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. FIRST DRAFT COMPLETE, WAITING FOR EDITORIAL REVIEW

  • A Little Brother short story about DIY insulin PLANNING

  • Vigilant, Little Brother short story about remote invigilation. ON SUBMISSION

  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION

  • Spill, a Little Brother short story about pipeline protests. ON SUBMISSION

Latest podcast: Gig Work is the Opposite of Steampunk https://craphound.com/news/2023/03/19/gig-work-is-the-opposite-of-steampunk/

Upcoming appearances:

Recent appearances:

Latest books:

Upcoming books:

  • Red Team Blues: "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, April 2023
  • The Internet Con: A nonfiction book about interoperability and Big Tech, Verso, September 2023

  • The Lost Cause: a post-Green New Deal eco-topian novel about truth and reconciliation with white nationalist militias, Tor Books, November 2023


This work is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

(Latest Medium column: "Culture War Bullshit Stole Your Broadband: Your internet sucks because telco monopolists kept Gigi Sohn off the FCC" https://doctorow.medium.com/culture-war-bullshit-stole-your-broadband-4ce1ffb16dc5)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

24 Mar 02:36

Vancouver after dark: Survey seeks feedback on jazzing up city's nightlife scene | CBC News

mkalus shared this story:
Here's an idea: Stop tearing down all the venues and "redevelop" them into yet another glass tower for investors to snatch up.


Vancouver's night-time economy is less than world-class and ripe for improvement, says a group of tourist and hospitality organizations.

'The general consensus is that our nightlife is behind the rest of the world,' said survey backer Nate Sabine

A survey looking to gauge people's feelings about Vancouver nightlife has been launched by a group of tourist and hospitality organizations that thinks there is ample untapped potential to jazz things up after dark.

Called Vancouver's Nighttime Economy Public Engagement Survey, it asks people to weigh in on questions like: On a scale of 1 to 5 (with 1 being boring and 5 being awesome), what do you think Vancouver's entertainment and nightlife image is worldwide?

"I guess the general consensus is that our nightlife is behind the rest of the world and behind the rest of what Vancouver is in terms of an international image and realizing its potential," said Nate Sabine, a board member with the Hospitality Vancouver Association and a business manager with Blueprint, a hospitality and entertainment company.

The document defines the nighttime economy as tourism, hospitality, arts, cultural and economic activity that takes place between 5 p.m. and 4 a.m.

"A purposeful nighttime economy is part of what increasingly defines the most successful, livable and competitive global cities," said Royce Chwin, the president of Destination Vancouver.

The survey also touts the idea of creating a job called "night mayor," a kind of ambassador-slash-bureaucrat who would guide the development of new nightlife initiatives. Sabine said the position exists in places like Amsterdam, New York and New Orleans. 

"The night mayor would talk to all the stakeholders on the community side and work on things like capacities, bylaws, use of public space, liquor, licensing — all those sorts of things — and communicate back to the mayor and city hall so that they can make positive and safe changes," he said. 

According to Sabine, there's a connection between Vancouver's "No Fun City" moniker and the current nightlife scene in that decision-makers haven't always been open to trying new things.

Recent changes to liquor bylaws have helped, but the groups behind the survey say much more can be done, like expanding the use of public spaces, parks and the waterfront, and better funding and grant programs.

Expanding Vancouver's nighttime economy would provide benefits far beyond the obvious businesses like nightclubs and restaurants, Sabine noted. 

"People don't think about cab drivers, hotel workers, the people that do makeup for dancers or the person that's doing the sound and lights at a concert ... the smaller mom and pop shops that might get the bulk of their business after hours or late night," he said.

The public engagement survey is open to anyone. The consultation will also use polls, interviews and community meetings to gather feedback. Results will be presented to Vancouver city council in May of this year.

24 Mar 02:36

After 25 years: Amazon shuts down DPreview

mkalus shared this story from heise online News.

(This article is also available in German)

The camera website DPreview is dead – at least that's what it looks like so far. On the evening of March 21, 2023 (German time), the website published a short message and a farewell video on YouTube. According to the message, the site will not be updated after April 10, 2023; until then, new articles, which the team is already working on, will be published, writes CEO Scott Everett.

He explicitly states that the closure is part of Amazon.com's annual operational planning. Everett links to a statement by Amazon CEO Andy Jassy, which, however, makes no mention of the site's closure. Amazon, like many tech companies, is in the midst of another round of layoffs.

Amazon acquired the website, which was founded in the UK in 1998 as "Digital Photography Review", in 2007, and apparently never understood how to monetize it. As TechCrunch notes, Amazon never managed to successfully integrate the platform into its corporate structure; DPReview is said to have operated largely independently. There was even speculation that Amazon had forgotten that it owned DPReview.

Excellent tests

For photo enthusiasts, this is incomprehensible. After all, despite the general decline in camera sales due to the success of smartphones, DPreview remained one of the best sources for thorough and critical reviews. This includes comparison images, where one can access stills taken with the tested devices at different ISO settings across several camera generations. No other online medium offers such a comprehensive service, and it can't be replicated in magazines or videos - especially since the images can also be downloaded in their original form.

The test images from the testers' home area were nothing short of classic. Although the headquarters moved from London to Seattle in 2010 – close to Amazon's headquarters – the quality of the images of everyday scenes in a port city remained the same. The editorial team and its freelancers did what amateurs and professionals do with new equipment: they went out on their own doorsteps and photographed what they knew well. This showed the new hardware at its best.

Fast but thorough

DPreview gained such an outstanding reputation with manufacturers of cameras, lenses and other accessories that the first reviews of pre-production models often appeared on the same day the models were unveiled. The production models were then subjected to meticulous testing, including the aforementioned stills and data. This takes many hours of work, photography, writing and online preparation – and it costs money.

It seems that Amazon never understood the service DPreview provided to the photo community. The discontinuation also affects the YouTube channel with 410,000 subscribers; its hosts bid farewell with a sarcastically entertaining clip.

The team is thinking of its community, which has been very active in the site's forums, until the very end. Those who have an account with DPreview can use this link to request an archive of their data for download until April 6, 2023. After that, Everett writes, such requests will no longer be honored. No new content will be posted after April 10, 2023. The website, including the forum, will only be available "for a limited time" after that, says Everett, without providing a specific date. (keh)

24 Mar 02:35

How Elon Musk's tweets unleashed a wave of hate

mkalus shared this story from BBC News - Home.

24 Mar 02:35

Calgary pastor accused of hate-motivated crimes at drag storytime has history of violence | CBC News

mkalus shared this story.

A Calgary pastor accused of hate-motivated crimes after an altercation at a library drag event has a criminal history for violent assaults and animal abuse, a judge heard Wednesday.

As part of a contested bail hearing, Crown prosecutor Zailin Lakhoo offered new details on Derek Reimer's criminal history, the library disturbance and a jail cell attack the pastor is accused of committing. 

Reimer, 36, was released on bail Wednesday following his arrest for breaching his previous release conditions after he showed up at another library drag book reading last week.

The pastor was arrested after a Reading with Royalty event on Feb. 25 when he allegedly shouted homophobic and transphobic slurs at children who attended the event.

Aggravated assaults

In 2015, Reimer was convicted in Calgary of causing suffering to a dog. He was given 12 months probation as a sentence.

Before that, Reimer faced two jail sentences in Winnipeg. 

In 2011, he was found guilty of assault causing bodily harm and handed an eight-month jail term plus 18 months probation.

Two years later, Reimer was convicted on two counts of aggravated assault and was sentenced to two years in jail.

A number of other charges in both Alberta and Manitoba have been stayed or withdrawn over the years, including theft, mischief, intimidation and at least two other assaults.

Kids crying during altercation

On Feb. 25, Reimer showed up at the Seton Library for a Reading with Royalty event organized with Calgary Pride. The event was geared toward children who were six to eight years old.

Reimer pushed parents to get to the front of the room, where he shouted homophobic and transphobic slurs, causing some of the children to cry, according to Lakhoo.

The Crown says that, according to witness statements, Reimer bragged about getting three drag events cancelled in the previous eight days and said he planned to return to disrupt future events.

Reimer was arrested days later at his home and charged with mischief and causing a disturbance. 

Assault on cellmate alleged

He was released on bail but refused to sign his conditions ordering him to stay away from LGBTQ2S+ community members and events. So Reimer remained behind bars at the Calgary Remand Centre for several days.

During that time, Reimer was charged with institutional misconduct after attacking his cellmate, according to the Crown, who detailed some of the allegations for the judge.

On March 5, Reimer approached a guard asking for medical attention for a fractured hand, saying he'd punched a wall.

When sheriffs checked on Reimer's cellmate, the man was badly beaten, suffering injuries to his face and head. The victim required glue to treat two lacerations.

Unprovoked attack

Staff at the Remand Centre interviewed both men.

The victim said he woke up to Reimer assaulting him for no reason.

Reimer told the officers he attacked his roommate because he was "stinky and gross."

Police investigated but the victim did not want to pursue criminal charges.

Reimer finally signed his bail conditions on March 7 and was released. One of the bail conditions prohibited Reimer from going within 200 metres of events organized by the LGBTQ community.

He told his followers he signed the conditions only so he could seek medical treatment for his injured hand.

Reimer shows up at drag event

On March 13, Reimer attended a protest at the Signal Hill Library, where he said he would be back on March 15 for another Reading with Royalty event.

Reimer showed up at the event, setting up camp with a megaphone in a nearby parking lot. He was 160 metres from the library door, according to the Crown.

He was arrested by Calgary police and charged with breaching his bail conditions.

In opposing Reimer's re-release Wednesday, Lakhoo read aloud several quotes from videos of Reimer speaking that have been posted online within the past week.

"I can't stop doing what God has called us to do.… I know realistically I will end up back in there," he said of being in jail.

Reimer released

Defence lawyer Ben Allison fought for his client's release, telling the judge that Reimer believed he was prohibited from being within 100 metres of the event, not 200 metres. 

"If he really meant to breach, he'd be at the library door," said Allison.

If convicted of the criminal offences, Reimer likely faces a 30- to 60-day jail sentence. With credit for the time he's already served, Reimer is getting close to having served what such a sentence would require.

Provincial court Judge Susan Pepper agreed to release Reimer but imposed new conditions.

'You say you are a man of God'

Reimer cannot go within 300 metres of Reading with Royalty events or any other events organized by the LGBTQ community.

He cannot intentionally interfere with any person trying to participate in an LGBTQ event. 

And he cannot have contact or communication with any known participant, performer or spectator at a Reading with Royalty event.

The Crown indicated police are considering further charges against Reimer of public incitement of hatred. 

"You say you are a man of God, you say you are religious. I hope that's important to you that you not make people feel hated," said Pepper.

24 Mar 02:35

Saga of Island Rail Corridor highlights B.C.'s difficulty in creating regional transit | CBC News

mkalus shared this story.

In 2011, right after passenger service was shut down on Vancouver Island's Esquimalt & Nanaimo Railway, the province estimated $15 million would be needed to restore the line that connects nearly all of the Island's major population centres across hundreds of kilometres. 

Twelve years later, the price tag is as much as $700 million, and the government is now spending $18 million on consultation to simply figure out if restoring the route is viable. 

"This is not the end of the line for the corridor," said Premier David Eby, explaining why this round of consultations on the future of the Island Rail Corridor, as it's now known, would be different.

"This is just the beginning of the work in a different way, in the way that all of our projects and land use decisions need to work in this province in partnership with First Nations."

While evolving government views around First Nations land claims have been part of the story of the Island Rail Corridor, across the province there are questions over why regional transportation upgrades take so long.

"They're politically fraught decisions and it feels like, from a provincial level, sometimes the most attractive strategy is just to keep studying the issue," said former West Vancouver Coun. Craig Cameron, who sat on the TransLink Mayors' Council.

"Keep the impression in people's minds that you're doing something, without actually having to make any hard decisions."

By bus...

Cameron's frustration stems in part from an inability to get regional transit connecting Squamish and Whistler to Metro Vancouver, despite years of lobbying by local officials and a study completed in 2017.

"Here we are in 2023 and we're no closer to having that transit service," he said, blaming the provincial government and B.C. Transit for not creating new funding mechanisms or an increase to the gas tax. 

"Different things have been discussed behind closed doors, and it seems that every single time the province comes back and says, 'well, sorry, we're not going to give you any other tools.' It's sort of like the Peanuts sketch when they pull the football away." 

Much of the discussion around regional transportation in recent years has centred around the 2018 closure of Greyhound bus lines that connected much of the province.

In the years since, the province has funded a separate service for northern B.C. that serves some but not all of the previous stops. The southern part of the province has a patchwork of different lines that have mostly replaced Greyhound service — some of which are private, some of which have been created by B.C. Transit, and some that are currently suspended due to business problems.

When Eby was asked why transit options connecting different regions were worse than they were in 2017, he bristled. 

"That's simply not right," he said, before listing enhanced transit services in the Fraser Valley and supports given to maintain service during the pandemic. 

"We know that expanding transit takes pressure off of roads in terms of congestion ... and supports our clean economy. That's why we put these kinds of investments forward and we'll have more to say about that, actually, in the days ahead."

...or by train?

But is there a way forward for enhanced passenger train service in B.C. in light of Tuesday's announcement?

Brendan Dawe, a planning consultant who has studied defunct rail lines across the province, is skeptical. 

"The east side of the island was actually the strongest rail corridor in the province ... there's a chain of townsites that are nicely spaced out, that you could envision this as being developed around rail transportation," he said.

"And so, if the Island Corridor doesn't work there, then there's really very little [possibility] in the rest of the province, where everything would have to be built new and there isn't a legacy infrastructure."

Still, he acknowledges that people — including the province after Tuesday's announcement — will continue to keep the option open.

"It certainly captures a certain nostalgia, but it also captures an idea of making a collective good happen, of not being dependent on your car to get around and do things, and I think that's attractive to a lot of people," he said.

"I'm sad to see this continue to be kicked down the road ... But it's very understandable, given the failure of senior governments to do much with this in the last 10 to 40 years."

24 Mar 02:34

Ugandan activist vows to keep speaking out despite new law that bans identifying as LGBTQ | CBC Radio

mkalus shared this story.

As It Happens, 6:37: LGBTQ rights activist fears for her fellow Ugandans but won't be silenced

Kasha Jacqueline Nabagesera says Uganda's harsh new anti-homosexuality law poses an even more perilous threat to her fellow members of the LGBTQ community than existing penalties because it targets an individual's very existence, along with their actions.

"The most dangerous is that even identity has been criminalized," the longtime LGBTQ rights activist told As It Happens host Nil Köksal. But Nabagesera refuses to deny her lesbian identity, including in the East African country.

"Some of us are on record, on national TV … and there's nothing we can change about that because we are proud of who we are."

Uganda's parliament passed the bill on Tuesday with a near-unanimous majority, making it a crime to identify as LGBTQ, and handing authorities broad powers to target gay Ugandans who already face legal discrimination and mob violence. 

It includes steep sentences of life in prison for having same-sex relations, and the death sentence for "aggravated homosexuality," which is described in the law as same-sex relations with people under the age of 18 or when the individual is HIV positive.

Nabagesera, who is currently in Worcester, Mass., receiving medical care, says people in her home country are panicking. 

"Especially the young ones who are already on buses crossing the border because they're very worried, because they're even telling parents to report their own children. They're telling landlords to stop renting their houses to people perceived to be LGBT," she said.

'Organized crime'

Nabagesera founded Freedom and Roam Uganda 20 years ago, one of the main organizations in the country for the rights of lesbian, bisexual and transgender women.

She has won international awards for her activism, works for the Kuchu Times Media Group and publishes Bombastic magazine, an LGBTQ-focused publication she says showcases the "lived realities" of people in her community — and aims to change the mindset of Ugandans.

But after the bill passed, she tweeted that it appeared to be "organized crime" by the politicians, whom she says are trying to distract Ugandans from "ongoing problems" the country is facing by talking about risks to their children.

"The parliament was so full that even some members were standing. And that has never happened," she said. 

"It's like they all organized themselves to come and disrupt the country, because right now no one is talking about all the problems the country is facing. Everyone is talking about homosexuality."

Same-sex relations were already illegal in Uganda, but supporters of the new law said it is needed to punish a broader array of LGBTQ activities that they say threaten traditional values in the conservative and religious nation.

During debate on the proposed legislation, lawmaker David Bahati told MPs: "Our creator God is happy [about] what is happening ... I support the bill to protect the future of our children."

All but two of the government's 389 members of parliament voted in favour of the bill.

Criminalizing intent

Nabagesera says another troubling aspect of the legislation concerns the issue of intent. 

"The mere fact that the bill also talks about the intent — intention to commit a crime — this is going to be abused by so many people," she said. The wording is so vague that it could, for example, mean a woman risks being targeted for simply appearing to show interest in another woman, she added.

"I could be actually criminalized for that, especially if I start writing love letters to this person expressing my attraction."

She also worries that some will use the law to falsely accuse others of being gay.

"This is just the beginning," said Nabagesera. "Unfortunately, these members of parliament forget that this bill is not only about LGBT people.… This bill talks about reporting people suspected of being homosexual."

'We shall get through this' 

Watching it all unfold when she is thousands of kilometres away has been difficult for Nabagesera. 

"I feel terrible not being down on [the] ground with my community because I've inspired so many members of the community to stand out and be proud," she said. "Many have joined the movement because of the inspiration I've given them."

Nabagesera says she has been the target of online hate, attacked in public back home and received death threats. She worries about the people she loves.

"Many people say that if they cannot get to me, they will go after my loved ones," she said. "Over the years, I've learned how to protect myself, but I can't protect all my loved ones, so I worry more about them than myself."

But she is going back to Uganda. 

"The movement needs to go on. We have to devise means on how we can continue to operate, continue to provide services to the community in a safer way," she said. "We are stronger when we are together. So I have to go back home to continue the fight that I started." 

She does still believe the fight to change minds can be won, though likely not any time soon. She says anti-gay groups are given a wide platform to promote their beliefs in Uganda, while LGBTQ rights activists have to create their own means to promote awareness.

She hopes other countries will help in the fight, too, by putting pressure on Ugandan president Yoweri Museveni not to sign the bill into law. But if it does become law, she will still fight.

"What is the use of me starting something and I stop halfway? So I'll go back and be with my community and we shall get through this. We've been here before. And so there's no reason why we shouldn't continue to fight."

24 Mar 02:34

Oil and gas activity was catalyst for Peace River earthquakes in 2022, study finds | CBC News

mkalus shared this story .


A study from the University of Alberta and Stanford University has found oil and gas activity likely induced one of the province's largest documented earthquakes that took place last November.

Alberta Energy Regulator initial investigation found natural tectonic activity


The Peace River region experienced a series of three earthquakes that took place on Nov. 30. Scientists determined one of the earthquakes had a magnitude of 5.6, which is considered a moderate event but is among the largest ever recorded in Alberta.

The study, which was released on Thursday, stands opposed to the Alberta Energy Regulator's own initial findings which indicated natural causes. 

The study took data relating to seismicity in the region dating back to 1985 and looked at how the earthquake occurred in a region of in situ bitumen recovery. 

The process enables the recovery of oil that is buried too deep to mine and can only be reached by drilling wells to extract an extra-heavy type of oil called bitumen, according to the AER's website. 

When bitumen cannot flow to the well, heat is added or fluids injected in order to reduce its viscosity to make it easier to recover. 

The study found 3.4 centimetres of ground deformation caused by a reverse fault slip of approximately 29 centimetres, possibly related to Peace River Arch faulting.

A fault is a fracture or zone of fractures between two blocks of rock.

"The fault slip is largely within the crystalline basement, with a small portion extending into basal sediments," the study said.

"Nearby injection operations dispose petroleum-related wastewater in these basal sediments." 

These operations likely induced the earthquake through the pressure applied by injection, according to the study.

Study implications 

In a news release last November, the AER said its investigation's "initial findings point to natural tectonic activity." 

The basis for this was a lack of hydraulic fracturing activity, lack of nearby fluid disposal, and the depth of the earthquake.

The work was conducted by the Alberta Geological Survey, a branch of the AER composed of geoscientists.

"Scientists at the AGS use a network of approximately 50 seismic stations to measure and research seismic activity across Alberta," the AER's release said. 

"We utilize this information to form an accurate picture of earthquake locations, magnitudes and discern the nature of these events."

The regulator is also investigating whether three milder earthquakes in the Peace River region that took place in mid-March are also related to the larger earthquakes from last November. 

The study acknowledges that the seismic history of the region lacks the "location resolution" needed to precisely define fault structures. However, the study cites recent records, which define three separate areas of clustered earthquakes, two of which coincide with ongoing in situ bitumen recovery.

"The assessment of this earthquake as induced will likely have implications for future energy development, management, and regulation — including carbon capture and blue hydrogen," the study said. 

The study's scientists said the Peace River case should provoke greater action when it comes to CO2 development.

"Long-term operations [including subsurface injection] have the potential to induce earthquakes — often with significant lag times for seismic response. Second, the importance of high sensitivity measurement both before and during the lifetime of the project: here, the lack of precise and low-magnitude seismic data hampered the resolvability of induced events and their properties."

ABOUT THE AUTHOR

Mrinali is a reporter with CBC Edmonton. She has worked in newsrooms across the country in Toronto, Windsor and Fredericton. She has chased stories for CBC's The National, CBC Radio's Cross Country Checkup and CBC News Network. Reach out at Mrinali.anchan@cbc.ca.

24 Mar 02:34

Awesome GPT-4

Radi Cho, GitHub, Mar 23, 2023

This is a "curated list of prompts, tools, and resources regarding the GPT-4 language model," including open source examples, community demos, and product integrations. Related: Bryan Alexander shares a conversation with Ruben Puentedura to explore the implications of large language model artificial intelligence; he adds some other interesting items, including authoring a 300-page text in one day with chatGPT, Microsoft's introduction to Copilot, and the Socratic Tutor system. I also ran across a Marcus Aurelius AI, which is a neat concept. Finally, the usual suspects from the music industry form a coalition to make sure publishers' copyrights aren't violated (but be careful - if new rules are created that apply to computers, including limits to fair use, they will definitely be extended to humans - imagine being told you can't record because your voice sounds too similar to someone else's).

24 Mar 02:28

Twitter Favorites: [Planta] You know when you’ve watched too much news? You recognise @ianhanomansing’s voice when they use a voiceover of his… https://t.co/X7moHpIL5l

Joseph Planta @Planta
You know when you’ve watched too much news? You recognise @ianhanomansing’s voice when they use a voiceover of his… twitter.com/i/web/status/1…
24 Mar 02:26

Twitter Favorites: [uncleweed] Japan who historically wins with timely bunts, sacrifices, station to station ball & great base - get it done with… https://t.co/fvPB1CS7dS

DaveO @uncleweed
Japan who historically wins with timely bunts, sacrifices, station to station ball & great base - get it done with… twitter.com/i/web/status/1…
24 Mar 02:25

Twitter Favorites: [alltartans] A Tartan for the Generous but Forsaken https://t.co/XhZWU0W9hm

Tartans @alltartans
A Tartan for the Generous but Forsaken pic.twitter.com/XhZWU0W9hm
24 Mar 02:25

Twitter Favorites: [uncleweed] Rolling in to hospital looking like _____. #tracksuit #tartan #beastvan #eyepatch #greybeard #peace… https://t.co/cmeO74kCRM

24 Mar 02:23

Google Camera 8.8 Brings Faster Night Sight Processing on Pixel 6

by Ronil
Google has brought faster, improved Night Sight processing to the Pixel 6 series. Night Sight allows you to take photos in low-light conditions without using the flash. The faster processing was first introduced with the Pixel 7 and Pixel 7 Pro and is now coming to the Pixel 6 series as […]
24 Mar 02:23

Google Messages Could Soon Let You Use AI to Reply to Text Messages

by Ronil
You can quickly respond to text messages you receive with a brief reply using Google’s Smart Reply feature in the Messages app. While it’s convenient to reply with “Yes” or “sounds good?” to some messages, what if you want to send a text that requires a proper response? The big G might be working on […]
21 Mar 15:20

On Large Language Models

by Rui Carmo

I’ve been pretty quiet about ChatGPT and Bing for a number of reasons, the most pertinent of which is that I have so much more going on in my life right now.

But I think it’s time to jot down some notes on how I feel about Large Language Models (henceforth abbreviated to LLMs) and the current hype around them.

And I’m going to try to do that from the perspective of someone who:

  • Graduated from college soon after the peak of the 90’s AI Winter (yeah, I’m old–we call it “experience” these days)
  • Actually decided not to major in AI (but rather in more networking-focused topics) because of said Winter, although I went and padded my grade point average by acing AI coursework as optional credits.
  • Survived several hype cycles over the past 30 years.
  • Dove into analytics and data science during the “resurgence” in 2012 and enjoyed it immensely (as well as racking up a few ML certifications) before getting sucked into telco again.
  • Spends an unhealthy amount of time reading papers and mulling things.

Plus the field is evolving so quickly that I’ve drafted this around four times–all the while progressively shrinking it down to a quick tour of what I think are the key things to ponder.

How Smart is an LLM, anyway?

I’m going to start with an obvious fact, which is that LLMs just seem to be smart. Sometimes recklessly so.

Yes, typical outputs are vastly better than a Markov chain’s, and there is a tendency to draw a rough parallel, since both amount to running the probabilities for the next token through the model.

As people like Tim Bray have pointed out, that is seriously underestimating the complexity of what is represented in the model weights.

The reason the Markov analogy breaks down is that LLM output is not simple probabilistic lookup–there is randomness involved in setting up inference, sure, and sequential correlation between output tokens, but the number of factors driving the output is dozens of orders of magnitude beyond what we were used to.
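
To make the contrast concrete, here is a minimal sketch (mine, not the post’s, and deliberately naive) of the kind of bigram Markov chain the analogy invokes; its only “state” is the single previous word, whereas an LLM conditions every token on the entire context through billions of weights:

import random
from collections import defaultdict

# Toy bigram Markov chain: the whole "model" is a table mapping each
# word to the words observed immediately after it.
def train(tokens):
    table = defaultdict(list)
    for current, following in zip(tokens, tokens[1:]):
        table[current].append(following)
    return table

def generate(table, start, length=10):
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # frequency-weighted pick
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(generate(train(corpus), "the"))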

Random outcomes like the LLM starting to hallucinate are just par for the course for a neural network trying to go beyond its training data, or focusing attention on parts that lack enough conditioning to produce a decent output.

But going back to the initial point, there is zero “knowledge” or intelligence in an LLM. There are impressive amounts of correlation, to be sure, but the core principle harks back to the first AI Winter–it’s just that we’ve crossed a quality threshold that seemed hitherto unattainable.

It may look like emergent behavior, but that is simply because we can’t trace every step that led to the output. There is no agency, nor real “understanding”.

And, as anyone who’s read Douglas Hofstadter will point out, there is also no “strange loop” or a coherent capability to self-reference–the outputs are just the result of navigating an LLM’s internal representation of massive amounts of data, and they’re entirely functional in more than one sense of the word.

Things Are Just Getting Started

Shoving all those orders of magnitude into something that can fit into an enterprise-class GPU (or, increasingly, a GPU and a hefty set of NVMe drives) takes quite a toll, and training LLMs requires massive computational power that is (for the moment) outside an individual’s reach.

But that is certain to change over time, and inference is already possible on consumer-grade hardware–as the past couple of weeks’ spate of news around llama.cpp proves, there is a lot of low-hanging fruit when it comes to optimizing how the models are run, and at multiple levels1.

Although things like weight quantization degrade the output quality quite a bit, I expect more techniques to pop up as more eyes go over the papers and code that are already out there and spot more gaps and tricks to run LLMs efficiently.
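
As a rough illustration of what quantization trades away (a hypothetical sketch of symmetric per-tensor int8 rounding, not llama.cpp’s actual block-wise scheme):

import numpy as np

# Hypothetical sketch: squeeze float32 weights into 256 integer levels.
rng = np.random.default_rng(42)
weights = rng.normal(0.0, 0.02, size=1024).astype(np.float32)

scale = np.abs(weights).max() / 127.0                     # per-tensor scale
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
restored = quantized.astype(np.float32) * scale           # dequantize

print(f"max round-trip error: {np.abs(weights - restored).max():.6f}")
print(f"memory: {weights.nbytes} -> {quantized.nbytes} bytes")  # 4x smaller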

And despite the fact that the spotlight is on OpenAI and the massive cloud infrastructure required, I personally find it a lot more interesting to figure out how low LLMs can go and still produce coherent results.

This is because I have fairly high hopes for tailored models, and see a lot of value in having fully on-premises and even embedded solutions–I know I’m bucking the trend here, but the history of computing is one of decentralization, and you’re probably reading this on a smartphone… So my point should be obvious.

What Are LLMs Good For?

Having spent entirely too long dealing with customer support and call centers (I actually find the generic “chatbot” thing extremely annoying, and resisted getting into building those, but such is life), I’d say that, at the very least, LLMs are certain to take virtual assistants and support chatbots to the next level.

And no, this is not a new idea–it’s been hashed to death over the years, and the real problem is that most support knowledge bases are useless, even if you manually tag every snippet of information and carefully craft interaction flows. Traditional chatbots (and even summarization-driven ones) simply suck at doing the kind of basic correlation even a script-driven, barely trained human can pull off on autopilot, and hacking them together was always a brittle and unrewarding endeavor.

But an LLM is trained on other content as a baseline, which gives it a much better ability to fill in the gaps in such knowledge bases, and it certainly has better conversational skills than a goldfish–and I can see LLMs doing a decent job on highly patterned, formalized inputs like legal documents, medical reports, retail catalogues, etc.
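
For illustration only, here is a toy sketch (my own, with made-up knowledge-base entries, not any product’s API) of the basic grounding pattern: retrieve the closest snippets, then hand them to the model alongside the question so its baseline training can smooth over the gaps:

KB = {
    "reset password": "Go to Settings > Account > Reset Password.",
    "billing cycle": "Invoices are issued on the 1st of each month.",
}

def retrieve(question):
    # Toy keyword match; a real system would use embeddings.
    return [text for key, text in KB.items() if key in question.lower()]

def build_prompt(question):
    context = "\n".join(retrieve(question)) or "(no matching KB entry)"
    return ("Answer using the support notes below; if they do not cover "
            "the question, say so.\n\n"
            f"Notes:\n{context}\n\nQ: {question}\nA:")

print(build_prompt("How do I reset password?"))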

How Reliable Are These Things?

To be honest, right now, not very. I wouldn’t rely on any publicly available LLM for decision-making of any kind (coding, advice, or even accurate summarization), although every iteration improves things noticeably.

Sure, some of the humor and “style transfer” is pretty hilarious, but LLMs still have trouble with basic math, let alone writing reliable code2–they’re not even that useful at “rubber ducking” a problem.

Outputs are generally shallow and LLMs still have trouble creating coherent long form without hallucinating, but I do think they can be useful as baselines for a human to improve upon, as long as that person has a good enough grasp of the problem domain to spot obvious flaws in “reasoning” (not just errors, but also gaps) and the willingness to double-check any references.

Of course, those sanity checks seem absent from a lot of the hype-driven discussions I’m seeing online… But, more to the point, LLMs do seem to knock things out of the park for short interactions.

Which is why I think the search market disruption gambit is going to pay off handsomely–LLMs make for a much better search experience because you get adjacent information you would otherwise be unable to get from either direct or statistical matches (and you don’t get pesky ads, keyword squatters, etc.)

How Manageable Are These Things?

This is where I have the most doubts, to be honest.

The current “programming paradigm” is hopelessly primitive, and all the early deployment shenanigans prove it–prompt stealing and prompt injection attacks (which can be much more interesting than you’d expect) remind me of all the loopholes Asimov managed to squeeze out of The Three Laws of Robotics.

Plus, the ease with which the models “hallucinate” and veer off into the wild blue yonder was, until recently, being dealt with by ham-fisted tactics like limiting the number of consecutive interactions with the model.

In short, it all feels… very Sorcerer’s Apprentice, to be honest.

And I don’t think “stacking” models or just creating embeddings is going to help here–long-term curation of model inputs is going to be key.

Which means time-consuming, costly, and ever more challenging work to improve general purpose LLMs, especially those targeting search (where having non-AI generated training sets is going to be harder and harder).

Fast Iteration, But What About Fast Training?

Another important constraint that is being glossed over is that there is no easy, immediate feedback loop to improve an LLM–in the current chat-like interaction models you can add more context to a session, but:

  • It doesn’t really “stick”–sometimes not even across subsequent invocations (even if the session wrappers are continuously improving, you’re effectively adding stubs to the original prompt, and that can only go so far).
  • Any on-the-fly corrections don’t become part of the core model (you need to have a full training iteration).

These things can be worked around, but are fundamental limitations–and yet, they don’t have any real consequence for simple one-shot tasks like “summarize this webpage” and most of the “productivity boosters” we’re likely to see over the coming months.
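
Here is a deliberately tiny sketch (mine, with a stub standing in for the model) of why corrections don’t stick: the typical chat wrapper just replays the accumulated transcript inside every new prompt, and nothing is ever written back into the weights:

history = []

def chat_turn(user_message, model_call):
    history.append(("user", user_message))
    # The model is stateless: it only ever sees this replayed transcript.
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = model_call(prompt)
    history.append(("assistant", reply))
    return reply

# Stand-in for a real endpoint; it reports how much context it was sent.
fake_model = lambda prompt: f"(saw {len(prompt)} chars of context)"
print(chat_turn("Remember: the capital of Foo is Bar.", fake_model))
print(chat_turn("What is the capital of Foo?", fake_model))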

But they do compound my notion that LLMs feel more like an impressive party trick than a broadly sweeping change in paradigm–at least for now. Their real impact lies elsewhere, and most likely beyond the obvious chatbot scenarios.

It would be nice to take away a lot of the drudgery we’ve baked into computer use (as well as several typical knowledge worker tasks), although there are interesting (and risky) implications in empowering certain kinds of people to mass-produce content3.

Conclusion

So where does this leave us?

Well, we’re clearly in the upward swing of the hype cycle. And, like I pointed out at the start of this piece, I’ve been there before–the quick iteration, the optimizations, the unexpected new techniques in established domains, and the fallout (both good and bad). Those parts are not hard to predict.

The big difference this time is that for users, the barrier to entry is effectively nil, and, again, the outputs are way better (and more impressive) than anything else we’ve seen before. Even if it’s still just a more elaborate Chinese Room, there is a lot more public interest and momentum than is usual in most tech hype cycles.

So yes, this one is going to be a bumpy ride, and not just for geeks. Make sure you have your metaphorical seat belt on tight.


  1. And while I was revising this PyTorch 2 came out, with a nearly 50% performance boost for image models–I’m just waiting for xformers to fall in line to upgrade my Stable Diffusion setup… ↩︎

  2. I routinely try to get LLMs to, say, invert a heap, or even to compose SQL queries (which I hate doing), and the results are always abysmal. I can’t even imagine how badly they would fare in medicine or law. ↩︎

  3. And I don’t mean political parties or nation states here. The prospect of mass-produced A.I.-accelerated reports, presentations, memos, etc. should be enough to give any corporate knowledge worker pause. ↩︎


21 Mar 15:18

Notes for March 13-19

by Rui Carmo

Automating all the things.

Monday, 2023-03-13

Random fiddling day.

  • Revisited RDP connections from Remmina to a domain-joined machine: Security protocol set to TLS, DOMAIN\username authentication (not the UPN), Best quality.
  • Cleaned out my homebridge configuration (also disabled automatically adding 433MHz sensors discovered by OpenMQTTGateway, which was a cute puzzle to sort out).
  • Triggered some monthly restic backups. Remember, kids, always have an off-site backup.
  • Looked at ComfyUI, which is intriguing to say the least (and a breath of fresh air after kludgy Stable Diffusion WebUIs where the actual workflow is a mess).
  • Sorted out some media archives.

Tuesday, 2023-03-14

I can never get the hang of Tuesdays. My work laptop died mid-afternoon, so I found myself with some time in between troubleshooting sessions.

  • Found it rather amusing that I serendipitously sorted out remote desktop domain authentication yesterday, almost as if I predicted this. Still can’t get Remmina to work with corporate WVD, though, so might have to turn the U59 into a temporary “corporate” desktop.
  • Did some spelunking in OpenMQTTGateway code and MQTT topics to understand what it can decode in the 433MHz band and how it is mapped to topics.
  • Spent half an hour with WeasyPrint to generate a presentable document out of Markdown notes. Still the best Python PDF generation tool out there, and has pretty decent CSS support, plus it’s trivial to automate:
# Build a PDF (via intermediate HTML) for every Markdown file present
MARKUP = $(wildcard *.md)

all: $(MARKUP:.md=.pdf)

# Render the HTML to PDF with WeasyPrint, applying the layout stylesheet
%.pdf: %.html layout.css
    python -m weasyprint -e utf8 -m A4 -s layout.css $< $@

# Convert each Markdown file to HTML first
%.html: %.md
    python -m markdown < $< > $@
  • Created a ComfyUI sandbox on borg and spent a while collecting all the requisite models and going through the (maybe too whimsical) examples. Really happy with the UX so far, and with the fact that I went with a 12GB GPU.
  • Began adding docstrings to my py-sdf fork to make it easier to use with VS Code autocomplete.

Wednesday, 2023-03-15

Mid-week slump. Slept horribly, had a lot of catching up to do, still managed to have a few productive breaks:

  • Realized Godot 4 was already in Fedora testing and grabbed it (it went into mainstream 3 days later).
  • For the first time this year, added a little bit more content navigation functionality to the site. Still very happy with the way the static page generator turned out.
  • Given my work laptop woes, tried to get a semblance of my usual environment working over RDP device redirection:

Client (Fedora)

  • Remmina, Advanced, Redirect local microphone, sys:pulse
  • Remmina, Advanced, USB device redirection, id:0fd9:006d#3564:fef4,addr:01:0b

Also make sure you can access the USB devices (some might be automatically accessible to dialout group members, but this guarantees it):

# cat /etc/udev/rules.d/70-passthrough-access.rules 
# Elgato StreamDeck
SUBSYSTEM=="usb", ATTR{idVendor}=="0fd9", ATTR{idProduct}=="006d", MODE="0666"
# Webcam - tried it just to see if it worked, here for reference
SUBSYSTEM=="usb", ATTR{idVendor}=="3564", ATTR{idProduct}=="fef4", MODE="0666"

Server (Windows 11)

Run gpedit.msc and configure this setting:

Computer Configuration:
  Administrative Templates:
    Windows Components:
      Remote Desktop Services:
        Remote Desktop Session Host:
          Device and Resource Redirection:
            - Do not allow supported Plug and Play device redirection = Disabled

I have plenty more tweaks, but the above is what you need for USB pass-through.

The StreamDeck works great, the audio is passable, but I can’t get the camera to work since Remmina/freerdp still doesn’t support UVC camera pass-through (I already knew passing the raw USB device would be unfeasible, but I had to give it a go). For now, that only works in Windows and Mac/iOS clients.

  • Did a little more Fedora audio tweaking, including moving to a real-time kernel on the U59 and setting Bitwig to use pulseaudio (just because the preset for it had slightly lower latency):
# Quick set of essentials for audio priority
echo '@audio - rtprio 90
@audio - memlock unlimited' | sudo tee -a /etc/security/limits.d/audio.conf
echo 'fs.inotify.max_user_watches=600000' | sudo tee -a /etc/sysctl.conf
sudo usermod -aG audio $USER
sudo usermod -aG realtime $USER
sudo dnf copr enable ycollet/audilinux
sudo dnf install kernel-rt-mao

Thursday, 2023-03-16

Long meeting day, way into the evening.

  • Realized that a recent Raspbian update broke screen blanking on my automation dashboard, which can be worked around by reverting the X server version:
sudo apt install xserver-xorg-core=2:1.20.11-1+deb11u5
sudo apt-mark hold xserver-xorg-core
  • Spent a little while trying to get the Linux Intune client to work in Fedora, even though it is unsupported. Got it to work via… unconventional means, but Edge crashes when syncing an AD account.
  • Fiddled with PyTorch 2.0, but xformers hasn’t really been updated for it, so most Stable Diffusion tools can’t make proper use of it yet.

Friday, 2023-03-17

Winding down for the weekend. My work laptop was serviced, which meant doing the BitLocker dance and appeasing the Intune deities, so that took a chunk out of my day.

  • Updated my RDP page with a more comprehensive set of tweaks that I refined while the X1 was MIA.
  • Realized the CSS font stack for this site could be improved for monospace fonts, so I re-did the entire thing while looking at modern-font-stacks, which is a very handy resource if you are designing text-intensive websites and want to deliver the best possible experience without any web fonts.
  • Investigated a possible uwsgi bug related to cron tasks.
  • Investigated how to programmatically take screenshots under Wayland using dbus.
  • Fiddled with pyxel as a way to port some code one of my kids wrote in PICO-8.

Saturday, 2023-03-18

Family day.

  • Decided to clean up and post my take on LLMs before it got too stale (had to drop a fair chunk of it because it was outdated already).
  • Brief outing to attend local Chemistry Olympics (kid brought home bronze medal, yay!)
  • Decided to tackle the Docker Apocalypse and start moving all my public images to ghcr.io. Even though I have a private registry at home (and another in Azure) some of my images are in general use and need a public repository, and they’re all in GitHub anyway, so I’m starting with this GitHub Action as a baseline to build and push new images for each new tag:
# cat .github/workflows/build-image.yml
name: Build Image

on:
  push:
    tags:
      - v*

jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }} 
      - name: Build and Push Docker Image
        uses: docker/build-push-action@v4
        with:
          push: true
          context: . 
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.ref_name }}
            ghcr.io/${{ github.repository }}:latest

Since docker buildx is now largely usable, I will be updating my cross-platform images to use a slight variation on the above.

Sunday, 2023-03-19

Father’s Day over here, and another day impacted by machine issues.

  • Fiddled with rtl-433 a bit more, but I’m starting to realize it can’t pick up the decade-old 433MHz sensors I have.
  • My U59 rebooted after updates to a corrupted filesystem (not sure if it’s a SATA issue or a btrfs one, but I know where I would place my bets), so I set the default boot device to the Windows NVMe drive and began reinstalling the Fedora drive as time permits:
# For later reference, this my baseline Fedora install:
# yabridge COPR
sudo dnf copr enable patrickl/yabridge-stable
# list of essentials I need:
sudo dnf install cabextract curl fontconfig git gnome-extensions-app \
gnome-shell-extension-pop-shell gnome-shell-extension-user-theme \
gnome-tweaks godot golang htop keepassxc kvantum liberation-fonts \
lm_sensors openscad remmina rpm-build rsms-inter-fonts syncthing \
tmux vim wine xorg-x11-font-utils yabridge docker
# RPM Fusion and MS web fonts
sudo dnf install \
https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm \
https://downloads.sourceforge.net/project/mscorefonts2/rpms/msttcore-fonts-installer-2.6-1.noarch.rpm
# VAAPI and Firefox hardware acceleration
sudo dnf install ffmpeg handbrake libva-utils libva-intel-driver \
intel-media-driver igt-gpu-tools
# groups
sudo usermod -aG dialout $USER
sudo usermod -aG video $USER
sudo usermod -aG docker $USER

In the meantime Windows makes for a slightly better thin DAW box and work thin client (I get UVC camera pass-through, can run all VSTs and have WSL), but, ironically, my xrdp configurations are so fine-tuned that mstsc.exe is slower than Remmina.

I guess you just can’t have it all…


19 Mar 15:07

The LLM Problem

So far I’ve had nothing to say about the LLM chatbot frenzy. My understanding of the technology is shallow and I’ve no sense for its functional envelope, and lots of other people have had smart things to say. I hadn’t even conversed with any of the bots. But I fell off the wagon a few days ago and put time into GPT3 and (especially) the new GPT4-based Bing chat. I got off Bing’s waitlist a few days before the recent general availability, so I have more hands-on than most people. Plus I caught up on background reading. So, question: Are LLMs dangerous distractions or are they a glowing harbinger of a bright future? (Spoiler: I’m dubious but uncertain.)

[Image: Chomsky’s “Reflections on Language”]

Preconceptions

The Eighties, when I had my first-ever software job, featured another AI craze: Fifth-Generation computing, GigaLIPS, OMG the Japanese are going to eat us all. It was hard to understand, but apparently running Prolog really fast was the future. I was already pretty cynical for a twentysomething, and arrogant enough to think that if I couldn’t understand it then it was bullshit. More or less by accident, since I didn’t actually know anything, I was right that time. Which left me with an attitude problem about AI in general.

Then in the Nineties we had “knowledge-based systems”, which turned out to be more bullshit.

[Image: Hofstadter’s “Gödel, Escher, Bach”]

Before I even discovered computers, I’d read the fashionable books by Hofstadter and Chomsky. I had no trouble believing that human intelligence and language processing are pretty well joined at the hip. I still believe this, and that belief is relevant to how one thinks about 2023’s ML technology. In the Nineties I seem to remember throwing poo on Usenet at John Searle’s Chinese Room partisans.

My skepticism lasted until 2019: working adjacent to the AWS EC2 Auto Scaling team, I watched the construction of Predictive scaling. It took forever to get the model tuned up, but eventually it became frighteningly accurate at looking 72 hours into the future to tell you when you were going to get load surges and needed to get your fleets scaled and warmed up in advance.

So (unlike, for example, with blockchain) there is objective evidence that this stuff is useful at least for something.

Experience

I came to GPT-3 with preconceptions (it’s been covered to death) and, predictably, kind of hated it. I’d had some hope, given that I’ve dumped two-plus million words onto the Web since 2003, that maybe the bot could emulate me. No such luck, although it agreed that yes, its training materials included some of my stuff. “What does Tim Bray think about…” and “Write a paragraph in the style of Tim Bray about…” yielded no joy whatsoever.

Then I started poking around in two tiny historical niches where I know a lot: T.E. Lawrence’s sexuality and the Demerara slave rebellion. It will surprise no-one to say that GPT-3 offered a useful very-basic introduction to both subjects but, when queried on specific questions of fact, was horribly and confidently wrong.

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

I’m unconvinced that this smoothed-out voice from nowhere deserves our attention.

Now, Bing…

[Screenshot: Bing chat on DS9 and B5]

Before you even begin to think about what’s in the bot’s two paragraphs, please cast your eyes just below them, where, in tastefully decorated shades of blue, there are… Footnotes!

When you send a prompt to the bot, while it’s calculating it flashes up little messages saying “searching X … searching Y…” and so on, where the X and Y are what show up in the footnotes. Umm… is it really doing that? I dunno, but when you follow the footnote pointers, the bot’s offerings are consistent with what you find there.

Anyhow, the perception created is that the bot is doing a fairly vanilla Web search to turn up sources, then writing a neutrally-toned précis of what it found. I can hardly express how much difference those footnotes made to me. (After I checked them, of course.) I was suddenly prepared to think about this thing as a mostly-safe maybe-useful tool.

I did the same deep-dives on Lawrence and Demerara, and the bot managed to avoid huge this-is-just-all-wrong errors. It wasn’t perfect; there was one little problem with event sequencing. But way better than GPT-3; admittedly, a low bar.

At one point, on the Lawrence thread, it was reciting words, nearly verbatim, that I’d personally added to Wikipedia. I’m not sure that’s wrong; the text meets Wikipedia’s pretty-decent citation standards.

I’m not going to paste in the huge gobs of dialog that other writers have, but here’s a bit of entertainment: I asked the bot to write a description of Seattle “in the style of” a few different voices that seem distinctive to me.

In the style of Ezra Pound: “In the city of Seattle, the rain falls like a symphony of tears, A melody that echoes through the streets and fills the ears. The mountains rise like giants in the distance, Their peaks shrouded in mist, a sight of purest brilliance. The city itself is a place of contrasts, A mix of old and new, of history and progress. From the Space Needle to Pike Place Market, There’s always something new to discover and savor. Seattle is a city of dreams, A place where anything is possible, or so it seems.”

…of Samuel R. Delany: “Seattle is a city of many faces, a place where the future and the past collide. It’s a city of contrasts, where the gleaming towers of downtown stand in stark contrast to the rugged beauty of the surrounding mountains. The streets are alive with the sounds of people from all walks of life, from the techies of Amazon and Microsoft to the artists and musicians who call the city home. The air is thick with the scent of coffee and the sound of rain, a symphony of sensory delights that is uniquely Seattle. From the Space Needle to the waterfront, there’s always something new to discover in this city of dreams.”

…of Shakespeare: “In fair Seattle, where we lay our scene, From ancient grudge break to new mutiny, Where civil blood makes civil hands unclean. From forth the teeming streets of this fair town, A pair of star-crossed lovers take their life; Whose misadventured piteous overthrows Do with their death bury their parents’ strife. The fearful passage of their death-marked love, And the continuance of their parents’ rage, Which, but their children’s end, naught could remove, Is now the two hours’ traffic of our stage; The which if you with patient ears attend, What here shall miss, our toil shall strive to mend.”

…of e.e. cummings: “in just Seattle, this city of rain where the streets are alive with the sound of feet and the air is thick with the scent of coffee and the sound of music fills the night here is the city of dreams where anything is possible and the mountains rise like giants in the distance and the Space Needle stands tall against the sky in just Seattle, where the future and the past collide and the beauty of the world is all around here is a place where the heart can soar and the soul can find its home.”

…of Sylvia Plath: “In Seattle, the rain falls like tears from the sky, A never-ending stream of sorrow and pain. The streets are dark and empty, The buildings looming like shadows in the night. The mountains rise like sentinels in the distance, Their peaks shrouded in mist and mystery. The city is a place of secrets and shadows, A place where the heart can break and the soul can die. But even in the darkness, there is beauty to be found, A glimmer of hope in the depths of despair. For in Seattle, as in life, There is always a chance for redemption and renewal.”

…of Bing chat: “Seattle is a city of many faces, a place where the future and the past collide. It’s a city of contrasts, where the gleaming towers of downtown stand in stark contrast to the rugged beauty of the surrounding mountains. The streets are alive with the sounds of people from all walks of life, from the techies of Amazon and Microsoft to the artists and musicians who call the city home. The air is thick with the scent of coffee and the sound of rain, a symphony of sensory delights that is uniquely Seattle. From the Space Needle to the waterfront, there’s always something new to discover in this city of dreams.”

[At this point, for some reason the paragraphs started becoming nearly identical in their second halves, reading much like the “Bing chat” version above. I have amputated the repetition.]

… of Ursula Le Guin: “In Seattle, the rain falls like a curtain of mist, A veil that shrouds the city in mystery and magic. The streets are alive with the sounds of people, Their voices blending together in a symphony of life. The mountains rise like guardians in the distance, Their peaks reaching up to touch the sky. … ”

… of Leonard Cohen: “In Seattle, the rain falls like a lover’s tears, A bittersweet symphony that echoes through the night. The streets are alive with the sounds of music, The air thick with the scent of coffee and dreams. The mountains rise like giants in the distance, Their peaks shrouded in mist and mystery. … ”

…of Aristophanes: “In Seattle, the rain falls like a chorus of nymphs, A symphony of sound that fills the air with magic. The streets are alive with the sounds of laughter, The air thick with the scent of coffee and dreams. The mountains rise like giants in the distance, … ”

…of Denis Johnson: “In Seattle, the rain falls like a baptism, A cleansing of the soul that washes away the pain. The streets are alive with the sounds of music, The air thick with the scent of coffee and dreams. The mountains rise like giants in the distance, … ”

Well, some of those were better than others. Interestingly, the first sentence or two tended to be better than the rest. I left a few out. No, I don’t know why I picked Seattle, I’m not from there and I don’t love the place. It was easier to start doing this than to stop.

Is this even a good idea?

There are extremely smart people asserting that this technology is some combination of useless and dangerous, and we should turn our backs and walk away. Here are two Mastodon posts from Emily Bender:

Folks, I encourage you to not work for @OpenAI for free:
Don't do their testing
Don't do their PR
Don't provide them training data
[Link to an excellent related thread slamming OpenAI for generally sleazy behavior.]

I see people asking: How else will we critically study GPT-4 etc then?
Don't. Opt out. Study something else.
GPT-4 should be assumed to be toxic trash until and unless #OpenAI is *open* about its training data, model architecture, etc.
I rather suspect that if we ever get that info, we will see that it is toxic trash. But in the meantime, without the info, we should just assume that it is.
To do otherwise is to be credulous, to serve corporate interests, and to set terrible precedent.

Prof. Bender is not alone. I ran a little poll on Mastodon:

[Image: Mastodon poll on attitudes to ML technology]

You might find it rewarding to follow the link to the poll and read the comment thread, there’s instructive stuff there.

Here’s another excellent thread:

[Image: Twitter thread on the perils of OpenAI, by Émile Torres]

There’s more to say on this. But first…

Do you have an opinion?

Please don’t post it.

First, go and read On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (lead authors Emily Bender and Timnit Gebru.) I’m serious; it’s only ten pages (not including references) and if you haven’t read it, you’re simply not qualified to publish anything on this subject.

Here are the highlights, which I’m only listing so I can discuss them; the following is not a substitute for reading Bender and Gebru.

  1. The carbon load of LLM model-building and execution is horrifying. Quote: “…the amount of compute used to train the largest deep learning models (for NLP and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore’s Law.”

    (Also, some of the economics involve shitty behavior; QA’ing LLMs is lousy, time-consuming work, so why not underpay poor people in the Third World?)

  2. The data sets that current LLMs are trained on are basically any old shit off the Internet, which means they’re full of intersectionally-abusive language and thinking. Quote: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.”

  3. The whole LLM frenzy is diverting attention from research on machine language understanding as opposed to statistically-driven prediction. Quote: “If a large LM, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” Also: “However, no actual language understanding is taking place in LM-driven approaches to these tasks, as can be shown by careful manipulation of the test data to remove spurious cues the systems are leveraging. [21, 93]”

My experience with the LLM bots really had me nodding along to #1. When you throw a prompt at one of these things, what happens ain’t fast; it takes seconds and seconds to get the answer back. My background in cloud computing and concurrency research gives me, I think, a pretty good gut feel for this sort of stuff and, well… it’s really freaking expensive! I think that if it were cheap, that might change my (increasingly negative) view of the cost/benefit ratio.

Initially, I was less worried about #2. The Internet is already full of intersectionally-abusive crap (not to mention outright lies), and we do make progress at fighting it and creating safe spaces, albeit agonizingly slow. It’s not obvious to me that shitty LLMs are a worse problem than shitty people.

The good news is that there’s a clear path to addressing this, which Bender & Gebru lay out: Curate your damn training data! And be transparent and accountable about what it is and how it’s used. Unfortunately, OpenAI doesn’t do transparency.

Then the bad news: On the Internet, the truth is paywalled and the bullshit is free. And as just discussed, one of the problems with LLMs is that they’re expensive. Another is that they’re being built by capitalists. Given the choice between expensive quality ingredients and free bullshit, guess which they’ll pick?

On #3, I don’t have enough technical depth for a well-founded opinion, but my intuition-based feelings are mixed. Yeah, the LLM/transformer statistical methods are sort of a kludge, but you know what, so is human intelligence. Nobody would ever hire me to do AGI research but if they did, I’d start with a multipronged assault on language, using whatever witches’ brew of statistical and other ML methods were at hand.

Remember, John Searle’s “Chinese Room” argument is just wrong; at some point, if you build something that convinces educated, skeptical observers that they’re talking to a real intelligence, the only safe hypothesis is that it’s a real intelligence.

Other voices

Noam Chomsky and a couple of colleagues write, in the NYTimes: The False Promise of ChatGPT. Obviously, it would be dumb to ignore input from Chomsky, but I found this kind of shallow. I don’t think it’s axiomatic that a hypothetical AGI need be built around the same trade-offs that our own intelligence is.

On the other hand, here’s Sabine Hossenfelder (in a video, transcript only on Patreon): I believe chatbots partly understand what they chat about. Let me explain. Quote: “Understanding can’t be inferred from the relationship between input and output alone.” I’m not sure Dr H entirely convinced me, but that video is both intellectually dense and funny, and I strongly recommend it; her conclusions are studded with what seem to me extremely wise observations.

What do I think?

3,500 words in and… um, I dunno. Really. I am somewhat consoled by the fact that nobody else does, either.

There are a very few posts I’m willing to drive into the ground:

  • The claim that LLMs are nothing more than fancy expensive Markov chains is a dangerous oversimplification or, in other words, wrong.

  • There are going to be quality applications for this stuff. For example, reading out picture descriptions to blind people.

  • In the same way that the Bing bot seems to be useful at looking up stuff online, it’s useful for computer programmers, to automate searching Stack Overflow. I asked it for suggestions on how to dedupe Go structs with slice fields, since you can’t use those as map keys, and it turned up pointers to useful discussions that I’d missed (a rough sketch of the underlying technique appears after this list).

  • Are these things remotely cost-effective? I mean, it’s cool that Bing could research the relationship between DS9 and B5, and that it threw in humanizing detail about the softball games, but the number of watt-hours it probably burnt to get there is shocking. For what values of “worth it” is it worth it?

  • Relatedly, it’s blindingly obvious that the VC and Big-Tech leadership are way out over their skis on this one, and that billions and billions are going to be pissed away on fever dreams pitched by people who were talking up crypto DAOs until last month.
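
As an aside, here is a rough sketch of the technique those discussions converge on, transposed to Python for illustration (lists, like Go slices, can’t be used as map/set keys); this is my own toy example, not anything from the bot or Stack Overflow:

import json

records = [
    {"name": "a", "tags": ["x", "y"]},
    {"name": "a", "tags": ["x", "y"]},   # duplicate
    {"name": "b", "tags": ["z"]},
]

def dedupe(items):
    seen, unique = set(), []
    for item in items:
        key = json.dumps(item, sort_keys=True)  # canonical, hashable key
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique

print(dedupe(records))  # the duplicate record is dropped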

Just now I wouldn’t bet my career on this stuff, nor would I ignore it.

It’s really, really OK to say “I don’t know.”

19 Mar 15:07

Porn Spam: Zero to Zero

This is a true story: in 1992 I single-handedly knocked AOL off the air. What brought it to mind is, I just now took my once-daily look at the junk-mail folder and wiped a few dozen pornospams with a single keystroke, and there were none in my inbox; antispam is getting good enough that I may never see another, and I remembered, lo my righteous wrath and the awful consequences when I got my first one. [Reposted for your amusement on its 20th anniversary.]

This (I’m pretty sure, maybe a year one way or t’other) would be 1992, and I was on research staff at the University of Waterloo working on the New Oxford English Dictionary Project.

[Update: After the repost, several people pointed out that AOL email didn’t start till 1992. Oops. The rest of the story stands.]

The Math Faculty Computing Facility (MFCF) at UW was then, and had been for years, one of the world centers of Unix expertise; we had dozens of big computers from Sun and DEC and MIPS (remember them?) supporting hundreds of Really Smart People; some had as many as 64MB of RAM, dig it. Hey, MFCF is still there.

In those days the Internet was young (we had to telnet to work and back, uphill both ways in the icy Canadian slush) but our lives were already email-centric.

Spam had begun to poison Usenet, but hardly ever appeared in email. So one Sunday morning I opened up my email and saw a note from (I’m pretty sure) Lipstick@AOL.com and what do you know, it was from three college girls etc etc etc. I was flabbergasted; then livid with fury. Who were these turkeys pissing on my playground, who knows what this sending-smut-to-strangers could lead to? Obviously it was the duty of every right-thinking netizen to put a stop to this and soonest.

Now this may seem hard to believe, but I (and a lot of people) didn’t really know what AOL was; I had this vague idea it was some bulletin-board thing.

So I waited till after supper when things were quieting down, and visited the MFCF computers where I had accounts, which was pretty well all of them (maybe a dozen?), and around 8PM launched a job on each of them which every ten seconds sent an email along the lines of:

To: Lipstick@AOL.com
Subject: Please stop this abuse of email
You sent me, a stranger, an invitation to purchase pornography.
This is an abuse of email; please stop doing it.

Then I hit the sack, because I had an early flight to California the next morning. On the way out the door I shut down all the jobs and noticed that my inbox had a handful of emails from AOL advising me that Lipstick’s mailbox was full. The last couple were kind of mangled and borked; something was breaking.

Later that morning in the SFO rent-a-car bus, two businessmen were talking and one said to the other "Weird, I couldn’t get onto AOL this morning, it’s been down for hours." I kept my face straight. I poked around on Usenet and apparently they were down for over a day with some sort of grievous email meltdown. I can’t prove that I did it.

The thing was, at that point in history, the idea of sending filth to strangers was so new and so weird that they probably didn’t feel they could make much of a complaint.

When I got back to Waterloo, I had email from the Director of the computing facility saying “er, someone’s complaining, hard to take 'Lipstick@AOL' seriously but you know anything about this?” The complaint was funny, along the lines of "tbray@math.uwaterloo.edu is MEAN to GIRLS!!!!!!!!!" I seem to remember apologizing and that was the end of the story.

No, I think this month may have been the end of the story for me, because it doesn’t happen any more; thank you to Mr. Bayes, whoever you were, and to Paul Graham for figuring out how to use that math.

The story’s not over for everyone; my retired Mother just within the last month got her first pornospam and was really upset - "It was from a name I thought I recognized, and it had awful stuff in it!" Indeed it did, but maybe not for long.

17 Mar 04:16

Blog: A Minute from the Moderators

We’ve been working hard to build out more of the Community Documentation to help everyone to create a wonderful experience on Hachyderm. For the past month, we’ve focused most heavily on our new How to Hachyderm section. The docs in this section are Mental Health, Hashtags, Content Warnings, and Accessible Posting, each covered below.

When you are looking at these sections, please be aware that the docs under the How to Hachyderm section cover the socialized norms around each topic and the subset of those norms that we moderate. Documentation on how to implement the features is under both our Mastodon docs section and the main Mastodon docs. This is particularly relevant to our Content Warning sections: How To Hachyderm Content Warnings is about how content warnings are used here and on the Fediverse, whereas Mastodon User Interface Content Warnings is about where in the post composition UI you click to create a content warning.

Preserving your mental health

In our new Mental Health doc, we focus on ways that you can use the Mastodon tools for constraining content and other information. We structured the doc to answer two specific questions:

  • How can people be empowered to set and maintain their own boundaries in a public space (the Fediverse)?
  • What are the ways that people can toggle the default “opt-in”?

By default, social media like Mastodon and the rest of the Fediverse opts users in to all federating content. This includes posts, likes, and boosts. Depending on your needs, you may want to opt out of some subset of that content, either on a case-by-case basis, by topic, by source, or by type. Remember:

You can opt out of any content for any reason.

For example, you may want to opt out of displaying media by default because it is a frequent trigger. Perhaps the specific content warnings you need aren’t well socialized. Maybe you are sensitive to animated or moving media. That said, perhaps media isn’t a trigger - you just don’t like it. Regardless of your reason, you can change this setting (outlined in the doc) whenever you wish and as often as meets your needs.

Hashtags and Content Warnings

Our Hashtags and Content Warnings docs help Hachydermians better understand both what these features are and the social expectations around them. In both cases, there are some aspects of the feature that people have encountered before: hashtags in particular are very common in social media, and content warnings mirror other features that obscure underlying text on sites like Reddit (depending on the subreddit) and in tools like Discord.

Both of these features have nuance to how they’re used on the Fediverse that might be new for some. On the Fediverse, and on Hachyderm, there are “reserved hashtags”. These are hashtags that are intended only for a specific, narrow, use. The ones we moderate on Hachyderm are FediBlock, FediHire, and HachyBots. For more about this, please see the doc.

Content warnings are possibly less new in concept. The content warning doc focuses heavily on how to write an effective content warning. Effective content warnings are important as you are creating a situation for someone else to opt in to your content. This requires consent, specifically informed consent. A well written content warning should inform people of the difference between “spoilers”, “Doctor Who spoilers”, and “Doctor Who New Year’s Special Spoilers”. The art of crafting an effective content warning is balancing what information to include while also not making the content warning so transparent that the content warning is the post.

Notably, effective content warnings feature heavily in our Accessible Posting doc.

Accessible Posting

Our Accessible Posting doc is an introductory guide to different ways to improve inclusion. It is important to recognize there are two main constraints on this guide:

  • It is an introductory guide
  • It focuses on the Mastodon tools

As an introductory guide, it does not cover all topics of accessibility. As a guide that focuses on Mastodon, it discusses the current Mastodon tools and how to fully utilize them.

As an introductory guide, our Accessibility doc primarily seeks to help users develop more situational awareness for why there are certain socialized patterns for hashtags, content warnings, and posting media. We, as moderators of Hachyderm, do not expect anyone to be an expert on any issue that the doc covers. Rather, we want to help inspire you to continue to learn about others unlike yourself and see ways that you can be an active participant in creating and maintaining a healthy, accessible space on the Fediverse.

Content warnings feature heavily in this doc. The reason is that Mastodon is a very visual platform, so the main way to connect with others who do not experience visual content the same way is to supply relevant information, such as image descriptions.

There will always be more to learn and more, and better, ways to build software. For those interested in improving the accessibility features of Mastodon, we recommend reviewing Mastodon’s CONTRIBUTING document.

More to come

We are always adding more docs! Please check the docs pages frequently for information that may be useful to you. If you have an idea for the docs, or wish to submit a PR for the docs, please do so on our Community repo on GitHub.

April will mark one month since we launched the Nivenly Foundation, Hachyderm’s parent org. Nivenly’s website is continuing to be updated with information about how to sponsor or become a member. For more information about Nivenly, please see Nivenly’s Hello World blog post.

The creation of Nivenly also allowed us to start taking donations for Hachyderm and sell swag. If you are interested in donating, please use either our GitHub Sponsors or one of the other methods that we outline on our Thank You doc. For Hachyderm swag, please check out Nivenly’s swag store.

17 Mar 04:14

Toward Matrix support in Chats – Part 2

by François Téchené

Matrix integration within the Chats app is progressing, and this post details the development progress since the last one. This update is not so much about new features but more about bug fixes and improvements that are slowly making Chats an everyday “1 to 1” and “small groups” messaging app for both SMS/MMS […]

The post Toward Matrix support in Chats – Part 2 appeared first on Purism.

17 Mar 04:14

Accounting for Tastes

by Dave Pollard

This is a bit of a follow-up to my post last fall exploring why I love sad songs (and romantic comedies).


[Image: still from a dance-practice video by Japanese-Korean hip-hop/R&B group XG]

Some of my favourite YouTube channels offer analysis of music, for the purpose of understanding what it is about certain music that we so much appreciate, and hopefully to teach us how to compose ‘better’ music.

Recently, after reading through a bunch of my old blog posts, I was lamenting how ignorant and arrogant I was when I started this blog 20 years ago. I marvel at how much my entire worldview and attitude toward life has been transformed over that time period, yet so slowly that I didn’t even notice it. That change is reflected, for example, in my complete indifference, now, to TV, films, and almost all fiction, which once were a very important part of my life. Though my interest in these “popular entertainment” forms has dramatically waned, my interest in “popular music” has, if anything, increased.

So I went back through my 60-year library of favourite music to see if it showed evidence of a similar evolution. It did not. On the contrary, despite the thousands of hours I have spent looking for enjoyable music (I’m pretty particular), I can honestly say that my musical tastes have not really changed one iota.

There is a largely-unchallenged view that our musical tastes are culturally, rather than biologically, conditioned. A careful and open-minded study of music, we are told, will give us an appreciation of music we would previously have been closed to, just as a thorough study of history and culture can substantially alter our perception of current events.

We are told, for example, that indigenous cultures are not nearly so closed to what we consider “dissonant” music as we are, and that may well be true. A current theory is that music works on the brain in a two-stage recognition/reward process. First, our brain’s inherent propensity for pattern-seeking and pattern-remembering comes into play as we listen, and then our brain starts to ‘predict’ what will come next in a song. A correct prediction will yield a burst of dopamine, while an incorrect prediction will not — there’s no reward for an evolutionarily useless wrong guess. And a dramatic tension in the music before the predicted resolution seems to extend, amplify and reinforce the dopamine reward (perhaps in the same way that sexual edging does).

But we also get a dopamine charge when there is an unexpected (but not catastrophic) ‘surprise’ in the progress of the music, such as a sudden key change, or the introduction of, say, a minor ninth chord just before the resolution to the major. I’d guess this might be how we ‘learn’ new patterns to extend our brain’s predictive capacity. After hearing the minor ninth, we now start to listen for it again later in the song, and take note of whether it repeats or not.

Too much repetition and predictability, on the other hand, and we essentially stop listening, so there is no dopamine rush. Each person’s min/max threshold for repetition, surprise, and novelty is apparently different.

In my earlier article, I described a popular theory that our love of sad songs is about catharsis and/or the safe, vicarious, ‘pleasurable’ experience of emotions that might be too precarious to feel as a result of a direct, personally sad event. Beyond that, I posited that it’s our body that makes decisions on what music we listen to, and reacts accordingly, and that ‘we’ (with our supposedly discerning musical tastes) really have no say in it whatsoever.

That would suggest that our biological conditioning plays just as important a role in our musical tastes as cultural conditioning. And indeed, the people whose musical tastes are closest to mine (measured by the degree to which our personal music ‘libraries’ overlap) do not correlate at all with those whose cultural conditioning most closely resembles mine. Peers I grew up with, while perhaps liking certain music that was popular when we were doing things together, mostly have very different musical preferences from mine. And people with whom I have almost zero cultural connection, but some biological similarities to (eg a predisposition to depression) seem much more likely to share my musical tastes.

What does our body want, then, if it, rather than our cultural influences, is ‘choosing’ our musical tastes for ‘us’?

Perhaps, just as it ‘chooses’ food for us to compensate for its perceived nutritional deficiencies or to feed its chemical addictions, our body might be ‘choosing’ music for us that restores its chemical balance or feeds its chemical addictions (eg too little or too much dopamine).

The emotions that arise in me listening to Adagio for Strings, for example, are a kind of sadness, but they are a joyful, peaceful kind. Maybe it’s my body’s way of saying “You need to feel sadness about the awful state of the world, but you’re afraid to, so here, try this music”. Several people who suffer from depression have told me they love this piece because it makes them cry, and feel better.

And when I listen to (and watch the remarkable choreography of) a song1 like Left Right, perhaps my body is urging me to listen to it in order to get me to feel, and to appropriately express, a sense of incredible joy and connection with the rest of the world, which is, after all, an astonishingly beautiful place in which we’re all doing our best: “Hey, life is good, get off your ass and enjoy it, laugh, dance, celebrate”. This song, which is only a month old, has already been ‘covered’ by at least 40 dance groups captivated by its infectious vibe and its fun, expressive choreography. Maybe a lot of bodies out there are telling their ‘owners’ to get up, laugh, dance and celebrate.

In my earlier article, I asked:

Does my infatuation with these songs indicate that I am looking to lose myself (lose my self?) in a safe-to-feel world? A world that makes more sense (viscerally, rather than intellectually) and is more emotionally honest and courageous than the one I seem to live in? These songs reach through my fear of feeling and let me feel things I’ve only otherwise ever felt strong enough to feel when I’ve been in love — when the chemicals just overpowered the fear.

So perhaps our taste in music is just one more subtle, ‘unconscious’ way our body does its best to take care of itself (and ‘us’).

A guy who’s written a book about the emotional impact of Adagio analyzed it as follows:

By taking the listener through emotional landscapes of its own creation, on its own terms, at its own speed, music is as close as one can come to actually re-experiencing the process and texture of unfolding emotion. It doesn’t show you a reflected image of the landscape of loss, it takes you through it – a very different notion. And Barber’s Adagio is so moving, so affecting, precisely because as we pass through that territory, we reel with the shock of recognition.

Emotionally, psychologically, we’ve all been here before; we can feel the congruence to our own experiences of grief and desperate hope as it unfolds through time. In a nutshell, it works so very well because he got it so very, very right.

Listen… to the orchestral recording of the Adagio. Hear how we begin in deep mourning and isolation, are lifted into the possibility of redemption by the IV-V in D-flat major, then dropped back into the darkness by iv-V in minor. This is the subtext of the entire piece: a desperate struggle to escape from the reality of grief (B-flat minor) into the consolation of hope (D-flat), only to slide back into an inevitable reality, over and over… [Barber] demonstrated a breathtaking comprehension of both musical and psychological processes, then wrote a piece of music so astonishingly well-matched to the unfolding process of grief that one can hardly help being moved by it.

Well, maybe. Or perhaps Samuel Barber had no choice but to write this piece exactly as it was written, through him and his body, an expression of his own body’s need to reconcile with the possibility of never-ending grief, to adapt to it, to accept it, and to express it, the only way it could.


  1. I think there’s a lot more to this ‘pop’ song than meets the eye. XG consists of seven young Japanese women who spent the last five years in an intense program in Korea learning advanced singing, dancing, composition and other skills, and rehearsing endlessly, before their first song was released a few months ago. The music was written specifically for the group by a team of at least 14 composers, some of whom obviously have classical music training; you can spot a whole suite of international genres and influences in the instrumentation, harmonies, rhythms, and musical overlays going on, mostly unobtrusively, in this ‘simple’ happy song. 
  2. POSTSCRIPT: I’ve been asked a lot about my opinion on the new AI apps, and my response generally has been that it’s not intelligence at all, and, like video games, will mostly turn out to be an amusing new form of entertainment (and, sigh, a military training tool). But I do believe it will have a major effect on the arts, including music. There is no reason why AI couldn’t, for example, parse my library of favourite music, and produce a (plagiarized, derivative, kind of) work that might well become my favourite song. It is capable of sussing out, and replicating, the ‘ingredients’ inherent and present in art that we love, in novel and potentially exciting ways. It is certainly likely to soon outperform the current music ‘recommendation engines’, whose algorithms are essentially not up to the task.
17 Mar 04:13

Twitter Favorites: [syncros] Dusk at @OntarioPlace last night. I stood there for a few moments taking in the view, dreaming of the day visitors… https://t.co/xGpXQ6qw9E

I'll be @syncros@mastodon.social with you @syncros
17 Mar 04:12

The Rent’s Too Damned High - GEN

mkalus shared this story from GEN — Political News, and Articles About Power and Culture - Medium.

A human right, commodified and rendered zero-sum.

The pandemic housing bubble has multiple, complex causes. Among them:

Generations of Americans have dreamed of owning a home, both to insulate themselves from the whims of their landlords and to create intergenerational wealth. Home ownership was a key driver of social mobility, allowing working class people to enter the middle class. A horrible “natural experiment” shows just how important property acquisition is to economic stability: redlining and restrictive covenants froze Black people out of the home-purchasing boom of the New Deal and the GI Bill, exacerbating and accelerating the racial wealth gap.

Two factors drove the growth of the American middle-class: property ownership and unionization. Of the two, unionization was more universal — by no means free of institutional racism, but far more accessible than home ownership.

Of the two, unionization was the one that underwent sustained assault from business, finance and the state. After decades of declining union participation, amid stagnating wages and worker misclassification, the dream of social mobility through stable employment has evaporated for most workers (especially workers from the poorest households, burdened beyond belief by student debt assumed on the assurance that it would create employment-based access to a stable, middle-class existence).

But the American belief in home ownership as a path to a better future for homeowners and their descendants remains intact. And housing shortages — and the bubbles that attend them — only fuel this belief. When the house your working-class parents bought for $30,000 is appraised at $1.5 million, home ownership becomes a solution to all of life’s insecurities.

But asset accumulation isn’t — and can’t be — a path to widespread prosperity. A middle-class that relies on increasing property values as a means to fund their kids’ university tuition, their own retirements, and the next generation’s down-payments sows the seeds of its own destruction. Far from guaranteeing your kids’ security, a focus on asset appreciation dooms them to precarity and penury.

Why do homes increase in value? Because they grow more valuable over time. But that value isn’t intrinsic: the roof doesn’t get better at keeping out the rain, sleep doesn’t come more easily in the bedrooms. Rather, the value comes from everything that’s bundled with the home.

Take education: the American practice of funding schools through local taxes (rather than guaranteeing the same level of per-pupil spending statewide) means that your kids’ access to a good education — and thus college, as well as merit-based scholarships — is determined by the property values in your neighborhood.

That means that renting — a proxy for living in a place with lower property values — puts you in a neighborhood with worse schools, too. Or, to put it another way, homes are worth more because they include a better education for your kids.

But the renter’s penalty — AKA the homeowner’s premium — goes beyond education. Home ownership is tax-advantaged in several ways, from mortgage payment deductions (significantly reduced by Trump in a bid to punish affluent Democrats in large coastal cities, now likely to be restored) to tax-free inheritance of the family home.

Or, to put it another way, if you can afford to buy a house, about half of your monthly shelter bill is tax-deductible — but not if you rent. This fact makes owning a house doubly profitable, because the desperation of renters to get tax-advantaged shelter makes your house more valuable, and that additional value can be passed, tax-free, to your kids. To top it all off, homeowners can access low-cost debt through home equity lines of credit, meaning that money itself is more expensive for renters.

The very existence of the rental market is key to home appreciation: one reason someone might pay you more for your house than you paid for it is because they expect to be able to rent to someone who can’t afford to buy. The more lucrative it is to be a landlord, the more every rentable home is worth, because every sale potentially includes bidders whose maximum price includes their expected returns from rental income.

This means that the more rights tenants have, the less your house is worth, even if you never rent your house out. Or, contrariwise, when tenants are worse off, homeowners are better off.

Here are some things that make landlords (and thus all homeowners) better off: easy evictions; no limits on annual (or arbitrary) rent hikes; the right to charge tenants for maintenance and capital improvements to their homes; the right to discriminate against potential tenants (for example, excluding single parents and other people at higher risk of falling behind on the rent); the right to levy service charges and late fees, and assess arbitrary, usurious interest on these debts. Oh, and insulation from liability for dangerous, undermaintained properties.

All of that makes tenancy more horrible, which makes homeownership more desirable, which makes homes themselves more valuable. In other words, the reason your parents’ $30,000 house is now worth $1.5 million isn’t (merely) that the neighborhood improved or that they finished the basement: it’s because tenancy is so life-destroyingly terrible and precarious that anyone who can find a way to scrape up a down-payment and buy a house will, even if that means assuming equally life-destroying levels of debt.

Housing — like health care, education and nutrition — isn’t just a commodity; it’s also a human right, a necessity of life itself. Put that way, it’s easy to see why hitching national prosperity to a steady rise in the value of housing as an asset is bad policy: imagine if we expected our intergenerational wealth accumulation to come from food, and set about doing everything we could to make food as expensive as possible. Life without shelter isn’t nearly so hard as life without food, but it’s not something anyone voluntarily opts for.

Homeownership as a source of wealth was always a devil’s bargain. A decent place to live for all is obviously a sign of a functional society, just like a dignified retirement, a good education, and nutritious food, clean water and sanitation.

A nation that seeks prosperity from asset accumulation necessarily becomes a land of winners and losers. The winners have assets, the losers pay to use them, which makes the assets more valuable, which lets the winners buy more assets. You know how this works: after all, you’ve played Monopoly.

The American middle class didn’t emerge thanks to property ownership — property ownership came about as the result of wage gains due to strong (and hard-fought) labor rights, and as a result of public subsidy for private homebuilding (the GI Bill). Homeownership is a good way to convert gains from a worker-friendly labor market into something durable and insulated — but it’s no substitute for workers’ rights.

It only took a generation for the dream of homeownership to become a nightmare. Trading labor rights for asset appreciation meant that guaranteed pensions became market-based 401(k)s, turning American workers into the suckers in the financial markets’ casino. As these older workers retire, they are forced to supplement their wholly inadequate pensions by liquidating, remortgaging or reverse-mortgaging the family home. Social Security helps, but not much — without a powerful organized labor movement to defend Social Security, the program has withered, offering a sub-starvation cushion.

If you have to sell your home in order to survive retirement, you can’t leave it to your kids or cash it in to pay for their down-payments (parental down-payments are the only practical way for many young workers to afford home ownership).

Your kids, with their stagnant wages, limited access to credit, and precarious work, are never going to be homeowners — especially not when they’re bidding against a private equity fund paying 15% over the asking price (in cash).

If your kids aren’t homeowners, they’re tenants. That’s the monkey’s paw irony of a generation’s worth of protecting your family by increasing the value of your home by making life worse for tenants. Every depredation you tolerated for tenants in the name of protecting your kids is now your kids’ destiny.

This reality still hasn’t sunk in for many homeowners. There are still plenty of middle-class people who think repealing Trump’s SALT cap will benefit them, which is categorically untrue: millionaires will average an extra $48,000/year from a SALT cap repeal, while 98% of middle-class earners will see nothing. The 2% of middle-class earners who do benefit from a repeal will average $250/year from it.

Everyday homeowners still oppose the rent-controls that will protect their children. Just look at the failure of California’s Proposition 10 in 2018 — the rent-control measure battled a tsunami of dark-money and failed to secure a majority, especially after corporate landlords sent their tenants letters threatening massive rent-hikes if they didn’t vote against the proposition.

Middle-class homeowners once benefited from corporate landlords’ lobbying for property-owner-favorable (and tenant-punishing) rules, but those days are over. Those good schools that once made your home more valuable? Today, they’re starved for cash, thanks to the consolidated power of the corporate landlord sector and its ability to fight for lower property taxes.

It was always clear why the wealthy pushed asset-appreciation, rather than employment, as the path to a stable future for American families. Converting the human right to shelter into an aspirational asset-class (and the only source of social mobility) aligned the interests of large numbers of people with a small number of very rich people.

The gains the middle class reaped during its years of simping for landlords were only ever a convincer for a long con, money the mark was allowed to hold onto for a time. Now that the con is closing out, those gains are being returned to their rightful owners, the tiny minority of Americans — mostly Wall Street firms — who are consolidating the human need for shelter into their hands.

Even if you own your home, you will struggle to pass it on to your kids. The ascendancy of capital means that you’ll need to use that home to keep from being a burden on your kids when you stop working; any leftovers will go to defray your kids’ student debts.

If they’re lucky, they might be able to rent that family home from a Wall Street firm after you die, but when they do, they’ll be subject to arbitrary rent-hikes, rapid evictions, and a barrage of fees, penalties and interest.

Labor rights, not property rights, were always the key to prosperity.

Cory Doctorow (craphound.com) is a science fiction author, activist, and blogger. He has a podcast, a newsletter, a Twitter feed, a Mastodon feed, and a Tumblr feed. He was born in Canada, became a British citizen and now lives in Burbank, California. His latest nonfiction book is How to Destroy Surveillance Capitalism. His latest novel for adults is Attack Surface. His latest short story collection is Radicalized. His latest picture book is Poesy the Monster Slayer. His latest YA novel is Pirate Cinema. His latest graphic novel is In Real Life. His forthcoming books include The Shakedown (with Rebecca Giblin), a book about artistic labor markets and excessive buyer power; Red Team Blues, a noir thriller about cryptocurrency, corruption and money-laundering; and The Lost Cause, a utopian post-GND novel about truth and reconciliation with white nationalist militias.

17 Mar 03:50

The surprising ease and effectiveness of AI in a loop


Matt Webb on the langchain Python library and the ReAct design pattern, where you plug additional tools into a language model by teaching it to work in a "Thought... Act... Observation" loop: the Act specifies an action the model wishes to take (like searching Wikipedia), and an extra layer of software carries out that action and feeds the result back to the model as the Observation. Matt points out that the ChatGPT 1/10th price drop makes this kind of model usage enormously more cost-effective than it was before.
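
To make the pattern concrete, here is a minimal sketch of that loop in plain Python. This is not langchain's actual API: call_llm replays a canned script standing in for a real completion endpoint, search_wikipedia is a stub tool, and the regex-based parsing of the model's output is purely illustrative.

    import re

    # Canned responses so the sketch runs end-to-end without a real model;
    # swap call_llm for a real completion API to use it in earnest.
    _SCRIPT = iter([
        "Thought: I should look this up.\nAction: Search[ReAct pattern]",
        "Thought: I have what I need.\nAnswer: ReAct interleaves reasoning and tool use.",
    ])

    def call_llm(prompt: str) -> str:
        """Stand-in for a completion call (OpenAI, a local model, etc.)."""
        return next(_SCRIPT)

    def search_wikipedia(query: str) -> str:
        """Stub tool; a real version would call the MediaWiki search API."""
        return f"(top search results for {query!r} would appear here)"

    TOOLS = {"Search": search_wikipedia}

    PROMPT = """Answer the question. You may interleave steps of the form:
    Thought: reason about what to do next
    Action: Search[<query>]
    Observation: (filled in by this harness)
    End with a line of the form: Answer: <final answer>

    Question: {question}
    """

    def react_loop(question: str, max_steps: int = 5) -> str:
        transcript = PROMPT.format(question=question)
        for _ in range(max_steps):
            output = call_llm(transcript)  # model emits Thought/Action/Answer text
            transcript += "\n" + output
            answer = re.search(r"Answer:\s*(.+)", output)
            if answer:
                return answer.group(1).strip()
            action = re.search(r"Action:\s*(\w+)\[(.*?)\]", output)
            if action and action.group(1) in TOOLS:
                # The "extra layer of software": run the tool and feed the
                # result back into the transcript as an Observation.
                observation = TOOLS[action.group(1)](action.group(2))
                transcript += f"\nObservation: {observation}"
        return "(no answer within the step budget)"

    print(react_loop("What is the ReAct pattern?"))

Run as-is, this prints the scripted answer after one Search round-trip; the point is the control flow, not the model.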

Via @intrcnnctd